Design and Data Structure Patterns
1. LRU Cache (HashMap + Doubly Linked List)
| Component | Data Structure | Purpose | Time Complexity |
|---|---|---|---|
| HashMap | Map<key, DLLNode> | O(1) lookup of cache entries | O(1) get/set |
| Doubly Linked List | Head ↔ Node ↔ Node ↔ Tail | Maintain access order (most recent at head) | O(1) add/remove |
| Get Operation | Lookup in map, move to head | Mark as recently used | O(1) |
| Put Operation | Add to map + head, evict tail if full | Add new/update existing, remove LRU | O(1) |
Example: LRU Cache implementation
```javascript
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.cache = new Map(); // key -> node
    // Dummy head and tail for easier insertion/deletion
    this.head = { key: 0, value: 0, prev: null, next: null };
    this.tail = { key: 0, value: 0, prev: null, next: null };
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  get(key) {
    if (!this.cache.has(key)) return -1;
    const node = this.cache.get(key);
    // Move to front (most recently used)
    this.removeNode(node);
    this.addToHead(node);
    return node.value;
  }

  put(key, value) {
    if (this.cache.has(key)) {
      // Update existing
      const node = this.cache.get(key);
      node.value = value;
      this.removeNode(node);
      this.addToHead(node);
    } else {
      // Add new
      const newNode = { key, value, prev: null, next: null };
      this.cache.set(key, newNode);
      this.addToHead(newNode);
      // Check capacity
      if (this.cache.size > this.capacity) {
        // Remove LRU (tail.prev)
        const lru = this.tail.prev;
        this.removeNode(lru);
        this.cache.delete(lru.key);
      }
    }
  }

  addToHead(node) {
    // Insert between head and head.next
    node.prev = this.head;
    node.next = this.head.next;
    this.head.next.prev = node;
    this.head.next = node;
  }

  removeNode(node) {
    // Remove from current position
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }
}

// Usage:
const lru = new LRUCache(2);
lru.put(1, 1); // cache: {1=1}
lru.put(2, 2); // cache: {1=1, 2=2}
lru.get(1);    // returns 1, cache: {2=2, 1=1}
lru.put(3, 3); // evicts key 2, cache: {1=1, 3=3}
lru.get(2);    // returns -1 (not found)
lru.put(4, 4); // evicts key 1, cache: {3=3, 4=4}
lru.get(1);    // returns -1
lru.get(3);    // returns 3
lru.get(4);    // returns 4
```
Note: Dummy head and tail nodes eliminate edge cases when adding or removing at the boundaries. The invariants always hold: `head.next` is the first real node and `tail.prev` is the last real node, so no null checks are needed.
2. LFU Cache (Frequency Tracking)
| Component | Data Structure | Purpose | Complexity |
|---|---|---|---|
| Key to Value Map | Map<key, {value, freq}> | Store value and access frequency | O(1) lookup |
| Frequency Lists | Map<freq, Set<key>> | Group keys by frequency | O(1) add/remove |
| Min Frequency | Single variable | Track lowest frequency for eviction | O(1) access |
| Eviction Strategy | Remove first key from minFreq set | LFU with LRU tiebreaker | O(1) |
Example: LFU Cache implementation
```javascript
class LFUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.minFreq = 0;
    this.keyToVal = new Map();   // key -> {value, freq}
    this.freqToKeys = new Map(); // freq -> Set of keys
  }

  get(key) {
    if (!this.keyToVal.has(key)) return -1;
    const { value, freq } = this.keyToVal.get(key);
    // Increment frequency
    this.increaseFreq(key, value, freq);
    return value;
  }

  put(key, value) {
    if (this.capacity === 0) return;
    if (this.keyToVal.has(key)) {
      // Update existing
      const { freq } = this.keyToVal.get(key);
      this.increaseFreq(key, value, freq);
    } else {
      // Add new
      if (this.keyToVal.size >= this.capacity) {
        // Evict least frequently used
        const keysWithMinFreq = this.freqToKeys.get(this.minFreq);
        const keyToEvict = keysWithMinFreq.values().next().value;
        keysWithMinFreq.delete(keyToEvict);
        this.keyToVal.delete(keyToEvict);
      }
      // Insert new key with freq = 1
      this.keyToVal.set(key, { value, freq: 1 });
      if (!this.freqToKeys.has(1)) {
        this.freqToKeys.set(1, new Set());
      }
      this.freqToKeys.get(1).add(key);
      this.minFreq = 1;
    }
  }

  increaseFreq(key, value, freq) {
    // Remove from current frequency
    this.freqToKeys.get(freq).delete(key);
    // If this was the only key with minFreq, increment minFreq
    if (freq === this.minFreq && this.freqToKeys.get(freq).size === 0) {
      this.minFreq++;
    }
    // Add to next frequency
    const newFreq = freq + 1;
    if (!this.freqToKeys.has(newFreq)) {
      this.freqToKeys.set(newFreq, new Set());
    }
    this.freqToKeys.get(newFreq).add(key);
    // Update value and frequency
    this.keyToVal.set(key, { value, freq: newFreq });
  }
}

// Usage:
const lfu = new LFUCache(2);
lfu.put(1, 1); // cache={1=1}, freq: 1->[1]
lfu.put(2, 2); // cache={1=1, 2=2}, freq: 1->[1,2]
lfu.get(1);    // returns 1, freq: 1->[2], 2->[1]
lfu.put(3, 3); // evicts 2 (lowest freq), cache={1=1, 3=3}
lfu.get(2);    // returns -1
lfu.get(3);    // returns 3, freq: 2->[1,3]
lfu.put(4, 4); // 1 and 3 both have freq 2; evicts 1 (LRU tiebreaker), cache={3=3, 4=4}
lfu.get(1);    // returns -1
lfu.get(3);    // returns 3
lfu.get(4);    // returns 4
```
Key Insight: Use a Set (insertion-ordered in JS) instead of a list for O(1) deletion. Because a key moves to a new frequency set every time it is accessed, the first element of each set is the least recently used key at that frequency.
MinFreq Update: Only increment minFreq when the last key at the minFreq level is promoted. Never decrement it on access; minFreq is simply reset to 1 whenever a new key is inserted, since new keys always start at freq = 1.
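The insertion-order guarantee this relies on can be checked directly. A minimal sketch of the relevant Set semantics (variable names are illustrative):

```javascript
// JS Sets iterate in insertion order, so the "oldest" key in a
// frequency bucket is always the first one yielded by the iterator.
const bucket = new Set();
bucket.add("a");
bucket.add("b");
bucket.add("c");

// Oldest (least recently inserted) key, via the iterator
const oldest = bucket.values().next().value;
console.log(oldest); // "a"

// Re-adding an existing value does NOT move it to the back...
bucket.add("a");
console.log(bucket.values().next().value); // still "a"

// ...so to refresh recency you must delete, then re-add.
// The LFU code gets this for free: an accessed key is deleted
// from its old frequency set and added to a new one.
bucket.delete("a");
bucket.add("a");
console.log(bucket.values().next().value); // "b"
```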
3. Min Stack with O(1) Operations
| Approach | Method | Space | Time |
|---|---|---|---|
| Two Stacks | Main stack + min stack (parallel) | O(n) | O(1) all ops |
| Stack of Pairs | Store {val, minSoFar} in each node | O(n) | O(1) all ops |
| Difference Encoding | Store val - min, update min accordingly | O(n) | O(1) all ops |
| Optimized Min Stack | Only push to min stack when new min found | O(n) worst, better average | O(1) all ops |
Example: Two stacks approach
```javascript
class MinStack {
  constructor() {
    this.stack = [];
    this.minStack = []; // Parallel stack tracking min
  }

  push(val) {
    this.stack.push(val);
    // Push current minimum
    const currentMin = this.minStack.length === 0
      ? val
      : Math.min(val, this.minStack[this.minStack.length - 1]);
    this.minStack.push(currentMin);
  }

  pop() {
    this.stack.pop();
    this.minStack.pop();
  }

  top() {
    return this.stack[this.stack.length - 1];
  }

  getMin() {
    return this.minStack[this.minStack.length - 1];
  }
}

// Usage:
const minStack = new MinStack();
minStack.push(-2); // stack: [-2], min: [-2]
minStack.push(0);  // stack: [-2,0], min: [-2,-2]
minStack.push(-3); // stack: [-2,0,-3], min: [-2,-2,-3]
minStack.getMin(); // returns -3
minStack.pop();    // stack: [-2,0], min: [-2,-2]
minStack.top();    // returns 0
minStack.getMin(); // returns -2
```
Example: Optimized min stack
```javascript
class MinStackOptimized {
  constructor() {
    this.stack = [];
    this.minStack = []; // Only stores actual minimums
  }

  push(val) {
    this.stack.push(val);
    // Only push if new minimum or equal
    if (this.minStack.length === 0 ||
        val <= this.minStack[this.minStack.length - 1]) {
      this.minStack.push(val);
    }
  }

  pop() {
    const val = this.stack.pop();
    // Only pop from minStack if removing the minimum
    if (val === this.minStack[this.minStack.length - 1]) {
      this.minStack.pop();
    }
  }

  top() {
    return this.stack[this.stack.length - 1];
  }

  getMin() {
    return this.minStack[this.minStack.length - 1];
  }
}
```

Example: Stack of pairs approach

```javascript
class MinStackPairs {
  constructor() {
    this.stack = []; // Array of {val, min}
  }

  push(val) {
    const min = this.stack.length === 0
      ? val
      : Math.min(val, this.stack[this.stack.length - 1].min);
    this.stack.push({ val, min });
  }

  pop() {
    this.stack.pop();
  }

  top() {
    return this.stack[this.stack.length - 1].val;
  }

  getMin() {
    return this.stack[this.stack.length - 1].min;
  }
}
```
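The difference-encoding row from the table has no implementation shown above. A sketch of one way it can work, storing `val - min` in each slot plus a single `min` variable (the class name `MinStackDiff` is illustrative; note that `val - min` can overflow fixed-width integers in other languages, which JS numbers sidestep here):

```javascript
// Difference-encoding min stack: one number per slot plus one
// `min` variable. A negative stored diff marks a new minimum.
class MinStackDiff {
  constructor() {
    this.stack = []; // stores (val - min at push time)
    this.min = null;
  }

  push(val) {
    if (this.stack.length === 0) {
      this.stack.push(0);
      this.min = val;
    } else {
      this.stack.push(val - this.min);
      if (val < this.min) this.min = val; // negative diff encodes new min
    }
  }

  pop() {
    const diff = this.stack.pop();
    if (diff < 0) {
      // Popped entry was the minimum; recover the previous min
      this.min = this.min - diff;
    }
    if (this.stack.length === 0) this.min = null;
  }

  top() {
    const diff = this.stack[this.stack.length - 1];
    return diff < 0 ? this.min : this.min + diff;
  }

  getMin() {
    return this.min;
  }
}
```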
4. Implement Queue Using Stacks
| Approach | Strategy | Enqueue | Dequeue |
|---|---|---|---|
| Two Stacks (Amortized) | Input stack + output stack, lazy transfer | O(1) | O(1) amortized |
| Single Stack | Recursive transfer on each operation | O(n) | O(n) |
| Transfer Timing | Only when output stack is empty | Critical optimization | Each element transferred once |
| Amortized Analysis | Each element pushed/popped at most twice | 2n operations for n enqueue/dequeue | O(1) average |
Example: Queue using two stacks
```javascript
class MyQueue {
  constructor() {
    this.input = [];  // Newest on top
    this.output = []; // Oldest on top
  }

  push(x) {
    // Always push to input stack
    this.input.push(x);
  }

  pop() {
    this.peek(); // Ensure output stack has elements
    return this.output.pop();
  }

  peek() {
    // If output is empty, transfer from input
    if (this.output.length === 0) {
      while (this.input.length > 0) {
        this.output.push(this.input.pop());
      }
    }
    return this.output[this.output.length - 1];
  }

  empty() {
    return this.input.length === 0 && this.output.length === 0;
  }
}

// Example walkthrough:
const q = new MyQueue();
q.push(1); // input: [1], output: []
q.push(2); // input: [1,2], output: []
q.push(3); // input: [1,2,3], output: []
q.peek();  // transfer: input: [], output: [3,2,1]; returns 1
q.pop();   // output: [3,2], returns 1
q.push(4); // input: [4], output: [3,2]
q.pop();   // output: [3], returns 2
q.pop();   // output: [], returns 3
q.pop();   // transfer from input, returns 4

// Amortized O(1) analysis:
// - Each element is pushed to input once, transferred once, and
//   popped from output once: at most 3 operations per element
// - Total: at most 3n operations for n elements => O(1) average per operation
```
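The single-stack row from the table can use the call stack as the implicit second stack. One possible sketch (the costly-pop variant; `SingleStackQueue` is an illustrative name):

```javascript
// Single-stack queue: the JS call stack serves as the second stack.
// push is O(1); pop is O(n) time and O(n) call-stack depth.
class SingleStackQueue {
  constructor() {
    this.stack = [];
  }

  push(x) {
    this.stack.push(x);
  }

  pop() {
    // Recursively unload down to the bottom (oldest) element
    if (this.stack.length === 1) return this.stack.pop();
    const top = this.stack.pop();
    const front = this.pop();
    this.stack.push(top); // restore everything above the front
    return front;
  }

  empty() {
    return this.stack.length === 0;
  }
}
```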
5. Design Twitter Timeline (OOD)
| Operation | Data Structure | Implementation | Complexity |
|---|---|---|---|
| Post Tweet | Map<userId, List<tweet>> | Append to user's tweet list with timestamp | O(1) |
| Follow/Unfollow | Map<userId, Set<followeeId>> | Add/remove from followee set | O(1) |
| Get News Feed | Min heap (merge k sorted lists) | Merge tweets from user + followees, top 10 | O(k log k) |
| Tweet Storage | Timestamp + tweetId counter | Auto-incrementing for chronological order | Global counter |
Example: Twitter timeline with merge k lists
```javascript
class Twitter {
  constructor() {
    this.timestamp = 0;
    this.tweets = new Map();    // userId -> [{id, time}, ...]
    this.following = new Map(); // userId -> Set of followeeIds
  }

  postTweet(userId, tweetId) {
    if (!this.tweets.has(userId)) {
      this.tweets.set(userId, []);
    }
    this.tweets.get(userId).push({ id: tweetId, time: this.timestamp++ });
  }

  getNewsFeed(userId) {
    // Get all relevant users (self + followees)
    const users = new Set([userId]);
    if (this.following.has(userId)) {
      for (const followeeId of this.following.get(userId)) {
        users.add(followeeId);
      }
    }
    // Merge tweets from all users; a sorted array stands in for a max-heap here
    const heap = [];
    for (const user of users) {
      if (this.tweets.has(user)) {
        const userTweets = this.tweets.get(user);
        if (userTweets.length > 0) {
          // Seed with the most recent tweet from each user
          const idx = userTweets.length - 1;
          heap.push({
            tweetId: userTweets[idx].id,
            time: userTweets[idx].time,
            user,
            index: idx
          });
        }
      }
    }
    // Sort descending by time (most recent first)
    heap.sort((a, b) => b.time - a.time);
    const feed = [];
    // Get top 10 most recent tweets
    while (feed.length < 10 && heap.length > 0) {
      // Extract the most recent tweet
      const { tweetId, user, index } = heap.shift();
      feed.push(tweetId);
      // Add next tweet from the same user if it exists
      if (index > 0) {
        const userTweets = this.tweets.get(user);
        const nextIdx = index - 1;
        heap.push({
          tweetId: userTweets[nextIdx].id,
          time: userTweets[nextIdx].time,
          user,
          index: nextIdx
        });
        heap.sort((a, b) => b.time - a.time); // re-sort; a real heap would sift in O(log k)
      }
    }
    return feed;
  }

  follow(followerId, followeeId) {
    if (followerId === followeeId) return; // Can't follow self
    if (!this.following.has(followerId)) {
      this.following.set(followerId, new Set());
    }
    this.following.get(followerId).add(followeeId);
  }

  unfollow(followerId, followeeId) {
    if (this.following.has(followerId)) {
      this.following.get(followerId).delete(followeeId);
    }
  }
}

// Usage:
const twitter = new Twitter();
twitter.postTweet(1, 5);
twitter.getNewsFeed(1); // [5]
twitter.follow(1, 2);
twitter.postTweet(2, 6);
twitter.getNewsFeed(1); // [6, 5]
twitter.unfollow(1, 2);
twitter.getNewsFeed(1); // [5]
```
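The feed merge above re-sorts an array at each step, which is O(k log k) per extraction. A real binary max-heap keyed on `time` would make each push/pop O(log k). A minimal sketch that could stand in for the sorted array (`TweetHeap` is an illustrative name, not part of the original design):

```javascript
// Minimal binary max-heap ordered by `time` (most recent at the root).
class TweetHeap {
  constructor() {
    this.items = [];
  }

  push(item) {
    this.items.push(item);
    let i = this.items.length - 1;
    // Sift up while newer (larger time) than the parent
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.items[parent].time >= this.items[i].time) break;
      [this.items[parent], this.items[i]] = [this.items[i], this.items[parent]];
      i = parent;
    }
  }

  pop() {
    const top = this.items[0];
    const last = this.items.pop();
    if (this.items.length > 0) {
      this.items[0] = last;
      // Sift down toward the larger child
      let i = 0;
      while (true) {
        const l = 2 * i + 1;
        const r = 2 * i + 2;
        let largest = i;
        if (l < this.items.length && this.items[l].time > this.items[largest].time) largest = l;
        if (r < this.items.length && this.items[r].time > this.items[largest].time) largest = r;
        if (largest === i) break;
        [this.items[i], this.items[largest]] = [this.items[largest], this.items[i]];
        i = largest;
      }
    }
    return top;
  }

  get size() {
    return this.items.length;
  }
}
```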
6. Design Hit Counter Problem
| Approach | Method | Space | Time (hit/getHits) |
|---|---|---|---|
| Queue | Store timestamps, dequeue old ones | O(n), n = hits in 5 min | O(1) / O(n) |
| Circular Array | 300 buckets (1 per second), rotate every 5 min | O(1), fixed size | O(1) / O(300) |
| Map with Cleanup | Map<timestamp, count>, remove old entries | O(k) unique timestamps | O(1) / O(k) |
| Binary Search | Sorted array, binary search for 300s ago | O(n) | O(log n) / O(log n) |
Example: Queue-based hit counter
```javascript
class HitCounter {
  constructor() {
    this.hits = []; // Queue of timestamps
  }

  hit(timestamp) {
    this.hits.push(timestamp);
  }

  getHits(timestamp) {
    // Remove hits older than 300 seconds
    while (this.hits.length > 0 &&
           this.hits[0] <= timestamp - 300) {
      this.hits.shift();
    }
    return this.hits.length;
  }
}

// Usage:
const counter = new HitCounter();
counter.hit(1);
counter.hit(2);
counter.hit(3);
counter.getHits(4);   // 3
counter.hit(300);
counter.getHits(300); // 4
counter.getHits(301); // 3 (the hit at timestamp 1 has fallen out of the 300s window)
```
Example: Circular array (O(1) space)
```javascript
class HitCounterOptimized {
  constructor() {
    this.times = new Array(300).fill(0);
    this.hits = new Array(300).fill(0);
  }

  hit(timestamp) {
    const idx = timestamp % 300;
    if (this.times[idx] !== timestamp) {
      // New time window, reset bucket
      this.times[idx] = timestamp;
      this.hits[idx] = 1;
    } else {
      // Same second, increment count
      this.hits[idx]++;
    }
  }

  getHits(timestamp) {
    let total = 0;
    for (let i = 0; i < 300; i++) {
      // Only count buckets within the last 300 seconds
      if (timestamp - this.times[i] < 300) {
        total += this.hits[i];
      }
    }
    return total;
  }
}
```

Example: Map-based approach

```javascript
class HitCounterMap {
  constructor() {
    this.hitMap = new Map(); // timestamp -> count
  }

  hit(timestamp) {
    this.hitMap.set(timestamp,
      (this.hitMap.get(timestamp) || 0) + 1);
  }

  getHits(timestamp) {
    let total = 0;
    const minTime = timestamp - 299;
    for (const [time, count] of this.hitMap) {
      if (time >= minTime && time <= timestamp) {
        total += count;
      } else if (time < minTime) {
        // Clean up old entries (Map iterators tolerate deletion)
        this.hitMap.delete(time);
      }
    }
    return total;
  }
}
```
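The binary-search row from the table has no implementation shown above. A sketch, assuming `hit` is called with non-decreasing timestamps (as in the original problem statement), so the array stays sorted without any insertion logic (`HitCounterBinary` is an illustrative name):

```javascript
// Binary-search hit counter: timestamps arrive in non-decreasing
// order, so the array is always sorted. getHits finds the first
// index whose time is inside the window via lower-bound search.
class HitCounterBinary {
  constructor() {
    this.times = [];
  }

  hit(timestamp) {
    this.times.push(timestamp); // array stays sorted by assumption
  }

  getHits(timestamp) {
    const target = timestamp - 300;
    // Lower bound: first index with times[i] > target
    let lo = 0;
    let hi = this.times.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (this.times[mid] <= target) lo = mid + 1;
      else hi = mid;
    }
    return this.times.length - lo;
  }
}
```

Trade-off: old entries are never removed, so space stays O(n), but `getHits` never mutates state, which makes it safe for concurrent readers in adapted designs.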
Warning: For high-traffic systems the circular-array approach is optimal: O(1) space instead of O(n), where n is the number of hits in the window. The trade-off is that getHits always scans all 300 buckets, whereas the queue approach scans only the hits actually stored.
Summary: Design Pattern Problems
- LRU Cache: HashMap + Doubly Linked List - O(1) get/put by combining O(1) lookup with O(1) reordering
- DLL Benefits: Dummy head/tail eliminate edge cases - always safe to access head.next and tail.prev
- LRU Operations: Get: move to head, Put: add to head + evict tail if full - most recent at head, LRU at tail
- LFU Cache: keyToVal map + freqToKeys map + minFreq tracker - three data structures for O(1) operations
- LFU Strategy: Use Set for insertion order (LRU tiebreaker) - first element in set is LRU among same frequency
- MinFreq Update: Only increment when last key at minFreq promoted - new keys always start at freq=1
- Min Stack: Parallel min stack or stack of pairs - track minimum at each level for O(1) getMin
- Min Stack Optimization: Only push to minStack when new min ≤ current min - saves space, same time complexity
- Queue with Stacks: Input + output stacks, lazy transfer - amortized O(1) for all operations
- Amortized Analysis: Each element pushed once, transferred once, popped once - 3n operations for n elements = O(1) average
- Twitter Timeline: Merge k sorted lists with heap - tweets stored per user, merge for feed generation
- Follow Graph: Map<userId, Set<followeeId>> for O(1) follow/unfollow - use set to prevent duplicates
- Hit Counter Queue: Store timestamps, dequeue old - O(n) space where n = hits in window
- Hit Counter Circular: 300 buckets (1 per second) - O(1) space, O(300) getHits but constant regardless of traffic
- Design Trade-offs: Time vs space, exact vs approximate, simple vs optimized - choose based on constraints