Start by reading the problem statement twice — once to understand the overall goal, once to extract specific constraints and requirements. Misunderstanding costs far more time than reading carefully.
Follow this sequence: (1) Understand problem + constraints, (2) Work examples by hand, (3) Identify the pattern, (4) Brute force first, (5) Optimize step by step.
Never start coding without a clear plan. Even 5 minutes of thinking upfront saves 30 minutes of debugging later.
// Problem-solving framework:
// 1. Clarify: "Is the input sorted? Can there be negatives? What's the max n?"
// 2. Examples: trace input=[1,2,3] manually
// 3. Pattern: "This looks like a sliding window problem"
// 4. Brute force: "O(n²) nested loop"
// 5. Optimize: "Can I eliminate the inner loop with a hash map? O(n)"
// 6. Code with meaningful variable names
// 7. Test: normal case, edge cases, stress test
Constraints reveal the algorithm: n=10 → brute force OK, n=1000 → O(n²) OK, n=10^6 → need O(n log n) or O(n), n=10^9 → need O(log n) or O(1).
Ask before coding: are there duplicates? Can I modify the input? Is there additional memory available? These answers often dramatically change the approach.
Brute force guarantees correctness. Once you have a correct slow solution, you have something to compare against when testing your optimized version. A working brute force is better than a broken optimization.
Brute force also reveals WHY it's slow: which operations are repeated, where the bottlenecks are, and what data structures would help. You can't optimize intelligently without understanding the problem fully.
In interviews, stating "I'll start with brute force O(n²) which I can optimize" scores better than jumping to an incorrect optimization.
// Brute force: two sum O(n²)
function twoSumBrute(arr, target) {
  for (let i = 0; i < arr.length; i++)
    for (let j = i + 1; j < arr.length; j++)
      if (arr[i] + arr[j] === target) return [i, j];
}
// Identify bottleneck: inner loop checks arr[j] === target-arr[i]
// Optimization: store values seen so far in a hash map for O(1) complement lookup
function twoSumOpt(arr, target) {
  const map = new Map(); // value -> index, for elements seen so far
  for (let i = 0; i < arr.length; i++) {
    if (map.has(target - arr[i])) return [map.get(target - arr[i]), i];
    map.set(arr[i], i);
  }
}
Brute force reveals the inner loop pattern — seeing arr[i] + arr[j] === target directly shows that storing the complement in a hash map eliminates the inner loop.
Start simple, make it work, then make it fast. This principle applies to production engineering too — premature optimization is the root of all evil.
Optimization hierarchy: (1) Remove repeated work with memoization/caching, (2) Use better data structures to reduce lookup time, (3) Sort data to enable binary search, (4) Apply algorithmic patterns (sliding window, two pointers), (5) Use mathematical properties.
Each optimization should demonstrably reduce time or space complexity. Moving from O(n²) to O(n log n) or O(n) is significant; shaving a constant factor (doing n - 1 operations instead of n) is noise.
Think in terms of what operations are most expensive and what data structures can replace them.
// Step 1: Brute force - O(n²) for finding duplicate
for (let i = 0; i < n; i++)
  for (let j = i + 1; j < n; j++)
    if (arr[i] === arr[j]) return arr[i]; // inner loop: O(n) scan per element
// Step 2: Identify bottleneck - inner loop does O(n) membership check
// Optimization: use Set for O(1) membership
const seen = new Set();
for (const x of arr) {
  if (seen.has(x)) return x; // O(1) instead of O(n)
  seen.add(x);
} // Total: O(n) time, O(n) space
Hash map/set replacement is the most common optimization: any O(n) linear search in an inner loop can often be replaced with O(1) hash lookup.
Two-pointer technique eliminates an entire O(n) sweep when the array is sorted. Sliding window avoids recomputing window sums by incrementally updating.
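As a quick illustration of the two-pointer idea, here is a minimal sketch for finding a pair with a given sum in a sorted array (function name and sample values are illustrative):
// Two pointers on a sorted array: pair with target sum, O(n)
function pairSumSorted(arr, target) {
  let l = 0, r = arr.length - 1;
  while (l < r) {
    const sum = arr[l] + arr[r];
    if (sum === target) return [l, r];
    if (sum < target) l++; else r--; // move whichever pointer can fix the sum
  }
  return null;
}
// pairSumSorted([1,2,4,7,11,15], 15) => [2, 4] (values 4 and 11)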
Match operations you need to perform with data structures that do them in O(1) or O(log n). The bottleneck in your brute force is often an O(n) operation that a better structure can do faster.
Key mapping: fast lookup → HashMap. Ordered data → sorted array/BST. FIFO → Queue. LIFO → Stack. Priority → Heap. Connectivity → Graph. Prefix → Trie. Range queries → Segment Tree.
Ask: what operations does this problem need most frequently? Then choose the data structure that makes those operations fastest.
// Choose based on dominant operation:
// "Find if element exists" => Set (O(1) lookup)
const set = new Set([1, 2, 3]);
set.has(2); // O(1)
// "Map key to value" => Map/HashMap (O(1))
const map = new Map();
map.set('key', 'value');
// "Get min/max repeatedly" => Heap (O(log n) insert/extract)
// Min heap: always gives smallest element
// "Range sum queries" => Prefix Sum Array
const prefix = [0]; // prefix[i] = sum of arr[0..i-1]
for (const x of arr) prefix.push(prefix.at(-1) + x);
const rangeSum = (l, r) => prefix[r+1] - prefix[l]; // O(1) query
Prefix sum array transforms O(n) range sum queries to O(1) at the cost of O(n) preprocessing — ideal when the same range is queried multiple times.
When multiple data structures are needed, compose them: HashMap + Doubly Linked List for LRU cache, HashMap + Heap for top-k elements, etc.
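One way to sketch the HashMap + Heap composition for top-k elements; JavaScript has no built-in heap, so a minimal min-heap is included (class and function names are illustrative):
// Minimal binary min-heap keyed on item[0]
class MinHeap {
  constructor() { this.a = []; }
  get size() { return this.a.length; }
  push(item) {
    this.a.push(item);
    let i = this.a.length - 1;
    while (i > 0) {
      const p = (i - 1) >> 1;
      if (this.a[p][0] <= this.a[i][0]) break;
      [this.a[p], this.a[i]] = [this.a[i], this.a[p]]; // bubble up
      i = p;
    }
  }
  pop() {
    const top = this.a[0], last = this.a.pop();
    if (this.a.length) {
      this.a[0] = last;
      let i = 0;
      while (true) { // sift down
        const l = 2 * i + 1, r = 2 * i + 2;
        let s = i;
        if (l < this.a.length && this.a[l][0] < this.a[s][0]) s = l;
        if (r < this.a.length && this.a[r][0] < this.a[s][0]) s = r;
        if (s === i) break;
        [this.a[s], this.a[i]] = [this.a[i], this.a[s]];
        i = s;
      }
    }
    return top;
  }
}
// Top-k frequent: HashMap counts + size-k min-heap => O(n log k)
function topKFrequent(nums, k) {
  const freq = new Map();
  for (const x of nums) freq.set(x, (freq.get(x) || 0) + 1);
  const heap = new MinHeap();
  for (const [val, f] of freq) {
    heap.push([f, val]);
    if (heap.size > k) heap.pop(); // evict the least frequent kept so far
  }
  const out = [];
  while (heap.size) out.push(heap.pop()[1]);
  return out;
}
// topKFrequent([1,1,1,2,2,3], 2) => [2, 1]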
Patterns are reusable algorithm templates that apply to many different problems. Recognizing a pattern lets you skip from problem to solution in seconds instead of reinventing from scratch.
Most commonly tested patterns: Two Pointers (sorted arrays, pairs), Sliding Window (contiguous subarrays), BFS/DFS (graphs/trees), DP (optimization with choices), Binary Search (sorted/monotonic spaces), Backtracking (generate all combinations).
Practice mapping problem descriptions to pattern keywords. When you see "contiguous" + "sum/length" → sliding window. When you see "shortest path" → BFS.
// Pattern recognition keywords:
// "contiguous subarray" + "sum/length" => sliding window
// "sorted array" + "pair/triplet" => two pointers
// "min steps / shortest path" => BFS
// "all combinations / generate all" => backtracking
// "overlapping subproblems" => DP / memoization
// "sorted + search" => binary search
// "connected components / reachability" => DFS/BFS on graph
// "prefix / suffix" => prefix sum or suffix array
// "next greater element" => monotonic stack
Pattern matching is a skill, not knowledge — it must be developed through practice. After solving 100+ problems, pattern recognition becomes intuitive and fast.
Hybrid problems combine two patterns (e.g., binary search + sliding window, BFS + DP). Recognize each component separately, then combine.
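For the "shortest path" keyword, a minimal BFS sketch over an unweighted adjacency-list graph (function name and sample graph are illustrative):
// BFS: shortest path length in an unweighted graph, O(V + E)
function shortestPath(graph, start, end) {
  const dist = new Map([[start, 0]]);
  const queue = [start];
  while (queue.length) {
    const v = queue.shift();
    if (v === end) return dist.get(v);
    for (const u of graph[v] || []) {
      if (!dist.has(u)) {            // first visit = shortest distance
        dist.set(u, dist.get(v) + 1);
        queue.push(u);
      }
    }
  }
  return -1; // unreachable
}
// shortestPath({A:['B','C'], B:['D'], C:['D'], D:[]}, 'A', 'D') => 2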
Most algorithmic optimizations are time-space tradeoffs: caching/memoization uses O(n) extra space to convert O(n) repeated work into O(1) lookups. Prefix sums trade O(n) preprocessing space for O(1) queries.
Evaluate based on problem constraints: if memory is scarce (embedded systems, large n), prefer O(1) space even if time is worse. If speed is critical (real-time systems), trade space for time.
Typical tradeoff analysis: is the bottleneck time or space? Can we precompute? Is data static or dynamic?
// Time-space tradeoff examples:
// O(n) space for O(1) time:
const isPrime = new Array(N).fill(true); // Sieve: check in O(1)
// vs checking primality from scratch: O(sqrt(n)) per number
// HashMap eliminates O(n) scan:
const freq = new Map();
for (const x of arr) freq.set(x, (freq.get(x)||0) + 1);
// Now freq lookup is O(1) instead of scanning arr each time
// Prefix sum: O(n) preprocess for O(1) range queries
const prefix = [0];
for (const x of arr) prefix.push(prefix.at(-1) + x); // built in one linear pass
// vs O(n) per query without prefix sum
Space is usually cheaper than time in modern systems. Unless memory is explicitly constrained, prefer faster algorithms that use more space.
In distributed systems, space-time tradeoffs also include network bandwidth: caching reduces server roundtrips (time) at the cost of stale data risk (correctness tradeoff).
Divide and conquer: (1) Divide the problem into non-overlapping subproblems, (2) Conquer each subproblem recursively, (3) Combine results. The subproblems are INDEPENDENT (unlike DP where they overlap).
Classic examples: Merge Sort (divide array, sort halves, merge), Quick Sort (partition + recurse), Binary Search (halve the search space), Matrix multiplication (Strassen's), FFT.
Time complexity: T(n) = aT(n/b) + f(n). Use Master Theorem to solve: if f(n) = O(n^c), compare c with log_b(a) to determine which dominates.
// Merge Sort: classic divide and conquer
function mergeSort(arr) {
  if (arr.length <= 1) return arr;           // base case
  const mid = arr.length >> 1;
  const left = mergeSort(arr.slice(0, mid)); // divide + conquer left half
  const right = mergeSort(arr.slice(mid));   // divide + conquer right half
  return merge(left, right);                 // combine
}
function merge(a, b) {                       // merge two sorted arrays in O(n)
  const out = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) out.push(a[i] <= b[j] ? a[i++] : b[j++]);
  return out.concat(a.slice(i), b.slice(j));
}
// Recurrence: T(n) = 2T(n/2) + O(n)
// Master Theorem: a=2, b=2, f(n)=O(n): log_b(a)=1, f(n)=O(n^1) => T(n)=O(n log n)
// Binary Search: halve each step
// T(n) = T(n/2) + O(1) => T(n) = O(log n)
Key distinction from DP: divide-and-conquer subproblems don't overlap (no memoization needed). If they overlap, DP/memoization is more efficient.
Closest pair of points (Shamos-Hoey), counting inversions, and the Karatsuba multiplication algorithm are classic divide-and-conquer examples beyond sorting.
Use sliding window for problems that ask for the maximum/minimum/count of CONTIGUOUS subarrays or substrings meeting some condition. The "window" represents the current subarray being considered.
Fixed-size window: slide one position at a time (add right element, remove left element). Variable-size window: expand when condition not met, shrink when condition violated.
Keywords that suggest sliding window: "subarray of length k", "longest substring without", "smallest subarray with sum >=", "maximum sum window".
// Fixed window: max sum of k consecutive elements
function maxSumK(arr, k) {
  let sum = arr.slice(0, k).reduce((a, b) => a + b);
  let max = sum;
  for (let i = k; i < arr.length; i++) {
    sum += arr[i] - arr[i - k]; // slide: add right, remove left
    max = Math.max(max, sum);
  }
  return max;
}
// Variable window: longest substring with at most k distinct chars
function longestSubstringK(s, k) {
  const freq = new Map();
  let result = 0;
  for (let l = 0, r = 0; r < s.length; r++) {
    freq.set(s[r], (freq.get(s[r]) || 0) + 1); // expand right
    while (freq.size > k) { // shrink left
      freq.set(s[l], freq.get(s[l]) - 1);
      if (!freq.get(s[l])) freq.delete(s[l]);
      l++;
    }
    result = Math.max(result, r - l + 1);
  }
  return result;
}
Sliding window eliminates recomputation: O(1) update (add right, remove left) vs O(k) full recomputation each step. Reduces O(n*k) brute force to O(n).
When expanding the window, update the condition. When shrinking, update to restore the invariant. The invariant is what the window satisfies at all times.
Greedy: make the locally optimal choice at each step, never reconsider. Works when local optimum always leads to global optimum. Easy to implement but only correct for specific problem structures.
DP: when choices at each step affect future options (overlapping subproblems) AND the optimal solution can be built from optimal sub-solutions (optimal substructure). More powerful but more complex.
Greedy fails when a locally good choice locks you into a globally bad solution. Prove greedy correctness with an exchange argument: show that swapping the greedy choice for any other choice cannot improve the result.
// Greedy: activity selection (interval scheduling)
// Sort by end time, always pick next non-overlapping activity
function maxActivities(intervals) {
  if (!intervals.length) return 0;
  intervals.sort((a, b) => a[1] - b[1]); // sort by end time
  const result = [intervals[0]];
  for (let i = 1; i < intervals.length; i++)
    if (intervals[i][0] >= result.at(-1)[1]) result.push(intervals[i]);
  return result.length;
}
// DP: 0/1 Knapsack (greedy fails here!)
// items can't be fractional, greedy by value/weight ratio is not optimal
function knapsack(weights, values, W) {
  const dp = Array(W + 1).fill(0);
  for (let i = 0; i < weights.length; i++)
    for (let w = W; w >= weights[i]; w--) // backwards to avoid reuse
      dp[w] = Math.max(dp[w], dp[w - weights[i]] + values[i]);
  return dp[W];
}
Quick test: try greedy first. If you can't prove it's always optimal, or you find a counterexample, use DP instead.
Coin change is NOT greedy (unless coins are specially structured like US currency). Fractional knapsack IS greedy (sort by value/weight ratio). Know which variant each problem uses.
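To make the contrast concrete, a sketch of the coin change DP, where greedy by largest coin can fail, e.g. coins [1,3,4] and amount 6 (names are illustrative):
// DP: minimum coins to make 'amount' -- greedy by largest coin is not always optimal
function coinChange(coins, amount) {
  const dp = new Array(amount + 1).fill(Infinity); // dp[a] = min coins for amount a
  dp[0] = 0;
  for (let a = 1; a <= amount; a++)
    for (const c of coins)
      if (c <= a) dp[a] = Math.min(dp[a], dp[a - c] + 1);
  return dp[amount] === Infinity ? -1 : dp[amount];
}
// coinChange([1,3,4], 6) => 2 (3+3), while greedy picks 4+1+1 = 3 coins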
Interviewers evaluate your communication as much as your code. Talk before, during, and after writing. Announce your approach before starting, explain each decision as you code, and summarize complexity when done.
If you get stuck, say what you've tried and why it doesn't work. Asking clarifying questions shows maturity and is expected in real engineering discussions.
A mediocre solution communicated excellently often beats a perfect solution delivered in silence.
// Good interview communication pattern:
// 1. "I understand we need to find..."
// 2. "A brute force approach would be O(n²)..."
// 3. "I can optimize this with a hash map to O(n)..."
// 4. "Let me walk through an example: input=[1,2,3]..."
// 5. "Here's my implementation..."
// 6. "Time: O(n), Space: O(n)"
// 7. "Edge cases: empty array, single element, duplicates..."
Think out loud throughout: narrate what you're writing. An interviewer can redirect you if you're going wrong — but only if they know what you're thinking.
After coding, proactively mention what you would improve with more time: better error handling, additional test cases, or alternative approaches.
Expand around center: for each character, try expanding outward while both ends match. Do this for both odd-length (single center) and even-length (pair center) palindromes. O(n²) time, O(1) space.
Manacher's algorithm achieves O(n) by reusing previously computed palindrome radii, but the expand-around-center approach is far simpler to implement correctly.
Dynamic programming approach: dp[i][j] = true if s[i..j] is a palindrome. Fill bottom-up by length. O(n²) time and space.
function longestPalindrome(s) {
  let start = 0, maxLen = 1;
  function expand(l, r) {
    while (l >= 0 && r < s.length && s[l] === s[r]) { l--; r++; }
    if (r - l - 1 > maxLen) { maxLen = r - l - 1; start = l + 1; }
  }
  for (let i = 0; i < s.length; i++) {
    expand(i, i);     // odd length: center at i
    expand(i, i + 1); // even length: center between i and i+1
  }
  return s.slice(start, start + maxLen);
}
// longestPalindrome("babad") => "bab"
// longestPalindrome("cbbd") => "bb"
Expand from center: the inner while loop naturally terminates when characters don't match — no need to validate palindrome separately.
Common interview follow-up: count total palindromic substrings. Same expand-around-center: count every valid expansion instead of tracking maximum.
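A sketch of that follow-up, reusing the same expansion idea (function name assumed):
// Count all palindromic substrings by counting every successful expansion
function countPalindromes(s) {
  let count = 0;
  const expand = (l, r) => {
    while (l >= 0 && r < s.length && s[l] === s[r]) { count++; l--; r++; }
  };
  for (let i = 0; i < s.length; i++) {
    expand(i, i);     // odd-length centers
    expand(i, i + 1); // even-length centers
  }
  return count;
}
// countPalindromes("aaa") => 6 ("a","a","a","aa","aa","aaa")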
Sliding window approach: maintain a window of size p.length. Track character frequency counts for both the window and the pattern. When counts match, the window is an anagram.
Updating the window: when sliding, decrement count of outgoing character (remove from left) and increment count of incoming character (add from right).
O(n) time, O(26) = O(1) space for lowercase letters. Much better than the O(n*k) brute force of checking every substring.
function findAnagrams(s, p) {
  const result = [];
  const pCount = {}, wCount = {};              // pattern counts vs window counts
  for (const c of p) pCount[c] = (pCount[c] || 0) + 1;
  let formed = 0;
  const required = Object.keys(pCount).length; // distinct chars that must match exactly
  for (let l = 0, r = 0; r < s.length; r++) {
    wCount[s[r]] = (wCount[s[r]] || 0) + 1;    // expand: add right char
    if (pCount[s[r]] && wCount[s[r]] === pCount[s[r]]) formed++;
    if (r - l + 1 === p.length) {              // window at full size: record, then slide
      if (formed === required) result.push(l);
      if (pCount[s[l]] && wCount[s[l]] === pCount[s[l]]) formed--;
      wCount[s[l]]--;                          // remove left char
      l++;
    }
  }
  return result;
}
// findAnagrams("cbaebabacd", "abc") => [0, 6]
Fixed-size sliding window: window size = p.length is constant. Only slide when window reaches full size.
'formed' tracks how many characters have matching exact counts. When formed === required, the window is a valid anagram.
Token bucket: maintain a count of tokens replenished at a fixed rate. Each request consumes a token. Allows bursts up to bucket capacity. Sliding window log: record timestamps, reject if more than limit in [now-window, now].
Fixed window counter is simplest (reset count every time period) but allows double the requests at window boundaries. Sliding window fixes this.
For distributed systems, use Redis INCR with TTL for atomic token counting across servers.
// Token Bucket rate limiter
class RateLimiter {
  constructor(capacity, refillRate) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillRate = refillRate; // tokens per ms
    this.lastRefill = Date.now();
  }
  allowRequest() {
    const now = Date.now();
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillRate);
    this.lastRefill = now;
    if (this.tokens >= 1) { this.tokens--; return true; }
    return false;
  }
}
// new RateLimiter(10, 1/1000) allows 10 burst + 1 per second
Token bucket allows bursts up to capacity while maintaining the average rate. Ideal for APIs that need to handle traffic spikes gracefully.
Sliding window log is most accurate (no boundary burst) but uses O(rate) memory per user to store timestamps.
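A sketch of the sliding window log variant for comparison (class name and parameters are illustrative):
// Sliding window log: exact limit, O(limit) memory per user
class SlidingWindowLog {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = []; // request times within the current window
  }
  allowRequest() {
    const now = Date.now();
    // Drop timestamps that have fallen out of the rolling window
    while (this.timestamps.length && this.timestamps[0] <= now - this.windowMs)
      this.timestamps.shift();
    if (this.timestamps.length < this.limit) {
      this.timestamps.push(now);
      return true;
    }
    return false;
  }
}
// new SlidingWindowLog(100, 60_000): at most 100 requests in any rolling 60s window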
Jump Game I (can you reach the end?): Track the maximum index reachable so far. At each position i, update max_reach = max(max_reach, i + nums[i]). If i > max_reach, we're stuck.
Greedy insight: at each position, we only care about the farthest we can reach, not which path got us there. This makes greedy optimal here.
Jump Game II (minimum jumps): always jump to the farthest reachable position in the current window. O(n) greedy.
// Jump Game I: can reach end?
function canJump(nums) {
  let maxReach = 0;
  for (let i = 0; i < nums.length; i++) {
    if (i > maxReach) return false; // stuck
    maxReach = Math.max(maxReach, i + nums[i]);
  }
  return true;
}
// canJump([2,3,1,1,4]) => true
// canJump([3,2,1,0,4]) => false
// Jump Game II: minimum jumps
function minJumps(nums) {
  let jumps = 0, curEnd = 0, farthest = 0;
  for (let i = 0; i < nums.length - 1; i++) {
    farthest = Math.max(farthest, i + nums[i]);
    if (i === curEnd) { jumps++; curEnd = farthest; }
  }
  return jumps;
}
Greedy works here because being able to reach a farther index is never worse than only reaching a nearer one, so tracking the single farthest reachable position is enough.
DP approach is O(n²): dp[i] = true if reachable. Greedy is O(n). Both are correct; greedy is preferred.
Use Floyd's algorithm to detect the cycle. Then find the cycle start: reset one pointer to head, advance both at speed 1. They meet at the cycle start.
Mathematical proof: if meeting point is k steps from cycle start, head is also k steps from cycle start (by the cycle math). So advancing both at speed 1 makes them meet at the start.
Remove cycle: find node just before cycle start (prev pointer), set prev.next = null.
function detectCycle(head) {
  let slow = head, fast = head;
  while (fast && fast.next) {
    slow = slow.next;
    fast = fast.next.next;
    if (slow === fast) {
      // Cycle exists! Find start.
      let ptr = head;
      while (ptr !== slow) { ptr = ptr.next; slow = slow.next; }
      return ptr; // cycle start node
    }
  }
  return null; // no cycle
}
function removeCycle(head) {
  const start = detectCycle(head);
  if (!start) return head;
  let curr = start;
  while (curr.next !== start) curr = curr.next; // find node just before start
  curr.next = null; // break cycle
  return head;
}
Phase 1: detect cycle with fast/slow. Phase 2: reset one to head, advance both at speed 1 — they meet at cycle start.
This runs in O(n) time, O(1) space. The HashSet approach is simpler (store visited nodes) but uses O(n) space.
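For comparison, a sketch of the simpler HashSet version that trades O(n) space for simplicity (function name assumed):
// O(n) space alternative: the first node already seen is the cycle start
function detectCycleWithSet(head) {
  const seen = new Set();
  for (let node = head; node; node = node.next) {
    if (seen.has(node)) return node; // cycle start
    seen.add(node);
  }
  return null; // no cycle
}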
DP approach: dp[i] = length of LIS ending at index i. For each i, check all j < i where nums[j] < nums[i]. dp[i] = max(dp[j]) + 1. Answer = max(dp). O(n²) time.
Binary search approach: maintain a 'tails' array where tails[i] is the smallest tail element of all increasing subsequences of length i+1. Replace with binary search. O(n log n).
The O(n log n) approach doesn't reconstruct the actual subsequence easily — use patience sorting visualization to understand it.
// O(n²) DP
function lisDP(nums) {
  const dp = Array(nums.length).fill(1);
  for (let i = 1; i < nums.length; i++)
    for (let j = 0; j < i; j++)
      if (nums[j] < nums[i]) dp[i] = Math.max(dp[i], dp[j] + 1);
  return Math.max(...dp);
}
// O(n log n) binary search
function lis(nums) {
  const tails = [];
  for (const n of nums) {
    let lo = 0, hi = tails.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (tails[mid] < n) lo = mid + 1; else hi = mid;
    }
    tails[lo] = n;
  }
  return tails.length;
}
// lis([10,9,2,5,3,7,101,18]) => 4 ([2,3,7,101])
Patience sorting intuition: maintain piles where each pile top is >= incoming card. Place on leftmost pile where top >= card. Number of piles = LIS length.
LIS has applications in diff algorithms, minimum edit distance, and analyzing temporal data sequences.
Consistent hashing maps both nodes (servers) and keys to positions on a virtual ring (0 to 2^32-1). A key is assigned to the first node clockwise from it on the ring.
When a node is added/removed, only keys between the new node and its predecessor need to be moved — far fewer than rehashing everything. Critical for distributed caching and databases.
Virtual nodes (vnodes): each physical node has k virtual positions on the ring for better load distribution. Used in Cassandra, DynamoDB, and Memcached.
class ConsistentHash {
  constructor(replicas = 100) {
    this.replicas = replicas;   // virtual nodes per physical server
    this.ring = new Map();      // hash position => server
    this.sortedKeys = [];
  }
  addServer(server) {
    for (let i = 0; i < this.replicas; i++) {
      const hash = this.hash(server + ':' + i);
      this.ring.set(hash, server);
      this.sortedKeys.push(hash);
    }
    this.sortedKeys.sort((a, b) => a - b);
  }
  getServer(key) {
    if (!this.sortedKeys.length) return null;
    const hash = this.hash(key);
    const pos = this.sortedKeys.findIndex(k => k >= hash); // first position clockwise
    const idx = pos === -1 ? 0 : pos;                      // wrap around the ring
    return this.ring.get(this.sortedKeys[idx]);
  }
  hash(key) { // simple FNV-1a (32-bit) for illustration
    let h = 0x811c9dc5;
    for (let i = 0; i < key.length; i++) {
      h ^= key.charCodeAt(i);
      h = Math.imul(h, 0x01000193) >>> 0;
    }
    return h;
  }
}
Only k/n keys migrate when adding a node (where k=total keys, n=nodes). Traditional modulo hashing moves almost all keys when node count changes.
Real implementations keep the ring in a sorted structure (e.g., a tree map, or a sorted array with binary search) for O(log n) server lookup instead of the linear findIndex scan above.
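A sketch of that O(log n) lookup as a standalone binary search over the sorted ring positions; it could replace the findIndex scan in getServer above (function name and parameters are mine):
// O(log n) ring lookup: binary search for the first position >= hash, wrapping around
function lookupServer(sortedKeys, ring, hash) {
  if (!sortedKeys.length) return null;
  let lo = 0, hi = sortedKeys.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (sortedKeys[mid] < hash) lo = mid + 1; else hi = mid;
  }
  const idx = lo === sortedKeys.length ? 0 : lo; // wrap past the end of the ring
  return ring.get(sortedKeys[idx]);
}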
CPU task scheduler with cooldown n: after each task, the same task cannot run for n units. Use a greedy priority queue: always schedule the most frequent remaining task that isn't on cooldown.
Key insight: if the most frequent task has frequency f, the minimum time is at least (f-1)*(n+1) + count_of_tasks_with_max_freq.
This is a classic greedy scheduling problem with applications in OS scheduling, API rate limiting by task type, and job queues.
function leastInterval(tasks, n) {
  const freq = new Array(26).fill(0);
  for (const t of tasks) freq[t.charCodeAt(0) - 65]++;
  const maxFreq = Math.max(...freq);
  const maxCount = freq.filter(f => f === maxFreq).length;
  // Minimum slots = either formula result or actual task count
  return Math.max(tasks.length, (maxFreq - 1) * (n + 1) + maxCount);
}
// leastInterval(["A","A","A","B","B","B"], 2) => 8
// Sequence: A B _ A B _ A B (idle slots get filled if other task types exist)
// Simulation approach (shows actual schedule):
// Use max-heap by frequency, cooldown queue per task
Mathematical formula: (maxFreq-1)*(n+1)+maxCount gives minimum slots needed. The max with tasks.length handles the case where cooldown doesn't constrain.
For actual scheduling (not just counting), use a max-heap (priority queue) with a cooldown queue. Pop most frequent, execute, wait n cycles, re-enqueue.
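A sketch of that simulation; since there are at most 26 task types here, a linear scan of the frequency array stands in for the max-heap (names are illustrative):
// Simulation: produce an actual schedule (26 task types, so a scan replaces the heap)
function buildSchedule(tasks, n) {
  const freq = new Array(26).fill(0);
  const nextFree = new Array(26).fill(0); // earliest time slot each task may run again
  for (const t of tasks) freq[t.charCodeAt(0) - 65]++;
  let remaining = tasks.length, time = 0;
  const schedule = [];
  while (remaining > 0) {
    let best = -1;
    for (let i = 0; i < 26; i++) // most frequent task not on cooldown
      if (freq[i] > 0 && nextFree[i] <= time && (best === -1 || freq[i] > freq[best]))
        best = i;
    if (best === -1) {
      schedule.push('idle');             // everything runnable is cooling down
    } else {
      schedule.push(String.fromCharCode(65 + best));
      freq[best]--;
      nextFree[best] = time + n + 1;     // same task must wait n slots
      remaining--;
    }
    time++;
  }
  return schedule;
}
// buildSchedule(["A","A","A","B","B","B"], 2) => ["A","B","idle","A","B","idle","A","B"]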
Kosaraju's algorithm (two-pass DFS): (1) Run DFS on original graph, record finish times. (2) Transpose the graph (reverse all edges). (3) Run DFS in reverse finish-time order on transposed graph — each DFS tree is an SCC.
Tarjan's algorithm does it in one pass using discovery times and low-link values, detecting SCCs as DFS completes subtrees.
SCCs are used in: social network analysis (finding tight-knit communities), map routing (bidirectional components), and compiler analysis (detecting circular dependencies).
// Kosaraju's Algorithm
function kosaraju(graph, n) {
  const visited = new Array(n).fill(false), order = [];
  // Pass 1: DFS to get finish order
  function dfs1(v) {
    visited[v] = true;
    for (const u of graph[v] || []) if (!visited[u]) dfs1(u);
    order.push(v);
  }
  for (let i = 0; i < n; i++) if (!visited[i]) dfs1(i);
  // Build transposed graph
  const tGraph = Array.from({ length: n }, () => []);
  for (let v = 0; v < n; v++) for (const u of graph[v] || []) tGraph[u].push(v);
  // Pass 2: DFS in reverse finish order on transposed
  const component = new Array(n).fill(-1);
  let comp = 0;
  function dfs2(v, c) {
    component[v] = c;
    for (const u of tGraph[v]) if (component[u] === -1) dfs2(u, c);
  }
  while (order.length) {
    const v = order.pop();
    if (component[v] === -1) dfs2(v, comp++);
  }
  return { count: comp, component };
}
Two key insights: a vertex that finishes last in DFS belongs to a source SCC that can reach many others, and reversing the edges swaps sources and sinks, so only mutually reachable vertices end up in the same DFS tree of the second pass.
Tarjan's is slightly more efficient (single DFS) using a stack and low-link values. Both are O(V+E).
Serialization: BFS or DFS with null markers. BFS produces level-order representation [1,2,3,null,null,4,5]. DFS preorder is simpler to deserialize recursively.
Key insight for DFS deserialization: preorder traversal visits root first — when deserializing, read root, then recursively rebuild left and right subtrees from the remaining string.
The level-order form is the same format LeetCode uses for tree representation.
// Preorder DFS serialize/deserialize
function serialize(root) {
  if (!root) return 'null';
  return root.val + ',' + serialize(root.left) + ',' + serialize(root.right);
}
function deserialize(data) {
  const vals = data.split(',');
  let idx = 0;
  function build() {
    const val = vals[idx++];
    if (val === 'null') return null;
    const node = { val: parseInt(val), left: null, right: null };
    node.left = build();
    node.right = build();
    return node;
  }
  return build();
}
// serializing the tree given level-order as [1,2,3,null,null,4,5] => "1,2,null,null,3,4,null,null,5,null,null"
// deserialize that string => original tree
Preorder + null markers uniquely identify the tree structure — unlike inorder alone which is ambiguous. Deserialization uses a global index to consume tokens sequentially.
BFS serialization creates more compact output for complete trees. Preorder is better for sparse trees. Both are O(n) time and space.
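A sketch of the BFS (level-order) serialization for comparison (function name assumed):
// BFS serialize: level order with null markers, matching the [1,2,3,null,null,4,5] style
function serializeBFS(root) {
  if (!root) return '';
  const out = [], queue = [root];
  while (queue.length) {
    const node = queue.shift();
    if (!node) { out.push('null'); continue; }
    out.push(String(node.val));
    queue.push(node.left, node.right);
  }
  while (out.at(-1) === 'null') out.pop(); // trim trailing nulls for compactness
  return out.join(',');
}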
LRU cache evicts the least recently used item when at capacity. Need O(1) get and put. Solution: combine a HashMap (O(1) access) with a Doubly Linked List (O(1) move to front/remove from back).
HashMap stores key → {node reference}. DLL tracks usage order: most recent at head, least recent at tail. On access, move node to head. When full, remove tail node.
JavaScript's Map preserves insertion order and allows O(1) delete and re-insert, making it a clean LRU implementation.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.cache = new Map(); // maintains insertion order for LRU
  }
  get(key) {
    if (!this.cache.has(key)) return -1;
    const val = this.cache.get(key);
    this.cache.delete(key); // remove and re-insert to mark as recent
    this.cache.set(key, val);
    return val;
  }
  put(key, value) {
    if (this.cache.has(key)) this.cache.delete(key);
    this.cache.set(key, value);
    if (this.cache.size > this.capacity) {
      const oldest = this.cache.keys().next().value; // first key = oldest
      this.cache.delete(oldest);
    }
  }
}
// const lru = new LRUCache(2);
// lru.put(1,1); lru.put(2,2); lru.get(1); lru.put(3,3);
// lru.get(2) => -1 (evicted)
JavaScript Map insertion order trick: delete + re-insert moves a key to the "most recently used" end. The first key in the Map is always the LRU candidate.
Traditional implementation: HashMap + Doubly Linked List with sentinel head and tail nodes. O(1) all operations, used in OS page replacement and browser/CDN caches.
Stack approach: push opening brackets, pop when closing bracket matches the top. String is valid if stack is empty at end and every closing bracket matches the most recent opening.
The key insight: brackets must be closed in LIFO order — the most recently opened bracket must be closed first. This is exactly what a stack models.
Extended to expressions: validate mathematical expressions with nested parentheses for balanced structure.
function isValid(s) {
  const stack = [];
  const map = { ')': '(', '}': '{', ']': '[' };
  for (const c of s) {
    if ('([{'.includes(c)) stack.push(c);
    else if (stack.pop() !== map[c]) return false;
  }
  return stack.length === 0;
}
// isValid("()[]{}") => true
// isValid("([)]") => false
// isValid("{[]}") => true
// Extended: minimum bracket removals
function minRemove(s) {
  const stack = [], remove = new Set();
  for (let i = 0; i < s.length; i++) {
    if (s[i] === '(') stack.push(i);
    else if (s[i] === ')') {
      stack.length ? stack.pop() : remove.add(i);
    }
  }
  stack.forEach(i => remove.add(i));
  return s.split('').filter((_, i) => !remove.has(i)).join('');
}
Lookup-table matching: store closing→opening pairs in an object or Map for clean O(1) lookup, handling any number of bracket types without if/else chains.
Follow-up: generate all valid bracket combinations of n pairs — use backtracking with open/close counters. Generate n=3: ["((()))","(()())","(())()","()(())","()()()"].
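A sketch of that follow-up with open/close counters (function name assumed):
// Backtracking: all valid combinations of n bracket pairs
function generateParens(n) {
  const result = [];
  function backtrack(current, open, close) {
    if (current.length === 2 * n) { result.push(current); return; }
    if (open < n) backtrack(current + '(', open + 1, close);      // can still open
    if (close < open) backtrack(current + ')', open, close + 1);  // only close what is open
  }
  backtrack('', 0, 0);
  return result;
}
// generateParens(3) => ["((()))","(()())","(())()","()(())","()()()"]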
Fixed-length sliding window: compute sum of first k elements, then slide by adding the next element and removing the leftmost. Track maximum sum seen.
O(n) time, O(1) space. Far better than O(n*k) brute force that recomputes each window sum from scratch.
This is the simplest sliding window problem — foundation for all variable-window and complex sliding window problems.
function maxSumSubarray(arr, k) {
  if (arr.length < k) return null;
  // Calculate first window sum
  let windowSum = arr.slice(0, k).reduce((s, x) => s + x, 0);
  let maxSum = windowSum;
  // Slide the window
  for (let i = k; i < arr.length; i++) {
    windowSum += arr[i] - arr[i - k]; // add right, remove left
    maxSum = Math.max(maxSum, windowSum);
  }
  return maxSum;
}
// maxSumSubarray([2,1,5,1,3,2], 3) => 9 (subarray [5,1,3])
// maxSumSubarray([2,3,4,1,5], 2) => 7 (subarray [3,4])
// Return the actual subarray:
function maxSumSubarrayFull(arr, k) {
  let windowSum = arr.slice(0, k).reduce((s, x) => s + x, 0);
  let maxSum = windowSum, start = 0;
  for (let i = k; i < arr.length; i++) {
    windowSum += arr[i] - arr[i - k];
    if (windowSum > maxSum) { maxSum = windowSum; start = i - k + 1; }
  }
  return arr.slice(start, start + k);
}
Sliding window update rule: newSum = prevSum + arr[rightEdge] - arr[leftEdge]. This single O(1) operation replaces a full O(k) sum recalculation.
Variable-length windows (e.g., smallest subarray with sum ≥ target) use two pointers that move independently, expanding or shrinking as needed.
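A sketch of that variable-length window, the smallest subarray with sum >= target (function name assumed):
// Variable window: length of smallest subarray with sum >= target, O(n)
function minSubArrayLen(target, arr) {
  let sum = 0, best = Infinity;
  for (let l = 0, r = 0; r < arr.length; r++) {
    sum += arr[r];              // expand right
    while (sum >= target) {     // shrink left while the condition still holds
      best = Math.min(best, r - l + 1);
      sum -= arr[l++];
    }
  }
  return best === Infinity ? 0 : best;
}
// minSubArrayLen(7, [2,3,1,2,4,3]) => 2 (subarray [4,3])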