Big O notation describes an upper bound on an algorithm's growth rate as input size increases. In practice it is most often used to describe worst-case performance in terms of time or space.
// Big O expresses how runtime scales with input size n
// O(1) — constant: doesn't depend on n
// O(log n) — logarithmic: halves problem each step
// O(n) — linear: one pass through data
// O(n log n) — linearithmic: efficient sorting
// O(n^2) — quadratic: nested loops
// O(2^n) — exponential: recursive subsets
We drop constants and lower-order terms: O(2n + 5) simplifies to O(n).
Common complexity classes with examples:
// O(1) — Array access, hash map lookup
let x = arr[5]; // constant time
// O(n) — Linear search, single loop
for (let i = 0; i < n; i++) { /* process */ }
// O(log n) — Binary search
function bsearch(arr, target) {
  let lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1; // not found
}
O(log n) appears whenever the problem size is halved at each step.
O(n²) typically comes from nested loops where both iterate over n elements:
// Bubble Sort — nested loops
for (let i = 0; i < n; i++) {
  for (let j = 0; j < n - i - 1; j++) {
    if (arr[j] > arr[j+1])
      [arr[j], arr[j+1]] = [arr[j+1], arr[j]];
  }
}
// Checking all pairs
for (let i = 0; i < n; i++) {
  for (let j = i + 1; j < n; j++) {
    // process pair (i, j)
  }
} // n*(n-1)/2 iterations = O(n^2)
O(n²) becomes slow for large inputs. For n=10,000, that is 100 million operations.
Space complexity measures the extra memory an algorithm uses relative to input size, excluding the input itself.
// O(1) space — fixed variables
function sum(arr) {
  let total = 0; // only one variable
  for (const x of arr) total += x;
  return total;
}
// O(n) space — new array proportional to input
function double(arr) {
  return arr.map(x => x * 2); // new array of size n
}
// O(n) space — recursive call stack
function factorial(n) {
  if (n <= 1) return 1;
  return n * factorial(n - 1); // n frames on stack
}
Consider both auxiliary space (extra structures) and stack space (recursion depth).
Amortized analysis averages the cost of operations over a sequence: even if some individual operations are expensive, the average cost per operation stays low.
// Dynamic array (like JS Array.push):
// Most pushes: O(1) — just add to end
// Occasionally: O(n) — when array doubles capacity
// Amortized cost of push:
// n pushes, only log(n) doublings
// Total cost: n writes + (1 + 2 + 4 + ... + n) copies ≈ 3n
// Amortized per push: O(1)
// Example: push 8 elements
// [_] [_,_] [_,_,_,_] [_,_,_,_,_,_,_,_]
// Copies: 1 + 2 + 4 = 7 copies for 8 pushes
Amortized O(1) is not the same as worst-case O(1): individual operations can still be costly, but the average over the whole sequence is constant.
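To make the doubling concrete, here is a minimal sketch of a growable array (the DynamicArray class and its field names are illustrative; real JS arrays manage capacity internally):
// Minimal dynamic-array sketch: most pushes are O(1), resizes are O(n) but rare
class DynamicArray {
  constructor() {
    this.capacity = 1;
    this.length = 0;
    this.data = new Array(this.capacity);
  }
  push(value) {
    if (this.length === this.capacity) {
      this.capacity *= 2;                          // double on overflow
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.length; i++) bigger[i] = this.data[i]; // O(n) copy
      this.data = bigger;
    }
    this.data[this.length++] = value;              // ordinary O(1) append
  }
}
// Pushing n elements triggers only ~log2(n) resizes, so the average push is O(1).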
An algorithm can have different performance depending on the input:
// Linear Search example:
// Best case: O(1) — target is the first element
// Worst case: O(n) — target is last or not present
// Average case: O(n/2) = O(n) — target is somewhere in the middle
// Quick Sort example:
// Best case: O(n log n) — balanced partitions
// Worst case: O(n^2) — consistently bad pivots (e.g., already-sorted input with a first/last-element pivot)
// Average case: O(n log n) — random inputs
// Insertion Sort example:
// Best case: O(n) — already sorted
// Worst case: O(n^2) — reverse sorted
Big O usually refers to worst case. Average case analysis often requires probability assumptions about input distribution.
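A small linear-search sketch makes the three cases tangible (the helper name linearSearch is illustrative):
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i; // best case: hit at i = 0 → O(1)
  }
  return -1;                         // worst case: scanned all n elements → O(n)
}
// linearSearch([4, 8, 15, 16], 4)  => 0   (best case)
// linearSearch([4, 8, 15, 16], 23) => -1  (worst case)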
Complexity hierarchy from fastest to slowest growth:
// O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n) < O(n!)
// For n = 1000:
// O(1) = 1 operation
// O(log n) = ~10 operations
// O(n) = 1,000 operations
// O(n log n) = ~10,000 operations
// O(n^2) = 1,000,000 operations
// O(2^n) = 10^301 — infeasible!
// O(n!) = even larger
Always aim for the lowest complexity. O(n log n) sorting is acceptable for most cases. O(n²) is okay for n < 10,000. O(2<sup>n</sup>) is only feasible for n < ~25.
Use recurrence relations and the Master Theorem for divide-and-conquer recurrences of the form T(n) = aT(n/b) + O(n<sup>d</sup>).
// Binary Search: T(n) = T(n/2) + O(1) => O(log n)
// Merge Sort: T(n) = 2T(n/2) + O(n) => O(n log n)
// Fibonacci (naive): T(n) = T(n-1) + T(n-2) => O(2^n)
// Master Theorem: T(n) = aT(n/b) + O(n^d)
// If d < log_b(a): O(n^(log_b(a)))
// If d = log_b(a): O(n^d * log n)
// If d > log_b(a): O(n^d)
For non-standard recurrences, use the recursion tree method to sum work at each level.
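To connect the merge sort recurrence to code, here is a minimal sketch: the two recursive calls are the 2T(n/2) term and the linear merge is the O(n) term.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = arr.length >> 1;
  const left = mergeSort(arr.slice(0, mid));   // T(n/2)
  const right = mergeSort(arr.slice(mid));     // T(n/2)
  const merged = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {               // O(n) merge
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
// mergeSort([5, 2, 9, 1]) => [1, 2, 5, 9]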
Hash map (object/Map in JS) operations are O(1) average but O(n) worst case due to hash collisions.
const map = new Map();
// Insert — O(1) average
map.set("key", "value");
// Lookup — O(1) average
map.get("key");
// Delete — O(1) average
map.delete("key");
// Check existence — O(1) average
map.has("key");
// Iterate all entries — O(n)
for (const [k, v] of map) { /* ... */ }
Hash maps are ideal for frequency counting, caching, and lookup tables. The O(1) comes from computing a hash and indexing directly.
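For example, frequency counting is a single O(n) pass with an O(1)-average update per element (the name countFrequencies is illustrative):
function countFrequencies(items) {
  const counts = new Map();
  for (const item of items) {
    counts.set(item, (counts.get(item) ?? 0) + 1); // O(1) average per element
  }
  return counts;
}
// countFrequencies(["a", "b", "a"]) => Map { "a" => 2, "b" => 1 }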
Binary search halves the search space with each comparison. After k steps, the remaining space is n/2<sup>k</sup>. We stop when n/2<sup>k</sup> = 1, so k = log<sub>2</sub>(n).
// n = 1024 elements
// Step 1: 512 remaining
// Step 2: 256 remaining
// Step 3: 128 remaining
// ...
// Step 10: 1 remaining
// log2(1024) = 10 steps
// In general:
// n = 1,000 => ~10 steps
// n = 1,000,000 => ~20 steps
// n = 1,000,000,000 => ~30 steps
This is why O(log n) is extremely efficient: doubling the input only adds one extra step.
These three notations describe asymptotic bounds on algorithm complexity:
// Big O — upper bound (worst case ceiling)
// f(n) = O(g(n)) means f grows no faster than g
// Example: n^2 + n = O(n^2)
// Big Omega (Ω) — lower bound (best case floor)
// f(n) = Ω(g(n)) means f grows at least as fast as g
// Example: any comparison-based sort is Ω(n log n)
// Big Theta (Θ) — tight bound (both upper and lower)
// f(n) = Θ(g(n)) means f grows exactly as fast as g
// Example: merge sort = Θ(n log n) in all cases
// Relationship: Θ(g) = O(g) ∩ Ω(g) — a tight bound implies both an upper and a lower bound
// A function that is O(n^2) is also O(n^3) — O gives an upper bound, not an exact rate
In practice, Big O is most commonly used to describe worst-case performance. Theta is more precise but harder to prove.
The theoretical lower bound for comparison-based sorting is Ω(n log n). This is proven via the decision tree model.
// Decision tree for sorting n elements:
// Leaves ≥ n! (one for each possible permutation)
// Height of binary tree with n! leaves ≥ log2(n!)
// Stirling's approximation: log2(n!) ≈ n log2(n)
// Therefore: any comparison sort requires Ω(n log n) comparisons
// Optimal O(n log n) algorithms:
// - Merge Sort: O(n log n) always
// - Heap Sort: O(n log n) always
// - Quick Sort: O(n log n) average, O(n^2) worst
// Can beat O(n log n) WITHOUT comparisons:
// - Counting Sort: O(n + k) where k = range of values
// - Radix Sort: O(d * (n + k)) where d = digits
// - Bucket Sort: O(n + k) average
Non-comparison sorts can be O(n) by exploiting the structure of the keys, not their ordering relative to each other.
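As a sketch of a non-comparison sort, here is a simple counting sort; it assumes non-negative integer keys in a known range [0, maxValue]:
function countingSort(arr, maxValue) {
  const counts = new Array(maxValue + 1).fill(0);
  for (const x of arr) counts[x]++;                  // O(n): tally each value
  const sorted = [];
  for (let v = 0; v <= maxValue; v++) {              // O(k): walk the value range
    for (let c = 0; c < counts[v]; c++) sorted.push(v);
  }
  return sorted;                                     // total O(n + k)
}
// countingSort([3, 1, 4, 1, 5], 5) => [1, 1, 3, 4, 5]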
Recursive algorithms use the call stack implicitly. Each function call adds a frame. Stack depth = max recursion depth = space complexity.
// Linear recursion: O(n) space
function sum(n) {
  if (n === 0) return 0;
  return n + sum(n - 1); // n frames on stack
}
// Tree recursion (Fibonacci): O(n) space (depth of tree)
function fib(n) {
  if (n <= 1) return n;
  return fib(n-1) + fib(n-2); // max depth = n
}
// Note: time is O(2^n) but space is O(n) — only one branch active at a time
// Binary search (recursive): O(log n) space
// Merge sort: O(n) space — O(n) merge arrays + O(log n) stack
// Tail-recursive (with TCO): O(1) space
// Most JS engines don't support TCO; use iterative for O(1) space
Key insight: stack space = max depth of recursion, not total nodes visited.
Know the time complexity of standard operations for each data structure:
// Arrays (dynamic):
// Access: O(1) | Search: O(n) | Insert end: O(1) amortized
// Insert middle: O(n) | Delete middle: O(n)
// Linked List:
// Access: O(n) | Search: O(n) | Insert/Delete at head: O(1)
// Insert/Delete at position: O(n) to traverse
// Stack/Queue:
// Push/Pop/Peek: O(1) (all operations)
// Hash Map (average):
// Get/Set/Delete: O(1) average | O(n) worst (collisions)
// Binary Search Tree (balanced):
// Search/Insert/Delete: O(log n)
// BST unbalanced worst case: O(n)
// Heap:
// Insert: O(log n) | Extract-min/max: O(log n)
// Peek min/max: O(1) | Build heap: O(n)
Choose data structures based on what operations you need most frequently.
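The heap operations above can be made concrete with a minimal array-backed min-heap sketch (class and method names are illustrative):
class MinHeap {
  constructor() { this.a = []; }
  peek() { return this.a[0]; }                       // O(1)
  insert(x) {                                        // O(log n): bubble up
    this.a.push(x);
    let i = this.a.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.a[parent] <= this.a[i]) break;
      [this.a[parent], this.a[i]] = [this.a[i], this.a[parent]];
      i = parent;
    }
  }
  extractMin() {                                     // O(log n): sift down
    const min = this.a[0];
    const last = this.a.pop();
    if (this.a.length > 0) {
      this.a[0] = last;
      let i = 0;
      while (true) {
        const l = 2 * i + 1, r = 2 * i + 2;
        let smallest = i;
        if (l < this.a.length && this.a[l] < this.a[smallest]) smallest = l;
        if (r < this.a.length && this.a[r] < this.a[smallest]) smallest = r;
        if (smallest === i) break;
        [this.a[i], this.a[smallest]] = [this.a[smallest], this.a[i]];
        i = smallest;
      }
    }
    return min;
  }
}
// const h = new MinHeap(); h.insert(5); h.insert(1); h.extractMin() => 1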
Fibonacci is a classic example of how the same problem can have wildly different complexities based on approach:
// Naive recursive: O(2^n) time, O(n) space
function fibNaive(n) {
  if (n <= 1) return n;
  return fibNaive(n-1) + fibNaive(n-2); // exponential branching
}
// Memoization / DP: O(n) time, O(n) space
function fibDP(n, memo = {}) {
  if (n in memo) return memo[n];
  if (n <= 1) return n;
  return memo[n] = fibDP(n-1, memo) + fibDP(n-2, memo);
}
// Bottom-up DP: O(n) time, O(1) space
function fibOptimal(n) {
  let a = 0, b = 1;
  for (let i = 0; i < n; i++) [a, b] = [b, a + b];
  return a;
}
// Matrix exponentiation: O(log n) time, O(log n) stack space
// [[1,1],[1,0]]^n gives [[fib(n+1), fib(n)], [fib(n), fib(n-1)]]
// Input: n=10 => Output: 55 (all approaches)
// Input: n=50 => Output: 12586269025
Each approach is better by orders of magnitude: O(2ⁿ) → O(n) → O(1) space → O(log n) time.
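A sketch of the matrix-exponentiation idea mentioned above, written iteratively with repeated squaring (so the extra space is O(1) here; the names fibMatrix and matMul are illustrative):
function matMul(A, B) {                      // 2x2 matrix product
  return [
    [A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
    [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]],
  ];
}
function fibMatrix(n) {
  let result = [[1, 0], [0, 1]];             // identity matrix
  let base = [[1, 1], [1, 0]];
  while (n > 0) {                            // O(log n) squarings
    if (n & 1) result = matMul(result, base);
    base = matMul(base, base);
    n >>= 1;
  }
  return result[0][1];                       // [[F(n+1), F(n)], [F(n), F(n-1)]]
}
// fibMatrix(10) => 55, fibMatrix(50) => 12586269025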
Often you can reduce time complexity by using more space (precomputation/caching), and vice versa.
// Problem: check if two elements in array sum to target
// O(n^2) time, O(1) space — brute force
function hasPairBrute(arr, target) {
  for (let i = 0; i < arr.length; i++)
    for (let j = i + 1; j < arr.length; j++)
      if (arr[i] + arr[j] === target) return true;
  return false;
}
// O(n) time, O(n) space — hash set tradeoff
function hasPairFast(arr, target) {
  const seen = new Set();
  for (const x of arr) {
    if (seen.has(target - x)) return true;
    seen.add(x);
  }
  return false;
}
// Input: [2,7,11,15], target=9 => true (2+7)
// Input: [1,3,5,7], target=4 => true (1+3)
// Input: [1,3,5,7], target=2 => false
Other examples: lookup tables, memoization, prefix sums, precomputed hashes — all trade memory for speed.
P = problems solvable in polynomial time O(n^k). NP = problems where a solution can be verified in polynomial time. P vs NP asks: if we can quickly verify a solution, can we also quickly find one?
// P problems (efficient solutions exist):
// - Sorting: O(n log n)
// - Shortest path (Dijkstra): O(E log V)
// - Matrix multiplication: O(n^3) or better
// NP-hard problems (no polynomial-time algorithm known):
// - Traveling Salesman Problem (TSP)
// - 0/1 Knapsack (pseudo-polynomial O(nW) but NP-hard)
// - Subset Sum
// - Graph Coloring
// - SAT (Boolean satisfiability)
// NP-Complete: hardest problems in NP (NP-hard AND in NP)
// If any NP-complete problem has a polynomial solution, P = NP
// Practical implications:
// TSP with n=20: brute force is O(n * n!) — infeasible
// Use approximation algorithms or heuristics (greedy, genetic algorithms)
For NP-hard problems, we rely on approximation algorithms, heuristics, or exponential algorithms for small inputs.
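To illustrate the verify-vs-find gap, here is a brute-force Subset Sum sketch: finding a solution tries up to 2^n subsets, while verifying a proposed subset is a single O(n) sum (the name subsetSumExists is illustrative).
function subsetSumExists(nums, target) {       // O(2^n): include/exclude each element
  function helper(i, remaining) {
    if (remaining === 0) return true;
    if (i === nums.length) return false;
    return helper(i + 1, remaining - nums[i])  // include nums[i]
        || helper(i + 1, remaining);           // exclude nums[i]
  }
  return helper(0, target);
}
// subsetSumExists([3, 34, 4, 12, 5, 2], 9) => true (4 + 5)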
Multiply the iterations of each nested loop. Independent loops add; nested loops multiply.
// Single loop: O(n)
for (let i = 0; i < n; i++) { /* O(1) work */ }
// Nested loops: O(n^2)
for (let i = 0; i < n; i++)
  for (let j = 0; j < n; j++) { /* O(1) */ }
// Nested but inner depends on outer: O(n^2)
for (let i = 0; i < n; i++)
  for (let j = 0; j < i; j++) { /* 0+1+2+...+(n-1) = n(n-1)/2 = O(n^2) */ }
// Logarithmic inner loop: O(n log n)
for (let i = 0; i < n; i++)
  for (let j = 1; j < n; j *= 2) { /* O(log n) inner */ }
// Triple nested: O(n^3)
for (let i = 0; i < n; i++)
  for (let j = 0; j < n; j++)
    for (let k = 0; k < n; k++) { /* O(1) */ }
// Two separate loops: O(n + m)
for (let i = 0; i < n; i++) { /* ... */ }
for (let j = 0; j < m; j++) { /* ... */ }
General rule: Drop lower-order terms and constant factors. O(3n² + 2n + 5) = O(n²).
BFS uses a queue that holds up to one entire level at a time. DFS uses a stack (or recursion) proportional to depth.
// Tree example with n nodes, height h, branching factor b
// BFS space complexity: O(max width of tree)
// Worst case (balanced tree): O(n/2) = O(n) — last level has ~n/2 nodes
// Best case (skewed tree): O(1)
// DFS space complexity: O(h) — O(log n) balanced, O(n) skewed
// Example: perfectly balanced BST with n=1024
// BFS queues up to 512 nodes (last level)
// DFS stack depth = 10 (height = log2(1024))
// Graph BFS: O(V) for visited set
// Graph DFS: O(V) for visited set + O(h) for recursion stack
// Summary:
// Narrow tall tree: BFS better (smaller max width)
// Wide shallow tree: DFS better (smaller max depth)
In practice, BFS is preferred for shortest path problems; DFS for exhaustive search, cycle detection, and topological sort.
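A small sketch of both traversals on a binary tree (nodes assumed to be plain { value, left, right } objects) shows where the space goes:
function bfs(root) {
  const queue = [root];                 // holds up to one whole level
  const visited = [];
  while (queue.length > 0) {
    const node = queue.shift();         // note: Array#shift is O(n); use a real queue at scale
    if (!node) continue;
    visited.push(node.value);
    queue.push(node.left, node.right);
  }
  return visited;
}
function dfs(node, visited = []) {      // call-stack depth = tree height
  if (!node) return visited;
  visited.push(node.value);
  dfs(node.left, visited);
  dfs(node.right, visited);
  return visited;
}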
Comparison of standard sorting algorithms by time and space:
// Algorithm | Best | Average | Worst | Space
// --------------|------------|------------|-----------|-------
// Bubble Sort | O(n) | O(n^2) | O(n^2) | O(1)
// Selection Sort| O(n^2) | O(n^2) | O(n^2) | O(1)
// Insertion Sort| O(n) | O(n^2) | O(n^2) | O(1)
// Merge Sort | O(n log n) | O(n log n) | O(n log n)| O(n)
// Quick Sort | O(n log n) | O(n log n) | O(n^2) | O(log n)
// Heap Sort | O(n log n) | O(n log n) | O(n log n)| O(1)
// Counting Sort | O(n + k) | O(n + k) | O(n + k) | O(k)
// Radix Sort | O(d(n+k)) | O(d(n+k)) | O(d(n+k)) | O(n+k)
// Tim Sort | O(n) | O(n log n) | O(n log n)| O(n)
// Key facts:
// - Insertion sort best for small or nearly-sorted arrays
// - Quick sort fastest in practice (cache-friendly), but unstable
// - Merge sort stable BUT requires extra O(n) space
// - Counting/Radix only work for integers with bounded range
V8 (the engine in Chrome and Node.js) implements Array.prototype.sort with TimSort, a hybrid of merge sort and insertion sort; the ECMAScript spec requires the sort to be stable but does not mandate a particular algorithm.
Tail call optimization (TCO) occurs when a recursive call is the last operation in a function, allowing the engine to reuse the current stack frame instead of adding a new one.
// Normal recursion: O(n) stack space (NOT tail recursive)
function factNormal(n) {
  if (n === 0) return 1;
  return n * factNormal(n - 1); // the multiplication must wait for the result
}
// Tail recursion: O(1) stack space WITH TCO
function factTail(n, acc = 1) {
  if (n === 0) return acc;
  return factTail(n - 1, n * acc); // last call — no pending work
}
// factTail(5) → factTail(4,5) → factTail(3,20) → factTail(2,60) → ...
// Fibonacci tail recursive:
function fibTail(n, a = 0, b = 1) {
  if (n === 0) return a;
  return fibTail(n - 1, b, a + b); // tail call
}
// Input: fibTail(10) => Output: 55
// Input: fibTail(15) => Output: 610
// JavaScript: TCO is part of ES6 spec but
// only Safari fully implements it.
// Use iterative loops for O(1) stack in most JS environments.
TCO converts O(n) stack space to O(1) — same benefit as rewriting recursion as iteration.
Prefix sums precompute cumulative sums in O(n), then answer range sum queries in O(1). Classic space-time tradeoff.
function buildPrefixSum(arr) {
  const prefix = new Array(arr.length + 1).fill(0);
  for (let i = 0; i < arr.length; i++) {
    prefix[i + 1] = prefix[i] + arr[i];
  }
  return prefix;
}
// prefix[i] = sum of arr[0..i-1]
function rangeSum(prefix, l, r) { // inclusive [l, r]
  return prefix[r + 1] - prefix[l];
}
// Input: arr=[2,3,1,5,4], query (1,3)
// prefix=[0,2,5,6,11,15]
// rangeSum(prefix, 1, 3) = prefix[4] - prefix[1] = 11 - 2 = 9 ✓
// Input: arr=[1,2,3,4,5], query (0,4)
// rangeSum(prefix, 0, 4) = 15 - 0 = 15
// Input: arr=[1,2,3,4,5], query (2,3)
// rangeSum(prefix, 2, 3) = 10 - 3 = 7
Build: O(n) time and space. Query: O(1). Also works for 2D prefix sums (image queries).
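A sketch of the 2D extension mentioned above: prefix[i][j] holds the sum of the i-by-j rectangle anchored at (0,0), and any axis-aligned rectangle sum then takes O(1) (function names are illustrative):
function build2DPrefix(grid) {
  const rows = grid.length, cols = grid[0].length;
  const prefix = Array.from({ length: rows + 1 }, () => new Array(cols + 1).fill(0));
  for (let i = 0; i < rows; i++) {
    for (let j = 0; j < cols; j++) {
      prefix[i + 1][j + 1] =
        grid[i][j] + prefix[i][j + 1] + prefix[i + 1][j] - prefix[i][j];
    }
  }
  return prefix;
}
function rectSum(prefix, r1, c1, r2, c2) {   // inclusive corners (r1,c1)-(r2,c2)
  return prefix[r2 + 1][c2 + 1] - prefix[r1][c2 + 1]
       - prefix[r2 + 1][c1] + prefix[r1][c1];
}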