Logical Reasoning

Recursion Puzzles

23 Questions

Define a base case: when n is 0 (or less), return 0. Otherwise, return n added to the result of the function called with n-1. This reduces the problem step by step until it reaches the base case.

The call stack builds up n+1 frames, each holding a pending addition. When n=0 is reached, the stack unwinds, accumulating 0+1+2+...+n on the way back up.

Formula n*(n+1)/2 gives the same result in O(1) — recursion here is for illustration, not efficiency.

function sumN(n) {
  if (n <= 0) return 0;       // base case
  return n + sumN(n - 1);    // recursive case
}
// sumN(5) => 5+4+3+2+1+0 = 15
// Call stack: sumN(5)->sumN(4)->sumN(3)->sumN(2)->sumN(1)->sumN(0)

Stack depth = n: For large n (like 100000), this causes a stack overflow. Iterative or tail-call optimized versions should be used for large inputs.

Tail call optimization (TCO): pass an accumulator argument to allow optimizing compilers to reuse the stack frame. JS engines rarely implement TCO.
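A minimal sketch of that accumulator pattern (sumNTail is a hypothetical name; in practice most JS engines still grow the stack because they don't implement TCO):

```javascript
// Accumulator-passing version: the recursive call is the last operation,
// so a TCO-capable engine could reuse the stack frame.
function sumNTail(n, acc = 0) {
  if (n <= 0) return acc;           // base case: running total is the answer
  return sumNTail(n - 1, acc + n);  // tail call: nothing pending after it
}
// sumNTail(5) => 15, same result as sumN(5)
```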

Naive recursion: base * power(base, exp-1) repeats n times — O(n). Fast exponentiation cuts this to O(log n) by squaring: if exp is even, compute half = power(base, exp/2), return half*half.

This halves the problem size each time, giving log n recursive calls. The same technique powers cryptography and modular arithmetic.

Handle edge cases: exp=0 returns 1 (any number^0=1), negative exponents return 1/power(base, -exp).

function power(base, exp) {
  if (exp === 0) return 1;
  if (exp < 0) return 1 / power(base, -exp);
  if (exp % 2 === 0) {
    const half = power(base, exp / 2);
    return half * half;          // O(log n)
  }
  return base * power(base, exp - 1); // odd: reduce to even
}
// power(2, 10) => 1024 in just 5 recursive calls (vs 10 with naive recursion)

Fast exponentiation (binary exponentiation): O(log n) calls instead of O(n) — crucial for computing e.g. 2^1000000.

Note: half * half stores the result in a variable — computing power(base, exp/2) twice would negate the O(log n) benefit.

Each recursive call removes one digit by dividing by 10. When the number is less than 10, only one digit remains — that's the base case returning 1.

Handle negative numbers by taking absolute value first. Handle 0 as a special case (0 has 1 digit).

Non-recursive: Math.floor(Math.log10(Math.abs(n))) + 1 works for nonzero n (log10(0) is -Infinity), but recursion nicely illustrates the idea.

function countDigits(n) {
  n = Math.abs(n);
  if (n === 0) return 1;        // edge case: 0 has 1 digit
  if (n < 10) return 1;         // base case: single digit
  return 1 + countDigits(Math.floor(n / 10));
}
// countDigits(12345) => 5
// countDigits(-999)  => 3
// countDigits(0)     => 1

Integer division strips last digit: Math.floor(n/10) removes the rightmost digit each call (e.g., 12345 → 1234 → 123 → 12 → 1).

String shortcut: n.toString().length gives the same result, but recursion builds intuition for the divide-and-reduce pattern.

Take the last character and concatenate the reversed version of everything except that character. When only one (or zero) characters remain, return as-is.

Alternatively: take the first character and append it AFTER the reversed rest. Both approaches build the reversed string character by character.

JavaScript strings are immutable, so each concatenation creates a new string — O(n²) total work. Converting to an array and reversing (or swapping with two pointers) is more efficient.

function reverseStr(s) {
  if (s.length <= 1) return s;
  return s[s.length - 1] + reverseStr(s.slice(0, -1));
}
// reverseStr("hello") => "olleh"
// Call chain: "o" + rev("hell") => "o"+"l"+rev("hel") => ...

// Built-in split/reverse/join (O(n) time):
function reverseStrBuiltin(s) {
  return s.split('').reverse().join('');
}

Recursive string reverse is O(n²) due to string concatenation and slicing at each level — fine for small strings and learning, inefficient for production.

The two-pointer approach (swapping chars from both ends) is O(n) and doesn't create intermediate strings.
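That two-pointer swap might be sketched like this (function name is illustrative; the string is converted to a char array once, since JS strings are immutable):

```javascript
// Two-pointer swap: walk inward from both ends, swapping as you go.
function reverseTwoPointer(s) {
  const chars = s.split('');
  let lo = 0, hi = chars.length - 1;
  while (lo < hi) {
    [chars[lo], chars[hi]] = [chars[hi], chars[lo]]; // swap the ends
    lo++; hi--;
  }
  return chars.join('');
}
// reverseTwoPointer("hello") => "olleh"
```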

A palindrome reads the same forwards and backwards. Recursively: if first and last characters match, the string is a palindrome if and only if the inner substring (without those characters) is also a palindrome.

Base cases: empty string and single character are always palindromes. Early return false when first and last don't match — no need to recurse deeper.

Common pre-processing: lowercase and remove non-alphanumeric characters before checking (e.g., "A man, a plan, a canal: Panama").

function isPalinRec(s) {
  if (s.length <= 1) return true;                // base cases
  if (s[0] !== s[s.length - 1]) return false;    // early exit
  return isPalinRec(s.slice(1, -1));              // recurse inward
}
// isPalinRec("racecar") => true
// isPalinRec("hello")   => false
// isPalinRec("a")       => true

Two-pointer approach avoids slice overhead: pass lo/hi indices instead of slicing strings — O(n) time, O(n) space (stack) vs O(n²) with slicing.
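A minimal index-based sketch of that idea (isPalinIdx is an assumed name):

```javascript
// Same recursion, but lo/hi indices instead of slicing — no intermediate strings.
function isPalinIdx(s, lo = 0, hi = s.length - 1) {
  if (lo >= hi) return true;             // 0 or 1 chars left: palindrome
  if (s[lo] !== s[hi]) return false;     // mismatch: early exit
  return isPalinIdx(s, lo + 1, hi - 1);  // move both pointers inward
}
// isPalinIdx("racecar") => true
```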

For practical use, s === s.split('').reverse().join('') is the clearest one-liner in JavaScript.

Each binary digit represents a power of 2. Process from the least significant digit (rightmost): extract with mod 10, multiply by 2^position, recurse to next digit.

Alternative: process from the most significant digit using string representation — each level doubles the accumulated value and adds the current digit.

parseInt('1010', 2) = 10 — JavaScript has a built-in, but implementing it recursively builds strong fundamentals.

// Process from LSB (using number arithmetic)
function binToDec(bin, pos = 0) {
  if (bin === 0) return 0;
  return (bin % 10) * Math.pow(2, pos) + binToDec(Math.floor(bin / 10), pos + 1);
}
// bin=1101: 1*1 + 0*2 + 1*4 + 1*8 = 13

// Process from MSB (using string)
function binToDecStr(s, i = 0) {
  if (i === s.length) return 0;
  return parseInt(s[i]) * Math.pow(2, s.length - 1 - i) + binToDecStr(s, i + 1);
}
// binToDecStr("1101") => 13

Accumulator approach: scan left to right, updating result = result * 2 + bit at each digit — simpler and avoids Math.pow.
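That left-to-right accumulator can be sketched as follows (binToDecAcc is a hypothetical name):

```javascript
// Each step doubles the running total and adds the next bit — no Math.pow needed.
function binToDecAcc(s, i = 0, acc = 0) {
  if (i === s.length) return acc;                          // all bits consumed
  return binToDecAcc(s, i + 1, acc * 2 + (s[i] === '1' ? 1 : 0));
}
// binToDecAcc("1101") => 13  (acc: 0 → 1 → 3 → 6 → 13)
```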

parseInt with radix 2 is O(n) and handles all edge cases. The recursive version is purely for practice.

Recursion replaces loops — call the function for a smaller input, let it complete, then do your work (or vice versa for reverse order). The call stack acts as the loop counter.

Order matters: call FIRST then print to get 1→N (ascending). Print FIRST then call to get N→1 (descending).

Beyond printing numbers, this pattern applies to traversals, processing sequences, and any "do this N times" operation without explicit loops.

// Print 1 to N (ascending: recurse first, print after)
function print1ToN(n) {
  if (n < 1) return;           // base case
  print1ToN(n - 1);            // go to smallest first
  console.log(n);              // print on the way back up
}
// print1ToN(5) outputs: 1 2 3 4 5

// Print N to 1 (descending: print first, recurse after)
function printNTo1(n) {
  if (n < 1) return;
  console.log(n);              // print first
  printNTo1(n - 1);            // then recurse down
}
// printNTo1(5) outputs: 5 4 3 2 1

Ascending vs descending: the only difference is whether you print before or after the recursive call — a key insight for all recursive output problems.

This also works for generating sequences, building strings in specific orders, and implementing undo/redo-style operations.

The recursive insight: sum of an array = first element + sum of the remaining elements. Keep reducing the array until it's empty (base case returning 0).

This is the pattern underlying many array operations: reduce them to a smaller subproblem by handling one element at a time.

Using arr.slice(1) creates a new array each call — O(n) per call, O(n²) total. Passing an index is much more efficient.

// Slice-based (simple but O(n²))
function sumArray(arr) {
  if (arr.length === 0) return 0;
  return arr[0] + sumArray(arr.slice(1));
}

// Index-based (O(n) time, no extra arrays)
function sumArrayIdx(arr, i = 0) {
  if (i === arr.length) return 0;
  return arr[i] + sumArrayIdx(arr, i + 1);
}
// sumArrayIdx([1,2,3,4,5]) => 15

Always prefer the index-based approach for performance — same recursion depth O(n) but avoids O(n) slice operations at each level.

Array methods: arr.reduce((sum, x) => sum + x, 0) is the idiomatic JavaScript way. Recursion builds understanding of what reduce does internally.

The Euclidean algorithm: GCD(a, b) = GCD(b, a mod b). This works because any common divisor of a and b also divides their remainder (a mod b). Repeat until remainder is 0.

It's one of the oldest algorithms (dating to 300 BCE) and runs in O(log min(a,b)) — very efficient. The number of steps is bounded by the Fibonacci sequence.

LCM(a, b) = a * b / GCD(a, b) — use GCD to compute LCM efficiently.

function gcd(a, b) {
  if (b === 0) return a;      // base case: GCD(a, 0) = a
  return gcd(b, a % b);      // recursive: GCD(a,b) = GCD(b, a%b)
}
// gcd(48, 18) => gcd(18,12) => gcd(12,6) => gcd(6,0) => 6
// gcd(100, 75) => gcd(75,25) => gcd(25,0) => 25

function lcm(a, b) {
  return (a / gcd(a, b)) * b;  // avoid overflow vs a*b/gcd
}

Tail recursive: the recursive call is the last operation — ideal for tail call optimization. Can be easily converted to iteration.

Extended Euclidean algorithm also returns x, y such that ax + by = gcd(a,b) — used in modular inverse and cryptography.
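A possible sketch of that extended algorithm, returning [g, x, y] with a*x + b*y = g (egcd is an illustrative name):

```javascript
// Extended Euclid: recurse like gcd, then back-substitute the coefficients.
function egcd(a, b) {
  if (b === 0) return [a, 1, 0];              // base: a*1 + b*0 = a
  const [g, x, y] = egcd(b, a % b);           // g = b*x + (a % b)*y
  return [g, y, x - Math.floor(a / b) * y];   // rewrite in terms of a and b
}
// egcd(48, 18) => [6, -1, 3], since 48*(-1) + 18*3 = 6
```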

Recursive Fibonacci is elegant but extremely inefficient — it has O(2^n) time complexity because it recalculates the same subproblems many times. For fib(50), this means tens of billions of calls!

Iterative Fibonacci uses just two variables and a loop — O(n) time, O(1) space. Memoized recursion is O(n) time and O(n) space but uses the recursive structure.

This comparison perfectly illustrates why "clever" recursive code can be dangerously slow and why thinking about time complexity matters.

// Recursive: O(2^n) time, O(n) stack space
function fibRec(n) {
  return n <= 1 ? n : fibRec(n-1) + fibRec(n-2);
}
// fibRec(40) makes ~300 million calls!

// Iterative: O(n) time, O(1) space
function fibIter(n) {
  let a = 0, b = 1;
  for (let i = 2; i <= n; i++) [a, b] = [b, a + b];
  return n === 0 ? a : b;
}
// fibIter(40) = 102334155 (instant)

Memoization bridges the gap: add a cache to recursion to get O(n) time with the recursive style.
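One way to add that cache (fibMemo is an assumed name):

```javascript
// Memoized recursion: same recursive shape as fibRec, but each n is computed once.
function fibMemo(n, memo = new Map()) {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n);           // cache hit: O(1)
  const val = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  memo.set(n, val);                              // cache before returning
  return val;
}
// fibMemo(40) => 102334155 in O(n) calls instead of hundreds of millions
```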

Golden ratio (Binet) formula: fib(n) ≈ φⁿ/√5 where φ ≈ 1.618 — but floating-point imprecision makes it unreliable for large n (roughly once fib(n) exceeds the 2⁵³ safe-integer range).

Place N queens on an N×N chessboard so no two queens attack each other. Use backtracking: try placing a queen in each column of the current row; if it's safe, recurse to the next row; if not, backtrack.

Track attacked positions using three sets: columns, left-diagonals (row-col), right-diagonals (row+col).

N-Queens is the classic backtracking problem demonstrating constraint propagation and state-space search.

function solveNQueens(n) {
  const result = [], queens = Array(n).fill(-1);
  const cols = new Set(), d1 = new Set(), d2 = new Set();
  function bt(row) {
    if (row === n) {
      result.push(queens.map((c, r) => '.'.repeat(c) + 'Q' + '.'.repeat(n-c-1)));
      return;
    }
    for (let col = 0; col < n; col++) {
      if (cols.has(col) || d1.has(row-col) || d2.has(row+col)) continue;
      cols.add(col); d1.add(row-col); d2.add(row+col); queens[row] = col;
      bt(row + 1);
      cols.delete(col); d1.delete(row-col); d2.delete(row+col);
    }
  }
  bt(0);
  return result;
}
// solveNQueens(4) => 2 solutions

Diagonal check trick: cells on the same left-diagonal share (row-col), same right-diagonal share (row+col).

N=8 has 92 solutions, N=12 has 14,200. Count grows super-exponentially but backtracking makes it tractable.

For each element, make a binary choice: include it or exclude it. This gives 2^n subsets. Recursively build all subsets by branching at each element.

Alternatively, treat subset index 0 to 2^n-1 as a bitmask where each bit decides include/exclude.
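The bitmask enumeration might look like this (iterative by nature; subsetsBitmask is an illustrative name):

```javascript
// Masks 0..2^n-1: bit i of the mask decides whether nums[i] is included.
function subsetsBitmask(nums) {
  const result = [];
  for (let mask = 0; mask < (1 << nums.length); mask++) {
    const subset = [];
    for (let i = 0; i < nums.length; i++) {
      if (mask & (1 << i)) subset.push(nums[i]);  // bit i set => include nums[i]
    }
    result.push(subset);
  }
  return result;
}
// subsetsBitmask([1,2,3]).length => 8 (2^3 subsets)
```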

Backtracking version: add current subset to results, then try adding each remaining element.

function subsets(nums) {
  const result = [];
  function bt(start, current) {
    result.push([...current]);
    for (let i = start; i < nums.length; i++) {
      current.push(nums[i]);
      bt(i + 1, current);
      current.pop(); // backtrack
    }
  }
  bt(0, []);
  return result;
}
// subsets([1,2,3]) => [[],[1],[1,2],[1,2,3],[1,3],[2],[2,3],[3]]

Push current BEFORE recursing — this captures subsets of each length (empty set at the start).

For duplicate elements: sort first, then skip nums[i] === nums[i-1] when i > start to avoid duplicate subsets.
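A sketch of that sort-and-skip variant (subsetsWithDup is an assumed name):

```javascript
// Duplicate-safe subsets: sort, then at each level skip an element equal to
// the previous one unless it is the first choice at that level.
function subsetsWithDup(nums) {
  nums = [...nums].sort((a, b) => a - b);
  const result = [];
  function bt(start, current) {
    result.push([...current]);
    for (let i = start; i < nums.length; i++) {
      if (i > start && nums[i] === nums[i - 1]) continue; // skip duplicate branch
      current.push(nums[i]);
      bt(i + 1, current);
      current.pop(); // backtrack
    }
  }
  bt(0, []);
  return result;
}
// subsetsWithDup([1,2,2]) => [[],[1],[1,2],[1,2,2],[2],[2,2]]
```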

Fill empty cells (marked '.') one by one. For each cell, try digits 1-9. If a digit is valid (not in the same row, column, or 3×3 box), place it and recurse. If recursion fails, backtrack and try the next digit.

This constraint satisfaction approach prunes the search space heavily — most invalid placements are caught early.

Sudoku is the ideal backtracking practice problem: clear constraints, finite domain, unique solution (usually).

function solveSudoku(board) {
  function isValid(r, c, num) {
    const s = String(num);  // board stores digit characters, so compare strings
    const box = k => board[Math.floor(r/3)*3 + Math.floor(k/3)][Math.floor(c/3)*3 + k%3];
    for (let i = 0; i < 9; i++) {
      if (board[r][i] === s || board[i][c] === s || box(i) === s) return false;
    }
    return true;
  }
  function solve() {
    for (let r = 0; r < 9; r++) for (let c = 0; c < 9; c++) {
      if (board[r][c] !== '.') continue;
      for (let d = 1; d <= 9; d++) {
        if (!isValid(r, c, d)) continue;
        board[r][c] = String(d);
        if (solve()) return true;
        board[r][c] = '.'; // backtrack
      }
      return false; // no valid digit found
    }
    return true; // all cells filled
  }
  solve();
}

Key backtracking pattern: place value, recurse, always restore ('.' back) on failure.

Optimization: use sets for rows/cols/boxes instead of scanning — O(1) validity check instead of O(9) scan per digit trial.

Swap the current character with each character from current position to end, recurse for the next position, then swap back (restore). When position reaches end, save the permutation.

Time O(n! * n) — unavoidable since there are n! permutations each requiring O(n) to record. Space O(n) for recursion depth.

Classic backtracking with the swap-swap-back pattern to generate all n! orderings.

function permutations(str) {
  const result = [], arr = str.split('');
  function bt(start) {
    if (start === arr.length) { result.push(arr.join('')); return; }
    for (let i = start; i < arr.length; i++) {
      [arr[start], arr[i]] = [arr[i], arr[start]]; // swap
      bt(start + 1);
      [arr[start], arr[i]] = [arr[i], arr[start]]; // undo swap (backtrack)
    }
  }
  bt(0);
  return result;
}
// permutations("abc") => ["abc","acb","bac","bca","cba","cab"]

Avoiding duplicates: the simple sort-and-skip rule (skip when i > start && arr[i] === arr[start]) catches adjacent repeats, but since swapping disturbs the sorted order, a per-level Set of values already tried at each position is the more robust approach.

For "aab": without dedup you get 6 permutations, with dedup you get 3 unique: "aab","aba","baa".
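A robust dedup for the swap approach uses a per-level Set of values already tried at each position (uniquePermutations is an illustrative name):

```javascript
// Per-level Set: equal values branch only once at each position `start`.
function uniquePermutations(str) {
  const result = [], arr = str.split('');
  function bt(start) {
    if (start === arr.length) { result.push(arr.join('')); return; }
    const seen = new Set();                      // values placed at `start` so far
    for (let i = start; i < arr.length; i++) {
      if (seen.has(arr[i])) continue;            // this value already tried here
      seen.add(arr[i]);
      [arr[start], arr[i]] = [arr[i], arr[start]]; // swap
      bt(start + 1);
      [arr[start], arr[i]] = [arr[i], arr[start]]; // undo swap (backtrack)
    }
  }
  bt(0);
  return result;
}
// uniquePermutations("aab") => ["aab","aba","baa"]
```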

DFS explores as deep as possible along each branch before backtracking. Recursive implementation uses the call stack naturally — each recursive call goes one level deeper into the graph/tree.

Mark nodes as visited before recursing to prevent infinite loops in graphs with cycles.

DFS is used for: topological sort, cycle detection, strongly connected components, maze solving, and graph connectivity.

// DFS on adjacency list graph
function dfs(graph, start) {
  const visited = new Set(), result = [];
  function explore(node) {
    if (visited.has(node)) return;
    visited.add(node);
    result.push(node);
    for (const neighbor of graph[node] || []) explore(neighbor);
  }
  explore(start);
  return result;
}
// graph = {0:[1,2], 1:[0,3], 2:[0], 3:[1]}
// dfs(graph, 0) => [0, 1, 3, 2] (order follows the adjacency lists)

DFS visits nodes depth-first — goes as deep as possible along one path before exploring siblings.

For trees (no cycles), the visited set is unnecessary. For graphs with cycles, it's essential to prevent infinite recursion.

Merge sort: divide the array in half, recursively sort each half, then merge the two sorted halves. The merge step is key — two sorted arrays can be merged in O(n) using two pointers.

Guaranteed O(n log n) time in all cases (unlike QuickSort which is O(n²) worst case). Stable sort — maintains relative order of equal elements.

Excellent for linked lists and external sorting (when data doesn't fit in memory).

function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  const merged = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length)
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  return [...merged, ...left.slice(i), ...right.slice(j)];
}
// mergeSort([5,2,8,1,9,3]) => [1,2,3,5,8,9]

Guaranteed O(n log n) — log n levels of recursion, each level doing O(n) total work in merge.

Space O(n) for the merged arrays (can't be done in O(1) without complex in-place merging tricks).

The depth (height) of a binary tree is the number of nodes along the longest path from root to leaf. Recursively: 1 + max(depth(left), depth(right)). Empty tree has depth 0.

This is one of the simplest and most elegant recursive tree algorithms — perfectly illustrates how tree problems decompose into subproblems.

Used in: checking balanced BSTs, calculating tree diameter, and other problems that decompose over subtree heights.

function maxDepth(root) {
  if (!root) return 0;
  return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));
}
// Balanced tree [1,[2,[4,null,null],[5,null,null]],[3,[6,null,null],null]]:
// maxDepth => 3
// Skewed tree (like linked list): maxDepth => n

Elegant recursive structure: height = 1 (current node) + max(left height, right height).

Iterative version uses BFS with a queue, counting levels. Both are O(n) time and O(n) space.
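A sketch of that level-counting BFS, assuming nodes shaped {val, left, right} (maxDepthBFS is an assumed name):

```javascript
// Iterative depth via BFS: process one full level per outer-loop iteration.
function maxDepthBFS(root) {
  if (!root) return 0;
  let depth = 0, level = [root];
  while (level.length > 0) {
    const next = [];
    for (const node of level) {                 // collect the next level
      if (node.left) next.push(node.left);
      if (node.right) next.push(node.right);
    }
    depth++;                                    // one level fully processed
    level = next;
  }
  return depth;
}
```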

Backtracking: at each step, try including the current candidate (can reuse it in unbounded variant), recurse, then exclude it. When remaining sum reaches 0, record the combination.

Pruning: if current element > remaining sum, skip remaining candidates (assuming sorted array).

This covers the "combination sum" family of problems — one of the most common backtracking interview questions.

function combinationSum(candidates, target) {
  candidates.sort((a, b) => a - b);
  const result = [];
  function bt(start, remaining, path) {
    if (remaining === 0) { result.push([...path]); return; }
    for (let i = start; i < candidates.length; i++) {
      if (candidates[i] > remaining) break; // pruning
      path.push(candidates[i]);
      bt(i, remaining - candidates[i], path); // i (not i+1) allows reuse
      path.pop(); // backtrack
    }
  }
  bt(0, target, []);
  return result;
}
// combinationSum([2,3,6,7], 7) => [[2,2,3],[7]]

Pass i (not i+1) to allow reusing the same element. Pass i+1 for combinations without repetition.

For unique combinations (each number used exactly once), sort and add if (i > start && candidates[i] === candidates[i-1]) continue.

Top-down DP via memoization: recursively try each coin, subtract from amount, memoize results to avoid redundant work. Cache maps amount → min coins.

Without memoization, complexity is O(coins^amount) — exponential. WITH memoization it's O(amount * coins) — polynomial.

This exemplifies the difference between pure recursion and dynamic programming (memoized recursion).

function coinChangeMemo(coins, amount) {
  const memo = new Map();
  function dp(remaining) {
    if (remaining === 0) return 0;
    if (remaining < 0) return -1;
    if (memo.has(remaining)) return memo.get(remaining);
    let min = Infinity;
    for (const coin of coins) {
      const result = dp(remaining - coin);
      if (result !== -1) min = Math.min(min, result + 1);
    }
    const ans = min === Infinity ? -1 : min;
    memo.set(remaining, ans);
    return ans;
  }
  return dp(amount);
}
// coinChangeMemo([1,5,11], 15) => 3 (5+5+5; greedy 11+1+1+1+1 would need 5 coins)

memo.set() before return caches results so identical subproblems aren't solved more than once.

Top-down (memoized recursion) and bottom-up DP produce the same result. Bottom-up often has better cache performance.
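The bottom-up counterpart might look like this (coinChangeBottomUp is an illustrative name):

```javascript
// Bottom-up DP: dp[a] = min coins to make amount a, built from 0 upward.
function coinChangeBottomUp(coins, amount) {
  const dp = new Array(amount + 1).fill(Infinity);
  dp[0] = 0;                                    // zero coins make amount 0
  for (let a = 1; a <= amount; a++) {
    for (const coin of coins) {
      if (coin <= a && dp[a - coin] + 1 < dp[a]) dp[a] = dp[a - coin] + 1;
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}
// coinChangeBottomUp([1,5,11], 15) => 3  (5+5+5)
```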

Check each element: if it's an array, recursively flatten it; if not, add it to the result. This handles arbitrary nesting depth.

ES2019+ has arr.flat(Infinity) built in for this. The recursive implementation is great for understanding recursion and type checking.

Handle the base case carefully: check if each element is an array (Array.isArray) before recursing.

function flatten(arr) {
  return arr.reduce((result, item) => {
    return result.concat(Array.isArray(item) ? flatten(item) : item);
  }, []);
}
// flatten([1,[2,[3,[4]],5],6]) => [1,2,3,4,5,6]

// With depth limit:
function flattenDepth(arr, depth = Infinity) {
  if (depth === 0) return arr.slice();
  return arr.reduce((res, x) =>
    res.concat(Array.isArray(x) ? flattenDepth(x, depth-1) : x), []);
}
// flattenDepth([1,[2,[3,[4]],5],6], 1) => [1,2,[3,[4]],5,6]  (one level only)

Array.isArray(item) is the correct check — typeof would return "object" for both arrays and plain objects.

For performance on very deep nesting, iterative approaches using a stack avoid potential call stack overflow.
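One stack-based sketch (flattenIter is an assumed name):

```javascript
// Explicit stack instead of recursion: nesting depth can't overflow the call stack.
function flattenIter(arr) {
  const result = [], stack = [...arr];
  while (stack.length > 0) {
    const item = stack.pop();
    if (Array.isArray(item)) stack.push(...item); // unpack one level back onto stack
    else result.push(item);
  }
  return result.reverse(); // pop order is back-to-front, so restore original order
}
// flattenIter([1,[2,[3,[4]],5],6]) => [1,2,3,4,5,6]
```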

A balanced binary tree has no node where left and right subtree heights differ by more than 1. Efficient check: return the height if balanced, -1 (sentinel) if not.

Naive approach: check balance at each node separately — O(n log n). Optimized: compute height and check balance in one pass — O(n).

Useful for AVL trees and checking if a BST needs rebalancing after modifications.

function isBalanced(root) {
  function height(node) {
    if (!node) return 0;
    const lh = height(node.left);
    const rh = height(node.right);
    if (lh === -1 || rh === -1) return -1; // propagate imbalance
    if (Math.abs(lh - rh) > 1) return -1; // this node is unbalanced
    return 1 + Math.max(lh, rh);
  }
  return height(root) !== -1;
}
// Balanced [1,2,3,4,5] => true
// Skewed [1,2,null,3,null,4] => false

Return -1 as sentinel for imbalance — once detected, it propagates up through all ancestor calls immediately.

Using -1 as sentinel avoids needing a separate boolean flag, keeping the function clean and single-purpose.

An IPv4 address has 4 parts, each a number from 0-255. Backtrack: try placing dots after 1, 2, or 3 digits from each position. Validate each segment (no leading zeros except "0", value ≤ 255).

Exactly 3 dots must be placed. When 4 segments exist and all digits are used, save the IP.

A classic backtracking problem with string segmentation and validation.

function restoreIpAddresses(s) {
  const result = [];
  function bt(start, parts) {
    if (parts.length === 4) {
      if (start === s.length) result.push(parts.join('.'));
      return;
    }
    for (let len = 1; len <= 3; len++) {
      if (start + len > s.length) break;
      const seg = s.slice(start, start + len);
      if (seg.length > 1 && seg[0] === '0') break; // no leading zeros
      if (parseInt(seg) > 255) break;
      parts.push(seg);
      bt(start + len, parts);
      parts.pop();
    }
  }
  bt(0, []);
  return result;
}
// restoreIpAddresses("25525511135") => ["255.255.11.135","255.255.111.35"]

Early termination: break (not continue) when segment exceeds 255 — since we're iterating lengths in order (1,2,3), longer segments will only be larger.

Valid IP strings have 4-12 characters (1.1.1.1 to 255.255.255.255). Can prune upfront if string length is outside this range.

Find a path from (0,0) to (n-1,n-1) in a grid where 1 = passable cell, 0 = wall. Backtrack: try each direction, mark cell as visited, recurse, unmark if recursion fails.

Can provide all paths (backtracking) or just check if one exists (simpler DFS).

This is the foundation of maze-solving algorithms used in games, robotics, and network routing.

function ratMaze(maze, n) {
  const sol = Array.from({length: n}, () => Array(n).fill(0));
  const dirs = [[0,1],[1,0],[0,-1],[-1,0]];
  function solve(r, c) {
    if (r < 0 || r >= n || c < 0 || c >= n) return false;
    if (maze[r][c] === 0 || sol[r][c] === 1) return false; // wall or already on path
    sol[r][c] = 1; // mark path
    if (r === n-1 && c === n-1) return true; // goal reached (checked after wall test)
    for (const [dr, dc] of dirs) if (solve(r+dr, c+dc)) return true;
    sol[r][c] = 0; // backtrack (unmark)
    return false;
  }
  return solve(0, 0) ? sol : null;
}
// Returns solution matrix or null if no path exists

sol[r][c] = 1 marks, sol[r][c] = 0 unmarks — the classic backtracking "mark and unmark" pattern.

Check sol[r][c] === 1 to avoid revisiting cells already in the current path (prevents cycles).