Logical Reasoning

Mathematical Puzzles

19 Questions

The Water Jug Problem is a classic puzzle: given two unmarked jugs of capacity 3L and 5L, measure exactly 4L using only fill, empty, and pour-between operations.

Steps: fill 5L → pour into 3L (2L left in 5L) → empty 3L → pour 2L into 3L → fill 5L again → pour 1L into 3L (fills it) → exactly 4L remains in 5L.

This models BFS on state space where each state is (amount_in_jug1, amount_in_jug2). Any amount divisible by GCD(3,5)=1 is achievable.

// State transitions from (0,0):
// Fill 5L:          (0,0) => (0,5)
// Pour 5L into 3L:  (0,5) => (3,2)
// Empty 3L:         (3,2) => (0,2)
// Pour into 3L:     (0,2) => (2,0)
// Fill 5L:          (2,0) => (2,5)
// Pour 5L into 3L:  (2,5) => (3,4) target achieved!
// 5L jug now has 4L ✓

BFS approach finds the shortest sequence of operations by exploring all reachable states level by level.

Mathematical insight: a target of k liters is achievable if and only if GCD(3,5) divides k (and k fits in a jug). Since GCD(3,5)=1, every integer amount from 0 to 5L is measurable.
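The BFS described above can be sketched directly; this is a minimal illustration (the function name and state encoding are my own), assuming the 3L/5L capacities from the puzzle:

```javascript
// BFS over (amount_in_3L, amount_in_5L) states; returns the minimum
// number of operations to get `target` liters into either jug.
function minJugSteps(target) {
  const CAP = [3, 5];
  const visited = new Set(['0,0']);
  let frontier = [[0, 0]];
  let steps = 0;
  while (frontier.length > 0) {
    const next = [];
    for (const [a, b] of frontier) {
      if (a === target || b === target) return steps;
      const moves = [
        [CAP[0], b], [a, CAP[1]],                 // fill either jug
        [0, b], [a, 0],                           // empty either jug
        // pour 3L jug into 5L jug:
        [Math.max(0, a - (CAP[1] - b)), Math.min(CAP[1], b + a)],
        // pour 5L jug into 3L jug:
        [Math.min(CAP[0], a + b), Math.max(0, b - (CAP[0] - a))],
      ];
      for (const [x, y] of moves) {
        const key = x + ',' + y;
        if (!visited.has(key)) { visited.add(key); next.push([x, y]); }
      }
    }
    frontier = next;
    steps++; // one more operation per BFS level
  }
  return -1; // target unreachable
}
// minJugSteps(4) => 6 (the six operations listed above)
```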

Classic DP puzzle: given n eggs and k floors, find the MINIMUM worst-case trials needed to determine the critical floor where eggs first break.

When you drop from floor x: if it breaks, search below with n-1 eggs; if not, search above with n eggs. Minimize worst case over all floor choices.

DP recurrence: dp[eggs][floors] = 1 + min(x from 1..floors) max(dp[eggs-1][x-1], dp[eggs][floors-x]).

function eggDrop(eggs, floors) {
  if (eggs === 1 || floors <= 1) return floors;
  const dp = Array.from({length: eggs+1}, () => Array(floors+1).fill(0));
  for (let f = 1; f <= floors; f++) dp[1][f] = f; // 1 egg: linear
  for (let e = 2; e <= eggs; e++) {
    for (let f = 1; f <= floors; f++) {
      dp[e][f] = Infinity;
      for (let x = 1; x <= f; x++) {
        dp[e][f] = Math.min(dp[e][f], 1 + Math.max(dp[e-1][x-1], dp[e][f-x]));
      }
    }
  }
  return dp[eggs][floors];
}
// eggDrop(2, 10) => 4 (worst case 4 trials)

1 egg: must try every floor from the bottom up (linear, O(k) trials). 2 eggs: triangular number strategy gives O(√k). More eggs approach O(log k).

Famous variant: 2 eggs, 100 floors. The optimal worst case is 14 trials, with the first drop at floor 14: the smallest t with t(t+1)/2 ≥ 100 is t = 14 (roughly √(2·100) ≈ 14.1).
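The triangular-number reasoning can be verified in a few lines (a sketch; the helper name is mine). The returned t is both the worst-case trial count and the optimal first-drop floor:

```javascript
// Smallest t with t*(t+1)/2 >= floors: with 2 eggs, drop first from
// floor t, then t + (t-1), then t + (t-1) + (t-2), and so on.
function twoEggTrials(floors) {
  let t = 0;
  while (t * (t + 1) / 2 < floors) t++;
  return t;
}
// twoEggTrials(100) => 14 (first drop at floor 14, then 27, 39, ...)
// twoEggTrials(10)  => 4  (matches eggDrop(2, 10))
```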

Classic unbounded knapsack DP problem. dp[i] = minimum coins to make amount i. For each coin denomination, update all amounts from coin-value to target.

Greedy (always take largest coin) does NOT work for all denomination sets. DP is the correct general solution.

Time O(amount * coins), Space O(amount). Initialize dp[0]=0, rest as Infinity (unreachable), then build up.

function coinChange(coins, amount) {
  const dp = Array(amount + 1).fill(Infinity);
  dp[0] = 0;
  for (const c of coins) {
    for (let i = c; i <= amount; i++) {
      dp[i] = Math.min(dp[i], dp[i - c] + 1);
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}
// coinChange([1,5,6,9], 11) => 2 (5+6)
// coinChange([2], 3)         => -1 (impossible)
// coinChange([1,2,5], 11)    => 3 (5+5+1)

Bottom-up DP: coin in outer loop enables unbounded use (same coin multiple times). Amount in inner loop from coin upward.

Greedy works ONLY for canonical systems (US coins: 25,10,5,1). For arbitrary coins, always use DP.
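To see greedy fail concretely, here is a hypothetical greedy counterpart (the name and structure are mine) to compare against the DP above:

```javascript
// Greedy: always take as many of the largest remaining coin as fit.
function coinChangeGreedy(coins, amount) {
  const sorted = [...coins].sort((a, b) => b - a); // largest first
  let count = 0;
  for (const c of sorted) {
    count += Math.floor(amount / c);
    amount %= c;
  }
  return amount === 0 ? count : -1;
}
// coinChangeGreedy([1,5,6,9], 11)  => 3 (9+1+1), while DP finds 2 (5+6)
// coinChangeGreedy([25,10,5,1], 41)=> 4 (25+10+5+1), optimal for US coins
```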

Classic Fibonacci DP: to reach stair n, you came from n-1 (1 step) or n-2 (2 steps). Total ways = ways(n-1) + ways(n-2).

Base cases: ways(1) = 1, ways(2) = 2 (either 1+1 or 2). These match shifted Fibonacci numbers.

Only two previous values needed at any point — O(1) space solution using rolling variables.

function climbStairs(n) {
  if (n <= 2) return n;
  let a = 1, b = 2;
  for (let i = 3; i <= n; i++) [a, b] = [b, a + b];
  return b;
}
// climbStairs(1) => 1
// climbStairs(2) => 2  (1+1 or 2)
// climbStairs(4) => 5  (five sequences)
// climbStairs(5) => 8

Fibonacci connection: climbStairs(n) = Fibonacci(n+1). The counts (1,2,3,5,8,13...) are Fibonacci numbers shifted by one position.

Generalization: with steps 1,2,3... the recurrence is f(n) = f(n-1)+f(n-2)+...+f(n-k) for k step sizes (tribonacci if k=3).
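That generalization can be sketched as one function parameterized by k (my own naming; a tabulated version rather than rolling variables, for clarity):

```javascript
// Number of ways to climb n stairs taking steps of size 1..k.
function climbStairsK(n, k) {
  const ways = [1];                // ways[0] = 1: one way to stay at ground
  for (let i = 1; i <= n; i++) {
    let total = 0;
    for (let s = 1; s <= k && s <= i; s++) total += ways[i - s];
    ways.push(total);              // sum of the last k entries
  }
  return ways[n];
}
// climbStairsK(4, 2) => 5  (matches climbStairs(4))
// climbStairsK(4, 3) => 7  (tribonacci variant)
```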

Tower of Hanoi: move n disks from source to target using auxiliary peg. Never place larger disk on smaller. Recurrence: T(n) = 2*T(n-1) + 1 (move n-1 disks twice, plus 1 for largest). T(n) = 2^n - 1.

The recursive solution is elegant: move top n-1 disks to auxiliary, move largest to target, move n-1 back from auxiliary to target.

This demonstrates how recursion can elegantly express an algorithm that is considerably more awkward to write iteratively.

function hanoiMoves(n) {
  return Math.pow(2, n) - 1;
}
// hanoiMoves(3)  => 7
// hanoiMoves(10) => 1023

function hanoi(n, from, to, aux) {
  if (n === 0) return;
  hanoi(n-1, from, aux, to); // move top n-1 to aux
  console.log('Move disk', n, from, '->', to);
  hanoi(n-1, aux, to, from); // move n-1 from aux to target
}
// hanoi(3, 'A', 'C', 'B') prints 7 moves

T(n) = 2^n - 1: each additional disk doubles the move count and adds one, so the total work grows exponentially with n.

For 64 disks: 2^64-1 ≈ 18.4 quintillion moves. At 1 move/second = ~585 billion years — longer than the universe's age.

N people in a circle, numbered 1 to N. Count k people clockwise, eliminate every k-th person. Continue until one survivor remains. Find the survivor's position.

Mathematical recurrence (0-indexed): J(1,k)=0; J(n,k)=(J(n-1,k)+k)%n. Add 1 at end to get 1-indexed position.

Historically linked to the historian Flavius Josephus, who allegedly used this calculation to be the last survivor of a group of 41 people, counting every third person (k=3).

function josephus(n, k) {
  let pos = 0; // 0-indexed position of survivor
  for (let i = 2; i <= n; i++) pos = (pos + k) % i;
  return pos + 1; // convert to 1-indexed
}
// josephus(7, 2)  => 7
// josephus(10, 3) => 4
// josephus(6, 2)  => 5

O(n) time, O(1) space using the recurrence. Far better than O(n*k) brute force simulation with a circular array.

For k=2, there's a binary trick: J(n) = 2L + 1 where n = 2^m + L and 0 ≤ L < 2^m (shift highest bit left, set LSB).
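The k=2 closed form as code (a sketch; the function name is mine, and Math.clz32 is used to find the highest set bit):

```javascript
// k=2 closed form: write n = 2^m + L with 0 <= L < 2^m; survivor = 2L + 1.
function josephus2(n) {
  const highest = 1 << (31 - Math.clz32(n)); // largest power of 2 <= n
  return 2 * (n - highest) + 1;
}
// josephus2(7)  => 7  (matches josephus(7, 2))
// josephus2(10) => 5
// josephus2(6)  => 5  (matches josephus(6, 2))
```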

A magic square of order n has every row, column, and both main diagonals sum to the same value: the magic constant = n*(n²+1)/2.

Complete validation also verifies the grid contains each integer 1 to n² exactly once.

Famous magic squares: the 3×3 Lo Shu square (magic constant = 15) and Albrecht Dürer's 4×4 square in Melencolia I (magic constant = 34).

function isMagicSquare(m) {
  const n = m.length;
  const target = n * (n * n + 1) / 2;
  // each integer 1..n² must appear exactly once
  const seen = new Set(m.flat());
  if (seen.size !== n * n) return false;
  for (let v = 1; v <= n * n; v++) if (!seen.has(v)) return false;
  for (let i = 0; i < n; i++) {
    if (m[i].reduce((a, b) => a + b, 0) !== target) return false;  // row i
    if (m.reduce((s, r) => s + r[i], 0) !== target) return false;  // column i
  }
  if (m.reduce((s, r, i) => s + r[i], 0) !== target) return false;     // main diagonal
  if (m.reduce((s, r, i) => s + r[n-1-i], 0) !== target) return false; // anti-diagonal
  return true;
}
// 3x3: [[2,7,6],[9,5,1],[4,3,8]] => true (all sums = 15)

Magic constant for n×n = n*(n²+1)/2: for 3×3 = 15, 4×4 = 34, 5×5 = 65.

Simple construction methods exist for odd orders (the Siamese method) and doubly-even orders (n divisible by 4). Singly-even orders (6×6, 10×10) are still constructible but require more involved algorithms, such as the LUX method.
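For the odd-order case, the classic Siamese method is short enough to sketch (the function name is mine; it assumes n is odd):

```javascript
// Siamese method: place 1 in the middle of the top row, then keep moving
// up-right (with wraparound); if that cell is occupied, move down instead.
function oddMagicSquare(n) {
  const m = Array.from({length: n}, () => Array(n).fill(0));
  let row = 0, col = Math.floor(n / 2);
  for (let v = 1; v <= n * n; v++) {
    m[row][col] = v;
    const r = (row - 1 + n) % n, c = (col + 1) % n; // up-right with wraparound
    if (m[r][c] !== 0) row = (row + 1) % n;         // occupied: step down
    else { row = r; col = c; }
  }
  return m;
}
// oddMagicSquare(3) => [[8,1,6],[3,5,7],[4,9,2]] (all sums = 15)
```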

In a room of just 23 people, the probability that two share a birthday is ~50.7%. This is far higher than intuition suggests because we're asking about ANY pair, not a specific pair.

Calculate P(no shared birthday) = 365/365 × 364/365 × ... × 343/365. P(shared) = 1 - P(no shared).

With 23 people there are C(23,2) = 253 possible pairs, each with roughly a 1/365 chance of matching. The pairs are not independent, but they are nearly so, and together they push the probability past 50%.

function birthdayProbability(n) {
  let prob = 1.0;
  for (let i = 0; i < n; i++) prob *= (365 - i) / 365;
  return 1 - prob;
}
// birthdayProbability(23) => 0.507 (50.7%)
// birthdayProbability(50) => 0.970
// birthdayProbability(70) => 0.999

Key insight: 253 pairs each with ~0.27% collision probability compound to >50% overall. Shared birthday is "pairwise" not "with you".
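That pairwise intuition corresponds to the standard approximation P ≈ 1 - e^(-C(n,2)/365), a sketch of which (function name mine) closely tracks the exact product:

```javascript
// Approximate collision probability: treat the C(n,2) pairs as
// (nearly) independent events with probability 1/365 each.
function birthdayApprox(n) {
  const pairs = n * (n - 1) / 2;
  return 1 - Math.exp(-pairs / 365);
}
// birthdayApprox(23) => ~0.500 (close to the exact 0.507)
```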

Practical application in CS: hash collision probability, probability of UUID clash in distributed systems (birthday attack in cryptography).

Game show puzzle: 3 doors, one hides a car, two hide goats. You pick a door. The host (who knows all) opens a different door revealing a goat. Should you switch your pick?

YES — ALWAYS switch. Switching wins 2/3 of the time. Your original pick had only 1/3 probability. The host's action concentrates the other 2/3 probability onto the remaining door.

When you initially picked wrong (2/3 chance), the host is forced to reveal the other goat, leaving the car in the remaining door — switching wins. When right (1/3 chance), switching loses.

// Verification by simulation:
function montySimulation(trials) {
  let switchWins = 0;
  for (let i = 0; i < trials; i++) {
    const car = Math.floor(Math.random() * 3);
    const pick = Math.floor(Math.random() * 3);
    // Switching wins whenever initial pick was wrong:
    if (pick !== car) switchWins++;
  }
  return switchWins / trials; // approaches 0.667
}
// montySimulation(100000) => ~0.667

Switching wins when initial guess was wrong (2/3 of time). Staying wins when initial guess was right (1/3 of time).

Counter-intuitive because we think two remaining doors means 50/50 — but the host's action is not random (they always show a goat), which gives information.

When n people each shake hands once with every other person, we need to count the number of unique pairs. Each handshake involves 2 people, and we want combinations (order doesn't matter).

Using combination formula C(n,2) = n*(n-1)/2. This is also the sum of the arithmetic series: (n-1) + (n-2) + ... + 1 = n*(n-1)/2.

This pattern appears in many network/graph problems: number of edges in a complete graph K_n also equals n*(n-1)/2.

function handshakes(n) {
  return n * (n - 1) / 2;
}
// handshakes(2)  => 1
// handshakes(5)  => 10
// handshakes(10) => 45
// handshakes(100)=> 4950

Formula C(n,2) = n*(n-1)/2 counts unique pairs. Grows quadratically — doubling n quadruples handshakes.

Application: in a tournament where every team plays every other team once, the number of games is also n*(n-1)/2.

GCD (Greatest Common Divisor) using Euclidean algorithm: gcd(a,b) = gcd(b, a%b), base case gcd(a,0) = a. LCM (Least Common Multiple) uses the relation: LCM = (a * b) / GCD.

The Euclidean algorithm is one of the oldest known algorithms, dating back to around 300 BC (it appears in Euclid's Elements). It runs in O(log min(a,b)) time.

GCD and LCM are foundational for fraction simplification, scheduling problems, and number theory.

function gcd(a, b) {
  return b === 0 ? a : gcd(b, a % b);
}
function lcm(a, b) {
  return (a / gcd(a, b)) * b; // divide first to keep intermediates small
}
// gcd(48, 18) => 6  (48=6*8, 18=6*3)
// lcm(4, 6)   => 12 (smallest multiple of both 4 and 6)
// gcd(100, 75)=> 25
// lcm(12, 15) => 60

Euclidean algorithm: repeatedly replace larger with (larger mod smaller) until remainder is 0. Final non-zero value is GCD.
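That loop description maps directly to an iterative version (function name mine):

```javascript
// Iterative Euclidean algorithm: replace (a, b) with (b, a mod b)
// until the remainder is 0; the last non-zero value is the GCD.
function gcdIter(a, b) {
  while (b !== 0) [a, b] = [b, a % b];
  return a;
}
// gcdIter(48, 18)  => 6
// gcdIter(100, 75) => 25
```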

Compute LCM as (a/gcd)*b (not a*b/gcd) to avoid potential integer overflow in languages with bounded integers.

Trial division: divide by 2 (remove all even factors), then try odd divisors from 3 up to √n. Any remaining value > 1 is a prime factor itself.

Only need to check up to √n because if n = a*b and a > √n, then b < √n (so b would already have been found).

Prime factorization is unique per the Fundamental Theorem of Arithmetic — every integer > 1 has exactly one factorization.

function primeFactors(n) {
  const factors = [];
  while (n % 2 === 0) { factors.push(2); n /= 2; }
  for (let i = 3; i * i <= n; i += 2) {
    while (n % i === 0) { factors.push(i); n /= i; }
  }
  if (n > 1) factors.push(n); // remaining prime factor
  return factors;
}
// primeFactors(12)  => [2, 2, 3]    (12 = 2² × 3)
// primeFactors(315) => [3, 3, 5, 7] (315 = 3² × 5 × 7)
// primeFactors(13)  => [13]         (prime itself)

Handle 2 separately then only iterate odd numbers (i+=2) — this halves the iterations needed.

After the loop, if n > 1, the remaining n is a prime factor (it couldn't be divided by anything up to √original_n).

Binary search for the square root in range [1, n]. At each step check mid*mid against n. This gives O(log n) time without floating-point sqrt.

Alternative: Newton's method converges faster — x_next = (x + n/x) / 2. Iterate until stable.
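A sketch of Newton's method specialized to integer square roots (names mine); it returns floor(√n), which a perfect-square check can then compare against n:

```javascript
// Integer Newton iteration: next = floor((x + floor(n/x)) / 2),
// stopping once the estimate stops decreasing.
function intSqrt(n) {
  if (n < 2) return n;
  let x = n;
  let next = Math.floor((x + 1) / 2);
  while (next < x) {
    x = next;
    next = Math.floor((x + Math.floor(n / x)) / 2);
  }
  return x; // floor of the square root
}
// intSqrt(16) => 4, intSqrt(17) => 4, intSqrt(25) => 5
// Perfect-square test: intSqrt(n) * intSqrt(n) === n
```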

Other characterizations: perfect squares have an odd number of divisors (the square root pairs with itself). And if you repeatedly subtract consecutive odd numbers (1, 3, 5, ...) from n, you reach exactly 0 if and only if n is a perfect square.

function isPerfectSquare(n) {
  if (n < 1) return false;
  let left = 1, right = n;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const sq = mid * mid;
    if (sq === n) return true;
    if (sq < n) left = mid + 1;
    else right = mid - 1;
  }
  return false;
}
// isPerfectSquare(16) => true  (4²)
// isPerfectSquare(14) => false
// isPerfectSquare(1)  => true  (1²)

Binary search approach: O(log n) time and O(1) space. Avoids floating-point precision issues of Math.sqrt().

The odd-number subtraction trick works because consecutive odds sum to squares: 1, 1+3=4, 4+5=9, 9+7=16...; subtracting them from n therefore lands exactly on 0 precisely when n is a perfect square.
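The trick as code (a sketch, name mine); note it performs O(√n) subtractions, so it is slower than binary search:

```javascript
function isPerfectSquareOdd(n) {
  if (n < 1) return false;
  let odd = 1;
  while (n > 0) { n -= odd; odd += 2; } // subtract 1, 3, 5, ...
  return n === 0; // landed exactly on 0 iff n was a perfect square
}
// isPerfectSquareOdd(16) => true  (16-1-3-5-7 = 0)
// isPerfectSquareOdd(14) => false (14-1-3-5 = 5, then 5-7 = -2)
```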

Pascal's triangle: each row starts and ends with 1, and each interior element is the sum of the two elements above it. Row k contains the binomial coefficients C(k,0), C(k,1), ..., C(k,k).

Build iteratively: start with [1], then generate each row from the previous by summing adjacent elements and padding 1s at ends.

Pascal's triangle encodes many number theory patterns: Fibonacci numbers (diagonal sums), powers of 2 (row sums), Sierpinski triangle (odd numbers).

function pascalTriangle(n) {
  const result = [];
  for (let i = 0; i < n; i++) {
    const row = [1];
    for (let j = 1; j < i; j++) {
      row.push(result[i-1][j-1] + result[i-1][j]);
    }
    if (i > 0) row.push(1);
    result.push(row);
  }
  return result;
}
// pascalTriangle(5):
// [1]
// [1,1]
// [1,2,1]
// [1,3,3,1]
// [1,4,6,4,1]

Row i has i+1 elements. Row sums are powers of 2 (row 0: 2⁰=1, row 1: 2¹=2, row 4: 2⁴=16).

C(n,r) = Pascal's row n, element r. C(5,2) = 10 is at row 5 (0-indexed), position 2.
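A single coefficient can also be computed without building the triangle, walking along one row with C(n,r) = C(n,r-1)·(n-r+1)/r (a sketch, name mine):

```javascript
// O(r) binomial coefficient: each division is exact because the
// product of i consecutive integers is divisible by i!.
function binomial(n, r) {
  let c = 1;
  for (let i = 1; i <= r; i++) c = c * (n - i + 1) / i;
  return c;
}
// binomial(5, 2)  => 10
// binomial(10, 3) => 120
```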

Naive recursive Fibonacci is O(2^n) — exponential! Each subproblem is recomputed many times. Memoization caches results, making it O(n) time and O(n) space.

Bottom-up DP (iterative) achieves O(n) time and O(1) space — even better by using only two variables.

Matrix exponentiation can solve Fibonacci in O(log n) for very large n, but is rarely needed in interviews.
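For completeness, a matrix-exponentiation sketch (names mine): powers of [[1,1],[1,0]] contain Fibonacci numbers, and the power itself is computed by repeated squaring:

```javascript
// [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
function matMul(A, B) {
  return [
    [A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
    [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]],
  ];
}
function fibMatrix(n) {
  let result = [[1, 0], [0, 1]]; // identity
  let base = [[1, 1], [1, 0]];
  let e = n;
  while (e > 0) {                // binary exponentiation on the matrix
    if (e % 2 === 1) result = matMul(result, base);
    base = matMul(base, base);
    e = Math.floor(e / 2);
  }
  return result[0][1];           // F(n)
}
// fibMatrix(10) => 55
// fibMatrix(20) => 6765
```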

// Memoized recursion: O(n) time, O(n) space
function fibMemo(n, memo = {}) {
  if (n <= 1) return n;
  if (n in memo) return memo[n]; // 'in' check is safe even for falsy cached values
  return memo[n] = fibMemo(n-1, memo) + fibMemo(n-2, memo);
}

// Iterative O(1) space:
function fibIterative(n) {
  let a = 0, b = 1;
  for (let i = 2; i <= n; i++) [a, b] = [b, a + b];
  return n === 0 ? a : b;
}
// fibMemo(10)      => 55
// fibIterative(20) => 6765

Memoization = top-down DP: recursive but cache results. Tabulation = bottom-up DP: iterative, even more efficient.

Fibonacci numbers grow exponentially (~φⁿ/√5 where φ≈1.618). fib(50) already exceeds 10 billion.

Extension of the classic stair problem: with 3 possible step sizes (1, 2, 3), each step count is the sum of the previous three: ways(n) = ways(n-1) + ways(n-2) + ways(n-3).

Base cases: ways(0) = 1 (staying at ground), ways(1) = 1, ways(2) = 2. This is the "tribonacci" sequence.

The pattern generalizes: for k step sizes, sum the last k terms.

function climbStairs3(n) {
  if (n <= 1) return 1;
  if (n === 2) return 2;
  let a = 1, b = 1, c = 2;
  for (let i = 3; i <= n; i++) {
    [a, b, c] = [b, c, a + b + c];
  }
  return c;
}
// climbStairs3(3) => 4  (1+1+1, 1+2, 2+1, 3)
// climbStairs3(4) => 7
// climbStairs3(5) => 13

Recurrence: f(n) = f(n-1) + f(n-2) + f(n-3) with base cases f(0)=1, f(1)=1, f(2)=2.

This is "tribonacci" — generalized Fibonacci. Each term is the sum of the previous 3 instead of 2.

Naive approach multiplies n times (O(n)). Fast exponentiation (Exponentiation by Squaring) is O(log n) by halving the exponent at each step.

Key insight: x^n = (x^(n/2))² for even n. For odd n: x^n = x * x^(n-1). This halves the problem each time.

Used in cryptography (modular exponentiation), matrix exponentiation, and competitive programming.

function fastPow(base, exp) {
  if (exp === 0) return 1;
  if (exp < 0) return 1 / fastPow(base, -exp);
  if (exp % 2 === 0) {
    const half = fastPow(base, exp / 2);
    return half * half; // reuse result!
  }
  return base * fastPow(base, exp - 1);
}
// fastPow(2, 10) => 1024 (5 multiplications vs 9 naive)
// fastPow(3, 0)  => 1
// fastPow(2, -3) => 0.125

Reuse half result: const half = fastPow(base, exp/2); return half * half — avoids computing the half-exponent twice.

An iterative version scans the bits of the exponent, one bit per loop iteration, still O(log n) multiplications and without recursion overhead.
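That iterative bit-scanning version might look like this (a sketch, name mine; it assumes a non-negative integer exponent):

```javascript
// Iterative binary exponentiation: for each set bit of the exponent,
// multiply in the corresponding repeated-squared power of the base.
function fastPowIter(base, exp) {
  let result = 1;
  let b = base, e = exp;
  while (e > 0) {
    if (e & 1) result *= b; // this bit is set: multiply in current power
    b *= b;                 // square for the next bit
    e >>= 1;
  }
  return result;
}
// fastPowIter(2, 10) => 1024
// fastPowIter(3, 5)  => 243
```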

Any two distinct, non-parallel lines in the plane intersect in exactly one point. The maximum number of intersections occurs when no two lines are parallel and no three lines meet at a single point.

With n lines, choose any 2 for an intersection: C(n,2) = n*(n-1)/2 maximum intersections possible.

This combinatorics question tests the connection between geometry and combinations.

function maxIntersections(n) {
  return n * (n - 1) / 2;
}
// 1 line   => 0 intersections (no pair)
// 2 lines  => 1 intersection
// 3 lines  => 3 intersections (C(3,2)=3)
// 4 lines  => 6 intersections (C(4,2)=6)
// 10 lines => 45 intersections
// Same formula as handshakes!

Same formula as handshakes C(n,2) = n*(n-1)/2 — every combinatorics problem about pairing n items follows this.

If some lines are parallel or concurrent (3+ through one point), subtract the lost intersections from the maximum: p mutually parallel lines lose C(p,2) intersections, and q concurrent lines collapse C(q,2) intersections into one.

For large factorials like 100!, compute the factorial exactly (as a BigInt or an array of digits), then sum its digits; there is no closed-form shortcut.

The digit sum of a number relates to its remainder when divided by 9 (digital root). But factorial digit sums don't have a simple closed form.

In practice the approach is direct: build the exact factorial, convert it to a string, and sum the digit characters.

function factorialDigitSum(n) {
  let factorial = BigInt(1);
  for (let i = 2; i <= n; i++) factorial *= BigInt(i);
  return factorial.toString().split('').reduce((s, d) => s + Number(d), 0);
}
// factorialDigitSum(10) => 27  (10! = 3628800, 3+6+2+8+8+0+0)
// factorialDigitSum(20) => 54
// factorialDigitSum(100)=> 648

Use BigInt for large factorials — regular numbers lose precision beyond 2^53 (about 15 significant digits).

100! has 158 digits; 1000! has 2568 digits. BigInt handles this, but performance degrades for very large n.