Coding Challenges

24 Questions

FizzBuzz is the classic beginner interview filter: print each number from 1 to n, but replace multiples of 3 with "Fizz", multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".

The key is to check divisibility by BOTH first (15 = 3×5), then 3 alone, then 5 alone. Order matters: check FizzBuzz before Fizz and Buzz.

A clean trick: build the output string by concatenating "Fizz" and "Buzz" conditionally, then use the empty string default.

function fizzBuzz(n) {
  for (let i = 1; i <= n; i++) {
    let out = '';
    if (i % 3 === 0) out += 'Fizz';
    if (i % 5 === 0) out += 'Buzz';
    console.log(out || i); // if out is empty, print number
  }
}
// fizzBuzz(15): 1,2,Fizz,4,Buzz,Fizz,7,8,Fizz,Buzz,11,Fizz,13,14,FizzBuzz

Building the string: append 'Fizz' if %3===0, append 'Buzz' if %5===0. Both conditions checked independently, so 15 gets 'FizzBuzz' automatically.

Common mistakes: an else-if chain that tests %3 before %15 never reaches the "FizzBuzz" case. Checking %15 first works but is unnecessary; the string-building approach handles all cases cleanly.

To reverse an integer mathematically: extract the last digit using n % 10, add it to the reversed number (shift existing digits left by multiplying by 10), then remove the last digit from n using integer division.

Handle negative numbers by preserving the sign separately and working with the absolute value.

This tests digit manipulation without string-conversion tricks, and probes awareness of integer overflow.

function reverseInt(n) {
  let rev = 0, sign = Math.sign(n);
  n = Math.abs(n);
  while (n > 0) {
    rev = rev * 10 + n % 10; // extract last digit and build reverse
    n = Math.floor(n / 10);  // remove last digit
  }
  return rev * sign;
}
// reverseInt(123)  => 321
// reverseInt(-456) => -654
// reverseInt(120)  => 21 (leading zeros dropped)

Handle the sign first — work with Math.abs(n) and multiply result by Math.sign(n) at the end.

In languages with 32-bit integers, check for overflow after each step. In JS, numbers are 64-bit floats so overflow is less of a concern.

Use a stack data structure. Iterate the string: push opening brackets, pop on closing brackets and verify the popped value is the matching opening bracket.

If the stack is empty when trying to pop (more closing than opening) or the popped bracket doesn't match, return false immediately.

At the end, the stack must be empty — any remaining items represent unclosed brackets.

function isValid(s) {
  const stack = [], map = { ')': '(', ']': '[', '}': '{' };
  for (const ch of s) {
    if ('([{'.includes(ch)) {
      stack.push(ch);
    } else if (stack.pop() !== map[ch]) {
      return false; // mismatched or empty stack
    }
  }
  return stack.length === 0; // unclosed brackets check
}
// isValid("()[]{}")  => true
// isValid("([)]")    => false
// isValid("{[]}")    => true

Stack-based O(n) solution handles all three bracket types simultaneously without nested conditions.

Edge cases: empty string (return true), single bracket (return false), all closing brackets (stack.pop() returns undefined, which never equals map[ch]).

Roman numerals use additive notation, but when a smaller value precedes a larger one (like IV=4, IX=9), the smaller is subtracted. This is the key rule.

Build a lookup map of character to value. Iterate the string: if the current value is less than the NEXT value, subtract it; otherwise add it.

The 6 subtractive cases are: IV(4), IX(9), XL(40), XC(90), CD(400), CM(900).

function romanToInt(s) {
  const map = {I:1, V:5, X:10, L:50, C:100, D:500, M:1000};
  let result = 0;
  for (let i = 0; i < s.length; i++) {
    const curr = map[s[i]], next = map[s[i+1]];
    result += (curr < next) ? -curr : curr;
  }
  return result;
}
// romanToInt("III")   => 3
// romanToInt("LVIII") => 58
// romanToInt("MCMXCIV") => 1994

Subtractive rule: if current symbol < next symbol, subtract current; otherwise add it.

The reverse is also a common interview question: integer to Roman numeral — use a greedy approach from largest to smallest value.
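That greedy integer-to-Roman conversion can be sketched as follows (the name intToRoman and the paired value/symbol tables are illustrative):

```javascript
function intToRoman(num) {
  // Greedy: repeatedly subtract the largest value that fits,
  // including the six subtractive pairs (CM, CD, XC, XL, IX, IV).
  const values  = [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1];
  const symbols = ['M','CM','D','CD','C','XC','L','XL','X','IX','V','IV','I'];
  let result = '';
  for (let i = 0; i < values.length; i++) {
    while (num >= values[i]) {
      result += symbols[i];
      num -= values[i];
    }
  }
  return result;
}
// intToRoman(1994) => "MCMXCIV"
```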

The Sieve of Eratosthenes efficiently finds all primes up to n. Start with a boolean array of all true. For each prime p starting at 2, mark all its multiples as composite.

Optimization: run the outer loop only up to √n. Any composite below p² has a prime factor smaller than p and was therefore already marked, which is also why marking starts at p², not 2p.

Time complexity O(n log log n) — one of the most efficient algorithms for prime generation.

function countPrimes(n) {
  const sieve = new Array(n).fill(true);
  sieve[0] = sieve[1] = false;
  for (let i = 2; i * i < n; i++) {
    if (sieve[i]) {
      for (let j = i * i; j < n; j += i) {
        sieve[j] = false; // mark multiples of i as composite
      }
    }
  }
  return sieve.filter(Boolean).length;
}
// countPrimes(10) => 4  (2,3,5,7)
// countPrimes(20) => 8  (2,3,5,7,11,13,17,19)

Start inner loop at i*i (not 2*i) — all smaller multiples of i were already marked by earlier primes.

For each prime p, its first unmarked multiple is p² because p×2, p×3...p×(p-1) were already marked by 2, 3...p-1.

After sorting the array lexicographically, only the FIRST and LAST strings need to be compared — they represent the most different pair, so their common prefix is the common prefix of ALL strings.

Compare character by character while they match. Return the matching portion.

Alternative: horizontal scanning — start with first string as prefix, reduce it by comparing with each subsequent string until it matches.

function longestCommonPrefix(strs) {
  if (!strs.length) return '';
  strs = [...strs].sort(); // lexicographic sort on a copy (avoids mutating the input)
  const first = strs[0], last = strs[strs.length - 1];
  let i = 0;
  while (i < first.length && first[i] === last[i]) i++;
  return first.slice(0, i);
}
// longestCommonPrefix(["flower","flow","flight"]) => "fl"
// longestCommonPrefix(["dog","racecar","car"]) => ""

Sort then compare extremes — O(n log n) for sorting but then O(m) for comparison where m = LCP length.

The horizontal scanning approach is O(S) where S is total characters — better if array is large but sorting overhead is undesirable.
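A sketch of that horizontal scanning approach (the function name is illustrative):

```javascript
function longestCommonPrefixScan(strs) {
  if (!strs.length) return '';
  let prefix = strs[0]; // start with the whole first string as the candidate
  for (const s of strs.slice(1)) {
    // shrink the candidate until the current string starts with it
    while (!s.startsWith(prefix)) {
      prefix = prefix.slice(0, -1);
      if (!prefix) return '';
    }
  }
  return prefix;
}
// longestCommonPrefixScan(["flower","flow","flight"]) => "fl"
```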

Two strings are isomorphic if there exists a one-to-one mapping between their characters. "egg" and "add" are isomorphic (e→a, g→d), but "foo" and "bar" are not (o maps to both a and r).

Use TWO maps: one from s to t, one from t to s. Bidirectional mapping ensures the mapping is truly one-to-one (bijective).

A single map only ensures s→t is consistent but doesn't catch two s-characters mapping to the same t-character ("ab"↔"aa" would pass with one map).

function isIsomorphic(s, t) {
  if (s.length !== t.length) return false;
  const mapS = {}, mapT = {};
  for (let i = 0; i < s.length; i++) {
    const sc = s[i], tc = t[i];
    if (mapS[sc] !== undefined && mapS[sc] !== tc) return false;
    if (mapT[tc] !== undefined && mapT[tc] !== sc) return false;
    mapS[sc] = tc;
    mapT[tc] = sc;
  }
  return true;
}
// isIsomorphic("egg", "add")  => true  (e=a, g=d)
// isIsomorphic("foo", "bar")  => false (o can't map to both a and r)
// isIsomorphic("ab", "aa")   => false (a and b both map to a)

Bidirectional mapping needed to catch cases where multiple source characters map to the same target character.

Alternative: transform both strings to a canonical form (index of first occurrence) and compare: "egg"→"011", "add"→"011" (equal ∴ isomorphic).
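The canonical-form idea can be sketched like this (helper names illustrative); each character is replaced by the index of its first occurrence:

```javascript
function isIsomorphicCanonical(s, t) {
  // map each character to the index where it first appeared
  const canonical = str => {
    const firstSeen = {};
    const out = [];
    for (let i = 0; i < str.length; i++) {
      if (!(str[i] in firstSeen)) firstSeen[str[i]] = i;
      out.push(firstSeen[str[i]]);
    }
    return out.join(','); // separator keeps multi-digit indices from merging
  };
  return s.length === t.length && canonical(s) === canonical(t);
}
// isIsomorphicCanonical("egg", "add") => true (both canonicalize to "0,1,1")
```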

A happy number eventually reaches 1 when repeatedly replaced by the sum of squares of its digits. Unhappy numbers cycle indefinitely (famously always passing through 4).

Use Floyd's cycle detection (fast/slow pointer) or a Set to detect if a cycle exists (unhappy), or if we reach 1 (happy).

For the Set approach: if we see a number we've seen before, it's an unhappy cycle — return false.

function isHappy(n) {
  const sumSquares = n =>
    String(n).split('').reduce((s, d) => s + Number(d) ** 2, 0);
  
  const seen = new Set();
  while (n !== 1) {
    if (seen.has(n)) return false; // cycle detected
    seen.add(n);
    n = sumSquares(n);
  }
  return true;
}
// isHappy(19) => true  (1²+9²=82 → 8²+2²=68 →...=1)
// isHappy(2)  => false (enters cycle)

Unhappy numbers always cycle through 4, so an alternative early exit inside the loop is if (n === 4) return false, with no Set needed.

Floyd's cycle detection: use two pointers (slow moves 1 step, fast moves 2 steps). They meet only if there's a cycle. If fast reaches 1, number is happy.
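A sketch of that Floyd variant, using the same sum-of-squared-digits step (function name illustrative):

```javascript
function isHappyFloyd(n) {
  const sumSquares = m => {
    let s = 0;
    while (m > 0) {
      const d = m % 10;
      s += d * d;
      m = Math.floor(m / 10);
    }
    return s;
  };
  let slow = n, fast = sumSquares(n);
  // fast advances two steps per iteration; the pointers meet inside any cycle
  while (fast !== 1 && slow !== fast) {
    slow = sumSquares(slow);
    fast = sumSquares(sumSquares(fast));
  }
  return fast === 1; // reached 1 => happy; pointers met in a cycle => unhappy
}
// isHappyFloyd(19) => true
```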

Approach 1 (loop): repeatedly divide by 3 and check if the result reaches exactly 1. If remainder is ever non-zero, it's not a power of 3. O(log n) time.

Approach 2 (math): the largest power of 3 in 32-bit int range is 3^19 = 1,162,261,467. Any power of 3 MUST divide this number evenly.

Approach 3: log base 3 check — but floating-point precision issues make this unreliable without rounding.

// Loop approach: O(log n)
function isPowerOfThree(n) {
  if (n < 1) return false;
  while (n % 3 === 0) n /= 3;
  return n === 1;
}

// Math trick: O(1)
function isPowerOfThreeO1(n) {
  return n > 0 && 1162261467 % n === 0; // 3^19 mod n
}
// isPowerOfThree(9)  => true  (3^2)
// isPowerOfThree(27) => true  (3^3)
// isPowerOfThree(45) => false
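Approach 3 can be made reliable by rounding the logarithm to the nearest integer exponent and then verifying exactly by exponentiation (a sketch; name illustrative):

```javascript
function isPowerOfThreeLog(n) {
  if (n < 1) return false;
  // floating-point log may be slightly off; round, then verify exactly
  const p = Math.round(Math.log(n) / Math.log(3));
  return 3 ** p === n;
}
// isPowerOfThreeLog(27) => true, isPowerOfThreeLog(45) => false
```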

The O(1) math trick works specifically for base 3 (a prime number) — the max value in range is divisible ONLY by powers of that prime.

A related O(1) check exists for isPowerOfTwo: n > 0 && (n & (n - 1)) === 0 (bitwise trick: a power of two has exactly one bit set).

A palindrome reads the same forwards and backwards. When ignoring punctuation/spaces, first strip all non-alphanumeric characters and normalize to lowercase before comparing.

The two-pointer approach is memory-efficient: start from both ends and work inward, comparing characters.

This is a classic interview question testing string manipulation and the understanding of palindrome definition.

function isPalindrome(s) {
  let cleaned = s.replace(/[^a-zA-Z0-9]/g, '').toLowerCase();
  return cleaned === cleaned.split('').reverse().join('');
}
// isPalindrome("A man, a plan, a canal: Panama") => true
// isPalindrome("race a car") => false

// Two-pointer O(1) space approach:
function isPalindromePointers(s) {
  let l = 0, r = s.length - 1;
  while (l < r) {
    while (l < r && !/[a-z0-9]/i.test(s[l])) l++;
    while (l < r && !/[a-z0-9]/i.test(s[r])) r--;
    if (s[l].toLowerCase() !== s[r].toLowerCase()) return false;
    l++; r--;
  }
  return true;
}

The regex approach is clean but uses O(n) extra space for the cleaned string.

Two-pointer avoids creating a new string, giving O(1) space complexity while maintaining O(n) time.

The naive approach is O(n²) with nested loops. The optimal solution uses a hash map to store seen values, reducing it to O(n) time.

For each element, check if its complement (target - element) already exists in the map. If yes, return the indices. If no, store the element and its index.

This one-pass hash map approach is the canonical example of trading space for time complexity.

function twoSum(nums, target) {
  const seen = new Map();
  for (let i = 0; i < nums.length; i++) {
    const complement = target - nums[i];
    if (seen.has(complement)) return [seen.get(complement), i];
    seen.set(nums[i], i);
  }
  return []; // no solution
}
// twoSum([2,7,11,15], 9) => [0,1] (2+7=9)
// twoSum([3,2,4], 6)    => [1,2] (2+4=6)

Time O(n), Space O(n) using hash map vs Time O(n²), Space O(1) using nested loops.

For a sorted array, use the two-pointer technique: O(n) time and O(1) space without a hash map.
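That two-pointer variant for a sorted input can be sketched as follows (name illustrative; returns indices into the sorted array):

```javascript
function twoSumSorted(nums, target) {
  let l = 0, r = nums.length - 1;
  while (l < r) {
    const sum = nums[l] + nums[r];
    if (sum === target) return [l, r];
    if (sum < target) l++; // sum too small: move left pointer right
    else r--;              // sum too large: move right pointer left
  }
  return []; // no solution
}
// twoSumSorted([2,7,11,15], 9) => [0,1]
```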

Floyd's cycle detection (tortoise and hare) algorithm finds the duplicate without modifying the array and using O(1) extra space.

Treat the array as a linked list where each value points to the next index. The duplicate creates a cycle. The cycle's entrance is the duplicate.

Alternative: a hash set that returns on the first repeated value uses O(n) extra space but is simpler to understand.

function findDuplicate(nums) {
  // Floyd's cycle detection
  let slow = nums[0], fast = nums[0];
  do {
    slow = nums[slow];
    fast = nums[nums[fast]];
  } while (slow !== fast);
  slow = nums[0];
  while (slow !== fast) {
    slow = nums[slow];
    fast = nums[fast];
  }
  return slow;
}
// findDuplicate([1,3,4,2,2]) => 2
// findDuplicate([3,1,3,4,2]) => 3

Time O(n), Space O(1) with Floyd's algorithm — no array modification, no extra space.

Simple O(n) space: use a Set, return when adding a duplicate: if (seen.has(n)) return n; seen.add(n);

A prime number is only divisible by 1 and itself. For efficient checking, only test divisors up to the square root — if n has a factor > √n, it must also have one < √n.

Special cases: numbers ≤ 1 are not prime. 2 and 3 are prime. Eliminate multiples of 2 and 3 early, then check 6k±1 candidates.

For checking many numbers, use the Sieve of Eratosthenes (O(n log log n)) instead of individual checks.

function isPrime(n) {
  if (n <= 1) return false;
  if (n <= 3) return true;
  if (n % 2 === 0 || n % 3 === 0) return false;
  for (let i = 5; i * i <= n; i += 6) {
    if (n % i === 0 || n % (i + 2) === 0) return false;
  }
  return true;
}
// isPrime(2)   => true
// isPrime(17)  => true
// isPrime(100) => false

Time O(√n) by checking only up to the square root. The 6k±1 optimization skips multiples of 2 and 3.

All primes > 3 are of the form 6k±1, so only 2 of every 6 candidates are tested: roughly 3x fewer iterations than checking every number, and 1.5x fewer than checking all odd numbers.

Rotating an array right by k puts the last k elements at the front. The triple-reverse technique does this in-place in O(n) time and O(1) space.

Normalize k by modding with array length to handle k > n efficiently (k=n+1 is same as k=1).

Alternative O(n) space approach: copy last k elements to front — simpler but uses extra memory.

function rotate(nums, k) {
  k = k % nums.length;
  if (k === 0) return nums;
  const reverse = (arr, l, r) => {
    while (l < r) { [arr[l], arr[r]] = [arr[r], arr[l]]; l++; r--; }
  };
  reverse(nums, 0, nums.length - 1); // reverse all
  reverse(nums, 0, k - 1);           // reverse first k
  reverse(nums, k, nums.length - 1); // reverse rest
  return nums;
}
// rotate([1,2,3,4,5,6,7], 3) => [5,6,7,1,2,3,4]

Triple reverse trick: reverse all → reverse first k → reverse remaining. Results in right rotation by k.

For left rotation by k: rotate(arr, n - k) where n is array length.

The sum of first n natural numbers is n*(n+1)/2. Subtract the actual array sum to find the missing number — O(n) time, O(1) space, no sorting needed.

XOR approach also works: XOR all indices 1..n with all array values — the result is the missing number (duplicate XOR values cancel).

Both approaches handle the problem in one pass without extra space.

// Approach 1: Math (sum formula)
function missingNumber(nums) {
  const n = nums.length;
  const expected = n * (n + 1) / 2;
  return expected - nums.reduce((a, b) => a + b, 0);
}

// Approach 2: XOR
function missingNumberXOR(nums) {
  let result = nums.length;
  for (let i = 0; i < nums.length; i++) result ^= i ^ nums[i];
  return result;
}
// missingNumber([3,0,1]) => 2
// missingNumber([9,6,4,2,3,5,7,0,1]) => 8

Math approach can overflow for very large n with 32-bit integers. XOR approach avoids this.

XOR works because: a XOR a = 0, a XOR 0 = a, and XOR is commutative/associative.

Use a Set for O(n) lookup. Convert the first array to a Set, then filter the second array keeping elements present in the Set.

Handle duplicates based on requirements: use a Set for a unique-values intersection, or frequency maps to include duplicates.

For sorted arrays, the two-pointer technique gives O(n+m) time with O(1) extra space.

// Unique values intersection
function intersection(nums1, nums2) {
  const set = new Set(nums1);
  return [...new Set(nums2.filter(n => set.has(n)))];
}

// With duplicates (frequency map)
function intersectWithDups(nums1, nums2) {
  const freq = {};
  for (const n of nums1) freq[n] = (freq[n] || 0) + 1;
  return nums2.filter(n => {
    if (freq[n] > 0) { freq[n]--; return true; }
    return false;
  });
}
// intersectWithDups([1,2,2,1],[2,2]) => [2,2]

Time O(n+m) with hash map approach. If arrays are pre-sorted, two pointers give O(1) extra space.

Ask the interviewer: output unique values only? or include duplicates? This changes the algorithm significantly.
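The two-pointer technique for pre-sorted arrays can be sketched as follows (name illustrative; this variant keeps duplicates):

```javascript
function intersectSorted(a, b) {
  const result = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    if (a[i] < b[j]) i++;        // advance the pointer at the smaller value
    else if (a[i] > b[j]) j++;
    else { result.push(a[i]); i++; j++; } // match: take it, advance both
  }
  return result;
}
// intersectSorted([1,1,2,2], [2,2]) => [2,2]
```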

Unlike max-sum subarray (Kadane's), we need to track BOTH max and min products because a negative times a negative becomes positive — the minimum can flip to maximum.

At each step, the current max product is the maximum of: current element alone, max*element, min*element (in case of sign flip).

This is one level harder than max-sum subarray and tests understanding of how negatives interact.

function maxProduct(nums) {
  let maxProd = nums[0], minProd = nums[0], result = nums[0];
  for (let i = 1; i < nums.length; i++) {
    if (nums[i] < 0) [maxProd, minProd] = [minProd, maxProd];
    maxProd = Math.max(nums[i], maxProd * nums[i]);
    minProd = Math.min(nums[i], minProd * nums[i]);
    result = Math.max(result, maxProd);
  }
  return result;
}
// maxProduct([2,3,-2,4])  => 6  (subarray [2,3])
// maxProduct([-2,0,-1])   => 0

Track both max and min so when we encounter a negative number, we swap them (negative flips the sign).

Time O(n), Space O(1) — single pass through the array maintaining just two rolling values.

Binary search divides the search space in half at each step by comparing the target with the middle element. This gives O(log n) time vs O(n) for linear search.

Tricky part: correct mid calculation and boundary updates (left = mid+1, right = mid-1) to avoid infinite loops.

Be careful with integer overflow: use mid = left + Math.floor((right - left) / 2) instead of (left + right) / 2 in languages with fixed-size integers.

function binarySearch(arr, target) {
  let left = 0, right = arr.length - 1;
  while (left <= right) {
    const mid = left + Math.floor((right - left) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1; // not found
}
// binarySearch([1,3,5,7,9,11], 7) => 3
// binarySearch([1,3,5,7,9,11], 6) => -1

Time O(log n): each comparison halves the search space, so 1 billion items need only ~30 comparisons.

Common mistakes: off-by-one errors in boundaries, forgetting to update both left AND right in the correct direction.

Each element has 2 choices: include or exclude. For n elements there are 2^n subsets. Using bit manipulation, represent each subset as a bitmask from 0 to 2^n - 1.

Alternatively, use recursive backtracking: start with empty set, add current element and recurse, then remove and recurse.

The iterative approach is often preferred in interviews for its clarity and O(n·2^n) time complexity which matches the size of the output.

function subsets(nums) {
  const result = [[]];
  for (const num of nums) {
    const len = result.length;
    for (let i = 0; i < len; i++) result.push([...result[i], num]);
  }
  return result;
}
// subsets([1,2,3]) => [[],[1],[2],[1,2],[3],[1,3],[2,3],[1,2,3]]
// 2^3 = 8 subsets total

Time O(n·2^n) — we generate 2^n subsets each taking O(n) to copy. Space is the same O(n·2^n).
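The bitmask formulation mentioned above can be sketched as follows (name illustrative); bit i of the mask decides whether nums[i] is included:

```javascript
function subsetsBitmask(nums) {
  const n = nums.length, result = [];
  for (let mask = 0; mask < (1 << n); mask++) {
    const subset = [];
    for (let i = 0; i < n; i++) {
      if (mask & (1 << i)) subset.push(nums[i]); // bit i set => include nums[i]
    }
    result.push(subset);
  }
  return result;
}
// subsetsBitmask([1,2,3]).length => 8
```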

For duplicate elements, sort first and skip duplicates to avoid generating duplicate subsets.
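A sketch of that sort-and-skip handling of duplicates, using backtracking (name illustrative):

```javascript
function subsetsWithDup(nums) {
  nums = [...nums].sort((a, b) => a - b); // sort so duplicates are adjacent
  const result = [];
  function backtrack(start, path) {
    result.push([...path]);
    for (let i = start; i < nums.length; i++) {
      // skip a repeated value at the same recursion depth
      if (i > start && nums[i] === nums[i - 1]) continue;
      path.push(nums[i]);
      backtrack(i + 1, path);
      path.pop();
    }
  }
  backtrack(0, []);
  return result;
}
// subsetsWithDup([1,2,2]) => [[],[1],[1,2],[1,2,2],[2],[2,2]]
```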

Expand-around-center: for each character (and each gap between characters), expand outward as long as characters match. Track the longest expansion found.

There are 2n-1 centers (n character centers + n-1 gap centers for even palindromes). Each expansion is O(n) worst case, giving O(n²) total.

The optimal Manacher's algorithm solves this in O(n) but is complex. For interviews, O(n²) expand-around-center is usually acceptable.

function longestPalindrome(s) {
  let start = 0, maxLen = 1;
  function expand(l, r) {
    while (l >= 0 && r < s.length && s[l] === s[r]) { l--; r++; }
    return r - l - 1; // length of palindrome found
  }
  for (let i = 0; i < s.length; i++) {
    const len1 = expand(i, i);   // odd palindromes
    const len2 = expand(i, i+1); // even palindromes
    const maxL = Math.max(len1, len2);
    if (maxL > maxLen) {
      maxLen = maxL;
      start = i - Math.floor((maxL - 1) / 2);
    }
  }
  return s.substring(start, start + maxLen);
}
// longestPalindrome("babad") => "bab"
// longestPalindrome("cbbd")  => "bb"

Two types of palindromes: odd-length (single center character) and even-length (between two characters).

Expand-around-center is intuitive and O(n²) — better than O(n³) brute force checking all substrings.

Use backtracking: track open and close counts. Add '(' if open < n, add ')' if close < open. When close === n, save the combination.

Valid combinations equal the Catalan number C(n) = C(2n,n)/(n+1). For n=3: 5 combinations.

This is a classic backtracking/DFS problem that tests recursive thinking and constraint propagation.

function generateParentheses(n) {
  const result = [];
  function backtrack(s, open, close) {
    if (s.length === 2 * n) { result.push(s); return; }
    if (open < n)     backtrack(s + '(', open + 1, close);
    if (close < open) backtrack(s + ')', open, close + 1);
  }
  backtrack('', 0, 0);
  return result;
}
// generateParentheses(3) =>
// ["((()))","(()())","(())()","()(())","()()()"]

Constraint: close < open ensures we never add a ')' before its matching '(', maintaining validity.

Result count follows Catalan numbers: C(1)=1, C(2)=2, C(3)=5, C(4)=14, C(5)=42.

Use two pointers, one for each array. Compare current elements from both, insert the smaller one, advance that pointer. After one array exhausts, append remainders of the other.

This is the merge step of MergeSort. Understanding it deeply makes implementing merge sort straightforward.

For in-place merge (like LeetCode 88), fill from the BACK — compare from the end and place larger elements last to avoid overwriting.

function mergeSorted(arr1, arr2) {
  let i = 0, j = 0, result = [];
  while (i < arr1.length && j < arr2.length) {
    if (arr1[i] <= arr2[j]) result.push(arr1[i++]);
    else result.push(arr2[j++]);
  }
  return result.concat(arr1.slice(i), arr2.slice(j));
}
// mergeSorted([1,3,5,7],[2,4,6,8]) => [1,2,3,4,5,6,7,8]

Time O(n+m), Space O(n+m) for a new array. In-place LeetCode-88 variant is O(1) space.

Key insight: merge works more efficiently than sorting combined array because both inputs are already sorted.
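A sketch of that back-fill in-place merge, assuming the LeetCode-88 signature where nums1 holds m real values followed by n slots of padding:

```javascript
function mergeInPlace(nums1, m, nums2, n) {
  let i = m - 1, j = n - 1, k = m + n - 1; // fill from the back
  while (j >= 0) {
    // place the larger of the two current tail elements at position k
    if (i >= 0 && nums1[i] > nums2[j]) nums1[k--] = nums1[i--];
    else nums1[k--] = nums2[j--];
  }
  return nums1;
}
// mergeInPlace([1,3,5,0,0,0], 3, [2,4,6], 3) => [1,2,3,4,5,6]
```

Filling from the back means a nums1 element is never overwritten before it has been compared.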

Use a stack. Map each closing bracket to its expected opening bracket. Push opening brackets; on closing bracket, pop and verify it matches. Return true only if stack is empty at end.

This handles nested brackets of mixed types: {[()]} is valid, {[(])} is not.

Common extensions: count minimum insertions/deletions to balance, or find minimum removal of invalid brackets.

function isBalanced(s) {
  const stack = [];
  const match = { ')': '(', ']': '[', '}': '{' };
  for (const ch of s) {
    if ('([{'.includes(ch)) {
      stack.push(ch);
    } else if (')]}'.includes(ch)) {
      if (stack.pop() !== match[ch]) return false;
    }
  }
  return stack.length === 0;
}
// isBalanced("{[()]}") => true
// isBalanced("{[(])}") => false
// isBalanced("((()") =>  false (unclosed)

The stack.length === 0 check at the end catches unclosed opening brackets (e.g., "(((").

Characters that aren't brackets are ignored — the algorithm only tracks bracket pairs.

QuickSelect is the optimal algorithm: partition (like QuickSort) and recurse only into the partition containing the kth element. Average O(n), worst O(n²).

Using a min-heap of size k: maintain k largest elements, the heap root is the kth largest. O(n log k) time.

Sort then index costs O(n log n) — fine for small arrays but not for streaming or huge datasets.

// Sort-based (interview acceptable): O(n log n)
function findKthLargest(nums, k) {
  return [...nums].sort((a, b) => b - a)[k - 1];
}

// Better: QuickSelect O(n) average
function quickSelect(nums, k) {
  const pivot = nums[Math.floor(Math.random() * nums.length)];
  const large  = nums.filter(x => x > pivot);
  const equal  = nums.filter(x => x === pivot);
  const small  = nums.filter(x => x < pivot);
  if (k <= large.length)  return quickSelect(large, k);
  if (k <= large.length + equal.length) return pivot;
  return quickSelect(small, k - large.length - equal.length);
}
// findKthLargest([3,2,1,5,6,4], 2) => 5
// findKthLargest([3,2,3,1,2,4,5,5,6], 4) => 4

QuickSelect average O(n) — best general approach. Randomized pivot avoids worst-case O(n²).

For streaming data where n is unknown: min-heap of size k gives O(n log k) and works online.
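A sketch of that streaming size-k min-heap approach; JavaScript has no built-in heap, so this hand-rolls a minimal binary min-heap (all names illustrative):

```javascript
class MinHeap {
  constructor() { this.a = []; }
  size() { return this.a.length; }
  peek() { return this.a[0]; }
  push(v) {
    const a = this.a;
    a.push(v);
    let i = a.length - 1;
    while (i > 0) { // sift up while smaller than parent
      const p = (i - 1) >> 1;
      if (a[p] <= a[i]) break;
      [a[p], a[i]] = [a[i], a[p]];
      i = p;
    }
  }
  pop() {
    const a = this.a, top = a[0], last = a.pop();
    if (a.length) {
      a[0] = last;
      let i = 0;
      while (true) { // sift down toward the smaller child
        const l = 2 * i + 1, r = 2 * i + 2;
        let s = i;
        if (l < a.length && a[l] < a[s]) s = l;
        if (r < a.length && a[r] < a[s]) s = r;
        if (s === i) break;
        [a[i], a[s]] = [a[s], a[i]];
        i = s;
      }
    }
    return top;
  }
}

function kthLargestStream(nums, k) {
  const heap = new MinHeap();
  for (const n of nums) {
    heap.push(n);
    if (heap.size() > k) heap.pop(); // keep only the k largest seen so far
  }
  return heap.peek(); // heap root = kth largest
}
// kthLargestStream([3,2,1,5,6,4], 2) => 5
```

At any point in the stream, heap.peek() is the kth largest element seen so far, which is what makes this approach work online.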