Logical Reasoning

Problem Solving Strategies

18 Questions

Solving the wrong problem perfectly is worse than solving the right problem imperfectly. Misunderstanding the goal, inputs, or constraints leads to wasted effort — you build the wrong thing.

Before writing a single line of code: read the problem twice, identify inputs/outputs, note constraints (size, range, uniqueness), and clarify ambiguities. Restate in your own words to verify understanding.

In real engineering, requirements are often unclear. Asking clarifying questions is a professional skill — not a sign of weakness.

// Questions to ask before solving:
// "What is the expected input format? [1,2,3] or a linked list?"
// "Can there be negative numbers?"
// "Can the array be empty or have one element?"
// "Is the array sorted or unsorted?"
// "Should I return index or value?"
// "Can I modify the input array in place?"
// "What should I return if there's no valid answer?"
// "Can n be 0?"

// Proper problem statement from constraints:
// n = 10^5 => need O(n log n) or better
// values in [1, 10^9] => might need BigInt for products
// values can be negative => sum formula-based approaches may not work

Constraints reveal the algorithm: input size n tells you what time complexity is acceptable. Value range tells you if overflow is a concern. Sorted tells you if binary search is applicable.

Always verify your understanding with at least two examples before coding. Pick one normal case and one edge case to make sure your interpretation is correct.

Constraints are the boundaries of valid input — they define your algorithm's requirements. Look for: n (input size), value ranges, uniqueness guarantees, sortedness, types, and special conditions.

Map constraints to complexity requirements: n=10 allows O(2^n), n=1000 allows O(n²), n=10^6 requires O(n) or O(n log n), n=10^9 requires O(log n) or O(1). Choose your algorithm accordingly.

Missing constraints often lead to bugs: assuming no negatives when negatives are possible, or assuming unique values when duplicates exist.

// Constraint mapping:
// n <= 20:        O(2^n) backtracking OK
// n <= 1,000:     O(n²) OK
// n <= 100,000:   Need O(n log n) or O(n)
// n <= 10^9:      Need O(log n) or O(1)

// Value range concerns:
// values up to 1e9 => a product of two values can exceed
//   Number.MAX_SAFE_INTEGER (2^53 - 1): use BigInt or be careful

// Uniqueness constraints:
// "distinct values" => no need to handle duplicates
// "may contain duplicates" => dedup or carefully handle

// Sorted constraints:
// "array is sorted" => binary search is viable
// "return sorted output" => consider maintaining sorted order

Read every word of the problem statement: "non-negative", "distinct", "contiguous" — these each affect the approach significantly.

Constraints also tell you when to worry about overflow. For values up to 10^9, a product of two values can reach 10^18, which overflows 32-bit integers and also exceeds JavaScript's Number.MAX_SAFE_INTEGER (2^53 - 1, about 9 * 10^15), so regular numbers can silently lose precision. Use BigInt when exact integers that large are required.
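A quick sketch of that precision limit (variable names here are illustrative):

```javascript
// Why 10^18 products are unsafe with regular JavaScript numbers:
const a = 10 ** 9;
const unsafe = a * a; // 1e18, beyond Number.MAX_SAFE_INTEGER (~9.007e15)
console.log(Number.isSafeInteger(unsafe)); // false: precision no longer guaranteed
const safe = BigInt(a) * BigInt(a); // exact arbitrary-precision product
console.log(safe === 10n ** 18n); // true
```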

Working through examples manually before coding builds intuition for the pattern, catches misunderstandings early, and reveals edge cases you'd never think of abstractly. Your brain finds patterns from concrete data better than from abstract descriptions.

Use at least 2 examples: one moderately complex (shows the general case) and one edge case (empty, single element, or all-same values).

Trace through the expected algorithm mentally. If you can't trace it, you can't code it correctly.

// Problem: find maximum subarray sum
// Trace with arr=[-2,1,-3,4,-1,2,1,-5,4]:
//
// Index:   0  1  2  3  4  5  6  7  8
// Value:  -2  1 -3  4 -1  2  1 -5  4
//
// Track: currentSum, maxSum
// i=0: cur = max(-2, -2) = -2, max = -2
// i=1: cur = max(1, -2+1) = 1, max = 1
// i=2: cur = max(-3, 1-3) = -2, max = 1
// i=3: cur = max(4, -2+4) = 4, max = 4
// i=4: cur = max(-1, 4-1) = 3, max = 4
// i=5: cur = max(2, 3+2) = 5, max = 5
// i=6: cur = max(1, 5+1) = 6, max = 6
// i=7: cur = max(-5, 6-5) = 1, max = 6
// i=8: cur = max(4, 1+4) = 5, max = 6
// Answer: 6 ([4,-1,2,1])

Tracing reveals the pattern: manually working through Kadane's algorithm on this example makes the logic obvious. Without tracing, the formula max(arr[i], cur+arr[i]) seems arbitrary.

Trace your solution on a DIFFERENT example after coding to verify it's correct — don't reuse the exact input you designed it for.

Edge cases are boundary conditions where behavior might be unexpected or incorrect. Many algorithms that work correctly on typical inputs fail at the boundaries.

Standard edge case checklist: empty input ([], '', null, 0), single element ([x], n=1), all-same values ([5,5,5]), maximum size, minimum/negative values, duplicates, already sorted, reverse sorted.

For tree problems: null root, single node, linear chain (degenerate). For linked lists: empty list, single node, cycle. For strings: empty, single char, all-same chars.
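To make the tree edge cases concrete, a minimal sketch; maxDepth and the {val, left, right} node shape are illustrative assumptions, not from a specific problem:

```javascript
// Tree edge cases: null root, single node, linear chain (degenerate).
function maxDepth(root) {
  if (root === null) return 0; // edge: null root
  return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));
}

const single = { val: 1, left: null, right: null };
const chain = {
  val: 1,
  left: { val: 2, left: { val: 3, left: null, right: null }, right: null },
  right: null,
};
console.log(maxDepth(null));   // 0
console.log(maxDepth(single)); // 1
console.log(maxDepth(chain));  // 3 (degenerate linear chain)
```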

// Edge case testing template:
function myAlgorithm(arr) {
  // Edge case 1: empty input
  if (!arr || arr.length === 0) return 0; // or null, or []
  // Edge case 2: single element (often returns trivially)
  if (arr.length === 1) return arr[0];
  // Main algorithm...
}

// Test checklist after implementation:
// myAlgorithm([])         // empty
// myAlgorithm([5])        // single
// myAlgorithm([1,1,1])    // all same
// myAlgorithm([1,2,3])    // already sorted
// myAlgorithm([3,2,1])    // reverse sorted
// myAlgorithm([-1,-2,3])  // negatives
// myAlgorithm([1,1,2,2])  // duplicates

Build an edge case checklist you always run through — makes edge case testing a discipline rather than something you do when prompted.

In interviews, proactively listing edge cases shows thoroughness. "I'd also want to test this with an empty array and with all negative numbers." This demonstrates professional engineering mindset.

Match problem characteristics to known patterns. Building this intuition requires learning which patterns solve which types of problems, and practicing until recognition is rapid.

Decision tree: need minimum/maximum of something? Try greedy or DP. Counting paths or arrangements? Try DP. Finding elements in sorted data? Try binary search. Shortest path? Try BFS.

Pattern recognition is the #1 skill that separates experienced from inexperienced problem solvers. It comes from deliberate practice across many problem types.

// Problem type → Algorithm matching:
//
// "Contiguous subarray" (sum/length)  → Sliding Window
// "Pair/triplet with sum"              → Two Pointers (sorted) or HashMap
// "Shortest/fewest steps"              → BFS
// "All paths/all combinations"         → DFS + Backtracking
// "Maximum/minimum of something"       → DP or Greedy
// "Sorted array + search"              → Binary Search
// "Count subsets"                      → DP (subset sum variant)
// "Cycle in graph"                     → DFS with visited/in-progress sets
// "Top K elements"                     → Heap
// "Range sum queries"                  → Prefix Sum
// "Anagram/permutation check"          → Frequency Map

When in doubt, start with the simplest approach that might work, then optimize if too slow. Don't over-engineer the first attempt.

Hybrid approach: recognize when a problem is a combination of two patterns. "K closest elements to target in sorted array" = binary search (find starting position) + sliding window (expand to k elements).
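That hybrid can be sketched as follows; kClosest and its signature are hypothetical names for illustration:

```javascript
// Hybrid: binary search for the position, then expand a window to k elements.
function kClosest(arr, k, target) {
  // Binary search: first index with arr[i] >= target
  let lo = 0, hi = arr.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    arr[mid] < target ? lo = mid + 1 : hi = mid;
  }
  // Expand two pointers outward from the boundary
  let left = lo - 1, right = lo;
  const result = [];
  while (result.length < k) {
    if (left < 0) result.push(arr[right++]);
    else if (right >= arr.length) result.push(arr[left--]);
    else if (target - arr[left] <= arr[right] - target) result.push(arr[left--]);
    else result.push(arr[right++]);
  }
  return result.sort((a, b) => a - b);
}
```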

Pseudocode is implementation-language-agnostic, letting you focus on the LOGIC without worrying about syntax. Mistakes in pseudocode are instant to fix. Mistakes in code take much longer to debug.

It also serves as in-code documentation: well-written pseudocode becomes the comments in your final implementation. The pseudocode steps become the function structure.

For interviews especially: thinking in pseudocode means you can explain your approach naturally before touching the keyboard.

// Pseudocode for max subarray sum:
// function maxSubarray(arr):
//   maxSoFar = -infinity
//   maxEndingHere = 0
//   for each number in arr:
//     maxEndingHere = max(number, maxEndingHere + number)
//     maxSoFar = max(maxSoFar, maxEndingHere)
//   return maxSoFar

// Direct translation to JavaScript:
function maxSubarray(arr) {
  let maxSoFar = -Infinity;
  let maxEndingHere = 0;
  for (const num of arr) {
    maxEndingHere = Math.max(num, maxEndingHere + num);
    maxSoFar = Math.max(maxSoFar, maxEndingHere);
  }
  return maxSoFar;
}

Pseudocode = blueprint, code = construction. You wouldn't start building a house without a blueprint. Same principle applies to algorithms.

Protip: keep pseudocode as comments in your code. This makes your code self-documenting and helps you stay on track when coding each step.

Translate line by line, keeping pseudocode as comments. Start with the function signature, handle the base/edge cases first, then implement the main logic. Compile mentally as you write each line.

Break complex pseudocode steps into helper functions. This keeps the main function readable and each helper individually testable.

Don't try to write perfect code the first time — get it working first, then clean it up. Premature optimization during the translation phase introduces bugs.

// Pseudocode:
// function twoSum(arr, target):
//   map = empty dictionary
//   for i, num in arr:
//     complement = target - num
//     if complement in map: return [map[complement], i]
//     map[num] = i

// Step 1: function signature
function twoSum(arr, target) {
  // Step 2: initialize data structure
  const map = new Map();
  // Step 3: main loop
  for (let i = 0; i < arr.length; i++) {
    const complement = target - arr[i];
    // Step 4: check condition
    if (map.has(complement)) return [map.get(complement), i];
    // Step 5: update data structure
    map.set(arr[i], i);
  }
  return null; // edge case: no solution
}

Each pseudocode line becomes at most 1-3 lines of code. If a translation is getting complex, refactor the pseudocode to be more specific first.

After translating each section, mentally trace through with the simplest valid input to verify correctness before moving to the next section.

Testing validates that your solution is correct. Run through: the given example, a new example you expect to work, an edge case that might break things, and a case where the answer is different from what you initially expected.

Trace through line by line for at least one input — don't just assume it works after writing. This is especially important in interviews where running code isn't always possible.

For recursive functions, trace the call stack. For DP, trace the dp table values. For sorting, trace the state of the array after each step.

// Testing binary search:
function binarySearch(arr, target) {
  let lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] === target) return mid;
    arr[mid] < target ? lo = mid + 1 : hi = mid - 1;
  }
  return -1;
}
// Test 1: normal case
// arr=[1,3,5,7,9], target=5
// lo=0,hi=4,mid=2: arr[2]=5===5 => return 2 ✓

// Test 2: not found
// arr=[1,3,5,7,9], target=6
// lo=0,hi=4,mid=2: arr[2]=5<6, lo=3
// lo=3,hi=4,mid=3: arr[3]=7>6, hi=2
// lo=3>hi=2: return -1 ✓

// Test 3: edge (single element)
// arr=[5], target=5: lo=0,hi=0,mid=0: arr[0]=5 => return 0 ✓

Step-by-step tracing catches pointer/index bugs (off-by-one, wrong update direction) that look correct visually but fail on specific inputs.

For complex algorithms, create a simple test harness: const tests = [{input:[], expected:0},{input:[1],expected:1},...]. Run all tests and compare automatically.
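A minimal sketch of such a harness, with sumArray as a stand-in for the hypothetical function under test:

```javascript
// Simple test harness: declare cases once, run and compare automatically.
const sumArray = arr => arr.reduce((a, b) => a + b, 0);

const tests = [
  { input: [], expected: 0 },          // empty
  { input: [1], expected: 1 },         // single element
  { input: [-1, -2, 3], expected: 0 }, // negatives
];

for (const { input, expected } of tests) {
  const actual = sumArray(input);
  const ok = actual === expected;
  console.log(`${ok ? 'PASS' : 'FAIL'}: sumArray(${JSON.stringify(input)}) = ${actual}`);
}
```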

Optimize AFTER you have a correct solution. Never optimize a program that doesn't work first. Premature optimization introduces complexity and bugs before you've verified correctness.

Optimization target: identify the bottleneck first. Is it the time complexity of the main loop? An inner loop that could use a hash map? Redundant computation that could be memoized?

State the current complexity, identify the bottleneck, explain your optimization, then implement. Run the same test cases on the optimized version to verify it's still correct.

// Optimizing step by step:
// 1. Brute force: O(n²)
function containsDuplicate_brute(arr) {
  for (let i = 0; i < arr.length; i++)
    for (let j = i+1; j < arr.length; j++)
      if (arr[i] === arr[j]) return true;
  return false;
}

// 2. Identify bottleneck: inner loop is O(n) lookup
// 3. Optimization: use Set for O(1) lookup
function containsDuplicate(arr) {
  const seen = new Set();
  for (const x of arr) {
    if (seen.has(x)) return true;
    seen.add(x);
  }
  return false;
}
// O(n²) => O(n) time, O(1) => O(n) space
// Verify: runs correctly on same test cases

Optimization hierarchy by impact: (1) Change algorithm class (O(n²)→O(n log n)), (2) Change data structure for O(n)→O(1) operations, (3) Reduce constant factors (avoid function calls, cache-friendly access).

In interviews, discuss the tradeoffs: "This O(n) space optimization makes the code faster but uses more memory. Is that acceptable given the constraints?"

Problem-solving communication demonstrates that you can work collaboratively on complex technical challenges — a core engineering skill. Your interviewer needs to understand your thought process to assess your engineering judgment, not just your final answer.

Thinking out loud allows your interviewer to give you hints if you're stuck, correct misconceptions early, and learn about your reasoning even if your solution isn't perfect.

Many interviewers explicitly say: a candidate who communicates well with a suboptimal solution is preferred over a silent candidate with a perfect one.

// Good communication pattern during problem solving:
// "I see this is asking for the minimum path sum..."
// "My initial thought is DFS, but that might be O(2^n)..."
// "I notice there could be overlapping subproblems here..."
// "So I'll use DP with a 2D array..."
// "My base case is when we reach the bottom-right cell..."
// "Let me trace through: grid=[[1,3,1],[1,5,1],[4,2,1]]..."
// "Time O(m*n), space O(m*n), but I can reduce to O(n) with rolling array"

Structure your communication: state the problem, your approach, complexity, and edge cases — in that order. This mirrors how professional engineers design systems.

For take-home or live coding interviews: add code comments that explain WHY (not what) each section does. Comments show you think about code readability.

Two pointers work by maintaining two indices that move based on conditions, eliminating the need for nested loops. Requires either a sorted array or a naturally ordered structure.

Patterns: left+right converging (sum problems, palindrome check), same-direction moving (sliding window, fast-slow), anchor+runner (removing duplicates).

A single pass gives O(n) time instead of the O(n²) of nested loops. The key invariant: know what each pointer represents and how moving it brings you closer to the solution.

// Two sum (sorted array)
function twoSum(arr, target) {
  let lo = 0, hi = arr.length - 1;
  while (lo < hi) {
    const sum = arr[lo] + arr[hi];
    if (sum === target) return [lo, hi];
    sum < target ? lo++ : hi--;
  }
  return null;
}

// Remove duplicates from sorted array (in-place)
function removeDups(arr) {
  let write = 1; // write pointer
  for (let read = 1; read < arr.length; read++) // read pointer
    if (arr[read] !== arr[read-1]) arr[write++] = arr[read];
  return write; // new length
}
// arr=[1,1,2,3,3] => [1,2,3,...], returns 3

Sort first when needed: two pointers require ordered structure. If unsorted, sorting is O(n log n) but still better than O(n²) brute force.

Three-sum: fix one element, then use two pointers on the rest. Four-sum: fix two elements with nested loops, two pointers on the rest.
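The three-sum recipe above can be sketched like this (threeSum is an illustrative name; the duplicate-skipping assumes we want unique triplets):

```javascript
// Three-sum: sort, fix one element, two pointers on the rest. O(n²)
function threeSum(nums, target = 0) {
  nums = [...nums].sort((a, b) => a - b);
  const result = [];
  for (let i = 0; i < nums.length - 2; i++) {
    if (i > 0 && nums[i] === nums[i - 1]) continue; // skip duplicate anchors
    let lo = i + 1, hi = nums.length - 1;
    while (lo < hi) {
      const sum = nums[i] + nums[lo] + nums[hi];
      if (sum === target) {
        result.push([nums[i], nums[lo], nums[hi]]);
        while (lo < hi && nums[lo] === nums[lo + 1]) lo++; // skip duplicates
        while (lo < hi && nums[hi] === nums[hi - 1]) hi--;
        lo++; hi--;
      } else if (sum < target) lo++;
      else hi--;
    }
  }
  return result;
}
```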

Weighted interval scheduling: each job has a start, end, and profit. Select non-overlapping jobs to maximize total profit. Sort by end time. DP[i] = max profit using jobs 1..i.

For each job i: either skip it (dp[i] = dp[i-1]) or take it and add to the best compatible job (last job j that ends before job i starts).

Finding the latest compatible job uses binary search on end times — O(log n) per job, total O(n log n).

function weightedIntervalScheduling(jobs) {
  // jobs = [[start, end, profit], ...]
  jobs.sort((a, b) => a[1] - b[1]); // sort by end time
  const ends = jobs.map(j => j[1]);
  const dp = new Array(jobs.length + 1).fill(0);
  for (let i = 1; i <= jobs.length; i++) {
    const [start, end, profit] = jobs[i-1];
    // Binary search: find last job that ends <= start
    let lo = 0, hi = i - 1, compat = 0;
    while (lo <= hi) {
      const mid = (lo + hi) >> 1;
      ends[mid] <= start ? (compat = mid + 1, lo = mid + 1) : hi = mid - 1;
    }
    dp[i] = Math.max(dp[i-1], profit + dp[compat]);
  }
  return dp[jobs.length];
}
// jobs=[[0,3,3],[2,5,4],[4,6,2],[6,8,5]] => 10 ([0,3,3]+[4,6,2]+[6,8,5];
// a job starting exactly when another ends counts as compatible here)

dp[i] = max(dp[i-1], profit[i] + dp[lastCompatible(i)]) — include or skip. Binary search on sorted end times makes finding lastCompatible O(log n).

Related: the include-or-skip recurrence mirrors 0/1 knapsack. Activity selection (unweighted, maximize the number of jobs) is simpler and solvable with greedy alone: always pick the job with the earliest end time.
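For contrast, a sketch of greedy activity selection: sort by end time and take every job that starts at or after the last selected job's end (maxActivities is an illustrative name):

```javascript
// Activity selection: greedy by earliest end time. O(n log n)
function maxActivities(intervals) {
  const sorted = [...intervals].sort((a, b) => a[1] - b[1]);
  let count = 0, lastEnd = -Infinity;
  for (const [start, end] of sorted) {
    if (start >= lastEnd) { // compatible: take it
      count++;
      lastEnd = end;
    }
  }
  return count;
}
```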

First determine the graph type: directed/undirected, weighted/unweighted, cyclic/acyclic, connected/disconnected. The type determines the appropriate algorithm.

Build an adjacency list representation (most common). Choose BFS for shortest path in unweighted graphs, Dijkstra for weighted non-negative, DFS for connectivity, topological sort for DAGs.

Represent cities as nodes, roads as edges. Social networks, dependency graphs, state machines — all can be modeled as graphs.

// Build adjacency list from edge list
function buildGraph(n, edges) {
  const graph = Array.from({length: n}, () => []);
  for (const [u, v] of edges) {
    graph[u].push(v);
    graph[v].push(u); // undirected
  }
  return graph;
}

// BFS: shortest path in unweighted graph
function bfs(graph, start, end) {
  const queue = [[start, 0]], visited = new Set([start]);
  while (queue.length) {
    const [node, dist] = queue.shift();
    if (node === end) return dist;
    for (const nb of graph[node]) {
      if (!visited.has(nb)) { visited.add(nb); queue.push([nb, dist+1]); }
    }
  }
  return -1; // unreachable
}

BFS gives shortest path in unweighted graphs because it explores level by level. DFS is simpler but doesn't guarantee shortest path.

When asked for "minimum steps", "fewest moves", or "shortest path" — default to BFS. When asked for "all paths", "exists a path", or "cycle detection" — default to DFS.
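Cycle detection in a directed graph, mentioned in the pattern table above, can be sketched with DFS and three node states (hasCycle is an illustrative name):

```javascript
// Directed cycle detection: DFS with unvisited / in-progress / done states.
function hasCycle(n, edges) {
  const graph = Array.from({ length: n }, () => []);
  for (const [u, v] of edges) graph[u].push(v);
  const state = Array(n).fill(0); // 0=unvisited, 1=in progress, 2=done
  function dfs(node) {
    state[node] = 1;
    for (const nb of graph[node]) {
      if (state[nb] === 1) return true; // back edge into the current path: cycle
      if (state[nb] === 0 && dfs(nb)) return true;
    }
    state[node] = 2;
    return false;
  }
  for (let i = 0; i < n; i++) {
    if (state[i] === 0 && dfs(i)) return true;
  }
  return false;
}
```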

String problems often have multiple approaches: brute force O(n²) using nested loops + string concatenation, O(n) using hash maps for frequency counting, or O(n log n) with sorting.

Key operations: character frequency counting, palindrome checking, anagram detection, pattern matching, and transformation. Most have efficient O(n) solutions with hash maps or two pointers.

JavaScript strings are immutable: building a string with += in a loop can degrade to O(n²), since each concatenation may copy the whole string. Collect parts in an array and join once for reliably O(n) work.

// Character frequency: O(n)
function charFreq(s) {
  const freq = {};
  for (const c of s) freq[c] = (freq[c] || 0) + 1;
  return freq;
}

// Anagram check: O(n)
function isAnagram(s, t) {
  if (s.length !== t.length) return false;
  const freq = charFreq(s);
  for (const c of t) {
    if (!freq[c]) return false;
    freq[c]--;
  }
  return true;
}

// Efficient string building: O(n)
// (n and someChar are placeholders for your loop bound and content)
const parts = [];
for (let i = 0; i < n; i++) parts.push(someChar);
const result = parts.join(''); // NOT result += someChar each iteration!

The frequency-map pattern (a Map or plain object of counts) is the most versatile tool for string problems: character frequency underlies anagrams, permutations, and substrings.

For complex patterns (regex), use the built-in RegExp. For competitive/interview problems, implement manually to demonstrate understanding of the underlying algorithm.

Binary search on the answer is a powerful technique: instead of searching for a specific value in a sorted array, you binary search over the answer space itself. "Can we achieve X?" is a yes/no predicate.

If the predicate is monotone (answers go from false to true or vice versa as X increases), binary search finds the boundary. Common for: minimum time, maximum capacity, smallest/largest feasible value.

Template: lo = minimum possible answer, hi = maximum, check if mid is feasible. Converge to the boundary.

// Example: koko eating bananas (minimum eating speed)
function minEatingSpeed(piles, h) {
  let lo = 1, hi = Math.max(...piles);
  function canFinish(speed) {
    return piles.reduce((t, p) => t + Math.ceil(p/speed), 0) <= h;
  }
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    canFinish(mid) ? hi = mid : lo = mid + 1; // find minimum valid speed
  }
  return lo;
}
// piles=[3,6,7,11], h=8 => 4 (minimum speed to finish in h hours)

Identify the predicate: restating the problem as "is X feasible?" reveals whether binary search on the answer applies. If the predicate is monotone, you can binary search.

Similar problems: capacity to ship packages, split array largest sum, minimum days to make bouquets. All use binary search on the answer with a feasibility check.
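The shipping variant follows the same template, with only the feasibility predicate changed. This sketch assumes the classic statement: minimum capacity to ship all weights, in order, within a given number of days:

```javascript
// Binary search on the answer: minimum ship capacity.
function shipWithinDays(weights, days) {
  let lo = Math.max(...weights);               // must fit the heaviest package
  let hi = weights.reduce((a, b) => a + b, 0); // one day, everything at once
  function canShip(cap) {
    let d = 1, load = 0;
    for (const w of weights) {
      if (load + w > cap) { d++; load = 0; } // start a new day
      load += w;
    }
    return d <= days; // monotone predicate: true for all cap above the answer
  }
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    canShip(mid) ? hi = mid : lo = mid + 1; // find minimum feasible capacity
  }
  return lo;
}
```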

Topological sort orders nodes in a DAG (Directed Acyclic Graph) such that for every edge u→v, u comes before v. Used for dependency resolution: build systems, course prerequisites, task scheduling.

Kahn's algorithm (BFS): start with nodes having in-degree 0, process them, decrement neighbors' in-degrees, add new 0-in-degree nodes to queue.

DFS approach: process a node's dependencies recursively, then add the node to the result. Reverse the result for topological order.
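A sketch of that DFS variant (topoSortDFS is an illustrative name; this version assumes the input is acyclic, so add in-progress tracking if cycles are possible):

```javascript
// DFS topological sort: post-order, then reverse. O(V+E)
function topoSortDFS(n, edges) {
  const graph = Array.from({ length: n }, () => []);
  for (const [u, v] of edges) graph[u].push(v);
  const visited = Array(n).fill(false);
  const order = [];
  function dfs(node) {
    visited[node] = true;
    for (const nb of graph[node]) if (!visited[nb]) dfs(nb);
    order.push(node); // everything this node points to is already in `order`
  }
  for (let i = 0; i < n; i++) if (!visited[i]) dfs(i);
  return order.reverse(); // reversed post-order = topological order
}
```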

// Kahn's algorithm: O(V+E)
function topoSort(n, edges) {
  const inDeg = Array(n).fill(0);
  const graph = Array.from({length:n}, ()=>[]);
  for (const [u,v] of edges) { graph[u].push(v); inDeg[v]++; }
  const queue = [];
  for (let i = 0; i < n; i++) if (inDeg[i] === 0) queue.push(i);
  const result = [];
  while (queue.length) {
    const node = queue.shift();
    result.push(node);
    for (const nb of graph[node])
      if (--inDeg[nb] === 0) queue.push(nb);
  }
  return result.length === n ? result : []; // empty = cycle detected
}
// edges=[[0,1],[0,2],[1,3],[2,3]] => [0,1,2,3] with this code
// ([0,2,1,3] is also a valid topological order)

Cycle detection: if result.length !== n after Kahn's, a cycle exists (not all nodes processed). Topological sort is only possible on DAGs.

Course Schedule problems on LeetCode are essentially topological sort. Build prerequisites as edges, check for cycles, or return sorted order.

Space complexity includes: input space, auxiliary space (extra data structures), and stack space (recursion depth). Usually auxiliary space is what we optimize.

Common reductions: DP 2D table → rolling array (O(m*n) → O(n)), DFS recursion → iterative stack (O(h) where h is height → explicit stack), output array reuse (modify input in-place).

Trading space for time means caching or precomputing results: faster, but more memory. Trading time for space means recomputing values on demand: less memory, but slower. Be explicit about which tradeoff you're making.
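A classic space-for-time example is memoization: cache subproblem results to avoid recomputation (Fibonacci here is purely an illustration):

```javascript
// Without memoization: exponential time, recomputes the same subproblems.
function fibSlow(n) {
  return n < 2 ? n : fibSlow(n - 1) + fibSlow(n - 2);
}

// With memoization: O(n) time in exchange for O(n) extra space.
function fibMemo(n, memo = new Map()) {
  if (n < 2) return n;
  if (memo.has(n)) return memo.get(n);
  const result = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  memo.set(n, result);
  return result;
}
```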

// 2D DP table: O(m*n) space
const dp = Array.from({length:m}, ()=>Array(n).fill(0));
// dp[i][j] depends only on dp[i-1][j] and dp[i][j-1]

// Rolling array: O(n) space (only keep the previous row)
let prev = Array(n).fill(0), curr = Array(n).fill(0);
for (let i = 0; i < m; i++) {
  for (let j = 0; j < n; j++)
    curr[j] = prev[j] + (j > 0 ? curr[j - 1] : 0); // row above + cell to the left
  [prev, curr] = [curr, prev]; // swap buffers
}

// Recursion to explicit stack: same O(h) space, but no call-stack overflow risk
function iterativeDFS(root) {
  if (!root) return;
  const stack = [root];
  while (stack.length) {
    const node = stack.pop();
    // visit(node) here (pre-order); push right first so left is processed first
    if (node.right) stack.push(node.right);
    if (node.left) stack.push(node.left);
  }
}

Rolling array optimization: works whenever dp[i] depends only on dp[i-1]. It reduces space complexity from O(n²) to O(n) for many classic DP problems.

Stack space from recursion is O(depth) — for balanced trees this is O(log n), for unbalanced it's O(n). Use iterative approaches for production code on untrusted data.

Permutations (order matters): n! total for n distinct items. Use backtracking with swap-back or position selection. Common in: anagram generation, arrangement problems.

Combinations (order doesn't matter): C(n,k) = n!/(k!(n-k)!) subsets of size k. Use backtracking with a start index to avoid repetition and skip previously used elements.

Key difference in code: for permutations, iterate from current start to end and swap; for combinations, iterate with start index always increasing.

// Permutations: O(n! * n)
function permute(nums) {
  const result = [];
  function bt(start) {
    if (start === nums.length) { result.push([...nums]); return; }
    for (let i = start; i < nums.length; i++) {
      [nums[start], nums[i]] = [nums[i], nums[start]]; // swap
      bt(start + 1);
      [nums[start], nums[i]] = [nums[i], nums[start]]; // undo
    }
  }
  bt(0);
  return result;
}

// Combinations: O(C(n,k) * k)
function combine(n, k) {
  const result = [];
  function bt(start, current) {
    if (current.length === k) { result.push([...current]); return; }
    for (let i = start; i <= n; i++) {
      current.push(i);
      bt(i + 1, current); // i+1: no reuse, increasing order
      current.pop();
    }
  }
  bt(1, []);
  return result;
}

The difference: permutations loop from start and SWAP (all positions considered), combinations loop from start and PUSH (always increasing, no reuse).

For combinations with repetition: use bt(i, current) instead of bt(i+1, current) to allow reusing the same element.
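That variant can be sketched as follows (combineWithRepetition is an illustrative name):

```javascript
// Combinations with repetition: recurse with i, not i+1, to allow reuse.
function combineWithRepetition(n, k) {
  const result = [];
  function bt(start, current) {
    if (current.length === k) { result.push([...current]); return; }
    for (let i = start; i <= n; i++) {
      current.push(i);
      bt(i, current); // i (not i + 1): the same element may be chosen again
      current.pop();
    }
  }
  bt(1, []);
  return result;
}
```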