const numbers = [1, 2, 3, 4];
const doubled = numbers.map(n => n * 2);
console.log(doubled); // [2, 4, 6, 8]
console.log(numbers); // [1, 2, 3, 4] — original unchanged
// With index
["a", "b", "c"].map((val, i) => `${i}:${val}`);
// ["0:a", "1:b", "2:c"]
// Transform object shape
const users = [{ name: "Alice", age: 28 }, { name: "Bob", age: 35 }];
const names = users.map(u => u.name); // ["Alice", "Bob"]
// Chain with filter
const activeNames = users
  .filter(u => u.active)
  .map(u => u.name);
Why it matters: map() is the most-used array method in React — virtually every list-rendering component calls it. It appears in nearly every JavaScript interview and is the foundation of functional data transformation pipelines.
Real applications: Transforming API response objects into display-ready shapes, rendering lists in React JSX (items.map(item => <Item key={item.id} {...item} />)), extracting property values, and building data pipelines with filter() and reduce().
Common mistakes: Using map() for side effects instead of forEach() (creates a wasted intermediate array), forgetting to return inside a block-body callback (result fills with undefined), and mutating elements inside the callback instead of returning new values.
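The forgotten-return pitfall above is easy to reproduce. A minimal sketch (the `broken`/`fixed` names are just for illustration):

```javascript
const nums = [1, 2, 3];

// Block body without return: every slot becomes undefined
const broken = nums.map(n => { n * 2; });
console.log(broken); // [undefined, undefined, undefined]

// Fix: return explicitly (or use a concise body: n => n * 2)
const fixed = nums.map(n => { return n * 2; });
console.log(fixed); // [2, 4, 6]
```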
const numbers = [1, 2, 3, 4, 5, 6];
const evens = numbers.filter(n => n % 2 === 0); // [2, 4, 6]
// Filter objects
const users = [
  { name: "Alice", active: true },
  { name: "Bob", active: false },
  { name: "Carol", active: true }
];
const activeUsers = users.filter(u => u.active);
// [{ name: "Alice", ... }, { name: "Carol", ... }]
// Remove falsy values (common trick)
const raw = [0, "hello", null, 42, "", undefined, false];
const truthy = raw.filter(Boolean); // ["hello", 42]
// Chain filter → map
const activeNames = users
  .filter(u => u.active)
  .map(u => u.name); // ["Alice", "Carol"]
Why it matters: filter() is essential for building search results, conditional rendering, and data cleaning. Combined with map(), it forms the core of most functional data transformation pipelines in JavaScript applications.
Real applications: Search autocomplete (filtering by prefix), dashboard tables (filtering by status/date), removing null values from API responses, and building permission-based UIs where certain items are hidden from certain roles.
Common mistakes: Using filter().length > 0 to check existence (use some() instead — it short-circuits), mutating the original array inside the callback, and returning an item directly instead of a boolean expression.
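The existence-check mistake above can be sketched like this (the `orders` data is hypothetical):

```javascript
const orders = [
  { id: 1, paid: true },
  { id: 2, paid: false },
  { id: 3, paid: false }
];

// Wasteful: filter() visits all 3 elements, then builds an array just to count it
const hasPaidSlow = orders.filter(o => o.paid).length > 0;

// Better: some() stops at the first match (id 1 here)
const hasPaidFast = orders.some(o => o.paid);

console.log(hasPaidSlow, hasPaidFast); // true true
```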
const nums = [1, 2, 3, 4];
const sum = nums.reduce((acc, n) => acc + n, 0); // 10
// Build object from array (groupBy)
const items = [
  { type: "fruit", name: "apple" },
  { type: "veggie", name: "carrot" },
  { type: "fruit", name: "banana" }
];
const grouped = items.reduce((acc, item) => {
  (acc[item.type] ??= []).push(item.name);
  return acc;
}, {});
// { fruit: ["apple", "banana"], veggie: ["carrot"] }
// Flatten array of arrays
[[1,2],[3,4],[5]].reduce((acc, arr) => acc.concat(arr), []);
// [1, 2, 3, 4, 5]
// Frequency count
["a","b","a","c","b","a"].reduce((acc, v) => {
acc[v] = (acc[v] ?? 0) + 1; return acc;
}, {}); // { a: 3, b: 2, c: 1 }
Why it matters: reduce() is the Swiss Army knife of array methods — it can implement map, filter, groupBy, flatten, and more. Interview questions that ask "solve this in a single pass" almost always require reduce().
Real applications: Shopping cart total calculation, grouping API results by category for dashboards, counting word/tag frequencies in analytics, and building lookup tables indexed by ID from flat arrays.
Common mistakes: Omitting the initial value (throws on empty arrays and produces unexpected behavior with one element), not returning the accumulator from the callback (it becomes undefined after the first iteration), and using reduce() when map() or filter() would be clearer and more readable.
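The initial-value mistake above is worth seeing once. A minimal sketch (the `sum` helper is hypothetical):

```javascript
// Always pass an initial value: it makes empty arrays safe
const sum = xs => xs.reduce((acc, n) => acc + n, 0);
console.log(sum([1, 2, 3])); // 6
console.log(sum([]));        // 0 (no crash)

// Without one, reducing an empty array throws a TypeError
let threw = false;
try {
  [].reduce((acc, n) => acc + n);
} catch (e) {
  threw = true;
}
console.log(threw); // true
```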
const users = [
  { id: 1, name: "Alice", role: "admin" },
  { id: 2, name: "Bob", role: "user" },
  { id: 3, name: "Carol", role: "admin" }
];
users.find(u => u.id === 2); // { id: 2, name: "Bob", role: "user" }
users.find(u => u.id === 99); // undefined
users.findIndex(u => u.id === 2); // 1
users.findIndex(u => u.id === 99); // -1
// ES2023: search from end
users.findLast(u => u.role === "admin");
// { id: 3, name: "Carol", ... } (last admin)
// Common pattern: immutable update in React
const idx = users.findIndex(u => u.id === 2);
if (idx !== -1) {
  const updated = [...users];
  updated[idx] = { ...users[idx], role: "admin" };
}
Why it matters: find() replaces the verbose filter()[0] pattern and is more performant since it stops at the first match. findIndex() is essential for immutable update patterns in React state management.
Real applications: Looking up a user by ID from a fetched list, finding the currently selected item in a dropdown state, locating an item’s index for an immutable update, and finding the most recent occurrence with findLast().
Common mistakes: Using filter() when only one result is needed (wastes iterations), not checking for -1 from findIndex() before using in splice() (deletes the wrong element), and modifying the object returned by find() without realizing it mutates the original array element (returns a reference, not a copy).
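The reference-mutation pitfall above can be demonstrated in a few lines (the `records` data is hypothetical):

```javascript
const records = [{ id: 1, role: "user" }];

// find() returns a reference to the element, not a copy
const r = records.find(x => x.id === 1);
r.role = "admin";
console.log(records[0].role); // "admin" (the array element changed too)

// Safe: spread into a new object before modifying
const safe = { ...records.find(x => x.id === 1), role: "viewer" };
console.log(records[0].role); // still "admin" (untouched by the copy)
```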
some() returns true if at least one element passes the test; every() returns true only if all elements pass. Both short-circuit — some() stops at the first true, every() stops at the first false. On an empty array, some() returns false and every() returns true (vacuous truth — a subtle edge case to know).
const numbers = [1, 2, 3, 4, 5];
numbers.some(n => n > 4); // true (stops at 5)
numbers.some(n => n > 10); // false
numbers.every(n => n > 0); // true (all positive)
numbers.every(n => n > 3); // false (stops at 1)
// Empty array edge cases
[].some(n => n > 0); // false (no elements passed)
[].every(n => n > 0); // true (vacuous truth!)
// Practical use cases
const cart = [
  { name: "Laptop", inStock: true },
  { name: "Mouse", inStock: true }
];
cart.every(item => item.inStock); // true — can checkout
cart.some(item => !item.inStock); // false — no stockouts
// Form validation
const fields = ["name", "email", "phone"];
const allFilled = fields.every(f => formData[f]?.trim());
Why it matters: some() and every() make intent explicit and are more performant than filter().length > 0 or filter().length === arr.length because they stop iterating as early as possible. They are frequently paired in validation logic.
Real applications: Cart checkout validation (all items in stock?), permission checks (does user have all required roles?), form field validation (all required fields filled?), and feature flag evaluation (is any premium feature enabled?).
Common mistakes: Forgetting the empty-array edge case for every() — it returns true with no elements, which can silently pass validation on empty data. Also assuming some() returns the matched element (it only returns a boolean) — use find() for the element.
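The vacuous-truth pitfall above can silently pass validation. A minimal guard, sketched:

```javascript
// every() on an empty array returns true, which can pass validation
// on data that simply has not loaded yet
const rows = [];
const naive = rows.every(r => r.valid);                      // true!
const guarded = rows.length > 0 && rows.every(r => r.valid); // false
console.log(naive, guarded); // true false
```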
flatMap() maps each element and then flattens the result one level, equivalent to map().flat(1) but more efficient since it builds only one intermediate array. Use Infinity to fully flatten any nested structure.
const nested = [1, [2, 3], [4, [5, 6]]];
nested.flat(); // [1, 2, 3, 4, [5, 6]] — depth 1
nested.flat(2); // [1, 2, 3, 4, 5, 6]
nested.flat(Infinity); // [1, 2, 3, 4, 5, 6]
// flatMap: split sentences into words
const sentences = ["hello world", "foo bar"];
sentences.flatMap(s => s.split(" "));
// ["hello", "world", "foo", "bar"]
// ONE-TO-MANY with built-in filtering:
const nums = [1, 2, 3, 4];
nums.flatMap(n => n % 2 ? [n, n * 10] : []);
// [1, 10, 3, 30] — odd numbers + their ×10, evens dropped
// Flatten paginated API results
const pages = [[{id:1},{id:2}], [{id:3}]];
pages.flat(); // [{id:1},{id:2},{id:3}]
Why it matters: flatMap() elegantly solves "filter + transform" in a single pass, avoiding the extra intermediate array that filter().map() creates. It is the idiomatic way to handle one-to-many transformations or expand items conditionally.
Real applications: Merging paginated API results into a single list, expanding product variants into individual SKU records, tokenizing text (sentences to words), and optional-value mapping (return [] to skip, [value] to include).
Common mistakes: Calling flat() without an argument and expecting full flattening (only goes depth 1), calling map().flat() instead of the more efficient flatMap(), and not realizing flatMap() only flattens one level — nested arrays within the callback result won’t flatten further.
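The one-level limit mentioned above is quick to demonstrate:

```javascript
// flatMap() flattens exactly one level of each callback result
const once = [1, 2].flatMap(n => [[n, n]]);
console.log(once); // [[1, 1], [2, 2]] (inner arrays survive)

// Flatten again explicitly if you need to go deeper
const deep = once.flat();
console.log(deep); // [1, 1, 2, 2]
```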
Array.from() creates a real array from any array-like object (anything with a length + indexed elements) or any iterable (strings, Maps, Sets, NodeLists, generators). An optional second argument acts as a map function, transforming each element during creation — more efficient than creating then mapping. It produces a true array, enabling all array methods on non-array iterables.
// From iterables
Array.from("hello"); // ["h","e","l","l","o"]
Array.from(new Set([1,2,2,3])); // [1,2,3] (deduped)
Array.from(new Map([["a",1]])); // [["a",1]]
// From DOM NodeList
const divs = Array.from(document.querySelectorAll("div"));
divs.map(d => d.textContent); // now all array methods work!
// With map function — generate ranges
Array.from({ length: 5 }, (_, i) => i + 1); // [1,2,3,4,5]
Array.from({ length: 5 }, (_, i) => i * 2); // [0,2,4,6,8]
// 2D grid (independent rows)
Array.from({ length: 3 }, () => Array(3).fill(0));
// [[0,0,0],[0,0,0],[0,0,0]]
// Convert arguments object (legacy code)
function f() { return Array.from(arguments); }
Why it matters: Array.from({length:n}, mapFn) is the idiomatic way to generate structured arrays of arbitrary size. It bridges the gap between DOM APIs (which return array-like NodeLists) and actual arrays, enabling method chaining on query results.
Real applications: Converting querySelectorAll results to enable array methods, creating grids for game boards or calendar UIs, generating test data seeds, and converting Sets back to arrays after deduplication.
Common mistakes: Using new Array(5).fill([]) to create a 2D array — all rows share the same reference; use Array.from({length:5}, () => []) instead. Also forgetting that Array.from() produces a shallow copy — nested objects are still references, not deep clones.
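The shared-reference bug above is worth seeing side by side:

```javascript
// Buggy: fill() stores the SAME array reference in every slot
const bad = new Array(3).fill([]);
bad[0].push("x");
console.log(bad); // [["x"], ["x"], ["x"]] (every row "changed")

// Correct: the map function runs once per slot, creating a fresh array each time
const good = Array.from({ length: 3 }, () => []);
good[0].push("x");
console.log(good); // [["x"], [], []]
```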
// Default string sort — WRONG for numbers!
[10, 9, 2, 21].sort(); // [10, 2, 21, 9] ← bug!
// Numeric ascending
[10, 9, 2, 21].sort((a, b) => a - b); // [2, 9, 10, 21]
// Numeric descending
[10, 9, 2, 21].sort((a, b) => b - a); // [21, 10, 9, 2]
// Sort objects alphabetically
const users = [
  { name: "Charlie" }, { name: "Alice" }, { name: "Bob" }
];
users.sort((a, b) => a.name.localeCompare(b.name));
// [Alice, Bob, Charlie]
// Immutable sort (React best practice)
const immutable = [...arr].sort((a, b) => a - b);
// ES2023:
const immutable2 = arr.toSorted((a, b) => a - b);
Why it matters: The default sort() behavior is a classic bug source — [10, 9, 2].sort() returns [10, 2, 9], not [2, 9, 10]. Interviewers ask about sort() specifically to catch candidates who don’t know this pitfall.
Real applications: Sorting product listings by price, alphabetically ordering user-generated tags, sorting timestamps in a feed, and stable multi-key sorting (sort by last name then first name) using localeCompare.
Common mistakes: Calling sort() without a comparator on numbers, mutating React state directly by calling sort() on a state array (use toSorted() or [...arr].sort()), and using a - b with non-numeric values (produces NaN, which is undefined sort behavior).
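The multi-key sort mentioned in the applications above can be sketched by chaining localeCompare results with || (the `people` data is hypothetical; localeCompare returns 0 on a tie, so || falls through to the next key):

```javascript
const people = [
  { last: "Smith", first: "Bob" },
  { last: "Jones", first: "Ann" },
  { last: "Smith", first: "Ann" }
];

// Sort by last name, then first name, without mutating the original
const sorted = [...people].sort((a, b) =>
  a.last.localeCompare(b.last) || a.first.localeCompare(b.first));

console.log(sorted.map(p => `${p.last}, ${p.first}`));
// ["Jones, Ann", "Smith, Ann", "Smith, Bob"]
```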
forEach() runs a callback on each element purely for side effects and returns undefined — it cannot be chained. map() returns a new array of transformed values and is chainable. Neither supports early exit (break/continue); use for...of or a regular loop when you need to exit early.
const nums = [1, 2, 3];
// forEach: side effects, returns undefined
const r1 = nums.forEach(n => console.log(n)); // logs 1, 2, 3
console.log(r1); // undefined
// map: transformation, returns new array
const r2 = nums.map(n => n * 2);
console.log(r2); // [2, 4, 6]
// Anti-pattern: map for side effects
nums.map(n => console.log(n)); // works but wasteful — use forEach
// Anti-pattern: forEach to build array
const doubled = [];
nums.forEach(n => doubled.push(n * 2)); // use map instead
// Early exit: forEach can't; use for...of
for (const n of nums) {
if (n > 2) break; // exits early
}
Why it matters: Choosing the correct method signals understanding of intent and API design. Using map() for side effects (discarding the result) creates unnecessary intermediate arrays and indicates a misunderstanding of the method’s contract.
Real applications: Use forEach() for sending analytics events, updating DOM nodes, or writing to an external store. Use map() for building JSX element arrays in React, transforming API payload shapes, and chaining with filter() or reduce().
Common mistakes: Using map() when forEach() is appropriate (a semantics mismatch plus a wasted allocation), trying to break out of forEach() (there is no clean way; throwing an exception exits, but only as a hack), and building a result array inside forEach() instead of using map().
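When forEach-style iteration needs an early exit, one common idiom besides for...of is to lean on some()'s short-circuiting. A small sketch:

```javascript
const nums = [1, 2, 3, 4];
const visited = [];

// some() as a "breakable forEach": returning true stops iteration
nums.some(n => {
  visited.push(n);
  return n === 2; // stop once we have seen 2
});

console.log(visited); // [1, 2]
```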
includes() returns true if the array contains the specified value, using SameValueZero comparison — like strict equality (===) but correctly handles NaN (treats it as equal to itself). It accepts an optional fromIndex argument (negative values count from the end). includes() replaced the older indexOf(val) !== -1 pattern, which cannot detect NaN.
const arr = [1, 2, 3, NaN];
arr.includes(2); // true
arr.includes(4); // false
arr.includes(NaN); // true ← handles NaN correctly!
// indexOf FAILS with NaN
arr.indexOf(NaN); // -1 (cannot find it)
// fromIndex (negative = offset from end)
arr.includes(2, 2); // false (search from index 2)
arr.includes(1, -3); // false (search from length-3)
// Replaces verbose pattern
// Old: arr.indexOf(val) !== -1
// New: arr.includes(val)
// Works on strings too
"hello world".includes("world"); // true
["admin", "user"].includes(userRole); // permission check
Why it matters: includes() is the idiomatic, readable way to check membership. The NaN edge case distinguishes it from indexOf() — a classic interview topic that reveals whether candidates know the SameValueZero comparison algorithm.
Real applications: Feature flag checks (enabledFeatures.includes("darkMode")), role-based permission checks, multi-select UI (filter already-selected items), and whitelist validation.
Common mistakes: Continuing to use indexOf() !== -1 out of habit (less readable), using includes() when the actual index position is needed (use indexOf() or findIndex()), and not knowing that SameValueZero treats +0 and -0 as equal.
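The +0/-0 detail from the last point above, in two lines:

```javascript
// SameValueZero in action: NaN matches NaN, and +0 matches -0
const hasNaN = [NaN].includes(NaN); // true
const zeroEq = [-0].includes(0);    // true (+0 and -0 are treated as equal)
console.log(hasNaN, zeroEq);
```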
const arr = [1, 2, 3, 4, 5];
// slice(start, end) — NON-MUTATING, end is exclusive
arr.slice(1, 3); // [2, 3] (arr unchanged)
arr.slice(-2); // [4, 5] (last 2 elements)
arr.slice(); // [1,2,3,4,5] (shallow copy)
// splice(start, deleteCount, ...items) — MUTATES
const removed = arr.splice(1, 2, 10, 20);
console.log(removed); // [2, 3]
console.log(arr); // [1, 10, 20, 4, 5]
// Common splice operations
arr.splice(2, 0, 99); // insert at index 2 (delete 0)
arr.splice(-1, 1); // remove last element
// Immutable alternative (ES2023)
const result = arr.toSpliced(1, 1, 100); // arr unchanged
// React-safe delete by index
const newState = [...state.slice(0, idx), ...state.slice(idx + 1)];
Why it matters: splice vs slice is a classic interview question testing mutability knowledge. Accidentally calling splice() on React state is a common bug where the UI fails to re-render because the array reference stays the same.
Real applications: Removing an item by index (slice pattern for React state), creating sub-arrays for pagination display, using splice() to implement stack pop or queue dequeue when mutation is acceptable, and toSpliced() for immutable Redux reducer updates.
Common mistakes: Confusing the two by name (splice sounds like "split"), mutating React state arrays with splice() causing silent render failures, and forgetting that slice()’s end is exclusive while splice()’s second argument is a count (not an end index).
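The end-index vs count confusion above, side by side with the same second argument:

```javascript
const a = [1, 2, 3, 4, 5];
const sub = a.slice(1, 3);  // [2, 3] (3 is an END index, exclusive)
console.log(a);             // [1, 2, 3, 4, 5] (untouched)

const b = [1, 2, 3, 4, 5];
const removed = b.splice(1, 3); // [2, 3, 4] (3 is a COUNT of deletions)
console.log(b);                 // [1, 5] (mutated)
```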
const arr = [2, 3, 4];
arr.push(5); // returns 4 → arr = [2,3,4,5]
arr.push(6, 7); // returns 6 → arr = [2,3,4,5,6,7]
arr.pop(); // returns 7 → arr = [2,3,4,5,6]
arr.unshift(1); // returns 6 → arr = [1,2,3,4,5,6]
arr.shift(); // returns 1 → arr = [2,3,4,5,6]
// Stack (LIFO) — push/pop
const stack = [];
stack.push("a"); stack.push("b");
stack.pop(); // "b"
// Queue (FIFO) — push/shift
const queue = [];
queue.push("a"); queue.push("b");
queue.shift(); // "a"
// Immutable alternatives (React-safe):
const withEnd = [...arr, newItem]; // like push
const withStart = [newItem, ...arr]; // like unshift
const withoutEnd = arr.slice(0, -1); // like pop
const withoutStart = arr.slice(1); // like shift
Why it matters: These methods implement classic data structures — Stack (LIFO) via push/pop, Queue (FIFO) via push/shift. Their O(1) vs O(n) complexity difference is tested in performance interviews and matters when handling large arrays.
Real applications: Undo/redo stacks in text editors, browser history navigation, message queue processing, breadcrumb navigation (push/pop routes), and implementing LRU cache eviction using shift when the cache exceeds capacity.
Common mistakes: Using shift() in tight loops on large arrays (O(n) cost compounds), directly mutating React state with push() — the reference stays the same so React won’t re-render, and confusing push/pop (end) with unshift/shift (beginning) under interview pressure.
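For large queues, the O(n) cost of shift() can be avoided by tracking a head index instead of removing elements. A minimal sketch (`makeQueue` is a hypothetical helper, not a library API):

```javascript
function makeQueue() {
  const items = [];
  let head = 0; // index of the next element to dequeue
  return {
    enqueue(v) { items.push(v); }, // O(1)
    dequeue() {                    // O(1): no re-indexing of the array
      return head < items.length ? items[head++] : undefined;
    }
  };
}

const q = makeQueue();
q.enqueue("a");
q.enqueue("b");
const first = q.dequeue();  // "a"
const second = q.dequeue(); // "b"
const empty = q.dequeue();  // undefined
```

A production version would also periodically drop the consumed prefix so memory is reclaimed; this sketch omits that for brevity.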
const arr = [1, 2, 3, 2, 1];
arr.indexOf(2); // 1 (first occurrence)
arr.lastIndexOf(2); // 3 (last occurrence)
arr.indexOf(99); // -1 (not found)
// fromIndex argument
arr.indexOf(2, 2); // 3 (search from index 2 onward)
arr.lastIndexOf(2, 2); // 1 (search from index 2 backward)
// NaN edge case
[NaN].indexOf(NaN); // -1 ← can't find it! Use includes(NaN)
// Remove first occurrence by value
const idx = arr.indexOf(2);
if (idx !== -1) arr.splice(idx, 1);
// Find ALL occurrences
function findAll(arr, val) {
  const result = [];
  let i = arr.indexOf(val);
  while (i !== -1) {
    result.push(i);
    i = arr.indexOf(val, i + 1);
  }
  return result;
}
findAll([1, 2, 2, 3, 2], 2); // [1, 2, 4]
Why it matters: indexOf() is the correct method when you need the position of an item for subsequent operations like splice() or manual array surgery. lastIndexOf() is essential for finding duplicate entries or the most recent occurrence in event logs.
Real applications: Removing the first occurrence of a value, detecting duplicates by comparing indexOf vs lastIndexOf, locating the most recent event in a log, finding the last unread notification, and text search with position highlighting.
Common mistakes: Not checking for -1 before passing the result to splice() (deletes the wrong element), using indexOf() to find NaN (always returns -1 — use includes()), and confusing fromIndex direction: for lastIndexOf it’s the rightmost position to start searching backward, not the same as indexOf’s start position.
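The duplicate-detection trick from the applications above compares the two indices. A sketch (`hasDuplicate` is a hypothetical helper; note it cannot spot duplicate NaNs, since indexOf never finds NaN):

```javascript
// A value is duplicated iff its first and last positions differ
const hasDuplicate = arr =>
  arr.some(v => arr.indexOf(v) !== arr.lastIndexOf(v));

console.log(hasDuplicate([1, 2, 2, 3])); // true
console.log(hasDuplicate([1, 2, 3]));    // false
```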
const arr = [3, 1, 4, 1, 5];
// toSorted() — original preserved
const sorted = arr.toSorted((a, b) => a - b); // [1,1,3,4,5]
console.log(arr); // [3,1,4,1,5] — unchanged
// toReversed()
const rev = arr.toReversed(); // [5,1,4,1,3]
// toSpliced(start, deleteCount, ...items)
const spliced = arr.toSpliced(1, 2, 99); // [3,99,1,5]
// with(index, value)
const changed = arr.with(0, 100); // [100,1,4,1,5]
// React state update — before vs after ES2023
// Before: setItems(prev => [...prev].sort((a,b) => a-b));
// After: setItems(prev => prev.toSorted((a,b) => a-b));
// Chain immutable operations
arr.toSorted((a, b) => a - b).toReversed().with(0, 999);
// [999, 4, 3, 1, 1]
Why it matters: These methods eliminate the error-prone [...arr].sort() pattern and signal JavaScript’s shift toward immutability as the default. Interviewers ask about them to gauge awareness of modern ES features and their practical use in frameworks.
Real applications: React state updates (sort/reverse items without mutation), Redux reducers (immutable array transformations), undo/redo systems that store snapshots, and functional pipelines where chainable immutable operations produce cleaner code.
Common mistakes: Still writing [...arr].sort() in new code when toSorted() is available, not knowing about with() for single-element replacement (avoids the verbose [...arr.slice(0,i), newVal, ...arr.slice(i+1)] pattern), and deploying these without checking browser support in legacy environments.
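When browser support is uncertain, a small fallback keeps new code immutable either way. A sketch (`toSortedSafe` is a hypothetical helper):

```javascript
// Use the ES2023 method when present, otherwise copy-then-sort
const toSortedSafe = (arr, cmp) =>
  typeof arr.toSorted === "function"
    ? arr.toSorted(cmp)
    : [...arr].sort(cmp);

const xs = [3, 1, 2];
const sorted = toSortedSafe(xs, (a, b) => a - b);
console.log(sorted); // [1, 2, 3]
console.log(xs);     // [3, 1, 2] (unchanged on either path)
```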
const users = [
  { name: "Alice", age: 28, active: true },
  { name: "Bob", age: 35, active: false },
  { name: "Carol", age: 22, active: true },
  { name: "Dave", age: 40, active: true }
];
// Classic filter → map → sort chain
const names = users
  .filter(u => u.active)
  .map(u => ({ name: u.name, age: u.age }))
  .sort((a, b) => a.age - b.age)
  .map(u => `${u.name} (${u.age})`);
// ["Carol (22)", "Alice (28)", "Dave (40)"]
// Optimized: single-pass with reduce
const names2 = users.reduce((acc, u) => {
  if (u.active) acc.push(u.name);
  return acc;
}, []);
// Or flatMap for filter+map in one step
const names3 = users.flatMap(u =>
  u.active ? [u.name] : []
);
// Immutable operations chain (ES2023)
arr.toSorted((a, b) => a - b).toReversed().with(0, 0);
Why it matters: Chaining is a core pattern in JavaScript data processing and is heavily used in React components and data transformation utilities. Knowing when to use chaining vs reduce() vs a single loop shows real-world architectural thinking.
Real applications: Dashboard data pipelines (filter by date, map to chart points, sort by value), building dropdown options from raw API data, transforming response arrays for table display, and composing search-and-rank algorithms.
Common mistakes: Over-chaining when a single reduce() would be more efficient for large arrays, chaining mutating methods (sort(), splice()) mid-chain that accidentally modify state, and indexing into an empty result (e.g. taking result[0] after filter() matches nothing) without a guard.
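Chains themselves are safe on empty input, since each method just passes [] along; it is pulling an element out of the result that needs a guard. A sketch (the `team` data is hypothetical):

```javascript
const team = [{ name: "Alice", active: false }];

const result = team
  .filter(u => u.active)
  .map(u => u.name); // [] (each step passes the empty array along)

// Guard before indexing into the result
const first = result[0] ?? "nobody";
console.log(first); // "nobody"
```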