JavaScript

Functions & Scope

15 Questions

Function declarations are fully hoisted — the name and body become available from the top of their scope, allowing calls before the definition line — while function expressions (including arrow functions) assigned to const/let live in the Temporal Dead Zone until the assignment is reached, and those assigned to var are hoisted as undefined.
// Declaration — fully hoisted, callable before it appears
greet(); // "Hello"
function greet() { return "Hello"; }

// Expression with const — TDZ, ReferenceError before it
// sayHi(); // ReferenceError
const sayHi = function() { return "Hi"; };

// Named function expression — name only accessible inside body
const factorial = function fact(n) {
  return n <= 1 ? 1 : n * fact(n - 1);
};
// fact(3); // ReferenceError — not accessible outside

Why it matters: Hoisting is nearly universal in JavaScript interviews; misjudging whether a binding is fully hoisted, hoisted as undefined, or still in the TDZ causes subtle "not a function" or "not defined" bugs that appear after refactoring.

Real applications: React component files use function declarations for top-level components (callable anywhere in the module) and arrow function expressions for callbacks and event handlers, leveraging both hoisting and lexical this.

Common mistakes: Calling a var-assigned function expression before its definition and expecting a ReferenceError — you actually get a TypeError ("not a function") because var hoists the name as undefined, not as missing.

Arrow functions use a concise syntax and differ from regular functions in three critical ways: they inherit this lexically from the enclosing scope rather than binding their own, they have no arguments object (use rest params instead), and they cannot be used as constructors with new.
const obj = {
  name: "Alice",
  regular() { return this.name; },   // "Alice" — own this
  arrow: () => this.name             // NOT "Alice" — inherits this from the enclosing (module/global) scope
};

// No arguments object — use rest params
const sum = (...args) => args.reduce((a, b) => a + b, 0);

// Implicit return for single expressions
const double = x => x * 2;

// Object literal needs parentheses
const toObj = x => ({ value: x });

// Cannot be used as constructor
// new (() => {}); // TypeError

Why it matters: The lexical this binding is the most common source of this-related bugs in callbacks — interviewers probe this distinction in React, DOM event handler, and class method scenarios.

Real applications: React uses arrow functions for JSX callbacks (onClick={() => ...}) because lexical this eliminates the need for .bind(this) in class components; Vue and Angular have similar conventions.

Common mistakes: Using an arrow function as an object method and expecting this to refer to the object — it instead captures the outer (often module or global) scope, silently returning undefined for any property lookups.

An IIFE (Immediately Invoked Function Expression) is a function that is defined and executed in a single step, creating a private scope that prevents its variables from leaking into the surrounding namespace; before ES6 modules, IIFEs were the primary way to implement the module pattern.
// Classic IIFE — private scope
(function() {
  const secret = "hidden";
  console.log(secret); // "hidden"
})();
// console.log(secret); // ReferenceError

// IIFE with return value
const config = (function() {
  const env = "production";
  return { env, debug: env !== "production" };
})();

// Arrow function IIFE — async init pattern
(async () => {
  const data = await fetch("/api/init").then(r => r.json());
  console.log(data);
})();

Why it matters: Understanding IIFEs demonstrates scope knowledge and module pattern history — interviewers use them to test whether you understand why function(){}() fails syntactically while (function(){})() works.

Real applications: jQuery plugins and countless npm packages from the pre-bundler era wrapped their entire codebase in an IIFE; the async IIFE pattern is still used today as a top-level await alternative in older environments.

Common mistakes: Forgetting the wrapping parentheses and writing function(){}() — the parser treats function as a statement declaration, which cannot be anonymous and cannot be immediately invoked, throwing a SyntaxError.
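The failure mode and the standard workarounds can be sketched directly; the unary-operator forms (!, void) are equivalent tricks for forcing the parser into expression position, not just the classic wrapping parentheses:

```javascript
// function() { ... }(); // SyntaxError — parsed as a declaration,
//                       // which needs a name and can't be invoked inline

// Any construct that puts the function in expression position works:
const a = (function() { return 1; })(); // wrapping parens (classic)
const b = (function() { return 2; }()); // parens around the whole call
const c = !function() { return true; }(); // unary ! — note the value is negated
void function() { /* side effects only */ }(); // discards the return value
```

The `!` and `void` forms survive from minifier-era code; the parenthesized forms are the readable choice today.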

JavaScript uses lexical scoping: when a variable is referenced the engine searches the current scope first, then each enclosing scope outward to the global scope — this lookup path is the scope chain, and it is determined at write time (where functions are defined), not at call time.
const global = "G";

function outer() {
  const outerVar = "O";
  function inner() {
    const innerVar = "I";
    console.log(innerVar); // "I" — own scope
    console.log(outerVar); // "O" — outer scope
    console.log(global);   // "G" — global scope
  }
  inner();
}

// Variable shadowing — inner declaration hides outer
const x = "outer";
function demo() {
  const x = "inner"; // shadows outer x
  console.log(x);    // "inner"
}
console.log(x); // "outer" — unaffected

Why it matters: The scope chain is the engine behind closures, module privacy, and variable-shadowing bugs — interviewers test this to verify you understand how JavaScript resolves variables at runtime vs compile time.

Real applications: React hooks rely on lexical scope so useEffect callbacks correctly close over state values at render time; Node.js module scope prevents global namespace pollution by wrapping each file in a module function.

Common mistakes: Assuming scope chain is determined by where a function is called (dynamic scoping) rather than where it is defined — this leads to wrong predictions about which variable a closure will capture.
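A minimal sketch of the distinction, using hypothetical names (whichLabel, caller): the lookup resolves from where the function is defined, so a same-named variable at the call site is never consulted:

```javascript
const label = "module";

function whichLabel() {
  return label; // resolved where whichLabel is DEFINED, not where it is called
}

function caller() {
  const label = "caller"; // shadows nothing whichLabel can see
  return whichLabel();
}

caller(); // "module" — lexical scoping; dynamic scoping would yield "caller"
```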

Default parameters assign fallback values when an argument is undefined or omitted; they are evaluated at call time (not definition time), can reference earlier parameters in the same signature, and passing null explicitly does not trigger the default — only undefined does.
function greet(name = "World", prefix = "Hello") {
  return `${prefix}, ${name}!`;
}
greet();               // "Hello, World!"
greet("Alice");        // "Hello, Alice!"
greet(null);           // "Hello, null!" — null skips default
greet(undefined, "Hi"); // "Hi, World!"

// Later params can reference earlier ones
function box(w, h = w, d = w * h) {
  return w * h * d;
}
box(2); // 2 * 2 * 4 = 16

// Default from a lazy function call
function build(id = crypto.randomUUID()) {
  return id;
}

Why it matters: The null vs undefined distinction for triggering defaults is a classic interview gotcha; it also explains why the old || default pattern was unreliable for falsy values.

Real applications: Express.js middleware and React component utility functions use default parameters to make options optional without defensive boilerplate, replacing the error-prone param = param || defaultValue anti-pattern.

Common mistakes: Using param = param || defaultValue instead of a default parameter — this incorrectly treats 0, "", and false as missing values, overriding legitimate falsy arguments with the default.
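A short contrast of the three patterns, using hypothetical setVolume/pick helpers — the || pattern swallows legitimate falsy arguments, the default parameter fires only on undefined, and nullish coalescing (??) sits in between:

```javascript
function setVolume(level = 50) { return level; }      // default param
function setVolumeOld(level) {
  level = level || 50;                                // buggy legacy pattern
  return level;
}

setVolume(0);    // 0  — default only fires on undefined
setVolumeOld(0); // 50 — legitimate 0 overwritten by the default

// Nullish coalescing: falls back on null AND undefined,
// but preserves 0, "", and false
const pick = (level) => level ?? 50;
pick(0);    // 0
pick(null); // 50
```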

Function declarations are fully hoisted — both name and body are available from the top of their scope — while expressions assigned to let/const remain in the Temporal Dead Zone until their line is reached, and those assigned to var are hoisted as undefined, making them uncallable before the assignment.
// Declaration — fully hoisted, callable anywhere in scope
console.log(add(2, 3)); // 5
function add(a, b) { return a + b; }

// var expression — hoisted as undefined
console.log(typeof sub); // "undefined"
// sub(5, 2); // TypeError: sub is not a function
var sub = function(a, b) { return a - b; };

// const expression — TDZ
// mul(2, 3); // ReferenceError
const mul = (a, b) => a * b;

// Strict mode: block function declarations are block-scoped.
// (A "use strict" directive only counts at the top of a script
// or function body, so wrap the example for it to take effect.)
function strictDemo() {
  "use strict";
  if (true) {
    function blockFn() { return "block"; }
  }
  // blockFn(); // ReferenceError — blockFn stays inside the if-block
}
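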

Why it matters: The TypeError vs ReferenceError distinction when calling a var expression before definition is a classic interview differentiator — it proves you understand the difference between "hoisted as undefined" and "not hoisted at all".

Real applications: Babel and TypeScript transpilers rely on correct hoisting semantics when targeting ES5, and misconfigured transpilation can expose hoisting bugs in production that only appear in certain execution orders.

Common mistakes: Expecting a ReferenceError when calling a var function expression before its definition — the variable is hoisted as undefined, so the actual error is TypeError: "sub is not a function".

JavaScript treats functions as first-class citizens — they can be stored in variables, passed as arguments, and returned from other functions; a higher-order function (HOF) is any function that takes a function as an argument or returns one, enabling the powerful abstractions that underpin map, filter, reduce, and most frameworks.
// Functions stored and passed as values
const ops = { add: (a, b) => a + b, mul: (a, b) => a * b };
function apply(fn, a, b) { return fn(a, b); }
apply(ops.add, 3, 4); // 7

// HOF that returns a function (factory pattern)
function multiplier(factor) {
  return (n) => n * factor;
}
const triple = multiplier(3);
triple(5); // 15

// Built-in HOFs
[1, 2, 3].map(x => x * 2);              // [2, 4, 6]
[1, 2, 3].filter(x => x > 1);           // [2, 3]
[1, 2, 3].reduce((acc, x) => acc + x, 0); // 6

Why it matters: First-class functions are the foundation of JavaScript's functional style — every major framework depends on them, and interviewers test whether you can compose functions rather than repeat logic with imperative loops.

Real applications: React uses Array.map() for rendering lists, Redux chains middleware as HOFs, and Express.js middleware pipeline passes the next function as a first-class callback — all of which require functions to be first-class values.

Common mistakes: Invoking the callback immediately when passing it to a HOF — writing .map(transform()) instead of .map(transform), causing the function's return value (not the function itself) to be passed as the callback.
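The mistake and the correct pattern side by side, using hypothetical transform/scaleBy helpers — when the callback needs configuration, the fix is a function that returns a function, not an immediate call:

```javascript
const transform = (x) => x * 10;

[1, 2, 3].map(transform);     // [10, 20, 30] — pass the function itself

// [1, 2, 3].map(transform()); // broken: transform() runs immediately
// with x undefined (returning NaN), and map receives that value
// instead of a callback, throwing a TypeError

// When the callback needs configuration, return a function:
const scaleBy = (factor) => (x) => x * factor;
[1, 2, 3].map(scaleBy(10));   // [10, 20, 30]
```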

A pure function always returns the same output for the same inputs and produces no side effects — it does not mutate external state, modify its arguments, trigger I/O, write to the DOM, or produce non-deterministic results; this predictability makes pure functions trivially testable, memoizable, and safe to parallelize.
// Pure — deterministic, no side effects
function add(a, b) { return a + b; }

// Impure — modifies external state
let count = 0;
function increment() { return ++count; }

// Impure — mutates the input array
function addItem(arr, item) {
  arr.push(item); return arr;     // mutates original!
}

// Pure version — returns new array
function addItemPure(arr, item) {
  return [...arr, item];          // no mutation
}

// Impure — non-deterministic
const now = () => Date.now();

Why it matters: Purity is prerequisite knowledge for understanding memoization, React's rendering model, and Redux reducers — interviewers use it to gauge whether you think functionally or imperatively.

Real applications: Redux mandates pure reducers; React expects pure render logic so it can safely batch and defer re-renders; Lodash utility functions are all pure, ensuring they can be composed without hidden state changes.

Common mistakes: Mutating an object or array parameter inside a function and assuming it is pure because no external variable is referenced — modifying input arguments is still a side effect that breaks both purity and referential transparency.

Currying transforms a multi-argument function into a chain of unary functions, so f(a, b, c) becomes f(a)(b)(c) — enabling partial application, where some arguments are fixed upfront to produce specialized, reusable functions without repeating the shared logic.
// Manual curried function
function curriedAdd(a) {
  return (b) => a + b;
}
const add10 = curriedAdd(10);
add10(5);  // 15
add10(20); // 30

// Generic curry utility
function curry(fn) {
  return function curried(...args) {
    if (args.length >= fn.length) return fn(...args);
    return (...more) => curried(...args, ...more);
  };
}

const sum3 = curry((a, b, c) => a + b + c);
sum3(1)(2)(3);  // 6
sum3(1, 2)(3);  // 6

// Practical reusable validator
const hasMinLength = curry((min, str) => str.length >= min);
const isValidPassword = hasMinLength(8);
isValidPassword("secret"); // false

Why it matters: Currying is a staple functional programming interview topic — it tests closures, partial application, and function arity simultaneously, and the ability to implement a generic curry utility is a common senior-level question.

Real applications: Lodash and Ramda auto-curry all their functions; React event handlers often curry configuration — onClick={handleClick(item.id)} returns an event handler pre-loaded with the item ID.

Common mistakes: Confusing currying with partial application — currying always produces strictly unary functions one at a time, while partial application fixes some arguments but can return a function that accepts multiple remaining arguments at once.
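The distinction in code — currying takes exactly one argument per call, while partial application (here via the built-in bind) fixes some arguments and accepts all the rest at once:

```javascript
const add3 = (a, b, c) => a + b + c;

// Currying: strictly one argument per call
const curried = (a) => (b) => (c) => a + b + c;
curried(1)(2)(3); // 6

// Partial application: fix the first arg, take the rest together
const addFrom1 = add3.bind(null, 1);
addFrom1(2, 3); // 6 — two remaining arguments accepted in one call
```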

The arguments object is an array-like (not a real Array) available in non-arrow functions containing all passed arguments; rest parameters (...args) introduced in ES6 collect remaining arguments into a true Array instance, work inside arrow functions, and support all array methods directly without conversion.
// arguments — array-like, lacks Array methods
function oldSum() {
  console.log(Array.isArray(arguments)); // false
  return Array.from(arguments).reduce((a, b) => a + b, 0);
}
oldSum(1, 2, 3); // 6

// Rest parameters — true Array
const newSum = (...nums) => nums.reduce((a, b) => a + b, 0);
newSum(1, 2, 3); // 6

// Named params + rest
function log(level, ...msgs) {
  msgs.forEach(m => console.log(`[${level}] ${m}`));
}
log("INFO", "start", "done");

// Arrows have no arguments of their own — they inherit the
// enclosing regular function's, or none at all at the top level
const arrow = () => {
  // arguments; // ReferenceError here (no enclosing regular function)
};

Why it matters: Interviewers ask this to confirm you know why arguments fails in arrow functions and why rest parameters are always preferred — it demonstrates awareness of ES6 improvements and arrow function semantics.

Real applications: Express.js middleware accepts variadic arguments via rest params; legacy AngularJS dependency injection used arguments for function annotation reflection — understanding both is essential when maintaining mixed old/new codebases.

Common mistakes: Calling array methods like .map() directly on arguments without first converting it with Array.from() or spread — since arguments is array-like but not an Array instance, these methods don't exist on it.

call(), apply(), and bind() all explicitly set a function's this value; call passes arguments individually and invokes immediately, apply passes arguments as an array and invokes immediately, while bind returns a new permanently-bound function without invoking it — mnemonic: Commas, Array, Bound.
function greet(greeting, punct) {
  return `${greeting}, ${this.name}${punct}`;
}
const user = { name: "Alice" };

// call — individual args, executes immediately
greet.call(user, "Hello", "!"); // "Hello, Alice!"

// apply — array args, executes immediately
greet.apply(user, ["Hi", "?"]); // "Hi, Alice?"

// bind — returns new function, does NOT execute
const boundGreet = greet.bind(user, "Hey");
boundGreet("."); // "Hey, Alice."

// Method borrowing with call
const arrLike = { 0: "a", 1: "b", length: 2 };
Array.prototype.slice.call(arrLike); // ["a", "b"]

// Spread now replaces most apply use cases
Math.max(...[1, 5, 3]); // 5

Why it matters: These three methods appear in almost every senior JavaScript interview — mastering them proves deep understanding of this binding, function context, and the prototype system.

Real applications: React class components still bind event handlers in the constructor with .bind(this); Lodash internal utilities use call for prototype method borrowing; Node.js core modules use apply for variadic function forwarding.

Common mistakes: Using bind inside a JSX attribute or render method — this creates a new function reference on every render, defeating React.memo and shouldComponentUpdate optimizations; instead, bind once in the constructor or use arrow functions.
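A framework-free sketch of why repeated binding defeats memoization: every bind call produces a distinct function object, so shallow equality checks see a "new" prop each time even though the behavior is identical. (handleClick and item are hypothetical names.)

```javascript
function handleClick() { return this.id; }
const item = { id: 42 };

// Every bind call creates a brand-new function object:
const h1 = handleClick.bind(item);
const h2 = handleClick.bind(item);
h1 === h2;     // false — distinct references, so a shallow prop
               // comparison (React.memo-style) reports a change
h1() === h2(); // true — identical behavior, different identity

// Binding once and reusing the result keeps the reference stable:
const stable = handleClick.bind(item);
stable === stable; // true
```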

Function composition combines functions so the output of one becomes the input of the next; compose applies functions right-to-left while pipe applies them left-to-right, both returning a new function that performs the entire chain — enabling complex transformations built from small, single-responsibility pure functions.
// pipe — left to right (most readable)
const pipe = (...fns) => (x) => fns.reduce((v, fn) => fn(v), x);

// compose — right to left
const compose = (...fns) => (x) => fns.reduceRight((v, fn) => fn(v), x);

const double  = x => x * 2;
const addOne  = x => x + 1;
const square  = x => x * x;

// pipe: double(3)=6, addOne(6)=7, square(7)=49
const transform = pipe(double, addOne, square);
transform(3); // 49

// Practical data transformation pipeline
const processUser = pipe(
  u => ({ ...u, name: u.name.trim() }),
  u => ({ ...u, email: u.email.toLowerCase() }),
  u => ({ ...u, role: u.role || "user" })
);

Why it matters: Composition is a senior-level functional programming topic that demonstrates ability to build complex transformations from minimal, testable parts — directly relevant to middleware patterns, data pipelines, and declarative UI frameworks.

Real applications: Redux's compose chains store enhancers; RxJS pipeable operators use an identical pipe pattern for observable transformations; Ramda and Lodash/fp are built entirely around auto-curried composable functions.

Common mistakes: Confusing the direction — compose(f, g)(x) calls g first (right to left), not f; many developers memorize "compose reads right to left" but still write the functions in the wrong order when building a transformation chain.
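A direct check of the direction, with compose and the helpers restated so the sketch stands alone — the rightmost function always runs first:

```javascript
const compose = (...fns) => (x) => fns.reduceRight((v, fn) => fn(v), x);

const double = (x) => x * 2;
const addOne = (x) => x + 1;

// compose(f, g)(x) === f(g(x)) — the RIGHTMOST function runs first
compose(double, addOne)(3); // double(addOne(3)) = double(4) = 8
compose(addOne, double)(3); // addOne(double(3)) = addOne(6) = 7
```

Swapping the argument order changes the result (8 vs 7), which is exactly the bug described above.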

Memoization is a caching optimization for pure functions that stores return values keyed by their arguments, so repeated calls with the same inputs skip all computation and return from the cache instantly — trading memory for speed, and turning, for example, the exponential naive recursive Fibonacci into a linear computation.
// Generic memoize utility
function memoize(fn) {
  const cache = new Map();
  return function(...args) {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key);
    const result = fn.apply(this, args);
    cache.set(key, result);
    return result;
  };
}

// Without memoization — O(2^n) exponential
function fib(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

// With memoization — O(n) linear
const memoFib = memoize(function fib(n) {
  if (n <= 1) return n;
  return memoFib(n - 1) + memoFib(n - 2);
});
memoFib(50); // 12586269025 — instant

Why it matters: Memoization combines closures, pure functions, and performance optimization in one concept — a rich interview topic that directly underpins React's useMemo, useCallback, and React.memo.

Real applications: React's useMemo prevents expensive recalculations on re-renders; Reselect memoizes Redux selectors to avoid redundant state derivations; Lodash's _.memoize caches expensive API response transformations.

Common mistakes: Memoizing impure functions or using JSON.stringify on arguments containing objects with circular references — this throws an error, and even for deep but non-circular objects it degrades performance enough to negate the caching benefit.
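One hedged alternative to the JSON.stringify key for unary functions taking object arguments: key the cache by reference with a WeakMap, which sidesteps serialization entirely (no circular-reference throw, no deep-stringify cost) and lets unused keys be garbage-collected. memoizeByRef is a hypothetical name for this sketch.

```javascript
function memoizeByRef(fn) {
  const cache = new WeakMap(); // keys are object references, GC-friendly
  return function (obj) {
    if (cache.has(obj)) return cache.get(obj);
    const result = fn(obj);
    cache.set(obj, result);
    return result;
  };
}

let calls = 0;
const area = memoizeByRef((rect) => { calls++; return rect.w * rect.h; });

const r = { w: 3, h: 4 };
area(r); // 12 — computed (calls: 1)
area(r); // 12 — cached, same reference (calls still 1)
area({ w: 3, h: 4 }); // 12 — equal shape but NEW reference, recomputed (calls: 2)
```

The trade-off: reference keying misses structurally-equal-but-distinct objects, so it suits caches where the same object instance is passed repeatedly.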

Function scope (var) makes a variable accessible anywhere within its containing function, ignoring block boundaries like if and for. Block scope (let/const) confines a variable to the nearest enclosing {} block. Block scope is the modern standard — it reduces accidental leaks and aligns with how most developers intuitively expect scope to work.
function demo() {
  if (true) {
    var x = 10;   // function-scoped — leaks out of block
    let y = 20;   // block-scoped — stays in block
  }
  console.log(x); // 10 — var ignores block
  // console.log(y); // ReferenceError — let stays in if-block
}

// Loop scoping
for (var i = 0; i < 3; i++) {}
console.log(i); // 3 — var leaks out

for (let j = 0; j < 3; j++) {}
// console.log(j); // ReferenceError

Why it matters: var's function-scope behavior is one of JavaScript's most error-prone legacy features. Understanding function vs block scope is foundational for debugging variable access issues and is tested in virtually every JavaScript interview.

Real applications: let and const are used exclusively in modern codebases. Block scoping enables declaring loop variables that don't pollute outer scope, and allows if/else branches to define differently-named variables in each branch without conflicts.

Common mistakes: Expecting var to be block-scoped (it is not), declaring variables with var inside if blocks expecting them to be confined (they are not), and using var in loops that create closures (all share the same variable — use let instead).

When functions are created inside a var loop, all closures share the same single variable. By the time any callback executes (e.g. after setTimeout), the loop has finished and i is at its final value. The fix is using let (each iteration gets its own block scope), an IIFE, or forEach.
// Problem: var — all closures share same i
for (var i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 100);
}
// Output: 3 3 3

// Fix 1: let — each iteration has its own i
for (let i = 0; i < 3; i++) {
  setTimeout(() => console.log(i), 100);
}
// Output: 0 1 2

// Fix 2: IIFE — capture current value manually
for (var i = 0; i < 3; i++) {
  (function(j) {
    setTimeout(() => console.log(j), 100);
  })(i);
}
// Output: 0 1 2

Why it matters: This is one of the most common JavaScript interview questions — it tests closure understanding, scope semantics, and async execution order. Explaining WHY it happens (single shared binding vs per-iteration binding) is what separates strong candidates.

Real applications: Event handler attachment in loops (clicking on dynamically generated list items), async operations spawned inside loops (API calls, timers, intervals), and React's useEffect in list render scenarios all encounter this pattern.

Common mistakes: Using var in loops that generate closures (always use let), trying to fix it by copying i into a second var inside the loop body (var is function-scoped, so every iteration still shares that copy), and assuming the fix is simply switching to arrow functions (the issue is var's scope, not the function type).
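The text above lists forEach as a third fix but does not show it; here is a minimal sketch using stored closures instead of timers so the capture is visible synchronously — the callback parameter is a fresh binding on every iteration:

```javascript
// Fix 3: forEach — each invocation of the callback gets its own
// parameter binding, so the closures never share a variable
const callbacks = [];
[0, 1, 2].forEach((i) => {
  callbacks.push(() => i); // each arrow closes over its own i
});
callbacks.map((fn) => fn()); // [0, 1, 2] — no shared binding
```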