React

Performance Optimization

14 Questions

React.memo is a higher-order component that wraps a functional component and memoizes its rendered output. By default, React re-renders every child component whenever the parent re-renders, even if the child's props haven't changed. React.memo performs a shallow comparison of the previous and current props before deciding whether to re-render. If the props are the same by reference, it skips the render entirely and reuses the last output.

const ExpensiveList = React.memo(function List({ items }) {
  return items.map(item => <li key={item.id}>{item.name}</li>);
});

Why it matters: React re-renders a child every time its parent re-renders. React.memo skips the re-render if the props didn't change.

Real applications: Memoizing list items, chart components, or any expensive child that receives stable props from a frequently-updating parent.

Common mistakes: Applying React.memo without also stabilizing function props with useCallback — the component still re-renders because the function reference changes every render.
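This interaction can be seen without rendering anything. Below is a simplified stand-in for React.memo's shallow prop comparison (not React's actual code), showing why a handler recreated on every render defeats the memo while a stable reference passes it:

```javascript
// Simplified model of a shallow prop comparison, like the one React.memo
// performs. Not React's real implementation; for illustration only.
function shallowEqual(prev, next) {
  const keys = Object.keys(next);
  return keys.length === Object.keys(prev).length &&
    keys.every(k => Object.is(prev[k], next[k]));
}

const items = [{ id: 1, name: 'a' }];

// Two consecutive "renders" of the same parent: items keeps its reference,
// but an inline handler is a brand-new function each time.
const prevProps = { items, onDelete: (id) => id };
const nextProps = { items, onDelete: (id) => id };
console.log(shallowEqual(prevProps, nextProps)); // false: onDelete changed reference

// A stabilized handler (what useCallback provides) passes the check.
const stableDelete = (id) => id;
console.log(shallowEqual(
  { items, onDelete: stableDelete },
  { items, onDelete: stableDelete }
)); // true
```

This is why React.memo and useCallback are usually applied together: the memo only helps if every prop, including function props, keeps a stable reference between renders.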

useMemo is a hook that caches the result of an expensive calculation between renders. It accepts a factory function and a dependency array — the function only re-runs when one of the dependencies changes. Without useMemo, every re-render repeats the calculation even if the inputs are identical. This is especially useful for operations like sorting, filtering, or aggregating large datasets inside a component.

const sortedItems = useMemo(() => {
  // copy before sorting — Array.prototype.sort mutates in place
  return [...items].sort((a, b) => a.name.localeCompare(b.name));
}, [items]);

Only recalculates when items changes.

Why it matters: useMemo prevents expensive calculations from running on every re-render when the inputs haven't changed.

Real applications: Sorting or filtering a large list, computing aggregated stats for a dashboard, deriving formatted data from raw API results.

Common mistakes: Wrapping cheap operations like simple string formatting in useMemo — the memoization overhead costs more than the computation saved.

useCallback is a hook that returns a memoized version of a callback function so its reference stays stable across renders. In React, every time a component re-renders, all functions defined inside it are recreated as new object references. When these functions are passed as props to child components, the children see them as changed and re-render even if nothing meaningful changed. useCallback stabilizes the function reference so memoized children only re-render when the callback's dependencies actually change.

const handleDelete = useCallback((id) => {
  setItems(prev => prev.filter(item => item.id !== id));
}, []);

Why it matters: Functions are recreated on every render. Without useCallback, a new function reference causes memoized children to re-render unnecessarily.

Real applications: Event handlers passed to memoized list items, fetch functions passed as dependencies in useEffect, callbacks passed to custom hooks.

Common mistakes: Using useCallback on a handler that isn't passed to a memoized child — the stabilization is wasted because the child re-renders anyway.

Memoization is not free — it adds memory overhead to store the cached value and a comparison cost on every render. For cheap operations and components that render quickly, the bookkeeping cost of memoization can actually exceed the cost of the work you are trying to avoid. Before adding useMemo, useCallback, or React.memo, always verify there is a real, measurable performance problem using the React DevTools Profiler. Premature optimization makes code harder to read without delivering any real user benefit.

Skip memoization for:

  • Simple, fast computations (memoization has overhead)
  • Values that change on every render anyway
  • Components that are cheap to render
  • Cases where it makes code harder to read for minimal gain

Profile first, optimize second.
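The "bookkeeping" cost can be made concrete with a minimal model of what useMemo does internally: cache the last dependency array and result, and compare dependencies on every call. This is a simplified sketch, not React's actual implementation:

```javascript
// Minimal model of useMemo's caching (simplified; not React's real code).
function makeMemo() {
  let lastDeps;
  let lastValue;
  return (factory, deps) => {
    // This comparison runs on every call, whether or not the cache hits
    const same = lastDeps !== undefined &&
      deps.length === lastDeps.length &&
      deps.every((d, i) => Object.is(d, lastDeps[i]));
    if (!same) {
      lastDeps = deps;
      lastValue = factory(); // recompute only when deps changed
    }
    return lastValue;
  };
}

const memo = makeMemo();
memo(() => 2 + 3, [2, 3]); // computes 5
memo(() => 2 + 3, [2, 3]); // cache hit, but comparing [2, 3] cost about as much as adding them
```

For a two-number sum, the dependency comparison is roughly as expensive as the computation itself, which is exactly why wrapping trivial work in useMemo is a net loss.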

Why it matters: Unnecessary memoization adds code complexity, increases memory usage, and can actually slow things down for trivial cases.

Real applications: Adding a two-number sum to useMemo is wasteful. The calculation is cheaper than the memoization bookkeeping.

Common mistakes: Applying memoization everywhere "just in case" — measure first with React DevTools Profiler before adding any useMemo or useCallback.

The React DevTools Profiler is a tab in the React DevTools browser extension that records component render timings across your entire app. It shows which components rendered, why they rendered (which props or state changed), and exactly how long each render took in milliseconds. The React.Profiler component exposes the same timing data programmatically, letting you log or alert whenever a specific component renders too slowly (note that in production builds this requires the special profiling bundle of react-dom).

<Profiler id="sidebar" onRender={(id, phase, actualDuration) => {
  console.log(id, phase, actualDuration);
}}>
  <Sidebar />
</Profiler>

Why it matters: Without profiling, you're guessing where the performance problem is. The Profiler shows you exactly which components are slow and how often they render.

Real applications: Identifying slow renders in a long list, finding a context consuming component that re-renders too often, spotting a component that renders 50x on a single interaction.

Common mistakes: Profiling in development mode — always profile a production build for accurate measurements, as development adds extra overhead.

Re-renders are triggered by state changes, prop changes, or parent component re-renders. While a single re-render is fast, too many unnecessary re-renders in components that do expensive work can cause sluggish UI and janky interactions. Understanding the exact triggers lets you fix the problem at the source rather than blindly wrapping components in React.memo. The most common culprits are inline object literals, inline function definitions, and unmemoized context values that create new references on every render.

  • Parent re-renders (all children re-render)
  • Creating new objects/arrays in render (breaks shallow comparison)
  • Inline function definitions as props
  • Context value changes (all consumers re-render)
  • State updates that don't change the value
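Two of the triggers above reduce to reference identity. React bails out of a state update when the new value is Object.is-equal to the old one, and shallow prop comparison uses the same check per prop, so a fresh object literal always reads as "changed". A runnable illustration:

```javascript
// Same primitive value: React would bail out of this state update
console.log(Object.is(5, 5)); // true

// Structurally identical objects are still different references,
// so an inline literal like value={{ x: 1 }} always reads as changed
console.log(Object.is({ x: 1 }, { x: 1 })); // false

// Hoisting (or memoizing) the object gives a stable reference
const style = { color: 'red' };
console.log(Object.is(style, style)); // true
```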

Why it matters: Knowing the root causes helps you fix re-render problems at the source instead of just wrapping things in memo blindly.

Real applications: Objects or arrays created inline in JSX, event handlers defined inline, context values not memoized — all cause wasted re-renders.

Common mistakes: Blaming React for slowness when the real issue is passing a new object literal every render: value={{ x, y }} creates a new object on every render.

When a context value changes, every component that calls useContext with that context will re-render, regardless of whether the specific part it uses actually changed. The most common mistake is creating the value object inline in JSX: value={{ user, theme }} creates a new object reference on every parent render, so all consumers re-render even when user and theme are unchanged. Wrapping the value in useMemo ensures the object reference only changes when its actual contents change, preventing unnecessary consumer re-renders.

const value = useMemo(() => ({
  user, theme
}), [user, theme]);

return <AppContext.Provider value={value}>...</AppContext.Provider>;

Why it matters: Every context consumer re-renders when the context value changes. An unmemoized value object triggers all consumers on every parent render.

Real applications: AuthContext with user objects, CartContext with item arrays — both need memoized values to avoid unnecessary re-renders.

Common mistakes: Writing value={{ user, theme }} directly in the Provider without useMemo — creates a new object every render and re-renders all consumers.

When you render a large list all at once, the browser creates DOM nodes for every single item — including thousands that are scrolled far out of view. This causes slow initial renders, high memory usage, and poor scroll performance. Virtualization solves this by rendering only the items currently visible in the viewport plus a small buffer above and below. As the user scrolls, off-screen items are removed and new ones are added, keeping the DOM size constant regardless of how many total items exist.

import { FixedSizeList } from 'react-window';

<FixedSizeList height={400} width="100%" itemCount={10000} itemSize={35}>
  {({ index, style }) => (
    <div style={style}>Row {index}</div>
  )}
</FixedSizeList>

Why it matters: Rendering 10,000 DOM nodes at once is slow and uses a lot of memory. Virtualization renders only the visible rows, making huge lists fast.

Real applications: Chat message history, spreadsheet rows, infinite product catalog, stock ticker data grids.

Common mistakes: Trying to virtualize items without fixed heights — variable height virtualization requires VariableSizeList or a library like react-virtuoso.

Images are typically the largest assets on a web page and one of the biggest causes of slow load times. Loading all images upfront — even those far below the fold — wastes bandwidth and delays the time-to-interactive. The loading="lazy" attribute tells the browser not to fetch an image until it is close to the viewport. Using modern formats like WebP reduces file sizes by 25–50% compared to JPEG or PNG, and specifying explicit width and height prevents layout shifts during loading.

<img
  src="photo.webp"
  loading="lazy"
  width={400}
  height={300}
  alt="Description"
/>

Why it matters: Images are often the biggest contributor to page weight. Lazy loading and modern formats dramatically reduce load time.

Real applications: Product image galleries, blog post thumbnails, user avatar lists — all benefit from lazy loading and WebP formats.

Common mistakes: Setting loading="lazy" on above-the-fold images that users see immediately — this delays loading of critical images.

Code splitting breaks your JavaScript bundle into smaller chunks that are loaded on demand rather than all at once. Without it, the browser must download, parse, and execute your entire application bundle before the user can interact with anything. Route-level splitting is the most impactful approach — each page only loads the code it needs, not the code for every other page in the app. Library-level splitting ensures that heavy third-party dependencies like charting libraries or rich-text editors don't inflate the initial bundle for users who may never visit the pages that use them.

Split at: route level (most common), component level (heavy components), library level (import only what you need). Use dynamic import(), React.lazy, and analyze bundle with source-map-explorer.
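Route-level splitting is built on dynamic import(), which loads a module at the moment the call runs rather than at startup; React.lazy simply wraps a dynamic import that resolves to a component. The sketch below uses a Node built-in module in place of an app page so it stays runnable; the "analytics" path name is illustrative:

```javascript
// Dynamic import(): the primitive behind React.lazy and route chunks.
// The module is fetched when this line executes, not in the initial bundle.
async function loadAnalyticsPath() {
  const { join } = await import('node:path'); // loaded on demand, like a route chunk
  return join('pages', 'analytics.js');
}

loadAnalyticsPath().then(p => console.log(p)); // logs a platform-joined path
```

In an app, the same mechanism looks like `const Analytics = React.lazy(() => import('./pages/Analytics'))`, rendered inside a Suspense boundary with a loading fallback.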

Why it matters: A large JavaScript bundle delays the first meaningful paint. Splitting it ensures users only download the code needed for the current page.

Real applications: Lazy loading the checkout flow, the admin dashboard, or any route that only some users visit.

Common mistakes: Code-splitting at too granular a level (individual small components) — this creates too many small network requests and hurts performance instead.

Measuring render time is the essential first step before applying any optimization. React DevTools Profiler records a flamegraph of every component that rendered during an interaction, including how long each one took and whether it was caused by a prop change, state change, or context change. The built-in React.Profiler component lets you add threshold-based alerts directly in code so slow renders are caught automatically. Any component consistently taking over 16ms is a candidate for optimization since that is the budget to maintain 60 frames per second.

  1. Open React DevTools → Profiler tab
  2. Click Record → interact with your app → Stop
  3. Identify components with high actual duration
  4. Wrap expensive components in React.memo
  5. Memoize props passed to them with useMemo / useCallback

// Also use the Profiler API in code
<Profiler id="ProductList" onRender={(id, phase, duration) => {
  if (duration > 16) console.warn(id, 'slow render:', duration, 'ms');
}}>
  <ProductList />
</Profiler>

Target renders over 16ms — that's the threshold to maintain 60fps. Below that React is unlikely to cause noticeable jank.

Why it matters: Without measuring actual render times, you can't know which components are actually slow — optimization without data is guesswork.

Real applications: Identifying a slow product list, finding a chart component that takes 80ms to render, catching a context consumer that re-renders 30 times per second.

Common mistakes: Using development builds for profiling — dev mode adds extra React checks that make everything look slower than it really is in production.

React.memo and useMemo are both memoization tools but they operate at different levels. React.memo wraps an entire component and prevents it from re-rendering when its props are unchanged — it is applied at the component boundary. useMemo is a hook used inside a component to cache the result of a specific computation, so expensive derived values are not recalculated on every render. The two tools complement each other: you typically apply React.memo to a child component and then use useMemo and useCallback to stabilize the props you pass into it.

// React.memo — prevents re-render of the whole component
const Avatar = React.memo(({ user }) => (
  <img src={user.avatar} alt={user.name} />
));

// useMemo — prevents re-running an expensive computation inside a component
function Dashboard({ logs }) {
  const stats = useMemo(() => computeStats(logs), [logs]);
  return <StatsChart data={stats} />;
}

They complement each other: use React.memo on child components and useMemo/useCallback to stabilize the props passed to them.

Why it matters: Knowing which tool does what prevents misuse — one memoizes a component, the other memoizes a value computed inside a component.

Real applications: React.memo on a heavy chart component; useMemo to compute the data passed to it; useCallback to stabilize the click handler passed to it.

Common mistakes: Thinking React.memo will prevent re-renders caused by new object/function prop references — it won't without also stabilizing those values.

Every kilobyte of JavaScript has to be downloaded, parsed, and executed before your app becomes interactive. A large bundle directly increases load time, especially on mobile devices with slower CPUs and connections. The most impactful wins come from tree-shaking unused code, replacing large libraries with smaller alternatives, and code-splitting heavy pages into separate chunks. Running a bundle analyzer first tells you exactly where the weight is coming from so you can target the biggest offenders rather than guessing.

// ❌ Imports the entire library (~70 kB minified)
import _ from 'lodash';
const arr = _.chunk([1, 2, 3, 4], 2);

// ✅ Per-method import: bundles only the chunk module (a few kB)
import chunk from 'lodash/chunk';

// ✅ Code-split heavy pages
const Analytics = React.lazy(() => import('./pages/Analytics'));

// ✅ Use date-fns instead of moment.js (5x smaller)
import { format } from 'date-fns';

// ✅ Analyze with:
// npx source-map-explorer build/static/js/*.js
// or vite-bundle-visualizer

Run a bundle analysis before optimizing — you might discover one dependency accounts for most of the weight.

Why it matters: A large bundle means users wait longer before they can interact with your app — especially on slow mobile connections.

Real applications: Switching from Moment.js to date-fns saves hundreds of KB; tree-shaking lodash removes unused utility functions.

Common mistakes: Importing an entire library instead of just the function you need: import _ from 'lodash' vs import debounce from 'lodash/debounce'.

Infinite scroll loads more items automatically as the user scrolls toward the bottom of the list, avoiding the need to fetch all data upfront. The recommended approach uses the browser's native IntersectionObserver API to watch a small invisible sentinel element placed at the end of the list — when it enters the viewport, the next page is fetched. This avoids the scroll event listener approach which fires extremely frequently and requires debouncing. For very long lists, combining infinite scroll with virtualization (react-window) prevents memory issues from accumulating thousands of DOM nodes.

function InfiniteList({ fetchNextPage, hasNextPage }) {
  const sentinelRef = useRef(null);

  useEffect(() => {
    const observer = new IntersectionObserver(
      (entries) => {
        if (entries[0].isIntersecting && hasNextPage) {
          fetchNextPage();
        }
      },
      { threshold: 0.1 }
    );
    if (sentinelRef.current) observer.observe(sentinelRef.current);
    return () => observer.disconnect();
  }, [fetchNextPage, hasNextPage]);

  return (
    <div>
      {/* render items */}
      <div ref={sentinelRef} style={{ height: 1 }} /> {/* sentinel */}
    </div>
  );
}

For very long lists, combine infinite scroll with virtualization (react-window) to avoid memory issues from thousands of DOM nodes.

Why it matters: Infinite scroll loads more items as the user scrolls, avoiding the slow initial load of fetching everything at once.

Real applications: Social media feeds, product search results, notification lists, file browser with thousands of files.

Common mistakes: Loading all items upfront and using infinite scroll only for display — the DOM still has thousands of nodes if you don't pair it with virtualization.

© 2026 InterviewPrep — React Interview Questions