// Execution order:
console.log("1"); // Sync — call stack
setTimeout(() => console.log("2"), 0); // Macrotask queue
Promise.resolve().then(() => console.log("3")); // Microtask queue
console.log("4"); // Sync — call stack
// Output: 1, 4, 3, 2
Synchronous code runs first on the call stack, then all microtasks are drained, and finally one macrotask is picked up. This priority order is why Promise callbacks always execute before setTimeout callbacks.
Why it matters: The event loop is the core mechanism that enables JavaScript to be non-blocking despite being single-threaded. Every interview question about async timing, Promise ordering, and setTimeout behavior ultimately traces back to understanding the event loop.
Real applications: Understanding why Promises resolve before setTimeouts, debugging why UI updates are delayed, explaining why long loops freeze the browser, building mental models for React's rendering pipeline (setState batching), and understanding why Node.js can handle thousands of concurrent connections on one thread.
Common mistakes: Thinking setTimeout(fn, 0) runs immediately (it queues a macrotask, microtasks run first), assuming async functions run in parallel (they run on the same thread with cooperative scheduling), and not knowing that the event loop only processes one macrotask per iteration but drains ALL microtasks before moving on.
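One of the mistakes above — assuming async functions run in parallel — is easy to demonstrate. A minimal sketch (taskA, taskB, and log are illustrative names): the two functions share the single thread and interleave only at await points, where each await queues the continuation as a microtask.

```javascript
// Two async functions do NOT run in parallel — they interleave cooperatively.
const log = [];

async function taskA() {
  log.push("A1");
  await null;        // suspend; resumption is queued as a microtask
  log.push("A2");
}

async function taskB() {
  log.push("B1");
  await null;
  log.push("B2");
}

taskA();             // runs synchronously up to its first await
taskB();             // same — so A1 and B1 are logged before any resumption
log.push("sync");
// Once microtasks drain: ["A1", "B1", "sync", "A2", "B2"]
```

Note that both functions run their synchronous prefix to completion before either resumes — there is no preemption, only cooperative yielding at await.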
function third() { console.log("third"); }
function second() { third(); }
function first() { second(); }
first();
// Call stack progression:
// 1. first() — push first
// 2. second() — push second
// 3. third() — push third
// 4. console.log — push, execute, pop
// 5. third returns — pop third
// 6. second returns — pop second
// 7. first returns — pop first
If the call stack grows too deep (e.g., infinite recursion), JavaScript throws a RangeError: Maximum call stack size exceeded. Understanding the call stack is essential for debugging and understanding how synchronous code executes.
Why it matters: The call stack is the execution context for synchronous code. Every function call pushes a frame, every return pops one. Stack overflow errors, recursive algorithm limits, and the concept of a clean stack (event loop waiting) all relate to this structure.
Real applications: Debugging stack overflow errors from infinite recursion, understanding why deeply nested recursive algorithms need tail-call optimization or trampolining, reading stack traces to understand execution flow, and knowing why you can't make a synchronous call stack frame be "paused" while waiting for I/O (that's what async is for).
Common mistakes: Not recognizing a stack overflow from deep recursion early (it can consume memory before throwing), thinking async functions run on a separate stack (they use the same single call stack per event loop tick), and not knowing that the maximum call stack depth varies by engine and by frame size (often on the order of 10,000 frames).
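Since trampolining comes up above as a workaround for deep recursion, here is a minimal sketch (trampoline and sumTo are illustrative names, not library functions): instead of recursing, each step returns a thunk describing the next step, and a loop unwraps the thunks so the stack stays one frame deep.

```javascript
// Trampolining: flatten deep recursion into a loop — no stack growth.
function trampoline(fn) {
  return function (...args) {
    let result = fn(...args);
    while (typeof result === "function") {
      result = result(); // unwrap thunks iteratively
    }
    return result;
  };
}

// A naive recursive version of this would throw RangeError for large n
const sumTo = trampoline(function step(n, acc = 0) {
  return n === 0 ? acc : () => step(n - 1, acc + n);
});

sumTo(1000000); // 500000500000 — completes without a stack overflow
```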
console.log("Start");
setTimeout(() => console.log("Timeout 1"), 0);
setTimeout(() => console.log("Timeout 2"), 0);
console.log("End");
// Output:
// "Start"
// "End"
// "Timeout 1"
// "Timeout 2"
// (macrotasks processed one at a time, in order)
Macrotasks are processed one at a time in FIFO order. Between each macrotask, the event loop checks for microtasks and processes all of them before picking the next macrotask. This ensures Promise callbacks always run promptly.
Why it matters: The macrotask queue is the fundamental queue that drives JavaScript's asynchronous execution. Understanding when callbacks are added to this queue (and when they run relative to microtasks) explains the timing behavior of all async operations.
Real applications: Understanding why multiple setTimeouts with the same delay don't run simultaneously, debugging unexpected ordering in timer-based code, implementing sequential async workflows with setTimeout chaining, and understanding I/O callback timing in Node.js phases.
Common mistakes: Thinking setTimeout(fn, 0) and resolved Promises execute in the same queue (they don't — Promises use the microtask queue), not knowing that the HTML spec clamps timers nested more than 5 levels deep to a 4ms minimum delay, and assuming two setTimeout(fn, 0) callbacks run after exactly 0ms (event loop scheduling and OS timer granularity add latency).
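The sequential-workflow pattern mentioned above can be sketched with setTimeout chaining — each task gets its own macrotask, leaving room for rendering and input between steps (runSequentially, results, and the task list are hypothetical names, not a platform API):

```javascript
// One task per macrotask: the event loop can breathe between steps.
const results = [];

function runSequentially(tasks, done) {
  if (tasks.length === 0) return done();
  const task = tasks.shift(); // note: consumes the caller's array
  setTimeout(() => {
    results.push(task());
    runSequentially(tasks, done); // chain the next task as a new macrotask
  }, 0);
}

runSequentially(
  [() => "step 1", () => "step 2", () => "step 3"],
  () => console.log(results) // runs after three event loop iterations
);
```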
setTimeout(() => console.log("macro"), 0);
Promise.resolve().then(() => console.log("micro 1"));
Promise.resolve().then(() => {
console.log("micro 2");
Promise.resolve().then(() => console.log("micro 3"));
});
// Output: micro 1, micro 2, micro 3, macro
// All microtasks (including newly added) run before macro
This means if microtasks keep adding more microtasks, they can starve macrotasks and block UI rendering. Always be careful not to create infinite microtask loops as they will freeze the browser.
Why it matters: Microtasks are higher priority than macrotasks. This is why Promise chains and MutationObserver callbacks run before the next setTimeout or setInterval. Understanding this priority queue helps predict code execution order in interviews.
Real applications: Promise.then() and async/await resolution, MutationObserver DOM change callbacks, queueMicrotask() for deferred high-priority work, React's batched state updates draining before browser paint, and understanding why long .then() chains can delay rendering.
Common mistakes: Creating recursive microtasks (Promise.resolve().then(recursive)) which starves the macrotask queue and freezes the page, assuming microtasks and macrotasks are in the same queue (they have separate queues with different priorities), and not knowing that async/await suspension points are microtasks (not macrotasks like setTimeout).
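The last point — async/await suspension points being microtasks — can be shown directly. A minimal sketch (order and run are illustrative names): the code after an await behaves like a .then() callback and resumes ahead of any pending setTimeout.

```javascript
// Code after an await resumes via the microtask queue.
const order = [];

async function run() {
  order.push("before await");
  await null;                // suspension point — resumption is a microtask
  order.push("after await"); // runs before the setTimeout callback
}

setTimeout(() => order.push("timeout"), 0);
run();
order.push("sync end");
// Final order: ["before await", "sync end", "after await", "timeout"]
```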
console.log("A");
setTimeout(() => console.log("B"), 0);
console.log("C");
// Output: A, C, B
// "B" waits until stack clears even though delay is 0
// The 0ms is not guaranteed — it means "as soon as possible"
// after current execution and microtasks complete
This behavior is commonly used to defer execution to the next event loop iteration, allowing the current synchronous code and any pending microtasks to finish first. It is useful for breaking up long-running tasks.
Why it matters: setTimeout(fn, 0) is one of the most misunderstood patterns in JavaScript. It does NOT mean "execute immediately" — it means "execute in the next macrotask opportunity." This is a common interview question about event loop ordering.
Real applications: Deferring DOM reads to after rendering completes, breaking up large loops to avoid blocking UI (chunked processing), deferring expensive initialization to after page load, working around synchronous third-party library behavior, and scheduling non-urgent work to run after more critical tasks.
Common mistakes: Using setTimeout(fn, 0) as a substitute for proper async patterns (it's a last resort, not a solution), not knowing that all pending microtasks run before the setTimeout callback even if they were queued after setTimeout was called, and assuming a 0ms delay means instant execution (the callback still waits for the stack to clear and microtasks to drain, and deeply nested timers are clamped to a 4ms minimum).
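The second mistake — microtasks queued after setTimeout still running first — takes only a few lines to demonstrate (seen is an illustrative name):

```javascript
// A microtask queued AFTER setTimeout(fn, 0) still runs before it,
// because the microtask queue drains completely before the next macrotask.
const seen = [];

setTimeout(() => seen.push("timeout"), 0);            // queued first...
Promise.resolve().then(() => seen.push("microtask")); // ...but loses anyway
seen.push("sync");
// Final order: ["sync", "microtask", "timeout"]
```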
console.log("1");
setTimeout(() => console.log("2"), 0);
Promise.resolve()
.then(() => console.log("3"))
.then(() => console.log("4"));
console.log("5");
// Output: 1, 5, 3, 4, 2
// Sync first, then microtasks (Promise), then macrotasks (setTimeout)
Even with a 0ms delay on setTimeout, the Promise callbacks run first because microtasks have higher priority. This ordering is consistent across all modern browsers and Node.js environments.
Why it matters: Promise vs setTimeout ordering is one of the most commonly asked event loop interview questions. Candidates are given code snippets and asked to predict the output order. Knowing the microtask/macrotask priority is required to answer correctly.
Real applications: Predicting output in async code interview problems, understanding why React's state update batching (microtasks) completes before setTimeout-based tests, diagnosing test flakiness in async test suites, and building timing-sensitive code that depends on consistent execution order.
Common mistakes: Assuming 0ms setTimeout runs before resolved Promises (it doesn't — Promises always win), forgetting that async/await .then() callbacks are also microtasks (they run before setTimeout too), and not accounting for nested Promises creating additional microtask queue entries before setTimeout runs.
console.log("sync");
requestAnimationFrame(() => console.log("rAF"));
setTimeout(() => console.log("timeout"), 0);
Promise.resolve().then(() => console.log("promise"));
// Typical output: sync, promise, rAF, timeout
// (rAF timing depends on the browser's paint cycle)
// Use for smooth animations
let position = 0;
const element = document.getElementById("box"); // assumes an absolutely positioned element
function animate() {
  element.style.left = position + "px";
  position++;
  if (position < 300) requestAnimationFrame(animate);
}
requestAnimationFrame(animate); // kick off the animation loop
Always prefer requestAnimationFrame over setTimeout for animations. It provides smoother animations because it is synced with the display's refresh rate and avoids unnecessary frame calculations when the page is not visible.
Why it matters: requestAnimationFrame is the correct tool for smooth animation because it aligns with the browser's render cycle. Using setTimeout for animations causes frame rate mismatches, stuttering, and unnecessary work on hidden tabs.
Real applications: CSS animation alternatives using JS, canvas-based game loops, custom scroll-based animation systems, implementing smooth number countdowns, and throttling expensive DOM measurements to the render cycle to prevent layout thrashing.
Common mistakes: Using setTimeout(fn, 16) instead of rAF (assumes 60fps, not adaptive), not canceling rAF with cancelAnimationFrame on component unmount (memory leak), running expensive calculations in the rAF callback (use it only for DOM updates, compute elsewhere), and not knowing rAF is paused on background tabs (intentional — saves CPU).
// This blocks the event loop for ~5 seconds
function blockingTask() {
const start = Date.now();
while (Date.now() - start < 5000) {} // busy wait
console.log("Done");
}
// setTimeout callback delayed until blockingTask finishes
setTimeout(() => console.log("Timeout"), 100);
blockingTask(); // UI frozen for 5 seconds
// Fix: use Web Worker or chunk work
function yieldToEventLoop(tasks) {
if (tasks.length === 0) return;
const task = tasks.shift();
task();
setTimeout(() => yieldToEventLoop(tasks), 0);
}
The chunking approach uses setTimeout to yield control back to the event loop between tasks, allowing the browser to process UI events and repaint. For CPU-intensive work, Web Workers are the preferred solution.
Why it matters: Blocking the event loop freezes the entire UI — no clicks, scrolls, or input responses. Even a 100ms block is noticeable. Understanding what causes blocking (CPU loops, synchronous network calls) and how to prevent it is critical for front-end performance.
Real applications: Sorting/filtering large datasets without UI freeze, parsing large JSON responses, image processing on canvas, report generation in admin apps, and chunked list rendering for large data tables using virtual scrolling techniques.
Common mistakes: Doing expensive synchronous work directly in event handlers (blocks between user interaction and UI response), not knowing that forEach on 100k items can take 50ms+ (use chunked setTimeout or Web Worker), and using setInterval for CPU work thinking it yields (the callback still blocks while running).
console.log("start");
setTimeout(() => console.log("timeout"), 0);
Promise.resolve()
.then(() => {
console.log("promise1");
setTimeout(() => console.log("inner timeout"), 0);
})
.then(() => console.log("promise2"));
console.log("end");
// Output:
// "start" — sync
// "end" — sync
// "promise1" — microtask
// "promise2" — microtask (chained)
// "timeout" — macrotask (queued first)
// "inner timeout" — macrotask (queued during microtask)
Notice that "inner timeout" runs last even though it was scheduled during the first microtask. This is because setTimeout always places its callback in the macrotask queue, and the "timeout" callback was already queued before it.
Why it matters: Output prediction questions are among the most common event loop interview exercises. Being able to trace through code mentally — tracking what's on the call stack, what's in the microtask queue, and what's in the macrotask queue — is the core skill being tested.
Real applications: Debugging unexpected execution order in async code, building mental models for complex Promise chains, understanding why certain React lifecycle effects run in specific orders, and predicting test assertion timing in async tests.
Common mistakes: Assuming Promises inside setTimeouts run before the next setTimeout (they don't if the outer setTimeout fired first), not tracking that new Promises created inside .then() add to the microtask queue before macrotasks run, and forgetting that console.log in the synchronous part always runs before any async callbacks.
console.log("1");
queueMicrotask(() => console.log("2"));
setTimeout(() => console.log("3"), 0);
queueMicrotask(() => console.log("4"));
console.log("5");
// Output: 1, 5, 2, 4, 3
// Microtasks (queueMicrotask) run before macrotasks (setTimeout)
// Use case: batch DOM updates
let needsUpdate = false;
function scheduleUpdate() {
if (!needsUpdate) {
needsUpdate = true;
queueMicrotask(() => { flushUpdates(); needsUpdate = false; });
}
}
The batching pattern shown above is useful for coalescing multiple synchronous state changes into a single update. This avoids redundant DOM operations and is a common pattern in UI frameworks like React and Vue.
Why it matters: queueMicrotask() gives you explicit control over microtask scheduling without the overhead of Promise construction. It's the standard way to defer work to the microtask queue when you don't need a Promise's resolve/reject interface.
Real applications: Batching multiple synchronous state changes into a single re-render, deferring expensive DOM reads to after all synchronous writes in the same tick, building reactive systems that coalesce multiple property changes into one update cycle, and implementing efficient debouncing at the microtask level.
Common mistakes: Using queueMicrotask() for work that should be deferred further (use setTimeout for true deferral), creating recursive microtasks (can starve macrotasks and block rendering), and not knowing that queueMicrotask() was added specifically to avoid the Promise overhead when you only need scheduling, not resolution semantics.
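A runnable version of the batching idea, assuming nothing beyond queueMicrotask (setValue, pending, and flushCount are illustrative names, not a framework API): any number of synchronous writes in the same tick produce exactly one flush.

```javascript
// Coalesce many synchronous changes into one flush per microtask checkpoint.
const pending = [];
let flushCount = 0;
let scheduled = false;

function setValue(value) {
  pending.push(value);
  if (!scheduled) {
    scheduled = true;
    queueMicrotask(() => {  // one flush, no matter how many writes
      flushCount++;
      pending.length = 0;   // in a real UI, apply the batched DOM update here
      scheduled = false;
    });
  }
}

setValue(1);
setValue(2);
setValue(3); // three synchronous writes...
// ...but flushCount becomes 1 once microtasks drain
```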
// setTimeout — runs once after delay
setTimeout(() => console.log("Once"), 1000);
// setInterval — runs repeatedly
let count = 0;
const id = setInterval(() => {
count++;
console.log("Tick:", count);
if (count === 3) clearInterval(id);
}, 1000);
// Output: Tick: 1, Tick: 2, Tick: 3
// Better approach: recursive setTimeout for consistent intervals
function reliableInterval(fn, delay) {
function tick() {
fn();
setTimeout(tick, delay);
}
setTimeout(tick, delay);
}
Using recursive setTimeout instead of setInterval guarantees a consistent gap between the end of one callback and the start of the next. With setInterval, the interval is measured from when each callback was scheduled, so a callback that runs longer than the delay causes the next invocation to fire immediately afterward with no gap — JavaScript is single-threaded, so calls never literally overlap, but they can pile up back-to-back.
Why it matters: Choosing between setTimeout and setInterval for recurring tasks is a practical decision. setInterval can drift and overlap; recursive setTimeout self-regulates. Understanding this prevents subtle polling bugs in production code.
Real applications: Polling APIs for status updates, implementing heartbeat connections for WebSockets, building a countdown timer UI, refreshing authentication tokens, and periodic cache invalidation checks where overlapping executions would be problematic.
Common mistakes: Using setInterval for tasks that may take longer than the interval (calls pile up back-to-back with no gap), not clearing intervals when components unmount (memory leaks and stale updates), using setInterval with a 0ms delay thinking it's a tight loop (event loop overhead adds latency), and not accounting for timer drift in high-frequency setInterval polling.
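To address the drift issue mentioned above, one common sketch computes each delay from the original start time instead of chaining raw delays (nextDelay and driftlessInterval are hypothetical helper names, not platform APIs):

```javascript
// Self-correcting timer: each delay targets the ideal fire time,
// so small per-tick delays don't accumulate.
function nextDelay(startTime, tickCount, interval, now) {
  const target = startTime + (tickCount + 1) * interval; // ideal next fire time
  return Math.max(0, target - now);                      // clamp if we're late
}

function driftlessInterval(fn, interval) {
  const start = Date.now();
  let ticks = 0;
  let id;
  function tick() {
    fn(++ticks);
    id = setTimeout(tick, nextDelay(start, ticks, interval, Date.now()));
  }
  id = setTimeout(tick, interval);
  return () => clearTimeout(id); // returns a cancel function
}

// Usage: ticks roughly every 100ms, then stops
const stop = driftlessInterval((n) => console.log("tick", n), 100);
setTimeout(stop, 350);
```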
// Node.js specific: process.nextTick vs queueMicrotask
process.nextTick(() => console.log("nextTick"));
queueMicrotask(() => console.log("microtask"));
setTimeout(() => console.log("timeout"), 0);
setImmediate(() => console.log("immediate"));
// Output in Node.js:
// "nextTick" — nextTick queue (highest priority)
// "microtask" — microtask queue
// "timeout" — timers phase
// "immediate" — check phase
// Note: at top level, the timeout/immediate order is NOT guaranteed —
// it depends on process startup timing. Inside an I/O callback,
// setImmediate always fires before setTimeout(fn, 0).
// Node.js event loop phases:
// 1. Timers (setTimeout, setInterval)
// 2. Pending callbacks (I/O callbacks)
// 3. Idle, prepare (internal)
// 4. Poll (incoming connections, data)
// 5. Check (setImmediate)
// 6. Close callbacks (socket.on("close"))
In Node.js, process.nextTick() has even higher priority than microtasks — it runs before any other microtask. The setImmediate() function is Node.js specific and runs in the check phase after I/O polling.
Why it matters: Node.js's event loop has multiple phases (timers, I/O, idle, poll, check, close callbacks), each with specific scheduling semantics. Understanding this is crucial for writing correct server-side async Node.js code and is a key senior Node.js interview topic.
Real applications: High-throughput I/O servers (Node.js HTTP, TCP servers), understanding why DB callbacks run in the I/O phase vs timer callbacks, building streaming pipelines, implementing backpressure in stream processing, and debugging Node.js performance issues related to event loop phase timing.
Common mistakes: Confusing the browser event loop phases with Node.js phases (they're different models), using process.nextTick() where queueMicrotask() is more appropriate (nextTick runs before other microtasks, causing unexpected ordering), and not knowing that setImmediate() runs in the check phase AFTER I/O callbacks (unlike setTimeout which runs in the timers phase).
// Priority order demonstration
Promise.resolve().then(() => console.log("promise"));
queueMicrotask(() => console.log("microtask"));
process.nextTick(() => console.log("nextTick"));
// Output:
// "nextTick" — highest priority
// "promise" — microtask queue
// "microtask" — microtask queue
// Danger: recursive nextTick can starve I/O
function starvation() {
process.nextTick(starvation); // never yields to I/O!
}
// Safe alternative: use setImmediate for I/O-friendly deferral
function safe() {
setImmediate(safe); // allows I/O between callbacks
}
Be cautious with recursive process.nextTick() calls as they can starve I/O operations. The microtask queue and nextTick queue are fully drained before the event loop continues, so infinite recursion in either will block everything.
Why it matters: process.nextTick() runs before ALL other async operations including resolved Promises. This is a controversial design decision that can cause surprising ordering and was a source of many Node.js bugs. Understanding when to use nextTick vs queueMicrotask() is important for Node.js backends.
Real applications: Ensuring a callback fires before I/O in the same tick (Node's internal EventEmitter uses nextTick), deferring error events to allow event listeners to be attached first, breaking up large synchronous initialization into chunks, and debugging ordering issues in Node.js async code.
Common mistakes: Using process.nextTick() recursively (starves I/O and makes the server unresponsive), using nextTick where setImmediate would be more appropriate (nextTick runs before I/O, setImmediate after), and not knowing nextTick was intentionally placed before Promises in the queue for backward compatibility reasons.
const target = document.getElementById("myElement");
const observer = new MutationObserver((mutations) => {
mutations.forEach((mutation) => {
console.log("DOM changed:", mutation.type);
console.log("Added:", mutation.addedNodes.length);
console.log("Removed:", mutation.removedNodes.length);
});
});
// Configure and start observing
observer.observe(target, {
childList: true, // watch for added/removed children
attributes: true, // watch for attribute changes
subtree: true, // watch entire subtree
characterData: true // watch for text content changes
});
// Later: stop observing
observer.disconnect();
MutationObserver batches multiple DOM changes into a single callback, making it very efficient. Since it runs as a microtask, you are guaranteed to see DOM changes before the browser repaints, which allows you to make additional modifications without causing visual flicker.
Why it matters: MutationObserver replaced the deprecated DOM mutation events (DOMNodeInserted etc.) which had serious performance problems. Understanding where MutationObserver fits in the event loop (microtask, before paint) is important for building performant DOM change reactions.
Real applications: Lazy loading images when they enter the DOM, implementing accessibility tools that react to dynamic content, building polyfills for custom elements, framework internals (Angular zone.js), detecting third-party script DOM injections in security tools, and implementing infinite scroll when new content is added.
Common mistakes: Not disconnecting observers when done (memory leak), not batching mutations inside the callback (each DOM change inside a MutationObserver callback adds to the next microtask batch), observing too broadly (use specific subtree/childList options to limit scope), and not knowing it batches mutations (a loop of 1000 DOM changes = one callback, not 1000).
// main.js — main thread
const worker = new Worker("worker.js");
worker.postMessage({ data: [1, 2, 3, 4, 5] });
worker.onmessage = (event) => {
console.log("Result from worker:", event.data);
};
worker.onerror = (error) => {
console.error("Worker error:", error.message);
};
// worker.js — separate thread with own event loop
self.onmessage = (event) => {
const numbers = event.data.data;
// Heavy computation without blocking main thread
const sum = numbers.reduce((a, b) => a + b, 0);
self.postMessage(sum);
};
Each Worker has its own call stack, event loop, and memory space. This means heavy computations in a Worker do not block the main thread's event loop, keeping the UI responsive. However, data transfer between threads involves serialization which can be slow for large objects — use Transferable objects for better performance.
Why it matters: Web Workers are the solution to CPU-bound blocking. Without them, JavaScript's single thread means any intensive computation freezes the UI. For performance-critical applications, offloading work to Workers is essential.
Real applications: Image processing and filtering (photo editors), large JSON parsing, machine learning inference in the browser, cryptographic operations, game physics calculations, and virtual DOM diffing in workers (experimental React architecture research).
Common mistakes: Not using Transferable objects for large ArrayBuffers (cloning 100MB is slow — transferring is near-instant), sharing DOM access from a Worker (Workers cannot access the DOM), not terminating workers when done (they run indefinitely consuming CPU), and using Workers for trivial tasks where the serialization overhead exceeds the computation cost.
// Synchronous — blocks the call stack
console.log("A");
const result = heavyComputation(); // blocks until done
console.log("B");
// Order: A, (wait...), B
// Asynchronous — does not block
console.log("A");
fetch("/api/data")
.then(response => response.json())
.then(data => console.log("Data:", data));
console.log("B");
// Order: A, B, (later...) Data: ...
// Common async patterns
// 1. Callbacks — setTimeout, event listeners
// 2. Promises — .then() / .catch()
// 3. Async/Await — syntactic sugar over Promises
// 4. Event emitters — Node.js pattern
The event loop bridges the gap between JavaScript's single-threaded execution model and the underlying multithreaded environment. While your JavaScript code runs on one thread, the browser or Node.js can handle network requests, file I/O, and timers on separate threads.
Why it matters: Understanding that synchronous code blocks the thread while async code doesn't is fundamental to JavaScript performance. This distinction explains why you should avoid synchronous network or file reads, and why async code with callbacks/promises is Node.js's core architecture.
Real applications: Node.js I/O-heavy servers handling thousands of concurrent connections on one thread, SPA architecture separating data fetching from rendering, understanding why fetch() doesn't block while waiting for a response, and designing APIs that avoid blocking the event loop in server-side code.
Common mistakes: Using synchronous file system APIs (fs.readFileSync) in Node.js servers (blocks all incoming requests), making synchronous XHR requests (deprecated, blocks UI thread), mixing synchronous and async patterns without understanding the execution order implications, and not knowing that await only suspends the current async function, not the entire thread.
// The event loop iteration with rendering:
// 1. Pick one macrotask from queue
// 2. Execute it on the call stack
// 3. Drain ALL microtasks
// 4. If it's time to render (roughly every 16.7ms at 60Hz):
// a. Run requestAnimationFrame callbacks
// b. Style calculation
// c. Layout
// d. Paint
// e. Composite
// 5. Go to step 1
// Forced synchronous layout (layout thrashing)
// BAD — triggers layout multiple times
for (let i = 0; i < 100; i++) {
const height = element.offsetHeight; // forces layout
element.style.height = height + 1 + "px"; // invalidates layout
}
// GOOD — batch reads and writes separately
const height = element.offsetHeight; // single read
for (let i = 0; i < 100; i++) {
element.style.height = height + i + "px"; // batch writes
}
Avoid layout thrashing (forced synchronous layout) by batching DOM reads before writes. Reading layout properties like offsetHeight or getBoundingClientRect forces the browser to recalculate layout immediately if styles have changed since the last layout. Scheduling DOM writes inside a requestAnimationFrame callback helps batch them just before the next paint.
Why it matters: Understanding the browser rendering pipeline (JavaScript → style → layout → paint → composite) helps explain visual performance issues. Layout thrashing is one of the most common causes of jank in web apps and is detectable with Chrome DevTools.
Real applications: Optimizing animation performance by separating DOM reads and writes, using CSS transforms instead of top/left (compositor only, no layout), batching DOM mutations in a single rAF callback, detecting layout thrashing with the Performance timeline, and understanding why React batches state updates before re-rendering.
Common mistakes: Reading and writing layout properties in an interleaved loop (classic layout thrashing pattern), not knowing that CSS transforms and opacity are compositor-only (fast) while width/height/top/left trigger layout (slow), and performing DOM measurements in event handlers instead of batching them in rAF.
// Structured clone with Web Worker
const worker = new Worker("worker.js");
const data = {
name: "John",
scores: [90, 85, 92],
metadata: new Map([["key", "value"]]),
date: new Date(),
pattern: /test/gi
};
worker.postMessage(data); // all types cloned correctly
// Manual deep clone with the structuredClone() global (modern browsers, Node 17+)
const clone = structuredClone(data);
clone.scores.push(100); // does not affect original
// What CANNOT be cloned:
// Functions, Symbols, DOM nodes, WeakMap, WeakSet
// Property getters/setters, prototype chain
// Transferable objects — zero-copy transfer
const buffer = new ArrayBuffer(1024);
worker.postMessage(buffer, [buffer]);
// buffer is now empty in main thread (transferred, not copied)
For large data, use Transferable objects (ArrayBuffer, MessagePort, OffscreenCanvas) to transfer ownership instead of copying. This is a zero-copy operation that is much faster than cloning but makes the original reference unusable.
Why it matters: postMessage is the only safe inter-thread communication mechanism in JavaScript. The Structured Clone Algorithm defines what can cross the boundary, making it important to know which types are clonable, which are transferable, and which are neither (DOM nodes, functions).
Real applications: Sending processed ArrayBuffers from Workers to the main thread, transferring OffscreenCanvas for Worker-based rendering, passing complex state objects between iframes via postMessage, sharing images between service workers and pages, and building sandboxed plugin systems using window.postMessage.
Common mistakes: Trying to transfer DOM elements (not structured-cloneable — throws), not using Transferable for large ArrayBuffers (cloning is O(n) copy), transferring an ArrayBuffer and then trying to use it in the original context (it becomes detached/empty), and not validating the origin of incoming postMessage events (security vulnerability).
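The clone semantics above can be verified directly with the structuredClone() global (available in modern browsers and Node 17+; the object shape here is illustrative): the copy is deep and type-preserving, while non-cloneable values like functions throw.

```javascript
// structuredClone(): deep, independent, type-preserving copy.
const original = {
  scores: [90, 85],
  metadata: new Map([["key", "value"]]),
  created: new Date(0),
};

const copy = structuredClone(original);
copy.scores.push(100);               // deep copy: original.scores is untouched
copy.metadata.set("key", "changed"); // Map instances are independent too

let threw = false;
try {
  structuredClone({ fn: () => {} }); // functions are not structured-cloneable
} catch (e) {
  threw = true;                      // DataCloneError
}
```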
// Measure execution time
const start = performance.now();
expensiveOperation();
const duration = performance.now() - start;
console.log("Took " + duration.toFixed(2) + "ms");
// Using Performance Observer to detect long tasks
const observer = new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
if (entry.duration > 50) {
console.warn("Long task detected:", entry.duration + "ms");
}
}
});
observer.observe({ entryTypes: ["longtask"] });
// Break up long tasks
async function processLargeArray(items) {
const CHUNK_SIZE = 100;
for (let i = 0; i < items.length; i += CHUNK_SIZE) {
const chunk = items.slice(i, i + CHUNK_SIZE);
processChunk(chunk);
// Yield to event loop between chunks
await new Promise(resolve => setTimeout(resolve, 0));
}
}
The Long Tasks API helps identify tasks that take more than 50ms and could cause jank. Breaking long tasks into smaller chunks with setTimeout or requestIdleCallback allows the browser to handle user input and rendering between chunks, keeping the application responsive.
Why it matters: Event loop performance profiling is an essential production skill. Understanding how to find blocking tasks, measure their cost, and break them up is critical for building responsive UIs that score well on Core Web Vitals (INP, FID).
Real applications: Identifying performance bottlenecks in complex SPAs, measuring task duration in CI/CD with web-vitals library, diagnosing slow page load in Chrome Lighthouse audits, building custom performance monitoring dashboards, and implementing INP (Interaction to Next Paint) optimization workflows.
Common mistakes: Only profiling in DevTools without production monitoring (different devices have very different performance characteristics), not knowing the 50ms long task threshold (it's the per-frame budget for 60fps interactions), measuring JavaScript performance without accounting for garbage collection pauses, and profiling in development mode (React dev mode is significantly slower than production).
// Basic usage with deadline
function processWork(deadline) {
  while (deadline.timeRemaining() > 0 && hasMoreWork()) {
    processNextItem(); // do low-priority work while idle time remains
  }
  // If more work remains, schedule another idle callback
  if (hasMoreWork()) {
    requestIdleCallback(processWork);
  }
}
requestIdleCallback(processWork);
// With timeout — ensures callback runs within specified time
requestIdleCallback(doWork, { timeout: 2000 });
// Will run during idle time OR after 2 seconds (whichever first)
// Priority comparison:
// 1. Sync code (call stack)
// 2. Microtasks (Promise, queueMicrotask)
// 3. requestAnimationFrame (before paint)
// 4. Macrotasks (setTimeout, events)
// 5. requestIdleCallback (idle time only)
Never perform DOM mutations inside requestIdleCallback because it runs outside the rendering cycle. Instead, collect the changes and apply them in a requestAnimationFrame callback. Note that requestIdleCallback is not available in all browsers — Safari added support only recently.
Why it matters: requestIdleCallback is the browser's way of telling you "I have spare time right now." It's the correct place for non-urgent work like analytics, prefetching, and cleanup. Using it correctly prevents work from competing with user-critical rendering.
Real applications: Prefetching resources during idle time, running analytics and telemetry without affecting page performance, lazy-loading below-fold content, processing event queues during idle periods, and polyfilling background sync patterns in browsers without service worker support.
Common mistakes: Performing DOM mutations directly in rIC (runs after render cycle, DOM changes would force another render), not checking deadline.timeRemaining() before doing work chunks (may exceed the idle period), relying on rIC for time-sensitive work (browser may delay it indefinitely under load), and not providing a timeout option (idle callback may never run on busy pages).