In the error-first callback pattern, a callback's first argument is reserved for an error (null on success), and subsequent arguments carry the results. Most Node.js core modules like fs, http, and dns originally used this pattern before Promises were introduced. It remains common in legacy codebases and older npm packages, making it essential for working with real-world Node.js applications.
const fs = require('fs');
// Error-first callback pattern
fs.readFile('data.txt', 'utf8', (err, data) => {
  if (err) {
    console.error('Error reading file:', err);
    return;
  }
  console.log('File contents:', data);
});
// Custom function with callback pattern
function fetchUser(id, callback) {
  setTimeout(() => {
    if (!id) return callback(new Error('ID is required'));
    callback(null, { id, name: 'Alice' });
  }, 100);
}
fetchUser(1, (err, user) => {
  if (err) return console.error(err);
  console.log(user);
});
Why it matters: This tests your understanding of Node.js's original async model. Interviewers assess whether you grasp the error-first convention, how callbacks differ from Promises and async/await, and whether you can read and maintain legacy callback-based code.
Real applications: Built-in modules like fs.readFile and dns.lookup use the callback pattern. Many older npm packages still expose callback-based APIs, requiring wrapping with util.promisify when integrating with modern async/await code.
Common mistakes: Forgetting to check the error argument before accessing the result causes crashes on failure. Developers also call the callback twice — once in the error branch and once in the success branch — due to missing return statements, causing unpredictable behavior.
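A minimal sketch of the double-call bug caused by a missing return (loadSettings is a hypothetical wrapper, not from the text above):
function loadSettings(path, callback) {
  fs.readFile(path, 'utf8', (err, data) => {
    if (err) {
      callback(err); // BUG: no `return`, so execution falls through...
    }
    callback(null, JSON.parse(data)); // ...and the callback fires a second time (JSON.parse also throws on undefined data)
  });
}
// Fix: `if (err) return callback(err);` guarantees the callback runs exactly once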
// CALLBACK HELL
getUser(1, (err, user) => {
  getOrders(user.id, (err, orders) => {
    getOrderDetails(orders[0].id, (err, details) => {
      getShipping(details.shippingId, (err, shipping) => {
        console.log(shipping); // 4 levels deep!
      });
    });
  });
});
// SOLUTION 1: Named functions
function handleShipping(err, shipping) { console.log(shipping); }
function handleDetails(err, details) { getShipping(details.shippingId, handleShipping); }
function handleOrders(err, orders) { getOrderDetails(orders[0].id, handleDetails); }
getUser(1, (err, user) => getOrders(user.id, handleOrders));
// SOLUTION 2: Promises
getUser(1)
  .then(user => getOrders(user.id))
  .then(orders => getOrderDetails(orders[0].id))
  .then(details => getShipping(details.shippingId))
  .then(shipping => console.log(shipping))
  .catch(err => console.error(err));
// SOLUTION 3: async/await (inside an async function, or at the top level of an ES module)
const user = await getUser(1);
const orders = await getOrders(user.id);
const details = await getOrderDetails(orders[0].id);
const shipping = await getShipping(details.shippingId);
Why it matters: This tests your knowledge of Node.js async evolution and ability to write maintainable async code. Senior developers must recognize callback hell anti-patterns and refactor them using Promises or async/await for readable, testable code.
Real applications: Multi-step Express routes that read a file, query a database, then call an external API become callback hell without proper structuring. Modern applications refactor these chains using async/await to keep route handlers readable and maintainable.
Common mistakes: Flattening the nesting but forgetting to handle errors in every branch leaves the code functionally broken. Developers also over-engineer by mixing Promise chains and async/await unnecessarily when a simple async/await flow handles everything cleanly.
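A sketch of the same chain with error handling restored, wrapping Solution 3 in a function so a single try/catch covers every step:
async function loadShipping() {
  try {
    const user = await getUser(1);
    const orders = await getOrders(user.id);
    const details = await getOrderDetails(orders[0].id);
    const shipping = await getShipping(details.shippingId);
    console.log(shipping);
  } catch (err) {
    // One handler replaces the per-callback err checks
    console.error('Failed to load shipping:', err);
  }
}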
Create Promises with the Promise constructor, whose executor receives resolve and reject, to wrap callback-based APIs, and use Promise.resolve() and Promise.reject() for shorthand static promises.
// Create a promise
const fs = require('fs');
function readFileAsync(path) {
  return new Promise((resolve, reject) => {
    fs.readFile(path, 'utf8', (err, data) => {
      if (err) reject(err);
      else resolve(data);
    });
  });
}
// Use the promise
readFileAsync('data.txt')
  .then(data => console.log(data))
  .catch(err => console.error(err));
// Promise states:
// Pending   → initial state
// Fulfilled → resolved with a value
// Rejected  → rejected with an error
// Shorthand for resolved/rejected promises
const resolved = Promise.resolve('value');
const rejected = Promise.reject(new Error('fail'));
Why it matters: Understanding how to create and consume Promises is fundamental to modern Node.js development. Interviewers test this to gauge mastery of the async programming model underpinning all async/await code, since async functions always return Promises under the hood.
Real applications: Database drivers, the native fetch API, and Node.js file system operations all return Promises. Custom service wrappers around legacy callback APIs use manual Promise constructors to convert them to the modern standard used throughout codebases.
Common mistakes: Starting async work synchronously inside the Promise executor instead of deferring it creates subtle execution bugs. Developers also forget to attach a .catch() handler, causing unhandled rejection errors that crash Node.js in version 15 and later.
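One way to surface forgotten .catch() handlers during development is Node's process-level hook, a sketch shown here (it is a diagnostic of last resort, not a substitute for handling rejections where they occur):
process.on('unhandledRejection', (reason, promise) => {
  // Without this hook, Node.js 15+ crashes the process on unhandled rejections
  console.error('Unhandled rejection:', reason);
});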
const p1 = Promise.resolve(1);
const p2 = Promise.resolve(2);
const p3 = Promise.reject('error');
// Promise.all — resolves when ALL succeed, rejects on FIRST failure
const all = await Promise.all([p1, p2]); // [1, 2]
// await Promise.all([p1, p3]); // throws 'error'
// Promise.allSettled — waits for ALL to settle (never rejects)
const settled = await Promise.allSettled([p1, p2, p3]);
// [{ status:'fulfilled', value:1 }, { status:'fulfilled', value:2 },
//  { status:'rejected', reason:'error' }]
// Promise.race — resolves/rejects with the FIRST to settle
const race = await Promise.race([
  fetch('/api/fast'),
  new Promise((_, reject) => setTimeout(() => reject('timeout'), 5000))
]);
// Promise.any — resolves with the FIRST to SUCCEED
const any = await Promise.any([p3, p1, p2]); // 1 (ignores p3 rejection)
// Rejects only if ALL reject (AggregateError)
Why it matters: These methods are frequently tested to verify understanding of concurrent async execution patterns. Knowing when to use each one distinguishes developers who default to slow sequential awaiting from those who design efficient parallel execution.
Real applications: E-commerce dashboards use Promise.all to simultaneously fetch user data, cart, and recommendations. Notification services use allSettled to send emails to all users and log failures without halting the entire delivery batch.
Common mistakes: Using Promise.all when one failing operation should not block the rest — allSettled is correct there. Developers also confuse Promise.race (first to settle, including rejections) with Promise.any (first to succeed), leading to unintended error propagation.
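A quick contrast sketch: a fast rejection wins race but is skipped by any.
const fast = new Promise((_, reject) => setTimeout(() => reject(new Error('fast fail')), 10));
const slow = new Promise(resolve => setTimeout(() => resolve('slow ok'), 50));
try {
  await Promise.race([fast, slow]); // rejects: the rejection settles first
} catch (err) {
  console.log(err.message); // 'fast fail'
}
console.log(await Promise.any([fast, slow])); // 'slow ok' — rejections are ignored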
An async function always returns a Promise, and the await keyword pauses its execution until the awaited Promise settles while the event loop continues processing other tasks. This pattern eliminates deep nesting and long Promise chains, making complex async flows easy to follow, debug, and maintain.
// async function always returns a Promise
async function getUser(id) {
  const response = await fetch(`/api/users/${id}`);
  const user = await response.json();
  return user; // Wrapped in Promise.resolve()
}
// Equivalent with promises
function getUser(id) {
  return fetch(`/api/users/${id}`)
    .then(response => response.json());
}
// Sequential execution
async function loadData() {
  const user = await getUser(1);     // Waits for this
  const orders = await getOrders(1); // Then this
  return { user, orders };
}
// Top-level await (ES modules only)
const config = await loadConfig();
Why it matters: Every modern Node.js codebase uses async/await, and interviewers test whether you understand it is truly asynchronous (non-blocking), how it differs from synchronous code under the hood, and how to correctly handle errors with try/catch.
Real applications: Every Express route handler that queries a database or calls an external API benefits from async/await. Authentication flows, order processing pipelines, and data aggregation services rely on it to remain non-blocking while staying highly readable.
Common mistakes: Forgetting the await keyword when calling async functions causes subtle bugs where you receive a Promise object instead of the resolved value. Using await inside a non-async function also throws a syntax error, a very common mistake for developers new to the pattern.
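A minimal sketch of the missing-await bug, using the getUser function defined above:
const userPromise = getUser(1); // BUG if you wanted the value: this is a pending Promise
console.log(userPromise.name);  // undefined — property lookup happens on the Promise object
const user = await getUser(1);  // Correct: await unwraps the resolved value
console.log(user.name);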
try/catch around await expressions replaces .catch() chains with familiar synchronous-style error handling. You can catch specific error types and respond differently, while the finally block runs regardless of success or failure, making it ideal for cleanup like releasing resources. For inline error handling, you can chain .catch() directly on an awaited expression to provide fallback values without a full try-catch block.
// Basic error handling
async function fetchData() {
  try {
    const response = await fetch('/api/data');
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    return await response.json();
  } catch (error) {
    console.error('Fetch failed:', error.message);
    throw error; // Re-throw if caller should handle it
  }
}
// Handle multiple operations
async function processOrder(orderId) {
  try {
    const order = await getOrder(orderId);
    const payment = await processPayment(order);
    await sendConfirmation(order, payment);
  } catch (error) {
    if (error.code === 'PAYMENT_FAILED') {
      await cancelOrder(orderId);
    }
    logger.error('Order processing failed', { orderId, error: error.message });
  } finally {
    // Runs whether success or failure
    await releaseResources();
  }
}
// Catch at the call site (defaultData is a predefined fallback value)
const data = await fetchData().catch(err => defaultData);
Why it matters: Proper error handling is critical in production systems. Interviewers assess whether you write resilient async code that does not swallow errors silently, and whether you understand when to handle errors locally versus propagating them up to a higher-level handler.
Real applications: Database transaction handlers use try-catch to roll back on failure and finally to always release connections. API route handlers catch database errors and map them to appropriate HTTP status codes before returning responses to clients.
Common mistakes: Swallowing errors with an empty catch block hides bugs that become extremely hard to diagnose in production. Developers also leave await calls outside try-catch blocks, causing unhandled promise rejections that crash Node.js in version 15 and later.
// SEQUENTIAL — slow (total = sum of all durations)
async function sequential() {
  const users = await fetchUsers();       // 200ms
  const products = await fetchProducts(); // 300ms
  const orders = await fetchOrders();     // 250ms
  // Total: ~750ms
}
// PARALLEL — fast (total = max duration)
async function parallel() {
  const [users, products, orders] = await Promise.all([
    fetchUsers(),    // 200ms
    fetchProducts(), // 300ms
    fetchOrders()    // 250ms
  ]);
  // Total: ~300ms
}
// Parallel with error handling
async function parallelSafe() {
  const results = await Promise.allSettled([
    fetchUsers(),
    fetchProducts(),
    fetchOrders()
  ]);
  const succeeded = results
    .filter(r => r.status === 'fulfilled')
    .map(r => r.value);
}
Why it matters: This is one of the most impactful Node.js performance optimizations. Interviewers test it to ensure you identify unnecessary sequential execution — developers who understand async/await sometimes still await independent operations one-by-one, wasting significant response time.
Real applications: Dashboard APIs fetching user data, analytics, notifications, and recent activity simultaneously use parallel execution. Parallelizing reduces response time from 700–1000ms (sequential) to roughly 300ms (parallel), dramatically improving user-perceived performance.
Common mistakes: Awaiting each operation one-by-one inside a loop when they are independent creates sequential code disguised as async. Developers also accidentally serialize Promise.all by awaiting each call inside the array literal, so each operation completes before the next one even starts (see the sketch below).
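A sketch of the accidental-serialization trap, using fetchUsers/fetchProducts from above (ids and processItem are hypothetical):
// WRONG: each await finishes before the next call starts — sequential in disguise
const slowResults = await Promise.all([await fetchUsers(), await fetchProducts()]);
// RIGHT: invoke first so both run concurrently, then await together
const fastResults = await Promise.all([fetchUsers(), fetchProducts()]);
// Same trap in loop form
for (const id of ids) {
  await processItem(id); // one at a time
}
await Promise.all(ids.map(id => processItem(id))); // all at once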
Async iterators let you consume asynchronous data sources with for await...of loops, making them ideal for processing streams, paginated APIs, and large datasets without loading everything into memory. Async generators (using async function* and yield) create custom async iterables that produce values on demand. Node.js Readable streams implement the async iterable protocol natively, so they work directly with for await...of.
// Consuming a readable stream
const fs = require('fs');
async function processFile(path) {
  const stream = fs.createReadStream(path, { encoding: 'utf8' });
  for await (const chunk of stream) {
    console.log('Chunk:', chunk.length, 'chars'); // utf8 chunks are strings, so length counts characters
  }
}
// Custom async generator
async function* fetchPages(url) {
  let page = 1;
  while (true) {
    const res = await fetch(`${url}?page=${page}`);
    const data = await res.json();
    if (data.length === 0) break;
    yield data;
    page++;
  }
}
// Consume the async generator
for await (const pageData of fetchPages('/api/items')) {
  console.log('Got page with', pageData.length, 'items');
}
Why it matters: Async iterators are the modern, memory-efficient way to process large data sources in Node.js. Interviewers test this to assess whether you can process streaming data without buffering everything into memory — essential for scalable data pipelines and large file handling.
Real applications: Processing large CSV file uploads line-by-line, consuming paginated REST APIs, and reading database query cursors row-by-row all benefit from async iterators to keep memory usage flat regardless of data size.
Common mistakes: Loading all paginated data into memory with Promise.all before processing defeats the memory-efficiency advantage. Developers also forget to handle early loop termination — breaking from a for await...of loop without releasing the stream reader can cause resource leaks.
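A sketch of explicit cleanup on early exit (findFirstMatch and predicate are hypothetical; modern Node.js destroys a Readable automatically when the loop exits, but an explicit destroy() is a safe habit for readers that do not auto-release):
async function findFirstMatch(path, predicate) {
  const stream = fs.createReadStream(path, { encoding: 'utf8' });
  try {
    for await (const chunk of stream) {
      if (predicate(chunk)) return chunk; // early exit from the loop
    }
    return null;
  } finally {
    stream.destroy(); // release the underlying file handle on any exit path
  }
}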
const EventEmitter = require('events');
class OrderService extends EventEmitter {
  createOrder(data) {
    const order = { id: Date.now(), ...data };
    this.emit('order:created', order);
    return order;
  }
  cancelOrder(id) {
    this.emit('order:cancelled', { id });
  }
}
const service = new OrderService();
// Register listeners
service.on('order:created', (order) => {
  console.log('Send confirmation email for order', order.id);
});
service.on('order:created', (order) => {
  console.log('Update inventory for order', order.id);
});
// Listen once
service.once('order:cancelled', (data) => {
  console.log('Order cancelled:', data.id);
});
// Error handling
service.on('error', (err) => {
  console.error('Service error:', err);
});
service.createOrder({ product: 'Widget', qty: 2 });
Why it matters: EventEmitter is built into the core of Node.js and understanding it is essential for working with streams, servers, and custom event systems. Interviewers test it to confirm you can build decoupled reactive architectures and understand the dangers of unhandled error events.
Real applications: Order management systems emit events like order:created and order:shipped for inventory, notification, and analytics services to independently react to. Real-time logging pipelines use EventEmitter to decouple log producers from multiple consumer handlers.
Common mistakes: Forgetting to listen for the error event is critical — Node.js throws an uncaught exception that crashes the process when an error event has no listener. Adding listeners inside loops without removing them also creates memory leaks from accumulating duplicate listeners over time.
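A sketch of avoiding the duplicate-listener leak with a disposer (watchOrder is hypothetical; service is the instance from above):
function watchOrder(service, handler) {
  service.on('order:created', handler);
  return () => service.off('order:created', handler); // disposer detaches the listener
}
const stop = watchOrder(service, order => console.log('seen', order.id));
stop(); // call when done — prevents listeners accumulating across iterations
// Node.js warns once more than 10 listeners attach to one event;
// raise the cap only when the fan-out is intentional:
service.setMaxListeners(20);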
util.promisify converts functions that follow the error-first callback convention (accepting a callback with (err, result) as the last parameter) into promise-returning functions. This makes it easy to wrap legacy callback APIs for use with modern async/await code. Most Node.js core modules now offer a built-in .promises API like fs.promises, dns.promises, and timers/promises, making manual promisification less necessary for new code.
const util = require('util');
const fs = require('fs');
const dns = require('dns');
// Promisify individual functions
const readFile = util.promisify(fs.readFile);
const lookup = util.promisify(dns.lookup);
// Now use with async/await
async function main() {
  const data = await readFile('config.json', 'utf8');
  const { address } = await lookup('example.com');
  console.log(data, address);
}
// fs.promises — already promisified
const fsp = require('fs').promises;
const data = await fsp.readFile('config.json', 'utf8');
await fsp.writeFile('output.txt', 'Hello');
// Custom promisify for non-standard callbacks (result-first here)
function customAsync(arg) {
  return new Promise((resolve, reject) => {
    legacyFunction(arg, (result, error) => {
      if (error) reject(error);
      else resolve(result);
    });
  });
}
Why it matters: Knowing how to bridge the old callback world and modern Promises is a practical skill. Interviewers test this when assessing your ability to modernize legacy codebases or work with third-party libraries that have not yet adopted the Promise-based API style.
Real applications: Migrating legacy Express applications that use callback-based database drivers or file operations to modern async/await requires promisifying those APIs. Custom wrappers around legacy SDKs also need this when integrating with a modern async architecture across the rest of the codebase.
Common mistakes: Using util.promisify on functions that do not follow the standard error-first callback signature produces wrong results silently. Developers also overlook the built-in fs.promises and dns.promises APIs and create unnecessary manual wrappers for functions the standard library already provides.
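For APIs you cannot change, util.promisify.custom lets you supply the Promise implementation yourself; a sketch assuming the result-first legacyFunction from the example above:
const util = require('util');
// Attach a custom promisified implementation under the well-known symbol
legacyFunction[util.promisify.custom] = (arg) =>
  new Promise((resolve, reject) => {
    legacyFunction(arg, (result, error) => error ? reject(error) : resolve(result));
  });
const legacyAsync = util.promisify(legacyFunction); // picks up the custom version
// const value = await legacyAsync(input);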
// Using the p-limit library
const pLimit = require('p-limit');
const limit = pLimit(5); // Max 5 concurrent operations
const urls = Array.from({ length: 100 }, (_, i) => `/api/items/${i}`);
// Only 5 fetches run at a time
const results = await Promise.all(
  urls.map(url => limit(() => fetch(url).then(r => r.json())))
);
// Custom concurrency limiter
class ConcurrencyLimiter {
  constructor(maxConcurrent) {
    this.max = maxConcurrent;
    this.active = 0;
    this.queue = [];
  }
  async run(fn) {
    while (this.active >= this.max) {
      await new Promise(resolve => this.queue.push(resolve));
    }
    this.active++;
    try {
      return await fn();
    } finally {
      this.active--;
      if (this.queue.length > 0) {
        this.queue.shift()();
      }
    }
  }
}
const limiter = new ConcurrencyLimiter(3);
const tasks = urls.map(url => limiter.run(() => fetch(url)));
const limitedResults = await Promise.all(tasks);
// Batch processing with controlled concurrency
async function processBatch(items, fn, concurrency = 10) {
  const results = [];
  for (let i = 0; i < items.length; i += concurrency) {
    const batch = items.slice(i, i + concurrency);
    const batchResults = await Promise.all(batch.map(fn));
    results.push(...batchResults);
  }
  return results;
}
Why it matters: This is a critical production concern for any application processing bulk data, sending notifications, or calling external APIs. Interviewers ask it to verify you have thought about backpressure and resource management beyond simply running everything with Promise.all.
Real applications: Email notification services sending to 100,000 subscribers use concurrency limiting to respect provider API rate limits. Database migration scripts use limited parallelism to update millions of records without exhausting the connection pool or triggering rate-limit errors.
Common mistakes: Using Promise.all on thousands of items without a concurrency limit causes "too many open files" errors and overwhelms downstream APIs, causing cascading failures. Developers also forget to handle failures within the limited pool, causing the entire queue to stall indefinitely.
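A sketch of recording failures inside the limited pool so one rejection cannot abort the whole batch, reusing limit and urls from the example above:
const outcomes = await Promise.all(
  urls.map(url =>
    limit(() => fetch(url).then(r => r.json()))
      .then(value => ({ ok: true, value }))
      .catch(error => ({ ok: false, error })) // failure recorded, slot released either way
  )
);
const succeeded = outcomes.filter(o => o.ok).map(o => o.value);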
AbortController issues an AbortSignal that async operations can observe; when abort() is called on the controller, all listening operations are immediately cancelled. AbortSignal.timeout() (Node.js 18+) provides a convenient shorthand for creating signals that auto-cancel after a specified duration.
// Cancel a fetch request
const controller = new AbortController();
const { signal } = controller;
// Set a timeout to auto-cancel
setTimeout(() => controller.abort(), 5000); // Cancel after 5 seconds
try {
  const response = await fetch('https://api.example.com/data', { signal });
  const data = await response.json();
} catch (err) {
  if (err.name === 'AbortError') {
    console.log('Request was cancelled');
  } else {
    throw err;
  }
}
// Cancel with AbortSignal.timeout() (Node.js 18+)
const response = await fetch(url, {
  signal: AbortSignal.timeout(5000) // Built-in timeout signal
});
// Cancel multiple operations with one controller
const multiController = new AbortController();
const [users, orders] = await Promise.all([
  fetch('/api/users', { signal: multiController.signal }),
  fetch('/api/orders', { signal: multiController.signal })
]);
// Cancels both requests (if still in flight)
multiController.abort();
// Use with custom async operations
async function longRunningTask(signal) {
  for (let i = 0; i < 1000; i++) {
    if (signal?.aborted) {
      throw new Error('Task cancelled');
    }
    await processItem(i);
  }
}
const ac = new AbortController();
longRunningTask(ac.signal);
setTimeout(() => ac.abort(), 10000); // Cancel after 10s
Why it matters: Without cancellation support, long-running operations waste server resources even after clients disconnect. Interviewers test this to assess whether you design APIs that are resource-efficient and responsive to client cancellations in streaming or expensive computation scenarios.
Real applications: GraphQL subscriptions automatically cancel upstream datastore queries when the client disconnects. Server-sent event streams and expensive search operations use AbortController so the frontend can cancel when the user navigates away, freeing server resources immediately.
Common mistakes: Catching errors without checking err.name === 'AbortError' causes normal cancellations to be logged as unexpected errors. Creating a new AbortController per request rather than sharing one controller also misses the opportunity to cancel multiple related operations simultaneously.
// AbortSignal.timeout() — auto-abort after duration
async function fetchWithTimeout(url, ms = 5000) {
  const response = await fetch(url, {
    signal: AbortSignal.timeout(ms)
  });
  return response.json();
}
// AbortSignal.any() — abort on ANY condition (Node.js 20+)
async function fetchWithMultipleCancellations(url) {
  const userCancel = new AbortController();
  const pageUnload = new AbortController();
  // Cancel button handler (browser-style handlers shown for illustration)
  cancelButton.onclick = () => userCancel.abort();
  // Page navigation handler
  window.onbeforeunload = () => pageUnload.abort();
  const signal = AbortSignal.any([
    userCancel.signal,         // User clicks cancel
    pageUnload.signal,         // User navigates away
    AbortSignal.timeout(30000) // 30 second timeout
  ]);
  try {
    const response = await fetch(url, { signal });
    return await response.json();
  } catch (err) {
    if (err.name === 'AbortError') {
      console.log('Request cancelled:', signal.reason);
    }
    throw err;
  }
}
// Using with async iteration
async function* streamData(url, signal) {
  const response = await fetch(url, { signal });
  const reader = response.body.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      yield value;
    }
  } finally {
    reader.releaseLock();
  }
}
// Consume with cancellation
const ac = new AbortController();
for await (const chunk of streamData('/api/stream', ac.signal)) {
  process(chunk);
  if (shouldStop) ac.abort();
}
Why it matters: These are modern Node.js APIs that simplify cancellation composition, and knowing them signals familiarity with current Node.js 20+ features. Interviewers look for whether you apply them to create clean, maintainable timeout logic rather than building complex manual wrappers using Promise.race and setTimeout.
Real applications: Long-running SSE streams use AbortSignal.any() to cancel when either the client disconnects, a timeout occurs, or an admin shutdown command fires — all composable without separate code paths for each condition. Database queries use AbortSignal.timeout() to enforce maximum query durations.
Common mistakes: Using Promise.race with a manual timeout promise instead of AbortSignal.timeout() is more verbose and leaves the cancelled operation running in the background wasting resources. Developers also miss the signal.reason property which identifies why cancellation occurred, useful for logging and debugging.
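For contrast, a sketch of the manual Promise.race timeout the text warns against; note the losing fetch keeps running in the background:
// Manual timeout: rejects after 5s, but the fetch itself is never cancelled
const winner = await Promise.race([
  fetch('/api/data'),
  new Promise((_, reject) => setTimeout(() => reject(new Error('timeout')), 5000))
]);
// AbortSignal.timeout() aborts the underlying request, freeing its resources
const timedResponse = await fetch('/api/data', { signal: AbortSignal.timeout(5000) });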
async function retryWithBackoff(fn, options = {}) {
  const {
    maxRetries = 3,
    baseDelay = 1000,
    maxDelay = 30000,
    shouldRetry = (err) => true
  } = options;
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries || !shouldRetry(err)) {
        throw err;
      }
      // Exponential backoff with jitter
      const delay = Math.min(
        baseDelay * Math.pow(2, attempt) + Math.random() * 1000,
        maxDelay
      );
      console.log(`Retry ${attempt + 1}/${maxRetries} after ${delay.toFixed(0)}ms`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
// Usage — retry HTTP requests
const data = await retryWithBackoff(
  async (attempt) => {
    const res = await fetch('/api/data');
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
  },
  {
    maxRetries: 3,
    baseDelay: 1000,
    shouldRetry: (err) => {
      // Only retry on server errors and network failures
      const status = err.message.match(/HTTP (\d+)/)?.[1];
      return !status || parseInt(status) >= 500;
    }
  }
);
// Usage — retry database operations
const result = await retryWithBackoff(
  () => db.query('SELECT * FROM users'),
  {
    maxRetries: 5,
    baseDelay: 500,
    shouldRetry: (err) => err.code === 'ECONNREFUSED' || err.code === 'ETIMEDOUT'
  }
);
Why it matters: Production systems must handle transient failures gracefully and retry logic is a fundamental resilience pattern. Interviewers assess whether you understand exponential backoff, jitter, and the importance of distinguishing retryable from non-retryable errors when building reliable distributed system clients.
Real applications: Payment gateways retry transaction requests when the provider returns temporary 503 errors. Database clients automatically retry connection attempts during brief network disruptions or database failovers without surfacing errors to end users.
Common mistakes: Retrying without backoff causes a thundering herd where all clients simultaneously retry against a struggling service, making the outage worse. Developers also forget to set a maximum retry count, creating potential infinite retry loops that prevent the error from ever surfacing to monitoring systems.
AsyncLocalStorage (from the async_hooks module) automatically propagates a context object through the entire chain of async callbacks, Promises, and event handlers initiated within a given scope, without requiring it to be passed as a function parameter. It is the Node.js equivalent of Java's ThreadLocal or a request-scoped dependency injection container. Each incoming HTTP request can have its own isolated context containing user identity, request ID, and transaction state that flows naturally through all downstream async calls.
const { AsyncLocalStorage } = require('async_hooks');
const crypto = require('crypto');
// Create a store for request context
const requestContext = new AsyncLocalStorage();
// Express middleware — set context per request
function contextMiddleware(req, res, next) {
  const context = {
    requestId: crypto.randomUUID(),
    userId: req.user?.id,
    startTime: Date.now()
  };
  // All async operations within this callback inherit the context
  requestContext.run(context, () => next());
}
app.use(contextMiddleware);
// Access context anywhere in the request chain — no parameter passing
function getRequestContext() {
  return requestContext.getStore();
}
// In a service layer
async function createOrder(orderData) {
  const ctx = getRequestContext();
  logger.info('Creating order', {
    requestId: ctx?.requestId,
    userId: ctx?.userId
  });
  return db.orders.create(orderData);
}
// In a database layer
async function executeQuery(sql, params) {
  const ctx = getRequestContext();
  const start = Date.now();
  const result = await db.query(sql, params);
  logger.debug('Query executed', {
    requestId: ctx?.requestId,
    sql,
    duration: Date.now() - start
  });
  return result;
}
// Logger that auto-includes context
const contextLogger = {
  info: (msg, meta = {}) => {
    const ctx = getRequestContext();
    winston.info(msg, {
      ...meta,
      requestId: ctx?.requestId,
      userId: ctx?.userId
    });
  }
};
Why it matters: Large Node.js applications need contextual data — request IDs, user IDs, transaction state — to flow through multiple service layers without polluting every function signature. Interviewers test this for senior roles when assessing request-scoped logging, tracing, and context propagation design skills.
Real applications: OpenTelemetry distributed tracing uses AsyncLocalStorage to propagate trace context through entire request lifecycles. Custom loggers that automatically include the request ID in every log message across all service layers use it to eliminate manual context threading through function parameters.
Common mistakes: Using global variables or module-level caches for request-scoped data causes context leakage between concurrent requests — each request needs its own isolated context. Developers also overuse AsyncLocalStorage for data that could simply be passed as a function argument, adding unnecessary hidden dependencies to the codebase.
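A sketch of the leakage problem versus the isolated version, reusing requestContext from above (currentUser and doWork are hypothetical):
// LEAKY: one module-level variable shared by every in-flight request
let currentUser = null;
async function handleRequestLeaky(req) {
  currentUser = req.user;   // request B can overwrite this while A is awaiting
  await doWork();
  console.log(currentUser); // may print B's user inside A's request
}
// ISOLATED: each run() call gets its own store, even under concurrency
async function handleRequestSafe(req) {
  await requestContext.run({ user: req.user }, async () => {
    await doWork();
    console.log(requestContext.getStore().user); // always this request's user
  });
}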