The Promise executor receives two callbacks, resolve and reject — calling either settles the Promise and triggers the appropriate handler chain. Once settled, a Promise's state is immutable; subsequent calls to resolve or reject are silently ignored.
const promise = new Promise((resolve, reject) => {
  setTimeout(() => {
    if (Math.random() > 0.5) resolve('Success!');
    else reject(new Error('Failed'));
  }, 1000);
});

promise
  .then(value => console.log('Resolved:', value))
  .catch(err => console.error('Rejected:', err.message))
  .finally(() => console.log('Always runs'));
Why it matters: Promises are the foundation of all modern async JavaScript — async/await desugars to Promises, fetch returns a Promise, and every combinator (Promise.all, race, any) builds on this primitive. You cannot write production JavaScript without understanding them deeply.
Real applications: Every HTTP request, database query, and file read in modern JavaScript returns a Promise. React Query, SWR, and Axios are all Promise-based libraries that build higher-level abstractions on top of this primitive, which your application depends on daily.
Common mistakes: Forgetting to return inside .then() callbacks (passes undefined to the next handler), not attaching .catch() (causes unhandled rejection warnings in production), and the "Promise constructor antipattern" — wrapping an already-returned Promise in new Promise() unnecessarily.
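A minimal sketch of the forgotten-return mistake, using hypothetical getUser/getOrders helpers:

```javascript
// Stand-ins for real API calls (hypothetical names).
const getUser = () => Promise.resolve({ id: 1 });
const getOrders = (id) => Promise.resolve(['order-' + id]);

// Buggy: getOrders(...) runs, but its Promise is not returned,
// so the next handler in the chain receives undefined.
const buggy = getUser().then(user => { getOrders(user.id); });

// Fixed: returning the inner Promise makes the chain wait for it.
const fixed = getUser().then(user => getOrders(user.id));
```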
const p1 = new Promise(resolve => setTimeout(() => resolve('done'), 1000));
// State: pending → fulfilled after 1s
const p2 = Promise.resolve('immediate'); // State: fulfilled immediately
const p3 = Promise.reject(new Error('bad')); // State: rejected immediately

// Settled state is frozen — subsequent calls are ignored
const p4 = new Promise((resolve, reject) => {
  resolve('first');
  resolve('second'); // ignored
  reject('error');   // ignored
});
await p4; // 'first'
Why it matters: Understanding that Promises are single-use objects with immutable settled states is critical for knowing why you cannot reuse or reset them. It also explains why every async operation must return a new Promise rather than resetting an existing one.
Real applications: Caching layers store settled Promises directly — when a user revisits a page, the app checks if the data Promise is already fulfilled and reads the cached value instantly without a new network request, eliminating redundant fetches.
Common mistakes: Confusing "resolved" with "fulfilled" (a Promise resolved with another pending Promise is still pending itself), trying to read Promise state synchronously (impossible — state is inspected only via handlers), and assuming a rejected Promise immediately throws an error synchronously.
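The resolved-vs-fulfilled distinction can be seen directly — a sketch resolving one Promise with another that is still pending:

```javascript
// The outer Promise is "resolved" immediately (its fate is locked to
// inner), but it is not "fulfilled" until inner settles ~50ms later.
const inner = new Promise(res => setTimeout(() => res('inner done'), 50));
const outer = new Promise(res => res(inner)); // resolved, still pending
outer.then(v => console.log(v)); // logs 'inner done' after ~50ms
```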
.then(onFulfilled) handles a fulfilled value and returns a new Promise, enabling chaining. .catch(onRejected) is syntactic sugar for .then(null, onRejected) and intercepts any rejection from the chain above it. .finally(callback) runs on both fulfillment and rejection — ideal for cleanup — without modifying the value or reason passed to the next handler.
fetch('/api/user')
  .then(response => {
    if (!response.ok) throw new Error('HTTP ' + response.status);
    return response.json();
  })
  .then(data => console.log('User:', data))
  .catch(err => console.error('Error:', err.message))
  .finally(() => hideSpinner()); // always hides loading indicator

// .finally passes through the original value unchanged
Promise.resolve(42)
  .finally(() => console.log('cleanup')) // runs
  .then(val => console.log(val)); // still 42
Why it matters: The key insight is that each handler returns a new Promise, so errors propagate down the chain until the nearest .catch(). One central handler can catch errors from all preceding steps, making pipeline error handling clean and predictable.
Real applications: API service layers use a single .catch() at the end to normalize all HTTP errors into domain-level error objects. .finally() is universally used to hide loading spinners, release locks, and close connections after async operations regardless of outcome.
Common mistakes: Placing .catch() in the middle of a chain (errors after it continue normally), not returning the async result inside a handler (the next .then() receives undefined), and using .then(fn, fn) instead of .then().catch() (the rejection handler misses errors thrown in onFulfilled).
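A small sketch of the .then(fn, fn) pitfall: the rejection handler passed as the second argument cannot catch errors thrown by the first, but a trailing .catch() can.

```javascript
const missed = Promise.resolve(1)
  .then(
    () => { throw new Error('boom'); },
    err => 'handled: ' + err.message // never runs for the throw above
  )
  .catch(err => 'caught later: ' + err.message); // this one catches it

const caught = Promise.resolve(1)
  .then(() => { throw new Error('boom'); })
  .catch(err => 'caught: ' + err.message); // downstream catch sees it
```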
Promise.all(iterable) accepts an iterable of Promises and returns a single Promise that resolves to an array of fulfilled values in input order when every Promise fulfills, or rejects immediately with the first rejection — fail-fast behavior. Non-Promise values are wrapped with Promise.resolve() automatically, and an empty iterable yields an already-fulfilled Promise of an empty array (handlers attached to it still run asynchronously).
const [user, orders, reviews] = await Promise.all([
  fetch('/api/user').then(r => r.json()),
  fetch('/api/orders').then(r => r.json()),
  fetch('/api/reviews').then(r => r.json()),
]); // all 3 requests run in parallel

// Fail-fast on first rejection
Promise.all([
  Promise.resolve(1),
  Promise.reject('error'),
  Promise.resolve(3),
]).catch(err => console.log(err)); // 'error'

// Results maintain INPUT order regardless of completion order
const results = await Promise.all([slowP, fastP]); // [slowResult, fastResult]
Why it matters: Promise.all() is the standard pattern for parallelizing independent async operations. Awaiting 3 API calls sequentially takes 3× longer than using Promise.all() — this is one of the most common and impactful async performance optimizations in JavaScript.
Real applications: Dashboard pages fetch user profile, notifications, and analytics data in parallel at page load. Batch database writes, parallel file processing, and test fixture setup all rely on Promise.all() to run concurrently rather than sequentially.
Common mistakes: Using sequential await for independent operations instead of Promise.all() (easily the most common async performance mistake), and not switching to Promise.allSettled() when partial failure is acceptable — a single rejection aborts the entire operation with Promise.all().
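The sequential-vs-parallel difference is easy to measure. A sketch with delay() standing in for real API calls:

```javascript
// delay() simulates an independent async operation taking `ms` ms.
const delay = (ms, value) =>
  new Promise(res => setTimeout(() => res(value), ms));

async function sequential() {
  const start = Date.now();
  const a = await delay(100, 'a'); // waits 100ms
  const b = await delay(100, 'b'); // then another 100ms
  return { values: [a, b], ms: Date.now() - start }; // ~200ms total
}

async function parallel() {
  const start = Date.now();
  const values = await Promise.all([delay(100, 'a'), delay(100, 'b')]);
  return { values, ms: Date.now() - start }; // ~100ms total
}
```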
Promise.race(iterable) returns a Promise that settles with the outcome of the first Promise to settle — whether fulfilled or rejected. The underlying operations keep running, but their results are discarded. Unlike Promise.any(), which ignores rejections and waits for the first fulfillment, Promise.race() settles on the first result regardless of success or failure — making it ideal for timeout enforcement.
function withTimeout(promise, ms) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error(`Timeout after ${ms}ms`)), ms)
  );
  return Promise.race([promise, timeout]);
}
const data = await withTimeout(fetch('/api/slow'), 5000);

// Race two mirrors for fastest response
const fastest = await Promise.race([
  fetch('https://cdn1.example.com/data.json'),
  fetch('https://cdn2.example.com/data.json'),
]).then(r => r.json());
Why it matters: Promise.race() is the primary mechanism for adding timeouts to Promise-based operations. HTTP clients, WebSocket connections, and database queries need timeout enforcement to prevent hanging requests from accumulating and eventually exhausting server resources.
Real applications: API gateways add per-request timeouts using Promise.race() against a timeout Promise; CDN failover logic races multiple edge servers for the fastest response; test frameworks automatically fail hanging tests once a time limit is exceeded.
Common mistakes: Confusing race() with any() — if the fastest promise rejects, race() rejects too, but any() ignores it and continues waiting. Also, Promise.race([]) with an empty array returns a Promise that never settles, causing silent hangs in production code.
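A sketch contrasting race() and any() when the fastest Promise rejects:

```javascript
const fastFail = new Promise((_, rej) =>
  setTimeout(() => rej(new Error('fast fail')), 10));
const slowOk = new Promise(res =>
  setTimeout(() => res('slow ok'), 50));

// race(): first settle wins — here, the rejection at 10ms.
const raced = Promise.race([fastFail, slowOk])
  .catch(err => 'race rejected: ' + err.message);

// any(): skips the rejection and waits for the fulfillment at 50ms.
const anyd = Promise.any([fastFail, slowOk]);
```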
Promise.allSettled(iterable) waits for all promises to settle — regardless of whether they fulfill or reject — and returns an array of result descriptors, each with a status of "fulfilled" or "rejected" and either a value or reason. It never rejects, making it the right choice when you need results from all operations even if some fail.
const results = await Promise.allSettled([
  fetch('/api/user'),
  fetch('/api/orders'),
  fetch('/api/down'), // fetch rejects on network failure (note: a 404 still fulfills)
]);
results.forEach((result, i) => {
  if (result.status === 'fulfilled') {
    console.log(`Request ${i}: OK`, result.value);
  } else {
    console.error(`Request ${i}: Failed`, result.reason);
  }
});
const successes = results
  .filter(r => r.status === 'fulfilled')
  .map(r => r.value);
Why it matters: When running independent async operations where partial success is acceptable — sending notifications, fetching optional widgets — you want all results regardless of individual failures. allSettled() gives full visibility without the fail-fast abort of Promise.all().
Real applications: Bulk email or SMS notification systems send to all recipients and report success/failure counts at the end. Dashboard widget loaders fetch from multiple optional data sources and render whatever is available even when some sources are down.
Common mistakes: Using Promise.all() when partial success is acceptable (one failure aborts everything), and forgetting to check result.status before accessing result.value — accessing value on a rejected result returns undefined, not an error.
Each of .then(), .catch(), and .finally() returns a new Promise, enabling chaining. For .then() and .catch(), the new Promise resolves with the handler's return value: a plain value fulfills the next Promise directly, while a returned Promise makes the chain wait for it to settle before continuing (.finally() passes the original value or reason through, ignoring its callback's return value). This creates sequential async pipelines where each step processes the prior step's result.
// Each .then() passes its return value to the next
Promise.resolve(1)
  .then(n => n + 1) // 2
  .then(n => n * 3) // 6
  .then(n => Promise.resolve(n + 4)) // waits for nested Promise
  .then(n => console.log(n)); // 10

// Real API pipeline
fetchUser(userId)
  .then(user => fetchOrders(user.id)) // returns new Promise
  .then(orders => processOrders(orders))
  .catch(err => console.error(err)); // catches all above errors
Why it matters: Promise chaining is the foundation of async pipeline patterns. Understanding that each handler creates a new Promise is essential for reasoning about when values propagate, how errors skip fulfilled handlers, and why returning is required — not optional — inside handlers.
Real applications: Authentication flows chain: validate token → look up user → check permissions → load resource. Payment processors chain: validate card → create charge → update inventory → send receipt — each step runs only if the prior step succeeded.
Common mistakes: Nesting .then() handlers instead of returning them (creates promise hell), forgetting to return async operations inside a handler (the next handler fires immediately with undefined), and assuming .catch() stops the chain — subsequent .then() calls after .catch() still execute on recovery.
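A sketch of nested handlers versus a flat chain, with step() as a hypothetical async transform:

```javascript
const step = x => Promise.resolve(x + 1);

// Nested: hard to read, and each level needs its own error handling.
function nested() {
  return step(1).then(a => {
    return step(a).then(b => {
      return step(b).then(c => c);
    });
  });
}

// Flat: each handler returns the next Promise; one .catch() covers all.
function flat() {
  return step(1)
    .then(a => step(a))
    .then(b => step(b));
}
```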
The constructor form is new Promise(executor), where the executor function receives two callbacks: resolve(value) to fulfill and reject(reason) to reject. The executor runs synchronously when the Promise is created, but .then() handlers always run asynchronously in the microtask queue. Only the first call to resolve or reject has effect — subsequent calls are silently ignored.
const fs = require('node:fs');

function delay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

function readFileAsync(path) {
  return new Promise((resolve, reject) => {
    fs.readFile(path, 'utf8', (err, data) => {
      if (err) reject(err);
      else resolve(data);
    });
  });
}

// Shorthand constructors
const resolved = Promise.resolve(42);
const rejected = Promise.reject(new Error('fail'));

await delay(1000);
const content = await readFileAsync('./config.json');
Why it matters: Being able to wrap any callback or event into a Promise is a fundamental skill. Legacy APIs, third-party SDKs, and DOM events all use callbacks — wrapping them in Promises enables them to work with async/await and the rest of the Promise ecosystem seamlessly.
Real applications: Wrapping IndexedDB operations, WebSocket message delivery, geolocation API, and browser permissions API into Promises allows them to be composed with Promise.all() and used with async/await cleanly.
Common mistakes: The "Promise constructor antipattern" — wrapping an already-Promise-returning function in new Promise() unnecessarily. Also, while a synchronous throw inside the executor is caught and rejects the Promise, a throw inside an async callback within the executor (e.g., inside setTimeout) is not caught — such errors must be routed through reject() explicitly, or the Promise stays pending forever.
Rejections propagate down the chain to the nearest .catch() handler, skipping all fulfillment handlers along the way. To handle different error types, inspect the error in .catch() and either handle it or rethrow for unknown errors. Errors thrown inside .then() handlers are also caught by downstream .catch() handlers.
fetchUser()
  .then(user => fetchPermissions(user.id))
  .then(perms => loadDashboard(perms))
  .catch(err => {
    // Catches errors from ALL steps above
    if (err instanceof AuthError) return redirectToLogin();
    if (err instanceof NetworkError) return showOfflineMessage();
    throw err; // rethrow unknown errors
  })
  .catch(err => logUnexpectedError(err)); // safety net

// Global fallback (Node.js)
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
});
Why it matters: Unhandled Promise rejections crash Node.js processes in production (since Node 15) and cause silent failures in browsers. Proper error handling distinguishes production-quality code from brittle prototypes — failing to rethrow unknown errors loses critical diagnostic information.
Real applications: API service layers wrap all fetch calls in a centralized error handler that translates HTTP status codes to domain errors (401→AuthError, 404→NotFoundError), letting each feature decide how to display errors based on their type.
Common mistakes: Swallowing errors with an empty .catch(() => {}), creating multiple unconnected .catch() silos that each only handle part of a chain, and losing async errors by not returning Promise-returning function calls inside .then() handlers.
Promise.any(iterable) resolves with the first fulfilled Promise, ignoring all rejections until every Promise has rejected — at which point it rejects with an AggregateError containing all rejection reasons. This is the "succeed-fast" counterpart to Promise.all()'s fail-fast behavior. Compared to Promise.race(), which settles on the first settled (including rejected) result, Promise.any() skips rejections and waits for the first success.
// First successful fetch wins — rejections are ignored
const data = await Promise.any([
  fetch('https://api1.example.com/data'),
  fetch('https://api2.example.com/data'),
  fetch('https://api3.example.com/data'),
]).then(r => r.json());

// AggregateError when ALL reject
try {
  await Promise.any([Promise.reject('E1'), Promise.reject('E2')]);
} catch (err) {
  console.log(err instanceof AggregateError); // true
  console.log(err.errors); // ['E1', 'E2']
}
// Summary: all()=all succeed, allSettled()=wait all, race()=first settled, any()=first fulfilled
Why it matters: Promise.any() fills the gap that Promise.race() couldn't — when you have multiple fallback sources and want the fastest successful response, ignoring transient failures from slower ones. It requires handling AggregateError when all might fail.
Real applications: Content delivery from multiple CDN regions uses Promise.any() to serve from whichever edge node responds first successfully. Database read-replica selection picks the nearest healthy replica automatically when others have transient issues.
Common mistakes: Confusing Promise.any() (ES2021) with Promise.race() — race() rejects on the first rejection, any() only rejects when all fail. Also not handling AggregateError when all promises might legitimately reject, leading to unhandled rejection warnings.
Retry logic builds naturally on .catch(): on rejection, check remaining retries, wait, then call the operation again. Exponential backoff — doubling the delay with each attempt — is the industry-standard retry strategy to prevent overwhelming struggling servers.
function retry(fn, retries = 3, delay = 1000) {
  return fn().catch(err => {
    if (retries <= 0) throw err;
    return new Promise(r => setTimeout(r, delay))
      .then(() => retry(fn, retries - 1, delay * 2)); // exponential backoff
  });
}

// Usage
const data = await retry(
  () => fetch('/api/unreliable').then(r => r.json()),
  3,
  500 // 500ms → 1s → 2s
);

// With transient error detection
async function retryTransient(fn, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try { return await fn(); }
    catch (err) {
      if (!isTransientError(err) || attempt === maxRetries - 1) throw err;
      await new Promise(r => setTimeout(r, 2 ** attempt * 200));
    }
  }
}
Why it matters: Network calls fail transiently — DNS timeouts, rate limits, brief outages. Retry logic with exponential backoff is a fundamental reliability pattern; without it, one temporary failure causes a visible error that would have succeeded automatically 500ms later.
Real applications: AWS SDK, Google Cloud, and Stripe client libraries all implement exponential backoff with jitter internally. E-commerce checkout services retry payment gateway calls to handle transient processor timeouts automatically.
Common mistakes: Retrying unconditionally regardless of error type (retrying 400 Bad Request or 401 Unauthorized is pointless), not adding jitter to exponential backoff (multiple clients retry in synchronized waves, creating a thundering herd), and retrying indefinitely without a maximum attempt limit.
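One way to add jitter is the "full jitter" variant, where the actual delay is a random value up to the exponential cap — a sketch:

```javascript
// Full-jitter backoff: pick a random delay in [0, min(cap, base * 2^attempt)).
// Randomizing spreads out clients that would otherwise retry in sync.
function backoffWithJitter(attempt, base = 200, cap = 10_000) {
  const exp = Math.min(cap, base * 2 ** attempt);
  return Math.random() * exp;
}
```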
Promise.all() from scratch requires tracking the count of resolved promises while maintaining input order in results — since promises may resolve in any order but results must match input positions. Wrap each input in Promise.resolve() to handle non-Promise values, reject immediately on any single failure, and resolve the outer Promise only when the resolved count equals the total input length.
function promiseAll(promises) {
  return new Promise((resolve, reject) => {
    const arr = Array.from(promises);
    if (arr.length === 0) return resolve([]);
    const results = new Array(arr.length);
    let resolved = 0;
    arr.forEach((p, i) => {
      Promise.resolve(p).then(value => {
        results[i] = value; // preserve order by index
        if (++resolved === arr.length) resolve(results);
      }).catch(reject); // fail-fast on first rejection
    });
  });
}

// Test
const [a, b, c] = await promiseAll([
  Promise.resolve(1),
  new Promise(r => setTimeout(() => r(2), 100)),
  Promise.resolve(3),
]);
console.log(a, b, c); // 1 2 3
Why it matters: This is a classic interview exercise testing deep Promise understanding — specifically concurrency, result ordering, fail-fast behavior, and handling mixed Promise/non-Promise values. It demonstrates you know how Promises work internally, not just how to use them.
Real applications: Libraries needing custom Promise orchestration — like a priority-based Promise.all() that resolves high-priority results first — implement their own combinators using this same pattern as a foundation.
Common mistakes: Using results.push(value) instead of results[i] = value (breaks result ordering when promises resolve out of input order), not wrapping each input with Promise.resolve() (crashes on non-Promise values), and not handling the empty array edge case.
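The push-vs-index bug is easy to reproduce — a deliberately broken variant that records completion order instead of input order:

```javascript
// BUGGY variant for illustration: results.push() appends in the order
// promises COMPLETE, so fast promises jump ahead of slow earlier ones.
function promiseAllPush(promises) {
  return new Promise((resolve, reject) => {
    const results = [];
    let done = 0;
    promises.forEach(p => {
      Promise.resolve(p).then(value => {
        results.push(value); // should be results[i] = value
        if (++done === promises.length) resolve(results);
      }, reject);
    });
  });
}
```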
The event loop distinguishes the microtask queue (Promise handlers via .then/.catch/.finally, queueMicrotask(), MutationObserver) from the macrotask queue (setTimeout, setInterval, I/O callbacks). After each macrotask completes, the engine fully drains the microtask queue before picking the next macrotask, giving microtasks effectively higher priority.
console.log('1: sync');
setTimeout(() => console.log('4: macrotask'), 0);
Promise.resolve().then(() => console.log('2: microtask'));
queueMicrotask(() => console.log('3: microtask'));
console.log('1.5: sync');
// Output: 1 → 1.5 → 2 → 3 → 4
// Nested microtasks all run before the next macrotask
Promise.resolve().then(() => {
  console.log('micro-1');
  Promise.resolve().then(() => console.log('micro-2')); // added during drain
});
setTimeout(() => console.log('macro'), 0);
// micro-1 → micro-2 → macro
Why it matters: Understanding the microtask/macrotask split explains why Promise-based code behaves differently from setTimeout-based code, and why mixing them can produce surprising execution orders. It's fundamental knowledge for debugging async bugs and writing correct async code.
Real applications: React batches state updates and flushes them after the current synchronous work completes; Vue's nextTick() runs its callbacks as microtasks after DOM updates. Framework authors rely on this scheduling model to reason about rendering guarantees.
Common mistakes: Creating unbounded chains of microtasks that starve the macrotask queue (e.g., infinite recursion via Promise.resolve().then(recurse)), assuming async/await runs synchronously (each await creates a microtask suspension), and confusing microtask priority with synchronous execution.
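The drain order can be verified by recording labels into an array instead of logging — a sketch:

```javascript
const order = [];
order.push('sync-1');
setTimeout(() => order.push('macrotask'), 0);
Promise.resolve().then(() => {
  order.push('micro-1');
  Promise.resolve().then(() => order.push('micro-2')); // queued mid-drain
});
queueMicrotask(() => order.push('micro-3'));
order.push('sync-2');
// Final order: sync-1, sync-2, micro-1, micro-3, micro-2, macrotask
// (micro-2 was queued during the drain, so it runs after micro-3
// but still before any macrotask.)
```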
AbortController provides standard cancellation: create a controller, pass controller.signal to cancellable APIs like fetch(), then call controller.abort(). The API rejects with an AbortError, which you distinguish from real errors by checking err.name === 'AbortError'.
const controller = new AbortController();
fetch('/api/slow-endpoint', { signal: controller.signal })
  .then(res => res.json())
  .catch(err => {
    if (err.name === 'AbortError') console.log('Request cancelled');
    else throw err;
  });

// React: cancel on unmount
useEffect(() => {
  const ctrl = new AbortController();
  fetch('/api/data', { signal: ctrl.signal })
    .then(r => r.json())
    .then(setData)
    .catch(err => { if (err.name !== 'AbortError') setError(err); });
  return () => ctrl.abort(); // cleanup cancels in-flight request
}, []);
Why it matters: Without cancellation, navigating away from a page mid-request causes "state update on unmounted component" errors in React and wastes bandwidth. AbortController is the modern, standardized solution — it also works with fetch streams, event listeners, and many Web APIs.
Real applications: Search autocomplete cancels the previous fetch before firing a new one on each keystroke. Navigation transitions abort pending data fetches. Form submissions cancel duplicate requests when a button is clicked multiple times quickly.
Common mistakes: Misspelling the check (err.name === 'AbortError' is case-sensitive), not passing the signal to nested fetch calls inside a parent operation, and checking signal.aborted synchronously instead of catching the AbortError asynchronously.
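The same signal wiring works for hand-rolled Promises, not just fetch(). A sketch of an abortable delay (no network needed) that rejects with an AbortError when the signal fires; it assumes a runtime where DOMException and AbortController are globals (modern browsers, Node 17+):

```javascript
function abortableDelay(ms, signal) {
  return new Promise((resolve, reject) => {
    // Already aborted before we started? Reject immediately.
    if (signal?.aborted) return reject(new DOMException('Aborted', 'AbortError'));
    const id = setTimeout(resolve, ms);
    // AbortSignal is an EventTarget: listen for the abort event.
    signal?.addEventListener('abort', () => {
      clearTimeout(id); // cancel the pending timer
      reject(new DOMException('Aborted', 'AbortError'));
    }, { once: true });
  });
}
```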
Promisification wraps a callback-based function in new Promise() so it returns a Promise instead. The util.promisify() utility automates this for any function following the standard callback(error, result) convention. Modern Node.js APIs provide native Promise equivalents (fs.promises, dns.promises) that skip the need for promisification entirely.
// Manual promisify
function myPromisify(fn) {
  return function (...args) {
    return new Promise((resolve, reject) => {
      fn(...args, (err, result) => {
        if (err) reject(err);
        else resolve(result);
      });
    });
  };
}

// Built-in util.promisify
const { promisify } = require('node:util');
const fs = require('node:fs');
const readFile = promisify(fs.readFile);
const content = await readFile('./config.json', 'utf8');

// Modern: use fs.promises directly (preferred)
const content2 = await fs.promises.readFile('./config.json', 'utf8');
Why it matters: Promisification is the bridge between the legacy callback world and modern async/await. Any Node.js codebase mixing old and new APIs needs it, and understanding it reveals how Promise constructors work under the hood for wrapping arbitrary async patterns.
Real applications: Migrating legacy Node.js codebases to async/await, wrapping third-party SDKs that still use callbacks (AWS SDK v2), and wrapping browser APIs like the Geolocation API and IndexedDB for cleaner async/await usage.
Common mistakes: Not binding this when promisifying object methods (promisify(obj.method.bind(obj))), using promisification on functions that call the callback multiple times (Promises only resolve once — subsequent resolutions are ignored), and forgetting the error-first convention requirement.
Promise.withResolvers() (ES2024) returns an object with three properties: promise, resolve, and reject — the "deferred" pattern previously requiring the awkward workaround of leaking resolve/reject out of the executor via outer variables. This is useful whenever a Promise needs to be resolved or rejected from outside its executor, such as in response to events or WebSocket messages.
// Old workaround — leaking resolve/reject out of the executor
let resolveFn, rejectFn;
const deferred = new Promise((res, rej) => {
  resolveFn = res;
  rejectFn = rej;
});

// New ES2024 — clean and explicit
const { promise, resolve, reject } = Promise.withResolvers();

// Real use: wait for a DOM event
function waitForClick(button) {
  const { promise, resolve } = Promise.withResolvers();
  button.addEventListener('click', resolve, { once: true });
  return promise;
}
await waitForClick(submitBtn);

// WebSocket request-response matching
const pending = new Map(); // message id → { promise, resolve, reject }
ws.on('message', ({ id, result, error }) => {
  const def = pending.get(id);
  if (def) error ? def.reject(error) : def.resolve(result);
});
Why it matters: The deferred pattern is extremely common in production code — virtually every WebSocket RPC system, event-to-Promise bridge, and "signal" implementation uses it. Promise.withResolvers() standardizes a decade-old idiom that every developer reinvented independently.
Real applications: WebSocket RPC systems create one deferred per outgoing message, storing resolve/reject by message ID — when the server responds, the matching deferred is resolved. Drag-and-drop systems defer "wait for drop" to coordinate async UI state transitions.
Common mistakes: Using the deferred pattern when a regular Promise constructor suffices (the executor placement is usually cleaner for single-operation Promises), and not realizing that subsequent calls to resolve/reject after the Promise settles are silently ignored just like in a regular Promise.
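A runnable sketch of the request/response-matching idea, using the manual deferred pattern so it also works on engines without Promise.withResolvers(); the loopback transport here is a hypothetical stand-in for a real WebSocket:

```javascript
const pending = new Map(); // message id → { resolve, reject }
let nextId = 0;

// Send a message and return a Promise settled by the matching response.
function request(transport, payload) {
  const id = ++nextId;
  let resolve, reject;
  const promise = new Promise((res, rej) => { resolve = res; reject = rej; });
  pending.set(id, { resolve, reject });
  transport.send({ id, payload });
  return promise;
}

// Route an incoming message to the deferred created for its id.
function onMessage({ id, result, error }) {
  const def = pending.get(id);
  if (!def) return;
  pending.delete(id);
  error ? def.reject(error) : def.resolve(result);
}

// Loopback transport for demonstration: echoes the payload back async.
const loopback = {
  send: msg => setTimeout(() => onMessage({ id: msg.id, result: msg.payload }), 0),
};
```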