// Multi-document transaction: transfer money between accounts
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    const accounts = db.collection('accounts');
    const from = await accounts.findOne({ _id: "acct1" }, { session });
    if (!from || from.balance < amount) throw new Error("Insufficient funds");
    await accounts.updateOne(
      { _id: "acct1" }, { $inc: { balance: -amount } }, { session });
    await accounts.updateOne(
      { _id: "acct2" }, { $inc: { balance: +amount } }, { session });
  });
} finally {
  await session.endSession();
}
Why it matters: Multi-document transactions close the gap between MongoDB and SQL databases for use cases that require atomicity across multiple documents or collections — a major architectural consideration in financial and inventory systems.
Real applications: Fund transfers, inventory reservation + order creation, and multi-collection audit trail writes all require transactions to ensure all-or-nothing atomicity.
Common mistakes: Using transactions unnecessarily for single-document operations (they're already atomic) or for high-volume writes (transactions add overhead; overuse causes performance degradation).
Prefer session.withTransaction(), which automatically handles commit, abort on error, and retry on transient errors. Alternatively, use manual session.startTransaction(), session.commitTransaction(), and session.abortTransaction(). Every database operation inside a transaction must receive the session option — otherwise the operation runs outside the transaction context.
const { MongoClient } = require('mongodb');
const client = new MongoClient(MONGO_URI);

async function placeOrder(userId, cartItems) {
  const session = client.startSession();
  try {
    const result = await session.withTransaction(async () => {
      const db = client.db('shop');
      // All operations MUST pass { session }
      const order = await db.collection('orders').insertOne(
        { userId, items: cartItems, status: 'pending', createdAt: new Date() },
        { session }
      );
      // Decrement stock for each item
      for (const item of cartItems) {
        const res = await db.collection('products').updateOne(
          { _id: item.productId, stock: { $gte: item.qty } },
          { $inc: { stock: -item.qty } },
          { session }
        );
        // No matching document means insufficient stock: throw to abort
        if (res.matchedCount === 0) {
          throw new Error(`Insufficient stock for ${item.productId}`);
        }
      }
      return order.insertedId;
    });
    return result;
  } finally {
    await session.endSession(); // always end session!
  }
}
Why it matters: Correctly using sessions and the withTransaction API (vs manual transaction management) is critical for production reliability — withTransaction auto-retries on transient failures.
Real applications: E-commerce order placement: atomically create an order AND decrement product stock — preventing overselling where two concurrent orders could both see sufficient stock.
Common mistakes: Forgetting to pass { session } to one of the operations in a transaction — that operation runs outside the transaction and commits immediately, potentially leaving data in an inconsistent state.
Write concern controls how many replica set members must acknowledge a write. { w: "majority" } (the default for transactions) ensures a majority of replica set members have acknowledged the write before the commit is reported as successful. Read concern controls how current and isolated the data read during a transaction is: "snapshot" gives all reads in the transaction a consistent, majority-committed point-in-time view. The other read concern levels are "local", "majority", and "linearizable" (transactions themselves support only "local", "majority", and "snapshot").
// Transaction with explicit concern settings
const session = client.startSession();
await session.withTransaction(
  async () => {
    // operations...
    await db.collection('accounts').updateOne(
      { _id: "acct1" },
      { $inc: { balance: -1000 } },
      { session }
    );
  },
  {
    readConcern: { level: "snapshot" }, // consistent view
    writeConcern: { w: "majority" },    // majority acknowledgment
    readPreference: "primary"           // transactions must read from the primary
  }
);
// Write concern options:
// w: 0 — fire and forget (no acknowledgment)
// w: 1 — primary acknowledgment only
// w: "majority" — majority acknowledgment (safest; the implicit default since MongoDB 5.0)
// j: true — wait for journal flush (durability)
Why it matters: Understanding write concern and read concern is essential for building systems where data durability and consistency guarantees are required — particularly in financial applications.
Real applications: Banking applications use { w: "majority", j: true } for all financial writes — ensuring transactions are written to disk by a majority of replica set members before confirming success.
Common mistakes: Using { w: 0 } (fire and forget) for financial transactions — this provides no durability guarantees and silently discards write errors.
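The same durability settings can also be applied per operation, outside of transactions. A minimal sketch of the options shape a financial write might pass to updateOne; the function name and the 5-second timeout are illustrative choices, not anything mandated by the driver:

```javascript
// Sketch: write-concern options for a durable financial write.
// The writeConcern object shape is what the Node.js driver accepts as
// operation options; the helper name and timeout value are invented here.
function durableWriteOptions() {
  return {
    writeConcern: {
      w: "majority", // a majority of replica set members must acknowledge
      j: true,       // ...after writing to their on-disk journal
      wtimeout: 5000 // fail (rather than block forever) after 5s
    }
  };
}

// Usage (assuming `accounts` is a driver Collection):
// await accounts.updateOne(
//   { _id: "acct1" }, { $inc: { balance: -1000 } }, durableWriteOptions());
```

Passing the options per call keeps risky defaults (like w: 1 on older deployments) from silently applying to critical writes.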
// BEST PRACTICES:
// 1. Keep transactions short (aim for <1 second)
// 2. Only include operations that NEED atomicity
// 3. Avoid long-running computations inside transactions
// 4. Handle TransientTransactionError and UnknownTransactionCommitResult
// withTransaction auto-retries on TransientTransactionError
await session.withTransaction(async () => {
  // Keep this block fast and focused
  await db.collection('orders').insertOne({ ... }, { session });
  await db.collection('inventory').updateOne({ ... }, { session });
  // DON'T: send emails, call external APIs, do heavy computation here
});
// Transactions abort after 60s by default (the server's
// transactionLifetimeLimitSeconds parameter). Per transaction you can
// cap the commit step via transaction options:
await session.withTransaction(async () => { ... }, {
  maxCommitTimeMS: 30000 // allow the commit itself at most 30 seconds
});
Why it matters: Over-relying on transactions in MongoDB negates its performance advantages. Interviewers want to see that you understand when to use them vs. when single-document atomicity suffices.
Real applications: Order fulfillment systems use transactions only for the critical stock-decrement + order-create pair. Email sending, analytics events, and notifications run outside the transaction asynchronously.
Common mistakes: Including network calls (external APIs, email sending), heavy computation, or long-running queries inside transactions — this holds locks, causes contention, and risks hitting the 60-second timeout.
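One way to keep the transactional block minimal, as described above, is to queue side effects during the transaction and run them only after it commits. A sketch with the transaction runner and email sender injected as stubs (both names are invented) so the control flow is visible without a live MongoDB server:

```javascript
// Sketch: defer non-transactional side effects until after commit.
// `runTransaction` stands in for session.withTransaction(...) and
// `sendEmail` for any external call; both are injected stubs here.
async function placeOrderThenNotify(runTransaction, sendEmail) {
  const afterCommit = []; // side effects queued during the transaction

  await runTransaction(async () => {
    // ...insert order + decrement stock here, all with { session }...
    afterCommit.push(() => sendEmail("order-confirmed"));
  });

  // Runs only if the transaction committed: an aborted transaction throws
  // above, so no email is ever sent for it, and no transaction locks are
  // held while the external call is in flight.
  for (const job of afterCommit) await job();
}
```

The queue-then-flush shape also makes it easy to hand the jobs to a real message queue instead of running them inline.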
// SINGLE-DOCUMENT ATOMIC (NO transaction needed):
// This entire update is atomic — all-or-nothing in one document
await db.collection('orders').updateOne(
  { _id: orderId },
  {
    $set: {
      "shipping.status": "dispatched",
      "shipping.trackingNumber": "TRK001",
      "shipping.dispatchedAt": new Date()
    },
    $push: {
      statusHistory: { status: "dispatched", ts: new Date() }
    }
  }
);
// MULTI-DOCUMENT TRANSACTION NEEDED:
// When atomically modifying two separate documents
// accounts.updateOne(acct1, $inc:balance:-100) +
// accounts.updateOne(acct2, $inc:balance:+100)
// BOTH must succeed or BOTH must fail → use transaction
Why it matters: A good MongoDB developer knows when NOT to use transactions — leveraging single-document atomicity via schema design leads to simpler, faster code.
Real applications: Order status history uses an embedded statusHistory array within the order document — status updates and history appends are a single atomic operation, no transaction needed.
Common mistakes: Reaching for transactions as the default solution — if better schema design (embedding related data) eliminates the multi-document write, avoid the transaction overhead entirely.
The withTransaction() API automatically handles both retry cases (retrying the whole transaction on TransientTransactionError, and retrying only the commit on UnknownTransactionCommitResult), making it the strongly preferred approach over manual transaction management. With manual transactions, you must implement that retry logic yourself.
// withTransaction: auto-retries on TransientTransactionError
// and UnknownTransactionCommitResult (strongly preferred)
await session.withTransaction(async () => {
  await doOperationsA(session);
  await doOperationsB(session); // automatic retry if transient error
});
// Manual transaction (for reference — NOT recommended)
session.startTransaction();
try {
  await doOperations(session);
  await session.commitTransaction();
} catch (error) {
  if (error.hasErrorLabel('TransientTransactionError')) {
    await session.abortTransaction();
    // retry the whole transaction from the top
  } else if (error.hasErrorLabel('UnknownTransactionCommitResult')) {
    // retry commitTransaction only
    await session.commitTransaction();
  } else {
    await session.abortTransaction();
    throw error;
  }
} finally {
  await session.endSession();
}
Why it matters: Improper error handling in transactions is a common source of data corruption bugs. withTransaction handles the retry complexity automatically, preventing these pitfalls.
Real applications: High-concurrency financial systems experience TransientTransactionErrors during write conflicts — withTransaction's automatic retry ensures these resolve without manual intervention or application crashes.
Common mistakes: Writing manual transaction retry logic that retries only the commit (UnknownTransactionCommitResult) but not the full transaction (TransientTransactionError), causing incomplete retries.
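The full-transaction retry case can be isolated into a small wrapper, similar to what withTransaction does internally. A sketch under stated assumptions: the helper name and attempt count are invented, `txnFn` stands for a function containing the whole startTransaction/commit sequence, and the error objects only need a hasErrorLabel(label) method like the driver's MongoError:

```javascript
// Sketch: retry the ENTIRE transaction on TransientTransactionError,
// mirroring the behavior withTransaction provides automatically.
async function runWithTransientRetry(txnFn, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await txnFn(); // whole transaction body, including commit
    } catch (err) {
      const transient = typeof err.hasErrorLabel === 'function'
        && err.hasErrorLabel('TransientTransactionError');
      if (!transient || attempt === maxAttempts) throw err;
      // transient error: loop around and redo the whole transaction,
      // not just the commit
    }
  }
}
```

Capping attempts prevents a persistent write conflict from retrying forever.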
// Shard key design to minimize cross-shard transactions
// Goal: keep related transactional data on the same shard
// BAD: orders and inventory on different shards
//   orders sharded by orderId, inventory sharded by productId
//   → every order + stock update is a cross-shard transaction
// BETTER: zone sharding so related data collocates
// (assumes a compound shard key starting with region, e.g. { region: 1, _id: 1 })
sh.addShardToZone("shard1", "region-mumbai");
sh.updateZoneKeyRange("myapp.orders",
  { region: "mumbai", _id: MinKey },
  { region: "mumbai", _id: MaxKey },
  "region-mumbai");
// Orders + inventory in Mumbai → same shard → single-shard transaction
// In explain output, queryPlanner.winningPlan.shards lists the shards touched:
// more than one entry → cross-shard transaction → higher latency expected
Why it matters: Cross-shard transactions can be 10x slower than single-shard — knowing how to design around them shows production-scale MongoDB expertise.
Real applications: Financial platforms shard by customerId — all of a customer's accounts are on the same shard, so intra-customer transfers remain single-shard transactions without coordination overhead.
Common mistakes: Sharding on a key that randomly distributes related transactional data across shards — this forces every transaction to be a cross-shard distributed transaction.
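The customerId strategy above can be sketched as mongosh admin commands; the database and collection names are invented for illustration, and a compound key is used so chunks stay splittable:

```javascript
// Sketch (mongosh): colocate all of one customer's accounts on one shard.
// customerId groups related documents; _id keeps chunks divisible.
sh.enableSharding("bank");
sh.shardCollection("bank.accounts", { customerId: 1, _id: 1 });
// An intra-customer transfer now touches a single shard → single-shard transaction
```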
Use findOneAndUpdate to atomically read a document, check a condition, and update it ONLY if the condition is still true, all in one atomic operation. This is MongoDB's equivalent of an optimistic lock or an atomic compare-and-swap (CAS) instruction, and a lightweight alternative to full transactions for single-document conditional updates.
// CAS: claim a task only if it's still "pending"
const task = await db.collection('tasks').findOneAndUpdate(
  {
    _id: taskId,
    status: "pending" // condition: only if still pending
  },
  {
    $set: {
      status: "processing",
      claimedBy: workerId,
      claimedAt: new Date()
    }
  },
  { returnDocument: 'after' } // return updated doc
);
if (!task) {
  // Another worker already claimed this task
  console.log("Task already taken");
  return;
}
// task is now claimed — safe to process
await processTask(task);
Why it matters: CAS with findOneAndUpdate is a common interview pattern for concurrency control — it shows you can solve coordination problems without the overhead of full transactions.
Real applications: Job queues, appointment booking, and ticket reservation systems all use CAS to claim items atomically — preventing two workers/users from claiming the same resource.
Common mistakes: Doing a separate find() then update() — the gap between find and update creates a race condition where another request can modify the document.
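The claim semantics can be seen without a database at all. A toy in-memory model (all names invented) where the condition check and the update happen as one step, so when two "workers" race for the same task, exactly one wins:

```javascript
// Sketch: toy in-memory model of findOneAndUpdate's CAS semantics.
// Real MongoDB makes this atomic server-side; here a synchronous
// check-and-set plays the same role for illustration.
function claimTask(task, workerId) {
  if (task.status !== "pending") return null; // condition failed: someone else won
  task.status = "processing"; // condition + update as a single step
  task.claimedBy = workerId;
  return task;
}

const task = { _id: 1, status: "pending" };
const first = claimTask(task, "worker-A");  // wins the claim
const second = claimTask(task, "worker-B"); // loses: task is no longer pending
```

The separate find()-then-update() anti-pattern corresponds to splitting the check and the assignment into two steps, which opens exactly the race this closes.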
Optimistic locking adds a version field (commonly __v or version) that increments on each write. The update includes the version in the filter: if the version has changed (someone else updated the document), the filter matches nothing and the update returns modifiedCount: 0, signaling a conflict that the application must handle.
// Optimistic locking with version field
// Read document (includes version)
const doc = await db.collection('inventory').findOne({ _id: productId });
const currentVersion = doc.__v;
// User modifies quantity in application...
const newQuantity = doc.quantity - orderAmount;
// Write with version check (optimistic lock)
const result = await db.collection('inventory').updateOne(
  {
    _id: productId,
    __v: currentVersion // MUST match the version we read
  },
  {
    $set: { quantity: newQuantity },
    $inc: { __v: 1 } // increment version
  }
);
if (result.modifiedCount === 0) {
  // Version mismatch — concurrent modification detected!
  throw new ConflictError("Inventory was updated by another process. Please retry.");
}
Why it matters: Optimistic locking is a standard concurrency pattern for high-read, low-conflict scenarios — cheaper than transactions with good conflict detection.
Real applications: Content management systems use optimistic locking to prevent two editors from simultaneously overwriting each other's changes to the same article.
Common mistakes: Not retrying after a version conflict — the correct response is to re-read the document (get latest version) and re-apply the change, not just throw an error to the user.
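The re-read-and-retry response can be packaged as a small loop. A sketch under stated assumptions: the helper name and attempt cap are invented, `readDoc` re-reads the latest document, and `tryWrite(doc)` attempts the versioned update, resolving true on success and false on a version conflict (modifiedCount 0). Both are injected so the loop can be shown without a live server:

```javascript
// Sketch: retry loop for optimistic-lock conflicts. On a conflict the
// loop re-reads the document (picking up the new version) and re-applies
// the change, rather than surfacing the error to the user.
async function writeWithVersionRetry(readDoc, tryWrite, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const doc = await readDoc();     // get the latest version
    if (await tryWrite(doc)) return; // versioned update succeeded
    // conflict: fall through, re-read, re-apply
  }
  throw new Error("Too many concurrent modifications; giving up");
}
```

Bounding the attempts keeps a pathological hot document from spinning forever; past the cap, surfacing the conflict is the right fallback.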
// Modern approach: use native transactions (MongoDB 4.0+)
// The OLD 2PC pattern (for historical knowledge):
// 1. Create a transaction record
const txn = await db.collection('transactions').insertOne({
  fromId: "acct1", toId: "acct2",
  amount: 1000,
  state: "initial" // states: initial → pending → applied → done
});
// 2. Move to "pending", then apply the debit/credit while marking
//    both accounts with the in-flight transaction
await db.collection('transactions').updateOne({ _id: txn.insertedId },
  { $set: { state: "pending" } });
await db.collection('accounts').updateOne({ _id: "acct1" },
  { $inc: { balance: -1000 }, $push: { pendingTxns: txn.insertedId } });
await db.collection('accounts').updateOne({ _id: "acct2" },
  { $inc: { balance: 1000 }, $push: { pendingTxns: txn.insertedId } });
// 3. Mark applied, clear the pending markers, then finish
await db.collection('transactions').updateOne({ _id: txn.insertedId },
  { $set: { state: "applied" } });
await db.collection('accounts').updateMany(
  { _id: { $in: ["acct1", "acct2"] } },
  { $pull: { pendingTxns: txn.insertedId } });
await db.collection('transactions').updateOne({ _id: txn.insertedId },
  { $set: { state: "done" } });
// Recovery: on startup, find transactions stuck in "pending"/"applied"
// and re-apply or roll back based on which accounts still list the txn
Why it matters: Understanding 2PC shows distributed systems knowledge and explains WHY native MongoDB transactions were a major feature — they replaced this complex error-prone manual pattern.
Real applications: Pre-4.0 MongoDB financial systems used 2PC. Understanding the pattern helps when reading legacy codebases or architecting distributed systems across heterogeneous databases.
Common mistakes: Implementing manual 2PC for new MongoDB 4.0+ applications — just use native transactions (session.withTransaction). Manual 2PC is complex, error-prone, and unnecessary.