GET /api/users // List users
POST /api/users // Create user
GET /api/users/:id // Get one user
PUT /api/users/:id // Replace user
DELETE /api/users/:id // Delete user
Why it matters: Well-designed REST APIs are self-evident to any developer who knows HTTP; poor design (using POST for everything, returning 200 for errors) creates friction, misunderstandings, and brittle integrations.
Real applications: Public APIs from Stripe, GitHub, and Twilio are studied as REST design references; their consistency (plural nouns, correct status codes, pagination metadata) makes them easy to integrate without deep documentation reading.
Common mistakes: Using verbs in URLs (e.g., /api/getUser, /api/deleteOrder) instead of noun resources with HTTP methods — this violates REST's uniform interface constraint and creates an inconsistent, RPC-style API surface.
project/
├── src/
│ ├── controllers/ // Request handlers
│ │ └── userController.js
│ ├── routes/ // Route definitions
│ │ └── userRoutes.js
│ ├── models/ // Data models
│ │ └── User.js
│ ├── middleware/ // Custom middleware
│ │ └── auth.js
│ ├── services/ // Business logic
│ │ └── userService.js
│ └── app.js // Express setup
├── package.json
└── .env
Why it matters: Putting all logic in route handlers makes code untestable and causes spaghetti — the service/controller split means business logic can be unit-tested without HTTP mocking, and controllers stay small and focused.
Real applications: Express APIs with controllers, services, and models are the de facto structure in production Node.js codebases; TypeScript projects add a repositories layer for data access abstraction.
Common mistakes: Writing all database queries directly in route handlers — this mixes HTTP concerns with data access logic, makes testing difficult, and creates duplication when the same query is needed in multiple routes.
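The controller/service split described above can be sketched with simple factory functions, so the service is unit-testable without any HTTP mocking. The names here (createUserService, userRepo and its findById method) are illustrative stand-ins for the model layer, not part of any framework:

```javascript
// services/userService.js — business logic, no HTTP concerns
function createUserService(userRepo) {
  return {
    async getUser(id) {
      const user = await userRepo.findById(id);
      if (!user) {
        const err = new Error('User not found');
        err.status = 404; // hint for the HTTP layer, no res/req here
        throw err;
      }
      return user;
    }
  };
}

// controllers/userController.js — translates HTTP to service calls
function createUserController(userService) {
  return {
    async getUser(req, res) {
      try {
        res.json(await userService.getUser(req.params.id));
      } catch (err) {
        res.status(err.status || 500).json({ error: err.message });
      }
    }
  };
}
```

Because dependencies are injected, tests can pass a fake repository to the service and a stub res object to the controller, with no database or server running.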
Accept page and limit query parameters to control which subset of results is returned, and always include pagination metadata (total, pages) in the response so clients know the full result set size. For large datasets, cursor-based pagination using the last item's ID is more efficient than offset-based.
// GET /api/users?page=2&limit=10
app.get('/api/users', async (req, res) => {
  const page = Math.max(parseInt(req.query.page, 10) || 1, 1);
  const limit = Math.min(parseInt(req.query.limit, 10) || 10, 100); // cap at 100
  const skip = (page - 1) * limit;
  const [users, total] = await Promise.all([
    User.find().skip(skip).limit(limit),
    User.countDocuments()
  ]);
  res.json({
    data: users,
    pagination: { page, limit, total, pages: Math.ceil(total / limit) }
  });
});
Why it matters: Returning all records from a collection endpoint without pagination loads entire database tables into memory and sends megabytes of JSON over the wire — pagination is required from day one, not added later.
Real applications: Social feeds, product catalogs, admin dashboards, and search results all use pagination; APIs like GitHub's list endpoints return 30 items by default with Link headers for next/previous pages.
Common mistakes: Not enforcing a maximum limit — a client passing limit=10000 bypasses the intent of pagination; always cap at a reasonable maximum (e.g., 100) regardless of what the client requests.
Why it matters: Returning 200 for every response (including errors) breaks retry logic, error tracking, and any HTTP-aware tooling like Nginx load balancers, CDNs, and APM tools that rely on status codes to classify requests.
Real applications: Stripe returns 402 for payment failures and 429 for rate limits, while GitHub returns 422 for validation errors — each code triggers different handling in client SDKs without the client needing to inspect the response body first.
Common mistakes: Always returning 200 with an { error: "...", success: false } body — this forces every client to parse the body before knowing if the request succeeded, and breaks caching, monitoring tools, and circuit breakers.
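A minimal sketch of status-code discipline, using a hypothetical in-memory Map instead of a real database so the handlers are self-contained:

```javascript
// Hypothetical in-memory store standing in for the database
const users = new Map([[1, { id: 1, name: 'Alice' }]]);

function getUser(req, res) {
  const user = users.get(Number(req.params.id));
  // 404 for a missing resource — never 200 with an error body
  if (!user) return res.status(404).json({ error: 'User not found' });
  return res.status(200).json(user);
}

function createUser(req, res) {
  if (!req.body || !req.body.name) {
    // 400 for a malformed request: the client must change something
    return res.status(400).json({ error: 'name is required' });
  }
  const id = users.size + 1;
  const user = { id, ...req.body };
  users.set(id, user);
  // 201 for successful creation, not a bare 200
  return res.status(201).json(user);
}
```

Because every outcome maps to a distinct status code, load balancers, retry logic, and monitoring can classify requests without parsing any response body.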
const Joi = require('joi');

const userSchema = Joi.object({
  name: Joi.string().min(2).max(50).required(),
  email: Joi.string().email().required(),
  age: Joi.number().integer().min(0).max(120)
});

app.post('/api/users', (req, res) => {
  const { error, value } = userSchema.validate(req.body);
  if (error) return res.status(400).json({ error: error.details[0].message });
  // proceed with the validated (and type-coerced) data in `value`
  res.status(201).json(value);
});
Why it matters: Unvalidated input is the root cause of injection attacks, data corruption, and confusing runtime errors; server-side validation is an OWASP requirement and the only reliable security boundary.
Real applications: Registration endpoints validate email format, password strength, and age range before touching the database; payment endpoints validate amount ranges and currency codes to prevent logic errors in financial calculations.
Common mistakes: Returning generic "validation failed" errors without specifying which field failed and why — always include field-specific messages like "email must be a valid email address" so API consumers can fix their requests immediately.
URL versioning (/api/v1/users) is the simplest and most widely adopted approach; header versioning (api-version: 2) keeps URLs clean; query-parameter versioning is rare. Always maintain older versions for a defined deprecation window and communicate timelines clearly.
// URL versioning (most common)
app.use('/api/v1/users', v1UserRouter);
app.use('/api/v2/users', v2UserRouter);

// Header versioning
app.use('/api/users', (req, res, next) => {
  const version = req.headers['api-version'] || '1';
  req.apiVersion = version;
  next();
});

// Query parameter versioning (rare)
// GET /api/users?version=2
Why it matters: Changing a response structure or removing a field in a live API without versioning immediately breaks all existing integrations; versioning provides a migration path that respects existing consumers.
Real applications: Stripe maintains API versions like 2023-10-16 and lets each customer pin their version indefinitely; GitHub uses URL versioning (/api/v3) with clear deprecation notices and transition guides.
Common mistakes: Not versioning a public API from day one and then being unable to make breaking changes without a disruptive "big bang" migration; always start with /api/v1 even if you never use v2.
Expose filters as query parameters and accept a sort field, prefixed with - for descending order (e.g., ?sort=-price). Always whitelist allowed fields for both filtering and sorting to prevent arbitrary data exposure or NoSQL injection.
// GET /api/products?category=electronics&minPrice=100&sort=-price
const SORTABLE_FIELDS = ['price', 'name', 'createdAt']; // whitelist

app.get('/api/products', async (req, res) => {
  const { category, minPrice, sort } = req.query;
  const filter = {};
  if (category) filter.category = String(category);
  if (minPrice) filter.price = { $gte: Number(minPrice) };
  const sortObj = {};
  if (sort) {
    const field = sort.replace(/^-/, ''); // strip only a leading "-"
    if (!SORTABLE_FIELDS.includes(field)) {
      return res.status(400).json({ error: `Cannot sort by ${field}` });
    }
    sortObj[field] = sort.startsWith('-') ? -1 : 1;
  }
  const products = await Product.find(filter).sort(sortObj);
  res.json(products);
});
Why it matters: Without server-side filtering and sorting, clients must download the entire collection and process it locally — this is impractical for collections of thousands of records and wasteful of bandwidth.
Real applications: E-commerce product catalogs let customers filter by category, price range, brand, and rating simultaneously; admin dashboards sort users by signup date or last active, combining filters with pagination for efficient display.
Common mistakes: Passing user-supplied filter field names directly to the database query without whitelisting valid fields — in MongoDB this enables operators like $where or $regex to be injected, causing NoSQL injection attacks.
// Response with HATEOAS links
{
  "id": 42,
  "name": "Alice",
  "email": "alice@example.com",
  "links": [
    { "rel": "self",   "href": "/api/users/42",        "method": "GET" },
    { "rel": "update", "href": "/api/users/42",        "method": "PUT" },
    { "rel": "delete", "href": "/api/users/42",        "method": "DELETE" },
    { "rel": "orders", "href": "/api/users/42/orders", "method": "GET" }
  ]
}
Why it matters: HATEOAS decouples client from URL structure — when the server changes a URL pattern, clients following links automatically adapt without code changes; it drives API discoverability and reduces client-side hardcoding.
Real applications: PayPal's REST API includes HATEOAS links in payment responses so clients can follow the approval_url link without knowing its structure; GitHub's API includes next, prev, and last link headers for pagination.
Common mistakes: Hardcoding URLs in clients (e.g., constructing /api/users/{id}/orders on the client side) when the server provides navigational links — this creates tight coupling and requires client code changes whenever URL patterns change.
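Generating the links server-side keeps them consistent with the actual route table. A minimal sketch, where the helper name userLinks is illustrative:

```javascript
// Build navigational links for a user resource in one place, so a URL
// pattern change only touches this helper
function userLinks(id) {
  const base = `/api/users/${id}`;
  return [
    { rel: 'self', href: base, method: 'GET' },
    { rel: 'update', href: base, method: 'PUT' },
    { rel: 'delete', href: base, method: 'DELETE' },
    { rel: 'orders', href: `${base}/orders`, method: 'GET' }
  ];
}

// In a handler:
// res.json({ ...user, links: userLinks(user.id) });
```

Clients then follow the href for the rel they need instead of constructing URLs themselves.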
// GET /api/users/search?q=alice
app.get('/api/users/search', async (req, res) => {
  const { q } = req.query;
  if (!q) return res.status(400).json({ error: 'Query required' });
  // MongoDB text search (requires a text index on the searched fields)
  const results = await User.find(
    { $text: { $search: q } },
    { score: { $meta: 'textScore' } }
  )
    .sort({ score: { $meta: 'textScore' } })
    .limit(20); // cap results; never return unbounded result sets
  res.json({ results, count: results.length });
});
Why it matters: Search is one of the most performance-sensitive endpoints in an API — poorly implemented regex search on large collections causes full collection scans, degrading performance for all concurrent users.
Real applications: Product search in e-commerce uses Elasticsearch for relevance scoring, typo tolerance, and faceted filtering; user lookup in admin portals uses MongoDB text indexes for simple name/email search.
Common mistakes: Using unanchored regex search ($regex: userInput) without sanitization allows ReDoS (Regular Expression Denial of Service) attacks with malicious input that causes catastrophic backtracking.
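If regex search is unavoidable, escape the user input before embedding it in a pattern. escapeRegex here is a hypothetical helper, not part of MongoDB or Express:

```javascript
// Escape every character that has special meaning in a regular expression,
// so user input is matched literally and cannot alter the pattern
function escapeRegex(input) {
  return input.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// Usage in a handler (sketch):
// const safe = new RegExp(escapeRegex(req.query.q), 'i');
// const results = await User.find({ name: safe }).limit(20);
```

Escaping prevents both ReDoS payloads and users smuggling wildcards like .* into queries; for anything beyond simple lookups, a text index or dedicated search engine remains the better tool.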
const swaggerJsdoc = require('swagger-jsdoc');
const swaggerUi = require('swagger-ui-express');

const options = {
  definition: {
    openapi: '3.0.0',
    info: { title: 'Users API', version: '1.0.0' }
  },
  apis: ['./src/routes/*.js'] // files containing @openapi annotations
};

/**
 * @openapi
 * /api/users:
 *   get:
 *     summary: List all users
 *     responses:
 *       200:
 *         description: Array of users
 */
app.get('/api/users', getUsers);

const specs = swaggerJsdoc(options);
app.use('/docs', swaggerUi.serve, swaggerUi.setup(specs));
Why it matters: Well-documented APIs reduce support burden, onboarding time for new integrators, and integration errors; a live Swagger UI is often more useful than written documentation because it's always current and interactive.
Real applications: Internal APIs expose Swagger UI at /docs for frontend and mobile teams to explore; public APIs publish OpenAPI specs to developer portals for automatic SDK generation in multiple languages.
Common mistakes: Writing documentation in a separate wiki that drifts out of sync with the actual API; use JSDoc OpenAPI annotations or a code-first approach so documentation is generated from the implementation and never becomes stale.
// PUT — replaces entire resource (all fields required)
app.put('/api/users/:id', async (req, res) => {
  // findOneAndReplace performs a true document replacement; the old
  // `overwrite` option of findByIdAndUpdate is deprecated in recent Mongoose
  const user = await User.findOneAndReplace(
    { _id: req.params.id },
    req.body, // must contain ALL fields
    { returnDocument: 'after' }
  );
  res.json(user);
});

// PATCH — partial update (only changed fields)
app.patch('/api/users/:id', async (req, res) => {
  const user = await User.findByIdAndUpdate(
    req.params.id,
    { $set: req.body }, // only updates provided fields
    { new: true }
  );
  res.json(user);
});
Why it matters: Misusing PUT when PATCH is intended is a common API design error that causes data loss — a mobile client sending only changed fields with PUT silently clears all fields not included in the request.
Real applications: User profile updates use PATCH since only the changed field (e.g., phone number) is sent; PATCH also reduces payload size for large objects, important for mobile clients on limited bandwidth.
Common mistakes: Using PUT for partial updates without sending the full resource — if a client sends only { "email": "new@example.com" } via PUT, all other fields are overwritten with null or missing, causing silent data corruption.
Nest routes under a parent resource to express ownership (e.g., /api/users/:userId/posts). Avoid nesting more than two levels deep — deeper nesting creates unwieldy URLs and tightly couples URL structure to the data model.
// Nested routes for related resources
app.get('/api/users/:userId/posts', getUserPosts);
app.get('/api/users/:userId/posts/:postId', getSpecificPost);
app.post('/api/users/:userId/posts', createUserPost);

// Implementation
app.get('/api/users/:userId/posts', async (req, res) => {
  const posts = await Post.find({ author: req.params.userId });
  res.json(posts);
});

// Alternative: flat route with query filters
// (query strings are not part of Express route paths)
// GET /api/posts?author=<userId>
app.get('/api/posts', getPostsByAuthor);
Why it matters: Nested URLs make the ownership relationship explicit — GET /api/users/42/orders clearly means "orders belonging to user 42" without any documentation, reducing integration confusion.
Real applications: Blog APIs nest comments under posts (/api/posts/:postId/comments); e-commerce APIs nest order line items under orders; GitHub nests issues, PRs, and branches under repositories.
Common mistakes: Nesting too deeply (e.g., /api/users/:id/orders/:orderId/items/:itemId/reviews) creates brittle, hard-to-maintain routes; beyond 2 levels, switch to flat routes with query parameters like /api/reviews?itemId=:id.
For bulk operations, prefer batched database primitives such as insertMany and bulk writes over per-item requests. Design bulk endpoints with proper error handling for partial success — some items may succeed while others fail, and clients need to know which ones.
// Bulk create
app.post('/api/users/bulk', async (req, res) => {
  const { users } = req.body;
  if (!Array.isArray(users) || users.length === 0 || users.length > 1000) {
    return res.status(400).json({ error: 'users must be an array of 1-1000 items' });
  }
  const results = await User.insertMany(users, { ordered: false });
  res.status(201).json({ created: results.length });
});
// Bulk update
app.patch('/api/users/bulk', async (req, res) => {
  const { updates } = req.body; // [{ id, changes }]
  const results = await Promise.allSettled(
    updates.map(({ id, changes }) =>
      User.findByIdAndUpdate(id, changes, { new: true })
    )
  );
  // A missing id resolves to null rather than rejecting, so it must be
  // counted as a failure explicitly
  const succeeded = results.filter(r => r.status === 'fulfilled' && r.value).length;
  res.json({ succeeded, failed: updates.length - succeeded });
});
Why it matters: Sending 1000 individual POST requests to create 1000 records causes 1000 HTTP overhead events and 1000 database round trips; a bulk endpoint reduces this to one round trip with batched database insertion.
Real applications: CSV import features, batch notification sends, and data migration tools all use bulk endpoints; Stripe's batch charge API and bulk update endpoints in CRM systems are common examples.
Common mistakes: Not enforcing an upper limit on bulk request size — a client sending 100,000 records in one request can exhaust memory and trigger timeouts; always validate total count and set a maximum batch size (e.g., 1000).
// Idempotent: PUT always sets the same value
app.put('/api/users/1', handler);
// Call 1: sets name to "Alice" → 200
// Call 2: sets name to "Alice" → 200 (same result)

// NOT idempotent: POST creates a new resource each time
app.post('/api/users', handler);
// Call 1: creates user → 201
// Call 2: creates ANOTHER user → 201 (different result)

// Making POST idempotent with idempotency keys
app.post('/api/payments', async (req, res) => {
  const idempotencyKey = req.headers['idempotency-key'];
  const existing = await Payment.findOne({ idempotencyKey });
  if (existing) return res.json(existing); // return cached result
  const payment = await processPayment(req.body);
  // A unique index on idempotencyKey is needed to close the race between
  // the findOne check and this create under concurrent retries
  await Payment.create({ ...payment, idempotencyKey });
  res.status(201).json(payment);
});
Why it matters: In distributed systems, network failures during a POST request leave the client uncertain whether the operation completed; idempotency keys enable safe retries without risk of double-charging or duplicate record creation.
Real applications: Stripe requires an Idempotency-Key header on payment creation requests; this header ensures retrying a failed charge attempt after a timeout never charges the card twice.
Common mistakes: Not implementing idempotency on payment and order creation endpoints — a mobile app that retries on network timeout can create duplicate orders and double charges if the server doesn't deduplicate by idempotency key.
const rateLimit = require('express-rate-limit');

// Tiered limits, keyed by user, API key, or IP
const apiLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: (req) => {
    if (req.user?.plan === 'premium') return 1000;
    if (req.user?.plan === 'basic') return 100;
    return 20; // anonymous
  },
  keyGenerator: (req) => {
    return req.user?.id || req.headers['x-api-key'] || req.ip;
  },
  standardHeaders: true, // send RateLimit-* headers to clients
  message: { error: 'Rate limit exceeded', retryAfter: '60s' }
});

app.use('/api', authenticate, apiLimiter);
Why it matters: Rate limiting prevents API abuse, protects backend resources from being overwhelmed, and enables fair usage policies; without it, a single user can monopolize server capacity and degrade service for all other users.
Real applications: Public APIs like GitHub, Twitter, and OpenAI use per-token rate limiting with different tiers (free: 100 req/min, paid: 1000 req/min); enterprise plans get higher limits aligned with their SLA.
Common mistakes: Applying rate limiting only at the reverse proxy level without propagating limit headers to the app — clients need X-RateLimit-Remaining and Retry-After headers to implement proper backoff without hammering the API.