Node.js applications connect to Redis with the redis package (node-redis); it supports strings, hashes, lists, sets, and sorted sets as data types. Redis is ideal for reducing database load, storing session data, and implementing pub/sub messaging patterns at scale.
const { createClient } = require('redis');
const client = createClient({ url: 'redis://localhost:6379' });
client.on('error', (err) => console.error('Redis error:', err));
await client.connect();
// Store and retrieve data
await client.set('name', 'Alice');
const name = await client.get('name');
console.log(name); // 'Alice'
// Store JSON
await client.set('user:1', JSON.stringify({ name: 'Alice', age: 30 }));
const user = JSON.parse(await client.get('user:1'));
Why it matters: Redis is the most widely adopted caching layer in production Node.js applications, and understanding its API is essential for solving performance bottlenecks.
Real applications: E-commerce platforms use Redis to cache product listings, session data, and rate-limit API requests; social networks use it as a real-time leaderboard and feed cache.
Common mistakes: Forgetting to await client.connect() before sending commands, or not handling the error event, which causes unhandled exceptions when Redis is unreachable.
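Beyond handling the error event, node-redis will also keep retrying a lost connection according to its reconnectStrategy socket option. A minimal sketch of a backoff policy follows; the exact delays and the 20-retry cap are illustrative choices, not library defaults:

```javascript
// Backoff policy for node-redis reconnects: wait a little longer after each
// failed attempt, capped at 2 seconds, and give up after 20 retries.
const reconnectStrategy = (retries) => {
  if (retries > 20) return new Error('Redis unreachable, giving up');
  return Math.min(retries * 50, 2000); // milliseconds until the next attempt
};

// Wiring it up (node-redis v4):
// const client = createClient({
//   url: 'redis://localhost:6379',
//   socket: { reconnectStrategy }
// });
```

Returning an Error from the strategy stops reconnection attempts entirely, which surfaces as an error event you can act on.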
const { createClient } = require('redis');
const client = createClient();
await client.connect();
// SET — store a value
await client.set('counter', '0');
await client.set('greeting', 'Hello World');
// GET — retrieve a value
const val = await client.get('counter'); // '0'
// DEL — delete one or more keys
await client.del('counter');
await client.del(['key1', 'key2', 'key3']);
// INCR/DECR — atomic increment/decrement
await client.set('visits', '0');
await client.incr('visits'); // 1
await client.incrBy('visits', 5); // 6
// EXISTS — check if key exists
const exists = await client.exists('greeting'); // 1 or 0
// KEYS — find matching keys (avoid in production)
const keys = await client.keys('user:*');
Why it matters: GET, SET, DEL, and INCR are the building blocks of nearly every Redis caching pattern; mastering them is a prerequisite to understanding session stores, rate limiters, and pub/sub.
Real applications: Page view counters, API rate limiters, shopping cart storage, and feature flags all rely on basic Redis key operations.
Common mistakes: Using KEYS * in production — it scans all keys and blocks Redis; always use SCAN for iterating over large key spaces instead.
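As a sketch of the SCAN alternative, node-redis v4 exposes scanIterator, which issues cursor-based SCAN commands in small batches instead of one blocking pass. The helper name, the COUNT of 100, and the pattern are illustrative; client is assumed to be an already connected node-redis v4 client:

```javascript
// Delete all keys matching a pattern without blocking Redis.
// `client` is assumed to be a connected node-redis v4 client.
async function deleteByPattern(client, pattern) {
  let deleted = 0;
  // scanIterator walks the keyspace incrementally via SCAN cursors
  for await (const key of client.scanIterator({ MATCH: pattern, COUNT: 100 })) {
    await client.del(key);
    deleted++;
  }
  return deleted;
}

// Usage: await deleteByPattern(client, 'user:*');
```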
TTL reports a key's remaining lifetime: it returns -1 if the key has no expiry and -2 if the key doesn't exist. The NX + EX combination is the foundation of distributed locking in Redis.
const client = createClient();
await client.connect();
// Set with expiry in seconds
await client.setEx('session:abc', 3600, 'user-data'); // Expires in 1 hour
// Set with expiry in milliseconds
await client.pSetEx('temp', 5000, 'short-lived'); // 5 seconds
// Set expiry on existing key
await client.set('token', 'abc123');
await client.expire('token', 900); // 15 minutes
// Check remaining TTL
const ttl = await client.ttl('session:abc'); // Seconds remaining
const pttl = await client.pTTL('session:abc'); // Milliseconds remaining
// Remove expiry (make persistent)
await client.persist('token');
// SET with NX (only if not exists) + EX (expiry)
await client.set('lock:resource', 'owner', {
  NX: true, // Only set if key doesn't exist
  EX: 30    // Expire in 30 seconds
});
Why it matters: Without TTLs, Redis memory grows without bound and stale data accumulates; a correct expiry strategy is fundamental to a reliable caching layer.
Real applications: Authentication tokens, OTP codes, password-reset links, and temporary rate-limit windows all rely on Redis TTL-based automatic expiry.
Common mistakes: Forgetting to set a TTL on cached items causes indefinite memory growth; and using KEYS to find keys for bulk TTL updates blocks Redis — use SCAN instead.
async function getUser(userId) {
  const cacheKey = `user:${userId}`;
  // 1. Check cache first
  const cached = await redisClient.get(cacheKey);
  if (cached) {
    console.log('Cache hit');
    return JSON.parse(cached);
  }
  // 2. Cache miss — fetch from database
  console.log('Cache miss');
  const user = await db.query('SELECT * FROM users WHERE id = $1', [userId]);
  // 3. Store in cache with TTL
  await redisClient.setEx(cacheKey, 3600, JSON.stringify(user.rows[0]));
  return user.rows[0];
}
// Invalidate cache on update
async function updateUser(userId, data) {
  await db.query('UPDATE users SET name = $1 WHERE id = $2', [data.name, userId]);
  await redisClient.del(`user:${userId}`); // Invalidate cache
}
Why it matters: Cache-aside is the most common pattern for Node.js apps because it's simple, flexible, and allows partial caching of only the data that's actually accessed.
Real applications: Product detail pages, user profile APIs, and database-heavy dashboards all use cache-aside to serve popular records from Redis and reduce DB query load.
Common mistakes: Forgetting to invalidate the cache on writes leads to stale data; and not setting a TTL as a fallback means data can stay stale indefinitely if invalidation logic is missed.
const express = require('express');
const app = express();
// Static assets — cache for 1 year
app.use('/static', express.static('public', {
  maxAge: '1y',
  immutable: true
}));
// API response — no cache
app.get('/api/user', (req, res) => {
  res.set('Cache-Control', 'no-store');
  res.json(userData);
});
// API response — cache for 5 minutes, revalidate
app.get('/api/products', (req, res) => {
  res.set('Cache-Control', 'public, max-age=300, must-revalidate');
  res.json(products);
});
// Private data — only browser cache, not CDN
app.get('/api/profile', (req, res) => {
  res.set('Cache-Control', 'private, max-age=60');
  res.json(profile);
});
Why it matters: Correct Cache-Control headers can eliminate entire categories of server requests, dramatically reducing infrastructure costs and improving page load times for end users.
Real applications: CDN-backed static sites (Vercel, Cloudflare) use Cache-Control to cache API responses and assets at the edge; SPA builds use immutable on hashed bundle filenames.
Common mistakes: Setting public on responses containing personal or sensitive data makes them CDN-cacheable and potentially visible to other users; always use private for authenticated responses.
Express can generate ETags automatically via app.set('etag', 'strong'), but you can also generate them manually with a content hash.
const crypto = require('crypto');
app.get('/api/data', (req, res) => {
  const data = JSON.stringify(getLatestData());
  const etag = crypto.createHash('md5').update(data).digest('hex');
  // Check if client's cached version matches
  if (req.headers['if-none-match'] === etag) {
    return res.status(304).end(); // Not Modified
  }
  res.set('ETag', etag);
  res.set('Cache-Control', 'public, max-age=0, must-revalidate');
  res.json(JSON.parse(data));
});
// Express has built-in ETag support
app.set('etag', 'strong'); // or 'weak'
Why it matters: ETags enable conditional GET requests that save bandwidth and reduce server processing, making APIs feel faster for clients with cached responses.
Real applications: REST APIs serving large JSON payloads, GitHub's API, and content delivery platforms all use ETags to avoid resending unchanged data on polling clients.
Common mistakes: Generating ETags from mutable fields (like updatedAt timestamps with millisecond precision) can cause unnecessary cache misses; base ETags on a stable hash of the actual content.
// WRITE-THROUGH: write to cache AND database simultaneously
async function writeThrough(key, value) {
  // Both happen before returning
  await db.query('UPDATE items SET data = $1 WHERE key = $2', [value, key]);
  await redisClient.set(key, JSON.stringify(value));
  // Consistent but slower writes
}
// WRITE-BEHIND (write-back): write to cache, async update DB later
async function writeBehind(key, value) {
  await redisClient.set(key, JSON.stringify(value));
  // Queue database update for later
  await messageQueue.send('db-sync', { key, value });
  // Faster writes but risk of data loss
}
Why it matters: Choosing the wrong write strategy can cause data inconsistency (write-behind) or unnecessary write latency (write-through), directly affecting application reliability.
Real applications: Banking and inventory systems use write-through for immediate consistency; analytics pipelines, logging, and social media counters use write-behind for throughput.
Common mistakes: Using write-behind for critical transactional data (like payments or inventory) risks permanent data loss if the cache crashes before the async DB write completes.
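The messageQueue above is a stand-in for whatever broker you use. As a hypothetical sketch of the consuming side, a write-behind buffer can coalesce writes per key and flush them to the database on a timer; the class name, batching policy, and flushFn callback are all assumptions for illustration:

```javascript
// Write-behind drain side: buffer writes in memory, keep only the latest
// value per key, and flush batches to the database on an interval.
class WriteBehindBuffer {
  constructor(flushFn, intervalMs = 1000) {
    this.flushFn = flushFn;          // e.g. runs one multi-row UPDATE
    this.pending = new Map();        // last write wins per key
    this.timer = setInterval(() => this.flush(), intervalMs);
  }
  enqueue(key, value) {
    this.pending.set(key, value);
  }
  async flush() {
    if (this.pending.size === 0) return;
    const batch = [...this.pending.entries()];
    this.pending.clear();
    try {
      await this.flushFn(batch);
    } catch (err) {
      // Put failed writes back (unless overwritten since) to retry next tick
      for (const [k, v] of batch) {
        if (!this.pending.has(k)) this.pending.set(k, v);
      }
    }
  }
  stop() { clearInterval(this.timer); }
}
```

Coalescing per key is what gives write-behind its throughput win: ten rapid updates to the same counter become one database write.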
Redis-backed sessions plug into the express-session middleware via connect-redis, replacing the in-memory store with a production-ready alternative. Always configure secure cookie options — httpOnly, secure, and sameSite — to prevent session hijacking.
const session = require('express-session');
const RedisStore = require('connect-redis').default;
const { createClient } = require('redis');
const redisClient = createClient({ url: 'redis://localhost:6379' });
await redisClient.connect();
app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: true,     // HTTPS only
    httpOnly: true,   // No JS access
    maxAge: 86400000, // 24 hours
    sameSite: 'strict'
  }
}));
// Use sessions
app.post('/login', (req, res) => {
  req.session.userId = user.id;
  res.json({ message: 'Logged in' });
});
Why it matters: The default MemoryStore in express-session leaks memory and doesn't scale; switching to Redis-backed sessions is a required step for any production Node.js deployment.
Real applications: Traditional server-rendered Node.js apps (Express + EJS/Handlebars), admin dashboards, and B2B SaaS backends use Redis session stores for multi-instance deployments.
Common mistakes: Omitting secure: true in production sends session cookies over plain HTTP; not setting saveUninitialized: false creates empty session records for every anonymous visitor.
// 1. TTL-based — data expires automatically
await redisClient.setEx('product:1', 300, data); // 5 min
// 2. Event-driven — invalidate on write
async function updateProduct(id, data) {
  await db.updateProduct(id, data);
  await redisClient.del(`product:${id}`); // Delete cached item
  await redisClient.del('products:list'); // Delete related list
}
// 3. Pattern-based — delete matching keys (KEYS is fine for demos; use SCAN in production)
async function clearUserCache(userId) {
  const keys = await redisClient.keys(`user:${userId}:*`);
  if (keys.length) await redisClient.del(keys);
}
// 4. Versioned keys — change key on update
const version = await redisClient.incr('product:1:version');
const key = `product:1:v${version}`;
await redisClient.setEx(key, 3600, data);
Why it matters: Cache invalidation bugs are one of the most common causes of production incidents — understanding these strategies is critical for building reliable caching layers.
Real applications: E-commerce product pages combine TTL-based expiry with event-driven invalidation on product updates; content platforms use versioned keys to instantly switch to new content globally.
Common mistakes: Using pattern-based invalidation with KEYS user:* in production blocks Redis while scanning; always use SCAN with cursor-based iteration for large key sets.
// In-memory cache (simple Map)
const cache = new Map();
function getFromMemory(key, fetchFn, ttlMs = 60000) {
  const entry = cache.get(key);
  if (entry && Date.now() - entry.time < ttlMs) {
    return entry.value;
  }
  const value = fetchFn();
  cache.set(key, { value, time: Date.now() });
  return value;
}
// Or use a library like node-cache
const NodeCache = require('node-cache');
const myCache = new NodeCache({ stdTTL: 600 }); // 10 min default
myCache.set('key', 'value');
myCache.get('key');
Why it matters: Understanding when to use in-process vs. distributed caching directly impacts scalability architecture decisions when deploying Node.js apps to multiple instances.
Real applications: Configuration values and token blacklists often use in-memory caches for speed; user sessions, rate limits, and feature flags use Redis for shared state across instances.
Common mistakes: Using a plain Map as an in-memory cache without TTL management causes memory leaks; use node-cache or lru-cache which handle expiry and size limits automatically.
const { createClient } = require('redis');
// Separate clients for pub and sub (required by Redis)
const publisher = createClient();
const subscriber = createClient();
const cache = new Map(); // Local in-memory cache
await publisher.connect();
await subscriber.connect();
// Subscribe to invalidation channel
await subscriber.subscribe('cache:invalidate', (message) => {
  const { key, pattern } = JSON.parse(message);
  if (key) {
    cache.delete(key);
    console.log(`Invalidated key: ${key}`);
  }
  if (pattern) {
    for (const k of cache.keys()) {
      if (k.startsWith(pattern)) cache.delete(k);
    }
  }
});
// Publish invalidation event on data change
async function updateProduct(id, data) {
  await db.updateProduct(id, data);
  await publisher.publish('cache:invalidate',
    JSON.stringify({ key: `product:${id}` })
  );
}
// Bulk invalidation by pattern
async function clearCategoryCache(categoryId) {
  await publisher.publish('cache:invalidate',
    JSON.stringify({ pattern: `category:${categoryId}:` })
  );
}
Why it matters: Distributed cache invalidation via Pub/Sub ensures all instances see consistent data, which is critical in horizontally scaled deployments where a local cache per instance would diverge.
Real applications: Multi-region Node.js clusters, microservice architectures with shared Redis, and real-time dashboards use Pub/Sub to propagate cache invalidation events across all running instances.
Common mistakes: Reusing the same Redis client for both publishing and subscribing causes errors; once a client enters subscribe mode it can only run subscribe/unsubscribe commands.
const client = createClient();
await client.connect();
// HASHES — store object-like data (user profiles)
await client.hSet('user:1', { name: 'Alice', age: '30', role: 'admin' });
const name = await client.hGet('user:1', 'name');
const user = await client.hGetAll('user:1'); // { name, age, role }
await client.hIncrBy('user:1', 'age', 1); // Increment single field
// LISTS — ordered data (recent activity, queues)
await client.lPush('recent:posts', 'post:5'); // Add to front
await client.rPush('queue:emails', JSON.stringify(email)); // Add to back
const recent = await client.lRange('recent:posts', 0, 9); // Get first 10
await client.lTrim('recent:posts', 0, 99); // Keep only 100 items
// SETS — unique values (tags, online users)
await client.sAdd('online:users', 'user:1', 'user:2');
await client.sRem('online:users', 'user:1');
const isOnline = await client.sIsMember('online:users', 'user:2');
const count = await client.sCard('online:users');
// SORTED SETS — ranked data (leaderboards)
await client.zAdd('leaderboard', { score: 100, value: 'player:1' });
await client.zAdd('leaderboard', { score: 250, value: 'player:2' });
const top10 = await client.zRangeWithScores('leaderboard', 0, 9, { REV: true });
Why it matters: Choosing the right Redis data structure can dramatically reduce memory usage and enable atomic operations that would otherwise require multiple round trips.
Real applications: Leaderboards (Sorted Sets), recent activity feeds (Lists), real-time online user tracking (Sets), and user profile caching (Hashes) are all standard Redis data structure use cases.
Common mistakes: Storing objects as serialized JSON strings (regular SET) instead of Redis Hashes forces you to deserialize the whole object even when you only need one field, wasting CPU and bandwidth.
A caching middleware overrides res.json to capture and store the response before sending it to the client. The middleware should fail open — if Redis is unavailable, requests should proceed normally rather than returning errors.
const { createClient } = require('redis');
const client = createClient();
client.connect().catch((err) => console.error('Redis connect failed:', err));
function cacheMiddleware(ttl = 300) {
  return async (req, res, next) => {
    // Only cache GET requests
    if (req.method !== 'GET') return next();
    const key = `cache:${req.originalUrl}`;
    try {
      const cached = await client.get(key);
      if (cached) {
        return res.json(JSON.parse(cached));
      }
      // Override res.json to cache the response
      const originalJson = res.json.bind(res);
      res.json = (data) => {
        // Cache in the background; a cache write failure shouldn't break the response
        client.setEx(key, ttl, JSON.stringify(data)).catch(() => {});
        return originalJson(data);
      };
      next();
    } catch (err) {
      console.error('Cache error:', err);
      next(); // Fail open — continue without cache
    }
  };
}
// Usage
app.get('/api/products', cacheMiddleware(600), getProducts);
app.get('/api/users/:id', cacheMiddleware(120), getUser);
// Invalidation helper (SCAN-based; KEYS would block Redis on large keyspaces)
async function invalidateCache(pattern) {
  for await (const key of client.scanIterator({ MATCH: `cache:${pattern}`, COUNT: 100 })) {
    await client.del(key);
  }
}
Why it matters: A caching middleware is a clean and reusable way to add Redis caching to any Express route without cluttering business logic, and the fail-open pattern ensures Redis downtime doesn't break the app.
Real applications: Public REST APIs, product catalog endpoints, and analytics dashboards use caching middleware to serve thousands of requests per second from Redis rather than the database.
Common mistakes: Caching POST/PUT/DELETE requests, or using req.url (which excludes query strings) instead of req.originalUrl as the cache key, leading to incorrect cache hits.
// 1. DISTRIBUTED LOCK — only one request rebuilds cache
async function getWithLock(key, fetchFn, ttl = 300) {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);
  const lockKey = `lock:${key}`;
  const acquired = await redis.set(lockKey, '1', { NX: true, EX: 10 });
  if (acquired) {
    try {
      const data = await fetchFn();
      await redis.setEx(key, ttl, JSON.stringify(data));
      return data;
    } finally {
      await redis.del(lockKey);
    }
  } else {
    // Wait and retry — another process is rebuilding
    await new Promise(r => setTimeout(r, 100));
    return getWithLock(key, fetchFn, ttl);
  }
}
// 2. EARLY REFRESH — refresh before expiry
async function getWithEarlyRefresh(key, fetchFn, ttl = 300) {
  const cached = await redis.get(key);
  const remainingTtl = await redis.ttl(key);
  if (cached && remainingTtl > ttl * 0.2) {
    return JSON.parse(cached); // Still fresh enough
  }
  // Refresh in background if getting stale
  if (cached) {
    fetchFn().then(data =>
      redis.setEx(key, ttl, JSON.stringify(data))
    );
    return JSON.parse(cached); // Return stale data immediately
  }
  const data = await fetchFn();
  await redis.setEx(key, ttl, JSON.stringify(data));
  return data;
}
// 3. STALE-WHILE-REVALIDATE — never block on cache miss
async function getStaleWhileRevalidate(key, fetchFn, ttl = 300) {
  const cached = await redis.get(key);
  if (cached) {
    const { data, expiry } = JSON.parse(cached);
    if (Date.now() > expiry) {
      // Expired — refresh in background, return stale
      fetchFn().then(fresh =>
        redis.set(key, JSON.stringify({ data: fresh, expiry: Date.now() + ttl * 1000 }))
      );
    }
    return data;
  }
  const data = await fetchFn();
  await redis.set(key, JSON.stringify({ data, expiry: Date.now() + ttl * 1000 }));
  return data;
}
Why it matters: Cache stampedes can bring down databases under load; understanding prevention strategies is essential for building resilient high-traffic applications.
Real applications: High-traffic news sites, ticket booking platforms, and flash-sale e-commerce apps are all vulnerable to stampedes when popular cache keys expire simultaneously at peak traffic.
Common mistakes: Setting the same TTL for all cached items means many keys can expire simultaneously — add a small random jitter to TTLs (e.g., ttl + Math.random() * 30) to spread expiry times.
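A minimal sketch of that jitter idea, where the function name and the 30-second default spread are illustrative choices:

```javascript
// Spread out expiry times: each cached item gets its base TTL plus a small
// random offset so keys written at the same moment don't all expire together.
function jitteredTtl(baseTtlSeconds, maxJitterSeconds = 30) {
  return baseTtlSeconds + Math.floor(Math.random() * maxJitterSeconds);
}

// e.g. await redisClient.setEx(key, jitteredTtl(300), JSON.stringify(data));
```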
// Using lru-cache package (recommended)
const { LRUCache } = require('lru-cache');
const cache = new LRUCache({
  max: 500,                      // Maximum 500 items
  maxSize: 50 * 1024 * 1024,     // 50MB max memory
  sizeCalculation: (value) => JSON.stringify(value).length,
  ttl: 1000 * 60 * 5,            // 5 minute TTL
  allowStale: true,              // Return stale data while refreshing
  updateAgeOnGet: true,          // Reset TTL on access
});
cache.set('user:1', { name: 'Alice' });
const user = cache.get('user:1');
cache.has('user:1'); // true
cache.delete('user:1');
cache.clear();
// Custom LRU using Map (maintains insertion order)
class SimpleLRU {
  constructor(maxSize) {
    this.max = maxSize;
    this.cache = new Map();
  }
  get(key) {
    if (!this.cache.has(key)) return undefined;
    const value = this.cache.get(key);
    // Move to end (most recent)
    this.cache.delete(key);
    this.cache.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.cache.has(key)) this.cache.delete(key);
    this.cache.set(key, value);
    // Evict oldest if over limit
    if (this.cache.size > this.max) {
      const oldest = this.cache.keys().next().value;
      this.cache.delete(oldest);
    }
  }
}
const lru = new SimpleLRU(100);
lru.set('key', 'value');
lru.get('key');
Why it matters: LRU caches are fundamental for bounding in-process memory usage while keeping the hottest data instantly accessible without a network round trip to Redis.
Real applications: Database query result caches, compiled template caches, and DNS resolution caches in Node.js proxies all benefit from LRU eviction to cap memory footprint.
Common mistakes: Implementing a custom LRU with a plain object instead of Map breaks ordering guarantees; and not setting a size or count limit means the cache grows without bound until the process runs out of memory.