Configure Jest in package.json with the node test environment; Jest auto-discovers files matching *.test.js or *.spec.js and files in a __tests__ directory. Use --watch mode during development to re-run tests automatically and --coverage to generate HTML coverage reports.
# Install Jest
npm install --save-dev jest

# package.json
{
  "scripts": {
    "test": "jest",
    "test:watch": "jest --watch",
    "test:coverage": "jest --coverage"
  },
  "jest": {
    "testEnvironment": "node",
    "coverageDirectory": "coverage",
    "collectCoverageFrom": ["src/**/*.js"]
  }
}

# Create test file: math.test.js
const { add } = require('./math');

test('adds 1 + 2 to equal 3', () => {
  expect(add(1, 2)).toBe(3);
});

# Run tests
npm test
Why it matters: Untested Node.js code that works in development frequently breaks in production when edge cases arise; Jest gives you a single dependency that covers assertions, mocking, coverage, and watch mode, eliminating the need to configure multiple separate libraries.
Real applications: Every mature Node.js service uses Jest (or a similar framework) in CI to ensure no regression is deployed; coverage reports integrated with Codecov or Coveralls give teams visibility into which code paths lack test coverage.
Common mistakes: Forgetting "testEnvironment": "node" in the Jest config when testing server-side code — the default is a browser-like JSDOM environment which lacks Node.js globals like process and Buffer, causing tests to fail with confusing errors.
Jest ships with a rich set of matchers such as toBe, toEqual, and toThrow. Use beforeEach/afterEach for setup and teardown that runs before/after each test, ensuring each test starts with clean, isolated state. Nest describe blocks to organize tests hierarchically by feature or method, matching the structure of the code being tested.
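Before the fuller example below, a quick contrast of the two equality matchers (a minimal sketch): toBe compares with Object.is, so it checks reference identity for objects, while toEqual recursively compares structure.

it('toBe vs toEqual', () => {
  const a = { id: 1 };
  const b = { id: 1 };
  expect(a).toEqual(b);   // passes: same structure
  expect(a).not.toBe(b);  // passes: different object references
  expect(1 + 1).toBe(2);  // toBe is fine for primitives
});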
const UserService = require('./userService');

describe('UserService', () => {
  let service;

  beforeEach(() => {
    service = new UserService();
  });

  afterEach(() => {
    jest.restoreAllMocks();
  });

  describe('createUser', () => {
    it('should create a user with valid data', () => {
      const user = service.createUser({ name: 'Alice', email: 'a@b.com' });
      expect(user).toBeDefined();
      expect(user.name).toBe('Alice');
      expect(user.email).toContain('@');
    });

    it('should throw if name is missing', () => {
      expect(() => service.createUser({ email: 'a@b.com' }))
        .toThrow('Name is required');
    });
  });
});
Why it matters: Well-structured test suites using nested describe blocks and clear it descriptions serve as living documentation — when a test fails, the full description chain (UserService > createUser > should throw if name is missing) pinpoints the exact failure instantly without reading test code.
Real applications: A UserService test file nests describe('createUser'), describe('getUser'), and describe('deleteUser') inside the top-level describe('UserService'), with beforeEach resetting a fresh service instance to prevent state from bleeding between tests.
Common mistakes: Sharing mutable state between tests by declaring variables at the describe scope and mutating them in tests without resetting — test execution order can vary and a passing test that depends on a prior test's side effects will fail intermittently.
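A minimal sketch of that failure mode and its fix: the fragile suite mutates a shared array across tests, so the outcome depends on execution order, while the robust suite rebuilds the state in beforeEach.

// Fragile: users persists and mutates across tests
describe('fragile', () => {
  const users = [];
  it('adds a user', () => {
    users.push({ name: 'Alice' });
    expect(users).toHaveLength(1);
  });
  it('expects a clean slate', () => {
    expect(users).toHaveLength(0); // fails whenever the test above ran first
  });
});

// Robust: state rebuilt before every test
describe('robust', () => {
  let users;
  beforeEach(() => { users = []; });
  it('starts empty', () => {
    expect(users).toHaveLength(0);
  });
});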
With jest.mock('./module'), Jest auto-mocks all exports; you can then configure return values with mockResolvedValue (async) or mockReturnValue (sync) and verify calls with toHaveBeenCalledWith. Call jest.clearAllMocks() in beforeEach to reset call counts between tests.
// userService.js
const db = require('./database');

class UserService {
  async getUser(id) {
    return db.findById(id);
  }
}

module.exports = UserService;

// userService.test.js
jest.mock('./database'); // Auto-mock all exports
const db = require('./database');
const UserService = require('./userService');

describe('UserService', () => {
  it('should return user from database', async () => {
    const mockUser = { id: 1, name: 'Alice' };
    db.findById.mockResolvedValue(mockUser);

    const service = new UserService();
    const user = await service.getUser(1);

    expect(user).toEqual(mockUser);
    expect(db.findById).toHaveBeenCalledWith(1);
    expect(db.findById).toHaveBeenCalledTimes(1);
  });
});
Why it matters: Without mocking, a unit test for a service function would require a live database connection, making tests slow, flaky, and dependent on external infrastructure that may not be available in CI; mocking isolates the test to the logic under test only.
Real applications: Database access modules are mocked in service tests; HTTP client modules (axios, node-fetch) are mocked in tests for code that fetches external APIs; fs module functions are mocked in tests for file processing logic to avoid actual disk I/O.
Common mistakes: Calling jest.mock() inside a describe or it block instead of at the module top level. Jest hoists top-level jest.mock() calls above the require/import statements at compile time, but a call buried inside a block only executes when that block runs, long after the module under test has already required the real dependency, so the mock silently fails to apply; conditional mocking has the same problem.
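When the auto-mock isn't enough, jest.mock also accepts a module factory, still placed at the top level. A sketch (the ./mailer module and its sendWelcome export are hypothetical):

// Top of the test file (hoisted above the requires below)
jest.mock('./mailer', () => ({
  sendWelcome: jest.fn().mockResolvedValue(true)
}));
const { sendWelcome } = require('./mailer');

it('uses the factory mock', async () => {
  await expect(sendWelcome('a@b.com')).resolves.toBe(true);
  expect(sendWelcome).toHaveBeenCalledWith('a@b.com');
});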
Export your Express app without calling app.listen() so Supertest manages the server lifecycle internally; it opens a temporary server for each request and closes it automatically. Chain .set('Authorization', 'Bearer token') to test protected routes without needing a running server.
const request = require('supertest');
const app = require('./app'); // Export your Express app

describe('/api/users', () => {
  it('should return all users', async () => {
    const res = await request(app)
      .get('/api/users')
      .expect('Content-Type', /json/)
      .expect(200);
    expect(res.body).toBeInstanceOf(Array);
  });

  it('should create a user', async () => {
    const res = await request(app)
      .post('/api/users')
      .send({ name: 'Alice', email: 'alice@example.com' })
      .expect(201);
    expect(res.body.name).toBe('Alice');
  });

  it('should return 404 for missing user', async () => {
    await request(app)
      .get('/api/users/999')
      .expect(404);
  });
});
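The .set() chaining mentioned above might look like this: a sketch assuming a hypothetical /api/profile route guarded by JWT middleware, with jsonwebtoken and a JWT_SECRET environment variable available.

const jwt = require('jsonwebtoken');

it('should allow access with a valid token', async () => {
  const token = jwt.sign({ id: 1 }, process.env.JWT_SECRET); // hypothetical payload
  const res = await request(app)
    .get('/api/profile')
    .set('Authorization', `Bearer ${token}`)
    .expect(200);
  expect(res.body.id).toBe(1);
});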
Why it matters: Supertest tests the full request/response cycle including Express middleware, routing, validation, and error handling in one call — unlike unit tests that mock dependencies, Supertest integration tests verify that all layers of the API work together correctly.
Real applications: REST API test suites use Supertest to verify every endpoint returns the correct status code, response shape, and error messages; authentication tests verify protected routes return 401 without a token and 200 with a valid JWT.
Common mistakes: Calling app.listen() in the exported app module and passing the port to Supertest — this leaves the server running on a port between tests; export the app object without listen() and let Supertest bind to a random ephemeral port automatically.
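One common layout for that split (a sketch, not the only option): the app module builds and exports the Express instance, and a separate entry point is the only place listen() is called.

// app.js: exported for tests, no listen()
const express = require('express');
const app = express();
app.use(express.json());
// ... routes ...
module.exports = app;

// server.js: the actual entry point
const app = require('./app');
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Listening on ${PORT}`));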
For asynchronous code, the async/await pattern is the cleanest and most readable; for testing rejected promises, use await expect(promise).rejects.toThrow(). Jest also provides fake timers (jest.useFakeTimers()) for testing code that uses setTimeout, setInterval, or Date.now() without waiting for real time to pass.
// Async/await (recommended)
it('fetches user data', async () => {
  const data = await fetchUser(1);
  expect(data.name).toBe('Alice');
});

// Returning a promise
it('fetches user data', () => {
  return fetchUser(1).then(data => {
    expect(data.name).toBe('Alice');
  });
});

// Testing rejected promises
it('throws on invalid ID', async () => {
  await expect(fetchUser(-1)).rejects.toThrow('Invalid ID');
});

// Testing with done callback
it('calls callback with data', (done) => {
  fetchUserCallback(1, (err, data) => {
    try {
      expect(err).toBeNull();
      expect(data.name).toBe('Alice');
      done();
    } catch (error) {
      done(error); // report assertion failures instead of hanging
    }
  });
});

// Timers
jest.useFakeTimers();
it('delays execution', () => {
  const fn = jest.fn();
  setTimeout(fn, 1000);
  jest.advanceTimersByTime(1000);
  expect(fn).toHaveBeenCalled();
});
Why it matters: A missing await or return in an async test causes Jest to complete the test before the promise resolves, making the test always pass even when the assertion would fail — this is one of the most common sources of false-positive tests.
Real applications: Service functions that query databases use async/await tests; rate limiting logic that uses setTimeout internally is tested with jest.useFakeTimers() and jest.advanceTimersByTime() to avoid 15-minute test timeouts.
Common mistakes: Using the done callback pattern and forgetting to call done(err) when an error occurs inside the callback — the test hangs until Jest's timeout kills it; the async/await pattern avoids this entirely because thrown errors are caught automatically.
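A sketch of the false positive described above: without await (or a returned promise), Jest ends the test before the failing assertion is ever reported.

// BROKEN: always passes because the test returns before the promise settles
it('looks like it checks the name', () => {
  fetchUser(1).then(data => {
    expect(data.name).toBe('Wrong Name'); // never reported
  });
});

// FIXED: awaiting makes Jest wait for the assertion
it('actually checks the name', async () => {
  const data = await fetchUser(1);
  expect(data.name).toBe('Alice');
});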
# Run tests with coverage
npx jest --coverage

# Coverage output:
# ----------|---------|----------|---------|---------|
# File      | % Stmts | % Branch | % Funcs | % Lines |
# ----------|---------|----------|---------|---------|
# All files |   85.71 |       80 |     100 |   85.71 |
# math.js   |   85.71 |       80 |     100 |   85.71 |
# ----------|---------|----------|---------|---------|

# Configure in package.json
{
  "jest": {
    "collectCoverage": true,
    "coverageThreshold": {
      "global": {
        "branches": 80,
        "functions": 80,
        "lines": 80,
        "statements": 80
      }
    }
  }
}
Why it matters: 100% line coverage is achievable but can be misleading if branches aren't covered — a function can be called in every test but its error handling path (the catch block or else branch) may never execute, leaving bugs hidden in untested flows.
Real applications: Teams enforce 80% branch coverage as a CI gate with Jest's coverageThreshold config; coverage reports uploaded to Codecov in pull request comments show exactly which new lines are uncovered, helping reviewers identify missing test cases.
Common mistakes: Scattering /* istanbul ignore next */ comments through the codebase to artificially inflate coverage; if code is being excluded from coverage, ask why it's there. Untested code is often dead code or rarely executed error paths that hide real bugs.
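A minimal illustration of the line-versus-branch gap, using a hypothetical formatName helper: one test executes every line, yet one branch of the || never runs.

function formatName(user) {
  return (user.nickname || user.name).toUpperCase();
}

it('formats the name', () => {
  expect(formatName({ nickname: 'Al', name: 'Alice' })).toBe('AL');
});
// Line coverage: 100% (the single return line always executes).
// Branch coverage: 50% (the || fallback for a user without a
// nickname is never evaluated, so a bug there stays hidden).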
// fixtures/users.js
module.exports = {
  validUser: {
    name: 'Alice',
    email: 'alice@example.com',
    age: 30
  },
  invalidUser: {
    name: '',
    email: 'bad-email'
  },
  users: [
    { id: 1, name: 'Alice', role: 'admin' },
    { id: 2, name: 'Bob', role: 'user' }
  ]
};

// In tests (db and service are assumed to be set up in a shared test helper)
const { validUser, invalidUser, users } = require('./fixtures/users');

describe('UserService', () => {
  beforeEach(async () => {
    await db.collection('users').insertMany(users); // Seed data
  });

  afterEach(async () => {
    await db.collection('users').deleteMany({}); // Clean up
  });

  it('should create a valid user', async () => {
    const result = await service.create(validUser);
    expect(result.name).toBe(validUser.name);
  });
});
Why it matters: Tests without centralized fixtures often duplicate test data inline across dozens of files; when the data schema changes, every test file must be updated individually, whereas a single fixtures module needs only one change.
Real applications: E-commerce test suites define fixtures for products, users, and orders that are seeded before each integration test and cleaned up after; factory functions generate unique emails per test to avoid unique constraint violations when tests run in parallel.
Common mistakes: Seeding fixture data in beforeAll instead of beforeEach when tests mutate the data — a test that deletes or modifies a fixture record will break subsequent tests in the same suite; always use beforeEach for mutable data to ensure clean state isolation.
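The factory pattern mentioned above might be sketched like this (makeUser and its fields are assumptions; the point is the unique email per call):

// fixtures/factories.js
let counter = 0;
function makeUser(overrides = {}) {
  counter += 1;
  return {
    name: `User ${counter}`,
    email: `user${counter}-${Date.now()}@example.com`, // unique per call
    role: 'user',
    ...overrides
  };
}
module.exports = { makeUser };

// In a test
const { makeUser } = require('./fixtures/factories');
const admin = makeUser({ role: 'admin' });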
Sinon's spies and stubs are a common alternative (or complement) to Jest's built-in mocks, especially in Mocha/Chai suites. Call sinon.restore() in afterEach to clean up all stubs and spies between tests and prevent cross-test contamination.
const sinon = require('sinon');
const UserService = require('./userService');
const db = require('./database');

describe('UserService', () => {
  let service;

  beforeEach(() => {
    service = new UserService(); // fresh instance per test
  });

  afterEach(() => sinon.restore()); // Clean up all stubs/spies

  // Spy — observes calls without changing behavior
  it('should call save', () => {
    const spy = sinon.spy(db, 'save');
    service.createUser({ name: 'Alice' });
    expect(spy.calledOnce).toBe(true);
    expect(spy.calledWith({ name: 'Alice' })).toBe(true);
  });

  // Stub — replaces behavior with a plain return value
  it('should return mock user', async () => {
    sinon.stub(db, 'findById').returns({ id: 1, name: 'Alice' });
    const user = await service.getUser(1);
    expect(user.name).toBe('Alice');
  });

  // Stub async — resolves like a promise-returning call
  it('should handle async', async () => {
    sinon.stub(db, 'findById').resolves({ id: 1, name: 'Alice' });
    const user = await service.getUser(1);
    expect(user.name).toBe('Alice');
  });
});
Why it matters: Sinon offers more granular control than Jest mocks for scenarios like sequential return values (onFirstCall().returns(a).onSecondCall().returns(b)), fake timers via sinon.useFakeTimers(), and conditional stubs based on argument values.
Real applications: Mocha/Chai test suites use Sinon exclusively for all mocking; even Jest-based suites occasionally use Sinon when testing legacy code that depends on global objects or prototype methods that Jest's spyOn can't easily intercept.
Common mistakes: Forgetting to call sinon.restore() after tests — without restoring, stubs persist across tests in the same file because Sinon modifies the original object; subsequent tests see stubbed behavior instead of the real function, causing misleading failures.
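The sequential-return control mentioned above, sketched as a stub that simulates a flaky call succeeding on retry (db.ping is a hypothetical method):

it('recovers after a transient failure', async () => {
  const stub = sinon.stub(db, 'ping');
  stub.onFirstCall().rejects(new Error('ETIMEDOUT'));
  stub.onSecondCall().resolves('ok');

  await expect(db.ping()).rejects.toThrow('ETIMEDOUT');
  await expect(db.ping()).resolves.toBe('ok');
});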
// Step 1: RED — write a failing test
describe('Calculator', () => {
  it('should add two numbers', () => {
    const calc = new Calculator();
    expect(calc.add(2, 3)).toBe(5);
  });
});
// ✗ ReferenceError: Calculator is not defined

// Step 2: GREEN — make it pass
class Calculator {
  add(a, b) { return a + b; }
}
// ✓ Test passes

// Step 3: REFACTOR — improve without changing behavior,
// then add more tests and iterate. Inside the describe block,
// with calc recreated per test:
let calc;
beforeEach(() => { calc = new Calculator(); });

it('should handle negative numbers', () => {
  expect(calc.add(-1, -2)).toBe(-3);
});

it('should handle zero', () => {
  expect(calc.add(0, 5)).toBe(5);
});
Why it matters: Writing tests after implementation tends to produce tests that validate the current behavior (including bugs) rather than desired behavior; TDD reverses this by anchoring development in the requirement: the test is the specification.
Real applications: Business logic functions (price calculators, discount engines, validation rules) are ideal for TDD because the expected inputs and outputs are well-defined; TDD is less suited for exploratory UI work or spike implementations where the API is not yet known.
Common mistakes: Writing too much code in the Green phase instead of the minimum required to pass — the purpose of writing minimal code is to force new tests to drive further implementation; writing the full solution immediately skips the test-driving process that makes TDD effective.
// UNIT TEST — isolated with mocks
describe('UserService.getUser', () => {
  it('should return user by ID', async () => {
    const mockDb = { findById: jest.fn().mockResolvedValue({ id: 1, name: 'Alice' }) };
    const service = new UserService(mockDb);
    const user = await service.getUser(1);
    expect(user.name).toBe('Alice');
  });
});

// INTEGRATION TEST — real dependencies
describe('POST /api/users', () => {
  beforeAll(async () => {
    await db.connect(); // Real database
  });

  afterAll(async () => {
    await db.disconnect();
  });

  it('should create user in database', async () => {
    const res = await request(app)
      .post('/api/users')
      .send({ name: 'Alice', email: 'alice@test.com' });
    expect(res.status).toBe(201);

    // Verify in actual database
    const user = await db.collection('users').findOne({ email: 'alice@test.com' });
    expect(user).toBeDefined();
  });
});
Why it matters: A service whose unit tests all pass (because the database is mocked) can still break in production if the actual SQL query has a typo or the ORM behavior doesn't match the mock — integration tests catch this class of bug that unit tests structurally cannot.
Real applications: User authentication logic is unit-tested by mocking the database and bcrypt library; the full login endpoint is integration-tested with a real in-memory database to verify that the service, middleware, and route handler all interact correctly under realistic conditions.
Common mistakes: Writing only integration tests because they give more "confidence" — a test suite of 500 integration tests can take 10+ minutes to run, making fast feedback loops impossible; the pyramid exists for a reason: fast unit tests plus targeted integration tests at key boundaries.
To unit-test Express middleware, construct mock req, res, and next objects that simulate the Express request-response cycle, allowing you to test middleware in complete isolation without starting a server. You can test middleware directly by calling it as a function with mock objects (fast, focused unit test), or test it as part of the full pipeline using Supertest (validates correct integration with routing and other middleware). Mock res.status() to return this via mockReturnThis() so chained calls like res.status(400).json() work.
// authMiddleware.js
const jwt = require('jsonwebtoken');

function authMiddleware(req, res, next) {
  const token = req.headers.authorization?.split(' ')[1];
  if (!token) {
    return res.status(401).json({ error: 'No token provided' });
  }
  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    req.user = decoded;
    next();
  } catch (err) {
    res.status(403).json({ error: 'Invalid token' });
  }
}

module.exports = authMiddleware;

// UNIT TEST — direct testing with mocks
const jwt = require('jsonwebtoken');
const authMiddleware = require('./authMiddleware');

describe('authMiddleware', () => {
  let mockNext;

  beforeEach(() => {
    mockNext = jest.fn(); // fresh mock per test so call counts don't leak
  });

  it('should return 401 if no token', () => {
    const req = { headers: {} };
    const res = {
      status: jest.fn().mockReturnThis(),
      json: jest.fn()
    };
    authMiddleware(req, res, mockNext);
    expect(res.status).toHaveBeenCalledWith(401);
    expect(mockNext).not.toHaveBeenCalled();
  });

  it('should set req.user and call next with valid token', () => {
    const token = jwt.sign({ id: 1 }, process.env.JWT_SECRET);
    const req = { headers: { authorization: `Bearer ${token}` } };
    const res = { status: jest.fn().mockReturnThis(), json: jest.fn() };
    authMiddleware(req, res, mockNext);
    expect(req.user).toBeDefined();
    expect(req.user.id).toBe(1);
    expect(mockNext).toHaveBeenCalled();
  });
});

// INTEGRATION TEST — with Supertest
it('should block unauthenticated requests', async () => {
  await request(app).get('/api/protected').expect(401);
});
Why it matters: Middleware is deployed into every request pipeline and bugs in authentication, validation, or rate limiting middleware affect all routes; testing middleware in isolation ensures each piece of logic works correctly before it's wired into the full application.
Real applications: JWT auth middleware tests verify all paths: no token (401), expired token (403), invalid signature (403), and valid token (req.user set, next called); rate limiting middleware tests verify that the 6th request within the window returns 429.
Common mistakes: Not testing that next() is called in the success path — middleware that sets req.user but forgets to call next() will silently hang every authenticated request; always assert that mockNext was called in the happy path test.
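The expired-token path can be exercised by signing a token that has already expired; a sketch using jsonwebtoken's expiresIn with a negative value:

it('should return 403 for an expired token', () => {
  const expired = jwt.sign({ id: 1 }, process.env.JWT_SECRET, { expiresIn: '-10s' });
  const req = { headers: { authorization: `Bearer ${expired}` } };
  const res = { status: jest.fn().mockReturnThis(), json: jest.fn() };
  const next = jest.fn();

  authMiddleware(req, res, next);

  expect(res.status).toHaveBeenCalledWith(403);
  expect(next).not.toHaveBeenCalled();
});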
When a snapshot test fails after an intentional change, run jest --updateSnapshot to update the stored snapshot file. Snapshots work best for testing response shapes and serializable structures; use property matchers like expect.any(String) for dynamic fields such as IDs and timestamps that change between test runs.
const request = require('supertest');
const app = require('./app');

describe('API Snapshots', () => {
  it('should match user response structure', async () => {
    const res = await request(app).get('/api/users/1');
    // Inline snapshot — stored in the test file
    expect(res.body).toMatchInlineSnapshot(`
      {
        "id": 1,
        "name": "Alice",
        "email": "alice@example.com",
        "role": "user"
      }
    `);
  });

  it('should match error response', async () => {
    const res = await request(app).get('/api/users/999');
    // File snapshot — stored in __snapshots__/
    expect(res.body).toMatchSnapshot();
  });

  // Property matchers for dynamic values
  it('should match with dynamic fields', async () => {
    const res = await request(app).post('/api/users')
      .send({ name: 'Alice', email: 'alice@test.com' });
    expect(res.body).toMatchSnapshot({
      id: expect.any(String),        // Dynamic ID
      createdAt: expect.any(String), // Dynamic timestamp
      name: 'Alice'                  // Exact match
    });
  });
});

// Update snapshots when structure intentionally changes:
// npx jest --updateSnapshot
Why it matters: Snapshot tests catch accidental breaking changes to API response shapes without needing to write explicit assertions for every field — a refactor that accidentally removes a required JSON field will fail the snapshot test immediately rather than silently breaking client applications.
Real applications: API test suites snapshot error response bodies to ensure the error message format stays consistent; GraphQL resolvers snapshot their output shapes to detect schema drift; serialized Mongoose document representations are snapshot-tested after schema changes.
Common mistakes: Committing snapshot files automatically without reviewing the diff — when a snapshot is updated because of an intentional change, the diff must be reviewed carefully; blindly running --updateSnapshot to "fix failing tests" can mask real regressions that should have been caught.
Common options for integration-test databases are MongoMemoryServer (fastest, no external deps), Docker containers (most realistic, matches production), and separate test database instances (simpler but slower). Always clean all collections in afterEach to prevent test interdependence and ensure each test starts with a predictable empty state.
// Using MongoMemoryServer (in-memory MongoDB)
const { MongoMemoryServer } = require('mongodb-memory-server');
const mongoose = require('mongoose');

let mongoServer;

beforeAll(async () => {
  mongoServer = await MongoMemoryServer.create();
  const uri = mongoServer.getUri();
  await mongoose.connect(uri);
});

afterAll(async () => {
  await mongoose.disconnect();
  await mongoServer.stop();
});

afterEach(async () => {
  // Clean all collections between tests
  const collections = mongoose.connection.collections;
  for (const key in collections) {
    await collections[key].deleteMany({});
  }
});

// Using environment-based database config
// test.env
// DATABASE_URL=mongodb://localhost:27017/myapp_test

// jest.config.js
module.exports = {
  globalSetup: './test/setup.js',       // Run before all tests
  globalTeardown: './test/teardown.js', // Run after all tests
};

// test/setup.js
module.exports = async () => {
  process.env.NODE_ENV = 'test';
  process.env.DATABASE_URL = 'mongodb://localhost:27017/myapp_test';
};

// Using Docker for PostgreSQL tests
// docker-compose.test.yml
// services:
//   test-db:
//     image: postgres:15
//     environment:
//       POSTGRES_DB: test_db
//       POSTGRES_PASSWORD: test
//     ports: ["5433:5432"]
Why it matters: Running integration tests against a shared development database leads to flaky tests that depend on what data happens to exist in the database at test time; test databases give every test suite a clean, controlled environment with exactly the data it seeds.
Real applications: Node.js monorepos use mongodb-memory-server in Jest's globalSetup to spin up a single in-memory MongoDB instance for all test files; CI pipelines spin up Docker Compose services (Mongo, Redis, Postgres) as GitHub Actions services before running the test suite.
Common mistakes: Sharing a single test database instance between parallel Jest workers without namespacing collections — parallel tests that insert, read, and delete the same records interfere with each other, causing intermittent failures; either run tests serially (--maxWorkers=1) or create one database instance per worker.
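One way to give each parallel worker its own database (a sketch): Jest sets JEST_WORKER_ID in every worker process, so applying a per-worker suffix in a setupFiles script (not globalSetup, which runs once in the main process) keeps the data isolated.

// jest.setup.js: register via "setupFiles" so it runs in each worker
// JEST_WORKER_ID is "1", "2", ... for parallel workers
const worker = process.env.JEST_WORKER_ID || '1';
process.env.DATABASE_URL = `mongodb://localhost:27017/myapp_test_${worker}`;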
Use rejects.toThrow() for async service errors, toThrow() for sync errors, and Supertest status assertions for HTTP endpoint error responses.
describe('Error Handling', () => {
  // Test validation errors
  it('should reject invalid email', async () => {
    await expect(userService.create({ email: 'bad' }))
      .rejects.toThrow('Invalid email format');
  });

  // Test not found errors
  it('should throw NotFoundError for missing user', async () => {
    await expect(userService.getById('nonexistent'))
      .rejects.toThrow(NotFoundError);
  });

  // Test API error responses
  it('should return 400 for invalid input', async () => {
    const res = await request(app)
      .post('/api/users')
      .send({ email: 'bad' })
      .expect(400);
    expect(res.body).toHaveProperty('error');
    expect(res.body.error).toContain('email');
  });

  // Test database failure handling
  it('should handle database connection errors', async () => {
    jest.spyOn(db, 'query').mockRejectedValue(new Error('ECONNREFUSED'));
    const res = await request(app)
      .get('/api/users')
      .expect(500);
    expect(res.body.error).toBe('Internal server error');
  });

  // Test boundary conditions
  it('should handle empty arrays', async () => {
    jest.spyOn(db, 'query').mockResolvedValue([]); // mock per test, not reused
    const res = await request(app).get('/api/users').expect(200);
    expect(res.body).toEqual([]);
  });

  it('should handle null values', () => {
    expect(() => processData(null)).toThrow('Data is required');
  });

  it('should handle very large inputs', async () => {
    const largePayload = { name: 'A'.repeat(10000) };
    await request(app)
      .post('/api/users')
      .send(largePayload)
      .expect(400);
  });
});
Why it matters: Error paths are often the most critical code in production yet the least-tested in development — a database connection failure that returns a 500 with a raw stack trace and SQL query leaks internal architecture to attackers and breaks API contracts for clients.
Real applications: Payment processing services test what happens when the payment gateway returns a timeout; authentication services test expired tokens, revoked tokens, and malformed JWTs; all tests verify that error responses contain a user-safe message and never expose internal implementation details.
Common mistakes: Only testing the happy path because error tests require mocking failures explicitly — use jest.spyOn(db, 'query').mockRejectedValue(new Error('ECONNREFUSED')) to simulate database crashes and verify the global error handler converts them to clean 500 responses with no stack traces.
# .github/workflows/test.yml
name: Node.js CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]
    services:
      mongodb:
        image: mongo:7
        ports:
          - 27017:27017
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm ci
      - run: npm run lint
      - run: npm test -- --coverage --ci
        env:
          DATABASE_URL: mongodb://localhost:27017/test
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage/lcov.info

# package.json scripts
{
  "scripts": {
    "lint": "eslint src/",
    "test": "jest --forceExit --detectOpenHandles",
    "test:ci": "jest --ci --coverage --maxWorkers=2"
  }
}
Why it matters: Without CI, tests only run when developers remember to run them locally; CI makes tests mandatory on every change so no untested code is accidentally merged, and parallel matrix builds verify compatibility across Node.js versions before users discover breaking changes.
Real applications: Open source Node.js packages test against Node 18, 20, and 22 on every PR using a matrix strategy; private services run tests, build a Docker image, and deploy to staging on merge to main, rolling back automatically if health checks fail.
Common mistakes: Not using npm ci in CI pipelines (using npm install instead) — npm install can update package-lock.json and install different versions than what was tested locally; npm ci installs exactly the locked versions, ensuring reproducible builds.