The fs module provides asynchronous, synchronous, and promise-based methods for reading files; the promise API (fs/promises) is the recommended approach in modern Node.js. Always specify the encoding (e.g., 'utf8') to receive a string; without it, you get a raw Buffer. For large files (logs, exports, binary data) use createReadStream instead of reading the whole file into memory.
const fs = require('fs');
const fsPromises = require('fs').promises;
// Callback-based
fs.readFile('data.txt', 'utf8', (err, data) => {
if (err) throw err;
console.log(data);
});
// Promise-based (preferred)
const data = await fsPromises.readFile('data.txt', 'utf8');
// Synchronous (blocks the event loop)
const syncData = fs.readFileSync('data.txt', 'utf8');
Why it matters: File I/O is one of the most common Node.js operations; understanding async vs sync vs stream-based reading is fundamental to avoiding event loop blocking.
Real applications: Config loaders, static asset servers, CSV importers, and log parsers all read files; using the async promise API keeps the server responsive during file reads.
Common mistakes: Using fs.readFileSync inside a request handler blocks the entire event loop for every other request while the file is being read; always use the async version in servers.
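For example, a route handler can await the promise-based read so other requests keep flowing while the disk I/O happens; a minimal sketch assuming an Express app and an illustrative terms.txt file:
const express = require('express');
const fsPromises = require('node:fs/promises');
const app = express();
app.get('/terms', async (req, res) => {
  try {
    // The await yields to the event loop, so other requests are still served during the read
    const text = await fsPromises.readFile('terms.txt', 'utf8');
    res.type('text/plain').send(text);
  } catch {
    res.status(404).send('Not found');
  }
});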
Use fs.writeFile() to create or overwrite a file and fs.appendFile() to add content to an existing file; both accept an encoding parameter and support promise-based usage. File writing is used for logging, config persistence, export generation, and data caching. For high-frequency writes (e.g., logging), use a writable stream to avoid repeatedly opening and closing the file.
const fs = require('fs').promises;
// Write (creates or overwrites)
await fs.writeFile('output.txt', 'Hello World', 'utf8');
// Append to existing file
await fs.appendFile('log.txt', 'New entry\n', 'utf8');
// Write JSON
const data = { name: 'Alice', age: 30 };
await fs.writeFile('data.json', JSON.stringify(data, null, 2));
Why it matters: Writing files is fundamental to logging, config management, data exports, and any server-side operation that needs to persist data without a database.
Real applications: Application loggers write to log files, report generators export CSV/JSON, and code generators write source files during scaffolding — all using fs.writeFile or append streams.
Common mistakes: Not using { flag: 'wx' } when a file should only be created (not overwritten) can silently destroy existing data; use the exclusive-create flag for safety-critical writes.
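A sketch of that exclusive-create pattern: the 'wx' flag makes the write fail with EEXIST instead of overwriting (the file name is illustrative):
const fs = require('fs').promises;
try {
  await fs.writeFile('settings.json', '{}', { flag: 'wx' });
} catch (err) {
  if (err.code === 'EEXIST') {
    console.log('settings.json already exists; not overwriting');
  } else {
    throw err;
  }
}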
The fs module provides mkdir, readdir, rmdir, and rm for creating, listing, and removing directories. The { recursive: true } option in mkdir creates all missing parent directories in one call (like mkdir -p), and in rm it deletes a directory and all its contents. Use { withFileTypes: true } in readdir to get Dirent objects that expose isFile() and isDirectory() methods.
const fs = require('fs').promises;
// Create directory (recursive creates parent dirs)
await fs.mkdir('path/to/dir', { recursive: true });
// Read directory contents
const files = await fs.readdir('src');
console.log(files); // ['index.js', 'utils.js']
// Read with file types
const entries = await fs.readdir('src', { withFileTypes: true });
entries.forEach(e => console.log(e.name, e.isDirectory()));
// Remove an empty directory
await fs.rmdir('empty-dir');
// Remove a directory and all its contents
await fs.rm('old-dir', { recursive: true, force: true });
Why it matters: Directory operations are essential for build tools, file processors, and temp file management; using recursive options avoids ENOENT errors on missing parent directories.
Real applications: Blog generators create output directories for each post, upload handlers create user-specific folders, and build pipelines clean and recreate dist directories on each build.
Common mistakes: Using fs.rmdir() on a non-empty directory throws ENOTEMPTY; always use fs.rm(dir, { recursive: true }) (Node.js 14+) to remove directory trees.
The path module provides cross-platform utilities for working with file and directory paths, automatically handling the difference between Windows backslashes and POSIX forward slashes. Use path.join() to combine segments and path.resolve() to produce absolute paths; always prefer these over string concatenation. path.parse() splits a path into its components (root, directory, base, name, extension), and path.extname() extracts the extension.
const path = require('path');
path.join('src', 'utils', 'index.js'); // 'src/utils/index.js'
path.resolve('src', 'app.js'); // '/full/path/src/app.js'
path.basename('/src/app.js'); // 'app.js'
path.dirname('/src/utils/index.js'); // '/src/utils'
path.extname('app.min.js'); // '.js'
path.parse('/src/app.js');
// { root:'/', dir:'/src', base:'app.js', ext:'.js', name:'app' }
Why it matters: Hardcoding path separators with string concatenation breaks cross-platform portability; the path module is the correct and portable way to construct file paths.
Real applications: Build tools, CLI utilities, and any Node.js app that constructs file paths from user input or config values must use path.join or path.resolve to work on both Windows and Unix.
Common mistakes: Using __dirname + '/models/user.js' works on Mac/Linux but uses the wrong separator on Windows; always use path.join(__dirname, 'models', 'user.js').
Use fs.watch() to monitor files or directories for changes in real time, triggering callbacks with rename or change events. The built-in fs.watch is useful for development tooling but can be unreliable across platforms (especially macOS and Linux differences in event reporting), so chokidar is the recommended choice for production-grade file watching. Chokidar supports glob patterns, ignored paths, and debouncing.
const fs = require('fs');
// Watch a single file
fs.watch('config.json', (eventType, filename) => {
console.log(`${filename} changed: ${eventType}`);
});
// Watch a directory recursively (the recursive option is not supported on every platform in older Node.js versions)
fs.watch('src', { recursive: true }, (event, filename) => {
console.log(`${event}: ${filename}`);
});
// Using chokidar (more reliable)
const chokidar = require('chokidar');
chokidar.watch('src').on('change', (path) => {
console.log(`File changed: ${path}`);
});
Why it matters: File watching is the foundation of dev-server hot reloading, config auto-reload, and build watch modes; knowing the limitations of the built-in API explains why tools like nodemon and webpack use chokidar.
Real applications: nodemon (auto-restart), webpack/vite (HMR), and config hot-reload systems all use chokidar internally to watch for source file changes.
Common mistakes: Using raw fs.watch in production or on macOS can trigger duplicate events or miss events on network file systems; always use chokidar for reliable cross-platform file watching.
Use fs.access() or fs.stat() to check file existence — the deprecated fs.exists() should never be used. fs.access() checks accessibility with specific permissions, while fs.stat() returns full metadata (size, timestamps, type). The preferred EAFP (Easier to Ask Forgiveness than Permission) approach is to just try the operation and catch ENOENT, avoiding TOCTOU race conditions.
const fs = require('fs').promises;
// Check existence with access
try {
await fs.access('config.json');
console.log('File exists');
} catch {
console.log('File does not exist');
}
// Get file details with stat
const stats = await fs.stat('data.txt');
console.log(stats.isFile()); // true
console.log(stats.isDirectory()); // false
console.log(stats.size); // size in bytes
Why it matters: The check-then-act pattern (reading existence then performing an operation) creates a race condition; EAFP avoids this and is the idiomatic Node.js approach.
Real applications: Config loaders check for optional config file existence; upload handlers check if a target directory exists before writing; log rotation scripts check file sizes via stat.
Common mistakes: Using fs.existsSync() before a file operation doesn't prevent a race condition where the file could be deleted between the check and the use; catch ENOENT on the operation itself instead.
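The EAFP version in code: attempt the read and handle ENOENT on the operation itself rather than checking first (the optional banner file is illustrative):
const fs = require('fs').promises;
let banner;
try {
  banner = await fs.readFile('banner.txt', 'utf8');
} catch (err) {
  if (err.code !== 'ENOENT') throw err;
  banner = ''; // The file is optional, so a missing file is not an error
}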
The fs module provides copyFile, rename, cp, and unlink for copying, moving, renaming, and deleting files. rename works as both a rename and a move operation; for cross-device moves (e.g., different partitions) it fails with EXDEV — in that case copy then delete. fs.cp() (Node.js 16+) recursively copies directories, replacing the need for external packages like fs-extra.
const fs = require('fs').promises;
// Copy a file
await fs.copyFile('source.txt', 'dest.txt');
// Rename or move a file
await fs.rename('old-name.txt', 'new-name.txt');
await fs.rename('file.txt', 'archive/file.txt'); // move
// Copy directory recursively (Node 16+)
await fs.cp('src', 'backup', { recursive: true });
// Delete a file
await fs.unlink('temp.txt');
Why it matters: File copy, move, and delete operations are fundamental in build pipelines, file processors, and CMS systems; knowing the limitations of rename across devices prevents hard-to-debug EXDEV errors.
Real applications: Image processors move uploaded files from a temp directory to a permanent storage path; build tools copy static assets to a dist folder; cleanup jobs delete old temp files.
Common mistakes: Using fs.rename() to move files across mount points (different devices/partitions) throws EXDEV; always handle this case by falling back to copy-then-delete.
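A sketch of that fallback: rename first, and only copy-then-delete when the move crosses devices (the helper name is illustrative):
const fs = require('fs').promises;
async function moveFile(src, dest) {
  try {
    await fs.rename(src, dest); // Fast path: same device
  } catch (err) {
    if (err.code !== 'EXDEV') throw err;
    await fs.copyFile(src, dest); // Cross-device: copy the data...
    await fs.unlink(src); // ...then remove the original
  }
}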
Use fs.chmod() to change file permissions using octal notation and fs.stat() to read them; permissions control who can read, write, and execute files, which is critical for security-sensitive files like private keys and config files. Node.js follows the POSIX permission model used by Linux and macOS; on Windows, only the writable attribute is enforced. Pass the mode option to writeFile to create files with specific permissions atomically.
const fs = require('fs').promises;
// Set permissions (owner: rwx, group: rx, others: r)
await fs.chmod('script.sh', 0o754);
// Read permissions
const stats = await fs.stat('script.sh');
console.log(stats.mode.toString(8)); // e.g., '100754'
// Create file with specific permissions
await fs.writeFile('secret.txt', 'data', { mode: 0o600 });
// Change ownership (requires elevated privileges); uid and gid are numeric user/group IDs
await fs.chown('file.txt', uid, gid);
Why it matters: Setting correct permissions on private key files, config files, and sensitive data is an OWASP Top 10 security requirement; overly permissive files expose credentials to other system users.
Real applications: SSL private keys should be 0o600 (owner read-write only); shell scripts should be 0o755 (owner read-write-execute, group and others read-execute); public config files 0o644 (owner read-write, others read).
Common mistakes: Creating a private key file without 0o600 mode leaves it world-readable if the umask is permissive; always explicitly set the mode when creating sensitive files rather than relying on the system default.
Read JSON files with fs.readFile + JSON.parse and write them with JSON.stringify + fs.writeFile; this is the standard pattern for config files, data persistence, and settings management. Always wrap JSON.parse() in a try/catch to handle malformed JSON files gracefully. Pass null, 2 to JSON.stringify for human-readable pretty-printed output.
const fs = require('fs').promises;
// Read JSON
async function readJSON(filePath) {
const raw = await fs.readFile(filePath, 'utf8');
return JSON.parse(raw);
}
// Write JSON (with pretty-printing)
async function writeJSON(filePath, data) {
await fs.writeFile(filePath, JSON.stringify(data, null, 2));
}
// Usage
const config = await readJSON('config.json');
config.port = 4000;
await writeJSON('config.json', config);
Why it matters: JSON file I/O is used constantly in Node.js for config files, feature flags, seed data, and app state persistence; handling parse errors correctly prevents server crashes on corrupt config.
Real applications: package.json, tsconfig.json, eslintrc.json, and custom app config files are all read with this pattern; i18n systems write locale translation files as JSON.
Common mistakes: Using require('./config.json') caches the file at startup; changes to the file at runtime are invisible without clearing the module cache; use fs.readFile + JSON.parse for live-reloadable configs.
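A tolerant loader that applies the try/catch advice above and re-reads the file on every call, so runtime edits are picked up (unlike require); the defaults parameter is illustrative:
const fs = require('fs').promises;
async function loadJSON(filePath, defaults = {}) {
  try {
    return JSON.parse(await fs.readFile(filePath, 'utf8'));
  } catch (err) {
    // Missing or malformed file: fall back to defaults instead of crashing
    if (err.code === 'ENOENT' || err instanceof SyntaxError) return defaults;
    throw err;
  }
}
const liveConfig = await loadJSON('config.json', { port: 3000 });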
Use fs.mkdtemp() to create a unique temporary directory with a random suffix, combined with os.tmpdir() to get the system temp path. Temp files are essential for processing uploads, generating exports, running test fixtures, and storing intermediate pipeline data. Always clean up temp files in a finally block to prevent disk exhaustion.
const fs = require('fs').promises;
const os = require('os');
const path = require('path');
// Create a unique temp directory
const tmpDir = await fs.mkdtemp(
path.join(os.tmpdir(), 'myapp-')
);
console.log(tmpDir); // /tmp/myapp-AbCdEf
// Write temp file
const tmpFile = path.join(tmpDir, 'data.txt');
await fs.writeFile(tmpFile, 'temporary data');
// Cleanup when done
await fs.rm(tmpDir, { recursive: true, force: true });
Why it matters: Temp files that aren't cleaned up accumulate on disk and can exhaust storage; using mkdtemp gives each operation a unique collision-free directory that can be safely cleaned up afterward.
Real applications: File converters (PDF to image), archive extractors, and upload processors create temp directories to stage files before validating and moving them to permanent storage.
Common mistakes: Not cleaning up temp directories in error cases (only in the happy path) leaves orphaned temp files; always use try/finally to guarantee cleanup even when exceptions occur.
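A sketch of that try/finally pattern, wrapping the work in a helper so cleanup runs on success and on error alike (the helper name is illustrative):
const fs = require('fs').promises;
const os = require('os');
const path = require('path');
async function withTempDir(work) {
  const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'myapp-'));
  try {
    return await work(tmpDir); // The caller stages its files inside tmpDir
  } finally {
    await fs.rm(tmpDir, { recursive: true, force: true }); // Runs even if work() throws
  }
}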
List a directory tree with readdir's { recursive: true } option in Node.js 18.17+, or with a manually recursive async function in older versions. This pattern is used in build tools, search utilities, static site generators, and code analysis scripts. For very large directory trees, consider using Node.js async generators to stream results rather than collecting everything into a single array.
const fs = require('fs').promises;
const path = require('path');
// Node.js 18.17+ — built-in recursive readdir
const files = await fs.readdir('src', { recursive: true });
// Manual recursive traversal (works in all versions)
async function walkDir(dir) {
const entries = await fs.readdir(dir, { withFileTypes: true });
const files = [];
for (const entry of entries) {
const fullPath = path.join(dir, entry.name);
if (entry.isDirectory()) {
files.push(...await walkDir(fullPath));
} else {
files.push(fullPath);
}
}
return files;
}
const allFiles = await walkDir('./src');
console.log(allFiles); // ['src/index.js', 'src/utils/helper.js']
Why it matters: Every build tool, code scanner, and static site generator needs recursive directory traversal; implementing it correctly with async/await and proper error handling is a key Node.js skill.
Real applications: ESLint scans all JS files, webpack finds all imports, jasmine/jest discovers test files, and documentation generators find all source files — all via recursive directory traversal.
Common mistakes: Not handling symlinks when traversing can cause infinite loops if a symlink points to a parent directory; check entry.isSymbolicLink() and skip or limit symlink depth.
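A variant of walkDir above that skips symlinked entries so a link back to a parent directory cannot cause an infinite loop (a minimal sketch; symlink-depth handling is omitted):
const fs = require('fs').promises;
const path = require('path');
async function walkDirSafe(dir) {
  const entries = await fs.readdir(dir, { withFileTypes: true });
  const files = [];
  for (const entry of entries) {
    if (entry.isSymbolicLink()) continue; // Skip links instead of following them
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      files.push(...await walkDirSafe(fullPath));
    } else {
      files.push(fullPath);
    }
  }
  return files;
}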
File uploads are sent with multipart/form-data encoding and require middleware to parse the incoming data; multer is the most popular choice for Express. Multer provides configurable disk or memory storage, file size limits, and fileFilter callbacks for type validation. For cloud deployments, use multer-s3 to stream uploads directly to S3 without saving to local disk.
const express = require('express');
const multer = require('multer');
const app = express();
const storage = multer.diskStorage({
destination: (req, file, cb) => cb(null, 'uploads/'),
filename: (req, file, cb) => {
const uniqueName = Date.now() + '-' + file.originalname;
cb(null, uniqueName);
}
});
const upload = multer({
storage,
limits: { fileSize: 5 * 1024 * 1024 }, // 5MB limit
fileFilter: (req, file, cb) => {
const allowed = ['image/jpeg', 'image/png', 'image/gif'];
cb(null, allowed.includes(file.mimetype));
}
});
app.post('/upload', upload.single('avatar'), (req, res) => {
res.json({ file: req.file.filename });
});
Why it matters: File upload handling involves security (type validation, size limits) and storage concerns (local disk vs cloud); multer provides a clean middleware API for managing all of these.
Real applications: Profile photo upload, document submission portals, media CMS systems, and CSV bulk import features all need multer-based upload handling with storage and type configuration.
Common mistakes: Trusting the mimetype field sent by the client — it can be spoofed; always verify the actual file content (magic bytes) using a library like file-type in addition to the MIME check.
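A hedged sketch of content-based validation; it assumes the CommonJS file-type package (v16.x) and its fromBuffer helper, which inspects magic bytes; verify the installed version's API, since newer majors are ESM-only:
const FileType = require('file-type'); // assumed: file-type@16 (CommonJS)
async function isReallyAnImage(buffer) {
  const type = await FileType.fromBuffer(buffer); // Detected from magic bytes, not the client-sent mimetype
  return Boolean(type) && ['image/jpeg', 'image/png', 'image/gif'].includes(type.mime);
}
With multer's disk storage, read back the saved file (or its first bytes) after upload and delete it if this check fails.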
The fs/promises API (imported as require('node:fs/promises')) provides promise-based versions of all file system methods, enabling clean async/await syntax without callbacks. Stabilized in Node.js 14 and recommended for all new code, it integrates naturally with try/catch error handling. Common error codes to handle include ENOENT (not found), EACCES (permission denied), and EEXIST (already exists).
// Import the promises API directly
const fs = require('node:fs/promises');
async function processFile(filePath) {
try {
// Read file
const content = await fs.readFile(filePath, 'utf8');
// Process content
const processed = content.toUpperCase();
// Write result
await fs.writeFile('output.txt', processed);
// Get file info
const stats = await fs.stat('output.txt');
console.log('Written:', stats.size, 'bytes');
} catch (err) {
if (err.code === 'ENOENT') {
console.error('File not found:', filePath);
} else {
throw err;
}
}
}
Why it matters: The fs/promises API is the modern standard for file system operations in Node.js; it eliminates callback hell and integrates cleanly with async/await control flow.
Real applications: All modern Express middleware, CLI tools, and server-side scripts use the fs/promises API for reading configs, writing logs, and managing temp files.
Common mistakes: Mixing require('fs').promises and require('node:fs/promises') inconsistently; prefer the node: protocol prefix in Node.js 16+ for clarity and to prevent issues with local files shadowing built-in modules.
File handles (returned by fs.open()) wrap the low-level descriptor for an open file and provide fine-grained control, including positioned reads and writes at specific byte offsets. This is useful when reading only a portion of a large file or performing multiple operations efficiently without reopening the file. Always close handles in a finally block to prevent file descriptor leaks that cause EMFILE errors.
const fs = require('node:fs/promises');
async function partialRead(filePath) {
const handle = await fs.open(filePath, 'r');
try {
// Read 100 bytes starting at position 50
const buffer = Buffer.alloc(100);
const { bytesRead } = await handle.read(buffer, 0, 100, 50);
console.log('Read', bytesRead, 'bytes:', buffer.toString('utf8', 0, bytesRead));
// Get file stats through handle
const stats = await handle.stat();
console.log('File size:', stats.size);
} finally {
await handle.close(); // Always close the handle
}
}
// Using a file handle for multiple writes
const writeHandle = await fs.open('output.txt', 'w');
try {
  await writeHandle.write('Hello ');
  await writeHandle.write('World');
} finally {
  await writeHandle.close(); // Close even if a write throws
}
Why it matters: Understanding file descriptors and EMFILE errors is essential for diagnosing "too many open files" production issues caused by forgetting to close handles.
Real applications: Binary file parsers (audio, video headers), database file formats, and memory-mapped file readers all use positioned reads on file handles to extract specific byte ranges without reading the whole file.
Common mistakes: Not closing file handles in error paths — every open file consumes a file descriptor; on Linux the default limit is 1024, so leaking even a few handles per request quickly exhausts the limit under load.
fs.readFile() loads the entire file into memory at once as a Buffer or string, which is simple but unsuitable for files larger than a few MB. fs.createReadStream() reads the file in configurable-size chunks, making it memory-efficient for large files like videos, archives, and large logs. When serving file downloads via HTTP, streaming with createReadStream().pipe(res) is the correct approach.
const fs = require('fs');
// readFile — loads entire file into memory
// Good for small files (< 10MB)
const data = await fs.promises.readFile('small.json', 'utf8');
// createReadStream — processes in chunks
// Essential for large files
const stream = fs.createReadStream('large-video.mp4');
let totalBytes = 0;
stream.on('data', (chunk) => {
totalBytes += chunk.length;
// Process chunk without loading entire file
});
stream.on('end', () => console.log('Total:', totalBytes));
// Stream to HTTP response (very efficient)
app.get('/download', (req, res) => {
fs.createReadStream('large-file.zip').pipe(res);
});
Why it matters: Using readFile on a 500MB video file allocates 500MB of memory per request; a streaming approach keeps memory consumption constant regardless of file size.
Real applications: File download endpoints, log streaming to the browser, CSV export pipelines, and media servers all use createReadStream to send files without loading them into memory.
Common mistakes: Using readFile in a route handler to serve a downloadable file buffers the entire file in Node.js memory before sending, which can exhaust RAM under concurrent downloads; always pipe a read stream to the response instead.