Enable access control by setting security.authorization: enabled in mongod.conf. Without it, anyone with network access can read and write all data — a major security risk. Before enabling auth, create an administrative user in the admin database to avoid locking yourself out. MongoDB supports several authentication mechanisms: SCRAM-SHA-256 (default, password-based), x.509 (certificate-based), LDAP (Enterprise), and Kerberos (Enterprise). Always enable auth in production — thousands of MongoDB instances have been ransomed simply because authentication was never turned on.
# mongod.conf
security:
authorization: enabled
# Step 1: Connect WITHOUT auth (first time only)
mongosh --port 27017
# Step 2: Create admin user
use admin
db.createUser({
user: "adminUser",
pwd: "SecurePassword!123",
roles: [{ role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase"]
})
# Step 3: Reconnect WITH auth
mongosh -u adminUser -p SecurePassword!123 --authenticationDatabase admin
Why it matters: Authentication is the foundation of database security; roughly 33,000 unauthenticated MongoDB instances were ransomed in 2017, making auth the most critical first step.
Real applications: Production systems require auth enabled by default to prevent unauthorized access; even a brief window without auth can expose the entire database.
Common mistakes: Not enabling auth in early deployments, storing credentials in plaintext in connection strings, or locking yourself out by creating auth without an admin user first.
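One practical detail when moving from mongosh to application code: special characters in a password (such as @ or :) must be percent-encoded before they go into a connection string, or the driver will misparse the URI. A minimal sketch; buildMongoUri is a hypothetical helper name, not a driver API:

```javascript
// Hypothetical helper (not part of the MongoDB driver): build an
// authenticated connection URI with percent-encoded credentials.
// Characters like '@' or ':' in a password would otherwise break
// URI parsing.
function buildMongoUri(user, password, host, authDb = 'admin') {
  const u = encodeURIComponent(user);
  const p = encodeURIComponent(password);
  return `mongodb://${u}:${p}@${host}/?authSource=${authDb}`;
}

console.log(buildMongoUri('adminUser', 'p@ss:word', 'localhost:27017'));
// mongodb://adminUser:p%40ss%3Aword@localhost:27017/?authSource=admin
```

Note that authSource must name the database where the user was created (admin in the example above), matching --authenticationDatabase in mongosh.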
// Create a read-only reporting user
db.createUser({
user: "reportUser",
pwd: "ReportPass!456",
roles: [{ role: "read", db: "analytics" }]
})
// Create a custom role with specific permissions
db.createRole({
role: "orderProcessor",
privileges: [
{
resource: { db: "shop", collection: "orders" },
actions: ["find", "update"]
}
],
roles: []
})
// Assign custom role to user
db.grantRolesToUser("appUser", [{ role: "orderProcessor", db: "shop" }])
Why it matters: RBAC implements the "principle of least privilege" — a core security principle tested heavily in compliance and security interviews.
Real applications: Service accounts should have only readWrite on their specific database, preventing lateral movement if credentials leak; attackers can't drop databases they have no permissions on.
Common mistakes: Assigning root or dbAdminAnyDatabase roles to application accounts, not creating separate service accounts per environment, or not documenting role purposes.
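The "don't give app accounts root" rule above can be enforced mechanically in a user-provisioning script. A small hedged sketch; checkAppRoles and the forbidden-role list are illustrative choices, not a MongoDB API:

```javascript
// Hypothetical provisioning guard: reject overly broad roles on
// application service accounts (principle of least privilege).
// Roles may be plain strings or { role, db } documents, matching
// the shapes accepted by db.createUser().
const FORBIDDEN_APP_ROLES = new Set([
  'root', 'dbAdminAnyDatabase', 'userAdminAnyDatabase', 'readWriteAnyDatabase'
]);

function checkAppRoles(roles) {
  const violations = roles
    .map(r => (typeof r === 'string' ? r : r.role)) // normalize both shapes
    .filter(role => FORBIDDEN_APP_ROLES.has(role));
  return { ok: violations.length === 0, violations };
}

console.log(checkAppRoles([{ role: 'readWrite', db: 'shop' }]));
// { ok: true, violations: [] }
```

Running such a check in CI before applying user definitions catches privilege creep before it reaches production.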
Enable TLS through the net.tls settings in mongod.conf. You need a valid certificate (PEM format) and can optionally require client certificate authentication. Use mode: requireTLS in production to reject all unencrypted connections. MongoDB Atlas enforces TLS automatically. For self-hosted deployments, use Let's Encrypt or your organization's CA. The --tlsCAFile option specifies the CA certificate to validate server certificates against.
# mongod.conf
net:
tls:
mode: requireTLS
certificateKeyFile: /etc/ssl/mongodb.pem
CAFile: /etc/ssl/ca.pem
// Connect with TLS from the Node.js driver
const client = new MongoClient(uri, {
tls: true,
tlsCAFile: '/etc/ssl/ca.pem',
tlsCertificateKeyFile: '/etc/ssl/client.pem'
});
# Connect from mongosh with TLS
mongosh --tls --tlsCAFile /etc/ssl/ca.pem \
--tlsCertificateKeyFile /etc/ssl/client.pem \
"mongodb://hostname:27017"
Why it matters: TLS encrypts credentials and data in transit; without it, network sniffing can capture passwords and sensitive data even on private networks.
Real applications: Production systems use TLS with requireTLS mode to reject all unencrypted connections; MongoDB Atlas enforces it automatically.
Common mistakes: Using tls: true without requireTLS (allowing unencrypted fallback), not validating server certificates, or using self-signed certs without CA validation in production.
# mongod.conf (Enterprise only)
security:
enableEncryption: true
kmip:
serverName: kmip-server.example.com
port: 5696
clientCertificateFile: /etc/ssl/kmip-client.pem
serverCAFile: /etc/ssl/kmip-ca.pem
# Atlas: Encryption at rest with Customer Key Management
# Configured via Atlas UI or API with AWS KMS / Azure Key Vault / GCP KMS
// Application-level encryption (available in all editions via driver)
const encryption = new ClientEncryption(client, {
keyVaultNamespace: 'encryption.__keyVault',
kmsProviders: { aws: { accessKeyId, secretAccessKey } }
});
Why it matters: Encryption at rest protects against stolen disks and compliance violations; it's mandatory for regulated industries like healthcare and finance.
Real applications: MongoDB Atlas and Enterprise provide disk encryption; Community Edition relies on OS-level encryption (LUKS, BitLocker).
Common mistakes: Assuming HTTPS encryption covers data at rest (it doesn't), not managing encryption keys properly, or losing key recovery procedures.
const { ClientEncryption } = require('mongodb-client-encryption');
// Create encryption key
const encryption = new ClientEncryption(client, {
keyVaultNamespace: 'encryption.__keyVault',
kmsProviders: { local: { key: localMasterKey } }
});
const keyId = await encryption.createDataKey('local');
// Encrypt before insert
const encryptedSSN = await encryption.encrypt("123-45-6789", {
algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic',
keyId
});
await db.collection('users').insertOne({
name: "Alice",
ssn: encryptedSSN // stored as binary ciphertext
});
// Decrypt on read
const doc = await db.collection('users').findOne({ name: "Alice" });
const plainSSN = await encryption.decrypt(doc.ssn);
Why it matters: CSFLE protects sensitive data (PII, PHI, credentials) even from MongoDB administrators — critical for highly sensitive applications.
Real applications: Financial and healthcare apps use CSFLE to ensure that data remains encrypted even if a DBA's credentials are compromised.
Common mistakes: Not using encryption for PII, using random mode for queryable fields (preventing queries), or not managing encryption keys separately from connection strings.
Attackers can inject MongoDB query operators (e.g., { "$gt": "" }) as query parameters, bypassing authentication or exfiltrating data. The fix is to validate and sanitize all user input, using schema validation libraries like Joi or express-validator. Use the $eq operator explicitly, or cast user inputs to the expected type (string, number, ObjectId). Libraries like mongo-sanitize strip MongoDB operators from user input. Never construct queries by concatenating strings from user input; use parameterized patterns consistently.
// VULNERABLE — attacker sends { username: { $gt: "" }, password: { $gt: "" } }
const user = await db.collection('users').findOne({
username: req.body.username, // could be { $gt: "" }
password: req.body.password
});
// SECURE — sanitize with mongo-sanitize
const sanitize = require('mongo-sanitize');
const user = await db.collection('users').findOne({
username: sanitize(req.body.username),
password: sanitize(req.body.password)
});
// BETTER — validate types explicitly
const { username, password } = req.body;
if (typeof username !== 'string' || typeof password !== 'string') {
return res.status(400).json({ error: 'Invalid input' });
}
// Hash password before comparing (never store plaintext)
const user = await db.collection('users').findOne({ username });
Why it matters: NoSQL injection is a common web vulnerability that attackers actively exploit; understanding it is essential for secure API development.
Real applications: Login endpoints and search forms are frequent targets; attackers send { "$gt": "" } to bypass auth or exfiltrate data.
Common mistakes: Not sanitizing user input, using string concatenation in queries, or not validating input types before database operations.
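The core of what a sanitizer like mongo-sanitize does can be sketched in a few lines: recursively drop any object key that begins with $, so operator objects degrade into harmless empty objects. This is an illustrative sketch of the technique, not the library's actual implementation:

```javascript
// Minimal sketch of operator stripping: recursively remove any key
// beginning with '$' so injected operators like { $gt: "" } are
// neutralized before the value reaches a query filter.
function stripOperators(value) {
  if (Array.isArray(value)) return value.map(stripOperators);
  if (value && typeof value === 'object') {
    const clean = {};
    for (const [k, v] of Object.entries(value)) {
      if (!k.startsWith('$')) clean[k] = stripOperators(v);
    }
    return clean;
  }
  return value; // primitives pass through unchanged
}

console.log(stripOperators({ username: { $gt: '' } }));
// { username: {} }  -- the injected operator is gone
```

Stripping is a safety net, not a substitute for the explicit type checks shown above; the typeof guard remains the stronger defense for fields that must be strings.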
Never bind mongod to all network interfaces (0.0.0.0) — restrict it to specific IPs using net.bindIp. In MongoDB Atlas, each project has an IP Access List that must include your application server's IP before any connection succeeds. On self-hosted clusters, use OS firewalls (iptables/ufw) to allow only application server IPs on port 27017. VPC peering and private endpoints eliminate public internet exposure entirely.
# mongod.conf — bind to specific IP only
net:
bindIp: 127.0.0.1,10.0.1.15 # localhost + app server IP only
port: 27017
# iptables rule — allow only app server
iptables -A INPUT -p tcp --dport 27017 -s 10.0.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 27017 -j DROP
# Atlas IP allowlist via CLI
atlas accessLists create --currentIp --projectId
atlas accessLists create --ipAddress 10.0.1.15 --projectId
# Connection string with auth + TLS (production standard)
mongodb+srv://appUser:password@cluster.mongodb.net/mydb?tls=true
Why it matters: Network security is the first line of defense; binding to 0.0.0.0 is how thousands of MongoDB instances were compromised in ransomware attacks.
Real applications: Production deployments restrict MongoDB to specific subnets; cloud systems use VPC/VPN to eliminate internet exposure.
Common mistakes: Binding to 0.0.0.0 on public networks, not configuring firewall rules, or thinking VPC isolation is sufficient without IP allowlisting.
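The subnet notation in the iptables rule above (10.0.1.0/24) is worth understanding precisely, since the same CIDR form appears in Atlas access lists and VPC rules. A small sketch of an IPv4 CIDR membership check; ipToInt and inCidr are hypothetical helper names:

```javascript
// Hypothetical IPv4 CIDR check, mirroring the allow rule
// "-s 10.0.1.0/24": an address matches when its network bits
// (the first <prefix> bits) equal those of the CIDR base.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, o) => (acc << 8) + parseInt(o, 10), 0) >>> 0;
}

function inCidr(ip, cidr) {
  const [base, bits] = cidr.split('/');
  // Special-case /0: shifting by 32 is undefined in JS bitwise ops.
  const mask = bits === '0' ? 0 : (~0 << (32 - parseInt(bits, 10))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

console.log(inCidr('10.0.1.15', '10.0.1.0/24')); // true
console.log(inCidr('10.0.2.1', '10.0.1.0/24'));  // false
```

A /24 therefore admits 256 addresses (10.0.1.0 through 10.0.1.255); prefer the narrowest prefix that covers your app servers.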
# mongod.conf (Enterprise)
auditLog:
destination: file
format: JSON
path: /var/log/mongodb/auditLog.json
filter: '{
atype: {
$in: ["authenticate", "createCollection", "dropCollection",
"createUser", "dropUser", "grantRolesToUser",
"authCheck", "logout"]
}
}'
// Application-level audit (Community Edition fallback)
async function auditedDelete(collection, filter, userId) {
const result = await collection.deleteMany(filter);
await db.collection('audit_log').insertOne({
action: 'deleteMany',
collection: collection.collectionName,
filter, userId,
deletedCount: result.deletedCount,
timestamp: new Date()
});
return result;
}
Why it matters: Audit logs provide accountability and compliance proof; without them, breach investigations cannot determine what was accessed or compromised.
Real applications: Compliance audits (SOC2, HIPAA, PCI-DSS) require proof of who accessed what data and when; Atlas audit logging provides this automatically.
Common mistakes: Not enabling auditing, enabling too verbose logging causing storage issues, or not retaining audit logs long enough for compliance.
const bcrypt = require('bcrypt');
// Register: hash before saving
async function registerUser(username, plainPassword) {
const SALT_ROUNDS = 12;
const hashedPassword = await bcrypt.hash(plainPassword, SALT_ROUNDS);
await db.collection('users').insertOne({
username,
password: hashedPassword, // never plaintext
createdAt: new Date()
});
}
// Login: compare with hash
async function loginUser(username, plainPassword) {
const user = await db.collection('users').findOne({ username });
if (!user) return null;
const isValid = await bcrypt.compare(plainPassword, user.password);
return isValid ? user : null;
}
// Also: use projection to exclude password from queries
db.users.find({}, { password: 0 })
Why it matters: Password security is fundamental; plaintext password storage is a guaranteed breach outcome that exposes users on other sites.
Real applications: All production systems use bcrypt with high salt rounds (12+) to make breached password files useless to attackers.
Common mistakes: Using MD5 or SHA-1 for passwords, not salting hashes, storing passwords in ways that can be reversed, or not hashing API keys.
// Create MongoDB user mapped to certificate subject (run in mongosh)
db.getSiblingDB("$external").createUser({
user: "CN=appService,OU=engineering,O=MyCompany,C=US",
roles: [{ role: "readWrite", db: "myapp" }]
})
# Connect using x.509 certificate
mongosh --tls \
--tlsCertificateKeyFile /etc/ssl/app-client.pem \
--tlsCAFile /etc/ssl/ca.pem \
--authenticationMechanism MONGODB-X509 \
--authenticationDatabase '$external' \
"mongodb://hostname:27017"
# Internal replica set auth with keyfile
# mongod.conf
security:
keyFile: /etc/mongodb/keyfile # same file on all RS members
Why it matters: x.509 authentication eliminates password management for service-to-service and inter-node communication; it's the security standard for infrastructure.
Real applications: Large MongoDB deployments use x.509 for all inter-node communication and service accounts, eliminating password rotation burden.
Common mistakes: Not setting up proper certificate management, using x.509 without verifying certificate chain, or misunderstanding certificate subject DN mapping.
Set validationAction to error (reject invalid documents) or warn (log but allow them). Combining schema validation with application-level validation and input sanitization creates defense-in-depth against injection and data corruption.
db.createCollection("users", {
validator: {
$jsonSchema: {
bsonType: "object",
required: ["username", "email", "password"],
properties: {
username: {
bsonType: "string",
minLength: 3,
maxLength: 50,
pattern: "^[a-zA-Z0-9_]+$" // letters, digits, underscore only
},
email: {
bsonType: "string",
pattern: "^[^@]+@[^@]+\\.[^@]+$"
},
age: {
bsonType: "int",
minimum: 0,
maximum: 150
}
},
additionalProperties: false // reject unexpected fields
}
},
validationAction: "error" // reject invalid documents
})
Why it matters: Schema validation provides server-side data quality enforcement; attackers who bypass application validation still can't insert malformed data.
Real applications: Validation prevents injection attacks, ensures data type consistency, and documents expected document structure for developers.
Common mistakes: Using validation only at application level without server-side validation, not restricting additionalProperties, or using warn mode when error is needed.
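Defense-in-depth means the application should apply the same rules before the round trip to the server. A sketch that mirrors the $jsonSchema above in plain JavaScript; validateUser and the regexes are illustrative, chosen to match the server-side patterns:

```javascript
// Mirror the server-side $jsonSchema rules in application code, so
// bad input is rejected before it reaches the database at all.
const USERNAME_RE = /^[a-zA-Z0-9_]+$/;      // same pattern as the validator
const EMAIL_RE = /^[^@]+@[^@]+\.[^@]+$/;

function validateUser({ username, email, age }) {
  const errors = [];
  if (typeof username !== 'string' || username.length < 3 ||
      username.length > 50 || !USERNAME_RE.test(username)) {
    errors.push('username');
  }
  if (typeof email !== 'string' || !EMAIL_RE.test(email)) {
    errors.push('email');
  }
  // age is optional in the schema; check only when present
  if (age !== undefined && (!Number.isInteger(age) || age < 0 || age > 150)) {
    errors.push('age');
  }
  return errors; // empty array means the document passes
}
```

Keeping the two layers in sync is the main maintenance cost; some teams generate both from a single schema definition to avoid drift.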
// Security checklist verification
db.adminCommand({ getParameter: 1, authenticationMechanisms: 1 })
// Expect: { authenticationMechanisms: ['SCRAM-SHA-256', ...] }
// Check users (run against the admin database)
db.system.users.find({}, { user: 1, roles: 1 })
// Check bind IP
db.adminCommand({ getCmdLineOpts: 1 }).parsed.net.bindIp
// Should NOT be 0.0.0.0 in production
// Verify TLS is required
db.adminCommand({ getCmdLineOpts: 1 }).parsed.net.tls.mode
// Should be 'requireTLS'
// Check for users with excessive privileges
db.system.users.find({ "roles.role": "root" })
Why it matters: Understanding common security mistakes is essential to avoid becoming another victim of data breaches and ransomware attacks.
Real applications: The MongoDB ransomware attacks of 2017 explicitly exploited no-auth + public accessibility combinations that are simple to prevent.
Common mistakes: Treating security as optional ("we're behind a firewall"), assuming cloud platforms provide all security by default, or ignoring security advisories and patches.
Store credentials in .env files (with dotenv) loaded at runtime, never committed to git. In production, use secret management services: AWS Secrets Manager, HashiCorp Vault, Azure Key Vault, or GCP Secret Manager. Rotate credentials regularly. Add `.env` to `.gitignore`. Use different credentials per environment (development, staging, production) with least-privilege roles for each. Atlas allows database users with IP-specific access that can be rotated without changing connection strings.
# .env (NEVER commit this file!)
MONGODB_URI=mongodb+srv://appUser:SecurePass!@cluster.mongodb.net/myapp
# .gitignore
.env
*.pem
*.key
// app.js — load from env
require('dotenv').config();
const client = new MongoClient(process.env.MONGODB_URI, {
// never hardcode credentials here
});
// AWS Secrets Manager integration
const { SecretsManager } = require('@aws-sdk/client-secrets-manager');
async function getMongoURI() {
const client = new SecretsManager({ region: 'us-east-1' });
const secret = await client.getSecretValue({ SecretId: 'prod/mongodb/uri' });
return JSON.parse(secret.SecretString).MONGODB_URI;
}
Why it matters: Credential management is critical; leaked connection strings in logs or version control lead to data breaches within hours.
Real applications: Production systems use environment variables in development, secret vaults in production (AWS Secrets Manager, Vault), and rotate credentials regularly.
Common mistakes: Storing .env files in git, hardcoding connection strings in code, using same credentials across environments, or not rotating credentials regularly.
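A related leak path is logging: a connection string printed in an error message or request log exposes the password even when the .env file itself is safe. A small hedged sketch; redactMongoUri is a hypothetical helper, not a driver feature:

```javascript
// Hypothetical helper: redact the password portion of a MongoDB URI
// before it reaches logs, error reports, or monitoring dashboards.
function redactMongoUri(uri) {
  // Matches "//user:password@" and keeps the username for debuggability.
  return uri.replace(/\/\/([^:/@]+):([^@]+)@/, '//$1:***@');
}

console.log(redactMongoUri(
  'mongodb+srv://appUser:SecurePass!@cluster.mongodb.net/myapp'
));
// mongodb+srv://appUser:***@cluster.mongodb.net/myapp
```

URIs without embedded credentials pass through unchanged, so the helper is safe to apply unconditionally in a logging wrapper.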
# Complete mongod.conf security hardening
net:
bindIp: 10.0.1.15,127.0.0.1 # restrict network access
port: 27017
tls:
mode: requireTLS
certificateKeyFile: /etc/ssl/mongodb.pem
CAFile: /etc/ssl/ca.pem
security:
authorization: enabled # require authentication
javascriptEnabled: false # disable server-side JS if not needed
# Run mongod as non-root user
# $ useradd -r -s /bin/false mongod
# $ chown -R mongod:mongod /var/lib/mongodb
# $ sudo -u mongod mongod --config /etc/mongod.conf
# Keep updated
# $ sudo apt update && sudo apt install -y mongodb-org
Why it matters: Following a comprehensive hardening checklist ensures you don't miss critical security configurations; it's the standard framework for MongoDB security.
Real applications: MongoDB Atlas applies this checklist as its default configuration, proving it's both necessary and achievable for all deployments.
Common mistakes: Implementing security partially (e.g., auth but no TLS), ignoring less obvious items (disabling JS eval, running as non-root), or not keeping systems updated.
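The hardening checklist above is easy to verify programmatically once the config is parsed (for example with a YAML library) into a plain object. A hedged sketch; auditConfig and its finding strings are illustrative, not a MongoDB tool:

```javascript
// Hypothetical checker: audit a parsed mongod.conf object against the
// hardening checklist (auth on, TLS required, no wildcard bind,
// server-side JS disabled). Returns a list of findings; empty = pass.
function auditConfig(conf) {
  const findings = [];
  if (conf?.security?.authorization !== 'enabled') {
    findings.push('authorization disabled');
  }
  if (conf?.net?.tls?.mode !== 'requireTLS') {
    findings.push('TLS not required');
  }
  if ((conf?.net?.bindIp || '').split(',').includes('0.0.0.0')) {
    findings.push('bound to 0.0.0.0');
  }
  if (conf?.security?.javascriptEnabled !== false) {
    findings.push('server-side JS enabled');
  }
  return findings;
}
```

Wiring this into CI (fail the pipeline when findings is non-empty) turns "implementing security partially" from a silent drift into a visible build failure.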
const rateLimit = require('express-rate-limit');
// Rate limit login endpoint
const loginLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 5, // 5 attempts per window
message: 'Too many login attempts, please try again later'
});
app.post('/login', loginLimiter, loginHandler);
// MongoDB query timeout to prevent abuse
const results = await db.collection('products')
.find({ category: userInput })
.maxTimeMS(5000) // kill query after 5 seconds
.toArray();
// Atlas: set operation timeout on MongoClient
const client = new MongoClient(uri, {
serverSelectionTimeoutMS: 5000,
socketTimeoutMS: 45000,
connectTimeoutMS: 10000
});
Why it matters: Rate limiting and abuse protection prevent DoS attacks, brute-force login attempts, and resource exhaustion attacks from succeeding.
Real applications: Social media platforms use rate limiting to prevent API abuse; authentication endpoints use account lockout to prevent brute-force password cracking.
Common mistakes: Only rate-limiting login endpoints (should also protect search, data export endpoints), setting timeouts too lenient, or not monitoring for abuse patterns.
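The fixed-window strategy that express-rate-limit uses by default can be sketched in a few lines, which makes its behavior (and its single-process limitation) concrete. This is an illustrative sketch, not the library's implementation; createRateLimiter is a hypothetical name:

```javascript
// Minimal in-memory fixed-window rate limiter sketch. Counts reset
// when a request arrives after the window expires. Single process
// only; multi-instance deployments typically back this with Redis.
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // start a new window
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}

const allow = createRateLimiter({ windowMs: 15 * 60 * 1000, max: 5 });
// allow('1.2.3.4') -> true for the first 5 calls in a window, then false
```

Because the state lives in one process's memory, a load-balanced deployment needs a shared store (or the library's Redis store option) for the limit to hold globally.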