MongoDB

Security

15 Questions

MongoDB authentication is enabled via the --auth flag or security.authorization: enabled in mongod.conf. Without it, anyone with network access can read and write all data — a major security risk. Before enabling auth, create an administrative user in the admin database to avoid locking yourself out. MongoDB supports several authentication mechanisms: SCRAM-SHA-256 (default, password-based), x.509 (certificate-based), LDAP (enterprise), and Kerberos (enterprise). Always enable auth in production — tens of thousands of MongoDB instances have been ransomed because authentication was never enabled.
# mongod.conf
security:
  authorization: enabled

# Step 1: First connection from localhost (the localhost exception
# permits creating the first user even with authorization enabled)
mongosh --port 27017

# Step 2: Create admin user
use admin
db.createUser({
  user: "adminUser",
  pwd: "SecurePassword!123",
  roles: [{ role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase"]
})

# Step 3: Reconnect WITH auth
mongosh -u adminUser -p 'SecurePassword!123' --authenticationDatabase admin

Why it matters: Authentication is the foundation of database security; the lack of it is how roughly 33,000 MongoDB instances were ransomed in 2017, making enabling auth the most critical first step.

Real applications: Production systems require auth enabled by default to prevent unauthorized access; even a brief window without auth can expose the entire database.

Common mistakes: Not enabling auth in early deployments, storing credentials in plaintext in connection strings, or locking yourself out by enabling auth without creating an admin user first.

RBAC in MongoDB grants users the minimum permissions needed to perform their job (principle of least privilege). Built-in roles include: read, readWrite, dbAdmin, userAdmin, clusterAdmin, and readWriteAnyDatabase. Custom roles let you define exactly which actions on which resources are permitted. Users can have multiple roles across different databases. Application service accounts should have only readWrite on their specific database — never root or dbAdminAnyDatabase. Role inheritance allows building layered permission structures.
// Create a read-only reporting user
db.createUser({
  user: "reportUser",
  pwd: "ReportPass!456",
  roles: [{ role: "read", db: "analytics" }]
})

// Create a custom role with specific permissions
db.createRole({
  role: "orderProcessor",
  privileges: [
    {
      resource: { db: "shop", collection: "orders" },
      actions: ["find", "update"]
    }
  ],
  roles: []
})

// Assign custom role to user
db.grantRolesToUser("appUser", [{ role: "orderProcessor", db: "shop" }])
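
Role inheritance, mentioned in the paragraph above, can be sketched like this — a hypothetical orderManager role that inherits everything orderProcessor grants and adds delete rights:
// Hypothetical layered role: inherits orderProcessor, adds delete rights
db.createRole({
  role: "orderManager",
  privileges: [
    {
      resource: { db: "shop", collection: "orders" },
      actions: ["remove"]  // delete documents; find/update come from the inherited role
    }
  ],
  roles: [{ role: "orderProcessor", db: "shop" }]  // role inheritance
})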

Why it matters: RBAC implements the "principle of least privilege" — a core security principle tested heavily in compliance and security interviews.

Real applications: Service accounts should have only readWrite on their specific database, preventing lateral movement if credentials leak; attackers can't drop databases they have no permissions on.

Common mistakes: Assigning root or dbAdminAnyDatabase roles to application accounts, not creating separate service accounts per environment, or not documenting role purposes.

TLS (Transport Layer Security) encrypts data in transit between MongoDB clients and servers, preventing eavesdropping and man-in-the-middle (MITM) attacks. Configure it via net.tls settings in mongod.conf. You need a valid certificate (PEM format) and can optionally require client certificate authentication. Use tlsMode: requireTLS in production to reject all unencrypted connections. MongoDB Atlas enforces TLS automatically. For self-hosted deployments, use Let's Encrypt or your organization's CA. The --tlsCAFile option specifies the CA certificate to validate server certificates against.
# mongod.conf
net:
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem
    CAFile: /etc/ssl/ca.pem

// Connect with TLS from the Node.js driver
const client = new MongoClient(uri, {
  tls: true,
  tlsCAFile: '/etc/ssl/ca.pem',
  tlsCertificateKeyFile: '/etc/ssl/client.pem'
});

# Connect from mongosh with TLS
mongosh --tls --tlsCAFile /etc/ssl/ca.pem \
  --tlsCertificateKeyFile /etc/ssl/client.pem \
  "mongodb://hostname:27017"

Why it matters: TLS encrypts credentials and data in transit; without it, network sniffing can capture passwords and sensitive data even on private networks.

Real applications: Production systems use TLS with requireTLS mode to reject all unencrypted connections; MongoDB Atlas enforces it automatically.

Common mistakes: Using tls: true without requireTLS (allowing unencrypted fallback), not validating server certificates, or using self-signed certs without CA validation in production.

Encryption at rest protects data stored on disk from unauthorized access if physical media is stolen or a disk is decommissioned. MongoDB Enterprise provides WiredTiger Encrypted Storage Engine using AES-256 encryption. The encryption key is protected by a master key stored in a KMIP-compliant key management server (HashiCorp Vault, AWS KMS, etc.). MongoDB Atlas automatically encrypts all data at rest using disk-level encryption. For Community Edition, OS-level encryption (LUKS on Linux, BitLocker on Windows) is the standard approach. Never store plaintext credentials or secrets in MongoDB documents.
# mongod.conf (Enterprise only)
security:
  enableEncryption: true
  kmip:
    serverName: kmip-server.example.com
    port: 5696
    clientCertificateFile: /etc/ssl/kmip-client.pem
    serverCAFile: /etc/ssl/kmip-ca.pem

# Atlas: Encryption at rest with Customer Key Management
# Configured via Atlas UI or API with AWS KMS / Azure Key Vault / GCP KMS

// Application-level encryption (available in all editions via driver)
const encryption = new ClientEncryption(client, {
  keyVaultNamespace: 'encryption.__keyVault',
  kmsProviders: { aws: { accessKeyId, secretAccessKey } }
});
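
For Community Edition, the OS-level approach mentioned above might look like this LUKS sketch on Linux (the device name /dev/xvdb is illustrative; run before first use of the data directory):
# Encrypt the block device holding MongoDB's data directory
cryptsetup luksFormat /dev/xvdb              # one-time: initialize LUKS (destroys existing data)
cryptsetup open /dev/xvdb mongodb_data       # unlock; prompts for the passphrase
mkfs.xfs /dev/mapper/mongodb_data            # one-time: create the filesystem
mount /dev/mapper/mongodb_data /var/lib/mongodb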

Why it matters: Encryption at rest protects against stolen disks and compliance violations; it's mandatory for regulated industries like healthcare and finance.

Real applications: MongoDB Atlas and Enterprise provide disk encryption; Community Edition relies on OS-level encryption (LUKS, BitLocker).

Common mistakes: Assuming HTTPS encryption covers data at rest (it doesn't), not managing encryption keys properly, or having no key recovery procedure.

CSFLE encrypts specific document fields on the client before sending data to MongoDB. Even MongoDB server administrators cannot see the plaintext values — only the application with the correct encryption key can decrypt them. This provides encryption in transit, at rest, and from MongoDB itself. Supported encryption algorithms: Deterministic (same value always encrypts to the same ciphertext — allows equality queries) and Random (different ciphertext each time — more secure, but not queryable). MongoDB 6.0 introduced Queryable Encryption (QE), which allows equality queries on encrypted fields without revealing data to the server; newer releases extend QE to range and other query types.
const { ClientEncryption } = require('mongodb-client-encryption');

// Create encryption key
const encryption = new ClientEncryption(client, {
  keyVaultNamespace: 'encryption.__keyVault',
  kmsProviders: { local: { key: localMasterKey } }
});
const keyId = await encryption.createDataKey('local');

// Encrypt before insert
const encryptedSSN = await encryption.encrypt("123-45-6789", {
  algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic',
  keyId
});
await db.collection('users').insertOne({
  name: "Alice",
  ssn: encryptedSSN // stored as binary ciphertext
});

// Decrypt on read
const doc = await db.collection('users').findOne({ name: "Alice" });
const plainSSN = await encryption.decrypt(doc.ssn);
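
Queryable Encryption takes a different shape: encrypted fields are declared once in the client options and the driver encrypts, decrypts, and rewrites queries transparently. A minimal sketch, assuming MongoDB 7.0+, the automatic-encryption shared library available, and the data key created above (the mydb.patients namespace is illustrative):
// QE: declare encrypted fields up front; no explicit encrypt/decrypt calls needed
const qeClient = new MongoClient(uri, {
  autoEncryption: {
    keyVaultNamespace: 'encryption.__keyVault',
    kmsProviders: { local: { key: localMasterKey } },
    encryptedFieldsMap: {
      'mydb.patients': {
        fields: [
          { path: 'ssn', bsonType: 'string', keyId, queries: { queryType: 'equality' } }
        ]
      }
    }
  }
});
// Equality queries on ssn now work even though the server only sees ciphertext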

Why it matters: CSFLE protects sensitive data (PII, PHI, credentials) even from MongoDB administrators — critical for highly sensitive applications.

Real applications: Financial and healthcare apps use CSFLE to ensure that data remains encrypted even if a DBA's credentials are compromised.

Common mistakes: Not using encryption for PII, using random mode for queryable fields (preventing queries), or not managing encryption keys separately from connection strings.

NoSQL injection occurs when attackers send malicious operator objects (like { "$gt": "" }) as query parameters, bypassing authentication or exfiltrating data. The fix is to validate and sanitize all user input, using schema validation libraries like Joi or express-validator. Use the $eq operator explicitly, or cast user inputs to the expected type (string, number, ObjectId). Libraries like mongo-sanitize strip MongoDB operators from user input. Never construct queries by concatenating strings from user input. Use parameterized patterns consistently.
// VULNERABLE — attacker sends { username: { $gt: "" }, password: { $gt: "" } }
const user = await db.collection('users').findOne({
  username: req.body.username, // could be { $gt: "" }
  password: req.body.password
});

// SECURE — sanitize with mongo-sanitize
const sanitize = require('mongo-sanitize');
const user = await db.collection('users').findOne({
  username: sanitize(req.body.username),
  password: sanitize(req.body.password)
});

// BETTER — validate types explicitly
const { username, password } = req.body;
if (typeof username !== 'string' || typeof password !== 'string') {
  return res.status(400).json({ error: 'Invalid input' });
}
// Hash password before comparing (never store plaintext)
const user = await db.collection('users').findOne({ username });
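
The schema-validation libraries mentioned above catch operator injection before a query is ever built; a minimal sketch with Joi (field rules are illustrative):
// Joi rejects objects like { $gt: "" } because both fields must be plain strings
const Joi = require('joi');

const loginSchema = Joi.object({
  username: Joi.string().alphanum().min(3).max(50).required(),
  password: Joi.string().min(8).max(128).required()
});

const { error, value } = loginSchema.validate(req.body);
if (error) return res.status(400).json({ error: 'Invalid input' });
const user = await db.collection('users').findOne({ username: value.username });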

Why it matters: NoSQL injection is a common web vulnerability that attackers actively exploit; understanding it is essential for secure API development.

Real applications: Login endpoints and search forms are frequent targets; attackers send { "$gt": "" } to bypass auth or exfiltrate data.

Common mistakes: Not sanitizing user input, using string concatenation in queries, or not validating input types before database operations.

Network security for MongoDB involves binding the server to specific interfaces, configuring firewall rules, and using IP allowlisting to restrict which hosts can connect. Since MongoDB 3.6 the server binds only to localhost by default; older versions listened on all interfaces (0.0.0.0). In production, restrict net.bindIp to the specific addresses that need access. In MongoDB Atlas, each project has an IP Access List that must include your application server's IP before any connection succeeds. On self-hosted clusters, use OS firewalls (iptables/ufw) to allow only application server IPs on port 27017. VPC peering and private endpoints eliminate public internet exposure entirely.
# mongod.conf — bind to specific IP only
net:
  bindIp: 127.0.0.1,10.0.1.15  # localhost + app server IP only
  port: 27017

# iptables rule — allow only app server
iptables -A INPUT -p tcp --dport 27017 -s 10.0.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 27017 -j DROP

# Atlas IP allowlist via CLI
atlas accessLists create --currentIp --projectId <projectId>
atlas accessLists create --ipAddress 10.0.1.15 --projectId <projectId>

# Connection string with auth + TLS (production standard)
mongodb+srv://appUser:password@cluster.mongodb.net/mydb?tls=true

Why it matters: Network security is the first line of defense; binding to 0.0.0.0 is how thousands of MongoDB instances were compromised in ransomware attacks.

Real applications: Production deployments restrict MongoDB to specific subnets; cloud systems use VPC/VPN to eliminate internet exposure.

Common mistakes: Binding to 0.0.0.0 on public networks, not configuring firewall rules, or thinking VPC isolation is sufficient without IP allowlisting.

MongoDB Auditing (Enterprise/Atlas only) records specified database events — authentication, CRUD operations, admin actions — to a log file or syslog for compliance (SOC2, HIPAA, PCI-DSS). The audit log captures: who performed the action, what action was taken, when, from which IP, and whether it succeeded. Configure audit filters to capture only high-risk operations (e.g., user creation, role changes, data deletion) to avoid excessive log volume. MongoDB Atlas provides Database Auditing via the Atlas UI with built-in retention and encrypted log storage. For Community Edition, application-level audit logging must be implemented manually.
# mongod.conf (Enterprise)
auditLog:
  destination: file
  format: JSON
  path: /var/log/mongodb/auditLog.json
  filter: '{
    atype: {
      $in: ["authenticate", "createCollection", "dropCollection",
            "createUser", "dropUser", "grantRolesToUser",
            "authCheck", "logout"]
    }
  }'

// Application-level audit (Community Edition fallback)
async function auditedDelete(collection, filter, userId) {
  const result = await collection.deleteMany(filter);
  await db.collection('audit_log').insertOne({
    action: 'deleteMany',
    collection: collection.collectionName,
    filter, userId,
    deletedCount: result.deletedCount,
    timestamp: new Date()
  });
  return result;
}

Why it matters: Audit logs provide accountability and compliance proof; without them, breach investigations cannot determine what was accessed or compromised.

Real applications: Compliance audits (SOC2, HIPAA, PCI-DSS) require proof of who accessed what data and when; Atlas audit logging provides this automatically.

Common mistakes: Not enabling auditing, enabling too verbose logging causing storage issues, or not retaining audit logs long enough for compliance.

Passwords must never be stored in plaintext. Use bcrypt, argon2, or scrypt to hash passwords with a random salt before storing. Bcrypt automatically generates a salt and embeds it in the hash, so you never store the salt separately. The work factor (cost) should be high enough to make brute-force attacks expensive (bcrypt rounds: 12–14 for modern hardware). Never use MD5 or SHA-1 for passwords — they are fast hashes designed for data integrity, not password security. Also never store API keys or tokens in plaintext; hash them with SHA-256 before storage.
const bcrypt = require('bcrypt');

// Register: hash before saving
async function registerUser(username, plainPassword) {
  const SALT_ROUNDS = 12;
  const hashedPassword = await bcrypt.hash(plainPassword, SALT_ROUNDS);
  await db.collection('users').insertOne({
    username,
    password: hashedPassword, // never plaintext
    createdAt: new Date()
  });
}

// Login: compare with hash
async function loginUser(username, plainPassword) {
  const user = await db.collection('users').findOne({ username });
  if (!user) return null;
  const isValid = await bcrypt.compare(plainPassword, user.password);
  return isValid ? user : null;
}

// Also: use projection to exclude password from queries
db.users.find({}, { password: 0 })
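
For the API keys mentioned above, a fast hash is acceptable because keys are long random strings that can't be brute-forced from a dictionary; a sketch using Node's built-in crypto module (collection and field names are illustrative):
const crypto = require('crypto');

// Store only the SHA-256 digest; hash the presented key and look up the digest
function hashApiKey(rawKey) {
  return crypto.createHash('sha256').update(rawKey).digest('hex');
}

await db.collection('api_keys').insertOne({
  keyHash: hashApiKey(rawApiKey),  // never store rawApiKey itself
  owner: userId,                   // rawApiKey and userId are illustrative variables
  createdAt: new Date()
});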

Why it matters: Password security is fundamental; storing plaintext passwords guarantees that any breach exposes users' credentials — including on other sites where passwords are reused.

Real applications: All production systems use bcrypt with high salt rounds (12+) to make breached password files useless to attackers.

Common mistakes: Using MD5 or SHA-1 for passwords, not salting hashes, storing passwords in ways that can be reversed, or not hashing API keys.

x.509 authentication uses TLS certificates instead of usernames/passwords for authenticating clients and replica set/sharded cluster members. The certificate's Subject Distinguished Name (DN) maps to a MongoDB user. This is the preferred authentication method for service accounts and inter-node communication because it eliminates password management. Internal authentication between replica set members uses keyfiles (shared secrets) or x.509 certificates. Certificate-based auth requires a proper PKI infrastructure — typically the same CA that issues your TLS server certificates also issues client certificates.
# Create MongoDB user mapped to certificate subject
db.getSiblingDB("$external").createUser({
  user: "CN=appService,OU=engineering,O=MyCompany,C=US",
  roles: [{ role: "readWrite", db: "myapp" }]
})

# Connect using x.509 certificate
mongosh --tls \
  --tlsCertificateKeyFile /etc/ssl/app-client.pem \
  --tlsCAFile /etc/ssl/ca.pem \
  --authenticationMechanism MONGODB-X509 \
  --authenticationDatabase '$external' \
  "mongodb://hostname:27017"

# Internal replica set auth with keyfile
# mongod.conf
security:
  keyFile: /etc/mongodb/keyfile  # same file on all RS members
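
The same x.509 login from the Node.js driver might look like this sketch (paths and hostname are illustrative):
// x.509 auth from a service: the certificate's subject DN maps to the $external user
const client = new MongoClient('mongodb://hostname:27017/?authMechanism=MONGODB-X509', {
  tls: true,
  tlsCertificateKeyFile: '/etc/ssl/app-client.pem',
  tlsCAFile: '/etc/ssl/ca.pem'
});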

Why it matters: x.509 authentication eliminates password management for service-to-service and inter-node communication; it's the security standard for infrastructure.

Real applications: Large MongoDB deployments use x.509 for all inter-node communication and service accounts, eliminating password rotation burden.

Common mistakes: Not setting up proper certificate management, using x.509 without verifying certificate chain, or misunderstanding certificate subject DN mapping.

MongoDB's schema validation (JSON Schema) enforces document structure at the database level, preventing malformed or malicious data from being inserted regardless of application bugs. This acts as a server-side defense layer — attackers who bypass application validation still cannot insert invalid documents. Validation can enforce required fields, data types, value ranges, and string patterns. The validationAction can be error (reject invalid) or warn (log but allow). Combining schema validation with application-level validation and input sanitization creates defense-in-depth against injection and data corruption.
db.createCollection("users", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["username", "email", "password"],
      properties: {
        username: {
          bsonType: "string",
          minLength: 3,
          maxLength: 50,
          pattern: "^[a-zA-Z0-9_]+$"  // alphanumeric only
        },
        email: {
          bsonType: "string",
          pattern: "^[^@]+@[^@]+\\.[^@]+$"
        },
        age: {
          bsonType: "int",
          minimum: 0,
          maximum: 150
        }
      },
      additionalProperties: false  // reject unexpected fields
    }
  },
  validationAction: "error"  // reject invalid documents
})
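
With validationAction set to "error", a non-conforming insert fails server-side regardless of application bugs; for example:
// Rejected: username too short, email and password missing
db.users.insertOne({ username: "ab" })
// MongoServerError: Document failed validation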

Why it matters: Schema validation provides server-side data quality enforcement; attackers who bypass application validation still can't insert malformed data.

Real applications: Validation prevents injection attacks, ensures data type consistency, and documents expected document structure for developers.

Common mistakes: Using validation only at application level without server-side validation, not restricting additionalProperties, or using warn mode when error is needed.

The most common MongoDB security failures involve: no authentication (MongoDB ships with auth disabled by default, and versions before 3.6 also bound to all interfaces), binding to 0.0.0.0 without firewall rules making the database internet-accessible, default admin credentials, no TLS (plaintext network traffic), overprivileged service accounts, and storing secrets in documents. These mistakes resulted in the "MongoDB ransomware" attacks of 2017, in which roughly 33,000 databases were wiped. Use MongoDB's official Security Checklist and MongoDB Atlas's secure-by-default settings to audit deployments.
// Security checklist verification
db.adminCommand({ getParameter: 1, authenticationMechanisms: 1 })
// Expect: { authenticationMechanisms: ['SCRAM-SHA-256', ...] }

// Check users (system.users lives in the admin database)
db.getSiblingDB("admin").system.users.find({}, { user: 1, roles: 1 })

// Check bind IP
db.adminCommand({ getCmdLineOpts: 1 }).parsed.net.bindIp
// Should NOT be 0.0.0.0 in production

// Verify TLS is required
db.adminCommand({ getCmdLineOpts: 1 }).parsed.net.tls.mode
// Should be 'requireTLS'

// Check for users with excessive privileges
db.getSiblingDB("admin").system.users.find({ "roles.role": "root" })

Why it matters: Understanding common security mistakes is essential to avoid becoming another victim of data breaches and ransomware attacks.

Real applications: The MongoDB ransomware attacks of 2017 exploited exactly this combination — no auth plus public accessibility — which is simple to prevent.

Common mistakes: Treating security as optional ("we're behind a firewall"), assuming cloud platforms provide all security by default, or ignoring security advisories and patches.

Connection strings contain sensitive credentials and must never be hardcoded or committed to version control. Use environment variables via .env files (with dotenv) loaded at runtime, never committed to git. In production, use secret management services: AWS Secrets Manager, HashiCorp Vault, Azure Key Vault, or GCP Secret Manager. Rotate credentials regularly. Add `.env` to `.gitignore`. Use different credentials per environment (development, staging, production) with least-privilege roles for each. Atlas database users can be scoped and rotated independently, and the IP access list restricts where credentials can be used from.
# .env (NEVER commit this file!)
MONGODB_URI=mongodb+srv://appUser:SecurePass!@cluster.mongodb.net/myapp

# .gitignore
.env
*.pem
*.key

// app.js — load from env
require('dotenv').config();
const client = new MongoClient(process.env.MONGODB_URI, {
  // never hardcode credentials here
});

// AWS Secrets Manager integration
const { SecretsManager } = require('@aws-sdk/client-secrets-manager');
async function getMongoURI() {
  const client = new SecretsManager({ region: 'us-east-1' });
  const secret = await client.getSecretValue({ SecretId: 'prod/mongodb/uri' });
  return JSON.parse(secret.SecretString).MONGODB_URI;
}

Why it matters: Credential management is critical; leaked connection strings in logs or version control lead to data breaches within hours.

Real applications: Production systems use environment variables in development, secret vaults in production (AWS Secrets Manager, Vault), and rotate credentials regularly.

Common mistakes: Storing .env files in git, hardcoding connection strings in code, using same credentials across environments, or not rotating credentials regularly.

MongoDB's official security checklist covers: Enable Authentication (--auth), Enable Access Control (RBAC with least privilege), Encrypt Communications (TLS/SSL), Encrypt Data at Rest (WiredTiger encryption or OS-level), Limit Network Exposure (bindIP, firewall rules), Enable Auditing, Run MongoDB with Dedicated OS Account (not root), Disable Unused Features (HTTP admin interface in older versions), and Keep MongoDB Updated (patch security vulnerabilities). For Atlas, most of these are enforced by default, making it significantly easier to maintain a secure posture.
# Complete mongod.conf security hardening
net:
  bindIp: 10.0.1.15,127.0.0.1  # restrict network access
  port: 27017
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem
    CAFile: /etc/ssl/ca.pem

security:
  authorization: enabled        # require authentication
  javascriptEnabled: false      # disable server-side JS if not needed

# Run mongod as non-root user
# $ useradd -r -s /bin/false mongod
# $ chown -R mongod:mongod /var/lib/mongodb
# $ sudo -u mongod mongod --config /etc/mongod.conf

# Keep updated
# $ sudo apt update && sudo apt install -y mongodb-org

Why it matters: Following a comprehensive hardening checklist ensures you don't miss critical security configurations; it's the standard framework for MongoDB security.

Real applications: MongoDB Atlas uses this checklist as the default configuration, proving it's both necessary and achievable for all deployments.

Common mistakes: Implementing security partially (e.g., auth but no TLS), ignoring less obvious items (disabling JS eval, running as non-root), or not keeping systems updated.

MongoDB itself has no built-in rate limiting — it must be implemented at the application or API gateway layer. Use express-rate-limit to throttle API requests. For authentication endpoints, implement account lockout (e.g., lock after 5 failed attempts). Use maxTimeMS on queries to prevent long-running queries from exhausting resources. Implement query timeout at the connection level. Use MongoDB Atlas's IP access list and connection limits per user. Monitor for unusual query patterns (sudden spike in read/write ops) using Atlas alerts or Prometheus metrics to detect attempted data exfiltration.
const rateLimit = require('express-rate-limit');

// Rate limit login endpoint
const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5, // 5 attempts per window
  message: 'Too many login attempts, please try again later'
});
app.post('/login', loginLimiter, loginHandler);

// MongoDB query timeout to prevent abuse
const results = await db.collection('products')
  .find({ category: userInput })
  .maxTimeMS(5000) // kill query after 5 seconds
  .toArray();

// Atlas: set operation timeout on MongoClient
const client = new MongoClient(uri, {
  serverSelectionTimeoutMS: 5000,
  socketTimeoutMS: 45000,
  connectTimeoutMS: 10000
});
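
The account-lockout idea mentioned above can be tracked in MongoDB itself; a minimal sketch (field names and thresholds are illustrative):
// Lock an account after repeated failed logins by counting attempts on the user doc
const MAX_ATTEMPTS = 5;
const LOCK_MS = 15 * 60 * 1000; // 15 minutes

async function recordFailedLogin(username) {
  await db.collection('users').updateOne(
    { username },
    { $inc: { failedAttempts: 1 } }
  );
  const user = await db.collection('users').findOne({ username });
  if (user && user.failedAttempts >= MAX_ATTEMPTS) {
    await db.collection('users').updateOne(
      { username },
      { $set: { lockedUntil: new Date(Date.now() + LOCK_MS) },
        $unset: { failedAttempts: "" } }
    );
  }
}

// Before checking the password: refuse logins while lockedUntil is in the future;
// on successful login, $unset both failedAttempts and lockedUntil
function isLocked(user) {
  return user.lockedUntil && user.lockedUntil > new Date();
}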

Why it matters: Rate limiting and abuse protection prevent DoS attacks, brute-force login attempts, and resource exhaustion attacks from succeeding.

Real applications: Social media platforms use rate limiting to prevent API abuse; authentication endpoints use account lockout to prevent brute-force password cracking.

Common mistakes: Only rate-limiting login endpoints (should also protect search, data export endpoints), setting timeouts too lenient, or not monitoring for abuse patterns.