for...of with them). They are useful for removing duplicates from an array, tracking visited items, and membership checks.
The main methods are add(), has(), delete(), clear(), and the size property (not length).
Here is how Set works:
const set = new Set([1, 2, 3, 2, 1]); // duplicates removed
console.log(set.size); // 3
console.log(set); // Set(3) { 1, 2, 3 }
set.add(4);
set.add(2); // ignored — already exists
console.log(set.has(3)); // true
console.log(set.has(9)); // false
set.delete(1);
console.log(set.size); // 3
// Remove duplicates from array
const arr = [1, 2, 2, 3, 3, 4];
const unique = [...new Set(arr)];
console.log(unique); // [1, 2, 3, 4]
// Iteration
for (const val of set) {
console.log(val); // values in insertion order
}
Sets use SameValueZero comparison — similar to === but treats +0 and -0 as equal and NaN as equal to NaN.
Unlike arrays, Sets don't have a built-in indexOf or includes method — use has() instead, which is faster than searching an array.
Why it matters: Set's has() is O(1) while array's includes() is O(n) — the performance difference is massive for large datasets. Deduplication with [...new Set(arr)] is the idiomatic one-liner and a standard interview answer.
Real applications: Removing duplicate user IDs, deduplicating API response arrays, tracking visited graph nodes in BFS/DFS, implementing unique tag collections, and fast membership/visited checks in algorithms.
Common mistakes: Using length instead of size (Set uses size), trying to access elements by index (set[0] returns undefined), and assuming Set deduplicates objects by content (it uses reference equality — two separate objects with the same data are different).
size property, and are directly iterable. They are often more efficient than objects for frequent additions and lookups.
The main methods are set(), get(), has(), delete(), and clear().
Here is how Map works:
const map = new Map();
// Any type as key
map.set('string', 1);
map.set(42, 'number key');
map.set(true, 'boolean key');
const objKey = { id: 1 };
map.set(objKey, 'object key');
console.log(map.get('string')); // 1
console.log(map.get(objKey)); // 'object key'
console.log(map.has(42)); // true
console.log(map.size); // 4
map.delete(42);
console.log(map.size); // 3
// Initialize from array of pairs
const map2 = new Map([
['name', 'Alice'],
['age', 30]
]);
// Iterate
for (const [key, value] of map2) {
console.log(`${key}: ${value}`);
}
Convert a Map to a plain object: Object.fromEntries(map). Convert a plain object to a Map: new Map(Object.entries(obj)).
Maps are better than objects when: keys are not strings, key order matters, frequent add/delete operations happen, or you need to store metadata about objects.
Why it matters: Map solves several real problems with plain objects: prototype pollution risk, non-string key requirements, and O(1) size tracking. Knowing when to reach for Map vs object is a daily architectural decision and a common interview question.
Real applications: Character frequency counting (coding interviews), caching expensive computations by object reference, event listener registries keyed by DOM element, and metadata associations where the key is not a string.
Common mistakes: Using length instead of size, using map['key'] instead of map.get('key') (bracket notation doesn't use the Map's storage), and forgetting that Maps can't be directly JSON.stringify'd (convert to object first).
toString and constructor. Maps have no inherited keys, so there are no accidental key collisions with built-in property names.
Maps have a size property (O(1) lookup), while getting an object's key count requires Object.keys(obj).length.
Here is a direct comparison:
// Object keys: only strings/symbols
const obj = {};
obj[1] = 'a'; // key becomes string '1'
obj[{}] = 'b'; // key becomes '[object Object]'
console.log(Object.keys(obj)); // ['1', '[object Object]']
// Map keys: any type
const map = new Map();
map.set(1, 'a'); // key is number 1
map.set({}, 'b'); // key is the actual object reference
map.set(NaN, 'c'); // even NaN works as key
// Prototype collision in object
const safe = {};
safe['constructor'] = 'oops'; // shadows the inherited Object.prototype.constructor!
// Map has no such issue:
const safeMap = new Map();
safeMap.set('constructor', 'safe'); // no problem
// Performance: Maps are faster for frequent add/delete
// Objects are faster for fixed keys with lots of accesses
Use a plain object when: keys are static strings, you need JSON serialization, or you need to pass to code expecting a normal object.
Use a Map when: keys are not strings, you need to know the count quickly, or you add/remove keys frequently.
Why it matters: This comparison is a standard senior interview question. The prototype pollution security issue with objects using user-supplied keys is a direct OWASP concern — Map is safer for dynamic key collections from untrusted sources.
Real applications: Configuration objects (plain object), router handler registries (plain object), DOM element metadata (Map with object keys), caches keyed by request objects (Map), and user permission registries (Map for fast lookups).
Common mistakes: Using a plain object for a cache keyed by DOM elements (keys become [object HTMLDivElement], all colliding), forgetting JSON.stringify doesn't serialize Maps, and using objects for user-provided keys without prototype pollution protection.
add(), has(), and delete().
The main use case is tracking objects without preventing garbage collection — for example, marking which DOM nodes have been processed.
Here is how WeakSet works:
const processed = new WeakSet();
function process(element) {
if (processed.has(element)) {
console.log('Already processed');
return;
}
// do work...
processed.add(element);
console.log('Processed');
}
const div = document.createElement('div');
process(div); // "Processed"
process(div); // "Already processed"
// When div is removed from DOM and no references remain,
// it can be garbage collected — WeakSet won't hold it alive
// WeakSet does NOT allow primitives
const ws = new WeakSet();
// ws.add(1); // TypeError: Invalid value used in weak set
ws.add({ id: 1 }); // OK
// Not iterable
// for (const item of ws) {} // TypeError
WeakSet is not a replacement for Set — it is specifically for the memory-sensitive use case where you need to associate metadata with objects without affecting their lifetime.
WeakSets automatically remove entries when the referenced object is garbage collected, avoiding memory leaks.
Why it matters: WeakSet solves the specific problem of needing to tag/mark objects without preventing garbage collection. It's the memory-safe alternative to using a regular Set to track DOM elements or component instances.
Real applications: Tracking which DOM elements have been processed by an animation/initialization routine, circular reference detection in custom serializers, and marking objects as "already seen" in traversal algorithms without keeping them alive.
Common mistakes: Trying to iterate a WeakSet (not iterable by design — this is intentional), adding primitives to WeakSet (only objects allowed), and using a regular Set when tracking DOM elements (creates memory leaks when elements are removed from DOM).
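The circular-reference detection mentioned above can be sketched with a WeakSet that tracks the objects on the current traversal path. This is a minimal illustration; the hasCycle helper is hypothetical, not part of any library:

```javascript
// Path-based cycle detection: a WeakSet remembers the objects on the
// current traversal path without keeping them alive.
function hasCycle(value, path = new WeakSet()) {
  if (value === null || typeof value !== 'object') return false;
  if (path.has(value)) return true; // we looped back to an object on this path
  path.add(value);
  const cyclic = Object.values(value).some(v => hasCycle(v, path));
  path.delete(value); // leaving this branch of the traversal
  return cyclic;
}

const node = { name: 'node' };
node.self = node; // circular reference
console.log(hasCycle(node)); // true
console.log(hasCycle({ x: { y: 1 } })); // false
```

Deleting from the path on the way back out distinguishes a true cycle from a shared (but acyclic) reference that appears in two branches.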
size property. They only support get(), set(), has(), and delete().
The key use case is associating private data or metadata with objects without preventing those objects from being garbage collected.
Here is how WeakMap works:
// Private data pattern using WeakMap
const privateData = new WeakMap();
class Person {
constructor(name, age) {
// Store private data keyed by the instance
privateData.set(this, { name, age });
}
getName() {
return privateData.get(this).name;
}
getAge() {
return privateData.get(this).age;
}
}
const alice = new Person('Alice', 30);
console.log(alice.getName()); // 'Alice'
// privateData.get(alice) is only reachable by code that can see the privateData WeakMap
// DOM node metadata (no memory leak)
const cache = new WeakMap();
function getOrCreate(node) {
if (!cache.has(node)) {
cache.set(node, { clicks: 0 });
}
return cache.get(node);
}
// When the DOM node is removed, its entry is cleaned up automatically
WeakMap is perfect for the private fields pattern — external code cannot access the WeakMap without a reference to the key object AND the WeakMap itself.
This is also how transpilers such as Babel and TypeScript implement class private fields (#field) when compiling for older environments: each field becomes a WeakMap keyed by the instance.
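For comparison, here is the same idea with native private fields (ES2022), which need no external WeakMap:

```javascript
// Native private fields express the WeakMap pattern declaratively.
class Person {
  #name;
  #age;
  constructor(name, age) {
    this.#name = name;
    this.#age = age;
  }
  getName() { return this.#name; }
  getAge() { return this.#age; }
}

const bob = new Person('Bob', 41);
console.log(bob.getName()); // 'Bob'
// bob.#name; // SyntaxError outside the class body
```

The WeakMap pattern remains useful when you cannot rely on #field support or need to attach the data from outside the class body.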
Why it matters: WeakMap is the canonical pre-ES2022 way to implement truly private instance data in classes. It also solves a critical memory problem: caching or memoizing results keyed by objects without keeping those objects in memory indefinitely.
Real applications: Private data pattern for class instances, DOM-to-metadata caches (element → computed data), memoization functions that take objects as arguments, and libraries that need to associate state with user-provided objects without leaking memory.
Common mistakes: Using a regular Map for DOM-element caching (memory leak when elements are removed), trying to iterate a WeakMap (impossible by design), using primitive keys in WeakMap (only objects and non-registered symbols are valid keys).
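The memoization use case mentioned above can be sketched like this; memoizeByObject is an illustrative helper name, not a standard API:

```javascript
// Memoize a function of one object argument, without leaking memory:
// once the argument object is garbage collected, its cached result goes too.
function memoizeByObject(fn) {
  const cache = new WeakMap();
  return function (obj) {
    if (!cache.has(obj)) {
      cache.set(obj, fn(obj));
    }
    return cache.get(obj);
  };
}

let calls = 0;
const area = memoizeByObject(rect => { calls++; return rect.w * rect.h; });

const r = { w: 3, h: 4 };
console.log(area(r), calls); // 12 1
console.log(area(r), calls); // 12 1  (cached, fn not called again)
```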
map.keys() (returns keys), map.values() (returns values), and map.entries() (returns [key, value] pairs). They also work directly with for...of which uses entries() by default.
Maps also have a forEach() method, just like arrays: the callback receives (value, key, map), mirroring the array order of (element, index, array).
All iteration happens in insertion order.
Here are all the ways to iterate a Map:
const map = new Map([
['one', 1],
['two', 2],
['three', 3]
]);
// 1. for...of with destructuring (most common)
for (const [key, value] of map) {
console.log(`${key} = ${value}`);
}
// 2. forEach (note: value comes before key)
map.forEach((value, key) => {
console.log(`${key}: ${value}`);
});
// 3. keys(), values(), entries()
console.log([...map.keys()]); // ['one', 'two', 'three']
console.log([...map.values()]); // [1, 2, 3]
console.log([...map.entries()]); // [['one',1],['two',2],['three',3]]
// 4. Spread into array of pairs
const pairs = [...map]; // same as [...map.entries()]
Note: in forEach(callback), the callback receives (value, key), value first and key second. This may be the opposite of what you expect for a key-value store, but it is consistent with the Array forEach convention of (element, index).
Iterating a Map is always in insertion order, which is a guarantee. Plain object key order is more complex and depends on key type.
Why it matters: forEach's reversed (value, key) argument order is a classic gotcha — it's consistent with array forEach but trips up developers expecting (key, value). Map iteration is fundamental for frequency tables and data transformation pipelines.
Real applications: Processing URL query parameters, iterating router handler registries, rendering list items from a Map cache, frequency analysis in coding problems, and any pipeline that needs guaranteed ordering of key-value processing.
Common mistakes: Using map.forEach((key, value) => instead of (value, key) (arguments are reversed vs intuition), forgetting Maps are directly iterable with for...of (no need for .entries()), and converting to object via JSON.stringify and losing the iteration order guarantee.
Historically, Set had no built-in methods for union, intersection, and difference, but they are easy to compute using spread and filtering.
Union: all unique elements from both sets. Intersection: elements that exist in both sets. Difference: elements in one set but not the other.
As of ES2025, Sets have built-in union(), intersection(), difference(), and symmetricDifference() methods; check runtime support before relying on them.
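Where the built-in methods are available, the operations read like this (a guarded sketch; verify support in your target runtime before using them unguarded):

```javascript
// Requires ES2025 Set methods (Node.js 22+, recent browsers).
const setA = new Set([1, 2, 3, 4]);
const setB = new Set([3, 4, 5, 6]);

if (typeof setA.union === 'function') {
  // Each method returns a new Set; setA and setB are not modified.
  console.log([...setA.union(setB)]);               // [1, 2, 3, 4, 5, 6]
  console.log([...setA.intersection(setB)]);        // [3, 4]
  console.log([...setA.difference(setB)]);          // [1, 2]
  console.log([...setA.symmetricDifference(setB)]); // [1, 2, 5, 6]
  console.log(setA.isSubsetOf(setB));               // false
} else {
  console.log('Built-in Set methods not supported in this runtime');
}
```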
Here is how to implement set operations:
const a = new Set([1, 2, 3, 4]);
const b = new Set([3, 4, 5, 6]);
// Union: all items from both
const union = new Set([...a, ...b]);
console.log([...union]); // [1, 2, 3, 4, 5, 6]
// Intersection: only items in both
const intersection = new Set([...a].filter(x => b.has(x)));
console.log([...intersection]); // [3, 4]
// Difference (a - b): items in a but not b
const difference = new Set([...a].filter(x => !b.has(x)));
console.log([...difference]); // [1, 2]
// Symmetric difference: in one but not both
const symDiff = new Set(
[...a, ...b].filter(x => !(a.has(x) && b.has(x)))
);
console.log([...symDiff]); // [1, 2, 5, 6]
// Is subset? (is a a subset of b?)
const isSubset = [...a].every(x => b.has(x));
console.log(isSubset); // false
Using set.has(x) is O(1) — much faster than array.includes(x) which is O(n). When doing intersection on large datasets, using a Set for membership checks is significantly faster.
These operations are useful in permissions systems, tag filtering, and comparing collections.
Why it matters: Set intersection/union/difference are classic interview problems and real-world data manipulation tasks. Knowing that set.has() is O(1) versus array's O(n) lookup is what makes Set-based intersection dramatically faster at scale.
Real applications: Access control (user has all required permissions?), tag filtering (articles matching all selected tags), feature flag intersection, user preference comparison, and any algorithm that needs to find common/unique items across collections.
Common mistakes: Converting to arrays for every operation instead of using has() for the check (loses the O(1) benefit), not knowing about ES2025 built-in Set methods (set.intersection(), set.union()) and reinventing them, and using nested loops O(n²) when one Set + filter gives O(n).
size, or you want no risk of prototype property collisions.
Use a plain object when: keys are static strings you know at compile time, you need JSON serialization (Map cannot be directly serialized), or you pass data to APIs expecting plain objects.
A common mistake is using an object when the keys are dynamic user-provided strings — this opens up prototype pollution attacks. Map is safer in such cases.
Here are the key decision points:
// Use Map when key type varies
const registry = new Map();
const btn = document.querySelector('button');
registry.set(btn, { clickCount: 0 }); // DOM node as key
registry.set(Symbol('unique'), 'data'); // Symbol key
// Use Map for dynamic keys (avoid prototype pollution)
// RISKY with object:
const obj = {};
obj['__proto__'] = { isAdmin: true }; // prototype pollution!
// SAFE with Map:
const map = new Map();
map.set('__proto__', 'harmless'); // just a normal string key
// Use plain object for structured data
const user = { name: 'Alice', age: 30 }; // clearly a data object
JSON.stringify(user); // works easily
// Map for frequency counting (common interview question)
function charFrequency(str) {
const freq = new Map();
for (const ch of str) {
freq.set(ch, (freq.get(ch) || 0) + 1);
}
return freq;
}
console.log(charFrequency('hello')); // Map(4) { 'h' => 1, 'e' => 1, 'l' => 2, 'o' => 1 }
The character frequency pattern is very common in coding interviews. Maps make it clean and easy to read.
For small, fixed-shape data structures (config, user object), plain objects are simpler and more readable. Save Maps for dynamic collections.
Why it matters: This architectural choice affects code security (prototype pollution), performance (O(1) size), and correctness (non-string keys). Knowing the tradeoffs is a distinguishing mark for senior engineers and is tested in interviews.
Real applications: Character/word frequency counting in algorithms (Map), HTTP request caching with URL as key (Map), user settings with string keys (plain object), and event-driven architectures where events are keyed by Symbol (Map).
Common mistakes: Always defaulting to plain objects for "dictionaries" even when keys are DOM elements or objects, not knowing map.get() vs bracket notation (map['key'] doesn't use the Map API), and serializing Maps directly to JSON (produces {} — must convert first).
=== with two exceptions: NaN is treated as equal to NaN (even though NaN === NaN is false), and +0 and -0 are treated as equal.
This means you cannot have two entries with key NaN in a Map (it is treated as the same key). Similarly, a Set will not add NaN twice.
For objects, equality is by reference — two different objects with the same contents are considered different keys.
Here is SameValueZero in action:
// NaN handling
const set = new Set();
set.add(NaN);
set.add(NaN); // same as first, not added
console.log(set.size); // 1
const map = new Map();
map.set(NaN, 'found');
console.log(map.get(NaN)); // 'found'
// -0 and +0 treated as equal
set.add(+0);
set.add(-0); // treated as same as +0
console.log(set.size); // 2 (NaN and 0)
// Objects use reference equality
const obj1 = { id: 1 };
const obj2 = { id: 1 }; // same content but different reference
const objSet = new Set([obj1, obj2]);
console.log(objSet.size); // 2 — they are different objects!
const objMap = new Map();
objMap.set(obj1, 'first');
objMap.set(obj2, 'second'); // different key
console.log(objMap.size); // 2
If you need to use objects as Map/Set keys and treat them as equal based on content, you would need to use a string representation as the key (like JSON.stringify(obj)) or a custom data structure.
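A minimal sketch of the stringify-key workaround, with illustrative helper names. One caveat: JSON.stringify is property-order-sensitive, so { a: 1, b: 2 } and { b: 2, a: 1 } produce different keys:

```javascript
// Content-based keys via serialization: equal content yields the same string key.
const byContent = new Map();

function setByContent(obj, value) {
  byContent.set(JSON.stringify(obj), value);
}

function getByContent(obj) {
  return byContent.get(JSON.stringify(obj));
}

setByContent({ id: 1 }, 'first');
setByContent({ id: 1 }, 'second'); // same content, same key: overwrites
console.log(getByContent({ id: 1 })); // 'second'
console.log(byContent.size); // 1
```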
This reference-equality behavior is why you often see patterns like map.set(someObject, metadata) — the object itself is a unique key tied to that specific instance.
Why it matters: The SameValueZero algorithm's NaN treatment is the reason [NaN].includes(NaN) returns true (uses SameValueZero) while [NaN].indexOf(NaN) returns -1 (uses strict equality). This is a common interview question about array methods.
Real applications: Adding NaN to a Set correctly deduplicates it (unlike array-based dedup using indexOf), using objects as Map keys requires understanding reference equality, and debugging "why isn't my Set removing duplicates" for object-valued collections.
Common mistakes: Adding two separate but equal objects to a Set expecting deduplication (reference equality — they're different), using === to check if a value is in a Set/Map (use has()), and being surprised that map.get(NaN) works when NaN is the key.
[...map]) or Array.from(map). By default this gives an array of [key, value] pairs. Use [...map.keys()] or [...map.values()] for just keys or values.
Go the other way (array to Map) using new Map(arrayOfPairs) — the array must be an array of [key, value] pairs.
You can also convert a plain object to a Map via Object.entries(), and a Map back to an object with Object.fromEntries().
Here are all the conversion methods:
const map = new Map([['a', 1], ['b', 2], ['c', 3]]);
// Map → Array of pairs
const pairs = [...map]; // [['a',1],['b',2],['c',3]]
const samePairs = Array.from(map); // same result
// Map → keys array
const keys = [...map.keys()]; // ['a', 'b', 'c']
// Map → values array
const vals = [...map.values()]; // [1, 2, 3]
// Array of pairs → Map
const map2 = new Map(pairs);
// Plain object → Map
const obj = { x: 10, y: 20 };
const fromObj = new Map(Object.entries(obj));
console.log(fromObj.get('x')); // 10
// Map → plain object
const backToObj = Object.fromEntries(map);
console.log(backToObj); // { a: 1, b: 2, c: 3 }
// JSON (Map → JSON requires conversion)
const json = JSON.stringify(Object.fromEntries(map));
const fromJson = new Map(Object.entries(JSON.parse(json)));
JSON does not natively support Maps — you must convert to an object first. This means Maps with non-string keys cannot be perfectly round-tripped through JSON.
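One workable pattern, assuming keys and values are themselves JSON-serializable, is a custom replacer/reviver pair. The __type tag name here is arbitrary, not a standard:

```javascript
// Tag Maps during stringify, rebuild them during parse.
function replacer(key, value) {
  return value instanceof Map
    ? { __type: 'Map', entries: [...value] }
    : value;
}

function reviver(key, value) {
  return value && value.__type === 'Map'
    ? new Map(value.entries)
    : value;
}

const m = new Map([['a', 1], [2, 'two']]); // note the non-string key 2
const json = JSON.stringify(m, replacer);
const restored = JSON.parse(json, reviver);
console.log(restored instanceof Map); // true
console.log(restored.get(2)); // 'two', the number key survives
```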
The Array.from(map) method also accepts a mapping function as a second argument: Array.from(map, ([k, v]) => v * 2).
Why it matters: Map/Array conversion is needed constantly since most JavaScript APIs expect arrays. Knowing Object.fromEntries(map) and new Map(Object.entries(obj)) as bidirectional conversion functions is essential for practical Map usage.
Real applications: Converting Map frequency tables to sorted arrays for display, serializing Maps to JSON (via Object.fromEntries), building Maps from server responses (via Object.entries), and transforming Map data through array methods like filter and map.
Common mistakes: JSON.stringify on a Map produces {} (empty object) — must convert first, assuming [...map] gives a flat array (it gives an array of [key, value] pairs), and not knowing that Array.from accepts a map function for one-step conversion-and-transform.
set[0]. Arrays have a rich set of methods (map, filter, reduce) that Sets do not have natively.
Sets are much faster for membership checks (has() is O(1)) compared to arrays (includes() is O(n)). This makes Sets ideal for lookup tables and deduplication.
When you need ordered, indexed data with full array methods: use an array. When you need uniqueness and fast membership testing: use a Set.
Here is a comparison:
// Array: duplicates allowed, index access
const arr = [1, 2, 2, 3];
console.log(arr[0]); // 1 (index access)
console.log(arr.includes(2)); // true (O(n))
arr.push(2); // [1, 2, 2, 3, 2]
// Set: no duplicates, no index
const set = new Set([1, 2, 2, 3]);
// set[0] — undefined (no index access)
console.log(set.has(2)); // true (O(1))
set.add(2); // ignored, already exists
// Performance comparison (large dataset)
const bigArr = Array.from({length: 100000}, (_, i) => i);
const bigSet = new Set(bigArr);
console.time('array'); bigArr.includes(99999); console.timeEnd('array'); // slower
console.time('set'); bigSet.has(99999); console.timeEnd('set'); // faster
// Set → Array when you need array methods
const setArr = [...set];
const doubled = setArr.map(x => x * 2); // [2, 4, 6]
Sets do not support array methods like map, filter, or reduce directly. Convert to array first, then use those methods.
For frequency counting (how many times each value appears), you need an array or Map — a Set only tells you if a value exists, not how many times.
Why it matters: Choosing Set vs Array affects both correctness (sets auto-deduplicate) and performance (O(1) has() vs O(n) includes()). This tradeoff is a core data structure interview topic.
Real applications: Deduplication pipelines use Set, indexed data with order/duplicate significance uses Array, BFS/visited tracking uses Set for O(1) lookups, and any list UI component uses Array since React maps over arrays for rendering.
Common mistakes: Calling array methods directly on a Set (TypeError — convert first), using Array for large-scale membership testing when Set would be dramatically faster, and forgetting that spreading a Set into an array ([...set]) gives elements in insertion order.
// Memory LEAK risk with Map
const cache = new Map();
function process(element) {
cache.set(element, { result: 'computed' });
}
// Even after DOM element is removed, cache keeps it alive!
// The element cannot be garbage collected.
// Safe with WeakMap — no memory leak
const safeCache = new WeakMap();
function safeProcess(element) {
safeCache.set(element, { result: 'computed' });
}
// When element is removed from DOM,
// garbage collector can collect it + its WeakMap entry
// Demonstration of the difference
let obj = { data: 'large data' };
const regular = new Map([[obj, 'value']]);
const weak = new WeakMap([[obj, 'value']]);
obj = null; // remove strong reference
// regular Map still holds obj — cannot be GC'd
// WeakMap allows GC to collect obj if nothing else holds it
The trade-off is that WeakMap/WeakSet are not iterable and have no size property. You cannot see what is inside them — you can only check for specific keys.
Always use WeakMap when you are associating data with DOM elements or other objects whose lifetime you do not control, to prevent memory leaks.
Why it matters: Memory leak via strong Map references to DOM elements is a real production problem in SPAs. When components unmount but a global Map still references them, the browser can't GC them. WeakMap is the correct tool.
Real applications: Analytics trackers that annotate DOM elements with metadata, React libraries that map component instances to internal state, animation systems that cache computed values per element, and any system that needs per-object metadata without owning the object lifecycle.
Common mistakes: Using a regular Map for a cache keyed by DOM nodes (memory leak when nodes are removed from DOM), checking if developers can iterate WeakMap/WeakSet (they can't — non-iterable by design), and thinking you can see the "remaining" size of a WeakMap (you can't — GC timing is non-deterministic).
size property that gives you the count in constant time O(1). Getting the size of a plain object requires Object.keys(obj).length, which creates a new array and is O(n).
For large collections that change frequently, Map's size is significantly more efficient since it is updated automatically with each set() and delete() call.
This is one of the practical performance reasons to prefer Map over Object for dynamic key collections.
Here is the comparison:
// Map has instant size
const map = new Map([['a', 1], ['b', 2], ['c', 3]]);
console.log(map.size); // 3 — O(1)
map.set('d', 4);
console.log(map.size); // 4 — automatically updated
map.delete('a');
console.log(map.size); // 3 — automatically updated
// Object requires creating a new array
const obj = { a: 1, b: 2, c: 3 };
console.log(Object.keys(obj).length); // 3 — O(n), creates array
console.log(Object.values(obj).length); // 3 — same cost
// For Set
const set = new Set([1, 2, 3, 4]);
console.log(set.size); // 4 — O(1)
// Note: 'size' not 'length' for Map and Set
// Arrays use 'length', Maps/Sets use 'size'
Remember: Object.keys() only counts enumerable own properties. If you have non-enumerable properties, they won't be counted. Map's size counts everything.
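A quick illustration of the enumerable-only caveat:

```javascript
const counted = { visible: 1 };
Object.defineProperty(counted, 'hidden', {
  value: 2,
  enumerable: false // invisible to Object.keys
});

console.log(counted.hidden); // 2, the property exists
console.log(Object.keys(counted).length); // 1, only 'visible' is counted

// Map has no such distinction: size counts every entry.
const countedMap = new Map([['visible', 1], ['hidden', 2]]);
console.log(countedMap.size); // 2
```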
The size property of Map and Set is read-only — you cannot set it directly.
Why it matters: Map's O(1) size vs. Object's O(n) Object.keys(obj).length is a real performance difference. For dynamic caches and collections that change frequently, this can matter at scale.
Real applications: Rate-limiting implementations that track request counts per client, analytics that track distinct user counts, any cache that needs to evict entries when it exceeds a size threshold, and interview problems asking to maintain a bounded cache (Map.size check).
Common mistakes: Using length on a Map/Set (it's size), calling Object.keys(map).length to get Map's count (returns 0 — use map.size), and trying to set map.size = 0 to clear it (read-only — use map.clear()).
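The bounded-cache idea mentioned above can be sketched as follows; the BoundedCache class and its evict-oldest policy are illustrative, not a standard API:

```javascript
// A simple size-capped cache. Map preserves insertion order, so the
// first key returned by map.keys() is the oldest entry.
class BoundedCache {
  constructor(limit) {
    this.limit = limit;
    this.map = new Map();
  }
  set(key, value) {
    this.map.set(key, value);
    if (this.map.size > this.limit) {
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
  get(key) {
    return this.map.get(key);
  }
}

const cache = new BoundedCache(2);
cache.set('a', 1);
cache.set('b', 2);
cache.set('c', 3); // exceeds the limit: 'a' is evicted
console.log(cache.get('a')); // undefined
console.log(cache.get('c')); // 3
```

A production LRU would also refresh a key's position on get(); this sketch only evicts by insertion order.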
const wordCount = new Map([
['banana', 3],
['apple', 7],
['cherry', 1],
['date', 5]
]);
// Sort by value (ascending)
const sortedAsc = new Map(
[...wordCount].sort((a, b) => a[1] - b[1])
);
console.log([...sortedAsc]);
// [['cherry',1],['banana',3],['date',5],['apple',7]]
// Sort by value (descending)
const sortedDesc = new Map(
[...wordCount].sort((a, b) => b[1] - a[1])
);
console.log([...sortedDesc.keys()]);
// ['apple','date','banana','cherry']
// Sort by key (alphabetically)
const sortedByKey = new Map(
[...wordCount].sort((a, b) => a[0].localeCompare(b[0]))
);
// Common pattern: character frequency sorted by count
function topChars(str) {
const freq = new Map();
for (const ch of str) freq.set(ch, (freq.get(ch) || 0) + 1);
return [...freq].sort((a, b) => b[1] - a[1]);
}
console.log(topChars('hello')); // [['l',2],['h',1],['e',1],['o',1]]
The spread [...map] converts the Map to an array of [key, value] pairs, which can then be sorted using the standard array sort() method.
The resulting sorted array is wrapped in new Map() to restore it as an iterable Map with defined order.
Why it matters: The spread-sort-reconstruct pattern is a standard algorithm interview technique for sorted frequency tables. Top-K frequent elements and word frequency analysis problems require it. Maps don't sort in-place, so this pattern is essential.
Real applications: Leaderboard rendering (sort by score), word frequency analysis, sorting product categories by count, and any UI that displays a map-like collection sorted by value rather than insertion order.
Common mistakes: Trying to call map.sort() directly (Maps don't have a sort method), mutating the original map instead of creating a sorted copy, and forgetting that sort() is lexicographic by default — numeric values need a comparator function (a, b) => a[1] - b[1].