const user = { name: "Alice", age: 30, role: "admin" };
Object.keys(user); // ["name", "age", "role"]
Object.values(user); // ["Alice", 30, "admin"]
Object.entries(user); // [["name","Alice"], ["age",30], ["role","admin"]]
// Iterate with entries
for (const [key, val] of Object.entries(user)) {
  console.log(key + ": " + val);
}
// Convert to Map
const map = new Map(Object.entries(user));
Object.entries() is the most versatile of the three because it works with for...of loops, the Map constructor, and array destructuring. To convert an entries array back into an object, use Object.fromEntries().
Why it matters: These three methods are the standard way to iterate and transform plain objects in modern JavaScript. Knowing when to use each and how to chain them with array methods like filter, map, and reduce is essential for data transformation work.
Real applications: Filtering object properties by value, transforming API response objects, building Maps from objects (new Map(Object.entries(obj))), converting URL search params to objects, and serializing form data.
Common mistakes: Using Object.keys() when symbol-keyed properties are needed (use Reflect.ownKeys()), not knowing all three skip inherited and non-enumerable properties, and iterating over entries when for...in would unintentionally include inherited properties.
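Chaining Object.entries() with array methods, as mentioned above, looks like this — a minimal sketch that filters an object's properties by value type:

```javascript
const userObj = { name: "Alice", age: 30, role: "admin" };

// entries -> filter -> fromEntries: keep only string-valued properties
const strings = Object.fromEntries(
  Object.entries(userObj).filter(([, v]) => typeof v === "string")
);
console.log(strings); // { name: "Alice", role: "admin" }
```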
const target = { a: 1 };
const source = { b: 2, c: 3 };
Object.assign(target, source);
console.log(target); // { a: 1, b: 2, c: 3 }
// Merging (creates new object)
const merged = Object.assign({}, { a: 1 }, { b: 2 }, { a: 99 });
console.log(merged); // { a: 99, b: 2 }
// Shallow copy — nested objects are shared
const original = { data: { x: 1 } };
const copy = Object.assign({}, original);
copy.data.x = 99;
console.log(original.data.x); // 99 (shared reference!)
In modern JavaScript, the spread operator ({ ...obj }) is preferred over Object.assign() for creating new objects because it is more readable. Object.assign() is still useful when you need to mutate an existing target object or when working with dynamic source objects.
Why it matters: Object.assign() is the predecessor to spread for object merging. Understanding both enables you to choose the right tool — spread for pure functional transforms, Object.assign() for in-place mutations and multi-source merges.
Real applications: Creating config objects with defaults (merge user config over base config), implementing mixin patterns, shallow-cloning objects, and initializing objects with optional overrides in library constructors.
Common mistakes: Using Object.assign() thinking it deep-merges (it's shallow — nested objects are not merged, they're replaced), not knowing that Object.assign() triggers setters on the target (spread does not), and using Object.assign({}, ...) when spread { ...defaults, ...overrides } is cleaner.
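The setter-triggering difference noted above can be demonstrated directly — Object.assign() performs assignment on the target (invoking setters), while spread defines plain data properties on a new object:

```javascript
const logged = [];
const targetWithSetter = {
  _value: 0,
  set value(v) { logged.push(v); this._value = v; }
};

Object.assign(targetWithSetter, { value: 42 }); // setter runs
console.log(targetWithSetter._value); // 42
console.log(logged); // [42]

// Spread copies into a NEW object as plain data properties;
// no setter is involved for the overriding key
const spreadCopy = { ...targetWithSetter, value: 7 };
console.log(spreadCopy.value); // 7
```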
// freeze — fully immutable
const frozen = Object.freeze({ x: 1, y: 2 });
frozen.x = 99; // silently fails (throws in strict mode)
frozen.z = 3; // silently fails
console.log(frozen); // { x: 1, y: 2 }
// seal — can modify, cannot add/remove
const sealed = Object.seal({ x: 1, y: 2 });
sealed.x = 99; // allowed
sealed.z = 3; // silently fails
delete sealed.x; // silently fails
console.log(sealed); // { x: 99, y: 2 }
console.log(Object.isFrozen(frozen)); // true
console.log(Object.isSealed(sealed)); // true
To deeply freeze an object (including nested objects), you need to recursively call Object.freeze() on all nested objects. Object.isFrozen(), Object.isSealed(), and Object.isExtensible() check the current state of an object.
Why it matters: Immutability is a core concept in functional programming and modern frameworks. Object.freeze() is the native way to enforce it at runtime — important for constants, configs, and shared state that must not change.
Real applications: Freezing Redux action type constants, making API response objects immutable before caching, freezing prototype objects to prevent monkey-patching, and creating record-like immutable value objects in domain models.
Common mistakes: Thinking Object.freeze() is deep (it's shallow — nested objects can still be mutated), expecting errors when mutating a frozen object in non-strict mode (silently fails), not knowing the difference between freeze (immutable values), seal (no add/delete), and preventExtensions (no add only).
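The recursive deep freeze mentioned above can be sketched in a few lines (deepFreeze is not a built-in — this is a minimal hand-rolled version that ignores Symbol keys and cycles):

```javascript
// Recursively freeze an object and all nested objects
function deepFreeze(obj) {
  for (const key of Object.keys(obj)) {
    const val = obj[key];
    if (typeof val === "object" && val !== null) deepFreeze(val);
  }
  return Object.freeze(obj);
}

const config = deepFreeze({ api: { url: "https://example.com", retries: 3 } });
// Fails silently in sloppy mode, throws in strict mode — hence try/catch
try { config.api.retries = 99; } catch {}
console.log(config.api.retries); // 3 — nested object is frozen too
```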
const obj = {};
Object.defineProperty(obj, "name", {
  value: "Alice",
  writable: false,     // cannot change value
  enumerable: true,    // shows in for...in / Object.keys
  configurable: false  // cannot delete or reconfigure
});
obj.name = "Bob"; // silently fails
console.log(obj.name); // "Alice"
// Getter/Setter
Object.defineProperty(obj, "upper", {
  get() { return this.name.toUpperCase(); },
  enumerable: true,
  configurable: true
});
console.log(obj.upper); // "ALICE"
Use Object.defineProperties() to define multiple properties at once. defineProperty is used internally by frameworks for reactive data binding (Vue 2 uses it extensively) and for creating properties that cannot be accidentally overwritten or enumerated.
Why it matters: Object.defineProperty() is the low-level API that powers getter/setter-based reactivity (Vue 2), non-enumerable internal properties, and read-only constants. Without understanding it, you can't reason about how frameworks implemented reactivity before Proxy.
Real applications: Adding non-enumerable metadata to objects, defining read-only version constants, implementing Vue 2-like reactive properties, creating computed getter properties, and defining configurable vs non-configurable properties in library APIs.
Common mistakes: Forgetting that enumerable, configurable, and writable all default to false when using defineProperty (unlike regular assignment where they're all true), trying to redefine a non-configurable property (TypeError), and mixing accessor descriptors and data descriptors (TypeError).
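The all-false defaults mentioned above make defineProperty handy for hidden, locked metadata — a short sketch:

```javascript
const record = { id: 1 };

// Omitted descriptor fields default to false, so this property is
// non-writable, non-enumerable, and non-configurable
Object.defineProperty(record, "_internal", { value: "meta" });

console.log(Object.keys(record));       // ["id"] — _internal is hidden
console.log(JSON.stringify(record));    // '{"id":1}' — also invisible to JSON
console.log(record._internal);          // "meta" — but still readable directly
```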
const defaults = { theme: "light", lang: "en", debug: false };
const userPrefs = { theme: "dark", lang: "fr" };
// Merge with overrides
const config = { ...defaults, ...userPrefs };
console.log(config);
// { theme: "dark", lang: "fr", debug: false }
// Add/override specific properties
const updated = { ...config, debug: true, version: 2 };
// Shallow copy
const original = { nested: { x: 1 } };
const copy = { ...original };
copy.nested.x = 99;
console.log(original.nested.x); // 99 (shared ref!)
The spread operator is essential for immutable state updates in React. Remember that it only creates a shallow copy — for nested objects, you need to spread at each level or use structuredClone() for a deep copy.
Why it matters: Object spread is one of the most fundamental modern JavaScript patterns. It's used in every React state update (setState({ ...prev, key: val })), Redux reducer, functional update pattern, and general object composition.
Real applications: React setState with partial updates, Redux immutable state reducers, merging user preferences with defaults, cloning query parameter objects before modification, and composing configuration objects from multiple sources.
Common mistakes: Thinking spread is a deep copy (it's shallow — nested objects share the same reference), not knowing that later properties override earlier ones (order matters in spread merging), and using spread to merge arrays when concat() or another spread into array literal is clearer.
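Because spread is shallow, an immutable update to a nested field must spread at every level along the path — the standard React/Redux pattern:

```javascript
const state = { user: { name: "Alice", prefs: { theme: "light" } }, count: 0 };

// Spread each level from the root down to the field being changed
const next = {
  ...state,
  user: {
    ...state.user,
    prefs: { ...state.user.prefs, theme: "dark" }
  }
};

console.log(next.user.prefs.theme);  // "dark"
console.log(state.user.prefs.theme); // "light" — original untouched
```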
const key = "name";
const obj = { [key]: "Alice" };
console.log(obj.name); // "Alice"
// Dynamic keys
const prefix = "user";
const data = {
  [prefix + "Name"]: "Bob",
  [prefix + "Age"]: 25
};
console.log(data.userName); // "Bob"
// With Symbol
const id = Symbol("id");
const item = { [id]: 123 };
console.log(item[id]); // 123
// In methods
const action = "get";
const api = { [action + "User"]() { return "user data"; } };
api.getUser(); // "user data"
Computed property names are useful for dynamic object construction, creating objects from variable keys, and defining methods with dynamic names. They are commonly used with Symbols to create unique non-string property keys.
Why it matters: Computed property names enable dynamic key selection in a single expression, eliminating the need to create an object and then set properties separately. This is critical for clean, functional-style object transformations.
Real applications: Building Redux action creators ({ [actionType]: handler }), creating objects from form field names ({ [field.name]: field.value }), implementing lookup tables with dynamic keys, and using Symbol-keyed properties for well-known behaviors.
Common mistakes: Not knowing you can use any expression (not just strings/variables) as a computed key, forgetting to wrap the Symbol in brackets ([Symbol.iterator]), and using template literals in computed keys unnecessarily when a simple variable would suffice.
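The form-field use case mentioned above can be sketched like this (the field object's shape here is hypothetical, standing in for a DOM input or form-library event):

```javascript
// Merge one field's value into state under a dynamic key
function setField(state, field) {
  return { ...state, [field.name]: field.value };
}

let form = {};
form = setField(form, { name: "email", value: "a@b.com" });
form = setField(form, { name: "age", value: 30 });
console.log(form); // { email: "a@b.com", age: 30 }
```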
// Same as ===
Object.is(1, 1); // true
Object.is("a", "a"); // true
Object.is(null, null); // true
// Different from ===
Object.is(NaN, NaN); // true (=== gives false)
Object.is(+0, -0); // false (=== gives true)
NaN === NaN; // false
+0 === -0; // true
// Use case: reliable equality check
function sameValue(a, b) {
  return Object.is(a, b);
}
Object.is() is used by React's useState and useMemo hooks to determine if state has changed. Understanding its behavior with NaN and signed zeros helps explain why certain React re-renders do or do not occur.
Why it matters: The difference between === and Object.is() for NaN and signed zeros is subtle but important. React's reconciliation algorithm uses Object.is() for shallow comparison, which means NaN state values don't trigger re-renders when "unchanged".
Real applications: Custom equality checks in memoization functions, implementing Object.is()-based comparison in shallow equality helpers, understanding why React doesn't re-render when state stays the same NaN value, and building reliable equality utilities for testing.
Common mistakes: Using === where Object.is() semantics are needed (specifically for NaN checks), not knowing that NaN !== NaN but Object.is(NaN, NaN) is true, and not using Number.isNaN() (more reliable than the global isNaN() which coerces values).
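A shallow-equality helper built on Object.is(), similar in spirit to what React uses internally — a minimal sketch, not React's actual source:

```javascript
function shallowEqual(a, b) {
  if (Object.is(a, b)) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  // Object.is handles NaN correctly, unlike ===
  return keysA.every(k => Object.hasOwn(b, k) && Object.is(a[k], b[k]));
}

console.log(shallowEqual({ x: NaN }, { x: NaN }));  // true (=== would disagree)
console.log(shallowEqual({ x: 1 }, { x: 1, y: 2 })); // false
```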
// From entries array
const entries = [["name", "Alice"], ["age", 30]];
const obj = Object.fromEntries(entries);
console.log(obj); // { name: "Alice", age: 30 }
// From Map
const map = new Map([["x", 1], ["y", 2]]);
const fromMap = Object.fromEntries(map);
console.log(fromMap); // { x: 1, y: 2 }
// Transform object values
const prices = { apple: 1.5, banana: 0.75 };
const doubled = Object.fromEntries(
  Object.entries(prices).map(([k, v]) => [k, v * 2])
);
console.log(doubled); // { apple: 3, banana: 1.5 }
The entries-transform-fromEntries pattern is a powerful way to implement map, filter, and other transformations on objects. You can filter properties, rename keys, or transform values using this pipeline approach.
Why it matters: Object.fromEntries() completes the symmetry with Object.entries(), enabling a full functional transformation pipeline on objects. It's also the standard way to convert a Map back to a plain object and to process URL search parameters.
Real applications: Converting Map results to plain objects for serialization, transforming URLSearchParams to a plain object, filtering object properties by a condition, normalizing API response keys (snake_case to camelCase), and rebuilding objects after transformation pipelines.
Common mistakes: Not knowing that Object.fromEntries() is the reverse of Object.entries() and using manual reduce for the same purpose (verbose), forgetting that duplicate keys result in the last value winning, and not knowing it also accepts any iterable of [key, value] pairs (not just Map).
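The URLSearchParams use case mentioned above works because URLSearchParams is itself an iterable of [key, value] pairs (assuming a runtime with the URLSearchParams global, e.g. browsers or Node 18+):

```javascript
const params = new URLSearchParams("page=2&sort=desc");

// Any iterable of [key, value] pairs works, not just arrays or Maps
const query = Object.fromEntries(params);
console.log(query); // { page: "2", sort: "desc" } — note: values are strings
```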
const obj = { name: "Alice" };
console.log(Object.getOwnPropertyDescriptor(obj, "name"));
// { value: "Alice", writable: true, enumerable: true, configurable: true }
// Properties created with defineProperty default to false
Object.defineProperty(obj, "id", { value: 1 });
console.log(Object.getOwnPropertyDescriptor(obj, "id"));
// { value: 1, writable: false, enumerable: false, configurable: false }
// Get all descriptors
console.log(Object.getOwnPropertyDescriptors(obj));
// { name: { ... }, id: { ... } }
Object.getOwnPropertyDescriptors() is useful for creating exact copies of objects including getters and setters, which Object.assign() and spread cannot preserve. Understanding descriptors is essential for working with Object.defineProperty(), Object.freeze(), and other property configuration methods.
Why it matters: Property descriptors are the low-level metadata that defines how a property behaves. Without understanding them, you can't understand why some properties can't be deleted, why some don't appear in loops, or how Vue 2's reactivity intercepted property assignments.
Real applications: Creating deep-defined property copies with Object.create(proto, Object.getOwnPropertyDescriptors(obj)), building library APIs with non-enumerable internal properties, understanding why Object.assign() doesn't copy getters/setters, and auditing an object's property accessibility.
Common mistakes: Expecting Object.assign() to also copy getter/setter definitions (it invokes the getter and copies the resulting value), not knowing that enumerable: false hides a property from Object.keys() but not from Object.getOwnPropertyNames(), and using delete on a non-configurable property (TypeError in strict mode; a silent failure otherwise).
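The descriptor-preserving copy mentioned above looks like this — unlike Object.assign(), it keeps getters live instead of snapshotting their values:

```javascript
const source = {
  first: "Ada",
  last: "Lovelace",
  get full() { return this.first + " " + this.last; }
};

// Copies the accessor descriptor itself, not the getter's current value
const clone = Object.defineProperties(
  {},
  Object.getOwnPropertyDescriptors(source)
);

clone.first = "Grace";
console.log(clone.full); // "Grace Lovelace" — the getter is still live
```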
const original = { a: 1, nested: { b: 2 } };
// Shallow copy — nested is shared
const shallow = { ...original };
shallow.nested.b = 99;
console.log(original.nested.b); // 99 (affected!)
// Deep copy with structuredClone (modern)
const deep = structuredClone(original);
deep.nested.b = 42;
console.log(original.nested.b); // 99 (unaffected)
// Deep copy with JSON (limited — no functions, Date, etc.)
const jsonCopy = JSON.parse(JSON.stringify(original));
// structuredClone handles: Date, Map, Set, ArrayBuffer, etc.
// but NOT: functions, DOM nodes, or Symbol properties
structuredClone() is the recommended way to deep copy in modern JavaScript. The JSON method loses functions, undefined values, Dates (become strings), RegExp, and Map/Set. For simple objects without special types, JSON works fine.
Why it matters: Shallow vs deep copy determines whether two code paths share the same data or have independent copies. Choosing wrong causes either unnecessary memory use (deep copy everything) or accidental mutation bugs (shallow copy nested objects).
Real applications: Redux reducers returning new state objects (shallow copy is usually sufficient for one level), form state initialization (deep copy config defaults), API response caching (shallow copy is fine if you don't mutate), and undo/redo systems that snapshot deep state trees.
Common mistakes: Using JSON.parse(JSON.stringify()) for deep copy (loses Date precision, functions, undefined), spreading an object and thinking nested objects are independent copies (they're not), and deep copying everything defensively when shallow copies with immutable update patterns would be correct and faster.
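The types that structuredClone() preserves and JSON loses can be checked directly (assuming a runtime with structuredClone, e.g. browsers or Node 17+):

```javascript
const snapshot = {
  created: new Date("2024-01-01T00:00:00.000Z"),
  tags: new Set(["a", "b"]),
  nested: { n: 1 }
};

const copy = structuredClone(snapshot);
console.log(copy.created instanceof Date);      // true (JSON would give a string)
console.log(copy.tags.has("a"));                // true (JSON would give {})
console.log(copy.nested !== snapshot.nested);   // true — fully independent copy
```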
// Create with prototype
const animal = {
  speak() { return this.name + " speaks"; }
};
const dog = Object.create(animal);
dog.name = "Rex";
console.log(dog.speak()); // "Rex speaks"
console.log(Object.getPrototypeOf(dog) === animal); // true
// With property descriptors
const cat = Object.create(animal, {
  name: { value: "Whiskers", writable: true, enumerable: true }
});
// Null prototype — no inherited methods
const dict = Object.create(null);
dict.key = "value";
console.log(dict.toString); // undefined (no prototype!)
// Useful for safe dictionaries (no prototype pollution)
const safeMap = Object.create(null);
safeMap["__proto__"] = "safe"; // just a normal property
Object.create(null) is used to create prototype-pollution-safe dictionaries. Regular objects inherit methods like toString and constructor from Object.prototype, which can cause unexpected behavior when used as look-up tables.
Why it matters: Object.create() is the explicit, prototype-aware way to create objects. It's the mechanism behind prototypal inheritance and enables creating objects with custom prototypes without using class syntax. Understanding it reveals how JavaScript's object system works at its core.
Real applications: Creating prototype-pollution-safe lookup tables with Object.create(null), implementing classical-style inheritance before ES6 classes, building mixin patterns, and understanding how Object.setPrototypeOf and the prototype chain work.
Common mistakes: Using Object.create(null) objects with methods like .toString() or hasOwnProperty() that they don't inherit (TypeError), not knowing Object.create() takes a second argument (property descriptors), and thinking Object.create(SomeClass.prototype) is equivalent to new SomeClass() (it doesn't run the constructor).
const user = { name: "Alice", age: 30 };
// Object.hasOwn() — recommended (ES2022)
console.log(Object.hasOwn(user, "name")); // true
console.log(Object.hasOwn(user, "toString")); // false (inherited)
// Old way — hasOwnProperty
console.log(user.hasOwnProperty("name")); // true
// Problem 1: fails on null-prototype objects
const dict = Object.create(null);
dict.key = "value";
// dict.hasOwnProperty("key"); // TypeError!
console.log(Object.hasOwn(dict, "key")); // true (works!)
// Problem 2: can be shadowed
const dangerous = { hasOwnProperty: () => false };
console.log(dangerous.hasOwnProperty("hasOwnProperty")); // false (wrong!)
console.log(Object.hasOwn(dangerous, "hasOwnProperty")); // true (correct)
Always prefer Object.hasOwn() over hasOwnProperty() in modern code. If you need to support older browsers, use Object.prototype.hasOwnProperty.call(obj, prop) as a safe alternative — this cannot be shadowed.
Why it matters: hasOwnProperty() is a prototype method that can be overridden or missing (on null-prototype objects). Object.hasOwn() was added specifically to fix this footgun. It's also much shorter. Interviews expect you to know the safe pattern.
Real applications: Checking own properties in for...in loops, validating whether a config object has a specific key set by the caller, distinguishing own properties from inherited ones when processing plain data objects, and security-critical code that uses null-prototype dicts where hasOwnProperty doesn't exist.
Common mistakes: Calling obj.hasOwnProperty(key) on an Object.create(null) object (TypeError — method doesn't exist), not using Object.hasOwn() in modern code (still using the verbose Object.prototype call), and confusing key in obj (includes inherited) with Object.hasOwn(obj, key) (own only).
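The safe fallback for older runtimes mentioned above can be wrapped in one helper — a sketch of a minimal Object.hasOwn polyfill-style shim:

```javascript
// Prefer the built-in; fall back to the un-shadowable prototype call
const hasOwn = Object.hasOwn ??
  ((obj, key) => Object.prototype.hasOwnProperty.call(obj, key));

const dict = Object.create(null); // no hasOwnProperty method at all
dict.key = "value";
console.log(hasOwn(dict, "key"));     // true
console.log(hasOwn(dict, "missing")); // false
```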
const person = {
  firstName: "John",
  lastName: "Doe",
  // Getter — computed property
  get fullName() {
    return this.firstName + " " + this.lastName;
  },
  // Setter — with validation
  set fullName(value) {
    const parts = value.split(" ");
    if (parts.length !== 2) throw new Error("Need first and last name");
    this.firstName = parts[0];
    this.lastName = parts[1];
  }
};
console.log(person.fullName); // "John Doe" (calls getter)
person.fullName = "Jane Smith"; // calls setter
console.log(person.firstName); // "Jane"
// In classes
class Temperature {
  #celsius = 0;
  get fahrenheit() {
    return this.#celsius * 9 / 5 + 32;
  }
  set fahrenheit(f) {
    this.#celsius = (f - 32) * 5 / 9;
  }
}
Getters and setters provide encapsulation — they allow you to change internal implementation without changing the public API. They are also useful for lazy computation, validation, logging, and derived values.
Why it matters: Getters/setters are the standard way to add computed or validated properties to objects without changing consumer API. They're used in virtually every framework, library, and ORM for transparent property access with side effects.
Real applications: Vue 2 reactive properties (defineProperty setters notify watchers), computed property patterns (circumference derived from radius), validation on write (reject invalid temperature values), lazy initialization (compute expensive value only when accessed), and debugging (log all reads/writes).
Common mistakes: Creating infinite recursion by reading this.prop inside the getter for prop (use a backing variable like _prop), not knowing getters/setters are automatically inherited via prototypes, and confusing object literal getter/setter syntax with Object.defineProperty() accessor descriptors (same semantics, different syntax).
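The backing-variable pattern that avoids the infinite-recursion mistake mentioned above looks like this:

```javascript
// Wrong: `get name() { return this.name; }` would call itself forever.
// Store the real value under a differently named backing field instead.
const account = {
  _name: "Alice",
  get name() { return this._name; },
  set name(v) {
    if (!v) throw new Error("name required"); // validation on write
    this._name = v;
  }
};

account.name = "Bob";
console.log(account.name); // "Bob"
```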
const parent = { inherited: true };
const obj = Object.create(parent);
obj.name = "Alice";
Object.defineProperty(obj, "hidden", {
value: 42,
enumerable: false
});
obj[Symbol("id")] = 1;
// for...in — own + inherited enumerable
for (const key in obj) console.log(key);
// "name", "inherited"
// Object.keys() — own enumerable only
console.log(Object.keys(obj)); // ["name"]
// Object.getOwnPropertyNames() — own string keys (incl. non-enum)
console.log(Object.getOwnPropertyNames(obj)); // ["name", "hidden"]
// Reflect.ownKeys() — all own keys including Symbols
console.log(Reflect.ownKeys(obj));
// ["name", "hidden", Symbol(id)]
// Object.entries() for key-value iteration
for (const [key, val] of Object.entries(obj)) {
  console.log(key + ": " + val); // "name: Alice"
}
Use Object.keys() or Object.entries() for most cases. Use for...in with hasOwnProperty check only when you explicitly need inherited properties. Use Reflect.ownKeys() when you need to access Symbol properties.
Why it matters: Each enumeration method has different scope (own vs inherited, enumerable vs all, string vs symbol keys). Choosing wrong causes either missed properties or processing unexpected inherited ones.
Real applications: Object.entries() for Redux-style state transformations, for...in with hasOwnProperty check in legacy code that extends prototype, Object.getOwnPropertyNames() for debugging and introspection, and Reflect.ownKeys() in serialization utilities that must handle Symbol-keyed properties.
Common mistakes: Using for...in without a hasOwnProperty guard and accidentally iterating prototype-added properties, assuming Object.keys() uses insertion order alone (integer-like keys come first in ascending numeric order, then string keys in insertion order), and not knowing that Symbol keys are invisible to all of Object.keys/values/entries.
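The guarded for...in mentioned above, restricted to own properties:

```javascript
const protoObj = { inherited: true };
const child = Object.create(protoObj);
child.own = 1;

const ownKeys = [];
for (const key in child) {
  // The guard skips "inherited", which for...in would otherwise visit
  if (Object.hasOwn(child, key)) ownKeys.push(key);
}
console.log(ownKeys); // ["own"]
```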
const products = [
{ name: "Apple", category: "fruit", price: 1.5 },
{ name: "Banana", category: "fruit", price: 0.75 },
{ name: "Carrot", category: "veggie", price: 1.0 },
{ name: "Broccoli", category: "veggie", price: 2.0 }
];
// Group by category
const grouped = Object.groupBy(products, p => p.category);
console.log(grouped.fruit);
// [{ name: "Apple", ... }, { name: "Banana", ... }]
console.log(grouped.veggie);
// [{ name: "Carrot", ... }, { name: "Broccoli", ... }]
// Group by computed value
const byPrice = Object.groupBy(products, p =>
  p.price > 1 ? "expensive" : "cheap"
);
// Old way with reduce
const oldGrouped = products.reduce((acc, p) => {
  (acc[p.category] ??= []).push(p);
  return acc;
}, {});
Object.groupBy() returns a null-prototype object (no inherited properties), making it safe to use as a dictionary. Use Map.groupBy() when you need non-string keys. Both methods are significantly more readable than the reduce-based pattern.
Why it matters: Object.groupBy() is the native solution to one of the most common reduce patterns. Before it was added, developers wrote manual reduce implementations or used lodash groupBy. The null-prototype return also makes the result dict-safe.
Real applications: Grouping API results by category, organizing transactions by date or type, partitioning UI components by feature or state, reporting dashboards grouping data by dimension, and replacing manual reduce-based grouping logic.
Common mistakes: Using Object.groupBy() in environments where it's not yet supported (it's newer than most other Object methods), forgetting it returns a null-prototype object (no .toString() etc. if you try), and using Map.groupBy() when you only need string keys (Object.groupBy() is simpler and serializable).
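For the runtimes without Object.groupBy() mentioned above (it is ES2024, newer than the other methods here), a portable stand-in is easy to sketch:

```javascript
// Minimal groupBy fallback for pre-ES2024 runtimes; like the built-in,
// it returns a null-prototype object so keys can't collide with
// inherited properties
function groupBy(items, keyFn) {
  const out = Object.create(null);
  for (const item of items) {
    (out[keyFn(item)] ??= []).push(item);
  }
  return out;
}

const grouped = groupBy([1, 2, 3, 4], n => (n % 2 === 0 ? "even" : "odd"));
console.log(grouped.odd);  // [1, 3]
console.log(grouped.even); // [2, 4]
```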
// Special value handling
JSON.stringify(undefined); // undefined (not valid JSON)
JSON.stringify(null); // "null"
JSON.stringify(NaN); // "null"
JSON.stringify(Infinity); // "null"
// Objects — functions and undefined are omitted
JSON.stringify({
  name: "Alice",
  fn: function() {},
  undef: undefined,
  sym: Symbol()
});
// '{"name":"Alice"}' — fn, undef, sym are dropped
// Arrays — become null
JSON.stringify([1, undefined, NaN, function() {}]);
// '[1,null,null,null]'
// Replacer function
JSON.stringify({ a: 1, b: 2, c: 3 }, (key, val) => {
  if (key === "") return val;        // the replacer sees the root object first — pass it through
  return val > 1 ? val : undefined;  // omit values <= 1
}); // '{"b":2,"c":3}'
// Pretty print with spaces
JSON.stringify({ a: 1, b: { c: 2 } }, null, 2);
// Custom toJSON method
const obj = {
  data: "secret",
  toJSON() { return { redacted: true }; }
};
JSON.stringify(obj); // '{"redacted":true}'
Define a toJSON() method on your objects to customize serialization. Be aware that JSON.stringify() throws a TypeError on circular references; use a replacer function or a dedicated library to handle circular structures.
Why it matters: JSON.stringify() is used everywhere for API calls, localStorage, debugging, and data transfer. Knowing its quirks (what it drops, what it transforms) prevents silent data loss. Knowing the replacer/space arguments makes it a much more powerful tool.
Real applications: Serializing state to localStorage, building HTTP request bodies, deep-copying simple objects (poor man's clone), pretty-printing with the space argument, filtering sensitive fields with a replacer, and defining toJSON() on Date/custom objects to control their serialized form.
Common mistakes: Expecting undefined values to survive JSON.stringify() (they're dropped), converting Date objects to JSON and not realizing they become strings (won't revive automatically), not using the reviver parameter of JSON.parse() to restore Dates, and hitting circular reference errors with complex objects.
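The reviver-based Date restoration mentioned above can be sketched like this. Note that detecting ISO date strings by pattern is a heuristic; JSON itself has no Date type:

```javascript
const json = JSON.stringify({ when: new Date("2024-05-01T00:00:00.000Z") });
// Dates serialize as ISO strings: '{"when":"2024-05-01T00:00:00.000Z"}'

const revived = JSON.parse(json, (key, val) =>
  // Heuristic: treat ISO-8601-looking strings as Dates
  typeof val === "string" && /^\d{4}-\d{2}-\d{2}T/.test(val)
    ? new Date(val)
    : val
);
console.log(revived.when instanceof Date); // true
```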
const obj = {};
Object.defineProperty(obj, "hidden", {
  value: 1, enumerable: false
});
obj.visible = 2;
obj[Symbol("sym")] = 3;
obj[1] = "one";
obj[0] = "zero";
// Object.keys() — own enumerable strings only
console.log(Object.keys(obj));
// ["0", "1", "visible"]
// Reflect.ownKeys() — everything
console.log(Reflect.ownKeys(obj));
// ["0", "1", "hidden", "visible", Symbol(sym)]
// Other methods for comparison:
// Object.getOwnPropertyNames() — all own strings (incl non-enum)
console.log(Object.getOwnPropertyNames(obj));
// ["0", "1", "hidden", "visible"]
// Object.getOwnPropertySymbols() — only Symbols
console.log(Object.getOwnPropertySymbols(obj));
// [Symbol(sym)]
Use Object.keys() for normal iteration, Reflect.ownKeys() for complete property inspection (useful in debugging and metaprogramming), and Object.getOwnPropertySymbols() when specifically looking for Symbol-keyed properties.
Why it matters: Knowing the full enumeration toolkit lets you choose the right tool for the right scope. The difference between enumerable and non-enumerable, and string vs symbol keys, determines what each method reveals about an object's structure.
Real applications: Reflect.ownKeys() in serialization and debugging utilities, Object.getOwnPropertySymbols() to audit frameworks that use Symbols as private channels, Object.getOwnPropertyNames() for accessing non-enumerable metadata properties, and building object cloning utilities that must preserve all property kinds.
Common mistakes: Thinking Object.keys() returns ALL properties (it filters out non-enumerables and Symbols), not knowing Reflect.ownKeys() returns both string and Symbol keys, and using for...in where Object.keys() is correct (for...in walks the prototype chain).
const obj = { x: 1, y: 2 };
Object.preventExtensions(obj);
// Can modify existing
obj.x = 99;
console.log(obj.x); // 99
// Can delete existing
delete obj.y;
console.log(obj.y); // undefined
// Cannot add new
obj.z = 3; // silently fails (throws in strict mode)
console.log(obj.z); // undefined
// Check status
console.log(Object.isExtensible(obj)); // false
// Comparison table:
// Method | Add | Delete | Modify
// preventExtensions | No | Yes | Yes
// seal | No | No | Yes
// freeze | No | No | No
These three methods form an immutability hierarchy. A frozen object is also sealed, and a sealed object is also non-extensible. Use preventExtensions() when you want to lock down the structure but still allow value changes.
Why it matters: These methods provide different levels of object lockdown without using classes or Proxy. Understanding the hierarchy clarifies what exactly changes at each level and what operations will silently fail or throw in strict mode.
Real applications: Object.freeze() for immutable configuration objects like the app config or feature flags, preventing accidental property addition on API response objects, sealing state objects in Vuex/Redux to catch incorrect mutation patterns, and locking down class instances after construction.
Common mistakes: Assuming Object.freeze() deep-freezes nested objects (it's shallow — nested objects can still be mutated), expecting mutation to throw in sloppy mode (it fails silently), not knowing the operation still "succeeds" without error unless in strict mode, and forgetting that frozen arrays can't have elements added or removed.
const name = "Alice";
const age = 30;
// Property shorthand (ES6)
const user = { name, age };
// Equivalent to: { name: name, age: age }
// Method shorthand (ES6)
const calc = {
  add(a, b) { return a + b; },
  subtract(a, b) { return a - b; }
};
// Equivalent to: { add: function(a, b) { ... } }
// Combined in practice
function createUser(name, role) {
  return {
    name,
    role,
    greet() {
      return "Hi, I am " + this.name;
    },
    get upperName() {
      return this.name.toUpperCase();
    }
  };
}
// Destructuring with shorthand
function processUser({ name, age, role = "user" }) {
  return { name, age, role };
}
Shorthand syntax is widely used in modern JavaScript and frameworks. It reduces boilerplate while maintaining readability. Method shorthand functions have access to super, unlike regular function properties, making them suitable for object inheritance patterns.
Why it matters: Object shorthand is now standard JS style and appears everywhere in React components, Express route handlers, config objects, etc. Understanding shorthand property names requires understanding that variable name becomes the key — reducing copy-paste errors.
Real applications: Building React component props objects ({ value, onChange }), constructing API request payloads from local variables, grouping related functions into a module-like object, and building objects with dynamic constructor arguments that match property names.
Common mistakes: Not knowing method shorthand functions can use super but regular function properties can't, using arrow functions as shorthand methods and losing the dynamic this binding, and overusing shorthand to the point of hindering readability when the key should differ from the variable name.
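The arrow-function pitfall mentioned above, side by side with method shorthand:

```javascript
const counter = {
  count: 0,
  increment() { this.count++; },   // method shorthand: `this` is counter
  brokenIncrement: () => {
    // Arrow functions capture `this` from the enclosing scope,
    // NOT from counter — mutating this.count here would not work
  }
};

counter.increment();
console.log(counter.count); // 1
```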