JavaScript

Proxy & Reflect

14 Questions

A Proxy wraps another object (called the target) and intercepts operations on it — like reading properties, writing values, calling functions, and more. You define what happens during these operations using handler traps. A Proxy is created with new Proxy(target, handler). The handler is an object whose methods (called traps) override default behavior. If a trap is not defined, the operation passes through to the target normally. Proxies are used for validation, logging, default values, reactive data systems (like Vue 3's reactivity), and access control. Here is a simple Proxy example:
const person = { name: 'Alice', age: 30 };

const proxy = new Proxy(person, {
  get(target, key) {
    console.log(`Getting ${key}`);
    return target[key];
  },
  set(target, key, value) {
    if (key === 'age' && value < 0) {
      throw new Error('Age cannot be negative');
    }
    target[key] = value;
    return true; // required: indicate success
  }
});

console.log(proxy.name); // "Getting name"  "Alice"
proxy.age = 25;          // works fine
proxy.age = -1;          // Error: Age cannot be negative
The set trap must return true to indicate the assignment succeeded. Returning false (or nothing) in strict mode throws a TypeError. Proxy is one of the most powerful meta-programming features in JavaScript — it lets you define custom behavior for fundamental operations.
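The strict-mode failure described above can be sketched in a few lines — a set trap that returns false makes every assignment throw:

```javascript
'use strict';

// A set trap that rejects writes by returning false.
// In strict mode, a falsy return from the trap turns the
// assignment into a TypeError instead of a silent no-op.
const locked = new Proxy({}, {
  set() { return false; }
});

try {
  locked.x = 1; // TypeError: 'set' on proxy: trap returned falsish
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

In sloppy (non-strict) mode the same assignment fails silently, which is why forgetting the return value is such an easy bug to miss.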

Why it matters: Proxy is the foundation of modern reactive frameworks (Vue 3, MobX) and validation libraries. Understanding it separates developers who use frameworks from those who understand how those frameworks work internally.

Real applications: Vue 3 reactivity system, form validation wrappers, access-control layers, API response mocking in tests, default-value patterns for config objects, and property change observation.

Common mistakes: Forgetting to return true from the set trap (causes silent failure or TypeError in strict mode), not using Reflect for the default behavior inside traps, and proxying built-in objects (Map, Set, Date) without binding methods to the original target.

Proxy traps are the methods in the handler object that intercept operations. Each trap corresponds to a specific JavaScript operation. If a trap is not defined, the operation falls through to the target normally. The most commonly used traps are get, set, has, deleteProperty, and apply. There are 13 traps in total. Each trap receives the target as its first argument, which is the original object being proxied. Here are the key traps:
const handler = {
  get(target, key) {},          // intercepts property read
  set(target, key, value) {},   // intercepts property write
  has(target, key) {},          // intercepts "in" operator
  deleteProperty(target, key) {},     // intercepts delete
  apply(target, thisArg, args) {},    // intercepts function calls
  construct(target, args) {},         // intercepts the new keyword
  ownKeys(target) {},                 // intercepts Object.keys(), for...in
  getPrototypeOf(target) {},    // intercepts Object.getPrototypeOf()
  defineProperty(target, key, desc) {},
  getOwnPropertyDescriptor(target, key) {},
};

// has trap example
const allowed = new Proxy({a:1, b:2}, {
  has(target, key) {
    return key in target && key !== 'b'; // hide 'b'
  }
});
console.log('a' in allowed); // true
console.log('b' in allowed); // false (hidden)
Not defining a trap means the default behavior happens. You only need to define traps for the operations you want to intercept. The apply and construct traps only work when the proxy target is a function.

Why it matters: Knowing which trap handles which operation is essential for targeted meta-programming. Using the wrong trap (or missing an expected trap) means operations silently fall through to the target without interception.

Real applications: The has trap is used to hide internal properties from the in operator, the ownKeys trap is used to expose only a subset of properties to Object.keys() and for...in, and the deleteProperty trap prevents accidental deletion of critical keys.

Common mistakes: Thinking Proxy intercepts operations on nested objects automatically (it doesn't — you need a recursive proxy), forgetting that apply/construct traps only work on function targets, and not calling Reflect in traps when default behavior is also needed.
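The recursive proxy mentioned above can be sketched briefly — deepLog is a hypothetical helper that wraps nested objects lazily inside the get trap, so interception follows the whole property path:

```javascript
// Deep (recursive) proxy sketch: nested objects are wrapped
// on access, so reads at any depth are intercepted.
function deepLog(target, path = '') {
  return new Proxy(target, {
    get(obj, key, receiver) {
      const val = Reflect.get(obj, key, receiver);
      const fullPath = path ? `${path}.${String(key)}` : String(key);
      if (typeof val === 'object' && val !== null) {
        return deepLog(val, fullPath); // re-wrap nested objects
      }
      console.log(`Read: ${fullPath}`);
      return val;
    }
  });
}

const state = deepLog({ user: { name: 'Alice' } });
state.user.name; // "Read: user.name" — the nested read is intercepted too
```

Wrapping lazily on access (rather than walking the whole object upfront) is the same strategy Vue 3 uses for deep reactivity.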

Reflect is a built-in object that provides methods corresponding to JavaScript's fundamental operations — the same operations that Proxy traps intercept. It was designed to work hand-in-hand with Proxy. Reflect methods always return predictable values (booleans, values) instead of throwing errors, making them easier to use than the original operators. Inside Proxy traps, it is best practice to call the corresponding Reflect method to perform the default behavior. This ensures you do not accidentally break invariants. Here is how Reflect is used with Proxy:
const proxy = new Proxy({}, {
  get(target, key, receiver) {
    console.log(`Reading ${key}`);
    return Reflect.get(target, key, receiver); // default behavior
  },
  set(target, key, value, receiver) {
    console.log(`Setting ${key} = ${value}`);
    return Reflect.set(target, key, value, receiver);
  }
});

proxy.x = 10; // "Setting x = 10"
proxy.x;      // "Reading x"

// Reflect methods mirror Proxy traps 1:1
Reflect.get(obj, 'key');          // like obj.key
Reflect.set(obj, 'key', val);     // like obj.key = val
Reflect.has(obj, 'key');          // like 'key' in obj
Reflect.deleteProperty(obj, 'k'); // like delete obj.k
Using Reflect inside traps is important when dealing with inheritance — it correctly handles the receiver argument which tracks the original proxy object. Reflect also makes it easy to check if an operation succeeded without using try/catch: Reflect.set() returns true on success.

Why it matters: Always using Reflect.* inside Proxy traps is best practice because it preserves correctness with inheritance, prototype chains, and the receiver argument. Not using it can cause subtle bugs with getters and setters defined in superclasses.

Real applications: Every production Proxy implementation should use Reflect in traps. Libraries like Vue 3, Immer, and MobX use Reflect internally for correct prototype-aware operation delegation.

Common mistakes: Directly accessing target[key] instead of Reflect.get(target, key, receiver) inside the get trap (breaks getters), forgetting the receiver argument, and not knowing that Reflect methods return booleans instead of throwing — enabling conditional logic without try/catch.
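The "breaks getters" mistake above is easiest to see side by side. This sketch puts a getter on the target and inherits from two proxies — one forwarding the receiver, one not:

```javascript
const base = {
  first: 'base',
  get label() { return `label:${this.first}`; }
};

// Correct: forward the receiver so getters see the caller.
const good = new Proxy(base, {
  get(target, key, receiver) { return Reflect.get(target, key, receiver); }
});
// Buggy: target[key] locks `this` inside getters to the target.
const bad = new Proxy(base, {
  get(target, key) { return target[key]; }
});

// Objects inheriting from each proxy, overriding `first`:
const childGood = Object.create(good, { first: { value: 'child' } });
const childBad  = Object.create(bad,  { first: { value: 'child' } });

console.log(childGood.label); // "label:child" — getter sees the receiver
console.log(childBad.label);  // "label:base"  — getter stuck on the target
```

The difference only surfaces with inheritance, which is exactly why this bug tends to stay hidden until a subclass appears.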

The set trap is perfect for input validation. You intercept every property assignment, validate the value, and either allow it or throw an error. This keeps validation logic in one place instead of scattered everywhere. The Proxy acts as a transparent wrapper — code using the object doesn't change, but all assignments are validated automatically. This pattern is used in form validation libraries, data models, and typed object systems. Here is a validation proxy:
function createValidator(target, validators) {
  return new Proxy(target, {
    set(obj, key, value) {
      if (validators[key]) {
        const error = validators[key](value);
        if (error) throw new TypeError(`${key}: ${error}`);
      }
      obj[key] = value;
      return true;
    }
  });
}

const user = createValidator({}, {
  name: v => typeof v !== 'string' ? 'must be a string' : null,
  age: v => (typeof v !== 'number' || v < 0 || v > 150)
    ? 'must be a number 0-150' : null
});

user.name = 'Alice'; // OK
user.age = 25;       // OK
user.age = -5;       // TypeError: age: must be a number 0-150
user.name = 42;      // TypeError: name: must be a string
You can also use the get trap to return default values when properties are accessed that don't exist — useful for config objects. Validation proxies are especially useful when building APIs or libraries where you want to give clear error messages for wrong usage.

Why it matters: Centralizing validation in a Proxy eliminates scattered guard clauses and ensures that invalid data never enters your data model. This is the Open/Closed Principle applied at the data layer — add new validators without touching the consuming code.

Real applications: Form data models that enforce types, REST API request objects that validate before sending, Redux-like state stores that enforce immutability, and schema-based model validation in ORM-like libraries.

Common mistakes: Not returning true from the set trap after validation passes, throwing generic errors instead of descriptive ones (lose the benefit of centralized validation), and not handling nested objects (validation proxy is shallow by default).

Vue 3's reactivity system uses Proxy to detect when reactive data is read or written. When you read a property, Vue tracks which component depends on it (dependency tracking). When you write to a property, Vue triggers updates for all dependents. This is why you don't need to call special methods to update state in Vue 3 — the Proxy intercepts plain JavaScript assignments and automatically notifies the system. Vue 2 used Object.defineProperty, which had limitations (it couldn't detect new properties or array index mutations). Proxy solves both problems. Here is a simplified version of Vue's reactivity:
function reactive(obj) {
  const subscribers = new Map();

  return new Proxy(obj, {
    get(target, key) {
      // Track: "component X reads key"
      if (!subscribers.has(key)) subscribers.set(key, new Set());
      // (In real Vue, the current effect is tracked here)
      return target[key];
    },
    set(target, key, value) {
      target[key] = value;
      // Trigger: notify all subscribers of key
      if (subscribers.has(key)) {
        subscribers.get(key).forEach(fn => fn());
      }
      return true;
    }
  });
}

const state = reactive({ count: 0 });
state.count; // reading triggers dependency tracking
state.count = 1; // writing triggers updates
Unlike Vue 2, Proxy-based reactivity works on any property access, including new properties added after the object is created and array index assignments. The Reflect object is used inside Vue's traps to ensure correct behavior with inheritance and prototype chains.
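The "(In real Vue, the current effect is tracked here)" comment can be filled in with a minimal sketch. The names watchEffect and activeEffect echo Vue's API, but this is a toy illustration of the track/trigger idea, not Vue's actual implementation:

```javascript
// Toy effect tracking: reads inside an effect register it as a
// subscriber; writes re-run all subscribers of that key.
let activeEffect = null;

function watchEffect(fn) {
  activeEffect = fn;
  fn();               // first run registers dependencies via get traps
  activeEffect = null;
}

function reactive(obj) {
  const subscribers = new Map(); // key -> Set of effect functions
  return new Proxy(obj, {
    get(target, key, receiver) {
      if (activeEffect) {
        if (!subscribers.has(key)) subscribers.set(key, new Set());
        subscribers.get(key).add(activeEffect);
      }
      return Reflect.get(target, key, receiver);
    },
    set(target, key, value, receiver) {
      const ok = Reflect.set(target, key, value, receiver);
      subscribers.get(key)?.forEach(fn => fn()); // trigger
      return ok;
    }
  });
}

const state = reactive({ count: 0 });
let doubled = 0;
watchEffect(() => { doubled = state.count * 2; });
state.count = 5;
console.log(doubled); // 10 — the effect re-ran automatically
```

Note that only keys actually read during the effect's run are tracked — this is why a plain `state.count * 2` is enough to wire up the dependency.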

Why it matters: Understanding how Vue 3 reactivity works under the hood makes you a better Vue developer — you understand why reactive() only works on objects, why you can't destructure reactive state directly, and why ref() exists for primitive values.

Real applications: Vue 3's reactive() API, MobX's observable objects, Immer's draft-based immutable updates, and any custom reactive store built from scratch for educational or lightweight use cases.

Common mistakes: Destructuring a reactive object and losing reactivity (common Vue 3 bug — the destructured values are no longer proxied), thinking Proxy automatically handles deep nesting (Vue recursively proxies nested objects lazily on access), and not knowing that primitives cannot be proxied (hence the need for ref() in Vue 3).

Object.defineProperty lets you intercept get and set operations on a specific property. Proxy intercepts all operations on an entire object through 13 different traps. Key limitations of Object.defineProperty: it cannot detect new property additions, cannot intercept array length changes, and must be applied property-by-property. Proxy solves all these problems. Object.defineProperty is still useful for simple cases (like getters/setters) because it has slightly less overhead than a full Proxy. Here is a comparison:
// Object.defineProperty — per-property, limited
const obj1 = { _x: 0 };
Object.defineProperty(obj1, 'x', {
  get() { return this._x; },
  set(v) { this._x = v * 2; }
});
obj1.x = 5;
console.log(obj1.x); // 10

// Proxy — whole object, full control
const obj2 = {};
const proxy = new Proxy(obj2, {
  set(target, key, value) {
    target[key] = value * 2; // double all values
    return true;
  }
});
proxy.x = 5;
proxy.y = 3; // catches ALL properties
console.log(proxy.x); // 10
console.log(proxy.y); // 6

// Proxy can detect new properties
proxy.newProp = 7; // works!
// Object.defineProperty must be applied to newProp separately
Proxy requires a modern browser or Node.js — it cannot be polyfilled (unlike many other ES6 features) because it intercepts fundamental operations at the engine level. Vue 2 used Object.defineProperty and had to patch arrays manually. Vue 3 switched to Proxy to eliminate these edge cases.

Why it matters: This comparison is a common interview question for senior JS roles. It tests whether you understand the architectural tradeoffs in reactive framework design and why Vue 3 was a major rewrite.

Real applications: Simple computed property patterns use Object.defineProperty (still valid). Full reactive systems use Proxy. Library authors choose between them based on browser support requirements and the scope of interception needed.

Common mistakes: Thinking Proxy is always better — for a single known property, Object.defineProperty is simpler and has less overhead. Also forgetting that Proxy cannot be polyfilled (no IE11 support) while Object.defineProperty can be used in older environments.

Use the set and deleteProperty traps in a Proxy to intercept write operations and throw an error. This makes the object effectively read-only — all reads pass through normally, but any write throws. Unlike Object.freeze(), a Proxy-based read-only wrapper can give custom error messages and can be applied to nested objects recursively. This pattern is useful for config objects, constants, and frozen API responses. Here is a read-only proxy:
function readOnly(target) {
  return new Proxy(target, {
    set(_, key) {
      throw new TypeError(`Cannot set "${key}" — object is read-only`);
    },
    deleteProperty(_, key) {
      throw new TypeError(`Cannot delete "${key}" — object is read-only`);
    }
  });
}

const config = readOnly({ host: 'localhost', port: 3000 });
console.log(config.host); // "localhost" (read works)
config.host = 'example';  // TypeError: Cannot set "host"
delete config.port;        // TypeError: Cannot delete "port"

// Deep read-only (recursive)
function deepReadOnly(target) {
  return new Proxy(target, {
    get(obj, key) {
      const val = obj[key];
      return typeof val === 'object' && val !== null
        ? deepReadOnly(val)
        : val;
    },
    set() { throw new TypeError('Read-only'); },
    deleteProperty() { throw new TypeError('Read-only'); }
  });
}
Note that Object.freeze() is simpler for most cases, but it is shallow — nested objects are not frozen. A deep read-only Proxy solves this. The Proxy approach also lets you freeze objects that were created elsewhere without modifying them directly.

Why it matters: Immutable data patterns prevent accidental mutations — a major source of bugs in data-driven UIs. Proxy-based read-only is more ergonomic than Object.freeze because it gives better error messages and works deeply without upfront recursion.

Real applications: Configuration objects passed to third-party code, Redux-like state objects that should never be mutated directly, API response wrappers that prevent accidental write-back, and test fixture objects that should remain pristine across tests.

Common mistakes: Using shallow Object.freeze() and being surprised that nested objects are still mutable, forgetting to also trap deleteProperty (allows deletion even if set is blocked), and not realizing Object.freeze() silently ignores writes in non-strict mode while Proxy can always throw.

Reflect.ownKeys() returns all own property keys of an object — including symbols and non-enumerable properties. It combines Object.getOwnPropertyNames() and Object.getOwnPropertySymbols(). Object.keys() only returns own enumerable string keys — it skips symbol keys and non-enumerable properties. Use Reflect.ownKeys() when you need a complete picture of everything on an object, without filtering by enumerability or type. Here is the comparison:
const sym = Symbol('id');
const obj = Object.defineProperties({ [sym]: 'symVal' }, {
  visible: { value: 1, enumerable: true },
  hidden: { value: 2, enumerable: false }
});

console.log(Object.keys(obj));
// ['visible']  — only enumerable string keys

console.log(Object.getOwnPropertyNames(obj));
// ['visible', 'hidden']  — all string keys (enumerable or not)

console.log(Object.getOwnPropertySymbols(obj));
// [Symbol(id)]  — only symbol keys

console.log(Reflect.ownKeys(obj));
// ['visible', 'hidden', Symbol(id)]  — everything!
Reflect.ownKeys() is particularly useful inside the ownKeys Proxy trap to control which keys are visible to operations like Object.keys() or for...in. The order of results from Reflect.ownKeys() is: integer-like string keys first (sorted numerically), then other string keys (in insertion order), then symbol keys (in insertion order).
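Here is a small sketch of that ownKeys-trap use case — filtering the complete list from Reflect.ownKeys() to hide underscore-prefixed keys from enumeration (the underscore convention is just an example):

```javascript
// Hide "_private" keys from Object.keys, JSON.stringify, for...in —
// while direct property reads still work.
const data = { id: 1, name: 'Alice', _internal: 'secret' };

const visibleOnly = new Proxy(data, {
  ownKeys(target) {
    return Reflect.ownKeys(target)
      .filter(key => typeof key !== 'string' || !key.startsWith('_'));
  }
});

console.log(Object.keys(visibleOnly));    // ['id', 'name']
console.log(JSON.stringify(visibleOnly)); // {"id":1,"name":"Alice"}
console.log(visibleOnly._internal);       // 'secret' — reads still work
```

Because JSON.stringify and for...in both go through the ownKeys machinery, one trap hides the keys from all enumeration paths at once.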

Why it matters: When building serialization, introspection, or debugging tools, you need to enumerate ALL properties — including hidden symbol keys and non-enumerable properties. Object.keys() alone will miss critical metadata.

Real applications: Object serializers that need to capture all properties (including non-enumerable metadata), test utilities that compare objects deeply including symbol-keyed properties, and the ownKeys Proxy trap which must use Reflect.ownKeys to get the complete list.

Common mistakes: Using Object.keys() when you mean Reflect.ownKeys() and missing symbol or non-enumerable properties, not knowing the deterministic order of Reflect.ownKeys(), and confusing Reflect.ownKeys (all own keys) with for...in (enumerable own + inherited).

Use the get trap to intercept every property read on an object. Log the key being accessed, then use Reflect.get() to return the actual value so the object behaves normally. This is a common debugging technique — you can temporarily wrap an object with a logging proxy to see exactly what code is reading from it. You can also track write access using the set trap and function calls using the apply trap. Here is a logging proxy:
function withLogging(target, label = 'obj') {
  return new Proxy(target, {
    get(obj, key, receiver) {
      const val = Reflect.get(obj, key, receiver);
      if (typeof val === 'function') {
        console.log(`Called: ${label}.${String(key)}()`);
        return val.bind(obj); // preserve 'this'
      }
      console.log(`Read: ${label}.${String(key)} = ${val}`);
      return val;
    },
    set(obj, key, value, receiver) {
      console.log(`Write: ${label}.${String(key)} = ${value}`);
      return Reflect.set(obj, key, value, receiver);
    }
  });
}

const user = withLogging({ name: 'Alice', greet() { return 'hi'; } }, 'user');
user.name;     // "Read: user.name = Alice"
user.name = 'Bob'; // "Write: user.name = Bob"
user.greet();  // "Called: user.greet()"
Remember to use Reflect.get(target, key, receiver) — passing the receiver ensures prototype-based properties (like getters) work correctly. This pattern is also used in testing and mocking libraries to spy on method calls.

Why it matters: Logging proxies are an excellent debugging technique that requires zero changes to the target code. They make it possible to trace exactly what code path is interacting with a shared object without adding console.logs throughout the codebase.

Real applications: Debugging unexpected property mutations in shared state, building spy/stub utilities in test frameworks, profiling which properties are accessed hot (accessed most frequently), and auditing access to sensitive configuration objects.

Common mistakes: Not binding function values to the original target (causes wrong this context inside methods), logging infinitely by accessing proxied properties inside the trap itself, and forgetting to pass the receiver argument to Reflect.get (breaks inherited getters).

Yes. Proxy.revocable(target, handler) creates a proxy that can be permanently disabled. It returns an object with two properties: proxy (the proxy object) and revoke (a function that kills the proxy). After calling revoke(), any operation on the proxy throws a TypeError. This is useful for time-limited access — give someone a proxy, then revoke it when access should end. Revocable proxies are used in security-sensitive code to ensure objects cannot be accessed after a certain point. Here is how revocable proxies work:
const obj = { secret: 'hidden data' };
const { proxy, revoke } = Proxy.revocable(obj, {
  get(target, key) {
    return target[key];
  }
});

console.log(proxy.secret); // "hidden data"

// Later, revoke access
revoke();

try {
  console.log(proxy.secret); // TypeError: Cannot perform 'get' on revoked proxy
} catch (e) {
  console.log(e.message);
}

// After revoking, the proxy is permanently disabled
// revoke() is safe to call multiple times (no error)
Once revoked, the proxy cannot be un-revoked — it is permanently disabled. Revocable proxies are part of the capability-based security model — pass a proxy instead of the actual object, and revoke access when done.

Why it matters: Revocable proxies implement the principle of least privilege at the object level. Instead of sharing direct object references, you share time-limited proxy capabilities that can be revoked without touching the original object.

Real applications: Session-based object access in multi-tenant applications, temporary API surface exposure to untrusted third-party scripts, sandboxed plugin systems where plugin access should expire, and capability tokens in security-focused architectures.

Common mistakes: Storing the original target reference alongside the proxy (defeating the purpose of revocation), not handling the TypeError thrown after revocation in consuming code, and thinking revoke() destroys the target object (it only disables the proxy — the target is unaffected).

The apply trap intercepts function calls. It triggers when the proxied function is called directly, via call(), or via apply(). It receives the target function, this context, and the arguments array. This lets you wrap functions with logging, memoization, access control, or argument validation — all transparently. The target of a Proxy with an apply trap must be a callable object (a function). Here is the apply trap in action:
function multiply(a, b) { return a * b; }

const proxiedMultiply = new Proxy(multiply, {
  apply(target, thisArg, args) {
    console.log(`Called with args: ${args}`);
    const result = Reflect.apply(target, thisArg, args);
    console.log(`Result: ${result}`);
    return result;
  }
});

proxiedMultiply(3, 4);
// "Called with args: 3,4"
// "Result: 12"

// Works with call() and apply() too
proxiedMultiply.call(null, 5, 6);  // "Called with args: 5,6"
proxiedMultiply.apply(null, [7,8]); // "Called with args: 7,8"

// Memoization using apply trap
function memoize(fn) {
  const cache = new Map();
  return new Proxy(fn, {
    apply(target, thisArg, args) {
      const key = JSON.stringify(args);
      if (cache.has(key)) return cache.get(key);
      const result = Reflect.apply(target, thisArg, args);
      cache.set(key, result);
      return result;
    }
  });
}
The construct trap is similar to apply but intercepts the new keyword. It is useful for logging constructor calls or transforming the returned object. Apply-trapped proxies are used in testing frameworks to create spy functions that record how they were called.
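The construct trap mentioned above looks like this in practice — a sketch (TrackedUser and the default-argument behavior are illustrative, not a standard pattern) that logs constructor calls and fills in a missing argument:

```javascript
// construct trap: intercepts `new`, logs the call, and injects
// a default argument before delegating via Reflect.construct.
class User {
  constructor(name) { this.name = name; }
}

const TrackedUser = new Proxy(User, {
  construct(target, args, newTarget) {
    console.log(`new ${target.name}(${args.join(', ')})`);
    if (args.length === 0) args = ['anonymous']; // default argument
    return Reflect.construct(target, args, newTarget);
  }
});

const u1 = new TrackedUser('Alice'); // logs: new User(Alice)
const u2 = new TrackedUser();        // logs: new User()
console.log(u2.name);                // 'anonymous'
console.log(u1 instanceof User);     // true
```

Reflect.construct(target, args, newTarget) is the construct-trap counterpart of Reflect.apply — it preserves new.target so subclassing through the proxy keeps working.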

Why it matters: The apply trap enables transparent function decoration — logging, memoization, rate-limiting, and profiling without modifying the original function. This is the Decorator pattern implemented at the JavaScript meta-programming level.

Real applications: Memoization wrappers (cache expensive function results), API call throttling (rate-limit how often a function can be called), test spy functions (record what arguments were passed), and function tracing tools (log all function invocations with arguments and results).

Common mistakes: Applying an apply trap to a non-function target (TypeError), forgetting that Reflect.apply(target, thisArg, args) must be used to invoke the original function inside the trap, and not handling the construct trap when the function is also used as a constructor.

While Proxy is powerful, it has some important limitations. First, Proxy cannot be polyfilled — unlike most ES6 features, there is no way to fully emulate it in older browsers. Second, a Proxy does not pass strict identity checks (proxy !== target), so code that stores references to the original object and compares by identity will bypass the proxy entirely. Also, some built-in objects with internal slots (like Map, Set, and Date) cannot be properly proxied in all scenarios because their methods access internal slots directly, bypassing the proxy. Performance is also a concern — every intercepted operation adds overhead, so Proxy should not be used in performance-critical hot paths. Here are the key limitations:
// 1. Not polyfillable — no Babel transform works for Proxy
// Must use modern browsers/Node.js (no IE support)

// 2. Identity issue
const obj = {};
const proxy = new Proxy(obj, {});
console.log(proxy === obj); // false
// Code that stores `obj` still uses original

// 3. Built-ins with internal slots
const map = new Map();
const mapProxy = new Proxy(map, {});
// mapProxy.set('key', 'val'); // TypeError: incompatible receiver
// Map.prototype.set reads the internal [[MapData]] slot on `this`,
// and the proxy object has no such slot

// Workaround for built-ins:
const safeMapProxy = new Proxy(map, {
  get(target, key) {
    const val = Reflect.get(target, key);
    return typeof val === 'function' ? val.bind(target) : val;
  }
});
safeMapProxy.set('key', 'val'); // now works
The workaround for built-ins with internal slots is to bind methods to the original target in the get trap. Despite these limitations, Proxy is one of the most unique meta-programming features in JavaScript — no other JavaScript feature can intercept operators like in, delete, and new at runtime.

Why it matters: Understanding Proxy's limitations helps you make informed architectural decisions — knowing when to use it and when simpler alternatives suffice. This knowledge prevents production bugs when Proxy is applied to contexts it can't handle.

Real applications: The built-in slot workaround (binding methods to target) is used in Vue 3's reactive Map/Set handling. The non-polyfillable nature means Proxy-dependent code (Vue 3, etc.) requires polyfill-free modern environments.

Common mistakes: Trying to Proxy a Map/Set without binding methods to target, assuming Proxy works deeply (nested objects are not automatically proxied), deploying Proxy-dependent code to environments that don't support it (no IE11), and not anticipating the performance overhead in frequently-called hot paths.

The get trap lets you return a default value when a property does not exist on the target, instead of returning undefined. This is useful for config objects, sparse arrays, and optional-with-default patterns. The key check is key in target — if the key exists, return the actual value; otherwise return the default. This is similar to Python's collections.defaultdict. Here is a default-value proxy:
function withDefaults(target, defaults) {
  return new Proxy(target, {
    get(obj, key) {
      return key in obj ? obj[key] : defaults[key];
    }
  });
}

const config = withDefaults(
  { port: 8080 },        // actual values
  { host: 'localhost', port: 3000, debug: false } // defaults
);

console.log(config.port);  // 8080  (own value)
console.log(config.host);  // "localhost" (default)
console.log(config.debug); // false (default)

// Default object factory
function defaultDict(defaultFactory) {
  return new Proxy({}, {
    get(target, key) {
      if (!(key in target)) {
        target[key] = defaultFactory();
      }
      return target[key];
    }
  });
}
const counter = defaultDict(() => 0);
counter.apples++;
counter.apples++;
counter.bananas++;
console.log(counter.apples);  // 2
console.log(counter.bananas); // 1
The default value proxy is transparent — code does not need to know about defaults. It just reads properties normally. Use Reflect.get(target, key, receiver) in the get trap when the target might have prototype getters to ensure correct behavior.

Why it matters: Default values are a fundamental API design problem. The Proxy-based approach is more powerful than the spread/nullish-coalescing approach because it works dynamically for any key without enumerating defaults upfront.

Real applications: Configuration systems with layered defaults (per-environment overrides over base config), Python-style defaultdict for frequency counting, lazy-initialization patterns where values are computed on first access, and multi-tenant settings with global defaults and per-tenant overrides.

Common mistakes: Checking obj[key] !== undefined instead of key in obj (misses properties explicitly set to undefined), not using Reflect.get for correctness with prototype getters, and not handling symbol keys in the defaults object if symbol-keyed configuration is needed.

Meta-programming means writing code that operates on other code — reading, modifying, or controlling how code behaves at runtime. JavaScript supports meta-programming through Proxy, Reflect, Symbol, and property descriptors. With meta-programming you can intercept property access, customize how objects behave with operators, define custom iteration behavior, and change how objects convert to strings or numbers. This is different from regular programming (writing logic to process data) — meta-programming writes logic about the code itself. Here is a summary of meta-programming tools:
// 1. Proxy — intercept object operations
const proxy = new Proxy(obj, handler);

// 2. Reflect — perform default object operations
Reflect.get(obj, 'key');
Reflect.set(obj, 'key', value);

// 3. Symbol — customize built-in behaviors
class MyArray {
  [Symbol.iterator]() { /* custom iteration */ }
  get [Symbol.toStringTag]() { return 'MyArray'; }
}
console.log(Object.prototype.toString.call(new MyArray()));
// "[object MyArray]"

// 4. Property descriptors — define property behavior
Object.defineProperty(obj, 'x', {
  get() { return this._x; },
  set(v) { this._x = v * 2; },
  enumerable: true,
  configurable: false
});
Meta-programming is powerful but should be used carefully — it can make code hard to understand and debug because normal assumptions about how objects work no longer apply. Frameworks like Vue, MobX, and Immer use meta-programming internally so that developers can write simple, natural JavaScript while the framework handles complexity behind the scenes.

Why it matters: Meta-programming is what enables modern framework magic. When you write state.count++ and the UI updates automatically, that's meta-programming at work. Understanding it makes you a more effective framework user and a candidate who can explain how frameworks work in depth.

Real applications: Reactive state management (Vue, MobX), immutable state helpers (Immer uses Proxy to track which paths were mutated), serialization libraries that introspect object structure, ORM-like systems that intercept property access to generate SQL, and sandboxing/isolation environments.

Common mistakes: Overusing meta-programming where simpler coding patterns suffice (makes code harder to debug), not documenting which objects are Proxy-wrapped (consumers can't tell from outside), and confusing meta-programming with reflection — reflection reads code structure while meta-programming changes how code behaves.