JavaScript

DOM Manipulation

15 Questions

querySelector() returns the first element that matches a CSS selector, or null if no match is found. querySelectorAll() returns a static NodeList of all matching elements. Both methods accept any valid CSS selector string including complex selectors with combinators, pseudo-classes, and attribute selectors. The NodeList returned by querySelectorAll is static — it does not update when the DOM changes.
const el = document.querySelector('.card');       // first .card
const all = document.querySelectorAll('.card');    // all .card elements

// NodeList is NOT a live collection
document.querySelectorAll('p').forEach(p => {
  p.style.color = 'blue';
});

// By ID, attribute, nested
document.querySelector('#app');
document.querySelector('[data-role="admin"]');
document.querySelector('ul > li:first-child');
Both methods can be called on any element, not just document, to search within a specific subtree. They are the modern replacement for older methods like getElementById() and getElementsByClassName().

Why it matters: querySelector and querySelectorAll are the foundation of all DOM interaction — every frontend interview exercise that touches the DOM uses them. The distinction between live HTMLCollections (from older methods) and static NodeLists (from querySelectorAll) is a frequent interview trap.

Real applications: Every JavaScript-driven UI interaction — reading form input values, toggling classes, attaching events, updating content — starts with selecting elements. Framework-free code and jQuery replacements rely entirely on these APIs.

Common mistakes: Forgetting that querySelectorAll returns a static NodeList (not live), expecting querySelector to throw when nothing matches (it returns null, so the next property access throws a TypeError), and using querySelectorAll(...)[0] instead of querySelector for single elements.
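A small guard pattern turns the silent null into an immediate, descriptive error; a sketch (the mustQuery helper name and its error message are illustrative, not a standard API):

```javascript
// Hypothetical helper: fail fast with a descriptive error instead of
// letting a later property access on null throw a vague TypeError.
function mustQuery(root, selector) {
  const el = root.querySelector(selector);
  if (el === null) {
    throw new Error(`No element matches selector "${selector}"`);
  }
  return el;
}

// Browser usage: mustQuery(document, '.card').classList.add('active');
```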

document.createElement() creates a new element in memory not yet attached to the document; configure its properties first, then insert it with appendChild(), append(), prepend(), before(), or after(). The modern append() family handles multiple arguments and plain text strings in one call, unlike appendChild() which accepts only one Node. For HTML string injection from trusted sources, insertAdjacentHTML() is the most targeted option.
const div = document.createElement('div');
div.textContent = 'Hello World';
div.className = 'greeting';
div.dataset.id = '42';

document.body.appendChild(div);       // insert as last child
document.body.prepend(div);           // moves div — a node lives in only one place
existingEl.after(div);                // moves div again, now after existingEl

// append() accepts strings and multiple nodes at once
parent.append('plain text', div, anotherNode);

// insertAdjacentHTML — four precise positions
// 'beforebegin' | 'afterbegin' | 'beforeend' | 'afterend'
el.insertAdjacentHTML('beforeend', '<p>Trusted HTML</p>');
el.remove(); // modern removal, no parent reference needed

Why it matters: Dynamic element creation is a core interview skill — rendering lists, search results, error messages, or form fields programmatically requires fluency with createElement and the insert methods.

Real applications: Trello creates card DOM elements when you add a card; Slack inserts message elements at runtime; Angular's ViewContainerRef creates component host elements using a mechanism identical in concept to createElement.

Common mistakes: Using innerHTML to insert user-provided content — note that `<script>` tags inserted via innerHTML do not execute, but payloads like `<img src=x onerror=alert(1)>` do; always use textContent for text and only insertAdjacentHTML with sanitized or developer-authored markup.

addEventListener() attaches a handler without overwriting existing ones, unlike the onclick property which allows only one callback per event. Removing a listener requires the exact same function reference passed to removeEventListener() — anonymous arrow functions defined inline cannot be removed. The options object supports once (auto-removes after first fire), passive (hints no preventDefault for better scroll performance), and signal for AbortController-based bulk cleanup.
function handleClick(e) {
  console.log('clicked', e.target);
}

btn.addEventListener('click', handleClick);
btn.removeEventListener('click', handleClick); // identical reference required

// Options object
el.addEventListener('scroll', onScroll, {
  passive: true,   // browser can optimize scrolling
  capture: false,  // bubbling phase (default)
  once: true       // auto-removes after first invocation
});

// AbortController — clean up multiple listeners at once
const ctrl = new AbortController();
btn.addEventListener('click', handler, { signal: ctrl.signal });
input.addEventListener('input', handler, { signal: ctrl.signal });
ctrl.abort(); // removes both listeners simultaneously

Why it matters: Forgotten event listeners are one of the most common memory leaks in SPAs; interviewers expect knowledge of proper cleanup via removeEventListener or AbortController, especially in component teardown scenarios.

Real applications: React's synthetic event system wraps addEventListener; Vue's v-on directive adds and removes listeners automatically on component mount/unmount; Angular's Renderer2.listen returns a cleanup function that calls removeEventListener.

Common mistakes: Passing an inline arrow function directly to addEventListener — since each call creates a new function object, removeEventListener with another arrow function will never match, making the listener impossible to remove.
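The underlying pitfall is plain reference identity, which can be shown without any DOM; a minimal sketch:

```javascript
// Two syntactically identical arrow functions are still distinct objects,
// so removeEventListener called with the second one matches nothing.
const onClickA = () => console.log('clicked');
const onClickB = () => console.log('clicked');
console.log(onClickA === onClickB); // false — distinct references

// Keep one named reference if the listener must ever be removed:
// btn.addEventListener('click', onClickA);
// btn.removeEventListener('click', onClickA); // same reference, works
```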

Event delegation attaches a single listener on a parent element that handles events from all current and future children by exploiting event bubbling — clicks on children propagate up to the parent where the handler inspects e.target to decide what to do. This is more memory-efficient than attaching individual listeners to each child and automatically covers dynamically added elements without re-binding. Use e.target.closest() rather than e.target.matches() when clickable elements contain child nodes.
// One listener on <ul> handles all current and future <li> clicks
document.querySelector('ul').addEventListener('click', (e) => {
  // matches() — exact element
  if (e.target.matches('li')) {
    console.log('Item:', e.target.textContent);
  }
  // closest() — works even if click lands on a child (e.g., icon in button)
  const item = e.target.closest('.list-item');
  if (item) handleItemClick(item.dataset.id);
});

// Dynamically added items work automatically
function addItem(text) {
  const li = document.createElement('li');
  li.textContent = text;
  document.querySelector('ul').appendChild(li);
  // No new event listener needed!
}

Why it matters: Attaching 1000 individual click listeners versus one delegated listener is a classic performance interview question, and delegation is the expected answer for handling lists with dynamic content.

Real applications: Gmail uses delegation on the email list container; jQuery's .on(event, selector, handler) is built entirely on delegation; React delegates all synthetic events to the root container rather than individual DOM nodes.

Common mistakes: Using e.target.matches('.btn') when the button contains a child icon — the click fires on the icon element, not the button, so matches fails; e.target.closest('.btn') traverses up and correctly finds the button.

DOM events travel in three phases: capturing (window down to target), target (fires on the clicked element), and bubbling (target back up to window). By default addEventListener fires during the bubbling phase; passing { capture: true } fires it during the downward capturing phase instead. e.stopPropagation() halts the event's travel at the current element, while e.stopImmediatePropagation() additionally blocks other handlers on the same element.
// Bubbling (default) — inner fires before outer
child.addEventListener('click', () => console.log('child'));   // fires 1st
parent.addEventListener('click', () => console.log('parent')); // fires 2nd

// Capturing — outer fires before inner
parent.addEventListener('click', () => console.log('parent capture'), true); // fires 1st
child.addEventListener('click', () => console.log('child'));   // fires 2nd

// Stop propagation
child.addEventListener('click', (e) => {
  e.stopPropagation(); // parent listener will NOT fire
});

// stopImmediatePropagation — also blocks sibling handlers on same element
btn.addEventListener('click', (e) => {
  e.stopImmediatePropagation();
  console.log('only this runs');
});
btn.addEventListener('click', () => console.log('never fires'));

Why it matters: Understanding bubbling is prerequisite knowledge for event delegation, debugging double-firing handlers, and building close-on-backdrop-click patterns for modals and dropdowns.

Real applications: React's synthetic event system delegates all events to the root via bubbling; Bootstrap modals use stopPropagation to prevent backdrop clicks from reaching the modal dialog; Google Maps uses capture-phase listeners for map drag interactions.

Common mistakes: Using stopPropagation as a quick fix when an unexpected parent handler fires — it silently breaks delegation elsewhere; the correct approach is to check e.target inside the parent handler and skip irrelevant targets.
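A sketch of the target-checking alternative (onListClick and the .list-item class are illustrative names); the handler filters irrelevant targets itself rather than muting the event:

```javascript
// Sketch: the delegated handler decides per event whether the click is
// relevant, returning the item's id or null instead of stopPropagation.
function onListClick(e) {
  const item = e.target.closest('.list-item');
  if (!item) return null; // irrelevant target: ignore, don't mute the event
  return item.dataset.id;
}

// Browser wiring: list.addEventListener('click', onListClick);
```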

The classList property exposes a live DOMTokenList for surgical class manipulation — add(), remove(), toggle(), contains(), and replace() — without touching other classes on the element. Directly setting element.className replaces the entire class string, while classList operates only on the specified class. The toggle() method accepts an optional boolean second argument that force-adds or force-removes the class based on a condition.
const el = document.querySelector('.box');

el.classList.add('active', 'visible');     // multiple classes at once
el.classList.remove('hidden', 'disabled');
el.classList.toggle('open');               // add if absent, remove if present
el.classList.contains('active');           // true/false check
el.classList.replace('loading', 'loaded'); // atomic swap

// Conditional toggle — second arg is boolean force value
el.classList.toggle('dark-mode', prefersDark);    // add if true
el.classList.toggle('expanded', height > 200);   // add if condition is true

// Read all classes
console.log(el.classList.value);           // full class string, e.g. "box active visible open"
console.log([...el.classList]);            // array of class names

Why it matters: classList is the idiomatic API for state-driven UI changes in interview exercises — toggling active states for navigation items, tabs, accordions, and modals without inline styles.

Real applications: Bootstrap's JavaScript components exclusively use classList.toggle for show/hide; Angular's [ngClass] binding and Vue's :class directive compile to classList operations; Tailwind CSS projects toggle utility classes dynamically with classList.

Common mistakes: Setting element.className = 'new-class' instead of classList.add — this replaces ALL existing classes including framework-applied utility classes, causing cascading visual breakage.

HTML data-* attributes embed custom data directly in markup and are accessed via the element's dataset property, which automatically converts kebab-case attribute names (data-user-id) to camelCase property names (dataset.userId). All values are stored and returned as strings — numbers, booleans, and objects must be explicitly converted. Dataset values are also addressable in CSS via attribute selectors, enabling data-driven styling without extra JavaScript.
// HTML: <div id="card" data-user-id="42" data-is-admin="true" data-item-count="5">
const el = document.getElementById('card');
console.log(el.dataset.userId);      // "42"   — string, not number!
console.log(el.dataset.isAdmin);     // "true" — string, not boolean!
console.log(el.dataset.itemCount);   // "5"    — kebab to camelCase

// Parse types explicitly
const id      = parseInt(el.dataset.userId, 10);       // 42 (number)
const isAdmin = el.dataset.isAdmin === 'true';         // true (boolean)

// Set and delete
el.dataset.status = 'active';    // adds data-status="active"
delete el.dataset.role;          // removes data-role attribute

// CSS: [data-is-admin="true"] { background: gold; }

Why it matters: Dataset is the canonical bridge between server-rendered HTML and JavaScript; the kebab-to-camelCase conversion is a frequent interview gotcha that trips up developers who expect symmetrical naming.

Real applications: Rails and Django templates pass model IDs via data-* to JavaScript; Stimulus.js (used by Basecamp and HEY) uses data-controller attributes as its core wiring mechanism; htmx reads dataset attributes for AJAX configuration.

Common mistakes: Doing arithmetic directly on dataset values without parsing — dataset.count + 1 concatenates strings ("51" instead of 6) because all dataset values are strings regardless of how they appear in HTML.
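The string coercion can be shown without a DOM at all; a minimal sketch (parseCount is a hypothetical helper, not a built-in):

```javascript
// Dataset values are always strings; '+' on a string concatenates.
console.log('5' + 1); // "51" — string concatenation, not arithmetic

// Hypothetical helper: convert explicitly before doing math.
function parseCount(raw) {
  const n = Number(raw);
  if (Number.isNaN(n)) throw new TypeError(`Not a numeric value: "${raw}"`);
  return n;
}
console.log(parseCount('5') + 1); // 6 — numeric addition
```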

innerHTML parses its value as HTML and renders tags as DOM nodes, while textContent treats the value as literal plain text, escaping all tag characters and making it inherently safe for user-supplied data. innerText diverges from textContent by returning only visible text (respecting CSS display/visibility) and triggering layout reflow to compute it, making it slower. Setting innerHTML with user input is a direct XSS vulnerability and should never be done without sanitization.
// innerHTML — parses HTML, renders tags
el.innerHTML = '<b>Bold</b>';           // renders bold
el.innerHTML = userInput;               // XSS VULNERABILITY — never do this

// textContent — safe, treats everything as plain text
el.textContent = '<b>Not bold</b>';    // shows literal "<b>Not bold</b>"
el.textContent = userInput;             // always safe

// innerText vs textContent
// <p>Hello <span style="display:none">secret</span></p>
p.textContent; // "Hello secret"  — includes hidden text nodes
p.innerText;   // "Hello"         — only visible text, forces reflow

// Performance: textContent > innerText (no layout calculation)
// Safe HTML from trusted source:
el.insertAdjacentHTML('beforeend', sanitizedHTML);

Why it matters: XSS via innerHTML is in the OWASP Top 10 — interviewers specifically probe whether candidates default to textContent for user data, reserving innerHTML only for trusted developer-authored strings.

Real applications: React exposes dangerouslySetInnerHTML with a deliberately alarming name; DOMPurify (deployed by GitLab and Atlassian) sanitizes HTML before safe innerHTML injection; WordPress escapes output by default to prevent innerHTML-style XSS in themes.

Common mistakes: Using innerHTML for simple text updates — beyond the XSS risk it is slower than textContent (must re-parse HTML) and destroys then recreates child nodes, losing any event listeners attached to them.
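When an HTML string genuinely must be built from user data, escaping it first is the minimum defense; a minimal sketch (escapeHTML is not a built-in, and real code should prefer textContent or a sanitizer such as DOMPurify):

```javascript
// Minimal escape of the five HTML-significant characters; this is
// effectively what textContent does when it renders text literally.
function escapeHTML(str) {
  const map = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return str.replace(/[&<>"']/g, (ch) => map[ch]);
}

console.log(escapeHTML('<b>hi</b>')); // "&lt;b&gt;hi&lt;/b&gt;"
```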

A DocumentFragment is a lightweight in-memory container for DOM nodes not attached to the live document, so building nodes inside it causes zero reflows or repaints. When you append the fragment to the real DOM, all its children are moved in a single operation, triggering only one layout recalculation regardless of how many nodes were added. After insertion, the fragment is automatically emptied because children are moved, not copied.
// Without fragment — each appendChild invalidates layout
const ul = document.querySelector('ul');
for (let i = 0; i < 1000; i++) {
  const li = document.createElement('li');
  li.textContent = `Item ${i}`;
  ul.appendChild(li); // may trigger up to 1000 separate reflows
}

// With DocumentFragment — single reflow
const fragment = document.createDocumentFragment();
for (let i = 0; i < 1000; i++) {
  const li = document.createElement('li');
  li.textContent = `Item ${i}`;
  fragment.appendChild(li); // in-memory, no reflow
}
ul.appendChild(fragment); // one reflow — all 1000 items inserted at once

// Fragment is drained after insert
console.log(fragment.childNodes.length); // 0

// Reusable template: use <template> element instead
const tpl = document.querySelector('#row-template');
const clone = tpl.content.cloneNode(true); // fragment-like, reusable

Why it matters: DocumentFragment is the textbook answer to "how do you batch DOM insertions efficiently" in performance interviews — it directly demonstrates knowledge of browser rendering and reflow costs.

Real applications: Vue's virtual DOM diffing batches real DOM mutations similarly; React's <Fragment></Fragment> component is conceptually related (groups JSX without a wrapper node); Handlebars and Mustache build fragments before injecting rendered content.

Common mistakes: Trying to reuse a fragment after appending it — the fragment is empty after insertion; for reusable templates use a <template> HTML element and call template.content.cloneNode(true) each time.

MutationObserver watches for DOM mutations — child node additions/removals, attribute changes, and text content modifications — and delivers them as batched microtask callbacks, making it far more efficient than the deprecated synchronous Mutation Events. Configure it with observe() specifying which mutation types to track, and stop watching with disconnect(). Each callback receives an array of MutationRecord objects describing exactly what changed.
const observer = new MutationObserver((mutations) => {
  mutations.forEach(mutation => {
    if (mutation.type === 'childList') {
      mutation.addedNodes.forEach(n => console.log('Added:', n));
      mutation.removedNodes.forEach(n => console.log('Removed:', n));
    }
    if (mutation.type === 'attributes') {
      console.log(`${mutation.attributeName} changed`);
      console.log('Old:', mutation.oldValue);
    }
  });
});

observer.observe(document.getElementById('app'), {
  childList: true,          // watch child add/remove
  attributes: true,         // watch attribute changes
  attributeOldValue: true,  // capture old attribute value
  subtree: true,            // watch all descendants
  characterData: true       // watch text content changes
});

observer.disconnect();                  // stop observing
const pending = observer.takeRecords(); // flush queued records synchronously

Why it matters: MutationObserver is the go-to solution for reacting to third-party or framework-driven DOM changes — writing polyfills, browser extensions, or accessibility tools requires this API.

Real applications: Angular's zone.js uses MutationObserver to detect async changes and trigger change detection; Google Tag Manager monitors DOM mutations for dynamic page tracking; axe-core (by Deque) re-runs accessibility audits whenever the DOM mutates.

Common mistakes: Forgetting disconnect() when the observer is no longer needed — an active MutationObserver holds strong references to its target and callback, preventing garbage collection and causing memory leaks in long-running SPAs.

The DOM provides several properties for traversing nodes in the document tree. parentNode and parentElement move up, children and childNodes move down, and nextElementSibling / previousElementSibling move sideways. These properties let you navigate without querying the entire document.
const item = document.querySelector('.item');

// Moving up
console.log(item.parentNode);        // parent node (any node type)
console.log(item.parentElement);     // parent element only
console.log(item.closest('.list'));   // nearest matching ancestor (or the element itself)

// Moving down
console.log(item.children);            // HTMLCollection of child elements
console.log(item.childNodes);          // NodeList including text nodes
console.log(item.firstElementChild);   // first child element
console.log(item.lastElementChild);    // last child element

// Moving sideways
console.log(item.nextElementSibling);      // next sibling element
console.log(item.previousElementSibling);  // previous sibling element

// Walking all children
for (const child of item.children) {
  console.log(child.tagName);
}
Note that childNodes includes text nodes and comments, while children returns only element nodes. Use closest() for ancestor lookup — it walks up the tree returning the first element matching a CSS selector, or null.

Why it matters: DOM traversal is used in event delegation, custom component logic, and any code that must navigate relationships between elements. Knowing the difference between childNodes (all nodes) vs children (elements only) prevents bugs with unexpected text nodes.

Real applications: Building accessible custom components, implementing drag-and-drop reordering, creating table row manipulation utilities, and event delegation (finding the relevant ancestor with closest()) all rely on traversal APIs.

Common mistakes: Using childNodes expecting only element nodes (it includes text/comment nodes — use children), using parentNode when you need an element (it may return a Document or DocumentFragment — use parentElement), and forgetting that closest() includes the element itself in the search.
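The children/childNodes split comes down to nodeType; a sketch of the filtering that children effectively performs (elementChildren is an illustrative helper, not a DOM API):

```javascript
// children is effectively childNodes filtered to element nodes
// (nodeType 1); text nodes are type 3 and comments are type 8.
const ELEMENT_NODE = 1;

function elementChildren(node) {
  return [...node.childNodes].filter((n) => n.nodeType === ELEMENT_NODE);
}
```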

event.preventDefault() stops the browser's default action for that event (like navigating on link click or submitting a form), while event.stopPropagation() stops the event from bubbling up to parent elements. They serve completely different purposes and can be used together when needed.
// preventDefault — stops default browser behavior
document.querySelector('a').addEventListener('click', (e) => {
  e.preventDefault(); // link will NOT navigate
  console.log('Link clicked but navigation prevented');
});

// stopPropagation — stops event from reaching parent handlers
document.querySelector('.child').addEventListener('click', (e) => {
  e.stopPropagation(); // parent click handler will NOT fire
  console.log('Child clicked');
});

document.querySelector('.parent').addEventListener('click', () => {
  console.log('Parent clicked'); // never fires if child stops propagation
});

// stopImmediatePropagation — also stops other handlers on same element
document.querySelector('.btn').addEventListener('click', (e) => {
  e.stopImmediatePropagation();
  console.log('First handler'); // only this runs
});
document.querySelector('.btn').addEventListener('click', () => {
  console.log('Second handler'); // never fires
});
Use event.stopImmediatePropagation() when you also want to prevent other handlers on the same element from firing. Avoid overusing stopPropagation as it breaks event delegation patterns — prefer checking event.target instead.

Why it matters: Misusing these two methods is a very common source of bugs. Calling stopPropagation when you meant preventDefault (or vice versa) produces hard-to-debug behavior — events fire where they shouldn't, or default actions still happen when expected to be blocked.

Real applications: Modal close-on-backdrop-click (stop inner click from propagating to backdrop), form submit intercept (preventDefault to handle via fetch instead), custom dropdown menus (prevent document click from closing menu when clicking inside).

Common mistakes: Using return false in vanilla JS (in an addEventListener handler it does nothing at all — it neither prevents default nor stops propagation; in jQuery handlers it does both, which causes the confusion), calling stopPropagation globally and preventing parent components from receiving events they need, and not knowing stopImmediatePropagation exists.

Custom events allow you to define your own event types beyond the built-in DOM events. Create them with the CustomEvent constructor and dispatch them using element.dispatchEvent(). You can pass data through the detail property, making custom events perfect for component communication.
// Creating a custom event with data
const event = new CustomEvent('user-login', {
  detail: { username: 'john', role: 'admin' },
  bubbles: true,      // event will bubble up
  cancelable: true,    // can be prevented
  composed: true       // crosses shadow DOM boundary
});

// Listening for the custom event
document.addEventListener('user-login', (e) => {
  console.log('User:', e.detail.username);  // "john"
  console.log('Role:', e.detail.role);      // "admin"
});

// Dispatching the event
document.dispatchEvent(event);

// Practical example — notify parent of state change
class CartWidget {
  addItem(item) {
    this.items.push(item);
    this.element.dispatchEvent(new CustomEvent('cart-updated', {
      detail: { count: this.items.length, item },
      bubbles: true
    }));
  }
}
Custom events follow the same bubbling and capturing rules as native events. Set bubbles: true for parent elements to catch the event. The composed option is essential for Shadow DOM — it allows events to cross shadow boundaries.

Why it matters: Custom events are the proper decoupled communication mechanism between DOM components — used when a child component needs to notify a parent without tight coupling. They mirror how native DOM events work, making them intuitive for component library authors.

Real applications: Web Components communicating state changes to host pages, e-commerce cart widgets firing cart-updated events, form components dispatching validation-complete events, and any vanilla JS component architecture that avoids framework coupling.

Common mistakes: Forgetting bubbles: true and wondering why parent listeners don't fire, using plain Event constructor instead of CustomEvent when you need to pass data (plain Event has no detail field), and not setting composed: true for Web Components that need to communicate across shadow DOM boundaries.
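The missing detail field on a plain Event is easy to verify (the Event constructor also exists as a global in modern Node):

```javascript
// A plain Event has no payload slot; detail simply doesn't exist on it.
const plain = new Event('cart-updated');
console.log(plain.detail); // undefined

// CustomEvent is the variant that carries data:
// new CustomEvent('cart-updated', { detail: { count: 3 } }).detail.count is 3
```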

IntersectionObserver asynchronously watches for changes in the intersection of a target element with an ancestor element or the viewport. It is commonly used for lazy loading images, infinite scrolling, and triggering animations when elements come into view. It replaces expensive scroll event listeners with a performant callback-based approach.
// Basic usage — detect when element enters viewport
const observer = new IntersectionObserver((entries) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      console.log(entry.target.id, 'is visible');
      console.log('Visibility ratio:', entry.intersectionRatio);
    }
  });
}, {
  root: null,          // null = viewport
  rootMargin: '0px',   // margin around root
  threshold: [0, 0.5, 1.0]  // trigger at 0%, 50%, 100% visibility
});

observer.observe(document.querySelector('#section1'));

// Lazy loading images
const imgObserver = new IntersectionObserver((entries) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;   // load actual image
      imgObserver.unobserve(img);  // stop watching once loaded
    }
  });
}, { rootMargin: '200px' });  // start loading 200px before visible

document.querySelectorAll('img[data-src]').forEach(img => {
  imgObserver.observe(img);
});
The rootMargin option triggers callbacks before the element actually enters the viewport — useful for preloading. Always call unobserve() for one-time observations like lazy loading to avoid memory leaks.

Why it matters: IntersectionObserver replaces expensive scroll event listeners that run on every scroll frame. It's the browser-native, performant solution for lazy loading, infinite scrolling, and scroll-triggered animations — a key modern browser API that every frontend developer should know.

Real applications: Lazy loading images and iframes (core browser performance pattern), infinite scroll feeds (Twitter, Instagram), analytics tracking (fire event when ad/content enters viewport), and triggering CSS animations when sections scroll into view.

Common mistakes: Not calling unobserve() after a one-time observation (lazy load keeps watching after image loads), using scroll event listeners instead of IntersectionObserver for viewport detection (causes jank), and not specifying a rootMargin for preloading (content loads too late, causing visible delay).

cloneNode() creates a copy of a DOM node. When called with false (or no argument), it performs a shallow clone — copying only the node itself with its attributes. When called with true, it performs a deep clone — copying the node and all its descendants including text content and child elements.
const original = document.querySelector('.card');

// Shallow clone — only the element itself (no children)
const shallow = original.cloneNode(false);
console.log(shallow.children.length); // 0
console.log(shallow.className);       // "card" (attributes copied)

// Deep clone — element + all descendants
const deep = original.cloneNode(true);
console.log(deep.children.length);    // same as original
console.log(deep.innerHTML);          // same content as original

// Cloned nodes are NOT in the DOM until appended
document.querySelector('.container').appendChild(deep);

// Important: IDs are also cloned — must update to avoid duplicates
deep.id = 'card-copy';

// Event listeners are NOT cloned
original.addEventListener('click', handler);
const clone = original.cloneNode(true);
// clone does NOT have the click handler
Key things to remember: cloneNode does not copy event listeners — you must reattach them manually. Cloned elements with id attributes create duplicate IDs (invalid HTML). importNode() works similarly but imports nodes from other documents.

Why it matters: Understanding that event listeners are NOT cloned prevents a common bug where developers clone interactive components and wonder why click/input handlers don't work on the copies. This is a regular interview question about the DOM cloning model.

Real applications: Template-based list rendering (clone a template item, populate its data, append), duplicating form rows in dynamic forms, copying table rows, and creating skeleton loading placeholders from existing DOM structure.

Common mistakes: Expecting cloned nodes to retain event listeners (they don't — must reattach), cloning without updating id attributes (creates duplicate IDs that break getElementById), and calling cloneNode() without true and being surprised the clone is empty (shallow by default).

The window object represents the browser window and is the global object in browser JavaScript — all global variables and functions are properties of window. The document object is a property of window and represents the HTML document loaded in the window. Think of window as the container and document as the content.
// Window — browser window and global scope
console.log(window.innerWidth);     // viewport width
console.log(window.innerHeight);    // viewport height
console.log(window.location.href);  // current URL
console.log(window.navigator);     // browser info
window.alert('Hello');              // browser dialog
window.setTimeout(fn, 1000);       // timer

// Global variables are window properties
var x = 10;
console.log(window.x); // 10 (var only, not let/const)

// Document — the HTML document
console.log(document.title);           // page title
console.log(document.URL);            // document URL
console.log(document.readyState);     // loading state
document.querySelector('.el');         // find elements
document.createElement('div');         // create elements

// Window events vs Document events
window.addEventListener('resize', () => console.log('resized'));
window.addEventListener('scroll', () => console.log('scrolled'));
document.addEventListener('DOMContentLoaded', () => console.log('DOM ready'));
document.addEventListener('click', () => console.log('clicked'));
The document's content is only fully available once the HTML has been parsed, while window properties like innerWidth can be read at any time. Events like resize and scroll belong to window; DOM lifecycle events like DOMContentLoaded belong to document.

Why it matters: Confusing window and document is a common beginner mistake with real consequences — attaching resize listeners to document instead of window, or accessing document.innerWidth (undefined) instead of window.innerWidth. This is standard interview territory.

Real applications: Responsive layout logic uses window.innerWidth, navigation uses window.location, cross-tab communication uses window.postMessage, and all DOM queries use document.querySelector. Node.js has no window or document — this distinction matters for SSR code.

Common mistakes: Using var for global variables and being surprised they appear on window (let/const don't), listening for DOMContentLoaded on window (works but is unconventional — should be on document), and accessing window in Node.js/SSR context (it doesn't exist — use globalThis).
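Because window exists only in browsers, code that may also load in Node or during SSR should guard its access. A minimal sketch, assuming a module shared between server and client; getViewportWidth is a hypothetical helper name, not a standard API:

```javascript
// Guard browser-only globals so the same module loads in Node/SSR.
// getViewportWidth is a hypothetical helper, not a standard API.
function getViewportWidth() {
  if (typeof window === 'undefined') {
    return null; // no viewport on the server
  }
  return window.innerWidth;
}

console.log(getViewportWidth()); // number in a browser, null in Node
```

When you need the global object itself rather than a browser API, globalThis resolves correctly in both environments.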

getComputedStyle() returns the final, computed values of all CSS properties on an element after all stylesheets and inline styles have been applied. For dimensions, getBoundingClientRect() returns an element's size and position relative to the viewport. These methods are essential for dynamic layout calculations.
const el = document.querySelector('.box');

// getComputedStyle — all resolved CSS values
const styles = window.getComputedStyle(el);
console.log(styles.color);           // "rgb(255, 0, 0)"
console.log(styles.fontSize);        // "16px"
console.log(styles.display);         // "block"
console.log(styles.marginTop);       // "10px"

// Pseudo-element styles
const before = window.getComputedStyle(el, '::before');
console.log(before.content);         // computed content value

// getBoundingClientRect — size and position
const rect = el.getBoundingClientRect();
console.log(rect.width, rect.height);  // element dimensions
console.log(rect.top, rect.left);      // position from viewport
console.log(rect.x, rect.y);          // same as top/left

// Element dimension properties
console.log(el.offsetWidth);    // width + padding + border
console.log(el.clientWidth);    // width + padding (no border)
console.log(el.scrollWidth);    // total scrollable width
console.log(el.offsetTop);     // distance from offset parent
getComputedStyle returns read-only values — use element.style to set inline styles. Reading layout-dependent computed styles or dimension properties can force a synchronous reflow, so batch these reads and avoid interleaving reads and writes in a loop for better performance.
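Computed values come back as strings with units ("16px"), so any arithmetic needs parsing first. A small sketch; toPx is our own helper name, not part of the CSSOM API:

```javascript
// Computed style values are strings like "16px"; parse before doing math.
// toPx is a hypothetical helper, not part of the CSSOM API.
function toPx(value) {
  const n = parseFloat(value); // "16px" -> 16, "auto" -> NaN
  return Number.isNaN(n) ? 0 : n;
}

// Browser usage (sketch):
// const s = getComputedStyle(el);
// const verticalMargin = toPx(s.marginTop) + toPx(s.marginBottom);

console.log(toPx('16px')); // 16
console.log(toPx('auto')); // 0
```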

Why it matters: Accessing dimensions before or after CSS transitions/animations, implementing responsive behavior in JavaScript, and calculating positions for tooltips/popovers all require these APIs. The reflow-triggering behavior is directly relevant to layout thrashing performance issues.

Real applications: Positioning tooltips and popovers (getBoundingClientRect), implementing custom scroll-snap, chart libraries calculating available space, drag-and-drop collision detection, and virtual scrolling implementations all read computed dimensions.

Common mistakes: Calling el.style.color expecting the computed style (returns empty string if not set inline — use getComputedStyle), reading getBoundingClientRect() inside a requestAnimationFrame loop before writing (causes layout thrashing), and confusing offsetWidth (includes padding+border) vs clientWidth (includes padding only).
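To illustrate the tooltip-positioning use case, here is a sketch that computes a position from a bounding rect; the function and its flip-below logic are our own simplification, and in the browser you would pass el.getBoundingClientRect():

```javascript
// Sketch: place a tooltip above its target, flipping below when there
// isn't room near the top of the viewport. tooltipPosition is a
// hypothetical helper; the rect shape matches getBoundingClientRect().
function tooltipPosition(rect, tooltipHeight, gap = 8) {
  const fitsAbove = rect.top >= tooltipHeight + gap;
  return {
    left: rect.left + rect.width / 2,      // horizontal center of target
    top: fitsAbove
      ? rect.top - tooltipHeight - gap     // place above
      : rect.top + rect.height + gap,      // flip below near viewport top
  };
}

console.log(tooltipPosition({ top: 100, left: 50, width: 60, height: 20 }, 30));
// { left: 80, top: 62 }
```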

The Shadow DOM provides DOM and CSS encapsulation by attaching a hidden, separate DOM tree to an element. Styles defined inside a shadow tree do not leak out, and external styles do not penetrate in. This is the foundation of Web Components and is used by native elements like <input> and <video>.
// Creating a shadow DOM
const host = document.querySelector('#my-widget');
const shadow = host.attachShadow({ mode: 'open' });

// Add content to shadow DOM
shadow.innerHTML = `
  <style>p { color: red; }</style>
  <p>Shadow content</p>
`;
// This style ONLY affects the shadow DOM paragraph
// External p { color: blue } will NOT affect it

// mode: "open" vs "closed"
// open — shadow root accessible via element.shadowRoot
console.log(host.shadowRoot); // ShadowRoot object

// closed — shadowRoot returns null (true encapsulation)
// (an element can host only one shadow root, so use a second host)
const other = document.querySelector('#other-widget');
other.attachShadow({ mode: 'closed' });
console.log(other.shadowRoot); // null

// Slots — project light DOM content into shadow DOM
shadow.innerHTML = `<slot name="title"></slot>`;
// In light DOM: <span slot="title">Title</span>

// Styling from outside with CSS custom properties
shadow.innerHTML = `
  <style>p { color: var(--text-color, black); }</style>
  <p>Styled</p>
`;
// Host page: #my-widget { --text-color: blue; }
Shadow DOM elements are invisible to document.querySelector — query within the shadow root itself. CSS custom properties (CSS variables) are the primary way to theme shadow DOM content from outside. The ::part() pseudo-element also allows external styling of exposed parts.

Why it matters: Shadow DOM is the foundation of Web Components and is what makes browser built-in elements like <input>, <video>, and <select> style-isolated. Understanding it is required for writing or consuming Web Components and for debugging styling issues.

Real applications: Web Component libraries like Lit, Angular's view encapsulation (ViewEncapsulation.ShadowDom), browser extension UI injection (shadow DOM prevents page styles from bleeding in), and design system component isolation all rely on Shadow DOM.

Common mistakes: Trying to style shadow DOM content with external CSS (styles don't penetrate — use CSS variables or ::part()), using mode: 'closed' and then wondering why shadowRoot is null, and not setting composed: true on custom events that need to cross shadow boundaries.

Layout thrashing occurs when you repeatedly read and write DOM properties in a loop, forcing the browser to recalculate layout on every read. To avoid it, batch all reads together, then batch all writes together. Use DocumentFragment, requestAnimationFrame, and CSS classes instead of individual style changes.
// BAD — layout thrashing (read-write-read-write)
const items = document.querySelectorAll('.item');
items.forEach(item => {
  const height = item.offsetHeight;    // READ (forces layout)
  item.style.height = height + 10 + 'px'; // WRITE (invalidates layout)
  // next read forces layout recalculation again!
});

// GOOD — batch reads, then batch writes
const heights = [];
items.forEach(item => heights.push(item.offsetHeight)); // all READS
items.forEach((item, i) => {
  item.style.height = heights[i] + 10 + 'px';          // all WRITES
});

// GOOD — use requestAnimationFrame for visual updates
function animate() {
  element.style.transform = 'translateX(' + position + 'px)';
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);

// GOOD — use DocumentFragment for bulk inserts
const fragment = document.createDocumentFragment();
for (let i = 0; i < 1000; i++) {
  const li = document.createElement('li');
  li.textContent = 'Item ' + i;
  fragment.appendChild(li);
}
list.appendChild(fragment); // single reflow

// GOOD — toggle CSS class instead of multiple style changes
element.classList.add('active'); // one reflow vs many
Other performance tips: use display: none during bulk updates (removes element from layout), prefer transform and opacity for animations (compositor-only, no reflow), and use will-change CSS to hint the browser. The fastdom library can automate read-write batching.
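The read/write batching that fastdom automates can be sketched in a few lines. measure, mutate, and flush are our own names; a real implementation would flush inside requestAnimationFrame, while here flush() is called manually so the logic stays testable:

```javascript
// Minimal fastdom-style sketch: queue reads and writes separately, then
// flush all reads before all writes so the browser reflows at most once.
const reads = [];
const writes = [];

function measure(fn) { reads.push(fn); }
function mutate(fn) { writes.push(fn); }

function flush() {
  reads.splice(0).forEach(fn => fn());   // batch all READS first
  writes.splice(0).forEach(fn => fn());  // then all WRITES: one reflow
}

// Demo with plain values standing in for offsetHeight reads/style writes:
const order = [];
measure(() => order.push('read-1'));
mutate(() => order.push('write-1'));
measure(() => order.push('read-2'));
flush();
console.log(order); // ['read-1', 'read-2', 'write-1']
```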

Why it matters: Layout thrashing is a top cause of janky UIs (frames below 60fps). Understanding that DOM reads after writes force layout recalculation is critical for building smooth animations and lists — this is one of the most impactful browser rendering optimizations available.

Real applications: Virtual scroll implementations, animation-heavy UIs, sticky header calculations, and any code that resizes or repositions multiple elements dynamically all need to batch DOM reads and writes to avoid thrashing. Chrome DevTools' Performance tab reveals layout thrashing visually.

Common mistakes: Reading layout properties (offsetHeight, getBoundingClientRect) inside a forEach that also writes styles (each read forces layout), not using requestAnimationFrame for visual updates (misses the sync with browser paint cycle), and animating width/height/top/left instead of transform (triggers layout, not just compositing).

ResizeObserver watches for changes in an element's dimensions and fires a callback when the element is resized. Unlike the window resize event, it tracks individual elements and detects size changes caused by CSS, content changes, or layout shifts — not just viewport resizing.
// Basic usage
const observer = new ResizeObserver((entries) => {
  for (const entry of entries) {
    const { width, height } = entry.contentRect;
    console.log('Element resized:', width, 'x', height);

    // Access different box models
    console.log('Border box:', entry.borderBoxSize[0].inlineSize);
    console.log('Content box:', entry.contentBoxSize[0].inlineSize);
  }
});

// Observe one or more elements
observer.observe(document.querySelector('.panel'));
observer.observe(document.querySelector('.sidebar'));

// Practical example — responsive component
const container = document.querySelector('.card-grid');
const resizeObs = new ResizeObserver((entries) => {
  const width = entries[0].contentRect.width;
  if (width < 400) {
    container.classList.add('compact');
  } else {
    container.classList.remove('compact');
  }
});
resizeObs.observe(container);

// Cleanup
observer.unobserve(element);  // stop watching one element
observer.disconnect();         // stop watching all elements
ResizeObserver is commonly used for container queries (before CSS container queries existed), responsive charts, and components that adapt to their container size rather than the viewport. It avoids polling element dimensions or listening to the global window resize event for element-level changes.

Why it matters: ResizeObserver enables truly responsive components that react to their container, not the viewport. Before it, developers polled with setInterval or used window.resize as an approximation — both are fragile and wasteful.

Real applications: Responsive chart libraries (Chart.js, D3) resize SVG/canvas when the container changes, editors adapt toolbar layout based on available width, card grids switch between layouts based on container width, and component libraries use ResizeObserver under the hood for container-aware behavior.

Common mistakes: Not calling disconnect() or unobserve() when a component is destroyed (memory leak), creating a new ResizeObserver per element instead of observing multiple elements on one instance, and triggering layout changes inside the callback without debouncing (can cause infinite resize loops).
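One way to avoid both redundant writes and observer feedback loops is to derive the class from the width and only touch the DOM when it actually changes. classForWidth and applyLayout below are hypothetical helpers; in the browser, apply would set container.className inside the ResizeObserver callback:

```javascript
// Derive the layout class from a container width and skip no-op writes,
// so the observer callback never re-triggers itself with the same state.
function classForWidth(width, breakpoint = 400) {
  return width < breakpoint ? 'compact' : 'regular';
}

let current = null;
function applyLayout(width, apply) {
  const next = classForWidth(width);
  if (next !== current) {   // only write when the class changes
    current = next;
    apply(next);            // e.g. container.className = next in the browser
  }
  return current;
}

const applied = [];
applyLayout(350, c => applied.push(c));
applyLayout(360, c => applied.push(c)); // same class: no write
applyLayout(500, c => applied.push(c));
console.log(applied); // ['compact', 'regular']
```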