JavaScript

Web APIs & Storage

14 Questions

localStorage stores data that persists forever (until manually cleared). sessionStorage stores data only for the current browser tab session — it is cleared when the tab is closed. Both APIs store data as key-value pairs of strings, have a limit of about 5–10 MB per origin, and are synchronous (which can slow down the main thread for large operations). Neither is sent to the server automatically — unlike cookies. Both are restricted to the same origin (protocol + domain + port). Here is how they compare:
// localStorage — persists across sessions
localStorage.setItem('username', 'Alice');
const name = localStorage.getItem('username'); // 'Alice'
localStorage.removeItem('username');
localStorage.clear(); // remove all items

// sessionStorage — cleared when tab closes
sessionStorage.setItem('tempData', JSON.stringify({ step: 1 }));
const data = JSON.parse(sessionStorage.getItem('tempData'));
console.log(data.step); // 1

// Storage event (fires in OTHER tabs for localStorage changes)
window.addEventListener('storage', (e) => {
  console.log(`${e.key} changed from ${e.oldValue} to ${e.newValue}`);
});
Always use JSON.stringify/parse to store and retrieve objects, since storage only accepts strings. One key difference: sessionStorage is per-tab, so two tabs open to the same site have separate sessionStorage. localStorage is shared across all tabs of the same origin.

Why it matters: Choosing the wrong storage mechanism causes security and UX bugs — persisting sensitive auth data in localStorage (accessible by XSS) vs sessionStorage (cleared on tab close). This is a fundamental Web API question in front-end interviews.

Real applications: localStorage for user theme preferences and persistent settings, sessionStorage for per-tab wizard/multi-step form state, cookies with HttpOnly for auth tokens (server-side security), and IndexedDB for offline-first data caching.

Common mistakes: Storing sensitive data (tokens, PII) in localStorage (XSS risk), not handling the ~5 MB storage quota (setItem throws QuotaExceededError when it is exceeded), forgetting all Web Storage values are strings (must JSON-serialize objects), and assuming sessionStorage is cleared on page refresh (it is not — it survives refreshes and is cleared only when the tab closes).
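Several of the mistakes above can be avoided with a small defensive wrapper. This is a minimal sketch: safeSet and safeGet are hypothetical helper names, and the try/catch covers both the quota error thrown by setItem and corrupted JSON on read.

```javascript
// Store any JSON-serializable value; returns false instead of throwing
// when the ~5 MB origin quota is hit (QuotaExceededError).
function safeSet(storage, key, value) {
  try {
    storage.setItem(key, JSON.stringify(value));
    return true;
  } catch (err) {
    console.warn('Storage write failed:', err.name);
    return false;
  }
}

// Read a value back, falling back when the key is missing
// or the stored string is not valid JSON.
function safeGet(storage, key, fallback = null) {
  const raw = storage.getItem(key);
  if (raw === null) return fallback;
  try {
    return JSON.parse(raw);
  } catch {
    return fallback; // corrupted or non-JSON value
  }
}
```

Usage: `safeSet(localStorage, 'prefs', { theme: 'dark' })` and `safeGet(localStorage, 'prefs', {})` — the same helpers work with sessionStorage, since both share the Storage interface.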

Cookies are small text strings stored by the browser and automatically sent to the server with every HTTP request to the matching domain. Web Storage (localStorage/sessionStorage) is never sent to the server. Cookies have a size limit of about 4 KB per cookie, support expiry dates, can be restricted to HTTPS with the Secure flag, and can be protected from JavaScript with HttpOnly. Web Storage has a larger limit (~5 MB) but no built-in expiry and is only accessible from JavaScript. Here is how to work with cookies:
// Set a cookie (expires in 7 days)
document.cookie = 'user=Alice; max-age=604800; path=/';

// Set a secure, httpOnly cookie (server side only — can't set HttpOnly from JS)
// Set-Cookie: token=abc; HttpOnly; Secure; SameSite=Strict

// Read cookies (returns all cookies as one string)
console.log(document.cookie); // "user=Alice; theme=dark"

// Parse cookies
function getCookie(name) {
  return document.cookie
    .split('; ')
    .find(c => c.startsWith(name + '='))
    ?.split('=')[1];
}

// Delete a cookie (set max-age to 0)
document.cookie = 'user=; max-age=0; path=/';
HttpOnly cookies cannot be read by JavaScript — they are only sent in HTTP headers. This protects against XSS attacks stealing session tokens. The SameSite=Strict attribute prevents cookies from being sent on cross-site requests, protecting against CSRF attacks.
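The raw document.cookie writes above break silently if a value contains characters like ; or =. A small builder that encodes values avoids this; buildCookie is a hypothetical helper name, and returning the string (rather than assigning document.cookie directly) keeps the sketch testable outside a browser.

```javascript
// Build a cookie string with URL-encoded name/value, optional expiry in
// days, and a path (defaults to '/', matching the examples above).
function buildCookie(name, value, { days, path = '/' } = {}) {
  let cookie = `${encodeURIComponent(name)}=${encodeURIComponent(value)}; path=${path}`;
  if (days !== undefined) cookie += `; max-age=${days * 86400}`; // days → seconds
  return cookie;
}

// usage (browser):
// document.cookie = buildCookie('user', 'Alice Smith', { days: 7 });
```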

Why it matters: The cookie vs Web Storage distinction is a critical security topic. Auth tokens in localStorage are vulnerable to XSS. HttpOnly cookies are immune to JavaScript access. This tradeoff is central to secure web application design.

Real applications: HttpOnly + Secure + SameSite=Strict cookies for session tokens, localStorage for non-sensitive UX preferences, third-party analytics via cookies with SameSite=None, and session management in SSR applications that need server-readable auth state.

Common mistakes: Not setting HttpOnly on auth cookies (allows JavaScript to steal them), not setting Secure (sends cookie over HTTP), using SameSite=None without Secure (rejected by browsers), and forgetting cookies have a 4KB size limit vs localStorage's ~5MB.

IndexedDB is a NoSQL database built into the browser. It can store large amounts of structured data (files, blobs, JSON objects) — up to hundreds of MB depending on the browser and device. Unlike localStorage, IndexedDB is asynchronous (uses events or Promises), supports transactions, and can store complex objects with indexes for fast querying. Use IndexedDB for offline-capable web apps, caching large datasets, or storing files locally in the browser. Here is a basic IndexedDB example:
// Open (or create) a database
const request = indexedDB.open('MyDB', 1);

request.onupgradeneeded = (e) => {
  const db = e.target.result;
  // Create an object store (like a table)
  const store = db.createObjectStore('users', { keyPath: 'id' });
  store.createIndex('by_name', 'name', { unique: false });
};

request.onsuccess = (e) => {
  const db = e.target.result;

  // Write data
  const tx = db.transaction('users', 'readwrite');
  const store = tx.objectStore('users');
  store.add({ id: 1, name: 'Alice', age: 30 });

  // Read data
  const getReq = store.get(1);
  getReq.onsuccess = () => console.log(getReq.result);
};
Modern libraries like idb (from Jake Archibald) wrap IndexedDB in a clean Promise-based API, making it much easier to work with. IndexedDB is the right choice when you need structured, queryable, large-scale local storage for offline-first web applications like PWAs.

Why it matters: localStorage's 5MB limit and synchronous API block the main thread. IndexedDB is the production solution for rich offline experiences, large datasets, and background sync — essential for PWAs and sophisticated browser apps.

Real applications: Offline document editors (Google Docs-style), PWAs that cache user-generated content, email clients that cache messages, and any app that needs to store and query structured data locally for offline use.

Common mistakes: Using localStorage for large data (quota errors), not wrapping IndexedDB in a library (like Dexie.js or idb) to avoid the verbose callback-based API, forgetting that IndexedDB transactions auto-commit once you stop issuing requests (you cannot await unrelated async work in the middle of one), and not handling the blocked event when upgrading the database schema.
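The callback-based open sequence above can be wrapped in a Promise — a minimal sketch of what libraries like idb do. openDB and the upgrade callback signature are hypothetical names for illustration.

```javascript
// Promisified indexedDB.open: resolves with the database handle,
// rejects on error or when another open tab blocks the upgrade.
function openDB(name, version, upgrade) {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(name, version);
    req.onupgradeneeded = (e) => upgrade(req.result, e.oldVersion);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
    req.onblocked = () => reject(new Error('Upgrade blocked by another open tab'));
  });
}

// usage:
// const db = await openDB('MyDB', 1, (db) => {
//   db.createObjectStore('users', { keyPath: 'id' });
// });
```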

The Fetch API is the modern way to make HTTP requests in JavaScript. It returns a Promise that resolves to a Response object. You then call methods like .json(), .text(), or .blob() to read the response body (also returns a Promise). One important thing: Fetch only rejects on network errors. HTTP error codes like 404 or 500 do NOT reject — you must check response.ok manually. Fetch replaces the older XMLHttpRequest with a cleaner, Promise-based interface. Here is how Fetch works:
// Basic GET request
const res = await fetch('https://api.example.com/users');
if (!res.ok) throw new Error(`HTTP error: ${res.status}`);
const data = await res.json(); // parse JSON body

// POST request
const response = await fetch('/api/users', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Alice', age: 30 })
});

// Handling errors properly
async function fetchData(url) {
  try {
    const res = await fetch(url);
    if (!res.ok) throw new Error(`Status: ${res.status}`);
    return await res.json();
  } catch (err) {
    console.error('Fetch failed:', err);
  }
}
After calling res.json() or res.text(), the response body is consumed and cannot be read again. If you need it twice, clone the response first with res.clone(). Fetch supports AbortController for cancelling requests — useful for search-as-you-type patterns where each keystroke should cancel the previous request.
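The clone-before-consume rule can be shown without a network call by constructing a Response locally — a minimal sketch; readTwice is a hypothetical helper name.

```javascript
// Read one Response body in two formats: clone() BEFORE consuming,
// because each body can only be read once.
async function readTwice(res) {
  const copy = res.clone();        // must happen before the first read
  const asJson = await res.json(); // consumes the original body
  const asText = await copy.text(); // the clone still has its own body
  return { asJson, asText };
}

// usage: const { asJson, asText } = await readTwice(new Response('{"a":1}'));
```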

Why it matters: Fetch is the standard HTTP client in modern JS. Understanding its two-step Promise pattern, error handling quirks (non-2xx doesn't reject), and streaming capabilities is essential for any front-end role.

Real applications: REST API calls in SPAs, uploading files with FormData, streaming large responses (chunked downloads), cancellable search-as-you-type requests with AbortController, and building custom API client wrappers.

Common mistakes: Not checking response.ok (fetch only rejects on network error, not 4xx/5xx), forgetting to call .json() or .text() to read the body, not handling CORS errors (opaque responses), and reusing an already-aborted AbortController instead of creating a fresh one per request.

Use AbortController to cancel fetch requests. Create an AbortController, pass its signal to the fetch options, then call controller.abort() to cancel. When aborted, the fetch Promise rejects with an AbortError. Always check err.name === 'AbortError' to distinguish cancellation from real network errors. This pattern is essential for search inputs, infinite scroll, and any scenario where a new request should cancel the previous one. Here is how to cancel a fetch:
let controller = null;

async function search(query) {
  // Cancel previous request
  if (controller) controller.abort();
  controller = new AbortController();

  try {
    const res = await fetch(`/api/search?q=${query}`, {
      signal: controller.signal
    });
    const data = await res.json();
    return data;
  } catch (err) {
    if (err.name === 'AbortError') {
      console.log('Search cancelled');
      return null;
    }
    throw err; // real error, re-throw
  }
}

// Each keystroke cancels the previous search
input.addEventListener('input', (e) => search(e.target.value));

// Also works for timeouts
const timeoutCtrl = new AbortController();
setTimeout(() => timeoutCtrl.abort(), 5000); // 5 second timeout
fetch('/slow-api', { signal: timeoutCtrl.signal });
In modern environments you can also use AbortSignal.timeout(5000) directly — it creates a signal that auto-aborts after 5 seconds, without needing a separate AbortController. Abort signals also work with the Web Streams API, addEventListener (via its signal option), and any other API that accepts a signal.
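The AbortSignal.timeout pattern can be wrapped in a small helper — a sketch; fetchWithTimeout is a hypothetical name, and note that a timed-out signal rejects with a TimeoutError, not an AbortError.

```javascript
// Fetch with a per-request timeout; returns null on timeout,
// re-throws real network errors.
async function fetchWithTimeout(url, ms) {
  try {
    return await fetch(url, { signal: AbortSignal.timeout(ms) });
  } catch (err) {
    if (err.name === 'TimeoutError') return null; // auto-aborted by the signal
    throw err; // genuine network failure
  }
}

// usage: const res = await fetchWithTimeout('/slow-api', 5000);
```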

Why it matters: Race conditions from uncontrolled async requests are a common React bug (stale state update after component unmounts). AbortController is the standard fix and is explicitly recommended in React's useEffect cleanup.

Real applications: Cancelling search requests on each keystroke (debounce + abort), React useEffect cleanup to cancel in-flight requests when component unmounts, cancelling slow uploads when users navigate away, and implementing request timeout logic.

Common mistakes: Not calling abort() in React useEffect cleanup (causes state-update-on-unmounted-component warnings), not checking error.name === 'AbortError' to distinguish intentional cancellation from real errors, and reusing an already-aborted controller (must create a fresh one).

The storage event fires on the window object when localStorage changes in another tab or window of the same origin. It does NOT fire in the tab that made the change — only in other tabs. The event object has: key (changed key), oldValue, newValue, url (page that changed it), and storageArea (the storage object). This is useful for synchronizing state across multiple open tabs without a server. Here is how the storage event works:
// Tab A: listen for storage changes from other tabs
window.addEventListener('storage', (e) => {
  if (e.key === 'theme') {
    document.body.className = e.newValue; // sync theme change
    console.log(`Theme changed to ${e.newValue} in another tab`);
  }
  if (e.key === null) {
    // localStorage.clear() was called
    console.log('All storage was cleared');
  }
});

// Tab B: changing localStorage triggers the event in Tab A
localStorage.setItem('theme', 'dark');   // Tab A gets notified
localStorage.setItem('theme', 'light');  // Tab A gets notified again
localStorage.removeItem('theme');        // newValue is null in event

// Note: sessionStorage does NOT fire storage events
// because it cannot be shared between tabs
The storage event only fires for actual changes — setting a key to the same value it already has will NOT fire the event. This event is the basis for simple cross-tab communication. For more complex needs, use the BroadcastChannel API which can send any structured data to all tabs.

Why it matters: Cross-tab synchronization is a real-world requirement for apps where users open multiple tabs (shopping carts, auth state, notifications). Knowing how to implement it shows awareness of multi-tab browser behavior.

Real applications: Logging out from one tab and auto-logging out all other tabs, syncing a shopping cart across tabs, broadcasting notification badges across multiple open app tabs, and coordinating whether a background tab should poll for updates.

Common mistakes: Expecting the storage event to fire in the tab that made the change (it doesn't), not parsing the JSON value from event.newValue, and not cleaning up event listeners when the component unmounts (memory leak in SPAs).

The Cache API lets JavaScript store and retrieve HTTP request/response pairs. It is primarily used from Service Workers to cache network resources so the app works offline. Unlike localStorage, the Cache API stores full HTTP responses (not just strings), is asynchronous, and integrates with fetch naturally. Service Workers intercept all fetch requests and can serve responses from cache, enabling offline support and faster loading times. Here is how the Cache API works:
// In a Service Worker
const CACHE_NAME = 'my-app-v1';

// Cache during install
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => {
      return cache.addAll([
        '/',
        '/styles.css',
        '/app.js'
      ]);
    })
  );
});

// Serve from cache during fetch
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      return cached || fetch(event.request); // cache-first strategy
    })
  );
});

// From regular page code
const cache = await caches.open('my-cache');
await cache.put('/api/data', new Response(JSON.stringify({a:1})));
const resp = await cache.match('/api/data');
const data = await resp.json();
Common caching strategies include cache-first (use cache, fallback to network), network-first (try network, fallback to cache), and stale-while-revalidate (serve from cache, update in background). Service Workers + Cache API are the foundation of Progressive Web Apps (PWAs).
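The stale-while-revalidate strategy mentioned above can be sketched as a standalone function. This is an illustrative sketch: cache is assumed to expose the Cache API's match/put methods, and fetchFn is injectable so the logic can be exercised outside a Service Worker.

```javascript
// Serve the cached response immediately if present, and refresh the
// cache in the background either way.
async function staleWhileRevalidate(cache, request, fetchFn = fetch) {
  const cached = await cache.match(request);
  const network = fetchFn(request).then((res) => {
    cache.put(request, res.clone()); // store a fresh copy for next time
    return res;
  });
  network.catch(() => {}); // a failed background refresh (offline) is non-fatal
  // Stale copy now, fresh copy on the next request; fall back to the
  // network when there is no cached entry yet.
  return cached || network;
}

// In a Service Worker this would be used as:
// event.respondWith(caches.open(CACHE_NAME).then(c => staleWhileRevalidate(c, event.request)));
```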

Why it matters: Offline capability is a key differentiator for modern web apps. The Cache API + Service Worker combination enables app-shell caching strategies that make apps load instantly and work without internet — a core PWA requirement.

Real applications: Caching static assets (JS, CSS, images) for instant load, network-first vs cache-first strategies for dynamic API responses, background sync for offline form submissions, and push notification delivery via service worker.

Common mistakes: Forgetting to update the cache version when deploying new assets (users get stale JS/CSS), caching POST requests (only GET responses should be cached), not handling cache storage quota exhaustion, and developing Service Workers without HTTPS (required in production).

The BroadcastChannel API lets different browser contexts (tabs, windows, iframes) of the same origin communicate by sending messages. It is simpler than the storage-event workaround for cross-tab messaging. All contexts that subscribe to the same channel name receive messages posted to it — except the channel object that sent the message, which never receives its own posts. Unlike the storage event, messages are not limited to strings and are not tied to storage writes. It is great for syncing login/logout state, broadcasting notifications, or coordinating data updates across tabs. Here is how BroadcastChannel works:
// Works in all open tabs on the same origin
const channel = new BroadcastChannel('app-sync');

// Tab A: Listen for messages
channel.onmessage = (event) => {
  console.log('Received:', event.data);
  if (event.data.type === 'logout') {
    // Redirect all tabs to login page
    window.location.href = '/login';
  }
};

// Tab B: Send a message to all other tabs
channel.postMessage({ type: 'logout', userId: 123 });

// Send complex data
channel.postMessage({
  type: 'cart-updated',
  items: [{ id: 1, qty: 2 }]
});

// Clean up when done
channel.close();
BroadcastChannel supports any structured-cloneable data — objects, arrays, Blobs, etc. You are not limited to strings like with the storage event. When you close a channel with channel.close(), it stops receiving messages. Creating a new channel with the same name creates a new subscription.

Why it matters: BroadcastChannel is the modern, ergonomic replacement for the storage event hack. It enables rich cross-tab communication with structured data instead of string-only localStorage values. Knowing it demonstrates awareness of modern browser APIs.

Real applications: Auth state sync across tabs (login/logout), coordinating which tab should run a background polling job, broadcasting real-time updates (new messages, notifications) to all open tabs, and progressive enhancement for multi-window apps.

Common mistakes: Not closing the channel in cleanup (memory leak), expecting the posting channel object to receive its own messages (it does not — apply the change locally as well), and not knowing BroadcastChannel doesn't work cross-origin (same-origin only).

The Geolocation API lets web applications access the device's geographic location (latitude and longitude). It always requires explicit user permission — the browser asks the user before sharing location. navigator.geolocation.getCurrentPosition() gets the current position once. watchPosition() watches the position continuously and calls the callback whenever it changes. Both methods take a success callback, an optional error callback, and optional options. Here is how the Geolocation API works:
// One-time location
navigator.geolocation.getCurrentPosition(
  (pos) => {
    const { latitude, longitude, accuracy } = pos.coords;
    console.log(`Lat: ${latitude}, Lng: ${longitude}`);
    console.log(`Accuracy: ${accuracy} meters`);
  },
  (err) => {
    if (err.code === 1) console.log('Permission denied');
    if (err.code === 2) console.log('Position unavailable');
    if (err.code === 3) console.log('Timeout');
  },
  { enableHighAccuracy: true, timeout: 5000 }
);

// Watch position (for tracking movement)
const watchId = navigator.geolocation.watchPosition(
  (pos) => console.log('New position:', pos.coords),
  (err) => console.error(err)
);

// Stop watching
navigator.geolocation.clearWatch(watchId);

// Check support
if ('geolocation' in navigator) {
  console.log('Geolocation supported');
}
The Geolocation API only works on HTTPS pages (or localhost) in modern browsers. enableHighAccuracy: true asks for GPS-level accuracy on mobile devices, but this uses more battery. The default is to use Wi-Fi/cell tower triangulation, which is faster and uses less power.
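The callback-based API above is often wrapped in a Promise for use with async/await — a small sketch; getPosition is a hypothetical helper name, and the geo parameter is injectable so the sketch can be exercised outside a browser.

```javascript
// Promise wrapper around getCurrentPosition: resolves with the position,
// rejects with the GeolocationPositionError (code 1/2/3 as above).
function getPosition(options = { timeout: 5000 }, geo = navigator.geolocation) {
  return new Promise((resolve, reject) => {
    geo.getCurrentPosition(resolve, reject, options);
  });
}

// usage: const { coords } = await getPosition({ enableHighAccuracy: true });
```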

Why it matters: The Geolocation API is the standard way to build location-aware features. Understanding permission handling and the accuracy/battery tradeoff is important for mobile-first web app development.

Real applications: Store locator features (find nearest branch), delivery address pre-fill, weather apps that detect current location, map-based apps showing the user's position, and location-based push notification targeting.

Common mistakes: Not handling the error callback (permission denied, unavailable, timeout), requesting high accuracy when only approximate location is needed (drains battery needlessly), not using watchPosition when tracking live movement (polling getCurrentPosition repeatedly is inefficient), and not calling clearWatch() to stop watching.

Web Workers let you run JavaScript in a background thread separate from the main (UI) thread. This prevents heavy computation from blocking the UI and making the page unresponsive. Workers communicate with the main thread via message passing: postMessage() sends data and the message event receives it. Data is copied (not shared) between threads. Workers cannot access the DOM, window, or document — only the Worker API, fetch, setTimeout, and a few other Web APIs. Here is a basic Web Worker example:
// main.js — create worker
const worker = new Worker('worker.js');

// Send data to worker
worker.postMessage({ action: 'sort', data: [3, 1, 4, 1, 5] });

// Receive result from worker
worker.onmessage = (e) => {
  console.log('Sorted:', e.data); // [1, 1, 3, 4, 5]
};

worker.onerror = (e) => console.error('Worker error:', e);

// Terminate the worker when done
worker.terminate();

// worker.js — runs in background thread
self.onmessage = (e) => {
  if (e.data.action === 'sort') {
    const sorted = [...e.data.data].sort((a, b) => a - b);
    self.postMessage(sorted); // send result back
  }
};
For even better performance, use Transferable objects (like ArrayBuffer) with postMessage(data, [transfer]) — the data is transferred (not copied), which is much faster for large data. Shared Workers can be shared between multiple tabs of the same origin, while regular workers are exclusive to one page.
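Transfer semantics can be demonstrated with a MessageChannel, which uses the same postMessage(data, transferList) mechanics as Worker.postMessage — a sketch showing that the sender's buffer is detached rather than copied.

```javascript
// Transfer an ArrayBuffer between two ports instead of copying it.
const { port1, port2 } = new MessageChannel();
const buffer = new Float64Array(1024).buffer; // 8192 bytes

port2.onmessage = (e) => {
  console.log('Received bytes:', e.data.byteLength); // 8192
  port1.close();
  port2.close();
};

port1.postMessage(buffer, [buffer]); // transferred, not cloned
// The sender's copy is detached the moment the transfer happens:
console.log('Sender side after transfer:', buffer.byteLength); // 0
```

The same `[buffer]` transfer list works with `worker.postMessage`, which is where it pays off for multi-megabyte payloads.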

Why it matters: Web Workers are the standard solution for CPU-intensive tasks that would freeze the UI. They're essential knowledge for building high-performance web apps that do heavy data processing, number crunching, or image manipulation.

Real applications: Image/video processing (filters, compression), large dataset sorting and filtering, real-time collaborative editing conflict resolution, background data synchronization, and encryption/decryption operations.

Common mistakes: Trying to access the DOM from a Worker (it's not available), not handling the onerror event on workers, posting large objects frequently (structured clone overhead — use Transferable objects like ArrayBuffer instead), and forgetting to terminate workers when they're no longer needed.

The Intersection Observer API watches whether elements are visible in the viewport (or within another element). It fires a callback when an element enters or leaves the visible area. This replaces expensive scroll event listeners that recalculate positions on every scroll event. Intersection Observer is asynchronous and runs off the main thread, making it very efficient. Common uses: lazy loading images, infinite scroll, animating elements when they appear, analytics tracking (did the user see this element?). Here is how it works:
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      // Element is visible
      entry.target.classList.add('visible');
      console.log('Visible:', entry.target.id);

      // For lazy loading: stop observing after load
      observer.unobserve(entry.target);
    }
  });
}, {
  root: null,          // null = viewport
  rootMargin: '0px',  // expand/shrink boundary
  threshold: 0.5       // 50% visible triggers callback
});

// Observe multiple elements
document.querySelectorAll('.lazy-img').forEach(img => {
  observer.observe(img);
});

// threshold can be an array
const observer2 = new IntersectionObserver(callback, {
  threshold: [0, 0.25, 0.5, 0.75, 1.0]
  // fires at each 25% visibility change
});
threshold: 0 fires as soon as any pixel of the element is visible. threshold: 1.0 fires only when the entire element is visible. rootMargin adjusts the detection boundary — a positive value starts detecting elements before they enter the viewport (useful for preloading images slightly ahead of scroll).

Why it matters: Intersection Observer replaced the expensive scroll event + getBoundingClientRect() pattern. It improves performance dramatically by using the browser's native intersection detection instead of JavaScript polling on every scroll event.

Real applications: Lazy-loading images below the fold, infinite scroll pagination (load more when last item is visible), triggering CSS animations when elements scroll into view, and tracking ad viewability for billing purposes.

Common mistakes: Using scroll event listener + getBoundingClientRect instead of IntersectionObserver (poor performance), not calling observer.unobserve(entry.target) after a one-time trigger (runs the callback on every intersection), and not understanding that isIntersecting: false fires when the element leaves the viewport too.

MutationObserver watches for changes to the DOM tree — attribute changes, added/removed nodes, and text content changes. It fires a callback with a list of all changes that occurred. It replaces the older DOM mutation events (like DOMNodeInserted) which were synchronous and slow. MutationObserver is used in frameworks to detect DOM changes, in libraries that patch third-party DOM, and for watching dynamically loaded content. Here is how MutationObserver works:
const observer = new MutationObserver((mutations) => {
  mutations.forEach((mutation) => {
    if (mutation.type === 'childList') {
      console.log('Children added:', mutation.addedNodes);
      console.log('Children removed:', mutation.removedNodes);
    }
    if (mutation.type === 'attributes') {
      console.log(`Attribute "${mutation.attributeName}" changed`);
    }
    if (mutation.type === 'characterData') {
      console.log('Text changed');
    }
  });
});

const target = document.getElementById('myDiv');

// Start observing
observer.observe(target, {
  childList: true,     // watch for added/removed child nodes
  attributes: true,   // watch for attribute changes
  subtree: true,       // watch all descendants too
  characterData: true  // watch for text changes
});

// Stop observing
observer.disconnect();
Mutations are delivered as a batch — if multiple changes happen synchronously, they are all reported together in one callback invocation. MutationObserver is used by tools like Grammarly (to watch for new text fields), ad blockers (to detect and remove ads), and framework hydration scripts.

Why it matters: MutationObserver is the standard way to react to dynamic DOM changes without polling. It's used in every serious DOM manipulation library and browser extension. Understanding it shows deep DOM API knowledge.

Real applications: Auto-initializing 3rd-party widgets on dynamically added elements, monitoring for blocked content to trigger fallbacks, detecting when server-rendered HTML is modified by scripts, and watching for form field additions in single-page apps.

Common mistakes: Not calling observer.disconnect() when done (memory leak), creating infinite loops by mutating the DOM inside the mutation callback (triggers another mutation), observing the entire document body instead of a specific scoped element (expensive), and requesting subtree observation unnecessarily.
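The infinite-loop mistake above has a standard fix: pause observation, mutate, then resume. A minimal sketch — safeMutate is a hypothetical helper name, and `options` is the same config object passed to observe() in the example.

```javascript
// Mutate the DOM without re-triggering the MutationObserver callback.
function safeMutate(observer, target, options, mutate) {
  observer.disconnect();             // our own changes won't be observed
  mutate();
  observer.observe(target, options); // resume watching
}

// usage:
// safeMutate(observer, target, { childList: true, subtree: true },
//            () => target.appendChild(newNode));
```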

ResizeObserver watches an element and fires a callback whenever its size changes (width or height). Unlike listening to window.resize, it tracks individual elements, not just the window. This is essential for responsive components — when a container that holds your component changes size, you can re-render or adjust layout accordingly. It reports contentBoxSize (size without padding) and borderBoxSize (size with padding and border). Here is how ResizeObserver works:
const observer = new ResizeObserver((entries) => {
  entries.forEach((entry) => {
    const { width, height } = entry.contentRect;
    console.log(`Resized to ${width}px x ${height}px`);

    // Adjust layout based on size
    const el = entry.target;
    if (width < 400) {
      el.classList.add('compact');
    } else {
      el.classList.remove('compact');
    }
  });
});

// Observe an element
observer.observe(document.querySelector('.chart-container'));

// Stop observing
observer.unobserve(document.querySelector('.chart-container'));
observer.disconnect(); // stop all observations

// borderBoxSize (includes padding + border)
const borderObserver = new ResizeObserver((entries) => {
  for (const entry of entries) {
    if (entry.borderBoxSize) {
      const size = entry.borderBoxSize[0];
      console.log(`Border box: ${size.inlineSize} x ${size.blockSize}`);
    }
  }
});
contentRect gives width and height in CSS pixels. For more details, use contentBoxSize and borderBoxSize (both are arrays to handle multi-column layouts). ResizeObserver is much more efficient than polling element size with setInterval and avoids the pitfall of resize loops (it batches observations to prevent infinite feedback).

Why it matters: Component-level responsive design requires knowing when individual elements resize, not just the window. ResizeObserver enables containerQuery-style behavior in JavaScript and is the foundation of responsive components that adapt to their container size.

Real applications: Charts and canvas elements that redraw when their container resizes, responsive data tables that switch layouts based on available width, virtual list components that recalculate row heights on resize, and split-pane editors that reposition elements when panels are dragged.

Common mistakes: Using window resize event to track individual element size (misses cases where window doesn't change but element does), not calling observer.unobserve(element) when the element is removed (memory leak), and not debouncing expensive resize handlers (ResizeObserver fires very frequently).
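A minimal debounce helper addresses the "expensive resize handler" point above; debounce and the 100 ms wait are illustrative choices, not part of the ResizeObserver API.

```javascript
// Collapse a burst of calls into one, fired after `ms` of quiet.
function debounce(fn, ms) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// usage (browser):
// const observer = new ResizeObserver(debounce((entries) => {
//   redrawChart(entries[0].contentRect); // expensive work runs once per burst
// }, 100));
```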

The Clipboard API lets you programmatically read from and write to the system clipboard. The modern API is Promise-based: writing is generally allowed when triggered by a user gesture, while reading requires explicit user permission. It replaces the old document.execCommand('copy') approach, which is deprecated. Here is how the Clipboard API works:
// Write text to clipboard (requires user gesture)
async function copyToClipboard(text) {
  try {
    await navigator.clipboard.writeText(text);
    console.log('Copied!');
  } catch (err) {
    console.error('Copy failed:', err);
  }
}

// Read text from clipboard (requires permission)
async function readFromClipboard() {
  try {
    const text = await navigator.clipboard.readText();
    console.log('Clipboard:', text);
  } catch (err) {
    if (err.name === 'NotAllowedError') {
      console.log('Permission denied');
    }
  }
}

// Copy rich content (HTML + plain text)
async function copyRich(htmlContent, plainText) {
  await navigator.clipboard.write([
    new ClipboardItem({
      'text/html': new Blob([htmlContent], { type: 'text/html' }),
      'text/plain': new Blob([plainText], { type: 'text/plain' })
    })
  ]);
}

// Button click handler
document.querySelector('#copyBtn').addEventListener('click', () => {
  copyToClipboard('Hello World!');
});
The Clipboard API only works on HTTPS pages. On HTTP, navigator.clipboard is undefined. ClipboardItem allows copying rich content in multiple formats at once — the OS picks the best format when pasting into different apps.

Why it matters: Modern productivity apps rely on clipboard integration for copy-to-clipboard buttons and rich paste handling. Understanding the permission model and async API is essential for building polished UX features.

Real applications: "Copy to clipboard" buttons in documentation and code playgrounds, rich text editors that handle paste with image support, markdown editors that transform pasted HTML to markdown, and spreadsheet apps that copy tabular data as both HTML and plain text.

Common mistakes: Not handling clipboard permission denial gracefully (writes require a user gesture), using the deprecated document.execCommand('copy') instead of the Clipboard API, not falling back to execCommand for older browsers, and not being aware that navigator.clipboard.read() requires explicit permission (reading is more restricted than writing).
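The fallback mistake above can be handled in one helper — a sketch combining the modern API with the deprecated execCommand path; copyText is a hypothetical name, and clip/doc are injectable so the logic can be exercised outside a browser.

```javascript
// Copy text via the Clipboard API when available, otherwise fall back
// to the deprecated execCommand('copy') (older browsers, plain HTTP).
async function copyText(text, clip = navigator.clipboard, doc = document) {
  if (clip && clip.writeText) {
    await clip.writeText(text);
    return true;
  }
  const ta = doc.createElement('textarea');
  ta.value = text;
  ta.style.position = 'fixed'; // keep the temp element out of the layout
  doc.body.appendChild(ta);
  ta.select();
  const ok = doc.execCommand('copy'); // deprecated, but the only HTTP option
  ta.remove();
  return ok;
}

// usage: button.addEventListener('click', () => copyText('Hello World!'));
```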