An iterator is an object with a next() method that returns an object with two properties: value (the current value) and done (a boolean that is true when iteration is finished).
Iterators allow you to go through a sequence of values one at a time. Arrays, strings, Maps, Sets, and many other built-in objects already have built-in iterators.
You can create a custom iterator by returning a next() function from an object. This is useful when you want to generate values one by one instead of creating a full array upfront.
Here is a simple custom iterator example:
function makeCounter(start, end) {
let current = start;
return {
next() {
if (current <= end) {
return { value: current++, done: false };
}
return { value: undefined, done: true };
}
};
}
const counter = makeCounter(1, 3);
console.log(counter.next()); // { value: 1, done: false }
console.log(counter.next()); // { value: 2, done: false }
console.log(counter.next()); // { value: 3, done: false }
console.log(counter.next()); // { value: undefined, done: true }
Once done is true, the iterator is finished. Calling next() again will keep returning { value: undefined, done: true }.
Iterators are the foundation that powers for...of loops, spread syntax, and destructuring behind the scenes.
Why it matters: Understanding iterators explains how for...of, spread, and destructuring actually work. It also reveals why these features work on arrays, strings, Maps, and Sets but not plain objects — a question that trips up many developers.
Real applications: Custom data structures (linked lists, trees) that need to be iterable with for...of, lazy sequence generators, DOM NodeList traversal, streaming data processing, and any API that should be consumable as a sequence.
Common mistakes: Forgetting to return { done: true } when the iteration ends (creates infinite loops), not implementing [Symbol.iterator] for the iterable protocol alongside the iterator protocol, and confusing an iterator (has next()) with an iterable (has [Symbol.iterator]).
An iterable is an object with a [Symbol.iterator] method that returns an iterator. This is called the iterable protocol.
Built-in iterables in JavaScript include Arrays, Strings, Maps, Sets, and TypedArrays. Plain objects ({}) are NOT iterable by default.
When you use for...of, JavaScript calls Symbol.iterator automatically and gets the iterator from it.
Here is how the iterable protocol works:
const arr = [10, 20, 30];
const iter = arr[Symbol.iterator](); // get iterator
console.log(iter.next()); // { value: 10, done: false }
console.log(iter.next()); // { value: 20, done: false }
console.log(iter.next()); // { value: 30, done: false }
console.log(iter.next()); // { value: undefined, done: true }
// A plain object is NOT iterable
const obj = { a: 1 };
// for (const x of obj) {} // TypeError: obj is not iterable
You can make any object iterable by adding a [Symbol.iterator] method to it that returns an iterator object.
This protocol is what allows for...of, spread (...), and destructuring (const [a, b] = iterable) to work with custom objects.
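In fact, you can approximate what for...of does under the hood with a plain while loop. This is a simplified sketch, not the exact spec algorithm:
const iterable = [10, 20, 30];
const iterator = iterable[Symbol.iterator](); // step 1: ask for the iterator
let step = iterator.next(); // step 2: pull values until done
while (!step.done) {
  console.log(step.value); // 10, 20, 30
  step = iterator.next();
}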
Why it matters: Any time you need your custom object to work with for...of, spread, or destructuring, you implement this protocol. It's the interface that connects custom data structures to built-in language features.
Real applications: Making custom collection classes iterable, implementing range objects that generate sequences, creating paginated result sets that iterate lazily, and building query result objects that stream data one row at a time.
Common mistakes: Implementing [Symbol.iterator] but not returning an object with next() (breaks the protocol), not returning this from [Symbol.iterator] when the object is both iterable and iterator, and forgetting the value property in the returned object from next().
A generator function is declared with function* and can pause and resume its execution using the yield keyword. Calling it returns a generator object that is both an iterator and an iterable.
Each time you call next() on a generator, it runs until the next yield, returns the yielded value, and pauses. The function's local variables are kept alive between pauses.
Generators are great for producing sequences of values lazily — only computing the next value when asked, instead of creating a full array upfront.
Here is a basic generator function:
function* counter() {
console.log('Start');
yield 1;
console.log('After 1');
yield 2;
console.log('After 2');
yield 3;
}
const gen = counter();
console.log(gen.next()); // "Start" { value: 1, done: false }
console.log(gen.next()); // "After 1" { value: 2, done: false }
console.log(gen.next()); // "After 2" { value: 3, done: false }
console.log(gen.next()); // { value: undefined, done: true }
The generator function body does NOT run when you call counter(). It only runs when you call next() for the first time.
Generators are useful for implementing infinite sequences, custom iterables, and async-like control flow with libraries like Redux-Saga.
Why it matters: Generators enable a fundamentally different programming model — cooperative multitasking where a function can pause and resume. This underpins async/await (which is syntactic sugar over generators) and lazy evaluation patterns.
Real applications: Infinite ID generators, paginated data fetching, redux-saga middleware (uses generators for async side effects), custom iterators for complex data structures, and any algorithm that needs to produce values on demand.
Common mistakes: Calling a generator function and expecting it to run immediately (it returns a generator object; call .next() to start), forgetting that yield pauses execution (the code after yield runs only on the next .next() call), and not knowing that generator functions can also receive values via next(value).
yield pauses the generator function and sends a value back to the caller. When next() is called again, execution resumes right after the yield statement.
You can also pass a value into the generator via next(value). That value becomes the result of the yield expression inside the generator.
This two-way communication makes generators very powerful for writing state machines and coroutines.
Here is how yield sends and receives values:
function* dialog() {
const name = yield 'What is your name?';
const city = yield `Hello ${name}! Where are you from?`;
return `${name} is from ${city}.`;
}
const gen = dialog();
console.log(gen.next().value); // "What is your name?"
console.log(gen.next('Alice').value); // "Hello Alice! Where are you from?"
console.log(gen.next('Paris').value); // "Alice is from Paris."
The first next() call starts the generator and runs until the first yield. A value passed to the first next() call is always ignored since there is no yield to receive it yet.
yield* (with asterisk) delegates to another iterable or generator, allowing you to compose generators together.
Why it matters: yield* is the key to composable generators — building complex generators from simpler ones. Without it, you'd have to manually iterate sub-generators and yield each value individually.
Real applications: Building tree traversal generators that recursively yield child nodes, composing pipeline generators where each stage feeds into the next, and flattening nested iterables in a lazy, memory-efficient way.
Common mistakes: Using yield instead of yield* when delegating to another generator (yields the entire generator object instead of its values), not knowing that yield* also works with any iterable (arrays, strings, Sets, Maps), and forgetting that yield* returns the return value of the delegated generator.
yield* delegates execution to another iterable or generator. It yields each value from the inner iterable one by one, as if those yield statements were written in the outer generator.
This is useful for composing generators together or for iterating over nested structures in a flat way.
The yield* expression evaluates to the return value of the delegated generator (the value produced by its return statement).
Here is how yield* works:
function* inner() {
yield 'a';
yield 'b';
return 'done-inner';
}
function* outer() {
yield 1;
const result = yield* inner(); // delegates to inner
console.log(result); // "done-inner"
yield 2;
}
for (const val of outer()) {
console.log(val); // 1, 'a', 'b', 2
}
Notice that the return value of inner() ("done-inner") is not yielded to the for...of loop — it becomes the value of the yield* expression inside outer().
yield* also works with any iterable like arrays: yield* [1, 2, 3] yields 1, then 2, then 3.
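For instance, a quick sketch that delegates to a string and an array in the same generator:
function* letters() {
  yield* 'hi'; // delegates to the string's iterator
  yield* [1, 2, 3]; // delegates to the array's iterator
}
console.log([...letters()]); // ['h', 'i', 1, 2, 3]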
Why it matters: Being able to send values into a generator via next(value) transforms generators from simple value producers into two-way communication channels. This bidirectional flow is exactly how async/await is implemented internally.
Real applications: Coroutine patterns where external code feeds data into a running computation, building cooperative task schedulers, implementing stateful parsers that consume tokens on demand, and understanding how Babel transpiles async/await to generator-based code.
Common mistakes: Sending a value to the first next() call — it's always ignored (generators start before the first yield), confusing the value sent into next(value) with the value yielded out, and not knowing that yield* with arrays is the concise way to yield multiple values in sequence.
To create an infinite generator, use a while (true) loop with yield inside the generator body. The infinite loop doesn't run continuously — it pauses at each yield and waits for the next next() call.
This pattern is used for things like ID generators, infinite Fibonacci sequences, and paginated data fetching.
Here is an infinite number generator:
function* infiniteNumbers(start = 0) {
let n = start;
while (true) {
yield n++;
}
}
const nums = infiniteNumbers(1);
console.log(nums.next().value); // 1
console.log(nums.next().value); // 2
console.log(nums.next().value); // 3
// Use take() helper to get first N values
function take(gen, n) {
const result = [];
for (const val of gen) {
result.push(val);
if (result.length >= n) break;
}
return result;
}
console.log(take(infiniteNumbers(5), 4)); // [5, 6, 7, 8]
Always use break or a limit when iterating infinite generators with for...of to avoid an infinite loop.
Infinite generators are memory-efficient because they never store the full sequence — they only keep the current value and state.
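Another classic infinite sequence mentioned above is Fibonacci, computed one value at a time:
function* fibonacci() {
  let a = 0, b = 1;
  while (true) {
    yield a;
    [a, b] = [b, a + b]; // only the two most recent values are kept in memory
  }
}
const fib = fibonacci();
console.log(fib.next().value); // 0
console.log(fib.next().value); // 1
console.log(fib.next().value); // 1
console.log(fib.next().value); // 2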
Why it matters: Infinite sequences are impossible to represent with arrays but trivial with generators. This is a distinguishing capability that shows deep understanding of lazy evaluation and event-driven patterns.
Real applications: Monotonically increasing ID generators, timestamp-based event sequences, infinite scroll data generation in tests, generating test fixtures on demand, and simulating real-time data streams.
Common mistakes: Spreading or iterating an infinite generator with for...of without a break condition (infinite loop), not knowing that [...infiniteGen()] will hang the process, and confusing an infinite generator with a recursive function (generators pause; recursive functions run to completion).
for...of iterates over the values of an iterable object (arrays, strings, Maps, Sets, generators). for...in iterates over the enumerable property keys of an object.
Use for...of for arrays and other iterables. Use for...in only for plain objects when you need to loop through keys — but be careful, it also picks up inherited properties.
For arrays, for...in is almost always the wrong choice because it iterates over indices as strings and can pick up prototype methods if they are enumerable.
Here is the difference between the two:
const arr = ['a', 'b', 'c'];
for (const val of arr) {
console.log(val); // 'a', 'b', 'c' (values)
}
for (const key in arr) {
console.log(key); // '0', '1', '2' (string keys!)
}
const obj = { x: 1, y: 2 };
for (const key in obj) {
console.log(key, obj[key]); // 'x' 1, 'y' 2
}
// for...of on a string iterates characters
for (const ch of 'hi') {
console.log(ch); // 'h', 'i'
}
When using for...in on objects, add a check like if (Object.hasOwn(obj, key)) (or the older obj.hasOwnProperty(key)) to skip inherited properties.
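Here is a small sketch of the problem and the guard:
const proto = { inherited: 'oops' };
const child = Object.create(proto);
child.own = 1;
for (const key in child) {
  console.log(key); // 'own', then 'inherited' (picked up from the prototype!)
}
for (const key in child) {
  if (Object.hasOwn(child, key)) console.log(key); // only 'own'
}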
for...of cannot be used on plain objects by default because they do not implement Symbol.iterator.
Why it matters: This is why for (const x of obj) throws a TypeError on a plain object while for (const key in obj) works. Understanding this explains when to use for...of vs for...in and when to use Object.entries() as a workaround.
Real applications: Iterating Map/Set (use for...of), iterating object entries (for...of Object.entries(obj)), consuming custom data structures, and looping over DOM NodeLists and HTMLCollections (which are iterable in modern browsers).
Common mistakes: Using for...of on plain objects (TypeError) and using for...in when you want for...of semantics, not handling break inside for...of (which silently calls generator.return()), and forgetting that generators work directly with for...of because they implement both protocols.
To make an object iterable, give it a [Symbol.iterator] method that returns an iterator. The iterator must have a next() method returning { value, done }.
Once you add this method, the object works with for...of, spread, and destructuring just like built-in iterables.
A common approach is to use a generator function as the [Symbol.iterator] method, which makes the code much shorter.
Here is how to make an object iterable:
const range = {
from: 1,
to: 5,
[Symbol.iterator]() {
let current = this.from;
const last = this.to;
return {
next() {
if (current <= last) {
return { value: current++, done: false };
}
return { value: undefined, done: true };
}
};
}
};
for (const num of range) {
console.log(num); // 1, 2, 3, 4, 5
}
console.log([...range]); // [1, 2, 3, 4, 5]
The [Symbol.iterator] method is called once at the start of iteration, and the returned iterator object is used throughout.
You can also write it as a generator: [Symbol.iterator]: function*() { for (let i = this.from; i <= this.to; i++) yield i; }.
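Spelled out as a method, the same range object becomes much shorter (rangeGen is just an illustrative name):
const rangeGen = {
  from: 1,
  to: 5,
  *[Symbol.iterator]() { // generator method syntax
    for (let i = this.from; i <= this.to; i++) yield i;
  }
};
console.log([...rangeGen]); // [1, 2, 3, 4, 5]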
Why it matters: Making custom objects iterable is a key OOP pattern in JavaScript. It lets your domain objects (ranges, trees, linked lists, database result sets) integrate seamlessly with all language iteration features.
Real applications: Range objects for numeric sequences, tree/graph nodes that yield children, file system walkers, paginated API result objects, and date ranges that yield each day between two dates.
Common mistakes: Defining Symbol.iterator as an arrow function when using this (arrow functions don't have their own this), not resetting iterator state so the object can only be iterated once, and forgetting that generator method syntax (*[Symbol.iterator]() {}) is the most concise way to make an object iterable.
Here are three practical generator patterns in action:
// 1. Unique ID generator
function* idGenerator() {
let id = 1;
while (true) yield id++;
}
const nextId = idGenerator();
console.log(nextId.next().value); // 1
console.log(nextId.next().value); // 2
// 2. Pagination (lazy loading)
function* paginate(data, size) {
for (let i = 0; i < data.length; i += size) {
yield data.slice(i, i + size);
}
}
const pages = paginate([1,2,3,4,5,6,7], 3);
console.log(pages.next().value); // [1, 2, 3]
console.log(pages.next().value); // [4, 5, 6]
// 3. Flatten nested arrays
function* flatten(arr) {
for (const item of arr) {
if (Array.isArray(item)) yield* flatten(item);
else yield item;
}
}
console.log([...flatten([1,[2,[3]],4])]); // [1,2,3,4]
Generators shine when you want to decouple value production from value consumption.
For most everyday code, async/await has replaced generator-based async patterns, but generators are still essential for lazy sequences and custom iterables.
Why it matters: Generators were the original async control flow mechanism before async/await existed. Libraries like Redux-Saga still use them. Understanding the pattern explains how async/await works and enables the few cases where generators are still the best tool.
Real applications: Redux-Saga side effect management, co.js-based async control flow, building your own async runner for educational purposes, and understanding how Babel/TypeScript compile async functions to ES5 generator code.
Common mistakes: Choosing generator-based async over async/await for new code (async/await is clearer and better supported), not properly handling generator-based promise rejections (need to wrap in try/catch inside the generator), and mixing yield and await — they are different and not interchangeable outside async generators.
async/await was actually inspired by generators. Before async/await existed, developers used generators with a helper library (like co) to write async code in a synchronous style.
When an async function hits await, it pauses just like yield in a generator. The JavaScript engine internally uses a similar mechanism to suspend and resume execution.
The key difference is that await only works with Promises and is built directly into the language, while generators are general-purpose and need a runner function to handle Promises.
Here is the similarity between them:
// Using generators for async (old way, needs a runner)
function* fetchUserGen() {
const user = yield fetch('/api/user').then(r => r.json());
console.log(user);
}
// Using async/await (modern, built-in)
async function fetchUser() {
const user = await fetch('/api/user').then(r => r.json());
console.log(user);
}
// Both pause execution and resume when the value is ready
// async/await is syntactic sugar for generator + promise runner
Think of async function as a generator that automatically handles Promises — the engine creates a hidden promise runner for you.
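To make that hidden runner concrete, here is a minimal sketch of one, in the spirit of libraries like co (not the engine's actual implementation):
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(result) {
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value)
        .then(value => step(gen.next(value))) // feed the resolved value back in
        .catch(err => {
          try { step(gen.throw(err)); } // forward rejections as thrown errors
          catch (e) { reject(e); }
        });
    }
    try { step(gen.next()); }
    catch (e) { reject(e); }
  });
}
// run(fetchUserGen) would drive the generator version above to completion.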
Understanding generators helps you understand how async/await works internally, which is useful for advanced debugging and understanding JavaScript's concurrency model.
Why it matters: Generators expose JavaScript's cooperative multitasking model explicitly. Every await is effectively a yield under the hood. This knowledge gives you a mental model for debugging complex async code and understanding why certain async behaviors occur.
Real applications: Understanding why async function stack traces look the way they do, debugging suspended async operations, building teacher tools to demonstrate JavaScript's event loop and async model, and optimizing advanced async patterns like concurrency limiting.
Common mistakes: Confusing the generator's paused execution with synchronous blocking (generators don't block the event loop — they just save state), thinking you can run a generator concurrently (generators are single-threaded and sequential like all JS), and forgetting that the generator's state machine is GC-eligible once all references are dropped.
Calling gen.return(value) on a generator terminates it immediately and returns { value: value, done: true }. Once terminated, the generator cannot produce more values.
This is useful for early cleanup — for example, stopping an infinite generator or releasing resources when you no longer need the generator.
If the generator body has a try...finally block, calling return() runs the finally code before the generator terminates.
Here is how return() works:
function* counter() {
try {
let i = 0;
while (true) yield i++;
} finally {
console.log('Cleanup!');
}
}
const gen = counter();
console.log(gen.next().value); // 0
console.log(gen.next().value); // 1
console.log(gen.return(99)); // "Cleanup!" { value: 99, done: true }
console.log(gen.next()); // { value: undefined, done: true }
After return() is called, all subsequent next() calls return { value: undefined, done: true }.
The for...of loop automatically calls return() on the iterator when you break out of the loop early.
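A quick sketch showing break triggering that cleanup:
function* resource() {
  try {
    yield 1;
    yield 2;
  } finally {
    console.log('Released'); // runs even when the consumer stops early
  }
}
for (const n of resource()) {
  console.log(n); // 1
  break; // for...of calls gen.return(), so the finally block runs
}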
Why it matters: The return() method enables safe early termination with cleanup — essential for generators that hold resources (open files, network connections, running timers). Without it, resources would leak when consumers stop early.
Real applications: Database cursor generators that close connections when consumers stop reading, file reader generators that close file handles on early break, cleaning up subscriptions when a consumer component unmounts, and implementing time-limited generators.
Common mistakes: Not wrapping resource-holding code in try...finally (the finally block is the cleanup hook that return() triggers), not knowing that for...of break automatically calls return(), and trying to resume a generator after calling return() (it's permanently done).
gen.throw(error) injects an error into the generator at the point where it is paused. If the generator has a try...catch block around the yield, the error is caught inside the generator.
If there is no try...catch, the error propagates out to the caller and the generator is terminated: subsequent next() calls return { value: undefined, done: true }.
This allows the external code to signal errors to the generator, which is useful in async patterns.
Here is how throw() works:
function* process() {
try {
const value = yield 'ready';
yield 'processed: ' + value;
} catch (err) {
yield 'caught: ' + err.message;
}
}
const gen = process();
console.log(gen.next().value); // "ready"
console.log(gen.throw(new Error('oops')).value); // "caught: oops"
console.log(gen.next().value); // undefined (done)
If you call throw() before the first next(), the error is thrown at the very beginning of the generator body.
The throw() method is mainly used by async runners to forward Promise rejections into the generator as thrown errors.
Why it matters: The throw() method is how async runners (like the one behind async/await) forward Promise rejections into a generator as catchable errors. Understanding it explains why await expressions throw when a Promise rejects inside an async function.
Real applications: Building async runner utilities, implementing coroutine frameworks, error injection in tests (simulating failure conditions in generators), and Redux-Saga's error handling for side effects.
Common mistakes: Not wrapping yield expressions in try...catch when errors can be thrown into the generator (unhandled errors terminate the generator without cleanup), calling throw() before the first next() (throws at the very start of the generator body), and not calling next() first to advance to a yield point before injecting errors.
for...of works with generators because generators are both iterators and iterables. When you call a generator function, the returned generator object has a [Symbol.iterator] method that returns itself.
The for...of loop calls next() automatically until done is true. It only uses the value property — it never uses the final return value (when done: true).
You can also use spread, destructuring, and Array.from() with generators because they all use the iterable protocol internally.
Here is how for...of works with generators:
function* fruits() {
yield 'apple';
yield 'banana';
yield 'cherry';
return 'done'; // NOT included in for...of
}
for (const fruit of fruits()) {
console.log(fruit); // 'apple', 'banana', 'cherry'
// Note: 'done' is NOT logged
}
// Spread also works
const arr = [...fruits()]; // ['apple', 'banana', 'cherry']
// Destructuring works too
const [first, second] = fruits();
console.log(first, second); // 'apple' 'banana'
The final return value of a generator is ignored by for...of and spread — only yielded values are collected.
When break is used inside a for...of, JavaScript calls gen.return() automatically to cleanly terminate the generator.
Why it matters: This is the link between generators and standard JavaScript iteration syntax. Generators don't need special syntax to be consumed — they work with all existing tools (for...of, spread, destructuring) because they implement the iterable protocol natively.
Real applications: Consuming generator sequences with for...of just like arrays, spreading generator values into array literals, using destructuring to take the first N values from a generator, and feeding generators into any API that accepts iterables.
Common mistakes: Expecting the generator's return value to appear in for...of iteration (it's ignored — only yielded values are consumed), not realizing that [...gen()] on an infinite generator will hang, and forgetting that after spreading or destructuring, the generator advances and cannot be re-consumed from the start.
Generators are lazy: they pause at each yield and don't compute the next value until asked.
In contrast, a regular function like Array.map() is eager — it processes every item immediately and creates a full result array in memory.
Lazy evaluation is important when dealing with large datasets, infinite sequences, or expensive computations where you may not need all values.
Here is a comparison of eager vs lazy:
// Eager — processes ALL items immediately
const doubled = [1, 2, 3, 4, 5].map(x => x * 2);
// Full array [2,4,6,8,10] created even if you only need first 2
// Lazy — computes one at a time
function* lazyDouble(arr) {
for (const x of arr) {
yield x * 2;
}
}
const gen = lazyDouble([1, 2, 3, 4, 5]);
console.log(gen.next().value); // 2 (only computed first item)
console.log(gen.next().value); // 4 (only computed second item)
// Items 3, 4, 5 never touched
// Real benefit: large arrays or streams
function* readLines(text) {
for (const line of text.split('\n')) {
yield line; // process one line at a time
}
}
Lazy evaluation avoids unnecessary computation and reduces memory usage, especially when processing large data sources like files or API streams.
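For example, a lazy filter, map, and take pipeline can be composed from small generators (filterGen, mapGen, and takeGen are illustrative helpers, not built-ins):
function* naturals() { let n = 0; while (true) yield n++; }
function* filterGen(iter, pred) {
  for (const x of iter) if (pred(x)) yield x;
}
function* mapGen(iter, fn) {
  for (const x of iter) yield fn(x);
}
function* takeGen(iter, n) {
  if (n <= 0) return;
  for (const x of iter) {
    yield x;
    if (--n === 0) return; // stop pulling, which also closes the upstream generators
  }
}
const firstTwo = takeGen(mapGen(filterGen(naturals(), x => x % 2 === 0), x => x * 10), 2);
console.log([...firstTwo]); // [0, 20]; only the values actually needed are computed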
The downside is that lazy code can be harder to debug and test because values are computed at unpredictable times.
Why it matters: Lazy evaluation is a key performance pattern for large-scale data processing. Knowing when to prefer lazy (generator-based) over eager (array-based) computation is an important optimization skill for interviews and production code.
Real applications: Processing large CSV files line-by-line without loading the whole file, building lazy transformation pipelines (filter → map → take), streaming search results as they arrive, and performance-optimizing large list rendering by generating only visible items.
Common mistakes: Using eager array methods (map/filter) on very large datasets when a lazy generator pipeline would use a fraction of the memory, assuming lazy evaluation is always better (the bookkeeping overhead makes it slower for small arrays), and forgetting that generators can only be iterated once (must recreate the generator for a second pass).
Async generators (declared with async function*) combine generators and async/await. Each next() call returns a Promise of { value, done }, and they are consumed with for await...of.
This is perfect for reading data from streams, paginated APIs, or any source that delivers data asynchronously over time.
The for await...of loop automatically awaits each yielded Promise before moving to the next iteration.
Here is an async generator example:
async function* fetchPages(baseUrl) {
let page = 1;
while (true) {
const res = await fetch(`${baseUrl}?page=${page}`);
const data = await res.json();
if (data.length === 0) return; // no more pages
yield data;
page++;
}
}
// Consume with for await...of
async function loadAll() {
for await (const page of fetchPages('/api/items')) {
console.log('Got page:', page);
// process page...
}
}
// Simulated async generator
async function* delay() {
for (let i = 1; i <= 3; i++) {
await new Promise(r => setTimeout(r, 500));
yield i;
}
}
(async () => {
for await (const n of delay()) console.log(n); // 1, 2, 3
})();
Async generators are the cleanest way to handle async data streams in JavaScript without complex callback chains or consuming all data at once.
The Symbol.asyncIterator is the async version of Symbol.iterator — objects can implement it to work with for await...of.
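For example, a minimal sketch of a custom async iterable (ticker is just an illustrative name):
const ticker = {
  async *[Symbol.asyncIterator]() {
    for (let i = 1; i <= 3; i++) {
      await new Promise(r => setTimeout(r, 100)); // simulate an async source
      yield i;
    }
  }
};
(async () => {
  for await (const n of ticker) console.log(n); // 1, 2, 3
})();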
Why it matters: Async generators are the standard tool for consuming async data streams in modern JavaScript. They replace complex promise chains and callback patterns with clean, readable sequential-looking code for paginated APIs, file streams, and WebSocket data.
Real applications: Paginated REST API consumers that fetch-then-yield each page, Node.js file stream readers, WebSocket message processors, real-time database change streams, and any producer/consumer pattern where the producer delivers data asynchronously.
Common mistakes: Using a regular generator when yielding Promises (must use async function*), not using for await...of to consume async generators (using for...of yields the Promises themselves unfulfilled), and forgetting to add error handling for the async operations inside the generator.
An iterator is any object with a next() method following the iterator protocol. A generator is a special function (declared with function*) that automatically creates an iterator for you.
Generators are essentially a shorthand for creating iterators. Writing a generator function is much shorter and clearer than manually creating an iterator object with a next() method.
Every generator is an iterator, but not every iterator is a generator — you can write iterators manually without using generator syntax.
Here is the comparison:
// Manual iterator (verbose)
function makeRange(start, end) {
let current = start;
return {
next() {
return current <= end
? { value: current++, done: false }
: { value: undefined, done: true };
},
[Symbol.iterator]() { return this; } // also iterable
};
}
// Generator (concise, same result)
function* makeRangeGen(start, end) {
for (let i = start; i <= end; i++) {
yield i;
}
}
// Both work the same way
for (const n of makeRange(1, 3)) console.log(n); // 1 2 3
for (const n of makeRangeGen(1, 3)) console.log(n); // 1 2 3
Generators also automatically handle the [Symbol.iterator] method — the generator object returns itself from [Symbol.iterator], making it both an iterator and an iterable.
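You can verify this dual nature directly:
function* g() { yield 1; }
const it = g();
console.log(it[Symbol.iterator]() === it); // true: the generator is its own iterator
console.log([...g()]); // [1], works because it is also an iterable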
Use generators whenever you need a custom iterator — they are much easier to write and read than manual iterator objects.
Why it matters: Knowing that generators are a built-in way of implementing iterators is the key insight that links the two concepts. Every generator is an iterator, and — because they also implement [Symbol.iterator] returning this — they're also iterables. This dual nature is what makes them versatile.
Real applications: Any time you implement a custom data structure that needs to be iterable, a generator is the idiomatic choice over a manual iterator object. This covers linked lists, trees, graphs, and custom ranges.
Common mistakes: Writing manual iterator objects when a generator would be clearer and shorter, not knowing generators are re-invokable (call the generator function again to get a fresh iterator), and forgetting that after a generator is exhausted, calling next() keeps returning { value: undefined, done: true } rather than throwing.