for await…of is serial — not streaming, not parallel
Every senior JS dev reaches for `for await…of` when iterating async sources. Almost nobody realises the ECMA-262 algorithm awaits each next() before the body runs, and the body before the next next(). Here's the spec, two timeline animations, and the three iteration patterns you actually want in your toolbox.
Same five async items. Same work. Two schedules — for await…of on the left, Promise.all on the right. Watch the wall clock:
The name for await…of suggests two things to most senior JS devs: that it's streaming, and that it's somehow concurrent. Both readings are wrong. The ECMA-262 spec makes the loop strictly sequential. Every iteration awaits the previous item's arrival, runs the body, then asks the iterator for the next item. If your items are independent, the loop still serialises them.
This post reads the real ECMA-262 algorithm, shows the three iteration patterns you usually want, and explains why for await…of is almost always the wrong answer when items don't depend on each other.
ECMA-262 §14.7.5.7 (ForIn/OfBodyEvaluation) runs a Repeat loop whose body is: 1. call iterator.next(), 2. if async, Await the result, 3. run the loop body, 4. repeat. The Await sits between the iterator's next() and the loop body; the next next() doesn't fire until the body finishes. Independent items wait for each other. Prefer Promise.all for unbounded parallelism or a bounded pool (p-limit, Promise.all with a semaphore) for controlled concurrency.
The algorithm, verbatim
ECMA-262 §14.7.5.7 defines ForIn/OfBodyEvaluation — the abstract operation both for…of and for await…of lower to. Here's the body of the Repeat loop, abridged to the steps that matter for the async case:
Repeat,
  a. Let nextResult be ? Call(iteratorRecord.[[NextMethod]], iteratorRecord.[[Iterator]]).
  b. If iteratorKind is async, set nextResult to ? Await(nextResult).
  c. If nextResult is not an Object, throw a TypeError exception.
  d. Let done be ? IteratorComplete(nextResult).
  e. If done is true, return V.
  f. Let nextValue be ? IteratorValue(nextResult).
  ... bind lhs to nextValue ...
  x. Let result be Completion(Evaluation of stmt).
  ... loop-continues check ...
Two places to watch:
- Step b. On async iterators, the result of next() is awaited before the body runs. That Await suspends the enclosing async function. Nothing else in the loop advances.
- The Repeat structure. The algorithm is a plain repeat. Step a (call next()) only happens after the previous iteration's body (step x) resolves.
Put those together: for each item the async function suspends twice — once on the next() promise, once on anything the body awaits. Only after the body resumes does next() get called again for item N+1.
So for await (const item of source) { await work(item); } runs:
next() → Await → work(item) → next() → Await → work(item) → …

No overlap. The work call for item 2 starts only after work for item 1 has resolved. If each item takes 300ms, ten items take three seconds — even when the work could easily have run in parallel.
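That schedule is exactly what you get if you hand-desugar the loop. Here's a sketch of the semantics (my own simplification — error propagation and iterator cleanup via return() are omitted):

```javascript
// Rough desugaring of: for await (const item of source) { await work(item); }
async function runSerial(source, work) {
  const iterator = source[Symbol.asyncIterator]();
  while (true) {
    // Await #1: suspend on the next() promise — nothing else in the loop advances.
    const result = await iterator.next();
    if (result.done) break;
    // Await #2: suspend on the body — next() for item N+1 waits for this to settle.
    await work(result.value);
  }
}
```

Both suspension points sit on the same straight line; there is no step where two promises are in flight at once.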
Three patterns, side by side
Watch eight items scheduled under three strategies. Each item is 220ms. The only difference is how they're scheduled:
for await (const x of src) { await work(x); }

await Promise.all(src.map(work));

const limit = pLimit(3);
await Promise.all(src.map((x) => limit(() => work(x))));

- for await…of (top, rose). 8 × 220ms = 1760ms. Every item waits for the previous one.
- Promise.all (middle, amber). All eight start immediately. ~220ms total — but every fetch is live at once. No backpressure, no bound on fan-out.
- Pool of 3 (bottom, green). At most three items in flight at once. Three pumps feed eight items; total is ceil(8/3) × 220 ≈ 660ms. Controlled concurrency.
In production, the middle pattern is almost always wrong. 200 items in a Promise.all fires 200 parallel requests at whatever API you're calling. Rate limits get blown. The event loop fills with microtasks. The bottom pattern (via p-limit or equivalent) is the usual correct answer for "iterate these async things, cap concurrency, let the scheduler breathe".
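If you'd rather not pull in p-limit, the pool pattern is small enough to hand-roll. A minimal sketch (mapWithPool and its names are mine, not a library API):

```javascript
// Map `items` through async `fn` with at most `limit` in flight at once.
// Results come back in input order, like Promise.all.
async function mapWithPool(items, fn, limit) {
  const results = new Array(items.length);
  let nextIndex = 0;
  // Each "pump" claims the next unclaimed index until the queue is drained.
  async function pump() {
    while (nextIndex < items.length) {
      const i = nextIndex++;
      results[i] = await fn(items[i]);
    }
  }
  const pumps = Array.from({ length: Math.min(limit, items.length) }, pump);
  await Promise.all(pumps);
  return results;
}
```

Because JS is single-threaded, `nextIndex++` needs no locking — a pump can only be preempted at an await, after it has already claimed its index.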
for await…of isn't wrong — it just means something specific: iterate in strict order, with the next next() held until the current item's body finishes. That's the right thing when:
- You need items in order (streaming protocols, Server-Sent Events, WebSocket messages).
- The body has side effects that must happen one-at-a-time (transactional writes to a DB).
- The body produces backpressure — you want the producer (the source iterator) to stall until the consumer has processed the previous item.
If none of those apply, reach for Promise.all or a bounded pool instead.
The "streaming" misconception
Developers who come to JS from Python or Rust often read for await…of as a streaming construct — like async for over a channel. It is not. The async iterator protocol in ECMA-262 is a pull model: the loop pulls one item, awaits it, processes it, pulls again. The producer has no autonomous clock; it cannot emit faster than the consumer drains.
That's actually useful — it's what makes backpressure work — but it means async iterators are not streams in the "items arrive whenever" sense. The iterator's next() method is a function call that returns a Promise. Until the loop calls it, nothing happens. Async iterators are cooperative, not event-driven.
If you want event-driven delivery (items arrive whether or not the consumer is reading), you want an EventTarget, a ReadableStream, or a buffered channel — not a for await…of loop.
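A buffered channel is just a queue the producer pushes into and an async iterator drains. A minimal sketch bridging the two worlds (my own construction — unbounded buffer, no error propagation):

```javascript
// A tiny push/pull channel: push() buffers values; for await…of drains them.
function channel() {
  const buffer = [];   // values pushed before anyone asked
  const waiters = [];  // resolvers for next() calls that arrived first
  let closed = false;
  return {
    push(value) {
      const waiter = waiters.shift();
      if (waiter) waiter({ value, done: false });
      else buffer.push(value);
    },
    close() {
      closed = true;
      for (const waiter of waiters.splice(0)) waiter({ value: undefined, done: true });
    },
    [Symbol.asyncIterator]() {
      return {
        next() {
          if (buffer.length) return Promise.resolve({ value: buffer.shift(), done: false });
          if (closed) return Promise.resolve({ value: undefined, done: true });
          return new Promise((resolve) => waiters.push(resolve));
        },
      };
    },
  };
}
```

The producer side is event-driven (push whenever), the consumer side is still a plain for await…of — the queue absorbs the rate mismatch.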
Refactoring the wrong pattern
A shape I see in a lot of senior FE codebases:
for await (const user of users) {
const profile = await fetchProfile(user.id);
results.push(transform(profile));
}

Unless fetchProfile has per-item dependencies or side effects, this is N serial round trips where one parallel fan-out would do. The rewrite:
// if fetchProfile is idempotent and the API tolerates bursts
const results = await Promise.all(
users.map((u) => fetchProfile(u.id).then(transform))
);
// if the API has a rate limit you respect
import pLimit from "p-limit";
const limit = pLimit(6);
const results = await Promise.all(
users.map((u) => limit(() => fetchProfile(u.id).then(transform)))
);

Both produce the same array in the same order (Promise.all preserves input order). Wall time drops from N × latency to roughly ceil(N / pool) × latency — a 10× win on a 20-item batch is common.
The right time to keep the for await…of is when results.push(transform(profile)) reads state that the previous iteration mutated — then the serial order is load-bearing, and the wall-clock loss is the price of correctness.
The async iterator protocol itself
Worth internalising alongside the loop behaviour: an async iterator is just an object with a [Symbol.asyncIterator]() method that returns something with a next() method that returns a Promise<{ value, done }>. No magic. You can implement one in a dozen lines:
function heartbeat(intervalMs: number): AsyncIterable<number> {
return {
[Symbol.asyncIterator]() {
let i = 0;
return {
next(): Promise<IteratorResult<number>> {
return new Promise((resolve) => {
setTimeout(() => resolve({ value: i++, done: false }), intervalMs);
});
},
};
},
};
}
for await (const tick of heartbeat(500)) {
console.log(tick);
if (tick > 5) break;
}

Every tick of the loop: 500ms wait + the body + 500ms wait + body + … The timer is serial with the loop, not parallel to it. If the body were to await a slow operation (say, a 400ms network write), the loop wouldn't ask for the next value until that await settled — so the effective interval would be 900ms, not 500ms. That's the backpressure story — the source is held back by the consumer.
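You can measure the stretched interval directly. A quick timing check (measureGaps and the shrunk numbers are mine, chosen so it runs fast):

```javascript
// Record the gap between consecutive ticks when the loop body is itself slow.
// With a 50ms timer and a 40ms body, each full cycle after the first is ~90ms, not 50ms.
async function measureGaps(source, bodyMs, ticks) {
  const gaps = [];
  let last = Date.now();
  for await (const _ of source) {
    const now = Date.now();
    gaps.push(now - last);
    last = now;
    await new Promise((r) => setTimeout(r, bodyMs)); // the slow body
    if (gaps.length >= ticks) break;
  }
  return gaps;
}
```

Run it against the heartbeat source above and the gaps after the first settle near interval + body, never near the bare interval.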
Primary sources
- ECMA-262 §14.7.5.7 ForIn/OfBodyEvaluation — the algorithm showing the Await(nextResult) between next() and the body.
- ECMA-262 — %AsyncIteratorPrototype% — the prototype every async iterator inherits from.
- ECMA-262 — Iterator Records — the abstract structure both sync and async iteration use (fields: [[Iterator]], [[NextMethod]], [[Done]]).
- MDN — for await…of — the behaviour-level doc; the "sequential" note is easy to miss but it's there.
- WHATWG Streams — ReadableStream's async iterator — an example of an async iterator that exposes backpressure as a first-class feature.