Why Turbopack?
When we set out to create Turbopack, we wanted to solve a problem. We had been working on speed improvements for Next.js. We migrated away from several JS-based tools. Babel, gone. Terser, gone. Our next target was another JS-based tool, Webpack.
Replacing it became our goal. But with what?
A new generation of native-speed tools was emerging, like esbuild and swc. But after assessing the bundlers on the market, we decided to build our own. Why?
Bundling vs Native ESM
Frameworks like Vite use a technique where they don’t bundle application source code in development mode. Instead, they rely on the browser’s native ES Modules system. This approach results in incredibly responsive updates, since only the single file that changed has to be transformed.
However, Vite can hit scaling issues with large applications made up of many modules. A flood of cascading network requests in the browser can lead to a relatively slow startup time. For the browser, it’s faster if it can receive the code it needs in as few network requests as possible - even on a local server.
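As a toy illustration of that cascade (the module graph and latency below are made up), the browser can only discover a module’s imports after it has downloaded and parsed that module, so a chain of unbundled modules costs one round trip per level:

```ts
// Toy model of unbundled native ESM: a module's imports are only discovered
// after the module itself arrives, so each level of the chain adds another
// sequential round trip. Graph and latency are invented for illustration.
const imports: Record<string, string[]> = {
  "index.js": ["header.js"],
  "header.js": ["logo.js"],
  "logo.js": [],
};

const ROUND_TRIP_MS = 30;

async function load(module: string, depth = 0): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, ROUND_TRIP_MS)); // one round trip
  console.log(`${"  ".repeat(depth)}fetched ${module}`);
  // Children can load in parallel, but each deeper level waits on its parent.
  await Promise.all((imports[module] ?? []).map((dep) => load(dep, depth + 1)));
}

load("index.js"); // ~3 round trips for this chain; a single bundle would be 1
```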
That’s why we decided that, like Webpack, we wanted Turbopack to bundle the code in the development server. Turbopack can do it much faster, especially for larger applications, because it is written in Rust and skips optimization work that is only necessary for production.
Incremental Computation
There are two ways to make a process faster: do less work or do work in parallel. We knew if we wanted to make the fastest bundler possible, we’d need to pull hard on both levers.
We decided to create a reusable Turbo build engine for distributed and incremental behavior. The Turbo engine works like a scheduler for function calls, allowing those calls to be parallelized across all available cores.
The Turbo engine also caches the result of all the functions it schedules, meaning it never needs to do the same work twice. Put simply, it does the minimum work at maximum speed.
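As a rough sketch of that idea - not the Turbo engine’s actual interface - imagine a cache keyed by a function call’s arguments, so an identical call is computed once and then reused:

```ts
// Illustrative sketch only - not the Turbo engine's real API.
// Results are keyed by function name + arguments, so identical calls
// are computed once and then served from the cache.
const cache = new Map<string, unknown>();

function memoized<Args extends unknown[], R>(
  name: string,
  fn: (...args: Args) => R,
): (...args: Args) => R {
  return (...args: Args): R => {
    const key = `${name}:${JSON.stringify(args)}`;
    if (cache.has(key)) return cache.get(key) as R; // work already done once
    const result = fn(...args); // do the work exactly once
    cache.set(key, result);
    return result;
  };
}

// e.g. a source transform is only ever performed once per distinct input
const transform = memoized("transform", (source: string) => source.trim());
transform("export const a = 1 "); // computed
transform("export const a = 1 "); // returned from the cache
```

The real engine also tracks which inputs each call depends on, so when a file changes it can invalidate and recompute only the affected results; this sketch leaves that part out.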
Vite and esbuild
Other tools take a different approach to ‘doing less work’. Vite minimizes work in development mode by using Native ESM, an approach we decided against for the reasons listed above.
Under the hood, Vite uses esbuild for many tasks. esbuild is a bundler - a superbly fast one. It doesn’t force you to use native ESM. But we decided not to adopt esbuild for a few reasons.
esbuild’s code is hyper-optimized for one task - bundling fast. It doesn’t have Hot Module Replacement (HMR), which we didn’t want to lose from our dev server.
esbuild is an extremely fast bundler, but it doesn’t do much caching. This means you end up doing the same work again and again, even if that work runs at native speed.
Evan Wallace refers to esbuild as a proof-of-concept for the next generation of bundlers. We think he’s right. We feel that a Rust-powered bundler with incremental computation could perform better at a larger scale than esbuild.
Lazy bundling
Early versions of Next.js tried to bundle the entire web app in development mode. We quickly realized that this ‘eager’ approach was less than optimal. Modern versions of Next.js bundle only the pages requested by the dev server. For instance, if you go to localhost:3000, it’ll bundle only pages/index.jsx and the modules it imports.
This more ‘lazy’ approach (only bundling assets when absolutely necessary) is key for a fast dev server. Native ESM handles this without much magic - you request a module, which requests other modules. However, we wanted to build a bundler, for the reasons explained above.
esbuild doesn’t have a concept of ‘lazy’ bundling - it’s all-or-nothing, unless you specifically target only certain entry points.
Turbopack’s development mode builds a minimal graph of your app’s imports and exports based on received requests and only bundles the minimal code necessary. Learn more in the core concepts docs.
This strategy makes Turbopack extremely fast when first starting up the dev server. We compute only the code necessary to render the page, then ship it to the browser in a single chunk. At large scale, this ends up being significantly faster than Native ESM.
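A minimal sketch of that request-driven traversal (the module names and graph here are hypothetical; a real bundler resolves and parses files on disk):

```ts
// Hypothetical module graph: which files each module imports.
const moduleImports: Record<string, string[]> = {
  "pages/index.jsx": ["components/Header.jsx", "lib/posts.js"],
  "components/Header.jsx": ["components/Logo.jsx"],
  "components/Logo.jsx": [],
  "lib/posts.js": [],
  "pages/about.jsx": ["components/Header.jsx"], // untouched unless requested
};

// Starting from the entry a request asks for, walk only the modules it
// actually reaches; everything else in the project is never processed.
function collectForRequest(entry: string, seen = new Set<string>()): Set<string> {
  if (seen.has(entry)) return seen;
  seen.add(entry);
  for (const dep of moduleImports[entry] ?? []) collectForRequest(dep, seen);
  return seen;
}

// A request for / only pulls in pages/index.jsx and its transitive imports.
console.log(collectForRequest("pages/index.jsx"));
// Set { 'pages/index.jsx', 'components/Header.jsx', 'components/Logo.jsx', 'lib/posts.js' }
```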
Summary
We wanted to:
- Build a bundler. Bundlers outperform Native ESM when working on large applications.
- Use incremental computation. The Turbo engine brings this into the core of Turbopack’s architecture - maximizing speed and minimizing work done.
- Optimize our dev server’s startup time. For that, Turbopack builds a lazy asset graph to compute only the assets requested.
That’s why we chose to build Turbopack.