Turborepo 2.0 Made My Monorepo Builds Actually Fast
Vercel's monorepo tool got a major upgrade. I migrated my 6-app workspace and cut CI time from 18 minutes to 4. Here's my setup.
My CI pipeline was a nightmare. Eighteen minutes to run tests and builds across my monorepo—every single push. I'd watch the GitHub Actions spinner, grab coffee, and by the time I got back, maybe it was done.
Then Turborepo 2.0 dropped. In October 2024 I migrated my 6-app workspace over a weekend, tweaked my cache configuration, and cut CI time to 4 minutes. Same tests, same builds, 78% faster.
Here's exactly what I did, what broke during migration, and how you can get similar results.
What's Actually New in Turborepo 2.0
Vercel's Turborepo 2.0 announcement focused on three major improvements:
1. Watch Mode That Actually Works
The new turbo watch command finally makes local development tolerable. Previously, I ran separate terminal tabs for each app. Now one command watches everything:
turbo watch dev

It detects file changes across packages and rebuilds only what's affected. Sounds basic, but the implementation is solid—I haven't had a single rebuild miss in three weeks.
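Watch mode also respects the usual filter syntax, so you can scope it to one app and its dependencies. A minimal sketch, where "web" is a placeholder package name from my workspace:

turbo watch dev --filter=web...   # "web" plus everything it depends on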
2. Rust-Powered Engine
The entire task orchestration layer was rewritten in Rust. The speed improvement is noticeable:
- Cold start: 18s → 3s
- Incremental builds: 40s → 8s (when one package changes)
- CI with remote cache: 18m → 4m
3. Better Remote Caching
Remote caching existed in 1.x but was flaky. I disabled it after cache misses cost me more time than rebuilding. Version 2.0's cache restoration is reliable, and the compression is noticeably better. My average cache artifact dropped from 180MB to 45MB.
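The simplest sanity check I know of is to build twice in a row. With caching working, the second run should replay outputs from cache and finish in seconds (Turbo prints a "FULL TURBO" banner when every task hits; exact wording may vary by version):

turbo run build   # first run executes tasks and writes cache artifacts
turbo run build   # second run should restore everything from cache in seconds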
My Monorepo Setup
Before diving into migration, here's what I was working with:
- 6 applications: 3 Next.js apps, 2 React Native apps, 1 Express API
- 12 shared packages: UI components, utilities, types, configs
- CI/CD: GitHub Actions deploying to Vercel and AWS
- Solo developer with occasional freelancer help
- Deployment frequency: 8-12 times per day
I'd been running Turborepo 1.10 for about a year. It worked fine locally but CI was painful.
Migration: What I Actually Did
Step 1: Update Dependencies
First, the obvious part:
npm install turbo@latest --save-dev

My version jumped from 1.10.16 to 2.0.2. Breaking change warnings appeared immediately in the console.
Step 2: Rewrite turbo.json
This is where things got interesting. The new pipeline configuration format changed significantly.
Old format (1.x):
{
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"]
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": ["coverage/**"]
    }
  }
}

New format (2.0):
{
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"],
      "cache": true
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": ["coverage/**"],
      "cache": true
    }
  }
}

The pipeline key is now tasks. Not a huge change, but my CI scripts referenced pipeline in error messages, so I had to update those too.
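If you'd rather not do the rename by hand, Turborepo publishes an official codemod package; as far as I can tell it handles the pipeline-to-tasks change along with the other 2.0 config updates, so it's worth running first and reviewing the diff:

npx @turbo/codemod migrate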
Step 3: Fix Environment Variable Handling
This broke my builds initially. Turborepo 2.0 is stricter about environment variables in the cache key.
I had .env files with API keys that shouldn't affect cache validity. Previously, Turbo ignored them. Now it includes all env vars in the hash by default.
The fix:
{
  "tasks": {
    "build": {
      "env": ["NEXT_PUBLIC_*"],
      "passThroughEnv": ["NODE_ENV"]
    }
  }
}

Only NEXT_PUBLIC_* vars affect the cache. NODE_ENV passes through but doesn't invalidate caches. This took me two hours to debug—builds kept missing cache because I had DATABASE_URL in my .env that changed between machines.
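The tool that finally cracked it for me was the dry run: it prints each task's hash and the inputs behind it, including environment variables, without executing anything. A rough sketch (the exact JSON structure may differ between versions):

turbo run build --dry=json > dry.json
grep -n "DATABASE_URL" dry.json   # if it appears under a task's environment, it's part of that task's cache key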
Step 4: Configure Remote Caching Properly
I use Vercel for hosting, so I get Vercel Remote Cache for free. The new configuration is cleaner:
# .env
TURBO_TOKEN=your_token_here
TURBO_TEAM=amillionmonkeys

Previously, I had to set these in 3 different places (local, CI, Vercel). Now it's centralized.
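If the repo has never been connected to Vercel's Remote Cache before, the CLI handles the setup; locally it's roughly these two commands (CI uses the env vars above instead):

npx turbo login   # authenticate the CLI against your Vercel account
npx turbo link    # link this repository to your team's remote cache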
The cache hit rate went from ~45% to ~92% after migration. I think the improved compression and hash stability fixed it.
Step 5: Update CI Configuration
My GitHub Actions workflow needed adjustments:
Before:
- name: Build and test
  run: npx turbo run build test --filter=[HEAD^1]

After:
- name: Build and test
  run: npx turbo run build test --filter=[HEAD^1]
  env:
    TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
    TURBO_TEAM: ${{ secrets.TURBO_TEAM }}

The --filter syntax is unchanged, which was a relief. I was worried about rewriting all my selective build logic.
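Those two secrets need to exist in the repository first. If you use the GitHub CLI, setting them looks something like this (the values are placeholders):

gh secret set TURBO_TOKEN --body "your_token_here"
gh secret set TURBO_TEAM --body "amillionmonkeys"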
The Results: 18 Minutes → 4 Minutes
Here's the breakdown of where I saved time:
| Stage | Before (1.x) | After (2.0) | Improvement |
|---|---|---|---|
| Dependency installation | 45s | 40s | 11% |
| Task orchestration | 18s | 3s | 83% |
| Build (cache miss) | 12m 30s | 10m 15s | 18% |
| Build (cache hit) | 8m 20s | 2m 45s | 67% |
| Tests | 4m 10s | 50s | 80% |
| Total (typical run) | 18m | 4m | 78% |
The dramatic test time improvement came from better parallelization. Turborepo 2.0's task scheduler is smarter about utilizing all CPU cores. My GitHub Actions runners have 4 cores—previously, they maxed out at 60% utilization. Now at 95%.
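If your runners still look under-utilized, the concurrency flag is the knob to experiment with; it accepts an absolute number or, I believe, a percentage of available cores:

turbo run build test --concurrency=100%   # roughly one task per core instead of the fixed default cap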
Turborepo vs Nx: Why I Chose Turbo
I evaluated Nx before committing to Turborepo 2.0. Here's my honest comparison:
Nx Pros:
- More mature (been around longer)
- Better Visual Studio Code integration
- Powerful generators for scaffolding
- More configuration options
Nx Cons:
- Heavier (larger dependency footprint)
- Steeper learning curve
- Configuration complexity (we spent 2 days on our first Nx setup)
Turborepo Pros:
- Simple configuration (my entire turbo.json is 35 lines)
- Fast out of the box (minimal tuning needed)
- Great Vercel integration (my deployment target)
- Excellent documentation
Turborepo Cons:
- Fewer features (no built-in generators)
- Smaller community
- Less flexible for complex dependency graphs
For my use case—solo developer, straightforward monorepo, Vercel deployments—Turborepo was the obvious choice. If I had 20 apps and complex orchestration needs, I'd reconsider Nx.
My Current turbo.json Configuration
Here's my production config, stripped of comments:
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**", "build/**"],
      "env": ["NEXT_PUBLIC_*", "EXPO_PUBLIC_*"],
      "cache": true
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": ["coverage/**"],
      "cache": true
    },
    "lint": {
      "cache": true
    },
    "dev": {
      "cache": false,
      "persistent": true
    }
  },
  "globalEnv": ["NODE_ENV"],
  "globalDependencies": ["tsconfig.json"]
}

The persistent: true flag on dev is important—without it, turbo watch dev kills the dev servers prematurely.
Gotchas I Hit
1. pnpm Workspace Protocol
I use pnpm. Turborepo 2.0 is stricter about workspace protocol usage:
{
  "dependencies": {
    "@amm/ui": "workspace:*"
  }
}

Previously, I could get away with "workspace:0.0.0". Now it has to be workspace:* or the exact version. Broke 6 packages until I fixed it.
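A quick way to find the offenders before a build blows up, assuming your packages live under apps/ and packages/:

grep -rn '"workspace:' apps/*/package.json packages/*/package.json | grep -vF 'workspace:*'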
2. Output Globs Are Literal
My original config had:
"outputs": [".next/**"]I assumed this matched .next at any depth. Nope—it's relative to the package root only. Deep nested builds weren't cached. Changed to:
"outputs": [".next/**", "**/dist/**"]3. Cache Invalidation on CI
GitHub Actions caching interacts weirdly with Turbo's cache. I was double-caching node_modules and seeing slower builds. Solution:
# Remove GitHub Actions caching for node_modules
# Let Turborepo handle everything
- name: Install dependencies
  run: pnpm install --frozen-lockfile
# No cache step here anymore

Counterintuitively, removing GitHub's cache layer made things faster.
Lessons Learned
1. Start with minimal config
My first attempt had 80 lines of configuration. I deleted half of it and builds got faster. Turborepo's defaults are good.
2. Remote cache is worth it
Even if you're not on Vercel, set up remote caching. I tested with Turborepo's GitHub Action and saw similar improvements.
3. Measure before optimizing
I added --summarize to my Turbo commands during migration:
turbo run build --summarize

This outputs a JSON summary of what ran, what was cached, and timing. Invaluable for debugging slow builds.
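The summaries land as JSON files under .turbo/runs/, one per invocation. I poke at them with jq; the field names below are from memory, so treat them as approximate:

ls .turbo/runs/
jq '.tasks[] | {taskId, cache}' .turbo/runs/*.json   # per-task cache status and timing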
4. Watch mode changes workflows
I used to run npm run dev in each app directory. Now I run turbo watch dev once at the root. Took a week to break old habits, but local development is much smoother.
Should You Upgrade?
Upgrade if:
- You're on Turborepo 1.x (migration is straightforward)
- CI builds are slow (>5 minutes)
- You have >3 packages in your monorepo
- You deploy to Vercel (integration is seamless)
Wait if:
- You have complex Turborepo plugins (check compatibility first)
- CI is already fast enough (<2 minutes)
- Your monorepo is tiny (2-3 packages with minimal deps)
For me, the 78% CI time reduction was absolutely worth the migration effort. I spent about 8 hours total—most of it debugging the env var cache key issue.
What's Next for My Setup
I'm exploring a few improvements:
- Parallel CI jobs: Splitting tests across multiple runners now that Turbo is faster
- Better cache analytics: Tracking cache hit rates over time
- Local remote cache: Setting up a self-hosted cache for faster local builds
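On that last point, the CLI can already talk to any server that implements the remote cache API; switching away from Vercel's cache should just be environment variables (the URL and token below are made-up placeholders):

export TURBO_API="https://turbo-cache.internal.example"
export TURBO_TOKEN="self-hosted-token"
export TURBO_TEAM="amillionmonkeys"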
If you're running a monorepo and haven't tried Turborepo 2.0 yet, it's worth a weekend experiment. The speed improvement is real.
Considering a monorepo migration for your project? I've done this a few times now and can help you avoid the gotchas. Get in touch to discuss your setup.