Introduction to Frontend Monorepos

Commonly used in large enterprise codebases by companies like Meta, Google, and Microsoft, monorepos are a way to manage multiple projects within a single repository. This article introduces the concept of monorepos, their benefits, and the most popular tools used to manage them.
Introduction to Monorepos
Have you ever built a frontend application that started small? Suddenly, the requirements changed, business wanted to have a custom design system, and you had to build additional shared libraries for it. Then, you started publishing those libraries to npm, and before you knew it, you had a dozen libraries, each with its own versioning, dependencies, and build processes. The complexity grew, and you found yourself spending more time managing dependencies and versions than actually building features.
What if I told you that there is a better way? A way to manage all your libraries, applications, and shared code in a single repository? A way to share code between applications without the hassle of versioning and publishing? A way to scale your frontend codebase to millions of lines of code without losing control? Want to import the latest version of your component library? No more version bumps, `npm publish`, and `npm install`. Just import it directly from the monorepo and you are done - simple as that.
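In a TypeScript monorepo this is typically wired up with path mappings, so applications compile against the library's source directly instead of a published package. A minimal sketch of a root `tsconfig` fragment, assuming a hypothetical `@myorg/ui` library living under `libs/ui`:

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@myorg/ui": ["libs/ui/src/index.ts"]
    }
  }
}
```

With this in place, `import { Button } from '@myorg/ui';` resolves straight to the source in the repo - no publish step involved.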
This is the main promise of monorepos: to make large, interconnected codebases easier to manage, with less friction, higher developer productivity, and central control over the codebase. Many of the largest and most successful companies in the world use monorepos to manage their codebases. Google, Meta, Microsoft, and many others have adopted monorepos as their primary way of managing code. They have seen the benefits of using monorepos to manage complexity, and arguably could not ship the products they do without them.
But is it really that simple? Are there any downsides to using monorepos? What are the best practices for managing large monorepos? And what tools can help you achieve this?
Types of Monorepos
In theory, a monorepo can be any repository that contains more than one project. You could simply create a git repository and put two directories in the root of the repo: one for the frontend and one for the backend. And there you have it: your first monorepo. While this is technically already a monorepo, it is not what we usually mean when we talk about monorepos. In practice, monorepos are usually more complex and contain multiple projects that share internal dependencies, libraries, and tooling. The tools powering monorepos need to be able to compute the dependency graphs of the projects, optimize the build and test processes, and provide a good developer experience when working with such large codebases.
Regardless of the tools and technologies, monorepos can be categorized into two main types:
- Single-Project Monorepos: These are repositories that contain a single project with multiple packages or libraries. The project is usually a large application that has many components, utilities, and shared code. The goal is to manage all these components in a single repository to simplify development and deployment. This is the most common type of monorepo you will encounter in the frontend world, especially when you have a product suite with multiple applications, such as a web app, mobile app, and desktop app that share a common codebase and design system. Typical monorepo tools for this type of monorepo are Nx, Turborepo, PNPM Workspaces and Rush.js.
- Company Monorepos: These are repositories that contain multiple projects from a single organization. The projects can be related or unrelated, but they share a common codebase and infrastructure. The goal is to enable code sharing and collaboration across different teams and projects within the organization. Some of the largest companies in the world, such as Google and Meta, use such company monorepos to manage their codebases. Typical monorepo tools for this type of monorepo are Bazel (Google) and Buck2 (Meta).
While both types of monorepos have their place, this article will focus on the first type: single-project monorepos. These are the most common in the frontend world and they are the ones that can benefit the most from the tools and techniques we will cover. I have used such monorepos to build product suites with a shared codebase and multiple applications targeting different platforms, such as web, mobile, and desktop. In theory, you could also have just one single application in a monorepo, where the app itself is pretty empty and often referred to as a shell app. All of the app’s functionality is implemented in many fine-grained libraries, where each library can be linted, tested and built independently. This is a common pattern in enterprise frontend applications, as it allows for more restricted module boundaries and even better CI optimizations, as the task execution is more granular allowing for more parallelization and less redundant work.
The Benefits of Monorepos
While monorepos can be complex and require careful management, they offer several key benefits that make them an attractive option for large codebases:
- Single Version Policy
- Atomic Changes
- Consistent Tooling
- Better CI Performance
Single Version Policy
When you have an app and a library in a monorepo, using the Single Version Policy, you can ensure that the app always uses the latest version of the library. What this really means is that you can import the library directly from inside the monorepo, without having to publish it to npm or any other package registry. The latest ‘version’ in that sense is always the source code in the monorepo as there is no intermediate package versioning involved. This means that you can make changes to the library and immediately see the effects in the app without having to wait for a new version to be published. This is especially useful when you are working on a new feature that requires changes to both the app and the library, as you can iterate quickly without having to worry about versioning and publishing.
Imagine you have a ton of libraries and applications in your monorepo, all of which are using Angular version 19. You want to upgrade all of them to Angular version 20. With a single version policy in place, you can make the change in one place and have it reflected everywhere. This eliminates the need to manually update each library and application, reducing the risk of errors and inconsistencies. It is simply not possible to have a situation where one library is using Angular 19 and another one is using Angular 20, as they are all using the same version of Angular from the monorepo. The same principle applies to all other dependencies, such as RxJS, TypeScript, ESLint, etc. This can be a huge benefit when making migrations and upgrades, as you have this holistic view of your codebase and can make changes in one place, knowing that they will be applied everywhere without incompatibilities.
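In practice, a single version policy often means there is exactly one root `package.json` (and one lockfile) that pins each third-party dependency once for the whole repo. A sketch of such a root manifest - the version numbers are purely illustrative:

```json
{
  "dependencies": {
    "@angular/core": "20.0.0",
    "rxjs": "7.8.1"
  },
  "devDependencies": {
    "typescript": "5.8.2",
    "eslint": "9.0.0"
  }
}
```

Every app and library in the repo resolves `@angular/core` to this one entry, so a framework upgrade is a single-line change.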
On the other hand, this can become a challenge, because upgrades and migrations must be done in a coordinated holistic way. But more on this later in the pitfalls section.
Atomic Changes
On the same note, monorepos allow you to make atomic changes across multiple projects. This means that you can make changes to multiple libraries and applications in a single commit, ensuring that they are all in sync. This might at first sound like a small thing, but think about it from the perspective of code reviews and pull requests. When you have a monorepo, you can create a single pull request that contains all the changes needed to implement a new feature or fix a bug across multiple libraries and applications. This makes it much easier to review the changes, as you can see the full context of the changes in one place, rather than having to jump between multiple repositories and pull requests. And not just pull requests, but also in local development, you can search through the entire codebase and find all the references to a specific library or component, regardless of where it is used. This is a huge productivity boost, as you can quickly navigate through the codebase and find what you are looking for without having to switch between multiple repositories. The context is always there, and you can see how the different parts of the codebase are related to each other.
Consistent Tooling
Since scaffolding projects and their config files is usually done programmatically, you can ensure that all projects in the monorepo use the same tooling and configuration - not just the same versions of the tools, but also the same configuration style, linting rules, and conventions.
For example, you can have a base tsconfig, which is extended by the individual projects, ensuring that they all share the same baseline configuration. This could ensure that all projects have the same strictness level, the same module resolution strategy, and the same compiler options. This is especially useful when you have a large codebase with many projects, as it ensures that they all follow the same conventions and standards, making it easier to maintain and understand the codebase. Or you can have a base ESLint configuration that is extended by all projects, ensuring that they all follow the same linting rules and conventions. This can help to enforce consistency across the codebase and reduce the risk of errors and inconsistencies.
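For instance, a shared base config at the repo root and a project-level config extending it might look like this (paths and compiler options are illustrative):

```jsonc
// tsconfig.base.json (repo root): the shared baseline every project inherits
{
  "compilerOptions": {
    "strict": true,
    "target": "ES2022",
    "moduleResolution": "bundler"
  }
}
```

```jsonc
// apps/web/tsconfig.json: extends the base, only overriding what it must
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "outDir": "./dist"
  }
}
```

Changing the strictness level in the base file now changes it for every project at once.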
This is not just handy and keeps config files DRY, but it also has a huge effect on team topologies as you can have a single platform team that is responsible for maintaining the tooling and configuration for the entire monorepo. This can help to reduce the cognitive load on individual teams, as they do not have to worry about maintaining their own tooling and configuration, but can instead focus on building features and delivering value.
Better CI Performance
Most monorepo tools come with built-in support for caching and parallelization of tasks, which can significantly improve the performance of your CI pipelines as well as local development. In the spirit of Don’t Run Twice, you can cache the results of any given task, such as linting, testing, or building, and reuse them across different runs, such that you never run the same task twice. If a particular library has not changed, you can usually skip running the tests, linting and building for that library, as the results are already cached. This can save a lot of time and resources, especially in large codebases with many libraries and applications.
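In Nx, for example, caching is declared per task in `nx.json`; a sketch of what such a configuration can look like (the exact shape depends on the Nx version):

```json
{
  "targetDefaults": {
    "build": {
      "cache": true,
      "dependsOn": ["^build"],
      "outputs": ["{projectRoot}/dist"]
    },
    "test": { "cache": true },
    "lint": { "cache": true }
  }
}
```

Here `"dependsOn": ["^build"]` tells Nx to build a project's dependencies first, and `"cache": true` lets unchanged projects be skipped entirely on subsequent runs.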
It is not rare to see CI pipelines that take 90+ minutes to run, especially in large codebases with many libraries and applications. But with the right monorepo tools and configuration, you can reduce the CI times to just a few minutes with good caching and parallelization strategies. This can significantly improve the developer experience and productivity, as developers can get feedback on their changes much faster and iterate more quickly. A few minutes of CI time can make a huge difference in the development process, especially when more than 100 engineers are working on the same codebase.
The Pitfalls of Monorepos
While monorepos offer many benefits, they also come with their own set of challenges and pitfalls that need to be carefully managed. Here are some of the most common pitfalls of monorepos:
- Organizational Complexity
- Third-Party Dependencies
- Tooling Overhead
Organizational Complexity
Even though the single version policy and atomic changes can be a huge benefit, they also inevitably increase organizational complexity. Coordinating holistic upgrades and migrations can be a challenge. For example, if you want to upgrade Angular Material to a new version, you must update all the libraries and applications at once, as they all depend on the same version of Angular Material. In practice, such holistic upgrades and migrations are usually only done with code mods - and they need to be developed and tested. This can be a challenge, especially if the organizational complexity is underestimated. Working in large monorepos usually requires a platform team whose sole responsibility is to maintain the monorepo and CI pipelines. This team is responsible for ensuring that the monorepo is in a healthy state, that the CI pipelines are running smoothly, and that the tooling and configuration are up to date. This of course requires lots of investment and resources, which must be justified by the benefits of using a monorepo - which is usually only the case when there are lots of engineers working on the codebase. The more engineers you have, the more complex the codebase becomes, and the more you need a platform team to manage it.
On the other hand, if there is a dedicated platform team, they can help enforce best practices and streamline development processes across the monorepo, which makes for a strong synergy between the platform team and the engineering teams. This can help to reduce the cognitive load on individual teams, as they do not have to worry about maintaining their own tooling and configuration, but can instead focus on building features and delivering value.
Therefore, you should carefully consider if you can afford a platform team. A good rule of thumb is that you need one platform engineer for every 50-100 engineers working on the codebase. This is not a hard rule, but it is a good starting point to estimate the resources needed to maintain a monorepo.
Third-Party Dependencies
Even though some companies handroll their own monorepo tools, sometimes with tools like Gulp, it is generally recommended to use established tools that can handle caching, parallelization, dependency graph computation, and other complexities of monorepos. However, when you choose any third-party tool to build your engineering system on top of, you are making a huge commitment to that tool. You are betting on the future of that tool and its maintainers. If the tool is no longer maintained, or if it does not keep up with the latest trends and technologies, you might find yourself in a difficult situation where you have to either switch to a new tool or maintain the old one yourself. This can be a huge risk, especially if you have built your entire engineering system on top of that tool.
Additionally, you risk being locked into a specific ecosystem and might get paywalled by the tool's pricing model or licensing terms. A recent example of this is Nx, which is itself mostly an open-source package under the MIT license, but has an opt-in cloud offering for distributed task execution and remote caching. Until recently, Nx Core did have public APIs, namely the Custom Task Runner API, which allowed you to implement your own distributed remote cache solution. However, this API was deprecated while they rolled out an additional pricey enterprise offering called Nx Powerpack, which included a remote cache solution. This is becoming more and more common with companies that develop open-source tools and have a cloud offering. Therefore, you should carefully consider the long-term implications of using any third-party tool and its ecosystem. Make sure to evaluate the tool's maintainability, community support, and long-term viability before committing to it.
Note: The paywalling of the Nx remote cache was not received well by the community, and the Nx team has since announced that they will make the Powerpack remote cache available for free to all Nx users. This is a good example of how community feedback can influence the direction of a tool and its ecosystem. Nevertheless, it still left a bad taste in the mouths of many users, as they had to adapt to the new pricing model and the deprecation of the Custom Task Runner API.
Therefore, we are going to also highlight the companies and the business models behind the tools we cover in this article, so you can make an informed decision about which tools to use and which ones to avoid.
Tooling Overhead
Depending on the size of your codebase, you might find yourself spending more time managing the tooling and configuration than actually building features. This is especially true if you do not have a large codebase or many engineers working on it. While there are some tools that are designed to be simple and easy to use, such as PNPM Workspaces and Nx, others, such as Bazel, can be quite complex and require a steep learning curve. This can be a challenge, especially if you do not have a dedicated platform team to manage the tooling and configuration.
Monorepo Tools
By now we have already named a few of the many available monorepo tools. All of them are fundamentally different in their approach and philosophy, but they all share the same goal: to make it easier to manage large codebases with multiple projects. While some focus on specific stacks, others are more general-purpose and can be used with any technology stack. While some are lean and simple, others are more complex and require a steep learning curve. Here is an overview of the most popular monorepo tools:
PNPM Workspaces
PNPM is a modern package manager designed to be a fast and efficient alternative to NPM and Yarn. The fundamental difference is that PNPM installs each package once into a global content-addressable store and links it into each project's node_modules - using hard links and symlinks - which saves disk space and speeds up installations.
A symlink is a special file that points to another file or directory, like a shortcut. This means that multiple projects in a monorepo can share the same dependencies on disk without installing them multiple times.
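To make the symlink idea concrete, here is a tiny shell sketch (file names are hypothetical) showing that a link reads through to the original file instead of copying it - the same mechanism pnpm uses to wire packages into node_modules:

```shell
# Create a file and a symbolic link pointing to it.
mkdir -p /tmp/symlink-demo
cd /tmp/symlink-demo
echo "export const x = 1;" > library.js
ln -sf library.js link.js   # a symlink, not a copy: no extra disk space used
cat link.js                 # reading the link reads the original file
```

Editing `library.js` is immediately visible through `link.js`, which is exactly why linked workspace packages never go stale.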
PNPM Workspaces is a feature of PNPM, similar to NPM or Yarn workspaces, that allows you to manage multiple packages in a single repository. It is designed to be simple and easy to use. It is a great choice for small to medium-sized monorepos where the packages do not have complex dependencies inside the monorepo, as PNPM Workspaces does not offer task orchestration, caching, or affected detection on top of the dependency graph. It is also a good choice if you are already using PNPM as your package manager, as it integrates seamlessly with it.
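Setting up PNPM Workspaces amounts to a `pnpm-workspace.yaml` at the repo root; a minimal sketch with hypothetical directory names:

```yaml
# pnpm-workspace.yaml
packages:
  - "apps/*"
  - "libs/*"
```

Inside an app's `package.json`, a workspace library is then referenced with the workspace protocol, e.g. `"@myorg/ui": "workspace:*"`, which pnpm resolves to the local source instead of the registry.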
If you do not yet know how complex your codebase will become, PNPM Workspaces is a good choice to start with, as it is simple and easy to use. If you find yourself needing more advanced features you can always opt into Nx or Turborepo later on, as they both are able to work with PNPM as the package manager.
Nx
Nx is a powerful set of extensible dev tools that helps you manage monorepos. It provides a rich set of features, including advanced code generation, dependency graph visualization, task scheduling and caching, and more. Historically, Nx was used primarily with Angular, but it has a framework and technology agnostic approach, which means that it can be used with any technology stack, including React, Vue, Node.js, Go, Java, .NET and more.
The company behind Nx used to be called Nrwl, but has since rebranded to Nx - just like the tool itself. The company's business model is based on providing best-in-class developer productivity through their Nx Cloud offering, which provides unique features such as distributed task execution and remote caching, which can significantly improve the performance of your CI pipelines and save hours and days of compute time. Nx Core, which is the open-source part of Nx, is free to use and is developed under the MIT license. You can therefore already benefit greatly from the open-source part of Nx, without having to pay for the cloud offering. However, you should be aware of the fact that the company behind Nx is a for-profit company, and they have a vested interest in promoting their cloud offering. This is not necessarily a bad thing, but it is something to keep in mind when evaluating the tool and its ecosystem.
Nx is a very common choice for managing frontend monorepos and therefore also has a big community around it. It is easily extensible and has a rich documentation and the Nx team is very responsive to community feedback and even has weekly office hours where anyone can ask questions and get first-hand support from the Nx team. This makes it a great choice for teams that are looking for a powerful and flexible monorepo tool that can scale with their needs.
Nx is used by companies such as Microsoft, Cisco, FedEx, Ikea and many others.
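As one example of Nx's extensibility, projects can be tagged and import rules enforced with Nx's `@nx/enforce-module-boundaries` ESLint rule. A sketch of such a rule configuration - the tag names are hypothetical:

```json
{
  "rules": {
    "@nx/enforce-module-boundaries": [
      "error",
      {
        "depConstraints": [
          {
            "sourceTag": "scope:web",
            "onlyDependOnLibsWithTags": ["scope:web", "scope:shared"]
          },
          {
            "sourceTag": "scope:shared",
            "onlyDependOnLibsWithTags": ["scope:shared"]
          }
        ]
      }
    ]
  }
}
```

With this in place, an import from a `scope:web` library into, say, a mobile-only library fails linting, keeping architectural boundaries machine-enforced rather than documented in a wiki.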
Turborepo
Turborepo is relatively similar to Nx, but it is more tailored towards React and especially Next.js applications. It is a relatively new tool, but it has quickly gained popularity in the Next.js community, as Turborepo was acquired by Vercel, the company behind Next.js. Similar to Nx, Turborepo provides a rich set of features, including code generation, task scheduling and caching, and more. It is designed to be simple and easy to use, and it integrates seamlessly with Next.js and Turbopack, Vercel's build tool for Next.js applications. When compared directly to Nx, Turborepo is more opinionated, has a more limited set of features, and lacks module boundary enforcement and distributed task execution.
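Turborepo's task graph is configured in a `turbo.json` at the repo root; a sketch (Turborepo 2.x uses a `tasks` key, older versions used `pipeline`):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**", ".next/**"]
    },
    "test": {
      "dependsOn": ["build"]
    }
  }
}
```

As with Nx, `"^build"` means "build my dependencies first", and the declared `outputs` are what Turborepo stores in (and restores from) its cache.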
However, Turborepo is also backed by a for-profit company, Vercel, which is known for two things: their aggressive marketing and their expensive cloud offering. Next.js already makes it hard to avoid Vercel’s cloud offering as deploying Next.js to other platforms is not as straightforward as one might hope. Therefore, a careful consideration of the trade-offs is necessary when choosing to adopt Turborepo for your monorepo needs. If you already bought into the Next.js/Vercel ecosystem, Turborepo is a great choice to manage your monorepo. If you are not using Next.js, you might also want to consider Nx or Rush.js instead, as they are more framework and technology agnostic.
Turborepo is used by companies such as Vercel, Shopify, Linear and more.
Rush.js
Rush is a frontend monorepo tool that is designed for JS-based repositories. Rush is developed in-house at Microsoft and is used for some of Microsoft's products such as Azure SDK, OneDrive, Windows Store, and SharePoint. It uses symlinks to avoid cascading dependencies and is designed to be fast and efficient. Rush uses a cache to store the results of tasks, which can significantly improve the performance of your CI pipelines, and it even allows for affected builds, such that you only build those projects that have actually changed.
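Rush tracks the projects of the monorepo explicitly in a `rush.json` at the repo root; a sketch with hypothetical package names and illustrative version numbers:

```json
{
  "rushVersion": "5.100.0",
  "pnpmVersion": "8.15.0",
  "projects": [
    { "packageName": "@myorg/web-app", "projectFolder": "apps/web-app" },
    { "packageName": "@myorg/ui", "projectFolder": "libs/ui" }
  ]
}
```

This explicit project registry is a deliberate design choice: unlike glob-based workspace definitions, every project must be declared before Rush will manage it.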
However, Rush is not as widely adopted as Nx or Turborepo. Its community is fairly small and the documentation is not as comprehensive as one would hope. It is also not as flexible as Nx, and it lacks some of the advanced features that Nx provides, such as distributed task execution. Theoretically, it would be possible to achieve distributed task execution with Rush when paired with BuildXL, but this is not easy, as the documentation is not very clear on how to achieve this.
On the other hand, it is fair to say that Rush is the most risk-averse option compared to Nx and Turborepo, as the company behind Rush is Microsoft, which has different incentives than Vercel and Nx: Microsoft develops and maintains Rush because it is used for their own products. When choosing a monorepo tool with resilience and stability in mind, Rush is a good choice, but it may require more effort to set up and maintain compared to Nx or Turborepo.
Rush is used by companies such as Microsoft, HBO and Wix.
Bazel
Bazel is fairly different from the other tools mentioned so far, as it has been a general-purpose build tool since day one. Google developed Blaze, which is their internal version of Bazel, to build their own codebase and it has since been open-sourced as Bazel. Its primary focus is on performance and scalability, and it is designed to handle large codebases with many projects and many programming languages. Bazel uses a dependency graph to determine which tasks need to be executed, and it can cache the results of tasks to improve performance. It also supports distributed builds, which can significantly improve the performance of your CI pipelines.
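In Bazel, every package declares its build targets in a `BUILD` file written in the Starlark language. A rough sketch of a TypeScript library target using the community `aspect_rules_ts` ruleset - the rule and attribute names depend on the ruleset and its version, so treat this as illustrative:

```starlark
# libs/ui/BUILD.bazel - hypothetical TypeScript library target
load("@aspect_rules_ts//ts:defs.bzl", "ts_project")

ts_project(
    name = "ui",
    srcs = glob(["src/**/*.ts"]),
    deps = ["//libs/util"],  # dependency on another package in the repo
)
```

The explicit `deps` list is what gives Bazel its precise dependency graph, and also what makes its configuration so much more verbose than the JS-native tools.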
However, Bazel has a steep learning curve and it is not as easy to set up and use as Nx or Turborepo. It requires a lot of configuration and its flexibility can be a double-edged sword, as it can lead to complex and hard-to-maintain build configurations. Bazel is also not as widely adopted in the frontend community, as it is primarily used for backend and infrastructure projects.
Bazel might be a good choice for large polyglot codebases where the build system needs to scale with the codebase and handle multiple programming languages. However, if you are looking for a more frontend-focused monorepo tool, Nx or Turborepo might be a better choice.
Buck2
Buck2 is the least known monorepo tool on this list, but it is also one of the most powerful ones. It was developed by Facebook (now Meta) to build their own codebase and it has since been open-sourced as Buck2. It is designed to be fast and efficient, and it uses a dependency graph to determine which tasks need to be executed. Buck2 also supports distributed builds, which can significantly improve the performance of your CI pipelines.
The documentation is good, but the community is small and the learning curve is steep, mainly because it is primarily used inside Meta and is not as widely adopted in the community. The concepts of Buck2 are somewhat similar to Bazel, but it is vastly different from the other tools mentioned so far.
Note that Buck2 is the successor of Buck, the original version of the tool used by Meta. Buck2 is a complete rewrite of Buck, written in Rust.

Stefan Haas
Senior Software Engineer at Microsoft working on Power BI. Passionate about developer experience, monorepos, and scalable frontend architectures.