How I Resolved 15K Circular Dependencies
15,000 circular dependencies. 1,000+ projects. 7 million lines of code. One year to zero. Here is how we actually pulled it off.
This was, by a wide margin, the most challenging project of my career so far. Over the course of exactly one year, I led an initiative that took a large Nx monorepo from roughly 15,000 project-level circular dependencies down to zero. The codebase is around 7 million lines of code spread across more than 1,000 projects. The work pushed me technically in ways I did not expect, forced me to grow in my senior role, and tested my ability to lead a long, grinding effort across an entire organization. This post is the story of how we got there — the tools I built, the traps I fell into, and the patterns that actually worked.
A note on scope up front: when I say “circular dependency,” I mean project-level cycles — cycles in the Nx project graph, not file-level import cycles. Project-level cycles are a different beast. They break incremental builds, tank caching, destroy the meaning of ownership boundaries, and — most insidiously — can be completely invisible to the tooling that is supposed to catch them. That last part is where this story starts.
Problem #1: The cycles were invisible
Nx ships with an enforce-module-boundaries lint rule that can, in principle, detect circular dependencies. On paper this should have been enough. In practice, it detected almost nothing, and for a long time we believed our monorepo was structurally healthier than it actually was.
The reason was structural, and it is worth understanding in detail because I suspect a lot of teams running large custom-built Nx monorepos have the same problem without knowing it.
Nx uses tsconfig.base.json as the single source of truth for resolving TypeScript path aliases into projects. When Nx builds the project graph, it reads those aliases, maps each alias to a project root, and uses that mapping to turn every import '@org/some-lib' statement into a graph edge.
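For reference, this is roughly the shape Nx expects — a paths block in tsconfig.base.json mapping each alias to a project entry point (the project name and path here are illustrative):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@org/some-lib": ["libs/some-lib/src/index.ts"]
    }
  }
}
```

Every import of @org/some-lib anywhere in the workspace becomes an edge pointing at the project that owns libs/some-lib.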
Our monorepo had grown up with a custom build process. Each project had its own tsconfig with its own generated path aliases, and none of those aliases lived in tsconfig.base.json. To Nx, almost every import in the codebase resolved to nothing. The project graph was a ghost of the real one. Most of the cycles in the codebase were simply not represented, and enforce-module-boundaries could not enforce a boundary that it could not see. The circular dependencies were invisible.
Solution: synthesize a unified tsconfig.base.json
I wrote a tool that walked every project in the repo, harvested its generated path aliases, and temporarily merged them into a synthesized tsconfig.base.json. Nothing about the real build changed — the synthesized file was used only to feed Nx an accurate picture of the workspace. With that in place, Nx could finally build a project graph that reflected reality.
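The core of that tool is just a merge over every project's generated aliases. Here is a condensed sketch of that step, assuming the per-project tsconfigs have already been loaded from disk — the names and the last-write conflict policy are illustrative, not the actual implementation:

```typescript
// Shape of the pieces we care about in each project's generated tsconfig.
type Paths = Record<string, string[]>;

interface TsConfigLike {
  compilerOptions?: { paths?: Paths };
}

// Merge every project's path aliases into one map. Later projects win on
// conflicts — a simple last-write merge, sufficient for this sketch.
function mergePathAliases(projectConfigs: TsConfigLike[]): Paths {
  const merged: Paths = {};
  for (const config of projectConfigs) {
    const paths = config.compilerOptions?.paths ?? {};
    for (const [alias, targets] of Object.entries(paths)) {
      merged[alias] = targets;
    }
  }
  return merged;
}

// The synthesized tsconfig.base.json then embeds the merged map. Only Nx
// reads this file; the real build never sees it.
function synthesizeBaseTsConfig(projectConfigs: TsConfigLike[]): string {
  return JSON.stringify(
    {
      compilerOptions: {
        baseUrl: '.',
        paths: mergePathAliases(projectConfigs),
      },
    },
    null,
    2,
  );
}
```

The production version also had to walk the repo, parse each tsconfig, and clean up the synthesized file afterward, but the merge above is the part that made the graph truthful.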
The moment the tool ran, the graph exploded in size and complexity. Edges we had never been able to see suddenly lit up. We finally had ground truth.
And then the second problem hit.
Problem #2: The graph was too big to analyze honestly
Enumerating every circular dependency in a graph that size is, practically speaking, hopeless. A single cycle involving N projects has N rotational permutations. Cycles share edges, meaning one “bad edge” can participate in an enormous number of distinct cycles. A naive “find all cycles” algorithm is exponential in both time and space, and on our graph it was never going to finish. If we had been able to run it to completion, the number of cycles would almost certainly have been in the billions.
That is not a useful number. You cannot plot “billions” on a dashboard and watch it go down. We needed a number we could trust to move as we improved the codebase, not a number that was technically correct but impossible to compute or reason about.
Solution: fuzzy cycle detection
I traded completeness for tractability. Instead of enumerating every permutation of every cycle, I built a fuzzy detector that surfaced a representative set of cycles — enough to be a faithful measurement signal, small enough to compute in a reasonable amount of time and memory. It undercounts by design. It misses many permutations. That was fine. I did not need a perfect number. I needed a number I could compute every day, watch trend downward, and use as a shared artifact across the team.
The fuzzy detector reported roughly 15,000 cycles on the first run. That was our baseline. If we had counted every permutation, it would have been astronomically larger — but 15,000 was a number we could plan around.
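To make the "representative set" idea concrete, here is a toy version of the approach: a depth-bounded DFS that collapses rotational permutations, so A→B→C→A and B→C→A→B count as one cycle. This is a minimal sketch for illustration, not the production detector:

```typescript
type Graph = Record<string, string[]>;

// Rotate a cycle so it starts at its lexicographically smallest node,
// giving one canonical key per rotation class.
function canonical(cycle: string[]): string {
  const smallest = [...cycle].sort()[0];
  const i = cycle.indexOf(smallest);
  return [...cycle.slice(i), ...cycle.slice(0, i)].join('→');
}

// Bounded DFS from every node; the depth cap is the "fuzzy" part — it
// trades completeness for a result that finishes in reasonable time.
function findCycles(graph: Graph, maxDepth = 8): Set<string> {
  const found = new Set<string>();
  const walk = (start: string, node: string, path: string[]) => {
    if (path.length > maxDepth) return; // undercounts by design
    for (const next of graph[node] ?? []) {
      if (next === start) {
        found.add(canonical(path)); // dedupe rotations of the same cycle
      } else if (!path.includes(next)) {
        walk(start, next, [...path, next]);
      }
    }
  };
  for (const node of Object.keys(graph)) walk(node, node, [node]);
  return found;
}
```

Even this toy illustrates the key property: the count is stable and deterministic for a given graph, so it moves when the graph improves, which is all a dashboard needs.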
For the first time, we had visibility. We had data. We had a dashboard we could stare at during planning meetings and use to measure whether the effort was working.
Problem #3: Stopping the bleeding without slowing down developers
Having visibility was necessary but not sufficient. Before spending a single hour resolving existing cycles, I had to make absolutely sure we were not adding new ones. A sinking-ship refactor that takes on water faster than you bail it out is not a refactor — it is a very slow defeat.
The obvious move was snapshot testing in PR pipelines: commit today’s list of cycles as a snapshot, fail the build on any PR that grows it, force developers to fix their new cycle before merging.
This sounded clean. It was not.
The subtle trap was this: taking a completely valid new dependency could create new permutations of existing cycles, and those new permutations would show up in the snapshot as new cycles. In other words, a well-meaning PR that introduced a perfectly reasonable new edge between two projects could be blocked because that edge happened to complete a new rotation of an existing mess. The PR author did not create the cycle. They did not make anything worse. They just had the misfortune of adding an edge that participated in pre-existing tech debt.
If I had shipped that guardrail as-is, it would have been hated within a week. Developers would have — rightly — routed around it, added suppressions, or started a political fight to have it disabled. The whole initiative would have lost credibility.
The insight: not every edge in a cycle is “bad”
I sat with the problem for a while and noticed something important. A cycle is not caused by every edge in the chain. A cycle is caused by one — or occasionally a few — genuinely problematic edges. The rest of the edges in the chain are usually fine. They are normal, healthy dependencies that happen to participate in a cycle because of the one bad edge somewhere else in the loop.
So I flipped the model.
Solution: classified “bad edges” checked into the repo
Instead of snapshotting cycles, I maintained a checked-in list of classified bad edges, exactly one (or a small handful) per cycle. The pipeline would:
- Run the fuzzy detector on the PR.
- Ignore any newly detected cycle that passed through a classified bad edge.
- Only fail the PR if the new cycle contained no classified bad edge — meaning the PR had introduced a genuinely new problematic edge.
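The check itself is small once the classification exists. A sketch, assuming the detector returns cycles as ordered project lists and the checked-in file stores bad edges as "from→to" pairs (all names illustrative):

```typescript
// A cycle [A, B, C] has edges A→B, B→C, C→A.
function edgesOf(cycle: string[]): string[] {
  return cycle.map((from, i) => `${from}→${cycle[(i + 1) % cycle.length]}`);
}

// A cycle only fails the PR if none of its edges is already classified —
// meaning the PR introduced a genuinely new problematic edge rather than
// a new rotation of known tech debt.
function cyclesThatShouldFail(
  cycles: string[][],
  classifiedBadEdges: Set<string>,
): string[][] {
  return cycles.filter(
    (cycle) => !edgesOf(cycle).some((edge) => classifiedBadEdges.has(edge)),
  );
}
```

With B→C classified, a cycle A→B→C→A is waved through no matter which new edge completed it, while a brand-new X→Y→X cycle still fails the build.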
The beauty of this model is that legitimate new dependencies can never block a PR just because they rotate existing tech debt. The only thing the guardrail catches is a brand-new bad edge — which is exactly what we want to catch.
To turn this on, I had to classify one bad edge for every one of the 15,000 cycles. That took about two to three weeks of focused work, a lot of squinting at the graph, and a lot of conversations with domain experts across the org who could tell me which of several plausible edges in their area was the “real” offender. It was tedious, but it was finite.
When the pipeline check went live, not a single new project-level circular dependency was introduced to the repo from that day forward. The bleeding was stopped. Now we could focus purely on reducing the existing count.
Problem #4: Actually doing the work — 15,000 cycles, one at a time
There is no silver bullet for breaking a circular dependency. I wish I could tell you I found a magical codemod that fixed thousands of cycles overnight. I did not. Every cycle is a small design conversation, and someone has to have it.
That said, in practice every resolution falls into one of three categories:
1. Shared project (by far the most common)
Break the cycle by moving the depended-on code out of the target project and into a new shared project that both sides can depend on. This is the cleanest and most structurally honest approach.
In my experience, this works well in about 70% of cases.
2. Duplication
If the depended-on code is shallow — an enum, a constant, a small interface, a tiny type — duplicating it into the source project is often cheaper and cleaner than creating a whole new shared project. Used with discipline, this is not a sin. It is sometimes the correct answer.
3. Merging projects
Sometimes the boundary between the two projects is unclear, artificial, or a historical accident. In that case, merging them together is the honest answer. This only helps when the cycle is strictly between two projects — it does not scale to longer chains — but when it applies, it applies cleanly.
What the 70% actually looks like
“Move the code into a shared project” sounds simple, but the judgment call underneath it rarely is. Sometimes a few files were simply misplaced in the wrong project from the start, and the fix was to move them to where they belonged. Sometimes the cycle reflected a genuine architectural problem that required real refactoring of implementation logic, not just file shuffling. Those were the hardest cycles, and they were the ones where I had to lean heavily on domain experts.
Almost every durable fix required decoupling implementation from contracts and abstractions. Over time I came to think of Inversion of Control as the golden rule for both avoiding and resolving project-level cycles. If implementations depend on abstractions instead of on each other, cycles simply do not form in the first place.
The pattern that carried most of the weight: contracts projects
The single most valuable structural pattern — the one I applied over and over again — was one contracts project per implementation project.
A contracts project contains:
- Enums, constants, types, and interfaces.
- Interfaces for services.
- Injection tokens for those services.
The implementation project then depends on its own contracts project, and other implementations consume services through contracts, not through each other. That removes the entire category of “implementation project → implementation project” edges, which is where the overwhelming majority of our cycles lived. It is essentially Inversion of Control applied at the project-graph level, and it is the closest thing to a golden rule I found across the entire year.
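In code, the pattern shapes up roughly like this. Everything is collapsed into one file for illustration — in the real repo each section would be its own Nx project, and the tiny container stands in for whatever DI framework you actually use:

```typescript
// --- contracts project: types, interface, injection token ---
interface PricingService {
  priceOf(sku: string): number;
}
const PRICING_SERVICE = Symbol('PricingService'); // injection token

// --- implementation project: depends only on its own contracts ---
class DefaultPricingService implements PricingService {
  priceOf(sku: string): number {
    return sku.length * 10; // placeholder logic
  }
}

// --- composition root: a minimal stand-in for the DI container ---
const container = new Map<symbol, unknown>();
container.set(PRICING_SERVICE, new DefaultPricingService());

// --- consumer project: imports the contracts project only ---
function checkout(sku: string): number {
  const pricing = container.get(PRICING_SERVICE) as PricingService;
  return pricing.priceOf(sku);
}
```

The consumer's project-graph edge points at the contracts project, never at DefaultPricingService's project, so the implementation-to-implementation edge that would have closed a cycle simply never exists.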
Once teams saw this pattern applied to a few of their neighbors, they started applying it themselves. That scaling effect ended up being just as important as the technical pattern itself.
The non-technical half of the project
Resolving 15,000 cycles in a monorepo with more than 1,000 projects is not a thing one engineer does alone. I want to be very clear about that. Much of this project — probably most of it, honestly — was organizational, not technical.
- Teaching. I ran presentations on the tooling, on the classification system, on the refactoring patterns, and on why we were doing this at all. Engineers across the org had to understand the patterns well enough to apply them in their own domains.
- Influence without authority. I did not own most of the code I needed changed. I had to convince other teams that this work mattered, that it was worth their time, and that the patterns I was recommending were the right ones. Some of those conversations were easy. Many of them were not.
- Domain expertise is distributed. For the hardest cycles — the ones that touched parts of the codebase I did not know well — I had to sit down with the people who did know those areas and figure out the right fix together. Classifying bad edges in the first place depended on that same collaboration.
- Measurement discipline. The fuzzy cycle count, plotted over time, became a shared artifact across the organization. A number that only goes down, week after week, is a surprisingly powerful motivator. It also made it much easier to defend the investment when prioritization questions came up.
It took exactly one year to get to zero. A lot of that year was spent in conversations, not in code.
What I took away from this
- If your tooling cannot see the problem, fix the tooling first. Our cycles were invisible to Nx because tsconfig.base.json did not reflect our real path aliases. No amount of effort on resolution would have mattered without first correcting the project graph. Always validate that the tool you are relying on is actually seeing what you think it is seeing.
- Perfect measurement is the enemy of any measurement. Fuzzy detection was the unlock. A directional signal you can compute in minutes, every day, beats a precise signal you can never compute.
- Guardrails must not punish valid work. The classified-bad-edge allow-list was the difference between a policy developers respected and one they would have routed around. If you want a rule to hold for a year, it cannot block PRs that are doing nothing wrong.
- Contracts projects and Inversion of Control are the real cure. Every durable fix eventually looked like some version of “implementations depend on abstractions, not on each other.” If I were starting a large Nx monorepo from scratch today, I would bake this pattern in from day one.
- Large refactors are a leadership exercise. The technical design mattered, but the initiative only finished because enough people across the org chose to help. Teaching, influencing, and sustaining momentum over a full year was just as much work as any of the code.
- Stop the bleeding before you start the surgery. Making sure no new cycles could land was, in retrospect, the single most important step. Without it, the year of effort would have been an endless treadmill.
Zero is a nice number to put on a slide. The real win, though, is not the zero itself. It is that the monorepo is now structurally healthier, builds cache more predictably, ownership boundaries actually mean something, and the team has a shared vocabulary — contracts projects, IoC, classified bad edges — for keeping it that way. That is the outcome I am proudest of.
This was the hardest project I have ever worked on. It was also the one I learned the most from.