For years, the industry moved away from monorepos toward microservices and smaller repositories. Recently, however, the conversation has started to shift again, and AI may be one of the reasons. As AI tools become part of everyday development workflows, the way engineers interact with large codebases is changing.
Piotr Kukiełka has spent more than a decade working with large monorepos and developer tooling in organizations with thousands of engineers. In this conversation, he shares his perspective on how large codebases work in practice and why AI might influence how companies think about monorepos.
You’ve been working with large codebases for quite a long time. What kind of environments have you worked in?
Piotr: Over the years, I’ve worked in a couple of very large engineering organizations.
In one case, the organization had around 2,000 developers, and the codebase was maintained as a monorepo. It was a very active repository — there were over 1.5 million closed pull requests, and typically around 10–15 thousand pull requests open at any given time.
Earlier in my career, I also worked in another large organization where the monorepo contained more than 10 million lines of code and was used by hundreds of developers, possibly more than a thousand.
In both cases, I worked on teams responsible for developer tooling: building systems that help developers work efficiently in such large codebases.
More recently, my work has been focused on integrating AI tools into the development workflow and exploring how those tools can support developers working with large repositories.
So overall, I’ve been working with monorepos for more than ten years now.
From your perspective, what kinds of companies actually need advanced monorepo tooling?
Piotr: Mostly large organizations.
Smaller companies sometimes technically have something that looks like a monorepo — maybe a single repository where most of the code lives — but they usually don’t need specialized tooling around it.
The real challenges appear when the scale grows to the point where standard open-source tools start struggling. At that stage, companies begin building internal platforms and tooling teams dedicated to developer productivity. And economically, it only makes sense at a certain scale.
If an organization has 500, 1,000, or even 2,000 developers, and a tooling team improves productivity by even a few percent, the impact is huge. So monorepo infrastructure becomes really important, mostly in large engineering organizations.
How do AI assistants actually work in very large codebases?
Piotr: One of the main challenges is context.
When you work in a monorepo with millions of lines of code, it’s not enough for an AI assistant to see just the file you’re editing. It needs to understand the broader context: dependencies, related components, sometimes even how different parts of the system interact. That’s where large repositories become interesting from an AI perspective.
If everything is in one place, it becomes easier to build that context. In smaller repositories or very fragmented systems, the AI often doesn’t have access to all the relevant information, which limits how helpful it can be.
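One way to picture this: when everything lives under a single root, a tool can resolve a file’s in-repo dependencies and hand them to the model as extra context. Here is a minimal, deliberately simple sketch in Python; the function name and repo layout are illustrative assumptions, not any specific tool’s API:

```python
import ast
from pathlib import Path

def collect_context(repo_root: str, entry_file: str) -> list[Path]:
    """Return the entry file plus any in-repo modules it imports.

    A one-level resolver: parse the entry file's import statements
    and keep only those that map to .py files inside the repository.
    """
    root = Path(repo_root)
    entry = root / entry_file
    tree = ast.parse(entry.read_text())

    context = [entry]
    for node in ast.walk(tree):
        # Handle both `import foo` and `from foo import bar`.
        names = []
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        for name in names:
            candidate = root / (name.replace(".", "/") + ".py")
            if candidate.exists():
                context.append(candidate)
    return context
```

In a fragmented, multi-repo setup, the `candidate.exists()` check would fail for anything living in another repository, which is exactly the missing-context problem described above.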
So in a way, the structure of the codebase directly affects how useful AI tools can become.
Over the years, the industry has moved between monorepos and smaller repositories. How do you see that trend today?
Piotr: It’s interesting because the trend has definitely shifted over time.
About 15 years ago, monorepos were quite popular in many organizations. Later, the industry moved toward microservices and splitting systems into smaller repositories, partly because large codebases became difficult for humans to manage. Breaking systems into smaller parts made them easier for teams to understand and maintain.
But recently, it feels like monorepos are becoming more relevant again. One of the reasons might be AI.
Why could AI change how companies think about monorepos?
Piotr: AI works very differently from humans when it comes to navigating large codebases.
For a developer, understanding a system that spans hundreds of files and components can be difficult. That’s why splitting systems into smaller repositories made sense. But for AI systems, reading hundreds of files across many parts of a codebase isn’t necessarily much harder than reading a few.
AI can build context across a system much faster. Because of that, the trade-offs change.
If related pieces of functionality are spread across many repositories, an AI tool might not be able to modify them together in a single change. In a monorepo, everything is already in one place, which makes those kinds of cross-cutting changes much easier.
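To make the atomicity point concrete, here is a toy sketch of a cross-cutting rename, assuming a repo of Python files (a real tool would use syntax-aware refactoring rather than text replacement; this is only an illustration):

```python
from pathlib import Path

def rename_everywhere(repo_root: str, old: str, new: str) -> int:
    """Apply a cross-cutting rename across an entire repository.

    In a monorepo, the definition and every call site live under one
    root, so a single pass can update them all and the result can
    land as one atomic commit. Returns the number of files changed.
    """
    changed = 0
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text()
        if old in text:
            path.write_text(text.replace(old, new))
            changed += 1
    return changed
```

Spread the same functionality across several repositories and this one change becomes several coordinated pull requests that cannot merge atomically, for a human or for an AI tool.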
So in a way, AI might actually strengthen the case for monorepos?
Piotr: Yes, that’s definitely one possible direction. If AI tools become a standard part of development workflows, having a single large codebase might make those tools much more effective.
Instead of dealing with multiple repositories and complicated dependencies between them, the AI can work directly with the entire system. That could shift the balance a bit back toward larger repositories.
Looking ahead, how do you think AI will change the way developers work with large codebases?
Piotr: I think we’re still at the beginning of that shift.
Right now, AI tools mostly help with writing code or navigating files. But over time, they’ll probably get better at understanding entire systems. When that happens, the way we structure codebases might change as well.
Some of the architectural decisions that made sense when systems were optimized primarily for humans might not be optimal when AI becomes a regular collaborator in development. So it will be interesting to see how both things evolve together: the tooling and the structure of large codebases.