We’re seeing a paradox: more code, faster delivery, but also more time spent on review. Are we actually getting faster?
We are faster, but we’ve moved the bottleneck.
Coding itself is no longer the problem. With LLMs, generating code can be cheap and fast. The issue is everything that comes after: validation, review, testing. Reading code is almost as hard as writing it, especially when a lot of it is AI-generated.
What this means in practice is that the old definition of a “good engineering setup” is changing. Things like proper linting, formatting, and test discipline are no longer nice-to-haves. They’re prerequisites. If you don’t automate those, your review process just collapses.
And honestly, a lot of code review today is a total waste. Half of the comments are about things that should have been automated - formatting, naming, and minor style issues. If your engineers are still debating tabs vs. spaces, you’re already losing.
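Most of that automatable feedback can be enforced before a human ever looks at the change. A minimal sketch of a CI gate, assuming a Python stack - the `ruff` commands are just examples, substitute whatever linter and formatter your team uses:

```python
import subprocess
import sys

# Example checks; swap in whatever linter/formatter your stack uses.
CHECKS = [
    ["ruff", "check", "."],              # lint: naming, unused imports, etc.
    ["ruff", "format", "--check", "."],  # formatting: fails if any file would change
]

def run_checks(checks=CHECKS):
    """Run each check; stop at the first failure so CI can reject the change."""
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            return False
    return True
```

Wire something like this in as a required status check, and reviewers simply never see formatting or style comments again.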
The real shift is this: humans should focus on architecture, edge cases, and system-level thinking. Everything else should be pushed to automation or LLMs.
There’s a lot of investment in AI tooling, CI/CD, and build systems. Where do you see the highest ROI?
The highest ROI is still in the fundamentals, especially building systems and CI.
If you optimize your build pipeline properly - caching, incremental builds, the right tooling - you can drastically cut costs or scale your throughput without increasing spend. That’s the most immediate, measurable gain.
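The caching gain comes from fingerprinting a build step's inputs and skipping the work entirely on a hit. A toy sketch of that idea - the names and the in-memory cache are illustrative, not any real build system's API (tools like Bazel persist this on disk or in a shared remote cache):

```python
import hashlib
import json

# In-memory cache mapping input fingerprints to build outputs.
_cache = {}

def fingerprint(sources, config):
    """Hash everything that can affect the output of a build step."""
    h = hashlib.sha256()
    for src in sorted(sources):
        h.update(src.encode())
    h.update(json.dumps(config, sort_keys=True).encode())
    return h.hexdigest()

def build_step(sources, config, compile_fn):
    """Run compile_fn only when its inputs changed; a cache hit does no work."""
    key = fingerprint(sources, config)
    if key in _cache:
        return _cache[key], True   # cache hit
    output = compile_fn(sources)
    _cache[key] = output
    return output, False           # cache miss: compiled from scratch
```

The same fingerprint-then-skip logic is what makes incremental and remote-cached builds cheap: unchanged inputs cost a hash lookup instead of a compile.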
But the bigger impact is time-to-market. If your pipeline is slow, everything slows down: context switching, waiting for reviews, reworking changes. It compounds. I’ve seen changes that should take days stretch into months just because they kept getting stuck in merge queues.
A good analogy is a parking lot. Even if there are free spots, if it takes you 30 minutes to find one, the system is broken.
And there’s another angle: talent. If developers experience the power of AI but their environment can’t support it, they just get frustrated. And the best people won’t stay in that environment.
Where do companies tend to overspend?
On tooling without fixing the environment.
I see companies investing heavily in AI training, copilots, and workshops, but if their SDLC is not ready, that money is wasted. If your tests are flaky, your pipelines are slow, and your setup is inconsistent, AI just amplifies the chaos. It doesn’t fix it.
There’s a very strong pattern: organizations that already have good engineering environments are the ones benefiting the most from AI. The others struggle.
Let’s talk about architecture. With AI handling complexity better, does the monolith vs. microservices debate change?
It absolutely does.
Microservices were, to a large extent, a workaround for human limitations. We decomposed systems because we couldn’t understand large ones. But now LLMs give us a kind of superpower. They can handle context at a scale that humans struggle with. And that changes the equation.
Monoliths were always simpler in terms of deployment and cost. The problem was complexity. Now that complexity is becoming more manageable.
At the same time, microservices introduce their own problems, especially around boundaries and communication. Loose coupling sounds great, but it’s often harder to reason about. You see APIs, not the full picture. With a well-structured monolith, you can have both: internal flow and clear boundaries where needed. And LLMs can actually understand the whole system.
So yes, I think we’ll see a shift back toward modern, modular monoliths.
Does that mean companies that invested in microservices made a mistake?
Not necessarily.
Microservices still make sense in certain cases, especially in highly distributed systems or heavy data processing. The real issue is that the industry overcorrected. Everyone moved to microservices, even when it didn’t make sense.
Now the opportunity is to re-evaluate with fresh eyes. And importantly, migration has never been easier. With AI, you can prototype architectural changes in weeks, not years. You don’t need to commit upfront. You can experiment.
Take a part of your system, fold it back into a monolith, and see what happens. That kind of iteration is now possible in a matter of days, not years.
What’s the biggest “aha moment” for organizations when it comes to AI, developer productivity, and the SDLC?
That their setup can’t keep up. We’re seeing situations where developers suddenly become 3–5x more productive with AI, and the rest of the system just breaks.
CI pipelines get overloaded. Merge queues explode. Costs spike because of inefficient workflows and token usage. It’s a series of wake-up calls: Why is our CI on fire? Why can’t we handle this many PRs? Why are costs suddenly 3x higher?
And at the same time, fixing these problems is easier than ever. You can prototype solutions quickly, validate them in weeks, not months. That’s the shift - not just speed, but the ability to experiment.
What does a modern developer experience stack look like today?
It’s less about specific tools and more about properties: fast, reliable CI/CD; deterministic builds with strong caching; high test discipline (no flaky tests); automation of all repetitive review tasks; and the ability to prototype and ship quickly.
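The "no flaky tests" property can be enforced mechanically: rerun a suspect test several times and refuse mixed outcomes. A minimal sketch of that quarantine check - `classify_test` is a hypothetical helper, not a real test-runner API:

```python
def classify_test(test_fn, runs=5):
    """Rerun a zero-argument test several times and classify the result.

    Returns "pass", "fail", or "flaky" (mixed outcomes across reruns).
    """
    outcomes = set()
    for _ in range(runs):
        try:
            test_fn()
            outcomes.add(True)
        except AssertionError:
            outcomes.add(False)
    if outcomes == {True}:
        return "pass"
    if outcomes == {False}:
        return "fail"
    return "flaky"
```

In practice CI would run newly failing tests through a check like this and quarantine anything classified as flaky, rather than letting it block the merge queue.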
Tools like Bazel are becoming more accessible now because LLMs can help with the complexity. What used to require deep expertise is now easier to adopt.
But the key is this: your system must allow you to build, test, and validate ideas quickly in a near-production environment. If anything in your SDLC blocks that, it’s your top priority to fix.
If you were designing an engineering setup from scratch today, what would it look like?
Small, highly autonomous teams - 3 to 4 people - cross-functional, moving fast.
On the process side: prototype-first. Don’t start with specs. Start with something working, validate, iterate. On the platform side: a strong build system with aggressive caching, scalable CI/CD, the ability to deploy quickly and safely, and infrastructure that can handle high experimentation.
And importantly, don’t abandon DevOps maturity. Even if we move toward monoliths, we still need robust platforms to ship fast.
What’s the hardest part for companies when it comes to adopting AI in their SDLC?
Realizing that developer productivity is a first-class problem. Most organizations still treat it as a side concern. But if software is your product, then your development environment is your factory.
And every factory in the world has a team focused on efficiency. Software should be no different. The companies that understand this and invest in developer productivity will move ahead very quickly. The rest will struggle to keep up.
The hardest part is realizing how big this shift actually is. Most companies still treat AI as an incremental improvement. It’s not. It’s a generational change.
The closest example I can think of is from military history: when HMS Dreadnought was introduced, it made every existing battleship obsolete overnight. With all existing fleets suddenly obsolete, the US, Germany, and Japan were able to catch up with the Royal Navy - something that, even a year before HMS Dreadnought, was considered impossible.
And something similar is happening now. AI is changing the game for software development. Your organization needs to adapt, but at the same time, it is your best chance to catch up with the market leaders.
If you had one piece of advice for engineering leaders adapting their SDLC to AI, what would it be?
Start thinking about developer productivity as your core business objective. Because if your environment can’t keep up with AI, all that extra speed just turns into bottlenecks, costs, and frustration.