Everyone is generating code these days, but is it enough to optimize the entire Software Development Life Cycle? Artur Skowroński, our Head of Application Development, talks about the most common issues enterprises have stumbled upon over the last few years, and how VirtusLab is working to remove them.
The Problem of Missing Knowledge
Before we explain what VISDOM is, can you tell us more about the context behind it?
Working with clients, we keep seeing the same pattern. Everyone's talking about the AI revolution in software development as if it's a tooling problem. It's not. It's a paradigm shift, but most organizations are treating it like a plugin upgrade.
At VirtusLab, we've spent years creating developer tooling while working with large, mature codebases (the kind where git blame reads like an archaeological dig). And here's the thing nobody wants to hear: the problems AI is supposed to solve? They're the same problems we had before. Poor observability into changes. Missing context. Tribal knowledge trapped in people's heads. AI didn't create these problems - it just finally made them impossible to ignore.
AI tools work much better when the entire codebase is AI-native. But that's hard to achieve, because there's no transparency into what's actually changing and why. The tacit knowledge that held everything together? It was never documented because it never had to be. The proper foundations are simply missing.
What do problems with tacit knowledge look like in practice?
The longer you work with a problem, the better you understand it, even if you can't fully articulate what you know. And historically, every project had its "local guru", someone who knew all the little mechanisms and details of the project. A lot of past decisions and code changes lived only in that person's memory, invisible to anyone new.
We saw an example of this with one of our retail clients. They had a location hierarchy of parent and child nodes, where some locations were assigned as their own parent. From the outside, it looks like a bug. Twenty years later, when we were building a new API, we discovered it was actually a deliberate workaround. Historically, only parent locations could fulfill deliveries, so someone assigned the child to itself as a parent to unlock that capability. The bug became a feature. An AI agent coming into that codebase would have absolutely no way of knowing any of this, especially if the “local guru” had been reassigned to a different project or had changed jobs.
How does this connect to working with AI agents specifically?
It is really difficult to extract tacit knowledge like that workaround for our retail client. Over the years, we’ve become quite good at it, using dedicated tools that predate AI. But an agent only has access to a snapshot of the codebase. It can’t know what we know.
There is also the matter of cost.
An AI agent can read git history and look at historical data, but feeding the entire codebase through a model on every task is inefficient and expensive. Approaches like Agent Manifestos (Agents.md, Claude.md) don't really solve this structurally.
Our philosophy at VirtusLab is that agents should operate on deterministic data. After all, a project's history is deterministic; it already happened. Rather than reloading it from scratch every single time, you extract it once, process it, and make it available to the agent as a semantic database.
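The idea of extracting history once into a queryable store can be sketched as follows. This is a hypothetical illustration only: the `Commit` fields and the `HistoryStore` API are assumptions, and real semantic retrieval would use embeddings rather than the naive keyword overlap used here for brevity.

```python
from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    message: str
    rationale: str  # e.g. mined once from PR descriptions or linked issues

class HistoryStore:
    """Toy stand-in for a semantic database of project history."""

    def __init__(self, commits):
        self.commits = list(commits)

    def query(self, question: str, top_k: int = 3):
        # Score each commit by keyword overlap with the question
        # (a real system would compare embedding vectors instead).
        words = set(question.lower().split())
        scored = [
            (len(words & set((c.message + " " + c.rationale).lower().split())), c)
            for c in self.commits
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [c for score, c in scored[:top_k] if score > 0]

# Extracted once, then reused by agents on every task:
store = HistoryStore([
    Commit("a1b2c3", "assign child location as its own parent",
           "workaround: only parent locations could fulfill deliveries"),
    Commit("d4e5f6", "bump logging dependency", "routine upgrade"),
])

hits = store.query("why is a location its own parent")
```

An agent can then pull only the handful of relevant history records into its context, instead of re-reading the entire git log on every task.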
Verifying AI-Generated Code
So that explains the problem with the missing context. What other problems does VISDOM address?
One of them is verifying the code. Even if AI coding becomes a solved problem and models get good enough to generate solid code reliably, software engineering as a discipline won’t radically change. We will still need verification, testing, deployment, and validation.
Historically, a senior developer could produce around 100 lines of code per day, not because they typed slowly, but because every line of code had to be verified and tested.
Maybe it’s a matter of finding the right level of constraint on AI-generated code?
Let me use an analogy here. In technical drawing, there is a principle called minimum dimensioning. You annotate a part with the minimum number of dimensions needed to manufacture it, no more, because the machine needs some degrees of freedom. We're thinking about code generation the same way: what's the minimum set of constraints we can impose without killing the LLM's creativity? Too many constraints and your project becomes overdimensioned. Too few and you lose control.
Are there any other problems that you think need to be solved?
In many organizations, the infrastructure becomes a bottleneck. Scaling with AI isn’t fundamentally different from scaling from a small team of developers to a larger one. However, if that happens on insufficient SDLC infrastructure, it creates three new problems:
- CI/CD queues get longer and longer
- Code verification becomes more expensive and slows everything down
- Most of the speed gains from code generation get lost at the deployment stage
Human Oversight
Where does the human fit into all of this?
If a human engineer has to review every piece of code, it slows down the agent, which has to wait, and that hurts the ROI. On the other hand, removing verification entirely means growing risk, especially in regulated industries like banking under DORA.
The answer is classification algorithms. They determine which parts of the code or which PRs actually need human attention, and which can be verified semi-automatically or validated directly in production. Humans should be pulled in only where they're genuinely needed.
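A minimal sketch of such triage might look like the following. The rules, thresholds, and field names here are illustrative assumptions, not VISDOM's actual classification algorithm, which would likely combine many more signals.

```python
# Hypothetical PR triage: route each change to the right level of oversight.

def classify_pr(files_changed: int, lines_changed: int,
                touches_sensitive_path: bool, has_tests: bool) -> str:
    """Decide how much human attention a PR needs (illustrative rules only)."""
    if touches_sensitive_path:
        return "human-review"      # security/compliance-critical code always escalates
    if lines_changed > 500 or files_changed > 20:
        return "human-review"      # large blast radius warrants a human
    if has_tests and lines_changed <= 50:
        return "auto-validate"     # small, tested change: machine verification only
    return "semi-automatic"        # lightweight reviewer spot-check

level = classify_pr(files_changed=2, lines_changed=30,
                    touches_sensitive_path=False, has_tests=True)
```

The design point is that the default path is automated, and humans are an escalation tier rather than a mandatory gate on every change.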
And when agents aren't actively coding, their compute power can be redirected to background tasks: paying down technical debt, upgrading dependencies, running security spike checks, and monitoring for leaks.
Not every enterprise can just flip a switch and go fully autonomous. How do you handle that?
You're right: large banks and enterprises operate under regulatory obligations like DORA that require genuine control over the software development process. You can't just parachute in full agent automation. The tool's maturity has to match the organization's maturity.
VirtusLab has mapped out the ecosystem of capabilities at each stage and can advise on the optimal next steps, both for managers and individual developers, so the organization can evolve gradually, without the kind of revolution that almost always ends in failure.
Auditability
Enterprise organizations always ask about auditability. How do you address that?
Every large enterprise needs to know what happened, why, and when. That's why agentic tracing is a core part of what we're building - it records the complete history of agent actions. By design, it provides full auditability.
This covers tracking specific prompt calls, persisting the agent's reasoning loop, all the way to agents demonstrating specific features themselves. We live in times where a human may have never even manually "clicked through" the code at all, which is why we're also working on systems that allow agents to demo code in a way that's accessible to a human operator.
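As a rough illustration, an append-only trace log might be structured like this. The record fields and class API are assumptions for the example, not VISDOM's actual tracing schema.

```python
import json
import time

class AgentTrace:
    """Append-only record of everything an agent did, for later audit."""

    def __init__(self):
        self.events = []

    def record(self, step: str, detail: dict) -> None:
        # Each event is timestamped and never mutated after the fact.
        self.events.append({"ts": time.time(), "step": step, "detail": detail})

    def export(self) -> str:
        # Serialize the complete history for auditors or compliance tooling.
        return json.dumps(self.events, indent=2)

trace = AgentTrace()
trace.record("prompt", {"model": "some-model", "tokens": 812})
trace.record("reasoning", {"summary": "chose to modify the parser"})
trace.record("action", {"type": "open_pr", "pr": 42})
```

Because every prompt call, reasoning step, and action lands in the same ordered log, an auditor can replay exactly what the agent did and why, without relying on anyone's memory.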
VISDOM as a Solution
So what exactly is VISDOM?
VISDOM is a solution for organizations working with AI-driven code generation. It aggregates everything VirtusLab has learned from working with tooling on mature, complex codebases, from large-scale monorepo work, and from building developer tools going back to Eclipse, through IntelliJ, all the way to VS Code. The main components of VISDOM are:
- Historical knowledge extraction and management of a semantic database of project context
- Verification and CI/CD infrastructure, optimized for the high frequency of agent-generated changes
- PR and code review classification, automatically determining the level of human oversight required
- Agent tracing, full auditability of agent actions
- Organizational maturity mapping, a framework for assessing where an organization is in its AI adoption journey and what its optimal next steps are
Editor’s note: to learn about how VISDOM would fit into your organization specifically, please contact us at: https://virtuslab.com/services/visdom