Visdom: AI Works Great. At Level Four.
AI adoption gaps stem from a missing common vocabulary. A maturity matrix with five levels and four perspectives helps organizations honestly assess where they truly stand.

This practical guide demonstrates how to implement sandboxed LLM coding agents using Agent Sandbox. Learn the complete setup process, from initialization and runtime configuration to managing network policies and handling authentication. Discover advanced patterns for Java projects, IDE integration, and security considerations for safe AI-assisted development workflows.

This is post #4 in The Agent-Ready SDLC series. In post #1 we laid out the Ferrari-in-a-Fiat-500 problem - the engine is great, the chassis isn't. In post #2 we covered the first bottleneck: context. In post #3 we covered the second: feedback loops. Now we're at the third piece - and it's the one nobody wants to talk about.

LLM coding agents have moved fast from cloud demos to tools running on developer workstations. They don't just suggest code anymore. They execute it. They start shells, install packages, edit repos, run tests, and sometimes open PRs - all with the same permissions you have. In the first part of the miniseries, Jakub Bocheński looks at context, motivation, and the available sandboxing tools.

This is post #3 in The Agent-Ready SDLC series. In post #1 we laid out the Ferrari-in-a-Fiat-500 problem - the engine is great, the chassis isn't. In post #2 we covered the first bottleneck: context. Now we're at the second bottleneck - the one that sits between your agent and reality.

As AI tools take over more and more of the actual coding work, a new question emerges: who's watching what they do? Visdom Governance is a tool designed to bring that control back. Krzysztof Grajek, Principal Software Developer at SoftwareMill and the lead engineer behind Visdom Governance, talks about why the rise of AI-generated code demands a completely new approach to trust, auditability, and documentation.

AI-generated code creates cognitive debt as developers accept code they don't understand. Learn how this hidden risk threatens teams and how to fight back.

As AI accelerates code generation, many teams are discovering that speed gains often come with hidden costs in review, validation, and complexity. We sat down with Krzysztof Romanowski to unpack what’s really happening inside modern engineering organizations.

Open any README in your repository. That flagship one. The one that's 800 lines long with a "Getting Started" section written in 2022. Read it with fresh eyes - as if you were a new developer, or better yet - as an AI agent who's never been to a standup, never seen Slack, never heard the legend of why we don't touch the InvoiceReconciler class in the payment service. Now ask yourself one question: based on this README, can you safely modify anything in this service?

While API-based LLMs are great for rapid and easy development, they can be less secure and more costly in the long term for load-intensive applications. The solution is a Small Language Model (SLM), self-hosted and fine-tuned on the downstream task. This article presents a case study of Supervised Fine-Tuning (SFT) of an SLM on an invoice-processing task. It shows that while SLMs have higher upfront investment costs, they are faster, cheaper, and more secure in the long term, especially for high-load applications.

You've probably heard about the first METR study from July 2025 - it made the rounds at every conference and in every newsletter. 16 experienced open-source developers, a proper randomized controlled trial (not a vendor survey), and the result: 19% slower with AI. In this article, Artur argues that the problem lies in the environment, not the model. Read on to find out why.

Monorepo keeps coming up in conversations about large-scale software architecture. For some organizations, it’s a way to bring order to a growing ecosystem of applications. For others, it raises a lot of concerns. We spoke with Bartek Sądel, an expert who works with enterprise monorepos, about how this approach works in practice, what questions companies ask, and what the real benefits and challenges are.
