I run the Application Development department at VirtusLab, and over the past year I've watched something deeply uncomfortable unfold across the industry. Teams are shipping faster than ever, dashboards are green, velocity metrics are up, and yet the same categories of failure keep appearing, over and over, regardless of stack, company size, or budget.
A startup built entirely with AI tooling, "zero hand-written code," gets hacked within days of going live. API keys drained. Users bypassing subscriptions. The AI-generated code looked functional. It was utterly defenseless. A product manager uses an AI CLI to reorganize files and watches months of work disappear in minutes because the model "hallucinated success" and renamed everything into oblivion. An open source project accepts an AI-generated pull request that passes review, ships to users, and steals GitHub tokens and API keys from over 1,400 developers.
These aren't edge cases. They're a pattern. And the pattern has a name: vibe coding gone to production.
The Illusion of Speed
Tomek Lelek and I wrote Vibe Engineering because we kept seeing the same mistake everywhere: teams confusing the speed of generation with the speed of delivery. Vibe coding, that intuition-first, prompt-driven mode where you accept what the AI gives you without deep verification, is genuinely valuable. It's the digital sketchpad. It's how you turn a foggy idea into a working interface in an afternoon. I use it. You probably should too.
But there's a cliff at the end of that road, and most teams don't see it until they've already gone over. The code that comes out of vibe coding is brittle, opaque, and loaded with hidden assumptions. The developer often doesn't fully understand the implementation choices the AI made. The AI has no built-in model of truth: it's a probabilistic system predicting plausible next tokens, and "plausible" and "correct" are not the same word. The model has no concept of the irreversibility of rm -rf, or of what it means when it cheerfully reports "done!" on a file operation that didn't actually succeed.
Shipping that code without a verification layer isn't speed. It's debt. We call it trust debt: the accumulated, invisible cost of unverified AI output landing in production.
The Myth of the Next Model
One thing I hear constantly: "The next version will be better." And it's true, each generation of models delivers real gains. But the gains are incremental now, not revolutionary. Claude Opus 4.7 is better than previous Claude versions. GPT-5 is better than GPT-4o. None of them have eliminated hallucinations. None of them have removed the need for human verification. The gap between "impressive demo" and "production safe" remains stubbornly wide.
The scaling-law era of exponential improvements is running into structural limits. Easy gains from public web text are shrinking. Labs are increasingly relying on synthetic data, but a model can only teach its successor what it already knows. For engineering teams, the practical implication is clear: the process you build today will matter more than the model you upgrade to tomorrow.
Vibe Engineering: The Discipline
So what's the answer? Not less AI, but more discipline around it. That's what vibe engineering is.
If vibe coding is alchemy, chaotic, fast, unreliable, then vibe engineering is applied chemistry. Same raw curiosity, but with a periodic table, a lab notebook, and a co-worker who'll yell at you for skipping the safety goggles. It wraps the probabilistic core of LLMs in a deterministic shell of human intent.
The key insight is this: the specification is the source of correctness, not the model. We proved this in the book with a simple experiment. We asked three leading models, Gemini, GPT-5, and Claude Sonnet, to implement an ISBN-13 validator using the same human-authored test suite as the contract. All three passed. Their code looked different. The tests didn't care. The engineering work lay in defining the contract, not in fine-tuning prompts for any particular model's quirks.
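To make that experiment concrete, here is a hedged sketch of what a test-suite-as-contract might look like for an ISBN-13 validator. The test cases and the name is_valid_isbn13 are illustrative, not the book's actual suite; the point is that any implementation satisfying the assertions fulfills the contract, regardless of which model wrote it.

```python
# Contract: a human-authored test suite that any implementation must pass.
# Standard ISBN-13 check: weight the 13 digits alternately 1 and 3;
# the weighted sum must be divisible by 10.

def is_valid_isbn13(isbn: str) -> bool:
    """One possible implementation; the contract below doesn't care how."""
    digits = [c for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(digits))
    return total % 10 == 0

def check_contract(validate) -> None:
    # The contract is these assertions, not any particular implementation.
    assert validate("9780306406157") is True       # valid check digit
    assert validate("978-0-306-40615-7") is True   # hyphens are ignored
    assert validate("9780306406158") is False      # wrong check digit
    assert validate("12345") is False              # too short

check_contract(is_valid_isbn13)  # swap in any model's implementation here
```

In the book's experiment, three structurally different implementations all satisfied the same contract; the engineering value lived entirely in the assertions.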
This reframes the developer's role entirely. You're no longer a code author. You're a system designer and validator. You're designing the factory that produces the software, not hand-crafting every part. An experienced engineer's value shifts up one level of abstraction, and that is precisely where AI amplifies humans most.
Context, Agents, and the Road Ahead
The book also covers two other dimensions that I think are underappreciated.
Context quality is everything. Garbage in, garbage out isn't a cliché; it's the defining constraint of AI-assisted development. The "context vacuum" (not giving the model enough to work with) and "context rot" (giving it too much, so the relevant signal gets buried) are both failure modes we document in detail. Tools like Model Context Protocol are changing this, but they also introduce governance questions that most teams haven't started thinking about.
If your team hasn't sat down and asked "who decides what context our agents see?", that conversation is coming, whether you schedule it or not.
And then there's the agent question. The industry has moved quickly from autocomplete to autonomous agents that plan, write code, run tests, and open PRs with minimal human oversight. Tools like Claude Code are extraordinary. They're also genuinely risky in ways that our existing review processes weren't designed to handle. A misconfigured agent can ship a coherent but subtly wrong feature across multiple services before any reviewer notices. The blast radius of a single oversight scales with the agent's autonomy.
The answer isn't to slow down. It's to build verification infrastructure that matches the speed. Treat prompts as versioned artifacts. Make specifications executable. Run agents in sandboxes. Keep humans genuinely in the loop, not as rubber stamps, but as the designers of the system those agents operate within.
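As one small illustration of "treat prompts as versioned artifacts," here is a minimal sketch in Python. It assumes prompts live as plain files in the repository (so they get code review and git history) and records a short content hash with each load, so an agent's output can later be traced back to the exact prompt revision that produced it. The file name and function are my own illustration, not a prescribed tool.

```python
import hashlib
from pathlib import Path

def load_prompt(path: str) -> tuple[str, str]:
    """Load a prompt file and return (text, short content hash).

    The hash identifies the exact prompt revision, so logs and
    generated artifacts can be traced back to it later.
    """
    text = Path(path).read_text(encoding="utf-8")
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
    return text, digest

# Example: a prompt checked into the repo, loaded with its version hash.
Path("review_prompt.txt").write_text(
    "You are a code reviewer. Flag any unverified file operations.",
    encoding="utf-8",
)
prompt, version = load_prompt("review_prompt.txt")
print(f"prompt@{version}: {prompt[:40]}...")
```

Logging something like prompt@&lt;hash&gt; next to each agent action is cheap, and it turns "what was the agent actually told?" from a guess into a lookup.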
Why We Wrote This Book
Tomek and I wrote Vibe Engineering because the conversation in our industry keeps getting stuck at the wrong level. People argue about which model is best, which IDE plugin to use, which agentic framework to adopt. Those are real questions, but they're downstream of a deeper one: what does it mean to build software responsibly when much of it is generated by a machine?
The answer isn't to reject AI. It isn't to trust it blindly. It's to build the engineering discipline, the mental models, the workflows, the verification culture, that makes AI a reliable amplifier rather than a source of hidden risk.
The shift is already happening. The only question is whether your team leads it or catches up too late.
PS: If you read the MEAP and disagree with any of it - please tell me. The whole point of an early-access program is that readers get to yell at us before the cement dries.
Artur Skowroński is Head of Application Development at VirtusLab and co-author of Vibe Engineering (Manning Publications, 2026). VirtusLab's enterprise AI delivery platform, VISDOM, is available at: https://virtuslab.com/services/visdom