Welcome back. This newsletter is a monthly, no-hype digest of the AI stuff that actually matters to engineers and leaders - practical, skeptical, and ready to use.
Every morning, I scan Hacker News, newsletters, research papers, and all the weird corners of the web before preschool drop-off. I collect the gems that don’t make the headlines but teach useful lessons or spark technical ideas. Now I’m sharing them here.
My rules are simple: no chasing every new model, no “top 50” lists, and no breathless marketing. I’ll skip bleeding-edge noise, prefer survey-style white papers to unhelpful deep dives, and only recommend tools that survive real-world triage.
What you’ll get: readable long-form pieces, practical deep dives, and architecture-level takeaways aimed at senior engineers and tech leaders. Opinionated? Absolutely - I won’t hide that.
Read on for this edition’s hand-picked stories and lessons. If that sounds like your kind of newsletter, welcome to issue two. 😉
In August, a dry, corporate MIT report somehow went viral - it was everywhere, bombarding my feed from every side, and I kept wondering: why? Did anyone actually expect those top-down “we must do AI” edicts to work? Come on - even without that, most initiatives in this space fall apart midway. Getting anything to reach production is hard - a 5% success rate (which we will be talking about in a moment) is actually quite an achievement.
OK, so what actually happened: Fortune ran a piece on the MIT findings, and suddenly the whole Internet acted like this was a revelation. The headline statistic was brutal - roughly 95% of corporate GenAI pilots return no financial value and fail before broad rollout - despite approximately $30-40 billion poured into experiments. To anyone who’s spent time in the trenches, the result is depressingly unsurprising.
The real takeaway isn’t “AI sucks.” It’s that companies are terrible at integrating new tech into how people actually work. Folks love the flexibility of consumer tools like ChatGPT and Copilot, but they often criticize enterprise solutions as being “brittle, over-engineered, and poorly matched to real workflows.” In short: enterprise buying-by-feature, driven from the top, collides with messy human workflows - AI only made that mismatch louder and far more expensive.
So if 95% of firms are getting it wrong, what do the successful 5% do differently? This is the question actually worth answering.
This brings us to a much more interesting mental model that Mehmet Yilmaz presented in his essay The Barbell of Software Value. He argues that in the AI era, software value concentrates at two extreme poles while the middle of the market withers away.
On one side, we have platforms and primitives - the fundamental layers everyone else builds on, like AWS, OpenAI models, or GitHub. On the other side is private, internal leverage - software built by companies solely for their own use, deeply embedded in their unique processes and data, creating a moat competitors can’t cross. What’s dying is the middle - generic, horizontal SaaS that’s mercilessly squeezed by how easily AI can replicate functionality and by vendor consolidation pushed by CFOs.
Putting these two pieces together gives a powerful strategic map. The MIT report shows what fails - generic, poorly integrated tools, i.e., that “emptied-out middle” of the barbell. Mehmet’s theory explains why that happens and where the real wins are - building bespoke solutions deeply rooted in how a company actually operates. And will a mom-and-pop company be able to afford such an investment? On one hand, AI might make it (a bit) cheaper - on the other, as the Bible puts it:
“For to everyone who has, more will be given, and he will have abundance; but from him who has not, even what he has will be taken away.” (Matthew 25:29)
Last month, I talked about Vibe Coding - programming in dialogue with AI - in its senior dev edition. That turned out to be only the beginning; today I bring you Vibe Coding - the manager’s edition!
August brought a new trend that’s slowly gaining momentum: parallel agentic flows. It turns out the best of us no longer just talk to a single agent; we orchestrate work across several, even dozens, concurrently. Igor Šarčević in Parallel AI Agents Are a Game Changer argues it’s not about smarter AI but pure parallelization. The engineer’s role shifts - no longer a single-threaded coder, not even just the orchestrator of one agent, but a choreographer managing many asynchronous tasks: one agent builds the UI, another writes the API endpoints, and a third designs the database schemas. Decomposition skills and the ability to prepare tasks with very rich context become crucial, because everything happens asynchronously.
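To make that choreography concrete, here’s a minimal sketch of the decomposition pattern in Python. The `run_agent` coroutine is a hypothetical stand-in for whatever CLI or API your agents actually expose - the structure, not the tool, is the point.

```python
import asyncio

# Hypothetical wrapper around whatever agent tooling you actually use
# (Claude Code, Codex, aider, ...) - shown only to illustrate the flow.
async def run_agent(task_name: str, context: str) -> str:
    # A real version would spawn the agent in its own worktree or sandbox
    # and stream its output; here we just simulate the call.
    await asyncio.sleep(0.1)
    return f"[{task_name}] done"

async def main() -> None:
    # Decomposition: independent tasks, each with rich, self-contained context.
    tasks = {
        "ui": "Build the settings page. Design tokens live in ui/tokens.ts ...",
        "api": "Add the POST /projects endpoint. Follow conventions in api/README ...",
        "schema": "Design the projects table. Existing migrations are in db/ ...",
    }
    # Choreography: fire everything off concurrently and gather the results.
    results = await asyncio.gather(
        *(run_agent(name, ctx) for name, ctx in tasks.items())
    )
    for line in results:
        print(line)

asyncio.run(main())
```

The hard part isn’t the `gather` call - it’s writing those context strings well enough that no agent has to stop and ask you anything mid-flight.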
Zach Wills pushed this further: he ran an experiment managing a “swarm” of 20 AI agents for a week and produced a production-ready app, about 800 commits, and over 100 pull requests. His lessons are priceless for any dev-agent manager: agree on a plan with the AIs (not just a goal), actively manage their memory, and checkpoint progress. And mercilessly restart agents the moment they stray.
You can probably rework the old “cattle, not pets” into “resources, not colleagues.”
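In that spirit, Wills’s “plan, checkpoint, restart” discipline boils down to a supervision loop you could sketch like this - `current_output`, `strays_from`, and `restart` are hypothetical stand-ins for whatever your agent runner and repo tooling actually provide:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRun:
    name: str
    plan: str                              # the agreed plan, not just a goal
    checkpoints: list[str] = field(default_factory=list)

# Hypothetical helpers - replace with your real agent runner / repo tooling.
def current_output(run: AgentRun) -> str:
    return f"(latest diff or summary from agent {run.name})"

def strays_from(plan: str, output: str) -> bool:
    # In practice: a cheap LLM check or a human glance comparing output to plan.
    return False

def restart(run: AgentRun) -> None:
    print(f"restarting {run.name} from checkpoint #{len(run.checkpoints)}")

def supervise(run: AgentRun, rounds: int = 5) -> None:
    """Checkpoint progress; mercilessly restart the moment the agent strays."""
    for _ in range(rounds):
        output = current_output(run)
        if strays_from(run.plan, output):
            restart(run)                   # throw the run away, keep the checkpoints
            continue
        run.checkpoints.append(output)     # progress is good: save it

supervise(AgentRun(name="api-worker", plan="Add the POST /projects endpoint"))
```

Nothing here is tied to any particular agent product; the point is that memory and restarts live outside the agent and are managed by you.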
Managing twenty threads at once sounds like a burnout recipe. Here’s a small but revolutionary ergonomic change that both Wills and Katie Parrott (in her piece for the every.to blog) mention: they switched from typing prompts to dictating them.
When we write, we instinctively optimize for fewer characters. When we speak, we naturally narrate more, provide richer context, and explain the why behind a task. Parrott goes further and claims typing introduces “invisible friction” - micro-decisions about phrasing and punctuation that pull us away from the core idea. Speaking to the computer let her thoughts flow, and structure emerged naturally.
If the engineer’s main job becomes delivering rich context to many asynchronous agents, voice is the most natural and efficient interface for that work. I use it and recommend it for writing too (the secret of my productivity is, among other things, exactly this).
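If you want to wire dictation into an agent workflow, the plumbing is trivial. A minimal sketch, assuming the OpenAI Python client and its Whisper transcription endpoint (any local speech-to-text tool works just as well):

```python
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def dictated_prompt(audio_path: str) -> str:
    """Turn a rambling voice memo into the text prompt handed to an agent."""
    with Path(audio_path).open("rb") as audio_file:
        transcription = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return transcription.text

# Speak the why, the constraints, and the context - then hand it to the agent.
print(dictated_prompt("task-briefing.m4a"))
```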
This new, wonderful world of agent swarms still needs some rails so it doesn’t descend into chaos. Enter the Agent Client Protocol (ACP), created by the makers of the Zed editor in collaboration with Google. Its goal is simple: to be for AI agents what the Language Server Protocol - LSP - became for programming languages. Just as LSP separated language logic from the monolithic IDE (you can learn more about it from A Beginner's Guide to Using Scala Metals With its Model Context Protocol Server), ACP aims to let you switch between different agents without changing editors. For an engineering leader, that’s strategically important. Adopting tools built on an open standard like ACP is protection against vendor lock-in - it preserves flexibility and helps avoid the costly mistake of betting on the wrong closed ecosystem later on.
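To see that decoupling in a few lines of code: the sketch below is not ACP’s actual wire format, just a hypothetical illustration of an editor programming against a shared agent interface instead of a specific vendor.

```python
from typing import Protocol

class Agent(Protocol):
    """Hypothetical stand-in for an ACP-style agent interface - not the real spec."""
    def send_prompt(self, prompt: str) -> str: ...

class VendorAAgent:
    def send_prompt(self, prompt: str) -> str:
        return "diff produced by vendor A"

class VendorBAgent:
    def send_prompt(self, prompt: str) -> str:
        return "diff produced by vendor B"

def editor_workflow(agent: Agent) -> None:
    # The editor only knows the protocol, never the vendor behind it.
    print(agent.send_prompt("Refactor the payment module"))

# Swapping agents becomes a configuration change, not an editor migration.
editor_workflow(VendorAAgent())
editor_workflow(VendorBAgent())
```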
From tooling to people: for the first time at scale, a Stanford team analysed payroll data and produced hard evidence of how AI is reshaping the labor market for the youngest workers. They find a clear pattern: since late 2022, employment among 22–25-year-olds in the occupations most exposed to generative AI has fallen by about 13% relative to older cohorts.
Crucially, in the same roles, employment of older, more experienced colleagues has stayed stable or even increased. The change hasn’t primarily come through mass layoffs, but rather through quietly not renewing and not replacing junior positions - the career ladder is being pulled up.
But AI didn’t create this structural problem - it poured gasoline on a smouldering fire. Back in 2016, Dan Luu’s classic essay Programmer Moneyball laid out how poorly our industry identifies and develops talent: instead of looking for undervalued candidates and investing in training, many tech firms overpay a small group of fashionable hires and underinvest in real onboarding and mentoring.
The industry’s “trainingball” - the practice of hiring less-polished people and training them into stars - has been weak or absent in many places for decades. Combine that neglect with AI automating the textbook tasks that historically formed the apprenticeship rung for juniors, and you get a terrifying result: we are building a generation of hyper-productive senior conductors while destroying the very system that produces their successors.
Joshua Gans’s text If AI and workers were strong complements, what would we see? helps connect the dots. He points out that if AI and human workers are strong complements, we should observe exactly the pattern the Stanford data shows: AI raises the value of tacit, experience-based judgment - the kind older workers hold - and substitutes for the codified, bookish knowledge that younger, formally educated workers primarily contribute.
In other words, AI doesn’t simply sit next to middle-aged expertise as an alternative, but amplifies that expertise. That makes experienced judgement more valuable and can incentivise firms to retain or even hire relatively more experienced staff while contracting the entry rungs where codified tasks were learned. The distributional consequence is that AI-driven productivity gains can increase demand for experience while shrinking opportunities for novices - unless organizations change who pays for on-the-job training.
Things got a bit gloomy, so to finish on a positive note, I have a funny (it made me smile) and extremely practical blog post by Stuart Spence, titled Why I’m declining your AI generated MR. Spence compiled a list of reasons why he rejects AI-generated merge requests without further comment, sending the author to his article. It’s a sort of guide to spotting and rejecting low-quality “AI slop.” The list is worth remembering because it’s brilliant in its simplicity. In short, reject an MR if:
Removing most of the code would make it better.
The author doesn’t understand the basics of the language they wrote the code in.
The documentation is mindless spam.
The code is flagrantly inconsistent with the rest of the project.
The author tries to handle a million edge cases that they obviously didn’t test.
The problems on that list aren’t technical AI bugs. They are failures of human critical thinking, a lack of ownership, and poor craftsmanship.
Rejecting “AI slop,” therefore, isn’t an anti-AI stance - it’s an act of enforcing responsibility. And most importantly, a rejected MR paired with Spence’s article becomes a powerful mentoring tool. The guy basically invented a viral idea for himself - kudos.
This is exactly how, on the front lines, we teach those junior engineers mentioned above to use these powerful tools responsibly. This is our small, grassroots “trainingball.” That’s how we can fight negative trends and make sure the next generation of engineers learn to be masters of the tools, not their mindless, clueless drivers. Because you know what the alternative is.