Most of the time, the knowledge base we want to chat and reason about with an LLM has strong inter-relations. Even the famous PageRank algorithm, which gave Google a competitive advantage and put it ahead of its competitors, is based on the quantity and quality of links between websites. The relations within a knowledge base are crucial to fully understanding it. The problem with classical RAG is that it chunks the text, discarding all internal relations. So can we do better?
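
To make the contrast concrete, here is a minimal, self-contained sketch (toy text and hand-written triples, not any particular GraphRAG library) of what fixed-size chunking throws away and what a small entity-relation graph keeps:

```python
# Illustrative only: why fixed-size chunking loses relations that a small
# knowledge graph keeps.

text = (
    "Alice founded Acme. Acme acquired Beta Corp in 2021. "
    "Beta Corp's main product is the Gamma engine."
)

# Classical RAG: split into fixed-size chunks. The chain Alice -> Acme -> Beta Corp
# ends up scattered across chunks that are embedded and retrieved independently.
chunk_size = 40
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
print(chunks)

# Graph-style indexing (hypothetical triples, e.g. extracted by an LLM):
# relations survive as explicit edges that retrieval can traverse.
triples = [
    ("Alice", "founded", "Acme"),
    ("Acme", "acquired", "Beta Corp"),
    ("Beta Corp", "produces", "Gamma engine"),
]

def neighbours(entity):
    """Follow outgoing edges; multi-hop traversal can answer questions like
    'what does the company Alice founded produce?'."""
    return [(relation, obj) for subj, relation, obj in triples if subj == entity]

print(neighbours("Acme"))
```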

Today's project landed on the Hacker News front page just days ago and instantly sparked one of the most interesting security discussions I've seen in a while. We're looking at Matchlock by Jingkai He - a CLI tool for running AI agents in ephemeral microVMs with network allowlisting and secret injection via a MITM proxy. It was built to answer a question that every developer running claude --dangerously-skip-permissions should be asking: "What's the worst that could happen?"
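
As a purely conceptual illustration of the allowlisting idea (this is not Matchlock's code or CLI, and the hostnames and policy below are hypothetical), the check an egress proxy for a sandboxed agent applies boils down to something like this:

```python
# Conceptual sketch only, not Matchlock's implementation: permit outbound
# connections only to hosts that match an explicit allowlist.
from fnmatch import fnmatch

ALLOWLIST = ["api.anthropic.com", "*.github.com", "pypi.org"]  # hypothetical policy

def allowed(host: str) -> bool:
    """Return True only if the host matches an allowlisted pattern."""
    return any(fnmatch(host, pattern) for pattern in ALLOWLIST)

for host in ["api.anthropic.com", "raw.github.com", "evil.example.com"]:
    print(host, "->", "allow" if allowed(host) else "block")
```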

In the second part, Adam will focus on deep networks for reinforcement learning by summarizing "1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities", a paper featured at NeurIPS 2025.

Agents are LLMs that operate in a partially observed environment: they interact with it, reason, plan, act, and adapt as it changes. They need tools to gather information in iterative cycles, because the information required to complete the task is hidden and must be obtained through online querying or tool use. If your environment is fully observed and deterministic, your tasks are repetitive, and you don't need to adapt to unplanned changes, a simple LLM pipeline is enough.
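
A minimal sketch of that loop, with stubbed model and tool functions rather than any real framework, looks roughly like this:

```python
# Minimal agent loop sketch (hypothetical tool and model stubs, not a specific
# framework): observe, reason, act via a tool, repeat until the hidden
# information has been gathered.
from typing import Callable

def search_docs(query: str) -> str:
    """Stand-in tool: in a real agent this would query the environment online."""
    return f"stub results for: {query}"

TOOLS: dict[str, Callable[[str], str]] = {"search_docs": search_docs}

def llm(prompt: str) -> str:
    """Stand-in for a model call: returns either a tool request or a final answer."""
    if "stub results" not in prompt:
        return "CALL search_docs: how to configure the API client"
    return "FINAL: configure the client with the settings found in the docs"

def run_agent(task: str, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):                     # iterative observe/reason/act cycle
        decision = llm(context)
        if decision.startswith("FINAL:"):
            return decision.removeprefix("FINAL:").strip()
        tool_name, _, arg = decision.removeprefix("CALL ").partition(": ")
        context += "\n" + TOOLS[tool_name](arg)    # feed the observation back in
    return "gave up"

print(run_agent("Set up the API client"))
```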

With the rapid growth of generative AI, a great new model comes out every month, which makes it hard to keep track of all the different kinds of models and choose the right one for your task. In this article, I will cover the best generative AI models as we stand on the doorstep of 2026. I hope to make your choice of a model at least a bit easier.

Traditional machine learning creates a significant bottleneck: manual labeling can cost millions of dollars for large-scale projects, takes months of expert time, and remains prone to human error and inconsistency. Unsupervised learning offers a fundamentally different approach, allowing algorithms to discover structure and patterns in visual data without any human-provided labels. In this article, we will discuss unsupervised approaches to training image models, with no labels given a priori.
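
One family of such approaches is contrastive pretraining. The toy numpy sketch below (random vectors standing in for image embeddings, an InfoNCE-style loss, all assumptions of mine rather than content from the article) shows the core idea of pulling two views of the same image together without ever seeing a label:

```python
# Toy contrastive objective: two augmented views of the same image should embed
# close together, views of different images far apart. No labels involved.
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 16                               # 8 images, 16-dim embeddings
z_a = rng.normal(size=(n, d))              # embeddings of augmentation A (toy stand-ins)
z_b = z_a + 0.1 * rng.normal(size=(n, d))  # augmentation B: a noisy view of the same image

def l2_normalise(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

z_a, z_b = l2_normalise(z_a), l2_normalise(z_b)
logits = z_a @ z_b.T / 0.1                 # cosine similarities / temperature
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))        # matching views are the positive pairs
print(f"contrastive loss on toy embeddings: {loss:.3f}")
```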

In this series, every other Wednesday, I pick one trending repo and take it apart piece by piece. This week, I pulled out one key component of the entire GasTown engine, the thing without which the agents are basically useless. That component is Beads.

Adam Kaczmarek will break down the paper "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free," featured at NeurIPS 2025. He will also explain the background for this paper: different types of attention and gating mechanisms.
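
As a rough preview of the gating idea (a numpy sketch under my own assumptions; the paper's exact placement and parametrisation of the gate may differ), a query-dependent sigmoid gate applied element-wise to the attention output looks like this:

```python
# Illustrative single-head attention with output gating, not the paper's exact design.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
seq, d = 6, 8
x = rng.normal(size=(seq, d))
w_q, w_k, w_v, w_g = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))

q, k, v = x @ w_q, x @ w_k, x @ w_v
attn = softmax(q @ k.T / np.sqrt(d)) @ v    # standard scaled dot-product attention
gate = sigmoid(x @ w_g)                     # input-dependent gate in (0, 1)
gated_out = gate * attn                     # element-wise gating adds non-linearity and sparsity
print(gated_out.shape)
```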

Every other Wednesday we pick one trending project and break it down into its core pieces. Not another React framework or even an agent framework - but something fresh, something that helps us understand where our industry is heading. Today’s pick: BloopAI/vibe-kanban. A tool written in Rust that tries to bring order to the chaos of working with coding agents.

This guide explains how developers can craft small, precise rules that make AI coding tools more reliable. It shows practical techniques for structuring, organizing, and enforcing rules to achieve consistent, production-grade output.

Agentic systems require a new testing paradigm focused on evaluating trajectories, not just outcomes. This post details core test types, metrics, and tools.
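
For a flavour of what trajectory-level checks look like, here is a small sketch with a hypothetical trajectory format and hand-written assertions (not any specific evaluation framework):

```python
# Trajectory-level evaluation sketch: assert properties of the tool-call
# sequence, not only the final answer.
trajectory = [
    {"type": "tool_call", "name": "search_tickets", "args": {"query": "refund"}},
    {"type": "tool_call", "name": "get_ticket", "args": {"id": 42}},
    {"type": "final_answer", "text": "The refund was issued on 2024-05-02."},
]

def tool_sequence(traj):
    return [step["name"] for step in traj if step["type"] == "tool_call"]

def test_trajectory(traj):
    calls = tool_sequence(traj)
    assert calls[0] == "search_tickets", "agent should search before fetching a ticket"
    assert len(calls) <= 5, "trajectory should not loop excessively"
    assert traj[-1]["type"] == "final_answer", "agent must terminate with an answer"

test_trajectory(trajectory)
print("trajectory checks passed")
```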

I've always claimed there's no better way to learn anything than to build something with your own hands... and the second best way is to do a Code Review of someone else's code. Today, we are taking on a project that made waves on Twitter (or X) and GitHub, not so much because of the complexity of the code, but because of the philosophy behind it, and above all, because of its author, Andrej Karpathy. And in this article, we'll discuss his AI consensus mechanism called llm-council.
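
To give a feel for the council idea before we dig into the code, here is a rough sketch with stubbed model calls and hypothetical model names, not Karpathy's actual implementation: several models answer, each reviews the others' anonymised answers, and a final call synthesises the consensus.

```python
# Council-of-models sketch (illustrative only): answer, cross-review, synthesise.
def ask_model(model: str, prompt: str) -> str:
    return f"[{model}] answer to: {prompt}"         # stand-in for a real API call

MODELS = ["model-a", "model-b", "model-c"]          # hypothetical model names

def council(question: str) -> str:
    answers = {m: ask_model(m, question) for m in MODELS}
    # Each member reviews the answers of the others.
    reviews = {
        m: ask_model(m, "rank these answers: " + " | ".join(
            a for other, a in answers.items() if other != m))
        for m in MODELS
    }
    # A final call merges answers and reviews into one consensus response.
    synthesis_prompt = f"question: {question}\nanswers: {answers}\nreviews: {reviews}"
    return ask_model("chairman-model", synthesis_prompt)

print(council("What is the best way to learn a codebase?"))
```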
