Published: Nov 17, 2025 | 16 min read
Welcome to the fourth article in the This Month We AIed series. If you want to find out what we've done with artificial intelligence in previous months, read: #1, #2, #3. Meanwhile, let's move on to what you'll learn this time:
Łukasz Lenart demonstrates how a simple CLAUDE.md file can transform chaos into clarity by enabling conditional context loading, resulting in faster and more precise AI responses.
Jacek Centkowski reveals how he used ChatGPT to transform a lightning talk into presentation-as-code, complete with automated rehearsal plans and structured feedback loops.
Artur Węgrzyn takes us on a journey from chaotic vibe coding to disciplined specification-driven development, showing how the right frameworks and model choices can turn AI into a true coding partner.
Mateusz Gołąbek introduces the idea of running an AI hackathon in a real, commercial project.
Hi! This month, I'd like to share an interesting trick that significantly improved my collaboration with Claude Code. It's about using the CLAUDE.md file and smart conditional document loading.
Problem: Too Much Context = Chaos
Working with various projects, I noticed that Claude sometimes gets lost in information overload. I have many tools and integrations, but not all are needed in every conversation.
Solution: CLAUDE.md as a "Conductor"
I created a simple file `~/.claude/CLAUDE.md` that acts as the main controller for Claude:
```
# CLAUDE.md

This file provides overall guidance to Claude Code (claude.ai/code) when working with code in any repository - this file is used in many conversations with Claude Code.

## JetBrains MCP

If the JetBrains MCP server is available, you can read detailed instructions on how to use it from @~/.claude/jetbrains_mcp.md - DO NOT READ this file in any other case.
```
How It Works
1. The main file is always loaded - Claude reads CLAUDE.md at the beginning of each session
2. Conditional instructions - Claude loads additional files ONLY when specific conditions are met
3. Token efficiency - we don't waste context on unnecessary information
Real Example: JetBrains MCP
Currently, I have only one such "module" - `jetbrains_mcp.md` with instructions for IDE integration. Claude loads it ONLY when the MCP server is actually available. Without this condition, Claude would receive instructions for a tool that it cannot use.
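For illustration, such a conditionally loaded module can be quite plain - the sketch below is illustrative only and is not the exact contents of my jetbrains_mcp.md:

```
# JetBrains MCP - usage notes

When the JetBrains MCP server is available, prefer IDE-aware operations
over raw shell commands:

- Use the IDE's search and refactoring capabilities instead of grep/sed
  when renaming symbols or moving code.
- Surface IDE inspection warnings to the user before committing changes.
- Run tests through the IDE's run configurations where possible.
```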
Future Plans
I'm thinking about adding similar files for languages I work with:
```
## Scala Projects

When you detect build.sbt or *.scala files in the repository, load @~/.claude/scala_conventions.md for Scala-specific guidelines.

## Java Projects

If you see pom.xml or build.gradle, especially with Struts or OGNL dependencies, consult @~/.claude/java_guidelines.md for project-specific patterns.
```
Why does this make sense? Because Scala conventions (e.g., using `Option` instead of nulls) are useless when I'm working with TypeScript. And the specifics of working with Apache Struts aren't needed when I'm debugging a simple Python script.
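Here is roughly what such a scala_conventions.md could contain - a sketch only, since the file doesn't exist yet, and the rules below are just typical examples of Scala guidance:

```
# Scala conventions

- Prefer `Option[A]` over `null`; model absence explicitly and unwrap it
  with `getOrElse`, `fold` or pattern matching rather than null checks.
- Default to immutable `val` and case classes; reach for `var` only when
  there is a measured reason.
- Prefer `map`/`flatMap`/`fold` over `isDefined` followed by `get`.
```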
Key Point: Precise Conditions
The most important thing is to clearly define WHEN to load a file:
"If JetBrains MCP server is available" - specific technical condition.
"When you detect build.sbt" - presence of a specific file.
"User explicitly asks about..." - explicit user query.
Pro Tip: Negative Conditions Are Equally Important!
Sometimes you need to specify when NOT to load a file. A great example is performance optimizations:
```
## Performance Optimizations

Read @~/.claude/performance.md ONLY when:
- User explicitly asks about performance
- You detect performance issues in code review
- Working with high-load systems or benchmarks

DO NOT read this file for:
- Initial prototypes
- Simple CRUD operations
- Educational examples
- Quick scripts or one-off tools
```
Why is this important? Because premature optimization is the root of all evil (as Knuth said). I don't want Claude suggesting micro-optimizations when I'm writing a simple prototype or an example for juniors.
Why Is This Useful?
1. Faster responses - Claude doesn't analyze unnecessary documents
2. Better precision - less noise = better context understanding
3. Modularity - each project/language can have its own guidelines
4. Task-dependent context - prototype vs production means different priorities
Summary
This approach is like lazy loading for AI - we load only what's needed, when it's needed. I'm starting small (one file for JetBrains MCP) and will expand when I see real needs.
If you have similar experiences with organizing context for AI, let me know in the comments. I'm curious if anyone uses a similar approach.
PS. Yes, I know it's basically just if-else statements, but sometimes the simplest solutions are the best.
Recently, I was preparing a “How to do Code Review like a Pro” talk for fellow software engineers. I had the storyline, research, and insights (including a joke) ready, but I faced three problems:
I wasn’t sure how many slides I could comfortably cover.
I wanted the presentation as code, preferably in Markdown, so I could review and iterate on it.
I had no rehearsal plan.
Where AI helped me the most
First, I used ChatGPT to research a sensible slide count for a lightning talk by a fluent-but-non-native speaker (~11 slides).
Second, I used it to draft initial slides in Marp Markdown and set up CI/CD to publish to GitHub Pages. I also offloaded grammar and spelling checks to it (called locally).
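If you haven't seen Marp before: a deck is just Markdown with a YAML front matter block and `---` slide separators, so it can be reviewed like any other code. A minimal sketch (not my actual deck) looks like this:

```
---
marp: true
theme: default
paginate: true
---

# How to do Code Review like a Pro

A lightning talk

---

## Why review at all?

- Catches bugs before they ship
- Spreads knowledge across the team
```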
Finally, once I was happy with the content, I asked for:
a memorization script - totally not my style, but it helped me gauge how many sentences/words each slide should carry, and I built my own narrative from there;
a rehearsal plan - I provided the number of days left, and it generated an interactive plan (tracking fluency, prompting for slide details, simulating Q&A, etc.) that I could run daily - that was a game-changer for me (a sketch of such a prompt follows below).
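The rehearsal prompt itself doesn't need to be elaborate - something along these lines (an illustrative sketch, not my exact wording) is enough to get a usable interactive plan:

```
I have a lightning talk of ~11 slides and N days until I present it.
Build me a daily rehearsal plan that:
- asks me to rate my fluency per slide and tracks it across days,
- quizzes me on the details of a randomly chosen slide,
- simulates 2-3 audience questions and reviews my answers.
Run it interactively, one day per session, starting today.
```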
AI helped me turn my plan and content into a clear, reviewable presentation-as-code that I could tailor to specific needs. It also sped up my prep.
I started around May/June with Junie (JetBrains). I was working on a few pretty simple tools in Python. And yeah, I got done what I had to do, but it was a nightmare. I had the classic "vibe coding" problems – it would generate something, but not quite right, and in the process, it would break something that was already working. After that experience, Junie landed in my "for simple tasks" drawer. Mostly, I used it for stuff where I couldn't remember the syntax (some bash, some Python).
Starting with Cursor AI and "vibing" on two editors
At the beginning of September, I decided to give Cursor AI a shot. Initially, I used it in "Auto" mode, without picking models. I immediately noticed it worked better than Junie. The UI itself is way better - for example, being able to tag a specific file mid-sentence instead of rigidly attaching everything at the bottom of the prompt. The downside was definitely having to work outside of IntelliJ, which I love working in. From that point on, I used Junie literally a few times, and the main motivation was simply to utilize my subscription and save money on Cursor AI.
Unfortunately, with Cursor AI I kept hitting the same wall: non-working solutions, constant manual fixes, and breaking parts of the code that were already working. I started keeping the same project open in parallel in Cursor AI and in IntelliJ: I'd work in Cursor AI, but for my own convenience and control, I'd review the changes in IntelliJ (the EditorJumper plugin for switching between IDEs turned out to be useful).
The Breakthrough: Claude Code and "getting hooked"
At the turn of September/October, I checked out Claude Code. At the same "operator" level, it gave better results than Cursor AI - I suspect the difference came from the model (e.g., Sonnet 4.5) and the workflow (terminal + IntelliJ instead of a VS Code clone). Adding my own still-developing skills as an operator to the mix, it hit me for the first time that... "okay, these AIs might actually make sense and this really is the next breakthrough." I got a bit hooked then. A few times, I ended up staying up until 1 AM just because I couldn't pull myself away. After a moment of enthusiasm, I started to analyze the costs more carefully. And sometimes it was painful, like when I "vibed" something out for 2.5 bucks and a second later I had to throw it in the trash and start from scratch.
Game Changer: From vibe coding to specification-driven development and ai-devs-tasks
In the meantime, I started attending the 10xDevs course. I began learning new techniques and choosing models for specific tasks more consciously. More and more often, I'd start with some document or plan, and only then move on to implementation. However, the real game changer in my adventure was discovering the ai-devs-tasks framework. For my use cases, it works perfectly. Most of the vibe coding problems disappeared. The agent finally started implementing what I wanted and how I wanted it. I formally moved from vibe-coding to specification-driven development.
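The shape of the artifacts matters more than the exact tooling: each change starts as a small PRD and a sliced task list before any implementation happens. Below is a simplified sketch with a made-up feature, not the framework's actual templates:

```
# PRD: CSV export for the orders view

## Goal
Users can download the currently filtered orders as a CSV file.

## Out of scope
- Scheduled or e-mailed exports

## Tasks
- [ ] 1. Add an export endpoint that reuses the existing order filters
- [ ] 2. Stream the result as CSV instead of building it in memory
- [ ] 3. Add a "Download CSV" button to the orders view
- [ ] 4. Cover the endpoint with an integration test
```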
Sonnet's Cost vs. Grok's Speed
Sonnet, which I was using in Claude Code, was cool, but it burned through dollars pretty quickly. I noticed that in Cursor AI, after using up the package, you can still use the service, probably as part of some "free usage". But that didn't last forever either. I hit the limit. One of the few models I could still use was grok-code-fast-1, which is currently free. And you know what? It's not as good as Sonnet, but it's not bad at all. Additionally, it appears to be significantly faster than Sonnet, making the work smoother and more enjoyable. Most importantly, it handles the workflow in that ai-devs-tasks framework (the create-prd -> generate tasks -> implementation sequence). I use the same model to build the plan iteratively and then to crank out the code.
My most important takeaways, or what helped me
1. Model choice. It seriously matters. Sonnet 4.5 is good, but expensive. Meanwhile, grok-code-fast-1 is also sufficient for many tasks, and it's much faster.
2. Spec-driven development. This is key: creating a document describing what you want to do (a PRD - Product Requirements Document), then slicing it into tasks, and in the third step implementing it step by step. No more "vibing".
3. The Socratic method. Instead of telling the model what and how to do something, you prompt it to ask *you* questions. This lets you discover "unknown unknowns".
4. Meta prompting. That is, using one prompt to enhance another.
5. A good AGENTS.md (or CLAUDE.md). You have to check this out: https://agents.md/. Thanks to this, the model starts with important project context every time. We save tokens and time on gathering that info with every new conversation. Plus, the model is more "deterministic" because it always starts with the same set of knowledge. When it gathered info itself every time, it didn't always build the context from the same things - sometimes it added something unnecessary, and other times it missed something important. (A sketch of a minimal AGENTS.md is below.)
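For reference, an AGENTS.md doesn't need to be long to pay off - a sketch along these lines (illustrative, with made-up project details, not my actual file) already saves the model a round of project discovery at the start of every conversation:

```
# AGENTS.md

## Project
Internal logistics tooling; Python 3.12 services plus a Scala backend.

## Conventions
- Run tests with pytest (Python) or sbt test (Scala); new code needs tests.
- Use type hints / explicit types everywhere; run the linter before committing.

## Do not
- Touch anything under legacy/ without asking first.
```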
That's my story. I hope it helps someone get past that frustrating "vibe coding" stage.
Have you ever found yourself in a situation where “just let AI handle it” quickly turns into “wait, what did it just do”? It’s one thing when that happens in a side project - but in a real, commercial system, things can get serious fast. These systems aren’t playgrounds. They have been in production for years, full of dependencies, business rules, and integrations that don’t always play nice.
With that in mind, we asked AI to help with actual work. We ran a half-day internal hackathon in a large-scale logistics platform written in Scala, using AI as part of the regular workflow - writing scripts, improving tools, refreshing UIs, and even building new full-stack features.
That’s all for this time. The key takeaways from today’s lessons are:
Smart, conditional context loading (via CLAUDE.md) makes AI coding assistants more precise, efficient, and modular by providing only the relevant project context when it’s actually needed.
Using AI as a structured creative partner – from slide design to rehearsal planning – turns presentation preparation into an organized, iterative, and time-saving workflow.
Transitioning from “vibe coding” to specification-driven development with AI agents enables consistent and reliable code generation, powered by thoughtful model selection, structured prompts, and well-defined context files.
We'll return next month with another batch of AI hacks for developers. If you have an AI tale of your own, share it with us in the comments or ping us on X, Mastodon, or Bluesky!