Most people use Claude Code like a chatbot. They type a question, get an answer, type another question. That works — the same way using a car to drive to your mailbox works. Technically functional. Wildly underusing the machine.
I use Claude Code every day. For my portfolio site, for the agent systems I'm building, for blog publishing pipelines, for refactoring sessions that would take me hours to do manually. After a few months of daily use, certain patterns start to crystallize. Not best practices in the tutorial sense — more like things you figure out by doing it wrong enough times.
The CLAUDE.md is the whole game
If you take one thing from this post, take this: writing a good CLAUDE.md is the single most important thing you can do to improve your Claude Code experience. Everything else is marginal by comparison.
A CLAUDE.md sits in your project root and tells Claude what it's working with. Project architecture, key commands, conventions, where things live. Every time Claude Code starts a session, it reads this file. Without it, Claude has to figure out your project from scratch every time — reading files, guessing at structure, making assumptions that may or may not be right.
Here's what mine looks like for this site:
```markdown
## Commands

npm run dev     # Start Next.js dev server
npm run build   # Production build
npm run lint    # Run ESLint

## Architecture

Personal portfolio built with Next.js 16, React 19, Tailwind CSS v4.

### Blog System

Blog posts are MDX files in content/ with frontmatter:
title, publishedAt, author, summary, image, tags

### Key Directories

src/app/            - Next.js App Router pages
src/components/     - UI and section components
content/            - Blog posts (MDX)
src/data/resume.tsx - All portfolio content config
```
Nothing fancy. Just the stuff I'd tell a new developer on their first day. That's the mental model — you're onboarding a very fast, very literal colleague who has amnesia between sessions.
The thing most people miss: CLAUDE.md isn't a prompt. It's project documentation that happens to be consumed by an AI. Write it the way you'd write a good README. If it would confuse a junior developer, it'll confuse Claude too.
Specific beats vague — but not the way tutorials show it
Every tutorial about Claude Code says "be specific with your instructions." They're right, but the examples are always these pristine, numbered-list specifications that nobody actually writes in practice.
The real pattern is more like: give Claude enough context to make the right decisions without specifying every decision. There's a sweet spot between "make my code better" and a 400-word specification document. In practice, that sweet spot is usually a sentence or two of context plus a clear ask.
"The blog pagination is broken on the tags pages — posts show on page 1 but page 2 returns nothing. The relevant files are in src/app/blog/tags/. Fix the pagination logic."
That's usually enough. Claude can read the files, understand the pattern, and fix the issue. If I over-specify — "change line 47 to use Math.ceil instead of Math.floor and update the query parameter to offset by pageSize times pageIndex" — I'm doing the debugging myself and just using Claude as a typist. That's the car-to-the-mailbox problem again.
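To make that concrete: a "page 2 returns nothing" bug is often a 1-based page number used as a 0-based offset. Here's a minimal sketch of the kind of fix Claude comes back with; `paginate` and its signature are hypothetical, not from my actual codebase:

```typescript
// Hypothetical pagination helper. One plausible version of the bug in
// that prompt: using page * pageSize as the offset, which skips the
// first page's worth of posts and leaves page 2 empty. Page numbers
// are 1-based, so the offset must be (page - 1) * pageSize.
function paginate<T>(items: T[], page: number, pageSize: number) {
  // Math.ceil, not Math.floor, so a partial last page still counts
  const totalPages = Math.ceil(items.length / pageSize)
  const start = (page - 1) * pageSize
  return {
    items: items.slice(start, start + pageSize),
    page,
    totalPages,
    hasNext: page < totalPages,
  }
}
```

The point isn't that I couldn't write that function. It's that I didn't have to find the off-by-one myself.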
The exception: when you're asking Claude to make architectural decisions, be extremely specific about constraints. "Add authentication" is a recipe for Claude picking whatever pattern it's seen most often in training data. "Add NextAuth with Google OAuth, store sessions in Supabase, use the App Router middleware pattern" gives it the guardrails it needs.
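The constrained prompt maps almost one-to-one onto code. Here's a minimal sketch of what it pins down, assuming NextAuth v5 conventions (credentials read from environment variables) and the @auth/supabase-adapter package; none of this is from my actual project:

```typescript
// auth.ts (sketch, assuming NextAuth v5)
import NextAuth from "next-auth"
import Google from "next-auth/providers/google"
import { SupabaseAdapter } from "@auth/supabase-adapter"

export const { handlers, auth, signIn, signOut } = NextAuth({
  // Google OAuth, credentials via AUTH_GOOGLE_ID / AUTH_GOOGLE_SECRET
  providers: [Google],
  // Sessions stored in Supabase, per the prompt's constraint
  adapter: SupabaseAdapter({
    url: process.env.SUPABASE_URL!,
    secret: process.env.SUPABASE_SERVICE_ROLE_KEY!,
  }),
})

// middleware.ts (separate file): the App Router middleware pattern
// export { auth as middleware } from "@/auth"
```

Every line in that sketch is a decision the vague prompt would have left to chance.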
Read before you edit
IndieDevDan calls this the Ralph Wiggum pattern — have Claude state what it sees before it does anything. The name is funny but the idea is genuinely useful.
Before any significant refactor, I tell Claude to read the relevant code and explain what it thinks is happening. Not because I don't know — because I want to verify that Claude's mental model matches mine before it starts making changes. If Claude misunderstands the architecture, I'd rather catch that before it rewrites six files based on a wrong assumption.
This sounds like it would slow things down. It doesn't. The time you spend on a thirty-second read-back saves you from the fifteen-minute "wait, why did you restructure the entire component tree?" conversation. It's cheap insurance.
When to checkpoint and when to let it run
This is the thing you learn through experience and can't really learn from a tutorial: knowing when to let Claude run autonomously and when to stop and verify.
Small, well-scoped tasks — fix this bug, add this component, write tests for this function — let it run. Claude is very good at these. It'll read the relevant files, make the changes, often run the tests itself, and come back with working code. The whole loop takes a minute or two.
Larger refactors are where things get interesting. Claude can absolutely handle "refactor this service to use the repository pattern" — but about 30% of the time, it'll make a decision partway through that cascades into unexpected changes elsewhere. A renamed interface that breaks imports in five other files. A changed return type that the calling code doesn't expect. These aren't bugs exactly — they're the kind of thing a human developer would catch by pausing and checking, but Claude tends to push forward.
The pattern I've settled on: for anything that touches more than three or four files, break it into stages. "First, show me what you'd change and why. Then do the data layer. Stop. Then do the service layer. Stop." It takes slightly longer. The results are dramatically better.
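For reference, a repository-pattern refactor like the one above targets an interface boundary roughly like this. The names here are hypothetical, but the shape shows why the cascade risk exists:

```typescript
// Hypothetical target of the refactor. Every caller depends on this
// interface, which is exactly why a renamed method or a changed return
// type here ripples into files Claude isn't currently looking at.
interface Post {
  slug: string
  title: string
  publishedAt: string
}

interface PostRepository {
  findBySlug(slug: string): Promise<Post | null>
  findByTag(tag: string, page: number): Promise<Post[]>
}

// Concrete implementation stubbed for brevity; the real one would read
// MDX files out of content/.
class MdxPostRepository implements PostRepository {
  async findBySlug(slug: string): Promise<Post | null> {
    return null // stub
  }
  async findByTag(tag: string, page: number): Promise<Post[]> {
    return [] // stub
  }
}
```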
What goes sideways
I'm not going to pretend this is all smooth. Here's what reliably doesn't work well:
Long sessions lose coherence. After 20-30 back-and-forth exchanges, Claude starts losing track of decisions made earlier in the conversation. It'll re-introduce a pattern you explicitly asked it to remove three messages ago. The fix is boring but effective: start a new session. The CLAUDE.md means it picks up project context instantly. You only lose conversational context — which was degrading anyway.
Complex cross-cutting refactors. Changing a data model that's used across the entire application — Claude handles any individual file fine, but the coordination across files gets shaky. I've had it update a type definition in one file and then write code in another file that uses the old type. The model is great at local reasoning. Global reasoning across a large codebase is still genuinely hard.
Assumptions about your preferences. Claude will add error handling you didn't ask for, restructure imports into an order it prefers, sometimes add comments explaining obvious code. These aren't wrong, exactly, but they're not what you asked for. CLAUDE.md helps here — you can specify "don't add comments to code unless asked" — but Claude will still occasionally freelance.
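Concretely, a conventions section in CLAUDE.md heads off most of this. A sketch built from the preferences above, not my file verbatim:

```markdown
## Conventions

- Don't add comments to code unless asked
- Don't add error handling beyond what the task requires
- Preserve existing import order; don't reorganize imports
```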
None of these are dealbreakers. They're just the actual texture of working with the tool daily, which is different from the texture of demos and tutorials.
The compounding effect
Here's what surprised me most: the productivity gain isn't linear. It compounds.
Week one, you're figuring out how to phrase requests. Week two, your CLAUDE.md is dialed in and sessions start faster. Week three, you've internalized which tasks to delegate and which to do yourself. By month two, you're not thinking about how to use Claude Code anymore — you're just building things faster.
The shift that happens is subtle. You stop thinking "I need to write a pagination component" and start thinking "I need pagination — let me describe the constraints and let Claude handle the implementation." Your job becomes specifying what you want and verifying it's right. The mechanical act of typing code becomes a smaller part of the work. The thinking and reviewing part grows.
This is genuinely different from autocomplete tools. GitHub Copilot suggests the next line. Claude Code handles the next task. That's not a difference in degree — it's a difference in the unit of work you're delegating. And it changes how you think about your day.
Where this is going
I don't know exactly where the Claude Code workflow leads. I know the tool is getting better faster than I'm getting better at using it. I know that the CLAUDE.md pattern — teaching your tools about your project through structured documentation — feels like it generalizes beyond this one tool. And I know that the people who figure out how to work with these systems effectively aren't necessarily the best programmers. They're the people who are best at specifying what they want and recognizing when they've gotten it.
That's a different skill than writing code. Whether it's a better one is still an open question.