Claude Code best practices have evolved rapidly since its launch. What started as a terminal-based assistant has become a full agentic development environment with skills, hooks, subagents, plugins, and multi-session orchestration. The developers who get the most out of Claude Code are not the ones who type the best prompts — they are the ones who engineer their context.
This guide is organized by workflow stage — from project setup through implementation to review — and draws from Anthropic's official documentation, community discussions, and real production workflows. Every section addresses a pain point that experienced users have solved.
The 80% Problem: Why Best Practices Matter
A 2026 survey found that 66% of developers report what Addy Osmani calls "the 80% problem" — AI solutions that are almost right, but not quite. Even more striking: 45% say debugging AI code takes longer than writing it themselves.
These numbers drop dramatically when developers follow structured practices. The root causes are predictable: vague prompts, polluted context, missing verification, and the butterfly effect — where the model misunderstands something early, builds an entire feature on faulty premises, and nobody notices until five PRs deep.
The core insight
As one r/ExperiencedDevs user put it: "Treat agents as leverage, not magic. Use them aggressively where speed is cheap, and be conservative where mistakes are expensive." Every best practice in this guide serves that principle.
Stage 1: Project Setup — CLAUDE.md as Infrastructure
The community consensus in 2026 is clear: CLAUDE.md is no longer optional. It is as important as your .gitignore. Martin Fowler popularized the term "context engineering" to describe the discipline of crafting the right context for AI tools, and CLAUDE.md is the foundation.
Run /init to generate a starter CLAUDE.md, then refine over time. The golden rule: for each line, ask "Would removing this cause Claude to make mistakes?" If not, cut it.
| Include | Exclude |
|---|---|
| Bash commands Claude can't guess | Anything Claude can figure out by reading code |
| Code style rules that differ from defaults | Standard language conventions Claude already knows |
| Testing instructions & preferred runners | Detailed API docs (link instead) |
| Repo etiquette (branch naming, PR rules) | Information that changes frequently |
| Architecture decisions & env quirks | File-by-file codebase descriptions |
| Common gotchas & non-obvious behaviors | Self-evident practices like "write clean code" |
CLAUDE.md Location Hierarchy
~/.claude/CLAUDE.md
User-level defaults across all projects. Personal preferences like commit style, testing approach, and coding conventions.
./CLAUDE.md (project root)
Team-shared project context. Check into git. Build commands, architecture, repo etiquette. This is the primary file.
Parent directories
Monorepo support. Both root/CLAUDE.md and root/packages/api/CLAUDE.md load automatically when working in the api package.
Child directories
On-demand loading. Claude pulls in child CLAUDE.md files when working with files in those directories.
CLAUDE.md Templates That Actually Work
The biggest mistake developers make is generating a CLAUDE.md with /init and never refining it. Here are two battle-tested templates for common stacks.
Next.js / TypeScript CLAUDE.md
# Commands
- Dev: `bun run dev` (port 3000)
- Test: `bun test` (single: `bun test path/to/file`)
- Lint: `bun run lint`
- Typecheck: `bun run typecheck`
- Build: `bun run build`
# Code style
- Use ES modules (import/export), not CommonJS
- Destructure imports when possible
- Prefer named exports over default exports
- Use TypeScript strict mode — no `any` types
# Architecture
- Next.js 15 App Router, React 19, TypeScript strict
- Database: PostgreSQL with Drizzle ORM
- Auth: Clerk middleware protects /dashboard/*
- Payments: Stripe webhooks at /api/webhooks/stripe
- See @docs/api-patterns.md for REST conventions
# Workflow
- IMPORTANT: Always typecheck after a series of code changes
- Prefer running single tests, not the whole suite
- Commit messages: imperative mood, reference issue number
- Never push directly to main — create a branch and PR
# Gotchas
- IMPORTANT: Never import from @/lib/server in client components
- The /api/webhooks route must remain unprotected by auth
- Drizzle migrations: run `bun run db:push`, never edit SQL directly
Python / Django CLAUDE.md
# Commands
- Dev: `python manage.py runserver`
- Test: `pytest` (single: `pytest path/to/test.py -k test_name`)
- Lint: `ruff check .`
- Format: `ruff format .`
- Migrations: `python manage.py makemigrations && python manage.py migrate`
# Code style
- Python 3.12+, type hints on all function signatures
- Use pathlib over os.path
- Prefer dataclasses or Pydantic over raw dicts
# Architecture
- Django 5.1, DRF for API, Celery for background tasks
- Database: PostgreSQL, no raw SQL — use ORM
- Auth: django-allauth with JWT tokens
- See @docs/api-standards.md for endpoint conventions
# Workflow
- IMPORTANT: Run pytest after changes, don't skip
- Write migrations for every model change
- Use Django signals sparingly — prefer explicit calls
# Gotchas
- YOU MUST NOT edit files in /core/auth/ without approval
- Background tasks in /tasks/ must be idempotent
- The WEBHOOK_SECRET env var must be set for Stripe
Emphasis markers work
Adding IMPORTANT or YOU MUST to critical rules measurably improves compliance. But use it sparingly — if everything is important, nothing is. Check CLAUDE.md into git so your team can contribute and refine it over time.
Token Economics: Stop Wasting Context
Claude Code uses 5.5x fewer tokens than Cursor for equivalent tasks. But most developers still waste tokens by feeding too much context. Every file Claude reads consumes context, whether it was relevant or not.
Create a .claudeignore
Just like .gitignore keeps your repo clean, .claudeignore keeps your context clean. Exclude build artifacts, dependencies, and generated files.
.claudeignore — stop wasting context on noise
# Dependencies
node_modules/
.venv/
vendor/
# Build artifacts
dist/
build/
.next/
__pycache__/
# Generated files
*.lock
*.min.js
*.map
coverage/
# Large assets
public/assets/
*.png
*.jpg
*.svg
Stage 2: Task Scoping — Prevent the Butterfly Effect
The butterfly effect is the most destructive failure mode in agentic coding. Developers on r/ChatGPTCoding report that asking AI to tweak something tiny breaks unrelated features. The model misunderstands something early, builds an entire feature on faulty premises, and nobody notices until five PRs deep.
The fix is ruthlessly tight task scoping. The more precise your instructions, the fewer corrections you need.
| Strategy | Vague Prompt (causes butterfly effect) | Scoped Prompt (prevents it) |
|---|---|---|
| Scope the task | "add tests for foo.py" | "write a test for foo.py covering the edge case where user is logged out. avoid mocks." |
| Point to patterns | "add a calendar widget" | "look at HotDogWidget.php for the pattern, then implement a calendar widget following the same structure" |
| Describe symptoms | "fix the login bug" | "login fails after session timeout. check auth flow in src/auth/, especially token refresh. write a failing test, then fix it." |
| Investigate history | "why is ExecutionFactory weird?" | "look through ExecutionFactory's git history and summarize how its API came to be" |
The Interview Pattern
For larger features, have Claude interview you before building anything:
Let Claude interview you before implementing
I want to build [brief description]. Interview me in detail
using the AskUserQuestion tool.
Ask about technical implementation, UI/UX, edge cases,
concerns, and tradeoffs. Don't ask obvious questions —
dig into the hard parts I might not have considered.
Keep interviewing until we've covered everything,
then write a complete spec to SPEC.md.
Once the spec is complete, start a fresh session to execute it. The new session has clean context focused entirely on implementation.
Plan Mode: Think Before You Code
Plan Mode separates research from implementation, preventing Claude from solving the wrong problem. Toggle it by pressing Shift+Tab twice.
Explore
Enter Plan Mode. Claude reads files and answers questions without making changes.
Plan
Ask Claude to create a detailed implementation plan. Press Ctrl+G to edit in your editor.
Implement
Switch to Normal Mode. Claude codes against its plan, verifying as it goes.
Commit
Ask Claude to commit with a descriptive message and open a PR.
When to skip Plan Mode
Planning adds overhead. Skip it when the scope is clear and the fix is small — fixing a typo, adding a log line, renaming a variable. If you could describe the diff in one sentence, skip the plan. Planning is most valuable when you're uncertain about the approach, the change modifies multiple files, or you're unfamiliar with the code.
Stage 3: During the Session — Context Hygiene
The #1 practical tip from experienced Claude Code users: start fresh sessions per task. Long sessions degrade. Both Claude Code and Cursor lose quality in extended conversations. The fix is not to manage degradation — it is to prevent it.
| Command | When to Use | Effect |
|---|---|---|
| /clear | Between unrelated tasks | Resets entire context window |
| /compact | When context is 70%+ full | Summarizes while preserving key info |
| /compact <focus> | When specific work matters | Targeted summarization |
| Esc + Esc | Mid-session recovery | Rewind to checkpoint, optionally summarize |
| Subagents | Codebase research | Searches in separate context, returns summary |
The Two-Correction Rule
If you've corrected Claude more than twice on the same issue, the context is cluttered with failed approaches. Run /clear and start fresh. A clean session with a better prompt almost always outperforms a long session with accumulated corrections.
Batch Your Reviews — Avoid Decision Fatigue
A viral tweet from a developer put it bluntly: "Agentic coding has made work faster, but subordinates constantly demanding decisions is quite exhausting." The fix: don't approve every single edit individually. Let Claude work through a series of related changes, then review the batch with git diff.
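The batch-review loop is plain git once Claude finishes a run of edits. A minimal sketch, demonstrated in a throwaway repository so it is self-contained (the rateLimiter.ts file and the commit messages are illustrative):

```shell
set -e
# Throwaway repo so the sketch runs anywhere; in a real project,
# only the diff/commit steps below apply.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "baseline"

# Simulate a batch of related agent edits
printf 'export const LIMIT = 100;\n' > rateLimiter.ts
printf 'export const WINDOW_SECONDS = 60;\n' >> rateLimiter.ts
git add -A

# One review pass over the whole batch, not per-edit approvals
git diff --cached --stat
git diff --cached

# Accept the batch as a single commit
git commit -q -m "feat: add rate limiter constants (batch-reviewed)"
```

To reject a batch instead, `git restore --staged .` followed by `git restore .` discards it, and you re-prompt with a tighter scope.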
Customizing compaction in CLAUDE.md
# CLAUDE.md
When compacting, always preserve:
- The full list of modified files
- Any test commands and their results
- The current implementation plan
- Error messages that haven't been resolved
Subagents for Clean Context
Subagents run in separate context windows and report back summaries. When Claude researches a codebase, it reads many files — all consuming your main context. A subagent explores in isolation and returns only what is relevant, saving 40%+ of input tokens.
Serial Search (main context)
- Claude greps "auth" — reads 3 files
- Greps "validate" — reads 4 more files
- Greps "permission" — reads 5 more files
- 12 files in context, mostly irrelevant
- Claude reasons over all of it — slower, less accurate
Subagent Search (isolated context)
- Subagent explores 8 paths in parallel
- Filters results before returning
- Returns: auth/validator.ts lines 45-89
- Only ~400 lines added to main context
- Claude stays focused on relevant code
Using subagents effectively
# Investigation — runs in separate context
"Use subagents to investigate how our authentication
system handles token refresh, and whether we have
any existing OAuth utilities I should reuse."
# Verification — also in separate context
"Use a subagent to review this code for edge cases
and security vulnerabilities."
# Custom subagent definition (.claude/agents/security-reviewer.md)
---
name: security-reviewer
description: Reviews code for security vulnerabilities
tools: Read, Grep, Glob, Bash
model: opus
---
You are a senior security engineer. Review code for:
- Injection vulnerabilities (SQL, XSS, command injection)
- Authentication and authorization flaws
- Secrets or credentials in code
WarpGrep: The Recommended Code Search Subagent
The default code search in Claude Code uses Haiku or Sonnet as its search model. WarpGrep replaces this with a dedicated search subagent that outperforms both Haiku and Sonnet on code search tasks while keeping your main context clean.
The difference is significant in practice. WarpGrep is purpose-built for code retrieval — it understands code structure, function boundaries, and import chains in a way that general-purpose small models do not. When Claude delegates a search to WarpGrep, the search runs in a separate context window and returns only the relevant line ranges, not entire files.
CLAUDE.md — configure WarpGrep as your search subagent
# CLAUDE.md
# Code Search
Use WarpGrep as the main code search subagent.
It outperforms Haiku and Sonnet on code search.
Use it for broad queries: "Find the auth flow",
"How does payment processing work?",
"Where is the rate limiting handled?"
Do NOT use it for exact keyword matching — use
Grep directly for that. WarpGrep is for semantic
queries where you need to understand code in context.
When to use WarpGrep vs Grep
Use WarpGrep for broad semantic queries: "Find the authentication flow", "How does the payment system work?", "Where is error handling for API timeouts?" Use Grep directly for exact keyword matches: className="auth", import { foo }. WarpGrep searches in a separate context window, so your main session stays clean for implementation work.
Agent Teams: Parallel Orchestration
Agent teams let you spawn multiple Claude instances that work in parallel on different parts of a task. This is the most powerful scaling pattern in Claude Code — but it requires discipline around model selection and communication rules.
Specify the Model for Every Teammate
Different tasks need different models. Research and exploration tasks can use cheaper, faster models. Implementation and complex reasoning tasks should use the strongest available model. Always specify the model explicitly — never rely on the default.
| Task Type | Recommended Model | Why |
|---|---|---|
| Codebase exploration | haiku | Fast, cheap — ideal for reading and summarizing files |
| Code search / retrieval | WarpGrep MCP | Purpose-built for code search, outperforms Haiku/Sonnet |
| Implementation | sonnet or opus | Needs strong reasoning for correct code generation |
| Complex refactoring | opus | Multi-file changes with cross-cutting concerns need the strongest model |
| Test generation | sonnet | Good balance of speed and quality for straightforward generation |
| Code review | opus | Security and correctness review benefits from strongest reasoning |
Communication Rules for Teams
Without explicit communication rules, agent teams devolve into chaos — teammates step on each other's work, duplicate effort, or produce conflicting outputs. Define these rules in your CLAUDE.md or agent frontmatter.
CLAUDE.md — agent team communication rules
# Agent Teams
When spawning agent teams:
- ALWAYS specify the model for each teammate:
- Use haiku for research and exploration tasks
- Use sonnet for implementation and test generation
- Use opus for complex refactoring and code review
- Communication rules:
- Teammates MUST use SendMessage to communicate — plain text output is invisible to other agents
- Team lead assigns tasks via TaskCreate and TaskUpdate
- Teammates mark tasks completed via TaskUpdate before going idle
- Use message type "message" for direct communication (NOT "broadcast" unless critical)
- Each teammate works on ONE task at a time — no multitasking
- File locking:
- No two teammates should edit the same file simultaneously
- If a file needs sequential edits, use task dependencies (blockedBy)
- Always shut down teammates when work is complete
Custom agent definition with model and rules
# .claude/agents/test-writer.md
---
name: test-writer
description: Writes comprehensive test suites
tools: Read, Grep, Glob, Bash, Edit, Write
model: sonnet
---
You are a test engineer. For each file assigned:
1. Read the source file to understand its API
2. Check for existing tests
3. Write tests covering: happy path, edge cases,
error handling, boundary conditions
4. Run the tests to verify they pass
5. Mark your task as completed
Communication: Send a message to the team lead
when you finish each file. Include the test
file path and pass/fail status.
Nested CLAUDE.md for Team Context
In monorepos or multi-package projects, nested CLAUDE.md files give each team or package its own context. Claude loads the nearest CLAUDE.md files hierarchically — root-level rules apply everywhere, while child-level rules apply only when working in that directory.
Nested CLAUDE.md structure for a monorepo
# Project structure with nested CLAUDE.md
myproject/
├── CLAUDE.md # Root: shared commands, repo etiquette
├── packages/
│ ├── api/
│ │ └── CLAUDE.md # API-specific: "Use Drizzle ORM, not raw SQL"
│ ├── web/
│ │ └── CLAUDE.md # Frontend: "Use Tailwind, prefer server components"
│ └── shared/
│ └── CLAUDE.md # Shared: "Changes here affect all packages — run full test suite"
└── .claude/
└── agents/ # Custom agent definitions
├── api-dev.md
└── frontend-dev.md
# Root CLAUDE.md (applies to all packages)
## Commands
- Install: `bun install` (from root)
- Test all: `bun run test`
- Typecheck: `bun run typecheck:fast`
## Workflow
- IMPORTANT: Always push to a branch, never directly to main
- Run typecheck after every series of code changes
# packages/api/CLAUDE.md (@import from root)
@../CLAUDE.md
## API-Specific Rules
- Use Drizzle ORM for all database operations
- Test: `bun test packages/api`
- Migrations: `bun run db:push` (never edit SQL directly)
# packages/web/CLAUDE.md
@../CLAUDE.md
## Frontend-Specific Rules
- Prefer React Server Components, use 'use client' only when needed
- Use Tailwind CSS, no inline styles
- Test: `bun test packages/web`
Push to branches, not main
Add this to every project's CLAUDE.md: "IMPORTANT: Always push to a branch and create a PR. Never push directly to main." This is especially critical when running agent teams or parallel sessions — multiple agents pushing to main simultaneously will create merge conflicts and lost work. Branch-based workflows give you a review checkpoint before changes land.
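The branch-first rule is easy to verify with plain git. A minimal, self-contained sketch (a local bare repository stands in for GitHub; the branch and file names are illustrative):

```shell
set -e
root=$(mktemp -d)
# A local bare repo stands in for the remote
git init -q --bare -b main "$root/origin.git"
git init -q -b main "$root/repo"
cd "$root/repo"
git remote add origin "$root/origin.git"
git config user.email "agent@example.com"
git config user.name "Claude Agent"
git commit -q --allow-empty -m "baseline"
git push -q -u origin main

# Agent work lands on a branch, never on main
git switch -q -c claude/rate-limiter
printf 'export const LIMIT = 100;\n' > rateLimiter.ts
git add rateLimiter.ts
git commit -q -m "feat: add rate limiter constant"
git push -q -u origin claude/rate-limiter

# The feature branch exists on the remote; main is untouched
git ls-remote --heads origin
```

After the push, remote `main` still points at the baseline commit; nothing lands on it until a human reviews and merges the PR.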
Stage 4: Review and Verification
Giving Claude verification criteria is the single highest-leverage practice. Claude performs dramatically better when it can verify its own work — run tests, compare screenshots, validate outputs.
| Strategy | Before (vague) | After (verifiable) |
|---|---|---|
| Provide test cases | "implement email validation" | "write validateEmail function. test: user@example.com is true, invalid is false, user@.com is false. run tests after." |
| Visual verification | "make the dashboard look better" | "[paste screenshot] implement this design. screenshot the result and compare to original." |
| Root cause analysis | "the build is failing" | "build fails with [paste error]. fix root cause, don't suppress. verify build succeeds." |
Security: The Non-Negotiable Review
AI-generated code has 1.5-2x more security bugs
Research shows AI-generated code contains significantly more security vulnerabilities than human-written code — improper password handling, insecure direct object references, and missing input validation are the most common. Best practices:
- Always manually review auth, payment, and data mutation code
- Use hooks for automated security checks on every edit
- Never fully delegate security-critical code paths
- Create a `security-reviewer` subagent for automated security audits
The Delegation Spectrum: What to Delegate, What to Supervise
Anthropic's 2026 developer report shows that developers use AI in 60% of their work but fully delegate only 0-20% of tasks. The best developers are deliberate about which tasks fall where.
| Category | Fully Delegate | Supervise Closely |
|---|---|---|
| Code generation | Test generation, boilerplate, scaffolding | Auth flows, payment logic, data mutations |
| Refactoring | Migrations, renames, format changes | Security-critical refactors, API contracts |
| Documentation | Docstrings, READMEs, inline comments | Security docs, compliance documentation |
| Debugging | Lint errors, type errors, build fixes | Race conditions, data corruption, auth bugs |
| Infrastructure | CI config, Dockerfiles, env setup | Production deployments, database schemas |
The terminal-first workflow
Power users report a shift: "Since I started using Claude Code, I use IDEs less. Claude keeps me in terminal while it finishes tasks." The workflow is: describe what you want, Claude builds it, review the git diff, commit or iterate. The IDE becomes a viewer, not an editor.
Stage 5: Advanced Patterns
The Writer/Reviewer Pattern
Session A: Writer
"Implement a rate limiter for our API endpoints"
Session B: Reviewer
"Review the rate limiter in @src/middleware/rateLimiter.ts. Look for edge cases, race conditions, and consistency with existing middleware."
The reviewer has clean context and won't be biased toward code it just wrote. This produces significantly better review than self-review in the same session.
Hooks: Deterministic Guardrails
Hooks run scripts automatically at specific lifecycle points. Unlike CLAUDE.md rules which are advisory, hooks execute every time. Use them for linting after edits, blocking writes to protected files, and sending notifications.
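As a concrete sketch, hooks are configured in `.claude/settings.json`. The shape below follows the documented hooks format, but treat the exact matcher strings, the `jq` extraction of the edited file path, and the `scripts/block-protected-paths.sh` helper as assumptions to adapt to your setup:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs -r npx prettier --write"
          }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/block-protected-paths.sh"
          }
        ]
      }
    ]
  }
}
```

Hooks receive the tool call as JSON on stdin; a PreToolUse hook that exits with the blocking status (exit code 2 in the documented convention) stops the edit entirely, which is what makes hooks deterministic where CLAUDE.md rules are merely advisory.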
Skills: On-Demand Knowledge
Skills extend Claude with project-specific workflows that load on demand — unlike CLAUDE.md which loads every session. Create SKILL.md files in .claude/skills/ for domain knowledge and repeatable workflows like /fix-issue 1234.
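A minimal sketch of a skill file; the frontmatter keys mirror the documented SKILL.md format, while the workflow steps and the `gh` commands are illustrative assumptions:

```markdown
<!-- .claude/skills/fix-issue/SKILL.md -->
---
name: fix-issue
description: Fetch a GitHub issue, reproduce it with a failing test, fix it, and open a PR
---

When asked to fix issue NNNN:

1. Run `gh issue view NNNN` to read the issue description.
2. Write a failing test that reproduces the reported behavior.
3. Implement the fix, then run the test suite to confirm it passes.
4. Commit on a new branch and open a PR with `gh pr create`,
   referencing the issue number in the title.
```

Because skills load on demand, this workflow consumes no context until someone actually invokes it.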
Fan-Out for Large Migrations
Distributing work across parallel Claude invocations
# Generate a task list
claude -p "List all Python files that need migrating" > files.txt
# Process each file in parallel
for file in $(cat files.txt); do
claude -p "Migrate $file from unittest to pytest. Return OK or FAIL." \
--allowedTools "Edit,Bash(git commit *)" &
done
wait
Common Failure Patterns to Avoid
The Kitchen Sink Session
You start with one task, ask something unrelated, then go back. Context fills with irrelevant information. Fix: /clear between unrelated tasks.
The Over-Specified CLAUDE.md
Your CLAUDE.md is too long. Important rules get lost in the noise and Claude ignores half of it. Fix: ruthless pruning — delete rules Claude follows without prompting.
The Butterfly Effect
A tiny change cascades into broken features. The model misunderstood something early and built on faulty premises. Fix: tight task scoping and Plan Mode.
The Trust-Then-Verify Gap
Claude produces plausible code that doesn't handle edge cases. Fix: always provide verification (tests, scripts, screenshots). If you can't verify it, don't ship it.
Frequently Asked Questions
What is a CLAUDE.md file and why do I need one?
CLAUDE.md is a markdown file that Claude Code reads at the start of every conversation. It provides persistent context — build commands, code style rules, architecture decisions, and gotchas that Claude cannot infer from code alone. In 2026, the community consensus is that CLAUDE.md is as important as .gitignore — essential infrastructure, not optional documentation.
How long should my CLAUDE.md file be?
Keep your root CLAUDE.md to 50-100 lines with @imports for detailed sections. For each line, ask: "Would removing this cause Claude to make mistakes?" If not, cut it.
How do I prevent Claude Code from losing context in long sessions?
The #1 tip from experienced users: start fresh sessions per task. Use /clear between unrelated tasks, compact proactively at 70% context usage, delegate research to subagents, and scope each task narrowly. If you've corrected Claude more than twice on the same issue, clear and start fresh.
What tasks should I fully delegate vs. supervise closely?
Fully delegate: test generation, boilerplate, migrations, documentation, formatting. Supervise closely: authentication, payments, data mutations, security logic. Research shows AI-generated code has 1.5-2x more security bugs, so always manually review security-critical paths.
What is the 80% problem with AI coding?
The 80% problem refers to AI solutions that are almost right but not quite — 66% of developers report this. The fix is tight task scoping, verification criteria, and Plan Mode to prevent Claude from solving the wrong problem.
What is Plan Mode and when should I use it?
Plan Mode separates research from implementation. Use it when you're uncertain about the approach, changes span multiple files, or you're unfamiliar with the code. Skip it for small, well-defined changes like fixing typos.
How do I reduce token usage in Claude Code?
Use .claudeignore to exclude node_modules/dist/build artifacts, delegate research to subagents (saves 40%+ tokens), use /clear between tasks, and compact proactively at 70% context usage. Claude Code already uses 5.5x fewer tokens than Cursor for equivalent tasks.
Supercharge Your Claude Code Workflow
WarpGrep gives Claude Code a search subagent that keeps your context clean — saving 40% of input tokens and maintaining accuracy throughout long sessions.