How I Use Claude Code
January 6, 2026
These are notes on how I've been using Claude Code at work on a travel booking platform (Next.js, tRPC, Drizzle, monorepo). Not a tutorial, just what's working for me.
The Basic Loop
I typically have 2-3 Claude Code agents running in separate terminal tabs. That's roughly my context-switching capacity - your mileage may vary. Sometimes it's:
- One doing a planned feature or bugfix
- One creating or refining a GitHub issue (with codebase context)
- One in a holding pattern waiting for CI or code review
Sometimes two agents are working on the same feature - one exploring an approach while I steer another toward implementation.
I flick between them. It's not like managing junior devs - it's more like having slow, thorough tools that I check in on. The cognitive load is lower than it sounds because each agent has its own isolated context.
Creating GitHub Issues From Claude Code
When I need to file an issue, I ask Claude Code to write it using the gh CLI. But before drafting, I ask it to:
- Search for duplicate issues
- Audit the relevant code paths
- Verify the bug exists where I think it does
- Flesh out reproduction steps with actual file paths and line numbers
The result is an issue that's already grounded in the codebase. Claude often catches things I missed - "this function is also called from X" or "there's a related TODO at Y". The issue ends up more accurate than what I'd write from memory.
You can make this a slash command with arguments:
# Create GitHub Issue
Create a GitHub issue for: $ARGUMENTS
## Instructions
1. Search existing issues for duplicates: `gh issue list --search "keywords"`
2. Audit the codebase to understand the scope and tighten terminology - for example, upgrade fuzzy language to concrete references to models, React components, services, etc.
3. Create with `gh issue create`
4. If the audit raises questions, ask them - but only to remove ambiguity from the issue description, not to start problem solving
Then invoke: /create-issue checkout fails when cart has mixed currency items
The PR Feedback Loop
I use Greptile for automated code review. It posts inline comments on PRs - catches circular dependencies, race conditions, missing error handling, obvious mistakes.
I have a /pr-feedback slash command that:
- Waits for CI to complete (`gh pr checks --watch`)
- Fetches the Greptile review via MCP
- Addresses the feedback
- Pushes fixes
- Triggers a re-review
- Repeats until clean
Most Greptile comments lead to actual changes. It's not noise - it catches real issues. When Greptile flags a false positive, Claude is good at recognizing it and moving on with a brief "intentionally skipped" note.
The loop means I can kick off a PR, context-switch to something else, then come back when it's review-ready.
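For illustration, a hypothetical sketch of what that command file might look like, in the same style as the issue command above (the real one also wires up the Greptile MCP fetch):

```markdown
# PR Feedback Loop
Address review feedback on the current PR until it is clean.
## Instructions
1. Wait for CI to finish: `gh pr checks --watch`
2. Fetch the latest Greptile review (via MCP)
3. Address each comment; for false positives, reply "intentionally skipped" with a one-line reason
4. Commit and push the fixes
5. Trigger a re-review and repeat from step 1 until there are no new comments
```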
Context Management: Plan Mode + Parallel Subagents
Long sessions eventually hit context limits. When I’m getting close to the cap (say, ~5–10% left), the best move is usually to stop trying to squeeze more into the same thread and instead flip into a fresh context.
What works well now is:
- Ask Claude to produce a concrete plan (with clear, verifiable steps)
- Split larger work into parallelizable chunks
- Run those chunks as separate subagents so each gets a clean context
- Merge the results back into the parent thread
This is strictly better than “dump everything to a file and rehydrate later” because each sub-task starts clean, stays focused, and you don’t pay compaction/summarization tax mid-debug.
A prompt I use:
We’re near the context limit. Don’t continue in this thread.
Instead: propose a plan with 3–7 steps.
If the task is large, split it into sub-tasks that can run in parallel.
For each sub-task, specify:
- Goal
- Inputs (files/paths to inspect)
- Output (patch, notes, commands to run)
Then run the plan using subagents (one per sub-task) and report back with a merged summary.
The key is that subagents operate in fresh contexts and then return the distilled results to the parent. That keeps the main thread small and high-signal.
Closing the Loop
After Claude finishes a task, there's usually leftover context - things it learned about the codebase, explanations I gave, edge cases we discovered. Instead of letting that evaporate, I've started asking Claude to capture it.
Two prompts I use regularly:
"Summarize everything you learned and create a new Claude skill"
If I explained a non-obvious workflow (testing a specific integration, deploying to staging, debugging a particular service), Claude can turn that into a reusable skill file. Next time I - or another agent - needs to do the same thing, the knowledge is already there.
"Add the context I explained to the right CLAUDE.md"
During feature work, I often explain the purpose of things - what users are trying to accomplish, why a flow works the way it does, what problem we're actually solving. This high-level context seems to help Claude formulate better code solutions - clear non-code goals lead to better code.
I used to think of this as throwaway conversation, but I've started asking Claude to persist it to the appropriate CLAUDE.md file (root level for project-wide context, or package-specific for localized knowledge). The codebase gets smarter over time, and I spend less time re-explaining the same context to fresh agents.
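To make this concrete, here's a hypothetical example of the kind of entry that ends up in a package-level CLAUDE.md - the domain details are invented for illustration:

```markdown
<!-- packages/booking/CLAUDE.md -->
## Booking flow context
- Users can hold items priced in different currencies; totals are
  converted at checkout time, never at cart time.
- The "hold" state exists because supplier inventory APIs are slow -
  don't remove it to "simplify" the state machine.
```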
Specialized Agents
I've set up custom agents for specific tasks:
code-validator (Haiku)
Runs pnpm lint:fix and pnpm turbo typecheck, reports issues. That's it. Using Haiku because:
- Validation doesn't need intelligence
- Faster and cheaper
- Haiku won't try to "fix" things or suggest improvements - it just reports
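For reference, a sketch of what the agent definition might look like, assuming Claude Code's subagent format (a markdown file with YAML frontmatter under .claude/agents/) - the exact wording here is illustrative:

```markdown
---
name: code-validator
description: Runs lint and typecheck, reports results without fixing code
tools: Bash
model: haiku
---

Run `pnpm lint:fix` and `pnpm turbo typecheck`.
Report any remaining errors verbatim, with file paths.
Do not attempt fixes or suggest improvements.
```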
i18n-translator
Knows our 6 locales, understands ICU message format, has jq patterns for bulk operations. Specialized knowledge in a reusable agent.
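To give a flavor of the jq bulk operations, here's a hypothetical sketch - the file name and message keys are made up, and the real setup spans all 6 locales:

```shell
# Create a toy locale file with ICU-style messages (illustrative keys)
cat > en.json <<'EOF'
{
  "cart.items": "{count, plural, one {# item} other {# items}}",
  "checkout.pay": "Pay now"
}
EOF

# Bulk-add a key, writing back atomically via a temp file
jq '. + {"cart.empty": "Your cart is empty"}' en.json > en.json.tmp \
  && mv en.json.tmp en.json

jq -r 'keys[]' en.json   # jq returns keys sorted
```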
software-architect-analyzer (Opus)
For planning complex features. Has an "ultrathink" prompt that forces thorough analysis before proposing anything. Overkill for small tasks, essential for architectural decisions.
What I Want: Worktree Isolation
Right now I'm limited to one PR at a time, which constrains the 2-3 agent setup. The solution is git worktrees - each agent gets its own worktree so they can work on separate PRs without interfering.
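Spinning one up is cheap - a minimal sketch, run against a throwaway repo so it works standalone (the branch names and per-agent layout are illustrative):

```shell
# Set up a throwaway repo so the commands below run standalone
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=agent@example.com -c user.name=Agent \
  commit -q --allow-empty -m "init"

# One worktree (and branch) per agent, under a .conductor/ directory
git worktree add -b fix/checkout-currency .conductor/agent-1
git worktree add -b feat/locale-switcher .conductor/agent-2

git worktree list   # main checkout plus the two agent worktrees
```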
I've set up the infrastructure but haven't adopted it yet. There's a PreToolUse hook ready that blocks any cd command trying to escape the worktree:
# .claude/scripts/validate-worktree-dir.sh
# Blocks navigation to parent directories or the main repo
# when running inside a .conductor/* worktree
This would prevent agents from accidentally modifying the wrong branch. Learning to spin up worktrees smoothly is next on my list.
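The core of that hook is just a pattern check. A minimal sketch of the escape detection (the real hook also reads the tool call as JSON on stdin and exits with status 2 to block it - only the path check is shown here):

```shell
# Returns success (match) if a shell command tries to leave the worktree:
# `cd ..`, `cd ../anything`, or an absolute `cd /path`, anywhere in the line.
escapes_worktree() {
  printf '%s' "$1" | grep -qE '(^|[;&|][[:space:]]*)cd[[:space:]]+(\.\.|/)'
}

escapes_worktree "cd ../main-repo" && echo "blocked"   # prints "blocked"
escapes_worktree "cd packages/ui" || echo "allowed"    # prints "allowed"
```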
Visual Indicators for Terminal Tabs (Kitty)
With multiple agents running, I want to see at a glance which tabs need attention. Kitty's remote control lets me highlight inactive tabs that are waiting for input - they glow yellow in the tab bar.
First, enable remote control in kitty.conf:
allow_remote_control yes
listen_on unix:/tmp/kitty-{kitty_pid}
Then add hooks in ~/.claude/settings.json:
{
"hooks": {
"Stop": [
{
"hooks": [
{
"type": "command",
"command": "/Users/you/.claude/hooks/stop.sh"
}
]
}
],
"SessionStart": [
{
"hooks": [
{
"type": "command",
"command": "/Users/you/.claude/hooks/start.sh"
}
]
}
],
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "/Users/you/.claude/hooks/start.sh"
}
]
}
]
}
}
The start.sh script clears the highlight when the agent starts working:
#!/bin/bash
socket="unix:/tmp/kitty-$KITTY_PID"
kitty @ --to "$socket" set-tab-color inactive_bg=none
The stop.sh script highlights the tab yellow when waiting:
#!/bin/bash
socket="unix:/tmp/kitty-$KITTY_PID"
kitty @ --to "$socket" set-tab-color inactive_bg=#b58900
Now when scanning my tab bar, yellow tabs need attention - they're blocked waiting for me.
Quick Tips
Use pbcopy and pbpaste on macOS. Tell Claude to pipe output to the clipboard instead of printing it. Copying multiline text from the TUI often introduces unwanted linebreaks and spaces - the clipboard sidesteps this entirely.
What's Still Rough
CI is slow. The /pr-feedback loop is great but each iteration takes 5-10 minutes for CI. I want to parallelize more.
Worktrees. I haven't adopted worktrees yet, which means I'm stuck on one PR at a time. That's the main bottleneck - not agent capacity, but branch isolation.
Context limits are real. Even with plan-mode handoffs to fresh subagents, long debugging sessions hit limits. I'm experimenting with more aggressive pruning - explicitly telling the agent what to forget.
The Mental Model
It's not "AI writes my code." It's more like:
- Claude Code is a slow, thorough tool that I orchestrate
- Multiple agents let me parallelize waiting (CI, reviews, long operations)
- Explicit context management beats automatic summarization
- Specialized agents beat general prompts for repeated tasks
- The feedback loop (Greptile → fix → re-review) catches real bugs
The setup takes investment. Custom agents, slash commands, hooks - it's configuration work. But the payoff is a workflow where I'm rarely blocked waiting for one thing to finish.