How I Use Claude Code
January 6, 2026
These are notes on how I've been using Claude Code at work on a travel booking platform (Next.js, tRPC, Drizzle, monorepo). Not a tutorial, just what's working for me.
The Basic Loop
I typically have 2-3 Claude Code agents running in separate terminal tabs. That's roughly my context-switching capacity - your mileage may vary. Sometimes it's:
- One doing a planned feature or bugfix
- One creating or refining a GitHub issue (with codebase context)
- One in a holding pattern waiting for CI or code review
Sometimes two agents are working on the same feature - one exploring an approach while I steer another toward implementation.
I flick between them. It's not like managing junior devs - it's more like having slow, thorough tools that I check in on. The cognitive load is lower than it sounds because each agent has its own isolated context.
Creating GitHub Issues From Claude Code
When I need to file an issue, I ask Claude Code to write it using the gh CLI. But before drafting, I ask it to:
- Search for duplicate issues
- Audit the relevant code paths
- Verify the bug exists where I think it does
- Flesh out reproduction steps with actual file paths and line numbers
The result is an issue that's already grounded in the codebase. Claude often catches things I missed - "this function is also called from X" or "there's a related TODO at Y". The issue ends up more accurate than what I'd write from memory.
You can make this a slash command with arguments:
# Create GitHub Issue
Create a GitHub issue for: $ARGUMENTS
## Instructions
1. Search existing issues for duplicates: `gh issue list --search "keywords"`
2. Audit the codebase to understand the scope and tighten terminology - for example, upgrade fuzzy language to concrete references to models, React components, services, etc.
3. Create with `gh issue create`
4. The audit may surface questions worth asking - but only to remove ambiguity from the issue description, not to start problem-solving
Then invoke: /create-issue checkout fails when cart has mixed currency items
The PR Feedback Loop
I use Greptile for automated code review. It posts inline comments on PRs - catches circular dependencies, race conditions, missing error handling, obvious mistakes.
I have a /pr-feedback slash command that:
- Waits for CI to complete (`gh pr checks --watch`)
- Fetches the Greptile review via MCP
- Addresses the feedback
- Pushes fixes
- Triggers a re-review
- Repeats until clean
Most Greptile comments lead to actual changes. It's not noise - it catches real issues. When Greptile flags a false positive, Claude is good at recognizing it and moving on with a brief "intentionally skipped" note.
The loop means I can kick off a PR, context-switch to something else, then come back when it's review-ready.
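The internals aren't magic - a command file in the style of /create-issue above could look roughly like this. This is a sketch; the MCP fetch step depends entirely on how your Greptile MCP server is set up, and the wording is illustrative:

```markdown
# PR Feedback

Address automated review feedback on the current PR.

## Instructions

1. Wait for CI to complete: `gh pr checks --watch`
2. Fetch the latest Greptile review comments (via the MCP server)
3. Address each comment, or note briefly why it is intentionally skipped
4. Push the fixes and trigger a re-review
5. Repeat until the review is clean
```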
Context Management: /stash and /catchup
Claude Code's automatic compaction degrades agent quality. The summarization loses nuance - especially mid-debug when you've built up a mental model of the problem.
My solution: explicit checkpoints with /stash.
When I need to clear context (or just want a clean break), I run /stash, which dumps everything to a PLAN.md file. I frame it to the agent as "good day's work, we'll pick this up tomorrow" - which seems to generate better PLAN.md files. Of course I'm actually picking things up immediately with a fresh agent, but the human framing helps. The file looks like:
# [Task Title] - Implementation Plan
## Next Up
> **Immediate next task**: Fix the type coercion in detectLegacyCart
> **Context needed**: The function checks `=== null` but Drizzle might return undefined
## Current State
[What I've learned, code snippets, file paths]
## What I've Tried
[Dead ends, test results, hypotheses ruled out]
## Key Decisions
- Using ghost order approach instead of cookie invalidation because...
## Remaining Steps
- [ ] Add debug logging
- [ ] Test with production data
Then /catchup reads this file and resumes with full context. The agent picks up exactly where it left off.
The key insight: I control what gets preserved. Sometimes I tell it "note what you learned" (keeps debug findings, prunes implementation details). Sometimes "note next steps" (prunes completed work, keeps the plan). This beats automatic summarization because I know what matters.
PLAN.md is gitignored - it's ephemeral working memory, not documentation.
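Both are ordinary slash commands in the same style as /create-issue. As a sketch, /stash can be little more than the framing written down (wording illustrative, tune it to what produces good PLAN.md files for you):

```markdown
# Stash

Good day's work - we'll pick this up tomorrow. Before you stop, write
everything a fresh agent would need into PLAN.md: the immediate next
task, current state, what we've tried, key decisions, and remaining
steps. Include real file paths and code snippets. Do not start new work.
```

/catchup is the mirror image: read PLAN.md, resume from the "Next Up" section, and confirm understanding of the current state before changing anything.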
Specialized Agents
I've set up custom agents for specific tasks:
code-validator (Haiku)
Runs `pnpm lint:fix` and `pnpm turbo typecheck`, reports issues. That's it. Using Haiku because:
- Validation doesn't need intelligence
- Faster and cheaper
- Haiku won't try to "fix" things or suggest improvements - it just reports
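If you haven't set up custom agents before: they're markdown files with frontmatter under .claude/agents/. A minimal sketch of what a validator like this might look like (names and prompt wording are illustrative, not my exact file):

```markdown
---
name: code-validator
description: Run lint and typecheck, report issues without fixing them
model: haiku
---

Run `pnpm lint:fix` and `pnpm turbo typecheck`.
Report every remaining error and warning verbatim, grouped by file.
Do not attempt fixes beyond what lint:fix applies, and do not
suggest improvements.
```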
i18n-translator
Knows our 6 locales, understands ICU message format, has jq patterns for bulk operations. Specialized knowledge in a reusable agent.
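The jq patterns are the valuable part. As a hypothetical example of the kind of bulk operation this agent runs (the file layout, key, and message are made up), this fills in a missing message key across every locale file while leaving existing translations alone:

```shell
#!/bin/bash
# Hypothetical bulk locale operation - paths and keys are illustrative.
# Adds a default for checkout.mixedCurrencyError to any locale file
# that doesn't have one yet; existing translations are untouched.
add_missing_key() {
  local root="$1"   # repo root containing locales/<locale>/messages.json
  for f in "$root"/locales/*/messages.json; do
    [ -e "$f" ] || continue
    # jq's //= only assigns when the current value is null (or false),
    # so re-running this is a no-op for already-translated locales.
    jq '.checkout.mixedCurrencyError //= "Carts cannot mix currencies."' \
      "$f" > "$f.tmp" && mv "$f.tmp" "$f"
  done
}
```

The `//=` update-assignment is what makes the operation idempotent - safe to run across all six locales without clobbering real translations.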
software-architect-analyzer (Opus)
For planning complex features. Has an "ultrathink" prompt that forces thorough analysis before proposing anything. Overkill for small tasks, essential for architectural decisions.
What I Want: Worktree Isolation
Right now I'm limited to one PR at a time, which constrains the 2-3 agent setup. The solution is git worktrees - each agent gets its own worktree so they can work on separate PRs without interfering.
I've set up the infrastructure but haven't adopted it yet. There's a PreToolUse hook ready that blocks any cd command trying to escape the worktree:
# .claude/scripts/validate-worktree-dir.sh
# Blocks navigation to parent directories or the main repo
# when running inside a .conductor/* worktree
This would prevent agents from accidentally modifying the wrong branch. Learning to spin up worktrees smoothly is next on my list.
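For the curious, the hook's core logic is simple. This is a crude sketch, not my production script - it assumes Claude Code's PreToolUse contract (the tool call arrives as JSON on stdin, and exit code 2 blocks the call, feeding stderr back to the agent), and the grep pattern will have false positives:

```shell
#!/bin/bash
# Sketch of .claude/scripts/validate-worktree-dir.sh (logic illustrative).
# Returns 2 (block) when a Bash tool call tries to cd out of a
# .conductor/* worktree; returns 0 (allow) otherwise.
validate_worktree_dir() {
  local payload="$1"            # PreToolUse JSON payload
  case "$PWD" in
    */.conductor/*) ;;          # inside a worktree: enforce the check
    *) return 0 ;;              # main repo: allow everything
  esac
  # Crude match: a cd targeting a parent dir or an absolute path
  if printf '%s' "$payload" | grep -Eq '"command"[^"]*"[^"]*cd +(\.\.|/)'; then
    echo "Blocked: cd would leave the worktree" >&2
    return 2
  fi
  return 0
}
# The real hook script would run: validate_worktree_dir "$(cat)"; exit $?
```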
Visual Indicators for Terminal Tabs (Kitty)
With multiple agents running, I want to see at a glance which tabs need attention. Kitty's remote control lets me highlight inactive tabs that are waiting for input - they glow yellow in the tab bar.
First, enable remote control in kitty.conf:
allow_remote_control yes
listen_on unix:/tmp/kitty-{kitty_pid}
Then add hooks in ~/.claude/settings.json:
{
"hooks": {
"Stop": [
{
"hooks": [
{
"type": "command",
"command": "/Users/you/.claude/hooks/stop.sh"
}
]
}
],
"SessionStart": [
{
"hooks": [
{
"type": "command",
"command": "/Users/you/.claude/hooks/start.sh"
}
]
}
],
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "/Users/you/.claude/hooks/start.sh"
}
]
}
]
}
}
The start.sh script clears the highlight when the agent starts working:
#!/bin/bash
socket="unix:/tmp/kitty-$KITTY_PID"
kitty @ --to "$socket" set-tab-color inactive_bg=none
The stop.sh script highlights the tab yellow when waiting:
#!/bin/bash
socket="unix:/tmp/kitty-$KITTY_PID"
kitty @ --to "$socket" set-tab-color inactive_bg=#b58900
Now when scanning my tab bar, yellow tabs need attention - they're blocked waiting for me.
Quick Tips
Use pbcopy and pbpaste on macOS. Tell Claude to pipe output to the clipboard instead of printing it. Copying multiline text from the TUI often introduces unwanted linebreaks and spaces - the clipboard sidesteps this entirely.
What's Still Rough
CI is slow. The /pr-feedback loop is great but each iteration takes 5-10 minutes for CI. I want to parallelize more.
Worktrees. I haven't adopted worktrees yet, which means I'm stuck on one PR at a time. That's the main bottleneck - not agent capacity, but branch isolation.
Context limits are real. Even with /stash, long debugging sessions hit limits. I'm experimenting with more aggressive pruning - explicitly telling the agent what to forget.
The Mental Model
It's not "AI writes my code." It's more like:
- Claude Code is a slow, thorough tool that I orchestrate
- Multiple agents let me parallelize waiting (CI, reviews, long operations)
- Explicit context management beats automatic summarization
- Specialized agents beat general prompts for repeated tasks
- The feedback loop (Greptile → fix → re-review) catches real bugs
The setup takes investment. Custom agents, slash commands, hooks - it's configuration work. But the payoff is a workflow where I'm rarely blocked waiting for one thing to finish.