The Context Problem: Notes on Sentry's Seer
July 8, 2025
I’ve been spending a lot of time with the new wave of AI coding agents, and a clear pattern is emerging. The core challenge isn't the model's raw intelligence; it's the tedious, manual work of feeding it the right information. You can't just throw your entire codebase at an AI and ask it to "fix the bug." It's a constant dance of tailoring, pruning, and managing context.
You have to prompt it carefully, steer it away from rabbit holes, dig up more details when it gets stuck, and ideally give it tools to create a feedback loop — some way to check its own work. This process of being a "context wrangler" for an AI can sometimes feel as time-consuming as fixing the bug yourself.
This is why my recent experience with Sentry's Seer has been such an eye-opener. It feels like Sentry looked at this exact context problem and realized they’ve already spent the last decade building the solution.
Seer works so well because it operates on a foundation of perfectly-pruned context that Sentry has been mastering for years. When an error report comes in, Sentry doesn't just capture a stack trace. It captures the full story: sourcemapped code from a specific commit, the breadcrumbs of events leading to the failure, the affected user, the environment details, traces across services, and more. This isn't just data; it's a narrative.
Sentry takes this entire narrative — a package of context that would be a pain for a developer to manually assemble and feed to a generic LLM — and hands it over to Seer.
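To make that context package a bit more concrete, here is a minimal sketch of how some of it gets attached from application code using the Sentry JavaScript SDK; the user, breadcrumb, and context values are illustrative, and much of this (breadcrumbs for HTTP calls, environment details) the SDK captures automatically.

```ts
import * as Sentry from "@sentry/node";

// Identify the affected user so the error report carries that context.
Sentry.setUser({ id: "user-123", email: "dev@example.com" });

// Breadcrumbs record the trail of events leading up to a failure;
// the SDK also records many automatically (HTTP requests, console output).
Sentry.addBreadcrumb({
  category: "slack",
  message: "Posting deploy notification to Slack webhook",
  level: "info",
});

// Arbitrary structured context ends up alongside the stack trace.
Sentry.setContext("deploy", { region: "us-east-1", build: "1.4.2" });
```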
This all hinges on a correctly configured Sentry instance, of course. Without source maps or the GitHub integration, Seer's guesses are no better than yours would be without the actual lines of code or access to the source. That creates a strong new incentive to get the Sentry configuration right: it's no longer just about easier manual debugging; it's the prerequisite for automated fixes.
The setup is straightforward enough with their official libraries. The GitHub integration is what closes the loop, giving Seer read access to the code and write access to open PRs.
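For reference, a minimal sketch of the SDK side of that setup, assuming a Node service; the environment variables are placeholders, source map upload is handled separately (typically via sentry-cli or a bundler plugin), and the GitHub integration is configured in Sentry itself rather than in code.

```ts
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,        // project DSN from Sentry
  environment: process.env.NODE_ENV,  // e.g. "production" vs "staging"
  release: process.env.GIT_SHA,       // pins events to a specific commit,
                                      // which is what makes sourcemapped,
                                      // commit-accurate stack traces possible
  tracesSampleRate: 0.1,              // sample traces across services
});
```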
Example
- An HTTP `fetch` call to a Slack webhook was throwing a `SyntaxError` when resolving the `response.json()` promise.
- I asked Seer to draft a PR. It correctly identified the root cause: Slack returns plaintext responses.
- Seer also correctly proposed using `response.text()` instead. That's an acceptable fix, although it probably could have skipped parsing the response altogether and just checked the status code (see the sketch below).
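Reconstructing the scenario, the before and after probably looked something like this; the webhook URL and payload are placeholders, not the actual code from that PR.

```ts
// Placeholder for the real webhook URL.
const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL ?? "";

async function notifySlack(message: string): Promise<void> {
  const res = await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: message }),
  });

  // Before: Slack's webhook endpoint replies with plain text ("ok"),
  // so parsing the body as JSON rejects with the SyntaxError Seer diagnosed.
  // const body = await res.json();

  // Seer's proposed fix: read the body as text instead.
  const body = await res.text();

  // Arguably simpler still: ignore the body and just check the status code.
  if (!res.ok) {
    throw new Error(`Slack webhook failed: ${res.status} ${body}`);
  }
}
```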
Two parting thoughts
- Can this be pointed at CI failures? It feels like a similar problem space. Or should another agent pick that work up? I'm excited to see how coding agents work together to keep projects clean, quietly refactor things, tackle feature work and propose bug fixes based on production data.
- If I add screen recording, will that get added to the context?