The session went well. We worked through a tricky bug, refactored a messy function, and sketched out the next three features. We're in flow. The AI understands our codebase perfectly.
Then we close the terminal.
Tomorrow, all of that context vanishes. The AI won't remember the bug, the refactor, or the feature plan. We'll start from zero, trying to reconstruct what we learned today.
This is the cost of not capturing context before a session ends.
## What Gets Lost
When a session ends, several types of knowledge disappear:
Decisions made - We chose approach A over approach B. We know why. The AI knew why. Tomorrow, neither of us will remember without documentation.
Dead ends explored - We tried three things that didn't work before finding what did. Without a record, we might try those same three things again.
Implicit understanding - Over the course of a session, the AI builds a mental model of our code. File relationships, naming patterns, architectural quirks. All of it evaporates.
Next steps identified - We know exactly what to do next. We can see it clearly. In 24 hours, that clarity fades.
## The Five-Minute Capture
Before closing a productive session, we spend five minutes on capture. Not documentation, but capture. The distinction matters.
Documentation is comprehensive, polished, intended for others. Capture is quick, rough, intended for future-us and future-AI.
What we capture:
1. What changed

```
## Session: 2025-12-17
- Refactored `api_collector.py` to use retry logic
- Added `RetryConfig` dataclass in `config/retry.py`
- Fixed edge case where empty responses caused a crash
```
2. Why it changed

```
Retry logic needed because API returns 429 during peak hours.
Chose exponential backoff over linear because [reason].
Empty response fix required after production logs revealed silent failures.
```
3. What's next

```
Next session:
- Add unit tests for retry logic
- Implement circuit breaker pattern (see notes in retry.py comments)
- Check if rate limiting affects batch endpoints
```
4. What we tried that didn't work

```
Rejected approaches:
- Simple retry counter: didn't handle varying error types
- requests-retry library: incompatible with our async pattern
```
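The retry decision recorded in those notes can be sketched in code. This is an illustration only: the `RetryConfig` name comes from the session notes above, but its fields, defaults, and the helper functions here are assumptions, not the project's actual implementation.

```python
import random
import time
from dataclasses import dataclass


@dataclass
class RetryConfig:
    """Tunable knobs for the backoff policy (fields are illustrative)."""
    max_attempts: int = 5
    base_delay: float = 0.5   # seconds before the first retry
    max_delay: float = 30.0   # cap on any single wait


def backoff_delay(attempt: int, config: RetryConfig) -> float:
    """Exponential backoff with jitter: base * 2^attempt, capped, randomized."""
    capped = min(config.base_delay * (2 ** attempt), config.max_delay)
    return random.uniform(0, capped)


def fetch_with_retry(fetch, config: RetryConfig = RetryConfig()):
    """Call `fetch()` until it succeeds or attempts run out.

    `fetch` should raise on failure (e.g. an HTTP 429) and return a
    response otherwise. The last exception is re-raised when we give up.
    """
    for attempt in range(config.max_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == config.max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt, config))
```

The jitter in `backoff_delay` is the kind of non-obvious choice worth a capture note: without it, many clients that hit a 429 at the same moment retry at the same moment too.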
## Where to Put It
We've tried several approaches:
Append to CLAUDE.md - Works for the current state, but the file bloats over time.
Dedicated session log - A `docs/session-log.md` with dated entries. Good for history, but the AI doesn't automatically read it.
Hybrid approach - What we've settled on:
- Update CLAUDE.md "Current State" section (AI reads this)
- Append detailed notes to session log (for human reference)
- Add inline comments for decisions that should live with the code
Different types of context belong in different places. What the AI needs tomorrow goes in CLAUDE.md. What we might need in a month goes in the session log. What future developers need goes in code comments.
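As a rough sketch of the hybrid approach, a small helper could route one summary to both destinations. The file paths, the `## Current State` heading, and the function name are all assumptions for illustration, not a prescribed layout:

```python
from datetime import date
from pathlib import Path

MARKER = "## Current State"  # assumed heading inside CLAUDE.md


def capture_session(summary: str, repo: Path = Path(".")) -> None:
    """Write one session's notes to both homes.

    Replaces everything under the "Current State" heading in CLAUDE.md
    (the part the AI reads) and appends a dated entry to
    docs/session-log.md (the human-readable history).
    """
    entry = f"## Session: {date.today().isoformat()}\n{summary}\n"

    # 1. Append to the running session log, creating it if needed.
    log = repo / "docs" / "session-log.md"
    log.parent.mkdir(parents=True, exist_ok=True)
    with log.open("a") as f:
        f.write("\n" + entry)

    # 2. Rewrite the Current State section of CLAUDE.md, keeping
    #    everything above the marker intact.
    claude = repo / "CLAUDE.md"
    text = claude.read_text() if claude.exists() else ""
    head = text.split(MARKER)[0]
    claude.write_text(head + MARKER + "\n\n" + summary + "\n")
```

Inline comments for decisions that should live with the code still happen by hand; a script can't know which choices deserve them.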
## The Session Summary Prompt
When we're ready to end a session, we ask the AI to help capture:
"Before we end, summarize what we accomplished, what decisions we made and why, what we tried that didn't work, and what the next steps are. Format it for the CLAUDE.md current state section."
The AI has perfect recall of the session. It can extract the key points faster than we can. We review, edit if needed, and paste into the appropriate places.
This takes two minutes and saves twenty minutes tomorrow.
## Signs We Skipped Capture
We know we've been sloppy about session hygiene when:
- We start a session with "what were we working on yesterday?"
- We re-implement something we already built
- We re-explore a dead end we already ruled out
- We can't remember why we made a particular decision
Each of these costs more time than capture would have taken. The math is clear: five minutes of capture beats twenty minutes of reconstruction.
## The Commit Checkpoint
A natural capture point: right before committing code.
If we're about to commit, we've presumably finished something. That's the moment to ask:
- What did we just complete?
- What decisions went into it?
- What's the next logical step?
Some teams build this into their workflow with pre-commit hooks or commit message templates. We prefer a manual prompt: it forces us to think rather than fill in blanks mechanically.
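For teams that do want the automated version, Git's `prepare-commit-msg` hook is one place to put those questions. This sketch appends them as `#` comments, which Git strips from the final message; it illustrates the template approach, not the manual prompt we prefer, and the wording is our own:

```python
#!/usr/bin/env python3
"""Sketch of a prepare-commit-msg hook that injects capture prompts.

Git invokes this hook with the path to the commit-message file as the
first argument; lines starting with '#' are removed from the message
when the commit is finalized.
"""
import sys

CHECKPOINT = """
# Capture checkpoint (answer in your notes, then delete):
#   What did we just complete?
#   What decisions went into it?
#   What's the next logical step?
"""


def add_checkpoint(msg_path: str) -> None:
    # Append the prompts below whatever message Git has prepared so far.
    with open(msg_path, "a") as f:
        f.write(CHECKPOINT)


if __name__ == "__main__" and len(sys.argv) > 1:
    add_checkpoint(sys.argv[1])
```

To try it, the script would live at `.git/hooks/prepare-commit-msg` and be marked executable.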
## What Not to Capture
Capture isn't about recording everything. It's about recording what's not obvious from the code itself.
Don't capture:
- Changes that are self-evident from the diff
- Decisions that are documented in code comments
- Standard patterns that don't need explanation
Do capture:
- The "why" behind non-obvious choices
- Context that influenced decisions (deadlines, constraints, trade-offs)
- Plans that exist only in our heads
- Approaches we rejected and reasons
The test: if we read just the code, would we understand this? If yes, don't capture. If no, capture.
## Building the Habit
End-of-session capture doesn't happen naturally. We're in flow, we're tired, we want to close the laptop. The last thing we want is more typing.
What helps:
Time-box it. Five minutes maximum. If capture takes longer, we're over-documenting.
Make it a ritual. Same prompt every time. Same location for notes. Same quick review of CLAUDE.md.
Do it before the mental shift. Once we've decided we're done, we've already started forgetting. Capture happens while we're still in the work, not after we've mentally checked out.
Let the AI draft. The summary prompt takes thirty seconds to type. The AI does the heavy lifting. We just review.
## The Payoff
Good session hygiene compounds over time.
After a week, we have a record of progress and decisions. After a month, we have a searchable history of why the code looks the way it does. After six months, we can hand the project to someone else (or our future forgetful selves) with confidence.
The context that felt so clear in the moment? It would have been gone by morning. Instead, it's preserved, ready to warm-start the next session.
Five minutes of capture. Twenty minutes saved tomorrow. And a project history that actually explains what happened.
This is part of our series on AI-assisted research workflows. Next: Context Window Budgeting, treating tokens as a finite resource.