"Help me with this analysis."
It's a reasonable request. But it's also why so many AI-assisted research sessions go sideways. The AI doesn't know if we want to explore possibilities, implement a solution, or document what we've done.
These are different tasks. They need different prompts.
Three Phases, Three Modes
Research work moves through phases. Each phase has its own goal, its own kind of thinking, and its own ideal AI interaction:
Exploration: We don't know what we're doing yet. We're reading, searching, trying things, building understanding. The goal is to find direction.
Implementation: We know what we're doing. We're building, coding, executing the plan. The goal is to make it real.
Documentation: We've done the work. Now we need to explain it: to others, to reviewers, to our future selves. The goal is clarity.
The prompts that work for exploration actively hurt implementation. The prompts that work for implementation produce terrible documentation. Matching prompt style to phase makes everything work better.
Exploration Prompts
In exploration, we're not looking for answers. We're looking for questions, options, and understanding.
What works:
- "What approaches exist for [problem]?"
- "What are the trade-offs between [option A] and [option B]?"
- "Help me understand how [concept] works"
- "What might go wrong with [approach]?"
- "What am I not considering?"
What doesn't work:
- "Implement [solution]" (We don't know what solution yet)
- "Write the code for [feature]" (We haven't decided on the approach)
- "Fix this error" (We're not at the fixing stage)
Exploration prompts are open-ended. They invite the AI to expand our thinking, not narrow it.
Example exploration session:
"We need to classify 25,000 grocery stores by quality tier without visiting them. What data sources might help? What classification approaches should we consider?"
The AI might suggest satellite imagery, business databases, delivery platforms, name-based heuristics, chain matching, or proximity analysis. Some ideas we'll reject. Some we'll explore further. The point is generating options, not selecting one.
Implementation Prompts
In implementation, we've decided what to do. Now we need to do it efficiently and correctly.
What works:
- "Implement [specific function] that does [specific thing]"
- "Refactor [file] to use [pattern]"
- "Add error handling for [specific case]"
- "Write tests for [specific behavior]"
What doesn't work:
- "What's the best way to..." (That's exploration, not implementation)
- "Should we use X or Y?" (Decide first, then implement)
- "Implement a solution" (Too vague, will produce generic code)
Implementation prompts are specific. They tell the AI exactly what success looks like.
Example implementation prompt:
"In src/classifiers/chain_matcher.py, add a function match_chain(store_name: str) -> Optional[str] that checks the store name against our chain database in data/chains.csv. Return the canonical chain name if matched, None otherwise. Use fuzzy matching with a threshold of 0.85."
This prompt specifies: the file, the function name, the signature, the data source, the matching approach, and the threshold. The AI can't wander because we've defined the boundaries.
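A prompt this specific tends to produce code in roughly the following shape. This is a hypothetical sketch, not the project's actual implementation: the chain list would normally be loaded from data/chains.csv, but a small in-memory placeholder stands in here, and the standard library's difflib ratio stands in for whatever fuzzy-matching library the project actually uses.

```python
from difflib import SequenceMatcher
from typing import Optional

# Placeholder for canonical names loaded from data/chains.csv
KNOWN_CHAINS = ["Kroger", "Safeway", "Whole Foods Market", "Trader Joe's"]

def match_chain(store_name: str, threshold: float = 0.85) -> Optional[str]:
    """Return the canonical chain name if the store name is a fuzzy match."""
    normalized = store_name.strip().lower()
    best_name, best_score = None, 0.0
    for chain in KNOWN_CHAINS:
        # Similarity in [0, 1]; 1.0 is an exact match after normalization
        score = SequenceMatcher(None, normalized, chain.lower()).ratio()
        if score > best_score:
            best_name, best_score = chain, score
    return best_name if best_score >= threshold else None
```

Note how each element of the prompt surfaces directly in the sketch: the signature, the threshold as an explicit parameter, the None-on-no-match contract. When the prompt leaves one of these out, the AI has to guess it.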
Documentation Prompts
Documentation is neither exploration nor implementation. It's translation: taking what we did and making it understandable.
What works:
- "Explain what this code does for someone unfamiliar with the project"
- "Document the decision to use [approach] and why we rejected [alternatives]"
- "Write a methods section describing [analysis]"
- "Summarize the key findings for [audience]"
What doesn't work:
- "Document the code" (Too vague, produces generic boilerplate)
- "Add comments" (Which comments? Where? Saying what?)
- "Explain this" (Explain to whom? For what purpose?)
Documentation prompts specify the audience and purpose. Technical documentation for maintainers differs from methods sections for reviewers, which differ from summaries for stakeholders.
Example documentation prompt:
"Write a methods section for a paper. Audience: applied economics journal readers. Explain the store classification algorithm, focusing on reproducibility. Include the chain matching threshold, the classification decision tree, and the validation approach. Use past tense, third person."
The prompt defines audience, purpose, content requirements, and style. The AI knows exactly what to produce.
Phase Transitions
The tricky moments are transitions between phases. How do we know when to shift?
Exploration to Implementation
We're ready to implement when we can answer:
- What approach are we using? (One approach, not several)
- What does success look like? (Specific, measurable)
- What are we explicitly not doing? (Scope is clear)
If we can't answer these, we're still exploring.
Implementation to Documentation
We're ready to document when:
- The code works (tests pass, outputs are correct)
- We've made the key decisions (no major pivots expected)
- We understand why it works (not just that it works)
If we're still changing the approach, we're still implementing.
Mixing Phases (Carefully)
Sometimes we need to mix phases. We're implementing but hit an unexpected problem that requires exploration. We're documenting but realize we need to implement a missing piece.
This is fine, but we should shift prompts consciously:
"I'm implementing the chain matcher, but I'm not sure how to handle stores with multiple names. Let's pause implementation and explore: what are our options for handling name variants?"
The explicit phase shift helps the AI switch modes. Without it, exploration answers might look like implementation (premature code) or implementation answers might look like exploration (endless options without commitment).
The Meta-Prompt
When we're not sure what phase we're in, the meta-prompt helps:
"I'm working on [problem]. I'm not sure if I should be exploring options, implementing a solution, or documenting what I've done. Based on what I've shared, which phase am I in, and what would be a good next prompt?"
This invites the AI to help us figure out where we are before diving into work that might be premature or misguided.
Phase-Appropriate Follow-ups
Follow-up questions should match the phase too.
Exploration follow-ups:
- "What else?"
- "What about [alternative]?"
- "What's the downside of [option]?"
- "Help me think through [aspect]"
Implementation follow-ups:
- "That's almost right, but [specific adjustment]"
- "Now add [specific feature]"
- "Refactor this to [specific change]"
- "Fix [specific error]"
Documentation follow-ups:
- "Make this clearer for [audience]"
- "Add more detail about [topic]"
- "This is too technical, simplify"
- "Connect this to [earlier section]"
The follow-ups reinforce the phase. Exploration follow-ups keep expanding. Implementation follow-ups keep refining. Documentation follow-ups keep clarifying.
Practical Recommendations
- Name the phase explicitly. "I'm in exploration mode" or "I'm ready to implement" sets expectations for both us and the AI.
- Match prompt specificity to phase. Exploration = open-ended. Implementation = specific. Documentation = audience-aware.
- Don't implement during exploration. If the AI generates code when we're still exploring, either we've prompted wrong or we're actually ready to implement.
- Don't explore during implementation. If we're second-guessing the approach, pause and return to exploration explicitly.
- Document when the work is stable. Documentation of moving targets is wasted effort.
- Use phase transitions as checkpoints. Before moving to the next phase, verify the current one is complete.
The right prompt in the wrong phase produces wrong outputs. Match the prompt to the phase, and the AI becomes a much more useful collaborator.
This is part of our series on AI-assisted research workflows. See also: The Cold Start Problem, End-of-Session Hygiene, Context Window Budgeting, and The Verification Tax.
Suggested Citation
Cholette, V. (2026, January 28). Research phases need different prompts. Too Early To Say. https://tooearlytosay.com/research/methodology/research-phases-prompts/