AI Research Workflows

Guides for integrating Claude Code into academic research. From configuring context files to building a personal AI infrastructure that improves over time.

AI-Assisted Research: What Changes and What Stays the Same

The integration of AI into research workflows changes the speed of implementation, not the standard of evidence. A well-configured Claude Code session can translate a methodology paragraph into working code in minutes. It can refactor a sprawling research codebase into something maintainable. It can draft documentation while the analysis is still fresh.

What it cannot do is replace the researcher's judgment about what to measure, how to interpret results, or when to be skeptical of convenient findings. The domain expertise that takes years to develop remains essential. AI accelerates the translation of that expertise into artifacts: code, documentation, analysis pipelines.

The key artifact in this workflow is the CLAUDE.md file. It loads automatically when Claude Code starts a session, providing project requirements, variable definitions, data constraints, and methodological decisions in one place. Without it, every session starts from zero. With it, the AI reads the same context the researcher would provide to a new collaborator, and the conversation picks up where it left off.
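A minimal CLAUDE.md might look like the sketch below. The project name, file paths, variables, and decisions are invented for illustration; the structure is the point, not the specifics.

```markdown
# Project: County Health Outcomes (illustrative)

## Data
- Source: data/acs_2019_2023.csv — never modify in place; write derived
  files to data/derived/.
- `uninsured_rate` is a percentage (0–100), not a proportion.

## Methodological decisions
- Standard errors clustered at the county level.
- 2020 observations excluded (pandemic data-quality issues).

## Conventions
- Analysis scripts live in analysis/, one script per table or figure.
```

Each bullet saves one re-explanation per session: anything a new collaborator would need to know belongs here.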

These guides document the practices we developed over hundreds of research sessions. The verification tax, the cold start problem, context window budgeting, end-of-session hygiene: each addresses a specific failure mode that surfaces when AI assistance is integrated into work that demands accuracy. The goal is not to automate research, but to make the human-AI collaboration more reliable.

[1] Anthropic. "Claude Code overview." docs.anthropic.com/en/docs/claude-code/overview, accessed February 2026.
[2] Anthropic. "Claude Code best practices: CLAUDE.md." docs.anthropic.com/en/docs/claude-code/claude-md, accessed February 2026.

Agent-based vs. chat-based AI for research

| Capability | Claude Code (agent) | Chat-based AI |
| --- | --- | --- |
| Reads project files | Directly, across the full codebase | Only what is pasted in |
| Writes and edits files | Yes, in place | Copy-paste only |
| Persistent context | CLAUDE.md loads automatically [2] | Manual re-explanation each session |
| Runs shell commands | Yes (git, python, tests) | No |
| External data sources | MCP servers (FRED, Census, Scholar) | Web search only |
| Custom automation | Skills, hooks, sub-agents | None |
| Cost | $20/month (Pro) or usage-based [1] | $20/month (typical) |

Getting Started

Your First Session: What Claude Code Is and Isn't

A practical walkthrough of what Claude Code can and cannot do, with prompting patterns and a complete first-task example.

December 2025

Claude Code Guide: From First Session to Personal AI Infrastructure

A comprehensive 13-article guide to mastering Claude Code. Context management, session workflows, agent spawning, and building personal AI infrastructure.

January 2026

Why It Forgot Everything: Understanding Context

Understanding how AI context windows work, why sessions reset, and how to work with this fundamental limitation of large language models.

December 2025

Claude Code Guide Series


Creating Skills for Research

Skills are recipe cards for research tasks. Write the steps once, save them in a file, and Claude Code follows those instructions whenever needed.

January 2026

Creating Helpers: When to Delegate Work

When to create separate Claude Code helpers for focused work, how to design tasks that are easy to hand off, and patterns for running multiple helpers at once.

January 2026

Building Our Research System: Putting It All Together

How CLAUDE.md, skills, hooks, and MCP servers combine into a personal research system that becomes more valuable over time.

January 2026

Hooks: Automation Without Asking

Hooks are automatic triggers that run without asking, like auto-save but for research tasks. A power-user feature, entirely optional.

January 2026

Connecting Claude to Outside Services: FRED, Census, and Beyond

How to connect Claude Code to external data sources like FRED, Census, and Google Scholar, bringing integrated research workflows into natural conversation.

January 2026

Research Workflow

One Context File, Zero Re-Explanations

Stop re-explaining the project every AI session. A single CLAUDE.md file loads automatically. Write context once; the agent reads it every time.

October 2025

From Methods Paragraph to Working Pipeline

A well-written methodology section is almost executable code. The gap between describing a procedure and implementing it has narrowed with agent-based tools.

October 2025

Research Phases Need Different Prompts

Exploration, implementation, and documentation require different AI prompting strategies. Match the prompt to the phase.

November 2025

Question-First Data Tagging: Finding Forgotten Datasets

Tag data by the questions it can answer, not just what it contains. How question-first tagging turns dormant datasets into discoverable assets.

November 2025

Code Management

47 Scripts to 15: Cleaning a Research Codebase

Research codebases accumulate cruft. We used Claude to consolidate 47 scripts to 15, with counterfactual tests proving nothing broke.

November 2025
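A counterfactual test of this kind can be as simple as comparing pipeline outputs before and after a consolidation. A minimal sketch, with file names and tolerance chosen for illustration rather than taken from the case study:

```python
import csv
import math

def outputs_match(old_csv, new_csv, tol=1e-9):
    """Compare two CSV analysis outputs cell by cell.

    Numeric cells may differ by up to tol (floating-point noise from a
    refactor); non-numeric cells must match exactly.
    """
    with open(old_csv) as f1, open(new_csv) as f2:
        old_rows = list(csv.reader(f1))
        new_rows = list(csv.reader(f2))
    if len(old_rows) != len(new_rows):
        return False
    for r1, r2 in zip(old_rows, new_rows):
        if len(r1) != len(r2):
            return False
        for a, b in zip(r1, r2):
            try:
                if not math.isclose(float(a), float(b), abs_tol=tol):
                    return False
            except ValueError:
                if a != b:  # headers and text columns: exact match
                    return False
    return True
```

Run the old and new scripts on the same inputs, then assert `outputs_match(...)` before deleting anything.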

Reading Our Analysis Files: How Claude Sees Our Research Code

How Claude Code explores research projects using three core tools: Read (look at a file), Glob (find files by pattern), and Grep (search inside files).

November 2025
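The Read/Glob/Grep trio maps closely onto familiar standard-library operations. A rough Python analogy (the function names here are ours, not Claude Code's API):

```python
import glob
import re

def glob_files(pattern):
    # Glob: find files by pattern, e.g. "analysis/**/*.py"
    return glob.glob(pattern, recursive=True)

def grep(path, needle):
    # Grep: return the lines in one file that match a regex
    with open(path) as f:
        return [line.rstrip("\n") for line in f if re.search(needle, line)]

def read(path):
    # Read: return a file's full contents
    with open(path) as f:
        return f.read()
```

Glob narrows the search to candidate files, Grep narrows it to relevant lines, and Read is reserved for the few files worth loading whole.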

The Limits of Copy-Paste AI Coding

The difference between chatbot-based and agent-based coding is categorical, not incremental. What changes when AI reads the entire codebase.

October 2025

The Verification Tax: Every AI Output Needs Checking

Every AI output needs checking. Building verification into the workflow to catch hallucinations before they compound.

October 2025

Advanced Topics

Context Window Budgeting: Treating Tokens as a Finite Resource

Treating tokens as a finite resource, and knowing when to spawn agents versus working directly.

December 2025
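One way to budget before loading files into a session is the common rough heuristic of about four characters per token for English text. Both the heuristic and the budget figure below are assumptions for illustration, not Claude Code internals:

```python
def rough_token_estimate(text: str) -> int:
    # ~4 characters per token is a coarse rule of thumb for English.
    return max(1, len(text) // 4)

def fits_budget(paths, budget_tokens=150_000):
    """Estimate total tokens for a set of files and check them against a
    budget that leaves headroom for the conversation itself."""
    total = 0
    for p in paths:
        with open(p) as f:
            total += rough_token_estimate(f.read())
    return total, total <= budget_tokens
```

If the estimate blows the budget, that is the signal to summarize files, load excerpts, or hand the job to a sub-agent instead of reading everything directly.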

The Cold Start Problem: Why the First Five Minutes Matter Most

Why the first five minutes of an AI session matter most, and how CLAUDE.md solves the context problem.

December 2025

End-of-Session Hygiene: What to Capture Before Context Resets

What to capture before context resets, and how five minutes of capture saves twenty minutes tomorrow.

December 2025

Monitoring Government Data Portals

A case study in tracking California health data releases with Claude Code. Catch new releases without manual checking.

December 2025

Building a Literature Surveillance Skill

Automating academic paper discovery with Claude Code. Turn weekly manual searches across SSRN, NBER, and Google Scholar into a single command.

December 2025

Staging LinkedIn Posts with Browser Automation

A case study in form-filling workflows that keep humans in the loop. Browser automation handles navigation while the human retains final approval.

December 2025

Fix AI Data Visualization: Why Claude Fails (+ Solution)

AI writes matplotlib code but cannot see if labels overlap or legends clip. Antigravity prompts solve this.

December 2025

Reading Your Own Data: What Claude Code /insights Reveals

How to interpret the Claude Code /insights report at beginner and intermediate levels. Same data, different lessons.

January 2026

Key takeaways

  • AI changes speed, not standards. A well-configured Claude Code session translates methodology into working code in minutes, but domain expertise still determines what to build.
  • CLAUDE.md eliminates the cold start problem. One context file loads project requirements, variable definitions, and methodological decisions automatically each session.
  • Every AI output needs checking. The verification tax is real: code compiles and runs but may contain subtle errors that compound across a research project.
  • 24 guides from hundreds of sessions. Each addresses a specific failure mode: context resets, hallucinated citations, token budget exhaustion, and more.

Frequently Asked Questions

What is Claude Code?

Claude Code is a command-line AI tool from Anthropic that reads and writes files in a project directory. Unlike chat-based AI, it has direct access to the codebase and maintains context through CLAUDE.md files.

Do I need programming experience?

Basic command-line familiarity is helpful. The guides focus on configuring Claude Code through context files and workflow design, not on writing code from scratch.

What is the verification tax?

The time and effort required to check every AI output. Code that compiles and runs may still contain subtle errors. Our verification tax article explains how to build checking into the workflow.

Can AI replace domain expertise?

No. AI accelerates the translation of expertise into code, documentation, and analysis. The researcher still needs to understand what to measure, how to interpret results, and when to be skeptical.