Beginner Tier - Article 1 of 3

Your First Session: What Claude Code Is and Isn't

A practical walkthrough of our first real session: what to expect, how to prompt effectively, and what Claude Code can and cannot do.

Prerequisites

Before diving in, we need a few things in place:

  • An active Claude subscription that includes Claude Code (the CLI tool, distinct from the Claude.ai web interface)
  • Basic terminal familiarity: we can open a terminal and type commands
  • A research project with code files (Stata do-files, R scripts, or Python analysis code)

We do not need to be terminal power users. If we can navigate to a folder and run a command, we have enough.

Claude Code is a command-line AI assistant that lives in our terminal and works directly with our files. Unlike the Claude.ai web interface, it can read, write, and execute code in our actual projects. This article walks through our first real session: what to expect, how to prompt effectively, and what Claude Code can and cannot do. By the end, we will have completed a small task from start to finish.


1. What Is Claude Code?

Claude Code is a command-line interface to Claude that operates on our filesystem. It runs in our terminal alongside our other research tools: Stata, R, Python, git, whatever we use. The key difference from the Claude.ai web interface is that Claude Code works with our analysis code, rather than discussing code abstractly.

Here is what that means in practice:

  • It can read files. Point it at a Stata do-file, an R script, or a data documentation file, and it will read the contents and understand the structure.
  • It can write files. Ask it to add variable labels, fix a merge issue, or create a new cleaning script, and it will make the actual changes.
  • It can run commands. Need to check variable distributions, run a regression, or verify file paths? It can execute terminal commands on our behalf.

This is fundamentally different from copying code into a chat window and asking for advice. Claude Code operates in our project directory, sees our actual file structure, and makes real modifications.

That said, it is still an AI assistant. It has no persistent memory between sessions. When we close Claude Code and reopen it tomorrow, it will not remember today's conversation. We will explore why this matters (and what to do about it) in Article B2.


2. The Terminal Interface

Let us walk through what a session looks like.

Launching Claude Code

To start, we open our terminal and navigate to our project directory:

cd ~/projects/snap-enrollment-analysis
claude

That claude command launches Claude Code in the current directory. We will see a prompt waiting for our input.

The Conversation Interface

The interface is simple: we type a message, press enter, and Claude responds. It looks like a chat, but the conversation happens in the terminal:

> What files are in this directory?

Claude: Let me check...
[Reading directory]

I can see the following files:
- 01_clean_acs.do (ACS microdata cleaning)
- 02_merge_snap.do (SNAP enrollment merge)
- data/raw/ (directory with Census extracts)
...

Understanding Tool Use

As Claude works, it shows us what tools it is using. These indicators tell us what is happening:

  • [Reading file] - Claude is looking at a file's contents
  • [Writing file] - Claude is about to modify or create a file
  • [Running command] - Claude is executing a terminal command

This transparency matters. We can see exactly what Claude is doing before changes happen.

Exiting a Session

To end a session, we can type /exit or press Ctrl+C twice. The session closes, and any context Claude had is lost. Tomorrow's session starts fresh.


3. Basic Prompting Patterns

The way we ask determines the quality of what we get. Here are the patterns that work.

Be Specific

Vague prompts get vague results. Compare these:

# Vague
> Tell me about my code

# Specific
> Read 01_clean_acs.do and explain what the poverty variable construction does

The specific prompt tells Claude exactly which file to read and what to explain. We get a focused answer instead of a general overview.

Give Context

Claude does not know our project until we tell it. A bit of context goes a long way:

> We're analyzing SNAP (Supplemental Nutrition Assistance Program)
> enrollment using ACS (American Community Survey) microdata.
> The 01_clean_acs.do file constructs household-level poverty measures.
> Can you read it and check if the poverty threshold logic handles
> household size correctly?

Now Claude knows the purpose of the project and what to look for.
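To make the request concrete, here is a minimal Python sketch of the kind of household-size logic such a check targets. Everything here is illustrative: the poverty_ratio helper is hypothetical, and the threshold numbers are made up, not official Census figures or the actual do-file's logic.

```python
# Hypothetical sketch of household-size-adjusted poverty logic.
# Threshold values are illustrative only, NOT official Census figures.
POVERTY_THRESHOLDS = {1: 14_580, 2: 19_720, 3: 24_860, 4: 30_000}

def poverty_ratio(hh_income: float, hh_size: int) -> float:
    """Ratio of household income to the size-specific poverty line."""
    # Cap household size at the largest size we have a threshold for.
    threshold = POVERTY_THRESHOLDS[min(hh_size, max(POVERTY_THRESHOLDS))]
    return hh_income / threshold
```

A bug in exactly this kind of lookup (using a single threshold for all household sizes, or failing to cap large households) is what the prompt above asks Claude to look for.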

Ask for What We Want

The difference between asking and telling matters. Here is the asking version:

> Is there a way to add variable labels to this dataset?

Here is the telling version:

> Add Stata variable labels to the income and poverty variables
> using Census documentation standards

The second prompt is actionable. Claude knows exactly what to do.

Iterate

Our first prompt rarely produces exactly what we need. Iteration is normal:

> Add comments explaining the poverty threshold calculation

Claude: [Adds basic comments]

> That's close, but include the Census source year and note that
> thresholds are household-size adjusted

Claude: [Revises with the correct details]

Think of it as a conversation. We refine until we get what we need.

Using AI to Learn Prompting

When we are unsure how to approach a task, we can use AI to help us understand the task itself. Different AI tools structure problems differently, and we can use this to our advantage.

Perplexity excels at mapping the territory:

> What are the considerations when building a difference-in-differences
> event study? What decisions need to be made?

This gives us the landscape of the problem—the questions we should be asking.

Then we can ask Claude how to approach it:

> Given these considerations, how should I prompt you to help me
> build this event study?

Claude knows its own capabilities and constraints. Using both in sequence—Perplexity to understand what matters, Claude to execute—often produces better results than using either alone.

Why "Make It Good" Is Not a Prompt

When we ask AI to produce something "good" or "beautiful" without specifying what that means, we are not delegating a task. We are surrendering our judgment.

Consider slides. There is no universal agreement on what makes the best slides for a topic. It depends on the audience (experts or newcomers?), the purpose (persuade, inform, or provoke discussion?), and the content (data-heavy or conceptual?). The confluence of these factors determines what "good" means in any specific case.

When we prompt "create beautiful slides on difference-in-differences" and accept what comes back, we are not evaluating whether the output serves our purpose. We are accepting that it matches our vague, undefined expectation of "beautiful." The AI has not succeeded at a task. It has reflected our bias back to us, and we have mistaken that reflection for quality.

This matters because even without memory across sessions, there is still an abstraction of us—the user—that shapes outputs. Our prompting patterns, our acceptance thresholds, our implicit preferences all influence what the AI produces. When we consistently accept vague outputs, we train ourselves (not the model) to expect less.

The claim that AI produces excellent slides with minimal prompting holds only under narrow conditions: the presenter's opinion is the only thing that matters, and they only ever speak with one purpose to one audience. For the rest of us—who present to different audiences, for different purposes, with different constraints—the slide is not the deliverable. The thinking about what should be on the slide is the deliverable.

This is where AI actually shines. Not in producing final outputs from vague prompts, but in building workflows that help us think more deeply about what should be in a presentation. Who is the audience? What do they already know? What decision are we trying to support? AI can help us systematically answer these questions and generate tailored deliverables for different contexts. That is the compound value. "Make it beautiful" is not.


4. What Claude Code Can Do

Let us be concrete about capabilities.

Read and Understand Code

Claude can read files in any programming language and explain what they do. Stata, R, Python, SAS, even obscure data formats: if it is text, Claude can read it.

> Read the entire analysis/ directory and summarize what each do-file does

Write and Modify Code

Claude can create new files and modify existing ones:

> Create a new file called codebook_income.md documenting all income
> variables with their definitions and valid ranges

Or for modifications:

> In 02_merge_snap.do, add a check that verifies the merge was 1:1
> on household ID
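In Python terms, the guarantee being requested is what pandas calls merge validation. A self-contained sketch, with hypothetical frames keyed on hh_id standing in for the real household ID:

```python
import pandas as pd

# Hypothetical household-level frames; hh_id stands in for the real household ID.
acs = pd.DataFrame({"hh_id": [1, 2, 3], "hh_income": [30_000, 52_000, 18_500]})
snap = pd.DataFrame({"hh_id": [1, 2, 3], "snap_enrolled": [1, 0, 1]})

# validate="1:1" raises MergeError if either side has duplicate keys,
# which is the same invariant the Stata check would assert.
merged = acs.merge(snap, on="hh_id", how="inner", validate="1:1")
```

In Stata the equivalent guarantee comes from the 1:1 form of the merge command itself; the point of the prompt is to make Claude add an explicit check rather than assume the merge behaved.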

Run Terminal Commands

With permission, Claude can execute commands:

> Run the Stata command "tab poverty_status" and tell me the distribution

Claude will execute the command and summarize the results.

Search Across the Codebase

Claude can search for patterns, find variable definitions, and locate usages:

> Find all files that use the variable hh_income

It uses tools like grep and glob patterns to explore our codebase efficiently.
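Conceptually, that search is just a recursive text scan. Here is a plain-Python stand-in for grep; the find_usages helper and its file-extension list are hypothetical, not Claude's actual tooling:

```python
from pathlib import Path

def find_usages(root: str, pattern: str, suffixes=(".do", ".R", ".py")) -> list[str]:
    """Return files under root whose text mentions pattern."""
    hits = []
    for path in Path(root).rglob("*"):
        # Only scan regular files with a code-like extension.
        if path.is_file() and path.suffix in suffixes:
            if pattern in path.read_text(errors="ignore"):
                hits.append(str(path))
    return sorted(hits)
```

Claude's real search tools are faster and regex-aware, but the effect is the same: locate every file that touches hh_income so we can see each usage in context.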

Generate Documentation and Replication Materials

> Generate a codebook for all variables in the final analysis dataset
> Create a README that explains how to run this replication package
> from raw data to final tables

Explain and Suggest Refactors

> This do-file is 400 lines. Suggest how to break it into logical modules.

Claude can analyze code structure and propose improvements.


5. What Claude Code Cannot Do

Understanding limitations saves frustration.

Remember Previous Sessions

This is the big one. When we close Claude Code and reopen it, the slate is wiped clean. No memory of what we discussed, what decisions we made, or what files we edited. Every session starts from zero.

This is not a bug. It is how large language models work. We will cover this in depth in Article B2 and learn practices to work around it.

Access the Internet

By default, Claude Code cannot fetch URLs, call APIs (Application Programming Interfaces), or access web content. It works with local files only. (This limitation can be extended with MCP—Model Context Protocol—servers, covered in the Advanced tier.)

Run Indefinitely

Claude operates within a context window: a fixed amount of space for our conversation. As we work, that space fills up. Very long sessions eventually hit this limit, and older context gets pushed out. Large files consume substantial portions of this space.

Guarantee Correctness

Claude can write code that looks right but is not. The regression syntax might be wrong. Variable names might not match the dataset. Edge cases in data cleaning might be missed.

Verification is our job. We should always review changes, run the code, and check that the output matches our expectations. Never assume correctness.

Access Restricted Systems

Claude Code can only access what we can access. If a file requires elevated permissions, Claude will not be able to read it. If data requires secure access credentials, we have to provide them.

Read Minds

Claude does not know what we want until we say it. Being explicit is not optional. Vague prompts produce vague results. The more precise our instructions, the better the output.


6. Our First Task: A Complete Walkthrough

Let us put this into practice. We will add data documentation to a research project.

Starting with a Simple Goal

Our goal: create a variable codebook for the income and poverty variables in our analysis. Nothing complex, but enough to see the full workflow.

Giving Context

We start the session with context:

> I'm working on a SNAP enrollment analysis using ACS microdata.
> I need to create documentation for the income and poverty variables
> we constructed in the cleaning stage.
> The main cleaning script is code/01_clean_acs.do

Claude now understands the project type, the requirement, and where the code lives.

Making the Request

Now we make a specific request:

> Read code/01_clean_acs.do, then create a file called docs/codebook_poverty.md
> that documents each poverty-related variable: its name, definition,
> valid range, and the Census source used in construction.

This prompt is actionable. Claude knows:

  • Which file to read
  • What file to create
  • What information to include
  • The level of detail needed

Reviewing the Output

Claude will read the do-file and propose creating something like:

# Poverty Variable Codebook

## hh_income
- **Definition:** Total household income, sum of all income sources
- **Valid range:** 0 to 9,999,999
- **Source:** ACS HINCP variable, 2019-2023 5-year estimates
- **Notes:** Top-coded at state-specific thresholds

## poverty_ratio
- **Definition:** Ratio of household income to federal poverty line
- **Valid range:** 0.00 to 10.00 (capped)
- **Source:** Constructed from hh_income / poverty_threshold
- **Notes:** Accounts for household size and composition

## poverty_status
- **Definition:** Categorical poverty indicator
- **Valid values:** 1=Deep poverty (<50% FPL), 2=Poverty (50-100% FPL),
  3=Near-poverty (100-150% FPL), 4=Above (>150% FPL)
- **Source:** Derived from poverty_ratio

We review: Does this match what we wanted? The structure looks right. The variable definitions are accurate. The source documentation is included.
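The categorical coding shown for poverty_status maps to a simple threshold function. A hypothetical Python restatement of that mapping (the real construction lives in the Stata do-file):

```python
# Hypothetical restatement of the codebook's categorical coding.
def poverty_status(ratio: float) -> int:
    """1=Deep poverty (<50% FPL), 2=Poverty, 3=Near-poverty, 4=Above 150% FPL."""
    if ratio < 0.5:
        return 1
    elif ratio < 1.0:
        return 2
    elif ratio < 1.5:
        return 3
    return 4
```

Restating derived variables this way is a useful review habit: if we cannot reproduce the coding from the codebook alone, the documentation is incomplete.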

Iterating

Let us say we want to add information about missing values. We iterate:

> Good, but also add a section for each variable showing the count
> and percentage of missing values in the final sample.

Claude will revise the codebook to include missingness statistics.
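Per-variable missingness is just a count and a percentage. A pandas sketch on a made-up toy sample shows the computation being asked for:

```python
import pandas as pd

# Made-up toy sample; NaN marks missing values.
df = pd.DataFrame({
    "hh_income": [30_000, None, 52_000, 18_500],
    "poverty_ratio": [1.2, None, None, 0.7],
})

# One row per variable: count and percentage of missing observations.
missing = pd.DataFrame({
    "n_missing": df.isna().sum(),
    "pct_missing": (df.isna().mean() * 100).round(1),
})
```

For the toy sample above, hh_income comes out at 1 missing (25.0%) and poverty_ratio at 2 missing (50.0%); Claude would run the equivalent summary in Stata and fold the numbers into the codebook.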

Committing the Change

Once we are satisfied, we can ask Claude to commit:

> Commit this change with the message "Add poverty variable codebook"

Claude will stage the file and create the commit.

We have just completed our first task: context, request, review, iterate, commit.


7. What Comes Next

We have covered the basics: what Claude Code is, how to prompt effectively, and what it can and cannot do. This foundation prepares us for two challenges that emerge as we use Claude Code more seriously.

Article B2: Why It Forgot Everything explores the memory limitation. Every session starts from scratch. This feels frustrating at first. Understanding why it happens, and how to work with it, transforms our workflow.

Article B3: Reading Our Codebase examines how Claude explores files. It does not magically know our code. It has to read files, search for patterns, and build understanding. Learning the tools Claude uses helps us ask better questions.

From there, the Intermediate tier covers session hygiene, context budgeting, and verification practices. These are the skills that take us from occasional Claude Code user to effective AI research collaborator.

But first, we practice. Open Claude Code in a project. Try the exercises below. Get comfortable with the prompt-response-iterate cycle. Everything else builds from here.


Practical Exercises

  1. Launch and explore: Open Claude Code in a research project directory. Ask it to list the files and explain the project structure. Notice how it uses the Read tool.
  2. Read and explain: Point Claude at a specific do-file or R script and ask for an explanation. Compare its explanation to our own understanding. Where does it match? Where does it miss context?
  3. Simple edit: Ask Claude to add comments or documentation to a data cleaning step. Review the proposed change before accepting it. Does it capture the logic correctly?
  4. First iteration: If the edit is not quite right, refine the request and try again. Practice the feedback loop.
  5. Exit and restart: Close the session, reopen Claude Code, and try "Continue where we left off." Observe what happens. (Spoiler: it will not remember.)

Key Takeaways

  • Claude Code is a command-line interface that works directly with our files, rather than discussing code in a chat window.
  • Effective prompting requires specificity, context, and iteration.
  • Claude can read, write, search, and execute. It cannot remember, access the internet, or guarantee correctness.
  • Verification is always our responsibility.
  • Every session starts from scratch. Understanding this limitation is step one.

This article is part of the Claude Code Guide, a series teaching effective AI-assisted research from first principles. The Beginner tier covers fundamentals. The Intermediate tier addresses session management and context discipline. The Advanced tier builds personal AI infrastructure.

Suggested Citation

Cholette, V. (2026, February 4). Your first session: What Claude Code is and isn't. Too Early To Say. https://tooearlytosay.com/research/methodology/first-session-claude-code/