The Loop
At its heart, ralph is beautifully simple: it’s a loop. The same loop that Geoffrey Huntley described when he coined the technique:
“Ralph is a Bash loop.”
But this simplicity hides surprising power. Let’s understand why.
The Basic Loop
Here’s ralph in its purest form:
```bash
# `task_done` is whatever exit check you use (tests pass, marker file, etc.)
while ! task_done; do
  ai_agent "$PROMPT"
done
```

That’s it. Run the AI with a prompt. When it exits, check whether you’re done. If not, run it again with the same prompt.
The key insight: the prompt doesn’t change, but the codebase does.
How It Works
Each iteration:
- Fresh Context — The AI starts with an empty context window
- Read State — It reads your prompt, checks files, runs commands
- Do Work — Makes progress (writes code, fixes bugs, adds tests)
- Save State — Commits changes, persists learning and updates progress markers
- Exit — The AI session ends (either naturally or via timeout)
- Check Conditions — ralph evaluates if the task is complete
- Loop or Exit — Continue if not done, stop if complete
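The steps above can be sketched as a small wrapper script. This is a minimal sketch, not a definitive implementation: `ai_agent` is a stub standing in for whatever coding-agent CLI you use, and `DONE.md` is an assumed completion marker.

```shell
#!/usr/bin/env bash
# Sketch of the ralph loop. `ai_agent` is a stub standing in for a real
# coding-agent CLI; DONE.md is an assumed completion marker.
rm -f progress.log DONE.md          # start the demo from a clean slate

ai_agent() {                        # stub: pretend to do one unit of work
  echo "did some work" >> progress.log
  [ "$(wc -l < progress.log)" -ge 3 ] && touch DONE.md
}

PROMPT="Add unit tests. Check progress.log for what is already done."
MAX_ITERATIONS=50

for ((i = 1; i <= MAX_ITERATIONS; i++)); do
  ai_agent "$PROMPT"                # fresh context: read state, work, save, exit
  if [ -f DONE.md ]; then           # check the completion condition
    echo "done after $i iterations"
    break
  fi
done
```

Note that the script itself carries no state between iterations beyond the loop counter; everything the agent needs lives in the files it reads and writes.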
Why This Works
The Context Window Problem
AI models have a fixed context window—the amount of text they can “see” at once. As you interact with a model:
- Tool calls accumulate
- File contents pile up
- Conversation history grows
- Errors and corrections add noise
Eventually, the model is swimming in context. It forgets earlier instructions. It gets confused by old errors it already fixed. It starts making mistakes it didn’t make before.
The Reset Solution
ralph solves this by resetting context between iterations:
```
Iteration 1: Fresh model → reads files → does work → persists learning → exits
Iteration 2: Fresh model → reads updated files and learning → work → learn → exits
...
Iteration N: Fresh model → reads further updates and learning → finishes → exits
```

Each iteration, the model:
- Starts completely fresh
- Learns the current state from files and git history
- Has maximum cognitive capacity for the actual task
- Isn’t burdened by accumulated conversation baggage
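The reset is easy to demonstrate in shell terms: a subshell models one agent session, and only what it wrote to disk survives its exit.

```shell
# A subshell models one agent session: variables set inside it are the
# "conversation memory" and vanish on exit; files are the codebase and persist.
run_iteration() (                          # parentheses = subshell
  in_memory_note="only this session knows this"
  echo "persisted via file" > state.txt    # survives: it is on disk
)

run_iteration
cat state.txt                              # → persisted via file
echo "${in_memory_note:-gone}"             # → gone
```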
State Externalization
The magic trick is that state lives in the codebase, not the conversation.
The model doesn’t need to remember what it did. It can:
- Read git log to see recent changes
- Check test results to see what’s passing
- Look at TODO comments or progress files
- Examine the actual code it wrote
This is more reliable than conversation memory anyway. Files don’t hallucinate.
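Here is what that state-reading looks like in practice—a sketch built against a throwaway git repository so the commands have something to read; the file names and commit message are illustrative, not a required convention.

```shell
#!/usr/bin/env bash
# How a fresh iteration re-derives state from the repository instead of
# remembering it. Uses a throwaway repo; file names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "done: tested parse()" > progress.txt
mkdir src
echo "// TODO: test render()" > src/main.js
git add -A
git -c user.name=ralph -c user.email=ralph@example.com commit -qm "iteration 1"

git log --oneline -n 5      # what changed recently, and why
grep -rn "TODO" src/        # remaining work items
cat progress.txt            # explicit progress marker, if the prompt uses one
```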
The Prompt Paradox
Here’s what confuses people at first: the same prompt runs every iteration.
Won’t it just do the same thing over and over?
No, because the codebase changes. Consider this prompt:
```
Add unit tests for all untested functions in src/.
Check progress.txt for what's already done.
```

- Iteration 1: Finds 20 untested functions. Tests 5 of them.
- Iteration 2: Fresh model reads code. Finds 15 untested. Tests 5 more.
- Iteration 3: Finds 10 untested. Tests 5 more.
- …
- Iteration N: Finds 0 untested. Task complete.
The prompt is constant. The codebase evolves. The model adapts.
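A concrete sketch of why this converges: the check measures the codebase, and the measured number shrinks as iterations add tests. The file layout here (`src/foo.sh` tested by `tests/foo_test.sh`) is an assumption for illustration.

```shell
#!/usr/bin/env bash
# Sketch: progress is visible to every fresh iteration because the
# "work remaining" is computed from files, not remembered.
# Layout assumption: src/foo.sh is tested by tests/foo_test.sh.
set -e
rm -rf src tests
mkdir -p src tests
touch src/parse.sh src/render.sh src/io.sh   # three modules
touch tests/parse_test.sh                    # only one is tested so far

untested_count() {
  local n=0
  for f in src/*.sh; do
    local base; base=$(basename "${f%.sh}")
    [ -f "tests/${base}_test.sh" ] || n=$((n + 1))
  done
  echo "$n"
}

untested_count                               # → 2 (render.sh and io.sh)
touch tests/render_test.sh tests/io_test.sh  # a later iteration adds them
untested_count                               # → 0: exit condition satisfied
```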
Iteration Dynamics
Section titled “Iteration Dynamics”Early Iterations
In the first few iterations, the model is orienting:
- Understanding the codebase structure
- Setting up patterns and conventions
- Making big-picture progress
Middle Iterations
The bulk of the work happens here:
- Systematic progress through the task
- Building on established patterns
- Handling the “boring” repetitive work
Late Iterations
Final polish and edge cases:
- Fixing tests that were broken by earlier changes
- Handling corner cases
- Final cleanup
Why “Ralph Wiggum”?
The name perfectly captures the technique’s nature:
Deterministically stubborn — Same prompt, every time. No clever adaptation. Just persistence.
Surprisingly effective — Like Ralph occasionally saying something profound, this simple approach produces remarkable results.
Optimistic to a fault — It just keeps trying. Failed test? Loop again. Error? Loop again. Eventually, it works.
The humor isn’t accidental. There’s something delightfully absurd about a technique this simple being this powerful.
Common Misconceptions
”Won’t it loop forever?”
No. Exit conditions stop the loop:
- Tests pass
- No more errors
- Max iterations reached
- Completion marker found
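These conditions are typically combined into a single stop-check. A minimal sketch, assuming a `DONE.md` marker and a `tests_pass` stub standing in for your real test command:

```shell
#!/usr/bin/env bash
# Sketch of a stop-check combining the exit conditions above. `tests_pass`
# and the DONE.md marker are stand-ins for your real checks.
rm -f DONE.md                         # start the demo from a clean slate
runs=0
MAX_ITERATIONS=30

tests_pass() { [ "$runs" -ge 3 ]; }   # stub: suite goes green on run 3

should_stop() {
  [ -f DONE.md ] && return 0                      # completion marker found
  [ "$runs" -ge "$MAX_ITERATIONS" ] && return 0   # iteration budget spent
  tests_pass && return 0                          # nothing failing: done
  return 1                                        # keep looping
}

until should_stop; do
  runs=$((runs + 1))                  # one ai_agent invocation would go here
done
echo "stopped after $runs runs"       # → stopped after 3 runs
```

The max-iterations guard matters most: it bounds cost even when the other conditions never trigger.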
”Won’t it undo its own work?”
Rarely. The model reads what exists and continues from there. Git history provides additional context about what changed and why.
”Isn’t this wasteful?”
It’s actually efficient. Each iteration runs at peak performance. Compare that to a single long session where the model gets progressively worse.
”Can’t I just use a longer context window?”
Longer windows help, but degradation still occurs. The issue isn’t just length—it’s accumulated noise. Fresh context always wins.
Next Steps
- Context Windows — Deep dive into why resets matter
- Exit Conditions — Configure when to stop
- State Persistence — How state survives resets