
State Persistence

When ralph resets context, how does progress survive? This is the key question—and the answer is surprisingly elegant.

Each iteration starts fresh. The AI has no memory of previous iterations. So how does it know:

  • What work has been completed?
  • What patterns and conventions to follow?
  • What approaches failed?
  • What’s left to do?

State lives in the codebase, not in conversation. The AI reconstructs context by reading files.

State Externalization

Conversation Memory (lost on reset):

  ❌ “We decided to use this pattern…”
  ❌ “The user said they prefer…”
  ❌ “Earlier we tried X and it failed…”

File-Based State (persists always):

  src/auth.test.js → Pattern visible in code
  .plans/progress.txt → Explicit status log
  git log → History of changes
  .plans/prd.json → Feature requirements

ralph uses several channels to persist state:

Git History

Every commit tells a story. The AI reads recent commits to understand what changed and why.

Code Itself

Patterns, conventions, and completed work are visible in the codebase. The AI reads and follows existing code.

Progress Files

Explicit state files (progress.txt, prd.json) track what’s done and what remains.

Test Results

Running tests shows current status. Passing tests = completed work. Failing tests = work remaining.

Git is ralph’s most powerful state channel.

# Recent changes
git log --oneline -10

# What files changed
git diff HEAD~3

# Full history of a file
git log -p src/auth.test.js

# Uncommitted work
git status

Each iteration, the AI can read git history to understand:

  • What has been done (committed files)
  • What was attempted (commit messages)
  • What’s in progress (uncommitted changes)
  • The evolution of decisions (diff history)
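The uncommitted-work channel can be made concrete. Below is a minimal sketch that classifies `git status --porcelain` output so an iteration can see what the previous one left behind; the porcelain v1 format is standard git, but the helper name and grouping are illustrative, not part of ralph:

```javascript
// Classify `git status --porcelain` output into staged, modified, and
// untracked files. Each line is "XY <path>": X is the index (staged)
// status, Y is the worktree status, "??" means untracked.
function classifyPorcelain(output) {
  const result = { staged: [], modified: [], untracked: [] };
  for (const line of output.split("\n")) {
    if (!line.trim()) continue;
    const x = line[0];
    const y = line[1];
    const file = line.slice(3);
    if (x === "?" && y === "?") {
      result.untracked.push(file);
    } else {
      if (x !== " ") result.staged.push(file);
      if (y !== " ") result.modified.push(file);
    }
  }
  return result;
}

const state = classifyPorcelain(
  "M  src/auth.test.js\n M src/user.js\n?? .plans/progress.txt\n"
);
// state.staged    → ["src/auth.test.js"]
// state.modified  → ["src/user.js"]
// state.untracked → [".plans/progress.txt"]
```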

Good commit messages are state documents:

# Bad - no useful state
git commit -m "Updates"

# Good - captures state
git commit -m "Add tests for UserAuth module

- Added 5 test cases covering login, logout, refresh
- Edge cases: expired tokens, invalid credentials
- Remaining: SessionManager, TokenService modules"

ralph creates .plans/progress.txt to track learning across iterations.

# Progress Log

## Completed
- UserAuth module tests (5 tests)
- SessionManager tests (3 tests)

## In Progress
- TokenService tests

## Remaining
- OAuth integration tests
- SAML provider tests

## Learnings
- Using Jest with async/await pattern
- Each module follows AAA pattern (Arrange, Act, Assert)
- Mock external services, don't call real APIs
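Because the log has a predictable shape, an iteration can read it mechanically. Here is a hedged sketch that parses the headings above into sections; the helper is hypothetical, not ralph's actual code:

```javascript
// Parse a progress log into { sectionName: [items] }. Sections are
// "## " headings; items are "- " bullets under the current section.
function parseProgress(text) {
  const sections = {};
  let current = null;
  for (const line of text.split("\n")) {
    const heading = line.match(/^## (.+)/);
    if (heading) {
      current = heading[1];
      sections[current] = [];
    } else if (current && line.startsWith("- ")) {
      sections[current].push(line.slice(2));
    }
  }
  return sections;
}

const log = parseProgress(
  "# Progress Log\n## Completed\n- UserAuth module tests\n## Remaining\n- OAuth integration tests\n"
);
// log.Completed → ["UserAuth module tests"]
// log.Remaining → ["OAuth integration tests"]
```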

In your prompt:

Track your progress in progress.txt:
1. Note completed items
2. Add learnings about patterns and decisions
3. Update after each significant milestone

The most reliable state is the code itself. The AI:

  1. Reads existing code to understand patterns
  2. Follows conventions it observes
  3. Continues from where code stops

If the AI is adding tests and finds:

src/auth.test.js
describe('UserAuth', () => {
  it('should login with valid credentials', async () => {
    // ... test code
  });

  it('should reject invalid passwords', async () => {
    // ... test code
  });
});

It knows:

  • Tests use Jest (describe, it)
  • Tests are async
  • The pattern to follow
  • UserAuth is already tested

Next iteration, it looks for modules without .test.js files and continues.
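That lookup can be sketched as a pure function over a file listing. The name `untestedModules` and the `<module>.test.js` convention follow the example above; ralph's real selection logic may differ:

```javascript
// Given a list of source files, return modules that have no matching
// .test.js file — i.e. where the next iteration should continue.
function untestedModules(files) {
  const tested = new Set(
    files
      .filter((f) => f.endsWith(".test.js"))
      .map((f) => f.replace(/\.test\.js$/, ".js"))
  );
  return files.filter(
    (f) => f.endsWith(".js") && !f.endsWith(".test.js") && !tested.has(f)
  );
}

untestedModules(["src/auth.js", "src/auth.test.js", "src/payment.js"]);
// → ["src/payment.js"]
```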

Running tests provides immediate state:

$ npm test

PASS src/auth.test.js
PASS src/user.test.js
FAIL src/payment.test.js
  PaymentService should process refunds
    Expected: true
    Received: false

Tests: 14 passed, 1 failed

The AI now knows:

  • auth and user are complete
  • payment has a failing test to fix
  • 14 tests exist (progress metric)
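Extracting that progress metric from the summary line can be sketched as follows; the helper name is hypothetical and the line format follows the output shown above:

```javascript
// Pull pass/fail counts out of a test-runner summary line like
// "Tests: 14 passed, 1 failed" so they can be tracked across iterations.
function parseSummary(line) {
  const passed = /(\d+) passed/.exec(line);
  const failed = /(\d+) failed/.exec(line);
  return {
    passed: passed ? Number(passed[1]) : 0,
    failed: failed ? Number(failed[1]) : 0,
  };
}

parseSummary("Tests: 14 passed, 1 failed");
// → { passed: 14, failed: 1 }
```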

ralph creates this structure in your project:

project/
├── .ralph/
│   └── config.toml     # Configuration
└── .plans/
    ├── prd.json        # Feature requirements (PRD)
    ├── PROMPT.md       # System prompt for the AI
    └── progress.txt    # Learning log

The Product Requirements Document defines features for the AI to work on:

{
  "features": [
    {
      "id": "auth-tests",
      "name": "Auth Module Tests",
      "status": "in_progress"
    },
    {
      "id": "user-tests",
      "name": "User Module Tests",
      "status": "pending"
    }
  ]
}
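An iteration might use this file to decide what to work on next. A minimal sketch, assuming in-progress work takes priority over pending work; the helper name and priority order are assumptions, not ralph's documented behavior:

```javascript
// Pick the next feature from a PRD object: resume anything in progress
// first, otherwise start the first pending feature.
function nextFeature(prd) {
  return (
    prd.features.find((f) => f.status === "in_progress") ||
    prd.features.find((f) => f.status === "pending") ||
    null
  );
}

const prd = {
  features: [
    { id: "auth-tests", name: "Auth Module Tests", status: "in_progress" },
    { id: "user-tests", name: "User Module Tests", status: "pending" },
  ],
};
nextFeature(prd).id; // → "auth-tests"
```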

The system prompt that runs every iteration:

# Task: Add Test Coverage

Add tests for all modules in src/.

## Progress
Check progress.txt for completed work.

## Completion
When coverage is 80%+, output:
<promise>COMPLETE</promise>
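Detecting that completion signal in the model's output is a simple string check. An illustrative sketch; ralph's actual detection may differ:

```javascript
// Return true if the output contains the completion marker
// <promise>COMPLETE</promise>, tolerating surrounding whitespace.
function isComplete(output) {
  return /<promise>\s*COMPLETE\s*<\/promise>/.test(output);
}

isComplete("Coverage is 84%. <promise>COMPLETE</promise>"); // → true
isComplete("Still working on TokenService tests");          // → false
```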

If something goes wrong, state helps recover:

# See what ralph did
git log --oneline -20

# Undo last iteration's work
git reset --hard HEAD~1

# Continue from a known good state
git checkout known-good-commit
ralph run
In your prompt:

## State Management
1. Update progress.txt after completing each module
2. Commit after each significant piece of work
3. Use descriptive commit messages
4. Check git status before starting work

# ralph init creates the .plans/ directory with:
# - prd.json (features)
# - PROMPT.md (your task)
# - progress.txt (learning log)
ralph init

Tests are objective. A passing test suite is more reliable than any progress file.

If there’s a conflict between a progress file and actual code, trust the code. Files can get out of sync; code is ground truth.
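A sketch of that principle: cross-check what a progress file claims against test files that actually exist, and re-verify anything the code can't confirm. Names and layout here are hypothetical:

```javascript
// Given module names a progress file claims are done and the test files
// actually on disk, return claims the code does not back up.
function unverifiedClaims(claimedDone, existingTestFiles) {
  const onDisk = new Set(existingTestFiles);
  return claimedDone.filter((mod) => !onDisk.has(`src/${mod}.test.js`));
}

unverifiedClaims(["auth", "payment"], ["src/auth.test.js"]);
// → ["payment"]  (claimed done, but no test file: trust the code, re-verify)
```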