
# Test Coverage

Adding comprehensive tests is tedious. It’s also perfect for ralph—systematic, repetitive work where each iteration builds on the last.

  • Consistent patterns — The AI establishes patterns early and follows them
  • Incremental progress — Each iteration adds more tests
  • Self-verifying — Test output shows what’s done and what’s not
  • Natural exit condition — Coverage percentage provides clear completion criteria
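That exit condition can be checked mechanically. As a minimal sketch, assuming Jest's `json-summary` coverage reporter (which writes a `total` section to `coverage/coverage-summary.json`), a helper like the one below decides whether the loop is done. The `CoverageTotals` shape and `isCoverageMet` name are illustrative, not part of ralph:

```typescript
// Simplified shape of the "total" entry in Jest's coverage-summary.json.
interface CoverageTotals {
  lines: { pct: number };
  branches: { pct: number };
}

// True once line coverage meets the target -- the loop's exit condition.
// (Hypothetical helper; ralph itself only looks for the COMPLETE marker.)
function isCoverageMet(totals: CoverageTotals, lineTarget: number): boolean {
  return totals.lines.pct >= lineTarget;
}

// Example totals in the format Jest reports:
const totals: CoverageTotals = { lines: { pct: 82.4 }, branches: { pct: 71.0 } };
console.log(isCoverageMet(totals, 80)); // true
```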

Add tests until coverage reaches 80%.

```bash
ralph init
```

Edit `.ralph/config.toml`:

```toml
adapter = "claude"
maxIterations = 50
```
### Prompt

````markdown
# Add Test Coverage

Goal: Achieve 80%+ test coverage for the codebase.

## Current Status

Run `npm test -- --coverage` to see current coverage.

## Guidelines

- Use Jest (already configured)
- Follow patterns in existing tests at `src/__tests__/`
- Each module should have a corresponding `.test.ts` file
- Test:
  - Happy paths
  - Error conditions
  - Edge cases (null, undefined, empty arrays)

## What to Test

Priority order:

1. Untested files (0% coverage)
2. Low coverage files (<50%)
3. Critical paths (auth, payments, data)

## Test Structure

```typescript
describe('ModuleName', () => {
  describe('functionName', () => {
    it('should do expected behavior', () => {
      // Arrange
      // Act
      // Assert
    });

    it('should handle edge case', () => {
      // ...
    });

    it('should throw on invalid input', () => {
      // ...
    });
  });
});
```

## Process

1. Run `npm test` to verify new tests pass
2. Check coverage improved
3. Commit: "test(module): add tests for [name]"

Update progress.txt after each tested module.

When `npm test -- --coverage` shows 80%+ coverage, output: COMPLETE
````
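To make the Arrange/Act/Assert skeleton concrete, here is one filled-in case for a hypothetical `slugify` helper. The function and its behavior are assumptions, and it is shown without the Jest harness so the example stands alone:

```typescript
// Hypothetical helper under test (not from the real codebase).
function slugify(input: string): string {
  return input
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumeric runs into hyphens
    .replace(/^-+|-+$/g, '');    // trim leading/trailing hyphens
}

// Arrange
const raw = 'Hello, World!';
// Act
const slug = slugify(raw);
// Assert
if (slug !== 'hello-world') {
  throw new Error(`expected "hello-world", got "${slug}"`);
}

// Edge case: empty input should yield an empty slug
if (slugify('') !== '') {
  throw new Error('expected empty slug for empty input');
}
```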

## Example: API Endpoint Tests

Add integration tests for all API endpoints.

### Prompt

````markdown
# Add API Endpoint Tests

Add integration tests for all API endpoints.

## Find Untested Endpoints

```bash
# Routes without tests
for f in src/routes/*.ts; do
  test_file="${f%.ts}.test.ts"
  if [ ! -f "$test_file" ]; then
    echo "$f needs tests"
  fi
done
```

## Test Template

```typescript
import request from 'supertest';
import app from '../app';
import { setupTestDb, teardownTestDb } from './helpers';

describe('GET /api/users', () => {
  beforeAll(async () => {
    await setupTestDb();
  });

  afterAll(async () => {
    await teardownTestDb();
  });

  it('returns list of users', async () => {
    const response = await request(app)
      .get('/api/users')
      .set('Authorization', 'Bearer test-token');

    expect(response.status).toBe(200);
    expect(response.body).toHaveProperty('users');
    expect(Array.isArray(response.body.users)).toBe(true);
  });

  it('requires authentication', async () => {
    const response = await request(app)
      .get('/api/users');

    expect(response.status).toBe(401);
  });
});
```

For each endpoint, test:

- Success case (200/201)
- Authentication required (401)
- Authorization/permissions (403)
- Not found (404)
- Validation errors (400)

Track completed routes in progress.txt.

When every route file has a corresponding test file, output: COMPLETE
````
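The five status codes correspond to distinct failure branches, and it helps to check them in a consistent order: authenticate, authorize, locate, then validate. A framework-free sketch of that ordering follows; the `Req` shape and `classify` helper are hypothetical, for illustration only:

```typescript
// Hypothetical request summary (not a real framework type).
interface Req {
  token?: string;          // bearer token, if any
  role?: 'admin' | 'user'; // role granted by the token
  resourceExists: boolean; // does the target resource exist?
  bodyValid: boolean;      // did the payload pass validation?
}

// Order matters: each check only makes sense after the ones before it.
function classify(req: Req): number {
  if (!req.token) return 401;           // Authentication required
  if (req.role !== 'admin') return 403; // Authorization/permissions
  if (!req.resourceExists) return 404;  // Not found
  if (!req.bodyValid) return 400;       // Validation errors
  return 200;                           // Success
}

console.log(classify({ resourceExists: true, bodyValid: true })); // 401: no token
```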

## Example: Snapshot Testing

Add snapshot tests for React components.

### Prompt

````markdown
# Add Component Snapshot Tests

Add snapshot and behavioral tests for all React components.

## Find Components Needing Tests

```bash
find src/components -name '*.tsx' ! -name '*.test.tsx' ! -name '*.stories.tsx'
```

## Test Template

```tsx
import { render, screen, fireEvent } from '@testing-library/react';
import { Button } from './Button';

describe('Button', () => {
  // Snapshot test
  it('matches snapshot', () => {
    const { container } = render(<Button>Click me</Button>);
    expect(container).toMatchSnapshot();
  });

  // Behavioral tests
  it('calls onClick when clicked', () => {
    const onClick = jest.fn();
    render(<Button onClick={onClick}>Click me</Button>);
    fireEvent.click(screen.getByText('Click me'));
    expect(onClick).toHaveBeenCalledTimes(1);
  });

  it('does not call onClick when disabled', () => {
    const onClick = jest.fn();
    render(<Button disabled onClick={onClick}>Click me</Button>);
    fireEvent.click(screen.getByText('Click me'));
    expect(onClick).not.toHaveBeenCalled();
  });
});
```

## Process

1. Find untested component
2. Create `[Component].test.tsx`
3. Add snapshot tests for variations
4. Add behavioral tests
5. Run: `npm test -- --updateSnapshot` (first time only)
6. Commit

When all components have test files with passing tests, output: COMPLETE
````
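Under the hood, a snapshot test is just a comparison of rendered output against a stored baseline, plus a deliberate way to update that baseline. A framework-free sketch of the mechanism follows; the `renderButton` markup is made up, and Jest stores real snapshots in `__snapshots__/` files rather than a variable:

```typescript
// Toy "renderer" standing in for a React component (hypothetical markup).
function renderButton(label: string, disabled = false): string {
  return `<button${disabled ? ' disabled' : ''}>${label}</button>`;
}

// Stored baseline, as a snapshot file would hold it.
let snapshot = '<button>Click me</button>';

// A snapshot "test": fail on any drift from the baseline.
function matchesSnapshot(rendered: string): boolean {
  return rendered === snapshot;
}

console.log(matchesSnapshot(renderButton('Click me')));       // true: no drift
console.log(matchesSnapshot(renderButton('Click me', true))); // false: markup changed

// `--updateSnapshot` is the deliberate version of this reassignment:
snapshot = renderButton('Click me', true);
console.log(matchesSnapshot(renderButton('Click me', true))); // true after update
```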

## Tips for Test Generation
### 1. Provide Test Examples
The AI follows patterns it sees. Include a good example:
```markdown
## Example Test (follow this pattern)
[Your best existing test]
```

### 2. Define "Done"

Be explicit about what "done" means:

```markdown
## Coverage Requirements
- Line coverage: 80%+
- Branch coverage: 70%+
- All public methods tested
```

### 3. Describe Test Data

Tell the AI how to handle fixtures:

```markdown
## Test Data
- Use factories in `tests/factories/`
- Don't hardcode IDs
- Clean up after tests
```
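A factory in that style might look like the sketch below. The `User` shape and file location are assumptions; the point is sequential IDs instead of hardcoded ones, with per-test overrides:

```typescript
// tests/factories/user.ts (hypothetical shape)
interface User {
  id: string;
  email: string;
  name: string;
}

let seq = 0;

// Each call yields a fresh, unique user; tests never hardcode IDs.
function userFactory(overrides: Partial<User> = {}): User {
  seq += 1;
  return {
    id: `user-${seq}`,
    email: `user${seq}@example.test`,
    name: 'Test User',
    ...overrides,
  };
}

const a = userFactory();
const b = userFactory({ name: 'Alice' });
console.log(a.id !== b.id); // true: IDs are unique per call
console.log(b.name);        // Alice
```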

### 4. Enumerate Edge Cases

List edge cases to test:

```markdown
## Edge Cases to Cover
- Empty arrays
- Null/undefined inputs
- Maximum length strings
- Negative numbers
```
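A data-driven table keeps such cases cheap to add. Jest's `it.each` does this inside the harness; the framework-free sketch below shows the same idea for a hypothetical `sum` helper (string-length cases would follow the same pattern):

```typescript
// Hypothetical helper: tolerant of null/undefined input.
function sum(xs: number[] | null | undefined): number {
  if (!xs) return 0;
  return xs.reduce((acc, x) => acc + x, 0);
}

// One row per edge case from the list above.
const cases: Array<[number[] | null | undefined, number]> = [
  [[], 0],            // empty array
  [null, 0],          // null input
  [undefined, 0],     // undefined input
  [[-1, -2, -3], -6], // negative numbers
];

for (const [input, expected] of cases) {
  const actual = sum(input);
  if (actual !== expected) {
    throw new Error(`sum(${JSON.stringify(input)}) = ${actual}, expected ${expected}`);
  }
}
```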