Claw Mart
Issue #31 · April 19, 2026

Your agent needs a definition of done (or it'll loop forever)

I watched an agent spend 47 minutes "optimizing" a function that was already perfect. It kept making micro-adjustments, running tests, tweaking variable names, then starting over. The issue wasn't the model — it was me. I never told it when to stop.

This is the #1 killer of agent productivity: unbounded iteration. Your agent will polish, refactor, and "improve" until you run out of tokens or patience. The fix is simple but requires discipline.

Every task needs three things:

  • Objective: What you want done
  • Constraints: What it can't do or change
  • Done criteria: Specific conditions that mean "stop working"
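These three parts map naturally onto a small data structure you can render into every task prompt. A minimal sketch (the `TaskSpec` name and `render` method are my own illustration, not a specific tool's API):

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    objective: str                                           # what you want done
    constraints: list[str] = field(default_factory=list)     # what the agent can't touch
    done_criteria: list[str] = field(default_factory=list)   # binary "stop working" conditions

    def render(self) -> str:
        # Render the spec as the block of text the agent actually receives.
        lines = [f"Objective: {self.objective}", "Constraints:"]
        lines += [f"- {c}" for c in self.constraints]
        lines += ["Done criteria:"]
        lines += [f"- {d}" for d in self.done_criteria]
        return "\n".join(lines)
```

Forcing every task through a template like this makes a missing "done" section impossible to overlook.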

Here's what this looks like in practice:

## Task: Fix the login bug

**Objective:** Restore login for users with valid credentials

**Constraints:**
- Don't change the database schema
- Don't modify the UI components
- Keep existing error handling patterns

**Done criteria:**
- Test user can log in with demo@test.com/password123
- All existing login tests still pass
- No new console errors on login page
- Changes documented in CHANGELOG.md
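Most of those criteria can be checked mechanically: each one is a command whose exit code decides pass or fail. A sketch of that idea (the `npm` commands are hypothetical stand-ins for your project's real checks):

```python
import subprocess

# Each check is (description, command). Exit code 0 = criterion met.
CHECKS = [
    ("existing login tests pass", ["npm", "test"]),       # hypothetical command
    ("no linting errors", ["npm", "run", "lint"]),        # hypothetical command
]

def run_checks(checks) -> bool:
    # Binary verdicts only: a check either passes or it fails.
    for name, cmd in checks:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            print(f"FAIL: {name}")
            return False
        print(f"PASS: {name}")
    return True
```

Anything a script can verify, the agent can verify too, which keeps "done" out of the realm of opinion.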

Without done criteria, your agent becomes a perfectionist. It'll refactor your entire codebase "while I'm here." It'll optimize database queries that were already fast enough. It'll rewrite comments for "clarity."

Warning: "Make it better" is not a done criterion. Neither is "optimize the code" or "improve performance." These are invitations to infinite loops.

Good done criteria are binary. Either the test passes or it doesn't. Either the feature works or it doesn't. Either the deployment succeeds or it fails.

The pattern that works:

**Done criteria:**
□ Feature X works as described in requirements
□ All tests pass (run `npm test`)
□ No linting errors (run `npm run lint`)
□ Documentation updated in README.md
□ PR description includes before/after screenshots

**Stop working when all boxes are checked.**
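That stop rule can be sketched as a loop: keep working while any box is unchecked, and halt the moment every criterion passes or the step budget runs out. The function below is my own illustration of the pattern, not any agent framework's API:

```python
def work_until_done(step, done_criteria, max_steps=10):
    """Run step() until every done criterion passes or the budget runs out.

    step: one unit of agent work (edit, test run, etc.)
    done_criteria: list of zero-argument callables returning True/False
    """
    for _ in range(max_steps):
        step()
        if all(check() for check in done_criteria):
            return True   # definition of done met: permission to stop
    return False          # budget exhausted; escalate instead of polishing forever
```

Note the two exits: the criteria grant permission to stop early, and `max_steps` guarantees the loop can't run until you're out of tokens or patience.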

I started tracking this across my coding sessions. Before done criteria: average 73 minutes per task, with agents often "improving" working solutions. After: average 23 minutes per task, with agents stopping as soon as requirements were met.

The productivity gain isn't just time — it's focus. Your agent stops gold-plating and starts shipping. It completes tasks instead of perfecting them.

This applies beyond coding. Marketing copy, data analysis, research tasks — everything needs a clear finish line. Otherwise, your agent will research "just one more source" or "refine the messaging" until the heat death of the universe.

Your agent wants to help. But without boundaries, helping becomes hindrance. Give it permission to stop when the job is done.

