Claw Mart Daily
Issue #12 · March 24, 2026

Dynamic workflows beat rigid scripts — here's the pattern

Most agents follow rigid scripts: step 1, step 2, step 3, done. They're brittle. They break when reality doesn't match your assumptions.

Better agents decide their own path. They evaluate what they need, pick the right tool, validate the output, then adapt. Here's how to build this.

The Dynamic Workflow Pattern

Instead of hardcoding steps, give your agent a goal and let it choose its approach. The pattern has three parts:

  • Tool registry: Available functions with clear descriptions
  • Decision loop: Agent evaluates current state and picks next action
  • Validation gates: Check output quality before proceeding

Here's a meeting prep agent that adapts based on who's attending:

tools = {
  "research_person": {"function": research_person, "description": "Get background on meeting attendee"},
  "check_calendar": {"function": check_calendar, "description": "Find recent interactions with person"},
  "scan_emails": {"function": scan_emails, "description": "Pull relevant email threads"},
  "generate_talking_points": {"function": generate_talking_points, "description": "Create discussion topics"},
  "risk_assessment": {"function": risk_assessment, "description": "Flag potential issues or conflicts"},
}

def prep_meeting(attendees, context):
  for person in attendees:
    # Agent decides what it needs for this attendee
    plan = agent.evaluate_prep_needs(person, context)

    for action in plan:
      result = tools[action["tool"]]["function"](**action["params"])

      # Validate before continuing
      if not agent.validate_result(result, action["expected"]):
        # Try an alternative approach
        backup_plan = agent.replan(person, result)
        result = execute_backup(backup_plan)

      context.update(result)

The agent might research a new client extensively but skip background checks for your weekly 1:1 with your manager. It adapts.

Why This Works Better

Static workflows assume every situation is identical. Dynamic workflows handle the real world:

  • VIP attendee? Agent automatically does deeper research
  • Follow-up meeting? Skips basic background, focuses on action items
  • Conflict detected? Runs risk assessment and suggests talking points
  • Missing data? Tries alternative sources or flags the gap
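The branching above can be sketched as a minimal, rule-based version of the planning step. A real agent would typically delegate this decision to an LLM; the function name `evaluate_prep_needs` comes from the example above, but the context keys `known_people` and `is_followup` are hypothetical illustrations:

```python
def evaluate_prep_needs(person, context):
    """Return an ordered plan of tool calls based on what is already known."""
    plan = []
    if person not in context.get("known_people", set()):
        # Unfamiliar attendee: pay for the expensive research up front
        plan.append({"tool": "research_person", "params": {"name": person},
                     "expected": "current_role"})
    if context.get("is_followup"):
        # Follow-up meeting: skip background, pull the open threads instead
        plan.append({"tool": "scan_emails", "params": {"name": person},
                     "expected": "threads"})
    else:
        plan.append({"tool": "check_calendar", "params": {"name": person},
                     "expected": "meetings"})
    # Every meeting ends with talking points, whatever else was gathered
    plan.append({"tool": "generate_talking_points", "params": {"name": person},
                 "expected": "topics"})
    return plan
```

Each step carries the tool name, its parameters, and an `expected` field so the validation gate downstream knows what a useful result looks like.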

The key insight: Let your agent think about what it needs instead of telling it what to do.

Implementation Tips

Start with tool descriptions that explain when to use each function, not just what it does:

"research_person": {
  "function": get_linkedin_background,
  "when": "First meeting or unfamiliar attendee",
  "cost": "high",
  "reliability": "medium"
}

Add cost and reliability metrics so your agent can make smart tradeoffs. Sometimes the quick-and-dirty approach is better than the thorough one.
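One way to act on those metrics is a small selection helper: given candidate tools and a cost budget, prefer the most reliable affordable option. This is a sketch under assumed conventions; the registry shape, the `low`/`medium`/`high` scale, and the numeric `budget` are illustrative choices, not part of any real framework:

```python
# Map the qualitative ratings onto comparable numbers (assumed scale)
COST = {"low": 1, "medium": 2, "high": 3}
RELIABILITY = {"low": 1, "medium": 2, "high": 3}

def pick_tool(candidates, registry, budget):
    """Pick the most reliable candidate whose cost fits the budget."""
    affordable = [name for name in candidates
                  if COST[registry[name]["cost"]] <= budget]
    if not affordable:
        return None  # nothing fits: flag the gap instead of overspending
    return max(affordable,
               key=lambda name: RELIABILITY[registry[name]["reliability"]])
```

With a tight budget this happily returns the quick-and-dirty tool; returning `None` when nothing fits is what lets the agent "flag the gap" rather than silently overspend.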

Build validation that actually works. Don't just check if you got data back—check if it's useful:

from datetime import datetime, timedelta

def validate_research(result, person):
  # A missing current role means the lookup effectively failed
  if not result.get("current_role"):
    return False
  # Stale data fails too: reject anything older than 90 days
  if result["last_updated"] < datetime.now() - timedelta(days=90):
    return False
  return True

Your agent gets smarter when it can judge its own work.
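That judge-your-own-work loop can be factored into one reusable gate: run a tool, validate the result, and let a replan callback adjust the parameters before retrying. A generic sketch, not tied to any framework; here `replan_fn` simply rewrites the parameter dict:

```python
def run_with_validation(tool_fn, params, validate, replan_fn, max_attempts=2):
    """Call tool_fn until its result passes validation, replanning between tries."""
    for _ in range(max_attempts):
        result = tool_fn(**params)
        if validate(result):
            return result
        # Failed the gate: let the agent adjust its approach and retry
        params = replan_fn(result)
    return None  # caller decides how to surface the gap
```

Capping attempts matters: without `max_attempts`, a tool that can never satisfy the validator would loop forever.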

The Result

Dynamic agents handle edge cases you never thought of. They're more resilient, more useful, and they actually get better over time as they encounter new situations.

Stop scripting every step. Give your agent goals and good judgment instead.

