Claw Mart
March 21, 2026 · 9 min read · Claw Mart Team

Cron Jobs in OpenClaw: Scheduling Recurring Agent Tasks


Let's be honest about something: most people building AI agents are running them manually like it's 2019 and they're testing a Python script on their laptop. They fire up the agent, watch it do its thing, maybe screenshot the output, and call it a day.

That's fine for demos. It's terrible for anything real.

The moment you want an agent to actually work for you — checking inventory daily, summarizing reports every Monday morning, monitoring competitors weekly, pulling and processing data overnight — you need scheduling. You need cron. And you need it to not fall apart the second you look away.

OpenClaw makes this dramatically easier than stitching together five different tools and praying to the reliability gods. Let me walk you through exactly how to set up recurring agent tasks with cron jobs in OpenClaw, what pitfalls to avoid, and how to build something that actually runs at 3am without your supervision and doesn't blow up.

Why Most Scheduled Agents Fail (And Why You Should Care)

Before we get into the how, let's talk about why this is even a problem worth solving carefully.

When developers first try scheduling AI agents, they typically do something like this:

# The naive approach
0 9 * * * cd /home/user/my-agent && python run_agent.py

Looks clean. Works never. Here's what actually happens:

Problem 1: Environment Hell. Cron runs in a stripped-down shell environment. Your virtual environment isn't activated. Your .env file isn't sourced. Your API keys don't exist. The agent starts, immediately fails because it can't find its dependencies or credentials, and you don't find out until you manually check hours later.
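The usual fix is to make the entrypoint self-sufficient: load your credentials explicitly instead of hoping cron inherited them. Here's a minimal, dependency-free sketch of a `.env` loader — the `load_dotenv` helper and file format here are illustrative, not an OpenClaw API:

```python
import os

def load_dotenv(path: str = ".env") -> None:
    """Minimal .env loader. Cron won't source your shell profile,
    so the entrypoint has to put credentials into the environment itself."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and anything that isn't KEY=VALUE
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber values that are already set in the environment
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

if __name__ == "__main__":
    load_dotenv()
    # ...start the agent only after the environment is in place
```

In practice most teams use a library for this, but the point stands either way: a scheduled process must assume it inherits nothing.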

Problem 2: Silent Failures. The LLM returns a rate limit error. A tool call times out. The agent hallucinates and produces garbage output. Cron doesn't care. Cron doesn't retry. Cron doesn't tell you. It just... moves on. Your "daily report" agent produced nothing, and you won't know until someone asks where the report is.

Problem 3: No Memory Between Runs. Your agent runs at 9am Monday, does great work, and then the process dies. Tuesday at 9am, it starts completely fresh. No memory of what it did yesterday. No context. No checkpoint. It's Groundhog Day for your AI.

Problem 4: Cost Explosions. A scheduled agent that hits a retry loop, or gets stuck in a reasoning chain, can burn through tokens like a flamethrower through tissue paper. Without guardrails, you wake up to a $200 API bill from a single overnight run.

OpenClaw addresses every single one of these. Not with band-aids, but architecturally.

OpenClaw's Scheduling Model: How It Actually Works

OpenClaw treats agent runs as durable workflows, not disposable scripts. This is the fundamental difference. When you schedule a recurring task in OpenClaw, you're not just setting a timer on a Python file. You're creating a managed execution with built-in persistence, observability, retry logic, and cost controls.

Here's the basic structure of a scheduled agent in OpenClaw:

from openclaw import Agent, Schedule, Tool
from openclaw.schedules import cron

# Define your agent
inventory_checker = Agent(
    name="inventory-monitor",
    instructions="""
    You are an inventory monitoring agent. Each run, you:
    1. Pull current inventory levels from the database
    2. Compare against minimum threshold levels
    3. Flag any items below threshold
    4. Generate a summary report
    5. Send alerts for critical shortages
    """,
    tools=[
        Tool.database_query,
        Tool.slack_notify,
        Tool.email_send,
    ],
    model="openclaw-default",
    max_tokens_per_run=4000,
)

# Schedule it
inventory_checker.schedule(
    cron("0 8 * * *"),  # Every day at 8am
    timezone="America/New_York",
    retry_policy={
        "max_retries": 3,
        "backoff": "exponential",
        "initial_delay_seconds": 30,
    },
    on_failure="notify",
    notify_channel="slack:#ops-alerts",
)

Let's break down what's happening here because every line matters.

The Agent Definition

The Agent object is your core unit. The instructions field is where you define what the agent should do on each run. Notice this isn't a one-shot prompt — it's a persistent behavioral definition. OpenClaw keeps this consistent across every scheduled execution.

The tools list defines what the agent can actually do in the world. Database queries, Slack notifications, email — these are the agent's hands. OpenClaw sandboxes tool execution, which matters enormously when you're running things unattended. You don't want a scheduled agent accidentally dropping a database table at 3am because it misinterpreted a query.

The max_tokens_per_run parameter is your cost guard. This is a hard ceiling. If the agent hits this limit, the run terminates gracefully instead of spiraling. This single parameter has probably saved more money than any other feature in OpenClaw's scheduling system.

The Schedule Definition

The cron() function accepts standard cron syntax. If you've used cron before, you're right at home. If you haven't, here's the quick cheat sheet:

┌───────── minute (0–59)
│ ┌─────── hour (0–23)
│ │ ┌───── day of month (1–31)
│ │ │ ┌─── month (1–12)
│ │ │ │ ┌─ day of week (0–7, where 0 and 7 = Sunday)
│ │ │ │ │
* * * * *

Some practical examples:

cron("0 9 * * 1")       # Every Monday at 9am
cron("*/30 * * * *")    # Every 30 minutes
cron("0 0 1 * *")       # First day of every month at midnight
cron("0 8,17 * * 1-5")  # 8am and 5pm, weekdays only
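If you want to sanity-check how the five fields combine, here's a toy matcher in plain Python. It handles `*`, `*/step`, lists, and ranges, though it deliberately ignores real cron's quirk of OR-ing day-of-month and day-of-week when both are restricted — it's a mental model, not a scheduler:

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Match one cron field ('*', '*/30', '8,17', '1-5', '9') against a value."""
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:
                return True
        elif "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr: str, dt: datetime) -> bool:
    """Would a cron expression fire at this exact minute?"""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            and field_matches(dow, dt.isoweekday() % 7))  # maps Sunday to 0
```

For example, `"0 9 * * 1"` matches 9:00am on a Monday and nothing else.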

The timezone parameter is deceptively important. Cron traditionally runs in the server's timezone, which leads to bugs when your team is distributed or when daylight saving time shifts things by an hour. OpenClaw lets you specify the timezone explicitly, and it handles DST transitions correctly. Small thing. Huge quality-of-life improvement.

The Retry Policy

This is where OpenClaw diverges most sharply from raw cron. The retry_policy gives you automatic, configurable retries with backoff. When an LLM call fails — and it will, because API rate limits, network blips, and model timeouts are facts of life — the agent doesn't just die. It waits 30 seconds, tries again. If it fails again, it waits 60 seconds. Then 120. Up to 3 attempts.

retry_policy={
    "max_retries": 3,
    "backoff": "exponential",     # or "linear" or "fixed"
    "initial_delay_seconds": 30,
    "retry_on": ["rate_limit", "timeout", "tool_error"],  # selective retry
}

The retry_on parameter lets you be selective. Maybe you want to retry on rate limits and timeouts but not on tool errors (because a tool error might mean the agent is doing something wrong, and retrying would just repeat the mistake). This level of control matters in production.
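Under the hood, a retry policy like this is just a loop with a growing delay. Here's a generic sketch of the same shape in plain Python — OpenClaw does this for you, so treat the function and its defaults as illustrative:

```python
import time

def run_with_retries(fn, max_retries=3, initial_delay=30,
                     backoff="exponential", retry_on=(TimeoutError,)):
    """Try fn(); on a retryable error, wait, grow the delay, and try again.
    Mirrors the policy above: 30s, then 60s, then 120s for exponential backoff."""
    delay = initial_delay
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries:
                raise  # out of attempts: surface the error
            time.sleep(delay)
            if backoff == "exponential":
                delay *= 2
            elif backoff == "linear":
                delay += initial_delay
            # "fixed" leaves delay unchanged
```

The `retry_on` tuple is the moral equivalent of the selective-retry list: only exceptions you've named get retried; everything else fails fast.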

State and Memory Between Runs

Here's where things get genuinely powerful. OpenClaw maintains a run context that persists between scheduled executions. Your agent isn't starting from scratch every time.

from openclaw import Agent, Schedule, RunContext

report_agent = Agent(
    name="weekly-competitor-report",
    instructions="""
    You monitor competitor pricing and generate weekly reports.
    Use your run context to compare this week's findings against
    previous weeks. Highlight significant changes.
    """,
    tools=[Tool.web_scrape, Tool.spreadsheet_write, Tool.email_send],
    context=RunContext(
        persist=True,
        storage="openclaw-default",
        retain_last_n_runs=12,  # Keep context from last 12 runs
    ),
)

The RunContext with persist=True means the agent has access to its own history. It knows what it found last week. It can identify trends. It can say "competitor X dropped their price by 15% since last Tuesday" instead of just reporting a snapshot.

The retain_last_n_runs parameter keeps storage manageable. You don't need infinite history — you need enough context to be useful. For a weekly report, 12 runs gives you about three months of context. Adjust based on your use case.

You can also inject data into the context manually:

report_agent.context.set("competitor_list", [
    "competitor-a.com",
    "competitor-b.com",
    "competitor-c.com",
])

report_agent.context.set("alert_threshold_percent", 10)

This lets you update the agent's configuration without redeploying. Change the competitor list, adjust thresholds, add new monitoring targets — all without touching the agent's core instructions or schedule.
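If you want intuition for what a persistent context is doing, here's a toy stand-in built on a JSON file: settings survive process restarts, and `set`/`get` work across runs. The class name and storage format are made up for illustration — this is not how OpenClaw stores context internally:

```python
import json
from pathlib import Path

class FileRunContext:
    """Toy key/value context that outlives any single run,
    backed by a JSON file on disk."""

    def __init__(self, path="run_context.json"):
        self.path = Path(path)
        # Reload whatever a previous run left behind
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def set(self, key, value):
        self.data[key] = value
        self.path.write_text(json.dumps(self.data, indent=2))

    def get(self, key, default=None):
        return self.data.get(key, default)
```

The useful property is exactly the one described above: a fresh process constructed against the same path sees the previous run's values, so configuration changes don't require redeploying the agent.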

Observability: Seeing What Happened at 3am

Running an agent on a schedule without observability is like launching a satellite without telemetry. You know it's up there. You have no idea what it's doing.

OpenClaw provides a built-in run log for every scheduled execution:

from openclaw import RunLog

# Get the last 5 runs for an agent
runs = RunLog.get("inventory-monitor", last=5)

for run in runs:
    print(f"Run ID: {run.id}")
    print(f"Started: {run.started_at}")
    print(f"Status: {run.status}")  # success, failed, retrying, timeout
    print(f"Tokens used: {run.tokens_used}")
    print(f"Cost: ${run.cost_usd:.4f}")
    print(f"Duration: {run.duration_seconds}s")
    print(f"Tool calls: {run.tool_calls}")
    print(f"Output summary: {run.output[:200]}")
    print("---")

Every tool call, every reasoning step, every token spent — it's all logged. When something goes wrong at 3am, you don't have to guess. You pull up the run log, see exactly what happened, and fix it.

You can also set up real-time notifications:

inventory_checker.schedule(
    cron("0 8 * * *"),
    on_success="notify",
    on_failure="notify",
    notify_channel="slack:#agent-logs",
    notify_on_cost_exceed=0.50,  # Alert if a single run costs more than $0.50
)

That notify_on_cost_exceed parameter is a lifesaver. If your agent gets into a weird loop and starts burning tokens, you get an alert immediately instead of discovering it on your credit card statement.

A Real-World Example: The Daily Digest Agent

Let me put this all together with something practical. Say you want an agent that runs every morning, pulls key metrics from your business tools, and sends you a digest before you start your day.

from openclaw import Agent, Schedule, Tool, RunContext
from openclaw.schedules import cron

daily_digest = Agent(
    name="morning-digest",
    instructions="""
    You are a daily business digest agent. Every morning, you:

    1. Pull yesterday's sales data from the database
    2. Check for any new customer support tickets marked 'urgent'
    3. Summarize key metrics: revenue, new customers, churn, open tickets
    4. Compare against the previous day and the same day last week
    5. Highlight anything unusual (>10% deviation from norm)
    6. Format as a clean, scannable summary
    7. Send via email to the team distribution list

    Be concise. No fluff. Lead with the most important changes.
    If everything is normal, say so in one line and move on.
    """,
    tools=[
        Tool.database_query,
        Tool.email_send,
        Tool.calculator,
    ],
    model="openclaw-default",
    max_tokens_per_run=3000,
    context=RunContext(
        persist=True,
        retain_last_n_runs=30,  # One month of context
    ),
)

daily_digest.schedule(
    cron("0 7 * * 1-5"),  # 7am, weekdays only
    timezone="America/Chicago",
    retry_policy={
        "max_retries": 2,
        "backoff": "exponential",
        "initial_delay_seconds": 60,
    },
    on_failure="notify",
    notify_channel="slack:#ops-alerts",
    on_success="log",
    notify_on_cost_exceed=0.25,
)

This agent has persistent context (it remembers yesterday's metrics for comparison), cost guards, retry logic, failure alerts, and runs only on weekdays. It took about 30 lines of code. Try doing that with raw cron and a handful of shell scripts.

Managing Multiple Scheduled Agents

In practice, you'll have more than one scheduled agent. OpenClaw lets you manage them as a group:

from openclaw import AgentGroup

ops_agents = AgentGroup(
    name="operations",
    agents=[inventory_checker, daily_digest, report_agent],
    shared_config={
        "notify_channel": "slack:#ops-alerts",
        "max_tokens_per_run": 5000,
    },
)

# Pause all agents in the group
ops_agents.pause_all()

# Resume
ops_agents.resume_all()

# Get status of all scheduled agents
for agent in ops_agents.list():
    print(f"{agent.name}: next run at {agent.next_run_at}, last status: {agent.last_status}")

This is especially useful when you need to pause everything for maintenance, or when you want a single dashboard view of all your running agents.

Common Patterns and Anti-Patterns

After seeing a lot of teams set up scheduled agents, here are the patterns that work and the ones that don't:

Do this:

  • Start with a simple schedule and a narrow task. Get one agent running reliably before adding complexity.
  • Set max_tokens_per_run from day one. Adjust up if needed, but always have a ceiling.
  • Use the retry policy. LLM APIs are not as reliable as your database. Plan for failures.
  • Log everything. You will need the logs. It's never a question of if, it's when.
  • Test your agent manually before scheduling it. Obvious, but people skip it.

Don't do this:

  • Don't schedule an agent every minute "just to be safe." Be intentional about frequency.
  • Don't give scheduled agents tools they don't need. Principle of least privilege applies here too.
  • Don't skip the timezone parameter. "It worked on my machine" is the scheduling version of dependency hell.
  • Don't ignore cost. Even small runs add up when they happen 24/7/365.
  • Don't schedule agents that require human judgment without a human-in-the-loop step. Use on_complete="await_approval" for sensitive actions.

Getting Started Without the Setup Headache

If you're reading this and thinking "I want to try this but I don't want to spend a weekend configuring everything from scratch," I get it. That's exactly why Felix's OpenClaw Starter Pack exists. It includes pre-configured agent templates, scheduling patterns, and the boilerplate you'd otherwise spend hours writing yourself. It's a solid starting point whether you're building your first scheduled agent or your tenth — saves you from reinventing the wheel on stuff like retry policies, context management, and notification setup.

Think of it as the difference between building a house from raw lumber versus starting with a framed structure. You still customize everything, but you skip the tedious foundation work.

Debugging Scheduled Runs

When a scheduled agent fails (and eventually, one will), here's the debugging workflow in OpenClaw:

from openclaw import RunLog

# Get the failed run
failed_run = RunLog.get("inventory-monitor", status="failed", last=1)[0]

# See the full execution trace
for step in failed_run.trace:
    print(f"[{step.timestamp}] {step.type}: {step.content[:100]}")

# Check if it was a tool failure
for call in failed_run.tool_calls:
    if call.status == "error":
        print(f"Tool '{call.tool_name}' failed: {call.error_message}")

# Check token usage (was it a cost ceiling hit?)
print(f"Tokens used: {failed_run.tokens_used} / {failed_run.max_tokens}")

# Replay the run manually for debugging
failed_run.replay(dry_run=True)  # Re-runs with same inputs, no side effects

The replay feature with dry_run=True is particularly valuable. It re-executes the agent with the same inputs and context from the failed run, but without actually sending emails, writing to databases, or making any external changes. You can see exactly what the agent would do, step by step, without any risk.

Scaling Considerations

A few things to think about as you add more scheduled agents:

Stagger your schedules. If you have five agents all scheduled for 9:00am, they'll all compete for resources and API rate limits simultaneously. Spread them out:

agent_a.schedule(cron("0 8 * * *"))   # 8:00am
agent_b.schedule(cron("15 8 * * *"))  # 8:15am
agent_c.schedule(cron("30 8 * * *"))  # 8:30am

Monitor aggregate cost. Individual run costs might be small, but 10 agents running daily adds up. Use OpenClaw's cost dashboard to track total spend across all scheduled agents.

Clean up old run data. retain_last_n_runs handles context, but also periodically review and archive old run logs. Disk is cheap, but clutter makes debugging harder.
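If you export run logs to disk, the housekeeping itself is easy to automate too. A generic sketch with the stdlib — the directory layout and `.json` extension are assumptions, so adapt them to however you actually archive OpenClaw logs:

```python
import time
from pathlib import Path

def prune_run_logs(log_dir: str, max_age_days: int = 90) -> int:
    """Delete run-log files older than max_age_days; returns how many were removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for path in Path(log_dir).glob("*.json"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed
```

Run it weekly (on its own schedule, naturally) and debugging stays fast because you're only ever searching recent history.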

What's Next

If you're running AI agents manually today, scheduling them is the single highest-leverage improvement you can make. The gap between "I run this when I remember to" and "this runs reliably every day at 8am and tells me if something goes wrong" is enormous in terms of practical value.

Start with one agent. One simple task. One schedule. Get it running, get it reliable, then expand. OpenClaw's scheduling system is built for exactly this progression — start simple, add complexity as needed, and never lose visibility into what your agents are actually doing.

The agents that create the most value are the ones that run without you. Set them up right, and they become the most reliable members of your team.
