How to Configure OpenClaw Cron Jobs for Your Agents

Let's cut to the chase: you built an agent in OpenClaw, it works beautifully when you run it manually, and now you want it to fire off automatically every morning at 7 AM, pull fresh data, do its thing, and drop results somewhere useful. Simple, right?
Except it's not. Because the moment you slap a cron schedule on an AI agent, you enter a world of silent failures, runaway token costs, lost state, and mysterious 3 AM errors that no one notices until the weekly report is missing on Monday morning.
I've been running OpenClaw agents on automated schedules for months now, and I've hit every single one of these problems. This post is the guide I wish I'd had when I started. We're going to cover exactly how to configure OpenClaw cron jobs, how to avoid the traps that catch everyone, and how to build something that actually runs reliably without babysitting.
The Core Problem: Agents Aren't Scripts
Here's what most people miss. A traditional cron job runs a deterministic script. Input goes in, output comes out, same thing every time. An OpenClaw agent is fundamentally different — it's non-deterministic. The same agent with the same prompt can take a different code path every single run. It might call three tools one day and seventeen the next. It might finish in 8 seconds or spin for 4 minutes.
This matters because all the assumptions baked into traditional cron scheduling — fixed execution time, predictable resource usage, binary pass/fail — break down with agents. You need a different mental model, and OpenClaw gives you the primitives to handle it. You just need to know how to wire them up.
Setting Up Your First OpenClaw Cron Job
OpenClaw's scheduling system lives in the openclaw.schedule module. Here's the most basic version:
# openclaw.config.yaml
agent: daily-research-agent
schedule:
  cron: "0 7 * * *"   # Every day at 7:00 AM
  timezone: "America/New_York"
That's the hello-world version. Deploy it with:
openclaw deploy --config openclaw.config.yaml
Your agent will now run every morning at 7 AM Eastern. And it will work perfectly for about three days before something breaks. Here's why, and how to fix each issue before it bites you.
Problem #1: Silent Failures
This is the number one issue people hit. Your agent runs, the LLM call fails — rate limit, timeout, malformed JSON response, tool exception — and the job exits with code 0. No retry, no alert. You just don't get your output, and you don't know about it until you go looking.
The fix is OpenClaw's retry and notification config:
# openclaw.config.yaml
agent: daily-research-agent
schedule:
  cron: "0 7 * * *"
  timezone: "America/New_York"
execution:
  max_retries: 3
  retry_backoff: exponential   # 30s, 60s, 120s
  timeout_seconds: 300         # Hard kill after 5 minutes
  on_failure:
    notify:
      - type: webhook
        url: "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
      - type: email
        to: "you@company.com"
    dead_letter: true          # Save failed run state for debugging
The timeout_seconds is non-negotiable. I cannot stress this enough. Without it, an agent that enters an infinite tool-calling loop will just keep running, burning tokens, until your API credit runs dry. I've seen this happen. Someone in one of the OpenClaw community channels described a competitor analysis agent that racked up $180 in a single night because it recursively followed every outbound link on every page it found. A 5-minute timeout would have capped that at maybe $2.
The dead_letter: true flag is equally important. When a run fails after all retries, OpenClaw serializes the full execution state — the agent's chain-of-thought, every tool call, every response — and saves it. You can inspect it later:
openclaw runs inspect --agent daily-research-agent --status failed --last 5
This gives you the actual trace of what happened, not just a stack trace. You can see that the agent called the search tool, got a 429 back, tried to parse the error as search results, hallucinated data, then crashed when it tried to write to the output. That level of visibility is the difference between fixing a bug in 5 minutes and staring at logs for an hour.
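For intuition, the exponential backoff above simply doubles the wait between attempts. Here's that retry pattern reduced to plain Python — a sketch for illustration only, since OpenClaw handles this internally once you set max_retries:

```python
import time

def run_with_retries(task, max_retries=3, base_delay=30):
    """Retry a flaky callable with exponential backoff (illustrative sketch)."""
    last_error = None
    for attempt in range(max_retries):
        try:
            return task()  # success: return immediately, no further retries
        except Exception as exc:  # in practice, catch specific errors (429s, timeouts)
            last_error = exc
            if attempt < max_retries - 1:
                # delay doubles each time: 30s, 60s, 120s with the defaults
                time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"all {max_retries} attempts failed") from last_error
```

The important property is that the total wait stays bounded: three attempts with exponential backoff costs at most a few minutes, not an open-ended hang.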
Problem #2: State and Memory Loss
Most agent frameworks assume you're in an interactive session. When your agent runs from cron, there's no persistent memory by default. Every invocation starts fresh. Which means your "daily research agent" has no idea what it found yesterday, and it'll happily report the same findings every single day.
OpenClaw handles this with the state block:
# openclaw.config.yaml
agent: daily-research-agent
schedule:
  cron: "0 7 * * *"
  timezone: "America/New_York"
state:
  persistence: true
  store: "openclaw-state-store"   # Built-in key-value store
  hydrate_on_start: true          # Load previous run's state automatically
  keys:
    - last_run_timestamp
    - seen_item_ids
    - running_summary
With hydrate_on_start: true, your agent gets its previous state injected into context automatically when the cron job fires. Inside your agent skill, you access it like this:
from datetime import datetime, timezone

from openclaw.state import get_state, set_state

# Read what we found last time
seen_ids = get_state("seen_item_ids", default=[])
last_run = get_state("last_run_timestamp", default=None)

# ... do your agent work, find new items ...

# Persist for next run
new_ids = seen_ids + [item.id for item in new_items]
set_state("seen_item_ids", new_ids)
set_state("last_run_timestamp", datetime.now(timezone.utc).isoformat())
This is simple, but it's the thing that takes people from "cool demo" to "actually useful automation." The pattern of tracking last_run_timestamp and seen_item_ids is so common that you'll use it in probably 80% of your scheduled agents.
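To make the dedup step concrete, here it is in isolation — plain Python with no OpenClaw dependency, so you can unit-test it before wiring it into your skill:

```python
def filter_new_items(items, seen_ids):
    """Return only the items whose id hasn't been reported in a previous run."""
    seen = set(seen_ids)  # set membership keeps the check O(1) per item
    return [item for item in items if item["id"] not in seen]

# Example: two of three items were already seen on a prior run
items = [{"id": 1}, {"id": 2}, {"id": 3}]
fresh = filter_new_items(items, seen_ids=[1, 3])  # only {"id": 2} survives
```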
Problem #3: Runaway Costs
Non-deterministic execution means non-deterministic costs. An agent that usually costs $0.03 per run can randomly decide to "do more research" and cost $5. Over 30 days of daily runs, that variability adds up fast.
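A quick back-of-envelope shows why the outliers dominate. Just three spike days in an otherwise-normal month turn a sub-dollar bill into a fifteen-dollar one:

```python
# Back-of-envelope: a few outlier runs dominate monthly cost
typical_cost, spike_cost = 0.03, 5.00   # dollars per run
days, spike_days = 30, 3

baseline = days * typical_cost                                    # ~$0.90
with_spikes = (days - spike_days) * typical_cost + spike_days * spike_cost  # ~$15.81
```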
OpenClaw has token budgets built in:
execution:
  token_budget:
    max_input_tokens: 50000
    max_output_tokens: 10000
    max_total_cost_usd: 0.50            # Hard stop at 50 cents per run
    on_budget_exceeded: graceful_stop   # vs. hard_kill
The graceful_stop option is key. Instead of just killing the process, it signals to the agent that it's running out of budget and should wrap up with whatever it has. This means you still get partial results instead of nothing. For a daily briefing agent, getting 6 out of 10 research items is way better than getting an error email.
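Conceptually, graceful_stop behaves like a loop that checks spend before each expensive step and wraps up early rather than dying mid-flight. A rough sketch of the idea — not OpenClaw's actual implementation, and the function names here are made up for illustration:

```python
def run_research(items, estimate_cost, budget_usd=0.50):
    """Process items until the per-run budget would be exceeded, then wrap up.

    `estimate_cost` is a hypothetical per-item cost estimator; in a real agent
    the framework tracks actual token spend for you.
    """
    spent = 0.0
    results = []
    for item in items:
        projected = estimate_cost(item)
        if spent + projected > budget_usd:
            break  # graceful stop: keep the partial results instead of losing everything
        results.append(f"processed:{item}")
        spent += projected
    return results, spent
```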
Problem #4: Overlapping Runs
What happens when your 7 AM agent is still running at 7 AM the next day? By default, you get two instances running simultaneously, potentially writing to the same state store and corrupting each other's data.
schedule:
  cron: "0 7 * * *"
  timezone: "America/New_York"
  concurrency: skip   # Options: skip, queue, allow
skip means "if the previous run is still going, don't start a new one." This is almost always what you want. queue will wait and run it after the previous one finishes. allow lets them run in parallel (use this only if your agent is truly stateless and idempotent).
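The three policies boil down to one decision made at fire time, when the scheduler notices a previous run is still alive. A tiny sketch of that decision table:

```python
def on_schedule_fire(policy, previous_still_running):
    """What the scheduler decides at fire time, per concurrency policy (sketch)."""
    if not previous_still_running:
        return "start"
    if policy == "skip":
        return "skip"    # drop this tick entirely
    if policy == "queue":
        return "queue"   # run after the current one finishes
    if policy == "allow":
        return "start"   # run in parallel; only safe if stateless and idempotent
    raise ValueError(f"unknown policy: {policy}")
```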
Problem #5: Environment and Secrets
The classic "works on my machine, fails in cron" issue. Your agent needs API keys for the LLM provider, maybe credentials for a database, webhook URLs, etc. Hardcoding them is a non-starter. Relying on shell environment variables is fragile.
OpenClaw has a secrets manager:
openclaw secrets set SEARCH_API_KEY "your-key-here"
openclaw secrets set DATABASE_URL "postgres://..."
openclaw secrets set SLACK_WEBHOOK "https://hooks.slack.com/..."
In your config:
env:
  SEARCH_API_KEY: ${secrets.SEARCH_API_KEY}
  DATABASE_URL: ${secrets.DATABASE_URL}
These get injected at runtime. They're encrypted at rest, never logged, and available in your agent code through normal environment variable access. No more "the cron job can't find my API key because it runs as a different user" nonsense.
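Inside your agent code, injected secrets read like ordinary environment variables. For example (the setdefault line here just simulates the injection so the snippet stands alone):

```python
import os

# Simulate the runtime injection for demonstration purposes only;
# in a deployed agent, OpenClaw sets this before your code runs.
os.environ.setdefault("SEARCH_API_KEY", "demo-key")

# In your agent code, just read it like any env var:
search_api_key = os.environ["SEARCH_API_KEY"]

# Use .get() with a default for values that are optional
slack_webhook = os.environ.get("SLACK_WEBHOOK", "")
```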
A Complete, Production-Ready Config
Here's what a real, battle-tested config looks like with all the pieces together:
# openclaw.config.yaml
agent: daily-competitor-monitor
description: "Checks competitor pricing and features daily"

schedule:
  cron: "0 8 * * 1-5"   # Weekdays at 8 AM
  timezone: "America/New_York"
  concurrency: skip

state:
  persistence: true
  store: "openclaw-state-store"
  hydrate_on_start: true
  keys:
    - last_run_timestamp
    - known_competitor_prices
    - change_history

execution:
  max_retries: 3
  retry_backoff: exponential
  timeout_seconds: 300
  token_budget:
    max_input_tokens: 80000
    max_output_tokens: 15000
    max_total_cost_usd: 1.00
    on_budget_exceeded: graceful_stop
  on_failure:
    notify:
      - type: webhook
        url: ${secrets.SLACK_WEBHOOK}
    dead_letter: true

env:
  SEARCH_API_KEY: ${secrets.SEARCH_API_KEY}
  DATABASE_URL: ${secrets.DATABASE_URL}

output:
  - type: webhook
    url: ${secrets.SLACK_CHANNEL_WEBHOOK}
    format: markdown
  - type: storage
    path: "runs/competitor-monitor/"
    format: json
Deploy it:
openclaw deploy --config openclaw.config.yaml
Check that it's registered:
openclaw schedules list
You'll see something like:
AGENT                     CRON         NEXT RUN             STATUS
daily-competitor-monitor  0 8 * * 1-5  2026-01-20 08:00 ET  active
daily-research-agent      0 7 * * *    2026-01-20 07:00 ET  active
Monitoring and Observability
Once your jobs are running, you need to actually see what's happening. OpenClaw's built-in trace export handles this:
observability:
  trace_export:
    enabled: true
    format: opentelemetry   # Compatible with LangFuse, Phoenix, etc.
    endpoint: ${secrets.LANGFUSE_ENDPOINT}
  log_level: info           # debug for troubleshooting
  metrics:
    export: true
    include:
      - token_usage
      - execution_time
      - tool_call_count
      - retry_count
This gives you full chain-of-thought traces for every automated run, not just interactive ones. You can go back and see exactly what your agent did at 8 AM last Tuesday, what tools it called, what data it got back, and what decisions it made. When something goes wrong — and it will, eventually — this is how you figure out why.
Conditional Execution: Only Run When It Matters
Sometimes you don't want an agent running every single day. Maybe it should only trigger when there's new data to process, or skip holidays, or run more frequently during earnings season.
schedule:
  cron: "0 8 * * 1-5"
  timezone: "America/New_York"
  conditions:
    - type: state_check
      key: "data_updated"
      equals: true
    - type: calendar
      skip_dates:
        - "2026-01-20"   # MLK Day
        - "2026-02-17"   # Presidents Day
The state_check condition is particularly powerful. You can have a lightweight "data watcher" agent that runs every hour, checks if anything changed, and flips a flag. Then your expensive analysis agent only runs when there's actually something new to analyze. This alone can cut your monthly agent costs by 60-70%.
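To see the shape of the watcher pattern, here it is reduced to plain Python, with a dict standing in for the state store (a real version would use get_state/set_state from openclaw.state instead):

```python
state = {}  # stand-in for the OpenClaw state store

def watcher_run(state, latest_etag):
    """Cheap hourly check: flip the flag only when upstream data changed."""
    if state.get("last_etag") != latest_etag:
        state["last_etag"] = latest_etag
        state["data_updated"] = True  # the state_check condition reads this

def analysis_should_run(state):
    """Roughly what the state_check condition evaluates."""
    return state.get("data_updated") is True

def analysis_run(state):
    """Expensive agent: do the work, then clear the flag for next time."""
    state["data_updated"] = False
    return "analysis complete"
```

The expensive agent only fires when the flag is set, and clears it on completion, so repeated ticks with no upstream change cost you nothing.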
The Shortcut: Felix's OpenClaw Starter Pack
Look, I just walked you through a lot of configuration. And honestly, this is the simplified version — I left out agent graph composition, multi-step output pipelines, and advanced retry strategies because this post is already long enough.
If you don't want to set all of this up manually, Felix's OpenClaw Starter Pack on Claw Mart includes pre-built versions of the most common scheduled agent patterns. It's $29 and includes pre-configured skills for daily research briefings, competitor monitoring, content pipelines, and data sync agents — all with the retry logic, state management, token budgets, and observability config already wired up. I used it as my starting point and customized from there. It saved me probably a full weekend of trial-and-error on the state hydration patterns alone.
It's not a requirement — everything I've described above works with vanilla OpenClaw. But if you're the kind of person who'd rather start with a working template and modify it than build from scratch, it's genuinely the fastest path I've found.
Common Gotchas (Quick Hits)
Timezone confusion. Always set timezone explicitly. OpenClaw defaults to UTC if you don't, and you'll wonder why your 7 AM agent ran at 2 AM.
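You can demonstrate the discrepancy with the standard library alone: in January, 7 AM Eastern is noon UTC, so a "0 7 * * *" schedule interpreted as UTC fires at 2 AM local time:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# What you meant: 7 AM in New York on a winter morning (EST, UTC-5)
local = datetime(2026, 1, 20, 7, 0, tzinfo=ZoneInfo("America/New_York"))
print(local.astimezone(timezone.utc))  # 2026-01-20 12:00:00+00:00

# What a UTC-interpreted "0 7 * * *" actually fires at, in local terms
utc_seven = datetime(2026, 1, 20, 7, 0, tzinfo=timezone.utc)
print(utc_seven.astimezone(ZoneInfo("America/New_York")))  # 2 AM local
```

Note the offset also shifts with daylight saving time, so a UTC-pinned schedule drifts by an hour twice a year relative to local clocks.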
State store size. If you're appending to lists (like seen_item_ids), add a cleanup mechanism. After a few months, your state blob gets big enough to eat into your context window when hydrated.
from openclaw.state import get_state, set_state

# Keep only the last 1000 seen IDs
seen_ids = get_state("seen_item_ids", default=[])
if len(seen_ids) > 1000:
    seen_ids = seen_ids[-1000:]
set_state("seen_item_ids", seen_ids)
Testing your schedule without waiting. Don't sit around until 8 AM to see if your config works:
openclaw run --config openclaw.config.yaml --now --dry-run
The --dry-run flag simulates the full execution including state hydration and secret injection, but doesn't actually call the LLM or write outputs. Use it liberally.
Log rotation. If you're writing logs to disk (in addition to the trace export), set up rotation. Agent logs are verbose. A single agent running daily can generate 50MB+ of logs per month.
Next Steps
Here's what I'd do if I were starting fresh today:
- Pick one agent that you're currently running manually and that would benefit from daily automation. A research briefing, a data check, a content draft — something with clear input and output.
- Start with the complete config template above. Fill in your agent name, schedule, and secrets.
- Set the timeout to 5 minutes and the cost budget to $0.50. You can loosen these later once you see how your agent actually behaves on autopilot.
- Deploy it and watch the first 3-5 runs closely. Check the traces, verify the state is persisting correctly, make sure the output is landing where you expect.
- Then stop watching it. That's the whole point. If your retry logic, budget guards, and failure notifications are configured correctly, you'll only hear about it when something needs your attention.
The gap between "I built a cool agent" and "I have a reliable automated system" is real, but it's mostly configuration, not code. OpenClaw gives you the right knobs to turn. Now you know which ones to turn and how far.
Go ship something that runs while you sleep.