Advanced Cron Scheduling Patterns for OpenClaw Agents

Most people set up their first OpenClaw agent, watch it run once, feel like a genius, and then immediately face the ugly reality: running an AI agent once is trivial. Running it reliably on a schedule — at 2am on Tuesdays, every 15 minutes during market hours, on the first of every month but only if the previous run succeeded — that's where things get real.
I've been deep in OpenClaw for a while now, and the scheduling layer is where I see people either unlock serious value or rage-quit because their agent silently failed at 3am for the fifth time in a row. This post is everything I've learned about advanced cron scheduling patterns for OpenClaw agents — the stuff that separates a cute demo from a production system you actually trust.
If you're brand new to OpenClaw, do yourself a favor and grab Felix's OpenClaw Starter Pack before diving into the advanced stuff. It'll get your environment configured correctly and save you from a bunch of rookie mistakes that make the scheduling patterns below way harder to debug. Seriously — half the "my cron agent is broken" issues I see trace back to a misconfigured base setup.
Alright. Let's get into it.
Why Plain Cron Breaks Down With AI Agents
Traditional cron was built for deterministic scripts. Run a backup. Rotate some logs. Send a report. These things take predictable amounts of time, rarely fail in ambiguous ways, and don't cost money per execution token.
AI agents are the opposite of all of that. They're non-deterministic. They take variable amounts of time. They can get stuck in loops. They burn API credits. They fail in weird, partial ways where half the work got done but the other half didn't.
Here's what goes wrong when you slap a naive crontab entry on an OpenClaw agent:
Overlapping runs. Your agent is supposed to run every 30 minutes, but sometimes it takes 45 minutes. Now you've got two instances running simultaneously, potentially doing duplicate work or — worse — conflicting with each other.
Silent failures. The agent hit a rate limit, threw an exception, and exited. Cron doesn't care. No retry. No alert. Your daily report just... didn't happen, and you don't notice until Thursday.
No state between runs. Every scheduled run starts from absolute zero. The agent doesn't know what it did yesterday, so it repeats work, misses context, or makes decisions that contradict previous actions.
Cost explosions. A bad loop in a one-off run costs you a few bucks. A bad loop in a cron job that runs every 15 minutes costs you your entire monthly budget by lunchtime.
OpenClaw gives you the tools to solve all of these. You just have to use them.
The Foundation: OpenClaw's Scheduling Configuration
Before we get into advanced patterns, let's establish how basic scheduling works in OpenClaw. Your agent's schedule lives in the agent configuration file:
```yaml
# agent.openclaw.yaml
agent:
  name: "market-scanner"
  description: "Scans market data and generates daily briefing"
  schedule:
    cron: "0 6 * * 1-5"  # 6am, weekdays only
    timezone: "America/New_York"
  execution:
    timeout: 300      # 5 minute max runtime
    retries: 2
    retry_delay: 60   # seconds between retries
```
This is the starting point. A cron expression, a timezone (always set this explicitly — I cannot stress this enough), a timeout, and basic retry logic. If you're coming from Felix's Starter Pack, this structure should look familiar.
But this is just level one. Let's go deeper.
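If cron syntax still feels opaque, it can help to see what an expression like `"0 6 * * 1-5"` actually matches. Here's a minimal stdlib sketch of a five-field matcher, purely for intuition (OpenClaw's scheduler handles all of this for you); it supports only `*`, ranges, `*/n` steps, and comma lists:

```python
from datetime import datetime, timedelta

def field_matches(field: str, value: int) -> bool:
    """Check one cron field: '*', 'a-b' ranges, '*/n' steps, comma lists."""
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:
                return True
        elif "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def matches(expr: str, dt: datetime) -> bool:
    """True if dt satisfies a 5-field expression: minute hour dom month dow."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            and field_matches(dow, dt.isoweekday() % 7))  # cron convention: 0 = Sunday

def next_run(expr: str, after: datetime) -> datetime:
    """Scan forward minute by minute to the next matching fire time."""
    dt = after.replace(second=0, microsecond=0) + timedelta(minutes=1)
    while not matches(expr, dt):
        dt += timedelta(minutes=1)
    return dt
```

So from a Friday afternoon, `next_run("0 6 * * 1-5", ...)` skips the weekend and lands on Monday at 6:00, which is exactly the behavior the market-scanner config above relies on.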
Pattern 1: Lock-Based Execution Guards
The single most common issue I see with scheduled OpenClaw agents is overlapping runs. The fix is a lock mechanism that prevents a new run from starting while a previous one is still executing.
```yaml
schedule:
  cron: "*/15 * * * *"
  timezone: "UTC"
execution:
  timeout: 600
  lock:
    enabled: true
    strategy: "skip"  # Options: "skip", "queue", "kill_previous"
    lock_ttl: 900     # Auto-release lock after 15 min (safety net)
```
Three strategies, each useful in different situations:
skip — If the previous run is still going, just skip this one entirely. Best for reporting agents where missing one cycle isn't critical.
queue — Wait for the previous run to finish, then execute. Best when every run matters and you don't mind them backing up (within reason).
kill_previous — Terminate the old run and start fresh. Best for agents that scan current data where stale runs are worthless.
In practice, skip is the right choice about 80% of the time. Here's the thing most people miss though: you need the lock_ttl safety net. If your agent crashes hard (like, segfault-level hard), the lock never gets released. Without a TTL, your agent never runs again. I've seen people go weeks without noticing.
You can also implement this programmatically in your agent code:

```python
from openclaw import Agent, ExecutionLock

agent = Agent.load("market-scanner")

with ExecutionLock(agent, strategy="skip", ttl=900) as lock:
    if lock.acquired:
        agent.run()
    else:
        print("Previous run still active. Skipping.")
```
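Under the hood, a TTL lock is just "refuse to start if a fresh lock file exists; reclaim it if it's stale." Here's a stdlib sketch of the skip strategy, not OpenClaw's actual implementation (a production version would also want an atomic create via `os.open` with `O_EXCL` to close the race between check and write):

```python
import json
import os
import time

class TTLLock:
    """File-based execution lock with a TTL safety net (skip strategy)."""

    def __init__(self, path: str, ttl: int):
        self.path, self.ttl = path, ttl
        self.acquired = False

    def __enter__(self):
        if os.path.exists(self.path):
            with open(self.path) as f:
                age = time.time() - json.load(f)["acquired_at"]
            if age < self.ttl:
                return self  # fresh lock: another run is still active, skip
            os.remove(self.path)  # stale lock: previous run died without cleanup
        with open(self.path, "w") as f:
            json.dump({"acquired_at": time.time()}, f)
        self.acquired = True
        return self

    def __exit__(self, *exc):
        if self.acquired:
            os.remove(self.path)
```

The key line is the stale-lock reclaim: without it, one hard crash leaves the lock file behind forever and the agent never runs again, which is exactly the failure mode the `lock_ttl` setting guards against.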
Pattern 2: Stateful Scheduling With Run Memory
This is where OpenClaw really shines compared to cobbling something together with raw cron and a Python script. You can persist state between scheduled runs natively.
```yaml
schedule:
  cron: "0 */4 * * *"  # Every 4 hours
state:
  persistence: true
  store: "local"   # Options: "local", "redis", "s3"
  retention: 30    # Keep 30 days of state history
memory:
  cross_run: true
  context_window: 5  # Remember the last 5 runs
```
With cross_run memory enabled, your agent has access to what it did in previous scheduled executions. This is enormous. Instead of your agent re-scanning an entire dataset every run, it can pick up where it left off:
```python
from openclaw import Agent, RunContext

agent = Agent.load("content-monitor")

@agent.on_schedule
def monitor(ctx: RunContext):
    # Get the last processed timestamp from the previous run
    last_check = ctx.previous_run.state.get("last_processed_timestamp")
    if last_check:
        # Only process new items since last run
        new_items = fetch_items(since=last_check)
    else:
        # First run ever — process last 24 hours
        new_items = fetch_items(since=hours_ago(24))
    results = agent.process(new_items)
    # Save state for next run
    ctx.state["last_processed_timestamp"] = now()
    ctx.state["items_processed"] = len(new_items)
    return results
```
This pattern — incremental processing with state checkpointing — is the single biggest upgrade you can make to a scheduled agent. It's faster, cheaper (fewer tokens), and produces better results because the agent has context.
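If you want to see the mechanics without any framework, the same checkpoint loop fits in a few lines of stdlib Python. This is a sketch, not OpenClaw's state store; the `agent_state.json` path and the `fetch_items`/`process` callables are stand-ins:

```python
import json
import os
from datetime import datetime, timedelta, timezone

STATE_FILE = "agent_state.json"  # stand-in for OpenClaw's "local" state store

def load_state() -> dict:
    """Read the checkpoint left by the previous run, if any."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}

def run_incremental(fetch_items, process):
    """One scheduled run: fetch only items newer than the last checkpoint,
    then persist a new checkpoint for the next run."""
    state = load_state()
    since = state.get("last_processed_timestamp")
    if since is None:
        # First run ever: look back 24 hours
        since = (datetime.now(timezone.utc) - timedelta(hours=24)).isoformat()
    items = fetch_items(since)
    results = process(items)
    state["last_processed_timestamp"] = datetime.now(timezone.utc).isoformat()
    state["items_processed"] = len(items)
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)
    return results
```

Each run reads the previous checkpoint, does only the incremental work, and writes a fresh checkpoint, so a crash between runs costs you at most one interval of reprocessing.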
Pattern 3: Conditional and Dependent Schedules
Sometimes you don't want pure time-based scheduling. You want "run this agent at 9am, but only if the data-collection agent succeeded first." OpenClaw handles this with dependent schedules:
```yaml
# data-collector.openclaw.yaml
agent:
  name: "data-collector"
  schedule:
    cron: "0 8 * * *"
  outputs:
    - name: "daily_data"
      format: "json"

# report-generator.openclaw.yaml
agent:
  name: "report-generator"
  schedule:
    cron: "0 9 * * *"
    depends_on:
      - agent: "data-collector"
        status: "success"
        max_age: 7200  # Must have succeeded within the last 2 hours
    on_dependency_failure: "skip"  # Options: "skip", "retry_later", "run_anyway"
```
The max_age parameter is subtle but important. You don't want your report generator to run based on a three-day-old successful data collection. You want it to confirm that today's collection succeeded. Setting max_age to 7200 seconds (2 hours) means the dependency only counts if the data collector succeeded within that window.
You can chain as many dependencies as you need. I've seen people build five- or six-agent pipelines this way, and it works beautifully as long as you're thoughtful about the on_dependency_failure strategy.
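The freshness check itself is simple to reason about. Here's an illustrative sketch of the logic (the `run_record` shape is a hypothetical stand-in, not OpenClaw's internal format):

```python
import time

def dependency_satisfied(run_record: dict, required_status: str, max_age: int) -> bool:
    """A dependency counts only if its last run hit the required status
    AND finished within max_age seconds, mirroring depends_on + max_age."""
    if run_record.get("status") != required_status:
        return False
    return (time.time() - run_record["finished_at"]) <= max_age

# Hypothetical run record for the data collector, finished one hour ago
collector = {"status": "success", "finished_at": time.time() - 3600}
```

With `max_age: 7200`, the one-hour-old success above satisfies the dependency; a three-day-old success would not, which is the whole point of the window.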
Pattern 4: Cost Guards and Token Budgeting
This pattern has saved me more money than I want to admit. OpenClaw lets you set token and cost budgets directly in the schedule configuration:
```yaml
schedule:
  cron: "*/30 * * * *"
budget:
  per_run:
    max_tokens: 50000
    max_cost_usd: 0.50
  daily:
    max_tokens: 500000
    max_cost_usd: 5.00
  monthly:
    max_cost_usd: 100.00
  on_budget_exceeded: "pause_and_alert"
```
When any budget threshold is hit, the agent pauses and (if you've configured notifications) sends you an alert. The per_run limit is your protection against infinite loops — if the agent enters a bad reasoning cycle and starts burning tokens, it gets cut off. The daily and monthly limits are your sanity checks.
Here's the implementation pattern I use for every scheduled agent:
```python
from openclaw import Agent, BudgetGuard

agent = Agent.load("research-agent")

@agent.on_schedule
def research(ctx):
    guard = BudgetGuard(
        per_run_tokens=50000,
        per_run_cost=0.50,
        on_exceed="checkpoint_and_stop"
    )
    with guard:
        results = agent.run(task=ctx.scheduled_task)
    if guard.exceeded:
        # Save partial work so next run can continue
        ctx.state["partial_results"] = guard.checkpoint_data
        ctx.notify(f"Budget exceeded after {guard.tokens_used} tokens. Partial results saved.")
    return results
```
The checkpoint_and_stop strategy is gold for expensive agents. Instead of just killing the run, it saves whatever work was completed so the next scheduled run can pick up where things left off (using Pattern 2's state management).
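To make the mechanics concrete, here's a stripped-down sketch of what a per-run token guard does; this is an illustration of the concept, not OpenClaw's `BudgetGuard`, and the step list is a hypothetical stand-in for an agent's work items:

```python
class SimpleBudgetGuard:
    """Track token spend for one run and flip `exceeded` at the cap."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.tokens_used = 0
        self.exceeded = False
        self.checkpoint_data = None

    def charge(self, tokens: int) -> bool:
        """Reserve tokens for the next step; False means stop the run."""
        self.tokens_used += tokens
        if self.tokens_used >= self.max_tokens:
            self.exceeded = True
        return not self.exceeded

def run_steps(steps, guard):
    """Process (name, token_cost) steps until done or budget exhausted,
    keeping completed work as a checkpoint for the next scheduled run."""
    done = []
    for name, cost in steps:
        if not guard.charge(cost):
            guard.checkpoint_data = done  # partial results survive the cutoff
            break
        done.append(name)
    return done
```

The important design choice is that hitting the cap saves `checkpoint_data` instead of discarding the run, so the tokens already spent still buy you progress on the next run.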
Pattern 5: Adaptive Scheduling
This is the most advanced pattern and honestly the most fun. Instead of a fixed cron schedule, the agent adjusts its own run frequency based on conditions:
```yaml
schedule:
  cron: "*/60 * * * *"  # Base: every hour
  adaptive:
    enabled: true
    min_interval: 300    # Never more often than every 5 minutes
    max_interval: 86400  # Never less often than once a day
    increase_frequency_when:
      - condition: "high_activity"
        metric: "items_detected"
        threshold: 10
        new_interval: 300  # Speed up to every 5 min
    decrease_frequency_when:
      - condition: "low_activity"
        metric: "items_detected"
        threshold: 0
        consecutive_runs: 3  # 3 runs with zero items
        new_interval: 14400  # Slow down to every 4 hours
```
In code, this looks like:
```python
from openclaw import Agent, AdaptiveScheduler

agent = Agent.load("event-monitor")

@agent.on_schedule
def monitor(ctx):
    events = agent.scan_for_events()
    # Report metrics that drive adaptive scheduling
    ctx.report_metric("items_detected", len(events))
    if events:
        processed = agent.process_events(events)
        ctx.state["last_events"] = processed
        return processed
    return {"status": "no_events"}
```
The agent starts by running every hour. If it detects a bunch of activity, it speeds up to every 5 minutes to capture more data. If things go quiet, it backs off to every 4 hours to save resources. This is incredibly efficient for monitoring use cases where activity is bursty.
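The decision logic behind that config boils down to one function: map the last run's activity metric to a proposed interval, then clamp it to the configured bounds. Here's an illustrative sketch (the thresholds mirror the YAML above; the function itself is not part of OpenClaw's API):

```python
def next_interval(current: int, items_detected: int, zero_streak: int,
                  min_interval: int = 300, max_interval: int = 86400) -> int:
    """Pick the next run interval (seconds) from the last run's metrics,
    mirroring the increase/decrease rules in the adaptive config."""
    if items_detected >= 10:
        proposed = 300       # high activity: speed up to every 5 min
    elif items_detected == 0 and zero_streak >= 3:
        proposed = 14400     # three quiet runs in a row: back off to 4 hours
    else:
        proposed = current   # no rule fired: keep the current cadence
    # Clamp so the agent never runs outside its configured bounds
    return max(min_interval, min(max_interval, proposed))
```

The clamp at the end is what `min_interval` and `max_interval` buy you: even a misconfigured rule can't make the agent hammer an API every second or go silent for a week.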
Pattern 6: Multi-Timezone and Market-Hours Scheduling
A quick but important one. If you're building agents that interact with global markets, APIs with regional rate limits, or teams across time zones:
```yaml
schedule:
  windows:
    - cron: "*/5 * * * *"
      timezone: "America/New_York"
      active_hours: "09:30-16:00"
      active_days: "mon-fri"
      label: "us_market_hours"
    - cron: "*/5 * * * *"
      timezone: "Asia/Tokyo"
      active_hours: "09:00-15:00"
      active_days: "mon-fri"
      label: "jp_market_hours"
    - cron: "0 */6 * * *"
      label: "off_hours_check"
      exclude_windows: ["us_market_hours", "jp_market_hours"]
```
This gives you granular control over when agents are active without writing a bunch of conditional logic in your agent code. The agent scans frequently during market hours, then drops to a lazy check every 6 hours outside of them.
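If you ever need to replicate that window check yourself, the standard library's `zoneinfo` (Python 3.9+, and it needs a tz database available) does the heavy lifting. A minimal sketch, not OpenClaw's implementation:

```python
from datetime import datetime, time as dtime, timezone
from zoneinfo import ZoneInfo

def in_window(now_utc: datetime, tz: str, start: str, end: str,
              weekdays=range(5)) -> bool:
    """True if a UTC instant falls inside an active window defined in a
    local timezone (e.g. US market hours, 09:30-16:00 Mon-Fri)."""
    local = now_utc.astimezone(ZoneInfo(tz))
    if local.weekday() not in weekdays:  # Monday=0 ... Friday=4
        return False
    lo = dtime.fromisoformat(start)
    hi = dtime.fromisoformat(end)
    return lo <= local.time() <= hi
```

Note that the conversion happens per window: the same UTC instant can be inside US market hours and outside Tokyo's, which is exactly why each window carries its own timezone in the config.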
Observability: Knowing What Your Scheduled Agents Actually Did
None of these patterns matter if you can't see what's happening. OpenClaw's built-in tracing gives you per-run logs, but for scheduled agents, you want summaries:
```yaml
notifications:
  on_success:
    channel: "slack"
    webhook: "${SLACK_WEBHOOK_URL}"
    template: "summary"   # Sends a condensed summary, not raw logs
  on_failure:
    channel: "slack"
    webhook: "${SLACK_WEBHOOK_URL}"
    template: "detailed"  # Full error trace
  daily_digest:
    enabled: true
    time: "08:00"
    timezone: "America/New_York"
    include: ["runs", "costs", "errors", "state_changes"]
```
The daily digest is my favorite feature for scheduled agents. Every morning I get a single message: here's what all your agents did overnight, here's what they spent, and here's what went wrong (if anything). It turns an opaque system into something I actually trust.
Putting It All Together
Here's a real-world example combining multiple patterns — a content research agent that runs on a schedule, remembers previous work, respects cost limits, and adapts its frequency:
```yaml
agent:
  name: "content-researcher"
  description: "Monitors industry sources and compiles research briefs"
  schedule:
    cron: "0 */2 * * *"
    timezone: "America/New_York"
    adaptive:
      enabled: true
      min_interval: 1800
      max_interval: 28800
      increase_frequency_when:
        - metric: "new_sources_found"
          threshold: 5
          new_interval: 1800
      decrease_frequency_when:
        - metric: "new_sources_found"
          threshold: 0
          consecutive_runs: 4
          new_interval: 28800
  execution:
    timeout: 600
    retries: 2
    retry_delay: 120
    lock:
      enabled: true
      strategy: "skip"
      lock_ttl: 900
  state:
    persistence: true
    store: "local"
    retention: 90
  memory:
    cross_run: true
    context_window: 10
  budget:
    per_run:
      max_tokens: 75000
      max_cost_usd: 1.00
    daily:
      max_cost_usd: 8.00
    monthly:
      max_cost_usd: 150.00
    on_budget_exceeded: "pause_and_alert"
  notifications:
    on_failure:
      channel: "slack"
      webhook: "${SLACK_WEBHOOK_URL}"
    daily_digest:
      enabled: true
      time: "08:00"
      timezone: "America/New_York"
```
That's a production-grade scheduled agent configuration. It handles overlapping runs, persists state, manages costs, adapts its frequency, and tells you what's going on. Not bad for a YAML file.
Getting Started
If you're reading this and thinking "okay, I need to actually set this up" — here's my recommended path:
1. Start with Felix's OpenClaw Starter Pack. This gets your OpenClaw environment properly configured with sane defaults. The scheduling patterns above all assume a correctly set up base environment, and Felix's pack handles the annoying foundational stuff (credential management, directory structure, logging configuration) so you can focus on the interesting parts.
2. Build one agent that runs on a basic cron schedule. Get it working reliably with a simple cron expression, a timeout, and retries. Don't add complexity yet.
3. Add the lock pattern. This is the single highest-ROI addition and takes about two minutes.
4. Add state persistence and cross-run memory. This is where your agent goes from "script that runs repeatedly" to "system that builds knowledge over time."
5. Add cost guards. Especially if you're running anything more frequently than daily.
6. Graduate to adaptive scheduling once you have enough run data to know what the right frequency thresholds are.
Each step builds on the previous one. Don't try to implement all six patterns on day one — you'll just create a debugging nightmare.
The gap between "I have an AI agent" and "I have an AI agent that runs reliably on a schedule, manages its own costs, remembers what it did, and tells me what's going on" is enormous. But with OpenClaw and these patterns, it's not actually that hard to close. It just requires being intentional about it.
Go build something that runs while you sleep.