Claw Mart
March 20, 2026 · 7 min read · Claw Mart Team

Setting Up Cron Jobs in OpenClaw: Complete Beginner Guide


Let me be honest with you: most people who try to schedule their OpenClaw agents with cron jobs end up in the same frustrated place. The agent works perfectly when you run it manually. You feel great. You set up a cron job to automate it. And then... silence. You wake up three days later to discover it hasn't actually run successfully since the first night, your logs are empty, and you have no idea what went wrong.

I've been there. Multiple times. And after months of running OpenClaw agents on schedules — everything from daily content pipelines to hourly data monitors — I've landed on a setup that actually works reliably. This post is the guide I wish I'd had when I started.

Why Cron Jobs With AI Agents Are Uniquely Painful

Before we get into the how, let's talk about why this is harder than scheduling a normal script.

Traditional cron jobs run deterministic code. A backup script takes roughly the same amount of time every night. A database cleanup runs predictably. You set it, you forget it, and it works.

OpenClaw agents are a different beast entirely. They make LLM calls that can take 30 seconds or 12 minutes depending on load. They use tools and skills that interact with external APIs. They maintain state and memory across runs. They can — and sometimes do — get stuck in reasoning loops that burn through tokens.

Cron doesn't care about any of this. Cron fires, runs your command, and moves on. If the agent was still running from the last invocation? Now you've got two instances fighting over the same resources. If an API key wasn't loaded into the environment? Silent failure. If the agent hit a rate limit and needs to retry? Cron doesn't do retries.

The mismatch between cron's simplicity and the complexity of AI agent workloads is where all the pain comes from. But the good news is that with the right wrapper setup, you can make this work incredibly well inside OpenClaw.
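Since cron won't retry for you, the wrapper has to. Here's a generic retry helper with a fixed delay between attempts; this is my own sketch to fold into a runner script, not an OpenClaw feature:

```shell
#!/bin/sh
# retry N DELAY CMD...: run CMD up to N times, sleeping DELAY seconds
# between attempts. Returns 0 on the first success, 1 if every attempt fails.
retry() {
    attempts="$1"; delay="$2"; shift 2
    n=1
    while ! "$@"; do
        if [ "$n" -ge "$attempts" ]; then
            return 1
        fi
        sleep "$delay"
        n=$((n + 1))
    done
    return 0
}

# Example: retry a flaky command up to 3 times with a 5-second pause
retry 3 5 true && echo "command eventually succeeded"
```

In a real runner you'd wrap the openclaw invocation itself, with a delay long enough for rate limits to reset.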

The Basic Setup: Getting Your First Cron Job Running

Let's start from zero. You have an OpenClaw agent (or "skill chain" in OpenClaw terms) that you've been running manually, and you want it to execute on a schedule.

Step 1: Create a dedicated runner script.

Don't put your OpenClaw command directly in the crontab. Create a shell script that handles environment setup, logging, and error handling. Here's the template I use for every single scheduled agent:

#!/bin/bash
# /home/user/openclaw-jobs/run-agent.sh

# === ENVIRONMENT SETUP ===
export PATH="/usr/local/bin:/usr/bin:/bin:$HOME/.local/bin"
export OPENCLAW_HOME="$HOME/.openclaw"
export OPENCLAW_CONFIG="$HOME/.openclaw/config.yaml"

# Load API keys from a secure env file (NOT hardcoded)
set -a
source "$HOME/.openclaw/secrets.env"
set +a

# === LOGGING ===
LOG_DIR="$HOME/openclaw-jobs/logs"
mkdir -p "$LOG_DIR"
LOG_FILE="$LOG_DIR/agent-$(date +%Y%m%d-%H%M%S).log"

# === LOCK FILE (prevent overlapping runs) ===
LOCK_FILE="/tmp/openclaw-agent.lock"
if [ -f "$LOCK_FILE" ]; then
    LOCK_PID=$(cat "$LOCK_FILE")
    if kill -0 "$LOCK_PID" 2>/dev/null; then
        echo "$(date): Agent still running (PID $LOCK_PID), skipping." >> "$LOG_DIR/skipped.log"
        exit 0
    else
        echo "$(date): Stale lock file found, removing." >> "$LOG_DIR/skipped.log"
        rm -f "$LOCK_FILE"
    fi
fi
echo $$ > "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

# === RUN THE AGENT ===
echo "$(date): Starting agent run" >> "$LOG_FILE"
cd "$HOME/openclaw-jobs"

openclaw run my-daily-agent \
    --timeout 600 \
    --max-tokens 50000 \
    --memory-store ./memory/agent-state.json \
    >> "$LOG_FILE" 2>&1

EXIT_CODE=$?

# === POST-RUN HANDLING ===
if [ $EXIT_CODE -ne 0 ]; then
    echo "$(date): Agent failed with exit code $EXIT_CODE" >> "$LOG_FILE"
    # Send alert (pick your method)
    curl -s -o /dev/null "https://hc-ping.com/YOUR-HEALTHCHECK-UUID/fail"
else
    echo "$(date): Agent completed successfully" >> "$LOG_FILE"
    curl -s -o /dev/null "https://hc-ping.com/YOUR-HEALTHCHECK-UUID"
fi

Let me walk through why each section matters.

Environment setup is the number one reason cron jobs fail silently. When cron runs your script, it uses a minimal environment — not your normal shell environment. Your PATH is different. Your Python virtualenv isn't activated. Your API keys from .bashrc aren't loaded. Explicitly setting everything in the script eliminates this entire category of bugs.
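One cheap defense is to fail loudly at the top of the runner script if anything essential is missing. This helper is my own addition, not part of OpenClaw; the variable names match the runner script above:

```shell
#!/bin/sh
# require_env VAR...: exit immediately with a clear message if any of the
# named variables is unset or empty, instead of failing silently mid-run.
require_env() {
    for name in "$@"; do
        eval "value=\${$name:-}"
        if [ -z "$value" ]; then
            echo "FATAL: \$$name is not set -- check secrets.env and PATH setup" >&2
            exit 1
        fi
    done
}

# Example values; in the real runner these come from secrets.env
OPENCLAW_HOME="${OPENCLAW_HOME:-$HOME/.openclaw}"
OPENAI_API_KEY="${OPENAI_API_KEY:-example-key}"
require_env OPENCLAW_HOME OPENAI_API_KEY
echo "environment OK"
```

Call it right after sourcing secrets.env, so a missing key kills the run before the first LLM call.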

The lock file mechanism prevents overlapping runs. This is critical for OpenClaw agents because their execution time is non-deterministic. If your agent usually takes 2 minutes but occasionally takes 15 due to a complex reasoning chain or slow API response, you don't want a second instance kicking off while the first is still working.
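If you'd rather not hand-roll the lock logic, util-linux ships flock(1), which does the same job atomically:

```shell
# flock takes an exclusive lock on the file for the duration of the command.
# With -n it exits immediately (status 1) instead of waiting when the lock
# is already held -- exactly the skip-if-running behavior we want.
flock -n /tmp/openclaw-agent.lock -c "echo 'lock acquired, agent would run here'"
```

You can even skip the wrapper's lock section entirely and put flock in the crontab entry itself: `flock -n /tmp/openclaw-agent.lock /home/user/openclaw-jobs/run-agent.sh`.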

Healthcheck pings give you observability. I use healthchecks.io (free tier is generous) — it expects a ping within a certain window. If the ping never comes, it sends you an alert. This transforms cron from a "fire and pray" system into something you can actually monitor.
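healthchecks.io's pinging API also accepts a /start ping (to time runs) and a /fail ping (to signal failure explicitly). Here's a small helper of my own to pick the right endpoint from the exit code:

```shell
#!/bin/sh
# hc_url_for_exit BASE_URL EXIT_CODE: return the ping URL healthchecks.io
# expects -- the bare check URL on success, URL/fail on any nonzero exit.
hc_url_for_exit() {
    base="$1"; code="$2"
    if [ "$code" -eq 0 ]; then
        echo "$base"
    else
        echo "$base/fail"
    fi
}

# In the runner, after the agent exits:
#   curl -fsS -m 10 --retry 3 -o /dev/null "$(hc_url_for_exit "$HC_URL" "$EXIT_CODE")"
hc_url_for_exit "https://hc-ping.com/YOUR-HEALTHCHECK-UUID" 0
```

The -m 10 and --retry 3 flags keep a slow or flaky ping from hanging your runner.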

Step 2: Set up the crontab entry.

crontab -e

Then add:

# Run OpenClaw daily agent at 6am every day
0 6 * * * /home/user/openclaw-jobs/run-agent.sh

# Run OpenClaw monitoring agent every 2 hours
0 */2 * * * /home/user/openclaw-jobs/run-monitor.sh
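If cron's five-field syntax is new to you, the fields are minute, hour, day of month, month, and day of week, in that order:

```
# +---------- minute (0-59)
# | +-------- hour (0-23)
# | | +------ day of month (1-31)
# | | | +---- month (1-12)
# | | | | +-- day of week (0-6, Sunday = 0)
# | | | | |
  0 6 * * *  /home/user/openclaw-jobs/run-agent.sh
```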

Make sure your script is executable:

chmod +x /home/user/openclaw-jobs/run-agent.sh

Step 3: Test it properly.

Don't wait until 6am to see if it works. Simulate the cron environment:

env -i HOME="$HOME" SHELL=/bin/bash /home/user/openclaw-jobs/run-agent.sh

The -i flag tells env to start from an empty environment, mimicking what cron does. If your script works with this command, it'll work in cron. If it doesn't, you'll immediately see which environment variables or paths are missing.

Managing State and Memory Across Runs

This is where most people's setups fall apart, and it's where OpenClaw's architecture actually helps you.

AI agents need memory. They need to know what they did last time, what data they've already processed, what context they're working with. Cron is inherently stateless — each invocation is a fresh start.

In OpenClaw, you handle this with memory stores. The key flag in the runner script above is:

--memory-store ./memory/agent-state.json

This tells OpenClaw to persist the agent's working memory to a file between runs. On the next invocation, the agent loads this file and picks up where it left off.

Here's a more detailed OpenClaw config that handles memory properly for scheduled runs:

# openclaw-config.yaml
agent:
  name: daily-content-agent
  schedule_mode: true

memory:
  backend: file
  path: ./memory/agent-state.json
  max_context_window: 4096
  pruning_strategy: relevance

skills:
  - name: web-research
    timeout: 120
  - name: content-draft
    timeout: 300
  - name: publish-review
    timeout: 60

execution:
  max_total_tokens: 50000
  max_retries: 3
  retry_delay: 30
  on_failure: save_state_and_exit

The on_failure: save_state_and_exit setting is crucial for scheduled agents. Instead of crashing and losing all progress, the agent saves its current state so the next run can potentially recover or at least give you diagnostic information about where things went wrong.

For agents that process data incrementally — say, monitoring a feed or processing a queue — you'll also want a checkpoint mechanism:

checkpoint:
  enabled: true
  path: ./checkpoints/
  strategy: after_each_skill

This writes a checkpoint after each skill completes. If the agent crashes halfway through a multi-step pipeline, the next cron invocation can resume from the last successful skill instead of starting over.
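To make the idea concrete, here's a rough shell sketch of skill-level checkpointing. The marker-file format is my invention for illustration, not OpenClaw's actual checkpoint format; the skill names come from the config above:

```shell
#!/bin/sh
# After each skill succeeds, drop a marker file; on the next run, skip any
# skill whose marker already exists, resuming from the first incomplete one.
CKPT_DIR=./checkpoints
mkdir -p "$CKPT_DIR"

run_skill() {
    skill="$1"
    if [ -f "$CKPT_DIR/$skill.done" ]; then
        echo "skipping $skill (already completed)"
        return 0
    fi
    echo "running $skill"
    # ... invoke the skill here; on success, record the checkpoint:
    touch "$CKPT_DIR/$skill.done"
}

for skill in web-research content-draft publish-review; do
    run_skill "$skill" || exit 1
done
```

A completed pipeline should clear its markers at the end, so the next scheduled run starts fresh rather than skipping everything.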

Handling Token Costs and Runaway Agents

Here's a real scenario that has cost people real money: an agent scheduled via cron gets into a reasoning loop. Maybe it's trying to solve an impossible subtask, or an external API is returning unexpected data that keeps the agent retrying. Without guardrails, this agent will burn through tokens until your API billing limit stops it.

In your OpenClaw config, always set hard limits:

execution:
  max_total_tokens: 50000
  max_execution_time: 600  # seconds
  max_skill_iterations: 10
  cost_limit: 2.00  # USD per run

And in your runner script, use the --timeout flag as a belt-and-suspenders approach:

timeout 900 openclaw run my-agent --timeout 600

The coreutils timeout command will kill the entire process after 900 seconds (15 minutes), even if OpenClaw's internal timeout somehow fails. Multiple layers of protection.
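One detail worth handling: timeout signals that it had to kill the command by exiting with status 124, so your post-run handling can tell a hard kill apart from the agent's own error codes. Here sleep 5 stands in for a hung agent:

```shell
#!/bin/sh
# Exit status 124 means timeout(1) killed the command; any other status is
# the command's own exit code, passed through unchanged.
EXIT_CODE=0
timeout 1 sleep 5 || EXIT_CODE=$?
if [ "$EXIT_CODE" -eq 124 ]; then
    echo "hard timeout hit -- agent was killed"
fi
```

In the runner script, you'd branch on 124 to log a "runaway agent" alert instead of a generic failure.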

I also recommend setting up a weekly cost review. Add this to your crontab:

# Weekly cost report every Sunday at 9am
0 9 * * 0 openclaw stats --period 7d --format summary >> /home/user/openclaw-jobs/logs/weekly-cost.log

Securing Your Secrets

Never put API keys directly in your crontab or in your runner script. Use a separate, permission-restricted secrets file:

# Create the secrets file
touch ~/.openclaw/secrets.env
chmod 600 ~/.openclaw/secrets.env
# ~/.openclaw/secrets.env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
OPENCLAW_API_KEY=oc-...

The chmod 600 ensures only your user can read this file. Your runner script sources it with set -a / set +a to export all variables without them appearing in process listings.
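Here's what set -a / set +a actually does, in a self-contained demo with a throwaway file standing in for secrets.env:

```shell
#!/bin/sh
# set -a marks every variable assigned afterwards for export, so sourcing a
# plain KEY=value file exports everything in it; set +a turns that back off.
SECRETS_DEMO=$(mktemp)
printf 'DEMO_API_KEY=abc123\n' > "$SECRETS_DEMO"

set -a
. "$SECRETS_DEMO"
set +a

# A child process can see the variable only because it was exported:
sh -c 'echo "child sees: $DEMO_API_KEY"'
rm -f "$SECRETS_DEMO"
```

Without set -a, the sourced variables would exist in the runner's shell but be invisible to the openclaw process it launches.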

If you're on a shared server or want even better security, consider using your system's secret manager (systemd credentials, HashiCorp Vault, or even a simple encrypted file that gets decrypted at runtime).

Log Rotation and Cleanup

Your scheduled agents will generate logs. A lot of logs. Without rotation, these will eventually fill your disk. Add a cleanup cron job:

# Clean up logs older than 30 days, every Sunday at midnight
0 0 * * 0 find /home/user/openclaw-jobs/logs -name "*.log" -mtime +30 -delete

Or if you want to keep compressed archives:

0 0 * * 0 find /home/user/openclaw-jobs/logs -name "*.log" -mtime +7 -exec gzip {} \;
0 0 1 * * find /home/user/openclaw-jobs/logs -name "*.log.gz" -mtime +90 -delete
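If your system runs logrotate (most Linux distros do), you can hand it this job instead of chaining find commands. A minimal config, assuming the same log path as above:

```
# /etc/logrotate.d/openclaw-jobs
/home/user/openclaw-jobs/logs/*.log {
    weekly
    rotate 12
    compress
    missingok
    notifempty
}
```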

A Real Example: Daily Research Agent

Let me put it all together with a practical example. Say you want an OpenClaw agent that runs every morning to research a topic, compile findings, and save a summary.

The OpenClaw skill config:

agent:
  name: morning-research
  description: "Daily research compilation on AI industry news"
  schedule_mode: true

memory:
  backend: file
  path: ./memory/research-memory.json
  max_context_window: 8192
  pruning_strategy: recency

skills:
  - name: source-scan
    config:
      sources:
        - arxiv-recent
        - hackernews-top
        - industry-feeds
      lookback_hours: 24
    timeout: 180

  - name: synthesize
    config:
      output_format: markdown
      max_length: 2000
    timeout: 240

  - name: save-output
    config:
      path: ./output/daily-research/
      filename_template: "research-{date}.md"
    timeout: 30

execution:
  max_total_tokens: 30000
  max_execution_time: 480
  cost_limit: 1.50
  on_failure: save_state_and_exit

The runner script follows the same template from above, just pointed at this config.

The crontab entry:

0 7 * * * /home/user/openclaw-jobs/run-research.sh

Every morning at 7am, you wake up to a fresh research summary. No manual work. No babysitting. And with the lock file, healthcheck, and cost limits in place, you can trust it to run reliably without draining your wallet.

Skip the Manual Setup: Felix's OpenClaw Starter Pack

Here's the honest truth: setting all of this up from scratch — the runner scripts, the proper config structure, the memory management, the monitoring — takes a solid afternoon of work. And you'll probably debug edge cases for another week.

If you'd rather skip straight to the "it just works" part, Felix's OpenClaw Starter Pack on Claw Mart includes pre-configured skills with cron-ready runner scripts, proper logging, lock files, and healthcheck integration already baked in. It's $29 and saves you the trial-and-error period entirely. I've recommended it to a few people who were frustrated with their initial setup and the response has been universally "why didn't I just start with this." It doesn't lock you into anything — you get the actual config files and scripts, so you can customize everything once you understand how the pieces fit together.

What I'd Actually Recommend

If you're just getting started with scheduling OpenClaw agents, here's my honest priority list:

  1. Start with one agent, one schedule. Don't try to orchestrate five agents on day one. Get one working reliably.
  2. Always use a runner script wrapper. Never put raw commands in crontab.
  3. Set up healthchecks from day one. Not after your first silent failure. Before it.
  4. Set hard token and cost limits. You will thank yourself later.
  5. Use lock files. Overlapping agent runs cause subtle, maddening bugs.
  6. Test in a stripped environment. Use env -i before trusting cron to handle it.

Cron is a decades-old tool. It's beautifully simple and ruthlessly minimal. With the right wrapper around it, it's all you need to run OpenClaw agents on a reliable schedule. The key is acknowledging that the complexity isn't in the scheduling — it's in handling the non-deterministic, stateful, potentially expensive nature of AI agent workloads. Handle that properly, and cron does the rest.

Now go automate something.
