March 20, 2026 · 8 min read · Claw Mart Team

How to Set Up Reliable Cron Jobs in OpenClaw

Let me be real with you: if your OpenClaw cron jobs aren't working, you're not alone, and it's probably not your fault. This is one of the most common issues people run into when they move from "I got my agent working locally" to "I want this thing to run automatically every day at 6 AM." The gap between those two states is deceptively large, and cron is where dreams go to die if you don't set it up correctly.

I've been running OpenClaw agents on schedules for months now — everything from daily content pipelines to hourly data monitors — and I've hit every single failure mode along the way. This post is the guide I wish I had when I started. We're going to walk through exactly why your cron jobs are failing, how to fix them, and how to set up a scheduling system that's actually reliable enough to trust with autonomous agent execution.

The Core Problem: Cron Doesn't Know About Your Environment

Here's what happens when most people set up their first OpenClaw cron job. They get their agent running perfectly in a terminal session. Everything works — skills fire, tools connect, the LLM responds, outputs land where they should. So they open crontab -e and add something like:

0 6 * * * python /home/user/my-agent/run.py

Then they wait until 6 AM. Nothing happens. Or worse, something happens but it's broken in a way that produces zero useful error output.

The root cause is almost always the same: cron runs in a completely stripped-down shell environment. It doesn't source your .bashrc. It doesn't activate your virtual environment. It doesn't load your .env file. It doesn't even have the same PATH you're used to. When cron executes your command, it's working with something like PATH=/usr/bin:/bin, SHELL=/bin/sh, HOME, and little else.
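You can reproduce this stripped environment interactively instead of waiting for the next scheduled run. Here's a sketch using env -i; the variables set below are typical of cron, but check your own system's defaults:

```shell
# Simulate cron's minimal environment: env -i clears everything,
# then we set only what cron typically provides.
env -i HOME="$HOME" PATH=/usr/bin:/bin SHELL=/bin/sh sh -c '
    echo "PATH is: $PATH"
    command -v python || echo "python not found on this PATH"
'
```

If your agent's entry point fails under this command the same way it fails under cron, you've confirmed the environment is the culprit before touching your crontab.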

For a basic Python script, this is annoying but manageable. For an OpenClaw agent — which depends on API keys, model configurations, skill definitions, tool credentials, and potentially a vector store or database connection — it's catastrophic. Your agent can't find its dependencies, can't authenticate with anything, and fails silently into the void.

Step 1: Use Absolute Paths for Everything

This is non-negotiable. Every single path in your cron job needs to be absolute. Not just the script path — the Python interpreter path too.

Wrong:

0 6 * * * python run.py

Wrong (but closer):

0 6 * * * python /home/user/my-agent/run.py

Right:

0 6 * * * /home/user/my-agent/.venv/bin/python /home/user/my-agent/run.py

Find your exact Python path by running this with your virtual environment activated:

which python

That output is what goes in your cron job. If you're using a conda environment, it'll look something like /home/user/miniconda3/envs/openclaw/bin/python. Use the full path. Always.

Step 2: Load Your Environment Variables Properly

Your OpenClaw agent almost certainly needs API keys and configuration values. If you've been storing these in a .env file (which you should be), cron won't load them automatically. You have three good options:

Option A: Source the .env File Inline

0 6 * * * cd /home/user/my-agent && set -a && source .env && set +a && /home/user/my-agent/.venv/bin/python run.py

The set -a tells the shell to export every variable that gets defined. source .env reads the file. set +a turns off auto-export. This works but it's getting long and ugly.
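If the set -a mechanics are unfamiliar, here's a minimal demonstration (FOO and BAR are throwaway names, standing in for values your .env would define):

```shell
set -a
FOO=visible          # defined while allexport is on, so it's exported
set +a
BAR=invisible        # defined after set +a, so it stays shell-local

# Only FOO survives into a child process:
sh -c 'echo "FOO=$FOO BAR=$BAR"'
# prints: FOO=visible BAR=
```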

Option B: Use a Wrapper Script (Recommended)

Create a file called run_cron.sh:

#!/bin/bash
set -euo pipefail

# Navigate to project directory
cd /home/user/my-agent

# Load environment variables
set -a
source .env
set +a

# Activate virtual environment
source .venv/bin/activate

# Run the agent
python run.py >> /home/user/my-agent/logs/cron.log 2>&1

Make it executable:

chmod +x /home/user/my-agent/run_cron.sh

Then your cron entry becomes clean and simple:

0 6 * * * /home/user/my-agent/run_cron.sh

This is the approach I use for every single scheduled OpenClaw agent. The wrapper script is where all the environment setup lives, and it gives you one place to add logging, error handling, and any other pre-flight checks.

Option C: Load Env Vars Inside Your Python Script

If you're using python-dotenv, you can handle this in code:

import os
from pathlib import Path
from dotenv import load_dotenv

# Explicitly load .env from the project directory
project_dir = Path(__file__).resolve().parent
load_dotenv(project_dir / ".env")

# Now your OpenClaw config will find its keys
# ... rest of your agent code

This works, but I prefer the wrapper script approach because it keeps environment concerns out of your application code and handles things like the working directory and virtualenv activation too.
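For completeness: if you'd rather not add the python-dotenv dependency at all, a minimal stdlib-only loader is easy to sketch. This handles plain KEY=VALUE lines and comments, but none of dotenv's quoting or interpolation rules:

```python
import os
from pathlib import Path

def load_env_file(path: Path) -> None:
    """Load KEY=VALUE lines into os.environ without overriding existing values."""
    for raw in path.read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```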

Step 3: Set Up Actual Logging

The default behavior of cron is to send output to local system mail. Almost nobody checks this. When your OpenClaw agent fails at 3 AM because it hit a rate limit on the second step of a five-step reasoning chain, you need to know exactly what happened.

At minimum, redirect stdout and stderr to a log file:

python run.py >> /home/user/my-agent/logs/cron.log 2>&1

But for OpenClaw agents specifically, you should go further. Add structured logging inside your agent script:

import logging
import os
import sys
from datetime import datetime
from pathlib import Path

# Set up logging
log_dir = Path(__file__).resolve().parent / "logs"
log_dir.mkdir(exist_ok=True)

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[
        logging.FileHandler(log_dir / f"agent_{datetime.now().strftime('%Y%m%d_%H%M%S')}.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

logger.info("Agent run starting")
logger.info(f"Working directory: {os.getcwd()}")
logger.info(f"Python executable: {sys.executable}")
logger.info(f"OPENCLAW_API_KEY present: {'OPENCLAW_API_KEY' in os.environ}")

That last line is gold for debugging. Log whether your critical environment variables are present (not their values — just whether they exist). This saves you hours of wondering "is this an auth problem or something else?"

Step 4: Prevent Overlapping Runs

This is the one that bites people hard with AI agents specifically. Traditional cron jobs run a script that takes 2 seconds. AI agents are non-deterministic — sometimes your OpenClaw agent finishes in 30 seconds, sometimes it takes 10 minutes because the LLM decided to use more tools or the API was slow.

If you're running a job every 15 minutes and one run takes 20 minutes, you now have two instances of the same agent running simultaneously. This can corrupt output files, double-post content, make duplicate API calls, or create race conditions with shared state.

Use a lock file. Add this to your wrapper script:

#!/bin/bash
set -euo pipefail

LOCKFILE="/tmp/openclaw-agent.lock"

# Check if already running
if [ -f "$LOCKFILE" ]; then
    # Check if the process is actually still alive
    if kill -0 "$(cat "$LOCKFILE")" 2>/dev/null; then
        echo "Agent already running (PID $(cat "$LOCKFILE")). Skipping."
        exit 0
    else
        echo "Stale lock file found. Removing."
        rm -f "$LOCKFILE"
    fi
fi

# Create lock file with our PID
echo $$ > "$LOCKFILE"

# Ensure lock file gets removed on exit
trap 'rm -f "$LOCKFILE"' EXIT

cd /home/user/my-agent
set -a && source .env && set +a
source .venv/bin/activate
python run.py >> logs/cron.log 2>&1

Alternatively, if you're on Linux, you can use flock:

0 6 * * * /usr/bin/flock -n /tmp/openclaw-agent.lock /home/user/my-agent/run_cron.sh

The -n flag means "don't wait, just skip if locked." Simple and effective.
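If you'd rather keep locking inside the agent itself instead of the wrapper script, the same flock primitive is available from Python's standard fcntl module (Unix only; the lock path here is an arbitrary choice):

```python
import fcntl
import sys

# Open (or create) the lock file and try to take an exclusive,
# non-blocking lock on it. The lock is released automatically
# when the process exits and the file descriptor closes.
lock_handle = open("/tmp/openclaw-agent.py.lock", "w")
try:
    fcntl.flock(lock_handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    print("Another instance is already running; skipping this run.")
    sys.exit(0)

# ... run the agent here; the lock is held for the process lifetime ...
```

Note that the handle must stay referenced for as long as the lock should be held — closing the file releases it.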

Step 5: Add Retry Logic for LLM Flakiness

LLM APIs fail. Rate limits hit. Connections time out. This is the reality of building on top of language models. Your OpenClaw agent needs to handle this gracefully, especially when running unattended.

Build retry logic into your agent runner:

import time
import logging

logger = logging.getLogger(__name__)

MAX_RETRIES = 3
RETRY_DELAY_BASE = 30  # seconds

def run_agent():
    """Your actual OpenClaw agent execution logic."""
    # ... your agent code here ...
    pass

def run_with_retries():
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            logger.info(f"Attempt {attempt}/{MAX_RETRIES}")
            run_agent()
            logger.info("Agent completed successfully")
            return True
        except Exception as e:
            logger.error(f"Attempt {attempt} failed: {e}")
            if attempt < MAX_RETRIES:
                delay = RETRY_DELAY_BASE * (2 ** (attempt - 1))  # Exponential backoff
                logger.info(f"Retrying in {delay} seconds...")
                time.sleep(delay)
            else:
                logger.critical(f"All {MAX_RETRIES} attempts failed. Giving up.")
                raise

if __name__ == "__main__":
    run_with_retries()

Exponential backoff is essential here. If you're hitting a rate limit, hammering the API immediately again is the worst thing you can do. Wait 30 seconds, then 60, then 120. Most transient failures resolve within that window.
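To make the schedule concrete, the formula in run_with_retries produces exactly that sequence:

```python
RETRY_DELAY_BASE = 30
MAX_RETRIES = 3

# Delay that would follow each failed attempt, per the formula above
delays = [RETRY_DELAY_BASE * 2 ** (attempt - 1) for attempt in range(1, MAX_RETRIES + 1)]
print(delays)  # [30, 60, 120]
```

Only the first MAX_RETRIES - 1 delays are ever slept, since the final attempt either succeeds or raises, so the worst case adds 90 seconds of waiting to the run.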

Step 6: Handle Timezone Sanity

Cron uses your system's local time by default. If your server is in UTC but you're thinking in Eastern time, your "6 AM" job actually fires at 1 AM or 2 AM Eastern, depending on daylight saving. This seems minor until your daily news summarizer runs before the news exists.

Set the timezone explicitly in your crontab (note that CRON_TZ is supported by cronie and some other implementations, but not by every cron — check man 5 crontab on your system):

CRON_TZ=America/New_York
0 6 * * * /home/user/my-agent/run_cron.sh

Or, better yet, keep everything in UTC on your server and do the conversion in your head. Less magic, fewer surprises.
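If you do keep the server on UTC, Python's standard zoneinfo module makes the mental conversion checkable — a quick sketch of "what is 6 AM Eastern in UTC on a given date":

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 6 AM Eastern on a date when daylight saving is in effect (EDT = UTC-4)
local = datetime(2026, 3, 20, 6, 0, tzinfo=ZoneInfo("America/New_York"))
utc = local.astimezone(ZoneInfo("UTC"))
print(utc)  # 2026-03-20 10:00:00+00:00
```

Run the same conversion for a January date and you'll get 11:00 UTC instead — which is exactly the seasonal drift described above.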

Step 7: Add Health Check Notifications

For agents running unattended, you need to know when things break. Don't wait until you notice the output is missing three days later.

A simple approach is to add a notification at the end of your wrapper script:

# At the end of run_cron.sh. Note: with set -e active, a failing
# python command would abort the script before this check ever runs,
# so capture the exit code with || instead of reading $? afterwards.
EXIT_CODE=0
python run.py >> logs/cron.log 2>&1 || EXIT_CODE=$?
if [ "$EXIT_CODE" -ne 0 ]; then
    curl -s -X POST "https://your-webhook-url" \
        -H "Content-Type: application/json" \
        -d "{\"text\": \"OpenClaw agent failed with exit code $EXIT_CODE at $(date)\"}"
fi

You can point this at a Slack webhook, Discord webhook, Telegram bot, or any notification service you use. The point is: your agent should scream when it fails, not suffer in silence.
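If you'd rather send the alert from inside the Python runner (say, from the except branch of the retry loop), the same webhook call needs nothing beyond the standard library. The URL and the {"text": ...} payload shape are placeholders for whatever service you point it at:

```python
import json
import urllib.request

def notify(webhook_url: str, text: str) -> None:
    """POST a JSON message to a Slack/Discord-style incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()  # drain the response; most webhook services return 200/204
```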

The Complete Setup

Here's what a production-ready OpenClaw cron setup looks like, putting it all together:

run_cron.sh:

#!/bin/bash
set -euo pipefail

LOCKFILE="/tmp/openclaw-daily-agent.lock"
PROJECT_DIR="/home/user/my-agent"
LOG_DIR="$PROJECT_DIR/logs"

# Ensure log directory exists
mkdir -p "$LOG_DIR"

# Hold the lock on file descriptor 200 for the rest of this script;
# skip this run if another instance already holds it
exec 200>"$LOCKFILE"
if ! flock -n 200; then
    echo "$(date): Agent already running. Skipping." >> "$LOG_DIR/skipped.log"
    exit 0
fi

cd "$PROJECT_DIR"
set -a && source .env && set +a
source .venv/bin/activate

# Capture the exit code with || so set -e doesn't abort before the alert
EXIT_CODE=0
python run.py >> "$LOG_DIR/cron.log" 2>&1 || EXIT_CODE=$?

if [ "$EXIT_CODE" -ne 0 ]; then
    curl -s -X POST "https://your-webhook-url" \
        -H "Content-Type: application/json" \
        -d "{\"text\": \"OpenClaw agent failed with exit code $EXIT_CODE at $(date)\"}" || true
fi

exit "$EXIT_CODE"

Crontab:

CRON_TZ=America/New_York
0 6 * * * /home/user/my-agent/run_cron.sh

Log rotation (add to /etc/logrotate.d/openclaw-agent):

/home/user/my-agent/logs/*.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
}

Skip the Manual Setup: Felix's OpenClaw Starter Pack

If you've read this far and thought "this is a lot of infrastructure work just to run an agent on a schedule" — you're right. It is. And honestly, most of the time you're better off not reinventing this wheel.

Felix's OpenClaw Starter Pack on Claw Mart is $29 and includes pre-configured skills that handle exactly this problem — scheduling, environment management, retry logic, and logging are all baked in. If you don't want to manually wire up wrapper scripts, lock files, and log rotation, the Starter Pack gives you a working foundation out of the box. I've recommended it to a few people who were struggling with exactly these cron issues, and every one of them was up and running the same day. It's the fastest way to go from "my agent works locally" to "my agent runs reliably on a schedule" without the pain described in this post.

Debugging Checklist

When your cron job still isn't working after all of the above, run through this checklist:

  1. Can you run the wrapper script manually? Execute bash /home/user/my-agent/run_cron.sh from a fresh terminal. If it fails here, the problem is in your script, not cron.

  2. Is cron actually running your job? Check grep CRON /var/log/syslog (Ubuntu/Debian) or journalctl -u cron (the unit may be named crond on RHEL-family systems) to confirm the job is being triggered.

  3. Are environment variables loading? Add env >> /tmp/cron-env-debug.txt as a temporary cron entry to see exactly what environment cron provides.

  4. Is the right Python running? Add which python && python --version to the top of your wrapper script to verify.

  5. Check permissions. Can the cron user read your project directory, write to the log directory, and execute the scripts?

  6. Check disk space. Agents that write outputs can fill up a disk faster than you expect, especially if they're generating embeddings or caching model responses.
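Much of that checklist can be automated as a preflight function at the top of your runner. This is a sketch — the variable names, required paths, and disk threshold are assumptions to adapt to your own setup:

```python
import os
import shutil
from pathlib import Path

# Adjust these to your own agent — the names here are illustrative.
REQUIRED_ENV_VARS = ["OPENCLAW_API_KEY"]
REQUIRED_PATHS = [Path(".env"), Path("logs")]
MIN_FREE_BYTES = 500 * 1024 * 1024  # 500 MB

def preflight() -> list[str]:
    """Return a list of problems; an empty list means safe to run."""
    problems = []
    for var in REQUIRED_ENV_VARS:
        if var not in os.environ:
            problems.append(f"missing environment variable: {var}")
    for path in REQUIRED_PATHS:
        if not path.exists():
            problems.append(f"missing path: {path}")
    if shutil.disk_usage(".").free < MIN_FREE_BYTES:
        problems.append("less than 500 MB of free disk space")
    return problems
```

Call it first thing in main, log each problem, and exit nonzero if the list is non-empty — that turns a silent 3 AM failure into a one-line diagnosis in your log.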

What's Next

Once you have reliable cron-based scheduling working, you're in a good position. But if you find yourself needing more complex workflows — agents that trigger other agents, conditional scheduling based on previous results, or multi-step pipelines with different retry strategies — you'll eventually want to look at more sophisticated orchestration.

For most people, though, a well-configured cron job with proper environment handling, logging, lock files, and retry logic covers 90% of what you need. OpenClaw agents are powerful enough to handle complex autonomous tasks; the scheduling layer just needs to get out of the way and let them run.

Get the basics right first. Make sure your agent runs reliably once before you worry about running it elegantly. The wrapper script approach outlined here has been rock-solid for me across a dozen different agents, and it'll serve you well until your needs outgrow it.

Now go fix that cron job.
