Debugging Cron Jobs in OpenClaw That Won't Run

If you're reading this, there's a solid chance you've been staring at your terminal for the last forty-five minutes wondering why your OpenClaw agent isn't doing its job. Literally. The cron job you set up is sitting there in your crontab, looking perfectly fine, and yet — nothing. No logs, no output, no errors. Just silence. Your agent runs beautifully when you execute it manually, but the moment you hand it over to cron, it acts like it's never heard of your API keys, your Python packages, or your entire project directory.
I've been there. Multiple times. And I can tell you right now: this is one of the most common and most fixable problems in the OpenClaw ecosystem. The issue almost never lies with your agent logic. It's almost always the environment. Let me walk you through every single thing I check when an OpenClaw cron job refuses to run, so you can stop guessing and start fixing.
The Core Problem: Cron Doesn't Know Who You Are
Here's the thing most people don't realize until it bites them: cron doesn't load your shell profile. It doesn't source your .bashrc, your .zshrc, your .env file, or anything else you're used to having available when you open a terminal. Cron runs with an absurdly minimal environment. We're talking PATH=/usr/bin:/bin and basically nothing else.
That means:
- Your OPENAI_API_KEY? Gone.
- Your ANTHROPIC_API_KEY? Doesn't exist.
- Your virtual environment with all your OpenClaw dependencies? Not activated.
- That .env file you carefully set up in your project root? Cron has never heard of it.
- Relative paths in your script? They resolve to wherever cron's working directory is, which is probably not where you think.
This is why your agent works when you run python run_agent.py from your terminal but does absolutely nothing under cron. Your terminal session has all the context. Cron has almost none.
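A cheap way to turn that silent failure into a loud one is to have the agent verify its own environment at startup. Here's a minimal sketch (the variable names are illustrative; list whatever your agent actually reads):

```python
import os
import sys

# Illustrative names; substitute whatever your agent actually requires.
REQUIRED_VARS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]

def check_environment(required=REQUIRED_VARS):
    """Return the names of required variables missing from the environment."""
    return [name for name in required if not os.environ.get(name)]

missing = check_environment()
if missing:
    # Under cron, this lands in your log file thanks to 2>&1.
    # In a real agent you'd follow this with sys.exit(1).
    print(f"FATAL: missing env vars: {', '.join(missing)}", file=sys.stderr)
```

Run this at the top of run_agent.py and a missing key produces one unambiguous log line instead of a mystery failure three calls deep.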
Step Zero: Confirm Cron Itself Is Actually Working
Before you debug anything OpenClaw-specific, make sure cron is even running on your system. I know this sounds obvious, but I've seen people spend hours debugging agent code when the cron daemon itself was stopped.
# Check if cron is running
sudo systemctl status cron
# or on some systems:
sudo systemctl status crond
If it's not active, start it:
sudo systemctl start cron
sudo systemctl enable cron
Next, verify that cron can execute anything at all. Add this line to your crontab (crontab -e):
* * * * * /bin/echo "cron is alive" >> /tmp/cron-test.log 2>&1
Wait two minutes. Check /tmp/cron-test.log. If you see "cron is alive" appearing every minute, your cron daemon is fine and the problem is specific to your OpenClaw job. Remove that test line and move on.
If you don't see anything in that log file, your cron setup is fundamentally broken — fix that first before touching anything else. Check /var/log/syslog or /var/log/cron for clues.
Step One: Use Absolute Paths for Everything
This is non-negotiable. Every single path in your cron line needs to be absolute. Not relative. Not relying on ~. Absolute.
Here's what a bad cron entry looks like:
0 */6 * * * cd my-project && python run_agent.py
Here's what a good one looks like:
0 */6 * * * cd /home/youruser/my-project && /home/youruser/my-project/.venv/bin/python /home/youruser/my-project/run_agent.py >> /home/youruser/my-project/logs/agent.log 2>&1
Yes, it's verbose. Yes, it's ugly. It also works. Find the absolute path to your Python binary inside your virtual environment:
which python
# or if you're using a venv:
echo $VIRTUAL_ENV/bin/python
Use that full path in your cron entry. Same goes for the OpenClaw CLI if you're using it directly:
which openclaw
Whatever path that returns, that's what goes in your crontab.
Step Two: Load Your Environment Variables Explicitly
This is the single biggest source of silent failures for OpenClaw agents running under cron. Your agent calls out to an LLM provider, OpenClaw needs the API key, the key doesn't exist in cron's environment, and the whole thing dies silently because you didn't redirect stderr.
You have a few options here, and I'd recommend combining them.
Option A: Source your .env file directly in the cron line.
0 */6 * * * set -a && . /home/youruser/my-project/.env && set +a && cd /home/youruser/my-project && /home/youruser/my-project/.venv/bin/python run_agent.py >> /home/youruser/my-project/logs/agent.log 2>&1
The set -a and set +a ensure that every variable in your .env file gets exported to the environment. Without set -a, sourcing the file defines the variables but doesn't export them to child processes, which means your Python script still won't see them. Note the . instead of source: cron runs jobs with /bin/sh by default, and source is a bash-ism that plain sh may not understand. (Alternatively, set SHELL=/bin/bash at the top of your crontab and keep source.)
Option B: Declare variables directly in the crontab.
You can set environment variables at the top of your crontab before any job lines:
OPENAI_API_KEY=sk-your-key-here
OPENCLAW_HOME=/home/youruser/my-project
PATH=/home/youruser/.local/bin:/usr/local/bin:/usr/bin:/bin
0 */6 * * * cd $OPENCLAW_HOME && $OPENCLAW_HOME/.venv/bin/python run_agent.py >> $OPENCLAW_HOME/logs/agent.log 2>&1
This works but has the downside of putting secrets directly in your crontab file. Fine for a personal VPS, less ideal for shared environments.
Option C: Use a wrapper script.
This is what I actually recommend for any non-trivial OpenClaw setup. Create a shell script that handles all the environment setup:
#!/bin/bash
# /home/youruser/my-project/run_cron.sh
# Load environment variables
set -a
source /home/youruser/my-project/.env
set +a
# Activate virtual environment
source /home/youruser/my-project/.venv/bin/activate
# Change to project directory
cd /home/youruser/my-project
# Run the agent
python run_agent.py
# Exit with the agent's exit code
exit $?
Make it executable:
chmod +x /home/youruser/my-project/run_cron.sh
Then your cron entry becomes clean and simple:
0 */6 * * * /home/youruser/my-project/run_cron.sh >> /home/youruser/my-project/logs/agent.log 2>&1
This wrapper script approach is golden because you can test it independently. Just run /home/youruser/my-project/run_cron.sh from a minimal shell (not your normal terminal — try env -i /bin/bash --norc --noprofile first) to simulate what cron will do.
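A fourth option, if you'd rather not depend on the shell at all: load the .env file from inside the agent itself. The python-dotenv package does this robustly, but the core idea fits in a few lines (this sketch handles only simple KEY=value lines and comments):

```python
import os

def load_env_file(path):
    """Minimal .env loader: KEY=value lines, '#' comments skipped,
    already-set variables win over file values."""
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"').strip("'"))
```

Call load_env_file("/home/youruser/my-project/.env") before anything touches the API keys, and the same code works identically under cron and in your terminal.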
Step Three: Always Redirect Output
I cannot stress this enough: always add >> /path/to/logfile.log 2>&1 to your cron lines. Without this, you are flying blind. Cron will try to email output to the local user, and if your system doesn't have a mail transfer agent configured (most VPS setups don't), that output just vanishes into the void.
The 2>&1 part is critical: it redirects stderr to the same file as stdout. Without it, error messages (which are exactly what you need when debugging) go nowhere.
If you want to get fancy, timestamp your log entries:
#!/bin/bash
# In your wrapper script
exec > >(while read line; do echo "$(date '+%Y-%m-%d %H:%M:%S') $line"; done) 2>&1
Add that near the top of your run_cron.sh and every line of output will get a timestamp. This is incredibly useful when you're trying to figure out if your job ran, when it ran, and where it failed.
Step Four: Check Permissions and User Context
Your crontab belongs to a specific user. Run crontab -l to see jobs for the current user. If you set up the job as root but your project files are owned by youruser (or vice versa), things will break.
Common permission issues:
- The log file directory doesn't exist or isn't writable by the cron user
- The .env file isn't readable by the cron user
- The virtual environment was created by a different user
- SQLite databases or vector stores created by your agent have wrong ownership
Check all of this:
ls -la /home/youruser/my-project/.env
ls -la /home/youruser/my-project/.venv/bin/python
ls -la /home/youruser/my-project/logs/
Make sure the user whose crontab this is in has read/execute access to everything in the chain.
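If you want to script that audit instead of eyeballing ls output, a few lines of Python will report what the cron user can actually see (run it as the same user the crontab belongs to; the paths below are examples):

```python
import os

def report_access(paths):
    """Map each path to a short access summary for the current user."""
    results = {}
    for p in paths:
        if not os.path.exists(p):
            results[p] = "missing"
        elif not os.access(p, os.R_OK):
            results[p] = "not readable"
        else:
            results[p] = "ok"
    return results

# Example paths; substitute your real project files.
for path, status in report_access([
    "/home/youruser/my-project/.env",
    "/home/youruser/my-project/.venv/bin/python",
    "/home/youruser/my-project/logs",
]).items():
    print(f"{path}: {status}")
```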
Step Five: Handle OpenClaw-Specific Gotchas
Now that we've handled the generic cron issues, let's talk about things specific to running OpenClaw agents on a schedule.
State persistence between runs. If your OpenClaw agent maintains state (conversation history, memory, embeddings, a local knowledge base), make sure the paths to those state files are absolute and that the cron user can read and write to them. I've seen agents that work fine on first run but fail on subsequent cron executions because the state file gets created with wrong permissions.
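One pattern that eliminates most of these state-path problems: derive every state path from a single fixed project root instead of the working directory. A sketch (the layout is hypothetical; in a real script you'd anchor on Path(__file__).resolve().parent):

```python
from pathlib import Path

# Stand-in root for this demo; in run_agent.py you would use:
#   PROJECT_ROOT = Path(__file__).resolve().parent
PROJECT_ROOT = Path("/tmp/openclaw-demo")
STATE_DIR = PROJECT_ROOT / "state"       # hypothetical layout
MEMORY_FILE = STATE_DIR / "memory.json"  # hypothetical state file

def ensure_state_dir(path=STATE_DIR):
    """Create the state directory (and parents) if it doesn't exist."""
    path.mkdir(parents=True, exist_ok=True)
    return path
```

Because every path is absolute and derived from one root, the agent reads and writes the same files whether cron starts it from / or you start it from your shell.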
Rate limiting and retries. LLM API calls fail. It's not a question of if, it's when. If your OpenClaw agent doesn't handle transient errors gracefully, a single rate limit response will kill your cron job. Build retry logic into your agent, or at minimum, wrap your main execution in a try/except that logs the error and exits cleanly:
import time
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

def run_agent():
    # Your OpenClaw agent logic here
    pass

MAX_RETRIES = 3

for attempt in range(MAX_RETRIES):
    try:
        logger.info(f"Starting agent run (attempt {attempt + 1}/{MAX_RETRIES})")
        run_agent()
        logger.info("Agent run completed successfully")
        break
    except Exception as e:
        logger.error(f"Agent run failed: {e}")
        if attempt < MAX_RETRIES - 1:
            wait_time = 2 ** attempt * 30  # 30s, 60s, 120s
            logger.info(f"Retrying in {wait_time} seconds...")
            time.sleep(wait_time)
        else:
            logger.critical("All retry attempts exhausted. Giving up.")
            raise
Long-running agents. Some OpenClaw agents take a while — pulling research, synthesizing information across multiple LLM calls, running tool chains. If your agent takes 20 minutes to complete and your cron runs every 15 minutes, you'll get overlapping executions. Use a lockfile to prevent this:
import fcntl
import sys

lock_file = open('/tmp/openclaw-agent.lock', 'w')
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    print("Another instance is already running. Exiting.")
    sys.exit(0)

# Your agent code here
Or use flock directly in your cron line:
0 */6 * * * /usr/bin/flock -n /tmp/openclaw-agent.lock /home/youruser/my-project/run_cron.sh >> /home/youruser/my-project/logs/agent.log 2>&1
The Full Debugging Checklist
When your OpenClaw cron job won't run, go through this in order:
- Is cron running? systemctl status cron
- Can cron run anything? Test with a simple echo job.
- Are all paths absolute? Python binary, script, project directory, log files.
- Are environment variables loaded? API keys, config paths, everything.
- Is output being captured? >> logfile.log 2>&1 on every job.
- Do permissions check out? Right user, readable files, writable directories.
- Does the wrapper script work in a clean shell? Test with env -i bash.
- Are you handling errors and retries? LLM APIs will fail eventually.
- Is there a lock to prevent overlapping runs? Use flock for long agents.
- Check the system logs. grep CRON /var/log/syslog shows cron execution history.
Starting Fresh with OpenClaw the Right Way
If you're just getting into OpenClaw and want to avoid this entire class of headaches from the start, the best move is to start with a known-good setup rather than piecing things together from scattered docs and forum posts. Felix's OpenClaw Starter Pack is the resource I point people to because it gives you a solid foundation — proper project structure, environment management patterns, and the kind of configuration defaults that prevent exactly the problems we've been talking about. Starting with a tested template is always faster than debugging a broken one you stitched together at midnight.
Building reliable scheduled agents is genuinely one of the most useful things you can do with OpenClaw. A well-configured cron job means your agent works while you sleep — pulling research, monitoring sources, updating dashboards, whatever you've built it to do. But "well-configured" is the operative phrase. The agent logic is the fun part. The environment setup is the boring part that makes everything else possible.
What to Do Next
If you're currently stuck, start with the wrapper script approach. It solves about 80% of cron-related OpenClaw failures in one shot. Create the script, test it in a clean shell, then point cron at it.
If you're setting up a new OpenClaw project, invest 20 minutes upfront to get your environment management right. Use a .env file, use absolute paths from day one, set up log rotation, and add basic retry logic. Future you will be grateful.
And if you've been using systemd timers or another scheduler, honestly, stick with whatever works for you. But cron is universal, it's on every Linux system you'll ever touch, and once you understand the environment isolation issue, it's perfectly reliable for running OpenClaw agents. The trick is just knowing that cron's minimalism is a feature, not a bug — you just have to meet it halfway.
Now go check your crontab. Your agent's been waiting.