Setting Up Cron Jobs in OpenClaw: Daily AI Employee Tasks

Look, I'll save you the frustration I went through: getting a cron job to reliably trigger an AI employee in OpenClaw is not hard, but the default way most people set it up is wrong. They slap a * * * * * on a shell script, walk away, and then wonder why their daily reporting agent silently stopped working eleven days ago, or why their OpenAI bill tripled because the agent got stuck in a retry loop at 3am with nobody watching.
I've been running scheduled OpenClaw agents in production for months now (daily summaries, weekly research digests, nightly data cleanup tasks), and I've landed on a setup that actually works. This post is the guide I wish I had when I started.
The Real Problem: Cron Was Not Built for This
Cron was designed in the 1970s to run simple shell scripts. It gives you an exit code, maybe an email if you configure it, and that's it. AI agents are a completely different beast. They're probabilistic, stateful, expensive per execution, and they fail in weird, silent ways that a simple exit code won't catch.
Here's what actually goes wrong when you naively cron an OpenClaw agent:
Silent failures. Your API key rotates, a rate limit kicks in, or the model endpoint has a blip. Cron doesn't care. It ran the script. The script errored. Nobody knows until your boss asks why the daily brief hasn't shown up in Slack for a week.
Environment issues. Your OpenClaw skills work perfectly when you run them manually in your terminal. But cron runs in a stripped-down environment: different PATH, no virtualenv activated, no access to your .env file. Classic "works on my machine" syndrome, amplified.
Cost explosions. An OpenClaw agent with tool-calling capabilities can decide to make 40 API calls instead of 4. Without guardrails, a task that costs $0.30 per run can randomly spike to $15 because the agent decided to be thorough at 2am. Multiply that by 30 days and you've got a problem.
Overlapping runs. Your analysis agent usually takes 10 minutes. One day it takes 90 minutes because of slow API responses. Meanwhile, cron fires the next run. Now you have two agents writing to the same database, sending duplicate Slack messages, or worse.
No state management. Many OpenClaw workflows need context from previous runs: what was already processed, what changed since yesterday, where the agent left off. Cron is stateless by design.
None of these are OpenClaw's fault. They're cron's fault. But since cron is what most of us reach for first, you need to build the right scaffolding around it.
The Setup That Actually Works
Here's the architecture I use for every scheduled OpenClaw agent. It's not complicated, but each piece solves a specific failure mode.
Step 1: Wrap Your OpenClaw Skill in a Runner Script
Don't point cron directly at your OpenClaw skill. Create a wrapper script that handles environment setup, locking, logging, and error handling.
#!/bin/bash
# run_daily_summary.sh - Wrapper for OpenClaw daily summary agent
set -euo pipefail

# ---- Configuration ----
LOCK_FILE="/tmp/openclaw_daily_summary.lock"
LOG_DIR="$HOME/openclaw-logs"
LOG_FILE="$LOG_DIR/daily_summary_$(date +%Y%m%d_%H%M%S).log"
MAX_RUNTIME=900  # 15 minutes max
OPENCLAW_DIR="$HOME/openclaw-projects/daily-summary"

# ---- Ensure log directory exists ----
mkdir -p "$LOG_DIR"

# ---- Prevent overlapping runs ----
if [ -f "$LOCK_FILE" ]; then
    LOCK_PID=$(cat "$LOCK_FILE")
    if kill -0 "$LOCK_PID" 2>/dev/null; then
        echo "$(date): Previous run (PID $LOCK_PID) still active. Skipping." >> "$LOG_DIR/skipped.log"
        exit 0
    else
        echo "$(date): Stale lock file found. Removing." >> "$LOG_DIR/skipped.log"
        rm -f "$LOCK_FILE"
    fi
fi
echo $$ > "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

# ---- Load environment ----
# set -a exports everything sourced from .env so child processes see it
set -a
source "$OPENCLAW_DIR/.env"
set +a
source "$OPENCLAW_DIR/venv/bin/activate"

# ---- Run with timeout ----
cd "$OPENCLAW_DIR"
# Suspend errexit so a failing pipeline doesn't kill the script
# before we capture its exit code
set +e
timeout "$MAX_RUNTIME" openclaw run daily-summary \
    --config config/daily_summary.yaml \
    --output-dir outputs/ \
    2>&1 | tee "$LOG_FILE"
EXIT_CODE=${PIPESTATUS[0]}
set -e

# ---- Handle failure ----
if [ "$EXIT_CODE" -ne 0 ]; then
    echo "$(date): FAILED with exit code $EXIT_CODE" >> "$LOG_DIR/failures.log"
    # Send alert (pick your poison: Slack webhook, email, Pushover, etc.)
    curl -s -X POST "$SLACK_WEBHOOK_URL" \
        -H 'Content-type: application/json' \
        -d "{\"text\":\"🚨 OpenClaw daily summary agent FAILED (exit code $EXIT_CODE). Check logs: $LOG_FILE\"}"
    exit "$EXIT_CODE"
fi
echo "$(date): SUCCESS" >> "$LOG_DIR/successes.log"
Let me break down what this does and why each piece matters:
- set -euo pipefail - Fails fast on any error instead of silently continuing.
- Lock file with PID check - Prevents overlapping runs. If the previous execution is still going, this one exits cleanly. If the lock is stale (crashed process), it cleans up and proceeds.
- timeout - Kills the agent if it exceeds your maximum expected runtime. This is your cost circuit breaker. If your daily summary should take 5 minutes and it's been running for 15, something is wrong.
- Explicit environment loading - Sources the .env and activates the virtualenv manually. Cron doesn't know about your shell configuration.
- Structured logging - Timestamped log files that you can actually search through when something breaks at 2am on a Saturday.
- Failure alerting - Sends a Slack message (or whatever) immediately on failure. You'll know within seconds, not days.
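As an aside: if you'd rather not manage the lock file and PID check by hand, flock from util-linux collapses the whole pattern into a few lines. A sketch, with the same lock path as above:

```shell
#!/bin/bash
# flock-based overlap guard: -n makes a second invocation exit
# immediately instead of queueing behind the running one.
exec 9>/tmp/openclaw_daily_summary.flock
if ! flock -n 9; then
    echo "$(date): previous run still active, skipping" >&2
    exit 0
fi
# ... rest of the wrapper goes here; the lock releases when the script exits
```

The lock is tied to the open file descriptor, so there's no stale-lock cleanup to write: if the process dies, the kernel drops the lock.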
Step 2: Configure the Crontab Properly
# Edit crontab
crontab -e
# OpenClaw Daily Summary - runs at 6:15 AM every weekday
# Uses full path to avoid PATH issues
15 6 * * 1-5 /home/deploy/openclaw-projects/daily-summary/run_daily_summary.sh

# OpenClaw Weekly Research Digest - runs Sunday at 8 PM
0 20 * * 0 /home/deploy/openclaw-projects/research-digest/run_research_digest.sh

# OpenClaw Nightly Data Cleanup - runs at 1 AM daily
0 1 * * * /home/deploy/openclaw-projects/data-cleanup/run_data_cleanup.sh
A few things people get wrong here:
Use absolute paths for everything. Cron's PATH is minimal. Don't assume openclaw is on the path. Either use the full path to the binary or set PATH at the top of your crontab:
PATH=/usr/local/bin:/usr/bin:/bin:/home/deploy/.local/bin
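If a job misbehaves only under cron, it helps to see exactly what environment cron actually provides. A throwaway diagnostic entry (remove it after it has fired once or twice):

```
# Temporary crontab line: dump cron's environment once a minute
* * * * * env > /tmp/cron_env.txt 2>&1
```

Then compare against your interactive shell with diff <(sort /tmp/cron_env.txt) <(env | sort); the missing PATH entries and variables usually jump out immediately.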
Don't schedule at minute :00. Everyone schedules at the top of the hour. API rate limits, shared infrastructure, and external services all get hammered at :00. Use :15, :37, :42, whatever. Spread your jobs out.
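If picking arbitrary minutes feels tedious, let the shell pick one when you write the entry; shuf is part of GNU coreutils:

```shell
# Pick a random off-peak minute once, then paste the result into crontab.
MINUTE=$(shuf -i 1-59 -n 1)
echo "$MINUTE 6 * * 1-5 /home/deploy/openclaw-projects/daily-summary/run_daily_summary.sh"
```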
Be intentional about days. If your summary agent only matters on weekdays, use 1-5. Don't waste money running it Saturday and Sunday.
Step 3: The OpenClaw Configuration File
Here's a practical config/daily_summary.yaml for an OpenClaw skill that pulls from multiple sources and generates a summary:
# config/daily_summary.yaml
skill: daily-executive-summary
version: "1.0"

agent:
  model: gpt-4o
  temperature: 0.3
  max_tokens: 4000
  max_tool_calls: 20  # Hard cap - prevents runaway agents

sources:
  - type: slack
    channels: ["#engineering", "#sales", "#incidents"]
    lookback_hours: 24
  - type: jira
    project: "CORE"
    status_changed_since: "24h"
  - type: github
    repos: ["myorg/backend", "myorg/frontend"]
    events: ["pull_request", "release"]

output:
  format: markdown
  destinations:
    - type: slack
      channel: "#daily-brief"
    - type: email
      recipients: ["leadership@company.com"]
    - type: file
      path: "outputs/summary_{{date}}.md"

guardrails:
  max_cost_per_run: 0.75
  timeout_seconds: 600
  retry_policy:
    max_retries: 2
    backoff_multiplier: 2
    initial_delay_seconds: 30
  idempotency_key: "daily-summary-{{date}}"
The guardrails section is the most important part of this config, and the one most people skip.
max_cost_per_run - OpenClaw can track token usage. If the agent is about to exceed your budget, it stops. This single setting would have saved me about $200 in my first month.
max_tool_calls - Caps how many external API calls the agent can make. Without this, a confused agent can loop, calling the same tool over and over.
retry_policy - Exponential backoff is essential when you're hitting external APIs. A naive retry (fail -> immediately retry -> fail -> retry) just burns tokens and gets you rate-limited faster.
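The same backoff pattern is easy to reproduce in the wrapper for anything OpenClaw doesn't retry for you. A sketch matching the parameters above (2 retries, 30-second initial delay, 2x multiplier); retry_with_backoff is my own helper name, and RETRY_MAX / RETRY_DELAY are overridable for testing:

```shell
#!/bin/bash
# Generic retry-with-exponential-backoff around any command.
retry_with_backoff() {
    local max_retries="${RETRY_MAX:-2}"
    local delay="${RETRY_DELAY:-30}"
    local attempt=0
    while true; do
        "$@" && return 0
        attempt=$((attempt + 1))
        if [ "$attempt" -gt "$max_retries" ]; then
            echo "giving up after $attempt attempts" >&2
            return 1
        fi
        echo "attempt $attempt failed; retrying in ${delay}s" >&2
        sleep "$delay"
        delay=$((delay * 2))  # 30s, then 60s
    done
}

# Example: retry_with_backoff curl -fsS https://api.example.com/health
```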
idempotency_key - If the job runs twice on the same day (overlap, manual re-run, whatever), OpenClaw can detect the duplicate and skip or deduplicate the output.
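If your OpenClaw version doesn't expose idempotency keys, or you want a second line of defense in the wrapper itself, a date-stamped marker file gives you the same effect. A minimal sketch (STAMP_DIR is an illustrative path):

```shell
#!/bin/bash
# Date-stamped marker file: makes re-runs on the same day a no-op.
STAMP_DIR="${STAMP_DIR:-$HOME/openclaw-logs/stamps}"
STAMP="$STAMP_DIR/daily-summary-$(date +%Y%m%d)"
mkdir -p "$STAMP_DIR"

if [ -e "$STAMP" ]; then
    echo "$(date): already ran today ($STAMP exists), skipping" >&2
    exit 0
fi

# ... run the agent here; ideally touch the stamp only after success ...
touch "$STAMP"
```

Touching the stamp only after a successful run means a failed attempt can still be retried later the same day.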
Step 4: Log Rotation and Cleanup
Your logs will pile up. Add a simple rotation:
# Add to crontab - clean up logs older than 30 days
0 2 * * 0 find /home/deploy/openclaw-logs -name "*.log" -mtime +30 -delete
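Before putting -delete in crontab, preview what matches: find deletes everything the expression selects, so a typo in the path or -name pattern is unrecoverable. Same expression, -print instead of -delete (LOG_DIR is overridable for testing):

```shell
# Lists what WOULD be removed; deletes nothing
LOG_DIR="${LOG_DIR:-/home/deploy/openclaw-logs}"
find "$LOG_DIR" -name "*.log" -mtime +30 -print 2>/dev/null || true
```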
Or use logrotate if you want to be more formal about it:
# /etc/logrotate.d/openclaw
/home/deploy/openclaw-logs/*.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
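You can dry-run the config before trusting it: logrotate's -d flag turns on debug mode, which parses the file and prints what would happen without rotating anything:

```shell
# Validate the config; nothing is rotated or compressed in debug mode
logrotate -d /etc/logrotate.d/openclaw
```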
Step 5: Health Check Monitoring
Even with alerting on failures, you want to know about the absence of success. If the cron daemon itself dies, or the server reboots and cron doesn't restart, your failure alert won't fire, because nothing ran at all.
I use a dead man's switch pattern. After a successful run, ping an external monitoring service:
# Add to the end of run_daily_summary.sh, after the success log line
curl -s "https://hc-ping.com/your-unique-uuid" > /dev/null
Services like Healthchecks.io (free tier is plenty), Cronitor, or Better Uptime will alert you if they don't receive the ping within your expected window. This catches the failure mode that no amount of in-script alerting can catch: the script never running at all.
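Healthchecks.io checks also accept /start and /fail suffixes on the ping URL, which gets you run-duration tracking and an immediate failure signal on top of the dead man's switch. A sketch using the same placeholder UUID; hc_ping is my own helper name:

```shell
#!/bin/bash
# Bracket the agent run with Healthchecks.io start/success/fail pings.
HC_URL="https://hc-ping.com/your-unique-uuid"  # placeholder UUID

hc_ping() {
    # hc_ping        -> success ping
    # hc_ping start  -> run started
    # hc_ping fail   -> run failed
    local suffix=${1:+/$1}
    curl -fsS -m 10 --retry 3 "${HC_URL}${suffix}" > /dev/null || true
}

hc_ping start
if timeout 900 openclaw run daily-summary; then
    hc_ping
else
    hc_ping fail
fi
```

The || true matters: a monitoring hiccup should never be what makes your job itself "fail".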
Advanced: Moving Beyond Cron
Once you have more than three or four scheduled OpenClaw agents, raw cron starts to strain. Here's when I'd recommend leveling up:
Celery Beat + Redis - If you need dynamic scheduling (change intervals without editing crontab), queuing (run jobs in order), or worker pools (run multiple agents concurrently with controlled parallelism).
Systemd timers - A better cron that's already on your Linux box. Gives you dependency management, better logging (journalctl), and resource controls (memory limits, CPU quotas). Seriously underrated.
# /etc/systemd/system/openclaw-daily-summary.timer
[Unit]
Description=OpenClaw Daily Summary Timer
[Timer]
OnCalendar=Mon..Fri 06:15
Persistent=true
RandomizedDelaySec=120
[Install]
WantedBy=timers.target
# /etc/systemd/system/openclaw-daily-summary.service
[Unit]
Description=OpenClaw Daily Summary Agent
[Service]
Type=oneshot
User=deploy
WorkingDirectory=/home/deploy/openclaw-projects/daily-summary
ExecStart=/home/deploy/openclaw-projects/daily-summary/run_daily_summary.sh
MemoryMax=2G
TimeoutStartSec=900
Persistent=true means if the server was off when the timer should have fired, it runs immediately on boot. RandomizedDelaySec adds jitter to prevent thundering herd problems. MemoryMax prevents a runaway agent from killing your server. This is all stuff cron simply cannot do.
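For completeness, activating the pair above means enabling the timer (not the service) with standard systemctl commands:

```shell
# Pick up the new unit files, then enable and start the timer
sudo systemctl daemon-reload
sudo systemctl enable --now openclaw-daily-summary.timer

# Confirm the next scheduled run
systemctl list-timers openclaw-daily-summary.timer

# Kick off the service once by hand to test outside the schedule
sudo systemctl start openclaw-daily-summary.service

# Read the agent's output
journalctl -u openclaw-daily-summary.service --since today
```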
Skip the Setup: Felix's OpenClaw Starter Pack
If you've read this far and thought "this is a lot of scaffolding for what should be a simple scheduled task", I agree. That's why I'd point you toward Felix's OpenClaw Starter Pack on Claw Mart.
It's a $29 bundle that includes pre-configured OpenClaw skills with the guardrails, logging, alerting, and wrapper scripts already built in. The daily summary agent, the research digest, the data cleanup patterns โ they're all in there, tested and ready to deploy. The config files include sensible defaults for cost caps, retry policies, and idempotency. You basically clone it, drop in your API keys and data source credentials, set up your crontab (or systemd timer), and you're running.
I'm not getting a commission on this. I genuinely wish it existed when I started, because I spent two weekends building exactly what Felix already packaged up. If you don't want to set all of this up manually, it's the fastest way to get reliable scheduled OpenClaw agents running.
Common Gotchas (Quick Reference)
"My agent works manually but fails in cron." โ 99% of the time: environment variables aren't loaded, virtualenv isn't activated, or PATH doesn't include the OpenClaw binary. Source everything explicitly in your wrapper script.
"My agent sends duplicate outputs." โ Use idempotency keys in your OpenClaw config. Also check for overlapping runs โ add the lock file pattern from above.
"My costs are unpredictable."
โ Set max_cost_per_run and max_tool_calls in every scheduled agent config. No exceptions. Treat these like you treat memory limits on a container.
"I don't know if my agents are actually running." โ Dead man's switch. Ping an external service on success. Alert on absence of ping, not just on failure.
"My agent sometimes takes way longer than expected."
โ Use timeout in your wrapper script AND timeout_seconds in your OpenClaw config. Belt and suspenders.
Next Steps
- Start with one agent. Pick your simplest scheduled task, probably a daily summary or data sync. Get it rock-solid before adding more.
- Build the wrapper script first. Copy the template above. Modify for your paths and alert preferences. The scaffolding matters more than the agent logic.
- Set cost guardrails before you set the schedule. Seriously. Do this first. You'll thank me later.
- Graduate to systemd timers when cron starts feeling limiting. It's not much more complex, and the resource controls alone are worth it.
- If you want to skip the grunt work, grab Felix's OpenClaw Starter Pack and have the whole thing running by tonight.
Scheduled AI agents are genuinely useful: they're the closest thing to having an employee who actually shows up at 6am every day without complaining. But they need the right infrastructure around them. Cron plus some thoughtful guardrails gets you surprisingly far. Build it right the first time and you won't be debugging silent failures at midnight.