How to Install OpenClaw on Linux (Ubuntu/Debian)

Let me be straightforward with you: installing OpenClaw on Linux should take about fifteen minutes. In practice, it takes most people somewhere between two hours and an entire weekend, and the reason is almost never OpenClaw itself. It's the GPU compute layer underneath it: the tangled mess of drivers, ICD loaders, and vendor-specific libraries that Linux users have been quietly suffering through for years.
I've set up OpenClaw on four different machines now (two NVIDIA boxes, an AMD Radeon rig, and a sad little Intel NUC with integrated graphics), and every single one had a different failure mode during installation. So I'm going to walk you through the entire process, including the parts that the official docs skip over because they assume you have a functioning GPU compute stack. (You probably don't. That's fine. We'll fix it.)
What OpenClaw Actually Needs From Your System
Before we touch OpenClaw itself, let's talk about what's actually happening under the hood when you run it. OpenClaw orchestrates AI agent workflows locally on your machine. That means it needs to talk to your GPU for inference: running the actual language model that powers your agents' reasoning, tool use, and decision-making.
On macOS, this is mostly invisible because Apple Silicon handles everything through Metal. On Windows with an NVIDIA card, CUDA usually just works. On Linux, you're entering a world where three different GPU vendors have three completely different driver stacks, and the "universal" layer that's supposed to abstract all of this away (OpenCL) is held together with duct tape and good intentions.
Here's what OpenClaw actually requires on Linux:
- Python 3.10+ (3.11 recommended)
- A functioning GPU compute backend (CUDA, ROCm, or OpenCL; yes, OpenCL is still used as a fallback)
- The OpenCL ICD loader (even if you're primarily using CUDA or ROCm, OpenClaw's dependency chain pulls this in)
- CMake, build-essential, and the usual C/C++ build tools
- At least 8GB of RAM (16GB+ recommended if you're running 7B+ parameter models inside your agents)
The GPU compute backend is where 90% of installation problems live. Let's deal with it.
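If you want to sanity-check those prerequisites in one pass before starting, here's a minimal preflight sketch. It's plain shell, nothing OpenClaw-specific; the command names are the stock Ubuntu/Debian ones, and it only reports status without changing anything:

```shell
#!/usr/bin/env bash
# Preflight check for the prerequisites listed above (illustrative sketch).
# Reports what's present; installs nothing.

check() {
  # $1 = friendly name, $2 = command to look for on PATH
  if command -v "$2" >/dev/null 2>&1; then
    echo "ok:      $1 ($2 found)"
  else
    echo "missing: $1 (install it before proceeding)"
  fi
}

check "Python 3"      python3
check "CMake"         cmake
check "C compiler"    cc
check "OpenCL clinfo" clinfo

# RAM check: the guide recommends at least 8GB (16GB+ for 7B+ models).
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "RAM: $((mem_kb / 1024 / 1024)) GB total"
```

Anything reported as missing gets installed in the steps below, so a few "missing" lines at this point are expected.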
Step 1: Get Your GPU Compute Stack Working First
I cannot stress this enough: do not try to install OpenClaw until your GPU compute layer is verified and working. I've watched people in the OpenClaw Discord spend hours debugging what they think is an OpenClaw problem, only to discover their system can't even see their GPU.
NVIDIA Users (The Easiest Path)
If you have an NVIDIA card, congratulations: you have the least painful road ahead. Install the proprietary driver and CUDA toolkit:
# Add NVIDIA package repository (Ubuntu 22.04/24.04)
sudo apt update
sudo apt install -y nvidia-driver-535 nvidia-cuda-toolkit
# Reboot (yes, you actually have to reboot)
sudo reboot
After reboot, verify it's working:
nvidia-smi
You should see your GPU listed with driver version and CUDA version. If you see an error, don't proceed; fix this first. Common issues:
- Secure Boot is blocking the driver. Either disable Secure Boot in BIOS or sign the kernel module with MOK.
- Wayland + NVIDIA is being weird. Switch to X11 for now if your display is broken after driver install.
- You installed the wrong driver version. Check which version supports your card on NVIDIA's site.
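To rule out the first issue quickly, you can check the Secure Boot state and whether the nvidia kernel module actually loaded. mokutil comes from the mokutil package; reading /proc/modules directly avoids depending on lsmod being installed:

```shell
#!/usr/bin/env bash
# Quick diagnosis: is Secure Boot blocking the nvidia module?

if command -v mokutil >/dev/null 2>&1; then
  # Prints "SecureBoot enabled" or "SecureBoot disabled" on EFI systems.
  mokutil --sb-state || echo "could not read Secure Boot state (non-EFI system?)"
else
  echo "mokutil not installed (sudo apt install mokutil)"
fi

# If the driver loaded successfully, "nvidia" appears in /proc/modules.
grep -q '^nvidia ' /proc/modules \
  && echo "nvidia module loaded" \
  || echo "nvidia module NOT loaded"
```

If Secure Boot is enabled and the module is not loaded, that combination is your smoking gun: sign the module with MOK or disable Secure Boot, then reboot and re-check.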
Now install the OpenCL ICD loader (OpenClaw needs this regardless):
sudo apt install -y ocl-icd-opencl-dev opencl-headers clinfo
Run clinfo and confirm it shows your NVIDIA device. You should see something like:
Number of platforms: 1
Platform Name: NVIDIA CUDA
Number of devices: 1
Device Name: NVIDIA GeForce RTX 3080
AMD Users (The "Character Building" Path)
This is where things get spicy. AMD GPU support on Linux has improved dramatically, but the installation process is still more fragile than it should be.
For AMD, you need ROCm, which includes an OpenCL implementation:
# Install prerequisites
sudo apt update
sudo apt install -y wget gnupg2
# Add ROCm repository (Ubuntu 22.04)
sudo mkdir -p /etc/apt/keyrings
wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | \
  gpg --dearmor | sudo tee /etc/apt/keyrings/rocm.gpg > /dev/null
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] \
https://repo.radeon.com/rocm/apt/latest jammy main" | \
  sudo tee /etc/apt/sources.list.d/rocm.list
sudo apt update
sudo apt install -y rocm-opencl-runtime rocm-hip-runtime clinfo
Add your user to the render and video groups:
sudo usermod -aG render,video $USER
Log out and back in (or reboot), then test:
clinfo
The most common AMD failure: clinfo returns "Number of platforms: 0." This almost always means one of three things:
- Your GPU isn't supported by the ROCm version you installed. Check the ROCm supported GPU list; not all AMD cards are covered, especially consumer Radeon cards.
- The ICD file isn't in the right place. Check that /etc/OpenCL/vendors/ contains an .icd file pointing to the ROCm OpenCL library.
- Permissions. The render/video group membership didn't take effect. Try groups to check.
If you have a newer RDNA 3 card (7900 XTX, etc.), you may need to set an environment variable to force ROCm to recognize it:
export HSA_OVERRIDE_GFX_VERSION=11.0.0
Add that to your .bashrc if it fixes your clinfo output.
Intel Users (Integrated Graphics)
If you're running on an Intel system with integrated graphics (common for lightweight setups and NUCs):
sudo apt install -y intel-opencl-icd clinfo
This actually tends to work pretty cleanly on recent Intel hardware. Run clinfo to verify. You won't get blazing performance, but for smaller models and lighter agent workloads, it's serviceable.
Step 2: Install OpenClaw
Now that your GPU compute stack is actually functional, installing OpenClaw itself is straightforward. Create a virtual environment first; don't install into your system Python:
# Create and activate a virtual environment
python3 -m venv ~/openclaw-env
source ~/openclaw-env/bin/activate
# Upgrade pip (seriously, do this; old pip causes weird build failures)
pip install --upgrade pip setuptools wheel
Now install OpenClaw:
pip install openclaw
If you're on NVIDIA and want to make sure it picks up CUDA for maximum performance:
pip install openclaw[cuda]
For AMD/ROCm:
pip install openclaw[rocm]
Verify the installation:
openclaw --version
openclaw doctor
The openclaw doctor command is your best friend. It checks your GPU detection, compute backend, model availability, and configuration. If something is wrong, it'll tell you exactly what. Pay attention to its output.
A healthy output looks something like:
OpenClaw Doctor
===============
✓ Python 3.11.6
✓ GPU detected: NVIDIA GeForce RTX 3080 (CUDA 12.2)
✓ OpenCL ICD loader: working (1 platform, 1 device)
✓ Compute backend: CUDA (primary), OpenCL (fallback)
✓ Available memory: 10240 MB VRAM
✓ Configuration: ~/.openclaw/config.yaml (valid)
⚠ No skills installed: run 'openclaw skills list' to browse
That last line brings us to the next step.
Step 3: Configure OpenClaw and Install Skills
OpenClaw's configuration lives in ~/.openclaw/config.yaml. On first run, it creates a default config. You'll want to edit it:
mkdir -p ~/.openclaw
nano ~/.openclaw/config.yaml
Here's a solid starting configuration:
# ~/.openclaw/config.yaml
compute:
  backend: auto        # auto-detects CUDA > ROCm > OpenCL > CPU
  memory_limit: 0.8    # use up to 80% of available VRAM
  threads: 8           # CPU threads for non-GPU operations
models:
  default: "mistral-7b-instruct"
  cache_dir: "~/.openclaw/models"
agent:
  max_iterations: 25
  timeout: 300         # seconds per task
  verbose: true        # helpful while you're learning
skills:
  directory: "~/.openclaw/skills"
The backend: auto setting is important: it means OpenClaw will try CUDA first, fall back to ROCm, then OpenCL, then CPU. You can hardcode it to cuda or rocm if auto-detection is being flaky.
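If you're curious what that fallback chain amounts to, here's an illustrative sketch of the selection order. This is not OpenClaw's actual detection code; it just uses nvidia-smi, rocminfo, and clinfo as stand-in probes for the same CUDA > ROCm > OpenCL > CPU priority:

```shell
#!/usr/bin/env bash
# Illustrative sketch of what "backend: auto" boils down to.
# The probe commands are stand-ins; OpenClaw's real detection may differ.

detect_backend() {
  if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo "cuda"          # NVIDIA driver answers: use CUDA
  elif command -v rocminfo >/dev/null 2>&1 && rocminfo >/dev/null 2>&1; then
    echo "rocm"          # ROCm runtime answers: use ROCm
  elif command -v clinfo >/dev/null 2>&1 && \
       clinfo 2>/dev/null | grep -q "Number of platforms *[1-9]"; then
    echo "opencl"        # at least one OpenCL platform registered
  else
    echo "cpu"           # last resort: no GPU backend found
  fi
}

echo "Selected backend: $(detect_backend)"
```

Running the probes yourself like this is also a decent way to predict what auto will pick before you ever start OpenClaw.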
Step 4: Pull a Model and Test
Download a model to use for your agents:
openclaw models pull mistral-7b-instruct
This will download the model to your cache directory. Depending on your internet speed, this takes a few minutes.
Now run a quick test:
openclaw run --task "List the top 5 largest files in my home directory" --verbose
If this works (you see the agent reasoning through the task, deciding to use shell commands, executing them, and returning results), your installation is good. You can stop troubleshooting and start building.
Common Installation Failures and Fixes
Even with the steps above, things go wrong. Here are the issues I've personally hit or seen others hit repeatedly:
"libOpenCL.so: cannot open shared object file"
The OpenCL shared library isn't where OpenClaw expects it. Fix:
sudo apt install -y ocl-icd-libopencl1
sudo ldconfig
If that doesn't work, find it manually:
find /usr -name "libOpenCL*" 2>/dev/null
Then symlink it:
sudo ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1 /usr/lib/libOpenCL.so
"clGetPlatformIDs() returned -1001"
This is the "no platforms found" error. Your ICD loader can't find any vendor implementations. Check:
ls /etc/OpenCL/vendors/
You should see at least one .icd file (like nvidia.icd or amdocl64.icd). If the directory is empty, your vendor driver isn't properly installed. Go back to Step 1.
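A quick way to see exactly what the loader can find is to dump every registered .icd file along with the library path it points to. This helper is just a convenience wrapper around the ls and cat you'd do by hand:

```shell
#!/usr/bin/env bash
# List every registered OpenCL ICD file and the vendor library it names.

list_icds() {
  icd_dir="$1"
  if [ -d "$icd_dir" ]; then
    found=0
    for f in "$icd_dir"/*.icd; do
      [ -e "$f" ] || continue   # glob matched nothing
      found=1
      echo "$f -> $(cat "$f")"  # each .icd is a one-line library path
    done
    [ "$found" -eq 1 ] || echo "no .icd files in $icd_dir"
  else
    echo "$icd_dir does not exist: the ICD loader has nothing to load"
  fi
}

list_icds /etc/OpenCL/vendors
```

If a listed library path doesn't actually exist on disk (check it with ls), the vendor package is broken or half-removed, which produces the same -1001 error as an empty directory.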
Build fails with CMake errors
Some OpenClaw dependencies compile native extensions. Make sure you have build tools:
sudo apt install -y build-essential cmake pkg-config \
libssl-dev libffi-dev python3-dev
"CUDA out of memory" on first run
Your model is too large for your VRAM. Either use a smaller model or enable offloading in config:
compute:
  offload_layers: 20   # offload some layers to CPU RAM
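If you're wondering where a number like 20 comes from, here's a back-of-envelope sketch. Every figure in it (model size, layer count, free VRAM) is an illustrative assumption, so substitute your own measurements from nvidia-smi or rocm-smi:

```shell
#!/usr/bin/env bash
# Back-of-envelope estimate for offload_layers. All numbers below are
# illustrative assumptions, not measurements of any specific model.

model_gb=4    # ~size of a 7B model at 4-bit quantization (assumption)
layers=32     # typical layer count for a 7B model (assumption)
vram_gb=3     # free VRAM you actually have (example value; measure yours)

per_layer_mb=$(( model_gb * 1024 / layers ))     # weights per layer, roughly even
fit=$(( vram_gb * 1024 / per_layer_mb ))         # how many layers fit in VRAM
offload=$(( layers > fit ? layers - fit : 0 ))   # the rest go to CPU RAM

echo "~${per_layer_mb} MB/layer; offload_layers: ${offload}"
# prints: ~128 MB/layer; offload_layers: 8
```

Treat the result as a starting point: leave some VRAM headroom for the KV cache and activations, so rounding the offload count up a few layers is usually safer than rounding down.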
The Shortcut: Felix's OpenClaw Starter Pack
Here's where I'll save you potentially hours of additional setup time. Everything I've described above gets your OpenClaw installation working. But you still need to configure skills, set up useful agent templates, and figure out which model configurations actually work well for different tasks.
I burned a lot of time on that last part when I was starting out. Trying different prompting strategies for tool use, figuring out which skills play well together, tweaking agent iteration limits for different task types.
If you don't want to do all of that from scratch, Felix's OpenClaw Starter Pack on Claw Mart is genuinely the fastest way to go from "I just installed this" to "I have agents doing useful work." It's $29 and includes pre-configured skills and agent templates that are already tuned and tested. You drop them into your ~/.openclaw/skills directory and you're immediately productive.
I specifically like it because the skills are well-documented: you can actually read through them and understand what they're doing, which teaches you how to build your own. It's not a black box. Think of it less as "buying a product" and more as "buying back the 6 hours you'd spend figuring out best practices the hard way."
Where to Go From Here
Once your installation is working:
- Start with simple tasks. File operations, web lookups, data formatting. Make sure the basic agent loop is solid before you try complex multi-step workflows.
- Read the skill documentation. Understanding how skills are structured is the key to building your own custom ones.
- Monitor GPU usage. Run nvidia-smi -l 1 (NVIDIA) or rocm-smi (AMD) in a separate terminal while your agents work. You'll quickly get a feel for how compute-intensive different operations are.
- Join the community. The OpenClaw Discord is active and people are generally helpful, especially with Linux-specific issues. The r/LocalLLaMA subreddit also has OpenClaw users who share configurations and tips.
The Linux installation experience for GPU-accelerated AI tools is genuinely painful right now. It's not OpenClaw's fault; it's a fragmented driver ecosystem that's been a mess for years. But once you push through the initial setup, running agents locally on your own hardware without sending your data to any cloud API is absolutely worth the upfront hassle.
Get the compute stack right first, then install OpenClaw on top of it, and you'll be fine.