The CLI for your AI agent
Create, run, and destroy sandboxed agents from your terminal
No Docker. No cloud. Open-source Rust CLI that boots isolated Linux environments in seconds — so every agent gets its own disposable sandbox.
$ cargo install clawstainer
Give your agent the superpower
to spin up its own infrastructure
Agent needs a clean environment? It spins one up itself.
Task done? It tears down the sandbox. No cleanup needed.
Need to retry? It restores from snapshot and goes again.
Create a sandbox
One command spins up an isolated Linux environment with its own filesystem, networking, and resource limits. Boots in 2-3 seconds.
Provision an agent
Built-in templates for Claude Code, Hermes, and OpenClaw. All dependencies included — one command from empty box to running agent.
Use it, then destroy it
Run commands, forward ports, copy files out. When done, tear it down — or snapshot it for instant reuse later.
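End to end, that lifecycle might look like this. A sketch only: the command names are taken from the examples elsewhere on this page, except `destroy`, whose exact subcommand name is an assumption, and the sandbox name `scratch` is a placeholder.

# 1. Create an isolated sandbox with its own filesystem and limits
$ clawstainer create --name scratch --memory 2048
# 2. Provision an agent from a built-in template
$ clawstainer provision scratch --components claude-code
# 3. Use it: run commands, forward ports, copy artifacts out
$ clawstainer exec scratch "uname -a"
$ clawstainer port-forward scratch 8080:8080
# 4. Snapshot it for instant reuse later...
$ clawstainer snapshot create scratch --name scratch-ready
# ...or tear it down (subcommand name assumed)
$ clawstainer destroy scratch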
Local-first, sandboxed,
built for fleets.
Local-First
Infrastructure
Runs on your Mac or Linux machine. No cloud account needed, so you get privacy and speed by default. Install it with cargo, create your first sandbox, and set your agent free.
Complete Namespace
Isolation
Each sandbox has its own filesystem, network, and process space. If something breaks inside, nothing outside is affected.
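You can see the isolation from inside a box. A quick sketch using the exec subcommand shown in the examples on this page; `my-box` is a placeholder name:

# Only the sandbox's own processes are visible -- not the host's
$ clawstainer exec my-box "ps -e"
# The root filesystem belongs to the sandbox, not the host
$ clawstainer exec my-box "ls /"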
Fleet Management
at Scale
Define all your sandboxes in one YAML file. Clawstainer creates and provisions them in parallel. Tear down a group or everything at once.
What you build with disposable sandboxes
Benchmark agent performance across LLMs
Spin up identical sandboxes for GPT-5, Claude, Gemini, and Llama — same environment, same task, different models. Compare token efficiency, tool use accuracy, and task completion rates in true isolation. No cross-contamination between runs.
Define your benchmark matrix in a fleet YAML. Each model gets its own sandboxes with identical provisioning. Run them in parallel, collect results, destroy everything. Repeat with different prompts or tool configurations.
machines:
  - name: bench-gpt5
    count: 5
    memory: 2048
    provision: openclaw
  - name: bench-claude
    count: 5
    memory: 2048
    provision: claude-code
  - name: bench-llama
    count: 5
    memory: 4096
    provision: hermes-agent
Give everyone on your team their own agent
Each engineer, PM, or designer gets a persistent sandbox with their preferred AI agent pre-provisioned. Personal environments that don't step on each other — with resource limits enforced per box.
Port-forward into any team member's sandbox for pair debugging. Snapshot a well-configured environment and clone it for new hires. Everyone's agent has root access to their box — and zero access to anyone else's.
$ clawstainer create --name alice-agent --memory 2048
$ clawstainer create --name bob-agent --memory 2048
$ clawstainer create --name carol-agent --memory 2048
$ clawstainer provision alice-agent --components claude-code
$ clawstainer provision bob-agent --components hermes-agent
$ clawstainer provision carol-agent --components openclaw
# Pair debug with Alice
$ clawstainer port-forward alice-agent 8080:8080
Run untrusted code from LLM outputs
Let agents write and execute arbitrary code without risking your host machine. Every sandbox is disposable — if an agent rm -rfs everything, destroy it and spin up a fresh one in seconds.
Namespace and cgroup isolation means a rogue agent can't escape its box. No shared filesystem, no network access to the host, no privilege escalation. The sandbox is the blast radius.
# Agent does something destructive
$ clawstainer exec <id> "rm -rf /"
# Host is untouched. Sandbox is toast.
# Spin up a fresh one from snapshot
$ clawstainer create --name fresh-box --from python3-ready
# ✓ Created fresh-box in 0.8s (from snapshot)
Local AI agent development
Build and test agents without cloud costs. Iterate fast on your laptop — port-forward into the sandbox, watch logs in real time, copy artifacts out. Snapshot when something works. Move to production when ready.
Works on macOS via Lima (one brew install) and natively on Linux. No cloud account, no billing dashboard, no cold starts. Your machine is the data center.
# Develop locally
$ clawstainer create --name dev --memory 4096 --cpus 4
$ clawstainer provision dev --components python3,git,curl
$ clawstainer cp ./agent.py dev:/root/
$ clawstainer port-forward dev 3000:3000
# It works — snapshot it
$ clawstainer snapshot create dev --name agent-v1
# Check resource usage
$ clawstainer stats dev
One YAML. Hundreds of agents.
Define your entire fleet in a declarative config. Clawstainer creates all machines first, then provisions them in parallel batches. Destroy by group or nuke everything.
machines:
  - name: hermes-worker
    count: 3
    memory: 2048
    cpus: 2
    provision: hermes-agent
    linger: true
  - name: openclaw
    count: 10
    memory: 1024
    cpus: 2
    provision: openclaw
    linger: true
$ clawstainer fleet create --file fleet.yaml --parallel 5
# ✓ Created 13 machines
# ✓ Provisioning [5/13]...
# ✓ All 13 machines provisioned
$ clawstainer fleet destroy --all
# ✓ Destroyed 13 machines
Run your agents on Clawstainer
Open-source. Local-first. Ready when you are.