Deploying OpenClaw¶
OpenClaw is an AI personal assistant that connects to your editor via a WebSocket gateway. The openclaw runtime is bundled with Language Operator and is installed automatically with the Helm chart.
Prerequisites¶
- Language Operator installed
- An LLM provider API key, or a local model endpoint (e.g. Ollama)
Instructions¶
Create a Cluster¶
kubectl apply -f - <<EOF
apiVersion: langop.io/v1alpha1
kind: LanguageCluster
metadata:
  name: demo-cluster
spec:
  domain: demo-cluster.<your-domain>
EOF
kubectl wait languagecluster/demo-cluster --for=condition=Ready --timeout=60s
kubectl config set-context --current --namespace=demo-cluster
Configure a Model¶
kubectl create secret generic anthropic-credentials \
  --from-literal=api-key=sk-ant-your-key-here
kubectl apply -f - <<EOF
apiVersion: langop.io/v1alpha1
kind: LanguageModel
metadata:
  name: claude-sonnet
spec:
  provider: anthropic
  modelName: claude-sonnet-4-5
  apiKeySecretRef:
    name: anthropic-credentials
    key: api-key
EOF
Deploy OpenClaw¶
After the agent is created, the operator auto-generates a gateway token and stores it in a Secret named openclaw-runtime.
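The source does not show a manifest for this step. A minimal sketch of a LanguageAgent resource follows, reusing the API group from the manifests above; the spec field names (`modelRef`, `image`) are assumptions, so check the LanguageAgent CRD for the actual schema:

```shell
kubectl apply -f - <<EOF
apiVersion: langop.io/v1alpha1
kind: LanguageAgent
metadata:
  name: openclaw
spec:
  # Hypothetical fields -- verify against the LanguageAgent CRD
  modelRef:
    name: claude-sonnet
EOF
```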
Verify¶
Wait for the pod to reach Running and the LanguageAgent to show Ready=True.
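These checks can be scripted; the `languageagent/openclaw` name and the `app=openclaw` label are assumptions based on the resource names in the table below:

```shell
# Watch the pod come up, then block until the agent reports Ready
kubectl get pods -l app=openclaw
kubectl wait languageagent/openclaw --for=condition=Ready --timeout=120s
```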
Get Credentials¶
Retrieve the auto-generated token:
TOKEN=$(kubectl get secret openclaw-runtime \
  -o jsonpath='{.data.OPENCLAW_GATEWAY_TOKEN}' | base64 -d)
echo "Token: $TOKEN"
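Secret data is stored base64-encoded, which is why the pipeline above ends in `base64 -d`. A quick local illustration of that decode step (no cluster needed; the sample value is made up):

```shell
# Encode a sample value the way Kubernetes stores Secret data
ENCODED=$(printf 'example-token' | base64)
# Decode it back, as the retrieval command above does for the real token
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```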
Connect¶
Log in with your token at https://openclaw.demo-cluster.<your-domain>.
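If DNS for the gateway domain isn't set up yet, you can reach the Service directly with a port-forward. Port 18789 comes from the Service in the table below; the local port choice is arbitrary:

```shell
# Forward the OpenClaw Service to localhost, then browse to http://localhost:18789
kubectl port-forward svc/openclaw 18789:18789
```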
What the Operator Created¶
| Resource | Name | Purpose |
|---|---|---|
| Namespace | demo-cluster | Isolated workload namespace |
| Deployment | openclaw | Runs the OpenClaw container |
| Service | openclaw | ClusterIP on port 18789 |
| Secret | openclaw-runtime | Auto-generated gateway token |
| NetworkPolicy | openclaw | Allows inbound from other agents in this namespace |
| PVC | openclaw-workspace | 10Gi persistent workspace |
| ConfigMap | openclaw-agent | Injected at /etc/agent/config.yaml |