feat: refactor sgclaw around zeroclaw compat runtime
third_party/zeroclaw/scripts/99-act-led.rules (vendored, new file, 10 lines)
@@ -0,0 +1,10 @@
# Allow the gpio group to control the Raspberry Pi onboard ACT LED
# via the Linux LED subsystem sysfs interface.
#
# Without this rule /sys/class/leds/ACT/{brightness,trigger} are
# root-only writable, which prevents zeroclaw from blinking the LED.
SUBSYSTEM=="leds", KERNEL=="ACT", ACTION=="add", \
    RUN+="/bin/chgrp gpio /sys/%p/brightness", \
    RUN+="/bin/chmod g+w /sys/%p/brightness", \
    RUN+="/bin/chgrp gpio /sys/%p/trigger", \
    RUN+="/bin/chmod g+w /sys/%p/trigger"
third_party/zeroclaw/scripts/README.md (vendored, new file, 232 lines)
@@ -0,0 +1,232 @@
# scripts/ — Raspberry Pi Deployment Guide

This directory contains everything needed to cross-compile ZeroClaw and deploy it to a Raspberry Pi over SSH.

## Contents

| File | Purpose |
|------|---------|
| `deploy-rpi.sh` | One-shot cross-compile and deploy script |
| `rpi-config.toml` | Production config template deployed to `~/.zeroclaw/config.toml` |
| `zeroclaw.service` | systemd unit file installed on the Pi |
| `99-act-led.rules` | udev rule for ACT LED sysfs access without sudo |

---

## Prerequisites

### Cross-compilation toolchain (pick one)

#### Option A — cargo-zigbuild (recommended for Apple Silicon)

```bash
brew install zig
cargo install cargo-zigbuild
rustup target add aarch64-unknown-linux-gnu
```

#### Option B — cross (Docker-based)

```bash
cargo install cross
rustup target add aarch64-unknown-linux-gnu
# Docker must be running
```

The deploy script auto-detects which tool is available, preferring `cargo-zigbuild`.
Force a specific tool with `CROSS_TOOL=zigbuild` or `CROSS_TOOL=cross`.

### Optional: password-based SSH

If you can't use SSH key authentication, install `sshpass` and set the `RPI_PASS` environment variable:

```bash
brew install sshpass      # macOS
sudo apt install sshpass  # Linux
```

---

## Quick Start

```bash
RPI_HOST=raspberrypi.local RPI_USER=pi ./scripts/deploy-rpi.sh
```

After the first deploy, you must set your API key on the Pi (see [First-Time Setup](#first-time-setup)).

---

## Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `RPI_HOST` | `raspberrypi.local` | Pi hostname or IP address |
| `RPI_USER` | `pi` | SSH username |
| `RPI_PORT` | `22` | SSH port |
| `RPI_DIR` | `~/zeroclaw` | Remote directory for the binary and `.env` |
| `RPI_PASS` | _(unset)_ | SSH password — uses `sshpass` if set; key auth used otherwise |
| `CROSS_TOOL` | _(auto-detect)_ | Force `zigbuild` or `cross` |
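Each variable in the table overrides its default only when set. The deploy script is bash, but the same fallback pattern can be sketched in a few lines of Python (variable names taken from the table above; this is an illustration, not the script itself):

```python
import os

def resolve(name: str, default: str) -> str:
    # Environment value wins when set and non-empty;
    # otherwise fall back to the documented default.
    return os.environ.get(name) or default

host = resolve("RPI_HOST", "raspberrypi.local")
user = resolve("RPI_USER", "pi")
port = resolve("RPI_PORT", "22")
```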
---

## What the Deploy Script Does

1. **Cross-compile** — builds a release binary for `aarch64-unknown-linux-gnu` with `--features hardware,peripheral-rpi`.
2. **Stop service** — runs `sudo systemctl stop zeroclaw` on the Pi (continues if not yet installed).
3. **Create remote directory** — ensures `$RPI_DIR` exists on the Pi.
4. **Copy binary** — SCPs the compiled binary to `$RPI_DIR/zeroclaw`.
5. **Create `.env`** — writes an `.env` skeleton with an `ANTHROPIC_API_KEY=` placeholder to `$RPI_DIR/.env` with mode `600`. Skipped if the file already exists so an existing key is not overwritten.
6. **Deploy config** — copies `rpi-config.toml` to `~/.zeroclaw/config.toml`, preserving any `api_key` already present in the file.
7. **Install systemd service** — copies `zeroclaw.service` to `/etc/systemd/system/`, then enables and restarts it.
8. **Hardware permissions** — adds the deploy user to the `gpio` group, copies `99-act-led.rules` to `/etc/udev/rules.d/`, and resets the ACT LED trigger.
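Step 5's create-only-if-absent behaviour is what keeps a configured API key safe across redeploys. A minimal Python sketch of that logic (the real script does this over SSH in bash; the function name here is illustrative):

```python
import os
from pathlib import Path

def ensure_env_skeleton(env_path: Path) -> bool:
    """Write an .env skeleton with mode 600 unless one already exists.

    Returns True if a new skeleton was written, False if an existing
    .env (and any API key inside it) was left untouched.
    """
    if env_path.exists():
        return False
    env_path.write_text("ANTHROPIC_API_KEY=\n", encoding="utf-8")
    os.chmod(env_path, 0o600)  # readable/writable by the owner only
    return True
```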
---

## First-Time Setup

After the first successful deploy, SSH into the Pi and fill in your API key:

```bash
ssh pi@raspberrypi.local
nano ~/zeroclaw/.env
# Set: ANTHROPIC_API_KEY=sk-ant-...
sudo systemctl restart zeroclaw
```

The `.env` is loaded by the systemd service as an `EnvironmentFile`.
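An `EnvironmentFile` is a list of plain `KEY=value` lines. For reference, a minimal Python sketch of that parsing (systemd's own parser also handles quoting and line continuations, which this deliberately ignores):

```python
def parse_env_file(text: str) -> dict[str, str]:
    # Parse simple KEY=value lines, skipping blanks and # comments.
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```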
---

## Interacting with ZeroClaw on the Pi

Once the service is running, the gateway listens on port **8080**.

### Health check

```bash
curl http://raspberrypi.local:8080/health
```

### Send a message

```bash
curl -s -X POST http://raspberrypi.local:8080/api/chat \
  -H 'Content-Type: application/json' \
  -d '{"message": "What is the CPU temperature?"}' | jq .
```

### Stream a conversation

```bash
curl -N -s -X POST http://raspberrypi.local:8080/api/chat \
  -H 'Content-Type: application/json' \
  -H 'Accept: text/event-stream' \
  -d '{"message": "List connected hardware devices", "stream": true}'
```

### Follow service logs

```bash
ssh pi@raspberrypi.local 'journalctl -u zeroclaw -f'
```

---

## Hardware Features

### GPIO tools

ZeroClaw is deployed with the `peripheral-rpi` feature, which enables two LLM-callable tools:

- **`gpio_read`** — reads a GPIO pin value via sysfs (`/sys/class/gpio/...`).
- **`gpio_write`** — writes a GPIO pin value.

These tools let the agent directly control hardware in response to natural-language instructions.

### ACT LED

The udev rule `99-act-led.rules` grants the `gpio` group write access to:

```
/sys/class/leds/ACT/trigger
/sys/class/leds/ACT/brightness
```

This allows toggling the Pi's green ACT LED without `sudo`.

### Aardvark I2C/SPI adapter

If a Total Phase Aardvark adapter is connected, the `hardware` feature enables I2C/SPI communication with external devices. No extra setup is needed — the device is auto-detected via USB.

---

## Files Deployed to the Pi

| Remote path | Source | Description |
|------------|--------|-------------|
| `~/zeroclaw/zeroclaw` | compiled binary | Main agent binary |
| `~/zeroclaw/.env` | created on first deploy | API key and environment variables |
| `~/.zeroclaw/config.toml` | `rpi-config.toml` | Agent configuration |
| `/etc/systemd/system/zeroclaw.service` | `zeroclaw.service` | systemd service unit |
| `/etc/udev/rules.d/99-act-led.rules` | `99-act-led.rules` | ACT LED permissions |

---

## Configuration

`rpi-config.toml` is the production config template. Key defaults:

- **Provider**: `anthropic-custom:https://api.z.ai/api/anthropic`
- **Model**: `claude-3-5-sonnet-20241022`
- **Autonomy**: `full`
- **Allowed shell commands**: `git`, `cargo`, `npm`, `mkdir`, `touch`, `cp`, `mv`, `ls`, `cat`, `grep`, `find`, `echo`, `pwd`, `wc`, `head`, `tail`, `date`

To customise, edit `~/.zeroclaw/config.toml` directly on the Pi and restart the service.

---

## Troubleshooting

### Service won't start

```bash
ssh pi@raspberrypi.local 'sudo systemctl status zeroclaw'
ssh pi@raspberrypi.local 'journalctl -u zeroclaw -n 50 --no-pager'
```

### GPIO permission denied

Make sure the deploy user is in the `gpio` group and that a fresh login session has been started:

```bash
ssh pi@raspberrypi.local 'groups'
# Should include: gpio
```

If the group was just added, log out and back in, or run `newgrp gpio`.

### Wrong architecture / binary won't run

Re-run the deploy script. Confirm the target:

```bash
ssh pi@raspberrypi.local 'file ~/zeroclaw/zeroclaw'
# Expected: ELF 64-bit LSB pie executable, ARM aarch64
```

### Force a specific cross-compilation tool

```bash
CROSS_TOOL=zigbuild RPI_HOST=raspberrypi.local ./scripts/deploy-rpi.sh
# or
CROSS_TOOL=cross RPI_HOST=raspberrypi.local ./scripts/deploy-rpi.sh
```

### Rebuild locally without deploying

```bash
cargo zigbuild --release \
  --target aarch64-unknown-linux-gnu \
  --features hardware,peripheral-rpi
```
third_party/zeroclaw/scripts/browser/start-browser.sh (vendored, new executable file, 21 lines)
@@ -0,0 +1,21 @@
#!/bin/bash
# Start a browser on a virtual display
# Usage: ./start-browser.sh [display_num] [url]

set -e

DISPLAY_NUM=${1:-99}
URL=${2:-"https://google.com"}

export DISPLAY=:$DISPLAY_NUM

# Check if display is running
if ! xdpyinfo -display :$DISPLAY_NUM &>/dev/null; then
    echo "Error: Display :$DISPLAY_NUM not running."
    echo "Start VNC first: ./start-vnc.sh"
    exit 1
fi

google-chrome --no-sandbox --disable-gpu --disable-setuid-sandbox "$URL" &
echo "Chrome started on display :$DISPLAY_NUM"
echo "View via VNC or noVNC"
third_party/zeroclaw/scripts/browser/start-vnc.sh (vendored, new executable file, 52 lines)
@@ -0,0 +1,52 @@
#!/bin/bash
# Start virtual display with VNC access for browser GUI
# Usage: ./start-vnc.sh [display_num] [vnc_port] [novnc_port] [resolution]

set -e

DISPLAY_NUM=${1:-99}
VNC_PORT=${2:-5900}
NOVNC_PORT=${3:-6080}
RESOLUTION=${4:-1920x1080x24}

echo "Starting virtual display :$DISPLAY_NUM at $RESOLUTION"

# Kill any existing sessions
pkill -f "Xvfb :$DISPLAY_NUM" 2>/dev/null || true
pkill -f "x11vnc.*:$DISPLAY_NUM" 2>/dev/null || true
pkill -f "websockify.*$NOVNC_PORT" 2>/dev/null || true
sleep 1

# Start Xvfb (virtual framebuffer)
Xvfb :$DISPLAY_NUM -screen 0 $RESOLUTION -ac &
XVFB_PID=$!
sleep 1

# Set DISPLAY
export DISPLAY=:$DISPLAY_NUM

# Start window manager
fluxbox -display :$DISPLAY_NUM 2>/dev/null &
sleep 1

# Start x11vnc
x11vnc -display :$DISPLAY_NUM -rfbport $VNC_PORT -forever -shared -nopw -bg 2>/dev/null
sleep 1

# Start noVNC (web-based VNC client)
websockify --web=/usr/share/novnc $NOVNC_PORT localhost:$VNC_PORT &
NOVNC_PID=$!

echo ""
echo "==================================="
echo "VNC Server started!"
echo "==================================="
echo "VNC Direct: localhost:$VNC_PORT"
echo "noVNC Web:  http://localhost:$NOVNC_PORT/vnc.html"
echo "Display:    :$DISPLAY_NUM"
echo "==================================="
echo ""
echo "To start a browser, run:"
echo "  DISPLAY=:$DISPLAY_NUM google-chrome &"
echo ""
echo "To stop, run: pkill -f 'Xvfb :$DISPLAY_NUM'"
third_party/zeroclaw/scripts/browser/stop-vnc.sh (vendored, new executable file, 11 lines)
@@ -0,0 +1,11 @@
#!/bin/bash
# Stop virtual display and VNC server
# Usage: ./stop-vnc.sh [display_num]

DISPLAY_NUM=${1:-99}

pkill -f "Xvfb :$DISPLAY_NUM" 2>/dev/null || true
pkill -f "x11vnc.*:$DISPLAY_NUM" 2>/dev/null || true
pkill -f "websockify.*6080" 2>/dev/null || true

echo "VNC server stopped"
third_party/zeroclaw/scripts/ci/check_binary_size.sh (vendored, new executable file, 46 lines)
@@ -0,0 +1,46 @@
#!/usr/bin/env bash
# Check binary file size against safeguard thresholds.
#
# Usage: check_binary_size.sh <binary_path> [label]
#
# Arguments:
#   binary_path  Path to the binary to check (required)
#   label        Optional label for step summary (e.g. target triple)
#
# Thresholds:
#   >20MB — hard error (safeguard)
#   >15MB — warning (advisory)
#   >5MB  — warning (target)
#
# Writes to GITHUB_STEP_SUMMARY when the variable is set and label is provided.

set -euo pipefail

BIN="${1:?Usage: check_binary_size.sh <binary_path> [label]}"
LABEL="${2:-}"

if [ ! -f "$BIN" ]; then
    echo "::error::Binary not found at $BIN"
    exit 1
fi

# macOS stat uses -f%z, Linux stat uses -c%s
SIZE=$(stat -f%z "$BIN" 2>/dev/null || stat -c%s "$BIN")
SIZE_MB=$((SIZE / 1024 / 1024))
echo "Binary size: ${SIZE_MB}MB ($SIZE bytes)"

if [ -n "$LABEL" ] && [ -n "${GITHUB_STEP_SUMMARY:-}" ]; then
    echo "### Binary Size: $LABEL" >> "$GITHUB_STEP_SUMMARY"
    echo "- Size: ${SIZE_MB}MB ($SIZE bytes)" >> "$GITHUB_STEP_SUMMARY"
fi

if [ "$SIZE" -gt 20971520 ]; then
    echo "::error::Binary exceeds 20MB safeguard (${SIZE_MB}MB)"
    exit 1
elif [ "$SIZE" -gt 15728640 ]; then
    echo "::warning::Binary exceeds 15MB advisory target (${SIZE_MB}MB)"
elif [ "$SIZE" -gt 5242880 ]; then
    echo "::warning::Binary exceeds 5MB target (${SIZE_MB}MB)"
else
    echo "Binary size within target."
fi
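The byte literals in the script are the documented thresholds expressed in binary megabytes (20 MB = 20971520, 15 MB = 15728640, 5 MB = 5242880 bytes). The classification can be mirrored in a few lines of Python for local checks; a sketch, not part of the CI script:

```python
def classify_size(size_bytes: int) -> str:
    # Mirrors the script's thresholds: hard error above 20MB,
    # advisory warning above 15MB, target warning above 5MB.
    if size_bytes > 20 * 1024 * 1024:
        return "error"
    if size_bytes > 15 * 1024 * 1024:
        return "warning-advisory"
    if size_bytes > 5 * 1024 * 1024:
        return "warning-target"
    return "ok"
```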
third_party/zeroclaw/scripts/ci/collect_changed_links.py (vendored, new executable file, 178 lines)
@@ -0,0 +1,178 @@
#!/usr/bin/env python3

from __future__ import annotations

import argparse
import os
import re
import subprocess
import sys
from pathlib import Path


DOC_PATH_RE = re.compile(r"\.mdx?$")
URL_RE = re.compile(r"https?://[^\s<>'\"]+")
INLINE_LINK_RE = re.compile(r"!?\[[^\]]*\]\(([^)]+)\)")
REF_LINK_RE = re.compile(r"^\s*\[[^\]]+\]:\s*(\S+)")
TRAILING_PUNCTUATION = ").,;:!?]}'\""


def run_git(args: list[str]) -> subprocess.CompletedProcess[str]:
    return subprocess.run(["git", *args], check=False, capture_output=True, text=True)


def commit_exists(rev: str) -> bool:
    if not rev:
        return False
    return run_git(["cat-file", "-e", f"{rev}^{{commit}}"]).returncode == 0


def normalize_docs_files(raw: str) -> list[str]:
    if not raw:
        return []
    files: list[str] = []
    for line in raw.splitlines():
        path = line.strip()
        if path:
            files.append(path)
    return files


def infer_base_sha(provided: str) -> str:
    if commit_exists(provided):
        return provided
    if run_git(["rev-parse", "--verify", "origin/master"]).returncode != 0:
        return ""
    proc = run_git(["merge-base", "origin/master", "HEAD"])
    candidate = proc.stdout.strip()
    return candidate if commit_exists(candidate) else ""


def infer_docs_files(base_sha: str, provided: list[str]) -> list[str]:
    if provided:
        return provided
    if not base_sha:
        return []
    diff = run_git(["diff", "--name-only", base_sha, "HEAD"])
    files: list[str] = []
    for line in diff.stdout.splitlines():
        path = line.strip()
        if not path:
            continue
        if DOC_PATH_RE.search(path) or path in {"LICENSE", ".github/pull_request_template.md"}:
            files.append(path)
    return files


def normalize_link_target(raw_target: str, source_path: str) -> str | None:
    target = raw_target.strip()
    if target.startswith("<") and target.endswith(">"):
        target = target[1:-1].strip()

    if not target:
        return None

    if " " in target:
        target = target.split()[0].strip()

    if not target or target.startswith("#"):
        return None

    lower = target.lower()
    if lower.startswith(("mailto:", "tel:", "javascript:")):
        return None

    if target.startswith(("http://", "https://")):
        return target.rstrip(TRAILING_PUNCTUATION)

    path_without_fragment = target.split("#", 1)[0].split("?", 1)[0]
    if not path_without_fragment:
        return None

    if path_without_fragment.startswith("/"):
        resolved = path_without_fragment.lstrip("/")
    else:
        resolved = os.path.normpath(
            os.path.join(os.path.dirname(source_path) or ".", path_without_fragment)
        )

    if not resolved or resolved == ".":
        return None

    return resolved


def extract_links(text: str, source_path: str) -> list[str]:
    links: list[str] = []
    for match in URL_RE.findall(text):
        url = match.rstrip(TRAILING_PUNCTUATION)
        if url:
            links.append(url)

    for match in INLINE_LINK_RE.findall(text):
        normalized = normalize_link_target(match, source_path)
        if normalized:
            links.append(normalized)

    ref_match = REF_LINK_RE.match(text)
    if ref_match:
        normalized = normalize_link_target(ref_match.group(1), source_path)
        if normalized:
            links.append(normalized)

    return links


def added_lines_for_file(base_sha: str, path: str) -> list[str]:
    if base_sha:
        diff = run_git(["diff", "--unified=0", base_sha, "HEAD", "--", path])
        lines: list[str] = []
        for raw_line in diff.stdout.splitlines():
            if raw_line.startswith("+++"):
                continue
            if raw_line.startswith("+"):
                lines.append(raw_line[1:])
        return lines

    file_path = Path(path)
    if not file_path.is_file():
        return []
    return file_path.read_text(encoding="utf-8", errors="ignore").splitlines()


def main() -> int:
    parser = argparse.ArgumentParser(description="Collect HTTP(S) links added in changed docs lines")
    parser.add_argument("--base", default="", help="Base commit SHA")
    parser.add_argument(
        "--docs-files",
        default="",
        help="Newline-separated docs files list",
    )
    parser.add_argument("--output", required=True, help="Output file for unique URLs")
    args = parser.parse_args()

    base_sha = infer_base_sha(args.base)
    docs_files = infer_docs_files(base_sha, normalize_docs_files(args.docs_files))

    existing_files = [path for path in docs_files if Path(path).is_file()]
    if not existing_files:
        Path(args.output).write_text("", encoding="utf-8")
        print("No docs files available for link collection.")
        return 0

    unique_urls: list[str] = []
    seen: set[str] = set()
    for path in existing_files:
        for line in added_lines_for_file(base_sha, path):
            for link in extract_links(line, path):
                if link not in seen:
                    seen.add(link)
                    unique_urls.append(link)

    Path(args.output).write_text("\n".join(unique_urls) + ("\n" if unique_urls else ""), encoding="utf-8")
    print(f"Collected {len(unique_urls)} added link(s) from {len(existing_files)} docs file(s).")
    return 0


if __name__ == "__main__":
    sys.exit(main())
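The bare-URL path through `extract_links` is worth noting: prose tends to leave closing parens and sentence punctuation stuck to URLs, which is what `TRAILING_PUNCTUATION` strips. The regex and punctuation set below are copied verbatim from the script; the helper wrapping them is illustrative:

```python
import re

URL_RE = re.compile(r"https?://[^\s<>'\"]+")
TRAILING_PUNCTUATION = ").,;:!?]}'\""

def bare_urls(text: str) -> list[str]:
    # Find http(s) URLs, then strip punctuation that markdown prose
    # commonly leaves attached (closing parens, trailing periods, ...).
    return [m.rstrip(TRAILING_PUNCTUATION) for m in URL_RE.findall(text)]
```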
third_party/zeroclaw/scripts/ci/docs_links_gate.sh (vendored, new executable file, 28 lines)
@@ -0,0 +1,28 @@
#!/usr/bin/env bash

set -euo pipefail

BASE_SHA="${BASE_SHA:-}"
DOCS_FILES_RAW="${DOCS_FILES:-}"

LINKS_FILE="$(mktemp)"
trap 'rm -f "$LINKS_FILE"' EXIT

python3 ./scripts/ci/collect_changed_links.py \
    --base "$BASE_SHA" \
    --docs-files "$DOCS_FILES_RAW" \
    --output "$LINKS_FILE"

if [ ! -s "$LINKS_FILE" ]; then
    echo "No added links detected in changed docs lines."
    exit 0
fi

if ! command -v lychee >/dev/null 2>&1; then
    echo "lychee is required to run docs link gate locally."
    echo "Install via: cargo install lychee"
    exit 1
fi

echo "Checking added links with lychee (offline mode)..."
lychee --offline --no-progress --format detailed "$LINKS_FILE"
third_party/zeroclaw/scripts/ci/docs_quality_gate.sh (vendored, new executable file, 186 lines)
@@ -0,0 +1,186 @@
#!/usr/bin/env bash

set -euo pipefail

BASE_SHA="${BASE_SHA:-}"
DOCS_FILES_RAW="${DOCS_FILES:-}"

if [ -z "$BASE_SHA" ] && git rev-parse --verify origin/master >/dev/null 2>&1; then
    BASE_SHA="$(git merge-base origin/master HEAD)"
fi

if [ -z "$DOCS_FILES_RAW" ] && [ -n "$BASE_SHA" ] && git cat-file -e "$BASE_SHA^{commit}" 2>/dev/null; then
    DOCS_FILES_RAW="$(git diff --name-only "$BASE_SHA" HEAD | awk '
        /\.md$/ || /\.mdx$/ || $0 == "LICENSE" || $0 == ".github/pull_request_template.md" {
            print
        }
    ')"
fi

if [ -z "$DOCS_FILES_RAW" ]; then
    echo "No docs files detected; skipping docs quality gate."
    exit 0
fi

if [ -z "$BASE_SHA" ] || ! git cat-file -e "$BASE_SHA^{commit}" 2>/dev/null; then
    echo "BASE_SHA is missing or invalid; falling back to full-file markdown lint."
    BASE_SHA=""
fi

ALL_FILES=()
while IFS= read -r file; do
    if [ -n "$file" ]; then
        ALL_FILES+=("$file")
    fi
done < <(printf '%s\n' "$DOCS_FILES_RAW")

if [ "${#ALL_FILES[@]}" -eq 0 ]; then
    echo "No docs files detected after normalization; skipping docs quality gate."
    exit 0
fi

EXISTING_FILES=()
for file in "${ALL_FILES[@]}"; do
    if [ -f "$file" ]; then
        EXISTING_FILES+=("$file")
    fi
done

if [ "${#EXISTING_FILES[@]}" -eq 0 ]; then
    echo "No existing docs files to lint; skipping docs quality gate."
    exit 0
fi

if command -v npx >/dev/null 2>&1; then
    MD_CMD=(npx --yes markdownlint-cli2@0.20.0)
elif command -v markdownlint-cli2 >/dev/null 2>&1; then
    MD_CMD=(markdownlint-cli2)
else
    echo "markdownlint-cli2 is required (via npx or local binary)."
    exit 1
fi

echo "Linting docs files: ${EXISTING_FILES[*]}"

LINT_OUTPUT_FILE="$(mktemp)"
set +e
"${MD_CMD[@]}" "${EXISTING_FILES[@]}" >"$LINT_OUTPUT_FILE" 2>&1
LINT_EXIT=$?
set -e

if [ "$LINT_EXIT" -eq 0 ]; then
    cat "$LINT_OUTPUT_FILE"
    rm -f "$LINT_OUTPUT_FILE"
    exit 0
fi

if [ -z "$BASE_SHA" ]; then
    cat "$LINT_OUTPUT_FILE"
    rm -f "$LINT_OUTPUT_FILE"
    exit "$LINT_EXIT"
fi

CHANGED_LINES_JSON_FILE="$(mktemp)"
python3 - "$BASE_SHA" "${EXISTING_FILES[@]}" >"$CHANGED_LINES_JSON_FILE" <<'PY'
import json
import re
import subprocess
import sys

base = sys.argv[1]
files = sys.argv[2:]

changed = {}
hunk = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@")

for path in files:
    proc = subprocess.run(
        ["git", "diff", "--unified=0", base, "HEAD", "--", path],
        check=False,
        capture_output=True,
        text=True,
    )
    ranges = []
    for line in proc.stdout.splitlines():
        m = hunk.match(line)
        if not m:
            continue
        start = int(m.group(1))
        count = int(m.group(2) or "1")
        if count > 0:
            ranges.append([start, start + count - 1])
    changed[path] = ranges

print(json.dumps(changed))
PY

FILTERED_OUTPUT_FILE="$(mktemp)"
set +e
python3 - "$LINT_OUTPUT_FILE" "$CHANGED_LINES_JSON_FILE" >"$FILTERED_OUTPUT_FILE" <<'PY'
import json
import re
import sys

lint_file = sys.argv[1]
changed_file = sys.argv[2]

with open(changed_file, "r", encoding="utf-8") as f:
    changed = json.load(f)

line_re = re.compile(r"^(.+?):(\d+)\s+error\s+(MD\d+(?:/[^\s]+)?)\s+(.*)$")

blocking = []
baseline = []
other_lines = []

with open(lint_file, "r", encoding="utf-8") as f:
    for raw_line in f:
        line = raw_line.rstrip("\n")
        m = line_re.match(line)
        if not m:
            other_lines.append(line)
            continue

        path, line_no_s, rule, msg = m.groups()
        line_no = int(line_no_s)
        ranges = changed.get(path, [])

        is_changed_line = any(start <= line_no <= end for start, end in ranges)
        entry = f"{path}:{line_no} {rule} {msg}"
        if is_changed_line:
            blocking.append(entry)
        else:
            baseline.append(entry)

if baseline:
    print("Existing markdown issues outside changed lines (non-blocking):")
    for entry in baseline:
        print(f"  - {entry}")

if blocking:
    print("Markdown issues introduced on changed lines (blocking):")
    for entry in blocking:
        print(f"  - {entry}")
    print(f"Blocking markdown issues: {len(blocking)}")
    sys.exit(1)

if baseline:
    print("No blocking markdown issues on changed lines.")
    sys.exit(0)

for line in other_lines:
    print(line)

if any(line.strip() for line in other_lines):
    print("markdownlint exited non-zero with unclassified output; failing safe.")
    sys.exit(2)

print("No blocking markdown issues on changed lines.")
PY
SCRIPT_EXIT=$?
set -e

cat "$FILTERED_OUTPUT_FILE"

rm -f "$LINT_OUTPUT_FILE" "$CHANGED_LINES_JSON_FILE" "$FILTERED_OUTPUT_FILE"
exit "$SCRIPT_EXIT"
third_party/zeroclaw/scripts/ci/fetch_actions_data.py (vendored, new file, 209 lines)
@@ -0,0 +1,209 @@
#!/usr/bin/env python3
"""Fetch GitHub Actions workflow runs for a given date and summarize costs.

Usage:
    python fetch_actions_data.py [OPTIONS]

Options:
    --date YYYY-MM-DD   Date to query (default: yesterday)
    --mode brief|full   Output mode (default: full)
                        brief: billable minutes/hours table only
                        full: detailed breakdown with per-run list
    --repo OWNER/NAME   Repository (default: zeroclaw-labs/zeroclaw)
    -h, --help          Show this help message
"""

import argparse
import json
import subprocess
from datetime import datetime, timedelta, timezone


def parse_args():
    """Parse command-line arguments."""
    parser = argparse.ArgumentParser(
        description="Fetch GitHub Actions workflow runs and summarize costs.",
    )
    yesterday = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%d")
    parser.add_argument(
        "--date",
        default=yesterday,
        help="Date to query in YYYY-MM-DD format (default: yesterday)",
    )
    parser.add_argument(
        "--mode",
        choices=["brief", "full"],
        default="full",
        help="Output mode: 'brief' for billable hours only, 'full' for detailed breakdown (default: full)",
    )
    parser.add_argument(
        "--repo",
        default="zeroclaw-labs/zeroclaw",
        help="Repository in OWNER/NAME format (default: zeroclaw-labs/zeroclaw)",
    )
    return parser.parse_args()


def fetch_runs(repo, date_str, page=1, per_page=100):
    """Fetch completed workflow runs for a given date."""
    url = (
        f"https://api.github.com/repos/{repo}/actions/runs"
        f"?created={date_str}&per_page={per_page}&page={page}"
    )
    result = subprocess.run(
        ["curl", "-sS", "-H", "Accept: application/vnd.github+json", url],
        capture_output=True, text=True
    )
    return json.loads(result.stdout)


def fetch_jobs(repo, run_id):
    """Fetch jobs for a specific run."""
    url = f"https://api.github.com/repos/{repo}/actions/runs/{run_id}/jobs?per_page=100"
    result = subprocess.run(
        ["curl", "-sS", "-H", "Accept: application/vnd.github+json", url],
        capture_output=True, text=True
    )
    return json.loads(result.stdout)


def parse_duration(started, completed):
    """Return duration in seconds between two ISO timestamps."""
    if not started or not completed:
        return 0
    try:
        s = datetime.fromisoformat(started.replace("Z", "+00:00"))
        c = datetime.fromisoformat(completed.replace("Z", "+00:00"))
        return max(0, (c - s).total_seconds())
    except Exception:
        return 0


def main():
    args = parse_args()
    repo = args.repo
    date_str = args.date
    brief = args.mode == "brief"

    print(f"Fetching workflow runs for {repo} on {date_str}...")
    print("=" * 100)

    all_runs = []
    for page in range(1, 5):  # up to 400 runs
        data = fetch_runs(repo, date_str, page=page)
        runs = data.get("workflow_runs", [])
        if not runs:
            break
        all_runs.extend(runs)
        if len(runs) < 100:
            break

    print(f"Total workflow runs found: {len(all_runs)}")
    print()

    # Group by workflow name
    workflow_stats = {}
    for run in all_runs:
        name = run.get("name", "Unknown")
        event = run.get("event", "unknown")
        conclusion = run.get("conclusion", "unknown")
        run_id = run.get("id")

        if name not in workflow_stats:
            workflow_stats[name] = {
                "count": 0,
                "events": {},
                "conclusions": {},
                "total_job_seconds": 0,
                "total_jobs": 0,
                "run_ids": [],
            }

        workflow_stats[name]["count"] += 1
        workflow_stats[name]["events"][event] = workflow_stats[name]["events"].get(event, 0) + 1
        workflow_stats[name]["conclusions"][conclusion] = workflow_stats[name]["conclusions"].get(conclusion, 0) + 1
        workflow_stats[name]["run_ids"].append(run_id)

    # For each workflow, sample up to 3 runs to get job-level timing
    print("Sampling job-level timing (up to 3 runs per workflow)...")
    print()

    for name, stats in workflow_stats.items():
        sample_ids = stats["run_ids"][:3]
        for run_id in sample_ids:
            jobs_data = fetch_jobs(repo, run_id)
            jobs = jobs_data.get("jobs", [])
            for job in jobs:
                started = job.get("started_at")
                completed = job.get("completed_at")
                duration = parse_duration(started, completed)
                stats["total_job_seconds"] += duration
                stats["total_jobs"] += 1

        # Extrapolate: if we sampled N runs but there are M total, scale up
        sampled = len(sample_ids)
        total = stats["count"]
        if sampled > 0 and sampled < total:
            scale = total / sampled
            stats["estimated_total_seconds"] = stats["total_job_seconds"] * scale
        else:
            stats["estimated_total_seconds"] = stats["total_job_seconds"]

    # Print summary sorted by estimated cost (descending)
    sorted_workflows = sorted(
        workflow_stats.items(),
        key=lambda x: x[1]["estimated_total_seconds"],
        reverse=True
    )

    if brief:
        # Brief mode: compact billable hours table
        print(f"{'Workflow':<40} {'Runs':>5} {'Est.Mins':>9} {'Est.Hours':>10}")
        print("-" * 68)
        grand_total_minutes = 0
        for name, stats in sorted_workflows:
            est_mins = stats["estimated_total_seconds"] / 60
            grand_total_minutes += est_mins
            print(f"{name:<40} {stats['count']:>5} {est_mins:>9.1f} {est_mins/60:>10.2f}")
        print("-" * 68)
        print(f"{'TOTAL':<40} {len(all_runs):>5} {grand_total_minutes:>9.0f} {grand_total_minutes/60:>10.1f}")
        print(f"\nProjected monthly: ~{grand_total_minutes/60*30:.0f} hours")
    else:
        # Full mode: detailed breakdown with per-run list
        print("=" * 100)
        print(f"{'Workflow':<40} {'Runs':>5} {'SampledJobs':>12} {'SampledMins':>12} {'Est.TotalMins':>14} {'Events'}")
        print("-" * 100)

        grand_total_minutes = 0
        for name, stats in sorted_workflows:
            sampled_mins = stats["total_job_seconds"] / 60
            est_total_mins = stats["estimated_total_seconds"] / 60
            grand_total_minutes += est_total_mins
            events_str = ", ".join(f"{k}={v}" for k, v in stats["events"].items())
            conclusions_str = ", ".join(f"{k}={v}" for k, v in stats["conclusions"].items())
            print(
                f"{name:<40} {stats['count']:>5} {stats['total_jobs']:>12} "
|
||||
f"{sampled_mins:>12.1f} {est_total_mins:>14.1f} {events_str}"
|
||||
)
|
||||
print(f"{'':>40} {'':>5} {'':>12} {'':>12} {'':>14} outcomes: {conclusions_str}")
|
||||
|
||||
print("-" * 100)
|
||||
print(f"{'GRAND TOTAL':>40} {len(all_runs):>5} {'':>12} {'':>12} {grand_total_minutes:>14.1f}")
|
||||
print(f"\nEstimated total billable minutes on {date_str}: {grand_total_minutes:.0f} min ({grand_total_minutes/60:.1f} hours)")
|
||||
print()
|
||||
|
||||
# Also show raw run list
|
||||
print("\n" + "=" * 100)
|
||||
print("DETAILED RUN LIST")
|
||||
print("=" * 100)
|
||||
for run in all_runs:
|
||||
name = run.get("name", "Unknown")
|
||||
event = run.get("event", "unknown")
|
||||
conclusion = run.get("conclusion", "unknown")
|
||||
run_id = run.get("id")
|
||||
started = run.get("run_started_at", "?")
|
||||
print(f" [{run_id}] {name:<40} conclusion={conclusion:<12} event={event:<20} started={started}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
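The sample-and-extrapolate step above can be checked with a small worked example. This is a sketch with hypothetical numbers, not part of the script: suppose 3 of a workflow's 10 runs were sampled and their jobs totalled 720 seconds.

```python
# Worked example of the extrapolation used in the sampling step above.
# Hypothetical numbers: 3 of 10 runs sampled, 720 job-seconds observed.
sampled = 3
total = 10
total_job_seconds = 720  # 12 minutes across the sampled runs

if 0 < sampled < total:
    scale = total / sampled
    estimated_total_seconds = total_job_seconds * scale
else:
    estimated_total_seconds = total_job_seconds

print(estimated_total_seconds / 60)  # estimated minutes for all 10 runs
```

Scaling 12 sampled minutes by 10/3 gives an estimate of about 40 minutes for the workflow's full day of runs.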
19
third_party/zeroclaw/scripts/ci/rust_quality_gate.sh
vendored
Executable file
@@ -0,0 +1,19 @@
#!/usr/bin/env bash

set -euo pipefail

MODE="correctness"
if [ "${1:-}" = "--strict" ]; then
  MODE="strict"
fi

echo "==> rust quality: cargo fmt --all -- --check"
cargo fmt --all -- --check

if [ "$MODE" = "strict" ]; then
  echo "==> rust quality: cargo clippy --locked --all-targets -- -D warnings"
  cargo clippy --locked --all-targets -- -D warnings
else
  echo "==> rust quality: cargo clippy --locked --all-targets -- -D clippy::correctness"
  cargo clippy --locked --all-targets -- -D clippy::correctness
fi
237
third_party/zeroclaw/scripts/ci/rust_strict_delta_gate.sh
vendored
Executable file
@@ -0,0 +1,237 @@
#!/usr/bin/env bash

set -euo pipefail

BASE_SHA="${BASE_SHA:-}"
RUST_FILES_RAW="${RUST_FILES:-}"

if [ -z "$BASE_SHA" ] && git rev-parse --verify origin/master >/dev/null 2>&1; then
  BASE_SHA="$(git merge-base origin/master HEAD)"
fi

if [ -z "$BASE_SHA" ] && git rev-parse --verify HEAD~1 >/dev/null 2>&1; then
  BASE_SHA="$(git rev-parse HEAD~1)"
fi

if [ -z "$BASE_SHA" ] || ! git cat-file -e "$BASE_SHA^{commit}" 2>/dev/null; then
  echo "BASE_SHA is missing or invalid for strict delta gate."
  echo "Set BASE_SHA explicitly or ensure origin/master is available."
  exit 1
fi

if [ -z "$RUST_FILES_RAW" ]; then
  RUST_FILES_RAW="$(git diff --name-only "$BASE_SHA" HEAD | awk '/\.rs$/ { print }')"
fi

ALL_FILES=()
while IFS= read -r file; do
  if [ -n "$file" ]; then
    ALL_FILES+=("$file")
  fi
done < <(printf '%s\n' "$RUST_FILES_RAW")

if [ "${#ALL_FILES[@]}" -eq 0 ]; then
  echo "No Rust source files changed; skipping strict delta gate."
  exit 0
fi

EXISTING_FILES=()
for file in "${ALL_FILES[@]}"; do
  if [ -f "$file" ]; then
    EXISTING_FILES+=("$file")
  fi
done

if [ "${#EXISTING_FILES[@]}" -eq 0 ]; then
  echo "No existing changed Rust files to lint; skipping strict delta gate."
  exit 0
fi

echo "Strict delta linting changed Rust files: ${EXISTING_FILES[*]}"

CHANGED_LINES_JSON_FILE="$(mktemp)"
CLIPPY_JSON_FILE="$(mktemp)"
CLIPPY_STDERR_FILE="$(mktemp)"
FILTERED_OUTPUT_FILE="$(mktemp)"
trap 'rm -f "$CHANGED_LINES_JSON_FILE" "$CLIPPY_JSON_FILE" "$CLIPPY_STDERR_FILE" "$FILTERED_OUTPUT_FILE"' EXIT

python3 - "$BASE_SHA" "${EXISTING_FILES[@]}" >"$CHANGED_LINES_JSON_FILE" <<'PY'
import json
import re
import subprocess
import sys

base = sys.argv[1]
files = sys.argv[2:]
hunk = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@")
changed = {}

for path in files:
    proc = subprocess.run(
        ["git", "diff", "--unified=0", base, "HEAD", "--", path],
        check=False,
        capture_output=True,
        text=True,
    )
    ranges = []
    for line in proc.stdout.splitlines():
        match = hunk.match(line)
        if not match:
            continue
        start = int(match.group(1))
        count = int(match.group(2) or "1")
        if count > 0:
            ranges.append([start, start + count - 1])
    changed[path] = ranges

print(json.dumps(changed))
PY

set +e
cargo clippy --quiet --locked --all-targets --message-format=json -- -D warnings >"$CLIPPY_JSON_FILE" 2>"$CLIPPY_STDERR_FILE"
CLIPPY_EXIT=$?
set -e

if [ "$CLIPPY_EXIT" -eq 0 ]; then
  echo "Strict delta gate passed: no strict warnings/errors."
  exit 0
fi

set +e
python3 - "$CLIPPY_JSON_FILE" "$CHANGED_LINES_JSON_FILE" >"$FILTERED_OUTPUT_FILE" <<'PY'
import json
import sys
from pathlib import Path

messages_file = sys.argv[1]
changed_file = sys.argv[2]

with open(changed_file, "r", encoding="utf-8") as f:
    changed = json.load(f)

cwd = Path.cwd().resolve()


def normalize_path(path_value: str) -> str:
    path = Path(path_value)
    if path.is_absolute():
        try:
            return path.resolve().relative_to(cwd).as_posix()
        except Exception:
            return path.as_posix()
    return path.as_posix()


blocking = []
baseline = []
unclassified = []
classified_count = 0

with open(messages_file, "r", encoding="utf-8", errors="ignore") as f:
    for raw_line in f:
        line = raw_line.strip()
        if not line:
            continue

        try:
            payload = json.loads(line)
        except json.JSONDecodeError:
            continue

        if payload.get("reason") != "compiler-message":
            continue

        message = payload.get("message", {})
        level = message.get("level")
        if level not in {"warning", "error"}:
            continue

        code_obj = message.get("code") or {}
        code = code_obj.get("code") if isinstance(code_obj, dict) else None
        text = message.get("message", "")
        spans = message.get("spans") or []

        candidate_spans = [span for span in spans if span.get("is_primary")]
        if not candidate_spans:
            candidate_spans = spans

        span_entries = []
        for span in candidate_spans:
            file_name = span.get("file_name")
            line_start = span.get("line_start")
            line_end = span.get("line_end")
            if not file_name or line_start is None:
                continue
            norm_path = normalize_path(file_name)
            span_entries.append((norm_path, int(line_start), int(line_end or line_start)))

        if not span_entries:
            unclassified.append(f"{level.upper()} {code or '-'} {text}")
            continue

        is_changed_line = False
        best_path, best_line, _ = span_entries[0]
        for path, line_start, line_end in span_entries:
            ranges = changed.get(path)
            if ranges is None:
                continue

            for start, end in ranges:
                if line_end >= start and line_start <= end:
                    is_changed_line = True
                    best_path, best_line = path, line_start
                    break
            if is_changed_line:
                break

        entry = f"{best_path}:{best_line} {level.upper()} {code or '-'} {text}"
        classified_count += 1
        if is_changed_line:
            blocking.append(entry)
        else:
            baseline.append(entry)

if baseline:
    print("Existing strict lint issues outside changed Rust lines (non-blocking):")
    for entry in baseline:
        print(f" - {entry}")

if blocking:
    print("Strict lint issues introduced on changed Rust lines (blocking):")
    for entry in blocking:
        print(f" - {entry}")
    print(f"Blocking strict lint issues: {len(blocking)}")
    sys.exit(1)

if classified_count > 0:
    print("No blocking strict lint issues on changed Rust lines.")
    sys.exit(0)

if unclassified:
    print("Strict lint exited non-zero with unclassified diagnostics; failing safe:")
    for entry in unclassified[:20]:
        print(f" - {entry}")
    sys.exit(2)

print("Strict lint exited non-zero without parsable diagnostics; failing safe.")
sys.exit(2)
PY
FILTER_EXIT=$?
set -e

cat "$FILTERED_OUTPUT_FILE"

if [ "$FILTER_EXIT" -eq 0 ]; then
  if [ -s "$CLIPPY_STDERR_FILE" ]; then
    echo "clippy stderr summary (informational):"
    cat "$CLIPPY_STDERR_FILE"
  fi
  exit 0
fi

if [ -s "$CLIPPY_STDERR_FILE" ]; then
  echo "clippy stderr summary:"
  cat "$CLIPPY_STDERR_FILE"
fi

exit "$FILTER_EXIT"
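The gate's first embedded Python pass maps each changed file to its new-side line ranges by matching `git diff --unified=0` hunk headers. The regex it uses can be exercised standalone; this sketch just lifts that pattern out of the script to show what it extracts:

```python
import re

# Same hunk-header pattern the gate's embedded Python uses to pull
# new-side line ranges out of `git diff --unified=0` output.
hunk = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@")


def new_side_range(header: str):
    """Return [start, end] of the new-side lines a hunk header covers."""
    match = hunk.match(header)
    if not match:
        return None
    start = int(match.group(1))
    count = int(match.group(2) or "1")  # a bare "+7" means one line
    return [start, start + count - 1] if count > 0 else None


print(new_side_range("@@ -0,0 +1,19 @@"))  # [1, 19]
print(new_side_range("@@ -4,2 +7 @@"))     # [7, 7]
```

A clippy diagnostic is then blocking only if its span overlaps one of these ranges, which is what confines strict lints to the lines a PR actually touched.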
223
third_party/zeroclaw/scripts/deploy-rpi.sh
vendored
Executable file
@@ -0,0 +1,223 @@
#!/usr/bin/env bash
# deploy-rpi.sh — cross-compile ZeroClaw for Raspberry Pi and deploy via SSH.
#
# Cross-compilation (pick ONE — the script auto-detects):
#
# Option A — cargo-zigbuild (recommended; works on Apple Silicon + Intel, no Docker)
#   brew install zig
#   cargo install cargo-zigbuild
#   rustup target add aarch64-unknown-linux-gnu
#
# Option B — cross (Docker-based; requires Docker Desktop running)
#   cargo install cross
#
# Usage:
#   RPI_HOST=raspberrypi.local RPI_USER=pi ./scripts/deploy-rpi.sh
#
# Optional env vars:
#   RPI_HOST   — hostname or IP of the Pi (default: raspberrypi.local)
#   RPI_USER   — SSH user on the Pi (default: pi)
#   RPI_PORT   — SSH port (default: 22)
#   RPI_DIR    — remote deployment dir (default: /home/$RPI_USER/zeroclaw)
#   RPI_PASS   — SSH password (uses sshpass) (default: prompt interactively)
#   CROSS_TOOL — force "zigbuild" or "cross" (default: auto-detect)

set -euo pipefail

RPI_HOST="${RPI_HOST:-raspberrypi.local}"
RPI_USER="${RPI_USER:-pi}"
RPI_PORT="${RPI_PORT:-22}"
RPI_DIR="${RPI_DIR:-/home/${RPI_USER}/zeroclaw}"
TARGET="aarch64-unknown-linux-gnu"
FEATURES="hardware,peripheral-rpi"
BINARY="target/${TARGET}/release/zeroclaw"
SSH_OPTS="-p ${RPI_PORT} -o StrictHostKeyChecking=no -o ConnectTimeout=10"
# scp uses -P (uppercase) for port; ssh uses -p (lowercase)
SCP_OPTS="-P ${RPI_PORT} -o StrictHostKeyChecking=no -o ConnectTimeout=10"

# If RPI_PASS is set, wrap ssh/scp with sshpass for non-interactive auth.
SSH_CMD="ssh"
SCP_CMD="scp"
if [[ -n "${RPI_PASS:-}" ]]; then
  if ! command -v sshpass &>/dev/null; then
    echo "ERROR: RPI_PASS is set but sshpass is not installed."
    echo "       brew install hudochenkov/sshpass/sshpass"
    exit 1
  fi
  SSH_CMD="sshpass -p ${RPI_PASS} ssh"
  SCP_CMD="sshpass -p ${RPI_PASS} scp"
fi

echo "==> Building ZeroClaw for Raspberry Pi (${TARGET})"
echo "    Features: ${FEATURES}"
echo "    Target host: ${RPI_USER}@${RPI_HOST}:${RPI_PORT}"
echo ""

# ── 1. Cross-compile — auto-detect best available tool ───────────────────────
# Prefer cargo-zigbuild: it works on Apple Silicon without Docker and avoids
# the rustup-toolchain-install errors that affect cross v0.2.x on arm64 Macs.
_detect_cross_tool() {
  if [[ "${CROSS_TOOL:-}" == "cross" ]]; then
    echo "cross"; return
  fi
  if [[ "${CROSS_TOOL:-}" == "zigbuild" ]]; then
    echo "zigbuild"; return
  fi
  if command -v cargo-zigbuild &>/dev/null && command -v zig &>/dev/null; then
    echo "zigbuild"; return
  fi
  if command -v cross &>/dev/null; then
    echo "cross"; return
  fi
  echo "none"
}

TOOL=$(_detect_cross_tool)

case "${TOOL}" in
  zigbuild)
    echo "==> Using cargo-zigbuild (Zig cross-linker)"
    # Ensure the target sysroot is registered with rustup.
    rustup target add "${TARGET}" 2>/dev/null || true
    cargo zigbuild \
      --target "${TARGET}" \
      --features "${FEATURES}" \
      --release
    ;;
  cross)
    echo "==> Using cross (Docker-based)"
    # Verify Docker is running before handing off — gives a clear error message
    # instead of the confusing rustup-toolchain failure from cross v0.2.x.
    if ! docker info &>/dev/null; then
      echo ""
      echo "ERROR: Docker is not running."
      echo "       Start Docker Desktop and retry, or install cargo-zigbuild instead:"
      echo "         brew install zig && cargo install cargo-zigbuild"
      echo "         rustup target add ${TARGET}"
      exit 1
    fi
    cross build \
      --target "${TARGET}" \
      --features "${FEATURES}" \
      --release
    ;;
  none)
    echo ""
    echo "ERROR: No cross-compilation tool found."
    echo ""
    echo "Install one of the following and retry:"
    echo ""
    echo "  Option A — cargo-zigbuild (recommended; works on Apple Silicon, no Docker):"
    echo "    brew install zig"
    echo "    cargo install cargo-zigbuild"
    echo "    rustup target add ${TARGET}"
    echo ""
    echo "  Option B — cross (requires Docker Desktop running):"
    echo "    cargo install cross"
    echo ""
    exit 1
    ;;
esac

echo ""
echo "==> Build complete: ${BINARY}"
ls -lh "${BINARY}"

# ── 2. Stop running service (if any) so binary can be overwritten ─────────────
echo ""
echo "==> Stopping zeroclaw service (if running)"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
  "sudo systemctl stop zeroclaw 2>/dev/null || true"

# ── 3. Create remote directory ────────────────────────────────────────────────
echo ""
echo "==> Creating remote directory ${RPI_DIR}"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" "mkdir -p ${RPI_DIR}"

# ── 4. Deploy binary ──────────────────────────────────────────────────────────
echo ""
echo "==> Deploying binary to ${RPI_USER}@${RPI_HOST}:${RPI_DIR}/zeroclaw"
${SCP_CMD} ${SCP_OPTS} "${BINARY}" "${RPI_USER}@${RPI_HOST}:${RPI_DIR}/zeroclaw"

# ── 5. Create .env skeleton (if it doesn't exist) ─────────────────────────────
ENV_DEST="${RPI_DIR}/.env"
echo ""
echo "==> Checking for ${ENV_DEST}"
# shellcheck disable=SC2029
if ${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" "[ -f ${ENV_DEST} ]"; then
  echo "    .env already exists — skipping"
else
  echo "    Creating .env skeleton with 600 permissions"
  # shellcheck disable=SC2029
  ${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
    "mkdir -p ${RPI_DIR} && \
     printf '# Set your API key here\nANTHROPIC_API_KEY=sk-ant-\n' > ${ENV_DEST} && \
     chmod 600 ${ENV_DEST}"
  echo "    IMPORTANT: edit ${ENV_DEST} on the Pi and set ANTHROPIC_API_KEY"
fi

# ── 6. Deploy config ──────────────────────────────────────────────────────────
CONFIG_DEST="/home/${RPI_USER}/.zeroclaw/config.toml"
echo ""
echo "==> Deploying config to ${CONFIG_DEST}"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" "mkdir -p /home/${RPI_USER}/.zeroclaw"
# Preserve existing api_key from the remote config if present.
# shellcheck disable=SC2029
EXISTING_API_KEY=$(${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
  "grep -m1 '^api_key' ${CONFIG_DEST} 2>/dev/null || true")
${SCP_CMD} ${SCP_OPTS} "scripts/rpi-config.toml" "${RPI_USER}@${RPI_HOST}:${CONFIG_DEST}"
if [[ -n "${EXISTING_API_KEY}" ]]; then
  echo "    Restoring existing api_key from previous config"
  # shellcheck disable=SC2029
  ${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
    "sed -i 's|^# api_key = .*|${EXISTING_API_KEY}|' ${CONFIG_DEST}"
fi

# ── 7. Deploy and enable systemd service ──────────────────────────────────────
SERVICE_DEST="/etc/systemd/system/zeroclaw.service"
echo ""
echo "==> Installing systemd service (requires sudo on the Pi)"
${SCP_CMD} ${SCP_OPTS} "scripts/zeroclaw.service" "${RPI_USER}@${RPI_HOST}:/tmp/zeroclaw.service"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
  "sudo mv /tmp/zeroclaw.service ${SERVICE_DEST} && \
   sudo systemctl daemon-reload && \
   sudo systemctl enable zeroclaw && \
   sudo systemctl restart zeroclaw && \
   sudo systemctl status zeroclaw --no-pager || true"

# ── 8. Runtime permissions ────────────────────────────────────────────────────
echo ""
echo "==> Granting ${RPI_USER} access to GPIO group"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
  "sudo usermod -aG gpio ${RPI_USER} || true"

# ── 9. Reset ACT LED trigger so ZeroClaw can control it ───────────────────────
echo ""
echo "==> Installing udev rule for ACT LED sysfs access by gpio group"
${SCP_CMD} ${SCP_OPTS} "scripts/99-act-led.rules" "${RPI_USER}@${RPI_HOST}:/tmp/99-act-led.rules"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
  "sudo mv /tmp/99-act-led.rules /etc/udev/rules.d/99-act-led.rules && \
   sudo udevadm control --reload-rules && \
   sudo chgrp gpio /sys/class/leds/ACT/brightness /sys/class/leds/ACT/trigger 2>/dev/null || true && \
   sudo chmod g+w /sys/class/leds/ACT/brightness /sys/class/leds/ACT/trigger 2>/dev/null || true"

echo ""
echo "==> Resetting ACT LED trigger (none)"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
  "echo none | sudo tee /sys/class/leds/ACT/trigger > /dev/null 2>&1 || true"

echo ""
echo "==> Deployment complete!"
echo ""
echo "    ZeroClaw is running at http://${RPI_HOST}:8080"
echo "      POST /api/chat — chat with the agent"
echo "      GET  /health   — health check"
echo ""
echo "    To check logs: ssh ${RPI_USER}@${RPI_HOST} 'journalctl -u zeroclaw -f'"
85
third_party/zeroclaw/scripts/release/cut_release_tag.sh
vendored
Executable file
@@ -0,0 +1,85 @@
#!/usr/bin/env bash
set -euo pipefail

usage() {
  cat <<'USAGE'
Usage: scripts/release/cut_release_tag.sh <tag> [--push]

Create an annotated release tag from the current checkout.

Requirements:
- tag must match vX.Y.Z (optional suffix like -rc.1)
- working tree must be clean
- HEAD must match origin/master
- tag must not already exist locally or on origin

Options:
  --push   Push the tag to origin after creating it
USAGE
}

if [[ $# -lt 1 || $# -gt 2 ]]; then
  usage
  exit 1
fi

TAG="$1"
PUSH_TAG="false"
if [[ $# -eq 2 ]]; then
  if [[ "$2" != "--push" ]]; then
    usage
    exit 1
  fi
  PUSH_TAG="true"
fi

SEMVER_PATTERN='^v[0-9]+\.[0-9]+\.[0-9]+([.-][0-9A-Za-z.-]+)?$'
if [[ ! "$TAG" =~ $SEMVER_PATTERN ]]; then
  echo "error: tag must match vX.Y.Z or vX.Y.Z-suffix (received: $TAG)" >&2
  exit 1
fi

if ! git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  echo "error: run this script inside the git repository" >&2
  exit 1
fi

if ! git diff --quiet || ! git diff --cached --quiet; then
  echo "error: working tree is not clean; commit or stash changes first" >&2
  exit 1
fi

echo "Fetching origin/master and tags..."
git fetch --quiet origin master --tags

HEAD_SHA="$(git rev-parse HEAD)"
MASTER_SHA="$(git rev-parse origin/master)"
if [[ "$HEAD_SHA" != "$MASTER_SHA" ]]; then
  echo "error: HEAD ($HEAD_SHA) is not origin/master ($MASTER_SHA)." >&2
  echo "hint: checkout/update master before cutting a release tag." >&2
  exit 1
fi

if git show-ref --tags --verify --quiet "refs/tags/$TAG"; then
  echo "error: tag already exists locally: $TAG" >&2
  exit 1
fi

if git ls-remote --exit-code --tags origin "refs/tags/$TAG" >/dev/null 2>&1; then
  echo "error: tag already exists on origin: $TAG" >&2
  exit 1
fi

MESSAGE="zeroclaw $TAG"
git tag -a "$TAG" -m "$MESSAGE"
echo "Created annotated tag: $TAG"

if [[ "$PUSH_TAG" == "true" ]]; then
  git push origin "$TAG"
  echo "Pushed tag to origin: $TAG"
  echo "Release Stable workflow will auto-trigger via tag push."
  echo "Monitor: gh workflow view 'Release Stable' --web"
else
  echo "Next step: git push origin $TAG"
  echo "This will auto-trigger the Release Stable workflow (builds, Docker, crates.io, website, Scoop, AUR, Homebrew, tweet)."
fi
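The tag validation above hinges on `SEMVER_PATTERN`, which accepts `vX.Y.Z` plus an optional dot- or dash-led suffix. A quick sketch of the same pattern in Python shows which shapes pass and which are rejected:

```python
import re

# Same pattern cut_release_tag.sh uses to validate release tags.
SEMVER_PATTERN = r"^v[0-9]+\.[0-9]+\.[0-9]+([.-][0-9A-Za-z.-]+)?$"

for tag in ["v1.2.3", "v1.2.3-rc.1", "1.2.3", "v1.2"]:
    ok = re.match(SEMVER_PATTERN, tag) is not None
    print(f"{tag}: {'accepted' if ok else 'rejected'}")
```

The leading `v` and all three numeric components are mandatory, so `1.2.3` and `v1.2` fail, while suffixes such as `-rc.1` satisfy the optional group.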
631
third_party/zeroclaw/scripts/rpi-config.toml
vendored
Normal file
@@ -0,0 +1,631 @@
# ZeroClaw — Raspberry Pi production configuration
#
# Copy this to ~/.zeroclaw/config.toml on the Pi.
# deploy-rpi.sh does this automatically.
#
# API key is loaded from ~/.zeroclaw/.env (EnvironmentFile in systemd).
# Set it there as: ANTHROPIC_API_KEY=your-key-here
# Or set api_key directly below (not recommended for version control).

# api_key = ""
default_provider = "anthropic-custom:https://api.z.ai/api/anthropic"
default_model = "claude-3-5-sonnet-20241022"
default_temperature = 0.4
model_routes = []
embedding_routes = []

[model_providers]

[provider]

[observability]
backend = "none"
runtime_trace_mode = "none"
runtime_trace_path = "state/runtime-trace.jsonl"
runtime_trace_max_entries = 200

[autonomy]
level = "full"
workspace_only = false
allowed_commands = [
    "git",
    "npm",
    "cargo",
    "mkdir",
    "touch",
    "cp",
    "mv",
    "ls",
    "cat",
    "grep",
    "find",
    "echo",
    "pwd",
    "wc",
    "head",
    "tail",
    "date",
]
command_context_rules = []
forbidden_paths = [
    "/etc",
    "/root",
    "/home",
    "/usr",
    "/bin",
    "/sbin",
    "/lib",
    "/opt",
    "/boot",
    "/dev",
    "/proc",
    "/sys",
    "/var",
    "/tmp",
    "/mnt",
    "~/.ssh",
    "~/.gnupg",
    "~/.aws",
    "~/.config",
]
max_actions_per_hour = 100
max_cost_per_day_cents = 1000
require_approval_for_medium_risk = true
block_high_risk_commands = true
shell_env_passthrough = []
allow_sensitive_file_reads = false
allow_sensitive_file_writes = false
auto_approve = [
    "file_read",
    "memory_recall",
]
always_ask = []
allowed_roots = []
non_cli_excluded_tools = [
    "shell",
    "process",
    "file_write",
    "file_edit",
    "git_operations",
    "browser",
    "browser_open",
    "http_request",
    "schedule",
    "cron_add",
    "cron_remove",
    "cron_update",
    "cron_run",
    "memory_store",
    "memory_forget",
    "proxy_config",
    "web_search_config",
    "web_access_config",
    "model_routing_config",
    "channel_ack_config",
    "pushover",
    "composio",
    "delegate",
    "screenshot",
    "image_info",
]
non_cli_approval_approvers = []
non_cli_natural_language_approval_mode = "direct"

[autonomy.non_cli_natural_language_approval_mode_by_channel]

[security]
roles = []

[security.sandbox]
backend = "auto"
firejail_args = []

[security.resources]
max_memory_mb = 512
max_cpu_time_seconds = 60
max_subprocesses = 10
memory_monitoring = true

[security.audit]
enabled = true
log_path = "audit.log"
max_size_mb = 100
sign_events = false

[security.otp]
enabled = true
method = "totp"
token_ttl_secs = 30
cache_valid_secs = 300
gated_actions = [
    "shell",
    "file_write",
    "browser_open",
    "browser",
    "memory_forget",
]
gated_domains = []
gated_domain_categories = []
challenge_delivery = "dm"
challenge_timeout_secs = 120
challenge_max_attempts = 3

[security.estop]
enabled = false
state_file = "~/.zeroclaw/estop-state.json"
require_otp_to_resume = true

[security.syscall_anomaly]
enabled = true
strict_mode = false
alert_on_unknown_syscall = true
max_denied_events_per_minute = 5
max_total_events_per_minute = 120
max_alerts_per_minute = 30
alert_cooldown_secs = 20
log_path = "syscall-anomalies.log"
baseline_syscalls = [
    "read",
    "write",
    "open",
    "openat",
    "close",
    "stat",
    "fstat",
    "newfstatat",
    "lseek",
    "mmap",
    "mprotect",
    "munmap",
    "brk",
    "rt_sigaction",
    "rt_sigprocmask",
    "ioctl",
    "fcntl",
    "access",
    "pipe2",
    "dup",
    "dup2",
    "dup3",
    "epoll_create1",
    "epoll_ctl",
    "epoll_wait",
    "poll",
    "ppoll",
    "select",
    "futex",
    "clock_gettime",
    "nanosleep",
    "getpid",
    "gettid",
    "set_tid_address",
    "set_robust_list",
    "clone",
    "clone3",
    "fork",
    "execve",
    "wait4",
    "exit",
    "exit_group",
    "socket",
    "connect",
    "accept",
    "accept4",
    "listen",
    "sendto",
    "recvfrom",
    "sendmsg",
    "recvmsg",
    "getsockname",
    "getpeername",
    "setsockopt",
    "getsockopt",
    "getrandom",
    "statx",
]

[security.perplexity_filter]
enable_perplexity_filter = false
perplexity_threshold = 18.0
suffix_window_chars = 64
min_prompt_chars = 32
symbol_ratio_threshold = 0.2

[security.outbound_leak_guard]
enabled = true
action = "redact"
sensitivity = 0.7

[security.url_access]
block_private_ip = true
allow_cidrs = []
allow_domains = []
allow_loopback = false
require_first_visit_approval = false
enforce_domain_allowlist = false
domain_allowlist = []
domain_blocklist = []
approved_domains = []

[runtime]
kind = "native"

[runtime.docker]
image = "alpine:3.20"
network = "none"
memory_limit_mb = 512
cpu_limit = 1.0
read_only_rootfs = true
mount_workspace = true
allowed_workspace_roots = []

[runtime.wasm]
tools_dir = "tools/wasm"
fuel_limit = 1000000
memory_limit_mb = 64
max_module_size_mb = 50
allow_workspace_read = false
allow_workspace_write = false
allowed_hosts = []

[runtime.wasm.security]
require_workspace_relative_tools_dir = true
reject_symlink_modules = true
reject_symlink_tools_dir = true
strict_host_validation = true
capability_escalation_mode = "deny"
module_hash_policy = "warn"

[runtime.wasm.security.module_sha256]

[research]
enabled = false
trigger = "never"
keywords = [
    "find",
    "search",
    "check",
    "investigate",
    "look",
    "research",
    "найди",
    "проверь",
    "исследуй",
    "поищи",
]
min_message_length = 50
max_iterations = 5
show_progress = true
system_prompt_prefix = ""

[reliability]
provider_retries = 2
provider_backoff_ms = 500
fallback_providers = []
api_keys = []
channel_initial_backoff_secs = 2
channel_max_backoff_secs = 60
scheduler_poll_secs = 15
scheduler_retries = 2

[reliability.model_fallbacks]

[scheduler]
enabled = true
max_tasks = 64
max_concurrent = 4

[agent]
compact_context = true
max_tool_iterations = 20
max_history_messages = 50
parallel_tools = false
tool_dispatcher = "auto"
loop_detection_no_progress_threshold = 3
loop_detection_ping_pong_cycles = 2
loop_detection_failure_streak = 3
safety_heartbeat_interval = 5
safety_heartbeat_turn_interval = 10

[agent.session]
backend = "none"
strategy = "per-sender"
ttl_seconds = 3600
max_messages = 50

[agent.teams]
enabled = true
auto_activate = true
max_agents = 32
strategy = "adaptive"
load_window_secs = 120
inflight_penalty = 8
recent_selection_penalty = 2
recent_failure_penalty = 12

[agent.subagents]
enabled = true
auto_activate = true
max_concurrent = 10
strategy = "adaptive"
load_window_secs = 180
inflight_penalty = 10
recent_selection_penalty = 3
recent_failure_penalty = 16
queue_wait_ms = 15000
queue_poll_ms = 200

[skills]
open_skills_enabled = false
trusted_skill_roots = []
allow_scripts = false
prompt_injection_mode = "full"

[query_classification]
enabled = false
rules = []

[heartbeat]
enabled = false
interval_minutes = 30

[cron]
enabled = true
max_run_history = 50

[goal_loop]
enabled = false
interval_minutes = 10
step_timeout_secs = 120
max_steps_per_cycle = 3

[channels_config]
cli = true
message_timeout_secs = 300

[channels_config.webhook]
port = 8080
secret = "mytoken123"

[channels_config.ack_reaction]

[memory]
backend = "sqlite"
auto_save = true
hygiene_enabled = true
archive_after_days = 7
purge_after_days = 30
conversation_retention_days = 30
embedding_provider = "none"
embedding_model = "text-embedding-3-small"
embedding_dimensions = 1536
vector_weight = 0.7
keyword_weight = 0.3
min_relevance_score = 0.4
embedding_cache_size = 10000
chunk_max_tokens = 512
response_cache_enabled = false
response_cache_ttl_minutes = 60
|
||||
response_cache_max_entries = 5000
|
||||
snapshot_enabled = false
|
||||
snapshot_on_hygiene = false
|
||||
auto_hydrate = true
|
||||
sqlite_journal_mode = "wal"
|
||||
|
||||
[memory.qdrant]
|
||||
collection = "zeroclaw_memories"
|
||||
|
||||
[storage.provider.config]
|
||||
provider = ""
|
||||
schema = "public"
|
||||
table = "memories"
|
||||
tls = false
|
||||
|
||||
[tunnel]
|
||||
provider = "none"
|
||||
|
||||
[gateway]
|
||||
port = 8080
|
||||
host = "0.0.0.0"
|
||||
require_pairing = false
|
||||
trusted_ips = ["0.0.0.0/0"]
|
||||
allow_public_bind = true
|
||||
paired_tokens = []
|
||||
pair_rate_limit_per_minute = 10
|
||||
webhook_rate_limit_per_minute = 60
|
||||
trust_forwarded_headers = false
|
||||
rate_limit_max_keys = 10000
|
||||
idempotency_ttl_secs = 300
|
||||
idempotency_max_keys = 10000
|
||||
webhook_secret = "mytoken123"
|
||||
|
||||
[gateway.node_control]
|
||||
enabled = false
|
||||
allowed_node_ids = []
|
||||
|
||||
[composio]
|
||||
enabled = false
|
||||
entity_id = "default"
|
||||
|
||||
[secrets]
|
||||
encrypt = true
|
||||
|
||||
[browser]
|
||||
enabled = false
|
||||
allowed_domains = []
|
||||
browser_open = "default"
|
||||
backend = "agent_browser"
|
||||
auto_backend_priority = []
|
||||
agent_browser_command = "agent-browser"
|
||||
agent_browser_extra_args = []
|
||||
agent_browser_timeout_ms = 30000
|
||||
native_headless = true
|
||||
native_webdriver_url = "http://127.0.0.1:9515"
|
||||
|
||||
[browser.computer_use]
|
||||
endpoint = "http://127.0.0.1:8787/v1/actions"
|
||||
timeout_ms = 15000
|
||||
allow_remote_endpoint = false
|
||||
window_allowlist = []
|
||||
|
||||
[http_request]
|
||||
enabled = false
|
||||
allowed_domains = []
|
||||
max_response_size = 1000000
|
||||
timeout_secs = 30
|
||||
user_agent = "ZeroClaw/1.0"
|
||||
|
||||
[http_request.credential_profiles]
|
||||
|
||||
[multimodal]
|
||||
max_images = 4
|
||||
max_image_size_mb = 5
|
||||
allow_remote_fetch = false
|
||||
|
||||
[web_fetch]
|
||||
enabled = false
|
||||
provider = "fast_html2md"
|
||||
allowed_domains = ["*"]
|
||||
blocked_domains = []
|
||||
max_response_size = 500000
|
||||
timeout_secs = 30
|
||||
user_agent = "ZeroClaw/1.0"
|
||||
|
||||
[web_search]
|
||||
enabled = false
|
||||
provider = "duckduckgo"
|
||||
fallback_providers = []
|
||||
retries_per_provider = 0
|
||||
retry_backoff_ms = 250
|
||||
domain_filter = []
|
||||
language_filter = []
|
||||
exa_search_type = "auto"
|
||||
exa_include_text = false
|
||||
jina_site_filters = []
|
||||
max_results = 5
|
||||
timeout_secs = 15
|
||||
user_agent = "ZeroClaw/1.0"
|
||||
|
||||
[proxy]
|
||||
enabled = false
|
||||
no_proxy = []
|
||||
scope = "zeroclaw"
|
||||
services = []
|
||||
|
||||
[identity]
|
||||
format = "openclaw"
|
||||
extra_files = []
|
||||
|
||||
[cost]
|
||||
enabled = false
|
||||
daily_limit_usd = 10.0
|
||||
monthly_limit_usd = 100.0
|
||||
warn_at_percent = 80
|
||||
allow_override = false
|
||||
|
||||
[cost.prices."anthropic/claude-opus-4-20250514"]
|
||||
input = 15.0
|
||||
output = 75.0
|
||||
|
||||
[cost.prices."openai/gpt-4o"]
|
||||
input = 5.0
|
||||
output = 15.0
|
||||
|
||||
[cost.prices."openai/gpt-4o-mini"]
|
||||
input = 0.15
|
||||
output = 0.6
|
||||
|
||||
[cost.prices."anthropic/claude-sonnet-4-20250514"]
|
||||
input = 3.0
|
||||
output = 15.0
|
||||
|
||||
[cost.prices."openai/o1-preview"]
|
||||
input = 15.0
|
||||
output = 60.0
|
||||
|
||||
[cost.prices."anthropic/claude-3-haiku"]
|
||||
input = 0.25
|
||||
output = 1.25
|
||||
|
||||
[cost.prices."google/gemini-2.0-flash"]
|
||||
input = 0.1
|
||||
output = 0.4
|
||||
|
||||
[cost.prices."anthropic/claude-3.5-sonnet"]
|
||||
input = 3.0
|
||||
output = 15.0
|
||||
|
||||
[cost.prices."google/gemini-1.5-pro"]
|
||||
input = 1.25
|
||||
output = 5.0
|
||||
|
||||
[cost.enforcement]
|
||||
mode = "warn"
|
||||
route_down_model = "hint:fast"
|
||||
reserve_percent = 10
|
||||
|
||||
[economic]
|
||||
enabled = false
|
||||
initial_balance = 1000.0
|
||||
min_evaluation_threshold = 0.6
|
||||
|
||||
[economic.token_pricing]
|
||||
input_price_per_million = 3.0
|
||||
output_price_per_million = 15.0
|
||||
|
||||
[peripherals]
|
||||
enabled = true
|
||||
boards = []
|
||||
|
||||
[agents]
|
||||
|
||||
[coordination]
|
||||
enabled = true
|
||||
lead_agent = "delegate-lead"
|
||||
max_inbox_messages_per_agent = 256
|
||||
max_dead_letters = 256
|
||||
max_context_entries = 512
|
||||
max_seen_message_ids = 4096
|
||||
|
||||
[hooks]
|
||||
enabled = true
|
||||
|
||||
[hooks.builtin]
|
||||
boot_script = false
|
||||
command_logger = false
|
||||
session_memory = false
|
||||
|
||||
[plugins]
|
||||
enabled = true
|
||||
allow = []
|
||||
deny = []
|
||||
load_paths = []
|
||||
|
||||
[plugins.entries]
|
||||
|
||||
[hardware]
|
||||
enabled = true
|
||||
transport = "None"
|
||||
baud_rate = 115200
|
||||
workspace_datasheets = false
|
||||
|
||||
[transcription]
|
||||
enabled = false
|
||||
api_url = "https://api.groq.com/openai/v1/audio/transcriptions"
|
||||
model = "whisper-large-v3-turbo"
|
||||
max_duration_secs = 120
|
||||
|
||||
[agents_ipc]
|
||||
enabled = false
|
||||
db_path = "~/.zeroclaw/agents.db"
|
||||
staleness_secs = 300
|
||||
|
||||
[mcp]
|
||||
enabled = false
|
||||
servers = []
|
||||
|
||||
[wasm]
|
||||
enabled = true
|
||||
memory_limit_mb = 64
|
||||
fuel_limit = 1000000000
|
||||
registry_url = "https://zeromarket.vercel.app/api"
|
||||
22
third_party/zeroclaw/scripts/zeroclaw.service
vendored
Normal file
@@ -0,0 +1,22 @@
[Unit]
Description=ZeroClaw AI Hardware Agent
Documentation=https://github.com/zeroclaw/zeroclaw
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=pi
SupplementaryGroups=gpio spi i2c
WorkingDirectory=/home/pi/zeroclaw
ExecStart=/home/pi/zeroclaw/zeroclaw gateway --host 0.0.0.0 --port 8080
Restart=on-failure
RestartSec=5
EnvironmentFile=/home/pi/zeroclaw/.env
Environment=RUST_LOG=info

# Expand ~ in config path
Environment=HOME=/home/pi

[Install]
WantedBy=multi-user.target
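`deploy-rpi.sh` installs the unit and udev rule for you; if you need to do it by hand, the manual steps look roughly like this (a sketch, assuming the files have already been copied to the Pi and you are running on the Pi itself):

```bash
# Install the systemd unit and the ACT LED udev rule.
sudo cp zeroclaw.service /etc/systemd/system/zeroclaw.service
sudo cp 99-act-led.rules /etc/udev/rules.d/99-act-led.rules

# Re-read udev rules so the LED permissions apply without a reboot.
sudo udevadm control --reload-rules && sudo udevadm trigger

# Enable and start the service, then check it came up.
sudo systemctl daemon-reload
sudo systemctl enable --now zeroclaw
systemctl status zeroclaw --no-pager
```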