feat: refactor sgclaw around zeroclaw compat runtime

Author: zyl
Date: 2026-03-26 16:23:31 +08:00
Parent: bca5b75801
Commit: ff0771a83f
1059 changed files with 409460 additions and 23 deletions

Cargo.lock (generated, 2118 lines): file diff suppressed because it is too large

Cargo.toml

@@ -4,6 +4,10 @@ version = "0.1.0"
edition = "2021"
[dependencies]
anyhow = "1"
async-trait = "0.1"
chrono = { version = "0.4", default-features = false, features = ["clock"] }
futures-util = "0.3"
hex = "0.4"
hmac = "0.12"
reqwest = { version = "0.12", default-features = false, features = ["blocking", "json", "rustls-tls"] }
@@ -11,4 +15,6 @@ serde = { version = "1", features = ["derive"] }
serde_json = "1"
sha2 = "0.10"
thiserror = "1"
tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "macros"] }
uuid = { version = "1", features = ["v4"] }
zeroclaw = { package = "zeroclawlabs", path = "third_party/zeroclaw", default-features = false }

README.md

@@ -16,6 +16,26 @@ sgClaw project repository.
```bash
cargo test
cargo test --test planner_test -q
cargo test --test agent_runtime_test -q
node --test tools/browser_smoke/fake_deepseek_server.test.mjs
node tools/browser_smoke/run_deepseek_browser_smoke.mjs
cargo run
bash frontend/sgClaw验证/serve.sh
```
## Browser-side DeepSeek smoke
On a working SuperRPA browser build directory, the following combination verifies that the browser-side `sgclaw` really goes through the ZeroClaw/DeepSeek compat runtime instead of falling back to the local planner:
```bash
python3 /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py \
--manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
--out /home/zyl/projects/superRpa/src/out/KylinRelease/sgclaw
node tools/browser_smoke/run_deepseek_browser_smoke.mjs
```
The wrapper will:
- start a local fake DeepSeek server
- inject `DEEPSEEK_API_KEY` / `DEEPSEEK_BASE_URL` / `DEEPSEEK_MODEL`
- invoke the existing `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs`
- after the smoke passes, additionally confirm that the fake server actually received both the Baidu and Zhihu provider request groups


@@ -0,0 +1,134 @@
# DeepSeek Browser Smoke Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Add a repo-local verification path that exercises the browser-delivered `sgclaw` binary through the ZeroClaw/DeepSeek compat runtime without requiring a real DeepSeek account.
**Architecture:** Keep the existing SuperRPA browser smoke script unchanged. Add a small sgClaw-owned helper module that behaves like a fake OpenAI-compatible DeepSeek server, plus a runner script that starts that server, injects `DEEPSEEK_*` into the browser process environment, and delegates the actual browser/UI verification to the existing `sgclaw_chat_smoke.mjs`.
**Tech Stack:** Node.js ESM, Node built-in `node:test`, local HTTP server, Chromium `build_sgclaw.py`, existing SuperRPA `sgclaw_chat_smoke.mjs`.
### Task 1: Add Fake DeepSeek Response Planner
**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tools/browser_smoke/fake_deepseek_server.mjs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tools/browser_smoke/fake_deepseek_server.test.mjs`
**Step 1: Write the failing test**
Add `node:test` coverage that proves the fake server planner:
- returns Baidu tool calls for `打开百度搜索天气`
- returns Zhihu navigate tool calls for `打开知乎搜索天气`
- returns final summaries matching the existing smoke script expectations
- rejects unsupported instructions clearly
**Step 2: Run test to verify it fails**
Run:
```bash
node --test tools/browser_smoke/fake_deepseek_server.test.mjs
```
Expected: FAIL because the helper module does not exist yet.
**Step 3: Implement the minimal helper**
The helper should:
- inspect the latest user message / tool-result phase
- emit OpenAI-compatible `choices[0].message.tool_calls` for the first round
- emit `choices[0].message.content` for the second round
- keep summaries identical to the current smoke assertions
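For orientation, here is a sketch of the two response rounds, expressed with Rust's `serde_json::json!` for brevity (the helper itself is Node ESM; the exact summary string must match whatever the existing smoke script asserts):
```rust
use serde_json::{json, Value};

// Shape sketch only; the real helper is tools/browser_smoke/fake_deepseek_server.mjs.
// Field names follow the OpenAI-compatible chat-completions layout that the
// compat runtime tests in this commit also exercise.
fn fake_deepseek_rounds() -> (Value, Value) {
    // Round 1: the "model" answers with browser_action tool calls.
    let first = json!({
        "choices": [{
            "message": {
                "content": "",
                "tool_calls": [{
                    "id": "call_1",
                    "type": "function",
                    "function": {
                        "name": "browser_action",
                        // Note: "arguments" is a JSON-encoded string, not an object.
                        "arguments": "{\"action\":\"navigate\",\"expected_domain\":\"www.baidu.com\",\"url\":\"https://www.baidu.com\"}"
                    }
                }]
            }
        }]
    });
    // Round 2: after tool results are echoed back, return the final summary;
    // it must stay identical to the current smoke assertions.
    let second = json!({
        "choices": [{ "message": { "content": "已在百度搜索天气" } }]
    });
    (first, second)
}
```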
**Step 4: Run test to verify it passes**
Run:
```bash
node --test tools/browser_smoke/fake_deepseek_server.test.mjs
```
Expected: PASS
### Task 2: Add DeepSeek Smoke Wrapper Script
**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tools/browser_smoke/run_deepseek_browser_smoke.mjs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/README.md`
**Step 1: Write the failing wrapper expectation**
Add a small test or dry-run seam in the helper test that proves the wrapper environment includes:
- `DEEPSEEK_API_KEY`
- `DEEPSEEK_BASE_URL`
- `DEEPSEEK_MODEL`
and points at the fake local server.
**Step 2: Run the targeted test to verify it fails**
Run:
```bash
node --test tools/browser_smoke/fake_deepseek_server.test.mjs
```
Expected: FAIL because no wrapper/env builder exists yet.
**Step 3: Implement the wrapper**
The wrapper should:
- start the fake DeepSeek server
- invoke:
```bash
node /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs
```
- inject `DEEPSEEK_*` into the child environment
- print the child stdout/stderr through
- stop the fake server on exit
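A minimal sketch of the inject-and-delegate step, written in Rust purely for illustration (the real wrapper is Node ESM; the function name and placeholder key are assumptions):
```rust
use std::process::Command;

// Illustrative only: the real wrapper is Node ESM. The fake server from
// Task 1 is assumed to already be running at `base_url`.
fn run_browser_smoke(base_url: &str) -> std::io::Result<std::process::ExitStatus> {
    Command::new("node")
        .arg("/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs")
        // Force the browser-delivered sgclaw down the compat runtime path.
        .env("DEEPSEEK_API_KEY", "fake-key") // any non-empty value works against the fake server
        .env("DEEPSEEK_BASE_URL", base_url)  // points at the local fake DeepSeek server
        .env("DEEPSEEK_MODEL", "deepseek-chat")
        .status() // stdio is inherited, so the child's output prints through
}
```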
**Step 4: Run the targeted test to verify it passes**
Run:
```bash
node --test tools/browser_smoke/fake_deepseek_server.test.mjs
```
Expected: PASS
### Task 3: Verify the Browser-Delivered DeepSeek Path
**Files:**
- Verify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tools/browser_smoke/*`
- Verify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py`
**Step 1: Build the browser-delivered binary from the worktree**
Run:
```bash
python3 /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py \
--manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
--out /home/zyl/projects/superRpa/src/out/KylinRelease/sgclaw
```
Expected: PASS
**Step 2: Run the DeepSeek smoke wrapper**
Run:
```bash
node tools/browser_smoke/run_deepseek_browser_smoke.mjs
```
Expected:
- existing browser smoke passes
- `sgclaw` is forced down the compat runtime path through `DEEPSEEK_*`
- Baidu and Zhihu tasks still complete
**Step 3: Re-run full Rust tests to guard against regressions**
Run:
```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
--manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
--tests
```
Expected: PASS


@@ -0,0 +1,274 @@
# ZeroClaw Core Refactor Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Rebuild `sgClaw` on top of vendored ZeroClaw core while preserving the existing SuperRPA browser pipe protocol, `FunctionsUI` bridge names, and `sgclaw` binary contract.
**Architecture:** Keep `sgclaw` as the compatibility shell and replace its current minimal runtime with a ZeroClaw-based core adapter. Vendor the upstream ZeroClaw workspace into this repository for reproducible builds, then build a `compat` layer that translates `submit_task` / `task_complete` / log events to and from ZeroClaw agent, memory, cron, and tool abstractions. Do not integrate the upstream ZeroClaw gateway in this phase; the future standalone gateway will reuse the same vendored core through a separate entrypoint.
**Tech Stack:** Rust workspace, vendored upstream ZeroClaw (`zeroclawlabs`), current sgClaw pipe protocol and browser tool, DeepSeek via ZeroClaw provider routing, SQLite memory backends, Chromium `run_cargo.py` build flow.
### Task 1: Vendor ZeroClaw Upstream Snapshot
**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/third_party/zeroclaw/**`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/third_party/zeroclaw/VENDORED_FROM.md`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/.gitignore`
**Step 1: Copy the upstream snapshot into the repo**
Source:
```bash
/home/zyl/Downloads/zeroclaw-master.zip
```
Destination:
```bash
/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/third_party/zeroclaw
```
Strip the top-level `zeroclaw-master/` folder so the vendored directory itself is the workspace root.
**Step 2: Record provenance**
Write `third_party/zeroclaw/VENDORED_FROM.md` with:
- upstream repo URL
- upstream default branch (`master`)
- source ZIP filename
- vendoring date
- a note that this copy is used to guarantee offline/reproducible browser builds
**Step 3: Verify the vendor tree exists**
Run:
```bash
find third_party/zeroclaw -maxdepth 2 -name Cargo.toml -o -name README.md
```
Expected: upstream workspace files are present.
### Task 2: Convert sgClaw into a ZeroClaw-Backed Workspace Shell
**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/lib.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/main.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/mod.rs`
**Step 1: Add the vendored ZeroClaw dependency**
Use a local path dependency:
```toml
zeroclaw = { package = "zeroclawlabs", path = "third_party/zeroclaw" }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
```
Do not use a git dependency. Browser builds must not depend on network access.
**Step 2: Preserve the root crate identity**
Keep:
- package name `sgclaw`
- binary name `sgclaw`
- current manifest path used by SuperRPA browser build scripts
This avoids breaking `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py`.
**Step 3: Route the process entrypoint through the compatibility layer**
`src/lib.rs` should keep:
- current handshake
- current `BrowserPipeTool`
- current message loop
But delegate task execution to `compat::runtime`, not directly to the current thin planner/runtime path.
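Condensed, the delegation looks roughly like the sketch below; the fallback helper name is hypothetical, and the full change is in the `src/agent/mod.rs` diff further down:
```rust
use std::path::PathBuf;

use crate::config::DeepSeekSettings;
use crate::pipe::{AgentMessage, BrowserPipeTool, PipeError, Transport};

// Sketch of the refactored SubmitTask handling; legacy_planner_completion is
// a hypothetical name for the transitional fallback described in Task 3.
fn handle_submit_task<T: Transport + 'static>(
    transport: &T,
    browser_tool: &BrowserPipeTool<T>,
    instruction: String,
) -> Result<(), PipeError> {
    let completion = match DeepSeekSettings::from_env() {
        Ok(_) => match crate::compat::runtime::execute_task(
            transport,
            browser_tool.clone(),
            &instruction,
            &std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
        ) {
            Ok(summary) => AgentMessage::TaskComplete { success: true, summary },
            Err(err) => AgentMessage::TaskComplete { success: false, summary: err.to_string() },
        },
        // With no DeepSeek env configured, fall back to the legacy planner path.
        Err(_) => legacy_planner_completion(transport, browser_tool, &instruction),
    };
    transport.send(&completion)
}

// Hypothetical transitional fallback (see Task 3 Step 3); body elided.
fn legacy_planner_completion<T: Transport>(
    _transport: &T,
    _browser_tool: &BrowserPipeTool<T>,
    _instruction: &str,
) -> AgentMessage {
    todo!("rule-based planner fallback")
}
```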
### Task 3: Introduce the sgClaw Compatibility Layer
**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/runtime.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/browser_tool_adapter.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/config_adapter.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/event_bridge.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/memory_adapter.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/agent/mod.rs`
**Step 1: Define the boundary**
`compat::runtime` owns:
- creating the ZeroClaw config/provider/runtime/memory/tool registry
- executing a task from a browser `submit_task`
- translating ZeroClaw progress into current `AgentMessage::LogEntry`
- returning the final summary string for current `task_complete`
`compat::event_bridge` owns all formatting decisions for:
- `[info] ...`
- `[error] ...`
- final summary propagation
**Step 2: Keep the browser protocol unchanged**
Do not change these wire-level contracts:
- `BrowserMessage::SubmitTask`
- `AgentMessage::TaskComplete`
- `AgentMessage::LogEntry`
- `init/init_ack`
The browser side must not require a corresponding protocol change.
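For reference, the field sets those contracts carry, as exercised by the tests in this commit (a reduced sketch; the real definitions live in `src/pipe`, types are approximate, and serde attributes plus remaining variants such as `init/init_ack` are omitted):
```rust
use serde_json::Value;

// Reduced shape reference only; these are the field sets the compat tests
// pattern-match on, not the full wire definitions.
pub enum BrowserMessage {
    SubmitTask { instruction: String },
    Response { seq: u64, success: bool, data: Value, aom_snapshot: Vec<Value>, timing: Timing },
}

pub enum AgentMessage {
    Command { seq: u64, action: Action /* remaining fields elided */ },
    LogEntry { level: String, message: String },
    TaskComplete { success: bool, summary: String },
}

pub struct Timing { pub queue_ms: u64, pub exec_ms: u64 }

pub enum Action { Click, Type, Navigate, GetText }
```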
**Step 3: Retire direct planner ownership from the main path**
`src/agent/mod.rs` should stop owning the main task intelligence flow. The current rule-based planner can remain only as:
- transitional fallback, or
- deterministic test fixture
It must no longer be the primary execution engine.
### Task 4: Adapt BrowserPipeTool into a ZeroClaw Tool
**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/browser_tool_adapter.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/pipe/browser_tool.rs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tests/compat_browser_tool_test.rs`
**Step 1: Write the failing adapter test**
Add a focused test that proves:
- a ZeroClaw tool invocation can issue `navigate`, `type`, `click`, `getText`
- domain validation still flows through current MAC/rules enforcement
- returned observation data includes browser response payload and AOM snapshot
**Step 2: Verify RED**
Run:
```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
--manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
--test compat_browser_tool_test
```
Expected: fail because the adapter does not exist yet.
**Step 3: Implement the adapter**
Wrap the current `BrowserPipeTool` behind ZeroClaw's async `Tool` trait:
- tool name should stay stable and sgClaw-specific, for example `browser_action`
- schema should only expose the currently supported safe actions
- `ToolResult` should include serialized `data`, `aom_snapshot`, `timing`
### Task 5: Build the DeepSeek-Backed ZeroClaw Runtime Path
**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tests/compat_runtime_test.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/runtime.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/config_adapter.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/config/settings.rs`
**Step 1: Write the failing runtime test**
Add a compatibility runtime test that proves:
- when `DEEPSEEK_API_KEY` is configured, sgClaw uses the ZeroClaw provider path
- the runtime can execute a simple mocked `browser_action` sequence
- the final result is returned as current sgClaw `task_complete`
Use a fake provider or deterministic ZeroClaw test seam for RED/GREEN speed.
**Step 2: Verify RED**
Run:
```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
--manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
--test compat_runtime_test
```
Expected: fail because the compatibility runtime is not wired yet.
**Step 3: Implement DeepSeek mapping**
Map current sgClaw env/config into ZeroClaw provider config:
- `DEEPSEEK_API_KEY`
- `DEEPSEEK_BASE_URL`
- `DEEPSEEK_MODEL`
DeepSeek should be routed as an OpenAI-compatible provider under ZeroClaw, not through the old local `DeepSeekProvider`.
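Condensed from `build_provider` in `src/compat/runtime.rs` later in this commit: when a base URL is configured, the `deepseek` provider name is rewritten into ZeroClaw's `custom:<url>` routing so the OpenAI-compatible endpoint is called directly:
```rust
// Condensed from build_provider() in src/compat/runtime.rs below.
fn resolved_provider_name(default_provider: &str, api_url: Option<&str>) -> String {
    if default_provider == "deepseek" {
        if let Some(url) = api_url.map(str::trim).filter(|url| !url.is_empty()) {
            // DEEPSEEK_BASE_URL set: route through ZeroClaw's custom
            // OpenAI-compatible provider instead of the builtin route.
            return format!("custom:{url}");
        }
    }
    default_provider.to_string()
}
```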
### Task 6: Introduce Memory and Cron Through the Compatibility Core
**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/config_adapter.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/memory_adapter.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tests/compat_memory_test.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tests/compat_cron_test.rs`
**Step 1: Memory**
Configure a workspace-local ZeroClaw memory backend suitable for browser embedding:
- default to SQLite
- keep storage under sgClaw-owned data path
- avoid enabling unrelated gateway/channel storage
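A minimal usage sketch, mirroring `tests/compat_memory_test.rs` later in this commit (the workspace path here is an arbitrary example):
```rust
use std::path::Path;

use sgclaw::compat::config_adapter::build_zeroclaw_config_from_settings;
use sgclaw::compat::memory_adapter::{brain_db_path, build_memory};
use sgclaw::config::DeepSeekSettings;
use zeroclaw::memory::MemoryCategory;

// Mirrors the compat memory test: the SQLite-backed store lives under the
// sgClaw-owned .sgclaw-zeroclaw-workspace directory.
async fn memory_smoke(settings: &DeepSeekSettings) -> anyhow::Result<()> {
    let config = build_zeroclaw_config_from_settings(Path::new("/tmp/sgclaw-demo"), settings);
    let memory = build_memory(&config)?;
    memory
        .store("weather", "remember today's weather workflow", MemoryCategory::Conversation, None)
        .await?;
    assert_eq!(memory.count().await?, 1);
    assert!(brain_db_path(&config.workspace_dir).exists());
    Ok(())
}
```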
**Step 2: Cron**
Expose ZeroClaw cron internally, but do not yet bind it to the browser UI.
This phase only requires:
- creating validated agent jobs
- listing/running due jobs in tests
The future standalone gateway will surface a management UI for cron.
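A minimal usage sketch, following `tests/compat_cron_test.rs` later in this commit:
```rust
use chrono::Duration;
use sgclaw::compat::cron_adapter::{add_agent_job, list_runs, run_due_jobs};
use zeroclaw::config::Config as ZeroClawConfig;
use zeroclaw::cron::Schedule;

// Mirrors the compat cron test: create a validated agent job, then run
// whatever is due and record the outcome.
async fn cron_smoke(config: &ZeroClawConfig) -> anyhow::Result<()> {
    let job = add_agent_job(
        config,
        Some("search-weather".to_string()),
        Schedule::Every { every_ms: 1 },
        "打开百度搜索天气",
        Some(vec!["browser_action".to_string()]),
    )?;
    let results = run_due_jobs(config, job.next_run + Duration::milliseconds(1), |job| {
        let prompt = job.prompt.clone().unwrap_or_default();
        async move { Ok::<String, anyhow::Error>(format!("ran {prompt}")) }
    })
    .await?;
    assert_eq!(results.len(), 1);
    assert!(results[0].success);
    assert_eq!(list_runs(config, &job.id, 10)?.len(), 1);
    Ok(())
}
```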
### Task 7: Verification and Browser Integration
**Files:**
- Verify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tests/*.rs`
- Verify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py`
- Verify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs`
**Step 1: Run the full Rust test baseline**
Run:
```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
--manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
--tests
```
Expected: current protocol/tool/planner compatibility tests still pass or are consciously replaced with equivalent compat tests.
**Step 2: Build the browser-delivered binary from the worktree**
Run:
```bash
python3 /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py \
--manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
--out /home/zyl/projects/superRpa/src/out/KylinRelease/sgclaw
```
Expected: the compatibility-shell binary is produced at the same output path as today.
**Step 3: Run browser smoke**
Run:
```bash
node /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs
```
Expected:
- browser protocol still starts and stops correctly
- Baidu task still succeeds
- Zhihu task still succeeds
- no browser-side API/bridge changes are required
### Non-Goals for This Refactor
- Do not replace the current SuperRPA browser protocol with ZeroClaw gateway protocols.
- Do not expose the upstream ZeroClaw web dashboard inside FunctionsUI.
- Do not ship the standalone gateway in this phase.
- Do not migrate browser-side code to a new transport.
### Phase 2 After This Refactor
After this compatibility refactor is stable:
- add a separate `gateway` crate or binary that uses the same vendored ZeroClaw core
- expose memory/cron/agent management there
- keep browser-side `sgclaw` as a thin local execution shell

resources/rules.json

@@ -1,13 +1,15 @@
 {
   "version": "1.0",
-  "demo_only_domains": ["baidu.com", "www.baidu.com"],
+  "demo_only_domains": ["baidu.com", "www.baidu.com", "zhihu.com", "www.zhihu.com"],
   "domains": {
     "allowed": [
       "oa.example.com",
       "erp.example.com",
       "hr.example.com",
       "baidu.com",
-      "www.baidu.com"
+      "www.baidu.com",
+      "zhihu.com",
+      "www.zhihu.com"
     ]
   },
   "pipe_actions": {

src/agent/mod.rs

@@ -1,7 +1,9 @@
 pub mod planner;
 pub mod runtime;
-use crate::llm::DeepSeekProvider;
+use std::path::PathBuf;
+use crate::config::DeepSeekSettings;
 use crate::pipe::{AgentMessage, BrowserMessage, BrowserPipeTool, PipeError, Transport};
 pub fn execute_task<T: Transport>(
@@ -34,19 +36,19 @@ pub fn execute_task<T: Transport>(
     Ok(plan.summary)
 }
-pub fn handle_browser_message<T: Transport>(
+pub fn handle_browser_message<T: Transport + 'static>(
     transport: &T,
     browser_tool: &BrowserPipeTool<T>,
     message: BrowserMessage,
 ) -> Result<(), PipeError> {
     match message {
         BrowserMessage::SubmitTask { instruction } => {
-            let completion = match DeepSeekProvider::from_env() {
-                Ok(provider) => match runtime::execute_task_with_provider(
+            let completion = match DeepSeekSettings::from_env() {
+                Ok(_) => match crate::compat::runtime::execute_task(
                     transport,
-                    browser_tool,
-                    &provider,
+                    browser_tool.clone(),
                     &instruction,
+                    &std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
                 ) {
                     Ok(summary) => AgentMessage::TaskComplete {
                         success: true,

src/agent/planner.rs

@@ -1,3 +1,4 @@
+use reqwest::Url;
 use serde_json::{json, Value};
 use thiserror::Error;
@@ -7,6 +8,8 @@ const BAIDU_URL: &str = "https://www.baidu.com";
 const BAIDU_DOMAIN: &str = "www.baidu.com";
 const BAIDU_INPUT_SELECTOR: &str = "#kw";
 const BAIDU_SEARCH_BUTTON_SELECTOR: &str = "#su";
+const ZHIHU_URL: &str = "https://www.zhihu.com/search";
+const ZHIHU_DOMAIN: &str = "www.zhihu.com";
 #[derive(Debug, Clone, PartialEq)]
 pub struct PlannedStep {
@@ -32,17 +35,38 @@ pub enum PlannerError {
 pub fn plan_instruction(instruction: &str) -> Result<TaskPlan, PlannerError> {
     let trimmed = instruction.trim();
-    let query = trimmed
-        .strip_prefix("打开百度搜索")
-        .or_else(|| trimmed.strip_prefix("打开百度并搜索"))
-        .ok_or_else(|| PlannerError::UnsupportedInstruction(trimmed.to_string()))?
-        .trim();
+    if let Some(query) = extract_query(trimmed, &["打开百度搜索", "打开百度并搜索"])? {
+        return Ok(plan_baidu_search(query));
+    }
+    if let Some(query) = extract_query(trimmed, &["打开知乎搜索", "打开知乎并搜索"])? {
+        return Ok(plan_zhihu_search(query));
+    }
+    Err(PlannerError::UnsupportedInstruction(trimmed.to_string()))
+}
+fn extract_query<'a>(
+    instruction: &'a str,
+    prefixes: &[&str],
+) -> Result<Option<&'a str>, PlannerError> {
+    let Some(query) = prefixes
+        .iter()
+        .find_map(|prefix| instruction.strip_prefix(prefix))
+    else {
+        return Ok(None);
+    };
+    let query = query.trim();
     if query.is_empty() {
         return Err(PlannerError::MissingQuery);
     }
-    Ok(TaskPlan {
+    Ok(Some(query))
+}
+fn plan_baidu_search(query: &str) -> TaskPlan {
+    TaskPlan {
         summary: format!("已在百度搜索{query}"),
         steps: vec![
             PlannedStep {
@@ -68,5 +92,21 @@ pub fn plan_instruction(instruction: &str) -> Result<TaskPlan, PlannerError> {
             log_message: format!("click {BAIDU_SEARCH_BUTTON_SELECTOR}"),
         },
     ],
-    })
+    }
 }
+fn plan_zhihu_search(query: &str) -> TaskPlan {
+    let url = Url::parse_with_params(ZHIHU_URL, &[("type", "content"), ("q", query)])
+        .expect("valid Zhihu search URL");
+    let url: String = url.into();
+    TaskPlan {
+        summary: format!("已在知乎搜索{query}"),
+        steps: vec![PlannedStep {
+            action: Action::Navigate,
+            params: json!({ "url": url }),
+            expected_domain: ZHIHU_DOMAIN.to_string(),
+            log_message: format!("navigate {url}"),
+        }],
+    }
+}

src/compat/browser_tool_adapter.rs

@@ -0,0 +1,156 @@
use async_trait::async_trait;
use serde_json::{json, Map, Value};
use zeroclaw::tools::{Tool, ToolResult};
use crate::pipe::{Action, BrowserPipeTool, Transport};
pub const BROWSER_ACTION_TOOL_NAME: &str = "browser_action";
pub struct ZeroClawBrowserTool<T: Transport> {
browser_tool: BrowserPipeTool<T>,
}
impl<T: Transport> ZeroClawBrowserTool<T> {
pub fn new(browser_tool: BrowserPipeTool<T>) -> Self {
Self { browser_tool }
}
}
#[async_trait]
impl<T: Transport + 'static> Tool for ZeroClawBrowserTool<T> {
fn name(&self) -> &str {
BROWSER_ACTION_TOOL_NAME
}
fn description(&self) -> &str {
"Execute browser actions in SuperRPA through the existing sgClaw pipe protocol."
}
fn parameters_schema(&self) -> Value {
json!({
"type": "object",
"required": ["action", "expected_domain"],
"properties": {
"action": {
"type": "string",
"enum": ["click", "type", "navigate", "getText"]
},
"expected_domain": {
"type": "string"
},
"selector": {
"type": "string"
},
"text": {
"type": "string"
},
"url": {
"type": "string"
},
"clear_first": {
"type": "boolean"
}
}
})
}
async fn execute(&self, args: Value) -> anyhow::Result<ToolResult> {
let request = match parse_browser_action_request(args) {
Ok(request) => request,
Err(err) => return Ok(failed_tool_result(err.to_string())),
};
let result = match self.browser_tool.invoke(
request.action,
request.params,
&request.expected_domain,
) {
Ok(result) => result,
Err(err) => return Ok(failed_tool_result(err.to_string())),
};
let output = serde_json::to_string(&json!({
"seq": result.seq,
"success": result.success,
"data": result.data,
"aom_snapshot": result.aom_snapshot,
"timing": result.timing
}))?;
Ok(ToolResult {
success: result.success,
output,
error: (!result.success).then(|| "browser action returned success=false".to_string()),
})
}
}
struct BrowserActionRequest {
action: Action,
expected_domain: String,
params: Value,
}
fn parse_browser_action_request(args: Value) -> Result<BrowserActionRequest, BrowserActionAdapterError> {
let mut args = match args {
Value::Object(args) => args,
other => {
return Err(BrowserActionAdapterError::InvalidArguments(format!(
"expected object arguments, got {other}"
)))
}
};
let action_name = take_required_string(&mut args, "action")?;
let expected_domain = take_required_string(&mut args, "expected_domain")?;
let action = parse_action(&action_name)?;
Ok(BrowserActionRequest {
action,
expected_domain,
params: Value::Object(args),
})
}
fn parse_action(action_name: &str) -> Result<Action, BrowserActionAdapterError> {
match action_name {
"click" => Ok(Action::Click),
"type" => Ok(Action::Type),
"navigate" => Ok(Action::Navigate),
"getText" => Ok(Action::GetText),
other => Err(BrowserActionAdapterError::UnsupportedAction(
other.to_string(),
)),
}
}
fn take_required_string(
args: &mut Map<String, Value>,
key: &'static str,
) -> Result<String, BrowserActionAdapterError> {
match args.remove(key) {
Some(Value::String(value)) if !value.trim().is_empty() => Ok(value),
Some(other) => Err(BrowserActionAdapterError::InvalidArguments(format!(
"{key} must be a non-empty string, got {other}"
))),
None => Err(BrowserActionAdapterError::MissingField(key)),
}
}
fn failed_tool_result(error: String) -> ToolResult {
ToolResult {
success: false,
output: String::new(),
error: Some(error),
}
}
#[derive(Debug, thiserror::Error)]
enum BrowserActionAdapterError {
#[error("unsupported action: {0}")]
UnsupportedAction(String),
#[error("missing required field: {0}")]
MissingField(&'static str),
#[error("invalid tool arguments: {0}")]
InvalidArguments(String),
}

src/compat/config_adapter.rs

@@ -0,0 +1,38 @@
use std::path::{Path, PathBuf};
use zeroclaw::Config as ZeroClawConfig;
use crate::compat::cron_adapter::configure_embedded_cron;
use crate::compat::memory_adapter::configure_embedded_memory;
use crate::config::DeepSeekSettings;
const SGCLAW_ZEROCLAW_WORKSPACE_DIR: &str = ".sgclaw-zeroclaw-workspace";
pub fn build_zeroclaw_config(workspace_root: &Path) -> Result<ZeroClawConfig, crate::config::ConfigError> {
let settings = DeepSeekSettings::from_env()?;
Ok(build_zeroclaw_config_from_settings(
workspace_root,
&settings,
))
}
pub fn build_zeroclaw_config_from_settings(
workspace_root: &Path,
settings: &DeepSeekSettings,
) -> ZeroClawConfig {
let workspace_dir = zeroclaw_workspace_dir(workspace_root);
let mut config = ZeroClawConfig::default();
config.workspace_dir = workspace_dir.clone();
config.config_path = workspace_dir.join("config.toml");
config.default_provider = Some("deepseek".to_string());
config.default_model = Some(settings.model.clone());
config.api_key = Some(settings.api_key.clone());
config.api_url = Some(settings.base_url.clone());
configure_embedded_memory(&mut config);
configure_embedded_cron(&mut config);
config
}
pub fn zeroclaw_workspace_dir(workspace_root: &Path) -> PathBuf {
workspace_root.join(SGCLAW_ZEROCLAW_WORKSPACE_DIR)
}

src/compat/cron_adapter.rs

@@ -0,0 +1,98 @@
use std::future::Future;
use chrono::{DateTime, Utc};
use zeroclaw::config::Config as ZeroClawConfig;
use zeroclaw::cron::{self, CronJob, CronRun, JobType, Schedule, SessionTarget};
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct CronExecutionResult {
pub job_id: String,
pub success: bool,
pub output: String,
}
pub fn configure_embedded_cron(config: &mut ZeroClawConfig) {
config.cron.enabled = true;
config.cron.catch_up_on_startup = false;
config.scheduler.enabled = false;
config.scheduler.max_concurrent = 1;
config.scheduler.max_tasks = config.scheduler.max_tasks.max(1);
}
pub fn add_agent_job(
config: &ZeroClawConfig,
name: Option<String>,
schedule: Schedule,
prompt: &str,
allowed_tools: Option<Vec<String>>,
) -> anyhow::Result<CronJob> {
cron::add_agent_job(
config,
name,
schedule,
prompt,
SessionTarget::Isolated,
None,
None,
false,
allowed_tools,
)
}
pub fn list_jobs(config: &ZeroClawConfig) -> anyhow::Result<Vec<CronJob>> {
cron::list_jobs(config)
}
pub fn list_runs(
config: &ZeroClawConfig,
job_id: &str,
limit: usize,
) -> anyhow::Result<Vec<CronRun>> {
cron::list_runs(config, job_id, limit)
}
pub async fn run_due_jobs<F, Fut>(
config: &ZeroClawConfig,
now: DateTime<Utc>,
mut runner: F,
) -> anyhow::Result<Vec<CronExecutionResult>>
where
F: FnMut(&CronJob) -> Fut,
Fut: Future<Output = anyhow::Result<String>>,
{
let jobs = cron::due_jobs(config, now)?;
let mut results = Vec::with_capacity(jobs.len());
for job in jobs {
if !matches!(job.job_type, JobType::Agent) {
anyhow::bail!("unsupported cron job type in sgclaw compat: {:?}", job.job_type);
}
let started_at = Utc::now();
let (success, output) = match runner(&job).await {
Ok(output) => (true, output),
Err(err) => (false, err.to_string()),
};
let finished_at = Utc::now();
let duration_ms = (finished_at - started_at).num_milliseconds();
cron::record_run(
config,
&job.id,
started_at,
finished_at,
if success { "ok" } else { "error" },
Some(&output),
duration_ms,
)?;
cron::reschedule_after_run(config, &job, success, &output)?;
results.push(CronExecutionResult {
job_id: job.id,
success,
output,
});
}
Ok(results)
}

src/compat/event_bridge.rs

@@ -0,0 +1,63 @@
use serde_json::Value;
use zeroclaw::agent::TurnEvent;
use crate::pipe::AgentMessage;
pub fn log_entry_for_turn_event(event: &TurnEvent) -> Option<AgentMessage> {
match event {
TurnEvent::ToolCall { name, args } => Some(AgentMessage::LogEntry {
level: "info".to_string(),
message: format_tool_call(name, args),
}),
TurnEvent::ToolResult { output, .. } if is_tool_error(output) => Some(AgentMessage::LogEntry {
level: "error".to_string(),
message: output.trim_start_matches("Error: ").to_string(),
}),
_ => None,
}
}
fn format_tool_call(name: &str, args: &Value) -> String {
if name != "browser_action" {
return format!("call {name}");
}
let action = args
.get("action")
.and_then(Value::as_str)
.unwrap_or("unknown");
match action {
"navigate" => {
let url = args.get("url").and_then(Value::as_str).unwrap_or("<missing-url>");
format!("navigate {url}")
}
"type" => {
let text = args.get("text").and_then(Value::as_str).unwrap_or("");
let selector = args
.get("selector")
.and_then(Value::as_str)
.unwrap_or("<missing-selector>");
format!("type {text} into {selector}")
}
"click" => {
let selector = args
.get("selector")
.and_then(Value::as_str)
.unwrap_or("<missing-selector>");
format!("click {selector}")
}
"getText" => {
let selector = args
.get("selector")
.and_then(Value::as_str)
.unwrap_or("<missing-selector>");
format!("getText {selector}")
}
other => format!("browser_action {other}"),
}
}
fn is_tool_error(output: &str) -> bool {
output.starts_with("Error:")
}

src/compat/memory_adapter.rs

@@ -0,0 +1,30 @@
use std::path::{Path, PathBuf};
use zeroclaw::config::Config as ZeroClawConfig;
use zeroclaw::memory::{self, Memory};
pub fn configure_embedded_memory(config: &mut ZeroClawConfig) {
config.memory.backend = "sqlite".to_string();
config.memory.embedding_provider = "none".to_string();
config.memory.response_cache_enabled = false;
config.memory.snapshot_enabled = false;
config.memory.snapshot_on_hygiene = false;
config.storage.provider.config.provider.clear();
config.storage.provider.config.db_url = None;
config.storage.provider.config.connect_timeout_secs = None;
}
pub fn build_memory(config: &ZeroClawConfig) -> anyhow::Result<Box<dyn Memory>> {
memory::create_memory_with_storage_and_routes(
&config.memory,
&config.embedding_routes,
Some(&config.storage.provider.config),
&config.workspace_dir,
config.api_key.as_deref(),
)
}
pub fn brain_db_path(workspace_dir: &Path) -> PathBuf {
workspace_dir.join("memory").join("brain.db")
}

src/compat/mod.rs (new file, 6 lines)

@@ -0,0 +1,6 @@
pub mod browser_tool_adapter;
pub mod config_adapter;
pub mod cron_adapter;
pub mod event_bridge;
pub mod memory_adapter;
pub mod runtime;

src/compat/runtime.rs (new file, 197 lines)

@@ -0,0 +1,197 @@
use std::path::Path;
use std::sync::Arc;
use async_trait::async_trait;
use futures_util::{stream, StreamExt};
use zeroclaw::agent::dispatcher::NativeToolDispatcher;
use zeroclaw::agent::{Agent, TurnEvent};
use zeroclaw::config::Config as ZeroClawConfig;
use zeroclaw::observability::{NoopObserver, Observer};
use zeroclaw::providers::{
self, ChatMessage, ChatRequest, ChatResponse, Provider,
};
use zeroclaw::providers::traits::{
ProviderCapabilities, StreamEvent, StreamOptions, StreamResult,
};
use crate::compat::browser_tool_adapter::{ZeroClawBrowserTool, BROWSER_ACTION_TOOL_NAME};
use crate::compat::config_adapter::build_zeroclaw_config;
use crate::compat::event_bridge::log_entry_for_turn_event;
use crate::compat::memory_adapter::build_memory;
use crate::pipe::{BrowserPipeTool, PipeError, Transport};
pub fn execute_task<T: Transport + 'static>(
transport: &T,
browser_tool: BrowserPipeTool<T>,
instruction: &str,
workspace_root: &Path,
) -> Result<String, PipeError> {
let config = build_zeroclaw_config(workspace_root)
.map_err(|err| PipeError::Protocol(err.to_string()))?;
let provider = build_provider(&config)?;
let runtime = tokio::runtime::Runtime::new()
.map_err(|err| PipeError::Protocol(format!("failed to create tokio runtime: {err}")))?;
runtime.block_on(execute_task_with_provider(
transport,
browser_tool,
provider,
instruction,
config,
))
}
pub async fn execute_task_with_provider<T: Transport + 'static>(
transport: &T,
browser_tool: BrowserPipeTool<T>,
provider: Box<dyn Provider>,
instruction: &str,
config: ZeroClawConfig,
) -> Result<String, PipeError> {
let mut agent = build_agent(browser_tool, provider, &config)?;
let (event_tx, mut event_rx) = tokio::sync::mpsc::channel::<TurnEvent>(32);
let instruction = instruction.to_string();
let task = tokio::spawn(async move { agent.turn_streamed(&instruction, event_tx).await });
while let Some(event) = event_rx.recv().await {
if let Some(log_entry) = log_entry_for_turn_event(&event) {
transport.send(&log_entry)?;
}
}
task.await
.map_err(|err| PipeError::Protocol(format!("zeroclaw task join failed: {err}")))?
.map_err(|err| PipeError::Protocol(err.to_string()))
}
fn build_agent<T: Transport + 'static>(
browser_tool: BrowserPipeTool<T>,
provider: Box<dyn Provider>,
config: &ZeroClawConfig,
) -> Result<Agent, PipeError> {
let memory = build_memory(config).map_err(map_anyhow_to_pipe_error)?;
let observer: Arc<dyn Observer> = Arc::new(NoopObserver);
let tools: Vec<Box<dyn zeroclaw::tools::Tool>> =
vec![Box::new(ZeroClawBrowserTool::new(browser_tool))];
Agent::builder()
.provider(provider)
.tools(tools)
.memory(Arc::from(memory))
.observer(observer)
.tool_dispatcher(Box::new(NativeToolDispatcher))
.config(config.agent.clone())
.model_name(
config
.default_model
.clone()
.unwrap_or_else(|| "deepseek-chat".to_string()),
)
.temperature(config.default_temperature)
.workspace_dir(config.workspace_dir.clone())
.allowed_tools(Some(vec![BROWSER_ACTION_TOOL_NAME.to_string()]))
.build()
.map_err(map_anyhow_to_pipe_error)
}
fn build_provider(config: &ZeroClawConfig) -> Result<Box<dyn Provider>, PipeError> {
let provider_name = config.default_provider.as_deref().unwrap_or("deepseek");
let model_name = config
.default_model
.as_deref()
.unwrap_or("deepseek-chat");
let runtime_options = providers::provider_runtime_options_from_config(config);
let resolved_provider_name = if provider_name == "deepseek" {
config
.api_url
.as_deref()
.map(str::trim)
.filter(|url| !url.is_empty())
.map(|url| format!("custom:{url}"))
.unwrap_or_else(|| provider_name.to_string())
} else {
provider_name.to_string()
};
let provider = providers::create_routed_provider_with_options(
&resolved_provider_name,
config.api_key.as_deref(),
config.api_url.as_deref(),
&config.reliability,
&config.model_routes,
model_name,
&runtime_options,
)
.map_err(map_anyhow_to_pipe_error)?;
Ok(Box::new(NonStreamingProvider::new(provider)))
}
fn map_anyhow_to_pipe_error(err: anyhow::Error) -> PipeError {
PipeError::Protocol(err.to_string())
}
struct NonStreamingProvider {
inner: Box<dyn Provider>,
}
impl NonStreamingProvider {
fn new(inner: Box<dyn Provider>) -> Self {
Self { inner }
}
}
#[async_trait]
impl Provider for NonStreamingProvider {
fn capabilities(&self) -> ProviderCapabilities {
self.inner.capabilities()
}
async fn chat_with_system(
&self,
system_prompt: Option<&str>,
message: &str,
model: &str,
temperature: f64,
) -> anyhow::Result<String> {
self.inner
.chat_with_system(system_prompt, message, model, temperature)
.await
}
async fn chat_with_history(
&self,
messages: &[ChatMessage],
model: &str,
temperature: f64,
) -> anyhow::Result<String> {
self.inner.chat_with_history(messages, model, temperature).await
}
async fn chat(
&self,
request: ChatRequest<'_>,
model: &str,
temperature: f64,
) -> anyhow::Result<ChatResponse> {
self.inner.chat(request, model, temperature).await
}
fn supports_streaming(&self) -> bool {
false
}
fn supports_streaming_tool_events(&self) -> bool {
false
}
fn stream_chat(
&self,
_request: ChatRequest<'_>,
_model: &str,
_temperature: f64,
_options: StreamOptions,
) -> stream::BoxStream<'static, StreamResult<StreamEvent>> {
stream::empty().boxed()
}
}

src/lib.rs

@@ -1,4 +1,5 @@
pub mod agent;
pub mod compat;
pub mod config;
pub mod llm;
pub mod pipe;

src/pipe/browser_tool.rs

@@ -21,17 +21,29 @@ pub struct BrowserPipeTool<T: Transport> {
     transport: Arc<T>,
     mac_policy: MacPolicy,
     session_key: Vec<u8>,
-    next_seq: AtomicU64,
+    next_seq: Arc<AtomicU64>,
     response_timeout: Duration,
 }
+impl<T: Transport> Clone for BrowserPipeTool<T> {
+    fn clone(&self) -> Self {
+        Self {
+            transport: self.transport.clone(),
+            mac_policy: self.mac_policy.clone(),
+            session_key: self.session_key.clone(),
+            next_seq: self.next_seq.clone(),
+            response_timeout: self.response_timeout,
+        }
+    }
+}
 impl<T: Transport> BrowserPipeTool<T> {
     pub fn new(transport: Arc<T>, mac_policy: MacPolicy, session_key: Vec<u8>) -> Self {
         Self {
             transport,
             mac_policy,
             session_key,
-            next_seq: AtomicU64::new(1),
+            next_seq: Arc::new(AtomicU64::new(1)),
             response_timeout: Duration::from_secs(30),
         }
     }

tests/browser_tool_test.rs

@@ -1,5 +1,6 @@
mod common;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
@@ -82,3 +83,13 @@ fn browser_tool_rejects_action_when_mac_policy_blocks_it() {
assert!(err.to_string().contains("action is not allowed"));
}
#[test]
fn default_rules_allow_zhihu_navigation() {
let rules_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"))
.join("resources")
.join("rules.json");
let policy = MacPolicy::load_from_path(rules_path).unwrap();
policy.validate(&Action::Navigate, "www.zhihu.com").unwrap();
}

tests/compat_browser_tool_test.rs

@@ -0,0 +1,203 @@
mod common;
use std::sync::Arc;
use std::time::Duration;
use common::MockTransport;
use serde_json::{json, Value};
use sgclaw::security::MacPolicy;
use sgclaw::{
compat::browser_tool_adapter::ZeroClawBrowserTool,
pipe::{Action, AgentMessage, BrowserMessage, BrowserPipeTool, Timing},
};
use zeroclaw::tools::Tool;
fn test_policy() -> MacPolicy {
MacPolicy::from_json_str(
r#"{
"version": "1.0",
"domains": { "allowed": ["www.baidu.com"] },
"pipe_actions": {
"allowed": ["click", "type", "navigate", "getText"],
"blocked": ["eval", "executeJsInPage"]
}
}"#,
)
.unwrap()
}
fn build_adapter(messages: Vec<BrowserMessage>) -> (Arc<MockTransport>, ZeroClawBrowserTool<MockTransport>) {
let transport = Arc::new(MockTransport::new(messages));
let browser_tool = BrowserPipeTool::new(
transport.clone(),
test_policy(),
vec![1, 2, 3, 4, 5, 6, 7, 8],
)
.with_response_timeout(Duration::from_secs(1));
(transport, ZeroClawBrowserTool::new(browser_tool))
}
#[test]
fn zeroclaw_browser_tool_schema_exposes_only_supported_safe_actions() {
let (_, tool) = build_adapter(vec![]);
let schema = tool.parameters_schema();
assert_eq!(tool.name(), "browser_action");
assert_eq!(
schema["properties"]["action"]["enum"],
json!(["click", "type", "navigate", "getText"])
);
assert_eq!(schema["required"], json!(["action", "expected_domain"]));
}
#[tokio::test]
async fn zeroclaw_browser_tool_executes_supported_actions_and_returns_observation_payload() {
let (transport, tool) = build_adapter(vec![
BrowserMessage::Response {
seq: 1,
success: true,
data: json!({ "navigated": true }),
aom_snapshot: vec![],
timing: Timing {
queue_ms: 1,
exec_ms: 11,
},
},
BrowserMessage::Response {
seq: 2,
success: true,
data: json!({ "typed": true }),
aom_snapshot: vec![],
timing: Timing {
queue_ms: 2,
exec_ms: 12,
},
},
BrowserMessage::Response {
seq: 3,
success: true,
data: json!({ "clicked": true }),
aom_snapshot: vec![],
timing: Timing {
queue_ms: 3,
exec_ms: 13,
},
},
BrowserMessage::Response {
seq: 4,
success: true,
data: json!({ "text": "天气" }),
aom_snapshot: vec![json!({
"role": "textbox",
"name": "百度一下"
})],
timing: Timing {
queue_ms: 4,
exec_ms: 14,
},
},
]);
let navigate = tool
.execute(json!({
"action": "navigate",
"expected_domain": "www.baidu.com",
"url": "https://www.baidu.com"
}))
.await
.unwrap();
let type_text = tool
.execute(json!({
"action": "type",
"expected_domain": "www.baidu.com",
"selector": "#kw",
"text": "天气",
"clear_first": true
}))
.await
.unwrap();
let click = tool
.execute(json!({
"action": "click",
"expected_domain": "www.baidu.com",
"selector": "#su"
}))
.await
.unwrap();
let get_text = tool
.execute(json!({
"action": "getText",
"expected_domain": "www.baidu.com",
"selector": "#content_left"
}))
.await
.unwrap();
let navigate_output: Value = serde_json::from_str(&navigate.output).unwrap();
let get_text_output: Value = serde_json::from_str(&get_text.output).unwrap();
let sent = transport.sent_messages();
assert!(navigate.success);
assert!(type_text.success);
assert!(click.success);
assert!(get_text.success);
assert_eq!(navigate_output["data"], json!({ "navigated": true }));
assert_eq!(get_text_output["data"], json!({ "text": "天气" }));
assert_eq!(
get_text_output["aom_snapshot"],
json!([{ "role": "textbox", "name": "百度一下" }])
);
assert_eq!(
get_text_output["timing"],
json!({
"queue_ms": 4,
"exec_ms": 14
})
);
assert!(matches!(
&sent[0],
AgentMessage::Command { seq, action, .. }
if *seq == 1 && action == &Action::Navigate
));
assert!(matches!(
&sent[1],
AgentMessage::Command { seq, action, .. }
if *seq == 2 && action == &Action::Type
));
assert!(matches!(
&sent[2],
AgentMessage::Command { seq, action, .. }
if *seq == 3 && action == &Action::Click
));
assert!(matches!(
&sent[3],
AgentMessage::Command { seq, action, .. }
if *seq == 4 && action == &Action::GetText
));
}
#[tokio::test]
async fn zeroclaw_browser_tool_keeps_domain_validation_in_mac_policy() {
let (transport, tool) = build_adapter(vec![]);
let result = tool
.execute(json!({
"action": "navigate",
"expected_domain": "www.zhihu.com",
"url": "https://www.zhihu.com"
}))
.await
.unwrap();
assert!(!result.success);
assert!(result.output.is_empty());
assert_eq!(transport.sent_messages().len(), 0);
assert!(
result
.error
.as_deref()
.unwrap()
.contains("domain is not allowed")
);
}

compat config adapter tests (new file, 55 lines)

@@ -0,0 +1,55 @@
use std::path::Path;
use std::sync::{Mutex, OnceLock};
use sgclaw::compat::config_adapter::{
build_zeroclaw_config,
build_zeroclaw_config_from_settings,
zeroclaw_workspace_dir,
};
use sgclaw::config::DeepSeekSettings;
fn env_lock() -> &'static Mutex<()> {
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
LOCK.get_or_init(|| Mutex::new(()))
}
#[test]
fn zeroclaw_config_adapter_maps_deepseek_env_to_zeroclaw_config() {
let _guard = env_lock().lock().unwrap();
std::env::set_var("DEEPSEEK_API_KEY", "deepseek-test-key");
std::env::set_var("DEEPSEEK_BASE_URL", "https://api.deepseek.com");
std::env::set_var("DEEPSEEK_MODEL", "deepseek-chat");
let config = build_zeroclaw_config(Path::new("/tmp/sgclaw")).unwrap();
assert_eq!(config.default_provider.as_deref(), Some("deepseek"));
assert_eq!(config.default_model.as_deref(), Some("deepseek-chat"));
assert_eq!(config.api_key.as_deref(), Some("deepseek-test-key"));
assert_eq!(config.api_url.as_deref(), Some("https://api.deepseek.com"));
assert_eq!(
config.workspace_dir,
Path::new("/tmp/sgclaw/.sgclaw-zeroclaw-workspace")
);
assert_eq!(
config.config_path,
Path::new("/tmp/sgclaw/.sgclaw-zeroclaw-workspace/config.toml")
);
}
#[test]
fn zeroclaw_config_adapter_uses_deterministic_workspace_dir() {
let settings = DeepSeekSettings {
api_key: "key".to_string(),
base_url: "https://proxy.example.com/v1".to_string(),
model: "deepseek-reasoner".to_string(),
};
let workspace_dir = zeroclaw_workspace_dir(Path::new("/var/lib/sgclaw"));
let config = build_zeroclaw_config_from_settings(Path::new("/var/lib/sgclaw"), &settings);
assert_eq!(workspace_dir, Path::new("/var/lib/sgclaw/.sgclaw-zeroclaw-workspace"));
assert_eq!(config.workspace_dir, workspace_dir);
assert_eq!(config.default_provider.as_deref(), Some("deepseek"));
assert_eq!(config.default_model.as_deref(), Some("deepseek-reasoner"));
assert_eq!(config.api_url.as_deref(), Some("https://proxy.example.com/v1"));
}

tests/compat_cron_test.rs (new file, 63 lines)

@@ -0,0 +1,63 @@
use std::path::{Path, PathBuf};
use chrono::Duration;
use sgclaw::compat::config_adapter::build_zeroclaw_config_from_settings;
use sgclaw::config::DeepSeekSettings;
use zeroclaw::cron::Schedule;
fn workspace_root(label: &str) -> PathBuf {
let root = std::env::temp_dir().join(format!("{label}-{}", uuid::Uuid::new_v4()));
std::fs::create_dir_all(&root).unwrap();
root
}
#[tokio::test]
async fn compat_cron_adapter_creates_lists_and_runs_due_agent_jobs() {
let settings = DeepSeekSettings {
api_key: "key".to_string(),
base_url: "https://api.deepseek.com".to_string(),
model: "deepseek-chat".to_string(),
};
let workspace_root = workspace_root("sgclaw-cron");
let config = build_zeroclaw_config_from_settings(Path::new(&workspace_root), &settings);
assert!(config.cron.enabled);
assert!(!config.cron.catch_up_on_startup);
assert!(!config.scheduler.enabled);
let created = sgclaw::compat::cron_adapter::add_agent_job(
&config,
Some("search-weather".to_string()),
Schedule::Every { every_ms: 1 },
"打开百度搜索天气",
Some(vec!["browser_action".to_string()]),
)
.unwrap();
let listed = sgclaw::compat::cron_adapter::list_jobs(&config).unwrap();
assert_eq!(listed.len(), 1);
assert_eq!(listed[0].id, created.id);
assert_eq!(listed[0].prompt.as_deref(), Some("打开百度搜索天气"));
let results = sgclaw::compat::cron_adapter::run_due_jobs(
&config,
created.next_run + Duration::milliseconds(1),
|job| {
let output = format!("ran {}", job.prompt.as_deref().unwrap_or_default());
async move { Ok::<String, anyhow::Error>(output) }
},
)
.await
.unwrap();
let runs = sgclaw::compat::cron_adapter::list_runs(&config, &created.id, 10).unwrap();
let updated = sgclaw::compat::cron_adapter::list_jobs(&config).unwrap();
assert_eq!(results.len(), 1);
assert!(results[0].success);
assert_eq!(results[0].job_id, created.id);
assert_eq!(runs.len(), 1);
assert_eq!(runs[0].status, "ok");
assert!(updated[0].last_status.as_deref() == Some("ok"));
assert!(updated[0].next_run > created.next_run);
}

tests/compat_memory_test.rs

@@ -0,0 +1,42 @@
use std::path::{Path, PathBuf};
use sgclaw::compat::config_adapter::build_zeroclaw_config_from_settings;
use sgclaw::config::DeepSeekSettings;
use zeroclaw::memory::MemoryCategory;
fn workspace_root(label: &str) -> PathBuf {
let root = std::env::temp_dir().join(format!("{label}-{}", uuid::Uuid::new_v4()));
std::fs::create_dir_all(&root).unwrap();
root
}
#[tokio::test]
async fn compat_memory_adapter_uses_workspace_local_sqlite_backend() {
let settings = DeepSeekSettings {
api_key: "key".to_string(),
base_url: "https://api.deepseek.com".to_string(),
model: "deepseek-chat".to_string(),
};
let workspace_root = workspace_root("sgclaw-memory");
let config = build_zeroclaw_config_from_settings(Path::new(&workspace_root), &settings);
assert_eq!(config.memory.backend, "sqlite");
assert_eq!(config.memory.embedding_provider, "none");
assert!(!config.memory.response_cache_enabled);
assert!(!config.memory.snapshot_enabled);
assert!(config.storage.provider.config.provider.is_empty());
let memory = sgclaw::compat::memory_adapter::build_memory(&config).unwrap();
memory
.store(
"weather",
"remember today's weather workflow",
MemoryCategory::Conversation,
None,
)
.await
.unwrap();
assert_eq!(memory.count().await.unwrap(), 1);
assert!(sgclaw::compat::memory_adapter::brain_db_path(&config.workspace_dir).exists());
}

tests/compat_runtime_test.rs

@@ -0,0 +1,334 @@
mod common;
use std::io::{Read, Write};
use std::net::TcpListener;
use std::path::PathBuf;
use std::sync::{Arc, Mutex, OnceLock};
use std::thread;
use std::time::Duration;
use common::MockTransport;
use serde_json::{json, Value};
use sgclaw::agent::handle_browser_message;
use sgclaw::compat::runtime::execute_task;
use sgclaw::pipe::{Action, AgentMessage, BrowserMessage, BrowserPipeTool, Timing};
use sgclaw::security::MacPolicy;
use uuid::Uuid;
fn env_lock() -> &'static Mutex<()> {
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
LOCK.get_or_init(|| Mutex::new(()))
}
fn test_policy() -> MacPolicy {
MacPolicy::from_json_str(
r#"{
"version": "1.0",
"domains": { "allowed": ["www.baidu.com"] },
"pipe_actions": {
"allowed": ["click", "type", "navigate", "getText"],
"blocked": []
}
}"#,
)
.unwrap()
}
fn temp_workspace_root() -> PathBuf {
let root = std::env::temp_dir().join(format!("sgclaw-compat-runtime-{}", Uuid::new_v4()));
std::fs::create_dir_all(&root).unwrap();
root
}
fn start_fake_deepseek_server(
responses: Vec<Value>,
) -> (String, Arc<Mutex<Vec<Value>>>, thread::JoinHandle<()>) {
let listener = TcpListener::bind("127.0.0.1:0").unwrap();
listener.set_nonblocking(true).unwrap();
let address = format!("http://{}", listener.local_addr().unwrap());
let requests = Arc::new(Mutex::new(Vec::new()));
let request_log = requests.clone();
let handle = thread::spawn(move || {
for response in responses {
let deadline = std::time::Instant::now() + Duration::from_secs(5);
let (mut stream, _) = loop {
match listener.accept() {
Ok(pair) => break pair,
Err(err) if err.kind() == std::io::ErrorKind::WouldBlock => {
assert!(
std::time::Instant::now() < deadline,
"timed out waiting for provider request"
);
thread::sleep(Duration::from_millis(10));
}
Err(err) => panic!("failed to accept provider request: {err}"),
}
};
let body = read_http_json_body(&mut stream);
request_log.lock().unwrap().push(body);
let payload = response.to_string();
let reply = format!(
"HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
payload.as_bytes().len(),
payload
);
stream.write_all(reply.as_bytes()).unwrap();
stream.flush().unwrap();
}
});
(address, requests, handle)
}
fn read_http_json_body(stream: &mut impl Read) -> Value {
let mut buffer = Vec::new();
let mut headers_end = None;
while headers_end.is_none() {
let mut chunk = [0_u8; 1024];
let bytes = stream.read(&mut chunk).unwrap();
assert!(bytes > 0, "unexpected EOF while reading headers");
buffer.extend_from_slice(&chunk[..bytes]);
headers_end = buffer.windows(4).position(|window| window == b"\r\n\r\n");
}
let headers_end = headers_end.unwrap() + 4;
let headers = String::from_utf8(buffer[..headers_end].to_vec()).unwrap();
let content_length = headers
.lines()
.find_map(|line| {
let (name, value) = line.split_once(':')?;
name.eq_ignore_ascii_case("content-length")
.then(|| value.trim().parse::<usize>().unwrap())
})
.unwrap();
while buffer.len() < headers_end + content_length {
let mut chunk = vec![0_u8; content_length];
let bytes = stream.read(&mut chunk).unwrap();
assert!(bytes > 0, "unexpected EOF while reading body");
buffer.extend_from_slice(&chunk[..bytes]);
}
serde_json::from_slice(&buffer[headers_end..headers_end + content_length]).unwrap()
}
#[test]
fn compat_runtime_uses_zeroclaw_provider_path_and_executes_browser_actions() {
let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());
let first_response = json!({
"choices": [{
"message": {
"content": "",
"tool_calls": [
{
"id": "call_1",
"type": "function",
"function": {
"name": "browser_action",
"arguments": serde_json::to_string(&json!({
"action": "navigate",
"expected_domain": "www.baidu.com",
"url": "https://www.baidu.com"
})).unwrap()
}
},
{
"id": "call_2",
"type": "function",
"function": {
"name": "browser_action",
"arguments": serde_json::to_string(&json!({
"action": "type",
"expected_domain": "www.baidu.com",
"selector": "#kw",
"text": "天气",
"clear_first": true
})).unwrap()
}
}
]
}
}],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 7
}
});
let second_response = json!({
"choices": [{
"message": {
"content": "已通过 ZeroClaw 执行任务: 打开百度搜索天气"
}
}],
"usage": {
"prompt_tokens": 15,
"completion_tokens": 8
}
});
let (base_url, requests, server_handle) =
start_fake_deepseek_server(vec![first_response, second_response]);
std::env::set_var("DEEPSEEK_API_KEY", "deepseek-test-key");
std::env::set_var("DEEPSEEK_BASE_URL", base_url);
std::env::set_var("DEEPSEEK_MODEL", "deepseek-chat");
let workspace_root = temp_workspace_root();
let transport = Arc::new(MockTransport::new(vec![
BrowserMessage::Response {
seq: 1,
success: true,
data: json!({ "navigated": true }),
aom_snapshot: vec![],
timing: Timing {
queue_ms: 1,
exec_ms: 10,
},
},
BrowserMessage::Response {
seq: 2,
success: true,
data: json!({ "typed": true }),
aom_snapshot: vec![],
timing: Timing {
queue_ms: 1,
exec_ms: 11,
},
},
]));
let browser_tool = BrowserPipeTool::new(
transport.clone(),
test_policy(),
vec![1, 2, 3, 4, 5, 6, 7, 8],
)
.with_response_timeout(Duration::from_secs(1));
let summary = execute_task(
transport.as_ref(),
browser_tool,
"打开百度搜索天气",
&workspace_root,
)
.unwrap();
server_handle.join().unwrap();
let request_bodies = requests.lock().unwrap().clone();
let sent = transport.sent_messages();
assert_eq!(summary, "已通过 ZeroClaw 执行任务: 打开百度搜索天气");
assert_eq!(request_bodies.len(), 2);
assert_eq!(request_bodies[0]["model"], json!("deepseek-chat"));
assert_eq!(
request_bodies[0]["tools"][0]["function"]["name"],
json!("browser_action")
);
assert!(request_bodies[1].to_string().contains("tool_call_id"));
assert!(sent.iter().any(|message| {
matches!(
message,
AgentMessage::LogEntry { level, message }
if level == "info" && message == "navigate https://www.baidu.com"
)
}));
assert!(sent.iter().any(|message| {
matches!(
message,
AgentMessage::LogEntry { level, message }
if level == "info" && message == "type 天气 into #kw"
)
}));
assert!(sent.iter().any(|message| {
matches!(
message,
AgentMessage::Command { action, .. } if action == &Action::Navigate
)
}));
assert!(sent.iter().any(|message| {
matches!(
message,
AgentMessage::Command { action, .. } if action == &Action::Type
)
}));
}
#[test]
fn handle_browser_message_uses_compat_runtime_summary_when_deepseek_env_is_set() {
let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());
let first_response = json!({
"choices": [{
"message": {
"content": "",
"tool_calls": [{
"id": "call_1",
"type": "function",
"function": {
"name": "browser_action",
"arguments": serde_json::to_string(&json!({
"action": "navigate",
"expected_domain": "www.baidu.com",
"url": "https://www.baidu.com"
})).unwrap()
}
}]
}
}]
});
let second_response = json!({
"choices": [{
"message": {
"content": "来自 ZeroClaw runtime"
}
}]
});
let (base_url, _, server_handle) = start_fake_deepseek_server(vec![first_response, second_response]);
std::env::set_var("DEEPSEEK_API_KEY", "deepseek-test-key");
std::env::set_var("DEEPSEEK_BASE_URL", base_url);
std::env::set_var("DEEPSEEK_MODEL", "deepseek-chat");
let workspace_root = temp_workspace_root();
let original_dir = std::env::current_dir().unwrap();
std::env::set_current_dir(&workspace_root).unwrap();
let transport = Arc::new(MockTransport::new(vec![BrowserMessage::Response {
seq: 1,
success: true,
data: json!({ "navigated": true }),
aom_snapshot: vec![],
timing: Timing {
queue_ms: 1,
exec_ms: 10,
},
}]));
let browser_tool = BrowserPipeTool::new(
transport.clone(),
test_policy(),
vec![1, 2, 3, 4, 5, 6, 7, 8],
)
.with_response_timeout(Duration::from_secs(1));
handle_browser_message(
transport.as_ref(),
&browser_tool,
BrowserMessage::SubmitTask {
instruction: "打开百度搜索天气".to_string(),
},
)
.unwrap();
server_handle.join().unwrap();
std::env::set_current_dir(original_dir).unwrap();
let sent = transport.sent_messages();
assert!(sent.iter().any(|message| {
matches!(
message,
AgentMessage::TaskComplete { success, summary }
if *success && summary == "来自 ZeroClaw runtime"
)
}));
}

tests/planner_test.rs

@@ -30,6 +30,24 @@ fn planner_supports_baidu_search_variant_with_conjunction() {
assert_eq!(plan.steps[1].params["text"], "电网调度");
}
#[test]
fn planner_supports_zhihu_search_instruction_with_direct_search_url() {
let plan = plan_instruction("打开知乎搜索天气").unwrap();
assert_eq!(plan.summary, "已在知乎搜索天气");
assert_eq!(plan.steps.len(), 1);
assert_eq!(plan.steps[0].action, Action::Navigate);
assert_eq!(
plan.steps[0].params,
json!({ "url": "https://www.zhihu.com/search?type=content&q=%E5%A4%A9%E6%B0%94" })
);
assert_eq!(plan.steps[0].expected_domain, "www.zhihu.com");
assert_eq!(
plan.steps[0].log_message,
"navigate https://www.zhihu.com/search?type=content&q=%E5%A4%A9%E6%B0%94"
);
}
#[test]
fn planner_rejects_unrelated_instruction() {
let err = plan_instruction("打开谷歌搜索天气").unwrap_err();

third_party/zeroclaw/.cargo/audit.toml (vendored, new file, 12 lines)

@@ -0,0 +1,12 @@
# cargo-audit configuration
# https://rustsec.org/
[advisories]
ignore = [
# wasmtime vulns via extism 1.13.0 — no upstream fix; plugins feature-gated
"RUSTSEC-2026-0006", # wasmtime f64.copysign segfault on x86-64
"RUSTSEC-2026-0020", # WASI guest-controlled resource exhaustion
"RUSTSEC-2026-0021", # WASI http fields panic
# instant crate unmaintained — transitive dep via nostr; no upstream fix
"RUSTSEC-2024-0384",
]

third_party/zeroclaw/.cargo/config.toml (vendored, new file, 13 lines)

@@ -0,0 +1,13 @@
[target.x86_64-unknown-linux-musl]
rustflags = ["-C", "link-arg=-static"]
[target.aarch64-unknown-linux-musl]
rustflags = ["-C", "link-arg=-static", "-C", "link-arg=-Wl,-z,stack-size=8388608"]
# Android targets (NDK toolchain)
[target.armv7-linux-androideabi]
linker = "armv7a-linux-androideabi21-clang"
[target.aarch64-linux-android]
linker = "aarch64-linux-android21-clang"
rustflags = ["-C", "link-arg=-Wl,-z,stack-size=8388608"]

third_party/zeroclaw skill: github-issue (vendored, new file, 133 lines)

@@ -0,0 +1,133 @@
# Skill: github-issue
File a structured GitHub issue (bug report or feature request) for ZeroClaw interactively from Claude Code.
## When to Use
Trigger when the user wants to file a GitHub issue, report a bug, or request a feature for ZeroClaw. Keywords: "file issue", "report bug", "feature request", "open issue", "create issue", "github issue".
## Instructions
You are filing a GitHub issue against the ZeroClaw repository using structured issue forms. Follow this workflow exactly.
### Step 1: Detect Issue Type and Read the Template
Determine from the user's message whether this is a **bug report** or **feature request**.
- If unclear, use AskUserQuestion to ask: "Is this a bug report or a feature request?"
Then read the corresponding issue template to understand the required fields:
- Bug report: `.github/ISSUE_TEMPLATE/bug_report.yml`
- Feature request: `.github/ISSUE_TEMPLATE/feature_request.yml`
Parse the YAML to extract:
- The `title` prefix (e.g. `[Bug]: `, `[Feature]: `)
- The `labels` array
- Each field in the `body` array: its `type` (dropdown, textarea, input, checkboxes, markdown), `id`, `attributes.label`, `attributes.options` (for dropdowns), `attributes.description`, `attributes.placeholder`, and `validations.required`
This is the source of truth for what fields exist, what they're called, what options are available, and which are required. Do not assume or hardcode any field names or options — always derive them from the template file.
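For concreteness, here is a minimal sketch of that parsing step in Python. It assumes PyYAML is installed and uses the bug-report path named above; the keys follow GitHub's issue-form schema, and nothing here is a project-specific API:
```python
# Sketch: extract form fields from a GitHub issue-form template.
# Assumes PyYAML (`pip install pyyaml`); the path is the one named above.
import yaml

with open(".github/ISSUE_TEMPLATE/bug_report.yml") as f:
    template = yaml.safe_load(f)

title_prefix = template.get("title", "")
labels = template.get("labels", [])

fields = []
for field in template.get("body", []):
    if field.get("type") == "markdown":
        continue  # informational header, not a form input
    attrs = field.get("attributes", {})
    fields.append({
        "id": field.get("id"),
        "type": field["type"],
        "label": attrs.get("label"),
        "options": attrs.get("options"),  # dropdowns/checkboxes only
        "required": field.get("validations", {}).get("required", False),
    })

print(title_prefix, labels)
for item in fields:
    print(item["type"], item["label"], "required" if item["required"] else "optional")
```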
### Step 2: Auto-Gather Context
Before asking the user anything, silently gather environment and repo context:
```bash
# Git context
git log --oneline -5
git status --short
git diff --stat HEAD~1 2>/dev/null
# For bug reports — environment detection
uname -s -r -m # OS info
sw_vers 2>/dev/null # macOS version
rustc --version 2>/dev/null # Rust version
cargo metadata --format-version=1 --no-deps 2>/dev/null | jq -r '.packages[] | select(.name=="zeroclaw") | .version' 2>/dev/null # ZeroClaw version
git rev-parse --short HEAD # commit SHA fallback
```
Also read recently changed files to infer the affected component and architecture impact.
### Step 3: Pre-Fill and Present the Form
Using the parsed template fields and gathered context, draft values for ALL fields from the template:
- **dropdown** fields: select the most likely option from `attributes.options` based on context. For dropdowns where you're uncertain, note your best guess and flag it for the user.
- **textarea** fields: draft content based on the user's description, git context, and the field's `attributes.description`/`attributes.placeholder` for guidance on what's expected.
- **input** fields: fill with auto-detected values (versions, OS) or draft from user context.
- **checkboxes** fields: auto-check all items (the skill itself ensures compliance with the stated checks).
- **markdown** fields: skip these — they're informational headers, not form inputs.
- **optional fields** (where `validations.required` is false): fill if there's enough context, otherwise note "(optional — not enough context to fill)".
Present the complete draft to the user in a clean readable format:
```
## Issue Draft: [Bug]: <title> / [Feature]: <title>
**Labels**: <from template>
### <Field Label>
<proposed value or selection>
### <Field Label>
<proposed value>
...
```
Use AskUserQuestion to ask the user to review:
- "Here's the pre-filled issue. Please review and let me know what to change, or say 'submit' to file it."
If the user requests changes, update the draft and re-present. Iterate until the user approves.
### Step 4: Scope Guard
Before final submission, analyze the collected content for scope creep:
- Does the bug report describe multiple independent defects?
- Does the feature request bundle unrelated changes?
If multi-concept issues are detected:
1. Inform the user: "This issue appears to cover multiple distinct topics. Focused, single-concept issues are strongly preferred and more likely to be accepted."
2. Break down the distinct groups found.
3. Offer to file separate issues for each group, reusing shared context (environment, etc.).
4. Let the user decide: proceed as-is or split.
### Step 5: Construct Issue Body
Build the issue body as markdown sections matching GitHub's form-field rendering format. GitHub renders form-submitted issues with `### <Field Label>` sections, so use that exact structure.
For each non-markdown field from the template, in order:
```markdown
### <attributes.label>
<value>
```
For optional fields with no content, use `_No response_` as the value (this matches GitHub's native rendering for empty optional fields).
For checkbox fields, render each option as:
```markdown
- [X] <option label text>
```
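A minimal sketch of the body assembly, assuming a `fields` list shaped like the one parsed in Step 1 and a `drafts` mapping of field id to drafted value (both names are illustrative):
```python
# Sketch: build the issue body from drafted values, matching GitHub's
# form rendering. `fields` is assumed parsed from the template (Step 1);
# `drafts` maps field id -> drafted value (None for empty optional fields).
def build_issue_body(fields: list[dict], drafts: dict[str, object]) -> str:
    sections = []
    for field in fields:
        value = drafts.get(field["id"])
        if field["type"] == "checkboxes":
            # Auto-check every option, per the skill's checkbox rule.
            opts = field.get("options") or []
            value = "\n".join(f"- [X] {opt['label']}" for opt in opts)
        elif not value:
            value = "_No response_"  # GitHub's rendering for empty optionals
        sections.append(f"### {field['label']}\n\n{value}")
    return "\n\n".join(sections)
```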
### Step 6: Final Preview and Submit
Show the final constructed issue (title + labels + full body) for one last confirmation.
Then submit using a HEREDOC for the body to preserve formatting:
```bash
gh issue create --title "<title prefix><user title>" --label "<label1>,<label2>" --body "$(cat <<'ISSUE_EOF'
<body content>
ISSUE_EOF
)"
```
Return the resulting issue URL to the user.
### Important Rules
- **Always read the template file** — never assume field names, options, or structure. The templates are the source of truth and may change over time.
- **Never include personal/sensitive data** in the issue. Redact secrets, tokens, emails, real names.
- **Use neutral project-scoped placeholders** per ZeroClaw's privacy contract.
- **One concept per issue** — enforce the scope guard.
- **Auto-detect, don't guess** — use real command output for environment fields.
- **Match GitHub's rendering** — use `### Field Label` sections so issues look consistent whether filed via web UI or this skill.

View File

@@ -0,0 +1,209 @@
# Skill: github-pr
Open or update a GitHub Pull Request for ZeroClaw. Handles creating new PRs with a fully filled-out template body, and updating existing PRs (title, body sections, labels, comments). Use this skill whenever the user wants to open a PR, create a pull request, update a PR, edit PR description, add labels to a PR, or sync a PR after new commits — even if they don't say "PR" explicitly (e.g., "submit this for review", "push and open for merge").
## Instructions
This skill supports two modes: **Open** (create a new PR) and **Update** (edit an existing PR). Detect the mode from context — if there's already an open PR for the current branch and the user didn't say "open a new PR", default to update mode.
The PR template at `.github/pull_request_template.md` is the source of truth for the PR body structure. Read it every time — never assume or hardcode section names, fields, or their order. The template may change over time and the skill should always reflect its current state.
---
## Shared: Read the PR Template
Before opening or updating a PR body, read `.github/pull_request_template.md` and parse it to understand:
- The `## ` section headers (these are the top-level sections of the PR body)
- The bullet points, fields, and prompts within each section
- Which sections are marked `(required)` vs optional/recommended
- Any inline formatting conventions (backtick options, Yes/No fields, etc.)
This parsed structure drives how you fill, present, and edit the PR body.
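A minimal sketch of that parse, in Python. It treats the template as plain markdown split on `## ` headers; the `(required)` check is an assumption about how the marker appears in the template text:
```python
# Sketch: split a PR template (or PR body) into ordered `## ` sections.
# Returns (header, body) pairs so the original order survives a rebuild.
# Content before the first header is ignored.
def parse_sections(markdown: str) -> list[tuple[str, str]]:
    sections: list[tuple[str, str]] = []
    header, lines = None, []
    for line in markdown.splitlines():
        if line.startswith("## "):
            if header is not None:
                sections.append((header, "\n".join(lines).strip()))
            header, lines = line[3:].strip(), []
        elif header is not None:
            lines.append(line)
    if header is not None:
        sections.append((header, "\n".join(lines).strip()))
    return sections

with open(".github/pull_request_template.md") as f:
    for header, body in parse_sections(f.read()):
        required = "(required)" in header or "(required)" in body
        print(header, "required" if required else "optional")
```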
---
## Mode: Open a New PR
### Step 1: Gather Context
Collect information to pre-fill the PR body. Run these in parallel:
```bash
# Branch and commit context
git branch --show-current
git log master..HEAD --oneline
git diff master...HEAD --stat
# Check if branch is pushed
git rev-parse --abbrev-ref --symbolic-full-name @{u} 2>/dev/null
# Environment (for validation evidence)
rustc --version 2>/dev/null
```
Also review the changed files and commit messages to understand the nature of the change (bug fix, feature, refactor, docs, chore, etc.) and which subsystems are affected.
### Step 2: Pre-Fill the Template
Using the parsed template structure and gathered context, draft a complete PR body:
- For each `## ` section from the template, fill in the bullet points and fields based on context from the commits, diff, and changed files.
- Use the field descriptions and placeholder text in the template as guidance for what each field expects.
- For Yes/No fields, infer from the diff (e.g., if no files in `src/security/` changed, security impact is likely all No).
- For required sections, always provide a substantive answer. For optional sections, fill if there's enough context, otherwise leave the template prompts in place.
- Draft a conventional commit-style PR title based on the changes (e.g., `feat(provider): add retry budget override`, `fix(channel): handle disconnect gracefully`, `chore(ci): update workflow targets`).
### Step 3: Present Draft for Review
Show the user the complete draft:
```
## PR Draft: <title>
**Branch**: <head> -> master
**Labels**: <suggested labels>
<full body with all sections filled>
```
Ask the user to review: "Here's the pre-filled PR. Review and let me know what to change, or say 'submit' to open it."
Iterate on changes until the user approves.
### Step 4: Push and Create
1. If the branch isn't pushed yet, push it:
```bash
git push -u origin <branch>
```
2. Create the PR using a HEREDOC for the body:
```bash
gh pr create --title "<title>" --base master --body "$(cat <<'PR_BODY_EOF'
<full body>
PR_BODY_EOF
)"
```
3. If labels were agreed on, add them:
```bash
gh pr edit <number> --add-label "<label1>,<label2>"
```
4. Return the PR URL to the user.
---
## Mode: Update an Existing PR
### Step 1: Identify the PR
1. **If a PR number or URL is given**: use that directly.
2. **If on a branch with an open PR**: auto-detect:
```bash
gh pr view --json number,title,body,labels,state,author,url,headRefName 2>/dev/null
```
3. **If neither**: ask the user for the PR number.
Verify the current user is the PR author:
```bash
CURRENT_USER=$(gh api user --jq '.login')
PR_AUTHOR=$(gh pr view <number> --json author --jq '.author.login')
```
If not the author, stop and inform the user.
### Step 2: Fetch Current State
```bash
gh pr view <number> --json number,title,body,labels,state,baseRefName,headRefName,url,author,reviewDecision,statusCheckRollup,commits
```
Display a summary:
```
## PR #<number>: <title>
**State**: <open/closed/merged>
**Branch**: <head> -> <base>
**Labels**: <label list>
**Checks**: <pass/fail/pending>
**URL**: <url>
```
### Step 3: Determine What to Update
Support these operations:
| Operation | How |
|---|---|
| **Edit title** | `gh pr edit <number> --title "<new title>"` |
| **Edit full body** | `gh pr edit <number> --body "<new body>"` |
| **Add labels** | `gh pr edit <number> --add-label "<label1>,<label2>"` |
| **Remove labels** | `gh pr edit <number> --remove-label "<label1>"` |
| **Edit specific section** | Parse body by `## ` headers, modify target section, re-submit full body |
| **Add a comment** | `gh pr comment <number> --body "<comment>"` |
| **Link an issue** | Edit the linked-issue section in the body |
| **Smart update after new commits** | Re-analyze and suggest section updates |
### Step 4: Handle Body Section Edits
When editing a specific section:
1. Parse the current PR body into sections by `## ` headers
2. Match the user's request to the corresponding section from the template
3. Show the current content of that section and the proposed replacement
4. On confirmation, modify only that section, reconstruct the full body, and submit
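A minimal, self-contained sketch of the section swap. The section name in the usage comment is illustrative, not a real template section:
```python
# Sketch: replace one `## ` section of a PR body, preserving everything
# else verbatim, then rebuild the full body for `gh pr edit --body`.
import re

def replace_section(body: str, target: str, new_content: str) -> str:
    # Split at each line that starts a `## ` header, keeping the preamble.
    parts = re.split(r"(?m)^## ", body)
    preamble, sections = parts[0], parts[1:]
    rebuilt = [preamble.rstrip()]
    for section in sections:
        header, _, content = section.partition("\n")
        if header.strip() == target:
            content = new_content
        rebuilt.append(f"## {header}\n{content.rstrip()}")
    return "\n\n".join(p for p in rebuilt if p)

# Usage (illustrative section name):
# updated = replace_section(current_body, "Validation", "Ran `cargo test` locally.")
# Then: gh pr edit <number> --body "$updated"
```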
### Step 5: Smart Update After New Commits
When the user wants to sync the PR description after pushing new changes:
1. Identify new commits:
```bash
gh pr view <number> --json commits --jq '.commits[].messageHeadline'
git log <base>..<head> --oneline
git diff <base>...<head> --stat
```
2. Re-read the PR template. Analyze which sections are now stale based on the new changes — use the template's section names and field descriptions to identify what needs updating rather than relying on hardcoded assumptions.
3. Present proposed updates section-by-section and confirm before applying.
### Step 6: Apply Updates
For title/label changes, use direct `gh pr edit` flags.
For body edits, use a HEREDOC:
```bash
gh pr edit <number> --body "$(cat <<'PR_BODY_EOF'
<full updated body>
PR_BODY_EOF
)"
```
For comments:
```bash
gh pr comment <number> --body "$(cat <<'COMMENT_EOF'
<comment text>
COMMENT_EOF
)"
```
### Step 7: Confirm
Fetch and display the updated state:
```bash
gh pr view <number> --json number,title,labels,url
```
Return the PR URL.
---
## Important Rules
- **Always read `.github/pull_request_template.md`** before filling or editing a PR body. Never assume section names, fields, or structure — derive everything from the template. It's the source of truth and may change.
- **For updates, only modify requested sections.** Preserve everything else exactly as-is.
- **Always show diffs before applying body edits.** Present current vs proposed for each changed section.
- **Never include personal/sensitive data** in PR content per ZeroClaw's privacy contract.
- **For label changes**, only use labels that exist in the repository. Check with `gh label list` if unsure.
- **Fetch the latest body before editing** to avoid clobbering concurrent changes.
- **For new PRs**, push the branch before creating (with `-u` to set upstream tracking).

View File

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,485 @@
---
name: skill-creator
description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
---
# Skill Creator
A skill for creating new skills and iteratively improving them.
At a high level, the process of creating a skill goes like this:
- Decide what you want the skill to do and roughly how it should do it
- Write a draft of the skill
- Create a few test prompts and run claude-with-access-to-the-skill on them
- Help the user evaluate the results both qualitatively and quantitatively
- While the runs happen in the background, draft some quantitative evals if there aren't any (if some exist, you can use them as is or modify them if something needs to change). Then explain them to the user (or, if they already existed, explain the existing ones)
- Use the `eval-viewer/generate_review.py` script to show the user the results, and also let them look at the quantitative metrics
- Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
- Repeat until you're satisfied
- Expand the test set and try again at larger scale
Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
Cool? Cool.
## Communicating with the user
The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you have? it's only very recently that this started), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, and parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
- "evaluation" and "benchmark" are borderline, but OK
- for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them
It's OK to briefly explain terms if you're in doubt; a short definition helps when you're unsure the user will get it.
---
## Creating a skill
### Capture Intent
Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill in the gaps, and should confirm before you proceed to the next step.
1. What should this skill enable Claude to do?
2. When should this skill trigger? (what user phrases/contexts)
3. What's the expected output format?
4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.
### Interview and Research
Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.
Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user.
### Write the SKILL.md
Based on the user interview, fill in these components:
- **name**: Skill identifier
- **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
- **compatibility**: Required tools, dependencies (optional, rarely needed)
- **the rest of the skill :)**
### Skill Writing Guide
#### Anatomy of a Skill
```
skill-name/
├── SKILL.md (required)
│ ├── YAML frontmatter (name, description required)
│ └── Markdown instructions
└── Bundled Resources (optional)
├── scripts/ - Executable code for deterministic/repetitive tasks
├── references/ - Docs loaded into context as needed
└── assets/ - Files used in output (templates, icons, fonts)
```
#### Progressive Disclosure
Skills use a three-level loading system:
1. **Metadata** (name + description) - Always in context (~100 words)
2. **SKILL.md body** - In context whenever skill triggers (<500 lines ideal)
3. **Bundled resources** - As needed (unlimited, scripts can execute without loading)
These word counts are approximate and you can feel free to go longer if needed.
**Key patterns:**
- Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
- Reference files clearly from SKILL.md with guidance on when to read them
- For large reference files (>300 lines), include a table of contents
**Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
├── aws.md
├── gcp.md
└── azure.md
```
Claude reads only the relevant reference file.
#### Principle of Lack of Surprise
This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents should not surprise the user in their intent if described. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" are OK though.
#### Writing Patterns
Prefer using the imperative form in instructions.
**Defining output formats** - You can do it like this:
```markdown
## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations
```
**Examples pattern** - It's useful to include examples. You can format them like this (but if "Input" and "Output" are in the examples you might want to deviate a little):
```markdown
## Commit message format
**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication
```
### Writing Style
Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.
### Test Cases
After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.
Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.
```json
{
"skill_name": "example-skill",
"evals": [
{
"id": 1,
"prompt": "User's task prompt",
"expected_output": "Description of expected result",
"files": []
}
]
}
```
See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).
## Running and evaluating test cases
This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.
Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.) and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go.
### Step 1: Spawn all runs (with-skill AND baseline) in the same turn
For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
**With-skill run:**
```
Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
```
**Baseline run** (same prompt, but the baseline depends on context):
- **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
- **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.
Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.
```json
{
"eval_id": 0,
"eval_name": "descriptive-name-here",
"prompt": "The user's task prompt",
"assertions": []
}
```
### Step 2: While runs are in progress, draft assertions
Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.
Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.
Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
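A minimal sketch of wiring drafted assertions into one metadata file. The path and assertion strings are illustrative; the authoritative schema lives in `references/schemas.md`:
```python
# Sketch: add drafted assertions to an eval_metadata.json file.
# Path and assertion text are illustrative, not fixed conventions.
import json

path = "my-skill-workspace/iteration-1/eval-0/eval_metadata.json"
with open(path) as f:
    meta = json.load(f)

meta["assertions"] = [
    "Output file exists and is non-empty",
    "Chart includes labeled x and y axes",
]

with open(path, "w") as f:
    json.dump(meta, f, ensure_ascii=False, indent=2)
```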
### Step 3: As runs complete, capture timing data
When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:
```json
{
"total_tokens": 84852,
"duration_ms": 23332,
"total_duration_seconds": 23.3
}
```
This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
### Step 4: Grade, aggregate, and launch the viewer
Once all runs are done:
1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations. A minimal sketch of the grading.json write appears after this list.
2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory:
```bash
python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
```
This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see `references/schemas.md` for the exact schema the viewer expects.
Put each with_skill version before its baseline counterpart.
3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
4. **Launch the viewer** with both qualitative outputs and quantitative data:
```bash
nohup python <skill-creator-path>/eval-viewer/generate_review.py \
<workspace>/iteration-N \
--skill-name "my-skill" \
--benchmark <workspace>/iteration-N/benchmark.json \
> /dev/null 2>&1 &
VIEWER_PID=$!
```
For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.
**Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up.
Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."
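To pin down the exact field names the viewer expects, here is a minimal sketch of a grading.json write. The assertion content and path are illustrative, and the top-level shape beyond the expectations array is an assumption:
```python
# Sketch: write grading.json for one run. The viewer requires exactly
# these field names inside the expectations array: text, passed, evidence.
# The surrounding structure and path are illustrative assumptions.
import json

grading = {
    "expectations": [
        {
            "text": "Output file exists and is non-empty",
            "passed": True,
            "evidence": "outputs/report.docx present, 24 KB",
        },
        {
            "text": "Chart includes labeled x and y axes",
            "passed": False,
            "evidence": "chart.png rendered with no axis labels",
        },
    ],
}

with open("my-skill-workspace/iteration-1/eval-0/with_skill/grading.json", "w") as f:
    json.dump(grading, f, indent=2)
```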
### What the user sees in the viewer
The "Outputs" tab shows one test case at a time:
- **Prompt**: the task that was given
- **Output**: the files the skill produced, rendered inline where possible
- **Previous Output** (iteration 2+): collapsed section showing last iteration's output
- **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
- **Feedback**: a textbox that auto-saves as they type
- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox
The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to `feedback.json`.
### Step 5: Read the feedback
When the user tells you they're done, read `feedback.json`:
```json
{
"reviews": [
{"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
{"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
{"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
],
"status": "complete"
}
```
Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
Kill the viewer server when you're done with it:
```bash
kill $VIEWER_PID 2>/dev/null
```
---
## Improving the skill
This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
### How to think about improvements
1. **Generalize from the feedback.** The big picture thing that's happening here is that we're trying to create skills that can be used a million times (maybe literally, maybe even more who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over again because it helps move faster. The user knows these examples in and out and it's quick for them to assess new outputs. But if the skill you and the user are codeveloping works only for those examples, it's useless. Rather than put in fiddly overfitty changes, or oppressively constrictive MUSTs, if there's some stubborn issue, you might try branching out and using different metaphors, or recommending different patterns of working. It's relatively cheap to try and maybe you'll land on something great.
2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.
3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and when given a good harness can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task, why the user is writing what they wrote, and what they actually wrote, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.
4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.
This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
### The iteration loop
After improving the skill:
1. Apply your improvements to the skill
2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
4. Wait for the user to review and tell you they're done
5. Read the new feedback, improve again, repeat
Keep going until:
- The user says they're happy
- The feedback is all empty (everything looks good)
- You're not making meaningful progress
---
## Advanced: Blind comparison
For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.
This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
---
## Description Optimization
The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
### Step 1: Generate trigger eval queries
Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:
```json
[
{"query": "the user prompt", "should_trigger": true},
{"query": "another prompt", "should_trigger": false}
]
```
The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).
Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`
Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`
For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.
For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.
The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
### Step 2: Review with user
Present the eval set to the user for review using the HTML template:
1. Read the template from `assets/eval_review.html`
2. Replace the placeholders:
- `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
- `__SKILL_NAME_PLACEHOLDER__` → the skill's name
- `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)
This step matters — bad eval queries lead to bad descriptions.
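A minimal sketch of the placeholder substitution, assuming the asset path and placeholder names quoted above; `webbrowser.open` stands in for the `open` command, and the skill name and paths are illustrative:
```python
# Sketch: fill the eval-review HTML template and open it locally.
# Placeholder names come from the steps above; paths are illustrative.
import json
import webbrowser
from pathlib import Path

evals = json.loads(Path("evals/trigger_evals.json").read_text())
html = Path("assets/eval_review.html").read_text()

# The eval data placeholder is a JS variable assignment, so insert raw JSON.
html = html.replace("__EVAL_DATA_PLACEHOLDER__", json.dumps(evals))
html = html.replace("__SKILL_NAME_PLACEHOLDER__", "my-skill")
html = html.replace("__SKILL_DESCRIPTION_PLACEHOLDER__", "Current description here")

out = Path("/tmp/eval_review_my-skill.html")
out.write_text(html)
webbrowser.open(out.as_uri())
```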
### Step 3: Run the optimization loop
Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."
Save the eval set to the workspace, then run in the background:
```bash
python -m scripts.run_loop \
--eval-set <path-to-trigger-eval.json> \
--skill-path <path-to-skill> \
--model <model-id-powering-this-session> \
--max-iterations 5 \
--verbose
```
Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.
While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.
This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting.
### How skill triggering works
Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.
This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
### Step 4: Apply the result
Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
---
### Package and Present (only if `present_files` tool is available)
Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:
```bash
python -m scripts.package_skill <path/to/skill-folder>
```
After packaging, direct the user to the resulting `.skill` file path so they can install it.
---
## Claude.ai-specific instructions
In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:
**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.
**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"
**Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user.
**The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.
**Description optimization**: This section requires the `claude` CLI tool (specifically `claude -p`) which is only available in Claude Code. Skip it if you're on Claude.ai.
**Blind comparison**: Requires subagents. Skip it.
**Packaging**: The `package_skill.py` script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting `.skill` file.
**Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. In this case:
- **Preserve the original name.** Note the skill's directory name and `name` frontmatter field -- use them unchanged. E.g., if the installed skill is `research-helper`, output `research-helper.skill` (not `research-helper-v2`).
- **Copy to a writeable location before editing.** The installed skill path may be read-only. Copy to `/tmp/skill-name/`, edit there, and package from the copy.
- **If packaging manually, stage in `/tmp/` first**, then copy to the output directory -- direct writes may fail due to permissions.
---
## Cowork-Specific Instructions
If you're in Cowork, the main things to know are:
- You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than parallel.)
- You don't have a browser or display, so when generating the eval viewer, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Then proffer a link that the user can click to open the HTML in their browser.
- For whatever reason, the Cowork setup seems to make Claude disinclined to generate the eval viewer after running the tests, so just to reiterate: whether you're in Cowork or in Claude Code, after running tests, you should always generate the eval viewer for the human to look at examples before revising the skill yourself and trying to make corrections, using `generate_review.py` (not writing your own boutique HTML code). Sorry in advance but I'm gonna go all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating the outputs yourself. You want to get them in front of the human ASAP!
- Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first).
- Packaging works — `package_skill.py` just needs Python and a filesystem.
- Description optimization (`run_loop.py` / `run_eval.py`) should work in Cowork just fine since it uses `claude -p` via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape.
- **Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. Follow the update guidance in the claude.ai section above.
---
## Reference files
The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.
- `agents/grader.md` — How to evaluate assertions against outputs
- `agents/comparator.md` — How to do blind A/B comparison between two outputs
- `agents/analyzer.md` — How to analyze why one version beat another
The references/ directory has additional documentation:
- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.
---
Repeating one more time the core loop here for emphasis:
- Figure out what the skill is about
- Draft or edit the skill
- Run claude-with-access-to-the-skill on test prompts
- With the user, evaluate the outputs:
- Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
- Run quantitative evals
- Repeat until you and the user are satisfied
- Package the final skill and return it to the user.
Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens.
Good luck!

View File

@@ -0,0 +1,274 @@
# Post-hoc Analyzer Agent
Analyze blind comparison results to understand WHY the winner won and generate improvement suggestions.
## Role
After the blind comparator determines a winner, the Post-hoc Analyzer "unblinds" the results by examining the skills and transcripts. The goal is to extract actionable insights: what made the winner better, and how can the loser be improved?
## Inputs
You receive these parameters in your prompt:
- **winner**: "A" or "B" (from blind comparison)
- **winner_skill_path**: Path to the skill that produced the winning output
- **winner_transcript_path**: Path to the execution transcript for the winner
- **loser_skill_path**: Path to the skill that produced the losing output
- **loser_transcript_path**: Path to the execution transcript for the loser
- **comparison_result_path**: Path to the blind comparator's output JSON
- **output_path**: Where to save the analysis results
## Process
### Step 1: Read Comparison Result
1. Read the blind comparator's output at comparison_result_path
2. Note the winning side (A or B), the reasoning, and any scores
3. Understand what the comparator valued in the winning output
### Step 2: Read Both Skills
1. Read the winner skill's SKILL.md and key referenced files
2. Read the loser skill's SKILL.md and key referenced files
3. Identify structural differences:
- Instructions clarity and specificity
- Script/tool usage patterns
- Example coverage
- Edge case handling
### Step 3: Read Both Transcripts
1. Read the winner's transcript
2. Read the loser's transcript
3. Compare execution patterns:
- How closely did each follow their skill's instructions?
- What tools were used differently?
- Where did the loser diverge from optimal behavior?
- Did either encounter errors or make recovery attempts?
### Step 4: Analyze Instruction Following
For each transcript, evaluate:
- Did the agent follow the skill's explicit instructions?
- Did the agent use the skill's provided tools/scripts?
- Were there missed opportunities to leverage skill content?
- Did the agent add unnecessary steps not in the skill?
Score instruction following 1-10 and note specific issues.
### Step 5: Identify Winner Strengths
Determine what made the winner better:
- Clearer instructions that led to better behavior?
- Better scripts/tools that produced better output?
- More comprehensive examples that guided edge cases?
- Better error handling guidance?
Be specific. Quote from skills/transcripts where relevant.
### Step 6: Identify Loser Weaknesses
Determine what held the loser back:
- Ambiguous instructions that led to suboptimal choices?
- Missing tools/scripts that forced workarounds?
- Gaps in edge case coverage?
- Poor error handling that caused failures?
### Step 7: Generate Improvement Suggestions
Based on the analysis, produce actionable suggestions for improving the loser skill:
- Specific instruction changes to make
- Tools/scripts to add or modify
- Examples to include
- Edge cases to address
Prioritize by impact. Focus on changes that would have changed the outcome.
### Step 8: Write Analysis Results
Save structured analysis to `{output_path}`.
## Output Format
Write a JSON file with this structure:
```json
{
"comparison_summary": {
"winner": "A",
"winner_skill": "path/to/winner/skill",
"loser_skill": "path/to/loser/skill",
"comparator_reasoning": "Brief summary of why comparator chose winner"
},
"winner_strengths": [
"Clear step-by-step instructions for handling multi-page documents",
"Included validation script that caught formatting errors",
"Explicit guidance on fallback behavior when OCR fails"
],
"loser_weaknesses": [
"Vague instruction 'process the document appropriately' led to inconsistent behavior",
"No script for validation, agent had to improvise and made errors",
"No guidance on OCR failure, agent gave up instead of trying alternatives"
],
"instruction_following": {
"winner": {
"score": 9,
"issues": [
"Minor: skipped optional logging step"
]
},
"loser": {
"score": 6,
"issues": [
"Did not use the skill's formatting template",
"Invented own approach instead of following step 3",
"Missed the 'always validate output' instruction"
]
}
},
"improvement_suggestions": [
{
"priority": "high",
"category": "instructions",
"suggestion": "Replace 'process the document appropriately' with explicit steps: 1) Extract text, 2) Identify sections, 3) Format per template",
"expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
},
{
"priority": "high",
"category": "tools",
"suggestion": "Add validate_output.py script similar to winner skill's validation approach",
"expected_impact": "Would catch formatting errors before final output"
},
{
"priority": "medium",
"category": "error_handling",
"suggestion": "Add fallback instructions: 'If OCR fails, try: 1) different resolution, 2) image preprocessing, 3) manual extraction'",
"expected_impact": "Would prevent early failure on difficult documents"
}
],
"transcript_insights": {
"winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script -> Fixed 2 issues -> Produced output",
"loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods -> No validation -> Output had errors"
}
}
```
## Guidelines
- **Be specific**: Quote from skills and transcripts, don't just say "instructions were unclear"
- **Be actionable**: Suggestions should be concrete changes, not vague advice
- **Focus on skill improvements**: The goal is to improve the losing skill, not critique the agent
- **Prioritize by impact**: Which changes would most likely have changed the outcome?
- **Consider causation**: Did the skill weakness actually cause the worse output, or is it incidental?
- **Stay objective**: Analyze what happened, don't editorialize
- **Think about generalization**: Would this improvement help on other evals too?
## Categories for Suggestions
Use these categories to organize improvement suggestions:
| Category | Description |
|----------|-------------|
| `instructions` | Changes to the skill's prose instructions |
| `tools` | Scripts, templates, or utilities to add/modify |
| `examples` | Example inputs/outputs to include |
| `error_handling` | Guidance for handling failures |
| `structure` | Reorganization of skill content |
| `references` | External docs or resources to add |
## Priority Levels
- **high**: Would likely change the outcome of this comparison
- **medium**: Would improve quality but may not change win/loss
- **low**: Nice to have, marginal improvement
---
# Analyzing Benchmark Results
When analyzing benchmark results, the analyzer's purpose is to **surface patterns and anomalies** across multiple runs, not suggest skill improvements.
## Role
Review all benchmark run results and generate freeform notes that help the user understand skill performance. Focus on patterns that wouldn't be visible from aggregate metrics alone.
## Inputs
You receive these parameters in your prompt:
- **benchmark_data_path**: Path to the in-progress benchmark.json with all run results
- **skill_path**: Path to the skill being benchmarked
- **output_path**: Where to save the notes (as JSON array of strings)
## Process
### Step 1: Read Benchmark Data
1. Read the benchmark.json containing all run results
2. Note the configurations tested (with_skill, without_skill)
3. Understand the run_summary aggregates already calculated
### Step 2: Analyze Per-Assertion Patterns
For each expectation across all runs:
- Does it **always pass** in both configurations? (may not differentiate skill value)
- Does it **always fail** in both configurations? (may be broken or beyond capability)
- Does it **always pass with skill but fail without**? (skill clearly adds value here)
- Does it **always fail with skill but pass without**? (skill may be hurting)
- Is it **highly variable**? (flaky expectation or non-deterministic behavior)
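A minimal sketch of this classification, assuming each expectation's per-run verdicts have been collected as booleans per configuration (the helper name and return strings are illustrative):
```python
def classify_expectation(with_skill: list[bool], without_skill: list[bool]) -> str:
    # Name the cross-run pattern for one expectation (illustrative helper).
    if all(with_skill) and all(without_skill):
        return "always passes - may not differentiate skill value"
    if not any(with_skill) and not any(without_skill):
        return "always fails - broken or beyond capability"
    if all(with_skill) and not any(without_skill):
        return "skill clearly adds value"
    if not any(with_skill) and all(without_skill):
        return "skill may be hurting"
    return "variable - flaky expectation or non-deterministic behavior"
```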
### Step 3: Analyze Cross-Eval Patterns
Look for patterns across evals:
- Are certain eval types consistently harder/easier?
- Do some evals show high variance while others are stable?
- Are there surprising results that contradict expectations?
### Step 4: Analyze Metrics Patterns
Look at time_seconds, tokens, tool_calls:
- Does the skill significantly increase execution time?
- Is there high variance in resource usage?
- Are there outlier runs that skew the aggregates?
### Step 5: Generate Notes
Write freeform observations as a list of strings. Each note should:
- State a specific observation
- Be grounded in the data (not speculation)
- Help the user understand something the aggregate metrics don't show
Examples:
- "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value"
- "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure that may be flaky"
- "Without-skill runs consistently fail on table extraction expectations (0% pass rate)"
- "Skill adds 13s average execution time but improves pass rate by 50%"
- "Token usage is 80% higher with skill, primarily due to script output parsing"
- "All 3 without-skill runs for eval 1 produced empty output"
### Step 6: Write Notes
Save notes to `{output_path}` as a JSON array of strings:
```json
[
"Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
"Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure",
"Without-skill runs consistently fail on table extraction expectations",
"Skill adds 13s average execution time but improves pass rate by 50%"
]
```
## Guidelines
**DO:**
- Report what you observe in the data
- Be specific about which evals, expectations, or runs you're referring to
- Note patterns that aggregate metrics would hide
- Provide context that helps interpret the numbers
**DO NOT:**
- Suggest improvements to the skill (that's for the improvement step, not benchmarking)
- Make subjective quality judgments ("the output was good/bad")
- Speculate about causes without evidence
- Repeat information already in the run_summary aggregates

View File

@@ -0,0 +1,202 @@
# Blind Comparator Agent
Compare two outputs WITHOUT knowing which skill produced them.
## Role
The Blind Comparator judges which output better accomplishes the eval task. You receive two outputs labeled A and B, but you do NOT know which skill produced which. This prevents bias toward a particular skill or approach.
Your judgment is based purely on output quality and task completion.
## Inputs
You receive these parameters in your prompt:
- **output_a_path**: Path to the first output file or directory
- **output_b_path**: Path to the second output file or directory
- **eval_prompt**: The original task/prompt that was executed
- **expectations**: List of expectations to check (optional - may be empty)
## Process
### Step 1: Read Both Outputs
1. Examine output A (file or directory)
2. Examine output B (file or directory)
3. Note the type, structure, and content of each
4. If outputs are directories, examine all relevant files inside
### Step 2: Understand the Task
1. Read the eval_prompt carefully
2. Identify what the task requires:
- What should be produced?
- What qualities matter (accuracy, completeness, format)?
- What would distinguish a good output from a poor one?
### Step 3: Generate Evaluation Rubric
Based on the task, generate a rubric with two dimensions:
**Content Rubric** (what the output contains):
| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
|-----------|----------|----------------|---------------|
| Correctness | Major errors | Minor errors | Fully correct |
| Completeness | Missing key elements | Mostly complete | All elements present |
| Accuracy | Significant inaccuracies | Minor inaccuracies | Accurate throughout |
**Structure Rubric** (how the output is organized):
| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
|-----------|----------|----------------|---------------|
| Organization | Disorganized | Reasonably organized | Clear, logical structure |
| Formatting | Inconsistent/broken | Mostly consistent | Professional, polished |
| Usability | Difficult to use | Usable with effort | Easy to use |
Adapt criteria to the specific task. For example:
- PDF form → "Field alignment", "Text readability", "Data placement"
- Document → "Section structure", "Heading hierarchy", "Paragraph flow"
- Data output → "Schema correctness", "Data types", "Completeness"
### Step 4: Evaluate Each Output Against the Rubric
For each output (A and B):
1. **Score each criterion** on the rubric (1-5 scale)
2. **Calculate dimension totals**: Content score, Structure score
3. **Calculate overall score**: Average of dimension scores, scaled to 1-10
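One way to compute these numbers, matching the worked example in the output format below (a sketch, not part of the comparator contract):
```python
def dimension_score(criteria: dict[str, int]) -> float:
    # Average the 1-5 criterion scores within one dimension.
    return round(sum(criteria.values()) / len(criteria), 1)

content = dimension_score({"correctness": 5, "completeness": 5, "accuracy": 4})    # 4.7
structure = dimension_score({"organization": 4, "formatting": 5, "usability": 4})  # 4.3
# Average of the two 1-5 dimensions, doubled to a 1-10 scale (equivalently their sum).
overall = round(content + structure, 1)  # 9.0
```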
### Step 5: Check Assertions (if provided)
If expectations are provided:
1. Check each expectation against output A
2. Check each expectation against output B
3. Count pass rates for each output
4. Use expectation scores as secondary evidence (not the primary decision factor)
### Step 6: Determine the Winner
Compare A and B based on (in priority order):
1. **Primary**: Overall rubric score (content + structure)
2. **Secondary**: Assertion pass rates (if applicable)
3. **Tiebreaker**: If truly equal, declare a TIE
Be decisive - ties should be rare. One output is usually better, even if marginally.
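As a sketch of that priority order (field names mirror the output format below; pass rates may be absent when no expectations were provided):
```python
def pick_winner(a: dict, b: dict) -> str:
    # Primary: overall rubric score. Secondary: assertion pass rate. Otherwise: TIE.
    if a["overall_score"] != b["overall_score"]:
        return "A" if a["overall_score"] > b["overall_score"] else "B"
    pa, pb = a.get("pass_rate"), b.get("pass_rate")
    if pa is not None and pb is not None and pa != pb:
        return "A" if pa > pb else "B"
    return "TIE"
```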
### Step 7: Write Comparison Results
Save results to a JSON file at the path specified (or `comparison.json` if not specified).
## Output Format
Write a JSON file with this structure:
```json
{
"winner": "A",
"reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
"rubric": {
"A": {
"content": {
"correctness": 5,
"completeness": 5,
"accuracy": 4
},
"structure": {
"organization": 4,
"formatting": 5,
"usability": 4
},
"content_score": 4.7,
"structure_score": 4.3,
"overall_score": 9.0
},
"B": {
"content": {
"correctness": 3,
"completeness": 2,
"accuracy": 3
},
"structure": {
"organization": 3,
"formatting": 2,
"usability": 3
},
"content_score": 2.7,
"structure_score": 2.7,
"overall_score": 5.4
}
},
"output_quality": {
"A": {
"score": 9,
"strengths": ["Complete solution", "Well-formatted", "All fields present"],
"weaknesses": ["Minor style inconsistency in header"]
},
"B": {
"score": 5,
"strengths": ["Readable output", "Correct basic structure"],
"weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
}
},
"expectation_results": {
"A": {
"passed": 4,
"total": 5,
"pass_rate": 0.80,
"details": [
{"text": "Output includes name", "passed": true},
{"text": "Output includes date", "passed": true},
{"text": "Format is PDF", "passed": true},
{"text": "Contains signature", "passed": false},
{"text": "Readable text", "passed": true}
]
},
"B": {
"passed": 3,
"total": 5,
"pass_rate": 0.60,
"details": [
{"text": "Output includes name", "passed": true},
{"text": "Output includes date", "passed": false},
{"text": "Format is PDF", "passed": true},
{"text": "Contains signature", "passed": false},
{"text": "Readable text", "passed": true}
]
}
}
}
```
If no expectations were provided, omit the `expectation_results` field entirely.
## Field Descriptions
- **winner**: "A", "B", or "TIE"
- **reasoning**: Clear explanation of why the winner was chosen (or why it's a tie)
- **rubric**: Structured rubric evaluation for each output
- **content**: Scores for content criteria (correctness, completeness, accuracy)
- **structure**: Scores for structure criteria (organization, formatting, usability)
- **content_score**: Average of content criteria (1-5)
- **structure_score**: Average of structure criteria (1-5)
- **overall_score**: Combined score scaled to 1-10
- **output_quality**: Summary quality assessment
- **score**: 1-10 rating (should match rubric overall_score)
- **strengths**: List of positive aspects
- **weaknesses**: List of issues or shortcomings
- **expectation_results**: (Only if expectations provided)
- **passed**: Number of expectations that passed
- **total**: Total number of expectations
- **pass_rate**: Fraction passed (0.0 to 1.0)
- **details**: Individual expectation results
## Guidelines
- **Stay blind**: DO NOT try to infer which skill produced which output. Judge purely on output quality.
- **Be specific**: Cite specific examples when explaining strengths and weaknesses.
- **Be decisive**: Choose a winner unless outputs are genuinely equivalent.
- **Output quality first**: Assertion scores are secondary to overall task completion.
- **Be objective**: Don't favor outputs based on style preferences; focus on correctness and completeness.
- **Explain your reasoning**: The reasoning field should make it clear why you chose the winner.
- **Handle edge cases**: If both outputs fail, pick the one that fails less badly. If both are excellent, pick the one that's marginally better.

View File

@@ -0,0 +1,223 @@
# Grader Agent
Evaluate expectations against an execution transcript and outputs.
## Role
The Grader reviews a transcript and output files, then determines whether each expectation passes or fails. Provide clear evidence for each judgment.
You have two jobs: grade the outputs, and critique the evals themselves. A passing grade on a weak assertion is worse than useless — it creates false confidence. When you notice an assertion that's trivially satisfied, or an important outcome that no assertion checks, say so.
## Inputs
You receive these parameters in your prompt:
- **expectations**: List of expectations to evaluate (strings)
- **transcript_path**: Path to the execution transcript (markdown file)
- **outputs_dir**: Directory containing output files from execution
## Process
### Step 1: Read the Transcript
1. Read the transcript file completely
2. Note the eval prompt, execution steps, and final result
3. Identify any issues or errors documented
### Step 2: Examine Output Files
1. List files in outputs_dir
2. Read/examine each file relevant to the expectations. If outputs aren't plain text, use the inspection tools provided in your prompt — don't rely solely on what the transcript says the executor produced.
3. Note contents, structure, and quality
### Step 3: Evaluate Each Assertion
For each expectation:
1. **Search for evidence** in the transcript and outputs
2. **Determine verdict**:
- **PASS**: Clear evidence the expectation is true AND the evidence reflects genuine task completion, not just surface-level compliance
- **FAIL**: No evidence, or evidence contradicts the expectation, or the evidence is superficial (e.g., correct filename but empty/wrong content)
3. **Cite the evidence**: Quote the specific text or describe what you found
### Step 4: Extract and Verify Claims
Beyond the predefined expectations, extract implicit claims from the outputs and verify them:
1. **Extract claims** from the transcript and outputs:
- Factual statements ("The form has 12 fields")
- Process claims ("Used pypdf to fill the form")
- Quality claims ("All fields were filled correctly")
2. **Verify each claim**:
- **Factual claims**: Can be checked against the outputs or external sources
- **Process claims**: Can be verified from the transcript
- **Quality claims**: Evaluate whether the claim is justified
3. **Flag unverifiable claims**: Note claims that cannot be verified with available information
This catches issues that predefined expectations might miss.
### Step 5: Read User Notes
If `{outputs_dir}/user_notes.md` exists:
1. Read it and note any uncertainties or issues flagged by the executor
2. Include relevant concerns in the grading output
3. These may reveal problems even when expectations pass
### Step 6: Critique the Evals
After grading, consider whether the evals themselves could be improved. Only surface suggestions when there's a clear gap.
Good suggestions test meaningful outcomes — assertions that are hard to satisfy without actually doing the work correctly. Think about what makes an assertion *discriminating*: it passes when the skill genuinely succeeds and fails when it doesn't.
Suggestions worth raising:
- An assertion that passed but would also pass for a clearly wrong output (e.g., checking filename existence but not file content)
- An important outcome you observed — good or bad — that no assertion covers at all
- An assertion that can't actually be verified from the available outputs
Keep the bar high. The goal is to flag things the eval author would say "good catch" about, not to nitpick every assertion.
### Step 7: Write Grading Results
Save results to `{outputs_dir}/../grading.json` (sibling to outputs_dir).
## Grading Criteria
**PASS when**:
- The transcript or outputs clearly demonstrate the expectation is true
- Specific evidence can be cited
- The evidence reflects genuine substance, not just surface compliance (e.g., a file exists AND contains correct content, not just the right filename)
**FAIL when**:
- No evidence found for the expectation
- Evidence contradicts the expectation
- The expectation cannot be verified from available information
- The evidence is superficial — the assertion is technically satisfied but the underlying task outcome is wrong or incomplete
- The output appears to meet the assertion by coincidence rather than by actually doing the work
**When uncertain**: the burden of proof rests on the expectation; without clear, citable evidence, grade it FAIL.
### Step 8: Read Executor Metrics and Timing
1. If `{outputs_dir}/metrics.json` exists, read it and include in grading output
2. If `{outputs_dir}/../timing.json` exists, read it and include timing data
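A minimal sketch of this step, assuming the grading result is held as a dict before being written (paths follow the conventions above):
```python
import json
from pathlib import Path

def attach_run_metadata(grading: dict, outputs_dir: Path) -> dict:
    # Fold executor metrics and wall-clock timing into the grading output when present.
    metrics_path = outputs_dir / "metrics.json"
    timing_path = outputs_dir.parent / "timing.json"
    if metrics_path.exists():
        grading["execution_metrics"] = json.loads(metrics_path.read_text())
    if timing_path.exists():
        grading["timing"] = json.loads(timing_path.read_text())
    return grading
```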
## Output Format
Write a JSON file with this structure:
```json
{
"expectations": [
{
"text": "The output includes the name 'John Smith'",
"passed": true,
"evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
},
{
"text": "The spreadsheet has a SUM formula in cell B10",
"passed": false,
"evidence": "No spreadsheet was created. The output was a text file."
},
{
"text": "The assistant used the skill's OCR script",
"passed": true,
"evidence": "Transcript Step 2 shows: 'Tool: Bash - python ocr_script.py image.png'"
}
],
"summary": {
"passed": 2,
"failed": 1,
"total": 3,
"pass_rate": 0.67
},
"execution_metrics": {
"tool_calls": {
"Read": 5,
"Write": 2,
"Bash": 8
},
"total_tool_calls": 15,
"total_steps": 6,
"errors_encountered": 0,
"output_chars": 12450,
"transcript_chars": 3200
},
"timing": {
"executor_duration_seconds": 165.0,
"grader_duration_seconds": 26.0,
"total_duration_seconds": 191.0
},
"claims": [
{
"claim": "The form has 12 fillable fields",
"type": "factual",
"verified": true,
"evidence": "Counted 12 fields in field_info.json"
},
{
"claim": "All required fields were populated",
"type": "quality",
"verified": false,
"evidence": "Reference section was left blank despite data being available"
}
],
"user_notes_summary": {
"uncertainties": ["Used 2023 data, may be stale"],
"needs_review": [],
"workarounds": ["Fell back to text overlay for non-fillable fields"]
},
"eval_feedback": {
"suggestions": [
{
"assertion": "The output includes the name 'John Smith'",
"reason": "A hallucinated document that mentions the name would also pass — consider checking it appears as the primary contact with matching phone and email from the input"
},
{
"reason": "No assertion checks whether the extracted phone numbers match the input — I observed incorrect numbers in the output that went uncaught"
}
],
"overall": "Assertions check presence but not correctness. Consider adding content verification."
}
}
```
## Field Descriptions
- **expectations**: Array of graded expectations
- **text**: The original expectation text
- **passed**: Boolean - true if expectation passes
- **evidence**: Specific quote or description supporting the verdict
- **summary**: Aggregate statistics
- **passed**: Count of passed expectations
- **failed**: Count of failed expectations
- **total**: Total expectations evaluated
- **pass_rate**: Fraction passed (0.0 to 1.0)
- **execution_metrics**: Copied from executor's metrics.json (if available)
- **output_chars**: Total character count of output files (proxy for tokens)
- **transcript_chars**: Character count of transcript
- **timing**: Wall clock timing from timing.json (if available)
- **executor_duration_seconds**: Time spent in executor subagent
- **total_duration_seconds**: Total elapsed time for the run
- **claims**: Extracted and verified claims from the output
- **claim**: The statement being verified
- **type**: "factual", "process", or "quality"
- **verified**: Boolean - whether the claim holds
- **evidence**: Supporting or contradicting evidence
- **user_notes_summary**: Issues flagged by the executor
- **uncertainties**: Things the executor wasn't sure about
- **needs_review**: Items requiring human attention
- **workarounds**: Places where the skill didn't work as expected
- **eval_feedback**: Improvement suggestions for the evals (only when warranted)
- **suggestions**: List of concrete suggestions, each with a `reason` and optionally an `assertion` it relates to
- **overall**: Brief assessment — can be "No suggestions, evals look solid" if nothing to flag
## Guidelines
- **Be objective**: Base verdicts on evidence, not assumptions
- **Be specific**: Quote the exact text that supports your verdict
- **Be thorough**: Check both transcript and output files
- **Be consistent**: Apply the same standard to each expectation
- **Explain failures**: Make it clear why evidence was insufficient
- **No partial credit**: Each expectation is pass or fail, not partial

View File

@@ -0,0 +1,146 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Eval Set Review - __SKILL_NAME_PLACEHOLDER__</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Poppins:wght@500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
<style>
* { box-sizing: border-box; margin: 0; padding: 0; }
body { font-family: 'Lora', Georgia, serif; background: #faf9f5; padding: 2rem; color: #141413; }
h1 { font-family: 'Poppins', sans-serif; margin-bottom: 0.5rem; font-size: 1.5rem; }
.description { color: #b0aea5; margin-bottom: 1.5rem; font-style: italic; max-width: 900px; }
.controls { margin-bottom: 1rem; display: flex; gap: 0.5rem; }
.btn { font-family: 'Poppins', sans-serif; padding: 0.5rem 1rem; border: none; border-radius: 6px; cursor: pointer; font-size: 0.875rem; font-weight: 500; }
.btn-add { background: #6a9bcc; color: white; }
.btn-add:hover { background: #5889b8; }
.btn-export { background: #d97757; color: white; }
.btn-export:hover { background: #c4613f; }
table { width: 100%; max-width: 1100px; border-collapse: collapse; background: white; border-radius: 6px; overflow: hidden; box-shadow: 0 1px 3px rgba(0,0,0,0.08); }
th { font-family: 'Poppins', sans-serif; background: #141413; color: #faf9f5; padding: 0.75rem 1rem; text-align: left; font-size: 0.875rem; }
td { padding: 0.75rem 1rem; border-bottom: 1px solid #e8e6dc; vertical-align: top; }
tr:nth-child(even) td { background: #faf9f5; }
tr:hover td { background: #f3f1ea; }
.section-header td { background: #e8e6dc; font-family: 'Poppins', sans-serif; font-weight: 500; font-size: 0.8rem; color: #141413; text-transform: uppercase; letter-spacing: 0.05em; }
.query-input { width: 100%; padding: 0.4rem; border: 1px solid #e8e6dc; border-radius: 4px; font-size: 0.875rem; font-family: 'Lora', Georgia, serif; resize: vertical; min-height: 60px; }
.query-input:focus { outline: none; border-color: #d97757; box-shadow: 0 0 0 2px rgba(217,119,87,0.15); }
.toggle { position: relative; display: inline-block; width: 44px; height: 24px; }
.toggle input { opacity: 0; width: 0; height: 0; }
.toggle .slider { position: absolute; inset: 0; background: #b0aea5; border-radius: 24px; cursor: pointer; transition: 0.2s; }
.toggle .slider::before { content: ""; position: absolute; width: 18px; height: 18px; left: 3px; bottom: 3px; background: white; border-radius: 50%; transition: 0.2s; }
.toggle input:checked + .slider { background: #d97757; }
.toggle input:checked + .slider::before { transform: translateX(20px); }
.btn-delete { background: #c44; color: white; padding: 0.3rem 0.6rem; border: none; border-radius: 4px; cursor: pointer; font-size: 0.75rem; font-family: 'Poppins', sans-serif; }
.btn-delete:hover { background: #a33; }
.summary { margin-top: 1rem; color: #b0aea5; font-size: 0.875rem; }
</style>
</head>
<body>
<h1>Eval Set Review: <span id="skill-name">__SKILL_NAME_PLACEHOLDER__</span></h1>
<p class="description">Current description: <span id="skill-desc">__SKILL_DESCRIPTION_PLACEHOLDER__</span></p>
<div class="controls">
<button class="btn btn-add" onclick="addRow()">+ Add Query</button>
<button class="btn btn-export" onclick="exportEvalSet()">Export Eval Set</button>
</div>
<table>
<thead>
<tr>
<th style="width:65%">Query</th>
<th style="width:18%">Should Trigger</th>
<th style="width:10%">Actions</th>
</tr>
</thead>
<tbody id="eval-body"></tbody>
</table>
<p class="summary" id="summary"></p>
<script>
const EVAL_DATA = __EVAL_DATA_PLACEHOLDER__;
let evalItems = [...EVAL_DATA];
function render() {
const tbody = document.getElementById('eval-body');
tbody.innerHTML = '';
// Sort: should-trigger first, then should-not-trigger
const sorted = evalItems
.map((item, origIdx) => ({ ...item, origIdx }))
.sort((a, b) => (b.should_trigger ? 1 : 0) - (a.should_trigger ? 1 : 0));
let lastGroup = null;
sorted.forEach(item => {
const group = item.should_trigger ? 'trigger' : 'no-trigger';
if (group !== lastGroup) {
const headerRow = document.createElement('tr');
headerRow.className = 'section-header';
headerRow.innerHTML = `<td colspan="3">${item.should_trigger ? 'Should Trigger' : 'Should NOT Trigger'}</td>`;
tbody.appendChild(headerRow);
lastGroup = group;
}
const idx = item.origIdx;
const tr = document.createElement('tr');
tr.innerHTML = `
<td><textarea class="query-input" onchange="updateQuery(${idx}, this.value)">${escapeHtml(item.query)}</textarea></td>
<td>
<label class="toggle">
<input type="checkbox" ${item.should_trigger ? 'checked' : ''} onchange="updateTrigger(${idx}, this.checked)">
<span class="slider"></span>
</label>
<span style="margin-left:8px;font-size:0.8rem;color:#b0aea5">${item.should_trigger ? 'Yes' : 'No'}</span>
</td>
<td><button class="btn-delete" onclick="deleteRow(${idx})">Delete</button></td>
`;
tbody.appendChild(tr);
});
updateSummary();
}
function escapeHtml(text) {
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
function updateQuery(idx, value) { evalItems[idx].query = value; updateSummary(); }
function updateTrigger(idx, value) { evalItems[idx].should_trigger = value; render(); }
function deleteRow(idx) { evalItems.splice(idx, 1); render(); }
function addRow() {
evalItems.push({ query: '', should_trigger: true });
render();
const inputs = document.querySelectorAll('.query-input');
inputs[inputs.length - 1].focus();
}
function updateSummary() {
const trigger = evalItems.filter(i => i.should_trigger).length;
const noTrigger = evalItems.filter(i => !i.should_trigger).length;
document.getElementById('summary').textContent =
`${evalItems.length} queries total: ${trigger} should trigger, ${noTrigger} should not trigger`;
}
function exportEvalSet() {
const valid = evalItems.filter(i => i.query.trim() !== '');
const data = valid.map(i => ({ query: i.query.trim(), should_trigger: i.should_trigger }));
const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'eval_set.json';
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(url);
}
render();
</script>
</body>
</html>

View File

@@ -0,0 +1,471 @@
#!/usr/bin/env python3
"""Generate and serve a review page for eval results.
Reads the workspace directory, discovers runs (directories with outputs/),
embeds all output data into a self-contained HTML page, and serves it via
a tiny HTTP server. Feedback auto-saves to feedback.json in the workspace.
Usage:
python generate_review.py <workspace-path> [--port PORT] [--skill-name NAME]
python generate_review.py <workspace-path> --previous-feedback /path/to/old/feedback.json
No dependencies beyond the Python stdlib are required.
"""
import argparse
import base64
import json
import mimetypes
import os
import re
import signal
import subprocess
import sys
import time
import webbrowser
from functools import partial
from http.server import HTTPServer, BaseHTTPRequestHandler
from pathlib import Path
# Files to exclude from output listings
METADATA_FILES = {"transcript.md", "user_notes.md", "metrics.json"}
# Extensions we render as inline text
TEXT_EXTENSIONS = {
".txt", ".md", ".json", ".csv", ".py", ".js", ".ts", ".tsx", ".jsx",
".yaml", ".yml", ".xml", ".html", ".css", ".sh", ".rb", ".go", ".rs",
".java", ".c", ".cpp", ".h", ".hpp", ".sql", ".r", ".toml",
}
# Extensions we render as inline images
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp"}
# MIME type overrides for common types
MIME_OVERRIDES = {
".svg": "image/svg+xml",
".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
}
def get_mime_type(path: Path) -> str:
ext = path.suffix.lower()
if ext in MIME_OVERRIDES:
return MIME_OVERRIDES[ext]
mime, _ = mimetypes.guess_type(str(path))
return mime or "application/octet-stream"
def find_runs(workspace: Path) -> list[dict]:
"""Recursively find directories that contain an outputs/ subdirectory."""
runs: list[dict] = []
_find_runs_recursive(workspace, workspace, runs)
# eval_id may be present but None; map it to +inf so such runs sort last instead of raising TypeError.
runs.sort(key=lambda r: (r["eval_id"] if r.get("eval_id") is not None else float("inf"), r["id"]))
return runs
def _find_runs_recursive(root: Path, current: Path, runs: list[dict]) -> None:
if not current.is_dir():
return
outputs_dir = current / "outputs"
if outputs_dir.is_dir():
run = build_run(root, current)
if run:
runs.append(run)
return
skip = {"node_modules", ".git", "__pycache__", "skill", "inputs"}
for child in sorted(current.iterdir()):
if child.is_dir() and child.name not in skip:
_find_runs_recursive(root, child, runs)
def build_run(root: Path, run_dir: Path) -> dict | None:
"""Build a run dict with prompt, outputs, and grading data."""
prompt = ""
eval_id = None
# Try eval_metadata.json
for candidate in [run_dir / "eval_metadata.json", run_dir.parent / "eval_metadata.json"]:
if candidate.exists():
try:
metadata = json.loads(candidate.read_text())
prompt = metadata.get("prompt", "")
eval_id = metadata.get("eval_id")
except (json.JSONDecodeError, OSError):
pass
if prompt:
break
# Fall back to transcript.md
if not prompt:
for candidate in [run_dir / "transcript.md", run_dir / "outputs" / "transcript.md"]:
if candidate.exists():
try:
text = candidate.read_text()
match = re.search(r"## Eval Prompt\n\n([\s\S]*?)(?=\n##|$)", text)
if match:
prompt = match.group(1).strip()
except OSError:
pass
if prompt:
break
if not prompt:
prompt = "(No prompt found)"
run_id = str(run_dir.relative_to(root)).replace("/", "-").replace("\\", "-")
# Collect output files
outputs_dir = run_dir / "outputs"
output_files: list[dict] = []
if outputs_dir.is_dir():
for f in sorted(outputs_dir.iterdir()):
if f.is_file() and f.name not in METADATA_FILES:
output_files.append(embed_file(f))
# Load grading if present
grading = None
for candidate in [run_dir / "grading.json", run_dir.parent / "grading.json"]:
if candidate.exists():
try:
grading = json.loads(candidate.read_text())
except (json.JSONDecodeError, OSError):
pass
if grading:
break
return {
"id": run_id,
"prompt": prompt,
"eval_id": eval_id,
"outputs": output_files,
"grading": grading,
}
def embed_file(path: Path) -> dict:
"""Read a file and return an embedded representation."""
ext = path.suffix.lower()
mime = get_mime_type(path)
if ext in TEXT_EXTENSIONS:
try:
content = path.read_text(errors="replace")
except OSError:
content = "(Error reading file)"
return {
"name": path.name,
"type": "text",
"content": content,
}
elif ext in IMAGE_EXTENSIONS:
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "image",
"mime": mime,
"data_uri": f"data:{mime};base64,{b64}",
}
elif ext == ".pdf":
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "pdf",
"data_uri": f"data:{mime};base64,{b64}",
}
elif ext == ".xlsx":
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "xlsx",
"data_b64": b64,
}
else:
# Binary / unknown — base64 download link
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "binary",
"mime": mime,
"data_uri": f"data:{mime};base64,{b64}",
}
def load_previous_iteration(workspace: Path) -> dict[str, dict]:
"""Load previous iteration's feedback and outputs.
Returns a map of run_id -> {"feedback": str, "outputs": list[dict]}.
"""
result: dict[str, dict] = {}
# Load feedback
feedback_map: dict[str, str] = {}
feedback_path = workspace / "feedback.json"
if feedback_path.exists():
try:
data = json.loads(feedback_path.read_text())
feedback_map = {
r["run_id"]: r["feedback"]
for r in data.get("reviews", [])
if r.get("feedback", "").strip()
}
except (json.JSONDecodeError, OSError, KeyError):
pass
# Load runs (to get outputs)
prev_runs = find_runs(workspace)
for run in prev_runs:
result[run["id"]] = {
"feedback": feedback_map.get(run["id"], ""),
"outputs": run.get("outputs", []),
}
# Also add feedback for run_ids that had feedback but no matching run
for run_id, fb in feedback_map.items():
if run_id not in result:
result[run_id] = {"feedback": fb, "outputs": []}
return result
def generate_html(
runs: list[dict],
skill_name: str,
previous: dict[str, dict] | None = None,
benchmark: dict | None = None,
) -> str:
"""Generate the complete standalone HTML page with embedded data."""
template_path = Path(__file__).parent / "viewer.html"
template = template_path.read_text()
# Build previous_feedback and previous_outputs maps for the template
previous_feedback: dict[str, str] = {}
previous_outputs: dict[str, list[dict]] = {}
if previous:
for run_id, data in previous.items():
if data.get("feedback"):
previous_feedback[run_id] = data["feedback"]
if data.get("outputs"):
previous_outputs[run_id] = data["outputs"]
embedded = {
"skill_name": skill_name,
"runs": runs,
"previous_feedback": previous_feedback,
"previous_outputs": previous_outputs,
}
if benchmark:
embedded["benchmark"] = benchmark
data_json = json.dumps(embedded)
return template.replace("/*__EMBEDDED_DATA__*/", f"const EMBEDDED_DATA = {data_json};")
# ---------------------------------------------------------------------------
# HTTP server (stdlib only, zero dependencies)
# ---------------------------------------------------------------------------
def _kill_port(port: int) -> None:
"""Kill any process listening on the given port."""
try:
result = subprocess.run(
["lsof", "-ti", f":{port}"],
capture_output=True, text=True, timeout=5,
)
for pid_str in result.stdout.strip().split("\n"):
if pid_str.strip():
try:
os.kill(int(pid_str.strip()), signal.SIGTERM)
except (ProcessLookupError, ValueError):
pass
if result.stdout.strip():
time.sleep(0.5)
except subprocess.TimeoutExpired:
pass
except FileNotFoundError:
print("Note: lsof not found, cannot check if port is in use", file=sys.stderr)
class ReviewHandler(BaseHTTPRequestHandler):
"""Serves the review HTML and handles feedback saves.
Regenerates the HTML on each page load so that refreshing the browser
picks up new eval outputs without restarting the server.
"""
def __init__(
self,
workspace: Path,
skill_name: str,
feedback_path: Path,
previous: dict[str, dict],
benchmark_path: Path | None,
*args,
**kwargs,
):
self.workspace = workspace
self.skill_name = skill_name
self.feedback_path = feedback_path
self.previous = previous
self.benchmark_path = benchmark_path
super().__init__(*args, **kwargs)
def do_GET(self) -> None:
if self.path == "/" or self.path == "/index.html":
# Regenerate HTML on each request (re-scans workspace for new outputs)
runs = find_runs(self.workspace)
benchmark = None
if self.benchmark_path and self.benchmark_path.exists():
try:
benchmark = json.loads(self.benchmark_path.read_text())
except (json.JSONDecodeError, OSError):
pass
html = generate_html(runs, self.skill_name, self.previous, benchmark)
content = html.encode("utf-8")
self.send_response(200)
self.send_header("Content-Type", "text/html; charset=utf-8")
self.send_header("Content-Length", str(len(content)))
self.end_headers()
self.wfile.write(content)
elif self.path == "/api/feedback":
data = b"{}"
if self.feedback_path.exists():
data = self.feedback_path.read_bytes()
self.send_response(200)
self.send_header("Content-Type", "application/json")
self.send_header("Content-Length", str(len(data)))
self.end_headers()
self.wfile.write(data)
else:
self.send_error(404)
def do_POST(self) -> None:
if self.path == "/api/feedback":
length = int(self.headers.get("Content-Length", 0))
body = self.rfile.read(length)
try:
data = json.loads(body)
if not isinstance(data, dict) or "reviews" not in data:
raise ValueError("Expected JSON object with 'reviews' key")
self.feedback_path.write_text(json.dumps(data, indent=2) + "\n")
resp = b'{"ok":true}'
self.send_response(200)
except (json.JSONDecodeError, OSError, ValueError) as e:
resp = json.dumps({"error": str(e)}).encode()
self.send_response(500)
self.send_header("Content-Type", "application/json")
self.send_header("Content-Length", str(len(resp)))
self.end_headers()
self.wfile.write(resp)
else:
self.send_error(404)
def log_message(self, format: str, *args: object) -> None:
# Suppress request logging to keep terminal clean
pass
def main() -> None:
parser = argparse.ArgumentParser(description="Generate and serve eval review")
parser.add_argument("workspace", type=Path, help="Path to workspace directory")
parser.add_argument("--port", "-p", type=int, default=3117, help="Server port (default: 3117)")
parser.add_argument("--skill-name", "-n", type=str, default=None, help="Skill name for header")
parser.add_argument(
"--previous-workspace", type=Path, default=None,
help="Path to previous iteration's workspace (shows old outputs and feedback as context)",
)
parser.add_argument(
"--benchmark", type=Path, default=None,
help="Path to benchmark.json to show in the Benchmark tab",
)
parser.add_argument(
"--static", "-s", type=Path, default=None,
help="Write standalone HTML to this path instead of starting a server",
)
args = parser.parse_args()
workspace = args.workspace.resolve()
if not workspace.is_dir():
print(f"Error: {workspace} is not a directory", file=sys.stderr)
sys.exit(1)
runs = find_runs(workspace)
if not runs:
print(f"No runs found in {workspace}", file=sys.stderr)
sys.exit(1)
skill_name = args.skill_name or workspace.name.replace("-workspace", "")
feedback_path = workspace / "feedback.json"
previous: dict[str, dict] = {}
if args.previous_workspace:
previous = load_previous_iteration(args.previous_workspace.resolve())
benchmark_path = args.benchmark.resolve() if args.benchmark else None
benchmark = None
if benchmark_path and benchmark_path.exists():
try:
benchmark = json.loads(benchmark_path.read_text())
except (json.JSONDecodeError, OSError):
pass
if args.static:
html = generate_html(runs, skill_name, previous, benchmark)
args.static.parent.mkdir(parents=True, exist_ok=True)
args.static.write_text(html)
print(f"\n Static viewer written to: {args.static}\n")
sys.exit(0)
# Kill any existing process on the target port
port = args.port
_kill_port(port)
handler = partial(ReviewHandler, workspace, skill_name, feedback_path, previous, benchmark_path)
try:
server = HTTPServer(("127.0.0.1", port), handler)
except OSError:
# Port still in use after kill attempt — find a free one
server = HTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
url = f"http://localhost:{port}"
print(f"\n Eval Viewer")
print(f" ─────────────────────────────────")
print(f" URL: {url}")
print(f" Workspace: {workspace}")
print(f" Feedback: {feedback_path}")
if previous:
print(f" Previous: {args.previous_workspace} ({len(previous)} runs)")
if benchmark_path:
print(f" Benchmark: {benchmark_path}")
print(f"\n Press Ctrl+C to stop.\n")
webbrowser.open(url)
try:
server.serve_forever()
except KeyboardInterrupt:
print("\nStopped.")
server.server_close()
if __name__ == "__main__":
main()

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,430 @@
# JSON Schemas
This document defines the JSON schemas used by skill-creator.
---
## evals.json
Defines the evals for a skill. Located at `evals/evals.json` within the skill directory.
```json
{
"skill_name": "example-skill",
"evals": [
{
"id": 1,
"prompt": "User's example prompt",
"expected_output": "Description of expected result",
"files": ["evals/files/sample1.pdf"],
"expectations": [
"The output includes X",
"The skill used script Y"
]
}
]
}
```
**Fields:**
- `skill_name`: Name matching the skill's frontmatter
- `evals[].id`: Unique integer identifier
- `evals[].prompt`: The task to execute
- `evals[].expected_output`: Human-readable description of success
- `evals[].files`: Optional list of input file paths (relative to skill root)
- `evals[].expectations`: List of verifiable statements
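A minimal loader sketch for this schema (the function name and checks are illustrative, not part of the contract):
```python
import json
from pathlib import Path

def load_evals(skill_dir: Path) -> list[dict]:
    # Parse evals/evals.json and sanity-check the required fields.
    data = json.loads((skill_dir / "evals" / "evals.json").read_text())
    for ev in data["evals"]:
        assert isinstance(ev["id"], int), "id must be a unique integer"
        assert ev["prompt"].strip(), f"eval {ev['id']} has an empty prompt"
        assert ev["expectations"], f"eval {ev['id']} has no expectations"
    return data["evals"]
```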
---
## history.json
Tracks version progression in Improve mode. Located at workspace root.
```json
{
"started_at": "2026-01-15T10:30:00Z",
"skill_name": "pdf",
"current_best": "v2",
"iterations": [
{
"version": "v0",
"parent": null,
"expectation_pass_rate": 0.65,
"grading_result": "baseline",
"is_current_best": false
},
{
"version": "v1",
"parent": "v0",
"expectation_pass_rate": 0.75,
"grading_result": "won",
"is_current_best": false
},
{
"version": "v2",
"parent": "v1",
"expectation_pass_rate": 0.85,
"grading_result": "won",
"is_current_best": true
}
]
}
```
**Fields:**
- `started_at`: ISO timestamp of when improvement started
- `skill_name`: Name of the skill being improved
- `current_best`: Version identifier of the best performer
- `iterations[].version`: Version identifier (v0, v1, ...)
- `iterations[].parent`: Parent version this was derived from
- `iterations[].expectation_pass_rate`: Pass rate from grading
- `iterations[].grading_result`: "baseline", "won", "lost", or "tie"
- `iterations[].is_current_best`: Whether this is the current best version
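The two best-version markers are redundant by design; a sketch of how they should agree (illustrative helper):
```python
def current_best_version(history: dict) -> str:
    # The iteration flagged is_current_best should match the top-level current_best.
    for it in history["iterations"]:
        if it["is_current_best"]:
            assert it["version"] == history["current_best"]
            return it["version"]
    return history["current_best"]
```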
---
## grading.json
Output from the grader agent. Located at `<run-dir>/grading.json`.
```json
{
"expectations": [
{
"text": "The output includes the name 'John Smith'",
"passed": true,
"evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
},
{
"text": "The spreadsheet has a SUM formula in cell B10",
"passed": false,
"evidence": "No spreadsheet was created. The output was a text file."
}
],
"summary": {
"passed": 2,
"failed": 1,
"total": 3,
"pass_rate": 0.67
},
"execution_metrics": {
"tool_calls": {
"Read": 5,
"Write": 2,
"Bash": 8
},
"total_tool_calls": 15,
"total_steps": 6,
"errors_encountered": 0,
"output_chars": 12450,
"transcript_chars": 3200
},
"timing": {
"executor_duration_seconds": 165.0,
"grader_duration_seconds": 26.0,
"total_duration_seconds": 191.0
},
"claims": [
{
"claim": "The form has 12 fillable fields",
"type": "factual",
"verified": true,
"evidence": "Counted 12 fields in field_info.json"
}
],
"user_notes_summary": {
"uncertainties": ["Used 2023 data, may be stale"],
"needs_review": [],
"workarounds": ["Fell back to text overlay for non-fillable fields"]
},
"eval_feedback": {
"suggestions": [
{
"assertion": "The output includes the name 'John Smith'",
"reason": "A hallucinated document that mentions the name would also pass"
}
],
"overall": "Assertions check presence but not correctness."
}
}
```
**Fields:**
- `expectations[]`: Graded expectations with evidence
- `summary`: Aggregate pass/fail counts
- `execution_metrics`: Tool usage and output size (from executor's metrics.json)
- `timing`: Wall clock timing (from timing.json)
- `claims`: Extracted and verified claims from the output
- `user_notes_summary`: Issues flagged by the executor
- `eval_feedback`: (optional) Improvement suggestions for the evals, only present when the grader identifies issues worth raising
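The summary block is pure arithmetic over the graded expectations; a sketch:
```python
def summarize(expectations: list[dict]) -> dict:
    # Aggregate pass/fail counts; pass_rate is rounded to two decimals as in the example.
    passed = sum(1 for e in expectations if e["passed"])
    total = len(expectations)
    return {
        "passed": passed,
        "failed": total - passed,
        "total": total,
        "pass_rate": round(passed / total, 2) if total else 0.0,
    }
```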
---
## metrics.json
Output from the executor agent. Located at `<run-dir>/outputs/metrics.json`.
```json
{
"tool_calls": {
"Read": 5,
"Write": 2,
"Bash": 8,
"Edit": 1,
"Glob": 2,
"Grep": 0
},
"total_tool_calls": 18,
"total_steps": 6,
"files_created": ["filled_form.pdf", "field_values.json"],
"errors_encountered": 0,
"output_chars": 12450,
"transcript_chars": 3200
}
```
**Fields:**
- `tool_calls`: Count per tool type
- `total_tool_calls`: Sum of all tool calls
- `total_steps`: Number of major execution steps
- `files_created`: List of output files created
- `errors_encountered`: Number of errors during execution
- `output_chars`: Total character count of output files
- `transcript_chars`: Character count of transcript
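As an illustration, `output_chars` can be computed with the same metadata exclusions `generate_review.py` uses (a sketch assuming text-decodable outputs):
```python
from pathlib import Path

def output_chars(outputs_dir: Path) -> int:
    # Character count across output files, used as a rough token proxy.
    skip = {"transcript.md", "user_notes.md", "metrics.json"}
    return sum(len(p.read_text(errors="replace"))
               for p in outputs_dir.iterdir()
               if p.is_file() and p.name not in skip)
```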
---
## timing.json
Wall clock timing for a run. Located at `<run-dir>/timing.json`.
**How to capture:** When a subagent task completes, the task notification includes `total_tokens` and `duration_ms`. Save these immediately — they are not persisted anywhere else and cannot be recovered after the fact.
```json
{
"total_tokens": 84852,
"duration_ms": 23332,
"total_duration_seconds": 23.3,
"executor_start": "2026-01-15T10:30:00Z",
"executor_end": "2026-01-15T10:32:45Z",
"executor_duration_seconds": 165.0,
"grader_start": "2026-01-15T10:32:46Z",
"grader_end": "2026-01-15T10:33:12Z",
"grader_duration_seconds": 26.0
}
```
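A sketch of turning the notification numbers into this shape (the executor/grader timestamps come from your own bookkeeping):
```python
def timing_record(total_tokens: int, duration_ms: int) -> dict:
    # Convert the subagent notification values; 23332 ms becomes 23.3 seconds.
    return {
        "total_tokens": total_tokens,
        "duration_ms": duration_ms,
        "total_duration_seconds": round(duration_ms / 1000, 1),
    }
```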
---
## benchmark.json
Output from Benchmark mode. Located at `benchmarks/<timestamp>/benchmark.json`.
```json
{
"metadata": {
"skill_name": "pdf",
"skill_path": "/path/to/pdf",
"executor_model": "claude-sonnet-4-20250514",
"analyzer_model": "most-capable-model",
"timestamp": "2026-01-15T10:30:00Z",
"evals_run": [1, 2, 3],
"runs_per_configuration": 3
},
"runs": [
{
"eval_id": 1,
"eval_name": "Ocean",
"configuration": "with_skill",
"run_number": 1,
"result": {
"pass_rate": 0.85,
"passed": 6,
"failed": 1,
"total": 7,
"time_seconds": 42.5,
"tokens": 3800,
"tool_calls": 18,
"errors": 0
},
"expectations": [
{"text": "...", "passed": true, "evidence": "..."}
],
"notes": [
"Used 2023 data, may be stale",
"Fell back to text overlay for non-fillable fields"
]
}
],
"run_summary": {
"with_skill": {
"pass_rate": {"mean": 0.85, "stddev": 0.05, "min": 0.80, "max": 0.90},
"time_seconds": {"mean": 45.0, "stddev": 12.0, "min": 32.0, "max": 58.0},
"tokens": {"mean": 3800, "stddev": 400, "min": 3200, "max": 4100}
},
"without_skill": {
"pass_rate": {"mean": 0.35, "stddev": 0.08, "min": 0.28, "max": 0.45},
"time_seconds": {"mean": 32.0, "stddev": 8.0, "min": 24.0, "max": 42.0},
"tokens": {"mean": 2100, "stddev": 300, "min": 1800, "max": 2500}
},
"delta": {
"pass_rate": "+0.50",
"time_seconds": "+13.0",
"tokens": "+1700"
}
},
"notes": [
"Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
"Eval 3 shows high variance (50% ± 40%) - may be flaky or model-dependent",
"Without-skill runs consistently fail on table extraction expectations",
"Skill adds 13s average execution time but improves pass rate by 50%"
]
}
```
**Fields:**
- `metadata`: Information about the benchmark run
- `skill_name`: Name of the skill
- `timestamp`: When the benchmark was run
- `evals_run`: List of eval names or IDs
- `runs_per_configuration`: Number of runs per config (e.g. 3)
- `runs[]`: Individual run results
- `eval_id`: Numeric eval identifier
- `eval_name`: Human-readable eval name (used as section header in the viewer)
- `configuration`: Must be `"with_skill"` or `"without_skill"` (the viewer uses this exact string for grouping and color coding)
- `run_number`: Integer run number (1, 2, 3...)
- `result`: Nested object with `pass_rate`, `passed`, `total`, `time_seconds`, `tokens`, `errors`
- `run_summary`: Statistical aggregates per configuration
- `with_skill` / `without_skill`: Each contains `pass_rate`, `time_seconds`, `tokens` objects with `mean` and `stddev` fields
- `delta`: Difference strings like `"+0.50"`, `"+13.0"`, `"+1700"`
- `notes`: Freeform observations from the analyzer
**Important:** The viewer reads these field names exactly. Using `config` instead of `configuration`, or putting `pass_rate` at the top level of a run instead of nested under `result`, will cause the viewer to show empty/zero values. Always reference this schema when generating benchmark.json manually.
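A quick sanity-check sketch for the pitfalls named above (a hypothetical helper, not shipped with the viewer):
```python
def check_benchmark_run(run: dict) -> list[str]:
    # Flag the exact field-name mistakes the viewer silently renders as empty/zero.
    problems = []
    if "configuration" not in run:
        problems.append("missing 'configuration' (did you write 'config'?)")
    elif run["configuration"] not in ("with_skill", "without_skill"):
        problems.append(f"unexpected configuration: {run['configuration']!r}")
    if "pass_rate" not in run.get("result", {}):
        problems.append("pass_rate must be nested under 'result'")
    return problems
```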
---
## comparison.json
Output from blind comparator. Located at `<grading-dir>/comparison-N.json`.
```json
{
"winner": "A",
"reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
"rubric": {
"A": {
"content": {
"correctness": 5,
"completeness": 5,
"accuracy": 4
},
"structure": {
"organization": 4,
"formatting": 5,
"usability": 4
},
"content_score": 4.7,
"structure_score": 4.3,
"overall_score": 9.0
},
"B": {
"content": {
"correctness": 3,
"completeness": 2,
"accuracy": 3
},
"structure": {
"organization": 3,
"formatting": 2,
"usability": 3
},
"content_score": 2.7,
"structure_score": 2.7,
"overall_score": 5.4
}
},
"output_quality": {
"A": {
"score": 9,
"strengths": ["Complete solution", "Well-formatted", "All fields present"],
"weaknesses": ["Minor style inconsistency in header"]
},
"B": {
"score": 5,
"strengths": ["Readable output", "Correct basic structure"],
"weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
}
},
"expectation_results": {
"A": {
"passed": 4,
"total": 5,
"pass_rate": 0.80,
"details": [
{"text": "Output includes name", "passed": true}
]
},
"B": {
"passed": 3,
"total": 5,
"pass_rate": 0.60,
"details": [
{"text": "Output includes name", "passed": true}
]
}
}
}
```
---
## analysis.json
Output from post-hoc analyzer. Located at `<grading-dir>/analysis.json`.
```json
{
"comparison_summary": {
"winner": "A",
"winner_skill": "path/to/winner/skill",
"loser_skill": "path/to/loser/skill",
"comparator_reasoning": "Brief summary of why comparator chose winner"
},
"winner_strengths": [
"Clear step-by-step instructions for handling multi-page documents",
"Included validation script that caught formatting errors"
],
"loser_weaknesses": [
"Vague instruction 'process the document appropriately' led to inconsistent behavior",
"No script for validation, agent had to improvise"
],
"instruction_following": {
"winner": {
"score": 9,
"issues": ["Minor: skipped optional logging step"]
},
"loser": {
"score": 6,
"issues": [
"Did not use the skill's formatting template",
"Invented own approach instead of following step 3"
]
}
},
"improvement_suggestions": [
{
"priority": "high",
"category": "instructions",
"suggestion": "Replace 'process the document appropriately' with explicit steps",
"expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
}
],
"transcript_insights": {
"winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script",
"loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods"
}
}
```

View File

@@ -0,0 +1,401 @@
#!/usr/bin/env python3
"""
Aggregate individual run results into benchmark summary statistics.
Reads grading.json files from run directories and produces:
- run_summary with mean, stddev, min, max for each metric
- delta between with_skill and without_skill configurations
Usage:
python aggregate_benchmark.py <benchmark_dir>
Example:
python aggregate_benchmark.py benchmarks/2026-01-15T10-30-00/
The script supports two directory layouts:
Workspace layout (from skill-creator iterations):
<benchmark_dir>/
└── eval-N/
├── with_skill/
│ ├── run-1/grading.json
│ └── run-2/grading.json
└── without_skill/
├── run-1/grading.json
└── run-2/grading.json
Legacy layout (with runs/ subdirectory):
<benchmark_dir>/
└── runs/
└── eval-N/
├── with_skill/
│ └── run-1/grading.json
└── without_skill/
└── run-1/grading.json
"""
import argparse
import json
import math
import sys
from datetime import datetime, timezone
from pathlib import Path
def calculate_stats(values: list[float]) -> dict:
"""Calculate mean, stddev, min, max for a list of values."""
if not values:
return {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0}
n = len(values)
mean = sum(values) / n
if n > 1:
variance = sum((x - mean) ** 2 for x in values) / (n - 1)
stddev = math.sqrt(variance)
else:
stddev = 0.0
return {
"mean": round(mean, 4),
"stddev": round(stddev, 4),
"min": round(min(values), 4),
"max": round(max(values), 4)
}
def load_run_results(benchmark_dir: Path) -> dict:
"""
Load all run results from a benchmark directory.
Returns dict keyed by config name (e.g. "with_skill"/"without_skill",
or "new_skill"/"old_skill"), each containing a list of run results.
"""
# Support both layouts: eval dirs directly under benchmark_dir, or under runs/
runs_dir = benchmark_dir / "runs"
if runs_dir.exists():
search_dir = runs_dir
elif list(benchmark_dir.glob("eval-*")):
search_dir = benchmark_dir
else:
print(f"No eval directories found in {benchmark_dir} or {benchmark_dir / 'runs'}")
return {}
results: dict[str, list] = {}
for eval_idx, eval_dir in enumerate(sorted(search_dir.glob("eval-*"))):
metadata_path = eval_dir / "eval_metadata.json"
if metadata_path.exists():
try:
with open(metadata_path) as mf:
eval_id = json.load(mf).get("eval_id", eval_idx)
except (json.JSONDecodeError, OSError):
eval_id = eval_idx
else:
try:
eval_id = int(eval_dir.name.split("-")[1])
except ValueError:
eval_id = eval_idx
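# e.g. a directory named "eval-3" with no eval_metadata.json resolves to eval_id 3;
# a non-numeric suffix falls back to its position in the sorted listing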
# Discover config directories dynamically rather than hardcoding names
for config_dir in sorted(eval_dir.iterdir()):
if not config_dir.is_dir():
continue
# Skip non-config directories (inputs, outputs, etc.)
if not list(config_dir.glob("run-*")):
continue
config = config_dir.name
if config not in results:
results[config] = []
for run_dir in sorted(config_dir.glob("run-*")):
run_number = int(run_dir.name.split("-")[1])
grading_file = run_dir / "grading.json"
if not grading_file.exists():
print(f"Warning: grading.json not found in {run_dir}")
continue
try:
with open(grading_file) as f:
grading = json.load(f)
except json.JSONDecodeError as e:
print(f"Warning: Invalid JSON in {grading_file}: {e}")
continue
# Extract metrics
result = {
"eval_id": eval_id,
"run_number": run_number,
"pass_rate": grading.get("summary", {}).get("pass_rate", 0.0),
"passed": grading.get("summary", {}).get("passed", 0),
"failed": grading.get("summary", {}).get("failed", 0),
"total": grading.get("summary", {}).get("total", 0),
}
# Extract timing — check grading.json first, then sibling timing.json
timing = grading.get("timing", {})
result["time_seconds"] = timing.get("total_duration_seconds", 0.0)
timing_file = run_dir / "timing.json"
if result["time_seconds"] == 0.0 and timing_file.exists():
try:
with open(timing_file) as tf:
timing_data = json.load(tf)
result["time_seconds"] = timing_data.get("total_duration_seconds", 0.0)
result["tokens"] = timing_data.get("total_tokens", 0)
except json.JSONDecodeError:
pass
# Extract metrics if available
metrics = grading.get("execution_metrics", {})
result["tool_calls"] = metrics.get("total_tool_calls", 0)
if not result.get("tokens"):
result["tokens"] = metrics.get("output_chars", 0)
result["errors"] = metrics.get("errors_encountered", 0)
# Extract expectations — viewer requires fields: text, passed, evidence
raw_expectations = grading.get("expectations", [])
for exp in raw_expectations:
if "text" not in exp or "passed" not in exp:
print(f"Warning: expectation in {grading_file} missing required fields (text, passed, evidence): {exp}")
result["expectations"] = raw_expectations
# Extract notes from user_notes_summary
notes_summary = grading.get("user_notes_summary", {})
notes = []
notes.extend(notes_summary.get("uncertainties", []))
notes.extend(notes_summary.get("needs_review", []))
notes.extend(notes_summary.get("workarounds", []))
result["notes"] = notes
results[config].append(result)
return results
def aggregate_results(results: dict) -> dict:
"""
Aggregate run results into summary statistics.
Returns run_summary with stats for each configuration and delta.
"""
run_summary = {}
configs = list(results.keys())
for config in configs:
runs = results.get(config, [])
if not runs:
run_summary[config] = {
"pass_rate": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0},
"time_seconds": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0},
"tokens": {"mean": 0, "stddev": 0, "min": 0, "max": 0}
}
continue
pass_rates = [r["pass_rate"] for r in runs]
times = [r["time_seconds"] for r in runs]
tokens = [r.get("tokens", 0) for r in runs]
run_summary[config] = {
"pass_rate": calculate_stats(pass_rates),
"time_seconds": calculate_stats(times),
"tokens": calculate_stats(tokens)
}
# Calculate delta between the first two configs (only meaningful when two exist)
if len(configs) >= 2:
primary = run_summary[configs[0]]
baseline = run_summary[configs[1]]
delta_pass_rate = primary["pass_rate"]["mean"] - baseline["pass_rate"]["mean"]
delta_time = primary["time_seconds"]["mean"] - baseline["time_seconds"]["mean"]
delta_tokens = primary["tokens"]["mean"] - baseline["tokens"]["mean"]
run_summary["delta"] = {
"pass_rate": f"{delta_pass_rate:+.2f}",
"time_seconds": f"{delta_time:+.1f}",
"tokens": f"{delta_tokens:+.0f}"
}
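# Illustrative (hypothetical numbers): means 0.85 vs 0.70 -> "pass_rate": "+0.15";
# 42.0s vs 45.2s -> "time_seconds": "-3.2"; 980 vs 860 tokens -> "tokens": "+120"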
return run_summary
def generate_benchmark(benchmark_dir: Path, skill_name: str = "", skill_path: str = "") -> dict:
"""
Generate complete benchmark.json from run results.
"""
results = load_run_results(benchmark_dir)
run_summary = aggregate_results(results)
# Build runs array for benchmark.json
runs = []
for config in results:
for result in results[config]:
runs.append({
"eval_id": result["eval_id"],
"configuration": config,
"run_number": result["run_number"],
"result": {
"pass_rate": result["pass_rate"],
"passed": result["passed"],
"failed": result["failed"],
"total": result["total"],
"time_seconds": result["time_seconds"],
"tokens": result.get("tokens", 0),
"tool_calls": result.get("tool_calls", 0),
"errors": result.get("errors", 0)
},
"expectations": result["expectations"],
"notes": result["notes"]
})
# Determine eval IDs from results
eval_ids = sorted(set(
r["eval_id"]
for config in results.values()
for r in config
))
benchmark = {
"metadata": {
"skill_name": skill_name or "<skill-name>",
"skill_path": skill_path or "<path/to/skill>",
"executor_model": "<model-name>",
"analyzer_model": "<model-name>",
"timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
"evals_run": eval_ids,
"runs_per_configuration": 3
},
"runs": runs,
"run_summary": run_summary,
"notes": [] # To be filled by analyzer
}
return benchmark
def generate_markdown(benchmark: dict) -> str:
"""Generate human-readable benchmark.md from benchmark data."""
metadata = benchmark["metadata"]
run_summary = benchmark["run_summary"]
# Determine config names (excluding "delta")
configs = [k for k in run_summary if k != "delta"]
config_a = configs[0] if len(configs) >= 1 else "config_a"
config_b = configs[1] if len(configs) >= 2 else "config_b"
label_a = config_a.replace("_", " ").title()
label_b = config_b.replace("_", " ").title()
lines = [
f"# Skill Benchmark: {metadata['skill_name']}",
"",
f"**Model**: {metadata['executor_model']}",
f"**Date**: {metadata['timestamp']}",
f"**Evals**: {', '.join(map(str, metadata['evals_run']))} ({metadata['runs_per_configuration']} runs each per configuration)",
"",
"## Summary",
"",
f"| Metric | {label_a} | {label_b} | Delta |",
"|--------|------------|---------------|-------|",
]
a_summary = run_summary.get(config_a, {})
b_summary = run_summary.get(config_b, {})
delta = run_summary.get("delta", {})
# Format pass rate
a_pr = a_summary.get("pass_rate", {})
b_pr = b_summary.get("pass_rate", {})
lines.append(f"| Pass Rate | {a_pr.get('mean', 0)*100:.0f}% ± {a_pr.get('stddev', 0)*100:.0f}% | {b_pr.get('mean', 0)*100:.0f}% ± {b_pr.get('stddev', 0)*100:.0f}% | {delta.get('pass_rate', '')} |")
# Format time
a_time = a_summary.get("time_seconds", {})
b_time = b_summary.get("time_seconds", {})
lines.append(f"| Time | {a_time.get('mean', 0):.1f}s ± {a_time.get('stddev', 0):.1f}s | {b_time.get('mean', 0):.1f}s ± {b_time.get('stddev', 0):.1f}s | {delta.get('time_seconds', '')}s |")
# Format tokens
a_tokens = a_summary.get("tokens", {})
b_tokens = b_summary.get("tokens", {})
lines.append(f"| Tokens | {a_tokens.get('mean', 0):.0f} ± {a_tokens.get('stddev', 0):.0f} | {b_tokens.get('mean', 0):.0f} ± {b_tokens.get('stddev', 0):.0f} | {delta.get('tokens', '')} |")
# Notes section
if benchmark.get("notes"):
lines.extend([
"",
"## Notes",
""
])
for note in benchmark["notes"]:
lines.append(f"- {note}")
return "\n".join(lines)
def main():
parser = argparse.ArgumentParser(
description="Aggregate benchmark run results into summary statistics"
)
parser.add_argument(
"benchmark_dir",
type=Path,
help="Path to the benchmark directory"
)
parser.add_argument(
"--skill-name",
default="",
help="Name of the skill being benchmarked"
)
parser.add_argument(
"--skill-path",
default="",
help="Path to the skill being benchmarked"
)
parser.add_argument(
"--output", "-o",
type=Path,
help="Output path for benchmark.json (default: <benchmark_dir>/benchmark.json)"
)
args = parser.parse_args()
if not args.benchmark_dir.exists():
print(f"Directory not found: {args.benchmark_dir}")
sys.exit(1)
# Generate benchmark
benchmark = generate_benchmark(args.benchmark_dir, args.skill_name, args.skill_path)
# Determine output paths
output_json = args.output or (args.benchmark_dir / "benchmark.json")
output_md = output_json.with_suffix(".md")
# Write benchmark.json
with open(output_json, "w") as f:
json.dump(benchmark, f, indent=2)
print(f"Generated: {output_json}")
# Write benchmark.md
markdown = generate_markdown(benchmark)
with open(output_md, "w") as f:
f.write(markdown)
print(f"Generated: {output_md}")
# Print summary
run_summary = benchmark["run_summary"]
configs = [k for k in run_summary if k != "delta"]
delta = run_summary.get("delta", {})
print(f"\nSummary:")
for config in configs:
pr = run_summary[config]["pass_rate"]["mean"]
label = config.replace("_", " ").title()
print(f" {label}: {pr*100:.1f}% pass rate")
print(f" Delta: {delta.get('pass_rate', '')}")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,326 @@
#!/usr/bin/env python3
"""Generate an HTML report from run_loop.py output.
Takes the JSON output from run_loop.py and generates a visual HTML report
showing each description attempt with check/x for each test case.
Distinguishes between train and test queries.
"""
import argparse
import html
import json
import sys
from pathlib import Path
def generate_html(data: dict, auto_refresh: bool = False, skill_name: str = "") -> str:
"""Generate HTML report from loop output data. If auto_refresh is True, adds a meta refresh tag."""
history = data.get("history", [])
holdout = data.get("holdout", 0)
title_prefix = html.escape(skill_name + " \u2014 ") if skill_name else ""
# Get all unique queries from train and test sets, with should_trigger info
train_queries: list[dict] = []
test_queries: list[dict] = []
if history:
for r in history[0].get("train_results", history[0].get("results", [])):
train_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)})
if history[0].get("test_results"):
for r in history[0].get("test_results", []):
test_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)})
refresh_tag = ' <meta http-equiv="refresh" content="5">\n' if auto_refresh else ""
html_parts = ["""<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
""" + refresh_tag + """ <title>""" + title_prefix + """Skill Description Optimization</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Poppins:wght@500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Lora', Georgia, serif;
max-width: 100%;
margin: 0 auto;
padding: 20px;
background: #faf9f5;
color: #141413;
}
h1 { font-family: 'Poppins', sans-serif; color: #141413; }
.explainer {
background: white;
padding: 15px;
border-radius: 6px;
margin-bottom: 20px;
border: 1px solid #e8e6dc;
color: #b0aea5;
font-size: 0.875rem;
line-height: 1.6;
}
.summary {
background: white;
padding: 15px;
border-radius: 6px;
margin-bottom: 20px;
border: 1px solid #e8e6dc;
}
.summary p { margin: 5px 0; }
.best { color: #788c5d; font-weight: bold; }
.table-container {
overflow-x: auto;
width: 100%;
}
table {
border-collapse: collapse;
background: white;
border: 1px solid #e8e6dc;
border-radius: 6px;
font-size: 12px;
min-width: 100%;
}
th, td {
padding: 8px;
text-align: left;
border: 1px solid #e8e6dc;
white-space: normal;
word-wrap: break-word;
}
th {
font-family: 'Poppins', sans-serif;
background: #141413;
color: #faf9f5;
font-weight: 500;
}
th.test-col {
background: #6a9bcc;
}
th.query-col { min-width: 200px; }
td.description {
font-family: monospace;
font-size: 11px;
word-wrap: break-word;
max-width: 400px;
}
td.result {
text-align: center;
font-size: 16px;
min-width: 40px;
}
td.test-result {
background: #f0f6fc;
}
.pass { color: #788c5d; }
.fail { color: #c44; }
.rate {
font-size: 9px;
color: #b0aea5;
display: block;
}
tr:hover { background: #faf9f5; }
.score {
display: inline-block;
padding: 2px 6px;
border-radius: 4px;
font-weight: bold;
font-size: 11px;
}
.score-good { background: #eef2e8; color: #788c5d; }
.score-ok { background: #fef3c7; color: #d97706; }
.score-bad { background: #fceaea; color: #c44; }
.train-label { color: #b0aea5; font-size: 10px; }
.test-label { color: #6a9bcc; font-size: 10px; font-weight: bold; }
.best-row { background: #f5f8f2; }
th.positive-col { border-bottom: 3px solid #788c5d; }
th.negative-col { border-bottom: 3px solid #c44; }
th.test-col.positive-col { border-bottom: 3px solid #788c5d; }
th.test-col.negative-col { border-bottom: 3px solid #c44; }
.legend { font-family: 'Poppins', sans-serif; display: flex; gap: 20px; margin-bottom: 10px; font-size: 13px; align-items: center; }
.legend-item { display: flex; align-items: center; gap: 6px; }
.legend-swatch { width: 16px; height: 16px; border-radius: 3px; display: inline-block; }
.swatch-positive { background: #141413; border-bottom: 3px solid #788c5d; }
.swatch-negative { background: #141413; border-bottom: 3px solid #c44; }
.swatch-test { background: #6a9bcc; }
.swatch-train { background: #141413; }
</style>
</head>
<body>
<h1>""" + title_prefix + """Skill Description Optimization</h1>
<div class="explainer">
<strong>Optimizing your skill's description.</strong> This page updates automatically as Claude tests different versions of your skill's description. Each row is an iteration — a new description attempt. The remaining columns are the eval queries: green checkmarks mean the skill triggered correctly (or correctly didn't trigger), red crosses mean it got it wrong. The "Train" score shows performance on queries used to improve the description; the "Test" score shows performance on held-out queries the optimizer hasn't seen. When it's done, Claude will apply the best-performing description to your skill.
</div>
"""]
# Summary section
best_test_score = data.get('best_test_score')
best_train_score = data.get('best_train_score')
html_parts.append(f"""
<div class="summary">
<p><strong>Original:</strong> {html.escape(data.get('original_description', 'N/A'))}</p>
<p class="best"><strong>Best:</strong> {html.escape(data.get('best_description', 'N/A'))}</p>
<p><strong>Best Score:</strong> {data.get('best_score', 'N/A')} {'(test)' if best_test_score else '(train)'}</p>
<p><strong>Iterations:</strong> {data.get('iterations_run', 0)} | <strong>Train:</strong> {data.get('train_size', '?')} | <strong>Test:</strong> {data.get('test_size', '?')}</p>
</div>
""")
# Legend
html_parts.append("""
<div class="legend">
<span style="font-weight:600">Query columns:</span>
<span class="legend-item"><span class="legend-swatch swatch-positive"></span> Should trigger</span>
<span class="legend-item"><span class="legend-swatch swatch-negative"></span> Should NOT trigger</span>
<span class="legend-item"><span class="legend-swatch swatch-train"></span> Train</span>
<span class="legend-item"><span class="legend-swatch swatch-test"></span> Test</span>
</div>
""")
# Table header
html_parts.append("""
<div class="table-container">
<table>
<thead>
<tr>
<th>Iter</th>
<th>Train</th>
<th>Test</th>
<th class="query-col">Description</th>
""")
# Add column headers for train queries
for qinfo in train_queries:
polarity = "positive-col" if qinfo["should_trigger"] else "negative-col"
html_parts.append(f' <th class="{polarity}">{html.escape(qinfo["query"])}</th>\n')
# Add column headers for test queries (different color)
for qinfo in test_queries:
polarity = "positive-col" if qinfo["should_trigger"] else "negative-col"
html_parts.append(f' <th class="test-col {polarity}">{html.escape(qinfo["query"])}</th>\n')
html_parts.append(""" </tr>
</thead>
<tbody>
""")
# Find best iteration for highlighting (guard against empty history)
if not history:
best_iter = None
elif test_queries:
best_iter = max(history, key=lambda h: h.get("test_passed") or 0).get("iteration")
else:
best_iter = max(history, key=lambda h: h.get("train_passed", h.get("passed", 0))).get("iteration")
# Add rows for each iteration
for h in history:
iteration = h.get("iteration", "?")
train_passed = h.get("train_passed", h.get("passed", 0))
train_total = h.get("train_total", h.get("total", 0))
test_passed = h.get("test_passed")
test_total = h.get("test_total")
description = h.get("description", "")
train_results = h.get("train_results", h.get("results", []))
test_results = h.get("test_results", [])
# Create lookups for results by query
train_by_query = {r["query"]: r for r in train_results}
test_by_query = {r["query"]: r for r in test_results} if test_results else {}
# Compute aggregate correct/total runs across all retries
def aggregate_runs(results: list[dict]) -> tuple[int, int]:
correct = 0
total = 0
for r in results:
runs = r.get("runs", 0)
triggers = r.get("triggers", 0)
total += runs
if r.get("should_trigger", True):
correct += triggers
else:
correct += runs - triggers
return correct, total
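# e.g. should_trigger=True with triggers=2, runs=3 contributes 2/3 correct;
# should_trigger=False with the same counts contributes 1/3 (runs - triggers)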
train_correct, train_runs = aggregate_runs(train_results)
test_correct, test_runs = aggregate_runs(test_results)
# Determine score classes
def score_class(correct: int, total: int) -> str:
if total > 0:
ratio = correct / total
if ratio >= 0.8:
return "score-good"
elif ratio >= 0.5:
return "score-ok"
return "score-bad"
train_class = score_class(train_correct, train_runs)
test_class = score_class(test_correct, test_runs)
row_class = "best-row" if iteration == best_iter else ""
html_parts.append(f""" <tr class="{row_class}">
<td>{iteration}</td>
<td><span class="score {train_class}">{train_correct}/{train_runs}</span></td>
<td><span class="score {test_class}">{test_correct}/{test_runs}</span></td>
<td class="description">{html.escape(description)}</td>
""")
# Add result for each train query
for qinfo in train_queries:
r = train_by_query.get(qinfo["query"], {})
did_pass = r.get("pass", False)
triggers = r.get("triggers", 0)
runs = r.get("runs", 0)
icon = "" if did_pass else ""
css_class = "pass" if did_pass else "fail"
html_parts.append(f' <td class="result {css_class}">{icon}<span class="rate">{triggers}/{runs}</span></td>\n')
# Add result for each test query (with different background)
for qinfo in test_queries:
r = test_by_query.get(qinfo["query"], {})
did_pass = r.get("pass", False)
triggers = r.get("triggers", 0)
runs = r.get("runs", 0)
icon = "" if did_pass else ""
css_class = "pass" if did_pass else "fail"
html_parts.append(f' <td class="result test-result {css_class}">{icon}<span class="rate">{triggers}/{runs}</span></td>\n')
html_parts.append(" </tr>\n")
html_parts.append(""" </tbody>
</table>
</div>
""")
html_parts.append("""
</body>
</html>
""")
return "".join(html_parts)
def main():
parser = argparse.ArgumentParser(description="Generate HTML report from run_loop output")
parser.add_argument("input", help="Path to JSON output from run_loop.py (or - for stdin)")
parser.add_argument("-o", "--output", default=None, help="Output HTML file (default: stdout)")
parser.add_argument("--skill-name", default="", help="Skill name to include in the report title")
args = parser.parse_args()
if args.input == "-":
data = json.load(sys.stdin)
else:
data = json.loads(Path(args.input).read_text())
html_output = generate_html(data, skill_name=args.skill_name)
if args.output:
Path(args.output).write_text(html_output)
print(f"Report written to {args.output}", file=sys.stderr)
else:
print(html_output)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,247 @@
#!/usr/bin/env python3
"""Improve a skill description based on eval results.
Takes eval results (from run_eval.py) and generates an improved description
by calling `claude -p` as a subprocess (same auth pattern as run_eval.py —
uses the session's Claude Code auth, no separate ANTHROPIC_API_KEY needed).
"""
import argparse
import json
import os
import re
import subprocess
import sys
from pathlib import Path
from scripts.utils import parse_skill_md
def _call_claude(prompt: str, model: str | None, timeout: int = 300) -> str:
"""Run `claude -p` with the prompt on stdin and return the text response.
Prompt goes over stdin (not argv) because it embeds the full SKILL.md
body and can easily exceed comfortable argv length.
"""
cmd = ["claude", "-p", "--output-format", "text"]
if model:
cmd.extend(["--model", model])
# Remove CLAUDECODE env var to allow nesting claude -p inside a
# Claude Code session. The guard is for interactive terminal conflicts;
# programmatic subprocess usage is safe. Same pattern as run_eval.py.
env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}
result = subprocess.run(
cmd,
input=prompt,
capture_output=True,
text=True,
env=env,
timeout=timeout,
)
if result.returncode != 0:
raise RuntimeError(
f"claude -p exited {result.returncode}\nstderr: {result.stderr}"
)
return result.stdout
def improve_description(
skill_name: str,
skill_content: str,
current_description: str,
eval_results: dict,
history: list[dict],
model: str,
test_results: dict | None = None,
log_dir: Path | None = None,
iteration: int | None = None,
) -> str:
"""Call Claude to improve the description based on eval results."""
failed_triggers = [
r for r in eval_results["results"]
if r["should_trigger"] and not r["pass"]
]
false_triggers = [
r for r in eval_results["results"]
if not r["should_trigger"] and not r["pass"]
]
# Build scores summary
train_score = f"{eval_results['summary']['passed']}/{eval_results['summary']['total']}"
if test_results:
test_score = f"{test_results['summary']['passed']}/{test_results['summary']['total']}"
scores_summary = f"Train: {train_score}, Test: {test_score}"
else:
scores_summary = f"Train: {train_score}"
prompt = f"""You are optimizing a skill description for a Claude Code skill called "{skill_name}". A "skill" is sort of like a prompt, but with progressive disclosure -- there's a title and description that Claude sees when deciding whether to use the skill, and then if it does use the skill, it reads the .md file which has lots more details and potentially links to other resources in the skill folder like helper files and scripts and additional documentation or examples.
The description appears in Claude's "available_skills" list. When a user sends a query, Claude decides whether to invoke the skill based solely on the title and on this description. Your goal is to write a description that triggers for relevant queries, and doesn't trigger for irrelevant ones.
Here's the current description:
<current_description>
"{current_description}"
</current_description>
Current scores ({scores_summary}):
<scores_summary>
"""
if failed_triggers:
prompt += "FAILED TO TRIGGER (should have triggered but didn't):\n"
for r in failed_triggers:
prompt += f' - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
prompt += "\n"
if false_triggers:
prompt += "FALSE TRIGGERS (triggered but shouldn't have):\n"
for r in false_triggers:
prompt += f' - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
prompt += "\n"
if history:
prompt += "PREVIOUS ATTEMPTS (do NOT repeat these — try something structurally different):\n\n"
for h in history:
train_s = f"{h.get('train_passed', h.get('passed', 0))}/{h.get('train_total', h.get('total', 0))}"
test_s = f"{h.get('test_passed', '?')}/{h.get('test_total', '?')}" if h.get('test_passed') is not None else None
score_str = f"train={train_s}" + (f", test={test_s}" if test_s else "")
prompt += f'<attempt {score_str}>\n'
prompt += f'Description: "{h["description"]}"\n'
if "results" in h:
prompt += "Train results:\n"
for r in h["results"]:
status = "PASS" if r["pass"] else "FAIL"
prompt += f' [{status}] "{r["query"][:80]}" (triggered {r["triggers"]}/{r["runs"]})\n'
if h.get("note"):
prompt += f'Note: {h["note"]}\n'
prompt += "</attempt>\n\n"
prompt += f"""</scores_summary>
Skill content (for context on what the skill does):
<skill_content>
{skill_content}
</skill_content>
Based on the failures, write a new and improved description that is more likely to trigger correctly. When I say "based on the failures", it's a bit of a tricky line to walk because we don't want to overfit to the specific cases you're seeing. So what I DON'T want you to do is produce an ever-expanding list of specific queries that this skill should or shouldn't trigger for. Instead, try to generalize from the failures to broader categories of user intent and situations where this skill would be useful or not useful. The reason for this is twofold:
1. Avoid overfitting
2. The list might get loooong and it's injected into ALL queries and there might be a lot of skills, so we don't want to blow too much space on any given description.
Concretely, your description should not be more than about 100-200 words, even if that comes at the cost of accuracy. There is a hard limit of 1024 characters — descriptions over that will be truncated, so stay comfortably under it.
Here are some tips that we've found to work well in writing these descriptions:
- The skill should be phrased in the imperative -- "Use this skill for" rather than "this skill does"
- The skill description should focus on the user's intent, what they are trying to achieve, vs. the implementation details of how the skill works.
- The description competes with other skills for Claude's attention — make it distinctive and immediately recognizable.
- If you're getting lots of failures after repeated attempts, change things up. Try different sentence structures or wordings.
I'd encourage you to be creative and mix up the style in different iterations since you'll have multiple opportunities to try different approaches and we'll just grab the highest-scoring one at the end.
Please respond with only the new description text in <new_description> tags, nothing else."""
text = _call_claude(prompt, model)
match = re.search(r"<new_description>(.*?)</new_description>", text, re.DOTALL)
description = match.group(1).strip().strip('"') if match else text.strip().strip('"')
transcript: dict = {
"iteration": iteration,
"prompt": prompt,
"response": text,
"parsed_description": description,
"char_count": len(description),
"over_limit": len(description) > 1024,
}
# Safety net: the prompt already states the 1024-char hard limit, but if
# the model blew past it anyway, make one fresh single-turn call that
# quotes the too-long version and asks for a shorter rewrite. (The old
# SDK path did this as a true multi-turn; `claude -p` is one-shot, so we
# inline the prior output into the new prompt instead.)
if len(description) > 1024:
shorten_prompt = (
f"{prompt}\n\n"
f"---\n\n"
f"A previous attempt produced this description, which at "
f"{len(description)} characters is over the 1024-character hard limit:\n\n"
f'"{description}"\n\n'
f"Rewrite it to be under 1024 characters while keeping the most "
f"important trigger words and intent coverage. Respond with only "
f"the new description in <new_description> tags."
)
shorten_text = _call_claude(shorten_prompt, model)
match = re.search(r"<new_description>(.*?)</new_description>", shorten_text, re.DOTALL)
shortened = match.group(1).strip().strip('"') if match else shorten_text.strip().strip('"')
transcript["rewrite_prompt"] = shorten_prompt
transcript["rewrite_response"] = shorten_text
transcript["rewrite_description"] = shortened
transcript["rewrite_char_count"] = len(shortened)
description = shortened
transcript["final_description"] = description
if log_dir:
log_dir.mkdir(parents=True, exist_ok=True)
log_file = log_dir / f"improve_iter_{iteration or 'unknown'}.json"
log_file.write_text(json.dumps(transcript, indent=2))
return description
def main():
parser = argparse.ArgumentParser(description="Improve a skill description based on eval results")
parser.add_argument("--eval-results", required=True, help="Path to eval results JSON (from run_eval.py)")
parser.add_argument("--skill-path", required=True, help="Path to skill directory")
parser.add_argument("--history", default=None, help="Path to history JSON (previous attempts)")
parser.add_argument("--model", required=True, help="Model for improvement")
parser.add_argument("--verbose", action="store_true", help="Print thinking to stderr")
args = parser.parse_args()
skill_path = Path(args.skill_path)
if not (skill_path / "SKILL.md").exists():
print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
sys.exit(1)
eval_results = json.loads(Path(args.eval_results).read_text())
history = []
if args.history:
history = json.loads(Path(args.history).read_text())
name, _, content = parse_skill_md(skill_path)
current_description = eval_results["description"]
if args.verbose:
print(f"Current: {current_description}", file=sys.stderr)
print(f"Score: {eval_results['summary']['passed']}/{eval_results['summary']['total']}", file=sys.stderr)
new_description = improve_description(
skill_name=name,
skill_content=content,
current_description=current_description,
eval_results=eval_results,
history=history,
model=args.model,
)
if args.verbose:
print(f"Improved: {new_description}", file=sys.stderr)
# Output as JSON with both the new description and updated history
output = {
"description": new_description,
"history": history + [{
"description": current_description,
"passed": eval_results["summary"]["passed"],
"failed": eval_results["summary"]["failed"],
"total": eval_results["summary"]["total"],
"results": eval_results["results"],
}],
}
print(json.dumps(output, indent=2))
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,136 @@
#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable .skill file of a skill folder
Usage:
python utils/package_skill.py <path/to/skill-folder> [output-directory]
Example:
python utils/package_skill.py skills/public/my-skill
python utils/package_skill.py skills/public/my-skill ./dist
"""
import fnmatch
import sys
import zipfile
from pathlib import Path
from scripts.quick_validate import validate_skill
# Patterns to exclude when packaging skills.
EXCLUDE_DIRS = {"__pycache__", "node_modules"}
EXCLUDE_GLOBS = {"*.pyc"}
EXCLUDE_FILES = {".DS_Store"}
# Directories excluded only at the skill root (not when nested deeper).
ROOT_EXCLUDE_DIRS = {"evals"}
def should_exclude(rel_path: Path) -> bool:
"""Check if a path should be excluded from packaging."""
parts = rel_path.parts
if any(part in EXCLUDE_DIRS for part in parts):
return True
# rel_path is relative to skill_path.parent, so parts[0] is the skill
# folder name and parts[1] (if present) is the first subdir.
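# e.g. Path("my-skill/evals/results.json").parts == ("my-skill", "evals", "results.json")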
if len(parts) > 1 and parts[1] in ROOT_EXCLUDE_DIRS:
return True
name = rel_path.name
if name in EXCLUDE_FILES:
return True
return any(fnmatch.fnmatch(name, pat) for pat in EXCLUDE_GLOBS)
def package_skill(skill_path, output_dir=None):
"""
Package a skill folder into a .skill file.
Args:
skill_path: Path to the skill folder
output_dir: Optional output directory for the .skill file (defaults to current directory)
Returns:
Path to the created .skill file, or None if error
"""
skill_path = Path(skill_path).resolve()
# Validate skill folder exists
if not skill_path.exists():
print(f"❌ Error: Skill folder not found: {skill_path}")
return None
if not skill_path.is_dir():
print(f"❌ Error: Path is not a directory: {skill_path}")
return None
# Validate SKILL.md exists
skill_md = skill_path / "SKILL.md"
if not skill_md.exists():
print(f"❌ Error: SKILL.md not found in {skill_path}")
return None
# Run validation before packaging
print("🔍 Validating skill...")
valid, message = validate_skill(skill_path)
if not valid:
print(f"❌ Validation failed: {message}")
print(" Please fix the validation errors before packaging.")
return None
print(f"{message}\n")
# Determine output location
skill_name = skill_path.name
if output_dir:
output_path = Path(output_dir).resolve()
output_path.mkdir(parents=True, exist_ok=True)
else:
output_path = Path.cwd()
skill_filename = output_path / f"{skill_name}.skill"
# Create the .skill file (zip format)
try:
with zipfile.ZipFile(skill_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
# Walk through the skill directory, excluding build artifacts
for file_path in skill_path.rglob('*'):
if not file_path.is_file():
continue
arcname = file_path.relative_to(skill_path.parent)
if should_exclude(arcname):
print(f" Skipped: {arcname}")
continue
zipf.write(file_path, arcname)
print(f" Added: {arcname}")
print(f"\n✅ Successfully packaged skill to: {skill_filename}")
return skill_filename
except Exception as e:
print(f"❌ Error creating .skill file: {e}")
return None
def main():
if len(sys.argv) < 2:
print("Usage: python utils/package_skill.py <path/to/skill-folder> [output-directory]")
print("\nExample:")
print(" python utils/package_skill.py skills/public/my-skill")
print(" python utils/package_skill.py skills/public/my-skill ./dist")
sys.exit(1)
skill_path = sys.argv[1]
output_dir = sys.argv[2] if len(sys.argv) > 2 else None
print(f"📦 Packaging skill: {skill_path}")
if output_dir:
print(f" Output directory: {output_dir}")
print()
result = package_skill(skill_path, output_dir)
if result:
sys.exit(0)
else:
sys.exit(1)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,103 @@
#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""
import sys
import re
import yaml
from pathlib import Path
def validate_skill(skill_path):
"""Basic validation of a skill"""
skill_path = Path(skill_path)
# Check SKILL.md exists
skill_md = skill_path / 'SKILL.md'
if not skill_md.exists():
return False, "SKILL.md not found"
# Read and validate frontmatter
content = skill_md.read_text()
if not content.startswith('---'):
return False, "No YAML frontmatter found"
# Extract frontmatter
match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
if not match:
return False, "Invalid frontmatter format"
frontmatter_text = match.group(1)
# Parse YAML frontmatter
try:
frontmatter = yaml.safe_load(frontmatter_text)
if not isinstance(frontmatter, dict):
return False, "Frontmatter must be a YAML dictionary"
except yaml.YAMLError as e:
return False, f"Invalid YAML in frontmatter: {e}"
# Define allowed properties
ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata', 'compatibility'}
# Check for unexpected properties (excluding nested keys under metadata)
unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
if unexpected_keys:
return False, (
f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
)
# Check required fields
if 'name' not in frontmatter:
return False, "Missing 'name' in frontmatter"
if 'description' not in frontmatter:
return False, "Missing 'description' in frontmatter"
# Extract name for validation
name = frontmatter.get('name', '')
if not isinstance(name, str):
return False, f"Name must be a string, got {type(name).__name__}"
name = name.strip()
if name:
# Check naming convention (kebab-case: lowercase with hyphens)
if not re.match(r'^[a-z0-9-]+$', name):
return False, f"Name '{name}' should be kebab-case (lowercase letters, digits, and hyphens only)"
if name.startswith('-') or name.endswith('-') or '--' in name:
return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
# Check name length (max 64 characters per spec)
if len(name) > 64:
return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."
# Extract and validate description
description = frontmatter.get('description', '')
if not isinstance(description, str):
return False, f"Description must be a string, got {type(description).__name__}"
description = description.strip()
if description:
# Check for angle brackets
if '<' in description or '>' in description:
return False, "Description cannot contain angle brackets (< or >)"
# Check description length (max 1024 characters per spec)
if len(description) > 1024:
return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."
# Validate compatibility field if present (optional)
compatibility = frontmatter.get('compatibility', '')
if compatibility:
if not isinstance(compatibility, str):
return False, f"Compatibility must be a string, got {type(compatibility).__name__}"
if len(compatibility) > 500:
return False, f"Compatibility is too long ({len(compatibility)} characters). Maximum is 500 characters."
return True, "Skill is valid!"
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: python quick_validate.py <skill_directory>")
sys.exit(1)
valid, message = validate_skill(sys.argv[1])
print(message)
sys.exit(0 if valid else 1)

View File

@@ -0,0 +1,310 @@
#!/usr/bin/env python3
"""Run trigger evaluation for a skill description.
Tests whether a skill's description causes Claude to trigger (read the skill)
for a set of queries. Outputs results as JSON.
"""
import argparse
import json
import os
import select
import subprocess
import sys
import time
import uuid
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path
from scripts.utils import parse_skill_md
def find_project_root() -> Path:
"""Find the project root by walking up from cwd looking for .claude/.
Mimics how Claude Code discovers its project root, so the command file
we create ends up where claude -p will look for it.
"""
current = Path.cwd()
for parent in [current, *current.parents]:
if (parent / ".claude").is_dir():
return parent
return current
def run_single_query(
query: str,
skill_name: str,
skill_description: str,
timeout: int,
project_root: str,
model: str | None = None,
) -> bool:
"""Run a single query and return whether the skill was triggered.
Creates a command file in .claude/commands/ so it appears in Claude's
available_skills list, then runs `claude -p` with the raw query.
Uses --include-partial-messages to detect triggering early from
stream events (content_block_start) rather than waiting for the
full assistant message, which only arrives after tool execution.
"""
unique_id = uuid.uuid4().hex[:8]
clean_name = f"{skill_name}-skill-{unique_id}"
project_commands_dir = Path(project_root) / ".claude" / "commands"
command_file = project_commands_dir / f"{clean_name}.md"
try:
project_commands_dir.mkdir(parents=True, exist_ok=True)
# Use YAML block scalar to avoid breaking on quotes in description
indented_desc = "\n ".join(skill_description.split("\n"))
command_content = (
f"---\n"
f"description: |\n"
f" {indented_desc}\n"
f"---\n\n"
f"# {skill_name}\n\n"
f"This skill handles: {skill_description}\n"
)
command_file.write_text(command_content)
cmd = [
"claude",
"-p", query,
"--output-format", "stream-json",
"--verbose",
"--include-partial-messages",
]
if model:
cmd.extend(["--model", model])
# Remove CLAUDECODE env var to allow nesting claude -p inside a
# Claude Code session. The guard is for interactive terminal conflicts;
# programmatic subprocess usage is safe.
env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}
process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL,
cwd=project_root,
env=env,
)
triggered = False
start_time = time.time()
buffer = ""
# Track state for stream event detection
pending_tool_name = None
accumulated_json = ""
try:
while time.time() - start_time < timeout:
if process.poll() is not None:
remaining = process.stdout.read()
if remaining:
buffer += remaining.decode("utf-8", errors="replace")
break
ready, _, _ = select.select([process.stdout], [], [], 1.0)
if not ready:
continue
chunk = os.read(process.stdout.fileno(), 8192)
if not chunk:
break
buffer += chunk.decode("utf-8", errors="replace")
while "\n" in buffer:
line, buffer = buffer.split("\n", 1)
line = line.strip()
if not line:
continue
try:
event = json.loads(line)
except json.JSONDecodeError:
continue
# Early detection via stream events
if event.get("type") == "stream_event":
se = event.get("event", {})
se_type = se.get("type", "")
if se_type == "content_block_start":
cb = se.get("content_block", {})
if cb.get("type") == "tool_use":
tool_name = cb.get("name", "")
if tool_name in ("Skill", "Read"):
pending_tool_name = tool_name
accumulated_json = ""
else:
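# First tool call is neither Skill nor Read, so the skill did not trigger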
return False
elif se_type == "content_block_delta" and pending_tool_name:
delta = se.get("delta", {})
if delta.get("type") == "input_json_delta":
accumulated_json += delta.get("partial_json", "")
if clean_name in accumulated_json:
return True
elif se_type in ("content_block_stop", "message_stop"):
if pending_tool_name:
return clean_name in accumulated_json
if se_type == "message_stop":
return False
# Fallback: full assistant message
elif event.get("type") == "assistant":
message = event.get("message", {})
for content_item in message.get("content", []):
if content_item.get("type") != "tool_use":
continue
tool_name = content_item.get("name", "")
tool_input = content_item.get("input", {})
if tool_name == "Skill" and clean_name in tool_input.get("skill", ""):
triggered = True
elif tool_name == "Read" and clean_name in tool_input.get("file_path", ""):
triggered = True
return triggered
elif event.get("type") == "result":
return triggered
finally:
# Clean up process on any exit path (return, exception, timeout)
if process.poll() is None:
process.kill()
process.wait()
return triggered
finally:
if command_file.exists():
command_file.unlink()
def run_eval(
eval_set: list[dict],
skill_name: str,
description: str,
num_workers: int,
timeout: int,
project_root: Path,
runs_per_query: int = 1,
trigger_threshold: float = 0.5,
model: str | None = None,
) -> dict:
"""Run the full eval set and return results."""
results = []
with ProcessPoolExecutor(max_workers=num_workers) as executor:
future_to_info = {}
for item in eval_set:
for run_idx in range(runs_per_query):
future = executor.submit(
run_single_query,
item["query"],
skill_name,
description,
timeout,
str(project_root),
model,
)
future_to_info[future] = (item, run_idx)
query_triggers: dict[str, list[bool]] = {}
query_items: dict[str, dict] = {}
for future in as_completed(future_to_info):
item, _ = future_to_info[future]
query = item["query"]
query_items[query] = item
if query not in query_triggers:
query_triggers[query] = []
try:
query_triggers[query].append(future.result())
except Exception as e:
print(f"Warning: query failed: {e}", file=sys.stderr)
query_triggers[query].append(False)
for query, triggers in query_triggers.items():
item = query_items[query]
trigger_rate = sum(triggers) / len(triggers)
should_trigger = item["should_trigger"]
if should_trigger:
did_pass = trigger_rate >= trigger_threshold
else:
did_pass = trigger_rate < trigger_threshold
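# e.g. with threshold 0.5, 2/3 triggers (0.67) passes a should-trigger query
# but fails a should-NOT-trigger one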
results.append({
"query": query,
"should_trigger": should_trigger,
"trigger_rate": trigger_rate,
"triggers": sum(triggers),
"runs": len(triggers),
"pass": did_pass,
})
passed = sum(1 for r in results if r["pass"])
total = len(results)
return {
"skill_name": skill_name,
"description": description,
"results": results,
"summary": {
"total": total,
"passed": passed,
"failed": total - passed,
},
}
def main():
parser = argparse.ArgumentParser(description="Run trigger evaluation for a skill description")
parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file")
parser.add_argument("--skill-path", required=True, help="Path to skill directory")
parser.add_argument("--description", default=None, help="Override description to test")
parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers")
parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds")
parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query")
parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold")
parser.add_argument("--model", default=None, help="Model to use for claude -p (default: user's configured model)")
parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
args = parser.parse_args()
eval_set = json.loads(Path(args.eval_set).read_text())
skill_path = Path(args.skill_path)
if not (skill_path / "SKILL.md").exists():
print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
sys.exit(1)
name, original_description, content = parse_skill_md(skill_path)
description = args.description or original_description
project_root = find_project_root()
if args.verbose:
print(f"Evaluating: {description}", file=sys.stderr)
output = run_eval(
eval_set=eval_set,
skill_name=name,
description=description,
num_workers=args.num_workers,
timeout=args.timeout,
project_root=project_root,
runs_per_query=args.runs_per_query,
trigger_threshold=args.trigger_threshold,
model=args.model,
)
if args.verbose:
summary = output["summary"]
print(f"Results: {summary['passed']}/{summary['total']} passed", file=sys.stderr)
for r in output["results"]:
status = "PASS" if r["pass"] else "FAIL"
rate_str = f"{r['triggers']}/{r['runs']}"
print(f" [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:70]}", file=sys.stderr)
print(json.dumps(output, indent=2))
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,328 @@
#!/usr/bin/env python3
"""Run the eval + improve loop until all pass or max iterations reached.
Combines run_eval.py and improve_description.py in a loop, tracking history
and returning the best description found. Supports train/test split to prevent
overfitting.
"""
import argparse
import json
import random
import sys
import tempfile
import time
import webbrowser
from pathlib import Path
from scripts.generate_report import generate_html
from scripts.improve_description import improve_description
from scripts.run_eval import find_project_root, run_eval
from scripts.utils import parse_skill_md
def split_eval_set(eval_set: list[dict], holdout: float, seed: int = 42) -> tuple[list[dict], list[dict]]:
"""Split eval set into train and test sets, stratified by should_trigger."""
random.seed(seed)
# Separate by should_trigger
trigger = [e for e in eval_set if e["should_trigger"]]
no_trigger = [e for e in eval_set if not e["should_trigger"]]
# Shuffle each group
random.shuffle(trigger)
random.shuffle(no_trigger)
# Calculate split points
n_trigger_test = max(1, int(len(trigger) * holdout))
n_no_trigger_test = max(1, int(len(no_trigger) * holdout))
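# e.g. 10 positive + 5 negative queries at holdout=0.4 -> 4 + 2 held out for test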
# Split
test_set = trigger[:n_trigger_test] + no_trigger[:n_no_trigger_test]
train_set = trigger[n_trigger_test:] + no_trigger[n_no_trigger_test:]
return train_set, test_set
def run_loop(
eval_set: list[dict],
skill_path: Path,
description_override: str | None,
num_workers: int,
timeout: int,
max_iterations: int,
runs_per_query: int,
trigger_threshold: float,
holdout: float,
model: str,
verbose: bool,
live_report_path: Path | None = None,
log_dir: Path | None = None,
) -> dict:
"""Run the eval + improvement loop."""
project_root = find_project_root()
name, original_description, content = parse_skill_md(skill_path)
current_description = description_override or original_description
# Split into train/test if holdout > 0
if holdout > 0:
train_set, test_set = split_eval_set(eval_set, holdout)
if verbose:
print(f"Split: {len(train_set)} train, {len(test_set)} test (holdout={holdout})", file=sys.stderr)
else:
train_set = eval_set
test_set = []
history = []
exit_reason = "unknown"
for iteration in range(1, max_iterations + 1):
if verbose:
print(f"\n{'='*60}", file=sys.stderr)
print(f"Iteration {iteration}/{max_iterations}", file=sys.stderr)
print(f"Description: {current_description}", file=sys.stderr)
print(f"{'='*60}", file=sys.stderr)
# Evaluate train + test together in one batch for parallelism
all_queries = train_set + test_set
t0 = time.time()
all_results = run_eval(
eval_set=all_queries,
skill_name=name,
description=current_description,
num_workers=num_workers,
timeout=timeout,
project_root=project_root,
runs_per_query=runs_per_query,
trigger_threshold=trigger_threshold,
model=model,
)
eval_elapsed = time.time() - t0
# Split results back into train/test by matching queries
train_queries_set = {q["query"] for q in train_set}
train_result_list = [r for r in all_results["results"] if r["query"] in train_queries_set]
test_result_list = [r for r in all_results["results"] if r["query"] not in train_queries_set]
train_passed = sum(1 for r in train_result_list if r["pass"])
train_total = len(train_result_list)
train_summary = {"passed": train_passed, "failed": train_total - train_passed, "total": train_total}
train_results = {"results": train_result_list, "summary": train_summary}
if test_set:
test_passed = sum(1 for r in test_result_list if r["pass"])
test_total = len(test_result_list)
test_summary = {"passed": test_passed, "failed": test_total - test_passed, "total": test_total}
test_results = {"results": test_result_list, "summary": test_summary}
else:
test_results = None
test_summary = None
history.append({
"iteration": iteration,
"description": current_description,
"train_passed": train_summary["passed"],
"train_failed": train_summary["failed"],
"train_total": train_summary["total"],
"train_results": train_results["results"],
"test_passed": test_summary["passed"] if test_summary else None,
"test_failed": test_summary["failed"] if test_summary else None,
"test_total": test_summary["total"] if test_summary else None,
"test_results": test_results["results"] if test_results else None,
# For backward compat with report generator
"passed": train_summary["passed"],
"failed": train_summary["failed"],
"total": train_summary["total"],
"results": train_results["results"],
})
# Write live report if path provided
if live_report_path:
partial_output = {
"original_description": original_description,
"best_description": current_description,
"best_score": "in progress",
"iterations_run": len(history),
"holdout": holdout,
"train_size": len(train_set),
"test_size": len(test_set),
"history": history,
}
live_report_path.write_text(generate_html(partial_output, auto_refresh=True, skill_name=name))
if verbose:
def print_eval_stats(label, results, elapsed):
pos = [r for r in results if r["should_trigger"]]
neg = [r for r in results if not r["should_trigger"]]
tp = sum(r["triggers"] for r in pos)
pos_runs = sum(r["runs"] for r in pos)
fn = pos_runs - tp
fp = sum(r["triggers"] for r in neg)
neg_runs = sum(r["runs"] for r in neg)
tn = neg_runs - fp
total = tp + tn + fp + fn
precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
recall = tp / (tp + fn) if (tp + fn) > 0 else 1.0
accuracy = (tp + tn) / total if total > 0 else 0.0
print(f"{label}: {tp+tn}/{total} correct, precision={precision:.0%} recall={recall:.0%} accuracy={accuracy:.0%} ({elapsed:.1f}s)", file=sys.stderr)
for r in results:
status = "PASS" if r["pass"] else "FAIL"
rate_str = f"{r['triggers']}/{r['runs']}"
print(f" [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:60]}", file=sys.stderr)
print_eval_stats("Train", train_results["results"], eval_elapsed)
if test_summary:
print_eval_stats("Test ", test_results["results"], 0)
if train_summary["failed"] == 0:
exit_reason = f"all_passed (iteration {iteration})"
if verbose:
print(f"\nAll train queries passed on iteration {iteration}!", file=sys.stderr)
break
if iteration == max_iterations:
exit_reason = f"max_iterations ({max_iterations})"
if verbose:
print(f"\nMax iterations reached ({max_iterations}).", file=sys.stderr)
break
# Improve the description based on train results
if verbose:
print(f"\nImproving description...", file=sys.stderr)
t0 = time.time()
# Strip test scores from history so improvement model can't see them
blinded_history = [
{k: v for k, v in h.items() if not k.startswith("test_")}
for h in history
]
new_description = improve_description(
skill_name=name,
skill_content=content,
current_description=current_description,
eval_results=train_results,
history=blinded_history,
model=model,
log_dir=log_dir,
iteration=iteration,
)
improve_elapsed = time.time() - t0
if verbose:
print(f"Proposed ({improve_elapsed:.1f}s): {new_description}", file=sys.stderr)
current_description = new_description
# Find the best iteration by TEST score (or train if no test set)
if test_set:
best = max(history, key=lambda h: h["test_passed"] or 0)
best_score = f"{best['test_passed']}/{best['test_total']}"
else:
best = max(history, key=lambda h: h["train_passed"])
best_score = f"{best['train_passed']}/{best['train_total']}"
if verbose:
print(f"\nExit reason: {exit_reason}", file=sys.stderr)
print(f"Best score: {best_score} (iteration {best['iteration']})", file=sys.stderr)
return {
"exit_reason": exit_reason,
"original_description": original_description,
"best_description": best["description"],
"best_score": best_score,
"best_train_score": f"{best['train_passed']}/{best['train_total']}",
"best_test_score": f"{best['test_passed']}/{best['test_total']}" if test_set else None,
"final_description": current_description,
"iterations_run": len(history),
"holdout": holdout,
"train_size": len(train_set),
"test_size": len(test_set),
"history": history,
}
def main():
parser = argparse.ArgumentParser(description="Run eval + improve loop")
parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file")
parser.add_argument("--skill-path", required=True, help="Path to skill directory")
parser.add_argument("--description", default=None, help="Override starting description")
parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers")
parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds")
parser.add_argument("--max-iterations", type=int, default=5, help="Max improvement iterations")
parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query")
parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold")
parser.add_argument("--holdout", type=float, default=0.4, help="Fraction of eval set to hold out for testing (0 to disable)")
parser.add_argument("--model", required=True, help="Model for improvement")
parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
parser.add_argument("--report", default="auto", help="Generate HTML report at this path (default: 'auto' for temp file, 'none' to disable)")
parser.add_argument("--results-dir", default=None, help="Save all outputs (results.json, report.html, log.txt) to a timestamped subdirectory here")
args = parser.parse_args()
eval_set = json.loads(Path(args.eval_set).read_text())
skill_path = Path(args.skill_path)
if not (skill_path / "SKILL.md").exists():
print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
sys.exit(1)
name, _, _ = parse_skill_md(skill_path)
# Set up live report path
if args.report != "none":
if args.report == "auto":
timestamp = time.strftime("%Y%m%d_%H%M%S")
live_report_path = Path(tempfile.gettempdir()) / f"skill_description_report_{skill_path.name}_{timestamp}.html"
else:
live_report_path = Path(args.report)
# Open the report immediately so the user can watch
live_report_path.write_text("<html><body><h1>Starting optimization loop...</h1><meta http-equiv='refresh' content='5'></body></html>")
webbrowser.open(str(live_report_path))
else:
live_report_path = None
# Determine output directory (create before run_loop so logs can be written)
if args.results_dir:
timestamp = time.strftime("%Y-%m-%d_%H%M%S")
results_dir = Path(args.results_dir) / timestamp
results_dir.mkdir(parents=True, exist_ok=True)
else:
results_dir = None
log_dir = results_dir / "logs" if results_dir else None
output = run_loop(
eval_set=eval_set,
skill_path=skill_path,
description_override=args.description,
num_workers=args.num_workers,
timeout=args.timeout,
max_iterations=args.max_iterations,
runs_per_query=args.runs_per_query,
trigger_threshold=args.trigger_threshold,
holdout=args.holdout,
model=args.model,
verbose=args.verbose,
live_report_path=live_report_path,
log_dir=log_dir,
)
# Save JSON output
json_output = json.dumps(output, indent=2)
print(json_output)
if results_dir:
(results_dir / "results.json").write_text(json_output)
# Write final HTML report (without auto-refresh)
if live_report_path:
live_report_path.write_text(generate_html(output, auto_refresh=False, skill_name=name))
print(f"\nReport: {live_report_path}", file=sys.stderr)
if results_dir and live_report_path:
(results_dir / "report.html").write_text(generate_html(output, auto_refresh=False, skill_name=name))
if results_dir:
print(f"Results saved to: {results_dir}", file=sys.stderr)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,47 @@
"""Shared utilities for skill-creator scripts."""
from pathlib import Path
def parse_skill_md(skill_path: Path) -> tuple[str, str, str]:
"""Parse a SKILL.md file, returning (name, description, full_content)."""
content = (skill_path / "SKILL.md").read_text()
lines = content.split("\n")
if lines[0].strip() != "---":
raise ValueError("SKILL.md missing frontmatter (no opening ---)")
end_idx = None
for i, line in enumerate(lines[1:], start=1):
if line.strip() == "---":
end_idx = i
break
if end_idx is None:
raise ValueError("SKILL.md missing frontmatter (no closing ---)")
name = ""
description = ""
frontmatter_lines = lines[1:end_idx]
i = 0
while i < len(frontmatter_lines):
line = frontmatter_lines[i]
if line.startswith("name:"):
name = line[len("name:"):].strip().strip('"').strip("'")
elif line.startswith("description:"):
value = line[len("description:"):].strip()
# Handle YAML multiline indicators (>, |, >-, |-)
if value in (">", "|", ">-", "|-"):
continuation_lines: list[str] = []
i += 1
while i < len(frontmatter_lines) and (frontmatter_lines[i].startswith(" ") or frontmatter_lines[i].startswith("\t")):
continuation_lines.append(frontmatter_lines[i].strip())
i += 1
description = " ".join(continuation_lines)
continue
else:
description = value.strip('"').strip("'")
i += 1
return name, description, content
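
# Usage sketch (hypothetical skill path; any directory containing a SKILL.md works):
#
#     name, description, _ = parse_skill_md(Path("skills/zeroclaw"))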


@@ -0,0 +1,285 @@
---
name: zeroclaw
description: "Help users operate and interact with their ZeroClaw agent instance — through both the CLI (`zeroclaw` commands) and the REST/WebSocket gateway API. Use this skill whenever the user wants to: send messages to ZeroClaw, manage memory or cron jobs, check system status, configure channels or providers, hit the gateway API, troubleshoot their ZeroClaw setup, build from source, or do anything involving the `zeroclaw` binary or its HTTP endpoints. Trigger this even if the user just says things like 'check my agent status', 'schedule a reminder', 'store this in memory', 'list my cron jobs', 'send a message to my bot', 'set up Telegram', 'build zeroclaw', or 'my bot is broken' — these are all ZeroClaw operations."
---
# ZeroClaw Skill
You are helping a user operate their ZeroClaw agent instance. ZeroClaw is an autonomous agent runtime with a CLI and an HTTP/WebSocket gateway.
Your job is to understand what the user wants to accomplish and then **execute it** — run the command, make the API call, report the result. Do not just show commands for the user to copy-paste. Actually run them via the Bash tool and tell the user what happened. The only exception is destructive operations (clearing all memory, estop kill-all) where you should confirm first.
## Adaptive Expertise
Pay attention to how the user talks. Someone who says "can you hit the webhook endpoint with a POST" is telling you they know what they're doing — be concise, skip explanations, just execute. Someone who says "how do I make my bot remember things" needs more context about what's happening under the hood.
Signals of technical comfort: mentions specific endpoints, HTTP methods, JSON fields, talks about tokens/auth, uses CLI flags fluently, references config files directly.
Signals of less familiarity: asks "what does X do", uses casual language about the bot/agent, describes goals rather than mechanisms ("I want it to check something every morning").
Default to a middle ground — brief explanation of what you're about to do, then do it. Dial up or down from there based on cues.
## Discovery — Before You Act
Before running any ZeroClaw operation, make sure you know where things are:
1. **Find the binary.** Search in this order:
   - `which zeroclaw` (PATH)
   - The current project's build output: `./target/release/zeroclaw` or `./target/debug/zeroclaw` — this is the right choice when the user is working inside the ZeroClaw source tree and may have local changes
   - Common install locations: `~/.cargo/bin/zeroclaw`, `~/Downloads/zeroclaw-bin/zeroclaw`

   If no binary is found anywhere, offer to build from source (see "Building from Source" below). If the user is a developer working on ZeroClaw itself, they'll likely want the local build — watch for cues like them editing source files, mentioning PRs, or being in the project directory.
2. **Check if the gateway is running** (only needed for REST/WebSocket operations). A quick `curl -sf http://127.0.0.1:42617/health` tells you. If it's not running and the user wants REST access, let them know and offer to start it (`zeroclaw gateway` or `zeroclaw daemon`).
3. **Check auth status.** If the gateway requires pairing (`require_pairing = true` is the default), REST calls need a bearer token. Run `zeroclaw status` to see the current state, or check `~/.zeroclaw/config.toml` for a stored token under `[gateway]`.
Cache these findings for the conversation — don't re-discover every time.
## Important: REPL Limitation
`zeroclaw agent` (interactive REPL) requires interactive stdin, which doesn't work through the Bash tool. When the user wants to chat with their agent, use single-message mode instead:
```bash
zeroclaw agent -m "the message"
```
Each `-m` invocation is independent (no conversation history between calls). If the user needs multi-turn conversation, let them know they can run `zeroclaw agent` directly in their terminal, or use the WebSocket endpoint for programmatic streaming.
## First-Time Setup
If the user hasn't set up ZeroClaw yet (no `~/.zeroclaw/config.toml` exists), guide them through onboarding:
```bash
zeroclaw onboard                       # Guided wizard (default; preselects OpenRouter)
zeroclaw onboard --provider anthropic  # Use Anthropic directly
```
After onboarding, verify everything works:
```bash
zeroclaw status
zeroclaw doctor
```
If they already have a config but something is broken, `zeroclaw onboard --channels-only` repairs just the channel configuration without overwriting everything else.
## Building from Source
If the user wants to build ZeroClaw (or no binary is installed):
```bash
cargo build --release
```
This produces `target/release/zeroclaw`. For iteration during development, `cargo build` (debug mode) compiles faster but produces a slower binary at `target/debug/zeroclaw`.
You can also run directly without a separate build step:
```bash
cargo run --release -- <subcommand> [args]
```
Before building, `cargo check` gives a quick compile validation without the full build.
## Choosing CLI vs REST
Both surfaces can do most things. Rules of thumb:
- **CLI is simpler** for one-off operations from the terminal. It handles auth internally and formats output nicely. Prefer CLI when the user is working locally.
- **REST is needed** when the user is building an integration, scripting from another language, or accessing a remote ZeroClaw instance. Also needed for streaming (WebSocket, SSE).
- If unclear, **default to CLI** — it's less setup.
## Core Operations
### Sending Messages
**CLI:** `zeroclaw agent -m "your message here"` — remember, always use `-m` mode, not bare `zeroclaw agent`.
**REST:**
```bash
curl -X POST http://127.0.0.1:42617/webhook \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{"message": "your message here"}'
```
Response: `{"response": "...", "model": "..."}`
**WebSocket** (for streaming): connect to `ws://127.0.0.1:42617/ws/chat?token=<token>`, send `{"type": "message", "content": "..."}`, receive `{"type": "done", "full_response": "..."}`.
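For programmatic streaming from a script, a minimal client might look like the following sketch (assumes the third-party `websockets` package; the token and message are placeholders):
```python
# Hypothetical streaming client — message shapes follow the docs above.
import asyncio
import json

import websockets  # pip install websockets


async def chat(token: str, message: str) -> str:
    uri = f"ws://127.0.0.1:42617/ws/chat?token={token}"
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({"type": "message", "content": message}))
        while True:
            event = json.loads(await ws.recv())
            if event.get("type") == "done":
                return event["full_response"]
            if event.get("type") == "error":
                raise RuntimeError(event.get("message", "unknown error"))


print(asyncio.run(chat("<token>", "hello")))
```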
### System Status
Run `zeroclaw status` to see provider, model, uptime, channels, memory backend. For deeper diagnostics: `zeroclaw doctor`.
**REST:** `GET /api/status` (same info as JSON), `GET /health` (no auth, quick ok/not-ok).
### Memory
The CLI can list, get, and clear memories but **cannot store** them directly. To store a memory:
- Via agent: `zeroclaw agent -m "remember that my favorite color is blue"`
- Via REST: `POST /api/memory` with `{"key": "...", "content": "...", "category": "core"}`
**CLI (read/delete):**
- `zeroclaw memory list` — list all entries
- `zeroclaw memory list --category core --limit 10` — filtered
- `zeroclaw memory get "key-name"` — get specific entry
- `zeroclaw memory stats` — usage statistics
- `zeroclaw memory clear --key "prefix" --yes` — delete entries (confirm with user first)
**REST (full CRUD):**
- `GET /api/memory` — list all (optional: `?query=search+text&category=core`)
- `POST /api/memory` — store: `{"key": "...", "content": "...", "category": "core"}`
- `DELETE /api/memory/{key}` — delete entry
Categories: `core`, `daily`, `conversation`, or any custom string.
### Cron / Scheduling
**CLI:**
- `zeroclaw cron list` — show all jobs
- `zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York` — recurring
- `zeroclaw cron add-at '2026-03-11T10:00:00Z' 'Remind me'` — one-time at specific time
- `zeroclaw cron add-every 3600000 'Check health'` — interval in ms
- `zeroclaw cron once 30m 'Follow up'` — delay from now
- `zeroclaw cron pause <id>` / `zeroclaw cron resume <id>` / `zeroclaw cron remove <id>`
**REST:**
- `GET /api/cron` — list jobs
- `POST /api/cron` — add: `{"name": "...", "schedule": "0 9 * * *", "command": "..."}`
- `DELETE /api/cron/{id}` — remove job
### Tools
Tools are used automatically by the agent during conversations (shell, file ops, memory, browser, HTTP, web search, git, etc. — 30+ tools gated by security policy).
To see what's available: `GET /api/tools` (REST) lists all registered tools with descriptions and parameter schemas.
### Configuration
Edit `~/.zeroclaw/config.toml` directly, or re-run `zeroclaw onboard` to reconfigure.
**REST:**
- `GET /api/config` — get current config (secrets masked as `***MASKED***`)
- `PUT /api/config` — update config (send raw TOML as body, 1MB limit)
### Providers & Models
- `zeroclaw providers` — list all supported providers
- `zeroclaw models list` — cached model catalog
- `zeroclaw models refresh --all` — refresh from providers
- `zeroclaw models set anthropic/claude-sonnet-4-6` — set default model
Override per-message: `zeroclaw agent -p anthropic --model claude-sonnet-4-6 -m "hello"`
### Real-Time Events (SSE)
REST only — useful for building dashboards or monitoring:
```bash
curl -N -H "Authorization: Bearer <token>" http://127.0.0.1:42617/api/events
```
Streams JSON events: `llm_request`, `tool_call_start`, `tool_call`, `agent_start`, `agent_end`, `error`.
### Cost Tracking
`GET /api/cost` — returns session/daily/monthly costs, token counts, per-model breakdown.
### Emergency Stop
Confirm with the user before running any estop command — these are disruptive.
- `zeroclaw estop --level kill-all` — stop everything
- `zeroclaw estop --level network-kill` — block all network
- `zeroclaw estop --level tool-freeze --tool shell` — freeze specific tool
- `zeroclaw estop status` — check current estop state
- `zeroclaw estop resume --network` — resume
### Gateway Lifecycle
- `zeroclaw gateway` — start HTTP gateway (foreground)
- `zeroclaw gateway -p 8080 --host 127.0.0.1` — custom bind
- `zeroclaw daemon` — start gateway + channels + scheduler + heartbeat
- `zeroclaw service install/start/stop/status/uninstall` — OS service management
### Channels
ZeroClaw supports 21 messaging channels. To add one, you need to edit `~/.zeroclaw/config.toml`. For example, to set up Telegram:
```toml
[channels]
telegram = true
[channels_config.telegram]
bot_token = "your-bot-token-from-botfather"
allowed_users = [123456789]
```
Then restart the daemon. Check channel health with `zeroclaw channels doctor`.
For the full list of channels and their config fields, read `references/cli-reference.md` (Channels section).
### Pairing (Authentication Setup)
When `require_pairing = true` (default), REST clients need a bearer token:
```bash
curl -X POST http://127.0.0.1:42617/pair -H "X-Pairing-Code: <code>"
```
Response includes `{"token": "..."}` — save this for subsequent requests.
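The same exchange from Python, as a minimal sketch (assumes the `requests` package; the pairing code is a placeholder):
```python
import requests  # pip install requests

resp = requests.post(
    "http://127.0.0.1:42617/pair",
    headers={"X-Pairing-Code": "<code>"},  # one-time code shown by the gateway
    timeout=10,
)
resp.raise_for_status()
token = resp.json()["token"]  # send as: Authorization: Bearer <token>
```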
## Common Workflows
Here are multi-step sequences you're likely to need:
**"Is my agent healthy?"**
1. Run `zeroclaw status` — check provider, model, channels
2. Run `zeroclaw doctor` — check connectivity, diagnose issues
3. If gateway needed: `curl -sf http://127.0.0.1:42617/health`
**"Set up a new channel"**
1. Read the current config: `cat ~/.zeroclaw/config.toml`
2. Add the channel config (edit the TOML)
3. Restart: `zeroclaw service restart` (or restart daemon manually)
4. Verify: `zeroclaw channels doctor`
**"Switch to a different model"**
1. Check available: `zeroclaw models list`
2. Set it: `zeroclaw models set <provider/model>`
3. Verify: `zeroclaw status`
4. Test: `zeroclaw agent -m "hello, what model are you?"`
## Gateway Defaults
- **Port:** 42617
- **Host:** 127.0.0.1
- **Auth:** Pairing required (bearer token)
- **Rate limits:** 60 webhook requests/min, 10 pairing attempts/min
- **Body limit:** 64KB (1MB for config updates)
- **Timeout:** 30 seconds
- **Idempotency:** Optional `X-Idempotency-Key` header on `/webhook` (300s TTL)
- **Config location:** `~/.zeroclaw/config.toml`
## Reference Files
For the complete API specification with every endpoint, field, and edge case, read `references/rest-api.md`.
For the full CLI command tree with all flags and options, read `references/cli-reference.md`.
Only load these when you need precise details beyond what's in this file — for most operations, the quick references above are sufficient.
## Troubleshooting
**"zeroclaw: command not found"** — Binary not in PATH. Check `./target/release/zeroclaw`, `~/.cargo/bin/zeroclaw`, or build from source with `cargo build --release`.
**"Connection refused" on REST calls** — Gateway isn't running. Start it with `zeroclaw gateway` or `zeroclaw daemon`.
**"Unauthorized" (401/403)** — Bearer token is missing or invalid. Re-pair via `POST /pair` with the pairing code, or check `~/.zeroclaw/config.toml` for the stored token.
**"LLM request failed" (500)** — Provider issue. Run `zeroclaw doctor` to check connectivity. Common causes: expired API key, provider outage, rate limiting on the provider side.
**"Too many requests" (429)** — You're hitting ZeroClaw's rate limit. Back off — the response includes `retry_after` with the number of seconds to wait.
**Agent not using tools / acting limited** — Check autonomy settings in config.toml under `[autonomy]`. `level = "read_only"` disables most tools. Try `level = "supervised"` or `level = "full"`.
**Memory not persisting** — Check `[memory]` config. If `backend = "none"`, nothing is stored. Switch to `"sqlite"` or `"markdown"`. Also verify `auto_save = true`.
**Channel not responding** — Run `zeroclaw channels doctor` for the specific channel. Common issues: expired bot token, wrong allowed_users list, channel not enabled in `[channels]`.
Report errors to the user with context appropriate to their expertise level. For beginners, explain what went wrong and suggest the fix. For experts, just show the error and the fix.


@@ -0,0 +1,23 @@
{
  "skill_name": "zeroclaw",
  "evals": [
    {
      "id": 0,
      "prompt": "how do i make my bot remember my name",
      "expected_output": "Executes a zeroclaw command to store a memory, explains what happened in beginner-friendly language",
      "files": []
    },
    {
      "id": 1,
      "prompt": "I want to schedule a daily health check on my ZeroClaw instance every morning at 9am ET",
      "expected_output": "Executes zeroclaw cron add with correct cron expression and timezone flag",
      "files": []
    },
    {
      "id": 2,
      "prompt": "Set up a Python script that monitors my ZeroClaw agent's activity via SSE and logs tool calls to a file",
      "expected_output": "Writes a Python script that connects to /api/events SSE endpoint with auth, filters for tool_call events, and logs to a file",
      "files": []
    }
  ]
}


@@ -0,0 +1,277 @@
# ZeroClaw CLI Reference
Complete command reference for the `zeroclaw` binary.
## Table of Contents
1. [Agent](#agent)
2. [Onboarding](#onboarding)
3. [Status & Diagnostics](#status--diagnostics)
4. [Memory](#memory)
5. [Cron](#cron)
6. [Providers & Models](#providers--models)
7. [Gateway & Daemon](#gateway--daemon)
8. [Service Management](#service-management)
9. [Channels](#channels)
10. [Security & Emergency Stop](#security--emergency-stop)
11. [Hardware Peripherals](#hardware-peripherals)
12. [Skills](#skills)
13. [Shell Completions](#shell-completions)
---
## Agent
Interactive chat or single-message mode.
```bash
zeroclaw agent # Interactive REPL
zeroclaw agent -m "Summarize today's logs" # Single message
zeroclaw agent -p anthropic --model claude-sonnet-4-6 # Override provider/model
zeroclaw agent -t 0.3 # Set temperature
zeroclaw agent --peripheral nucleo-f401re:/dev/ttyACM0 # Attach hardware
```
**Key flags:**
- `-m <message>` — single message mode (no REPL)
- `-p <provider>` — override provider (openrouter, anthropic, openai, ollama)
- `--model <model>` — override model
- `-t <float>` — temperature (0.0–2.0)
- `--peripheral <name>:<port>` — attach hardware peripheral
The agent has access to 30+ tools gated by security policy: shell, file_read, file_write, file_edit, glob_search, content_search, memory_store, memory_recall, memory_forget, browser, http_request, web_fetch, web_search, cron, delegate, git, and more. Max tool iterations defaults to 10.
---
## Onboarding
First-time setup or reconfiguration.
```bash
zeroclaw onboard                       # Guided wizard (default provider: openrouter)
zeroclaw onboard --provider anthropic  # Quick mode with a specific provider
zeroclaw onboard --memory sqlite # Set memory backend
zeroclaw onboard --force # Overwrite existing config
zeroclaw onboard --channels-only # Repair channels only
```
**Key flags:**
- `--provider <name>` — openrouter (default), anthropic, openai, ollama
- `--model <model>` — default model
- `--memory <backend>` — sqlite, markdown, lucid, none
- `--force` — overwrite existing config.toml
- `--channels-only` — only repair channel configuration
- `--reinit` — start fresh (backs up existing config)
Creates `~/.zeroclaw/config.toml` with `0600` permissions.
---
## Status & Diagnostics
```bash
zeroclaw status # System overview
zeroclaw doctor # Run all diagnostic checks
zeroclaw doctor models # Probe model connectivity
zeroclaw doctor traces # Query execution traces
```
---
## Memory
```bash
zeroclaw memory list # List all entries
zeroclaw memory list --category core --limit 10 # Filtered list
zeroclaw memory get "some-key" # Get specific entry
zeroclaw memory stats # Usage statistics
zeroclaw memory clear --key "prefix" --yes # Delete entries (requires --yes)
```
**Key flags:**
- `--category <name>` — filter by category (core, daily, conversation, custom)
- `--limit <n>` — limit results
- `--key <prefix>` — key prefix for clear operations
- `--yes` — skip confirmation (required for clear)
---
## Cron
```bash
zeroclaw cron list # List all jobs
zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York # Recurring (cron expr)
zeroclaw cron add-at '2026-03-11T10:00:00Z' 'Remind me about meeting' # One-time at specific time
zeroclaw cron add-every 3600000 'Check server health' # Interval in milliseconds
zeroclaw cron once 30m 'Follow up on that task' # Delay from now
zeroclaw cron pause <id> # Pause job
zeroclaw cron resume <id> # Resume job
zeroclaw cron remove <id> # Delete job
```
**Subcommands:**
- `add <cron-expr> <command>` — standard cron expression (5-field)
- `add-at <iso-datetime> <command>` — fire once at exact time
- `add-every <ms> <command>` — repeating interval
- `once <duration> <command>` — delay from now (e.g., `30m`, `2h`, `1d`)
---
## Providers & Models
```bash
zeroclaw providers # List all 40+ supported providers
zeroclaw models list # Show cached model catalog
zeroclaw models refresh --all # Refresh catalogs from all providers
zeroclaw models set anthropic/claude-sonnet-4-6 # Set default model
zeroclaw models status # Current model info
```
Model routing in config.toml:
```toml
[[model_routes]]
hint = "reasoning"
provider = "openrouter"
model = "anthropic/claude-sonnet-4-6"
```
---
## Gateway & Daemon
```bash
zeroclaw gateway # Start HTTP gateway (foreground)
zeroclaw gateway -p 8080 --host 127.0.0.1 # Custom port/host
zeroclaw daemon # Gateway + channels + scheduler + heartbeat
zeroclaw daemon -p 8080 --host 0.0.0.0 # Custom bind
```
**Gateway defaults:**
- Port: 42617
- Host: 127.0.0.1
- Pairing required: true
- Public bind allowed: false
---
## Service Management
OS service lifecycle (systemd on Linux, launchd on macOS).
```bash
zeroclaw service install # Install as system service
zeroclaw service start # Start the service
zeroclaw service status # Check service status
zeroclaw service stop # Stop the service
zeroclaw service restart # Restart the service
zeroclaw service uninstall # Remove the service
```
**Logs:**
- macOS: `~/.zeroclaw/logs/daemon.stdout.log`
- Linux: `journalctl -u zeroclaw`
---
## Channels
Channels are configured in `config.toml` under `[channels]` and `[channels_config.*]`.
```bash
zeroclaw channels list # List configured channels
zeroclaw channels doctor # Check channel health
```
Supported channels (21 total): Telegram, Discord, Slack, WhatsApp (Meta), WATI, Linq (iMessage/RCS/SMS), Email (IMAP/SMTP), IRC, Matrix, Nostr, Signal, Nextcloud Talk, and more.
Channel config example (Telegram):
```toml
[channels]
telegram = true
[channels_config.telegram]
bot_token = "..."
allowed_users = [123456789]
```
---
## Security & Emergency Stop
```bash
zeroclaw estop --level kill-all # Stop everything
zeroclaw estop --level network-kill # Block all network access
zeroclaw estop --level domain-block --domain "*.example.com" # Block specific domains
zeroclaw estop --level tool-freeze --tool shell # Freeze specific tool
zeroclaw estop status # Check estop state
zeroclaw estop resume --network # Resume (may require OTP)
```
**Estop levels:**
- `kill-all` — nuclear option, stops all agent activity
- `network-kill` — blocks all outbound network
- `domain-block` — blocks specific domain patterns
- `tool-freeze` — freezes individual tools
Autonomy config in config.toml:
```toml
[autonomy]
level = "supervised" # read_only | supervised | full
workspace_only = true
allowed_commands = ["git", "cargo", "python"]
forbidden_paths = ["/etc", "/root", "~/.ssh"]
max_actions_per_hour = 20
max_cost_per_day_cents = 500
```
---
## Hardware Peripherals
```bash
zeroclaw hardware discover # Find USB devices
zeroclaw hardware introspect /dev/ttyACM0 # Probe device capabilities
zeroclaw peripheral list # List configured peripherals
zeroclaw peripheral add nucleo-f401re /dev/ttyACM0 # Add peripheral
zeroclaw peripheral flash-nucleo # Flash STM32 firmware
zeroclaw peripheral flash --port /dev/cu.usbmodem101 # Flash Arduino firmware
```
**Supported boards:** STM32 Nucleo-F401RE, Arduino Uno R4, Raspberry Pi GPIO, ESP32.
Attach to agent session: `zeroclaw agent --peripheral nucleo-f401re:/dev/ttyACM0`
---
## Skills
```bash
zeroclaw skills list # List installed skills
zeroclaw skills install <path-or-url> # Install a skill
zeroclaw skills audit # Audit installed skills
zeroclaw skills remove <name> # Remove a skill
```
---
## Shell Completions
```bash
zeroclaw completions zsh # Generate Zsh completions
zeroclaw completions bash # Generate Bash completions
zeroclaw completions fish # Generate Fish completions
```
---
## Config File
Default location: `~/.zeroclaw/config.toml`
Config resolution order (first match wins):
1. `ZEROCLAW_CONFIG_DIR` environment variable
2. `ZEROCLAW_WORKSPACE` environment variable
3. `~/.zeroclaw/active_workspace.toml` marker file
4. `~/.zeroclaw/config.toml` (default)


@@ -0,0 +1,505 @@
# ZeroClaw REST API Reference
Complete endpoint reference for the ZeroClaw gateway HTTP API.
## Table of Contents
1. [Authentication](#authentication)
2. [Public Endpoints](#public-endpoints)
3. [Webhook](#webhook)
4. [WebSocket Chat](#websocket-chat)
5. [Status & Health](#status--health)
6. [Memory](#memory)
7. [Cron](#cron)
8. [Tools](#tools)
9. [Configuration](#configuration)
10. [Integrations](#integrations)
11. [Cost](#cost)
12. [Events (SSE)](#events-sse)
13. [Channel Webhooks](#channel-webhooks)
14. [Rate Limiting](#rate-limiting)
15. [Error Responses](#error-responses)
---
## Authentication
Three authentication mechanisms:
### Bearer Token (Primary)
```
Authorization: Bearer <token>
```
Obtained via `POST /pair`. Required for all `/api/*` endpoints when `require_pairing = true` (default).
### Webhook Secret
```
X-Webhook-Secret: <raw_secret>
```
Optional additional auth for `/webhook`. The server hashes the presented secret with SHA-256 and compares it against the stored hash in constant time.
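The documented check is equivalent to this sketch (an illustration assuming the stored value is a SHA-256 hex digest, not the server's actual code):
```python
import hashlib
import hmac


def secret_matches(presented: str, stored_sha256_hex: str) -> bool:
    """Hash the presented secret and compare in constant time."""
    digest = hashlib.sha256(presented.encode("utf-8")).hexdigest()
    return hmac.compare_digest(digest, stored_sha256_hex)
```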
### WebSocket Token
```
ws://host:port/ws/chat?token=<bearer_token>
```
WebSocket connections pass the token as a query parameter (browsers can't set custom headers on WS handshake).
---
## Public Endpoints
### GET /health
No authentication required.
**Response 200:**
```json
{
"status": "ok",
"paired": true,
"require_pairing": true,
"runtime": {}
}
```
### GET /metrics
Prometheus text exposition format.
**Response 200:**
```
Content-Type: text/plain; version=0.0.4; charset=utf-8
```
### POST /pair
Exchange a one-time pairing code for a bearer token.
**Rate Limit:** Configurable per-minute limit per IP (default: 10/min).
**Headers:**
- `X-Pairing-Code: <code>` (required)
**Response 200 (success):**
```json
{
"paired": true,
"persisted": true,
"token": "<bearer_token>",
"message": "Save this token — use it as Authorization: Bearer <token>"
}
```
**Response 200 (persistence failure):**
```json
{
"paired": true,
"persisted": false,
"token": "<bearer_token>",
"message": "Paired for this process, but failed to persist token to config.toml..."
}
```
**Response 403:**
```json
{"error": "Invalid pairing code"}
```
**Response 429:**
```json
{"error": "Too many pairing requests. Please retry later.", "retry_after": 60}
```
**Response 429 (lockout):**
```json
{"error": "Too many failed attempts. Try again in {lockout_secs}s.", "retry_after": 120}
```
---
## Webhook
### POST /webhook
Send a message to the agent and receive a response.
**Rate Limit:** Configurable per-minute limit per IP (default: 60/min).
**Headers:**
- `Authorization: Bearer <token>` (if pairing enabled)
- `Content-Type: application/json`
- `X-Webhook-Secret: <secret>` (optional)
- `X-Idempotency-Key: <uuid>` (optional)
**Request Body:**
```json
{"message": "your prompt here"}
```
**Response 200:**
```json
{"response": "<llm_response>", "model": "<model_name>"}
```
**Response 200 (duplicate — idempotency key match):**
```json
{"status": "duplicate", "idempotent": true, "message": "Request already processed for this idempotency key"}
```
**Response 401:**
```json
{"error": "Unauthorized — pair first via POST /pair, then send Authorization: Bearer <token>"}
```
**Response 429:**
```json
{"error": "Too many webhook requests. Please retry later.", "retry_after": 60}
```
**Response 500:**
```json
{"error": "LLM request failed"}
```
### Idempotency
- Header: `X-Idempotency-Key: <uuid>`
- TTL: configurable, default 300 seconds
- Max tracked keys: configurable, default 10,000
- Duplicate requests within TTL return `"status": "duplicate"` instead of re-processing
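A retry-safe call as a Python sketch (assumes the `requests` package; the token is a placeholder):
```python
import uuid

import requests

url = "http://127.0.0.1:42617/webhook"
headers = {
    "Authorization": "Bearer <token>",
    "X-Idempotency-Key": str(uuid.uuid4()),  # reuse the same key when retrying
}
body = {"message": "deploy finished"}

first = requests.post(url, json=body, headers=headers, timeout=35)
retry = requests.post(url, json=body, headers=headers, timeout=35)
# Within the TTL, the retry returns {"status": "duplicate", "idempotent": true, ...}
# instead of triggering a second agent run.
```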
---
## WebSocket Chat
### GET /ws/chat?token=<bearer_token>
Streaming agent chat over WebSocket.
**Client → Server:**
```json
{"type": "message", "content": "Hello, what's the weather?"}
```
**Server → Client (complete response):**
```json
{"type": "done", "full_response": "The weather in San Francisco is sunny..."}
```
**Server → Client (error):**
```json
{"type": "error", "message": "Error message here"}
```
Unknown message types are ignored. Invalid JSON triggers an error response.
---
## Status & Health
### GET /api/status
**Response 200:**
```json
{
"provider": "openrouter",
"model": "anthropic/claude-sonnet-4",
"temperature": 0.7,
"uptime_seconds": 3600,
"gateway_port": 42617,
"locale": "en",
"memory_backend": "sqlite",
"paired": true,
"channels": {
"telegram": false,
"discord": true,
"slack": false
},
"health": {}
}
```
### GET /api/health
Component health snapshot (requires auth).
```json
{"health": {}}
```
### GET or POST /api/doctor
Run system diagnostics.
```json
{
"results": [
{"name": "provider_connectivity", "severity": "ok", "message": "OpenRouter API reachable"}
],
"summary": {"ok": 5, "warnings": 1, "errors": 0}
}
```
---
## Memory
### GET /api/memory
List or search memory entries.
**Query Parameters:**
- `query` (string, optional) — search text; triggers search mode
- `category` (string, optional) — filter by category
**Response 200:**
```json
{
"entries": [
{
"key": "memory_key",
"content": "memory content",
"category": "core",
"timestamp": "2025-01-10T12:00:00Z"
}
]
}
```
### POST /api/memory
Store a memory entry.
**Request Body:**
```json
{
"key": "unique_key",
"content": "memory content",
"category": "core"
}
```
Category defaults to `"core"` if omitted. Other values: `daily`, `conversation`, or any custom string.
**Response 200:**
```json
{"status": "ok"}
```
### DELETE /api/memory/{key}
Delete a memory entry.
**Response 200:**
```json
{"status": "ok", "deleted": true}
```
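A full store/search/delete round trip as a Python sketch (assumes the `requests` package; the token and key are placeholders):
```python
import requests

BASE = "http://127.0.0.1:42617"
AUTH = {"Authorization": "Bearer <token>"}

# Store an entry (category defaults to "core" if omitted)
requests.post(
    f"{BASE}/api/memory", headers=AUTH, timeout=10,
    json={"key": "favorite-color", "content": "blue", "category": "core"},
).raise_for_status()

# Search within a category
entries = requests.get(
    f"{BASE}/api/memory", headers=AUTH, timeout=10,
    params={"query": "color", "category": "core"},
).json()["entries"]

# Delete by key
requests.delete(f"{BASE}/api/memory/favorite-color", headers=AUTH, timeout=10).raise_for_status()
```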
---
## Cron
### GET /api/cron
List all scheduled jobs.
**Response 200:**
```json
{
"jobs": [
{
"id": "<uuid>",
"name": "daily-backup",
"command": "backup.sh",
"next_run": "2025-01-10T15:00:00Z",
"last_run": "2025-01-09T15:00:00Z",
"last_status": "success",
"enabled": true
}
]
}
```
### POST /api/cron
Add a new job.
**Request Body:**
```json
{
"name": "job-name",
"schedule": "0 9 * * *",
"command": "command to run"
}
```
**Response 200:**
```json
{
"status": "ok",
"job": {"id": "<uuid>", "name": "job-name", "command": "command to run", "enabled": true}
}
```
### DELETE /api/cron/{id}
Remove a job.
**Response 200:**
```json
{"status": "ok"}
```
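The add/remove cycle as a Python sketch (assumes the `requests` package; the token is a placeholder):
```python
import requests

BASE = "http://127.0.0.1:42617"
AUTH = {"Authorization": "Bearer <token>"}

# Add a recurring job and capture its server-assigned id
job = requests.post(
    f"{BASE}/api/cron", headers=AUTH, timeout=10,
    json={"name": "daily-report", "schedule": "0 9 * * *",
          "command": "summarize yesterday's logs"},
).json()["job"]

# Remove it again by id
requests.delete(f"{BASE}/api/cron/{job['id']}", headers=AUTH, timeout=10).raise_for_status()
```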
---
## Tools
### GET /api/tools
List all registered tools with descriptions and parameter schemas.
**Response 200:**
```json
{
"tools": [
{"name": "shell", "description": "Execute shell commands", "parameters": {}},
{"name": "file_read", "description": "Read file contents", "parameters": {}}
]
}
```
---
## Configuration
### GET /api/config
Get current config. Secrets are masked as `***MASKED***`.
**Response 200:**
```json
{"format": "toml", "content": "<toml_string>"}
```
### PUT /api/config
Update config from TOML body. Body limit: 1 MB.
**Request Body:** Raw TOML text.
**Response 200:**
```json
{"status": "ok"}
```
**Response 400:**
```json
{"error": "Invalid TOML: <details>"}
```
or
```json
{"error": "Invalid config: <validation_error>"}
```
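A minimal update sketch (assumes the `requests` package; note that `GET /api/config` masks secrets, so push a complete config from a local copy rather than round-tripping the masked text):
```python
from pathlib import Path

import requests

# Edit a real local copy of the config, then push the full TOML as the raw body.
toml_text = Path("config.toml").read_text()

resp = requests.put(
    "http://127.0.0.1:42617/api/config",
    headers={"Authorization": "Bearer <token>"},
    data=toml_text.encode("utf-8"),
    timeout=10,
)
resp.raise_for_status()  # a 400 means the TOML failed parsing or validation
```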
---
## Integrations
### GET /api/integrations
List all integrations and their status.
**Response 200:**
```json
{
"integrations": [
{"name": "openrouter", "description": "OpenRouter LLM provider", "category": "providers", "status": "ok"},
{"name": "telegram", "description": "Telegram messaging channel", "category": "channels", "status": "configured"}
]
}
```
---
## Cost
### GET /api/cost
Cost tracking summary.
**Response 200:**
```json
{
"cost": {
"session_cost_usd": 1.50,
"daily_cost_usd": 5.00,
"monthly_cost_usd": 150.00,
"total_tokens": 50000,
"request_count": 25,
"by_model": {"anthropic/claude-sonnet-4": 1.50}
}
}
```
---
## Events (SSE)
### GET /api/events
Server-Sent Events stream. Requires bearer token.
**Content-Type:** `text/event-stream`
**Event types:**
| Type | Fields | Description |
|------|--------|-------------|
| `llm_request` | provider, model, timestamp | LLM call started |
| `tool_call_start` | tool, timestamp | Tool execution started |
| `tool_call` | tool, duration_ms, success, timestamp | Tool execution completed |
| `agent_start` | provider, model, timestamp | Agent loop started |
| `agent_end` | provider, model, duration_ms, tokens_used, cost_usd, timestamp | Agent loop completed |
| `error` | component, message, timestamp | Error occurred |
**Example:**
```bash
curl -N -H "Authorization: Bearer <token>" http://127.0.0.1:42617/api/events
```
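A monitoring sketch that logs tool activity to a file (assumes the `requests` package and that each `data:` line carries one JSON event with a `type` field; the token and log path are placeholders):
```python
import json

import requests

resp = requests.get(
    "http://127.0.0.1:42617/api/events",
    headers={"Authorization": "Bearer <token>"},
    stream=True,
    timeout=(5, None),  # connect timeout only; the stream stays open
)
with resp, open("tool_calls.log", "a") as log:
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue  # skip blank keep-alives and non-data SSE fields
        event = json.loads(line[len("data:"):].strip())
        if event.get("type") in ("tool_call_start", "tool_call"):
            log.write(json.dumps(event) + "\n")
            log.flush()
```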
---
## Channel Webhooks
These are incoming webhook endpoints for specific messaging channels. They're set up automatically when channels are configured.
### WhatsApp (Meta Cloud API)
- `GET /whatsapp` — verification (echoes `hub.challenge`)
- `POST /whatsapp` — incoming messages (signature verified via `X-Hub-Signature-256`)
### WATI (WhatsApp Business)
- `GET /wati` — verification (echoes `challenge`)
- `POST /wati` — incoming messages
### Linq (iMessage/RCS/SMS)
- `POST /linq` — incoming messages (signature verified via `X-Webhook-Signature` + `X-Webhook-Timestamp`)
### Nextcloud Talk
- `POST /nextcloud-talk` — bot API webhook (signature verified via `X-Nextcloud-Talk-Signature`)
---
## Rate Limiting
Rate limits use a sliding 60-second window, applied per client IP.
| Endpoint | Default Limit |
|----------|--------------|
| `POST /pair` | 10/min |
| `POST /webhook` | 60/min |
If `trust_forwarded_headers` is enabled, uses `X-Forwarded-For` for client IP.
Max tracked keys: configurable (default: 10,000).
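Clients can honor `retry_after` with a small backoff helper; a sketch (assumes the `requests` package):
```python
import time

import requests


def post_with_backoff(url: str, payload: dict, headers: dict, attempts: int = 5) -> requests.Response:
    """Retry on 429, sleeping for the server-suggested retry_after."""
    for _ in range(attempts):
        resp = requests.post(url, json=payload, headers=headers, timeout=35)
        if resp.status_code != 429:
            return resp
        time.sleep(resp.json().get("retry_after", 60))
    raise RuntimeError("still rate limited after retries")
```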
---
## Error Responses
**Standard format:**
```json
{"error": "Human-readable error message"}
```
**With retry info:**
```json
{"error": "...", "retry_after": 60}
```
**Status codes:**
| Code | Meaning |
|------|---------|
| 200 | Success |
| 400 | Invalid JSON, missing fields, invalid TOML |
| 401 | Invalid/missing bearer token or webhook secret |
| 403 | Pairing verification failed |
| 404 | Endpoint or channel not configured |
| 408 | Request timeout (30s) |
| 429 | Rate limited (check `retry_after`) |
| 500 | LLM error, database error, internal failure |

third_party/zeroclaw/.coderabbit.yaml vendored Normal file

@@ -0,0 +1,71 @@
# CodeRabbit configuration for ZeroClaw
# Documentation: https://docs.coderabbit.ai/reference/configuration
language: en-US
early_access: false
# Enable tone control for reviews
reviews:
  # Request changes workflow
  request_changes_workflow: false
  # High level summary of the PR
  high_level_summary: true
  # Generate sequence diagrams
  sequence_diagrams: true
  # Auto-review configuration
  auto_review:
    enabled: true
    # Only review PRs targeting these branches
    base_branches:
      - master
    # Skip reviews for draft PRs or WIP
    drafts: false
  # Poem feature toggle (must be a boolean, not an object)
  poem: false
  # Reviewer suggestions
  reviewer:
    # Suggest reviewers based on blame data
    enabled: true
    # Automatically assign suggested reviewers
    auto_assign: false
  # Enable finishing touches
  finishing_touches:
    # Generate docstrings
    docstrings:
      enabled: true
    # Generate unit tests
    unit_tests:
      enabled: true
  # Tools configuration
  tools:
    # Rust-specific tools
    cargo:
      enabled: true
# Chat configuration
chat:
  auto_reply: true
# Path filters - ignore generated files
path_filters:
  - "!**/target/**"
  - "!**/node_modules/**"
  - "!**/.cargo/**"
  - "!**/Cargo.lock"
# Review instructions specific to Rust and this project
review_instructions:
  - "Focus on Rust best practices and idiomatic code"
  - "Check for security vulnerabilities in encryption/crypto code"
  - "Ensure proper error handling with Result types"
  - "Verify memory safety and avoid unnecessary clones"
  - "Check for proper use of lifetimes and borrowing"
  - "Ensure tests cover critical security paths"
  - "Review configuration migration code carefully"

third_party/zeroclaw/.dockerignore vendored Normal file

@@ -0,0 +1,71 @@
# Git history (may contain old secrets)
.git
.gitignore
.githooks
# Rust build artifacts (can be multiple GB)
target
# Documentation and examples (not needed for runtime)
docs
examples
tests
# Markdown files (README, CHANGELOG, etc.)
*.md
# Images (unnecessary for build)
*.png
*.svg
*.jpg
*.jpeg
*.gif
# SQLite databases (conversation history, cron jobs)
*.db
*.db-journal
# macOS artifacts
.DS_Store
.AppleDouble
.LSOverride
# CI/CD configs (not needed in image)
.github
# Cargo deny config (lint tool, not runtime)
deny.toml
# License file (not needed for runtime)
LICENSE
# Temporary files
.tmp_*
*.tmp
*.bak
*.swp
*~
# IDE and editor configs
.idea
.vscode
*.iml
# Windsurf workflows
.windsurf
# Environment files (may contain secrets)
.env
.env.*
!.env.example
# Coverage and profiling
*.profraw
*.profdata
coverage
lcov.info
# Application and script directories (not needed for Docker runtime)
apps/
python/
scripts/

third_party/zeroclaw/.editorconfig vendored Normal file

@@ -0,0 +1,44 @@
# EditorConfig is awesome: https://EditorConfig.org
# top-most EditorConfig file
root = true
# All files
[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
# Rust files - match rustfmt.toml
[*.rs]
indent_size = 4
max_line_length = 100
# Markdown files
[*.md]
trim_trailing_whitespace = false
max_line_length = 80
# TOML files
[*.toml]
indent_size = 2
# YAML files
[*.{yml,yaml}]
indent_size = 2
# Python files
[*.py]
indent_size = 4
max_line_length = 100
# Shell scripts
[*.{sh,bash}]
indent_size = 2
# JSON files
[*.json]
indent_size = 2

third_party/zeroclaw/.env.example vendored Normal file

@@ -0,0 +1,122 @@
# ZeroClaw Environment Variables
# Copy this file to `.env` and fill in your local values.
# Never commit `.env` or any real secrets.
# ── Core Runtime ──────────────────────────────────────────────
# Provider key resolution at runtime:
# 1) explicit key passed from config/CLI
# 2) provider-specific env var (OPENROUTER_API_KEY, OPENAI_API_KEY, ...)
# 3) generic fallback env vars below
# Generic fallback API key (used when provider-specific key is absent)
API_KEY=your-api-key-here
# ZEROCLAW_API_KEY=your-api-key-here
# Default provider/model (can be overridden by CLI flags)
PROVIDER=openrouter
# ZEROCLAW_PROVIDER=openrouter
# ZEROCLAW_MODEL=anthropic/claude-sonnet-4-6
# ZEROCLAW_TEMPERATURE=0.7
# Workspace directory override
# ZEROCLAW_WORKSPACE=/path/to/workspace
# Reasoning mode (enables extended thinking for supported models)
# ZEROCLAW_REASONING_ENABLED=false
# REASONING_ENABLED=false
# ── Provider-Specific API Keys ────────────────────────────────
# OpenRouter
# OPENROUTER_API_KEY=sk-or-v1-...
# Anthropic
# ANTHROPIC_OAUTH_TOKEN=...
# ANTHROPIC_API_KEY=sk-ant-...
# OpenAI / Gemini
# OPENAI_API_KEY=sk-...
# GEMINI_API_KEY=...
# GOOGLE_API_KEY=...
# Other supported providers
# VENICE_API_KEY=...
# GROQ_API_KEY=...
# MISTRAL_API_KEY=...
# DEEPSEEK_API_KEY=...
# XAI_API_KEY=...
# TOGETHER_API_KEY=...
# FIREWORKS_API_KEY=...
# PERPLEXITY_API_KEY=...
# COHERE_API_KEY=...
# MOONSHOT_API_KEY=...
# GLM_API_KEY=...
# MINIMAX_OAUTH_TOKEN=...
# MINIMAX_API_KEY=...
# MINIMAX_OAUTH_REFRESH_TOKEN=...
# MINIMAX_OAUTH_REGION=global # optional: global|cn
# QIANFAN_API_KEY=...
# DASHSCOPE_API_KEY=...
# ZAI_API_KEY=...
# SYNTHETIC_API_KEY=...
# OPENCODE_API_KEY=...
# OPENCODE_GO_API_KEY=...
# VERCEL_API_KEY=...
# CLOUDFLARE_API_KEY=...
# ── Gateway ──────────────────────────────────────────────────
# ZEROCLAW_GATEWAY_PORT=3000
# ZEROCLAW_GATEWAY_HOST=127.0.0.1
# ZEROCLAW_ALLOW_PUBLIC_BIND=false
# ── Storage ─────────────────────────────────────────────────
# Backend override for persistent storage (default: sqlite)
# ZEROCLAW_STORAGE_PROVIDER=sqlite
# ── Proxy ──────────────────────────────────────────────────
# Forward provider/service traffic through an HTTP(S) proxy.
# ZEROCLAW_PROXY_ENABLED=false
# ZEROCLAW_HTTP_PROXY=http://proxy.example.com:8080
# ZEROCLAW_HTTPS_PROXY=http://proxy.example.com:8080
# ZEROCLAW_ALL_PROXY=socks5://proxy.example.com:1080
# ZEROCLAW_NO_PROXY=localhost,127.0.0.1
# ZEROCLAW_PROXY_SCOPE=zeroclaw # environment|zeroclaw|services
# ZEROCLAW_PROXY_SERVICES=openai,anthropic
# ── Optional Integrations ────────────────────────────────────
# Pushover notifications (`pushover` tool)
# PUSHOVER_TOKEN=your-pushover-app-token
# PUSHOVER_USER_KEY=your-pushover-user-key
# ── Docker Compose ───────────────────────────────────────────
# Host port mapping (used by docker-compose.yml)
# HOST_PORT=3000
# ── Z.AI GLM Coding Plan ───────────────────────────────────────
# Z.AI provides GLM models through OpenAI-compatible endpoints.
# API key format: id.secret (e.g., abc123.xyz789)
#
# Usage:
# zeroclaw onboard --provider zai --api-key YOUR_ZAI_API_KEY
#
# Or set the environment variable:
# ZAI_API_KEY=your-id.secret
#
# Common models: glm-5, glm-4.7, glm-4-plus, glm-4-flash
# See docs/zai-glm-setup.md for detailed configuration.
# ── Web Search ────────────────────────────────────────────────
# Web search tool for finding information on the internet.
# Enabled by default with DuckDuckGo (free, no API key required).
#
# WEB_SEARCH_ENABLED=true
# WEB_SEARCH_PROVIDER=duckduckgo
# WEB_SEARCH_MAX_RESULTS=5
# WEB_SEARCH_TIMEOUT_SECS=15
#
# Optional: Brave Search (requires API key from https://brave.com/search/api)
# WEB_SEARCH_PROVIDER=brave
# BRAVE_API_KEY=your-brave-search-api-key
#
# Optional: SearXNG (self-hosted, requires instance URL)
# WEB_SEARCH_PROVIDER=searxng
# SEARXNG_INSTANCE_URL=https://searx.example.com

third_party/zeroclaw/.envrc vendored Normal file

@@ -0,0 +1 @@
use flake


@@ -0,0 +1,89 @@
# ZeroClaw Code Style Guide
This style guide provides instructions for Gemini Code Assist when reviewing pull requests for the ZeroClaw project.
## Project Overview
ZeroClaw is a Rust-based security-focused project that handles encryption, secrets management, and secure configuration. Code reviews should prioritize security, memory safety, and Rust best practices.
## General Principles
### Priority Levels
- **CRITICAL**: Security vulnerabilities, memory safety issues, data leaks
- **HIGH**: Logic errors, incorrect error handling, API misuse
- **MEDIUM**: Code quality, performance concerns, non-idiomatic Rust
- **LOW**: Style issues, documentation improvements, minor refactoring
## Rust-Specific Guidelines
### Memory Safety
1. **Borrowing and Lifetimes**: Verify proper use of borrowing and lifetime annotations
2. **Unsafe Code**: Flag any `unsafe` blocks for careful review - they should be minimal and well-justified
3. **Clone Usage**: Identify unnecessary `.clone()` calls that could be replaced with borrowing
4. **Memory Leaks**: Watch for potential memory leaks in long-running processes
### Error Handling
1. **Result Types**: All fallible operations should return `Result` types
2. **Error Propagation**: Use `?` operator for clean error propagation
3. **Custom Errors**: Ensure custom error types implement appropriate traits
4. **Panic**: Flag any uses of `panic!`, `unwrap()`, or `expect()` in production code
### Security
1. **Cryptography**: Review all crypto code for:
   - Proper key generation and storage
   - Secure random number generation
   - No hardcoded secrets or keys
   - Use of well-vetted crypto libraries
2. **Secrets Management**:
   - Secrets should never be logged
   - Use secure memory wiping when appropriate
   - Validate encryption/decryption implementations
3. **Input Validation**: All external input must be validated
### Code Quality
1. **Documentation**: Public APIs should have doc comments with examples
2. **Tests**: Critical paths should have comprehensive test coverage
3. **Type Safety**: Prefer type-safe abstractions over primitive types
4. **Idiomatic Rust**: Follow Rust API guidelines and conventions
## Project-Specific Rules
### Configuration Management
- Configuration migrations must be backward compatible
- Validate all configuration before applying
- Test migration paths from legacy to new formats
### Dependencies
- Prefer well-maintained crates with security audit history
- Avoid unnecessary dependencies
- Check for known vulnerabilities in dependencies
## Review Focus Areas
When reviewing PRs, pay special attention to:
1. Changes in `src/security/` - highest security scrutiny
2. Configuration migration code - ensure data integrity
3. Error handling paths - verify all edge cases
4. Public API changes - check for breaking changes
5. Test coverage - ensure critical code is tested
## Common Issues to Flag
- Unhandled errors or generic error messages
- Missing input validation
- Hardcoded credentials or secrets
- Unsafe code without justification
- Missing documentation on public APIs
- Inadequate test coverage on security-critical code
- Performance issues (unnecessary allocations, inefficient algorithms)
- Breaking API changes without deprecation warnings

third_party/zeroclaw/.gitattributes vendored Normal file

@@ -0,0 +1,61 @@
# Git attributes for ZeroClaw
# https://git-scm.com/docs/gitattributes
# Auto detect text files and perform LF normalization
* text=auto
# Source code
*.rs text eol=lf linguist-language=Rust
*.toml text eol=lf linguist-language=TOML
*.py text eol=lf linguist-language=Python
*.js text eol=lf linguist-language=JavaScript
*.ts text eol=lf linguist-language=TypeScript
*.html text eol=lf linguist-language=HTML
*.css text eol=lf linguist-language=CSS
*.scss text eol=lf linguist-language=SCSS
*.json text eol=lf linguist-language=JSON
*.yaml text eol=lf linguist-language=YAML
*.yml text eol=lf linguist-language=YAML
*.md text eol=lf linguist-language=Markdown
*.sh text eol=lf linguist-language=Shell
*.bash text eol=lf linguist-language=Shell
*.ps1 text eol=crlf linguist-language=PowerShell
# Documentation
*.txt text eol=lf
LICENSE* text eol=lf
# Configuration files
.editorconfig text eol=lf
.gitattributes text eol=lf
.gitignore text eol=lf
.dockerignore text eol=lf
# Rust-specific
Cargo.lock text eol=lf linguist-generated
Cargo.toml text eol=lf
# Declare files that will always have CRLF line endings on checkout
*.sln text eol=crlf
# Denote all files that are truly binary and should not be modified
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.ico binary
*.svg text
*.wasm binary
*.woff binary
*.woff2 binary
*.ttf binary
*.eot binary
*.mp3 binary
*.mp4 binary
*.webm binary
*.zip binary
*.tar binary
*.gz binary
*.bz2 binary
*.7z binary
*.db binary

third_party/zeroclaw/.githooks/pre-commit vendored Executable file

@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -euo pipefail

if command -v gitleaks >/dev/null 2>&1; then
  gitleaks protect --staged --redact
else
  echo "warning: gitleaks not found; skipping staged secret scan" >&2
fi

third_party/zeroclaw/.githooks/pre-push vendored Executable file

@@ -0,0 +1,53 @@
#!/usr/bin/env bash
#
# pre-push hook — runs fmt, clippy, and tests before every push.
# Install: git config core.hooksPath .githooks
# Skip: git push --no-verify
set -euo pipefail

echo "==> pre-push: running rust quality gate..."
./scripts/ci/rust_quality_gate.sh || {
  echo "FAIL: rust quality gate failed."
  exit 1
}

if [ "${ZEROCLAW_STRICT_LINT:-0}" = "1" ]; then
  echo "==> pre-push: running strict clippy warnings gate (ZEROCLAW_STRICT_LINT=1)..."
  ./scripts/ci/rust_quality_gate.sh --strict || {
    echo "FAIL: strict clippy warnings gate reported issues."
    exit 1
  }
fi

if [ "${ZEROCLAW_STRICT_DELTA_LINT:-0}" = "1" ]; then
  echo "==> pre-push: running strict delta lint gate (ZEROCLAW_STRICT_DELTA_LINT=1)..."
  ./scripts/ci/rust_strict_delta_gate.sh || {
    echo "FAIL: strict delta lint gate reported issues."
    exit 1
  }
fi

if [ "${ZEROCLAW_DOCS_LINT:-0}" = "1" ]; then
  echo "==> pre-push: running docs quality gate (ZEROCLAW_DOCS_LINT=1)..."
  ./scripts/ci/docs_quality_gate.sh || {
    echo "FAIL: docs quality gate reported issues."
    exit 1
  }
fi

if [ "${ZEROCLAW_DOCS_LINKS:-0}" = "1" ]; then
  echo "==> pre-push: running docs links gate (ZEROCLAW_DOCS_LINKS=1)..."
  ./scripts/ci/docs_links_gate.sh || {
    echo "FAIL: docs links gate reported issues."
    exit 1
  }
fi

echo "==> pre-push: running tests..."
cargo test --locked || {
  echo "FAIL: some tests did not pass."
  exit 1
}

echo "==> pre-push: all checks passed."

third_party/zeroclaw/.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,32 @@
# Default owner for all files
* @theonlyhennygod @JordanTheJet @SimianAstronaut7
# Important functional modules
/src/agent/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/providers/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/channels/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/tools/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/gateway/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/runtime/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/memory/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/Cargo.toml @theonlyhennygod @JordanTheJet @SimianAstronaut7
/Cargo.lock @theonlyhennygod @JordanTheJet @SimianAstronaut7
# Security / tests / CI-CD ownership
/src/security/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/tests/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/workflows/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/codeql/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/dependabot.yml @theonlyhennygod @JordanTheJet @SimianAstronaut7
/SECURITY.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/actions-source-policy.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/ci-map.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
# Docs & governance
/docs/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/AGENTS.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/CLAUDE.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/CONTRIBUTING.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/pr-workflow.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/reviewer-playbook.md @theonlyhennygod @JordanTheJet @SimianAstronaut7


@@ -0,0 +1,138 @@
name: Bug Report
description: Report a reproducible defect in ZeroClaw
title: "[Bug]: "
labels:
  - bug
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to report a bug.
        Please provide a minimal reproducible case so maintainers can triage quickly.
        Do not include personal/sensitive data; redact and anonymize all logs/payloads.
  - type: dropdown
    id: component
    attributes:
      label: Affected component
      options:
        - runtime/daemon
        - provider
        - channel
        - memory
        - security/sandbox
        - tooling/ci
        - docs
        - unknown
    validations:
      required: true
  - type: dropdown
    id: severity
    attributes:
      label: Severity
      options:
        - S0 - data loss / security risk
        - S1 - workflow blocked
        - S2 - degraded behavior
        - S3 - minor issue
    validations:
      required: true
  - type: textarea
    id: current
    attributes:
      label: Current behavior
      description: What is happening now?
      placeholder: The process exits with ...
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: Expected behavior
      description: What should happen instead?
      placeholder: The daemon should stay alive and ...
    validations:
      required: true
  - type: textarea
    id: reproduce
    attributes:
      label: Steps to reproduce
      description: Please provide exact commands/config.
      placeholder: |
        1. zeroclaw onboard
        2. zeroclaw daemon
        3. Observe crash in logs
      render: bash
    validations:
      required: true
  - type: textarea
    id: impact
    attributes:
      label: Impact
      description: Who is affected, how often, and practical consequences (optional but helps triage).
      placeholder: |
        Affected users: ...
        Frequency: always/intermittent
        Consequence: ...
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Logs / stack traces
      description: Paste relevant logs (redact secrets, personal identifiers, and sensitive data).
      render: text
    validations:
      required: false
  - type: input
    id: version
    attributes:
      label: ZeroClaw version
      placeholder: v0.1.0 / commit SHA
    validations:
      required: true
  - type: input
    id: rust
    attributes:
      label: Rust version
      description: Required for runtime/build bugs; optional for docs/config issues.
      placeholder: rustc 1.xx.x
    validations:
      required: false
  - type: input
    id: os
    attributes:
      label: Operating system
      placeholder: Ubuntu 24.04 / macOS 15 / Windows 11
    validations:
      required: true
  - type: dropdown
    id: regression
    attributes:
      label: Regression?
      options:
        - Unknown
        - Yes, it worked before
        - No, first-time setup
    validations:
      required: true
  - type: checkboxes
    id: checks
    attributes:
      label: Pre-flight checks
      options:
        - label: I reproduced this on the latest master branch or latest release.
          required: true
        - label: I redacted secrets, tokens, and personal data from all submitted content.
          required: true


@@ -0,0 +1,11 @@
blank_issues_enabled: false
contact_links:
  - name: Security vulnerability report
    url: https://github.com/zeroclaw-labs/zeroclaw/security/policy
    about: Please report security vulnerabilities privately via SECURITY.md policy.
  - name: Contribution guide
    url: https://github.com/zeroclaw-labs/zeroclaw/blob/master/CONTRIBUTING.md
    about: Please read contribution and PR requirements before opening an issue.
  - name: PR workflow & reviewer expectations
    url: https://github.com/zeroclaw-labs/zeroclaw/blob/master/docs/pr-workflow.md
    about: Read risk-based PR tracks, CI gates, and merge criteria before filing feature requests.


@@ -0,0 +1,107 @@
name: Feature Request
description: Propose an improvement or new capability
title: "[Feature]: "
labels:
  - enhancement
body:
  - type: markdown
    attributes:
      value: |
        Thanks for sharing your idea.
        Please focus on user value, constraints, and rollout safety.
        Do not include personal/sensitive data; use neutral project-scoped placeholders.
  - type: input
    id: summary
    attributes:
      label: Summary
      description: One-line statement of the requested capability.
      placeholder: Add a provider-level retry budget override for long-running channels.
    validations:
      required: true
  - type: textarea
    id: problem
    attributes:
      label: Problem statement
      description: What user pain does this solve and why is current behavior insufficient?
      placeholder: Teams operating in unstable networks cannot tune retries per provider...
    validations:
      required: true
  - type: textarea
    id: proposal
    attributes:
      label: Proposed solution
      description: Describe preferred behavior and interfaces.
      placeholder: Add `[provider.retry]` config and enforce bounds in config validation.
    validations:
      required: true
  - type: textarea
    id: non_goals
    attributes:
      label: Non-goals / out of scope
      description: Clarify what should not be included in the first iteration (optional but helps scope discussion).
      placeholder: No UI changes, no cross-provider dynamic adaptation in v1.
    validations:
      required: false
  - type: textarea
    id: alternatives
    attributes:
      label: Alternatives considered
      description: What alternatives did you evaluate?
      placeholder: Keep current behavior, use wrapper scripts, etc.
    validations:
      required: false
  - type: textarea
    id: acceptance
    attributes:
      label: Acceptance criteria
      description: What outcomes would make this request complete? (optional — can be defined during triage)
      placeholder: |
        - Config key is documented and validated
        - Runtime path uses configured retry budget
        - Regression tests cover fallback and invalid config
    validations:
      required: false
  - type: textarea
    id: architecture
    attributes:
      label: Architecture impact
      description: Which subsystem(s) are affected? (optional — maintainers will assess during triage)
      placeholder: providers/, channels/, memory/, runtime/, security/, docs/ ...
    validations:
      required: false
  - type: textarea
    id: risk
    attributes:
      label: Risk and rollback
      description: Main risk + how to disable/revert quickly (optional — can be defined during planning).
      placeholder: Risk is ... rollback is ...
    validations:
      required: false
  - type: dropdown
    id: breaking
    attributes:
      label: Breaking change?
      options:
        - "No"
        - "Yes"
    validations:
      required: true
  - type: checkboxes
    id: hygiene
    attributes:
      label: Data hygiene checks
      options:
        - label: I removed personal/sensitive data from examples, payloads, and logs.
          required: true
        - label: I used neutral, project-focused wording and placeholders.
          required: true


@@ -0,0 +1,3 @@
self-hosted-runner:
  labels:
    - blacksmith-2vcpu-ubuntu-2404

Binary file not shown (new image, 84 KiB)

Binary file not shown (new image, 110 KiB)


@@ -0,0 +1,8 @@
# CodeQL configuration for ZeroClaw
#
# We intentionally ignore integration tests under `tests/` because they often
# contain security-focused fixtures (example secrets, malformed payloads, etc.)
# that can trigger false positives in security queries.
paths-ignore:
  - tests/**


@@ -0,0 +1,52 @@
version: 2
updates:
  - package-ecosystem: cargo
    directory: "/"
    schedule:
      interval: daily
    target-branch: master
    open-pull-requests-limit: 3
    labels:
      - "dependencies"
    groups:
      rust-all:
        patterns:
          - "*"
        update-types:
          - minor
          - patch
  - package-ecosystem: github-actions
    directory: "/"
    schedule:
      interval: daily
    target-branch: master
    open-pull-requests-limit: 1
    labels:
      - "ci"
      - "dependencies"
    groups:
      actions-all:
        patterns:
          - "*"
        update-types:
          - minor
          - patch
  - package-ecosystem: docker
    directory: "/"
    schedule:
      interval: daily
    target-branch: master
    open-pull-requests-limit: 1
    labels:
      - "ci"
      - "dependencies"
    groups:
      docker-all:
        patterns:
          - "*"
        update-types:
          - minor
          - patch


@@ -0,0 +1,21 @@
{
  "contributor_tier_color": "2ED9FF",
  "contributor_tiers": [
    {
      "label": "distinguished contributor",
      "min_merged_prs": 50
    },
    {
      "label": "principal contributor",
      "min_merged_prs": 20
    },
    {
      "label": "experienced contributor",
      "min_merged_prs": 10
    },
    {
      "label": "trusted contributor",
      "min_merged_prs": 5
    }
  ]
}

third_party/zeroclaw/.github/labeler.yml vendored Normal file

@@ -0,0 +1,448 @@
"docs":
- changed-files:
- any-glob-to-any-file:
- "docs/**"
- "**/*.md"
- "**/*.mdx"
- "LICENSE"
- ".markdownlint-cli2.yaml"
"dependencies":
- changed-files:
- any-glob-to-any-file:
- "Cargo.toml"
- "Cargo.lock"
- "deny.toml"
- ".github/dependabot.yml"
"ci":
- changed-files:
- any-glob-to-any-file:
- ".github/**"
- ".githooks/**"
"core":
- changed-files:
- any-glob-to-any-file:
- "src/*.rs"
"agent":
- changed-files:
- any-glob-to-any-file:
- "src/agent/**"
"channel":
- changed-files:
- any-glob-to-any-file:
- "src/channels/**"
"channel:bluesky":
- changed-files:
- any-glob-to-any-file:
- "src/channels/bluesky.rs"
"channel:clawdtalk":
- changed-files:
- any-glob-to-any-file:
- "src/channels/clawdtalk.rs"
"channel:cli":
- changed-files:
- any-glob-to-any-file:
- "src/channels/cli.rs"
"channel:dingtalk":
- changed-files:
- any-glob-to-any-file:
- "src/channels/dingtalk.rs"
"channel:discord":
- changed-files:
- any-glob-to-any-file:
- "src/channels/discord.rs"
- "src/channels/discord_history.rs"
"channel:email":
- changed-files:
- any-glob-to-any-file:
- "src/channels/email_channel.rs"
- "src/channels/gmail_push.rs"
"channel:imessage":
- changed-files:
- any-glob-to-any-file:
- "src/channels/imessage.rs"
"channel:irc":
- changed-files:
- any-glob-to-any-file:
- "src/channels/irc.rs"
"channel:lark":
- changed-files:
- any-glob-to-any-file:
- "src/channels/lark.rs"
"channel:linq":
- changed-files:
- any-glob-to-any-file:
- "src/channels/linq.rs"
"channel:matrix":
- changed-files:
- any-glob-to-any-file:
- "src/channels/matrix.rs"
"channel:mattermost":
- changed-files:
- any-glob-to-any-file:
- "src/channels/mattermost.rs"
"channel:mochat":
- changed-files:
- any-glob-to-any-file:
- "src/channels/mochat.rs"
"channel:mqtt":
- changed-files:
- any-glob-to-any-file:
- "src/channels/mqtt.rs"
"channel:nextcloud-talk":
- changed-files:
- any-glob-to-any-file:
- "src/channels/nextcloud_talk.rs"
"channel:nostr":
- changed-files:
- any-glob-to-any-file:
- "src/channels/nostr.rs"
"channel:notion":
- changed-files:
- any-glob-to-any-file:
- "src/channels/notion.rs"
"channel:qq":
- changed-files:
- any-glob-to-any-file:
- "src/channels/qq.rs"
"channel:reddit":
- changed-files:
- any-glob-to-any-file:
- "src/channels/reddit.rs"
"channel:signal":
- changed-files:
- any-glob-to-any-file:
- "src/channels/signal.rs"
"channel:slack":
- changed-files:
- any-glob-to-any-file:
- "src/channels/slack.rs"
"channel:telegram":
- changed-files:
- any-glob-to-any-file:
- "src/channels/telegram.rs"
"channel:twitter":
- changed-files:
- any-glob-to-any-file:
- "src/channels/twitter.rs"
"channel:wati":
- changed-files:
- any-glob-to-any-file:
- "src/channels/wati.rs"
"channel:webhook":
- changed-files:
- any-glob-to-any-file:
- "src/channels/webhook.rs"
"channel:wecom":
- changed-files:
- any-glob-to-any-file:
- "src/channels/wecom.rs"
"channel:whatsapp":
- changed-files:
- any-glob-to-any-file:
- "src/channels/whatsapp.rs"
- "src/channels/whatsapp_storage.rs"
- "src/channels/whatsapp_web.rs"
"gateway":
- changed-files:
- any-glob-to-any-file:
- "src/gateway/**"
"config":
- changed-files:
- any-glob-to-any-file:
- "src/config/**"
"cron":
- changed-files:
- any-glob-to-any-file:
- "src/cron/**"
"daemon":
- changed-files:
- any-glob-to-any-file:
- "src/daemon/**"
"doctor":
- changed-files:
- any-glob-to-any-file:
- "src/doctor/**"
"health":
- changed-files:
- any-glob-to-any-file:
- "src/health/**"
"heartbeat":
- changed-files:
- any-glob-to-any-file:
- "src/heartbeat/**"
"integration":
- changed-files:
- any-glob-to-any-file:
- "src/integrations/**"
"memory":
- changed-files:
- any-glob-to-any-file:
- "src/memory/**"
"security":
- changed-files:
- any-glob-to-any-file:
- "src/security/**"
"runtime":
- changed-files:
- any-glob-to-any-file:
- "src/runtime/**"
"onboard":
- changed-files:
- any-glob-to-any-file:
- "src/onboard/**"
"provider":
- changed-files:
- any-glob-to-any-file:
- "src/providers/**"
"provider:anthropic":
- changed-files:
- any-glob-to-any-file:
- "src/providers/anthropic.rs"
"provider:azure-openai":
- changed-files:
- any-glob-to-any-file:
- "src/providers/azure_openai.rs"
"provider:bedrock":
- changed-files:
- any-glob-to-any-file:
- "src/providers/bedrock.rs"
"provider:claude-code":
- changed-files:
- any-glob-to-any-file:
- "src/providers/claude_code.rs"
"provider:compatible":
- changed-files:
- any-glob-to-any-file:
- "src/providers/compatible.rs"
"provider:copilot":
- changed-files:
- any-glob-to-any-file:
- "src/providers/copilot.rs"
"provider:gemini":
- changed-files:
- any-glob-to-any-file:
- "src/providers/gemini.rs"
- "src/providers/gemini_cli.rs"
"provider:glm":
- changed-files:
- any-glob-to-any-file:
- "src/providers/glm.rs"
"provider:kilocli":
- changed-files:
- any-glob-to-any-file:
- "src/providers/kilocli.rs"
"provider:ollama":
- changed-files:
- any-glob-to-any-file:
- "src/providers/ollama.rs"
"provider:openai":
- changed-files:
- any-glob-to-any-file:
- "src/providers/openai.rs"
- "src/providers/openai_codex.rs"
"provider:openrouter":
- changed-files:
- any-glob-to-any-file:
- "src/providers/openrouter.rs"
"provider:telnyx":
- changed-files:
- any-glob-to-any-file:
- "src/providers/telnyx.rs"
"service":
- changed-files:
- any-glob-to-any-file:
- "src/service/**"
"skillforge":
- changed-files:
- any-glob-to-any-file:
- "src/skillforge/**"
"skills":
- changed-files:
- any-glob-to-any-file:
- "src/skills/**"
"tool":
- changed-files:
- any-glob-to-any-file:
- "src/tools/**"
"tool:browser":
- changed-files:
- any-glob-to-any-file:
- "src/tools/browser.rs"
- "src/tools/browser_delegate.rs"
- "src/tools/browser_open.rs"
- "src/tools/text_browser.rs"
- "src/tools/screenshot.rs"
"tool:composio":
- changed-files:
- any-glob-to-any-file:
- "src/tools/composio.rs"
"tool:cron":
- changed-files:
- any-glob-to-any-file:
- "src/tools/cron_add.rs"
- "src/tools/cron_list.rs"
- "src/tools/cron_remove.rs"
- "src/tools/cron_run.rs"
- "src/tools/cron_runs.rs"
- "src/tools/cron_update.rs"
"tool:file":
- changed-files:
- any-glob-to-any-file:
- "src/tools/file_edit.rs"
- "src/tools/file_read.rs"
- "src/tools/file_write.rs"
- "src/tools/glob_search.rs"
- "src/tools/content_search.rs"
"tool:google-workspace":
- changed-files:
- any-glob-to-any-file:
- "src/tools/google_workspace.rs"
"tool:mcp":
- changed-files:
- any-glob-to-any-file:
- "src/tools/mcp_client.rs"
- "src/tools/mcp_deferred.rs"
- "src/tools/mcp_protocol.rs"
- "src/tools/mcp_tool.rs"
- "src/tools/mcp_transport.rs"
"tool:memory":
- changed-files:
- any-glob-to-any-file:
- "src/tools/memory_forget.rs"
- "src/tools/memory_recall.rs"
- "src/tools/memory_store.rs"
"tool:microsoft365":
- changed-files:
- any-glob-to-any-file:
- "src/tools/microsoft365/**"
"tool:shell":
- changed-files:
- any-glob-to-any-file:
- "src/tools/shell.rs"
- "src/tools/node_tool.rs"
- "src/tools/cli_discovery.rs"
"tool:sop":
- changed-files:
- any-glob-to-any-file:
- "src/tools/sop_advance.rs"
- "src/tools/sop_approve.rs"
- "src/tools/sop_execute.rs"
- "src/tools/sop_list.rs"
- "src/tools/sop_status.rs"
"tool:web":
- changed-files:
- any-glob-to-any-file:
- "src/tools/web_fetch.rs"
- "src/tools/web_search_tool.rs"
- "src/tools/web_search_provider_routing.rs"
- "src/tools/http_request.rs"
"tool:security":
- changed-files:
- any-glob-to-any-file:
- "src/tools/security_ops.rs"
- "src/tools/verifiable_intent.rs"
"tool:cloud":
- changed-files:
- any-glob-to-any-file:
- "src/tools/cloud_ops.rs"
- "src/tools/cloud_patterns.rs"
"tunnel":
- changed-files:
- any-glob-to-any-file:
- "src/tunnel/**"
"observability":
- changed-files:
- any-glob-to-any-file:
- "src/observability/**"
"tests":
- changed-files:
- any-glob-to-any-file:
- "tests/**"
"scripts":
- changed-files:
- any-glob-to-any-file:
- "scripts/**"
"dev":
- changed-files:
- any-glob-to-any-file:
- "dev/**"

View File

@@ -0,0 +1,114 @@
## Summary
Describe this PR in 2-5 bullets:
- Base branch target (`master` for all contributions):
- Problem:
- Why it matters:
- What changed:
- What did **not** change (scope boundary):
## Label Snapshot (required)
- Risk label (`risk: low|medium|high`):
- Size label (`size: XS|S|M|L|XL`, auto-managed/read-only):
- Scope labels (`core|agent|channel|config|cron|daemon|doctor|gateway|health|heartbeat|integration|memory|observability|onboard|provider|runtime|security|service|skillforge|skills|tool|tunnel|docs|dependencies|ci|tests|scripts|dev`, comma-separated):
- Module labels (`<module>: <component>`, for example `channel: telegram`, `provider: kimi`, `tool: shell`):
- Contributor tier label (`trusted contributor|experienced contributor|principal contributor|distinguished contributor`, auto-managed/read-only; author merged PRs >=5/10/20/50):
- If any auto-label is incorrect, note requested correction:
## Change Metadata
- Change type (`bug|feature|refactor|docs|security|chore`):
- Primary scope (`runtime|provider|channel|memory|security|ci|docs|multi`):
## Linked Issue
- Closes #
- Related #
- Depends on # (if stacked)
- Supersedes # (if replacing older PR)
## Supersede Attribution (required when `Supersedes #` is used)
- Superseded PRs + authors (`#<pr> by @<author>`, one per line):
- Integrated scope by source PR (what was materially carried forward):
- `Co-authored-by` trailers added for materially incorporated contributors? (`Yes/No`)
- If `No`, explain why (for example: inspiration-only, no direct code/design carry-over):
- Trailer format check (separate lines, no escaped `\n`): (`Pass/Fail`)
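A minimal example of correctly formatted trailers at the end of a commit message (names are hypothetical):

```text
Co-authored-by: Alice Example <alice@example.com>
Co-authored-by: Bob Example <bob@example.com>
```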
## Validation Evidence (required)
Commands and result summary:
```bash
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo test
```
- Evidence provided (test/log/trace/screenshot/perf):
- If any command is intentionally skipped, explain why:
## Security Impact (required)
- New permissions/capabilities? (`Yes/No`)
- New external network calls? (`Yes/No`)
- Secrets/tokens handling changed? (`Yes/No`)
- File system access scope changed? (`Yes/No`)
- If any `Yes`, describe risk and mitigation:
## Privacy and Data Hygiene (required)
- Data-hygiene status (`pass|needs-follow-up`):
- Redaction/anonymization notes:
- Neutral wording confirmation (use ZeroClaw/project-native labels if identity-like wording is needed):
## Compatibility / Migration
- Backward compatible? (`Yes/No`)
- Config/env changes? (`Yes/No`)
- Migration needed? (`Yes/No`)
- If yes, exact upgrade steps:
## i18n Follow-Through (required when docs or user-facing wording changes)
- i18n follow-through triggered? (`Yes/No`)
- If `Yes`, locale navigation parity updated in `README*`, `docs/README*`, and `docs/SUMMARY.md` for supported locales (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`)? (`Yes/No`)
- If `Yes`, localized runtime-contract docs updated where equivalents exist (minimum for `fr`/`vi`: `commands-reference`, `config-reference`, `troubleshooting`)? (`Yes/No/N.A.`)
- If `Yes`, Vietnamese canonical docs under `docs/i18n/vi/**` synced and compatibility shims under `docs/*.vi.md` validated? (`Yes/No/N.A.`)
- If any `No`/`N.A.`, link follow-up issue/PR and explain scope decision:
## Human Verification (required)
What was personally validated beyond CI:
- Verified scenarios:
- Edge cases checked:
- What was not verified:
## Side Effects / Blast Radius (required)
- Affected subsystems/workflows:
- Potential unintended effects:
- Guardrails/monitoring for early detection:
## Agent Collaboration Notes (recommended)
- Agent tools used (if any):
- Workflow/plan summary (if any):
- Verification focus:
- Confirmation: naming + architecture boundaries followed (`AGENTS.md` + `CONTRIBUTING.md`):
## Rollback Plan (required)
- Fast rollback command/path:
- Feature flags or config toggles (if any):
- Observable failure symptoms:
## Risks and Mitigations
List real risks in this PR (or write `None`).
- Risk:
- Mitigation:

View File

@@ -0,0 +1,17 @@
# Workflow Directory Layout
GitHub Actions only loads workflow entry files from:
- `.github/workflows/*.yml`
- `.github/workflows/*.yaml`
Subdirectories are not valid locations for workflow entry files.
Repository convention:
1. Keep runnable workflow entry files at `.github/workflows/` root.
2. Keep cross-tooling/local CI scripts under `dev/` or `scripts/ci/` when used outside Actions.
Workflow behavior documentation in this directory:
- `.github/workflows/master-branch-flow.md`
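A minimal sketch of the distinction (the nested path is hypothetical):

```bash
# Loaded by GitHub Actions as workflow entry files:
#   .github/workflows/checks-on-pr.yml
#   .github/workflows/release-beta-on-push.yml
# NOT loaded (subdirectory; keep docs/scripts here, not entry files):
#   .github/workflows/nested/helper.yml
# List the entry files Actions will actually pick up:
ls .github/workflows/*.yml .github/workflows/*.yaml 2>/dev/null
```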

View File

@@ -0,0 +1,175 @@
name: Quality Gate
on:
pull_request:
branches: [master]
concurrency:
group: checks-${{ github.event.pull_request.number }}
cancel-in-progress: true
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: rustfmt, clippy
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Check formatting
run: cargo fmt --all -- --check
- name: Clippy
run: cargo clippy --all-targets -- -D warnings
test:
name: Test
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Install mold linker
run: |
sudo apt-get update -qq
sudo apt-get install -y mold
- name: Install cargo-nextest
run: curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
- name: Run tests
run: cargo nextest run --locked
env:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
build:
name: Build ${{ matrix.target }}
runs-on: ${{ matrix.os }}
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
- os: macos-14
target: aarch64-apple-darwin
- os: windows-latest
target: x86_64-pc-windows-msvc
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
if: runner.os != 'Windows'
- name: Install mold linker
if: runner.os == 'Linux'
run: |
sudo apt-get update -qq
sudo apt-get install -y mold
- name: Ensure web/dist placeholder exists
shell: bash
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Build release
shell: bash
run: cargo build --profile ci --locked --target ${{ matrix.target }}
env:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
security:
name: Security Audit
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Install cargo-audit
run: cargo install cargo-audit --locked
- name: Install cargo-deny
run: cargo install cargo-deny --locked
- name: Audit dependencies
run: cargo audit
- name: Check licenses and sources
run: cargo deny check licenses sources
check-32bit:
name: "Check (32-bit)"
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: i686-unknown-linux-gnu
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Install 32-bit libs
run: sudo apt-get update && sudo apt-get install -y gcc-multilib
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Cargo check (32-bit, no default features)
run: cargo check --target i686-unknown-linux-gnu --no-default-features
# Composite status check — branch protection only needs to require this
# single job instead of tracking every matrix leg individually.
gate:
name: CI Required Gate
if: always()
needs: [lint, test, build, security, check-32bit]
runs-on: ubuntu-latest
steps:
- name: Check upstream job results
run: |
if [[ "${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}" == "true" ]]; then
echo "::error::One or more upstream jobs failed or were cancelled"
exit 1
fi
security-gate:
name: Security Required Gate
if: always()
needs: [security]
runs-on: ubuntu-latest
steps:
- name: Check security job result
run: |
if [[ "${{ needs.security.result }}" != "success" ]]; then
echo "::error::Security audit failed or was cancelled"
exit 1
fi

View File

@@ -0,0 +1,210 @@
name: CI
on:
push:
branches: [master]
pull_request:
branches: [master]
concurrency:
group: ci-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: rustfmt, clippy
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Check formatting
run: cargo fmt --all -- --check
- name: Clippy
run: cargo clippy --all-targets -- -D warnings
bench-compile:
name: Verify Benchmarks Compile
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Verify benchmarks compile
run: cargo bench --no-run --locked
lint-strict-delta:
name: Strict Delta Lint
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: clippy
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Run strict delta lint gate
run: bash scripts/ci/rust_strict_delta_gate.sh
env:
BASE_SHA: ${{ github.event.pull_request.base.sha || github.event.before }}
test:
name: Test
runs-on: ubuntu-latest
timeout-minutes: 30
needs: [lint]
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Install mold linker
run: |
sudo apt-get update -qq
sudo apt-get install -y mold
- name: Install cargo-nextest
run: curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
- name: Run tests
run: cargo nextest run --locked
env:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
build:
name: Build ${{ matrix.target }}
runs-on: ${{ matrix.os }}
timeout-minutes: 40
needs: [lint]
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
- os: macos-14
target: aarch64-apple-darwin
- os: windows-latest
target: x86_64-pc-windows-msvc
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
if: runner.os != 'Windows'
- name: Install mold linker
if: runner.os == 'Linux'
run: |
sudo apt-get update -qq
sudo apt-get install -y mold
- name: Ensure web/dist placeholder exists
shell: bash
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Build release
shell: bash
run: cargo build --profile ci --locked --target ${{ matrix.target }}
env:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
check-all-features:
name: Check (all features)
runs-on: ubuntu-latest
timeout-minutes: 20
needs: [lint]
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Install system dependencies
run: sudo apt-get update -qq && sudo apt-get install -y libudev-dev
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Check all features
run: cargo check --features ci-all --locked
docs-quality:
name: Docs Quality
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: actions/setup-node@1d0ff469b7ec7b3cb9d8673fde0c81c44821de2a # v4
with:
node-version: 20
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
with:
python-version: "3.12"
- name: Run docs quality gate
run: bash scripts/ci/docs_quality_gate.sh
env:
BASE_SHA: ${{ github.event.pull_request.base.sha || github.event.before }}
# Composite status check — branch protection requires this single job.
gate:
name: CI Required Gate
if: always()
needs: [lint, bench-compile, lint-strict-delta, test, build, docs-quality, check-all-features]
runs-on: ubuntu-latest
steps:
- name: Check upstream job results
env:
HAS_FAILURE: ${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}
run: |
if [[ "$HAS_FAILURE" == "true" ]]; then
echo "::error::One or more upstream jobs failed or were cancelled"
exit 1
fi

View File

@@ -0,0 +1,82 @@
name: Cross-Platform Build
on:
workflow_dispatch:
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
jobs:
web:
name: Build Web Dashboard
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22
cache: npm
cache-dependency-path: web/package-lock.json
- name: Build web dashboard
run: cd web && npm ci && npm run build
- uses: actions/upload-artifact@v4
with:
name: web-dist
path: web/dist/
retention-days: 1
build:
name: Build ${{ matrix.target }}
needs: [web]
runs-on: ${{ matrix.os }}
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: aarch64-unknown-linux-gnu
cross_compiler: gcc-aarch64-linux-gnu
linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
linker: aarch64-linux-gnu-gcc
- os: ubuntu-latest
target: armv7-unknown-linux-gnueabihf
cross_compiler: gcc-arm-linux-gnueabihf
linker_env: CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER
linker: arm-linux-gnueabihf-gcc
- os: macos-15-intel
target: x86_64-apple-darwin
- os: windows-latest
target: x86_64-pc-windows-msvc
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
if: runner.os != 'Windows'
- uses: actions/download-artifact@v8
with:
name: web-dist
path: web/dist/
- name: Install cross compiler
if: matrix.cross_compiler
run: |
sudo apt-get update -qq
sudo apt-get install -y ${{ matrix.cross_compiler }}
- name: Build release
shell: bash
run: |
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
cargo build --release --locked --features channel-matrix,channel-lark --target ${{ matrix.target }}

View File

@@ -0,0 +1,145 @@
name: Discord Release
on:
workflow_call:
inputs:
release_tag:
description: "Stable release tag (e.g. v0.6.2)"
required: true
type: string
release_url:
description: "GitHub Release URL"
required: true
type: string
secrets:
DISCORD_WEBHOOK_URL:
required: false
workflow_dispatch:
inputs:
release_tag:
description: "Release tag (e.g. v0.6.2)"
required: true
type: string
release_url:
description: "GitHub Release URL"
required: true
type: string
jobs:
discord:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Build Discord message
id: msg
shell: bash
env:
RELEASE_TAG: ${{ inputs.release_tag }}
RELEASE_URL: ${{ inputs.release_url }}
run: |
set -euo pipefail
# Find previous stable tag
PREV_STABLE=$(git tag --sort=-creatordate \
| grep -v "^${RELEASE_TAG}$" \
| grep -vE '\-beta\.' \
| head -1 || echo "")
RANGE="${PREV_STABLE:+${PREV_STABLE}..}${RELEASE_TAG}"
# Extract features
FEATURES=$(git log "$RANGE" --pretty=format:"%s" --no-merges \
| grep -iE '^feat(\(|:)' \
| sed 's/^feat(\([^)]*\)): /\1: /' \
| sed 's/^feat: //' \
| sed 's/ (#[0-9]*)$//' \
| sort -uf || true)
# Extract fixes
FIXES=$(git log "$RANGE" --pretty=format:"%s" --no-merges \
| grep -iE '^fix(\(|:)' \
| sed 's/^fix(\([^)]*\)): /\1: /' \
| sed 's/^fix: //' \
| sed 's/ (#[0-9]*)$//' \
| sort -uf || true)
FEAT_LIST=""
if [ -n "$FEATURES" ]; then
FEAT_LIST=$(echo "$FEATURES" | head -8 | while IFS= read -r line; do echo "🚀 ${line}"; done)
fi
FIX_LIST=""
if [ -n "$FIXES" ]; then
FIX_LIST=$(echo "$FIXES" | head -5 | while IFS= read -r line; do echo "🔧 ${line}"; done)
fi
BODY=""
if [ -n "$FEAT_LIST" ]; then
BODY="${FEAT_LIST}"
fi
if [ -n "$FIX_LIST" ]; then
[ -n "$BODY" ] && BODY="${BODY}\n"
BODY="${BODY}${FIX_LIST}"
fi
if [ -z "$BODY" ]; then
BODY="🚀 Incremental improvements and polish"
fi
{
echo "body<<MSG_EOF"
echo -e "$BODY"
echo "MSG_EOF"
} >> "$GITHUB_OUTPUT"
- name: Post to Discord
shell: bash
env:
DISCORD_WEBHOOK_URL: ${{ secrets.DISCORD_WEBHOOK_URL }}
RELEASE_TAG: ${{ inputs.release_tag }}
RELEASE_URL: ${{ inputs.release_url }}
MSG_BODY: ${{ steps.msg.outputs.body }}
run: |
set -euo pipefail
if [ -z "$DISCORD_WEBHOOK_URL" ]; then
echo "::warning::DISCORD_WEBHOOK_URL secret not configured — skipping"
exit 0
fi
# Build Discord embed payload
PAYLOAD=$(python3 -c "
import json, os
tag = os.environ['RELEASE_TAG']
url = os.environ['RELEASE_URL']
body = os.environ['MSG_BODY']
embed = {
'title': f'ZeroClaw {tag} Released',
'description': body + '\n\nZero overhead. Zero compromise. 100% Rust.',
'url': url,
'color': 0xF97316,
'footer': {'text': 'ZeroClaw Release Bot'},
}
payload = {
'username': 'ZeroClaw Releases',
'embeds': [embed],
}
print(json.dumps(payload))
")
HTTP_CODE=$(curl -s -o /tmp/discord_response.txt -w "%{http_code}" \
-H "Content-Type: application/json" \
-d "$PAYLOAD" \
"$DISCORD_WEBHOOK_URL")
if [ "$HTTP_CODE" -ge 200 ] && [ "$HTTP_CODE" -lt 300 ]; then
echo "Discord notification sent (HTTP $HTTP_CODE)"
else
echo "::error::Discord webhook failed (HTTP $HTTP_CODE)"
cat /tmp/discord_response.txt
exit 1
fi

View File

@@ -0,0 +1,130 @@
# Master Branch Delivery Flows
This document explains what runs when code is proposed to `master` and released.
Use this with:
- [`docs/ci-map.md`](../../docs/contributing/ci-map.md)
- [`docs/pr-workflow.md`](../../docs/contributing/pr-workflow.md)
- [`docs/release-process.md`](../../docs/contributing/release-process.md)
## Branching Model
ZeroClaw uses a single default branch: `master`. All contributor PRs target `master` directly. There is no `dev` or promotion branch.
Current maintainers with PR approval authority: `theonlyhennygod`, `JordanTheJet`, and `SimianAstronaut7`.
## Active Workflows
| File | Trigger | Purpose |
| --- | --- | --- |
| `checks-on-pr.yml` | `pull_request``master` | Lint + test + build + security audit on every PR |
| `cross-platform-build-manual.yml` | `workflow_dispatch` | Full platform build matrix (manual) |
| `release-beta-on-push.yml` | `push``master` | Beta release on every master commit |
| `release-stable-manual.yml` | `workflow_dispatch` | Stable release (manual, version-gated) |
## Event Summary
| Event | Workflows triggered |
| --- | --- |
| PR opened or updated against `master` | `checks-on-pr.yml` |
| Push to `master` (including after merge) | `release-beta-on-push.yml` |
| Manual dispatch | `cross-platform-build-manual.yml`, `release-stable-manual.yml` |
## Step-By-Step
### 1) PR → `master`
1. Contributor opens or updates a PR against `master`.
2. `checks-on-pr.yml` starts:
- `lint` job: runs `cargo fmt --check` and `cargo clippy -D warnings`.
- `test` job: runs `cargo nextest run --locked` on `ubuntu-latest` with Rust 1.92.0 and mold linker.
- `build` job (matrix): compiles release binary on `x86_64-unknown-linux-gnu`, `aarch64-apple-darwin`, and `x86_64-pc-windows-msvc`.
- `security` job: runs `cargo audit` and `cargo deny check licenses sources`.
- Concurrency group cancels in-progress runs for the same PR on new pushes.
3. All jobs must pass before merge.
4. Maintainer (`theonlyhennygod`, `JordanTheJet`, or `SimianAstronaut7`) merges PR once checks and review policy are satisfied.
5. Merge emits a `push` event on `master` (see section 2).
### 2) Push to `master` (including after merge)
1. Commit reaches `master`.
2. `release-beta-on-push.yml` (Release Beta) starts:
- `version` job: computes beta tag as `v{cargo_version}-beta.{run_number}` (see the sketch after this list).
- `build` job (matrix, 4 targets): `x86_64-linux`, `aarch64-linux`, `aarch64-darwin`, `x86_64-windows`.
- `publish` job: generates `SHA256SUMS`, creates a GitHub pre-release with all artifacts. Artifact retention: 7 days.
- `docker` job: builds multi-platform image (`linux/amd64,linux/arm64`) and pushes to `ghcr.io` with `:beta` and the versioned beta tag.
3. This runs on every push to `master` without filtering. Every merged PR produces a beta pre-release.
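A minimal sketch of the beta-tag computation from the `version` job (run from a checkout; `GITHUB_RUN_NUMBER` is supplied by CI):

```bash
# Read the crate version from Cargo.toml (first match wins).
base_version=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
# A stable tag for this version means the stable workflow owns the release.
if git ls-remote --exit-code --tags origin "refs/tags/v${base_version}" >/dev/null 2>&1; then
  echo "stable tag v${base_version} exists; beta skipped"
else
  echo "beta tag: v${base_version}-beta.${GITHUB_RUN_NUMBER:-1}"
fi
```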
### 3) Stable Release (manual)
1. Maintainer runs `release-stable-manual.yml` via `workflow_dispatch` with a version input (e.g. `0.2.0`).
2. `validate` job checks (see the sketch after this list):
- Input matches semver `X.Y.Z` format.
- `Cargo.toml` version matches input exactly.
- Tag `vX.Y.Z` does not already exist on the remote.
3. `build` job (matrix, same 4 targets as beta): compiles release binary.
4. `publish` job: generates `SHA256SUMS`, creates a stable GitHub Release (not pre-release). Artifact retention: 14 days.
5. `docker` job: pushes to `ghcr.io` with `:latest` and `:vX.Y.Z`.
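A minimal sketch of the `validate` checks (the `version` value is a hypothetical dispatch input):

```bash
version="0.2.0"  # hypothetical workflow_dispatch input
# 1) Input must match semver X.Y.Z.
[[ "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]] || { echo "not X.Y.Z"; exit 1; }
# 2) Cargo.toml version must match the input exactly.
cargo_version=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
[[ "$cargo_version" == "$version" ]] || { echo "Cargo.toml has ${cargo_version}"; exit 1; }
# 3) Tag vX.Y.Z must not already exist on the remote.
if git ls-remote --exit-code --tags origin "refs/tags/v${version}" >/dev/null 2>&1; then
  echo "tag v${version} already exists"; exit 1
fi
echo "validate passed for v${version}"
```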
### 4) Full Platform Build (manual)
1. Maintainer runs `cross-platform-build-manual.yml` via `workflow_dispatch`.
2. `build` job (matrix, 4 targets): `aarch64-linux-gnu`, `armv7-linux-gnueabihf`, `x86_64-darwin` (macOS 15 Intel), `x86_64-windows-msvc`.
3. Build-only, no tests, no publish. Used to verify cross-compilation on platforms not covered by `checks-on-pr.yml`.
## Build Targets by Workflow
| Target | `checks-on-pr.yml` | `cross-platform-build-manual.yml` | `release-beta-on-push.yml` | `release-stable-manual.yml` |
| --- | :---: | :---: | :---: | :---: |
| `x86_64-unknown-linux-gnu` | ✓ | | ✓ | ✓ |
| `aarch64-unknown-linux-gnu` | | ✓ | ✓ | ✓ |
| `armv7-unknown-linux-gnueabihf` | | ✓ | | |
| `aarch64-apple-darwin` | ✓ | | ✓ | ✓ |
| `x86_64-apple-darwin` | | ✓ | | |
| `x86_64-pc-windows-msvc` | ✓ | ✓ | ✓ | ✓ |
## Mermaid Diagrams
### PR to Master
```mermaid
flowchart TD
A["PR opened or updated → master"] --> B["checks-on-pr.yml"]
B --> B0["lint: fmt + clippy"]
B --> B1["test: cargo nextest (ubuntu-latest)"]
B --> B2["build: x86_64-linux + aarch64-darwin + x86_64-windows"]
B --> B3["security: audit + deny"]
B0 & B1 & B2 & B3 --> C{"Checks pass?"}
C -->|No| D["PR stays open"]
C -->|Yes| E["Maintainer merges"]
E --> F["push event on master"]
```
### Beta Release (on every master push)
```mermaid
flowchart TD
A["Push to master"] --> B["release-beta-on-push.yml"]
B --> B1["version: compute v{x.y.z}-beta.{N}"]
B1 --> B2["build: 4 targets"]
B2 --> B3["publish: GitHub pre-release + SHA256SUMS"]
B2 --> B4["docker: push ghcr.io :beta + versioned tag"]
```
### Stable Release (manual)
```mermaid
flowchart TD
A["workflow_dispatch: version=X.Y.Z"] --> B["release-stable-manual.yml"]
B --> B1["validate: semver + Cargo.toml + tag uniqueness"]
B1 --> B2["build: 4 targets"]
B2 --> B3["publish: GitHub stable release + SHA256SUMS"]
B2 --> B4["docker: push ghcr.io :latest + :vX.Y.Z"]
```
## Quick Troubleshooting
1. **Quality gate failing on PR**: check `lint` job for formatting/clippy issues; check `test` job for test failures; check `build` job for compile errors; check `security` job for audit/deny failures.
2. **Beta release not appearing**: confirm the push landed on `master` (not another branch); check `release-beta-on-push.yml` run status.
3. **Stable release failing at validate**: ensure `Cargo.toml` version matches the input version and the tag does not already exist.
4. **Full matrix build needed**: run `cross-platform-build-manual.yml` manually from the Actions tab.
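Manual workflows can also be dispatched from the CLI instead of the Actions tab (a sketch; assumes an authenticated `gh` and that the stable workflow's input is named `version`):

```bash
# Full platform build (no inputs):
gh workflow run cross-platform-build-manual.yml
# Stable release; the value must match Cargo.toml:
gh workflow run release-stable-manual.yml -f version=0.2.0
```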

View File

@@ -0,0 +1,19 @@
name: PR Path Labeler
on:
pull_request_target:
types: [opened, synchronize, reopened]
permissions:
contents: read
pull-requests: write
jobs:
label:
name: Apply path labels
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- uses: actions/labeler@8558fd74291d67161a8a78ce36a881fa63b766a9 # v5
with:
sync-labels: true

View File

@@ -0,0 +1,181 @@
name: Pub AUR Package
on:
workflow_call:
inputs:
release_tag:
description: "Existing release tag (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Generate PKGBUILD only (no push)"
required: false
default: false
type: boolean
secrets:
AUR_SSH_KEY:
required: false
workflow_dispatch:
inputs:
release_tag:
description: "Existing release tag (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Generate PKGBUILD only (no push)"
required: false
default: true
type: boolean
concurrency:
group: aur-publish-${{ github.run_id }}
cancel-in-progress: false
permissions:
contents: read
jobs:
publish-aur:
name: Update AUR Package
runs-on: ubuntu-latest
env:
RELEASE_TAG: ${{ inputs.release_tag }}
DRY_RUN: ${{ inputs.dry_run }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Validate and compute metadata
id: meta
shell: bash
run: |
set -euo pipefail
if [[ ! "$RELEASE_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "::error::release_tag must be vX.Y.Z format."
exit 1
fi
version="${RELEASE_TAG#v}"
tarball_url="https://github.com/${GITHUB_REPOSITORY}/archive/refs/tags/${RELEASE_TAG}.tar.gz"
tarball_sha="$(curl -fsSL "$tarball_url" | sha256sum | awk '{print $1}')"
if [[ -z "$tarball_sha" ]]; then
echo "::error::Could not compute SHA256 for source tarball."
exit 1
fi
{
echo "version=$version"
echo "tarball_url=$tarball_url"
echo "tarball_sha=$tarball_sha"
} >> "$GITHUB_OUTPUT"
{
echo "### AUR Package Metadata"
echo "- version: \`${version}\`"
echo "- tarball_url: \`${tarball_url}\`"
echo "- tarball_sha: \`${tarball_sha}\`"
} >> "$GITHUB_STEP_SUMMARY"
- name: Generate PKGBUILD
id: pkgbuild
shell: bash
env:
VERSION: ${{ steps.meta.outputs.version }}
TARBALL_SHA: ${{ steps.meta.outputs.tarball_sha }}
run: |
set -euo pipefail
pkgbuild_file="$(mktemp)"
sed -e "s/^pkgver=.*/pkgver=${VERSION}/" \
-e "s/^sha256sums=.*/sha256sums=('${TARBALL_SHA}')/" \
dist/aur/PKGBUILD > "$pkgbuild_file"
echo "pkgbuild_file=$pkgbuild_file" >> "$GITHUB_OUTPUT"
echo "### Generated PKGBUILD" >> "$GITHUB_STEP_SUMMARY"
echo '```bash' >> "$GITHUB_STEP_SUMMARY"
cat "$pkgbuild_file" >> "$GITHUB_STEP_SUMMARY"
echo '```' >> "$GITHUB_STEP_SUMMARY"
- name: Generate .SRCINFO
id: srcinfo
shell: bash
env:
VERSION: ${{ steps.meta.outputs.version }}
TARBALL_SHA: ${{ steps.meta.outputs.tarball_sha }}
run: |
set -euo pipefail
srcinfo_file="$(mktemp)"
sed -e "s/pkgver = .*/pkgver = ${VERSION}/" \
-e "s/sha256sums = .*/sha256sums = ${TARBALL_SHA}/" \
-e "s|zeroclaw-[0-9.]*.tar.gz|zeroclaw-${VERSION}.tar.gz|g" \
-e "s|/v[0-9.]*\.tar\.gz|/v${VERSION}.tar.gz|g" \
dist/aur/.SRCINFO > "$srcinfo_file"
echo "srcinfo_file=$srcinfo_file" >> "$GITHUB_OUTPUT"
- name: Push to AUR
if: inputs.dry_run == false
shell: bash
env:
AUR_SSH_KEY: ${{ secrets.AUR_SSH_KEY }}
PKGBUILD_FILE: ${{ steps.pkgbuild.outputs.pkgbuild_file }}
SRCINFO_FILE: ${{ steps.srcinfo.outputs.srcinfo_file }}
VERSION: ${{ steps.meta.outputs.version }}
run: |
set -euo pipefail
if [[ -z "${AUR_SSH_KEY}" ]]; then
echo "::error::Secret AUR_SSH_KEY is required for non-dry-run."
exit 1
fi
# Set up SSH key — normalize line endings and ensure trailing newline
mkdir -p ~/.ssh
chmod 700 ~/.ssh
printf '%s\n' "$AUR_SSH_KEY" | tr -d '\r' > ~/.ssh/aur
chmod 600 ~/.ssh/aur
cat > ~/.ssh/config <<'SSH_CONFIG'
Host aur.archlinux.org
IdentityFile ~/.ssh/aur
User aur
StrictHostKeyChecking accept-new
SSH_CONFIG
chmod 600 ~/.ssh/config
# Verify key is valid and print fingerprint for debugging
echo "::group::SSH key diagnostics"
ssh-keygen -l -f ~/.ssh/aur || { echo "::error::AUR_SSH_KEY is not a valid SSH private key"; exit 1; }
echo "::endgroup::"
# Test SSH connectivity before attempting clone
ssh -T -o BatchMode=yes -o ConnectTimeout=10 aur@aur.archlinux.org 2>&1 || true
tmp_dir="$(mktemp -d)"
git clone ssh://aur@aur.archlinux.org/zeroclaw.git "$tmp_dir/aur"
cp "$PKGBUILD_FILE" "$tmp_dir/aur/PKGBUILD"
cp "$SRCINFO_FILE" "$tmp_dir/aur/.SRCINFO"
cd "$tmp_dir/aur"
git config user.name "zeroclaw-bot"
git config user.email "bot@zeroclaw.dev"
git add PKGBUILD .SRCINFO
git commit -m "zeroclaw ${VERSION}"
git push origin HEAD
echo "AUR package updated to ${VERSION}"
- name: Summary
shell: bash
run: |
if [[ "$DRY_RUN" == "true" ]]; then
echo "Dry run complete: PKGBUILD generated, no push performed."
else
echo "Publish complete: AUR package pushed."
fi

View File

@@ -0,0 +1,235 @@
name: Pub Homebrew Core
on:
workflow_call:
inputs:
release_tag:
description: "Existing release tag to publish (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Patch formula only (no push/PR)"
required: false
default: false
type: boolean
secrets:
HOMEBREW_UPSTREAM_PR_TOKEN:
required: false
HOMEBREW_CORE_BOT_TOKEN:
required: false
workflow_dispatch:
inputs:
release_tag:
description: "Existing release tag to publish (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Patch formula only (no push/PR)"
required: false
default: true
type: boolean
concurrency:
group: homebrew-core-${{ github.run_id }}
cancel-in-progress: false
permissions:
contents: read
jobs:
publish-homebrew-core:
name: Publish Homebrew Core PR
runs-on: ubuntu-latest
env:
UPSTREAM_REPO: Homebrew/homebrew-core
FORMULA_PATH: Formula/z/zeroclaw.rb
RELEASE_TAG: ${{ inputs.release_tag }}
DRY_RUN: ${{ inputs.dry_run }}
BOT_FORK_REPO: ${{ vars.HOMEBREW_CORE_BOT_FORK_REPO }}
BOT_EMAIL: ${{ vars.HOMEBREW_CORE_BOT_EMAIL }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Validate release tag and version alignment
id: release_meta
shell: bash
run: |
set -euo pipefail
semver_pattern='^v[0-9]+\.[0-9]+\.[0-9]+([.-][0-9A-Za-z.-]+)?$'
if [[ ! "$RELEASE_TAG" =~ $semver_pattern ]]; then
echo "::error::release_tag must match semver-like format (vX.Y.Z[-suffix])."
exit 1
fi
if ! git rev-parse "refs/tags/${RELEASE_TAG}" >/dev/null 2>&1; then
git fetch --tags origin
fi
tag_version="${RELEASE_TAG#v}"
cargo_version="$(git show "${RELEASE_TAG}:Cargo.toml" \
| sed -n 's/^version = "\([^"]*\)"/\1/p' | head -n1)"
if [[ -z "$cargo_version" ]]; then
echo "::error::Unable to read Cargo.toml version from tag ${RELEASE_TAG}."
exit 1
fi
if [[ "$cargo_version" != "$tag_version" ]]; then
echo "::error::Tag ${RELEASE_TAG} does not match Cargo.toml version (${cargo_version})."
exit 1
fi
tarball_url="https://github.com/${GITHUB_REPOSITORY}/archive/refs/tags/${RELEASE_TAG}.tar.gz"
tarball_sha="$(curl -fsSL "$tarball_url" | sha256sum | awk '{print $1}')"
{
echo "tag_version=$tag_version"
echo "tarball_url=$tarball_url"
echo "tarball_sha=$tarball_sha"
} >> "$GITHUB_OUTPUT"
{
echo "### Release Metadata"
echo "- release_tag: \`${RELEASE_TAG}\`"
echo "- cargo_version: \`${cargo_version}\`"
echo "- tarball_sha256: \`${tarball_sha}\`"
echo "- dry_run: ${DRY_RUN}"
} >> "$GITHUB_STEP_SUMMARY"
- name: Patch Homebrew formula
id: patch_formula
shell: bash
env:
HOMEBREW_CORE_BOT_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
GH_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
TARBALL_URL: ${{ steps.release_meta.outputs.tarball_url }}
TARBALL_SHA: ${{ steps.release_meta.outputs.tarball_sha }}
run: |
set -euo pipefail
tmp_repo="$(mktemp -d)"
echo "tmp_repo=$tmp_repo" >> "$GITHUB_OUTPUT"
if [[ "$DRY_RUN" == "true" ]]; then
git clone --depth=1 "https://github.com/${UPSTREAM_REPO}.git" "$tmp_repo/homebrew-core"
else
if [[ -z "${BOT_FORK_REPO}" ]]; then
echo "::error::Repository variable HOMEBREW_CORE_BOT_FORK_REPO is required when dry_run=false."
exit 1
fi
if [[ -z "${HOMEBREW_CORE_BOT_TOKEN}" ]]; then
echo "::error::Repository secret HOMEBREW_CORE_BOT_TOKEN is required when dry_run=false."
exit 1
fi
if [[ "$BOT_FORK_REPO" != */* ]]; then
echo "::error::HOMEBREW_CORE_BOT_FORK_REPO must be in owner/repo format."
exit 1
fi
if ! gh api "repos/${BOT_FORK_REPO}" >/dev/null 2>&1; then
echo "::error::HOMEBREW_CORE_BOT_TOKEN cannot access ${BOT_FORK_REPO}."
exit 1
fi
gh repo clone "${BOT_FORK_REPO}" "$tmp_repo/homebrew-core" -- --depth=1
fi
repo_dir="$tmp_repo/homebrew-core"
formula_file="$repo_dir/$FORMULA_PATH"
if [[ ! -f "$formula_file" ]]; then
echo "::error::Formula file not found: $FORMULA_PATH"
exit 1
fi
if [[ "$DRY_RUN" == "false" ]]; then
if git -C "$repo_dir" remote get-url upstream >/dev/null 2>&1; then
git -C "$repo_dir" remote set-url upstream "https://github.com/${UPSTREAM_REPO}.git"
else
git -C "$repo_dir" remote add upstream "https://github.com/${UPSTREAM_REPO}.git"
fi
if git -C "$repo_dir" ls-remote --exit-code --heads upstream main >/dev/null 2>&1; then
upstream_ref="main"
else
upstream_ref="master"
fi
git -C "$repo_dir" fetch --depth=1 upstream "$upstream_ref"
branch_name="zeroclaw-${RELEASE_TAG}-${GITHUB_RUN_ID}"
git -C "$repo_dir" checkout -B "$branch_name" "upstream/$upstream_ref"
echo "branch_name=$branch_name" >> "$GITHUB_OUTPUT"
fi
tarball_url="${TARBALL_URL}"
tarball_sha="${TARBALL_SHA}"
if [[ -z "$tarball_url" || -z "$tarball_sha" ]]; then
echo "::error::tarball_url or tarball_sha is empty — release_meta step output not propagated."
exit 1
fi
perl -0pi -e "s|^ url \".*\"| url \"${tarball_url}\"|m" "$formula_file"
perl -0pi -e "s|^ sha256 \".*\"| sha256 \"${tarball_sha}\"|m" "$formula_file"
perl -0pi -e "s|^ license \".*\"| license \"Apache-2.0 OR MIT\"|m" "$formula_file"
# Ensure Node.js build dependency is declared so that build.rs can
# run `npm ci && npm run build` to produce the web frontend assets.
if ! grep -q 'depends_on "node" => :build' "$formula_file"; then
perl -0pi -e 's|( depends_on "rust" => :build\n)|\1 depends_on "node" => :build\n|m' "$formula_file"
fi
git -C "$repo_dir" diff -- "$FORMULA_PATH" > "$tmp_repo/formula.diff"
if [[ ! -s "$tmp_repo/formula.diff" ]]; then
echo "::error::No formula changes generated. Nothing to publish."
exit 1
fi
{
echo "### Formula Diff"
echo '```diff'
cat "$tmp_repo/formula.diff"
echo '```'
} >> "$GITHUB_STEP_SUMMARY"
- name: Push branch and open Homebrew PR
if: inputs.dry_run == false
shell: bash
env:
GH_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
TMP_REPO: ${{ steps.patch_formula.outputs.tmp_repo }}
BRANCH_NAME: ${{ steps.patch_formula.outputs.branch_name }}
TAG_VERSION: ${{ steps.release_meta.outputs.tag_version }}
TARBALL_URL: ${{ steps.release_meta.outputs.tarball_url }}
TARBALL_SHA: ${{ steps.release_meta.outputs.tarball_sha }}
run: |
set -euo pipefail
repo_dir="${TMP_REPO}/homebrew-core"
fork_owner="${BOT_FORK_REPO%%/*}"
bot_email="${BOT_EMAIL:-${fork_owner}@users.noreply.github.com}"
git -C "$repo_dir" config user.name "$fork_owner"
git -C "$repo_dir" config user.email "$bot_email"
git -C "$repo_dir" add "$FORMULA_PATH"
git -C "$repo_dir" commit -m "zeroclaw ${TAG_VERSION}"
gh auth setup-git
git -C "$repo_dir" push --set-upstream origin "$BRANCH_NAME"
pr_body="Automated formula bump from ZeroClaw release workflow.
- Release tag: ${RELEASE_TAG}
- Source tarball: ${TARBALL_URL}
- Source sha256: ${TARBALL_SHA}"
gh pr create \
--repo "$UPSTREAM_REPO" \
--base main \
--head "${fork_owner}:${BRANCH_NAME}" \
--title "zeroclaw ${TAG_VERSION}" \
--body "$pr_body"
- name: Summary
shell: bash
run: |
if [[ "$DRY_RUN" == "true" ]]; then
echo "Dry run complete: formula diff generated, no push/PR performed."
else
echo "Publish complete: branch pushed and PR opened from bot fork."
fi

View File

@@ -0,0 +1,165 @@
name: Pub Scoop Manifest
on:
workflow_call:
inputs:
release_tag:
description: "Existing release tag (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Generate manifest only (no push)"
required: false
default: false
type: boolean
secrets:
SCOOP_BUCKET_TOKEN:
required: false
workflow_dispatch:
inputs:
release_tag:
description: "Existing release tag (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Generate manifest only (no push)"
required: false
default: true
type: boolean
concurrency:
group: scoop-publish-${{ github.run_id }}
cancel-in-progress: false
permissions:
contents: read
jobs:
publish-scoop:
name: Update Scoop Manifest
runs-on: ubuntu-latest
env:
RELEASE_TAG: ${{ inputs.release_tag }}
DRY_RUN: ${{ inputs.dry_run }}
SCOOP_BUCKET_REPO: ${{ vars.SCOOP_BUCKET_REPO }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Validate and compute metadata
id: meta
shell: bash
run: |
set -euo pipefail
if [[ ! "$RELEASE_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "::error::release_tag must be vX.Y.Z format."
exit 1
fi
version="${RELEASE_TAG#v}"
zip_url="https://github.com/${GITHUB_REPOSITORY}/releases/download/${RELEASE_TAG}/zeroclaw-x86_64-pc-windows-msvc.zip"
sums_url="https://github.com/${GITHUB_REPOSITORY}/releases/download/${RELEASE_TAG}/SHA256SUMS"
sha256="$(curl -fsSL "$sums_url" | grep 'zeroclaw-x86_64-pc-windows-msvc.zip' | awk '{print $1}')"
if [[ -z "$sha256" ]]; then
echo "::error::Could not find Windows binary hash in SHA256SUMS for ${RELEASE_TAG}."
exit 1
fi
{
echo "version=$version"
echo "zip_url=$zip_url"
echo "sha256=$sha256"
} >> "$GITHUB_OUTPUT"
{
echo "### Scoop Manifest Metadata"
echo "- version: \`${version}\`"
echo "- zip_url: \`${zip_url}\`"
echo "- sha256: \`${sha256}\`"
} >> "$GITHUB_STEP_SUMMARY"
- name: Generate manifest
id: manifest
shell: bash
env:
VERSION: ${{ steps.meta.outputs.version }}
ZIP_URL: ${{ steps.meta.outputs.zip_url }}
SHA256: ${{ steps.meta.outputs.sha256 }}
run: |
set -euo pipefail
manifest_file="$(mktemp)"
cat > "$manifest_file" <<MANIFEST
{
"version": "${VERSION}",
"description": "Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant.",
"homepage": "https://github.com/zeroclaw-labs/zeroclaw",
"license": "MIT|Apache-2.0",
"architecture": {
"64bit": {
"url": "${ZIP_URL}",
"hash": "${SHA256}",
"bin": "zeroclaw.exe"
}
},
"checkver": {
"github": "https://github.com/zeroclaw-labs/zeroclaw"
},
"autoupdate": {
"architecture": {
"64bit": {
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v\$version/zeroclaw-x86_64-pc-windows-msvc.zip"
}
},
"hash": {
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v\$version/SHA256SUMS",
"regex": "([a-f0-9]{64})\\\\s+zeroclaw-x86_64-pc-windows-msvc\\\\.zip"
}
}
}
MANIFEST
jq '.' "$manifest_file" > "${manifest_file}.formatted"
mv "${manifest_file}.formatted" "$manifest_file"
echo "manifest_file=$manifest_file" >> "$GITHUB_OUTPUT"
echo "### Generated Manifest" >> "$GITHUB_STEP_SUMMARY"
echo '```json' >> "$GITHUB_STEP_SUMMARY"
cat "$manifest_file" >> "$GITHUB_STEP_SUMMARY"
echo '```' >> "$GITHUB_STEP_SUMMARY"
- name: Push to Scoop bucket
if: inputs.dry_run == false
shell: bash
env:
GH_TOKEN: ${{ secrets.SCOOP_BUCKET_TOKEN }}
MANIFEST_FILE: ${{ steps.manifest.outputs.manifest_file }}
VERSION: ${{ steps.meta.outputs.version }}
run: |
set -euo pipefail
if [[ -z "${SCOOP_BUCKET_REPO}" ]]; then
echo "::error::Repository variable SCOOP_BUCKET_REPO is required (e.g. zeroclaw-labs/scoop-zeroclaw)."
exit 1
fi
tmp_dir="$(mktemp -d)"
gh repo clone "${SCOOP_BUCKET_REPO}" "$tmp_dir/bucket" -- --depth=1
mkdir -p "$tmp_dir/bucket/bucket"
cp "$MANIFEST_FILE" "$tmp_dir/bucket/bucket/zeroclaw.json"
cd "$tmp_dir/bucket"
git config user.name "zeroclaw-bot"
git config user.email "bot@zeroclaw.dev"
git add bucket/zeroclaw.json
git commit -m "zeroclaw ${VERSION}"
gh auth setup-git
git push origin HEAD
echo "Scoop manifest updated to ${VERSION}"

View File

@@ -0,0 +1,160 @@
name: Auto-sync crates.io
on:
push:
branches: [master]
paths:
- "Cargo.toml"
concurrency:
group: publish-crates-auto
cancel-in-progress: false
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
jobs:
detect-version-change:
name: Detect Version Bump
if: github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: ubuntu-latest
outputs:
changed: ${{ steps.check.outputs.changed }}
version: ${{ steps.check.outputs.version }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Check if version changed
id: check
shell: bash
run: |
set -euo pipefail
current=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
previous=$(git show HEAD~1:Cargo.toml 2>/dev/null | sed -n 's/^version = "\([^"]*\)"/\1/p' | head -1 || echo "")
echo "Current version: ${current}"
echo "Previous version: ${previous}"
# Skip if stable release workflow will handle this version
# (indicated by an existing or imminent stable tag)
if git ls-remote --exit-code --tags origin "refs/tags/v${current}" >/dev/null 2>&1; then
echo "Stable tag v${current} exists — stable release workflow handles crates.io"
echo "changed=false" >> "$GITHUB_OUTPUT"
exit 0
fi
if [[ "$current" != "$previous" && -n "$current" ]]; then
echo "changed=true" >> "$GITHUB_OUTPUT"
echo "version=${current}" >> "$GITHUB_OUTPUT"
echo "Version bumped from ${previous} to ${current} — will publish"
else
echo "changed=false" >> "$GITHUB_OUTPUT"
echo "Version unchanged (${current}) — skipping publish"
fi
check-registry:
name: Check if Already Published
needs: [detect-version-change]
if: needs.detect-version-change.outputs.changed == 'true'
runs-on: ubuntu-latest
outputs:
should_publish: ${{ steps.check.outputs.should_publish }}
steps:
- name: Check crates.io for existing version
id: check
shell: bash
env:
VERSION: ${{ needs.detect-version-change.outputs.version }}
run: |
set -euo pipefail
status=$(curl -s -o /dev/null -w "%{http_code}" \
"https://crates.io/api/v1/crates/zeroclawlabs/${VERSION}")
if [[ "$status" == "200" ]]; then
echo "Version ${VERSION} already exists on crates.io — skipping"
echo "should_publish=false" >> "$GITHUB_OUTPUT"
else
echo "Version ${VERSION} not yet published — proceeding"
echo "should_publish=true" >> "$GITHUB_OUTPUT"
fi
publish:
name: Publish to crates.io
needs: [detect-version-change, check-registry]
if: needs.check-registry.outputs.should_publish == 'true'
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@v2
- uses: actions/setup-node@v4
with:
node-version: 22
cache: npm
cache-dependency-path: web/package-lock.json
- name: Build web dashboard
run: cd web && npm ci && npm run build
- name: Clean web build artifacts
run: rm -rf web/node_modules web/src web/package.json web/package-lock.json web/tsconfig*.json web/vite.config.ts web/index.html
- name: Publish aardvark-sys to crates.io
shell: bash
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify -p aardvark-sys 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::aardvark-sys already on crates.io — skipping"
exit 0
fi
exit 1
- name: Wait for aardvark-sys to index
run: sleep 15
- name: Publish to crates.io
shell: bash
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
VERSION: ${{ needs.detect-version-change.outputs.version }}
run: |
# Publish to crates.io; treat "already exists" as success
# (manual publish or stable workflow may have already published)
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::zeroclawlabs@${VERSION} already on crates.io — skipping"
exit 0
fi
exit 1
- name: Verify published
shell: bash
env:
VERSION: ${{ needs.detect-version-change.outputs.version }}
run: |
echo "Waiting for crates.io to index..."
sleep 15
status=$(curl -s -o /dev/null -w "%{http_code}" \
"https://crates.io/api/v1/crates/zeroclawlabs/${VERSION}")
if [[ "$status" == "200" ]]; then
echo "zeroclawlabs v${VERSION} is live on crates.io"
echo "Install: cargo install zeroclawlabs"
else
echo "::warning::Version may still be indexing — check https://crates.io/crates/zeroclawlabs"
fi

View File

@@ -0,0 +1,108 @@
name: Publish to crates.io
on:
workflow_dispatch:
inputs:
version:
description: "Version to publish (e.g. 0.2.0) — must match Cargo.toml"
required: true
type: string
dry_run:
description: "Dry run (validate without publishing)"
required: false
type: boolean
default: false
concurrency:
group: publish-crates
cancel-in-progress: false
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
jobs:
validate:
name: Validate
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check version matches Cargo.toml
shell: bash
env:
INPUT_VERSION: ${{ inputs.version }}
run: |
set -euo pipefail
cargo_version=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
if [[ "$cargo_version" != "$INPUT_VERSION" ]]; then
echo "::error::Cargo.toml version (${cargo_version}) does not match input (${INPUT_VERSION})"
exit 1
fi
publish:
name: Publish to crates.io
needs: [validate]
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@v2
- uses: actions/setup-node@v4
with:
node-version: 22
cache: npm
cache-dependency-path: web/package-lock.json
- name: Build web dashboard
run: cd web && npm ci && npm run build
- name: Clean web build artifacts
run: rm -rf web/node_modules web/src web/package.json web/package-lock.json web/tsconfig*.json web/vite.config.ts web/index.html
- name: Publish aardvark-sys to crates.io
if: "!inputs.dry_run"
shell: bash
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify -p aardvark-sys 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::aardvark-sys already on crates.io — skipping"
exit 0
fi
exit 1
- name: Wait for aardvark-sys to index
if: "!inputs.dry_run"
run: sleep 15
- name: Publish (dry run)
if: inputs.dry_run
run: cargo publish --dry-run --locked --allow-dirty --no-verify
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
- name: Publish to crates.io
if: "!inputs.dry_run"
shell: bash
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
VERSION: ${{ inputs.version }}
run: |
# Publish to crates.io; treat "already exists" as success
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::zeroclawlabs@${VERSION} already on crates.io — skipping"
exit 0
fi
exit 1

View File

@@ -0,0 +1,462 @@
name: Release Beta
on:
push:
branches: [master]
concurrency:
group: release-beta
cancel-in-progress: true
permissions:
contents: write
packages: write
env:
CARGO_TERM_COLOR: always
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
RELEASE_CARGO_FEATURES: channel-matrix,channel-lark,whatsapp-web
jobs:
version:
name: Resolve Version
if: github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: ubuntu-latest
outputs:
version: ${{ steps.ver.outputs.version }}
tag: ${{ steps.ver.outputs.tag }}
skip: ${{ steps.ver.outputs.skip }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 2
- name: Compute beta version
id: ver
shell: bash
run: |
set -euo pipefail
base_version=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
# Skip beta if this is a version bump commit (stable release handles it)
commit_msg=$(git log -1 --pretty=format:"%s")
if [[ "$commit_msg" =~ ^chore:\ bump\ version ]]; then
echo "Version bump commit detected — skipping beta release"
echo "skip=true" >> "$GITHUB_OUTPUT"
exit 0
fi
# Skip beta if a stable tag already exists for this version
if git ls-remote --exit-code --tags origin "refs/tags/v${base_version}" >/dev/null 2>&1; then
echo "Stable tag v${base_version} exists — skipping beta release"
echo "skip=true" >> "$GITHUB_OUTPUT"
exit 0
fi
beta_tag="v${base_version}-beta.${GITHUB_RUN_NUMBER}"
echo "version=${base_version}" >> "$GITHUB_OUTPUT"
echo "tag=${beta_tag}" >> "$GITHUB_OUTPUT"
echo "skip=false" >> "$GITHUB_OUTPUT"
echo "Beta release: ${beta_tag}"
release-notes:
name: Generate Release Notes
needs: [version]
if: github.repository == 'zeroclaw-labs/zeroclaw' && needs.version.outputs.skip != 'true'
runs-on: ubuntu-latest
outputs:
notes: ${{ steps.notes.outputs.body }}
features: ${{ steps.notes.outputs.features }}
contributors: ${{ steps.notes.outputs.contributors }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Build release notes
id: notes
shell: bash
run: |
set -euo pipefail
# Use a wider range — find the previous stable tag to capture all
# contributors across the full release cycle, not just one beta bump
PREV_TAG=$(git tag --sort=-creatordate \
| grep -vE '\-beta\.' \
| head -1 || echo "")
if [ -z "$PREV_TAG" ]; then
RANGE="HEAD"
else
RANGE="${PREV_TAG}..HEAD"
fi
# Extract features only (feat commits) — skip bug fixes for clean notes
FEATURES=$(git log "$RANGE" --pretty=format:"%s" --no-merges \
| grep -iE '^feat(\(|:)' \
| sed 's/^feat(\([^)]*\)): /\1: /' \
| sed 's/^feat: //' \
| sed 's/ (#[0-9]*)$//' \
| sort -uf \
| while IFS= read -r line; do echo "- ${line}"; done || true)
if [ -z "$FEATURES" ]; then
FEATURES="- Incremental improvements and polish"
fi
# Collect ALL unique contributors: git authors + Co-Authored-By
GIT_AUTHORS=$(git log "$RANGE" --pretty=format:"%an" --no-merges | sort -uf || true)
CO_AUTHORS=$(git log "$RANGE" --pretty=format:"%b" --no-merges \
| grep -ioE 'Co-Authored-By: *[^<]+' \
| sed 's/Co-Authored-By: *//i' \
| sed 's/ *$//' \
| sort -uf || true)
# Merge, deduplicate, and filter out bots
ALL_CONTRIBUTORS=$(printf "%s\n%s" "$GIT_AUTHORS" "$CO_AUTHORS" \
| sort -uf \
| grep -v '^$' \
| grep -viE '\[bot\]$|^dependabot|^github-actions|^copilot|^ZeroClaw Bot|^ZeroClaw Runner|^ZeroClaw Agent|^blacksmith' \
| while IFS= read -r name; do echo "- ${name}"; done || true)
# Build release body
BODY=$(cat <<NOTES_EOF
## What's New
${FEATURES}
## Contributors
${ALL_CONTRIBUTORS}
---
*Full changelog: ${PREV_TAG}...HEAD*
NOTES_EOF
)
# Output multiline values
{
echo "body<<BODY_EOF"
echo "$BODY"
echo "BODY_EOF"
} >> "$GITHUB_OUTPUT"
{
echo "features<<FEAT_EOF"
echo "$FEATURES"
echo "FEAT_EOF"
} >> "$GITHUB_OUTPUT"
{
echo "contributors<<CONTRIB_EOF"
echo "$ALL_CONTRIBUTORS"
echo "CONTRIB_EOF"
} >> "$GITHUB_OUTPUT"
web:
name: Build Web Dashboard
needs: [version]
if: github.repository == 'zeroclaw-labs/zeroclaw' && needs.version.outputs.skip != 'true'
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22
cache: npm
cache-dependency-path: web/package-lock.json
- name: Build web dashboard
run: cd web && npm ci && npm run build
- uses: actions/upload-artifact@v4
with:
name: web-dist
path: web/dist/
retention-days: 1
build:
name: Build ${{ matrix.target }}
needs: [version, web]
runs-on: ${{ matrix.os }}
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
include:
# Use ubuntu-22.04 for Linux builds to link against glibc 2.35,
# ensuring compatibility with Ubuntu 22.04+ (#3573).
- os: ubuntu-22.04
target: x86_64-unknown-linux-gnu
artifact: zeroclaw
ext: tar.gz
- os: ubuntu-22.04
target: aarch64-unknown-linux-gnu
artifact: zeroclaw
ext: tar.gz
cross_compiler: gcc-aarch64-linux-gnu
linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
linker: aarch64-linux-gnu-gcc
- os: ubuntu-22.04
target: armv7-unknown-linux-gnueabihf
artifact: zeroclaw
ext: tar.gz
cross_compiler: gcc-arm-linux-gnueabihf
linker_env: CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER
linker: arm-linux-gnueabihf-gcc
- os: macos-14
target: aarch64-apple-darwin
artifact: zeroclaw
ext: tar.gz
- os: ubuntu-latest
target: aarch64-linux-android
artifact: zeroclaw
ext: tar.gz
ndk: true
- os: windows-latest
target: x86_64-pc-windows-msvc
artifact: zeroclaw.exe
ext: zip
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
if: runner.os != 'Windows'
with:
prefix-key: ${{ matrix.os }}-${{ matrix.target }}
- uses: actions/download-artifact@v4
with:
name: web-dist
path: web/dist/
- name: Install cross compiler
if: matrix.cross_compiler
run: |
sudo apt-get update -qq
sudo apt-get install -y ${{ matrix.cross_compiler }}
- name: Setup Android NDK
if: matrix.ndk
run: echo "$ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin" >> "$GITHUB_PATH"
- name: Build release
shell: bash
run: |
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
cargo build --release --locked --features "${{ env.RELEASE_CARGO_FEATURES }}" --target ${{ matrix.target }}
- name: Check binary size
shell: bash
run: bash scripts/ci/check_binary_size.sh "target/${{ matrix.target }}/release/${{ matrix.artifact }}" "${{ matrix.target }}"
- name: Package (Unix)
if: runner.os != 'Windows'
run: |
cd target/${{ matrix.target }}/release
tar czf ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}
- name: Package (Windows)
if: runner.os == 'Windows'
run: |
cd target/${{ matrix.target }}/release
7z a ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: zeroclaw-${{ matrix.target }}
path: zeroclaw-${{ matrix.target }}.${{ matrix.ext }}
retention-days: 7
build-desktop:
name: Build Desktop App (macOS Universal)
needs: [version]
if: needs.version.outputs.skip != 'true'
runs-on: macos-14
timeout-minutes: 40
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: aarch64-apple-darwin,x86_64-apple-darwin
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
with:
prefix-key: macos-tauri
- uses: actions/setup-node@v4
with:
node-version: 22
- name: Install Tauri CLI
run: cargo install tauri-cli --locked
- name: Sync Tauri version with Cargo.toml
shell: bash
run: |
VERSION=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
cd apps/tauri
if command -v jq >/dev/null 2>&1; then
jq --arg v "$VERSION" '.version = $v' tauri.conf.json > tmp.json && mv tmp.json tauri.conf.json
else
sed -i '' "s/\"version\": \"[^\"]*\"/\"version\": \"$VERSION\"/" tauri.conf.json
fi
echo "Tauri version set to: $VERSION"
- name: Build Tauri app (universal binary)
working-directory: apps/tauri
run: cargo tauri build --target universal-apple-darwin
- name: Prepare desktop release assets
run: |
mkdir -p desktop-assets
find target -name '*.dmg' -exec cp {} desktop-assets/ZeroClaw.dmg \; 2>/dev/null || true
find target -name '*.app.tar.gz' -exec cp {} desktop-assets/ZeroClaw-macos.app.tar.gz \; 2>/dev/null || true
find target -name '*.app.tar.gz.sig' -exec cp {} desktop-assets/ZeroClaw-macos.app.tar.gz.sig \; 2>/dev/null || true
echo "--- Desktop assets ---"
ls -lh desktop-assets/
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: desktop-macos
path: desktop-assets/*
retention-days: 7
publish:
name: Publish Beta Release
needs: [version, release-notes, build, build-desktop]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
pattern: zeroclaw-*
path: artifacts
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: desktop-macos
path: artifacts/desktop-macos
- name: Generate checksums
run: |
cd artifacts
find . -type f \( -name '*.tar.gz' -o -name '*.zip' -o -name '*.dmg' \) -exec sha256sum {} + | sed 's| \./[^/]*/| |' > SHA256SUMS
cat SHA256SUMS
- name: Collect release assets
run: |
mkdir -p release-assets
find artifacts -type f \( -name '*.tar.gz' -o -name '*.zip' -o -name '*.dmg' -o -name 'SHA256SUMS' \) -exec cp {} release-assets/ \;
cp install.sh release-assets/
echo "--- Assets ---"
ls -lh release-assets/
- name: Write release notes
env:
NOTES: ${{ needs.release-notes.outputs.notes }}
run: printf '%s\n' "$NOTES" > release-notes.md
- name: Create GitHub Release
env:
GH_TOKEN: ${{ secrets.RELEASE_TOKEN }}
TAG: ${{ needs.version.outputs.tag }}
run: |
gh release create "$TAG" release-assets/* \
--repo "${{ github.repository }}" \
--title "$TAG" \
--notes-file release-notes.md \
--prerelease
redeploy-website:
name: Trigger Website Redeploy
needs: [publish]
runs-on: ubuntu-latest
steps:
- name: Trigger website redeploy
env:
PAT: ${{ secrets.WEBSITE_REPO_PAT }}
run: |
curl -fsSL -X POST \
-H "Authorization: token $PAT" \
-H "Accept: application/vnd.github+json" \
https://api.github.com/repos/zeroclaw-labs/zeroclaw-website/dispatches \
-d '{"event_type":"new-release","client_payload":{"install_script_url":"https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/install.sh"}}'
docker:
name: Push Docker Image
needs: [version, build]
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-x86_64-unknown-linux-gnu
path: artifacts/
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-aarch64-unknown-linux-gnu
path: artifacts/
- name: Prepare Docker context with pre-built binaries
run: |
mkdir -p docker-ctx/bin/amd64 docker-ctx/bin/arm64
tar xzf artifacts/zeroclaw-x86_64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/amd64
tar xzf artifacts/zeroclaw-aarch64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/arm64
mkdir -p docker-ctx/zeroclaw-data/.zeroclaw docker-ctx/zeroclaw-data/workspace
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
> docker-ctx/zeroclaw-data/.zeroclaw/config.toml
cp Dockerfile.ci docker-ctx/Dockerfile
cp Dockerfile.debian.ci docker-ctx/Dockerfile.debian
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: docker-ctx
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.version.outputs.tag }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:beta
platforms: linux/amd64,linux/arm64
- name: Build and push Debian compatibility image
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: docker-ctx
file: docker-ctx/Dockerfile.debian
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.version.outputs.tag }}-debian
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:beta-debian
platforms: linux/amd64,linux/arm64
# Tweet removed — only stable releases should tweet (see tweet-release.yml).

@@ -0,0 +1,598 @@
name: Release Stable
on:
push:
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # stable tags only (no -beta suffix)
workflow_dispatch:
inputs:
version:
description: "Stable version to release (e.g. 0.2.0)"
required: true
type: string
concurrency:
group: promote-release
cancel-in-progress: false
permissions:
contents: write
packages: write
env:
CARGO_TERM_COLOR: always
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
RELEASE_CARGO_FEATURES: channel-matrix,channel-lark,whatsapp-web
jobs:
validate:
name: Validate Version
runs-on: ubuntu-latest
outputs:
tag: ${{ steps.check.outputs.tag }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Validate semver and Cargo.toml match
id: check
shell: bash
env:
INPUT_VERSION: ${{ inputs.version || '' }}
REF_NAME: ${{ github.ref_name }}
EVENT_NAME: ${{ github.event_name }}
run: |
set -euo pipefail
cargo_version=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
# Resolve version from tag push or manual input
if [[ "$EVENT_NAME" == "push" ]]; then
# Tag push: extract version from tag name (v0.5.9 -> 0.5.9)
input_version="${REF_NAME#v}"
else
input_version="$INPUT_VERSION"
fi
if [[ ! "$input_version" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "::error::Version must be semver (X.Y.Z). Got: ${input_version}"
exit 1
fi
if [[ "$cargo_version" != "$input_version" ]]; then
echo "::error::Cargo.toml version (${cargo_version}) does not match input (${input_version}). Bump Cargo.toml first."
exit 1
fi
tag="v${input_version}"
# Only check tag existence for manual dispatch (tag push means it already exists)
if [[ "$EVENT_NAME" != "push" ]]; then
if git ls-remote --exit-code --tags origin "refs/tags/${tag}" >/dev/null 2>&1; then
echo "::error::Tag ${tag} already exists."
exit 1
fi
fi
echo "tag=${tag}" >> "$GITHUB_OUTPUT"
web:
name: Build Web Dashboard
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22
cache: npm
cache-dependency-path: web/package-lock.json
- name: Build web dashboard
run: cd web && npm ci && npm run build
- uses: actions/upload-artifact@v4
with:
name: web-dist
path: web/dist/
retention-days: 1
release-notes:
name: Generate Release Notes
runs-on: ubuntu-latest
outputs:
notes: ${{ steps.notes.outputs.body }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Build release notes
id: notes
shell: bash
env:
INPUT_VERSION: ${{ inputs.version || '' }}
REF_NAME: ${{ github.ref_name }}
EVENT_NAME: ${{ github.event_name }}
run: |
set -euo pipefail
# Resolve version from tag push or manual input
if [[ "$EVENT_NAME" == "push" ]]; then
INPUT_VERSION="${REF_NAME#v}"
fi
# Find the previous stable tag (exclude beta tags)
PREV_TAG=$(git tag --sort=-creatordate | grep -vE '\-beta\.' | grep -v "^v${INPUT_VERSION}$" | head -1 || echo "")
if [ -z "$PREV_TAG" ]; then
RANGE="HEAD"
else
RANGE="${PREV_TAG}..HEAD"
fi
# Extract features only — skip bug fixes for clean release notes
FEATURES=$(git log "$RANGE" --pretty=format:"%s" --no-merges \
| grep -iE '^feat(\(|:)' \
| sed 's/^feat(\([^)]*\)): /\1: /' \
| sed 's/^feat: //' \
| sed 's/ (#[0-9]*)$//' \
| sort -uf \
| while IFS= read -r line; do echo "- ${line}"; done || true)
if [ -z "$FEATURES" ]; then
FEATURES="- Incremental improvements and polish"
fi
# Collect ALL unique contributors: git authors + Co-Authored-By
GIT_AUTHORS=$(git log "$RANGE" --pretty=format:"%an" --no-merges | sort -uf || true)
CO_AUTHORS=$(git log "$RANGE" --pretty=format:"%b" --no-merges \
| grep -ioE 'Co-Authored-By: *[^<]+' \
| sed 's/Co-Authored-By: *//i' \
| sed 's/ *$//' \
| sort -uf || true)
# Merge, deduplicate, and filter out bots
ALL_CONTRIBUTORS=$(printf "%s\n%s" "$GIT_AUTHORS" "$CO_AUTHORS" \
| sort -uf \
| grep -v '^$' \
| grep -viE '\[bot\]$|^dependabot|^github-actions|^copilot|^ZeroClaw Bot|^ZeroClaw Runner|^ZeroClaw Agent|^blacksmith' \
| while IFS= read -r name; do echo "- ${name}"; done || true)
BODY=$(cat <<NOTES_EOF
## What's New
${FEATURES}
## Contributors
${ALL_CONTRIBUTORS}
---
*Full changelog: ${PREV_TAG}...v${INPUT_VERSION}*
NOTES_EOF
)
{
echo "body<<BODY_EOF"
echo "$BODY"
echo "BODY_EOF"
} >> "$GITHUB_OUTPUT"
build:
name: Build ${{ matrix.target }}
needs: [validate, web]
runs-on: ${{ matrix.os }}
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
include:
# Use ubuntu-22.04 for Linux builds to link against glibc 2.35,
# ensuring compatibility with Ubuntu 22.04+ (#3573).
- os: ubuntu-22.04
target: x86_64-unknown-linux-gnu
artifact: zeroclaw
ext: tar.gz
- os: ubuntu-22.04
target: aarch64-unknown-linux-gnu
artifact: zeroclaw
ext: tar.gz
cross_compiler: gcc-aarch64-linux-gnu
linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
linker: aarch64-linux-gnu-gcc
- os: ubuntu-22.04
target: armv7-unknown-linux-gnueabihf
artifact: zeroclaw
ext: tar.gz
cross_compiler: gcc-arm-linux-gnueabihf
linker_env: CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER
linker: arm-linux-gnueabihf-gcc
skip_prometheus: true
- os: ubuntu-22.04
target: arm-unknown-linux-gnueabihf
artifact: zeroclaw
ext: tar.gz
cross_compiler: gcc-arm-linux-gnueabihf
linker_env: CARGO_TARGET_ARM_UNKNOWN_LINUX_GNUEABIHF_LINKER
linker: arm-linux-gnueabihf-gcc
skip_prometheus: true
- os: macos-14
target: aarch64-apple-darwin
artifact: zeroclaw
ext: tar.gz
- os: ubuntu-latest
target: aarch64-linux-android
artifact: zeroclaw
ext: tar.gz
ndk: true
- os: windows-latest
target: x86_64-pc-windows-msvc
artifact: zeroclaw.exe
ext: zip
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
if: runner.os != 'Windows'
with:
prefix-key: ${{ matrix.os }}-${{ matrix.target }}
- uses: actions/download-artifact@v4
with:
name: web-dist
path: web/dist/
- name: Install cross compiler
if: matrix.cross_compiler
run: |
sudo apt-get update -qq
sudo apt-get install -y ${{ matrix.cross_compiler }}
- name: Setup Android NDK
if: matrix.ndk
run: echo "$ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin" >> "$GITHUB_PATH"
- name: Build release
shell: bash
run: |
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
# Force ARMv6 codegen for arm-unknown-linux-gnueabihf (#4556)
# Ubuntu 22.04's gcc-arm-linux-gnueabihf defaults to ARMv7+NEON,
# which segfaults on ARMv6 devices (e.g. Raspberry Pi Zero W).
if [ "${{ matrix.target }}" = "arm-unknown-linux-gnueabihf" ]; then
export CFLAGS_arm_unknown_linux_gnueabihf="-march=armv6 -mfpu=vfp -mfloat-abi=hard"
export CARGO_TARGET_ARM_UNKNOWN_LINUX_GNUEABIHF_RUSTFLAGS="-C target-feature=-neon"
fi
if [ "${{ matrix.skip_prometheus || 'false' }}" = "true" ]; then
cargo build --release --locked --no-default-features --features "${{ env.RELEASE_CARGO_FEATURES }},channel-nostr,skill-creation" --target ${{ matrix.target }}
else
cargo build --release --locked --features "${{ env.RELEASE_CARGO_FEATURES }}" --target ${{ matrix.target }}
fi
- name: Check binary size
shell: bash
run: bash scripts/ci/check_binary_size.sh "target/${{ matrix.target }}/release/${{ matrix.artifact }}" "${{ matrix.target }}"
- name: Package (Unix)
if: runner.os != 'Windows'
run: |
cd target/${{ matrix.target }}/release
tar czf ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}
- name: Package (Windows)
if: runner.os == 'Windows'
run: |
cd target/${{ matrix.target }}/release
7z a ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: zeroclaw-${{ matrix.target }}
path: zeroclaw-${{ matrix.target }}.${{ matrix.ext }}
retention-days: 14
build-desktop:
name: Build Desktop App (macOS Universal)
needs: [validate]
runs-on: macos-14
timeout-minutes: 40
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: aarch64-apple-darwin,x86_64-apple-darwin
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
with:
prefix-key: macos-tauri
- uses: actions/setup-node@v4
with:
node-version: 22
- name: Install Tauri CLI
run: cargo install tauri-cli --locked
- name: Sync Tauri version with Cargo.toml
shell: bash
run: |
VERSION=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
cd apps/tauri
if command -v jq >/dev/null 2>&1; then
jq --arg v "$VERSION" '.version = $v' tauri.conf.json > tmp.json && mv tmp.json tauri.conf.json
else
sed -i '' "s/\"version\": \"[^\"]*\"/\"version\": \"$VERSION\"/" tauri.conf.json
fi
echo "Tauri version set to: $VERSION"
- name: Build Tauri app (universal binary)
working-directory: apps/tauri
run: cargo tauri build --target universal-apple-darwin
- name: Prepare desktop release assets
run: |
mkdir -p desktop-assets
find target -name '*.dmg' -exec cp {} desktop-assets/ZeroClaw.dmg \; 2>/dev/null || true
find target -name '*.app.tar.gz' -exec cp {} desktop-assets/ZeroClaw-macos.app.tar.gz \; 2>/dev/null || true
find target -name '*.app.tar.gz.sig' -exec cp {} desktop-assets/ZeroClaw-macos.app.tar.gz.sig \; 2>/dev/null || true
echo "--- Desktop assets ---"
ls -lh desktop-assets/
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: desktop-macos
path: desktop-assets/*
retention-days: 14
publish:
name: Publish Stable Release
needs: [validate, release-notes, build, build-desktop]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
pattern: zeroclaw-*
path: artifacts
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: desktop-macos
path: artifacts/desktop-macos
- name: Generate checksums
run: |
cd artifacts
find . -type f \( -name '*.tar.gz' -o -name '*.zip' -o -name '*.dmg' \) -exec sha256sum {} + | sed 's| \./[^/]*/| |' > SHA256SUMS
cat SHA256SUMS
- name: Collect release assets
run: |
mkdir -p release-assets
find artifacts -type f \( -name '*.tar.gz' -o -name '*.zip' -o -name '*.dmg' -o -name 'SHA256SUMS' \) -exec cp {} release-assets/ \;
cp install.sh release-assets/
echo "--- Assets ---"
ls -lh release-assets/
- name: Write release notes
env:
NOTES: ${{ needs.release-notes.outputs.notes }}
run: printf '%s\n' "$NOTES" > release-notes.md
- name: Create tag if manual dispatch
if: github.event_name == 'workflow_dispatch'
env:
TAG: ${{ needs.validate.outputs.tag }}
run: |
git tag -a "$TAG" -m "zeroclaw $TAG"
git push origin "$TAG"
- name: Create GitHub Release
env:
GH_TOKEN: ${{ secrets.RELEASE_TOKEN }}
TAG: ${{ needs.validate.outputs.tag }}
run: |
gh release create "$TAG" release-assets/* \
--repo "${{ github.repository }}" \
--title "$TAG" \
--notes-file release-notes.md \
--latest
crates-io:
name: Publish to crates.io
needs: [validate, publish]
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@v2
- uses: actions/setup-node@v4
with:
node-version: 22
cache: npm
cache-dependency-path: web/package-lock.json
- name: Build web dashboard
run: cd web && npm ci && npm run build
- name: Clean web build artifacts
run: rm -rf web/node_modules web/src web/package.json web/package-lock.json web/tsconfig*.json web/vite.config.ts web/index.html
- name: Publish aardvark-sys to crates.io
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify -p aardvark-sys 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::aardvark-sys already on crates.io — skipping"
exit 0
fi
exit 1
- name: Wait for aardvark-sys to index
run: sleep 15
- name: Publish to crates.io
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
VERSION: ${{ inputs.version }}
run: |
# Publish to crates.io; treat "already exists" as success
# (auto-publish workflow may have already published this version)
CRATE_NAME=$(sed -n 's/^name = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::${CRATE_NAME}@${VERSION} already on crates.io — skipping"
exit 0
fi
exit 1
redeploy-website:
name: Trigger Website Redeploy
needs: [publish]
runs-on: ubuntu-latest
steps:
- name: Trigger website redeploy
env:
PAT: ${{ secrets.WEBSITE_REPO_PAT }}
run: |
curl -fsSL -X POST \
-H "Authorization: token $PAT" \
-H "Accept: application/vnd.github+json" \
https://api.github.com/repos/zeroclaw-labs/zeroclaw-website/dispatches \
-d '{"event_type":"new-release","client_payload":{"install_script_url":"https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/install.sh"}}'
docker:
name: Push Docker Image
needs: [validate, build]
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-x86_64-unknown-linux-gnu
path: artifacts/
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-aarch64-unknown-linux-gnu
path: artifacts/
- name: Prepare Docker context with pre-built binaries
run: |
mkdir -p docker-ctx/bin/amd64 docker-ctx/bin/arm64
tar xzf artifacts/zeroclaw-x86_64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/amd64
tar xzf artifacts/zeroclaw-aarch64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/arm64
mkdir -p docker-ctx/zeroclaw-data/.zeroclaw docker-ctx/zeroclaw-data/workspace
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
> docker-ctx/zeroclaw-data/.zeroclaw/config.toml
cp Dockerfile.ci docker-ctx/Dockerfile
cp Dockerfile.debian.ci docker-ctx/Dockerfile.debian
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: docker-ctx
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.validate.outputs.tag }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
platforms: linux/amd64,linux/arm64
- name: Build and push Debian compatibility image
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: docker-ctx
file: docker-ctx/Dockerfile.debian
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.validate.outputs.tag }}-debian
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:debian
platforms: linux/amd64,linux/arm64
# ── Post-publish: package manager auto-sync ─────────────────────────
scoop:
name: Update Scoop Manifest
needs: [validate, publish]
if: ${{ !cancelled() && needs.publish.result == 'success' }}
uses: ./.github/workflows/pub-scoop.yml
with:
release_tag: ${{ needs.validate.outputs.tag }}
dry_run: false
secrets: inherit
aur:
name: Update AUR Package
needs: [validate, publish]
if: ${{ !cancelled() && needs.publish.result == 'success' }}
uses: ./.github/workflows/pub-aur.yml
with:
release_tag: ${{ needs.validate.outputs.tag }}
dry_run: false
secrets: inherit
homebrew:
name: Update Homebrew Core
needs: [validate, publish]
if: ${{ !cancelled() && needs.publish.result == 'success' }}
uses: ./.github/workflows/pub-homebrew-core.yml
with:
release_tag: ${{ needs.validate.outputs.tag }}
dry_run: false
secrets: inherit
# ── Post-publish: announce after release + website are live ───────────
# Docker push can be slow; don't let it block announcements.
tweet:
name: Tweet Release
needs: [validate, publish, redeploy-website]
if: ${{ !cancelled() && needs.publish.result == 'success' }}
uses: ./.github/workflows/tweet-release.yml
with:
release_tag: ${{ needs.validate.outputs.tag }}
release_url: https://github.com/zeroclaw-labs/zeroclaw/releases/tag/${{ needs.validate.outputs.tag }}
secrets: inherit
discord:
name: Discord Announcement
needs: [validate, publish, redeploy-website]
if: ${{ !cancelled() && needs.publish.result == 'success' }}
uses: ./.github/workflows/discord-release.yml
with:
release_tag: ${{ needs.validate.outputs.tag }}
release_url: https://github.com/zeroclaw-labs/zeroclaw/releases/tag/${{ needs.validate.outputs.tag }}
secrets: inherit

@@ -0,0 +1,308 @@
name: Tweet Release
on:
# Called by release workflows AFTER all publish steps (docker, crates, website) complete.
workflow_call:
inputs:
release_tag:
description: "Stable release tag (e.g. v0.3.0)"
required: true
type: string
release_url:
description: "GitHub Release URL"
required: true
type: string
secrets:
TWITTER_CONSUMER_API_KEY:
required: false
TWITTER_CONSUMER_API_SECRET_KEY:
required: false
TWITTER_ACCESS_TOKEN:
required: false
TWITTER_ACCESS_TOKEN_SECRET:
required: false
workflow_dispatch:
inputs:
tweet_text:
description: "Custom tweet text (include emojis, keep it punchy)"
required: true
type: string
image_url:
description: "Optional image URL to attach (png/jpg)"
required: false
type: string
jobs:
tweet:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Check for new features
id: check
shell: bash
env:
RELEASE_TAG: ${{ inputs.release_tag || '' }}
MANUAL_TEXT: ${{ inputs.tweet_text || '' }}
run: |
# Manual dispatch always proceeds
if [ -n "$MANUAL_TEXT" ]; then
echo "skip=false" >> "$GITHUB_OUTPUT"
exit 0
fi
# Stable releases (no -beta suffix) always tweet — they represent
# the full release cycle, so skipping them loses visibility.
if [[ ! "$RELEASE_TAG" =~ -beta\. ]]; then
echo "Stable release ${RELEASE_TAG} — always tweet"
echo "skip=false" >> "$GITHUB_OUTPUT"
exit 0
fi
# Find the previous STABLE release tag (exclude betas) to check for new features
PREV_TAG=$(git tag --sort=-creatordate \
| grep -v "^${RELEASE_TAG}$" \
| grep -vE '\-beta\.' \
| head -1 || echo "")
if [ -z "$PREV_TAG" ]; then
echo "skip=false" >> "$GITHUB_OUTPUT"
exit 0
fi
# Count new feat() OR fix() commits since the previous release
NEW_CHANGES=$(git log "${PREV_TAG}..${RELEASE_TAG}" --pretty=format:"%s" --no-merges \
| grep -ciE '^(feat|fix)(\(|:)' || true)  # grep -c already prints 0 on no match; "|| echo 0" would emit a second line and break the -eq test below
if [ "$NEW_CHANGES" -eq 0 ]; then
echo "No new features or fixes since ${PREV_TAG} — skipping tweet"
echo "skip=true" >> "$GITHUB_OUTPUT"
else
echo "${NEW_CHANGES} new change(s) since ${PREV_TAG} — tweeting"
echo "skip=false" >> "$GITHUB_OUTPUT"
fi
- name: Build tweet text
id: tweet
if: steps.check.outputs.skip != 'true'
shell: bash
env:
RELEASE_TAG: ${{ inputs.release_tag || '' }}
RELEASE_URL: ${{ inputs.release_url || '' }}
MANUAL_TEXT: ${{ inputs.tweet_text || '' }}
run: |
set -euo pipefail
if [ -n "$MANUAL_TEXT" ]; then
TWEET="$MANUAL_TEXT"
else
# Diff against the last STABLE release (exclude betas) to capture
# ALL features accumulated across the full beta cycle
PREV_STABLE=$(git tag --sort=-creatordate \
| grep -v "^${RELEASE_TAG}$" \
| grep -vE '\-beta\.' \
| head -1 || echo "")
RANGE="${PREV_STABLE:+${PREV_STABLE}..}${RELEASE_TAG}"
# Extract ALL features since the last stable release
FEATURES=$(git log "$RANGE" --pretty=format:"%s" --no-merges \
| grep -iE '^feat(\(|:)' \
| sed 's/^feat(\([^)]*\)): /\1: /' \
| sed 's/^feat: //' \
| sed 's/ (#[0-9]*)$//' \
| sort -uf || true)
FEAT_COUNT=$(echo "$FEATURES" | grep -c . || true)  # grep -c prints 0 itself on no match
# Format top features with rocket emoji (limit to 6 for tweet space)
FEAT_LIST=$(echo "$FEATURES" \
| head -6 \
| while IFS= read -r line; do echo "🚀 ${line}"; done || true)
if [ -z "$FEAT_LIST" ]; then
FEAT_LIST="🚀 Incremental improvements and polish"
fi
# Build tweet — feature-focused style
TWEET=$(printf "🦀 ZeroClaw %s\n\n%s\n\nZero overhead. Zero compromise. 100%% Rust.\n\n#zeroclaw #rust #ai #opensource" \
"$RELEASE_TAG" "$FEAT_LIST")
fi
# X/Twitter counts any URL as 23 chars (t.co shortening).
# Extract the URL (if present), truncate the BODY to fit, then
# re-append the URL so it is never chopped.
URL=""
BODY="$TWEET"
# Pull URL out of existing tweet text or use RELEASE_URL
FOUND_URL=$(echo "$TWEET" | grep -oE 'https?://[^ ]+' | tail -1 || true)
if [ -n "$FOUND_URL" ]; then
URL="$FOUND_URL"
BODY=$(echo "$TWEET" | sed "s|${URL}||" | sed -e 's/[[:space:]]*$//')
elif [ -n "$RELEASE_URL" ]; then
URL="$RELEASE_URL"
fi
if [ -n "$URL" ]; then
# URL counts as 23 chars on X + 2 chars for \n\n separator = 25
MAX_BODY=$((280 - 25))
if [ ${#BODY} -gt $MAX_BODY ]; then
BODY="${BODY:0:$((MAX_BODY - 3))}..."
fi
TWEET=$(printf "%s\n\n%s" "$BODY" "$URL")
else
if [ ${#TWEET} -gt 280 ]; then
TWEET="${TWEET:0:277}..."
fi
fi
echo "--- Tweet preview ---"
echo "$TWEET"
echo "--- ${#TWEET} chars ---"
{
echo "text<<TWEET_EOF"
echo "$TWEET"
echo "TWEET_EOF"
} >> "$GITHUB_OUTPUT"
- name: Check for duplicate tweet
id: dedup
if: steps.check.outputs.skip != 'true'
shell: bash
env:
TWEET_TEXT: ${{ steps.tweet.outputs.text }}
run: |
# Hash the tweet content (ignore whitespace differences)
TWEET_HASH=$(echo "$TWEET_TEXT" | tr -s '[:space:]' | sha256sum | cut -d' ' -f1)
echo "hash=${TWEET_HASH}" >> "$GITHUB_OUTPUT"
# Check if we already have a cache hit for this exact tweet
MARKER_FILE="/tmp/tweet-dedup-${TWEET_HASH}"
echo "$TWEET_HASH" > "$MARKER_FILE"
- uses: actions/cache@v4
if: steps.check.outputs.skip != 'true'
id: tweet-cache
with:
path: /tmp/tweet-dedup-${{ steps.dedup.outputs.hash }}
key: tweet-${{ steps.dedup.outputs.hash }}
- name: Skip duplicate tweet
if: steps.check.outputs.skip != 'true' && steps.tweet-cache.outputs.cache-hit == 'true'
run: |
echo "::warning::Duplicate tweet detected (hash=${{ steps.dedup.outputs.hash }}) — skipping"
echo "This exact tweet was already posted in a previous run."
- name: Post to X
if: steps.check.outputs.skip != 'true' && steps.tweet-cache.outputs.cache-hit != 'true'
shell: bash
env:
TWITTER_CONSUMER_KEY: ${{ secrets.TWITTER_CONSUMER_API_KEY }}
TWITTER_CONSUMER_SECRET: ${{ secrets.TWITTER_CONSUMER_API_SECRET_KEY }}
TWITTER_ACCESS_TOKEN: ${{ secrets.TWITTER_ACCESS_TOKEN }}
TWITTER_ACCESS_TOKEN_SECRET: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}
TWEET_TEXT: ${{ steps.tweet.outputs.text }}
IMAGE_URL: ${{ inputs.image_url || '' }}
run: |
set -euo pipefail
# Skip if Twitter secrets are not configured
if [ -z "$TWITTER_CONSUMER_KEY" ] || [ -z "$TWITTER_ACCESS_TOKEN" ]; then
echo "::warning::Twitter secrets not configured — skipping tweet"
exit 0
fi
pip install requests requests-oauthlib --quiet
python3 - <<'PYEOF'
import os, sys, time
from requests_oauthlib import OAuth1Session
consumer_key = os.environ["TWITTER_CONSUMER_KEY"]
consumer_secret = os.environ["TWITTER_CONSUMER_SECRET"]
access_token = os.environ["TWITTER_ACCESS_TOKEN"]
access_token_secret = os.environ["TWITTER_ACCESS_TOKEN_SECRET"]
tweet_text = os.environ["TWEET_TEXT"]
image_url = os.environ.get("IMAGE_URL", "")
oauth = OAuth1Session(
consumer_key,
client_secret=consumer_secret,
resource_owner_key=access_token,
resource_owner_secret=access_token_secret,
)
media_id = None
# Upload image if provided
if image_url:
import requests
print(f"Downloading image: {image_url}")
img_resp = requests.get(image_url, timeout=30)
img_resp.raise_for_status()
content_type = img_resp.headers.get("content-type", "image/png")
init_resp = oauth.post(
"https://upload.twitter.com/1.1/media/upload.json",
data={
"command": "INIT",
"total_bytes": len(img_resp.content),
"media_type": content_type,
},
)
if init_resp.status_code != 202:
print(f"Media INIT failed: {init_resp.status_code} {init_resp.text}", file=sys.stderr)
sys.exit(1)
media_id = init_resp.json()["media_id_string"]
append_resp = oauth.post(
"https://upload.twitter.com/1.1/media/upload.json",
data={"command": "APPEND", "media_id": media_id, "segment_index": 0},
files={"media_data": img_resp.content},
)
if append_resp.status_code not in (200, 204):
print(f"Media APPEND failed: {append_resp.status_code} {append_resp.text}", file=sys.stderr)
sys.exit(1)
fin_resp = oauth.post(
"https://upload.twitter.com/1.1/media/upload.json",
data={"command": "FINALIZE", "media_id": media_id},
)
if fin_resp.status_code not in (200, 201):
print(f"Media FINALIZE failed: {fin_resp.status_code} {fin_resp.text}", file=sys.stderr)
sys.exit(1)
state = fin_resp.json().get("processing_info", {}).get("state")
while state == "pending" or state == "in_progress":
wait = fin_resp.json().get("processing_info", {}).get("check_after_secs", 2)
time.sleep(wait)
status_resp = oauth.get(
"https://upload.twitter.com/1.1/media/upload.json",
params={"command": "STATUS", "media_id": media_id},
)
state = status_resp.json().get("processing_info", {}).get("state")
fin_resp = status_resp
print(f"Image uploaded: media_id={media_id}")
# Post tweet
payload = {"text": tweet_text}
if media_id:
payload["media"] = {"media_ids": [media_id]}
resp = oauth.post("https://api.x.com/2/tweets", json=payload)
if resp.status_code == 201:
data = resp.json()
tweet_id = data["data"]["id"]
print(f"Tweet posted: https://x.com/zeroclawlabs/status/{tweet_id}")
else:
print(f"Failed to post tweet: {resp.status_code}", file=sys.stderr)
print(resp.text, file=sys.stderr)
sys.exit(1)
PYEOF

third_party/zeroclaw/.gitignore vendored Normal file
@@ -0,0 +1,54 @@
/target
/target-*/
firmware/*/target
web/dist/*
!web/dist/.gitkeep
*.db
*.db-journal
.DS_Store
._*
.wt-pr37/
__pycache__/
*.pyc
docker-compose.override.yml
# Environment files (may contain secrets)
.env
# Python virtual environments
.venv/
venv/
# ESP32 build cache (esp-idf-sys managed)
.embuild/
.env.local
.env.*.local
# Secret keys and credentials
.secret_key
*.key
*.pem
credentials.json
.worktrees/
.zeroclaw/*
# Skill eval workspaces (test outputs, transcripts, grading)
.claude/skills/*-workspace/
# Claude Code agent worktrees (temporary isolated workspaces)
.claude/worktrees/
# Local state backups
.local-state-backups/
*.local-state-backup/
# Coverage artifacts
lcov.info
# IDE files
.idea
# Wrangler cache
.wrangler/

@@ -0,0 +1,15 @@
config:
default: true
MD013: false
MD007: false
MD031: false
MD032: false
MD033: false
MD040: false
MD041: false
MD060: false
MD024:
allow_different_nesting: true
ignores:
- "target/**"

@@ -0,0 +1,14 @@
{
"recommendations": [
"rust-lang.rust-analyzer",
"vadimcn.vscode-lldb",
"serayuzgur.crates",
"bungcip.better-toml",
"usernamehw.errorlens",
"eamodio.gitlens",
"tamasfe.even-better-toml",
"dbaeumer.vscode-eslint",
"oderwat.indent-rainbow",
"ryanluker.vscode-coverage-gutters"
]
}

@@ -0,0 +1,73 @@
{
"version": "0.2.0",
"inputs": [
{
"id": "testName",
"description": "Exact test name to debug (e.g. tests::my_test)",
"type": "promptString",
"default": ""
}
],
"configurations": [
// ── Runtime ───────────────────────────────────────────
{
"type": "lldb",
"request": "launch",
"name": "Debug: Agent",
"program": "${workspaceFolder}/target/debug/zeroclaw",
"args": ["agent"],
"cwd": "${workspaceFolder}",
"preLaunchTask": "Build: Debug"
},
{
"type": "lldb",
"request": "launch",
"name": "Debug: Gateway",
"program": "${workspaceFolder}/target/debug/zeroclaw",
"args": ["gateway"],
"cwd": "${workspaceFolder}",
"preLaunchTask": "Build: Debug"
},
{
"type": "lldb",
"request": "launch",
"name": "Debug: Daemon",
"program": "${workspaceFolder}/target/debug/zeroclaw",
"args": ["daemon"],
"cwd": "${workspaceFolder}",
"preLaunchTask": "Build: Debug"
},
{
"type": "lldb",
"request": "launch",
"name": "Debug: Status",
"program": "${workspaceFolder}/target/debug/zeroclaw",
"args": ["status"],
"cwd": "${workspaceFolder}",
"preLaunchTask": "Build: Debug"
},
{
"type": "lldb",
"request": "launch",
"name": "Debug: Onboard",
"program": "${workspaceFolder}/target/debug/zeroclaw",
"args": ["onboard"],
"cwd": "${workspaceFolder}",
"preLaunchTask": "Build: Debug"
},
// ── Test ──────────────────────────────────────────────
{
"type": "lldb",
"request": "launch",
"name": "Debug: Test (by name)",
"cargo": {
"args": ["test", "--no-run", "--lib", "--"],
"filter": {
"kind": "lib"
}
},
"args": ["--exact", "${input:testName}", "--nocapture"],
"cwd": "${workspaceFolder}"
}
]
}

@@ -0,0 +1,22 @@
{
"git.autofetch": true,
"git.autofetchPeriod": 90,
"search.exclude": {
"**/target": true
},
"files.watcherExclude": {
"**/target/**": true
},
"[rust]": {
"editor.defaultFormatter": "rust-lang.rust-analyzer"
},
"editor.formatOnSave": true,
"editor.formatOnPaste": true,
"files.autoSave": "afterDelay",
"files.autoSaveDelay": 1000,
"rust-analyzer.check.command": "clippy",
"rust-analyzer.check.extraArgs": ["--all-targets", "--", "-D", "warnings"],
"window.title": "${activeRepositoryBranchName}",
"coverage-gutters.coverageFileNames": ["lcov.info"],
"git.postCommitCommand": "push"
}

third_party/zeroclaw/.vscode/tasks.json vendored Normal file
@@ -0,0 +1,133 @@
{
"version": "2.0.0",
"inputs": [
{
"id": "testFilter",
"description": "Test name or filter pattern",
"type": "promptString",
"default": ""
}
],
"tasks": [
// ── Build ──────────────────────────────────────────────
{
"label": "Build: Debug",
"type": "shell",
"command": "cargo",
"args": ["build"],
"group": {
"kind": "build",
"isDefault": true
},
"problemMatcher": ["$rustc"]
},
{
"label": "Build: Release",
"type": "shell",
"command": "cargo",
"args": ["build", "--release"],
"problemMatcher": ["$rustc"]
},
{
"label": "Build: Check (fast)",
"type": "shell",
"command": "cargo",
"args": ["check", "--all-targets"],
"problemMatcher": ["$rustc"]
},
// ── Lint ───────────────────────────────────────────────
{
"label": "Lint: Clippy",
"type": "shell",
"command": "cargo",
"args": ["clippy", "--all-targets", "--", "-D", "warnings"],
"problemMatcher": ["$rustc"]
},
{
"label": "Lint: Format Check",
"type": "shell",
"command": "cargo",
"args": ["fmt", "--all", "--", "--check"],
"problemMatcher": []
},
{
"label": "Lint: Format Fix",
"type": "shell",
"command": "cargo",
"args": ["fmt", "--all"],
"problemMatcher": []
},
// ── Test ──────────────────────────────────────────────
{
"label": "Test: All",
"type": "shell",
"command": "cargo nextest --version >/dev/null 2>&1 || cargo install cargo-nextest && cargo nextest run",
"group": {
"kind": "test",
"isDefault": true
},
"problemMatcher": ["$rustc"]
},
{
"label": "Test: Filtered",
"type": "shell",
"command": "cargo nextest --version >/dev/null 2>&1 || cargo install cargo-nextest && cargo nextest run -E 'test(${input:testFilter})'",
"problemMatcher": ["$rustc"]
},
{
"label": "Test: Coverage Report",
"type": "shell",
"command": "cargo llvm-cov --version >/dev/null 2>&1 || cargo install cargo-llvm-cov && cargo llvm-cov --lcov --output-path lcov.info",
"problemMatcher": []
},
{
"label": "Test: Benchmarks",
"type": "shell",
"command": "cargo",
"args": ["bench"],
"problemMatcher": []
},
// ── Security ──────────────────────────────────────────
{
"label": "Security: Audit",
"type": "shell",
"command": "cargo audit --version >/dev/null 2>&1 || cargo install cargo-audit && cargo audit",
"problemMatcher": []
},
{
"label": "Security: Deny (licenses + sources)",
"type": "shell",
"command": "cargo deny --version >/dev/null 2>&1 || cargo install cargo-deny && cargo deny check licenses sources",
"problemMatcher": []
},
// ── CI (Docker) ───────────────────────────────────────
{
"label": "CI: All (Docker)",
"type": "shell",
"command": "./dev/ci.sh",
"args": ["all"],
"problemMatcher": []
},
{
"label": "CI: Lint (Docker)",
"type": "shell",
"command": "./dev/ci.sh",
"args": ["lint"],
"problemMatcher": []
},
{
"label": "CI: Test (Docker)",
"type": "shell",
"command": "./dev/ci.sh",
"args": ["test"],
"problemMatcher": []
},
{
"label": "CI: Security (Docker)",
"type": "shell",
"command": "./dev/ci.sh",
"args": ["security"],
"problemMatcher": []
}
]
}

third_party/zeroclaw/AGENTS.md vendored Normal file
@@ -0,0 +1,92 @@
# AGENTS.md — ZeroClaw
Cross-tool agent instructions for any AI coding assistant working on this repository.
## Commands
```bash
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo test
```
Full pre-PR validation (recommended):
```bash
./dev/ci.sh all
```
Docs-only changes: run markdown lint and link-integrity checks. If touching bootstrap scripts: `bash -n install.sh`.
## Project Snapshot
ZeroClaw is a Rust-first autonomous agent runtime optimized for performance, efficiency, stability, extensibility, sustainability, and security.
Core architecture is trait-driven and modular. Extend by implementing traits and registering them in factory modules; a minimal sketch follows the extension-point list below.
Key extension points:
- `src/providers/traits.rs` (`Provider`)
- `src/channels/traits.rs` (`Channel`)
- `src/tools/traits.rs` (`Tool`)
- `src/memory/traits.rs` (`Memory`)
- `src/observability/traits.rs` (`Observer`)
- `src/runtime/traits.rs` (`RuntimeAdapter`)
- `src/peripherals/traits.rs` (`Peripheral`) — hardware boards (STM32, RPi GPIO)
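For example, adding a provider means implementing the `Provider` trait and registering the new type in the provider factory. A minimal sketch, assuming an async completion-style trait; the real signature lives in `src/providers/traits.rs`, and the method names below are illustrative only:
```rust
use async_trait::async_trait;

/// Hypothetical trait shape for illustration; see
/// src/providers/traits.rs for the actual definition.
#[async_trait]
pub trait Provider: Send + Sync {
    /// Name used by the factory to look this provider up.
    fn name(&self) -> &str;
    /// Send a prompt to the backing model and return its reply.
    async fn complete(&self, prompt: &str) -> anyhow::Result<String>;
}

pub struct EchoProvider;

#[async_trait]
impl Provider for EchoProvider {
    fn name(&self) -> &str {
        "echo"
    }

    async fn complete(&self, prompt: &str) -> anyhow::Result<String> {
        // A real provider would call its backend API here.
        Ok(format!("echo: {prompt}"))
    }
}
```
After implementing the trait, the remaining step is registering the type in the corresponding factory module so configuration can select it by name.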
## Repository Map
- `src/main.rs` — CLI entrypoint and command routing
- `src/lib.rs` — module exports and shared command enums
- `src/config/` — schema + config loading/merging
- `src/agent/` — orchestration loop
- `src/gateway/` — webhook/gateway server
- `src/security/` — policy, pairing, secret store
- `src/memory/` — markdown/sqlite memory backends + embeddings/vector merge
- `src/providers/` — model providers and resilient wrapper
- `src/channels/` — Telegram/Discord/Slack/etc channels
- `src/tools/` — tool execution surface (shell, file, memory, browser)
- `src/peripherals/` — hardware peripherals (STM32, RPi GPIO)
- `src/runtime/` — runtime adapters (currently native)
- `docs/` — topic-based documentation (setup-guides, reference, ops, security, hardware, contributing, maintainers)
- `.github/` — CI, templates, automation workflows
## Risk Tiers
- **Low risk**: docs/chore/tests-only changes
- **Medium risk**: most `src/**` behavior changes without boundary/security impact
- **High risk**: `src/security/**`, `src/runtime/**`, `src/gateway/**`, `src/tools/**`, `.github/workflows/**`, access-control boundaries
When uncertain, classify as higher risk.
## Workflow
1. **Read before write** — inspect existing module, factory wiring, and adjacent tests before editing.
2. **One concern per PR** — avoid mixed feature+refactor+infra patches.
3. **Implement minimal patch** — no speculative abstractions, no config keys without a concrete use case.
4. **Validate by risk tier** — docs-only: lightweight checks. Code changes: full relevant checks.
5. **Document impact** — update PR notes for behavior, risk, side effects, and rollback.
6. **Queue hygiene** — stacked PR: declare `Depends on #...`. Replacing old PR: declare `Supersedes #...`.
Branch/commit/PR rules:
- Work from a non-`master` branch. Open a PR to `master`; do not push directly.
- Use conventional commit titles. Prefer small PRs (`size: XS/S/M`).
- Follow `.github/pull_request_template.md` fully.
- Never commit secrets, personal data, or real identity information (see `@docs/contributing/pr-discipline.md`).
## Anti-Patterns
- Do not add heavy dependencies for minor convenience.
- Do not silently weaken security policy or access constraints.
- Do not add speculative config/feature flags "just in case".
- Do not mix massive formatting-only changes with functional changes.
- Do not modify unrelated modules "while here".
- Do not bypass failing checks without explicit explanation.
- Do not hide behavior-changing side effects in refactor commits.
- Do not include personal identity or sensitive information in test data, examples, docs, or commits.
## Linked References
- `@docs/contributing/change-playbooks.md` — adding providers, channels, tools, peripherals; security/gateway changes; architecture boundaries
- `@docs/contributing/pr-discipline.md` — privacy rules, superseded-PR attribution/templates, handoff template
- `@docs/contributing/docs-contract.md` — docs system contract, i18n rules, locale parity

third_party/zeroclaw/CHANGELOG.md vendored Normal file
@@ -0,0 +1 @@
# Changelog

third_party/zeroclaw/CLAUDE.md vendored Normal file
@@ -0,0 +1,16 @@
# CLAUDE.md — ZeroClaw (Claude Code)
> **Shared instructions live in [`AGENTS.md`](./AGENTS.md).**
> This file contains only Claude Code-specific directives.
## Claude Code Settings
Claude Code should read and follow all instructions in `AGENTS.md` at the repository root for project conventions, commands, risk tiers, workflow rules, and anti-patterns.
## Hooks
_No custom hooks defined yet._
## Slash Commands
_No custom slash commands defined yet._

third_party/zeroclaw/CODE_OF_CONDUCT.md vendored Normal file
@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
https://x.com/argenistherose.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

third_party/zeroclaw/CONTRIBUTING.md vendored Normal file
@@ -0,0 +1,582 @@
# Contributing to ZeroClaw
Thanks for your interest in contributing to ZeroClaw! This guide will help you get started.
---
## ⚠️ Branch Migration Notice (March 2026)
**`master` is the ONLY default branch. The `main` branch no longer exists.**
If you have an existing fork or local clone that tracks `main`, you **must** update it:
```bash
# Update your local clone to track master
git checkout master
git branch -D main 2>/dev/null # delete local main if it exists
git remote set-head origin master
git fetch origin --prune # remove stale remote refs
# If your fork still has a main branch, delete it
git push origin --delete main 2>/dev/null
```
All PRs must target **`master`**. PRs targeting `main` will be rejected.
**Background:** ZeroClaw previously used `main` in some documentation and scripts, which caused 404 errors, broken CI refs, and contributor confusion (see [#2929](https://github.com/zeroclaw-labs/zeroclaw/issues/2929), [#3061](https://github.com/zeroclaw-labs/zeroclaw/issues/3061), [#3194](https://github.com/zeroclaw-labs/zeroclaw/pull/3194)). As of March 2026, all references have been corrected, stale branches cleaned up, and the `main` branch permanently deleted.
---
## Branching Model
> **`master`** is the single source-of-truth branch.
>
> **How contributors should work:**
> 1. Fork the repository
> 2. Create a `feat/*` or `fix/*` branch from `master`
> 3. Open a PR targeting `master`
>
> Do **not** create or push to a `main` branch. There is no `main` branch — it will not work.
## First-Time Contributors
Welcome — contributions of all sizes are valued. If this is your first contribution, here is how to get started:
1. **Find an issue.** Look for issues labeled [`good first issue`](https://github.com/zeroclaw-labs/zeroclaw/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) — these are scoped for newcomers and include context to get moving quickly.
2. **Pick a scope.** Good first contributions include:
- Typo and documentation fixes
- Test additions or improvements
- Small bug fixes with clear reproduction steps
3. **Follow the fork → branch → change → test → PR workflow:**
- Fork the repository and clone your fork
- Create a feature branch (`git checkout -b feat/my-change` or `git checkout -b fix/my-change`)
- Make your changes and run `cargo fmt && cargo clippy && cargo test`
- Open a PR against `master` using the PR template
4. **Start with Track A.** ZeroClaw uses three [collaboration tracks](#collaboration-tracks-risk-based) (A/B/C) based on risk. First-time contributors should target **Track A** (docs, tests, chore) — these require lighter review and are the fastest path to a merged PR.
If you get stuck, open a draft PR early and ask questions in the description.
## Development Setup
```bash
# Clone the repo
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
# Enable the pre-push hook (runs fmt, clippy, tests before every push)
git config core.hooksPath .githooks
# Build
cargo build
# Run tests (all must pass)
cargo test --locked
# Format & lint (required before PR)
./scripts/ci/rust_quality_gate.sh
# Optional strict lint audit (full repo, recommended periodically)
./scripts/ci/rust_quality_gate.sh --strict
# Optional strict lint delta gate (blocks only changed Rust lines)
./scripts/ci/rust_strict_delta_gate.sh
# Optional docs lint gate (blocks only markdown issues on changed lines)
./scripts/ci/docs_quality_gate.sh
# Optional docs links gate (checks only links added on changed lines)
./scripts/ci/docs_links_gate.sh
# Release build
cargo build --release --locked
```
### Pre-push hook
The repo includes a pre-push hook in `.githooks/` that enforces `./scripts/ci/rust_quality_gate.sh` and `cargo test --locked` before every push. Enable it with `git config core.hooksPath .githooks`.
For an opt-in strict lint pass during pre-push, set:
```bash
ZEROCLAW_STRICT_LINT=1 git push
```
For an opt-in strict lint delta pass during pre-push (changed Rust lines only), set:
```bash
ZEROCLAW_STRICT_DELTA_LINT=1 git push
```
For an opt-in docs quality pass during pre-push (changed-line markdown gate), set:
```bash
ZEROCLAW_DOCS_LINT=1 git push
```
For an opt-in docs links pass during pre-push (added-links gate), set:
```bash
ZEROCLAW_DOCS_LINKS=1 git push
```
For full CI parity in Docker, run:
```bash
./dev/ci.sh all
```
To skip the pre-push hook during rapid iteration:
```bash
git push --no-verify
```
> **Note:** CI runs the same checks, so skipped hooks will be caught on the PR.
## Local Secret Management (Required)
ZeroClaw supports layered secret management for local development and CI hygiene.
### Secret Storage Options
1. **Environment variables** (recommended for local development)
- Copy `.env.example` to `.env` and fill in values
- `.env` files are Git-ignored and should stay local
- Best for temporary/local API keys
2. **Config file** (`~/.zeroclaw/config.toml`)
- Persistent setup for long-term use
- When `secrets.encrypt = true` (default), secret values are encrypted before save
- Secret key is stored at `~/.zeroclaw/.secret_key` with restricted permissions
- Use `zeroclaw onboard` for guided setup
### Runtime Resolution Rules
API key resolution follows this order:
1. Explicit key passed from config/CLI
2. Provider-specific env vars (`OPENROUTER_API_KEY`, `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, ...)
3. Generic env vars (`ZEROCLAW_API_KEY`, `API_KEY`)
Provider/model config overrides:
- `ZEROCLAW_PROVIDER` / `PROVIDER`
- `ZEROCLAW_MODEL`
See `.env.example` for practical examples and currently supported provider key env vars.
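To make the precedence concrete, here is a minimal sketch of the lookup order described above. It is illustrative only: the function name and shape are assumptions, and the real resolution lives in ZeroClaw's config/provider code.
```rust
use std::env;

/// Illustrative only: resolves an API key using the documented precedence.
fn resolve_api_key(explicit: Option<&str>, provider_env_var: &str) -> Option<String> {
    // 1. Explicit key passed from config/CLI wins.
    if let Some(key) = explicit.filter(|k| !k.is_empty()) {
        return Some(key.to_string());
    }
    // 2. Provider-specific env var (e.g. OPENAI_API_KEY),
    // 3. then generic fallbacks (ZEROCLAW_API_KEY, then API_KEY).
    [provider_env_var, "ZEROCLAW_API_KEY", "API_KEY"]
        .iter()
        .find_map(|var| env::var(var).ok().filter(|k| !k.is_empty()))
}
```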
### Pre-Commit Secret Hygiene (Mandatory)
Before every commit, verify:
- [ ] No `.env` files are staged (`.env.example` only)
- [ ] No raw API keys/tokens in code, tests, fixtures, examples, logs, or commit messages
- [ ] No credentials in debug output or error payloads
- [ ] `git diff --cached` has no accidental secret-like strings
Quick local audit:
```bash
# Search staged diff for common secret markers
git diff --cached | grep -iE '(api[_-]?key|secret|token|password|bearer|sk-)'
# Confirm no .env file is staged
git status --short | grep -E '\.env$'
```
### Optional Local Secret Scanning
For extra guardrails, install one of:
- **gitleaks**: [GitHub - gitleaks/gitleaks](https://github.com/gitleaks/gitleaks)
- **truffleHog**: [GitHub - trufflesecurity/trufflehog](https://github.com/trufflesecurity/trufflehog)
- **git-secrets**: [GitHub - awslabs/git-secrets](https://github.com/awslabs/git-secrets)
This repo includes `.githooks/pre-commit` to run `gitleaks protect --staged --redact` when gitleaks is installed.
Enable hooks with:
```bash
git config core.hooksPath .githooks
```
If gitleaks is not installed, the pre-commit hook prints a warning and continues.
### What Must Never Be Committed
- `.env` files (use `.env.example` only)
- API keys, tokens, passwords, or credentials (plain or encrypted)
- OAuth tokens or session identifiers
- Webhook signing secrets
- `~/.zeroclaw/.secret_key` or similar key files
- Personal identifiers or real user data in tests/fixtures
### If a Secret Is Committed Accidentally
1. Revoke/rotate the credential immediately
2. Do not rely only on `git revert` (history still contains the secret)
3. Purge history with `git filter-repo` or BFG
4. Force-push cleaned history (coordinate with maintainers)
5. Ensure the leaked value is removed from PR/issue/discussion/comment history
Reference: [GitHub guide: removing sensitive data from a repository](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository)
## Collaboration Tracks (Risk-Based)
To keep review throughput high without lowering quality, every PR should map to one track:
| Track | Typical scope | Required review depth |
|---|---|---|
| **Track A (Low risk)** | docs/tests/chore, isolated refactors, no security/runtime/CI impact | 1 maintainer review + green `CI Required Gate` |
| **Track B (Medium risk)** | providers/channels/memory/tools behavior changes | 1 subsystem-aware review + explicit validation evidence |
| **Track C (High risk)** | `src/security/**`, `src/runtime/**`, `src/gateway/**`, `.github/workflows/**`, access-control boundaries | 2-pass review (fast triage + deep risk review), rollback plan required |
When in doubt, choose the higher track.
## Documentation Optimization Principles
To keep docs useful under high PR volume, we use these rules:
- **Single source of truth**: policy lives in docs, not scattered across PR comments.
- **Decision-oriented content**: every checklist item should directly help accept/reject a change.
- **Risk-proportionate detail**: high-risk paths need deeper evidence; low-risk paths stay lightweight.
- **Side-effect visibility**: document blast radius, failure modes, and rollback before merge.
- **Automation assists, humans decide**: bots triage and label, but merge accountability stays human.
- **Index-first discoverability**: `docs/README.md` is the first entry point for operational documentation.
- **Template-first authoring**: start new operational docs from `docs/contributing/doc-template.md`.
### Documentation System Map
| Doc | Primary purpose | When to update |
|---|---|---|
| `docs/README.md` | canonical docs index and taxonomy | add/remove docs or change documentation ownership/navigation |
| `docs/contributing/doc-template.md` | standard skeleton for new operational documentation | when required sections or documentation quality bar changes |
| `CONTRIBUTING.md` | contributor contract and readiness baseline | contributor expectations or policy changes |
| `docs/contributing/pr-workflow.md` | governance logic and merge contract | workflow/risk/merge gate changes |
| `docs/contributing/reviewer-playbook.md` | reviewer operating checklist | review depth or triage behavior changes |
| `docs/contributing/ci-map.md` | CI ownership and triage entry points | workflow trigger/job ownership changes |
| `docs/ops/network-deployment.md` | runtime deployment and network operating guide | gateway/channel/tunnel/network runtime behavior changes |
| `docs/ops/proxy-agent-playbook.md` | agent-operable proxy runbook and rollback recipes | proxy scope/selector/tooling behavior changes |
## PR Definition of Ready (DoR)
Before requesting review, ensure all of the following are true:
- Scope is focused to a single concern.
- `.github/pull_request_template.md` is fully completed.
- Relevant local validation has been run (`fmt`, `clippy`, `test`, scenario checks).
- Security impact and rollback path are explicitly described.
- No personal/sensitive data is introduced in code/docs/tests/fixtures/logs/examples/commit messages.
- Tests/fixtures/examples use neutral project-scoped wording (no identity-specific or first-person phrasing).
- If identity-like wording is required, use ZeroClaw-centric labels only (for example: `ZeroClawAgent`, `ZeroClawOperator`, `zeroclaw_user`).
- If docs were changed, update `docs/README.md` navigation and reciprocal links with related docs.
- If a new operational doc was added, start from `docs/contributing/doc-template.md` and keep risk/rollback/troubleshooting sections where applicable.
- Linked issue (or rationale for no issue) is included.
## PR Definition of Done (DoD)
A PR is merge-ready when:
- `CI Required Gate` is green.
- Required reviewers approved (including CODEOWNERS paths).
- Risk level matches changed paths (`risk: low/medium/high`).
- User-visible behavior, migration, and rollback notes are complete.
- Follow-up TODOs are explicit and tracked in issues.
- For documentation changes, links and ownership mapping in `CONTRIBUTING.md` and `docs/README.md` are consistent.
## High-Volume Collaboration Rules
When PR traffic is high (especially with AI-assisted contributions), these rules keep quality and throughput stable:
- **One concern per PR**: avoid mixing refactor + feature + infra in one change.
- **Small PRs first**: prefer PR size `XS/S/M`; split large work into stacked PRs.
- **Template is mandatory**: complete every section in `.github/pull_request_template.md`.
- **Explicit rollback**: every PR must include a fast rollback path.
- **Security-first review**: changes in `src/security/`, runtime, gateway, and CI need stricter validation.
- **Risk-first triage**: use labels (`risk: high`, `risk: medium`, `risk: low`) to route review depth.
- **Privacy-first hygiene**: redact/anonymize sensitive payloads and keep tests/examples neutral and project-scoped.
- **Identity normalization**: when identity traits are unavoidable, use ZeroClaw/project-native roles instead of personal or real-world identities.
- **Supersede hygiene**: if your PR replaces an older open PR, add `Supersedes #...` and ask maintainers to close the outdated one.
Full maintainer workflow: [`docs/contributing/pr-workflow.md`](docs/contributing/pr-workflow.md).
CI workflow ownership and triage map: [`docs/contributing/ci-map.md`](docs/contributing/ci-map.md).
Reviewer operating checklist: [`docs/contributing/reviewer-playbook.md`](docs/contributing/reviewer-playbook.md).
## Agent Collaboration Guidance
Agent-assisted contributions are welcome and treated as first-class contributions.
For smoother agent-to-agent and human-to-agent review:
- Keep PR summaries concrete (problem, change, non-goals).
- Include reproducible validation evidence (`fmt`, `clippy`, `test`, scenario checks).
- Add brief workflow notes when automation materially influenced design/code.
- Agent-assisted PRs are welcome, but contributors remain accountable for understanding what the code does and what it could affect.
- Call out uncertainty and risky edges explicitly.
We do **not** require PRs to declare an AI-vs-human line ratio.
Agent implementation playbook lives in [`AGENTS.md`](AGENTS.md).
## Architecture: Trait-Based Pluggability
ZeroClaw's architecture is built on **traits** — every subsystem is swappable. This means contributing a new integration is as simple as implementing a trait and registering it in the factory function.
```
src/
├── providers/ # LLM backends → Provider trait
├── channels/ # Messaging → Channel trait
├── observability/ # Metrics/logging → Observer trait
├── runtime/ # Platform adapters → RuntimeAdapter trait
├── tools/ # Agent tools → Tool trait
├── memory/ # Persistence/brain → Memory trait
└── security/ # Sandboxing → SecurityPolicy
```
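Concretely, "registering it in the factory function" usually means adding one match arm that maps a lowercase key to a boxed trait object. A hedged sketch; the real factory in `src/providers/mod.rs` may differ in name, arguments, and error type:
```rust
pub fn create_provider(name: &str, api_key: Option<&str>) -> anyhow::Result<Box<dyn Provider>> {
    match name {
        // Factory keys stay lowercase and stable (see naming conventions below).
        "your_provider" => Ok(Box::new(your_provider::YourProvider::new(api_key))),
        other => anyhow::bail!("unknown provider: {other}"),
    }
}
```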
## Code Naming Conventions (Required)
Use these defaults unless an existing subsystem pattern clearly overrides them.
- **Rust casing**: modules/files `snake_case`, types/traits/enums `PascalCase`, functions/variables `snake_case`, constants `SCREAMING_SNAKE_CASE`.
- **Domain-first naming**: prefer explicit role names such as `DiscordChannel`, `SecurityPolicy`, `SqliteMemory` over ambiguous names (`Manager`, `Util`, `Helper`).
- **Trait implementers**: keep predictable suffixes (`*Provider`, `*Channel`, `*Tool`, `*Memory`, `*Observer`, `*RuntimeAdapter`).
- **Factory keys**: keep lowercase and stable (`openai`, `discord`, `shell`); avoid adding aliases without migration need.
- **Tests**: use behavior-oriented names (`subject_expected_behavior`) and neutral project-scoped fixtures.
- **Identity-like labels**: if unavoidable, use ZeroClaw-native identifiers only (`ZeroClawAgent`, `zeroclaw_user`, `zeroclaw_node`).
## Architecture Boundary Rules (Required)
Keep architecture extensible and auditable by following these boundaries.
- Extend features via trait implementations + factory registration before considering broad refactors.
- Keep dependency direction contract-first: concrete integrations depend on shared traits/config/util, not on other concrete integrations.
- Avoid cross-subsystem coupling (provider ↔ channel internals, tools mutating security/gateway internals directly, etc.).
- Keep responsibilities single-purpose by module (`agent` orchestration, `channels` transport, `providers` model I/O, `security` policy, `tools` execution, `memory` persistence).
- Introduce shared abstractions only after repeated stable use (rule-of-three) and at least one current caller.
- Treat `src/config/schema.rs` keys as public contract; document compatibility impact, migration steps, and rollback path for changes.
## Naming and Architecture Examples (Bad vs Good)
Use these quick examples to align implementation choices before opening a PR.
### Naming examples
- **Bad**: `Manager`, `Helper`, `doStuff`, `tmp_data`
- **Good**: `DiscordChannel`, `SecurityPolicy`, `send_message`, `channel_allowlist`
- **Bad test name**: `test1` / `works`
- **Good test name**: `allowlist_denies_unknown_user`, `provider_returns_error_on_invalid_model`
- **Bad identity-like label**: `john_user`, `alice_bot`
- **Good identity-like label**: `ZeroClawAgent`, `zeroclaw_user`, `zeroclaw_node`
### Architecture boundary examples
- **Bad**: channel implementation directly imports provider internals to call model APIs.
- **Good**: channel emits normalized `ChannelMessage`; agent/runtime orchestrates provider calls via trait contracts (see the sketch after this list).
- **Bad**: tool mutates gateway/security policy directly from execution path.
- **Good**: tool returns structured `ToolResult`; policy enforcement remains in security/runtime boundaries.
- **Bad**: adding broad shared abstraction before any repeated caller.
- **Good**: keep local logic first; extract shared abstraction only after stable rule-of-three evidence.
- **Bad**: config key changes without migration notes.
- **Good**: config/schema changes include defaults, compatibility impact, migration steps, and rollback guidance.
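As a compact illustration of the "good" channel boundary above, here is a hedged sketch. The function name and the idea of passing pre-extracted text are assumptions; only the `Provider::chat` signature comes from the provider example later in this guide.
```rust
// Sketch only: orchestration calls the provider through its trait contract,
// so the channel never reaches into provider internals.
async fn handle_incoming(
    text: &str,              // already extracted from a normalized ChannelMessage
    provider: &dyn Provider, // trait object; the concrete provider stays hidden
) -> anyhow::Result<String> {
    provider.chat(text, "default-model", 0.7).await
}
```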
## How to Add a New Provider
Create `src/providers/your_provider.rs`:
```rust
use async_trait::async_trait;
use anyhow::Result;
use crate::providers::traits::Provider;
pub struct YourProvider {
api_key: String,
client: reqwest::Client,
}
impl YourProvider {
pub fn new(api_key: Option<&str>) -> Self {
Self {
api_key: api_key.unwrap_or_default().to_string(),
client: reqwest::Client::new(),
}
}
}
#[async_trait]
impl Provider for YourProvider {
async fn chat(&self, message: &str, model: &str, temperature: f64) -> Result<String> {
// Your API call here
todo!()
}
}
```
Then register it in `src/providers/mod.rs`:
```rust
"your_provider" => Ok(Box::new(your_provider::YourProvider::new(api_key))),
```
## How to Add a New Channel
Create `src/channels/your_channel.rs`:
```rust
use async_trait::async_trait;
use anyhow::Result;
use tokio::sync::mpsc;
use crate::channels::traits::{Channel, ChannelMessage};
pub struct YourChannel { /* config fields */ }
#[async_trait]
impl Channel for YourChannel {
fn name(&self) -> &str { "your_channel" }
async fn send(&self, message: &str, recipient: &str) -> Result<()> {
// Send message via your platform
todo!()
}
async fn listen(&self, tx: mpsc::Sender<ChannelMessage>) -> Result<()> {
// Listen for incoming messages, forward to tx
todo!()
}
async fn health_check(&self) -> bool { true }
}
```
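Then register it in the channel factory, mirroring the provider registration above. A sketch, assuming the factory lives in `src/channels/mod.rs` and uses the same match-arm shape:
```rust
"your_channel" => Ok(Box::new(your_channel::YourChannel { /* config fields */ })),
```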
## How to Add a New Observer
Create `src/observability/your_observer.rs`:
```rust
use crate::observability::traits::{Observer, ObserverEvent, ObserverMetric};
pub struct YourObserver { /* client, config, etc. */ }
impl Observer for YourObserver {
fn record_event(&self, event: &ObserverEvent) {
// Push event to your backend
}
fn record_metric(&self, metric: &ObserverMetric) {
// Push metric to your backend
}
fn name(&self) -> &str { "your_observer" }
}
```
## How to Add a New Tool
Create `src/tools/your_tool.rs`:
```rust
use async_trait::async_trait;
use anyhow::Result;
use serde_json::{json, Value};
use crate::tools::traits::{Tool, ToolResult};
pub struct YourTool { /* security policy, config, etc. */ }
#[async_trait]
impl Tool for YourTool {
fn name(&self) -> &str { "your_tool" }
fn description(&self) -> &str { "Does something useful" }
fn parameters_schema(&self) -> Value {
json!({
"type": "object",
"properties": {
"input": { "type": "string", "description": "The input" }
},
"required": ["input"]
})
}
async fn execute(&self, args: Value) -> Result<ToolResult> {
let input = args["input"].as_str()
.ok_or_else(|| anyhow::anyhow!("Missing 'input'"))?;
Ok(ToolResult {
success: true,
output: format!("Processed: {input}"),
error: None,
})
}
}
```
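A quick way to sanity-check a tool is an inline test that drives `execute` directly. A minimal sketch, assuming the `ToolResult` fields and the empty `YourTool` struct shown above:
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[tokio::test]
    async fn your_tool_processes_input() {
        let tool = YourTool { /* security policy, config, etc. */ };
        let result = tool.execute(json!({ "input": "hello" })).await.unwrap();
        assert!(result.success);
        assert_eq!(result.output, "Processed: hello");
    }
}
```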
## Pull Request Checklist
- [ ] PR template sections are completed (including security + rollback)
- [ ] `./scripts/ci/rust_quality_gate.sh` — merge gate formatter/lint baseline passes
- [ ] `cargo test --locked` — all tests pass locally or skipped tests are explained
- [ ] Optional strict audit: `./scripts/ci/rust_quality_gate.sh --strict` (full repo, run when doing lint cleanup or release-hardening work)
- [ ] Optional strict delta audit: `./scripts/ci/rust_strict_delta_gate.sh` (changed Rust lines only, useful for incremental debt control)
- [ ] New code has inline `#[cfg(test)]` tests
- [ ] No new dependencies unless absolutely necessary (we optimize for binary size)
- [ ] README updated if adding user-facing features
- [ ] Follows existing code patterns and conventions
- [ ] Follows code naming conventions and architecture boundary rules in this guide
- [ ] No personal/sensitive data in code/docs/tests/fixtures/logs/examples/commit messages
- [ ] Test names/messages/fixtures/examples are neutral and project-focused
- [ ] Any required identity-like wording uses ZeroClaw/project-native labels only
## Commit Convention
We use [Conventional Commits](https://www.conventionalcommits.org/):
```
feat: add Anthropic provider
feat(provider): add Anthropic provider
fix: path traversal edge case with symlinks
docs: update contributing guide
test: add heartbeat unicode parsing tests
refactor: extract common security checks
chore: bump tokio to 1.43
```
Recommended scope keys in commit titles:
- `provider`, `channel`, `memory`, `security`, `runtime`, `ci`, `docs`, `tests`
## Code Style
- **Minimal dependencies** — every crate adds to binary size
- **Inline tests** — `#[cfg(test)] mod tests {}` at the bottom of each file
- **Trait-first** — define the trait, then implement
- **Security by default** — sandbox everything, allowlist, never blocklist
- **No unwrap in production code** — use `?`, `anyhow`, or `thiserror` (see the sketch below)
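A minimal sketch of the difference, using a hypothetical `parse_port` helper:
```rust
use anyhow::{Context, Result};

// Bad: `value.parse().unwrap()` panics on malformed input.
// Good: propagate a descriptive error the caller can handle with `?`.
fn parse_port(value: &str) -> Result<u16> {
    value
        .parse()
        .with_context(|| format!("invalid gateway port: {value}"))
}
```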
## Reporting Issues
- **Bugs**: Include OS, Rust version, steps to reproduce, expected vs actual
- **Features**: Describe the use case, propose which trait to extend
- **Security**: See [SECURITY.md](SECURITY.md) for responsible disclosure
- **Privacy**: Redact/anonymize all personal data and sensitive identifiers before posting logs/payloads
## Maintainer Merge Policy
- Require passing `CI Required Gate` before merge.
- Require docs quality checks when docs are touched.
- Require review approval for non-trivial changes.
- Require CODEOWNERS review for protected paths.
- Use risk labels to determine review depth, scope labels (`core`, `provider`, `channel`, `security`, etc.) to route ownership, and module labels (`<module>:<component>`, e.g. `channel:telegram`, `provider:kimi`, `tool:shell`) to route subsystem expertise.
- Contributor tier labels are auto-applied on PRs and issues by merged PR count: `experienced contributor` (>=10), `principal contributor` (>=20), `distinguished contributor` (>=50). Treat them as read-only automation labels; manual edits are auto-corrected.
- Prefer squash merge with conventional commit title.
- Revert fast on regressions; re-land with tests.
## License
By contributing, you agree that your contributions will be licensed under the MIT License.

12652
third_party/zeroclaw/Cargo.lock generated vendored Normal file

File diff suppressed because it is too large Load Diff

337
third_party/zeroclaw/Cargo.toml vendored Normal file
View File

@@ -0,0 +1,337 @@
[workspace]
members = [".", "crates/robot-kit", "crates/aardvark-sys", "apps/tauri"]
resolver = "2"
[package]
name = "zeroclawlabs"
version = "0.6.3"
edition = "2021"
authors = ["theonlyhennygod"]
license = "MIT OR Apache-2.0"
description = "Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant."
repository = "https://github.com/zeroclaw-labs/zeroclaw"
readme = "README.md"
keywords = ["ai", "agent", "cli", "assistant", "chatbot"]
categories = ["command-line-utilities", "api-bindings"]
rust-version = "1.87"
include = [
"/src/**/*",
"/build.rs",
"/Cargo.toml",
"/Cargo.lock",
"/LICENSE*",
"/README.md",
"/web/dist/**/*",
"/tool_descriptions/**/*",
]
[[bin]]
name = "zeroclaw"
path = "src/main.rs"
[lib]
name = "zeroclaw"
path = "src/lib.rs"
[dependencies]
# CLI - minimal and fast
clap = { version = "4.5", features = ["derive"] }
clap_complete = "4.5"
# Async runtime - feature-optimized for size
tokio = { version = "1.50", default-features = false, features = ["rt-multi-thread", "macros", "time", "net", "io-util", "sync", "process", "io-std", "fs", "signal"] }
tokio-util = { version = "0.7", default-features = false }
tokio-stream = { version = "0.1.18", default-features = false, features = ["fs", "sync"] }
# HTTP client - minimal features
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls", "blocking", "multipart", "stream", "socks"] }
# Matrix client + E2EE decryption
matrix-sdk = { version = "0.16", optional = true, default-features = false, features = ["e2e-encryption", "rustls-tls", "markdown", "sqlite"] }
# Serialization
serde = { version = "1.0", default-features = false, features = ["derive"] }
serde_json = { version = "1.0", default-features = false, features = ["std"] }
# Config
directories = "6.0"
toml = "1.0"
shellexpand = "3.1"
# JSON Schema generation for config export
schemars = "1.2"
# Logging - minimal
tracing = { version = "0.1", default-features = false }
tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt", "ansi", "env-filter"] }
# Observability - Prometheus metrics (optional; requires AtomicU64, unavailable on 32-bit)
prometheus = { version = "0.14", default-features = false, optional = true }
# Base64 encoding (screenshots, image data)
base64 = "0.22"
image = { version = "0.25", default-features = false, features = ["jpeg", "png"] }
# URL encoding for web search
urlencoding = "2.1"
# HTML to plain text conversion (web_fetch tool)
nanohtml2text = "0.2"
# Optional Rust-native browser automation backend
fantoccini = { version = "0.22.1", optional = true, default-features = false, features = ["rustls-tls"] }
# Progress bars (update pipeline)
indicatif = "0.18"
# Temp files (update pipeline rollback)
tempfile = "3.26"
# Tar/gzip extraction (update pipeline)
flate2 = "1.1"
tar = "0.4"
# Zip extraction for ClawHub / OpenClaw registry installers
zip = { version = "8.1", default-features = false, features = ["deflate-flate2"] }
# Error handling
anyhow = "1.0"
thiserror = "2.0"
# Aardvark I2C/SPI/GPIO USB adapter (Total Phase) — stub when SDK absent
aardvark-sys = { path = "crates/aardvark-sys", version = "0.1.0" }
# UUID generation
uuid = { version = "1.22", default-features = false, features = ["v4", "std"] }
# Authenticated encryption (AEAD) for secret store
chacha20poly1305 = "0.10"
# HMAC for webhook signature verification
hmac = "0.12"
sha2 = "0.10"
hex = "0.4"
# CSPRNG for secure token generation
rand = "0.10"
# Portable atomic fallbacks for targets without native 64-bit atomics
portable-atomic = "1"
# serde-big-array for wa-rs storage (large array serialization)
serde-big-array = { version = "0.5", optional = true }
# Fast mutexes that don't poison on panic
parking_lot = "0.12"
# Async traits
async-trait = "0.1"
# HMAC-SHA256 (Zhipu/GLM JWT auth)
ring = "0.17"
# Protobuf encode/decode (Lark WS frame codec, WhatsApp storage)
prost = { version = "0.14", default-features = false, features = ["derive"], optional = true }
# Memory / persistence
rusqlite = { version = "0.37", features = ["bundled"] }
chrono = { version = "0.4", default-features = false, features = ["clock", "std", "serde"] }
chrono-tz = "0.10"
cron = "0.15"
# Interactive CLI prompts
dialoguer = { version = "0.12", features = ["fuzzy-select"] }
console = "0.16"
# Hardware discovery (device path globbing)
glob = "0.3"
# Binary discovery (init system detection)
which = "8.0"
# WebSocket client channels (Discord/Lark/DingTalk/Nostr)
tokio-tungstenite = { version = "0.29", features = ["rustls-tls-webpki-roots"] }
tokio-socks = "0.5"
futures-util = { version = "0.3", default-features = false, features = ["sink"] }
nostr-sdk = { version = "0.44", default-features = false, features = ["nip04", "nip59"], optional = true }
regex = "1.10"
hostname = "0.4.2"
rustls = "0.23"
rustls-pemfile = "2"
rustls-pki-types = "1.14.0"
tokio-rustls = "0.26.4"
webpki-roots = "1.0.6"
# email
lettre = { version = "0.11.19", default-features = false, features = ["builder", "smtp-transport", "rustls-tls"] }
mail-parser = "0.11.2"
async-imap = { version = "0.11", features = ["runtime-tokio"], default-features = false }
# HTTP server (gateway) — replaces raw TCP for proper HTTP/1.1 compliance
axum = { version = "0.8", default-features = false, features = ["http1", "json", "tokio", "query", "ws", "macros"] }
hyper = { version = "1", features = ["http1", "server"] }
hyper-util = { version = "0.1", features = ["tokio", "server-auto", "server-graceful"] }
tower = { version = "0.5", default-features = false, features = ["util"] }
tower-http = { version = "0.6", default-features = false, features = ["limit", "timeout"] }
http-body-util = "0.1"
# Embed frontend assets into binary (web dashboard)
rust-embed = "8"
mime_guess = "2"
# OpenTelemetry — OTLP trace + metrics export.
# Use the blocking HTTP exporter client to avoid Tokio-reactor panics in
# OpenTelemetry background batch threads when ZeroClaw emits spans/metrics from
# non-Tokio contexts.
opentelemetry = { version = "0.31", default-features = false, features = ["trace", "metrics"], optional = true }
opentelemetry_sdk = { version = "0.31", default-features = false, features = ["trace", "metrics"], optional = true }
opentelemetry-otlp = { version = "0.31", default-features = false, features = ["trace", "metrics", "http-proto", "reqwest-blocking-client", "reqwest-rustls-webpki-roots"], optional = true }
# Serial port for peripheral communication (STM32, etc.)
tokio-serial = { version = "5", default-features = false, optional = true }
# USB device enumeration (hardware discovery) — only on platforms nusb supports
# (Linux, macOS, Windows). Android/Termux uses target_os="android" and is excluded.
[target.'cfg(any(target_os = "linux", target_os = "macos", target_os = "windows"))'.dependencies]
nusb = { version = "0.2", default-features = false, optional = true }
# probe-rs for STM32/Nucleo memory read (Phase B)
probe-rs = { version = "0.31", optional = true }
# PDF extraction for datasheet RAG (optional, enable with --features rag-pdf)
pdf-extract = { version = "0.10", optional = true }
# WASM plugin runtime (extism)
extism = { version = "1.20", optional = true }
# Cross-platform audio capture for voice wake word detection (optional, enable with --features voice-wake)
cpal = { version = "0.15", optional = true }
# Terminal QR rendering for WhatsApp Web pairing flow.
qrcode = { version = "0.14", optional = true }
# WhatsApp Web client (wa-rs) — optional, enable with --features whatsapp-web
# Uses wa-rs for Bot and Client and wa-rs-core for storage traits; a custom rusqlite backend avoids a Diesel conflict.
wa-rs = { version = "0.2", optional = true, default-features = false }
wa-rs-core = { version = "0.2", optional = true, default-features = false }
wa-rs-binary = { version = "0.2", optional = true, default-features = false }
wa-rs-proto = { version = "0.2", optional = true, default-features = false }
wa-rs-ureq-http = { version = "0.2", optional = true }
wa-rs-tokio-transport = { version = "0.2", optional = true, default-features = false }
# Raspberry Pi GPIO / Landlock (Linux only) — target-specific to avoid compile failure on macOS
[target.'cfg(target_os = "linux")'.dependencies]
rppal = { version = "0.22", optional = true }
landlock = { version = "0.4", optional = true }
# Unix-specific dependencies (for root check, etc.)
[target.'cfg(unix)'.dependencies]
libc = "0.2"
[features]
default = ["observability-prometheus", "skill-creation"]
channel-nostr = ["dep:nostr-sdk"]
hardware = ["nusb", "tokio-serial"]
channel-matrix = ["dep:matrix-sdk"]
channel-lark = ["dep:prost"]
channel-feishu = ["channel-lark"] # Alias for Feishu users (Lark and Feishu are the same platform)
observability-prometheus = ["dep:prometheus"]
observability-otel = ["dep:opentelemetry", "dep:opentelemetry_sdk", "dep:opentelemetry-otlp"]
peripheral-rpi = ["rppal"]
# Browser backend feature alias used by cfg(feature = "browser-native")
browser-native = ["dep:fantoccini"]
# Backward-compatible alias for older invocations
fantoccini = ["browser-native"]
# Sandbox feature aliases used by cfg(feature = "sandbox-*")
sandbox-landlock = ["dep:landlock"]
sandbox-bubblewrap = []
# Backward-compatible alias for older invocations
landlock = ["sandbox-landlock"]
# Prometheus metrics observer (requires 64-bit atomics; disable on 32-bit targets)
metrics = ["observability-prometheus"]
# probe = probe-rs for Nucleo memory read (adds ~50 deps; optional)
probe = ["dep:probe-rs"]
# rag-pdf = PDF ingestion for datasheet RAG
rag-pdf = ["dep:pdf-extract"]
# skill-creation = Autonomous skill creation from successful multi-step tasks
skill-creation = []
# whatsapp-web = Native WhatsApp Web client with custom rusqlite storage backend
whatsapp-web = ["dep:wa-rs", "dep:wa-rs-core", "dep:wa-rs-binary", "dep:wa-rs-proto", "dep:wa-rs-ureq-http", "dep:wa-rs-tokio-transport", "dep:serde-big-array", "dep:prost", "dep:qrcode"]
# voice-wake = Voice wake word detection via microphone (cpal)
voice-wake = ["dep:cpal"]
# WASM plugin system (extism-based)
plugins-wasm = ["dep:extism"]
# WebAuthn / FIDO2 hardware key authentication
webauthn = []
# Meta-feature for CI: all features except those requiring system C libraries
# not available on standard CI runners (e.g., voice-wake needs libasound2-dev).
ci-all = [
"channel-nostr",
"hardware",
"channel-matrix",
"channel-lark",
"observability-prometheus",
"observability-otel",
"peripheral-rpi",
"browser-native",
"sandbox-landlock",
"sandbox-bubblewrap",
"probe",
"rag-pdf",
"skill-creation",
"whatsapp-web",
"plugins-wasm",
]
[profile.release]
opt-level = "z" # Optimize for size
lto = "fat" # Maximum cross-crate optimization for smaller binaries
codegen-units = 1 # Serialized codegen for low-memory devices (e.g., Raspberry Pi 3 with 1GB RAM)
# Higher values (e.g., 8) compile faster but require more RAM during compilation
strip = true # Remove debug symbols
panic = "abort" # Reduce binary size
[profile.release-fast]
inherits = "release"
codegen-units = 8 # Parallel codegen for faster builds on powerful machines (16GB+ RAM recommended)
# Use: cargo build --profile release-fast
[profile.ci]
inherits = "release"
lto = "thin" # Much faster than fat LTO; still catches release-mode issues
codegen-units = 16 # Full parallelism for CI runners
[profile.dist]
inherits = "release"
opt-level = "z"
lto = "fat"
codegen-units = 1
strip = true
panic = "abort"
[dev-dependencies]
tempfile = "3.26"
criterion = { version = "0.8", features = ["async_tokio"] }
wiremock = "0.6"
scopeguard = "1.2"
rcgen = "0.13"
[[test]]
name = "component"
path = "tests/test_component.rs"
[[test]]
name = "integration"
path = "tests/test_integration.rs"
[[test]]
name = "system"
path = "tests/test_system.rs"
[[test]]
name = "live"
path = "tests/test_live.rs"
[[bench]]
name = "agent_benchmarks"
harness = false

152
third_party/zeroclaw/Dockerfile vendored Normal file
View File

@@ -0,0 +1,152 @@
# syntax=docker/dockerfile:1.7
# ── Stage 0: Frontend build ─────────────────────────────────────
FROM node:22-alpine AS web-builder
WORKDIR /web
COPY web/package.json web/package-lock.json* ./
RUN npm ci --ignore-scripts 2>/dev/null || npm install --ignore-scripts
COPY web/ .
RUN npm run build
# ── Stage 1: Build ────────────────────────────────────────────
FROM rust:1.94-slim@sha256:da9dab7a6b8dd428e71718402e97207bb3e54167d37b5708616050b1e8f60ed6 AS builder
WORKDIR /app
ARG ZEROCLAW_CARGO_FEATURES="channel-lark,whatsapp-web"
# Install build dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
pkg-config \
&& rm -rf /var/lib/apt/lists/*
# 1. Copy manifests to cache dependencies
COPY Cargo.toml Cargo.lock ./
# Include every workspace member: Cargo.lock is generated for the full workspace.
# Previously we used sed to drop `crates/robot-kit`, which made the manifest disagree
# with the lockfile and caused `cargo --locked` to fail (Cargo refused to rewrite the lock).
COPY crates/robot-kit/ crates/robot-kit/
COPY crates/aardvark-sys/ crates/aardvark-sys/
# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches \
&& echo "fn main() {}" > src/main.rs \
&& echo "" > src/lib.rs \
&& echo "fn main() {}" > benches/agent_benchmarks.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then \
cargo build --release --locked --features "$ZEROCLAW_CARGO_FEATURES"; \
else \
cargo build --release --locked; \
fi
RUN rm -rf src benches
# 2. Copy only build-relevant source paths (avoid cache-busting on docs/tests/scripts)
COPY src/ src/
COPY benches/ benches/
COPY --from=web-builder /web/dist web/dist
COPY *.rs ./
RUN touch src/main.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
rm -rf target/release/.fingerprint/zeroclawlabs-* \
target/release/deps/zeroclawlabs-* \
target/release/incremental/zeroclawlabs-* && \
if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then \
cargo build --release --locked --features "$ZEROCLAW_CARGO_FEATURES"; \
else \
cargo build --release --locked; \
fi && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw
RUN size=$(stat -c%s /app/zeroclaw) && \
if [ "$size" -lt 1000000 ]; then echo "ERROR: binary too small (${size} bytes), likely dummy build artifact" && exit 1; fi
# Prepare runtime directory structure and default config inline (no extra stage)
RUN mkdir -p /zeroclaw-data/.zeroclaw /zeroclaw-data/workspace && \
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
'require_pairing = false' \
'' \
'[autonomy]' \
'level = "supervised"' \
'auto_approve = ["file_read", "file_write", "file_edit", "memory_recall", "memory_store", "web_search_tool", "web_fetch", "calculator", "glob_search", "content_search", "image_info", "weather", "git_operations"]' \
> /zeroclaw-data/.zeroclaw/config.toml && \
chown -R 65534:65534 /zeroclaw-data
# ── Stage 2: Development Runtime (Debian) ────────────────────
FROM debian:trixie-slim@sha256:f6e2cfac5cf956ea044b4bd75e6397b4372ad88fe00908045e9a0d21712ae3ba AS dev
# Install essential runtime dependencies only (use docker-compose.override.yml for dev tools)
RUN apt-get update && apt-get install -y \
ca-certificates \
curl \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /zeroclaw-data /zeroclaw-data
COPY --from=builder /app/zeroclaw /usr/local/bin/zeroclaw
# Overwrite minimal config with DEV template (Ollama defaults)
COPY dev/config.template.toml /zeroclaw-data/.zeroclaw/config.toml
RUN chown 65534:65534 /zeroclaw-data/.zeroclaw/config.toml
# Environment setup
# Ensure UTF-8 locale so CJK / multibyte input is handled correctly
ENV LANG=C.UTF-8
# Use consistent workspace path
ENV ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace
ENV HOME=/zeroclaw-data
# Defaults for local dev (Ollama) - matches config.template.toml
ENV PROVIDER="ollama"
ENV ZEROCLAW_MODEL="llama3.2"
ENV ZEROCLAW_GATEWAY_PORT=42617
# Note: API_KEY is intentionally NOT set here to avoid confusion.
# It is set in config.toml as the Ollama URL.
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
HEALTHCHECK --interval=60s --timeout=10s --retries=3 --start-period=10s \
CMD ["zeroclaw", "status", "--format=exit-code"]
ENTRYPOINT ["zeroclaw"]
CMD ["daemon"]
# ── Stage 3: Production Runtime (Distroless) ─────────────────
FROM gcr.io/distroless/cc-debian13:nonroot@sha256:84fcd3c223b144b0cb6edc5ecc75641819842a9679a3a58fd6294bec47532bf7 AS release
COPY --from=builder /app/zeroclaw /usr/local/bin/zeroclaw
COPY --from=builder /zeroclaw-data /zeroclaw-data
# Environment setup
# Ensure UTF-8 locale so CJK / multibyte input is handled correctly
ENV LANG=C.UTF-8
ENV ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace
ENV HOME=/zeroclaw-data
# Default provider and model are set in config.toml, not here,
# so config file edits are not silently overridden
#ENV PROVIDER=
ENV ZEROCLAW_GATEWAY_PORT=42617
# API_KEY must be provided at runtime!
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
HEALTHCHECK --interval=60s --timeout=10s --retries=3 --start-period=10s \
CMD ["zeroclaw", "status", "--format=exit-code"]
ENTRYPOINT ["zeroclaw"]
CMD ["daemon"]

Some files were not shown because too many files have changed in this diff Show More