Compare commits
10 commits: `9979b1fdd0` ... `d315c13f66`
| Author | SHA1 | Date |
|---|---|---|
| | d315c13f66 | |
| | bae0e452a5 | |
| | 0e3af5a391 | |
| | 11c0b0fc70 | |
| | b90955d1b5 | |
| | d256643208 | |
| | 4d9b17f975 | |
| | 16b65c01cf | |
| | ff0771a83f | |
| | bca5b75801 | |
AGENTS.md (14 lines changed)

```diff
@@ -1,17 +1,17 @@
 # Repository Guidelines
 
 ## Project Structure & Module Organization
-`docs/` is the main source of product, architecture, integration, and team-process documentation. Keep active engineering documents in `docs/*.md`; presentation exports belong under `docs/archive/领导演示资料/`. `frontend/sgClaw验证/` contains the only active runnable UI: a Vue 2 verification page (`index.html`, `index.vue`) plus helper scripts (`serve.sh`, `download-libs.sh`, `testRunner.js`). `frontend/README.md` and `docs/README.md` describe what is active versus archived.
+`docs/` is the main source of product, architecture, integration, and team-process documentation. Keep active engineering documents in `docs/*.md`; presentation exports belong under `docs/archive/领导演示资料/`. `frontend/archive/sgClaw验证-已归档/` contains the historical Vue 2 verification page (`index.html`, `index.vue`) plus helper scripts (`serve.sh`, `download-libs.sh`, `testRunner.js`). `frontend/README.md` and `docs/README.md` describe what is active versus archived.
 
 ## Build, Test, and Development Commands
 There is no formal build system in the repository today. Use the local verification page directly:
 
-- `bash frontend/sgClaw验证/serve.sh`
+- `bash frontend/archive/sgClaw验证-已归档/serve.sh`
   Starts a local HTTP server on port `8080` by default.
-- `bash frontend/sgClaw验证/serve.sh 9090`
+- `bash frontend/archive/sgClaw验证-已归档/serve.sh 9090`
   Serves the verification page on a custom port.
-- `bash frontend/sgClaw验证/download-libs.sh`
-  Downloads Vue 2.6.14 and Element UI assets into `frontend/sgClaw验证/lib/` for offline use.
+- `bash frontend/archive/sgClaw验证-已归档/download-libs.sh`
+  Downloads Vue 2.6.14 and Element UI assets into `frontend/archive/sgClaw验证-已归档/lib/` for offline use.
 
 Open `http://localhost:8080/index.html` after starting the server.
 
@@ -19,10 +19,10 @@ Open `http://localhost:8080/index.html` after starting the server.
 Match the existing style in each file. Frontend code uses 2-space indentation, semicolon-free JavaScript, and simple Vue 2 patterns. Shell scripts should stay Bash-compatible, include `set -e`, and keep usage notes at the top. Preserve existing Chinese file names and domain terminology; add new docs with concise, descriptive names such as `L5-xxx.md` or `xxx_printable.md` when extending the documentation set.
 
 ## Testing Guidelines
-Testing is currently manual and centered on `frontend/sgClaw验证/testRunner.js`. Validate changes by serving the page, running the relevant verification flows, and recording whether the change affects external API checks, internal browser integration checks, or end-to-end scenarios. If a change touches archived presentation assets, verify links and exported files still open correctly.
+Testing is currently manual and centered on `frontend/archive/sgClaw验证-已归档/testRunner.js`. Validate changes by serving the page, running the relevant verification flows, and recording whether the change affects external API checks, internal browser integration checks, or end-to-end scenarios. If a change touches archived presentation assets, verify links and exported files still open correctly.
 
 ## Commit & Pull Request Guidelines
-Git history currently contains only `first commit`, so no strong convention is established yet. Use short imperative commit subjects, for example `docs: update browser integration notes` or `frontend: adjust verification report layout`. PRs should include a clear summary, affected paths, manual validation steps, and screenshots when `frontend/sgClaw验证/` UI output changes. Link related docs or issues when the change updates architecture or process guidance.
+Git history currently contains only `first commit`, so no strong convention is established yet. Use short imperative commit subjects, for example `docs: update browser integration notes` or `frontend: adjust verification report layout`. PRs should include a clear summary, affected paths, manual validation steps, and screenshots when `frontend/archive/sgClaw验证-已归档/` UI output changes. Link related docs or issues when the change updates architecture or process guidance.
 
 ## Security & Configuration Tips
 Do not commit real API keys. The verification page expects runtime globals such as `window.__SGCLAW_TEST_OPENAI_KEY__` and `window.__SGCLAW_TEST_CLAUDE_KEY__`; keep them in local test-only setup, not tracked files.
```
Cargo.lock (generated, 2118 lines changed): file diff suppressed because it is too large.
```diff
@@ -4,6 +4,10 @@ version = "0.1.0"
 edition = "2021"
 
 [dependencies]
+anyhow = "1"
+async-trait = "0.1"
+chrono = { version = "0.4", default-features = false, features = ["clock"] }
+futures-util = "0.3"
 hex = "0.4"
 hmac = "0.12"
 reqwest = { version = "0.12", default-features = false, features = ["blocking", "json", "rustls-tls"] }
@@ -11,4 +15,6 @@ serde = { version = "1", features = ["derive"] }
 serde_json = "1"
 sha2 = "0.10"
 thiserror = "1"
+tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "macros"] }
 uuid = { version = "1", features = ["v4"] }
+zeroclaw = { package = "zeroclawlabs", path = "third_party/zeroclaw", default-features = false }
```
README.md (32 lines changed)

````diff
@@ -4,16 +4,38 @@ sgClaw project repository.
 
 ## Current project shape
 
-- `src/`: minimal integration-ready Rust implementation, covering the pipe protocol, the handshake, `BrowserPipeTool`, and the MAC Policy.
+- `src/`: minimal Rust agent implementation, covering the pipe protocol, the handshake, `BrowserPipeTool`, the rule-based planner, the DeepSeek provider, and a minimal agent runtime.
-- `tests/`: protocol, handshake, tool, and JSON Line integration tests.
+- `tests/`: protocol, handshake, tool, planner, runtime, and JSON Line integration tests.
 - `resources/rules.json`: local security-policy whitelist.
-- `docs/`: project architecture, integration protocol, and team onboarding documents.
+- `docs/`: mainline product documentation (architecture, implementation, delivery, interfaces) plus the archive index.
-- `frontend/sgClaw验证/`: local verification page and helper scripts.
+- `frontend/archive/sgClaw验证-已归档/`: historical local verification page and scripts (archived, reference only).
 
 ## Common commands
 
 ```bash
 cargo test
+cargo test --test planner_test -q
+cargo test --test agent_runtime_test -q
+node --test tools/browser_smoke/fake_deepseek_server.test.mjs
+node tools/browser_smoke/run_deepseek_browser_smoke.mjs
 cargo run
-bash frontend/sgClaw验证/serve.sh
+bash frontend/archive/sgClaw验证-已归档/serve.sh
 ```
 
+## Browser-side DeepSeek smoke
+
+On top of an already working SuperRPA browser build directory, the following combination verifies that the browser-side `sgclaw` really goes through the ZeroClaw/DeepSeek compat runtime instead of falling back to the local planner:
+
+```bash
+python3 /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py \
+  --manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
+  --out /home/zyl/projects/superRpa/src/out/KylinRelease/sgclaw
+
+node tools/browser_smoke/run_deepseek_browser_smoke.mjs
+```
+
+The wrapper will:
+
+- start a local fake DeepSeek service
+- inject `DEEPSEEK_API_KEY` / `DEEPSEEK_BASE_URL` / `DEEPSEEK_MODEL`
+- invoke the existing `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs`
+- after the smoke passes, additionally confirm that the fake service actually received both the Baidu and the Zhihu provider request groups
````
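The fake-service idea behind the smoke wrapper can be sketched in a few lines. This is an illustrative Python stand-in, not the repository's Node implementation in `tools/browser_smoke/fake_deepseek_server.test.mjs`; the endpoint path and reply shape follow the usual OpenAI-style chat-completions convention that DeepSeek-compatible clients expect, which is an assumption here:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # records every request body the fake service sees


class FakeDeepSeek(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        received.append(body)
        # Minimal OpenAI-compatible chat completion reply.
        reply = {
            "choices": [{"message": {"role": "assistant", "content": "ok"}}],
            "model": body.get("model", "deepseek-chat"),
        }
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep smoke output quiet
        pass


def start_fake_server():
    # Bind port 0 so the OS picks a free port; return server and base URL.
    server = HTTPServer(("127.0.0.1", 0), FakeDeepSeek)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, f"http://127.0.0.1:{server.server_address[1]}"
```

A wrapper in this style would export the returned URL as `DEEPSEEK_BASE_URL`, run the browser smoke, and then assert on `received` to confirm the runtime really called out instead of using the local planner.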
````diff
@@ -1,474 +1,129 @@
 # L0 — Product Whitepaper and Capability Panorama Layer
 
-**Document version**: 1.0
+**Document version**: 2.0
-**Applies to**: sgClaw (the AI agent foundation of the "Business-Data Fusion Platform")
+**Applies to**: sgClaw (ZeroClaw refactored edition)
-**Date**: 2026-03-03
+**Date**: 2026-03-26
 
 ---
 
-## 1. Product Positioning
+## 1. Product Definition
 
-sgClaw is an **AI-driven intelligent agent platform** for State Grid's "Business-Data Fusion Platform". It is not a standalone application; it is embedded as a core capability inside the SuperRPA customized Chromium browser kernel and activated with one click from a control button in the browser Side Panel.
+sgClaw is a browser-agent execution kernel embedded in an enterprise browser runtime. Its job is not to replace the whole platform, nor to promise a "fully automated digital employee", but to turn natural-language tasks into controlled browser operations and execute them on pages through the existing browser host.
 
-Users only need to describe their business intent in natural language; sgClaw autonomously understands the instruction, plans execution steps, and completes cross-system operations in complex business systems such as ERP, OA, finance, HR, and legal affairs, **without writing any code**.
+After the ZeroClaw refactor, sgClaw's product shape can be summarized as three things:
 
-> **Core metaphor: a digital employee who can think, can learn, and never makes mistakes.**
+1. Route user tasks into a unified agent execution entry point.
+2. Translate intent into browser commands through the fixed `browser_action` tool.
+3. Complete auditable page operations under protocol, domain, and action-whitelist constraints.
 
-sgClaw initiates operations from the browser kernel layer, identical to real user behavior and unrecognizable to anti-automation mechanisms, fundamentally solving the industry pain point of external RPA tools being detected and blocked.
+The sgClaw in this repository is not a complete frontend product, nor the browser distribution itself; it is the product core of "browser Agent Runtime + pipe protocol + ZeroClaw compatibility layer".
 
-```
-┌─────────────────────────────────────────────────────────────────┐
-│ SuperRPA 定制 Chromium 浏览器 │
-│ │
-│ ┌──────────────────────┐ ┌────────────────────────────────┐ │
-│ │ 浏览器主窗口 │ │ Side Panel 控制区 │ │
-│ │ │ │ │ │
-│ │ ┌────────────────┐ │ │ ┌──────────────────────────┐ │ │
-│ │ │ ERP / OA / │ │ │ │ [启动 Agent] [停止] │ │ │
-│ │ │ 财务 / HR 等 │ │ │ │ │ │ │
-│ │ │ 业务系统页面 │ │ │ │ 指令输入: │ │ │
-│ │ │ │ │ │ │ "导出本月合规报表" │ │ │
-│ │ │ │ │ │ │ │ │ │
-│ │ └────────────────┘ │ │ │ ▼ 任务进度 │ │ │
-│ │ ▲ │ │ │ ████████░░ 80% │ │ │
-│ │ │ 内核级操作 │ │ │ │ │ │
-│ │ │ │ │ │ ✓ 已登录 ERP │ │ │
-│ │ ┌──────┴─────────┐ │ │ │ ✓ 已导出财务报表 │ │ │
-│ │ │ sgClaw 引擎 │◄─┼────┼──│ ► 正在导出合规报表... │ │ │
-│ │ │ (Rust Binary) │ │ │ │ │ │ │
-│ │ └────────────────┘ │ │ └──────────────────────────┘ │ │
-│ └──────────────────────┘ └────────────────────────────────┘ │
-└─────────────────────────────────────────────────────────────────┘
-```
 
 ---
 
-## 2. Industry Pain Points
+## 2. Product Boundary After the Refactor
 
-Business operations at State Grid and large central SOEs depend heavily on many cooperating information systems. Front-line staff switch between 5 to 10+ systems every day, manually moving data, and face the following core pain points:
+### 2.1 Capabilities already landed
 
-### 2.1 Low efficiency
+- The browser side talks to the Rust process over an STDIO JSON Line protocol.
+- Startup performs the `init -> init_ack` handshake and establishes a session-level HMAC key.
+- Task input goes uniformly through the `submit_task` message.
+- The Rust side supports two execution paths:
+  - with no LLM configured, the repository's built-in planner/fallback logic is used.
+  - with `DEEPSEEK_*` environment variables configured, it switches to the ZeroClaw compatibility runtime.
+- The effective tool surface has converged to a single tool: `browser_action`.
+- Only 4 actions are actually open to the model today: `click`, `type`, `navigate`, `getText`.
+- All browser actions are constrained by the domain and action whitelists in `resources/rules.json`.
+- During execution, structured logs and the final task result are sent to the host.
````
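The wire format described by the new bullet list can be sketched in a few lines of Python. This is a hedged sketch, not the repository's Rust implementation: the exact field names (`type`, `session_id`, `hmac_seed`) and the key-derivation rule are assumptions; only the overall shape (one JSON object per line, `init -> init_ack`, a session-level HMAC key, `submit_task` for input) follows the description above.

```python
import hashlib
import hmac
import json


def encode_line(msg: dict) -> bytes:
    # JSON Line framing: one JSON object per newline-terminated line.
    return (json.dumps(msg, separators=(",", ":")) + "\n").encode()


def derive_session_key(seed: str, session_id: str) -> bytes:
    # Hypothetical derivation: session key = HMAC-SHA256(seed, session_id).
    return hmac.new(seed.encode(), session_id.encode(), hashlib.sha256).digest()


def sign(key: bytes, msg: dict) -> dict:
    # Attach an HMAC over the canonical JSON form of the message.
    payload = json.dumps(msg, separators=(",", ":"), sort_keys=True).encode()
    return {**msg, "mac": hmac.new(key, payload, hashlib.sha256).hexdigest()}


# Handshake: the host sends init, the agent answers init_ack;
# both sides can now derive the same session key from the seed.
init = {"type": "init", "version": 1, "session_id": "s-1", "hmac_seed": "seed"}
key = derive_session_key(init["hmac_seed"], init["session_id"])
init_ack = {"type": "init_ack", "version": 1, "session_id": "s-1"}

# Task input then goes through a signed submit_task message.
task = sign(key, {"type": "submit_task", "task": "open the ERP home page"})
```

Each line such as `encode_line(task)` would be written to the agent's stdin; `log_entry` and `task_complete` lines come back on stdout in the same framing.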
````diff
 
-Front-line staff repeatedly log in to, switch between, and manually re-enter data across ERP, OA, financial control, HR, legal, marketing, and other systems. A single cross-system operation (such as filing a compliance lead) takes **15-30 minutes** on average and involves cross-checking data across **3-5 systems**. These repetitive operations add up to tens of thousands of person-hours per year.
+### 2.2 Capabilities explicitly not claimed today
 
-### 2.2 Human error
+The items below appeared in older documents largely as planning language; they are not implemented facts in the current repository:
 
-Manual cross-system data transfer is highly error-prone. In financial compliance scenarios, one mistyped digit can cause audit anomalies and compliance risk. Industry statistics put the **error rate of manual cross-system operations at roughly 2%-5%**, and it rises further during high-pressure month-end closing.
+- A standalone Skill repository and Skill script execution engine.
+- Full MCP tool access and multi-tool orchestration.
+- A standalone Critic/Circuit Breaker subsystem.
+- A complete browser Side Panel product UI.
+- All 40+ page actions opened on the agent side.
+- Production-grade multi-tenancy, an audit backend, and a task orchestration center.
 
-### 2.3 High training cost
+These can remain future extension directions, but they should no longer be written into L0-L4 as descriptions of the current state.
 
-New employees need **3-6 months** to master the workflows and business rules of multiple systems. With frequent staff rotation, training costs multiply, and experience is hard to retain and pass on.
 
-### 2.4 Compliance risk
 
-Manual operations lack a complete audit trail, making it hard to trace afterwards "who did what, to which system, at what time". Under ever-stricter internal-control and compliance requirements, this is a significant institutional risk.
 
-### 2.5 Repetitive labor
 
-Research shows that **about 80%** of front-line cross-system operations are repetitive work with clear rules and fixed flows. This work should be carried by automation, but because of system barriers and technical limits it has long depended on people.
 
-### 2.6 Limits of traditional RPA
 
-External RPA tools (UiPath, BluePrism, etc.) drive the browser via screen scraping and simulated clicks, with fundamental flaws:
 
-- **Easily detected**: anti-automation mechanisms recognize WebDriver, Selenium, and other injection traces
-- **Blocked by systems**: more and more business systems deploy bot detection that simply blocks RPA operations
-- **Requires specialist scripts**: every flow needs a purpose-built automation script, which is costly to maintain
-- **Environment-dependent**: highly sensitive to OS version, screen resolution, and system UI changes
 
 ---
 
-## 3. Core Capability Matrix
+## 3. Product Value Proposition
 
-| Capability | Description | Key metrics |
-|---------|---------|---------|
-| **Natural-language driven** | Users describe business intent in natural language (Chinese); the agent understands, decomposes, plans, and executes autonomously | Supports complex multi-step instructions; intent-recognition accuracy > 95% |
-| **Kernel-level covert operation** | DOM operations and event dispatch initiated from the browser kernel, technically identical to real user behavior | 100% pass rate against anti-automation detection, zero injection traces |
-| **Self-evolving learning** | Every successful operation sequence is distilled into a Skill and reused for similar tasks without re-reasoning | Skill reuse rate keeps rising with usage |
-| **Three-layer security defense** | Pipeline protocol security + Rust command validation + C++ kernel MAC | Defense in depth; any single layer can block an illegal operation on its own |
-| **Skill repository** | Pre-built skill packs covering financial compliance, risk control, marketing, HR, and legal domains | Works out of the box; custom extension supported |
-| **Multi-model support** | Supports Claude, GPT-series, and local models (Qwen, ChatGLM, etc.), switchable by security level | Zero-code model switching; response latency < 2s |
-| **Cross-platform** | Native support for Linux (Kylin V10) and Windows, meeting domestic-platform requirements | Fully compatible with xinchuang environments |
-| **Extremely lightweight** | Agent engine written in Rust with a tiny footprint | ~5MB memory, cold start < 10ms |
+The post-refactor sgClaw's core value is not feature stacking; it is converging previously scattered browser-automation capability into one controllable, replaceable, verifiable agent execution foundation.
 
-```
-┌─────────────────────────────────────────────────────────────┐
-│ sgClaw 核心能力全景图 │
-├─────────────────────────────────────────────────────────────┤
-│ │
-│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
-│ │ 自然语言 │ │ 自进化学习 │ │ 多模型适配 │ │
-│ │ 理解与规划 │ │ Skill 沉淀 │ │ Claude/GPT/Qwen │ │
-│ └──────┬──────┘ └──────┬──────┘ └──────────┬──────────┘ │
-│ │ │ │ │
-│ ▼ ▼ ▼ │
-│ ┌──────────────────────────────────────────────────────┐ │
-│ │ sgClaw Agent 引擎 (Rust) │ │
-│ │ 内存 ~5MB | 冷启动 < 10ms │ │
-│ └───────────────────────┬──────────────────────────────┘ │
-│ │ │
-│ ┌────────────────┼────────────────┐ │
-│ ▼ ▼ ▼ │
-│ ┌────────────┐ ┌─────────────┐ ┌────────────────┐ │
-│ │ Pipeline │ │ Rust 命令 │ │ C++ 内核 MAC │ │
-│ │ 协议层安全 │ │ 验证层 │ │ 强制访问控制 │ │
-│ └────────────┘ └─────────────┘ └────────────────┘ │
-│ │ │ │ │
-│ └────────────────┼────────────────┘ │
-│ ▼ │
-│ ┌──────────────────────────────────────────────────────┐ │
-│ │ 内核级隐蔽操作 (Chromium C++ 层) │ │
-│ │ DOM 操作 · 事件派发 · 与真实用户行为完全一致 │ │
-│ └──────────────────────────────────────────────────────┘ │
-│ │ │
-│ ┌────────────────┼────────────────┐ │
-│ ▼ ▼ ▼ │
-│ ┌────────────┐ ┌─────────────┐ ┌────────────────┐ │
-│ │ Skill 仓库 │ │ 跨平台支持 │ │ 全链路审计 │ │
-│ │ 业务技能包 │ │ 麒麟/Windows │ │ trace_id 追溯 │ │
-│ └────────────┘ └─────────────┘ └────────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────┘
-```
+### 3.1 For the business side
+
+- Trigger browser tasks with natural language instead of exposing low-level page commands directly.
+- A unified task entry point lowers the barrier to using page automation.
+- The execution path carries logs, result reporting, and protocol constraints, so it folds into business processes.
+
+### 3.2 For the integration side
+
+- The browser host only needs to implement a fixed protocol; it does not have to understand model internals.
+- The Agent Runtime can swap its implementation strategy while keeping the host protocol.
+- The ZeroClaw compatibility layer reserves the upgrade path for future models, memory, and tool scheduling on the Rust side.
+
+### 3.3 For the security side
+
+- Not "the model may operate the browser arbitrarily" but "the model may only invoke permitted actions".
+- The security boundary is enforced up front in the protocol and the MAC Policy, not left to prompts.
+- Domain, action, and HMAC controls together form the minimal trusted execution surface.
 
 ---
 
-## 4. Typical Business Scenarios
+## 4. Capability Panorama
 
-### 4.1 Financial compliance
 
-**Example**: compliance lead filing and cross-checking
 
-User instruction: *"Cross-check this month's abnormal transactions in ERP against the compliance rules in the financial control system, and generate a compliance-lead filing list."*
 
-sgClaw execution flow:
-1. Automatically log in to ERP and navigate to the abnormal-transactions module
-2. Filter by date range and export this month's abnormal transaction data
-3. Switch to the financial control system and pull the matching compliance rule base
-4. Cross-check record by record, flagging those that hit compliance rules
-5. Generate the compliance-lead filing list and fill it into the designated template
-6. Submit it to the approval flow with a complete operation audit record
+| Capability area | Current status | Product meaning |
+|---|---|---|
+| Task intake | Implemented | Receives `submit_task` instructions from the browser host |
+| Protocol handshake | Implemented | Unified version, session identifier, and HMAC seed exchange |
+| Agent execution | Implemented | planner fallback coexists with ZeroClaw compat |
+| Browser tool | Implemented | Single `browser_action` tool |
+| Core actions | Implemented | `click/type/navigate/getText` |
+| Domain whitelist | Implemented | Only domains in the rules file are allowed |
+| Action whitelist | Implemented | Only actions in the rules file are allowed |
+| Structured logs | Implemented | `log_entry` and `task_complete` reported back |
+| Extended action enum | Reserved | Protocol enum defined, but not enabled by default |
+| Skill engine | Not standalone | Only the semantic entry point "extensible via tools and prompts" is kept |
+| MCP ecosystem | Not on the main path | Position reserved by the ZeroClaw compatibility layer |
````
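The two whitelist rows above amount to one small policy check. The sketch below is illustrative Python, not the Rust code in the repository; the `rules.json` schema shown here (`allowed_domains`, `allowed_actions` keys) is an assumption, and only the behavior (reject anything not explicitly listed) matches the table:

```python
import json
from urllib.parse import urlparse

# Hypothetical rules.json content; the real schema may differ.
RULES = json.loads("""
{
  "allowed_domains": ["erp.example.com"],
  "allowed_actions": ["click", "type", "navigate", "getText"]
}
""")


def check_action(action: str, url: str, rules: dict) -> bool:
    # Both whitelists must pass; anything not explicitly listed is rejected.
    if action not in rules["allowed_actions"]:
        return False
    return urlparse(url).hostname in rules["allowed_domains"]
```

Because the check is a plain allow-list intersection, adding a new action or domain is a data change in `resources/rules.json`, not a code change.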
````diff
 
-**Business value**: a 2-3 hour manual job compressed to **5-8 minutes**; error rate drops from 3% to **0%**.
 
-### 4.2 Risk control
 
-**Example**: cross-system risk-indicator monitoring and anomaly alerting
 
-User instruction: *"Check the key risk indicators in ERP and the risk-control system every day, and generate an alert report immediately when an anomaly is found."*
 
-sgClaw execution flow:
-1. Periodically patrol the key financial indicators in ERP
-2. Check the risk-threshold configuration in the risk-control system
-3. Compare indicator deviations and identify abnormal patterns
-4. On an anomaly, capture screenshots as evidence and generate an alert report
-5. Push it to the responsible owners and create a tracking ticket in OA
 
-**Business value**: **24x7** uninterrupted risk monitoring; alert response shortened from "found the next day" to **real-time**.
 
-### 4.3 Marketing
 
-**Example**: batch handling of abnormal electricity bills and bill reconciliation
 
-User instruction: *"Batch-process this month's abnormal electricity-bill records and compare data differences between the marketing and finance systems."*
 
-sgClaw execution flow:
-1. Enter the marketing system and filter this month's bills flagged as abnormal
-2. Extract the customer ID, amount, and anomaly type of each record
-3. Query the corresponding charge records in the finance system
-4. Auto-compare amount differences and generate a difference report
-5. Batch-fix the records that can be corrected automatically
-6. Generate a to-do list for records needing human confirmation
 
-**Business value**: throughput rises from **200 records/person-day** to **5000+ records/hour**, freeing staff for higher-value work.
 
-### 4.4 Human resources
 
-**Example**: automatic social-insurance form filing and payroll verification
 
-User instruction: *"Export this month's social-insurance base-change list from the HR system, fill in the declaration form automatically, and cross-verify payroll data."*
 
-sgClaw execution flow:
-1. Log in to the HR system and export the base-change detail list
-2. Auto-fill the corresponding fields of the online declaration form
-3. Query the wage details in the payroll system
-4. Cross-verify consistency between the insurance base and actual payroll
-5. Flag inconsistent records and generate a difference report
-6. Auto-submit compliant records; route anomalies to manual review
 
-**Business value**: monthly declaration work is compressed from **3-5 working days** to **2-4 hours**.
 
-### 4.5 Legal affairs
 
-**Example**: contract-performance monitoring and legal-risk alerting
 
-User instruction: *"Monitor contracts about to expire, check their performance status, and generate legal-risk alerts for contracts at risk of breach."*
 
-sgClaw execution flow:
-1. Filter contracts expiring within 30 days in the contract-management system
-2. Check the performance status of key clauses one by one
-3. Cross-query payment/delivery records in ERP
-4. Identify performance deviations and grade the breach risk
-5. Generate a legal-risk alert report sorted by risk level
-6. Push it to the legal department and create tracking tasks
 
-**Business value**: contract-risk identification moves from "remediation after the fact" to **"alerting before the fact"**, significantly reducing legal disputes.
 
-### 4.6 Collaborative office
 
-**Example**: cross-system data sync and report consolidation
 
-User instruction: *"Export this month's key operating data from ERP, finance, and HR, and consolidate it into the monthly operations analysis report."*
 
-sgClaw execution flow:
-1. Log in to the ERP, finance, and HR systems in turn
-2. Extract key operating data per the preset template
-3. Align data definitions and unify formats
-4. Compute key indicators and generate the monthly report
-5. Export it in the standard format and upload it to OA
 
-**Business value**: monthly report consolidation shrinks from **2-3 days of manual work** to **30 minutes, generated automatically**.
 
-### 4.7 General scenarios
 
-With a single natural-language instruction, sgClaw autonomously completes end-to-end cross-system operations:
 
-| Natural-language instruction | Operations the agent completes |
-|------------|-------------------|
-| "Export all compliance reports for this month" | Log in to each system → locate the report module → set the date range → export → consolidate |
-| "Check the system permissions of last week's new hires" | Query the hire list in HR → check permissions system by system → generate a report |
-| "Sync the purchase orders in ERP to the finance system" | Export orders from ERP → convert the format → enter them into finance → validate |
-| "Total each department's travel reimbursements this quarter" | Extract travel approvals from OA → check reimbursements in finance → sum per department → report |
 
 ---
 
-## 5. Technical Advantage Comparison
+## 5. Typical Product Scenarios
 
-### 5.1 Comparison matrix
+### 5.1 Page navigation and information reading
 
-| Dimension | Manual | Traditional RPA (UiPath/BluePrism) | External agent (OpenClaw) | **sgClaw** |
-|---------|---------|---------------------------|---------------------|-----------|
-| **Architecture** | N/A | External process controls the browser | External process + WebSocket | **Embedded in the browser kernel** |
-| **Anti-detection** | Passes naturally | Easily detected and blocked | Discoverable by port scans | **Native behavior, undetectable** |
-| **Security layers** | Depends on staff | Application layer | Application layer | **Three-layer defense in depth** |
-| **Communication** | N/A | HTTP / COM | HTTP / WebSocket (exposed ports) | **STDIO pipe (process-private)** |
-| **Memory footprint** | N/A | 200-500MB | 394MB+ | **~5MB** |
-| **Cold start** | N/A | 10-30s | 5-15s | **< 10ms** |
-| **Skill reuse** | Passed on by word of mouth | Scripts redeveloped | Retraining required | **Reuses existing JS business code** |
-| **Deployment** | N/A | Separate install + config | Separate install + config | **Embedded in the browser, zero separate install** |
-| **Natural language** | N/A | Unsupported | Partially supported | **Full Chinese natural-language support** |
-| **Domestic platforms** | N/A | Limited support | Unsupported | **Native Kylin V10 support** |
-| **Learning curve** | 3-6 months | Professional RPA development | Technical configuration | **Natural language, zero learning cost** |
+For the input "open the ERP home page and read the current to-do count", the system can decompose the task as:
 
-### 5.2 Key differentiators
+1. `navigate` to the target address.
+2. `getText` on the target page region.
+3. Return a structured result summary.
 
-```
+This is the most stable task type in the current repository and the one closest to the implemented surface.
````
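Concretely, the two-step task above maps to an ordered list of `browser_action` tool calls. This is an illustrative Python sketch, not the repository's planner; the argument names (`action`, `url`, `selector`) are assumptions, and only the action names come from the protocol description:

```python
def plan_read_todo_count(home_url: str, selector: str) -> list[dict]:
    # Decompose "open the page and read the to-do count"
    # into an ordered list of browser_action calls.
    return [
        {"tool": "browser_action", "args": {"action": "navigate", "url": home_url}},
        {"tool": "browser_action", "args": {"action": "getText", "selector": selector}},
    ]


def summarize(results: list[str]) -> dict:
    # Structured result summary: the last getText result is the answer.
    return {"status": "done", "todo_count": results[-1]}
```

Each planned step would still pass through the domain and action whitelist checks before the host executes it.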
````diff
-┌──────────────────────────────────────────────────────────────────┐
-│ 架构差异:外部控制 vs 内核嵌入 │
-├──────────────────────────────────────────────────────────────────┤
-│ │
-│ 传统 RPA / 外部 Agent 方案: │
-│ │
-│ ┌────────────┐ HTTP/WS ┌──────────────┐ │
-│ │ RPA Engine │ ──────────────→│ 浏览器 │ │
-│ │ (外部进程) │ 端口暴露 │ (被外部控制) │ │
-│ └────────────┘ 可被检测 └──────────────┘ │
-│ 394MB+ 反自动化机制 │
-│ 可识别拦截 │
-│ │
-│ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ │
-│ │
-│ sgClaw 方案: │
-│ │
-│ ┌──────────────────────────────────────────────┐ │
-│ │ SuperRPA Chromium 浏览器 │ │
-│ │ │ │
-│ │ ┌──────────┐ STDIO Pipe ┌──────────────┐ │ │
-│ │ │ sgClaw │ ◄──────────► │ Chromium C++ │ │ │
-│ │ │ (Rust) │ 进程私有 │ 内核层 │ │ │
-│ │ │ ~5MB │ 零端口暴露 │ │ │ │
-│ │ └──────────┘ └──────────────┘ │ │
-│ │ │ │
-│ │ 操作 = 原生用户行为,不可被检测 │ │
-│ └──────────────────────────────────────────────┘ │
-│ │
-└──────────────────────────────────────────────────────────────────┘
-```
+### 5.2 Partial automation within form entry and submission flows
+
+When element-locating rules are clear, the system can combine `click` and `type` to fill forms, press buttons, and perform simple submits.
+Whether a complete business flow can be covered depends on whether the browser host provides the corresponding pages, selectors, and response data, not on documents presupposing that "every flow runs end to end".
+
+### 5.3 As the agent execution core inside a larger product
+
+sgClaw is better understood as one execution core within a product foundation:
+
+- The upper layer can attach a task input box, an approval entry, or a business orchestrator.
+- The lower layer executes through the existing browser control surface.
+- In between, sgClaw connects natural language to browser actions.
 
 ---
 
-## 6. Security & Compliance Guarantees
+## 6. Success Criteria
 
-sgClaw treats security as product DNA rather than an add-on, building a **three-layer defense-in-depth system** from the communication layer down to the kernel.
+The post-refactor product documentation takes "real capability, clearly deliverable" as its standard, not "the largest possible vision". The current version should satisfy:
 
-### 6.1 Process-isolated communication
+- Every architecture statement can be matched to an implementation in `src/`, `resources/`, or `tests/`.
+- Every externally claimed action capability is consistent with `rules.json` and the tool schema.
+- Every "future extension" item is clearly separated from "implemented today".
+- L0 through L4 close the loop across the product, architecture, interface, data-flow, and engineering layers.
 
-- **STDIO pipe** is the only communication channel between the agent and the browser kernel
-- No network ports are opened; external processes cannot probe or connect
-- Communication data exists only in the parent and child processes' file descriptors, protected at the OS level
 
-### 6.2 MAC mandatory access control
 
-- The browser C++ kernel enforces **Mandatory Access Control**
-- Strict domain whitelisting: the agent may only operate authorized business-system domains
-- Sensitive operations (such as payment and approval) require additional kernel-level permission checks
-- The whitelist policy is configured centrally by administrators; the agent cannot bypass it
 
-### 6.3 Credential protection
 
-- User credentials are managed centrally by the browser's Zombie Session Pool
-- Credentials are **never transmitted over the pipe protocol** to the agent process
-- The agent uses established sessions indirectly through the BrowserAction API and never touches plaintext passwords
 
-### 6.4 Manual activation
 
-- The agent is **off by default** and must be explicitly started from the Side Panel
-- Every start requires user confirmation, ruling out silent background runs
-- The user can stop all agent operations at any time with one click
 
-### 6.5 Full-chain audit
 
-- Each agent session is assigned a unique **trace_id**
-- Every step (page navigation, element clicks, data reads, form submits) is fully logged
-- Logs include the timestamp, target system, operation type, and execution result
-- Supports after-the-fact audit and compliance evidence
 
-### 6.6 Runaway-protection circuit breaking
 
-- A built-in **Circuit Breaker** prevents the agent from looping or running away
-- Each task has a maximum step count
-- Consecutive failures trip the breaker, pausing execution and notifying the user
-- Critical operations have human-in-the-loop confirmation breakpoints
 
----
 
-## 7. Product Form & Delivery
 
-### 7.1 Product form
 
-| Component | Form | Spec |
-|------|------|------|
-| Agent engine | Compiled Rust binary | About 8.8MB |
-| Host environment | SuperRPA customized Chromium browser | Delivered integrated |
-| User interface | Browser Side Panel control area | Start/stop buttons + instruction input + task progress |
-| Skill repository | JSON skill-definition files | Bundled with the browser; online updates supported |
-| Runtime dependencies | None | Statically compiled Rust, zero external dependencies |
 
-### 7.2 Delivery
 
-- **Linux (Kylin V10)**: integrated into the `superrpa-chromium` .deb package
-- **Windows**: integrated into the `superrpa-chromium` .exe package
-- **No separate install**: deployed together with the browser, no extra configuration
-- **No separate upgrades**: upgraded and managed with the browser version
 
-### 7.3 User interaction flow
 
-```
-用户操作流程:
-
-打开 SuperRPA 浏览器
-│
-▼
-访问业务系统(自动登录)
-│
-▼
-打开 Side Panel ──→ 看到 sgClaw 控制区
-│
-▼
-点击 [启动 Agent] 按钮
-│
-▼
-输入自然语言指令 ──→ "导出本月所有合规报表"
-│
-▼
-Agent 自主执行 ──→ Side Panel 实时显示进度
-│
-▼
-执行完成 ──→ 结果展示 / 文件下载
-│
-▼
-(可选)点击 [停止] 终止任务
-```
 
----
 
-## 8. Collaboration with the SuperRPA Browser
 
-sgClaw is not a standalone product; it is an **intelligence enhancement layer** deeply coupled with the SuperRPA browser. The two divide the work and together form the complete "intelligent digital employee" platform.
 
-### 8.1 Division of capabilities
 
-```
-┌────────────────────────────────────────────────────────────────────┐
-│ "智能数字员工" 完整能力栈 │
-├────────────────────────────────────────────────────────────────────┤
-│ │
-│ ┌──────────────────────────────────────────────────────────────┐ │
-│ │ sgClaw 智能增强层 │ │
-│ │ │ │
-│ │ ┌────────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────┐ │ │
-│ │ │ LLM 智能 │ │ 自然语言 │ │ 多步自主 │ │ 自进化学习 │ │ │
-│ │ │ 推理引擎 │ │ 理解 │ │ 任务执行 │ │ Skill 沉淀 │ │ │
-│ │ └────────────┘ └──────────┘ └──────────┘ └──────────────┘ │ │
-│ │ │ │
-│ └──────────────────────────┬───────────────────────────────────┘ │
-│ │ STDIO Pipe │
-│ ┌──────────────────────────┴───────────────────────────────────┐ │
-│ │ SuperRPA 浏览器基础设施层 │ │
-│ │ │ │
-│ │ ┌────────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────┐ │ │
-│ │ │ Zombie │ │ SDK │ │ Browser │ │ 凭证与会话 │ │ │
-│ │ │ Session │ │ 注入引擎 │ │ Action │ │ 安全管理 │ │ │
-│ │ │ Pool │ │ │ │ API │ │ │ │ │
-│ │ └────────────┘ └──────────┘ └──────────┘ └──────────────┘ │ │
-│ │ │ │
-│ │ ┌────────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────┐ │ │
-│ │ │ 反检测 │ │ 多标签页 │ │ 域名 │ │ C++ 内核 │ │ │
-│ │ │ 指纹伪装 │ │ 并发管理 │ │ 白名单 │ │ MAC 控制 │ │ │
-│ │ └────────────┘ └──────────┘ └──────────┘ └──────────────┘ │ │
-│ │ │ │
-│ └──────────────────────────────────────────────────────────────┘ │
-│ │
-├────────────────────────────────────────────────────────────────────┤
-│ 协同价值 │
-│ │
-│ SuperRPA 提供: sgClaw 增加: │
-│ ├─ Zombie Session Pool 会话池 ├─ LLM 智能推理能力 │
-│ ├─ SDK 注入与 JS 执行环境 ├─ 自然语言理解与意图解析 │
-│ ├─ BrowserAction API 操作接口 ├─ 自主多步任务规划与执行 │
-│ ├─ 凭证管理与自动登录 ├─ 自进化学习与 Skill 积累 │
-│ ├─ 反自动化检测基础设施 ├─ 跨系统业务流程编排 │
-│ └─ 内核级安全强制控制 └─ 业务语义理解与异常处理 │
-│ │
-│ 单独的 SuperRPA = 强大的自动化浏览器 │
-│ SuperRPA + sgClaw = 会思考的智能数字员工 │
-│ │
-└────────────────────────────────────────────────────────────────────┘
-```
 
-### 8.2 Typical collaboration flow
 
-Taking "automatically export the monthly compliance report" as an example:
 
-| Step | Actor | Operation |
-|------|-------|------|
-| 1 | SuperRPA | The Zombie Session Pool provides logged-in sessions for each system |
-| 2 | sgClaw | The LLM understands the user instruction and plans task steps |
-| 3 | sgClaw | Sends operation commands to the browser via the BrowserAction API |
-| 4 | SuperRPA | The SDK injection layer performs DOM operations (kernel-level, undetectable) |
-| 5 | SuperRPA | The C++ kernel MAC validates operation legality (domain whitelist) |
-| 6 | sgClaw | Parses the operation result and decides the next action |
-| 7 | sgClaw | Distills the operation sequence into a Skill after the task completes |
-| 8 | SuperRPA | Records the complete operation audit log (with trace_id) |
 
-### 8.3 Value summary
 
-Combining sgClaw with the SuperRPA browser closes the **"capability + intelligence"** loop:
 
-- The **SuperRPA browser** solves the infrastructure problem of "how to operate business systems safely and covertly"
-- **sgClaw** solves the upper-layer intelligence problem of "how to understand business intent and execute autonomously"
-- Together they give the "Business-Data Fusion Platform" the full digital-employee capability of **"understand natural language → plan autonomously → execute safely → evolve continuously"**
 
-> **sgClaw — an untiring, error-free intelligent digital assistant for every employee.**
````
File diff suppressed because it is too large.
File diff suppressed because it is too large.
File diff suppressed because it is too large.
File diff suppressed because it is too large.
docs/L5-提示词分布与安全改造方案.md (new file, 164 lines)

# L5 - Prompt Distribution and Security Hardening Plan

- Recorded: 2026-03-26
- Scenario: answering "does the project contain prompts? how can they be made safer? where do the prompts live, and when are they invoked?"
- Goal: give an executable engineering hardening path and a record of the work

## 1. Conclusion (up front)
The project has at least two main prompt-construction paths:

1) **Lightweight runtime path** (`src/agent/runtime.rs`)
- Only a very basic fixed system prompt.
- For local/minimal execution scenarios outside the full flow.

2) **ZeroClaw main path** (`third_party/zeroclaw/*`)
- This path is the bulk of the "system prompt", split into:
  - the structured builder inside `Agent` (`SystemPromptBuilder`)
  - unified string assembly on the `channels` side
  - multiple injection sources: skills / personality / identity / bootstrap files / tool descriptions
- This is also the main security surface to focus on.

---

## 2. Prompt distribution (by file/module)

### 2.1 Fixed system prompt (lightweight path)
- `src/agent/runtime.rs`
  - the `ChatMessage { role: "system" ... }` in `execute_task_with_provider`
  - current content: `You are sgClaw. Use browser_action to complete the browser task.`

### 2.2 Prompts built inside the ZeroClaw `Agent`
- `third_party/zeroclaw/src/agent/prompt.rs`
  - `SystemPromptBuilder` (default sections)
  - sections: `ToolHonesty / Tools / Safety / Skills / Workspace / Runtime / ChannelMedia / DateTime`
  - `identity_config`, `skills_prompt_mode`, `security_summary`, and `autonomy_level` affect the injected content.
- `third_party/zeroclaw/src/agent/agent.rs`
  - `Agent::from_config` assembles `prompt_builder(SystemPromptBuilder::with_defaults())` and `security.prompt_summary()`.
  - `Agent::build_system_prompt` caches/rebuilds the system prompt on the first turn.
  - `seed_history` avoids duplicating the system prompt when restoring a session.

### 2.3 Channel-side system-prompt assembler
- `third_party/zeroclaw/src/channels/mod.rs`
  - `build_system_prompt` / `build_system_prompt_with_mode_and_autonomy`
  - responsible for injecting workspace bootstrap files, skills, the tool list, hardware notes, channel capabilities, timezone, and runtime info.
  - triggers `load_openclaw_bootstrap_files()` (`AGENTS.md/SOUL.md/IDENTITY.md/USER.md/TOOLS.md/MEMORY.md`, etc.)
  - in compact mode, passes `bootstrap_max_chars` (context is compressed by default).

### 2.4 Skill prompt injection
- `third_party/zeroclaw/src/skills/mod.rs`
  - `skills_to_prompt_with_mode`:
    - `Full`: injects the complete `instructions` inline
    - `Compact`: injects only a summary plus the tool list; the full content is read through a tool.
- `third_party/zeroclaw/src/tools/read_skill.rs`
  - `read_skill(name)` reads a skill's full text on demand in compact mode.
|
|
||||||
|
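The Full/Compact split can be sketched as follows. The function name mirrors `skills_to_prompt_with_mode`, but the types and signature here are simplified assumptions for illustration, not the vendored API:

```rust
// Hypothetical sketch of the Full/Compact skill-injection modes.
#[derive(Clone, Copy)]
pub enum SkillsPromptMode { Full, Compact }

pub struct Skill {
    pub name: String,
    pub summary: String,
    pub instructions: String,
}

pub fn skills_to_prompt_with_mode(skills: &[Skill], mode: SkillsPromptMode) -> String {
    let mut out = String::from("## Skills\n");
    for s in skills {
        match mode {
            // Full: inline the complete instructions (larger prompt, wider injection surface)
            SkillsPromptMode::Full => {
                out.push_str(&format!("### {}\n{}\n", s.name, s.instructions));
            }
            // Compact: only name + summary; the model must call read_skill(name) for details
            SkillsPromptMode::Compact => {
                out.push_str(&format!("- {}: {} (use read_skill to load)\n", s.name, s.summary));
            }
        }
    }
    out
}
```

Compact mode keeps the prompt small and narrows how much untrusted skill text enters the context at once, which is why the hardening plan below prefers it by default.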
### 2.5 Personality/identity context injection

- `third_party/zeroclaw/src/agent/personality.rs`
  - Reads `SOUL.md/IDENTITY.md/USER.md/AGENTS.md/TOOLS.md/HEARTBEAT.md/BOOTSTRAP.md/MEMORY.md`
  - `load_personality` + `render` compose the identity context fragment.
- `third_party/zeroclaw/src/channels/mod.rs`
  - `load_openclaw_bootstrap_files()` reads workspace files such as `AGENTS.md`.

### 2.6 Sub-agent prompts (independently injectable)

- `third_party/zeroclaw/src/tools/delegate.rs`
  - `build_enriched_system_prompt` combines `ToolsSection / SafetySection / SkillsSection / WorkspaceSection / DateTimeSection`
  - Optionally layers on `agent_config.system_prompt`

### 2.7 Security module (currently decoupled from prompts)

- `third_party/zeroclaw/src/security/policy.rs`
  - Security policy, command validation, `prompt_summary()`.
- `third_party/zeroclaw/src/security/prompt_guard.rs`
  - Prompt-injection detection exists, but no unified hook-in point is visible on the current code path (needs to be added).

---

## 3. When Prompts Are Invoked (trigger scenarios)

### 3.1 WS gateway (persistent sessions)

- `third_party/zeroclaw/src/gateway/ws.rs`
  - `Agent::from_config` after the connection is established.
  - If the backend has message history: `agent.seed_history(&messages)`.
  - Each user message runs `agent.turn_streamed`.
  - `turn_streamed` calls `build_system_prompt()` when the history is empty.

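The "build the system prompt only on the first turn" behavior amounts to a single invariant shared by the seed and turn paths. A minimal sketch with simplified stand-in types (not ZeroClaw's real message struct):

```rust
// Invariant: exactly one system message, always at index 0.
pub struct Msg {
    pub role: &'static str,
    pub content: String,
}

pub fn ensure_system_prompt(history: &mut Vec<Msg>, build: impl Fn() -> String) {
    // seed_history and turn_streamed both funnel through this check:
    // only build (and insert) the system prompt when none is present.
    if !history.iter().any(|m| m.role == "system") {
        history.insert(0, Msg { role: "system", content: build() });
    }
}
```

Calling it twice is a no-op the second time, which is the property that prevents system-prompt duplication when a session is restored.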
### 3.2 Gateway simple webhook

- `third_party/zeroclaw/src/gateway/mod.rs` (`run_gateway_chat_simple`)
  - Builds a simplified system prompt via `channels::build_system_prompt(...)`.

### 3.3 Gateway full-featured channel

- `third_party/zeroclaw/src/gateway/mod.rs` (`run_gateway_chat_with_tools`)
  - Goes through `agent::process_message`.
  - `process_message` builds the channel system prompt once per request.

### 3.4 CLI main entry (daemon / interactive)

- `third_party/zeroclaw/src/agent/loop_.rs`
  - A CLI run or interactive session initializes tools/skills/system prompt, then `agent_turn` executes.
  - Command-line messages and the tool loop share the channel-side build path.

### 3.5 Per-turn agent restore and continuation

- `Agent::seed_history()` (persistent-session restore)
  - The first turn ensures the system prompt exists; stale system prompts in history are filtered out and rebuilt.

### 3.6 Interactive history restore

- `agent/loop_.rs:load_interactive_session_history`
  - When the history file is missing or the first entry is not a system message, the system prompt is added.

---

## 4. Security Hardening Recommendations (by priority)

### P0 (do immediately)

1) Wire in `PromptGuard`
   - `third_party/zeroclaw/src/security/prompt_guard.rs` already exists.
   - Add scanning plus truncation/alerting at these entry points:
     - `Agent::turn` / `turn_streamed`
     - `agent::process_message`
     - the `gateway simple chat` and ws/process-path entries
   - For high-risk injection patterns (ignore previous / system override / role confusion), block outright or flag as high risk.

2) Sanitize workspace file content against injection, uniformly
   - Clean `AGENTS.md`/`SOUL.md` etc. before injection:
     - strip control characters, enforce length limits, reject dangerous template fragments (e.g. "you are now…", "ignore previous instructions")
   - Log sanitization and truncation details (for auditing).

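A minimal sketch of the two P0 pieces, assuming a simple pattern-list guard and a character-level sanitizer; the real `prompt_guard.rs` API may differ, and these names are illustrative:

```rust
// Hypothetical guard decision returned to the entry points listed above.
pub enum GuardDecision {
    Allow,
    Warn(String),
    Block(String),
}

pub struct PromptGuard {
    pub patterns: Vec<&'static str>,
}

impl PromptGuard {
    // Scan user input before the agent turn; block on known injection patterns.
    pub fn scan(&self, input: &str) -> GuardDecision {
        let lower = input.to_lowercase();
        for p in &self.patterns {
            if lower.contains(p) {
                return GuardDecision::Block(format!("matched injection pattern: {p}"));
            }
        }
        GuardDecision::Allow
    }
}

// Strip control characters and cap length before a workspace file enters the prompt.
pub fn sanitize_bootstrap(content: &str, max_chars: usize) -> String {
    content
        .chars()
        .filter(|c| !c.is_control() || *c == '\n')
        .take(max_chars)
        .collect()
}
```

In practice the scan result maps onto the deny/block decision at each entry point, and the sanitizer runs at the `channels`/`personality` file-injection points before any bootstrap file reaches the prompt.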
### P1 (within 1–2 iterations)

3) Enforce the security summary as a structured section
   - Inject `security.prompt_summary()` uniformly in `SafetySection`/`build_system_prompt_with_mode...`.
   - Keep "allowed commands / blocked commands / paths / approval requirements / rate limits" visible together, reducing model trial-and-error.

4) Gate compact/full mode
   - Change the default `skills prompt mode` from full to compact.
   - Enable full mode only in trusted contexts; compact contexts use `read_skill` by default.

5) Keep tool-call policy consistent between the prompt and the execution layer
   - The current prompt carries semantics like "Do not ask, execute directly", which matches execution-layer policy, but high-risk actions still need harder constraints.
   - Document a unified policy covering `tools::shell` parameters, `security.validate_command_execution`, and tool approval.

### P2 (optimization)

6) Unify the system prompt template
   - `channels::build_system_prompt_*` and `SystemPromptBuilder` overlap.
   - Extract shared sections (date, safety, skills, tools) and assemble them in one pass, shrinking the bypass surface caused by version drift.

7) Add session-level auditing
   - When a prompt-injection score is high: record the hash of the raw user input, the triggered rule, and the decision (block/warn/sanitize).
   - Feed tool-execution failures (rate limit / blocked path) into the same alerting chain.

---

## 5. 本次已确认的“关键风险”
|
||||||
|
- `PromptGuard` 尚未在主入口统一挂载(存在检测能力,但未形成强制拦截链)。
|
||||||
|
- workspace/skills 内容可直接进入 prompt,注入面较宽。
|
||||||
|
- 两套系统提示构建链路(agent builder 与 channel builder)存在口径差异,需要统一。
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 6. Suggested Rollout Order (achievable within two weeks)

1. Add `PromptGuard.scan` plus deny/block mapping at the unified entry points (minimal change).
2. Add sanitization and length guards at the `channels` + `personality` file-injection points.
3. Make the security-summary section a mandatory block of every prompt.
4. Enable compact mode by default and add the controlled `read_skill` flow.
5. Add a regression suite:
   - reproduce prompt-override attacks
   - system-prompt duplication/continuation scenarios (seed/reseed)
   - compare compact vs full skill injection

@@ -1,21 +1,38 @@
 # docs directory guide
 
-## Active documents (engineering & management)
+## Product documents (core)
 
-- `团队管理标准.md`: team management rules, role roster, and change process.
-- `浏览器对接标准.md`: Chromium ↔ sgClaw integration interface standard (required reading for P1a/P2).
-- `L0-产品白皮书与能力全景层.md` ~ `L4-工程实现与部署拓扑层.md`: layered architecture documents.
-- `团队分工.md`, `协作时间表.md`, `协作甘特图.md`: collaboration-plan source documents.
+- `L0-产品白皮书与能力全景层.md`: capability boundaries and target value.
+- `L1-系统架构与安全模型层.md`: architecture layering and security decisions.
+- `L2-核心模块与接口契约层.md`: module boundaries, interface design, and data structures.
+- `L3-数据流与Skill体系层.md`: execution flow, Skill semantics, and data protocols.
+- `L4-工程实现与部署拓扑层.md`: repository structure, build, integration, and deployment.
+- `L5-提示词分布与安全改造方案.md`: prompt governance and risk-control hardening strategy.
+- `浏览器对接标准.md`: protocol baseline for the Rust ↔ Chromium integration.
 
-## Archived documents (leadership demo material)
+## Archived documents
 
-To keep the engineering directory free of demo assets, the following content has been archived under:
+### Project management & scheduling (archived)
 
-- `archive/领导演示资料/docs-html/`: demo web pages (architecture diagrams, schedules)
-- `archive/领导演示资料/docs-pdf/`: exported demo PDFs
-- `archive/领导演示资料/docs-figures/`: demo figures (SVG)
-- `archive/领导演示资料/docs-scripts/`: demo viewing/export scripts
-- `archive/领导演示资料/frontend-pages/`: frontend demo pages
-- `archive/领导演示资料/frontend-svgs/`: frontend demo figure sources
+The following documents have been moved into `archive/项目管理与排期/`; they remain as historical reference and are no longer product-mainline reading entry points:
 
-> Archiving principle: keep the engineering mainline documents unaffected; demo assets stay traceable, reusable, and batch-searchable.
+- `archive/项目管理与排期/团队分工.md`
+- `archive/项目管理与排期/团队管理标准.md`
+- `archive/项目管理与排期/协作时间表.md`
+- `archive/项目管理与排期/协作甘特图.md`
+- `archive/项目管理与排期/协作时间表_printable.md`
+- `archive/项目管理与排期/协作甘特图_printable.md`
+- `archive/项目管理与排期/sgclaw_project_team_kickoff.md`
+- `archive/项目管理与排期/browser_team_kickoff.md`
+- `archive/项目管理与排期/团队管理标准.pdf`
+
+### Leadership demo & export assets
+
+- `archive/领导演示资料/docs-html/`
+- `archive/领导演示资料/docs-pdf/`
+- `archive/领导演示资料/docs-figures/`
+- `archive/领导演示资料/docs-scripts/`
+- `archive/领导演示资料/frontend-pages/`
+- `archive/领导演示资料/frontend-svgs/`
+
+> Archiving principle: product-mainline documents and delivery/implementation notes stay at the `docs/` root; management material and demo assets are archived centrally for traceability.

10
docs/archive/项目管理与排期/README.md
Normal file
@@ -0,0 +1,10 @@

# Project management & scheduling archive

This directory holds team-collaboration and management documents, kept as historical reference:

- team roles and responsibilities
- management standards and process rules
- collaboration schedules and Gantt charts
- kickoff documents (Rust side / browser side)

These documents are no longer part of the product-mainline documentation entry points.

134
docs/plans/2026-03-26-deepseek-browser-smoke-plan.md
Normal file
@@ -0,0 +1,134 @@

# DeepSeek Browser Smoke Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add a repo-local verification path that exercises the browser-delivered `sgclaw` binary through the ZeroClaw/DeepSeek compat runtime without requiring a real DeepSeek account.

**Architecture:** Keep the existing SuperRPA browser smoke script unchanged. Add a small sgClaw-owned helper module that behaves like a fake OpenAI-compatible DeepSeek server, plus a runner script that starts that server, injects `DEEPSEEK_*` into the browser process environment, and delegates the actual browser/UI verification to the existing `sgclaw_chat_smoke.mjs`.

**Tech Stack:** Node.js ESM, Node built-in `node:test`, local HTTP server, Chromium `build_sgclaw.py`, existing SuperRPA `sgclaw_chat_smoke.mjs`.

### Task 1: Add Fake DeepSeek Response Planner

**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tools/browser_smoke/fake_deepseek_server.mjs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tools/browser_smoke/fake_deepseek_server.test.mjs`

**Step 1: Write the failing test**

Add `node:test` coverage that proves the fake server planner:
- returns Baidu tool calls for `打开百度搜索天气` ("open Baidu and search for weather")
- returns Zhihu navigate tool calls for `打开知乎搜索天气` ("open Zhihu and search for weather")
- returns final summaries matching the existing smoke script expectations
- rejects unsupported instructions clearly

**Step 2: Run test to verify it fails**

Run:
```bash
node --test tools/browser_smoke/fake_deepseek_server.test.mjs
```

Expected: FAIL because the helper module does not exist yet.

**Step 3: Implement the minimal helper**

The helper should:
- inspect the latest user message / tool-result phase
- emit OpenAI-compatible `choices[0].message.tool_calls` for the first round
- emit `choices[0].message.content` for the second round
- keep summaries identical to the current smoke assertions

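The two-phase decision can be sketched language-agnostically. The real helper is a Node ESM module; Rust is used here only for consistency with the rest of this document, and the tool-call strings and summary text are illustrative, not the smoke script's actual fixtures:

```rust
// Sketch of the planner phases: tool calls first, final content second.
pub enum Reply {
    ToolCalls(Vec<String>), // first round: OpenAI-compatible tool_calls
    Content(String),        // second round: final summary content
}

pub fn plan(user_msg: &str, has_tool_results: bool) -> Result<Reply, String> {
    if has_tool_results {
        // Second round: the model has tool results, so emit the summary.
        return Ok(Reply::Content("task complete".into()));
    }
    // First round: route the supported instructions to fixed tool-call plans,
    // and reject everything else clearly.
    match user_msg {
        m if m.contains("百度") => Ok(Reply::ToolCalls(vec![
            "navigate:baidu".into(),
            "type:天气".into(),
        ])),
        m if m.contains("知乎") => Ok(Reply::ToolCalls(vec!["navigate:zhihu".into()])),
        other => Err(format!("unsupported instruction: {other}")),
    }
}
```

Keeping the planner deterministic is what makes the browser smoke reproducible without a real DeepSeek account.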
**Step 4: Run test to verify it passes**

Run:
```bash
node --test tools/browser_smoke/fake_deepseek_server.test.mjs
```

Expected: PASS

### Task 2: Add DeepSeek Smoke Wrapper Script

**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tools/browser_smoke/run_deepseek_browser_smoke.mjs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/README.md`

**Step 1: Write the failing wrapper expectation**

Add a small test or dry-run seam in the helper test that proves the wrapper environment includes:
- `DEEPSEEK_API_KEY`
- `DEEPSEEK_BASE_URL`
- `DEEPSEEK_MODEL`

and points at the fake local server.

**Step 2: Run the targeted test to verify it fails**

Run:
```bash
node --test tools/browser_smoke/fake_deepseek_server.test.mjs
```

Expected: FAIL because no wrapper/env builder exists yet.

**Step 3: Implement the wrapper**

The wrapper should:
- start the fake DeepSeek server
- invoke:
```bash
node /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs
```
- inject `DEEPSEEK_*` into the child environment
- print the child stdout/stderr through
- stop the fake server on exit

**Step 4: Run the targeted test to verify it passes**

Run:
```bash
node --test tools/browser_smoke/fake_deepseek_server.test.mjs
```

Expected: PASS

### Task 3: Verify the Browser-Delivered DeepSeek Path

**Files:**
- Verify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tools/browser_smoke/*`
- Verify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py`

**Step 1: Build the browser-delivered binary from the worktree**

Run:
```bash
python3 /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py \
  --manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
  --out /home/zyl/projects/superRpa/src/out/KylinRelease/sgclaw
```

Expected: PASS

**Step 2: Run the DeepSeek smoke wrapper**

Run:
```bash
node tools/browser_smoke/run_deepseek_browser_smoke.mjs
```

Expected:
- existing browser smoke passes
- `sgclaw` is forced down the compat runtime path through `DEEPSEEK_*`
- Baidu and Zhihu tasks still complete

**Step 3: Re-run full Rust tests to guard against regressions**

Run:
```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
  --manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
  --tests
```

Expected: PASS

93
docs/plans/2026-03-26-l0-l4-doc-refresh.md
Normal file
@@ -0,0 +1,93 @@

# L0-L4 Documentation Refresh Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Refresh the L0-L4 product documentation so it matches the current ZeroClaw-based refactor and removes outdated team or roadmap narratives.

**Architecture:** Replace speculative architecture with the repository's current runtime model: a Rust browser-agent process that speaks the existing STDIO JSON Line protocol, enforces MAC policy from `resources/rules.json`, and uses a ZeroClaw compatibility runtime when provider configuration is present. Keep protocol and deployment descriptions aligned with actual files under `src/`, `resources/`, `tests/`, and `docs/浏览器对接标准.md`.

**Tech Stack:** Markdown, Rust source inspection, existing sgClaw protocol docs

### Task 1: Reconfirm source-of-truth files

**Files:**
- Modify: `docs/L0-产品白皮书与能力全景层.md`
- Modify: `docs/L1-系统架构与安全模型层.md`
- Modify: `docs/L2-核心模块与接口契约层.md`
- Modify: `docs/L3-数据流与Skill体系层.md`
- Modify: `docs/L4-工程实现与部署拓扑层.md`
- Reference: `src/lib.rs`
- Reference: `src/agent/mod.rs`
- Reference: `src/agent/runtime.rs`
- Reference: `src/compat/runtime.rs`
- Reference: `src/compat/browser_tool_adapter.rs`
- Reference: `src/pipe/protocol.rs`
- Reference: `resources/rules.json`
- Reference: `docs/浏览器对接标准.md`

**Step 1: Inspect current docs and implementation**

Run: `sed -n '1,220p' docs/L0-产品白皮书与能力全景层.md`
Expected: outdated capability claims and pre-refactor architecture language are visible.

**Step 2: Inspect runtime and protocol source**

Run: `sed -n '1,260p' src/pipe/protocol.rs`
Expected: `BrowserMessage`, `AgentMessage`, and `Action` definitions show the real contract surface.

**Step 3: Inspect compatibility runtime path**

Run: `sed -n '1,260p' src/compat/runtime.rs`
Expected: the current ZeroClaw integration is clearly a compatibility adapter around `browser_action`.

### Task 2: Rewrite the layered product narrative

**Files:**
- Modify: `docs/L0-产品白皮书与能力全景层.md`
- Modify: `docs/L1-系统架构与安全模型层.md`

**Step 1: Replace L0 narrative**

Write: describe sgClaw as the productized browser-agent runtime after the ZeroClaw refactor; define current value, supported workflows, and explicit non-goals.

**Step 2: Replace L1 architecture**

Write: describe the actual three-part runtime topology, dual execution path, and layered security model without claiming unimplemented subsystems.

### Task 3: Rewrite contract and flow documents

**Files:**
- Modify: `docs/L2-核心模块与接口契约层.md`
- Modify: `docs/L3-数据流与Skill体系层.md`

**Step 1: Replace L2**

Write: define module ownership, protocol messages, the active tool contract, and the relationship to `docs/浏览器对接标准.md`.

**Step 2: Replace L3**

Write: describe the task lifecycle, planner fallback versus the ZeroClaw compat path, memory/config loading, and why the "Skill 体系" (skill system) is currently a prompt/tool abstraction rather than a standalone skill engine.

### Task 4: Rewrite engineering and deployment view

**Files:**
- Modify: `docs/L4-工程实现与部署拓扑层.md`

**Step 1: Replace L4**

Write: document the real repository layout, build/test commands, environment variables, deployment assumptions, and integration boundaries with the browser host.

### Task 5: Verify consistency

**Files:**
- Modify: `docs/plans/2026-03-26-l0-l4-doc-refresh.md`

**Step 1: Review git status**

Run: `git status --short`
Expected: only intended doc updates and existing archive-related changes remain.

**Step 2: Spot-check final docs**

Run: `sed -n '1,120p' docs/L2-核心模块与接口契约层.md`
Expected: tool contract, protocol messages, and allowed actions match the codebase.

274
docs/plans/2026-03-26-zeroclaw-core-refactor-plan.md
Normal file
@@ -0,0 +1,274 @@

# ZeroClaw Core Refactor Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Rebuild `sgClaw` on top of the vendored ZeroClaw core while preserving the existing SuperRPA browser pipe protocol, `FunctionsUI` bridge names, and the `sgclaw` binary contract.

**Architecture:** Keep `sgclaw` as the compatibility shell and replace its current minimal runtime with a ZeroClaw-based core adapter. Vendor the upstream ZeroClaw workspace into this repository for reproducible builds, then build a `compat` layer that translates `submit_task` / `task_complete` / log events to and from ZeroClaw agent, memory, cron, and tool abstractions. Do not integrate the upstream ZeroClaw gateway in this phase; the future standalone gateway will reuse the same vendored core through a separate entrypoint.

**Tech Stack:** Rust workspace, vendored upstream ZeroClaw (`zeroclawlabs`), the current sgClaw pipe protocol and browser tool, DeepSeek via ZeroClaw provider routing, SQLite memory backends, Chromium `run_cargo.py` build flow.

### Task 1: Vendor ZeroClaw Upstream Snapshot

**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/third_party/zeroclaw/**`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/third_party/zeroclaw/VENDORED_FROM.md`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/.gitignore`

**Step 1: Copy the upstream snapshot into the repo**

Source:
```bash
/home/zyl/Downloads/zeroclaw-master.zip
```

Destination:
```bash
/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/third_party/zeroclaw
```

Strip the top-level `zeroclaw-master/` folder so the vendored directory itself is the workspace root.

**Step 2: Record provenance**

Write `third_party/zeroclaw/VENDORED_FROM.md` with:
- upstream repo URL
- upstream default branch (`master`)
- source ZIP filename
- vendoring date
- a note that this copy is used to guarantee offline/reproducible browser builds

**Step 3: Verify the vendor tree exists**

Run:
```bash
find third_party/zeroclaw -maxdepth 2 \( -name Cargo.toml -o -name README.md \)
```

Expected: upstream workspace files are present.

### Task 2: Convert sgClaw into a ZeroClaw-Backed Workspace Shell

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/lib.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/main.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/mod.rs`

**Step 1: Add the vendored ZeroClaw dependency**

Use a local path dependency:
```toml
zeroclaw = { package = "zeroclawlabs", path = "third_party/zeroclaw" }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
```

Do not use a git dependency. Browser builds must not depend on network access.

**Step 2: Preserve the root crate identity**

Keep:
- package name `sgclaw`
- binary name `sgclaw`
- the current manifest path used by SuperRPA browser build scripts

This avoids breaking `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py`.

**Step 3: Route the process entrypoint through the compatibility layer**

`src/lib.rs` should keep:
- the current handshake
- the current `BrowserPipeTool`
- the current message loop

but delegate task execution to `compat::runtime`, not directly to the current thin planner/runtime path.

### Task 3: Introduce the sgClaw Compatibility Layer

**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/runtime.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/browser_tool_adapter.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/config_adapter.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/event_bridge.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/memory_adapter.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/agent/mod.rs`

**Step 1: Define the boundary**

`compat::runtime` owns:
- creating the ZeroClaw config/provider/runtime/memory/tool registry
- executing a task from a browser `submit_task`
- translating ZeroClaw progress into the current `AgentMessage::LogEntry`
- returning the final summary string for the current `task_complete`

`compat::event_bridge` owns all formatting decisions for:
- `[info] ...`
- `[error] ...`
- final summary propagation

**Step 2: Keep the browser protocol unchanged**

Do not change these wire-level contracts:
- `BrowserMessage::SubmitTask`
- `AgentMessage::TaskComplete`
- `AgentMessage::LogEntry`
- `init/init_ack`

The browser side must not need a corresponding protocol change.

**Step 3: Retire direct planner ownership from the main path**

`src/agent/mod.rs` should stop owning the main task-intelligence flow. The current rule-based planner may remain only as:
- a transitional fallback, or
- a deterministic test fixture

It must no longer be the primary execution engine.

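The `compat::event_bridge` formatting contract can be sketched directly; the event enum is a simplified stand-in for ZeroClaw progress events, not its real type:

```rust
// Illustrative progress events flowing out of the ZeroClaw core.
pub enum AgentEvent {
    Info(String),
    Error(String),
    Final(String),
}

// Single owner of the "[info] ..." / "[error] ..." formatting decisions.
pub fn format_log_entry(ev: &AgentEvent) -> String {
    match ev {
        AgentEvent::Info(m) => format!("[info] {m}"),
        AgentEvent::Error(m) => format!("[error] {m}"),
        // The final summary propagates as task_complete rather than a log
        // entry; it is formatted here only for completeness.
        AgentEvent::Final(m) => m.clone(),
    }
}
```

Centralizing the formatting keeps the wire-level `LogEntry` strings stable even as the underlying ZeroClaw event types evolve.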
### Task 4: Adapt BrowserPipeTool into a ZeroClaw Tool

**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/browser_tool_adapter.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/pipe/browser_tool.rs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tests/compat_browser_tool_test.rs`

**Step 1: Write the failing adapter test**

Add a focused test that proves:
- a ZeroClaw tool invocation can issue `navigate`, `type`, `click`, `getText`
- domain validation still flows through the current MAC/rules enforcement
- returned observation data includes the browser response payload and an AOM snapshot

**Step 2: Verify RED**

Run:
```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
  --manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
  --test compat_browser_tool_test
```

Expected: fail because the adapter does not exist yet.

**Step 3: Implement the adapter**

Wrap the current `BrowserPipeTool` behind ZeroClaw's async `Tool` trait:
- the tool name should stay stable and sgClaw-specific, for example `browser_action`
- the schema should only expose the currently supported safe actions
- `ToolResult` should include serialized `data`, `aom_snapshot`, `timing`

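A minimal sketch of the adapter shape. A synchronous trait stands in for ZeroClaw's async `Tool` trait, and everything except the tool name `browser_action` is an illustrative assumption:

```rust
// Sync stand-in for ZeroClaw's async Tool trait, showing how the adapter
// gates the schema to the currently supported safe actions.
pub trait Tool {
    fn name(&self) -> &str;
    fn execute(&self, action: &str, target: &str) -> Result<String, String>;
}

pub struct BrowserToolAdapter; // would hold a handle to the real BrowserPipeTool

impl Tool for BrowserToolAdapter {
    fn name(&self) -> &str {
        "browser_action"
    }

    fn execute(&self, action: &str, target: &str) -> Result<String, String> {
        // Only the safe action set is exposed; anything else is rejected
        // before it can reach the pipe (MAC/rules checks would also run here).
        match action {
            "navigate" | "type" | "click" | "getText" => {
                // The real adapter forwards a pipe request and returns the
                // browser observation (data + AOM snapshot + timing).
                Ok(format!("{{\"action\":\"{action}\",\"target\":\"{target}\"}}"))
            }
            other => Err(format!("unsupported action: {other}")),
        }
    }
}
```

Keeping the action allow-list inside the adapter means the model-facing schema and the executable surface cannot drift apart.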
### Task 5: Build the DeepSeek-Backed ZeroClaw Runtime Path

**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tests/compat_runtime_test.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/runtime.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/config_adapter.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/config/settings.rs`

**Step 1: Write the failing runtime test**

Add a compatibility runtime test that proves:
- when `DEEPSEEK_API_KEY` is configured, sgClaw uses the ZeroClaw provider path
- the runtime can execute a simple mocked `browser_action` sequence
- the final result is returned as the current sgClaw `task_complete`

Use a fake provider or a deterministic ZeroClaw test seam for RED/GREEN speed.

**Step 2: Verify RED**

Run:
```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
  --manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
  --test compat_runtime_test
```

Expected: fail because the compatibility runtime is not wired yet.

**Step 3: Implement DeepSeek mapping**

Map the current sgClaw env/config into ZeroClaw provider config:
- `DEEPSEEK_API_KEY`
- `DEEPSEEK_BASE_URL`
- `DEEPSEEK_MODEL`

DeepSeek should be treated as OpenAI-compatible routing under ZeroClaw, not via the old local `DeepSeekProvider`.

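The env mapping can be sketched with the lookup injected, so it is testable without mutating process environment. `ProviderConfig` and the fallback defaults here are assumptions, not ZeroClaw's real types:

```rust
// Hedged sketch of the DEEPSEEK_* env-to-provider-config mapping.
pub struct ProviderConfig {
    pub api_key: String,
    pub base_url: String,
    pub model: String,
}

pub fn deepseek_provider(get: impl Fn(&str) -> Option<String>) -> Option<ProviderConfig> {
    // No key configured means the compat runtime path stays disabled.
    let api_key = get("DEEPSEEK_API_KEY")?;
    Some(ProviderConfig {
        api_key,
        // Fallback defaults are illustrative assumptions.
        base_url: get("DEEPSEEK_BASE_URL").unwrap_or_else(|| "https://api.deepseek.com".into()),
        model: get("DEEPSEEK_MODEL").unwrap_or_else(|| "deepseek-chat".into()),
    })
}
```

In production the closure would be `|k| std::env::var(k).ok()`; tests pass a map-backed closure instead.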
### Task 6: Introduce Memory and Cron Through the Compatibility Core

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/config_adapter.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/src/compat/memory_adapter.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tests/compat_memory_test.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tests/compat_cron_test.rs`

**Step 1: Memory**

Configure a workspace-local ZeroClaw memory backend suitable for browser embedding:
- default to SQLite
- keep storage under an sgClaw-owned data path
- avoid enabling unrelated gateway/channel storage

**Step 2: Cron**

Expose ZeroClaw cron internally, but do not yet bind it to the browser UI.
This phase only requires:
- creating validated agent jobs
- listing/running due jobs in tests

The future standalone gateway will surface the management UI for cron.

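The "listing/running due jobs" requirement can be sketched minimally; the `Job` shape and the run-once semantics are illustrative only, not ZeroClaw's cron types:

```rust
// Illustrative one-shot job record for the cron test requirement.
pub struct Job {
    pub name: String,
    pub due_at: u64, // e.g. a unix timestamp
    pub ran: bool,
}

// Run every job that is due and not yet run; return the names that ran.
pub fn run_due(jobs: &mut [Job], now: u64) -> Vec<String> {
    let mut ran = Vec::new();
    for j in jobs.iter_mut() {
        if !j.ran && j.due_at <= now {
            j.ran = true; // the real runner would execute a validated agent job here
            ran.push(j.name.clone());
        }
    }
    ran
}
```

This is the shape `compat_cron_test.rs` would exercise: create jobs, advance time, and assert only due jobs fire exactly once.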
### Task 7: Verification and Browser Integration

**Files:**
- Verify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/tests/*.rs`
- Verify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py`
- Verify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs`

**Step 1: Run the full Rust test baseline**

Run:
```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
  --manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
  --tests
```

Expected: the current protocol/tool/planner compatibility tests still pass or are consciously replaced with equivalent compat tests.

**Step 2: Build the browser-delivered binary from the worktree**

Run:
```bash
python3 /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/build_sgclaw.py \
  --manifest-path /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-core-refactor/Cargo.toml \
  --out /home/zyl/projects/superRpa/src/out/KylinRelease/sgclaw
```

Expected: the compatibility-shell binary is produced at the same output path as today.

**Step 3: Run browser smoke**

Run:
```bash
node /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs
```

Expected:
- the browser protocol still starts and stops correctly
- the Baidu task still succeeds
- the Zhihu task still succeeds
- no browser-side API/bridge changes are required

### Non-Goals for This Refactor

- Do not replace the current SuperRPA browser protocol with ZeroClaw gateway protocols.
- Do not expose the upstream ZeroClaw web dashboard inside FunctionsUI.
- Do not ship the standalone gateway in this phase.
- Do not migrate browser-side code to a new transport.

### Phase 2 After This Refactor

After this compatibility refactor is stable:

- add a separate `gateway` crate or binary that uses the same vendored ZeroClaw core
- expose memory/cron/agent management there
- keep browser-side `sgclaw` as a thin local execution shell


docs/plans/2026-03-27-sgclaw-floating-chat-plan.md (new file, 363 lines)

# sgClaw Floating Chat Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Replace the current debug-style `sgclaw-chat` page as the primary UX with a floating page button + popup chat window, add real multi-turn conversation support, and harden the DeepSeek/browser tool protocol so browser automation is stable.

**Architecture:** Keep `chrome://superrpa-functions/sgclaw-chat` and `chrome://superrpa-functions/sgclaw-config` as debug/config pages, but make the user-facing entry a floating page launcher injected into allowed HTTP/HTTPS pages via existing SuperRPA page-injection capabilities. Reuse the browser-side persistent `SgClawSessionService` as the session owner, extend it from “logs + final result” to “conversation + runtime state”, and extend the sgClaw pipe path so each submit can carry conversation context instead of behaving like a fresh one-shot task. Fix protocol bugs in parallel: strict action-schema validation, better browser/sgClaw error attribution, and DeepSeek tool-call history compatibility.

**Tech Stack:** Chromium WebUI + Lit, existing SuperRPA page injection (`sg_compat.js` / hook injection), browser-side `FunctionsUI`/`SgClawSessionService`, Rust `sgClaw`, ZeroClaw compatibility runtime, DeepSeek OpenAI-compatible chat API.

### Task 1: Freeze Current Baseline And Add Pure UI State Tests

**Files:**

- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state_mainline_unittest.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/BUILD.gn`

**Step 1: Write the failing test**

Write a pure state test that describes the floating UX:

```ts
import {
  collapseFloatingWindow,
  createFloatingViewState,
  openFloatingWindow,
  toggleSettingsPanel,
} from './sgclaw-floating_state.js';

test('opens from fab and collapses back on blur', () => {
  let state = createFloatingViewState();
  state = openFloatingWindow(state);
  expect(state.windowOpen).toBe(true);
  state = collapseFloatingWindow(state);
  expect(state.windowOpen).toBe(false);
  expect(state.fabVisible).toBe(true);
});
```

**Step 2: Run test to verify it fails**

Run: `autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease sgclaw-chat_build_ts`

Expected: build/test target fails because the new state module and test do not exist yet.

**Step 3: Write minimal implementation**

Create a small pure state module with:

- `fabVisible`
- `windowOpen`
- `settingsOpen`
- `statusBadge`
- `unreadCount`

Keep it logic-only; no DOM code here.
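
A minimal sketch of what such a pure state module could look like; the field names follow the list above, while the exact transition rules (clearing unread on open, closing settings on collapse) are assumptions, not the committed design:

```ts
// Hypothetical sketch of sgclaw-floating_state.ts; transition details are assumptions.
export interface FloatingViewState {
  fabVisible: boolean;
  windowOpen: boolean;
  settingsOpen: boolean;
  statusBadge: 'idle' | 'running' | 'error';
  unreadCount: number;
}

export function createFloatingViewState(): FloatingViewState {
  return {
    fabVisible: true,
    windowOpen: false,
    settingsOpen: false,
    statusBadge: 'idle',
    unreadCount: 0,
  };
}

export function openFloatingWindow(state: FloatingViewState): FloatingViewState {
  // Opening the window clears unread messages; the fab stays mounted underneath.
  return { ...state, windowOpen: true, unreadCount: 0 };
}

export function collapseFloatingWindow(state: FloatingViewState): FloatingViewState {
  // Collapsing also closes the settings panel so reopening starts at the chat view.
  return { ...state, windowOpen: false, settingsOpen: false, fabVisible: true };
}

export function toggleSettingsPanel(state: FloatingViewState): FloatingViewState {
  return { ...state, settingsOpen: !state.settingsOpen };
}
```

Because the functions are pure (state in, state out), the unit test in Step 1 needs no DOM or browser harness.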

**Step 4: Run test to verify it passes**

Run the same `autoninja` target or the relevant TS unit target once wired.

Expected: the new state test compiles and passes.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/superRpa/src add \
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state.ts \
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state_mainline_unittest.ts \
chrome/browser/resources/superrpa/devtools/BUILD.gn
git -C /home/zyl/projects/superRpa/src commit -m "test: add sgclaw floating UI state"
```

### Task 2: Build The Floating Page Entry Using Existing SuperRPA Overlay Capabilities

**Files:**

- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/sgclaw_overlay.js`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/sg_compat.js`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/hooks/hook_injector.cc`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/hooks/hook_injector.h`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs`

**Step 1: Write the failing smoke expectation**

Update the browser smoke so it expects:

- a floating button exists on a normal page
- clicking it opens the sgClaw popup
- clicking outside collapses the popup back to the button

Use an assertion like:

```js
await waitFor(() => page.evaluate(() =>
    !!document.querySelector('#superrpa-sgclaw-fab')));
```

**Step 2: Run smoke to verify it fails**

Run: `node /home/zyl/projects/sgClaw/claw/tools/browser_smoke/run_deepseek_browser_smoke.mjs`

Expected: smoke fails because the floating entry does not exist.

**Step 3: Write minimal implementation**

Implement the launcher inside injected page JS, not a side panel:

- floating circular button in bottom-right
- popup window anchored to the button
- button actions: open chat, stop/start runtime, open settings
- blur/outside-click collapses popup back to button

Prefer reusing the existing SuperRPA overlay/dialog/message primitives in `sg_compat.js` instead of inventing a second overlay stack.

**Step 4: Run smoke to verify it passes**

Run the same smoke command.

Expected: smoke reaches the popup, submits a task, and collapses correctly after blur.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/superRpa/src add \
chrome/browser/resources/superrpa/sgclaw_overlay.js \
chrome/browser/resources/superrpa/sg_compat.js \
chrome/browser/superrpa/hooks/hook_injector.cc \
chrome/browser/superrpa/hooks/hook_injector.h \
chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs
git -C /home/zyl/projects/superRpa/src commit -m "feat: add sgclaw floating launcher"
```

### Task 3: Upgrade Browser Session State From “Result Page” To “Real Conversation”

**Files:**

- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/ui/webui/superrpa/sgclaw_session_service.h`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/ui/webui/superrpa/sgclaw_session_service.cc`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/ui/webui/superrpa/functions_ui.h`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/ui/webui/superrpa/functions_ui.cc`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_messages.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/ui/webui/superrpa/functions_ui_mainline_unittest.cc`

**Step 1: Write the failing browser-side tests**

Add tests for:

- conversation messages are returned by `sgclawConnect`
- reopening the chat keeps prior user/assistant turns
- `sgclawSubmitTask` appends a user turn immediately and an assistant turn when complete

Example expectation:

```cc
EXPECT_EQ("user", FindStringValue(*message, "role"));
EXPECT_EQ("打开百度搜索天气", FindStringValue(*message, "content"));
```

**Step 2: Run test to verify it fails**

Run:

```bash
autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease \
functions_ui_mainline_unittests
./out/KylinRelease/functions_ui_mainline_unittests
```

Expected: tests fail because runtime state only has logs/final result.

**Step 3: Write minimal implementation**

Extend `SgClawSessionService` to store:

- conversation id
- ordered messages
- pending assistant reply state
- runtime status/logs

Keep the debug page and popup both consuming the same runtime shape.
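
The submit flow above (user turn appended immediately, assistant turn added on completion) can be sketched as a small message-list reducer. This is illustrative only; the real state lives in the C++ `SgClawSessionService`, and the type and function names here are hypothetical:

```ts
// Illustrative sketch; the actual session owner is SgClawSessionService in C++.
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

interface ConversationState {
  conversationId: string;
  messages: ChatMessage[];
  pendingAssistant: boolean;
}

function submitUserTurn(state: ConversationState, text: string): ConversationState {
  // The user turn is appended immediately so the popup renders it before the task finishes.
  return {
    ...state,
    messages: [...state.messages, { role: 'user', content: text }],
    pendingAssistant: true,
  };
}

function completeAssistantTurn(state: ConversationState, summary: string): ConversationState {
  // The assistant turn lands only when task_complete arrives.
  return {
    ...state,
    messages: [...state.messages, { role: 'assistant', content: summary }],
    pendingAssistant: false,
  };
}
```

Keeping the shape this simple is what lets both the debug page and the popup render from the same runtime state.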

**Step 4: Run test to verify it passes**

Run the same test command.

Expected: connect/reopen behavior passes and conversation persists while the browser stays open.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/superRpa/src add \
chrome/browser/ui/webui/superrpa/sgclaw_session_service.h \
chrome/browser/ui/webui/superrpa/sgclaw_session_service.cc \
chrome/browser/ui/webui/superrpa/functions_ui.h \
chrome/browser/ui/webui/superrpa/functions_ui.cc \
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts \
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_messages.ts \
chrome/browser/ui/webui/superrpa/functions_ui_mainline_unittest.cc
git -C /home/zyl/projects/superRpa/src commit -m "feat: persist sgclaw conversation state"
```

### Task 4: Extend sgClaw Submit Protocol For Multi-Turn Context

**Files:**

- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_pipe_protocol.h`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_pipe_protocol.cc`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_process_host.cc`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/ui/webui/superrpa/sgclaw_session_service.cc`
- Modify: `/home/zyl/projects/sgClaw/claw/src/pipe/protocol.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/src/agent/mod.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/src/compat/runtime.rs`
- Test: `/home/zyl/projects/sgClaw/claw/tests/compat_runtime_test.rs`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_process_host_mainline_unittest.cc`

**Step 1: Write the failing protocol tests**

Add tests that `submit_task` can carry:

- current user input
- prior user/assistant turns
- active page URL / title hints if needed

For Rust, add a test that two consecutive submits produce a provider request containing prior turns.

**Step 2: Run tests to verify they fail**

Run:

```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
--manifest-path /home/zyl/projects/sgClaw/claw/Cargo.toml --test compat_runtime_test
autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease \
sgclaw_process_host_mainline_unittests
./out/KylinRelease/sgclaw_process_host_mainline_unittests \
--gtest_filter='SgClawProcessHostMainlineTest.*'
```

Expected: tests fail because submit currently only sends a raw instruction string.

**Step 3: Write minimal implementation**

Change the pipe payload from a one-shot instruction to:

```json
{
  "type": "submit_task",
  "instruction": "...",
  "messages": [
    {"role": "user", "content": "..."},
    {"role": "assistant", "content": "..."}
  ]
}
```

On the Rust side, feed this history into the ZeroClaw turn so the next submit is a continuation, not a new session.
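
On the browser side, assembling this payload is little more than attaching the stored turns to each submit. A sketch, with the caveat that the history cap is an assumption (the plan does not specify one) and the field names simply mirror the JSON shape above:

```ts
// Sketch of building the extended submit_task payload; the 20-turn cap is an assumption.
interface Turn {
  role: 'user' | 'assistant';
  content: string;
}

function buildSubmitTask(instruction: string, history: Turn[], maxTurns = 20): {
  type: string;
  instruction: string;
  messages: Turn[];
} {
  return {
    type: 'submit_task',
    instruction,
    // Cap the carried history so the pipe message stays small on long conversations.
    messages: history.slice(-maxTurns),
  };
}
```

Capping at the most recent turns keeps the pipe message bounded while still giving the provider enough context for a follow-up task.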

**Step 4: Run tests to verify they pass**

Run the same Rust + browser unit commands.

Expected: previous-turn context reaches the provider path.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/sgClaw/claw add \
src/pipe/protocol.rs src/agent/mod.rs src/compat/runtime.rs tests/compat_runtime_test.rs
git -C /home/zyl/projects/sgClaw/claw commit -m "feat: carry conversation history through sgclaw pipe"

git -C /home/zyl/projects/superRpa/src add \
chrome/browser/superrpa/sgclaw/sgclaw_pipe_protocol.h \
chrome/browser/superrpa/sgclaw/sgclaw_pipe_protocol.cc \
chrome/browser/superrpa/sgclaw/sgclaw_process_host.cc \
chrome/browser/ui/webui/superrpa/sgclaw_session_service.cc \
chrome/browser/superrpa/sgclaw/sgclaw_process_host_mainline_unittest.cc
git -C /home/zyl/projects/superRpa/src commit -m "feat: send sgclaw conversation context"
```

### Task 5: Harden Tool Schema And DeepSeek Compatibility

**Files:**

- Modify: `/home/zyl/projects/sgClaw/claw/src/compat/browser_tool_adapter.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/src/compat/runtime.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/tests/compat_browser_tool_test.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/tests/compat_runtime_test.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/tools/browser_smoke/run_deepseek_browser_smoke.mjs`

**Step 1: Write the failing tests**

Cover:

- `getText` without `selector` is rejected before it hits the browser
- `click` without `selector` is rejected
- `navigate` without `url` is rejected
- DeepSeek multi-round tool-call history does not trigger the `role=tool` 400 anymore
- non-task greeting behavior is explicit: either reject or answer in chat-only mode, but not silently pretend to be a browser task
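
Behind the `role=tool` 400: OpenAI-compatible chat APIs generally require every `role: "tool"` message to directly follow an assistant message whose `tool_calls` contains the matching `tool_call_id`. A hedged sketch of a history filter enforcing that invariant (the message shape below is an assumption about the replayed history, not the actual sgClaw types):

```ts
// Drop orphaned tool messages so replayed history satisfies the
// assistant(tool_calls) -> tool(tool_call_id) pairing rule.
interface ProviderMessage {
  role: 'system' | 'user' | 'assistant' | 'tool';
  content: string;
  tool_calls?: { id: string }[];
  tool_call_id?: string;
}

function normalizeToolHistory(history: ProviderMessage[]): ProviderMessage[] {
  const out: ProviderMessage[] = [];
  const openIds = new Set<string>();
  for (const msg of history) {
    if (msg.role === 'tool') {
      // Keep a tool result only if a preceding assistant message declared its id.
      if (msg.tool_call_id && openIds.has(msg.tool_call_id)) {
        out.push(msg);
        openIds.delete(msg.tool_call_id);
      }
      continue;
    }
    // Any non-tool message closes the previous tool-call window.
    openIds.clear();
    if (msg.role === 'assistant' && msg.tool_calls) {
      for (const call of msg.tool_calls) openIds.add(call.id);
    }
    out.push(msg);
  }
  return out;
}
```

The tests in this step can then assert that a history containing an orphan tool message never reaches the provider unfiltered.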

**Step 2: Run tests to verify they fail**

Run:

```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
--manifest-path /home/zyl/projects/sgClaw/claw/Cargo.toml --lib --tests
node /home/zyl/projects/sgClaw/claw/tools/browser_smoke/run_deepseek_browser_smoke.mjs
```

Expected: current code allows incomplete tool args and still has DeepSeek history edge cases.

**Step 3: Write minimal implementation**

Implement:

- action-specific required param validation in `browser_tool_adapter.rs`
- better tool-result/history formatting if needed for DeepSeek compatibility
- explicit user-facing handling for non-browser-chat input
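
A minimal sketch of the action-specific validation. The actual adapter is Rust (`browser_tool_adapter.rs`); the required-parameter table below is inferred from the test list in Step 1, and the `type` action's requirements are an assumption:

```ts
// Reject malformed tool actions before they reach the browser pipe.
const REQUIRED_PARAMS: Record<string, string[]> = {
  navigate: ['url'],
  click: ['selector'],
  type: ['selector', 'text'], // assumption: typing needs a target and text
  getText: ['selector'],
};

// Returns null when the action is safe to forward, else an error string
// the runtime can attribute to the model rather than to the browser.
function validateAction(action: string, args: Record<string, unknown>): string | null {
  const required = REQUIRED_PARAMS[action];
  if (!required) return `unknown action: ${action}`;
  for (const param of required) {
    const value = args[param];
    if (typeof value !== 'string' || value.trim() === '') {
      return `action ${action} requires non-empty "${param}"`;
    }
  }
  return null;
}
```

Rejecting here, instead of letting the browser fail, is what gives the clean sgClaw-versus-browser error attribution the plan asks for.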

**Step 4: Run tests to verify they pass**

Run the same Rust tests and browser smoke.

Expected: no malformed tool actions, no DeepSeek `role=tool` 400 in smoke.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/sgClaw/claw add \
src/compat/browser_tool_adapter.rs \
src/compat/runtime.rs \
tests/compat_browser_tool_test.rs \
tests/compat_runtime_test.rs \
tools/browser_smoke/run_deepseek_browser_smoke.mjs
git -C /home/zyl/projects/sgClaw/claw commit -m "fix: harden sgclaw tool protocol for DeepSeek"
```

### Task 6: Final Verification And Manual Smoke Checklist

**Files:**

- Modify if needed: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs`
- Document manual steps in PR/summary, not code

**Step 1: Run automated verification**

```bash
python3 /home/zyl/projects/superRpa/src/tools/crates/run_cargo.py test \
--manifest-path /home/zyl/projects/sgClaw/claw/Cargo.toml --lib --tests
autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease \
functions_ui_mainline_unittests \
sgclaw_process_host_mainline_unittests
./out/KylinRelease/functions_ui_mainline_unittests
./out/KylinRelease/sgclaw_process_host_mainline_unittests \
--gtest_filter='SgClawProcessHostMainlineTest.*'
node /home/zyl/projects/sgClaw/claw/tools/browser_smoke/run_deepseek_browser_smoke.mjs
```

Expected: all pass.

**Step 2: Manual smoke**

1. Open a normal HTTP/HTTPS page.
2. Verify the floating button appears.
3. Click it to open the popup.
4. Start sgClaw from the popup.
5. Submit one browser task and one follow-up task.
6. Click outside the popup and verify it collapses to the button.
7. Reopen the popup and verify conversation history is still present.
8. Open settings from the launcher, update model/base URL, return to the popup, submit again, and verify the hot update.

**Step 3: Final commit if verification requires touch-ups**

Use focused commit messages only for actual fixes found during verification.
@@ -0,0 +1,73 @@

# Rust-Only Acceptance Checklist

## Scope

This checklist covers the Rust-side work that can be verified before the SuperRPA browser repository is available locally.

Covered:

- pipe handshake and protocol baseline
- task-level message types
- HMAC canonical string alignment
- Phase 1 rule-based Baidu search planner
- DeepSeek provider scaffolding
- provider-backed minimal Agent runtime with fallback to planner mode

Not covered yet:

- `SgClawProcessHost`
- `PipeListener`
- `CommandRouter` reuse in SuperRPA
- FunctionsUI bridge integration

## Required Commands

Run from the feature worktree:

```bash
cd /home/zyl/projects/sgClaw/.worktrees/superrpa-browser-control
cargo test -q
```

Optional focused checks:

```bash
cargo test --test task_protocol_test -q
cargo test --test planner_test -q
cargo test --test runtime_task_flow_test -q
cargo test --test deepseek_provider_test -q
cargo test --test agent_runtime_test -q
```

## Pass Criteria

- `init -> init_ack` tests pass
- `submit_task`, `task_complete`, and `log_entry` serialize correctly
- HMAC output is based on a newline-delimited canonical string with stable JSON ordering
- Planner turns `打开百度搜索天气` into `navigate -> type -> click`
- Runtime mock flow emits browser commands and finishes with `task_complete`
- DeepSeek settings load from environment with default model `deepseek-chat`
- DeepSeek request body matches the OpenAI-compatible chat completion shape
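
The HMAC criterion can be illustrated with a small sketch. The exact canonical fields are an assumption (the real alignment lives in the Rust pipe code); the point is signing a newline-delimited string built from a JSON serialization with stable key ordering:

```ts
import { createHmac } from 'node:crypto';

// Hypothetical canonical form: message type, then JSON with sorted top-level keys,
// joined by newlines, signed with HMAC-SHA256.
function canonicalJson(value: Record<string, unknown>): string {
  // A replacer array makes JSON.stringify emit keys in the given (sorted) order.
  return JSON.stringify(value, Object.keys(value).sort());
}

function signMessage(secret: string, msgType: string, payload: Record<string, unknown>): string {
  const canonical = [msgType, canonicalJson(payload)].join('\n');
  return createHmac('sha256', secret).update(canonical).digest('hex');
}
```

With stable key ordering, two payloads that differ only in property insertion order produce identical signatures, which is exactly what both ends of the pipe need to agree on.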

## Runtime Configuration

The provider-backed path is enabled only when `DEEPSEEK_API_KEY` is present.

Environment variables:

- `DEEPSEEK_API_KEY`
- `DEEPSEEK_BASE_URL` (optional, defaults to `https://api.deepseek.com`)
- `DEEPSEEK_MODEL` (optional, defaults to `deepseek-chat`)

Without `DEEPSEEK_API_KEY`, sgClaw falls back to the Phase 1 rule-based planner.
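
The resolution order above can be sketched as follows (illustrative only; the real loader is the Rust `DeepSeekSettings`):

```ts
// Resolve DeepSeek settings from an environment map; a missing key means planner fallback.
interface DeepSeekSettings {
  apiKey: string;
  baseUrl: string;
  model: string;
}

function resolveDeepSeekSettings(env: Record<string, string | undefined>): DeepSeekSettings | null {
  const apiKey = env['DEEPSEEK_API_KEY'];
  if (!apiKey) return null; // caller falls back to the rule-based planner
  return {
    apiKey,
    baseUrl: env['DEEPSEEK_BASE_URL'] ?? 'https://api.deepseek.com',
    model: env['DEEPSEEK_MODEL'] ?? 'deepseek-chat',
  };
}
```

Returning `null` rather than throwing keeps the no-key case a supported mode, not an error.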

## Current Branch Milestones

- `b9773d4` — task pipe protocol and HMAC alignment
- `1ab0012` — Phase 1 task planner flow
- `0d0097b` — DeepSeek provider scaffolding
- `9979b1f` — provider-backed Agent runtime

## Next Dependency

To continue beyond Rust-only acceptance, the local SuperRPA / Chromium repository path is required so Tasks 3, 4, and 5 can be implemented and verified.
@@ -301,7 +301,7 @@ Expected: PASS including provider and runtime suites.
 - Modify: `README.md`
 - Create: `docs/superpowers/acceptance/2026-03-25-superrpa-sgclaw-browser-control.md`
 - Modify: `docs/浏览器对接标准.md`
-- Modify: `docs/sgclaw_project_team_kickoff.md`
+- Modify: `docs/archive/项目管理与排期/sgclaw_project_team_kickoff.md`
 
 - [ ] **Step 1: Write acceptance checklist**
@@ -1,8 +1,8 @@
 # frontend directory overview
 
-The current `frontend/` keeps only development-verification content:
+The current `frontend/` keeps verification and archived assets:
 
-- `sgClaw验证/`: the local verification page and scripts.
+- `archive/sgClaw验证-已归档/`: the historical local verification page and scripts (including the Vue 2 verification page, `serve.sh`, `download-libs.sh`, and `testRunner.js`).
 
 The pages and diagram files previously used for leadership demos have been archived to:

frontend/archive/README.md (new file, 13 lines)
# Frontend Archive Assets

## Archived Content

- `sgClaw验证-已归档/`: the historical local verification page and scripts (the Vue 2 verification page, the serve script, the offline dependency download script, and the test runner).

## Usage Notes

These are historical assets and are not part of the project's mainline runtime path. To reproduce the old manual verification flow, run directly:

```bash
bash frontend/archive/sgClaw验证-已归档/serve.sh
```
frontend/archive/sgClaw验证-已归档/index.html (new file, 1062 lines; diff suppressed, too large)
frontend/archive/sgClaw验证-已归档/sgclaw-chat-standalone.html (new file, 1063 lines; diff suppressed, too large)
frontend/archive/sgClaw验证-已归档/superrpa_migration/README.md (new file, 30 lines)

# sgClaw Chat UI: SuperRPA Migration Draft

This directory is used to validate the new chat page in the current repository first, then migrate it manually to:
`/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/`

## Files That Can Be Migrated Directly

- `sgclaw-chat.ts`: the new Lit-component chat page (the main implementation of the `sgclaw-chat` Function).

## Suggested Migration Steps

1. Back up the original file:
   - `cp /home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts /tmp/sgclaw-chat-backup.ts`

2. Copy the new file:
   - `cp frontend/archive/sgClaw验证-已归档/superrpa_migration/sgclaw-chat.ts /home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts`

3. (Optional) Keep compatibility:
   - The existing `sgclaw-chat.html.ts` and `sgclaw-chat.css.ts` remain placeholder exports and do not affect this component's inline template.
   - If project lint/format rules require it, split them back out into separate html.ts/css.ts files.

4. Reload the Functions page to verify: visit the corresponding `sgclaw-chat` function entry.

## Notes

- The current version keeps these localStorage keys:
  - `sgclaw-chat-ui-v1`
  - `sgclaw-chat-messages-v1`
- When no API key is detected, it automatically falls back to mock answers.
- OpenAI / Claude / mock modes are all supported.
frontend/archive/sgClaw验证-已归档/superrpa_migration/sgclaw-chat.ts (new file, 1140 lines; diff suppressed, too large)
@@ -1,13 +1,15 @@
 {
   "version": "1.0",
-  "demo_only_domains": ["baidu.com", "www.baidu.com"],
+  "demo_only_domains": ["baidu.com", "www.baidu.com", "zhihu.com", "www.zhihu.com"],
   "domains": {
     "allowed": [
       "oa.example.com",
       "erp.example.com",
       "hr.example.com",
       "baidu.com",
-      "www.baidu.com"
+      "www.baidu.com",
+      "zhihu.com",
+      "www.zhihu.com"
     ]
   },
   "pipe_actions": {
src/agent/mod.rs (262 lines changed)
@@ -1,17 +1,133 @@
 pub mod planner;
 pub mod runtime;
 
-use crate::llm::DeepSeekProvider;
-use crate::pipe::{AgentMessage, BrowserMessage, BrowserPipeTool, PipeError, Transport};
+use std::ffi::OsString;
+use std::path::PathBuf;
+
+use crate::compat::runtime::CompatTaskContext;
+use crate::config::DeepSeekSettings;
+use crate::pipe::{
+    AgentMessage, BrowserMessage, BrowserPipeTool, ConversationMessage, PipeError, Transport,
+};
+
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct AgentRuntimeContext {
+    config_path: Option<PathBuf>,
+    workspace_root: PathBuf,
+}
+
+impl AgentRuntimeContext {
+    pub fn new(config_path: Option<PathBuf>, workspace_root: PathBuf) -> Self {
+        Self {
+            config_path,
+            workspace_root,
+        }
+    }
+
+    pub fn from_process_args<I, S>(args: I) -> Result<Self, PipeError>
+    where
+        I: IntoIterator<Item = S>,
+        S: Into<OsString>,
+    {
+        let mut config_path = None;
+        let mut args = args.into_iter().map(Into::into);
+        let _ = args.next();
+
+        while let Some(arg) = args.next() {
+            if arg == OsString::from("--config-path") {
+                let Some(value) = args.next() else {
+                    return Err(PipeError::Protocol(
+                        "missing value for --config-path".to_string(),
+                    ));
+                };
+                config_path = Some(PathBuf::from(value));
+                continue;
+            }
+
+            let arg_string = arg.to_string_lossy();
+            if let Some(value) = arg_string.strip_prefix("--config-path=") {
+                config_path = Some(PathBuf::from(value));
+            }
+        }
+
+        let workspace_root = config_path
+            .as_ref()
+            .and_then(|path| path.parent().map(|parent| parent.to_path_buf()))
+            .unwrap_or_else(default_workspace_root);
+
+        Ok(Self::new(config_path, workspace_root))
+    }
+
+    fn load_deepseek_settings(&self) -> Result<Option<DeepSeekSettings>, PipeError> {
+        DeepSeekSettings::load(self.config_path.as_deref())
+            .map_err(|err| PipeError::Protocol(err.to_string()))
+    }
+
+    fn deepseek_source_label(&self) -> String {
+        match &self.config_path {
+            Some(path) if path.exists() => path.display().to_string(),
+            _ => "environment".to_string(),
+        }
+    }
+}
+
+impl Default for AgentRuntimeContext {
+    fn default() -> Self {
+        Self::new(None, default_workspace_root())
+    }
+}
+
+fn default_workspace_root() -> PathBuf {
+    std::env::current_dir().unwrap_or_else(|_| PathBuf::from("."))
+}
+
+fn send_mode_log<T: Transport>(transport: &T, mode: &str) -> Result<(), PipeError> {
+    transport.send(&AgentMessage::LogEntry {
+        level: "mode".to_string(),
+        message: mode.to_string(),
+    })
+}
+
+fn explicit_non_task_response(history: &[ConversationMessage], instruction: &str) -> Option<String> {
+    if !history.is_empty() {
+        return None;
+    }
+
+    let trimmed = instruction.trim();
+    if trimmed.is_empty() {
+        return Some("sgClaw 目前只处理浏览器任务,请直接描述要打开、搜索、点击或提取的网页操作。".to_string());
+    }
+
+    const TASK_HINTS: &[&str] = &[
+        "打开", "搜索", "点击", "输入", "导航", "跳转", "访问", "提取", "获取", "网页", "页面",
+        "标签页", "百度", "知乎", "google", "open", "search", "click", "type", "navigate",
+    ];
+    if TASK_HINTS.iter().any(|hint| trimmed.contains(hint)) {
+        return None;
+    }
+
+    const CHITCHAT_INPUTS: &[&str] = &[
+        "hi", "hello", "hey", "你好", "您好", "嗨", "在吗", "你是谁", "介绍一下你自己",
+    ];
+    if CHITCHAT_INPUTS
+        .iter()
+        .any(|candidate| trimmed.eq_ignore_ascii_case(candidate) || trimmed == *candidate)
+    {
+        return Some("sgClaw 现在是浏览器任务入口,不做通用闲聊。请直接说你想在网页上执行什么操作,例如“打开百度搜索天气”。".to_string());
+    }
+
+    if trimmed.chars().count() <= 8 {
+        return Some("sgClaw 现在只处理浏览器任务。请直接描述网页操作目标,例如“打开知乎搜索天气”或“提取当前页面标题”。".to_string());
+    }
+
+    None
+}
+
-pub fn execute_task<T: Transport>(
+fn execute_plan<T: Transport>(
     transport: &T,
     browser_tool: &BrowserPipeTool<T>,
-    instruction: &str,
+    plan: &planner::TaskPlan,
 ) -> Result<String, PipeError> {
-    let plan = planner::plan_instruction(instruction)
-        .map_err(|err| PipeError::Protocol(err.to_string()))?;
 
     for step in &plan.steps {
         transport.send(&AgentMessage::LogEntry {
             level: "info".to_string(),
@@ -31,42 +147,128 @@ pub fn execute_task<T: Transport>(
         }
     }
 
-    Ok(plan.summary)
+    Ok(plan.summary.clone())
 }
 
-pub fn handle_browser_message<T: Transport>(
+pub fn execute_task<T: Transport>(
+    transport: &T,
+    browser_tool: &BrowserPipeTool<T>,
+    instruction: &str,
+) -> Result<String, PipeError> {
+    let plan = planner::plan_instruction(instruction)
+        .map_err(|err| PipeError::Protocol(err.to_string()))?;
+    execute_plan(transport, browser_tool, &plan)
+}
+
+pub fn handle_browser_message<T: Transport + 'static>(
     transport: &T,
     browser_tool: &BrowserPipeTool<T>,
     message: BrowserMessage,
+) -> Result<(), PipeError> {
+    handle_browser_message_with_context(
+        transport,
+        browser_tool,
+        &AgentRuntimeContext::default(),
+        message,
+    )
+}
+
+pub fn handle_browser_message_with_context<T: Transport + 'static>(
+    transport: &T,
+    browser_tool: &BrowserPipeTool<T>,
+    context: &AgentRuntimeContext,
+    message: BrowserMessage,
 ) -> Result<(), PipeError> {
     match message {
-        BrowserMessage::SubmitTask { instruction } => {
-            let completion = match DeepSeekProvider::from_env() {
-                Ok(provider) => match runtime::execute_task_with_provider(
-                    transport,
-                    browser_tool,
-                    &provider,
-                    &instruction,
-                ) {
-                    Ok(summary) => AgentMessage::TaskComplete {
-                        success: true,
-                        summary,
-                    },
+        BrowserMessage::SubmitTask {
+            instruction,
+            conversation_id,
+            messages,
+            page_url,
+            page_title,
+        } => {
+            if let Some(summary) = explicit_non_task_response(&messages, &instruction) {
+                return transport.send(&AgentMessage::TaskComplete {
+                    success: false,
+                    summary,
+                });
+            }
+
+            let task_context = CompatTaskContext {
+                conversation_id: (!conversation_id.trim().is_empty())
+                    .then_some(conversation_id.clone()),
+                messages,
+                page_url: (!page_url.trim().is_empty()).then_some(page_url),
+                page_title: (!page_title.trim().is_empty()).then_some(page_title),
+            };
+            if !task_context.messages.is_empty() {
+                let _ = transport.send(&AgentMessage::LogEntry {
+                    level: "info".to_string(),
+                    message: format!(
+                        "continuing conversation with {} prior turns",
+                        task_context.messages.len()
+                    ),
+                });
+            }
+            let completion = match context.load_deepseek_settings() {
+                Ok(Some(settings)) => {
+                    let _ = transport.send(&AgentMessage::LogEntry {
+                        level: "info".to_string(),
+                        message: format!(
+                            "DeepSeek config loaded from {} model={} base_url={}",
|
||||||
|
context.deepseek_source_label(),
|
||||||
|
settings.model,
|
||||||
|
settings.base_url
|
||||||
|
),
|
||||||
|
});
|
||||||
|
let _ = send_mode_log(transport, "compat_llm_primary");
|
||||||
|
match crate::compat::runtime::execute_task(
|
||||||
|
transport,
|
||||||
|
browser_tool.clone(),
|
||||||
|
&instruction,
|
||||||
|
&task_context,
|
||||||
|
&context.workspace_root,
|
||||||
|
&settings,
|
||||||
|
) {
|
||||||
|
Ok(summary) => AgentMessage::TaskComplete {
|
||||||
|
success: true,
|
||||||
|
summary,
|
||||||
|
},
|
||||||
|
Err(err) => AgentMessage::TaskComplete {
|
||||||
|
success: false,
|
||||||
|
summary: err.to_string(),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
Ok(None) => match planner::plan_instruction(&instruction) {
|
||||||
|
Ok(plan) => {
|
||||||
|
let _ = send_mode_log(transport, "deterministic_planner");
|
||||||
|
match execute_plan(transport, browser_tool, &plan) {
|
||||||
|
Ok(summary) => AgentMessage::TaskComplete {
|
||||||
|
success: true,
|
||||||
|
summary,
|
||||||
|
},
|
||||||
|
Err(err) => AgentMessage::TaskComplete {
|
||||||
|
success: false,
|
||||||
|
summary: err.to_string(),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
}
|
||||||
Err(err) => AgentMessage::TaskComplete {
|
Err(err) => AgentMessage::TaskComplete {
|
||||||
success: false,
|
success: false,
|
||||||
summary: err.to_string(),
|
summary: PipeError::Protocol(err.to_string()).to_string(),
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
Err(_) => match execute_task(transport, browser_tool, &instruction) {
|
Err(err) => {
|
||||||
Ok(summary) => AgentMessage::TaskComplete {
|
let _ = transport.send(&AgentMessage::LogEntry {
|
||||||
success: true,
|
level: "error".to_string(),
|
||||||
summary,
|
message: format!("failed to load DeepSeek config: {err}"),
|
||||||
},
|
});
|
||||||
Err(err) => AgentMessage::TaskComplete {
|
AgentMessage::TaskComplete {
|
||||||
success: false,
|
success: false,
|
||||||
summary: err.to_string(),
|
summary: err.to_string(),
|
||||||
},
|
}
|
||||||
},
|
}
|
||||||
};
|
};
|
||||||
transport.send(&completion)
|
transport.send(&completion)
|
||||||
}
|
}
|
||||||
|
|||||||
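The three-way completion choice in the new handler (DeepSeek settings found, settings absent, settings failed to load) can be sketched in isolation. The types below are simplified stand-ins, not the crate's real `AgentRuntimeContext` or `AgentMessage`: settings reduce to `Result<Option<&str>, &str>` and a completion reduces to a `(success, summary)` pair.

```rust
// Sketch of the fallback chain: LLM settings present -> compat runtime,
// absent -> deterministic planner, load failure -> error completion.
fn choose_completion(
    settings: Result<Option<&str>, &str>,
    llm: impl Fn() -> Result<String, String>,
    planner: impl Fn() -> Result<String, String>,
) -> (bool, String) {
    match settings {
        // Settings loaded: the LLM-backed runtime is the primary path.
        Ok(Some(_)) => match llm() {
            Ok(summary) => (true, summary),
            Err(err) => (false, err),
        },
        // No settings configured: fall back to the deterministic planner.
        Ok(None) => match planner() {
            Ok(summary) => (true, summary),
            Err(err) => (false, err),
        },
        // Config could not be read at all: report a failed completion.
        Err(err) => (false, format!("failed to load DeepSeek config: {err}")),
    }
}
```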
@@ -1,3 +1,4 @@
+use reqwest::Url;
 use serde_json::{json, Value};
 use thiserror::Error;

@@ -7,6 +8,8 @@ const BAIDU_URL: &str = "https://www.baidu.com";
 const BAIDU_DOMAIN: &str = "www.baidu.com";
 const BAIDU_INPUT_SELECTOR: &str = "#kw";
 const BAIDU_SEARCH_BUTTON_SELECTOR: &str = "#su";
+const ZHIHU_URL: &str = "https://www.zhihu.com/search";
+const ZHIHU_DOMAIN: &str = "www.zhihu.com";

 #[derive(Debug, Clone, PartialEq)]
 pub struct PlannedStep {
@@ -32,17 +35,38 @@ pub enum PlannerError {

 pub fn plan_instruction(instruction: &str) -> Result<TaskPlan, PlannerError> {
     let trimmed = instruction.trim();
-    let query = trimmed
-        .strip_prefix("打开百度搜索")
-        .or_else(|| trimmed.strip_prefix("打开百度并搜索"))
-        .ok_or_else(|| PlannerError::UnsupportedInstruction(trimmed.to_string()))?
-        .trim();
+    if let Some(query) = extract_query(trimmed, &["打开百度搜索", "打开百度并搜索"])? {
+        return Ok(plan_baidu_search(query));
+    }
+
+    if let Some(query) = extract_query(trimmed, &["打开知乎搜索", "打开知乎并搜索"])? {
+        return Ok(plan_zhihu_search(query));
+    }
+
+    Err(PlannerError::UnsupportedInstruction(trimmed.to_string()))
+}
+
+fn extract_query<'a>(
+    instruction: &'a str,
+    prefixes: &[&str],
+) -> Result<Option<&'a str>, PlannerError> {
+    let Some(query) = prefixes
+        .iter()
+        .find_map(|prefix| instruction.strip_prefix(prefix))
+    else {
+        return Ok(None);
+    };
+
+    let query = query.trim();
     if query.is_empty() {
         return Err(PlannerError::MissingQuery);
     }

-    Ok(TaskPlan {
+    Ok(Some(query))
+}
+
+fn plan_baidu_search(query: &str) -> TaskPlan {
+    TaskPlan {
         summary: format!("已在百度搜索{query}"),
         steps: vec![
             PlannedStep {
@@ -68,5 +92,21 @@ pub fn plan_instruction(instruction: &str) -> Result<TaskPlan, PlannerError> {
                 log_message: format!("click {BAIDU_SEARCH_BUTTON_SELECTOR}"),
             },
         ],
-    })
+    }
 }
+
+fn plan_zhihu_search(query: &str) -> TaskPlan {
+    let url = Url::parse_with_params(ZHIHU_URL, &[("type", "content"), ("q", query)])
+        .expect("valid Zhihu search URL");
+    let url: String = url.into();
+
+    TaskPlan {
+        summary: format!("已在知乎搜索{query}"),
+        steps: vec![PlannedStep {
+            action: Action::Navigate,
+            params: json!({ "url": url }),
+            expected_domain: ZHIHU_DOMAIN.to_string(),
+            log_message: format!("navigate {url}"),
+        }],
+    }
+}
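The new `extract_query` helper is small enough to exercise on its own. This standalone sketch replaces `PlannerError` with a plain `String` so it runs without the crate; the matching logic is otherwise the same as in the diff above.

```rust
// Prefix-based query extraction: Ok(None) when no prefix matches,
// Err when a prefix matches but the query text after it is blank.
fn extract_query<'a>(
    instruction: &'a str,
    prefixes: &[&str],
) -> Result<Option<&'a str>, String> {
    // Try each supported prefix; stop at the first one that matches.
    let Some(query) = prefixes
        .iter()
        .find_map(|prefix| instruction.strip_prefix(prefix))
    else {
        return Ok(None); // this prefix family does not apply
    };

    let query = query.trim();
    if query.is_empty() {
        return Err("missing query".to_string()); // prefix matched, no query text
    }
    Ok(Some(query))
}
```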
src/compat/browser_tool_adapter.rs (new file, 204 lines)

use async_trait::async_trait;
use serde_json::{json, Map, Value};
use zeroclaw::tools::{Tool, ToolResult};

use crate::pipe::{Action, BrowserPipeTool, Transport};

pub const BROWSER_ACTION_TOOL_NAME: &str = "browser_action";

pub struct ZeroClawBrowserTool<T: Transport> {
    browser_tool: BrowserPipeTool<T>,
}

impl<T: Transport> ZeroClawBrowserTool<T> {
    pub fn new(browser_tool: BrowserPipeTool<T>) -> Self {
        Self { browser_tool }
    }
}

#[async_trait]
impl<T: Transport + 'static> Tool for ZeroClawBrowserTool<T> {
    fn name(&self) -> &str {
        BROWSER_ACTION_TOOL_NAME
    }

    fn description(&self) -> &str {
        "Execute browser actions in SuperRPA through the existing sgClaw pipe protocol."
    }

    fn parameters_schema(&self) -> Value {
        json!({
            "type": "object",
            "required": ["action", "expected_domain"],
            "properties": {
                "action": {
                    "type": "string",
                    "enum": ["click", "type", "navigate", "getText"]
                },
                "expected_domain": {
                    "type": "string"
                },
                "selector": {
                    "type": "string"
                },
                "text": {
                    "type": "string"
                },
                "url": {
                    "type": "string"
                },
                "clear_first": {
                    "type": "boolean"
                }
            }
        })
    }

    async fn execute(&self, args: Value) -> anyhow::Result<ToolResult> {
        let request = match parse_browser_action_request(args) {
            Ok(request) => request,
            Err(err) => return Ok(failed_tool_result(err.to_string())),
        };

        let result = match self.browser_tool.invoke(
            request.action,
            request.params,
            &request.expected_domain,
        ) {
            Ok(result) => result,
            Err(err) => return Ok(failed_tool_result(err.to_string())),
        };

        let output = serde_json::to_string(&json!({
            "seq": result.seq,
            "success": result.success,
            "data": result.data,
            "aom_snapshot": result.aom_snapshot,
            "timing": result.timing
        }))?;

        Ok(ToolResult {
            success: result.success,
            output,
            error: (!result.success)
                .then(|| format_browser_action_error(&result.data)),
        })
    }
}

struct BrowserActionRequest {
    action: Action,
    expected_domain: String,
    params: Value,
}

fn parse_browser_action_request(args: Value) -> Result<BrowserActionRequest, BrowserActionAdapterError> {
    let mut args = match args {
        Value::Object(args) => args,
        other => {
            return Err(BrowserActionAdapterError::InvalidArguments(format!(
                "expected object arguments, got {other}"
            )))
        }
    };

    let action_name = take_required_string(&mut args, "action")?;
    let expected_domain = take_required_string(&mut args, "expected_domain")?;
    let action = parse_action(&action_name)?;
    validate_action_params(&action_name, &args)?;

    Ok(BrowserActionRequest {
        action,
        expected_domain,
        params: Value::Object(args),
    })
}

fn parse_action(action_name: &str) -> Result<Action, BrowserActionAdapterError> {
    match action_name {
        "click" => Ok(Action::Click),
        "type" => Ok(Action::Type),
        "navigate" => Ok(Action::Navigate),
        "getText" => Ok(Action::GetText),
        other => Err(BrowserActionAdapterError::UnsupportedAction(
            other.to_string(),
        )),
    }
}

fn take_required_string(
    args: &mut Map<String, Value>,
    key: &'static str,
) -> Result<String, BrowserActionAdapterError> {
    match args.remove(key) {
        Some(Value::String(value)) if !value.trim().is_empty() => Ok(value),
        Some(other) => Err(BrowserActionAdapterError::InvalidArguments(format!(
            "{key} must be a non-empty string, got {other}"
        ))),
        None => Err(BrowserActionAdapterError::MissingField(key)),
    }
}

fn failed_tool_result(error: String) -> ToolResult {
    ToolResult {
        success: false,
        output: String::new(),
        error: Some(error),
    }
}

fn validate_action_params(
    action_name: &str,
    args: &Map<String, Value>,
) -> Result<(), BrowserActionAdapterError> {
    match action_name {
        "click" | "getText" => require_non_empty_string(args, "selector", action_name),
        "type" => {
            require_non_empty_string(args, "selector", action_name)?;
            require_non_empty_string(args, "text", action_name)
        }
        "navigate" => require_non_empty_string(args, "url", action_name),
        _ => Ok(()),
    }
}

fn require_non_empty_string(
    args: &Map<String, Value>,
    key: &'static str,
    action_name: &str,
) -> Result<(), BrowserActionAdapterError> {
    match args.get(key) {
        Some(Value::String(value)) if !value.trim().is_empty() => Ok(()),
        Some(other) => Err(BrowserActionAdapterError::InvalidArguments(format!(
            "{action_name} requires a non-empty {key}, got {other}"
        ))),
        None => Err(BrowserActionAdapterError::InvalidArguments(format!(
            "{action_name} requires {key}"
        ))),
    }
}

fn format_browser_action_error(data: &Value) -> String {
    if let Some(error) = data.get("error") {
        if let Some(message) = error.get("message").and_then(Value::as_str) {
            return message.to_string();
        }
        return format!("browser action failed: {error}");
    }

    if data.is_null() {
        return "browser action returned success=false".to_string();
    }

    format!("browser action failed: {data}")
}

#[derive(Debug, thiserror::Error)]
enum BrowserActionAdapterError {
    #[error("unsupported action: {0}")]
    UnsupportedAction(String),
    #[error("missing required field: {0}")]
    MissingField(&'static str),
    #[error("invalid tool arguments: {0}")]
    InvalidArguments(String),
}
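The per-action argument rules enforced by `validate_action_params` can be restated over a plain `HashMap`, a simplified stand-in for `serde_json::Map`; the adapter's error variants collapse to `String` here.

```rust
use std::collections::HashMap;

// Which string arguments each browser action requires.
fn required_keys(action: &str) -> &'static [&'static str] {
    match action {
        "click" | "getText" => &["selector"],
        "type" => &["selector", "text"],
        "navigate" => &["url"],
        _ => &[],
    }
}

// Every required key must be present and non-blank after trimming,
// mirroring require_non_empty_string in the adapter.
fn validate(action: &str, args: &HashMap<String, String>) -> Result<(), String> {
    for key in required_keys(action) {
        match args.get(*key) {
            Some(value) if !value.trim().is_empty() => {}
            Some(_) => return Err(format!("{action} requires a non-empty {key}")),
            None => return Err(format!("{action} requires {key}")),
        }
    }
    Ok(())
}
```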
src/compat/config_adapter.rs (new file, 38 lines)

use std::path::{Path, PathBuf};

use zeroclaw::Config as ZeroClawConfig;

use crate::compat::cron_adapter::configure_embedded_cron;
use crate::compat::memory_adapter::configure_embedded_memory;
use crate::config::DeepSeekSettings;

const SGCLAW_ZEROCLAW_WORKSPACE_DIR: &str = ".sgclaw-zeroclaw-workspace";

pub fn build_zeroclaw_config(workspace_root: &Path) -> Result<ZeroClawConfig, crate::config::ConfigError> {
    let settings = DeepSeekSettings::from_env()?;
    Ok(build_zeroclaw_config_from_settings(
        workspace_root,
        &settings,
    ))
}

pub fn build_zeroclaw_config_from_settings(
    workspace_root: &Path,
    settings: &DeepSeekSettings,
) -> ZeroClawConfig {
    let workspace_dir = zeroclaw_workspace_dir(workspace_root);
    let mut config = ZeroClawConfig::default();
    config.workspace_dir = workspace_dir.clone();
    config.config_path = workspace_dir.join("config.toml");
    config.default_provider = Some("deepseek".to_string());
    config.default_model = Some(settings.model.clone());
    config.api_key = Some(settings.api_key.clone());
    config.api_url = Some(settings.base_url.clone());
    configure_embedded_memory(&mut config);
    configure_embedded_cron(&mut config);
    config
}

pub fn zeroclaw_workspace_dir(workspace_root: &Path) -> PathBuf {
    workspace_root.join(SGCLAW_ZEROCLAW_WORKSPACE_DIR)
}
src/compat/cron_adapter.rs (new file, 98 lines)

use std::future::Future;

use chrono::{DateTime, Utc};
use zeroclaw::config::Config as ZeroClawConfig;
use zeroclaw::cron::{self, CronJob, CronRun, JobType, Schedule, SessionTarget};

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct CronExecutionResult {
    pub job_id: String,
    pub success: bool,
    pub output: String,
}

pub fn configure_embedded_cron(config: &mut ZeroClawConfig) {
    config.cron.enabled = true;
    config.cron.catch_up_on_startup = false;
    config.scheduler.enabled = false;
    config.scheduler.max_concurrent = 1;
    config.scheduler.max_tasks = config.scheduler.max_tasks.max(1);
}

pub fn add_agent_job(
    config: &ZeroClawConfig,
    name: Option<String>,
    schedule: Schedule,
    prompt: &str,
    allowed_tools: Option<Vec<String>>,
) -> anyhow::Result<CronJob> {
    cron::add_agent_job(
        config,
        name,
        schedule,
        prompt,
        SessionTarget::Isolated,
        None,
        None,
        false,
        allowed_tools,
    )
}

pub fn list_jobs(config: &ZeroClawConfig) -> anyhow::Result<Vec<CronJob>> {
    cron::list_jobs(config)
}

pub fn list_runs(
    config: &ZeroClawConfig,
    job_id: &str,
    limit: usize,
) -> anyhow::Result<Vec<CronRun>> {
    cron::list_runs(config, job_id, limit)
}

pub async fn run_due_jobs<F, Fut>(
    config: &ZeroClawConfig,
    now: DateTime<Utc>,
    mut runner: F,
) -> anyhow::Result<Vec<CronExecutionResult>>
where
    F: FnMut(&CronJob) -> Fut,
    Fut: Future<Output = anyhow::Result<String>>,
{
    let jobs = cron::due_jobs(config, now)?;
    let mut results = Vec::with_capacity(jobs.len());

    for job in jobs {
        if !matches!(job.job_type, JobType::Agent) {
            anyhow::bail!("unsupported cron job type in sgclaw compat: {:?}", job.job_type);
        }

        let started_at = Utc::now();
        let (success, output) = match runner(&job).await {
            Ok(output) => (true, output),
            Err(err) => (false, err.to_string()),
        };
        let finished_at = Utc::now();
        let duration_ms = (finished_at - started_at).num_milliseconds();

        cron::record_run(
            config,
            &job.id,
            started_at,
            finished_at,
            if success { "ok" } else { "error" },
            Some(&output),
            duration_ms,
        )?;
        cron::reschedule_after_run(config, &job, success, &output)?;

        results.push(CronExecutionResult {
            job_id: job.id,
            success,
            output,
        });
    }

    Ok(results)
}
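A synchronous sketch of the `run_due_jobs` bookkeeping: each due job runs, its outcome is folded into a `(success, output)` pair, and a result record is collected. The real adapter is async, takes zeroclaw `CronJob` values, and persists each run via `cron::record_run` / `cron::reschedule_after_run`, all of which is elided here.

```rust
#[derive(Debug, Clone, PartialEq, Eq)]
struct CronExecutionResult {
    job_id: String,
    success: bool,
    output: String,
}

// Run every job id through `runner`; an Err becomes a failed result
// carrying the error text as its output, mirroring the adapter's loop.
fn run_due_jobs<F>(job_ids: &[&str], mut runner: F) -> Vec<CronExecutionResult>
where
    F: FnMut(&str) -> Result<String, String>,
{
    let mut results = Vec::with_capacity(job_ids.len());
    for &id in job_ids {
        let (success, output) = match runner(id) {
            Ok(output) => (true, output),
            Err(err) => (false, err),
        };
        results.push(CronExecutionResult {
            job_id: id.to_string(),
            success,
            output,
        });
    }
    results
}
```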
src/compat/event_bridge.rs (new file, 63 lines)

use serde_json::Value;
use zeroclaw::agent::TurnEvent;

use crate::pipe::AgentMessage;

pub fn log_entry_for_turn_event(event: &TurnEvent) -> Option<AgentMessage> {
    match event {
        TurnEvent::ToolCall { name, args } => Some(AgentMessage::LogEntry {
            level: "info".to_string(),
            message: format_tool_call(name, args),
        }),
        TurnEvent::ToolResult { output, .. } if is_tool_error(output) => Some(AgentMessage::LogEntry {
            level: "error".to_string(),
            message: output.trim_start_matches("Error: ").to_string(),
        }),
        _ => None,
    }
}

fn format_tool_call(name: &str, args: &Value) -> String {
    if name != "browser_action" {
        return format!("call {name}");
    }

    let action = args
        .get("action")
        .and_then(Value::as_str)
        .unwrap_or("unknown");

    match action {
        "navigate" => {
            let url = args.get("url").and_then(Value::as_str).unwrap_or("<missing-url>");
            format!("navigate {url}")
        }
        "type" => {
            let text = args.get("text").and_then(Value::as_str).unwrap_or("");
            let selector = args
                .get("selector")
                .and_then(Value::as_str)
                .unwrap_or("<missing-selector>");
            format!("type {text} into {selector}")
        }
        "click" => {
            let selector = args
                .get("selector")
                .and_then(Value::as_str)
                .unwrap_or("<missing-selector>");
            format!("click {selector}")
        }
        "getText" => {
            let selector = args
                .get("selector")
                .and_then(Value::as_str)
                .unwrap_or("<missing-selector>");
            format!("getText {selector}")
        }
        other => format!("browser_action {other}"),
    }
}

fn is_tool_error(output: &str) -> bool {
    output.starts_with("Error:")
}
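The error-detection rule in the event bridge (a tool result starting with `Error:` becomes an error-level log entry with the prefix stripped) reduces to a two-line standalone function:

```rust
// Mirrors is_tool_error plus the ToolResult branch of
// log_entry_for_turn_event: Some(message) means "log at error level".
fn error_log_message(output: &str) -> Option<String> {
    if output.starts_with("Error:") {
        Some(output.trim_start_matches("Error: ").to_string())
    } else {
        None
    }
}
```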
src/compat/memory_adapter.rs (new file, 30 lines)

use std::path::{Path, PathBuf};

use zeroclaw::config::Config as ZeroClawConfig;
use zeroclaw::memory::{self, Memory};

pub fn configure_embedded_memory(config: &mut ZeroClawConfig) {
    config.memory.backend = "sqlite".to_string();
    config.memory.embedding_provider = "none".to_string();
    config.memory.response_cache_enabled = false;
    config.memory.snapshot_enabled = false;
    config.memory.snapshot_on_hygiene = false;

    config.storage.provider.config.provider.clear();
    config.storage.provider.config.db_url = None;
    config.storage.provider.config.connect_timeout_secs = None;
}

pub fn build_memory(config: &ZeroClawConfig) -> anyhow::Result<Box<dyn Memory>> {
    memory::create_memory_with_storage_and_routes(
        &config.memory,
        &config.embedding_routes,
        Some(&config.storage.provider.config),
        &config.workspace_dir,
        config.api_key.as_deref(),
    )
}

pub fn brain_db_path(workspace_dir: &Path) -> PathBuf {
    workspace_dir.join("memory").join("brain.db")
}
src/compat/mod.rs (new file, 6 lines)

pub mod browser_tool_adapter;
pub mod config_adapter;
pub mod cron_adapter;
pub mod event_bridge;
pub mod memory_adapter;
pub mod runtime;
245
src/compat/runtime.rs
Normal file
245
src/compat/runtime.rs
Normal file
@@ -0,0 +1,245 @@
|
|||||||
|
use std::path::Path;
|
||||||
|
use std::sync::Arc;
|
||||||
|
|
||||||
|
use async_trait::async_trait;
|
||||||
|
use futures_util::{stream, StreamExt};
|
||||||
|
use zeroclaw::agent::dispatcher::NativeToolDispatcher;
|
||||||
|
use zeroclaw::agent::{Agent, TurnEvent};
|
||||||
|
use zeroclaw::config::Config as ZeroClawConfig;
|
||||||
|
use zeroclaw::observability::{NoopObserver, Observer};
|
||||||
|
use zeroclaw::providers::{
|
||||||
|
self, ChatMessage, ChatRequest, ChatResponse, Provider,
|
||||||
|
};
|
||||||
|
use zeroclaw::providers::traits::{
|
||||||
|
ProviderCapabilities, StreamEvent, StreamOptions, StreamResult,
|
||||||
|
};
|
||||||
|
|
||||||
|
use crate::compat::browser_tool_adapter::{ZeroClawBrowserTool, BROWSER_ACTION_TOOL_NAME};
|
||||||
|
use crate::compat::config_adapter::build_zeroclaw_config_from_settings;
|
||||||
|
use crate::config::DeepSeekSettings;
|
||||||
|
use crate::compat::event_bridge::log_entry_for_turn_event;
|
||||||
|
use crate::compat::memory_adapter::build_memory;
|
||||||
|
use crate::pipe::{BrowserPipeTool, ConversationMessage, PipeError, Transport};
|
||||||
|
|
||||||
|
#[derive(Debug, Clone, Default)]
|
||||||
|
pub struct CompatTaskContext {
|
||||||
|
pub conversation_id: Option<String>,
|
||||||
|
pub messages: Vec<ConversationMessage>,
|
||||||
|
pub page_url: Option<String>,
|
||||||
|
pub page_title: Option<String>,
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn execute_task<T: Transport + 'static>(
|
||||||
|
transport: &T,
|
||||||
|
browser_tool: BrowserPipeTool<T>,
|
||||||
|
instruction: &str,
|
||||||
|
task_context: &CompatTaskContext,
|
||||||
|
workspace_root: &Path,
|
||||||
|
settings: &DeepSeekSettings,
|
||||||
|
) -> Result<String, PipeError> {
|
||||||
|
let config = build_zeroclaw_config_from_settings(workspace_root, settings);
|
||||||
|
let provider = build_provider(&config)?;
|
||||||
|
let runtime = tokio::runtime::Runtime::new()
|
||||||
|
.map_err(|err| PipeError::Protocol(format!("failed to create tokio runtime: {err}")))?;
|
||||||
|
|
||||||
|
runtime.block_on(execute_task_with_provider(
|
||||||
|
transport,
|
||||||
|
browser_tool,
|
||||||
|
provider,
|
||||||
|
instruction,
|
||||||
|
task_context,
|
||||||
|
config,
|
||||||
|
))
|
||||||
|
}
|
||||||
|
|
||||||
|
pub async fn execute_task_with_provider<T: Transport + 'static>(
|
||||||
|
transport: &T,
|
||||||
|
browser_tool: BrowserPipeTool<T>,
|
||||||
|
provider: Box<dyn Provider>,
|
||||||
|
instruction: &str,
|
||||||
|
task_context: &CompatTaskContext,
|
||||||
|
config: ZeroClawConfig,
|
||||||
|
) -> Result<String, PipeError> {
|
||||||
|
let mut agent = build_agent(browser_tool, provider, &config)?;
|
||||||
|
if let Some(conversation_id) = task_context
|
||||||
|
.conversation_id
|
||||||
|
.as_deref()
|
||||||
|
.map(str::trim)
|
||||||
|
.filter(|value| !value.is_empty())
|
||||||
|
{
|
||||||
|
agent.set_memory_session_id(Some(conversation_id.to_string()));
|
||||||
|
}
|
||||||
|
|
||||||
|
let seed_messages = build_seed_history(task_context);
|
||||||
|
if !seed_messages.is_empty() {
|
||||||
|
agent.seed_history(&seed_messages);
|
||||||
|
}
|
||||||
|
|
||||||
|
let (event_tx, mut event_rx) = tokio::sync::mpsc::channel::<TurnEvent>(32);
|
||||||
|
let instruction = instruction.to_string();
|
||||||
|
|
||||||
|
let task = tokio::spawn(async move { agent.turn_streamed(&instruction, event_tx).await });
|
||||||
|
|
||||||
|
while let Some(event) = event_rx.recv().await {
|
||||||
|
if let Some(log_entry) = log_entry_for_turn_event(&event) {
|
||||||
|
transport.send(&log_entry)?;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
task.await
|
||||||
|
.map_err(|err| PipeError::Protocol(format!("zeroclaw task join failed: {err}")))?
|
||||||
|
.map_err(|err| PipeError::Protocol(err.to_string()))
|
||||||
|
}
|
||||||
|
|
||||||
|
fn build_agent<T: Transport + 'static>(
|
||||||
|
browser_tool: BrowserPipeTool<T>,
|
||||||
|
provider: Box<dyn Provider>,
|
||||||
|
config: &ZeroClawConfig,
|
||||||
|
) -> Result<Agent, PipeError> {
|
||||||
|
let memory = build_memory(config).map_err(map_anyhow_to_pipe_error)?;
|
||||||
|
let observer: Arc<dyn Observer> = Arc::new(NoopObserver);
|
||||||
|
let tools: Vec<Box<dyn zeroclaw::tools::Tool>> =
|
||||||
|
vec![Box::new(ZeroClawBrowserTool::new(browser_tool))];
|
||||||
|
|
||||||
|
Agent::builder()
|
||||||
|
.provider(provider)
|
||||||
|
.tools(tools)
|
||||||
|
.memory(Arc::from(memory))
|
||||||
|
.observer(observer)
|
||||||
|
.tool_dispatcher(Box::new(NativeToolDispatcher))
|
||||||
|
.config(config.agent.clone())
|
||||||
|
.model_name(
|
||||||
|
config
|
||||||
|
.default_model
|
||||||
|
.clone()
|
||||||
|
.unwrap_or_else(|| "deepseek-chat".to_string()),
|
||||||
|
)
|
||||||
|
.temperature(config.default_temperature)
|
||||||
|
.workspace_dir(config.workspace_dir.clone())
|
||||||
|
.allowed_tools(Some(vec![BROWSER_ACTION_TOOL_NAME.to_string()]))
|
||||||
|
.build()
|
||||||
|
.map_err(map_anyhow_to_pipe_error)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn build_provider(config: &ZeroClawConfig) -> Result<Box<dyn Provider>, PipeError> {
|
||||||
|
let provider_name = config.default_provider.as_deref().unwrap_or("deepseek");
|
||||||
|
let model_name = config
|
||||||
|
.default_model
|
||||||
|
.as_deref()
|
||||||
|
.unwrap_or("deepseek-chat");
|
||||||
|
let runtime_options = providers::provider_runtime_options_from_config(config);
|
||||||
|
let resolved_provider_name = if provider_name == "deepseek" {
|
||||||
|
config
|
||||||
|
.api_url
|
||||||
|
.as_deref()
|
||||||
|
.map(str::trim)
|
||||||
|
.filter(|url| !url.is_empty())
|
||||||
|
.map(|url| format!("custom:{url}"))
|
||||||
|
.unwrap_or_else(|| provider_name.to_string())
|
||||||
|
} else {
|
||||||
|
provider_name.to_string()
|
||||||
|
};
|
||||||
|
let provider = providers::create_routed_provider_with_options(
|
||||||
|
&resolved_provider_name,
|
||||||
|
        config.api_key.as_deref(),
        config.api_url.as_deref(),
        &config.reliability,
        &config.model_routes,
        model_name,
        &runtime_options,
    )
    .map_err(map_anyhow_to_pipe_error)?;

    Ok(Box::new(NonStreamingProvider::new(provider)))
}

fn map_anyhow_to_pipe_error(err: anyhow::Error) -> PipeError {
    PipeError::Protocol(err.to_string())
}

struct NonStreamingProvider {
    inner: Box<dyn Provider>,
}

impl NonStreamingProvider {
    fn new(inner: Box<dyn Provider>) -> Self {
        Self { inner }
    }
}

#[async_trait]
impl Provider for NonStreamingProvider {
    fn capabilities(&self) -> ProviderCapabilities {
        self.inner.capabilities()
    }

    async fn chat_with_system(
        &self,
        system_prompt: Option<&str>,
        message: &str,
        model: &str,
        temperature: f64,
    ) -> anyhow::Result<String> {
        self.inner
            .chat_with_system(system_prompt, message, model, temperature)
            .await
    }

    async fn chat_with_history(
        &self,
        messages: &[ChatMessage],
        model: &str,
        temperature: f64,
    ) -> anyhow::Result<String> {
        self.inner.chat_with_history(messages, model, temperature).await
    }

    async fn chat(
        &self,
        request: ChatRequest<'_>,
        model: &str,
        temperature: f64,
    ) -> anyhow::Result<ChatResponse> {
        self.inner.chat(request, model, temperature).await
    }

    fn supports_streaming(&self) -> bool {
        false
    }

    fn supports_streaming_tool_events(&self) -> bool {
        false
    }

    fn stream_chat(
        &self,
        _request: ChatRequest<'_>,
        _model: &str,
        _temperature: f64,
        _options: StreamOptions,
    ) -> stream::BoxStream<'static, StreamResult<StreamEvent>> {
        stream::empty().boxed()
    }
}

fn build_seed_history(task_context: &CompatTaskContext) -> Vec<ChatMessage> {
    task_context
        .messages
        .iter()
        .filter_map(to_chat_message)
        .collect()
}

fn to_chat_message(message: &ConversationMessage) -> Option<ChatMessage> {
    let content = message.content.trim();
    if content.is_empty() {
        return None;
    }

    match message.role.as_str() {
        "user" => Some(ChatMessage::user(content)),
        "assistant" => Some(ChatMessage::assistant(content)),
        "system" => Some(ChatMessage::system(content)),
        _ => None,
    }
}
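The history-seeding behavior above can be exercised in isolation. A minimal std-only sketch (hypothetical `Role`/`Msg` stand-ins, not the crate's real `ChatMessage`) showing that blank-content and unknown-role entries are dropped before seeding:

```rust
// Sketch of the seed-history filter: only non-empty user/assistant/system
// messages survive; everything else (e.g. tool roles) is silently dropped.
#[derive(Debug, PartialEq)]
enum Role {
    User,
    Assistant,
    System,
}

#[derive(Debug, PartialEq)]
struct Msg {
    role: Role,
    content: String,
}

fn to_msg(role: &str, content: &str) -> Option<Msg> {
    let content = content.trim();
    if content.is_empty() {
        return None;
    }
    let role = match role {
        "user" => Role::User,
        "assistant" => Role::Assistant,
        "system" => Role::System,
        _ => return None, // unknown roles are filtered out
    };
    Some(Msg { role, content: content.to_string() })
}

fn main() {
    let raw = [("user", "hi"), ("tool", "ignored"), ("assistant", "  ")];
    let seeded: Vec<Msg> = raw.into_iter().filter_map(|(r, c)| to_msg(r, c)).collect();
    // Only the non-empty user message survives the filter.
    assert_eq!(seeded.len(), 1);
    assert_eq!(seeded[0].content, "hi");
    println!("seeded {} message(s)", seeded.len());
}
```

Using `filter_map` keeps the seeding a single pass with no intermediate `Vec`, which is why the real `build_seed_history` above takes the same shape.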
@@ -1,3 +1,6 @@
+use std::path::{Path, PathBuf};
+
+use serde::Deserialize;
 use thiserror::Error;
 
 const DEFAULT_DEEPSEEK_BASE_URL: &str = "https://api.deepseek.com";
@@ -12,20 +15,65 @@ pub struct DeepSeekSettings {
 
 impl DeepSeekSettings {
     pub fn from_env() -> Result<Self, ConfigError> {
-        let api_key = std::env::var("DEEPSEEK_API_KEY")
-            .map_err(|_| ConfigError::MissingEnv("DEEPSEEK_API_KEY"))?;
+        Self::maybe_from_env()?.ok_or(ConfigError::MissingEnv("DEEPSEEK_API_KEY"))
+    }
+
+    pub fn load(config_path: Option<&Path>) -> Result<Option<Self>, ConfigError> {
+        if let Some(path) = config_path {
+            if path.exists() {
+                return Self::from_config_path(path).map(Some);
+            }
+        }
+
+        Self::maybe_from_env()
+    }
+
+    fn maybe_from_env() -> Result<Option<Self>, ConfigError> {
+        let api_key = match std::env::var("DEEPSEEK_API_KEY") {
+            Ok(value) => value,
+            Err(std::env::VarError::NotPresent) => return Ok(None),
+            Err(std::env::VarError::NotUnicode(_)) => {
+                return Err(ConfigError::InvalidEnv("DEEPSEEK_API_KEY"))
+            }
+        };
         let base_url = std::env::var("DEEPSEEK_BASE_URL")
             .unwrap_or_else(|_| DEFAULT_DEEPSEEK_BASE_URL.to_string());
         let model =
             std::env::var("DEEPSEEK_MODEL").unwrap_or_else(|_| DEFAULT_DEEPSEEK_MODEL.to_string());
 
-        if api_key.trim().is_empty() {
+        Ok(Some(Self::new(api_key, base_url, model)?))
+    }
+
+    fn from_config_path(path: &Path) -> Result<Self, ConfigError> {
+        let raw = std::fs::read_to_string(path)
+            .map_err(|err| ConfigError::ConfigRead(path.to_path_buf(), err.to_string()))?;
+        let config: RawDeepSeekSettings = serde_json::from_str(&raw)
+            .map_err(|err| ConfigError::ConfigParse(path.to_path_buf(), err.to_string()))?;
+
+        Self::new(config.api_key, config.base_url, config.model)
+            .map_err(|err| err.with_path(path))
+    }
+
+    fn new(api_key: String, base_url: String, model: String) -> Result<Self, ConfigError> {
+        let api_key = api_key.trim().to_string();
+        let base_url = if base_url.trim().is_empty() {
+            DEFAULT_DEEPSEEK_BASE_URL.to_string()
+        } else {
+            base_url.trim().to_string()
+        };
+        let model = if model.trim().is_empty() {
+            DEFAULT_DEEPSEEK_MODEL.to_string()
+        } else {
+            model.trim().to_string()
+        };
+
+        if api_key.is_empty() {
             return Err(ConfigError::EmptyValue("DEEPSEEK_API_KEY"));
         }
-        if base_url.trim().is_empty() {
+        if base_url.is_empty() {
             return Err(ConfigError::EmptyValue("DEEPSEEK_BASE_URL"));
         }
-        if model.trim().is_empty() {
+        if model.is_empty() {
             return Err(ConfigError::EmptyValue("DEEPSEEK_MODEL"));
         }
 
@@ -37,10 +85,37 @@ impl DeepSeekSettings {
     }
 }
 
+#[derive(Debug, Deserialize)]
+struct RawDeepSeekSettings {
+    #[serde(rename = "apiKey", default)]
+    api_key: String,
+    #[serde(rename = "baseUrl", default)]
+    base_url: String,
+    #[serde(default)]
+    model: String,
+}
+
 #[derive(Debug, Error, Clone, PartialEq, Eq)]
 pub enum ConfigError {
     #[error("missing environment variable: {0}")]
     MissingEnv(&'static str),
     #[error("environment variable must not be empty: {0}")]
     EmptyValue(&'static str),
+    #[error("invalid non-utf8 environment variable: {0}")]
+    InvalidEnv(&'static str),
+    #[error("failed to read DeepSeek config file {0}: {1}")]
+    ConfigRead(PathBuf, String),
+    #[error("invalid DeepSeek config JSON in {0}: {1}")]
+    ConfigParse(PathBuf, String),
+    #[error("DeepSeek config value must not be empty: {0} ({1})")]
+    ConfigValueEmpty(&'static str, PathBuf),
+}
+
+impl ConfigError {
+    fn with_path(self, path: &Path) -> Self {
+        match self {
+            Self::EmptyValue(field) => Self::ConfigValueEmpty(field, path.to_path_buf()),
+            other => other,
+        }
+    }
 }
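The resolution order the diff introduces in `DeepSeekSettings::load` is: an existing config file wins, otherwise the environment is consulted, otherwise the result is `Ok(None)`. A std-only sketch of that precedence (hypothetical `resolve` helper, not the crate API):

```rust
use std::path::Path;

// Mirror of the load() precedence: existing file > env fallback > None.
fn resolve(config_path: Option<&Path>, env_fallback: Option<&str>) -> Option<String> {
    if let Some(path) = config_path {
        if path.exists() {
            // In the real code this parses JSON; here we just read the bytes.
            return std::fs::read_to_string(path).ok();
        }
    }
    env_fallback.map(str::to_string)
}

fn main() {
    let dir = std::env::temp_dir().join("sgclaw-precedence-demo");
    std::fs::create_dir_all(&dir).unwrap();
    let file = dir.join("cfg.json");
    std::fs::write(&file, "from-file").unwrap();

    // File exists: the file wins over the env fallback.
    assert_eq!(resolve(Some(&file), Some("from-env")).as_deref(), Some("from-file"));
    // File missing: fall back to the environment value.
    let missing = dir.join("missing.json");
    assert_eq!(resolve(Some(&missing), Some("from-env")).as_deref(), Some("from-env"));
    // Neither present: None, mirroring Ok(None) from maybe_from_env().
    assert_eq!(resolve(Some(&missing), None), None);
    println!("precedence ok");
}
```

Keeping the file check in `load` (rather than in `from_config_path`) is what lets callers pass a candidate path unconditionally and still fall back cleanly to `DEEPSEEK_API_KEY` and friends.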
src/lib.rs (43 changes)
@@ -1,4 +1,5 @@
 pub mod agent;
+pub mod compat;
 pub mod config;
 pub mod llm;
 pub mod pipe;
@@ -8,18 +9,25 @@ use std::path::PathBuf;
 use std::sync::Arc;
 use std::time::Duration;
 
-use agent::handle_browser_message;
+use agent::{handle_browser_message_with_context, AgentRuntimeContext};
 use pipe::{perform_handshake, BrowserPipeTool, PipeError, StdioTransport, Transport};
 use security::MacPolicy;
 
+fn default_rules_path_from_executable(executable_path: PathBuf) -> PathBuf {
+    executable_path
+        .parent()
+        .map(|dir| dir.join("resources").join("rules.json"))
+        .unwrap_or_else(|| PathBuf::from("resources").join("rules.json"))
+}
+
 fn default_rules_path() -> PathBuf {
-    std::env::current_dir()
-        .unwrap_or_else(|_| PathBuf::from("."))
-        .join("resources")
-        .join("rules.json")
+    std::env::current_exe()
+        .map(default_rules_path_from_executable)
+        .unwrap_or_else(|_| PathBuf::from("resources").join("rules.json"))
 }
 
 pub fn run() -> Result<(), PipeError> {
+    let runtime_context = AgentRuntimeContext::from_process_args(std::env::args_os())?;
     let transport = Arc::new(StdioTransport::new(std::io::stdin(), std::io::stdout()));
     let handshake = perform_handshake(transport.as_ref(), Duration::from_secs(5))?;
     let mac_policy = MacPolicy::load_from_path(default_rules_path())?;
@@ -31,7 +39,12 @@ pub fn run() -> Result<(), PipeError> {
     loop {
         match transport.recv_timeout(Duration::from_secs(3600)) {
             Ok(message) => {
-                handle_browser_message(transport.as_ref(), &browser_tool, message)?;
+                handle_browser_message_with_context(
+                    transport.as_ref(),
+                    &browser_tool,
+                    &runtime_context,
+                    message,
+                )?;
             }
             Err(PipeError::Timeout) => continue,
             Err(PipeError::PipeClosed) => return Ok(()),
@@ -39,3 +52,21 @@ pub fn run() -> Result<(), PipeError> {
         }
     }
 }
+
+#[cfg(test)]
+mod tests {
+    use super::default_rules_path_from_executable;
+    use std::path::PathBuf;
+
+    #[test]
+    fn default_rules_path_uses_executable_directory_instead_of_cwd() {
+        let executable_path = PathBuf::from("/tmp/out/KylinRelease/sgclaw");
+
+        let resolved = default_rules_path_from_executable(executable_path);
+
+        assert_eq!(
+            resolved,
+            PathBuf::from("/tmp/out/KylinRelease/resources/rules.json")
+        );
+    }
+}
@@ -21,17 +21,29 @@ pub struct BrowserPipeTool<T: Transport> {
     transport: Arc<T>,
     mac_policy: MacPolicy,
     session_key: Vec<u8>,
-    next_seq: AtomicU64,
+    next_seq: Arc<AtomicU64>,
     response_timeout: Duration,
 }
 
+impl<T: Transport> Clone for BrowserPipeTool<T> {
+    fn clone(&self) -> Self {
+        Self {
+            transport: self.transport.clone(),
+            mac_policy: self.mac_policy.clone(),
+            session_key: self.session_key.clone(),
+            next_seq: self.next_seq.clone(),
+            response_timeout: self.response_timeout,
+        }
+    }
+}
+
 impl<T: Transport> BrowserPipeTool<T> {
     pub fn new(transport: Arc<T>, mac_policy: MacPolicy, session_key: Vec<u8>) -> Self {
        Self {
            transport,
            mac_policy,
            session_key,
-            next_seq: AtomicU64::new(1),
+            next_seq: Arc::new(AtomicU64::new(1)),
            response_timeout: Duration::from_secs(30),
        }
    }
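The switch from `AtomicU64` to `Arc<AtomicU64>` is what makes the manual `Clone` impl above safe: every clone shares one counter, so sequence numbers stay globally unique. A std-only sketch (hypothetical `SeqHandle`, not the real `BrowserPipeTool`) of why the `Arc` matters:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// Cloning a handle that holds Arc<AtomicU64> shares one counter.
// A plain AtomicU64 field would be duplicated on clone, and each
// clone would then re-issue the same sequence numbers.
#[derive(Clone)]
struct SeqHandle {
    next_seq: Arc<AtomicU64>,
}

impl SeqHandle {
    fn next(&self) -> u64 {
        // fetch_add returns the previous value, so the first call yields 1.
        self.next_seq.fetch_add(1, Ordering::SeqCst)
    }
}

fn main() {
    let a = SeqHandle { next_seq: Arc::new(AtomicU64::new(1)) };
    let b = a.clone();
    assert_eq!(a.next(), 1);
    assert_eq!(b.next(), 2); // the clone continues the shared sequence
    assert_eq!(a.next(), 3);
    println!("shared seq ok");
}
```

This is also why the struct cannot simply `#[derive(Clone)]` once the field is atomic: `AtomicU64` itself is not `Clone`, and even a bitwise copy would be the wrong semantics here.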
@@ -5,7 +5,8 @@ pub mod protocol;
 pub use browser_tool::{BrowserPipeTool, CommandOutput};
 pub use handshake::{perform_handshake, HandshakeResult};
 pub use protocol::{
-    supported_actions, Action, AgentMessage, BrowserMessage, SecurityFields, Timing,
+    supported_actions, Action, AgentMessage, BrowserMessage, ConversationMessage,
+    SecurityFields, Timing,
 };
 
 use std::io::{BufRead, BufReader, Read, Write};
@@ -14,6 +14,14 @@ pub enum BrowserMessage {
     },
     SubmitTask {
         instruction: String,
+        #[serde(default)]
+        conversation_id: String,
+        #[serde(default)]
+        messages: Vec<ConversationMessage>,
+        #[serde(default)]
+        page_url: String,
+        #[serde(default)]
+        page_title: String,
     },
     Response {
         seq: u64,
@@ -26,6 +34,12 @@ pub enum BrowserMessage {
     },
 }
 
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
+pub struct ConversationMessage {
+    pub role: String,
+    pub content: String,
+}
+
 #[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
 #[serde(tag = "type", rename_all = "snake_case")]
 pub enum AgentMessage {
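Because every new `SubmitTask` field is marked `#[serde(default)]`, an older browser that sends only `instruction` still deserializes; a full payload might look like the following (an illustrative sketch, assuming `BrowserMessage` uses the same snake_case `type` tag style shown for `AgentMessage`, and that the field names are not renamed):

```json
{
  "type": "submit_task",
  "instruction": "open Baidu and search for the weather",
  "conversation_id": "c-0001",
  "messages": [
    { "role": "user", "content": "what's the weather tomorrow?" }
  ],
  "page_url": "https://www.baidu.com",
  "page_title": "search page"
}
```

Omitting `conversation_id`, `messages`, `page_url`, or `page_title` yields the empty string or empty vector rather than a deserialization error, which keeps the wire format backward compatible.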
@@ -1,5 +1,6 @@
 mod common;
 
+use std::path::PathBuf;
 use std::sync::Arc;
 use std::time::Duration;
 
@@ -82,3 +83,13 @@ fn browser_tool_rejects_action_when_mac_policy_blocks_it() {
 
     assert!(err.to_string().contains("action is not allowed"));
 }
+
+#[test]
+fn default_rules_allow_zhihu_navigation() {
+    let rules_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"))
+        .join("resources")
+        .join("rules.json");
+    let policy = MacPolicy::load_from_path(rules_path).unwrap();
+
+    policy.validate(&Action::Navigate, "www.zhihu.com").unwrap();
+}
tests/compat_browser_tool_test.rs (new file, 256 lines)

mod common;

use std::sync::Arc;
use std::time::Duration;

use common::MockTransport;
use serde_json::{json, Value};
use sgclaw::security::MacPolicy;
use sgclaw::{
    compat::browser_tool_adapter::ZeroClawBrowserTool,
    pipe::{Action, AgentMessage, BrowserMessage, BrowserPipeTool, Timing},
};
use zeroclaw::tools::Tool;

fn test_policy() -> MacPolicy {
    MacPolicy::from_json_str(
        r#"{
            "version": "1.0",
            "domains": { "allowed": ["www.baidu.com"] },
            "pipe_actions": {
                "allowed": ["click", "type", "navigate", "getText"],
                "blocked": ["eval", "executeJsInPage"]
            }
        }"#,
    )
    .unwrap()
}

fn build_adapter(
    messages: Vec<BrowserMessage>,
) -> (Arc<MockTransport>, ZeroClawBrowserTool<MockTransport>) {
    let transport = Arc::new(MockTransport::new(messages));
    let browser_tool = BrowserPipeTool::new(
        transport.clone(),
        test_policy(),
        vec![1, 2, 3, 4, 5, 6, 7, 8],
    )
    .with_response_timeout(Duration::from_secs(1));

    (transport, ZeroClawBrowserTool::new(browser_tool))
}

#[test]
fn zeroclaw_browser_tool_schema_exposes_only_supported_safe_actions() {
    let (_, tool) = build_adapter(vec![]);
    let schema = tool.parameters_schema();

    assert_eq!(tool.name(), "browser_action");
    assert_eq!(
        schema["properties"]["action"]["enum"],
        json!(["click", "type", "navigate", "getText"])
    );
    assert_eq!(schema["required"], json!(["action", "expected_domain"]));
}

#[tokio::test]
async fn zeroclaw_browser_tool_executes_supported_actions_and_returns_observation_payload() {
    let (transport, tool) = build_adapter(vec![
        BrowserMessage::Response {
            seq: 1,
            success: true,
            data: json!({ "navigated": true }),
            aom_snapshot: vec![],
            timing: Timing {
                queue_ms: 1,
                exec_ms: 11,
            },
        },
        BrowserMessage::Response {
            seq: 2,
            success: true,
            data: json!({ "typed": true }),
            aom_snapshot: vec![],
            timing: Timing {
                queue_ms: 2,
                exec_ms: 12,
            },
        },
        BrowserMessage::Response {
            seq: 3,
            success: true,
            data: json!({ "clicked": true }),
            aom_snapshot: vec![],
            timing: Timing {
                queue_ms: 3,
                exec_ms: 13,
            },
        },
        BrowserMessage::Response {
            seq: 4,
            success: true,
            data: json!({ "text": "天气" }),
            aom_snapshot: vec![json!({
                "role": "textbox",
                "name": "百度一下"
            })],
            timing: Timing {
                queue_ms: 4,
                exec_ms: 14,
            },
        },
    ]);

    let navigate = tool
        .execute(json!({
            "action": "navigate",
            "expected_domain": "www.baidu.com",
            "url": "https://www.baidu.com"
        }))
        .await
        .unwrap();
    let type_text = tool
        .execute(json!({
            "action": "type",
            "expected_domain": "www.baidu.com",
            "selector": "#kw",
            "text": "天气",
            "clear_first": true
        }))
        .await
        .unwrap();
    let click = tool
        .execute(json!({
            "action": "click",
            "expected_domain": "www.baidu.com",
            "selector": "#su"
        }))
        .await
        .unwrap();
    let get_text = tool
        .execute(json!({
            "action": "getText",
            "expected_domain": "www.baidu.com",
            "selector": "#content_left"
        }))
        .await
        .unwrap();

    let navigate_output: Value = serde_json::from_str(&navigate.output).unwrap();
    let get_text_output: Value = serde_json::from_str(&get_text.output).unwrap();
    let sent = transport.sent_messages();

    assert!(navigate.success);
    assert!(type_text.success);
    assert!(click.success);
    assert!(get_text.success);
    assert_eq!(navigate_output["data"], json!({ "navigated": true }));
    assert_eq!(get_text_output["data"], json!({ "text": "天气" }));
    assert_eq!(
        get_text_output["aom_snapshot"],
        json!([{ "role": "textbox", "name": "百度一下" }])
    );
    assert_eq!(
        get_text_output["timing"],
        json!({
            "queue_ms": 4,
            "exec_ms": 14
        })
    );
    assert!(matches!(
        &sent[0],
        AgentMessage::Command { seq, action, .. }
            if *seq == 1 && action == &Action::Navigate
    ));
    assert!(matches!(
        &sent[1],
        AgentMessage::Command { seq, action, .. }
            if *seq == 2 && action == &Action::Type
    ));
    assert!(matches!(
        &sent[2],
        AgentMessage::Command { seq, action, .. }
            if *seq == 3 && action == &Action::Click
    ));
    assert!(matches!(
        &sent[3],
        AgentMessage::Command { seq, action, .. }
            if *seq == 4 && action == &Action::GetText
    ));
}

#[tokio::test]
async fn zeroclaw_browser_tool_keeps_domain_validation_in_mac_policy() {
    let (transport, tool) = build_adapter(vec![]);

    let result = tool
        .execute(json!({
            "action": "navigate",
            "expected_domain": "www.zhihu.com",
            "url": "https://www.zhihu.com"
        }))
        .await
        .unwrap();

    assert!(!result.success);
    assert!(result.output.is_empty());
    assert_eq!(transport.sent_messages().len(), 0);
    assert!(
        result
            .error
            .as_deref()
            .unwrap()
            .contains("domain is not allowed")
    );
}

#[tokio::test]
async fn zeroclaw_browser_tool_rejects_missing_required_action_parameters() {
    let (transport, tool) = build_adapter(vec![]);

    let missing_click_selector = tool
        .execute(json!({
            "action": "click",
            "expected_domain": "www.baidu.com"
        }))
        .await
        .unwrap();
    let missing_text_selector = tool
        .execute(json!({
            "action": "getText",
            "expected_domain": "www.baidu.com"
        }))
        .await
        .unwrap();
    let missing_navigate_url = tool
        .execute(json!({
            "action": "navigate",
            "expected_domain": "www.baidu.com"
        }))
        .await
        .unwrap();

    assert!(!missing_click_selector.success);
    assert!(!missing_text_selector.success);
    assert!(!missing_navigate_url.success);
    assert_eq!(transport.sent_messages().len(), 0);
    assert!(
        missing_click_selector
            .error
            .as_deref()
            .unwrap()
            .contains("click requires selector")
    );
    assert!(
        missing_text_selector
            .error
            .as_deref()
            .unwrap()
            .contains("getText requires selector")
    );
    assert!(
        missing_navigate_url
            .error
            .as_deref()
            .unwrap()
            .contains("navigate requires url")
    );
}
tests/compat_config_test.rs (new file, 98 lines)

use std::fs;
use std::path::Path;
use std::sync::{Mutex, OnceLock};

use sgclaw::compat::config_adapter::{
    build_zeroclaw_config, build_zeroclaw_config_from_settings, zeroclaw_workspace_dir,
};
use sgclaw::config::DeepSeekSettings;
use uuid::Uuid;

fn env_lock() -> &'static Mutex<()> {
    static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
    LOCK.get_or_init(|| Mutex::new(()))
}

#[test]
fn zeroclaw_config_adapter_maps_deepseek_env_to_zeroclaw_config() {
    let _guard = env_lock().lock().unwrap();
    std::env::set_var("DEEPSEEK_API_KEY", "deepseek-test-key");
    std::env::set_var("DEEPSEEK_BASE_URL", "https://api.deepseek.com");
    std::env::set_var("DEEPSEEK_MODEL", "deepseek-chat");

    let config = build_zeroclaw_config(Path::new("/tmp/sgclaw")).unwrap();

    assert_eq!(config.default_provider.as_deref(), Some("deepseek"));
    assert_eq!(config.default_model.as_deref(), Some("deepseek-chat"));
    assert_eq!(config.api_key.as_deref(), Some("deepseek-test-key"));
    assert_eq!(config.api_url.as_deref(), Some("https://api.deepseek.com"));
    assert_eq!(
        config.workspace_dir,
        Path::new("/tmp/sgclaw/.sgclaw-zeroclaw-workspace")
    );
    assert_eq!(
        config.config_path,
        Path::new("/tmp/sgclaw/.sgclaw-zeroclaw-workspace/config.toml")
    );
}

#[test]
fn zeroclaw_config_adapter_uses_deterministic_workspace_dir() {
    let settings = DeepSeekSettings {
        api_key: "key".to_string(),
        base_url: "https://proxy.example.com/v1".to_string(),
        model: "deepseek-reasoner".to_string(),
    };

    let workspace_dir = zeroclaw_workspace_dir(Path::new("/var/lib/sgclaw"));
    let config = build_zeroclaw_config_from_settings(Path::new("/var/lib/sgclaw"), &settings);

    assert_eq!(workspace_dir, Path::new("/var/lib/sgclaw/.sgclaw-zeroclaw-workspace"));
    assert_eq!(config.workspace_dir, workspace_dir);
    assert_eq!(config.default_provider.as_deref(), Some("deepseek"));
    assert_eq!(config.default_model.as_deref(), Some("deepseek-reasoner"));
    assert_eq!(config.api_url.as_deref(), Some("https://proxy.example.com/v1"));
}

#[test]
fn deepseek_settings_reload_from_browser_config_path_after_file_changes() {
    let root = std::env::temp_dir().join(format!("sgclaw-config-{}", Uuid::new_v4()));
    fs::create_dir_all(&root).unwrap();
    let config_path = root.join("sgclaw_config.json");

    fs::write(
        &config_path,
        r#"{
            "apiKey": "sk-first",
            "baseUrl": "",
            "model": ""
        }"#,
    )
    .unwrap();

    let first = DeepSeekSettings::load(Some(config_path.as_path()))
        .unwrap()
        .expect("expected config file to produce settings");
    assert_eq!(first.api_key, "sk-first");
    assert_eq!(first.base_url, "https://api.deepseek.com");
    assert_eq!(first.model, "deepseek-chat");

    fs::write(
        &config_path,
        r#"{
            "apiKey": "sk-second",
            "baseUrl": "https://proxy.example.com/v1",
            "model": "deepseek-reasoner"
        }"#,
    )
    .unwrap();

    let second = DeepSeekSettings::load(Some(config_path.as_path()))
        .unwrap()
        .expect("expected updated config file to produce settings");
    assert_eq!(second.api_key, "sk-second");
    assert_eq!(second.base_url, "https://proxy.example.com/v1");
    assert_eq!(second.model, "deepseek-reasoner");
}
tests/compat_cron_test.rs (new file, 63 lines)

use std::path::{Path, PathBuf};

use chrono::Duration;
use sgclaw::compat::config_adapter::build_zeroclaw_config_from_settings;
use sgclaw::config::DeepSeekSettings;
use zeroclaw::cron::Schedule;

fn workspace_root(label: &str) -> PathBuf {
    let root = std::env::temp_dir().join(format!("{label}-{}", uuid::Uuid::new_v4()));
    std::fs::create_dir_all(&root).unwrap();
    root
}

#[tokio::test]
async fn compat_cron_adapter_creates_lists_and_runs_due_agent_jobs() {
    let settings = DeepSeekSettings {
        api_key: "key".to_string(),
        base_url: "https://api.deepseek.com".to_string(),
        model: "deepseek-chat".to_string(),
    };
    let workspace_root = workspace_root("sgclaw-cron");
    let config = build_zeroclaw_config_from_settings(Path::new(&workspace_root), &settings);

    assert!(config.cron.enabled);
    assert!(!config.cron.catch_up_on_startup);
    assert!(!config.scheduler.enabled);

    let created = sgclaw::compat::cron_adapter::add_agent_job(
        &config,
        Some("search-weather".to_string()),
        Schedule::Every { every_ms: 1 },
        "打开百度搜索天气",
        Some(vec!["browser_action".to_string()]),
    )
    .unwrap();

    let listed = sgclaw::compat::cron_adapter::list_jobs(&config).unwrap();
    assert_eq!(listed.len(), 1);
    assert_eq!(listed[0].id, created.id);
    assert_eq!(listed[0].prompt.as_deref(), Some("打开百度搜索天气"));

    let results = sgclaw::compat::cron_adapter::run_due_jobs(
        &config,
        created.next_run + Duration::milliseconds(1),
        |job| {
            let output = format!("ran {}", job.prompt.as_deref().unwrap_or_default());
            async move { Ok::<String, anyhow::Error>(output) }
        },
    )
    .await
    .unwrap();

    let runs = sgclaw::compat::cron_adapter::list_runs(&config, &created.id, 10).unwrap();
    let updated = sgclaw::compat::cron_adapter::list_jobs(&config).unwrap();

    assert_eq!(results.len(), 1);
    assert!(results[0].success);
    assert_eq!(results[0].job_id, created.id);
    assert_eq!(runs.len(), 1);
    assert_eq!(runs[0].status, "ok");
    assert!(updated[0].last_status.as_deref() == Some("ok"));
    assert!(updated[0].next_run > created.next_run);
}
tests/compat_memory_test.rs (new file, 42 lines)

use std::path::{Path, PathBuf};

use sgclaw::compat::config_adapter::build_zeroclaw_config_from_settings;
use sgclaw::config::DeepSeekSettings;
use zeroclaw::memory::MemoryCategory;

fn workspace_root(label: &str) -> PathBuf {
    let root = std::env::temp_dir().join(format!("{label}-{}", uuid::Uuid::new_v4()));
    std::fs::create_dir_all(&root).unwrap();
    root
}

#[tokio::test]
async fn compat_memory_adapter_uses_workspace_local_sqlite_backend() {
    let settings = DeepSeekSettings {
        api_key: "key".to_string(),
        base_url: "https://api.deepseek.com".to_string(),
        model: "deepseek-chat".to_string(),
    };
    let workspace_root = workspace_root("sgclaw-memory");
    let config = build_zeroclaw_config_from_settings(Path::new(&workspace_root), &settings);

    assert_eq!(config.memory.backend, "sqlite");
    assert_eq!(config.memory.embedding_provider, "none");
    assert!(!config.memory.response_cache_enabled);
    assert!(!config.memory.snapshot_enabled);
    assert!(config.storage.provider.config.provider.is_empty());

    let memory = sgclaw::compat::memory_adapter::build_memory(&config).unwrap();
    memory
        .store(
            "weather",
            "remember today's weather workflow",
            MemoryCategory::Conversation,
            None,
        )
        .await
        .unwrap();

    assert_eq!(memory.count().await.unwrap(), 1);
    assert!(sgclaw::compat::memory_adapter::brain_db_path(&config.workspace_dir).exists());
}
653
tests/compat_runtime_test.rs
Normal file
653
tests/compat_runtime_test.rs
Normal file
@@ -0,0 +1,653 @@
mod common;

use std::fs;
use std::io::{Read, Write};
use std::net::TcpListener;
use std::path::PathBuf;
use std::sync::{Arc, Mutex, OnceLock};
use std::thread;
use std::time::Duration;

use common::MockTransport;
use serde_json::{json, Value};
use sgclaw::agent::{
    handle_browser_message, handle_browser_message_with_context, AgentRuntimeContext,
};
use sgclaw::compat::runtime::{execute_task, CompatTaskContext};
use sgclaw::config::DeepSeekSettings;
use sgclaw::pipe::{
    Action, AgentMessage, BrowserMessage, BrowserPipeTool, ConversationMessage, Timing,
};
use sgclaw::security::MacPolicy;
use uuid::Uuid;

fn env_lock() -> &'static Mutex<()> {
    static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
    LOCK.get_or_init(|| Mutex::new(()))
}

fn test_policy() -> MacPolicy {
    MacPolicy::from_json_str(
        r#"{
            "version": "1.0",
            "domains": { "allowed": ["www.baidu.com"] },
            "pipe_actions": {
                "allowed": ["click", "type", "navigate", "getText"],
                "blocked": []
            }
        }"#,
    )
    .unwrap()
}

fn temp_workspace_root() -> PathBuf {
    let root = std::env::temp_dir().join(format!("sgclaw-compat-runtime-{}", Uuid::new_v4()));
    std::fs::create_dir_all(&root).unwrap();
    root
}

fn write_deepseek_config(root: &PathBuf, api_key: &str, base_url: &str, model: &str) -> PathBuf {
    let config_path = root.join("sgclaw_config.json");
    fs::write(
        &config_path,
        serde_json::to_string_pretty(&json!({
            "apiKey": api_key,
            "baseUrl": base_url,
            "model": model,
        }))
        .unwrap(),
    )
    .unwrap();
    config_path
}

fn start_fake_deepseek_server(
    responses: Vec<Value>,
) -> (String, Arc<Mutex<Vec<Value>>>, thread::JoinHandle<()>) {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    listener.set_nonblocking(true).unwrap();
    let address = format!("http://{}", listener.local_addr().unwrap());
    let requests = Arc::new(Mutex::new(Vec::new()));
    let request_log = requests.clone();

    let handle = thread::spawn(move || {
        for response in responses {
            let deadline = std::time::Instant::now() + Duration::from_secs(5);
            let (mut stream, _) = loop {
                match listener.accept() {
                    Ok(pair) => break pair,
                    Err(err) if err.kind() == std::io::ErrorKind::WouldBlock => {
                        assert!(
                            std::time::Instant::now() < deadline,
                            "timed out waiting for provider request"
                        );
                        thread::sleep(Duration::from_millis(10));
                    }
                    Err(err) => panic!("failed to accept provider request: {err}"),
                }
            };
            let body = read_http_json_body(&mut stream);
            request_log.lock().unwrap().push(body);

            let payload = response.to_string();
            let reply = format!(
                "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
                payload.as_bytes().len(),
                payload
            );
            stream.write_all(reply.as_bytes()).unwrap();
            stream.flush().unwrap();
        }
    });

    (address, requests, handle)
}

fn read_http_json_body(stream: &mut impl Read) -> Value {
    let mut buffer = Vec::new();
    let mut headers_end = None;

    while headers_end.is_none() {
        let mut chunk = [0_u8; 1024];
        let bytes = stream.read(&mut chunk).unwrap();
        assert!(bytes > 0, "unexpected EOF while reading headers");
        buffer.extend_from_slice(&chunk[..bytes]);
        headers_end = buffer.windows(4).position(|window| window == b"\r\n\r\n");
    }

    let headers_end = headers_end.unwrap() + 4;
    let headers = String::from_utf8(buffer[..headers_end].to_vec()).unwrap();
    let content_length = headers
        .lines()
        .find_map(|line| {
            let (name, value) = line.split_once(':')?;
            name.eq_ignore_ascii_case("content-length")
                .then(|| value.trim().parse::<usize>().unwrap())
        })
        .unwrap();

    while buffer.len() < headers_end + content_length {
        let mut chunk = vec![0_u8; content_length];
        let bytes = stream.read(&mut chunk).unwrap();
        assert!(bytes > 0, "unexpected EOF while reading body");
        buffer.extend_from_slice(&chunk[..bytes]);
    }

    serde_json::from_slice(&buffer[headers_end..headers_end + content_length]).unwrap()
}

#[test]
fn compat_runtime_uses_zeroclaw_provider_path_and_executes_browser_actions() {
    let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());

    let first_response = json!({
        "choices": [{
            "message": {
                "content": "",
                "tool_calls": [
                    {
                        "id": "call_1",
                        "type": "function",
                        "function": {
                            "name": "browser_action",
                            "arguments": serde_json::to_string(&json!({
                                "action": "navigate",
                                "expected_domain": "www.baidu.com",
                                "url": "https://www.baidu.com"
                            })).unwrap()
                        }
                    },
                    {
                        "id": "call_2",
                        "type": "function",
                        "function": {
                            "name": "browser_action",
                            "arguments": serde_json::to_string(&json!({
                                "action": "type",
                                "expected_domain": "www.baidu.com",
                                "selector": "#kw",
                                "text": "天气",
                                "clear_first": true
                            })).unwrap()
                        }
                    }
                ]
            }
        }],
        "usage": {
            "prompt_tokens": 12,
            "completion_tokens": 7
        }
    });
    let second_response = json!({
        "choices": [{
            "message": {
                "content": "已通过 ZeroClaw 执行任务: 打开百度搜索天气"
            }
        }],
        "usage": {
            "prompt_tokens": 15,
            "completion_tokens": 8
        }
    });
    let (base_url, requests, server_handle) =
        start_fake_deepseek_server(vec![first_response, second_response]);

    std::env::set_var("DEEPSEEK_API_KEY", "deepseek-test-key");
    std::env::set_var("DEEPSEEK_BASE_URL", base_url);
    std::env::set_var("DEEPSEEK_MODEL", "deepseek-chat");

    let workspace_root = temp_workspace_root();
    let settings = DeepSeekSettings::from_env().unwrap();
    let transport = Arc::new(MockTransport::new(vec![
        BrowserMessage::Response {
            seq: 1,
            success: true,
            data: json!({ "navigated": true }),
            aom_snapshot: vec![],
            timing: Timing {
                queue_ms: 1,
                exec_ms: 10,
            },
        },
        BrowserMessage::Response {
            seq: 2,
            success: true,
            data: json!({ "typed": true }),
            aom_snapshot: vec![],
            timing: Timing {
                queue_ms: 1,
                exec_ms: 11,
            },
        },
    ]));
    let browser_tool = BrowserPipeTool::new(
        transport.clone(),
        test_policy(),
        vec![1, 2, 3, 4, 5, 6, 7, 8],
    )
    .with_response_timeout(Duration::from_secs(1));

    let summary = execute_task(
        transport.as_ref(),
        browser_tool,
        "打开百度搜索天气",
        &CompatTaskContext::default(),
        &workspace_root,
        &settings,
    )
    .unwrap();
    server_handle.join().unwrap();

    let request_bodies = requests.lock().unwrap().clone();
    let sent = transport.sent_messages();

    assert_eq!(summary, "已通过 ZeroClaw 执行任务: 打开百度搜索天气");
    assert_eq!(request_bodies.len(), 2);
    assert_eq!(request_bodies[0]["model"], json!("deepseek-chat"));
    assert_eq!(
        request_bodies[0]["tools"][0]["function"]["name"],
        json!("browser_action")
    );
    assert!(request_bodies[1].to_string().contains("tool_call_id"));
    assert!(sent.iter().any(|message| {
        matches!(
            message,
            AgentMessage::LogEntry { level, message }
                if level == "info" && message == "navigate https://www.baidu.com"
        )
    }));
    assert!(sent.iter().any(|message| {
        matches!(
            message,
            AgentMessage::LogEntry { level, message }
                if level == "info" && message == "type 天气 into #kw"
        )
    }));
    assert!(sent.iter().any(|message| {
        matches!(
            message,
            AgentMessage::Command { action, .. } if action == &Action::Navigate
        )
    }));
    assert!(sent.iter().any(|message| {
        matches!(
            message,
            AgentMessage::Command { action, .. } if action == &Action::Type
        )
    }));
}

#[test]
fn handle_browser_message_prefers_compat_runtime_for_supported_instruction_when_deepseek_is_configured() {
    let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());

    let first_response = json!({
        "choices": [{
            "message": {
                "content": "",
                "tool_calls": [
                    {
                        "id": "call_1",
                        "type": "function",
                        "function": {
                            "name": "browser_action",
                            "arguments": serde_json::to_string(&json!({
                                "action": "navigate",
                                "expected_domain": "www.baidu.com",
                                "url": "https://www.baidu.com"
                            })).unwrap()
                        }
                    },
                    {
                        "id": "call_2",
                        "type": "function",
                        "function": {
                            "name": "browser_action",
                            "arguments": serde_json::to_string(&json!({
                                "action": "type",
                                "expected_domain": "www.baidu.com",
                                "selector": "#kw",
                                "text": "天气",
                                "clear_first": true
                            })).unwrap()
                        }
                    },
                    {
                        "id": "call_3",
                        "type": "function",
                        "function": {
                            "name": "browser_action",
                            "arguments": serde_json::to_string(&json!({
                                "action": "click",
                                "expected_domain": "www.baidu.com",
                                "selector": "#su"
                            })).unwrap()
                        }
                    }
                ]
            }
        }]
    });
    let second_response = json!({
        "choices": [{
            "message": {
                "content": "已通过 DeepSeek 执行任务: 打开百度搜索天气"
            }
        }]
    });
    let (base_url, requests, server_handle) =
        start_fake_deepseek_server(vec![first_response, second_response]);

    std::env::remove_var("DEEPSEEK_API_KEY");
    std::env::remove_var("DEEPSEEK_BASE_URL");
    std::env::remove_var("DEEPSEEK_MODEL");

    let workspace_root = temp_workspace_root();
    let config_path = write_deepseek_config(
        &workspace_root,
        "deepseek-test-key",
        &base_url,
        "deepseek-chat",
    );
    let runtime_context = AgentRuntimeContext::new(Some(config_path), workspace_root.clone());

    let transport = Arc::new(MockTransport::new(vec![
        BrowserMessage::Response {
            seq: 1,
            success: true,
            data: json!({ "navigated": true }),
            aom_snapshot: vec![],
            timing: Timing {
                queue_ms: 1,
                exec_ms: 10,
            },
        },
        BrowserMessage::Response {
            seq: 2,
            success: true,
            data: json!({ "typed": true }),
            aom_snapshot: vec![],
            timing: Timing {
                queue_ms: 1,
                exec_ms: 10,
            },
        },
        BrowserMessage::Response {
            seq: 3,
            success: true,
            data: json!({ "clicked": true }),
            aom_snapshot: vec![],
            timing: Timing {
                queue_ms: 1,
                exec_ms: 10,
            },
        },
    ]));
    let browser_tool = BrowserPipeTool::new(
        transport.clone(),
        test_policy(),
        vec![1, 2, 3, 4, 5, 6, 7, 8],
    )
    .with_response_timeout(Duration::from_secs(1));

    handle_browser_message_with_context(
        transport.as_ref(),
        &browser_tool,
        &runtime_context,
        BrowserMessage::SubmitTask {
            instruction: "打开百度搜索天气".to_string(),
            conversation_id: String::new(),
            messages: vec![],
            page_url: String::new(),
            page_title: String::new(),
        },
    )
    .unwrap();
    server_handle.join().unwrap();

    let sent = transport.sent_messages();
    let request_bodies = requests.lock().unwrap().clone();

    assert!(sent.iter().any(|message| {
        matches!(
            message,
            AgentMessage::TaskComplete { success, summary }
                if *success && summary == "已通过 DeepSeek 执行任务: 打开百度搜索天气"
        )
    }));
    assert!(sent.iter().any(|message| {
        matches!(
            message,
            AgentMessage::LogEntry { level, message }
                if level == "mode" && message == "compat_llm_primary"
        )
    }));
    assert_eq!(request_bodies.len(), 2);
}

#[test]
fn handle_browser_message_falls_back_to_compat_runtime_for_unsupported_instruction() {
    let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());

    let first_response = json!({
        "choices": [{
            "message": {
                "content": "",
                "tool_calls": [{
                    "id": "call_1",
                    "type": "function",
                    "function": {
                        "name": "browser_action",
                        "arguments": serde_json::to_string(&json!({
                            "action": "navigate",
                            "expected_domain": "www.baidu.com",
                            "url": "https://www.baidu.com"
                        })).unwrap()
                    }
                }]
            }
        }]
    });
    let second_response = json!({
        "choices": [{
            "message": {
                "content": "来自 ZeroClaw runtime"
            }
        }]
    });
    let (base_url, requests, server_handle) =
        start_fake_deepseek_server(vec![first_response, second_response]);

    std::env::set_var("DEEPSEEK_API_KEY", "deepseek-test-key");
    std::env::set_var("DEEPSEEK_BASE_URL", base_url);
    std::env::set_var("DEEPSEEK_MODEL", "deepseek-chat");

    let workspace_root = temp_workspace_root();
    let original_dir = std::env::current_dir().unwrap();
    std::env::set_current_dir(&workspace_root).unwrap();

    let transport = Arc::new(MockTransport::new(vec![BrowserMessage::Response {
        seq: 1,
        success: true,
        data: json!({ "navigated": true }),
        aom_snapshot: vec![],
        timing: Timing {
            queue_ms: 1,
            exec_ms: 10,
        },
    }]));
    let browser_tool = BrowserPipeTool::new(
        transport.clone(),
        test_policy(),
        vec![1, 2, 3, 4, 5, 6, 7, 8],
    )
    .with_response_timeout(Duration::from_secs(1));

    handle_browser_message(
        transport.as_ref(),
        &browser_tool,
        BrowserMessage::SubmitTask {
            instruction: "帮我打开百度首页".to_string(),
            conversation_id: String::new(),
            messages: vec![],
            page_url: String::new(),
            page_title: String::new(),
        },
    )
    .unwrap();
    server_handle.join().unwrap();
    std::env::set_current_dir(original_dir).unwrap();

    let sent = transport.sent_messages();
    let request_bodies = requests.lock().unwrap().clone();

    assert!(sent.iter().any(|message| {
        matches!(
            message,
            AgentMessage::TaskComplete { success, summary }
                if *success && summary == "来自 ZeroClaw runtime"
        )
    }));
    assert!(sent.iter().any(|message| {
        matches!(
            message,
            AgentMessage::LogEntry { level, message }
                if level == "mode" && message == "compat_llm_primary"
        )
    }));
    assert_eq!(request_bodies.len(), 2);
}

#[test]
fn handle_browser_message_rejects_non_task_greeting_explicitly() {
    let transport = Arc::new(MockTransport::new(vec![]));
    let browser_tool = BrowserPipeTool::new(
        transport.clone(),
        test_policy(),
        vec![1, 2, 3, 4, 5, 6, 7, 8],
    )
    .with_response_timeout(Duration::from_secs(1));

    handle_browser_message(
        transport.as_ref(),
        &browser_tool,
        BrowserMessage::SubmitTask {
            instruction: "你好".to_string(),
            conversation_id: String::new(),
            messages: vec![],
            page_url: String::new(),
            page_title: String::new(),
        },
    )
    .unwrap();

    let sent = transport.sent_messages();
    assert!(matches!(
        sent.last(),
        Some(AgentMessage::TaskComplete { success, summary })
            if !success && summary.contains("浏览器任务入口")
    ));
}

#[test]
fn compat_runtime_includes_prior_turns_in_follow_up_provider_request() {
    let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());

    let first_response = json!({
        "choices": [{
            "message": {
                "content": "",
                "tool_calls": [{
                    "id": "call_1",
                    "type": "function",
                    "function": {
                        "name": "browser_action",
                        "arguments": serde_json::to_string(&json!({
                            "action": "navigate",
                            "expected_domain": "www.zhihu.com",
                            "url": "https://www.zhihu.com/search?q=天气&type=content"
                        })).unwrap()
                    }
                }]
            }
        }]
    });
    let second_response = json!({
        "choices": [{
            "message": {
                "content": "已在知乎搜索天气"
            }
        }]
    });
    let (base_url, requests, server_handle) =
        start_fake_deepseek_server(vec![first_response, second_response]);

    let workspace_root = temp_workspace_root();
    let settings = DeepSeekSettings {
        api_key: "deepseek-test-key".to_string(),
        base_url,
        model: "deepseek-chat".to_string(),
    };
    let transport = Arc::new(MockTransport::new(vec![BrowserMessage::Response {
        seq: 1,
        success: true,
        data: json!({ "navigated": true }),
        aom_snapshot: vec![],
        timing: Timing {
            queue_ms: 1,
            exec_ms: 10,
        },
    }]));
    let browser_tool = BrowserPipeTool::new(
        transport.clone(),
        test_policy(),
        vec![1, 2, 3, 4, 5, 6, 7, 8],
    )
    .with_response_timeout(Duration::from_secs(1));

    let task_context = CompatTaskContext {
        conversation_id: Some("conversation-1".to_string()),
        messages: vec![
            ConversationMessage {
                role: "user".to_string(),
                content: "打开百度搜索天气".to_string(),
            },
            ConversationMessage {
                role: "assistant".to_string(),
                content: "已在百度搜索天气".to_string(),
            },
        ],
        page_url: Some("https://www.zhihu.com/".to_string()),
        page_title: Some("知乎".to_string()),
    };

    let summary = execute_task(
        transport.as_ref(),
        browser_tool,
        "打开知乎搜索天气",
        &task_context,
        &workspace_root,
        &settings,
    )
    .unwrap();
    server_handle.join().unwrap();

    let request_bodies = requests.lock().unwrap().clone();
    let first_request_messages = request_bodies[0]["messages"]
        .as_array()
        .cloned()
        .unwrap_or_default();

    assert_eq!(summary, "已在知乎搜索天气");
    assert!(first_request_messages.iter().any(|message| {
        message["role"] == json!("user")
            && message["content"] == json!("打开百度搜索天气")
    }));
    assert!(first_request_messages.iter().any(|message| {
        message["role"] == json!("assistant")
            && message["content"] == json!("已在百度搜索天气")
    }));
}
@@ -30,6 +30,24 @@ fn planner_supports_baidu_search_variant_with_conjunction() {
     assert_eq!(plan.steps[1].params["text"], "电网调度");
 }
 
+#[test]
+fn planner_supports_zhihu_search_instruction_with_direct_search_url() {
+    let plan = plan_instruction("打开知乎搜索天气").unwrap();
+
+    assert_eq!(plan.summary, "已在知乎搜索天气");
+    assert_eq!(plan.steps.len(), 1);
+    assert_eq!(plan.steps[0].action, Action::Navigate);
+    assert_eq!(
+        plan.steps[0].params,
+        json!({ "url": "https://www.zhihu.com/search?type=content&q=%E5%A4%A9%E6%B0%94" })
+    );
+    assert_eq!(plan.steps[0].expected_domain, "www.zhihu.com");
+    assert_eq!(
+        plan.steps[0].log_message,
+        "navigate https://www.zhihu.com/search?type=content&q=%E5%A4%A9%E6%B0%94"
+    );
+}
+
 #[test]
 fn planner_rejects_unrelated_instruction() {
     let err = plan_instruction("打开谷歌搜索天气").unwrap_err();
@@ -68,45 +68,54 @@ fn submit_task_sends_three_commands_and_finishes_with_task_complete() {
         &tool,
         BrowserMessage::SubmitTask {
             instruction: "打开百度搜索天气".to_string(),
+            conversation_id: String::new(),
+            messages: vec![],
+            page_url: String::new(),
+            page_title: String::new(),
         },
     )
     .unwrap();
 
     let sent = transport.sent_messages();
 
-    assert_eq!(sent.len(), 7);
+    assert_eq!(sent.len(), 8);
     assert!(matches!(
         &sent[0],
+        AgentMessage::LogEntry { level, message }
+            if level == "mode" && message == "deterministic_planner"
+    ));
+    assert!(matches!(
+        &sent[1],
         AgentMessage::LogEntry { level, message }
             if level == "info" && message == "navigate https://www.baidu.com"
     ));
     assert!(matches!(
-        &sent[1],
+        &sent[2],
         AgentMessage::Command { seq, action, .. }
             if *seq == 1 && action == &Action::Navigate
     ));
     assert!(matches!(
-        &sent[2],
+        &sent[3],
         AgentMessage::LogEntry { level, message }
             if level == "info" && message == "type 天气 into #kw"
     ));
     assert!(matches!(
-        &sent[3],
+        &sent[4],
         AgentMessage::Command { seq, action, .. }
             if *seq == 2 && action == &Action::Type
     ));
     assert!(matches!(
-        &sent[4],
+        &sent[5],
         AgentMessage::LogEntry { level, message }
             if level == "info" && message == "click #su"
     ));
     assert!(matches!(
-        &sent[5],
+        &sent[6],
         AgentMessage::Command { seq, action, .. }
             if *seq == 3 && action == &Action::Click
     ));
     assert!(matches!(
-        &sent[6],
+        &sent[7],
         AgentMessage::TaskComplete { success, summary }
             if *success && summary == "已在百度搜索天气"
     ));
@@ -8,13 +8,20 @@ type HmacSha256 = Hmac<Sha256>;
 
 #[test]
 fn browser_submit_task_round_trip_uses_task_wire_format() {
-    let raw = r#"{"type":"submit_task","instruction":"打开百度并搜索今日汇率"}"#;
+    let raw = r#"{"type":"submit_task","instruction":"打开百度并搜索今日汇率","conversation_id":"conversation-1","messages":[{"role":"assistant","content":"上一轮完成"}],"page_url":"https://www.baidu.com/","page_title":"百度一下"}"#;
     let message: BrowserMessage = serde_json::from_str(raw).unwrap();
 
     assert_eq!(
         message,
         BrowserMessage::SubmitTask {
             instruction: "打开百度并搜索今日汇率".to_string(),
+            conversation_id: "conversation-1".to_string(),
+            messages: vec![sgclaw::pipe::ConversationMessage {
+                role: "assistant".to_string(),
+                content: "上一轮完成".to_string(),
+            }],
+            page_url: "https://www.baidu.com/".to_string(),
+            page_title: "百度一下".to_string(),
         }
     );
     assert_eq!(serde_json::to_string(&message).unwrap(), raw);
third_party/zeroclaw/.cargo/audit.toml (vendored, new file, 12 lines)
@@ -0,0 +1,12 @@
# cargo-audit configuration
# https://rustsec.org/

[advisories]
ignore = [
    # wasmtime vulns via extism 1.13.0 — no upstream fix; plugins feature-gated
    "RUSTSEC-2026-0006", # wasmtime f64.copysign segfault on x86-64
    "RUSTSEC-2026-0020", # WASI guest-controlled resource exhaustion
    "RUSTSEC-2026-0021", # WASI http fields panic
    # instant crate unmaintained — transitive dep via nostr; no upstream fix
    "RUSTSEC-2024-0384",
]
third_party/zeroclaw/.cargo/config.toml (vendored, new file, 13 lines)
@@ -0,0 +1,13 @@
[target.x86_64-unknown-linux-musl]
rustflags = ["-C", "link-arg=-static"]

[target.aarch64-unknown-linux-musl]
rustflags = ["-C", "link-arg=-static", "-C", "link-arg=-Wl,-z,stack-size=8388608"]

# Android targets (NDK toolchain)
[target.armv7-linux-androideabi]
linker = "armv7a-linux-androideabi21-clang"

[target.aarch64-linux-android]
linker = "aarch64-linux-android21-clang"
rustflags = ["-C", "link-arg=-Wl,-z,stack-size=8388608"]
third_party/zeroclaw/.claude/skills/github-issue/SKILL.md (vendored, new file, 133 lines)
@@ -0,0 +1,133 @@
# Skill: github-issue

File a structured GitHub issue (bug report or feature request) for ZeroClaw interactively from Claude Code.

## When to Use

Trigger when the user wants to file a GitHub issue, report a bug, or request a feature for ZeroClaw. Keywords: "file issue", "report bug", "feature request", "open issue", "create issue", "github issue".

## Instructions

You are filing a GitHub issue against the ZeroClaw repository using structured issue forms. Follow this workflow exactly.

### Step 1: Detect Issue Type and Read the Template

Determine from the user's message whether this is a **bug report** or **feature request**.

- If unclear, use AskUserQuestion to ask: "Is this a bug report or a feature request?"

Then read the corresponding issue template to understand the required fields:

- Bug report: `.github/ISSUE_TEMPLATE/bug_report.yml`
- Feature request: `.github/ISSUE_TEMPLATE/feature_request.yml`

Parse the YAML to extract:

- The `title` prefix (e.g. `[Bug]: `, `[Feature]: `)
- The `labels` array
- Each field in the `body` array: its `type` (dropdown, textarea, input, checkboxes, markdown), `id`, `attributes.label`, `attributes.options` (for dropdowns), `attributes.description`, `attributes.placeholder`, and `validations.required`

This is the source of truth for what fields exist, what they're called, what options are available, and which are required. Do not assume or hardcode any field names or options — always derive them from the template file.

### Step 2: Auto-Gather Context
|
||||||
|
|
||||||
|
Before asking the user anything, silently gather environment and repo context:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Git context
|
||||||
|
git log --oneline -5
|
||||||
|
git status --short
|
||||||
|
git diff --stat HEAD~1 2>/dev/null
|
||||||
|
|
||||||
|
# For bug reports — environment detection
|
||||||
|
uname -s -r -m # OS info
|
||||||
|
sw_vers 2>/dev/null # macOS version
|
||||||
|
rustc --version 2>/dev/null # Rust version
|
||||||
|
cargo metadata --format-version=1 --no-deps 2>/dev/null | jq -r '.packages[] | select(.name=="zeroclaw") | .version' 2>/dev/null # ZeroClaw version
|
||||||
|
git rev-parse --short HEAD # commit SHA fallback
|
||||||
|
```
|
||||||
|
|
||||||
|
Also read recently changed files to infer the affected component and architecture impact.
### Step 3: Pre-Fill and Present the Form

Using the parsed template fields and gathered context, draft values for ALL fields from the template:

- **dropdown** fields: select the most likely option from `attributes.options` based on context. For dropdowns where you're uncertain, note your best guess and flag it for the user.
- **textarea** fields: draft content based on the user's description, git context, and the field's `attributes.description`/`attributes.placeholder` for guidance on what's expected.
- **input** fields: fill with auto-detected values (versions, OS) or draft from user context.
- **checkboxes** fields: auto-check all items (the skill itself ensures compliance with the stated checks).
- **markdown** fields: skip these — they're informational headers, not form inputs.
- **optional fields** (where `validations.required` is false): fill if there's enough context, otherwise note "(optional — not enough context to fill)".

Present the complete draft to the user in a clean readable format:

```
## Issue Draft: [Bug]: <title> / [Feature]: <title>
**Labels**: <from template>

### <Field Label>
<proposed value or selection>

### <Field Label>
<proposed value>
...
```

Use AskUserQuestion to ask the user to review:

- "Here's the pre-filled issue. Please review and let me know what to change, or say 'submit' to file it."

If the user requests changes, update the draft and re-present. Iterate until the user approves.
### Step 4: Scope Guard

Before final submission, analyze the collected content for scope creep:

- Does the bug report describe multiple independent defects?
- Does the feature request bundle unrelated changes?

If multi-concept issues are detected:

1. Inform the user: "This issue appears to cover multiple distinct topics. Focused, single-concept issues are strongly preferred and more likely to be accepted."
2. Break down the distinct groups found.
3. Offer to file separate issues for each group, reusing shared context (environment, etc.).
4. Let the user decide: proceed as-is or split.

### Step 5: Construct Issue Body

Build the issue body as markdown sections matching GitHub's form-field rendering format. GitHub renders form-submitted issues with `### <Field Label>` sections, so use that exact structure.

For each non-markdown field from the template, in order:

```markdown
### <attributes.label>

<value>
```

For optional fields with no content, use `_No response_` as the value (this matches GitHub's native rendering for empty optional fields).

For checkbox fields, render each option as:

```markdown
- [X] <option label text>
```
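A minimal sketch of this per-field rendering rule (the helper name is illustrative; the `_No response_` default for empty optional fields is the rule stated above):

```shell
# Render one form field as a GitHub-style markdown section.
# Empty values fall back to "_No response_", matching GitHub's
# rendering of empty optional fields.
render_field() {
  local label="$1" value="$2"
  printf '### %s\n\n%s\n\n' "$label" "${value:-_No response_}"
}
```

Concatenating `render_field` output for every non-markdown field, in template order, yields the full issue body.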
### Step 6: Final Preview and Submit

Show the final constructed issue (title + labels + full body) for one last confirmation.

Then submit using a HEREDOC for the body to preserve formatting:

```bash
gh issue create --title "<title prefix><user title>" --label "<label1>,<label2>" --body "$(cat <<'ISSUE_EOF'
<body content>
ISSUE_EOF
)"
```

Return the resulting issue URL to the user.

### Important Rules

- **Always read the template file** — never assume field names, options, or structure. The templates are the source of truth and may change over time.
- **Never include personal/sensitive data** in the issue. Redact secrets, tokens, emails, real names.
- **Use neutral project-scoped placeholders** per ZeroClaw's privacy contract.
- **One concept per issue** — enforce the scope guard.
- **Auto-detect, don't guess** — use real command output for environment fields.
- **Match GitHub's rendering** — use `### Field Label` sections so issues look consistent whether filed via web UI or this skill.
209
third_party/zeroclaw/.claude/skills/github-pr/SKILL.md
vendored
Normal file
@@ -0,0 +1,209 @@
# Skill: github-pr

Open or update a GitHub Pull Request for ZeroClaw. Handles creating new PRs with a fully filled-out template body, and updating existing PRs (title, body sections, labels, comments). Use this skill whenever the user wants to open a PR, create a pull request, update a PR, edit a PR description, add labels to a PR, or sync a PR after new commits — even if they don't say "PR" explicitly (e.g., "submit this for review", "push and open for merge").

## Instructions

This skill supports two modes: **Open** (create a new PR) and **Update** (edit an existing PR). Detect the mode from context — if there's already an open PR for the current branch and the user didn't say "open a new PR", default to update mode.

The PR template at `.github/pull_request_template.md` is the source of truth for the PR body structure. Read it every time — never assume or hardcode section names, fields, or their order. The template may change over time and the skill should always reflect its current state.

---

## Shared: Read the PR Template

Before opening or updating a PR body, read `.github/pull_request_template.md` and parse it to understand:

- The `## ` section headers (these are the top-level sections of the PR body)
- The bullet points, fields, and prompts within each section
- Which sections are marked `(required)` vs optional/recommended
- Any inline formatting conventions (backtick options, Yes/No fields, etc.)

This parsed structure drives how you fill, present, and edit the PR body.

---

## Mode: Open a New PR

### Step 1: Gather Context

Collect information to pre-fill the PR body. Run these in parallel:

```bash
# Branch and commit context
git branch --show-current
git log master..HEAD --oneline
git diff master...HEAD --stat

# Check if branch is pushed
git rev-parse --abbrev-ref --symbolic-full-name @{u} 2>/dev/null

# Environment (for validation evidence)
rustc --version 2>/dev/null
```

Also review the changed files and commit messages to understand the nature of the change (bug fix, feature, refactor, docs, chore, etc.) and which subsystems are affected.
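The path-based part of that inference can be sketched as a heuristic (the function name and path patterns are illustrative assumptions about a conventional repo layout; real classification should also weigh the commit messages):

```shell
# Guess the kind of change from a changed file's path.
guess_kind() {
  case "$1" in
    docs/*|*.md)      echo docs ;;  # documentation-only change
    tests/*|*_test.*) echo test ;;  # test-only change
    .github/*|ci/*)   echo ci ;;    # CI / workflow change
    *)                echo code ;;  # default: source change
  esac
}

# Example: classify everything changed relative to master
# git diff master...HEAD --name-only | while read -r f; do guess_kind "$f"; done
```

The dominant kind across changed files suggests the conventional-commit prefix for the PR title.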
### Step 2: Pre-Fill the Template

Using the parsed template structure and gathered context, draft a complete PR body:

- For each `## ` section from the template, fill in the bullet points and fields based on context from the commits, diff, and changed files.
- Use the field descriptions and placeholder text in the template as guidance for what each field expects.
- For Yes/No fields, infer from the diff (e.g., if no files in `src/security/` changed, security impact is likely all No).
- For required sections, always provide a substantive answer. For optional sections, fill if there's enough context, otherwise leave the template prompts in place.
- Draft a conventional commit-style PR title based on the changes (e.g., `feat(provider): add retry budget override`, `fix(channel): handle disconnect gracefully`, `chore(ci): update workflow targets`).

### Step 3: Present Draft for Review

Show the user the complete draft:

```
## PR Draft: <title>
**Branch**: <head> -> master
**Labels**: <suggested labels>

<full body with all sections filled>
```

Ask the user to review: "Here's the pre-filled PR. Review and let me know what to change, or say 'submit' to open it."

Iterate on changes until the user approves.

### Step 4: Push and Create

1. If the branch isn't pushed yet, push it:

```bash
git push -u origin <branch>
```

2. Create the PR using a HEREDOC for the body:

```bash
gh pr create --title "<title>" --base master --body "$(cat <<'PR_BODY_EOF'
<full body>
PR_BODY_EOF
)"
```

3. If labels were agreed on, add them:

```bash
gh pr edit <number> --add-label "<label1>,<label2>"
```

4. Return the PR URL to the user.

---

## Mode: Update an Existing PR

### Step 1: Identify the PR

1. **If a PR number or URL is given**: use that directly.
2. **If on a branch with an open PR**: auto-detect:

```bash
gh pr view --json number,title,body,labels,state,author,url,headRefName 2>/dev/null
```

3. **If neither**: ask the user for the PR number.

Verify the current user is the PR author:

```bash
CURRENT_USER=$(gh api user --jq '.login')
PR_AUTHOR=$(gh pr view <number> --json author --jq '.author.login')
```

If not the author, stop and inform the user.
### Step 2: Fetch Current State

```bash
gh pr view <number> --json number,title,body,labels,state,baseRefName,headRefName,url,author,reviewDecision,statusCheckRollup,commits
```

Display a summary:

```
## PR #<number>: <title>
**State**: <open/closed/merged>
**Branch**: <head> -> <base>
**Labels**: <label list>
**Checks**: <pass/fail/pending>
**URL**: <url>
```

### Step 3: Determine What to Update

Support these operations:

| Operation | How |
|---|---|
| **Edit title** | `gh pr edit <number> --title "<new title>"` |
| **Edit full body** | `gh pr edit <number> --body "<new body>"` |
| **Add labels** | `gh pr edit <number> --add-label "<label1>,<label2>"` |
| **Remove labels** | `gh pr edit <number> --remove-label "<label1>"` |
| **Edit specific section** | Parse body by `## ` headers, modify target section, re-submit full body |
| **Add a comment** | `gh pr comment <number> --body "<comment>"` |
| **Link an issue** | Edit the linked-issue section in the body |
| **Smart update after new commits** | Re-analyze and suggest section updates |

### Step 4: Handle Body Section Edits

When editing a specific section:

1. Parse the current PR body into sections by `## ` headers
2. Match the user's request to the corresponding section from the template
3. Show the current content of that section and the proposed replacement
4. On confirmation, modify only that section, reconstruct the full body, and submit
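The parse-and-replace step can be sketched with `awk` (a hedged sketch under the assumption stated above: sections are delimited solely by lines starting with `## `, and the target section's content is replaced wholesale; the helper name is illustrative):

```shell
# Replace the content of one "## <name>" section in a PR body.
# Reads the body on stdin, writes the updated body to stdout.
replace_section() {
  local name="$1" repl="$2"
  awk -v name="$name" -v repl="$repl" '
    /^## / {
      if ($0 == "## " name) {   # found the target section header
        insec = 1
        print; print ""; print repl; print ""
        next
      }
      insec = 0                 # any other header ends the skip
    }
    insec { next }              # drop the old section content
    { print }                   # pass everything else through
  '
}
```

Only the matched section changes; every other line passes through untouched, which matches the rule to preserve the rest of the body exactly.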
### Step 5: Smart Update After New Commits

When the user wants to sync the PR description after pushing new changes:

1. Identify new commits:

```bash
gh pr view <number> --json commits --jq '.commits[].messageHeadline'
git log <base>..<head> --oneline
git diff <base>...<head> --stat
```

2. Re-read the PR template. Analyze which sections are now stale based on the new changes — use the template's section names and field descriptions to identify what needs updating rather than relying on hardcoded assumptions.

3. Present proposed updates section-by-section and confirm before applying.

### Step 6: Apply Updates

For title/label changes, use direct `gh pr edit` flags.

For body edits, use a HEREDOC:

```bash
gh pr edit <number> --body "$(cat <<'PR_BODY_EOF'
<full updated body>
PR_BODY_EOF
)"
```

For comments:

```bash
gh pr comment <number> --body "$(cat <<'COMMENT_EOF'
<comment text>
COMMENT_EOF
)"
```

### Step 7: Confirm

Fetch and display the updated state:

```bash
gh pr view <number> --json number,title,labels,url
```

Return the PR URL.

---

## Important Rules

- **Always read `.github/pull_request_template.md`** before filling or editing a PR body. Never assume section names, fields, or structure — derive everything from the template. It's the source of truth and may change.
- **For updates, only modify requested sections.** Preserve everything else exactly as-is.
- **Always show diffs before applying body edits.** Present current vs proposed for each changed section.
- **Never include personal/sensitive data** in PR content per ZeroClaw's privacy contract.
- **For label changes**, only use labels that exist in the repository. Check with `gh label list` if unsure.
- **Fetch the latest body before editing** to avoid clobbering concurrent changes.
- **For new PRs**, push the branch before creating (with `-u` to set upstream tracking).
202
third_party/zeroclaw/.claude/skills/skill-creator/LICENSE.txt
vendored
Normal file
@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
485
third_party/zeroclaw/.claude/skills/skill-creator/SKILL.md
vendored
Normal file
@@ -0,0 +1,485 @@
---
name: skill-creator
description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
---

# Skill Creator

A skill for creating new skills and iteratively improving them.

At a high level, the process of creating a skill goes like this:

- Decide what you want the skill to do and roughly how it should do it
- Write a draft of the skill
- Create a few test prompts and run claude-with-access-to-the-skill on them
- Help the user evaluate the results both qualitatively and quantitatively
- While the runs happen in the background, draft some quantitative evals if there aren't any (if some already exist, use them as-is or modify them if something needs to change). Then explain them to the user (or, if they already existed, explain the existing ones)
- Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics
- Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
- Repeat until you're satisfied
- Expand the test set and try again at larger scale

Your job when using this skill is to figure out where the user is in this process, then jump in and help them progress through these stages. For instance, maybe they say "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.

On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.

Of course, you should always be flexible, and if the user says "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.

Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.

Cool? Cool.

## Communicating with the user

The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you have? the trend only started very recently), the power of Claude is now inspiring plumbers to open up their terminals, and parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.

So pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:

- "evaluation" and "benchmark" are borderline, but OK
- for "JSON" and "assertion", look for serious cues that the user knows what those things are before using them unexplained

If you're unsure whether the user will get a term, it's fine to briefly clarify it with a short definition.
---

## Creating a skill

### Capture Intent

Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill the gaps, and should confirm before proceeding to the next step.

1. What should this skill enable Claude to do?
2. When should this skill trigger? (what user phrases/contexts)
3. What's the expected output format?
4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.

### Interview and Research

Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.

Check the available MCPs. If any are useful for research (searching docs, finding similar skills, looking up best practices), run the research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce the burden on the user.

### Write the SKILL.md

Based on the user interview, fill in these components:

- **name**: Skill identifier
- **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
- **compatibility**: Required tools, dependencies (optional, rarely needed)
- **the rest of the skill :)**
|
||||||
|
|
||||||
|
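As a sketch, frontmatter for the dashboard example above might look like this (the `name` value here is hypothetical):

```yaml
---
name: internal-dashboard
description: >
  How to build a simple fast dashboard to display internal Anthropic data.
  Make sure to use this skill whenever the user mentions dashboards, data
  visualization, internal metrics, or wants to display any kind of company
  data, even if they don't explicitly ask for a "dashboard".
---
```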
### Skill Writing Guide

#### Anatomy of a Skill

```
skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter (name, description required)
│   └── Markdown instructions
└── Bundled Resources (optional)
    ├── scripts/ - Executable code for deterministic/repetitive tasks
    ├── references/ - Docs loaded into context as needed
    └── assets/ - Files used in output (templates, icons, fonts)
```
#### Progressive Disclosure

Skills use a three-level loading system:

1. **Metadata** (name + description) - Always in context (~100 words)
2. **SKILL.md body** - In context whenever the skill triggers (<500 lines ideal)
3. **Bundled resources** - As needed (unlimited; scripts can execute without being loaded)

These counts are approximate; feel free to go longer if needed.
**Key patterns:**

- Keep SKILL.md under 500 lines; if you're approaching this limit, add another layer of hierarchy, with clear pointers about where the model using the skill should go next to follow up.
- Reference files clearly from SKILL.md, with guidance on when to read them
- For large reference files (>300 lines), include a table of contents
**Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:

```
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
    ├── aws.md
    ├── gcp.md
    └── azure.md
```

Claude reads only the relevant reference file.
#### Principle of Lack of Surprise

This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. If a skill's contents were described to the user, they should not be surprised by its intent. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like "roleplay as an XYZ" are OK though.
#### Writing Patterns

Prefer the imperative form in instructions.

**Defining output formats** - You can do it like this:

```markdown
## Report structure

ALWAYS use this exact template:

# [Title]
## Executive summary
## Key findings
## Recommendations
```
**Examples pattern** - It's useful to include examples. You can format them like this (but if "Input" and "Output" appear in the examples themselves, you might want to deviate a little):

```markdown
## Commit message format

**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication
```
### Writing Style

Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind, and keep the skill general rather than narrowly fitted to specific examples. Start by writing a draft, then look at it with fresh eyes and improve it.
### Test Cases

After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.

Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.
```json
{
  "skill_name": "example-skill",
  "evals": [
    {
      "id": 1,
      "prompt": "User's task prompt",
      "expected_output": "Description of expected result",
      "files": []
    }
  ]
}
```

See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).
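If it helps to script this step, here is a sketch of generating the initial file; the `write_evals` helper and the demo path are hypothetical, but the JSON shape matches the schema above:

```python
import json
from pathlib import Path

def write_evals(skill_name, prompts, path="evals/evals.json"):
    """Write the initial evals.json with prompts only; assertions come later."""
    evals = [
        {"id": i, "prompt": p, "expected_output": "", "files": []}
        for i, p in enumerate(prompts, start=1)
    ]
    data = {"skill_name": skill_name, "evals": evals}
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    Path(path).write_text(json.dumps(data, indent=2))
    return data

data = write_evals(
    "example-skill",
    ["Convert report.md to a .docx"],
    path="/tmp/evals-demo/evals.json",
)
```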
## Running and evaluating test cases

This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.

Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.), and within each iteration, give each test case its own directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go.
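As a concrete sketch of that layout (the skill name, eval index, and `/tmp` base path are hypothetical; in practice directories are created lazily as runs are spawned):

```python
from pathlib import Path

# One eval directory under one iteration, with a run directory per configuration.
workspace = Path("/tmp/my-skill-workspace/iteration-1/eval-0")
for config in ("with_skill", "without_skill"):
    (workspace / config / "outputs").mkdir(parents=True, exist_ok=True)
```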
### Step 1: Spawn all runs (with-skill AND baseline) in the same turn

For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.

**With-skill run:**

```
Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
```
**Baseline run** (same prompt, but the baseline depends on context):

- **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
- **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.

Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.

```json
{
  "eval_id": 0,
  "eval_name": "descriptive-name-here",
  "prompt": "The user's task prompt",
  "assertions": []
}
```
### Step 2: While runs are in progress, draft assertions

Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.

Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer, so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.

Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
### Step 3: As runs complete, capture timing data

When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:

```json
{
  "total_tokens": 84852,
  "duration_ms": 23332,
  "total_duration_seconds": 23.3
}
```

This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
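A sketch of a helper that persists one notification's timing data; the `save_timing` name, demo path, and the rounding of the derived seconds field are assumptions, not part of the official tooling:

```python
import json
from pathlib import Path

def save_timing(run_dir, total_tokens, duration_ms):
    """Persist timing data from a task notification before it is lost."""
    timing = {
        "total_tokens": total_tokens,
        "duration_ms": duration_ms,
        # Derived field: seconds rounded to one decimal, as in the example above.
        "total_duration_seconds": round(duration_ms / 1000, 1),
    }
    run_dir = Path(run_dir)
    run_dir.mkdir(parents=True, exist_ok=True)
    (run_dir / "timing.json").write_text(json.dumps(timing, indent=2))
    return timing

t = save_timing("/tmp/demo-eval-0/with_skill", total_tokens=84852, duration_ms=23332)
```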
### Step 4: Grade, aggregate, and launch the viewer

Once all runs are done:

1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory:

   ```bash
   python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
   ```

   This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see `references/schemas.md` for the exact schema the viewer expects.

   Put each with_skill version before its baseline counterpart.
3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
4. **Launch the viewer** with both qualitative outputs and quantitative data:

   ```bash
   nohup python <skill-creator-path>/eval-viewer/generate_review.py \
     <workspace>/iteration-N \
     --skill-name "my-skill" \
     --benchmark <workspace>/iteration-N/benchmark.json \
     > /dev/null 2>&1 &
   VIEWER_PID=$!
   ```

   For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.

   **Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up.

   Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."
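Since the viewer depends on those exact grading.json field names (step 1 above), a quick validation script can catch drift before launch. This sketch (the helper name and demo path are hypothetical) flags expectations that use the wrong variants:

```python
import json
from pathlib import Path

REQUIRED_FIELDS = {"text", "passed", "evidence"}

def check_grading(path):
    """Verify every expectation in a grading.json uses the viewer's field names."""
    grading = json.loads(Path(path).read_text())
    problems = []
    for i, exp in enumerate(grading.get("expectations", [])):
        missing = REQUIRED_FIELDS - exp.keys()
        if missing:
            problems.append(f"expectation {i} missing: {sorted(missing)}")
    return problems

# Demo: one well-formed expectation, one using the wrong field variants.
demo = {"expectations": [
    {"text": "chart has axis labels", "passed": True, "evidence": "labels present"},
    {"name": "wrong field", "met": False},
]}
Path("/tmp/grading-demo.json").write_text(json.dumps(demo))
problems = check_grading("/tmp/grading-demo.json")
```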
### What the user sees in the viewer

The "Outputs" tab shows one test case at a time:

- **Prompt**: the task that was given
- **Output**: the files the skill produced, rendered inline where possible
- **Previous Output** (iteration 2+): collapsed section showing last iteration's output
- **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
- **Feedback**: a textbox that auto-saves as they type
- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox

The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.

Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews", which saves all feedback to `feedback.json`.
### Step 5: Read the feedback

When the user tells you they're done, read `feedback.json`:

```json
{
  "reviews": [
    {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
    {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
    {"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
  ],
  "status": "complete"
}
```

Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
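A sketch of filtering that file down to the reviews that need action; the helper name and demo path are hypothetical, but the JSON shape matches the example above:

```python
import json
from pathlib import Path

def actionable_feedback(path):
    """Return only the reviews with non-empty feedback: the ones to fix."""
    data = json.loads(Path(path).read_text())
    return [r for r in data["reviews"] if r["feedback"].strip()]

demo = {
    "reviews": [
        {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
        {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
    ],
    "status": "complete",
}
Path("/tmp/feedback-demo.json").write_text(json.dumps(demo))
todo = actionable_feedback("/tmp/feedback-demo.json")
```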
Kill the viewer server when you're done with it:

```bash
kill $VIEWER_PID 2>/dev/null
```

---
## Improving the skill

This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.

### How to think about improvements

1. **Generalize from the feedback.** The big picture here is that we're trying to create skills that can be used a million times (maybe literally, maybe even more, who knows) across many different prompts. You and the user are iterating on only a few examples over and over again because it helps move faster: the user knows these examples inside and out, and it's quick for them to assess new outputs. But if the skill you and the user are codeveloping works only for those examples, it's useless. Rather than putting in fiddly, overfitted changes or oppressively constrictive MUSTs, if there's some stubborn issue, try branching out and using different metaphors, or recommending different patterns of working. It's relatively cheap to try, and maybe you'll land on something great.

2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste time on unproductive work, try removing the parts of the skill that cause that behavior and see what happens.

3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and, when given a good harness, can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task, why the user wrote what they wrote, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.

4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.

This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
### The iteration loop

After improving the skill:

1. Apply your improvements to the skill
2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
4. Wait for the user to review and tell you they're done
5. Read the new feedback, improve again, repeat

Keep going until:

- The user says they're happy
- The feedback is all empty (everything looks good)
- You're not making meaningful progress

---
## Advanced: Blind comparison

For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.

This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.

---
## Description Optimization

The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
### Step 1: Generate trigger eval queries

Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:

```json
[
  {"query": "the user prompt", "should_trigger": true},
  {"query": "another prompt", "should_trigger": false}
]
```

The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail: file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations, typos, or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).

Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`

Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`

For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases, and cases where this skill competes with another but should win.

For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.

The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
### Step 2: Review with user

Present the eval set to the user for review using the HTML template:

1. Read the template from `assets/eval_review.html`
2. Replace the placeholders:
   - `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
   - `__SKILL_NAME_PLACEHOLDER__` → the skill's name
   - `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)

This step matters — bad eval queries lead to bad descriptions.
### Step 3: Run the optimization loop

Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."

Save the eval set to the workspace, then run in the background:

```bash
python -m scripts.run_loop \
  --eval-set <path-to-trigger-eval.json> \
  --skill-path <path-to-skill> \
  --model <model-id-powering-this-session> \
  --max-iterations 5 \
  --verbose
```

Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.

While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.

This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting.
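The train/test split can be pictured with a short sketch; the real `run_loop` implementation may differ in details like seeding and stratification, so treat this only as an illustration of the 60/40 idea:

```python
import random

def split_eval_set(evals, train_frac=0.6, seed=0):
    """Shuffle and split trigger evals into train / held-out test sets."""
    rng = random.Random(seed)
    shuffled = evals[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

evals = [{"query": f"query {i}", "should_trigger": i % 2 == 0} for i in range(20)]
train, test = split_eval_set(evals)
```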
### How skill triggering works

Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.

This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
### Step 4: Apply the result

Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.

---
### Package and Present (only if `present_files` tool is available)

Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:

```bash
python -m scripts.package_skill <path/to/skill-folder>
```

After packaging, direct the user to the resulting `.skill` file path so they can install it.

---
## Claude.ai-specific instructions

In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:

**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.

**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"

**Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons, which aren't meaningful without subagents. Focus on qualitative feedback from the user.

**The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.

**Description optimization**: This section requires the `claude` CLI tool (specifically `claude -p`), which is only available in Claude Code. Skip it if you're on Claude.ai.

**Blind comparison**: Requires subagents. Skip it.

**Packaging**: The `package_skill.py` script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting `.skill` file.

**Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. In this case:

- **Preserve the original name.** Note the skill's directory name and `name` frontmatter field -- use them unchanged. E.g., if the installed skill is `research-helper`, output `research-helper.skill` (not `research-helper-v2`).
- **Copy to a writeable location before editing.** The installed skill path may be read-only. Copy to `/tmp/skill-name/`, edit there, and package from the copy.
- **If packaging manually, stage in `/tmp/` first**, then copy to the output directory -- direct writes may fail due to permissions.
## Cowork-Specific Instructions

If you're in Cowork, the main things to know are:

- You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than in parallel.)
- You don't have a browser or display, so when generating the eval viewer, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Then provide a link that the user can click to open the HTML in their browser.
- For whatever reason, the Cowork setup seems to disincline Claude from generating the eval viewer after running the tests, so just to reiterate: whether you're in Cowork or in Claude Code, after running tests, you should always generate the eval viewer for the human to look at examples before revising the skill yourself and trying to make corrections, using `generate_review.py` (not writing your own boutique HTML). Sorry in advance, but I'm gonna go all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating outputs yourself. You want to get them in front of the human ASAP!
- Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first).
- Packaging works — `package_skill.py` just needs Python and a filesystem.
- Description optimization (`run_loop.py` / `run_eval.py`) should work in Cowork just fine, since it uses `claude -p` via subprocess, not a browser, but save it until you've fully finished making the skill and the user agrees it's in good shape.
- **Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. Follow the update guidance in the Claude.ai section above.
## Reference files

The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.

- `agents/grader.md` — How to evaluate assertions against outputs
- `agents/comparator.md` — How to do blind A/B comparison between two outputs
- `agents/analyzer.md` — How to analyze why one version beat another

The references/ directory has additional documentation:

- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.

---
Repeating the core loop one more time for emphasis:

- Figure out what the skill is about
- Draft or edit the skill
- Run claude-with-access-to-the-skill on test prompts
- With the user, evaluate the outputs:
  - Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
  - Run quantitative evals
- Repeat until you and the user are satisfied
- Package the final skill and return it to the user.

Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens.

Good luck!
---

274 third_party/zeroclaw/.claude/skills/skill-creator/agents/analyzer.md (vendored, new file)
# Post-hoc Analyzer Agent

Analyze blind comparison results to understand WHY the winner won and generate improvement suggestions.

## Role

After the blind comparator determines a winner, the Post-hoc Analyzer "unblinds" the results by examining the skills and transcripts. The goal is to extract actionable insights: what made the winner better, and how can the loser be improved?
## Inputs
|
||||||
|
|
||||||
|
You receive these parameters in your prompt:
|
||||||
|
|
||||||
|
- **winner**: "A" or "B" (from blind comparison)
|
||||||
|
- **winner_skill_path**: Path to the skill that produced the winning output
|
||||||
|
- **winner_transcript_path**: Path to the execution transcript for the winner
|
||||||
|
- **loser_skill_path**: Path to the skill that produced the losing output
|
||||||
|
- **loser_transcript_path**: Path to the execution transcript for the loser
|
||||||
|
- **comparison_result_path**: Path to the blind comparator's output JSON
|
||||||
|
- **output_path**: Where to save the analysis results
|
||||||
|
|
||||||
|
## Process
|
||||||
|
|
||||||
|
### Step 1: Read Comparison Result
|
||||||
|
|
||||||
|
1. Read the blind comparator's output at comparison_result_path
|
||||||
|
2. Note the winning side (A or B), the reasoning, and any scores
|
||||||
|
3. Understand what the comparator valued in the winning output
|
||||||
|
|
||||||
|
### Step 2: Read Both Skills
|
||||||
|
|
||||||
|
1. Read the winner skill's SKILL.md and key referenced files
|
||||||
|
2. Read the loser skill's SKILL.md and key referenced files
|
||||||
|
3. Identify structural differences:
|
||||||
|
- Instructions clarity and specificity
|
||||||
|
- Script/tool usage patterns
|
||||||
|
- Example coverage
|
||||||
|
- Edge case handling
|
||||||
|
|
||||||
|
### Step 3: Read Both Transcripts
|
||||||
|
|
||||||
|
1. Read the winner's transcript
|
||||||
|
2. Read the loser's transcript
|
||||||
|
3. Compare execution patterns:
|
||||||
|
- How closely did each follow their skill's instructions?
|
||||||
|
- What tools were used differently?
|
||||||
|
- Where did the loser diverge from optimal behavior?
|
||||||
|
- Did either encounter errors or make recovery attempts?
|
||||||
|
|
||||||
|
### Step 4: Analyze Instruction Following
|
||||||
|
|
||||||
|
For each transcript, evaluate:
|
||||||
|
- Did the agent follow the skill's explicit instructions?
|
||||||
|
- Did the agent use the skill's provided tools/scripts?
|
||||||
|
- Were there missed opportunities to leverage skill content?
|
||||||
|
- Did the agent add unnecessary steps not in the skill?
|
||||||
|
|
||||||
|
Score instruction following 1-10 and note specific issues.
|
||||||
|
|
||||||
|
### Step 5: Identify Winner Strengths
|
||||||
|
|
||||||
|
Determine what made the winner better:
|
||||||
|
- Clearer instructions that led to better behavior?
|
||||||
|
- Better scripts/tools that produced better output?
|
||||||
|
- More comprehensive examples that guided edge cases?
|
||||||
|
- Better error handling guidance?
|
||||||
|
|
||||||
|
Be specific. Quote from skills/transcripts where relevant.
|
||||||
|
|
||||||
|
### Step 6: Identify Loser Weaknesses
|
||||||
|
|
||||||
|
Determine what held the loser back:
|
||||||
|
- Ambiguous instructions that led to suboptimal choices?
|
||||||
|
- Missing tools/scripts that forced workarounds?
|
||||||
|
- Gaps in edge case coverage?
|
||||||
|
- Poor error handling that caused failures?
|
||||||
|
|
||||||
|
### Step 7: Generate Improvement Suggestions
|
||||||
|
|
||||||
|
Based on the analysis, produce actionable suggestions for improving the loser skill:
|
||||||
|
- Specific instruction changes to make
|
||||||
|
- Tools/scripts to add or modify
|
||||||
|
- Examples to include
|
||||||
|
- Edge cases to address
|
||||||
|
|
||||||
|
Prioritize by impact. Focus on changes that would have changed the outcome.
|
||||||
|
|
||||||
|
### Step 8: Write Analysis Results
|
||||||
|
|
||||||
|
Save structured analysis to `{output_path}`.
|
||||||
|
|
||||||
|
## Output Format

Write a JSON file with this structure:

```json
{
  "comparison_summary": {
    "winner": "A",
    "winner_skill": "path/to/winner/skill",
    "loser_skill": "path/to/loser/skill",
    "comparator_reasoning": "Brief summary of why comparator chose winner"
  },
  "winner_strengths": [
    "Clear step-by-step instructions for handling multi-page documents",
    "Included validation script that caught formatting errors",
    "Explicit guidance on fallback behavior when OCR fails"
  ],
  "loser_weaknesses": [
    "Vague instruction 'process the document appropriately' led to inconsistent behavior",
    "No script for validation, agent had to improvise and made errors",
    "No guidance on OCR failure, agent gave up instead of trying alternatives"
  ],
  "instruction_following": {
    "winner": {
      "score": 9,
      "issues": [
        "Minor: skipped optional logging step"
      ]
    },
    "loser": {
      "score": 6,
      "issues": [
        "Did not use the skill's formatting template",
        "Invented own approach instead of following step 3",
        "Missed the 'always validate output' instruction"
      ]
    }
  },
  "improvement_suggestions": [
    {
      "priority": "high",
      "category": "instructions",
      "suggestion": "Replace 'process the document appropriately' with explicit steps: 1) Extract text, 2) Identify sections, 3) Format per template",
      "expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
    },
    {
      "priority": "high",
      "category": "tools",
      "suggestion": "Add validate_output.py script similar to winner skill's validation approach",
      "expected_impact": "Would catch formatting errors before final output"
    },
    {
      "priority": "medium",
      "category": "error_handling",
      "suggestion": "Add fallback instructions: 'If OCR fails, try: 1) different resolution, 2) image preprocessing, 3) manual extraction'",
      "expected_impact": "Would prevent early failure on difficult documents"
    }
  ],
  "transcript_insights": {
    "winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script -> Fixed 2 issues -> Produced output",
    "loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods -> No validation -> Output had errors"
  }
}
```
## Guidelines

- **Be specific**: Quote from skills and transcripts; don't just say "instructions were unclear"
- **Be actionable**: Suggestions should be concrete changes, not vague advice
- **Focus on skill improvements**: The goal is to improve the losing skill, not critique the agent
- **Prioritize by impact**: Which changes would most likely have changed the outcome?
- **Consider causation**: Did the skill weakness actually cause the worse output, or is it incidental?
- **Stay objective**: Analyze what happened; don't editorialize
- **Think about generalization**: Would this improvement help on other evals too?

## Categories for Suggestions

Use these categories to organize improvement suggestions:

| Category | Description |
|----------|-------------|
| `instructions` | Changes to the skill's prose instructions |
| `tools` | Scripts, templates, or utilities to add/modify |
| `examples` | Example inputs/outputs to include |
| `error_handling` | Guidance for handling failures |
| `structure` | Reorganization of skill content |
| `references` | External docs or resources to add |

## Priority Levels

- **high**: Would likely change the outcome of this comparison
- **medium**: Would improve quality but may not change win/loss
- **low**: Nice to have; marginal improvement

---
# Analyzing Benchmark Results

When analyzing benchmark results, the analyzer's purpose is to **surface patterns and anomalies** across multiple runs, not to suggest skill improvements.

## Role

Review all benchmark run results and generate freeform notes that help the user understand skill performance. Focus on patterns that wouldn't be visible from aggregate metrics alone.

## Inputs

You receive these parameters in your prompt:

- **benchmark_data_path**: Path to the in-progress benchmark.json with all run results
- **skill_path**: Path to the skill being benchmarked
- **output_path**: Where to save the notes (as a JSON array of strings)

## Process

### Step 1: Read Benchmark Data

1. Read the benchmark.json containing all run results
2. Note the configurations tested (with_skill, without_skill)
3. Understand the run_summary aggregates already calculated

### Step 2: Analyze Per-Assertion Patterns

For each expectation across all runs:

- Does it **always pass** in both configurations? (may not differentiate skill value)
- Does it **always fail** in both configurations? (may be broken or beyond capability)
- Does it **always pass with skill but fail without**? (the skill clearly adds value here)
- Does it **always fail with skill but pass without**? (the skill may be hurting)
- Is it **highly variable**? (flaky expectation or non-deterministic behavior)
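The classification above can be sketched as a small helper. This is an illustrative sketch, not part of the skill: it assumes the per-run pass/fail results for one expectation have already been collected into two boolean lists, one per configuration (the actual benchmark.json layout is defined in `references/schemas.md`).

```python
def classify_assertion(with_skill: list[bool], without_skill: list[bool]) -> str:
    """Classify one expectation's pass/fail pattern across benchmark runs.

    with_skill / without_skill hold per-run pass results for each configuration.
    """
    all_with, any_with = all(with_skill), any(with_skill)
    all_without, any_without = all(without_skill), any(without_skill)

    if all_with and all_without:
        return "always passes (may not differentiate skill value)"
    if not any_with and not any_without:
        return "always fails (broken or beyond capability)"
    if all_with and not any_without:
        return "skill clearly adds value"
    if not any_with and all_without:
        return "skill may be hurting"
    return "highly variable (flaky or non-deterministic)"
```

Each bucket maps directly to one bullet above, so the classifier's output can be dropped into a note verbatim.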
### Step 3: Analyze Cross-Eval Patterns

Look for patterns across evals:

- Are certain eval types consistently harder or easier?
- Do some evals show high variance while others are stable?
- Are there surprising results that contradict expectations?

### Step 4: Analyze Metrics Patterns

Look at time_seconds, tokens, and tool_calls:

- Does the skill significantly increase execution time?
- Is there high variance in resource usage?
- Are there outlier runs that skew the aggregates?

### Step 5: Generate Notes

Write freeform observations as a list of strings. Each note should:

- State a specific observation
- Be grounded in the data (not speculation)
- Help the user understand something the aggregate metrics don't show

Examples:

- "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value"
- "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure that may be flaky"
- "Without-skill runs consistently fail on table extraction expectations (0% pass rate)"
- "Skill adds 13s average execution time but improves pass rate by 50%"
- "Token usage is 80% higher with skill, primarily due to script output parsing"
- "All 3 without-skill runs for eval 1 produced empty output"

### Step 6: Write Notes

Save notes to `{output_path}` as a JSON array of strings:

```json
[
  "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
  "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure",
  "Without-skill runs consistently fail on table extraction expectations",
  "Skill adds 13s average execution time but improves pass rate by 50%"
]
```

## Guidelines

**DO:**

- Report what you observe in the data
- Be specific about which evals, expectations, or runs you're referring to
- Note patterns that aggregate metrics would hide
- Provide context that helps interpret the numbers

**DO NOT:**

- Suggest improvements to the skill (that's for the improvement step, not benchmarking)
- Make subjective quality judgments ("the output was good/bad")
- Speculate about causes without evidence
- Repeat information already in the run_summary aggregates
202 third_party/zeroclaw/.claude/skills/skill-creator/agents/comparator.md (vendored, new file)
@@ -0,0 +1,202 @@
# Blind Comparator Agent

Compare two outputs WITHOUT knowing which skill produced them.

## Role

The Blind Comparator judges which output better accomplishes the eval task. You receive two outputs labeled A and B, but you do NOT know which skill produced which. This prevents bias toward a particular skill or approach.

Your judgment is based purely on output quality and task completion.

## Inputs

You receive these parameters in your prompt:

- **output_a_path**: Path to the first output file or directory
- **output_b_path**: Path to the second output file or directory
- **eval_prompt**: The original task/prompt that was executed
- **expectations**: List of expectations to check (optional - may be empty)

## Process

### Step 1: Read Both Outputs

1. Examine output A (file or directory)
2. Examine output B (file or directory)
3. Note the type, structure, and content of each
4. If outputs are directories, examine all relevant files inside

### Step 2: Understand the Task

1. Read the eval_prompt carefully
2. Identify what the task requires:
   - What should be produced?
   - What qualities matter (accuracy, completeness, format)?
   - What would distinguish a good output from a poor one?

### Step 3: Generate Evaluation Rubric

Based on the task, generate a rubric with two dimensions:

**Content Rubric** (what the output contains):

| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
|-----------|----------|----------------|---------------|
| Correctness | Major errors | Minor errors | Fully correct |
| Completeness | Missing key elements | Mostly complete | All elements present |
| Accuracy | Significant inaccuracies | Minor inaccuracies | Accurate throughout |

**Structure Rubric** (how the output is organized):

| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
|-----------|----------|----------------|---------------|
| Organization | Disorganized | Reasonably organized | Clear, logical structure |
| Formatting | Inconsistent/broken | Mostly consistent | Professional, polished |
| Usability | Difficult to use | Usable with effort | Easy to use |

Adapt criteria to the specific task. For example:

- PDF form → "Field alignment", "Text readability", "Data placement"
- Document → "Section structure", "Heading hierarchy", "Paragraph flow"
- Data output → "Schema correctness", "Data types", "Completeness"

### Step 4: Evaluate Each Output Against the Rubric

For each output (A and B):

1. **Score each criterion** on the rubric (1-5 scale)
2. **Calculate dimension totals**: content score and structure score
3. **Calculate overall score**: the average of the dimension scores, scaled to 1-10
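As a concrete sketch, the scoring arithmetic works out as follows (the one-decimal rounding and the double-to-scale step are inferred from the example JSON in the Output Format section, and reproduce its numbers):

```python
def rubric_scores(content: dict[str, int], structure: dict[str, int]) -> dict[str, float]:
    """Compute dimension totals and the 1-10 overall score from 1-5 criterion scores."""
    content_score = round(sum(content.values()) / len(content), 1)
    structure_score = round(sum(structure.values()) / len(structure), 1)
    # Average the two dimension scores (each on 1-5), then double to scale to 1-10.
    overall_score = round((content_score + structure_score) / 2 * 2, 1)
    return {
        "content_score": content_score,
        "structure_score": structure_score,
        "overall_score": overall_score,
    }
```

For example, content scores of 5/5/4 and structure scores of 4/5/4 give 4.7, 4.3, and an overall score of 9.0, matching output A in the example below.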
### Step 5: Check Assertions (if provided)

If expectations are provided:

1. Check each expectation against output A
2. Check each expectation against output B
3. Count pass rates for each output
4. Use expectation scores as secondary evidence (not the primary decision factor)

### Step 6: Determine the Winner

Compare A and B based on (in priority order):

1. **Primary**: Overall rubric score (content + structure)
2. **Secondary**: Assertion pass rates (if applicable)
3. **Tiebreaker**: If truly equal, declare a TIE

Be decisive - ties should be rare. One output is usually better, even if marginally.

### Step 7: Write Comparison Results

Save results to a JSON file at the path specified (or `comparison.json` if not specified).
## Output Format

Write a JSON file with this structure:

```json
{
  "winner": "A",
  "reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
  "rubric": {
    "A": {
      "content": {
        "correctness": 5,
        "completeness": 5,
        "accuracy": 4
      },
      "structure": {
        "organization": 4,
        "formatting": 5,
        "usability": 4
      },
      "content_score": 4.7,
      "structure_score": 4.3,
      "overall_score": 9.0
    },
    "B": {
      "content": {
        "correctness": 3,
        "completeness": 2,
        "accuracy": 3
      },
      "structure": {
        "organization": 3,
        "formatting": 2,
        "usability": 3
      },
      "content_score": 2.7,
      "structure_score": 2.7,
      "overall_score": 5.4
    }
  },
  "output_quality": {
    "A": {
      "score": 9,
      "strengths": ["Complete solution", "Well-formatted", "All fields present"],
      "weaknesses": ["Minor style inconsistency in header"]
    },
    "B": {
      "score": 5,
      "strengths": ["Readable output", "Correct basic structure"],
      "weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
    }
  },
  "expectation_results": {
    "A": {
      "passed": 4,
      "total": 5,
      "pass_rate": 0.80,
      "details": [
        {"text": "Output includes name", "passed": true},
        {"text": "Output includes date", "passed": true},
        {"text": "Format is PDF", "passed": true},
        {"text": "Contains signature", "passed": false},
        {"text": "Readable text", "passed": true}
      ]
    },
    "B": {
      "passed": 3,
      "total": 5,
      "pass_rate": 0.60,
      "details": [
        {"text": "Output includes name", "passed": true},
        {"text": "Output includes date", "passed": false},
        {"text": "Format is PDF", "passed": true},
        {"text": "Contains signature", "passed": false},
        {"text": "Readable text", "passed": true}
      ]
    }
  }
}
```

If no expectations were provided, omit the `expectation_results` field entirely.
## Field Descriptions

- **winner**: "A", "B", or "TIE"
- **reasoning**: Clear explanation of why the winner was chosen (or why it's a tie)
- **rubric**: Structured rubric evaluation for each output
  - **content**: Scores for content criteria (correctness, completeness, accuracy)
  - **structure**: Scores for structure criteria (organization, formatting, usability)
  - **content_score**: Average of content criteria (1-5)
  - **structure_score**: Average of structure criteria (1-5)
  - **overall_score**: Combined score scaled to 1-10
- **output_quality**: Summary quality assessment
  - **score**: 1-10 rating (should match the rubric overall_score)
  - **strengths**: List of positive aspects
  - **weaknesses**: List of issues or shortcomings
- **expectation_results**: (Only if expectations were provided)
  - **passed**: Number of expectations that passed
  - **total**: Total number of expectations
  - **pass_rate**: Fraction passed (0.0 to 1.0)
  - **details**: Individual expectation results

## Guidelines

- **Stay blind**: DO NOT try to infer which skill produced which output. Judge purely on output quality.
- **Be specific**: Cite specific examples when explaining strengths and weaknesses.
- **Be decisive**: Choose a winner unless the outputs are genuinely equivalent.
- **Output quality first**: Assertion scores are secondary to overall task completion.
- **Be objective**: Don't favor outputs based on style preferences; focus on correctness and completeness.
- **Explain your reasoning**: The reasoning field should make it clear why you chose the winner.
- **Handle edge cases**: If both outputs fail, pick the one that fails less badly. If both are excellent, pick the one that's marginally better.
223 third_party/zeroclaw/.claude/skills/skill-creator/agents/grader.md (vendored, new file)
@@ -0,0 +1,223 @@
# Grader Agent

Evaluate expectations against an execution transcript and outputs.

## Role

The Grader reviews a transcript and output files, then determines whether each expectation passes or fails. Provide clear evidence for each judgment.

You have two jobs: grade the outputs, and critique the evals themselves. A passing grade on a weak assertion is worse than useless; it creates false confidence. When you notice an assertion that's trivially satisfied, or an important outcome that no assertion checks, say so.

## Inputs

You receive these parameters in your prompt:

- **expectations**: List of expectations to evaluate (strings)
- **transcript_path**: Path to the execution transcript (markdown file)
- **outputs_dir**: Directory containing output files from execution

## Process

### Step 1: Read the Transcript

1. Read the transcript file completely
2. Note the eval prompt, execution steps, and final result
3. Identify any documented issues or errors

### Step 2: Examine Output Files

1. List the files in outputs_dir
2. Read/examine each file relevant to the expectations. If outputs aren't plain text, use the inspection tools provided in your prompt; don't rely solely on what the transcript says the executor produced.
3. Note contents, structure, and quality

### Step 3: Evaluate Each Assertion

For each expectation:

1. **Search for evidence** in the transcript and outputs
2. **Determine the verdict**:
   - **PASS**: Clear evidence the expectation is true AND the evidence reflects genuine task completion, not just surface-level compliance
   - **FAIL**: No evidence, evidence that contradicts the expectation, or evidence that is superficial (e.g., the correct filename but empty/wrong content)
3. **Cite the evidence**: Quote the specific text or describe what you found

### Step 4: Extract and Verify Claims

Beyond the predefined expectations, extract implicit claims from the outputs and verify them:

1. **Extract claims** from the transcript and outputs:
   - Factual statements ("The form has 12 fields")
   - Process claims ("Used pypdf to fill the form")
   - Quality claims ("All fields were filled correctly")

2. **Verify each claim**:
   - **Factual claims**: Check against the outputs or external sources
   - **Process claims**: Verify from the transcript
   - **Quality claims**: Evaluate whether the claim is justified

3. **Flag unverifiable claims**: Note claims that cannot be verified with the available information

This catches issues that predefined expectations might miss.

### Step 5: Read User Notes

If `{outputs_dir}/user_notes.md` exists:

1. Read it and note any uncertainties or issues flagged by the executor
2. Include relevant concerns in the grading output
3. These may reveal problems even when expectations pass

### Step 6: Critique the Evals

After grading, consider whether the evals themselves could be improved. Only surface suggestions when there's a clear gap.

Good suggestions test meaningful outcomes: assertions that are hard to satisfy without actually doing the work correctly. Think about what makes an assertion *discriminating*: it passes when the skill genuinely succeeds and fails when it doesn't.

Suggestions worth raising:

- An assertion that passed but would also pass for a clearly wrong output (e.g., checking filename existence but not file content)
- An important outcome you observed, good or bad, that no assertion covers at all
- An assertion that can't actually be verified from the available outputs

Keep the bar high. The goal is to flag things the eval author would say "good catch" about, not to nitpick every assertion.

### Step 7: Write Grading Results

Save results to `{outputs_dir}/../grading.json` (a sibling of outputs_dir).
### Step 8: Read Executor Metrics and Timing

1. If `{outputs_dir}/metrics.json` exists, read it and include it in the grading output
2. If `{outputs_dir}/../timing.json` exists, read it and include the timing data

## Grading Criteria

**PASS when**:

- The transcript or outputs clearly demonstrate the expectation is true
- Specific evidence can be cited
- The evidence reflects genuine substance, not just surface compliance (e.g., a file exists AND contains correct content, not just the right filename)

**FAIL when**:

- No evidence is found for the expectation
- Evidence contradicts the expectation
- The expectation cannot be verified from the available information
- The evidence is superficial: the assertion is technically satisfied but the underlying task outcome is wrong or incomplete
- The output appears to meet the assertion by coincidence rather than by actually doing the work

**When uncertain**: the burden of proof is on the expectation; it passes only with clear evidence.
## Output Format

Write a JSON file with this structure:

```json
{
  "expectations": [
    {
      "text": "The output includes the name 'John Smith'",
      "passed": true,
      "evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
    },
    {
      "text": "The spreadsheet has a SUM formula in cell B10",
      "passed": false,
      "evidence": "No spreadsheet was created. The output was a text file."
    },
    {
      "text": "The assistant used the skill's OCR script",
      "passed": true,
      "evidence": "Transcript Step 2 shows: 'Tool: Bash - python ocr_script.py image.png'"
    }
  ],
  "summary": {
    "passed": 2,
    "failed": 1,
    "total": 3,
    "pass_rate": 0.67
  },
  "execution_metrics": {
    "tool_calls": {
      "Read": 5,
      "Write": 2,
      "Bash": 8
    },
    "total_tool_calls": 15,
    "total_steps": 6,
    "errors_encountered": 0,
    "output_chars": 12450,
    "transcript_chars": 3200
  },
  "timing": {
    "executor_duration_seconds": 165.0,
    "grader_duration_seconds": 26.0,
    "total_duration_seconds": 191.0
  },
  "claims": [
    {
      "claim": "The form has 12 fillable fields",
      "type": "factual",
      "verified": true,
      "evidence": "Counted 12 fields in field_info.json"
    },
    {
      "claim": "All required fields were populated",
      "type": "quality",
      "verified": false,
      "evidence": "Reference section was left blank despite data being available"
    }
  ],
  "user_notes_summary": {
    "uncertainties": ["Used 2023 data, may be stale"],
    "needs_review": [],
    "workarounds": ["Fell back to text overlay for non-fillable fields"]
  },
  "eval_feedback": {
    "suggestions": [
      {
        "assertion": "The output includes the name 'John Smith'",
        "reason": "A hallucinated document that mentions the name would also pass - consider checking it appears as the primary contact with matching phone and email from the input"
      },
      {
        "reason": "No assertion checks whether the extracted phone numbers match the input - I observed incorrect numbers in the output that went uncaught"
      }
    ],
    "overall": "Assertions check presence but not correctness. Consider adding content verification."
  }
}
```
## Field Descriptions
|
||||||
|
|
||||||
|
- **expectations**: Array of graded expectations
|
||||||
|
- **text**: The original expectation text
|
||||||
|
- **passed**: Boolean - true if expectation passes
|
||||||
|
- **evidence**: Specific quote or description supporting the verdict
|
||||||
|
- **summary**: Aggregate statistics
|
||||||
|
- **passed**: Count of passed expectations
|
||||||
|
- **failed**: Count of failed expectations
|
||||||
|
- **total**: Total expectations evaluated
|
||||||
|
- **pass_rate**: Fraction passed (0.0 to 1.0)
|
||||||
|
- **execution_metrics**: Copied from executor's metrics.json (if available)
|
||||||
|
- **output_chars**: Total character count of output files (proxy for tokens)
|
||||||
|
- **transcript_chars**: Character count of transcript
|
||||||
|
- **timing**: Wall clock timing from timing.json (if available)
|
||||||
|
- **executor_duration_seconds**: Time spent in executor subagent
|
||||||
|
- **total_duration_seconds**: Total elapsed time for the run
|
||||||
|
- **claims**: Extracted and verified claims from the output
|
||||||
|
- **claim**: The statement being verified
|
||||||
|
- **type**: "factual", "process", or "quality"
|
||||||
|
- **verified**: Boolean - whether the claim holds
|
||||||
|
- **evidence**: Supporting or contradicting evidence
|
||||||
|
- **user_notes_summary**: Issues flagged by the executor
|
||||||
|
- **uncertainties**: Things the executor wasn't sure about
|
||||||
|
- **needs_review**: Items requiring human attention
|
||||||
|
- **workarounds**: Places where the skill didn't work as expected
|
||||||
|
- **eval_feedback**: Improvement suggestions for the evals (only when warranted)
|
||||||
|
- **suggestions**: List of concrete suggestions, each with a `reason` and optionally an `assertion` it relates to
|
||||||
|
- **overall**: Brief assessment — can be "No suggestions, evals look solid" if nothing to flag
|
||||||
|
|
||||||
|
## Guidelines
|
||||||
|
|
||||||
|
- **Be objective**: Base verdicts on evidence, not assumptions
|
||||||
|
- **Be specific**: Quote the exact text that supports your verdict
|
||||||
|
- **Be thorough**: Check both transcript and output files
|
||||||
|
- **Be consistent**: Apply the same standard to each expectation
|
||||||
|
- **Explain failures**: Make it clear why evidence was insufficient
|
||||||
|
- **No partial credit**: Each expectation is pass or fail, not partial
|
||||||
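The fields above can also be consumed programmatically. A minimal sketch (not part of this commit) that recomputes the summary pass rate and collects failed expectations from a loaded grading.json; the path in the usage comment is illustrative:

```python
import json
from pathlib import Path


def summarize_grading(grading: dict) -> tuple[float, list[str]]:
    """Recompute the pass rate and collect the texts of failed expectations."""
    expectations = grading.get("expectations", [])
    failed = [e["text"] for e in expectations if not e["passed"]]
    total = len(expectations)
    pass_rate = (total - len(failed)) / total if total else 0.0
    return pass_rate, failed


# Example (path is illustrative):
# grading = json.loads(Path("run-1/grading.json").read_text())
# rate, failed = summarize_grading(grading)
```
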
third_party/zeroclaw/.claude/skills/skill-creator/assets/eval_review.html (vendored, normal file, 146 lines added)
@@ -0,0 +1,146 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Eval Set Review - __SKILL_NAME_PLACEHOLDER__</title>
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=Poppins:wght@500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
  <style>
    * { box-sizing: border-box; margin: 0; padding: 0; }
    body { font-family: 'Lora', Georgia, serif; background: #faf9f5; padding: 2rem; color: #141413; }
    h1 { font-family: 'Poppins', sans-serif; margin-bottom: 0.5rem; font-size: 1.5rem; }
    .description { color: #b0aea5; margin-bottom: 1.5rem; font-style: italic; max-width: 900px; }
    .controls { margin-bottom: 1rem; display: flex; gap: 0.5rem; }
    .btn { font-family: 'Poppins', sans-serif; padding: 0.5rem 1rem; border: none; border-radius: 6px; cursor: pointer; font-size: 0.875rem; font-weight: 500; }
    .btn-add { background: #6a9bcc; color: white; }
    .btn-add:hover { background: #5889b8; }
    .btn-export { background: #d97757; color: white; }
    .btn-export:hover { background: #c4613f; }
    table { width: 100%; max-width: 1100px; border-collapse: collapse; background: white; border-radius: 6px; overflow: hidden; box-shadow: 0 1px 3px rgba(0,0,0,0.08); }
    th { font-family: 'Poppins', sans-serif; background: #141413; color: #faf9f5; padding: 0.75rem 1rem; text-align: left; font-size: 0.875rem; }
    td { padding: 0.75rem 1rem; border-bottom: 1px solid #e8e6dc; vertical-align: top; }
    tr:nth-child(even) td { background: #faf9f5; }
    tr:hover td { background: #f3f1ea; }
    .section-header td { background: #e8e6dc; font-family: 'Poppins', sans-serif; font-weight: 500; font-size: 0.8rem; color: #141413; text-transform: uppercase; letter-spacing: 0.05em; }
    .query-input { width: 100%; padding: 0.4rem; border: 1px solid #e8e6dc; border-radius: 4px; font-size: 0.875rem; font-family: 'Lora', Georgia, serif; resize: vertical; min-height: 60px; }
    .query-input:focus { outline: none; border-color: #d97757; box-shadow: 0 0 0 2px rgba(217,119,87,0.15); }
    .toggle { position: relative; display: inline-block; width: 44px; height: 24px; }
    .toggle input { opacity: 0; width: 0; height: 0; }
    .toggle .slider { position: absolute; inset: 0; background: #b0aea5; border-radius: 24px; cursor: pointer; transition: 0.2s; }
    .toggle .slider::before { content: ""; position: absolute; width: 18px; height: 18px; left: 3px; bottom: 3px; background: white; border-radius: 50%; transition: 0.2s; }
    .toggle input:checked + .slider { background: #d97757; }
    .toggle input:checked + .slider::before { transform: translateX(20px); }
    .btn-delete { background: #c44; color: white; padding: 0.3rem 0.6rem; border: none; border-radius: 4px; cursor: pointer; font-size: 0.75rem; font-family: 'Poppins', sans-serif; }
    .btn-delete:hover { background: #a33; }
    .summary { margin-top: 1rem; color: #b0aea5; font-size: 0.875rem; }
  </style>
</head>
<body>
  <h1>Eval Set Review: <span id="skill-name">__SKILL_NAME_PLACEHOLDER__</span></h1>
  <p class="description">Current description: <span id="skill-desc">__SKILL_DESCRIPTION_PLACEHOLDER__</span></p>

  <div class="controls">
    <button class="btn btn-add" onclick="addRow()">+ Add Query</button>
    <button class="btn btn-export" onclick="exportEvalSet()">Export Eval Set</button>
  </div>

  <table>
    <thead>
      <tr>
        <th style="width:65%">Query</th>
        <th style="width:18%">Should Trigger</th>
        <th style="width:10%">Actions</th>
      </tr>
    </thead>
    <tbody id="eval-body"></tbody>
  </table>

  <p class="summary" id="summary"></p>

  <script>
    const EVAL_DATA = __EVAL_DATA_PLACEHOLDER__;

    let evalItems = [...EVAL_DATA];

    function render() {
      const tbody = document.getElementById('eval-body');
      tbody.innerHTML = '';

      // Sort: should-trigger first, then should-not-trigger
      const sorted = evalItems
        .map((item, origIdx) => ({ ...item, origIdx }))
        .sort((a, b) => (b.should_trigger ? 1 : 0) - (a.should_trigger ? 1 : 0));

      let lastGroup = null;
      sorted.forEach(item => {
        const group = item.should_trigger ? 'trigger' : 'no-trigger';
        if (group !== lastGroup) {
          const headerRow = document.createElement('tr');
          headerRow.className = 'section-header';
          headerRow.innerHTML = `<td colspan="3">${item.should_trigger ? 'Should Trigger' : 'Should NOT Trigger'}</td>`;
          tbody.appendChild(headerRow);
          lastGroup = group;
        }

        const idx = item.origIdx;
        const tr = document.createElement('tr');
        tr.innerHTML = `
          <td><textarea class="query-input" onchange="updateQuery(${idx}, this.value)">${escapeHtml(item.query)}</textarea></td>
          <td>
            <label class="toggle">
              <input type="checkbox" ${item.should_trigger ? 'checked' : ''} onchange="updateTrigger(${idx}, this.checked)">
              <span class="slider"></span>
            </label>
            <span style="margin-left:8px;font-size:0.8rem;color:#b0aea5">${item.should_trigger ? 'Yes' : 'No'}</span>
          </td>
          <td><button class="btn-delete" onclick="deleteRow(${idx})">Delete</button></td>
        `;
        tbody.appendChild(tr);
      });
      updateSummary();
    }

    function escapeHtml(text) {
      const div = document.createElement('div');
      div.textContent = text;
      return div.innerHTML;
    }

    function updateQuery(idx, value) { evalItems[idx].query = value; updateSummary(); }
    function updateTrigger(idx, value) { evalItems[idx].should_trigger = value; render(); }
    function deleteRow(idx) { evalItems.splice(idx, 1); render(); }

    function addRow() {
      evalItems.push({ query: '', should_trigger: true });
      render();
      const inputs = document.querySelectorAll('.query-input');
      inputs[inputs.length - 1].focus();
    }

    function updateSummary() {
      const trigger = evalItems.filter(i => i.should_trigger).length;
      const noTrigger = evalItems.filter(i => !i.should_trigger).length;
      document.getElementById('summary').textContent =
        `${evalItems.length} queries total: ${trigger} should trigger, ${noTrigger} should not trigger`;
    }

    function exportEvalSet() {
      const valid = evalItems.filter(i => i.query.trim() !== '');
      const data = valid.map(i => ({ query: i.query.trim(), should_trigger: i.should_trigger }));
      const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' });
      const url = URL.createObjectURL(blob);
      const a = document.createElement('a');
      a.href = url;
      a.download = 'eval_set.json';
      document.body.appendChild(a);
      a.click();
      document.body.removeChild(a);
      URL.revokeObjectURL(url);
    }

    render();
  </script>
</body>
</html>
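The Export button above writes the filtered queries to `eval_set.json`. A small sketch (not part of this commit) of how downstream tooling might consume that export, reproducing the page's `updateSummary()` line in Python; the file path in the comment is illustrative:

```python
import json
from pathlib import Path


def summarize_eval_set(items: list[dict]) -> str:
    """Reproduce the page's updateSummary() line for an exported eval_set.json."""
    trigger = sum(1 for i in items if i["should_trigger"])
    no_trigger = len(items) - trigger
    return f"{len(items)} queries total: {trigger} should trigger, {no_trigger} should not trigger"


# Example (the Export button saves the file as eval_set.json):
# items = json.loads(Path("eval_set.json").read_text())
# print(summarize_eval_set(items))
```
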
third_party/zeroclaw/.claude/skills/skill-creator/eval-viewer/generate_review.py (vendored, normal file, 471 lines added)
@@ -0,0 +1,471 @@
#!/usr/bin/env python3
"""Generate and serve a review page for eval results.

Reads the workspace directory, discovers runs (directories with outputs/),
embeds all output data into a self-contained HTML page, and serves it via
a tiny HTTP server. Feedback auto-saves to feedback.json in the workspace.

Usage:
    python generate_review.py <workspace-path> [--port PORT] [--skill-name NAME]
    python generate_review.py <workspace-path> --previous-workspace /path/to/old/workspace

No dependencies beyond the Python stdlib are required.
"""

import argparse
import base64
import json
import mimetypes
import os
import re
import signal
import subprocess
import sys
import time
import webbrowser
from functools import partial
from http.server import HTTPServer, BaseHTTPRequestHandler
from pathlib import Path

# Files to exclude from output listings
METADATA_FILES = {"transcript.md", "user_notes.md", "metrics.json"}

# Extensions we render as inline text
TEXT_EXTENSIONS = {
    ".txt", ".md", ".json", ".csv", ".py", ".js", ".ts", ".tsx", ".jsx",
    ".yaml", ".yml", ".xml", ".html", ".css", ".sh", ".rb", ".go", ".rs",
    ".java", ".c", ".cpp", ".h", ".hpp", ".sql", ".r", ".toml",
}

# Extensions we render as inline images
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp"}

# MIME type overrides for common types
MIME_OVERRIDES = {
    ".svg": "image/svg+xml",
    ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    ".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    ".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
}


def get_mime_type(path: Path) -> str:
    ext = path.suffix.lower()
    if ext in MIME_OVERRIDES:
        return MIME_OVERRIDES[ext]
    mime, _ = mimetypes.guess_type(str(path))
    return mime or "application/octet-stream"


def find_runs(workspace: Path) -> list[dict]:
    """Recursively find directories that contain an outputs/ subdirectory."""
    runs: list[dict] = []
    _find_runs_recursive(workspace, workspace, runs)
    # Runs with an eval_id sort first (ascending); runs without one (eval_id is
    # None) sort last by their path-derived id. Avoids comparing None with int.
    runs.sort(key=lambda r: (r.get("eval_id") is None, r.get("eval_id") or 0, r["id"]))
    return runs


def _find_runs_recursive(root: Path, current: Path, runs: list[dict]) -> None:
    if not current.is_dir():
        return

    outputs_dir = current / "outputs"
    if outputs_dir.is_dir():
        run = build_run(root, current)
        if run:
            runs.append(run)
        return

    skip = {"node_modules", ".git", "__pycache__", "skill", "inputs"}
    for child in sorted(current.iterdir()):
        if child.is_dir() and child.name not in skip:
            _find_runs_recursive(root, child, runs)


def build_run(root: Path, run_dir: Path) -> dict | None:
    """Build a run dict with prompt, outputs, and grading data."""
    prompt = ""
    eval_id = None

    # Try eval_metadata.json
    for candidate in [run_dir / "eval_metadata.json", run_dir.parent / "eval_metadata.json"]:
        if candidate.exists():
            try:
                metadata = json.loads(candidate.read_text())
                prompt = metadata.get("prompt", "")
                eval_id = metadata.get("eval_id")
            except (json.JSONDecodeError, OSError):
                pass
        if prompt:
            break

    # Fall back to transcript.md
    if not prompt:
        for candidate in [run_dir / "transcript.md", run_dir / "outputs" / "transcript.md"]:
            if candidate.exists():
                try:
                    text = candidate.read_text()
                    match = re.search(r"## Eval Prompt\n\n([\s\S]*?)(?=\n##|$)", text)
                    if match:
                        prompt = match.group(1).strip()
                except OSError:
                    pass
            if prompt:
                break

    if not prompt:
        prompt = "(No prompt found)"

    run_id = str(run_dir.relative_to(root)).replace("/", "-").replace("\\", "-")

    # Collect output files
    outputs_dir = run_dir / "outputs"
    output_files: list[dict] = []
    if outputs_dir.is_dir():
        for f in sorted(outputs_dir.iterdir()):
            if f.is_file() and f.name not in METADATA_FILES:
                output_files.append(embed_file(f))

    # Load grading if present
    grading = None
    for candidate in [run_dir / "grading.json", run_dir.parent / "grading.json"]:
        if candidate.exists():
            try:
                grading = json.loads(candidate.read_text())
            except (json.JSONDecodeError, OSError):
                pass
        if grading:
            break

    return {
        "id": run_id,
        "prompt": prompt,
        "eval_id": eval_id,
        "outputs": output_files,
        "grading": grading,
    }


def embed_file(path: Path) -> dict:
    """Read a file and return an embedded representation."""
    ext = path.suffix.lower()
    mime = get_mime_type(path)

    if ext in TEXT_EXTENSIONS:
        try:
            content = path.read_text(errors="replace")
        except OSError:
            content = "(Error reading file)"
        return {
            "name": path.name,
            "type": "text",
            "content": content,
        }
    elif ext in IMAGE_EXTENSIONS:
        try:
            raw = path.read_bytes()
            b64 = base64.b64encode(raw).decode("ascii")
        except OSError:
            return {"name": path.name, "type": "error", "content": "(Error reading file)"}
        return {
            "name": path.name,
            "type": "image",
            "mime": mime,
            "data_uri": f"data:{mime};base64,{b64}",
        }
    elif ext == ".pdf":
        try:
            raw = path.read_bytes()
            b64 = base64.b64encode(raw).decode("ascii")
        except OSError:
            return {"name": path.name, "type": "error", "content": "(Error reading file)"}
        return {
            "name": path.name,
            "type": "pdf",
            "data_uri": f"data:{mime};base64,{b64}",
        }
    elif ext == ".xlsx":
        try:
            raw = path.read_bytes()
            b64 = base64.b64encode(raw).decode("ascii")
        except OSError:
            return {"name": path.name, "type": "error", "content": "(Error reading file)"}
        return {
            "name": path.name,
            "type": "xlsx",
            "data_b64": b64,
        }
    else:
        # Binary / unknown — base64 download link
        try:
            raw = path.read_bytes()
            b64 = base64.b64encode(raw).decode("ascii")
        except OSError:
            return {"name": path.name, "type": "error", "content": "(Error reading file)"}
        return {
            "name": path.name,
            "type": "binary",
            "mime": mime,
            "data_uri": f"data:{mime};base64,{b64}",
        }


def load_previous_iteration(workspace: Path) -> dict[str, dict]:
    """Load previous iteration's feedback and outputs.

    Returns a map of run_id -> {"feedback": str, "outputs": list[dict]}.
    """
    result: dict[str, dict] = {}

    # Load feedback
    feedback_map: dict[str, str] = {}
    feedback_path = workspace / "feedback.json"
    if feedback_path.exists():
        try:
            data = json.loads(feedback_path.read_text())
            feedback_map = {
                r["run_id"]: r["feedback"]
                for r in data.get("reviews", [])
                if r.get("feedback", "").strip()
            }
        except (json.JSONDecodeError, OSError, KeyError):
            pass

    # Load runs (to get outputs)
    prev_runs = find_runs(workspace)
    for run in prev_runs:
        result[run["id"]] = {
            "feedback": feedback_map.get(run["id"], ""),
            "outputs": run.get("outputs", []),
        }

    # Also add feedback for run_ids that had feedback but no matching run
    for run_id, fb in feedback_map.items():
        if run_id not in result:
            result[run_id] = {"feedback": fb, "outputs": []}

    return result


def generate_html(
    runs: list[dict],
    skill_name: str,
    previous: dict[str, dict] | None = None,
    benchmark: dict | None = None,
) -> str:
    """Generate the complete standalone HTML page with embedded data."""
    template_path = Path(__file__).parent / "viewer.html"
    template = template_path.read_text()

    # Build previous_feedback and previous_outputs maps for the template
    previous_feedback: dict[str, str] = {}
    previous_outputs: dict[str, list[dict]] = {}
    if previous:
        for run_id, data in previous.items():
            if data.get("feedback"):
                previous_feedback[run_id] = data["feedback"]
            if data.get("outputs"):
                previous_outputs[run_id] = data["outputs"]

    embedded = {
        "skill_name": skill_name,
        "runs": runs,
        "previous_feedback": previous_feedback,
        "previous_outputs": previous_outputs,
    }
    if benchmark:
        embedded["benchmark"] = benchmark

    data_json = json.dumps(embedded)

    return template.replace("/*__EMBEDDED_DATA__*/", f"const EMBEDDED_DATA = {data_json};")


# ---------------------------------------------------------------------------
# HTTP server (stdlib only, zero dependencies)
# ---------------------------------------------------------------------------

def _kill_port(port: int) -> None:
    """Kill any process listening on the given port."""
    try:
        result = subprocess.run(
            ["lsof", "-ti", f":{port}"],
            capture_output=True, text=True, timeout=5,
        )
        for pid_str in result.stdout.strip().split("\n"):
            if pid_str.strip():
                try:
                    os.kill(int(pid_str.strip()), signal.SIGTERM)
                except (ProcessLookupError, ValueError):
                    pass
        if result.stdout.strip():
            time.sleep(0.5)
    except subprocess.TimeoutExpired:
        pass
    except FileNotFoundError:
        print("Note: lsof not found, cannot check if port is in use", file=sys.stderr)


class ReviewHandler(BaseHTTPRequestHandler):
    """Serves the review HTML and handles feedback saves.

    Regenerates the HTML on each page load so that refreshing the browser
    picks up new eval outputs without restarting the server.
    """

    def __init__(
        self,
        workspace: Path,
        skill_name: str,
        feedback_path: Path,
        previous: dict[str, dict],
        benchmark_path: Path | None,
        *args,
        **kwargs,
    ):
        self.workspace = workspace
        self.skill_name = skill_name
        self.feedback_path = feedback_path
        self.previous = previous
        self.benchmark_path = benchmark_path
        super().__init__(*args, **kwargs)

    def do_GET(self) -> None:
        if self.path == "/" or self.path == "/index.html":
            # Regenerate HTML on each request (re-scans workspace for new outputs)
            runs = find_runs(self.workspace)
            benchmark = None
            if self.benchmark_path and self.benchmark_path.exists():
                try:
                    benchmark = json.loads(self.benchmark_path.read_text())
                except (json.JSONDecodeError, OSError):
                    pass
            html = generate_html(runs, self.skill_name, self.previous, benchmark)
            content = html.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(content)))
            self.end_headers()
            self.wfile.write(content)
        elif self.path == "/api/feedback":
            data = b"{}"
            if self.feedback_path.exists():
                data = self.feedback_path.read_bytes()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)
        else:
            self.send_error(404)

    def do_POST(self) -> None:
        if self.path == "/api/feedback":
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            try:
                data = json.loads(body)
                if not isinstance(data, dict) or "reviews" not in data:
                    raise ValueError("Expected JSON object with 'reviews' key")
                self.feedback_path.write_text(json.dumps(data, indent=2) + "\n")
                resp = b'{"ok":true}'
                self.send_response(200)
            except (json.JSONDecodeError, OSError, ValueError) as e:
                resp = json.dumps({"error": str(e)}).encode()
                self.send_response(500)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(resp)))
            self.end_headers()
            self.wfile.write(resp)
        else:
            self.send_error(404)

    def log_message(self, format: str, *args: object) -> None:
        # Suppress request logging to keep the terminal clean
        pass


def main() -> None:
    parser = argparse.ArgumentParser(description="Generate and serve eval review")
    parser.add_argument("workspace", type=Path, help="Path to workspace directory")
    parser.add_argument("--port", "-p", type=int, default=3117, help="Server port (default: 3117)")
    parser.add_argument("--skill-name", "-n", type=str, default=None, help="Skill name for header")
    parser.add_argument(
        "--previous-workspace", type=Path, default=None,
        help="Path to previous iteration's workspace (shows old outputs and feedback as context)",
    )
    parser.add_argument(
        "--benchmark", type=Path, default=None,
        help="Path to benchmark.json to show in the Benchmark tab",
    )
    parser.add_argument(
        "--static", "-s", type=Path, default=None,
        help="Write standalone HTML to this path instead of starting a server",
    )
    args = parser.parse_args()

    workspace = args.workspace.resolve()
    if not workspace.is_dir():
        print(f"Error: {workspace} is not a directory", file=sys.stderr)
        sys.exit(1)

    runs = find_runs(workspace)
    if not runs:
        print(f"No runs found in {workspace}", file=sys.stderr)
        sys.exit(1)

    skill_name = args.skill_name or workspace.name.replace("-workspace", "")
    feedback_path = workspace / "feedback.json"

    previous: dict[str, dict] = {}
    if args.previous_workspace:
        previous = load_previous_iteration(args.previous_workspace.resolve())

    benchmark_path = args.benchmark.resolve() if args.benchmark else None
    benchmark = None
    if benchmark_path and benchmark_path.exists():
        try:
            benchmark = json.loads(benchmark_path.read_text())
        except (json.JSONDecodeError, OSError):
            pass

    if args.static:
        html = generate_html(runs, skill_name, previous, benchmark)
        args.static.parent.mkdir(parents=True, exist_ok=True)
        args.static.write_text(html)
        print(f"\n  Static viewer written to: {args.static}\n")
        sys.exit(0)

    # Kill any existing process on the target port
    port = args.port
    _kill_port(port)
    handler = partial(ReviewHandler, workspace, skill_name, feedback_path, previous, benchmark_path)
    try:
        server = HTTPServer(("127.0.0.1", port), handler)
    except OSError:
        # Port still in use after kill attempt — find a free one
        server = HTTPServer(("127.0.0.1", 0), handler)
        port = server.server_address[1]

    url = f"http://localhost:{port}"
    print("\n  Eval Viewer")
    print("  ─────────────────────────────────")
    print(f"  URL:       {url}")
    print(f"  Workspace: {workspace}")
    print(f"  Feedback:  {feedback_path}")
    if previous:
        print(f"  Previous:  {args.previous_workspace} ({len(previous)} runs)")
    if benchmark_path:
        print(f"  Benchmark: {benchmark_path}")
    print("\n  Press Ctrl+C to stop.\n")

    webbrowser.open(url)

    try:
        server.serve_forever()
    except KeyboardInterrupt:
        print("\nStopped.")
        server.server_close()


if __name__ == "__main__":
    main()
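The server's `POST /api/feedback` handler accepts a JSON object with a top-level `reviews` list, and `load_previous_iteration` reads `run_id` and `feedback` from each entry. A small client sketch (not part of this commit) showing how to build and submit such a payload; the run id in the usage comment is hypothetical:

```python
import json
from urllib import request


def build_feedback_payload(reviews: dict[str, str]) -> bytes:
    """Shape feedback as the server expects: {"reviews": [{"run_id", "feedback"}, ...]}."""
    body = {"reviews": [{"run_id": rid, "feedback": fb} for rid, fb in reviews.items()]}
    return json.dumps(body, indent=2).encode("utf-8")


def post_feedback(url: str, reviews: dict[str, str]) -> int:
    """POST the payload to /api/feedback and return the HTTP status on success."""
    req = request.Request(url, data=build_feedback_payload(reviews),
                          headers={"Content-Type": "application/json"}, method="POST")
    with request.urlopen(req) as resp:
        return resp.status


# Example (port 3117 is the script's default; "run-1" is a hypothetical run id):
# post_feedback("http://localhost:3117/api/feedback", {"run-1": "Output matches the brief"})
```
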
third_party/zeroclaw/.claude/skills/skill-creator/eval-viewer/viewer.html (vendored, normal file, 1325 lines added)
File diff suppressed because it is too large
third_party/zeroclaw/.claude/skills/skill-creator/references/schemas.md (vendored, normal file, 430 lines added)
@@ -0,0 +1,430 @@
# JSON Schemas

This document defines the JSON schemas used by skill-creator.

---

## evals.json

Defines the evals for a skill. Located at `evals/evals.json` within the skill directory.

```json
{
  "skill_name": "example-skill",
  "evals": [
    {
      "id": 1,
      "prompt": "User's example prompt",
      "expected_output": "Description of expected result",
      "files": ["evals/files/sample1.pdf"],
      "expectations": [
        "The output includes X",
        "The skill used script Y"
      ]
    }
  ]
}
```

**Fields:**
- `skill_name`: Name matching the skill's frontmatter
- `evals[].id`: Unique integer identifier
- `evals[].prompt`: The task to execute
- `evals[].expected_output`: Human-readable description of success
- `evals[].files`: Optional list of input file paths (relative to the skill root)
- `evals[].expectations`: List of verifiable statements
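A quick validator sketch for this schema (not part of the commit; the required-field set below reflects the field list above, treating `files` as optional):

```python
REQUIRED_EVAL_FIELDS = {"id", "prompt", "expected_output", "expectations"}


def validate_evals(doc: dict) -> list[str]:
    """Return a list of schema problems; an empty list means the document conforms."""
    problems: list[str] = []
    if not doc.get("skill_name"):
        problems.append("missing skill_name")
    evals = doc.get("evals", [])
    ids = [e.get("id") for e in evals]
    if len(ids) != len(set(ids)):
        problems.append("eval ids are not unique")
    for e in evals:
        missing = REQUIRED_EVAL_FIELDS - e.keys()
        if missing:
            problems.append(f"eval {e.get('id')}: missing {sorted(missing)}")
    return problems
```
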
---

## history.json

Tracks version progression in Improve mode. Located at the workspace root.

```json
{
  "started_at": "2026-01-15T10:30:00Z",
  "skill_name": "pdf",
  "current_best": "v2",
  "iterations": [
    {
      "version": "v0",
      "parent": null,
      "expectation_pass_rate": 0.65,
      "grading_result": "baseline",
      "is_current_best": false
    },
    {
      "version": "v1",
      "parent": "v0",
      "expectation_pass_rate": 0.75,
      "grading_result": "won",
      "is_current_best": false
    },
    {
      "version": "v2",
      "parent": "v1",
      "expectation_pass_rate": 0.85,
      "grading_result": "won",
      "is_current_best": true
    }
  ]
}
```

**Fields:**
- `started_at`: ISO timestamp of when improvement started
- `skill_name`: Name of the skill being improved
- `current_best`: Version identifier of the best performer
- `iterations[].version`: Version identifier (v0, v1, ...)
- `iterations[].parent`: Parent version this was derived from
- `iterations[].expectation_pass_rate`: Pass rate from grading
- `iterations[].grading_result`: "baseline", "won", "lost", or "tie"
- `iterations[].is_current_best`: Whether this is the current best version
---
|
||||||
|
|
||||||
|
## grading.json
|
||||||
|
|
||||||
|
Output from the grader agent. Located at `<run-dir>/grading.json`.
|
||||||
|
|
||||||
|
```json
{
  "expectations": [
    {
      "text": "The output includes the name 'John Smith'",
      "passed": true,
      "evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
    },
    {
      "text": "The spreadsheet has a SUM formula in cell B10",
      "passed": false,
      "evidence": "No spreadsheet was created. The output was a text file."
    }
  ],
  "summary": {
    "passed": 2,
    "failed": 1,
    "total": 3,
    "pass_rate": 0.67
  },
  "execution_metrics": {
    "tool_calls": {
      "Read": 5,
      "Write": 2,
      "Bash": 8
    },
    "total_tool_calls": 15,
    "total_steps": 6,
    "errors_encountered": 0,
    "output_chars": 12450,
    "transcript_chars": 3200
  },
  "timing": {
    "executor_duration_seconds": 165.0,
    "grader_duration_seconds": 26.0,
    "total_duration_seconds": 191.0
  },
  "claims": [
    {
      "claim": "The form has 12 fillable fields",
      "type": "factual",
      "verified": true,
      "evidence": "Counted 12 fields in field_info.json"
    }
  ],
  "user_notes_summary": {
    "uncertainties": ["Used 2023 data, may be stale"],
    "needs_review": [],
    "workarounds": ["Fell back to text overlay for non-fillable fields"]
  },
  "eval_feedback": {
    "suggestions": [
      {
        "assertion": "The output includes the name 'John Smith'",
        "reason": "A hallucinated document that mentions the name would also pass"
      }
    ],
    "overall": "Assertions check presence but not correctness."
  }
}
```

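The `summary` block is mechanically derivable from `expectations`. A sketch of that computation, rounding the pass rate to two decimals as in the example:

```python
def summarize(expectations: list[dict]) -> dict:
    """Build the grading.json summary block from graded expectations."""
    passed = sum(1 for e in expectations if e["passed"])
    total = len(expectations)
    return {
        "passed": passed,
        "failed": total - passed,
        "total": total,
        "pass_rate": round(passed / total, 2) if total else 0.0,
    }

graded = [
    {"text": "Output includes the name 'John Smith'", "passed": True},
    {"text": "SUM formula in cell B10", "passed": False},
    {"text": "Output is a PDF file", "passed": True},
]
print(summarize(graded))  # → {'passed': 2, 'failed': 1, 'total': 3, 'pass_rate': 0.67}
```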
**Fields:**

- `expectations[]`: Graded expectations with evidence
- `summary`: Aggregate pass/fail counts and pass rate
- `execution_metrics`: Tool usage and output size (copied from the executor's metrics.json)
- `timing`: Wall-clock timing (copied from timing.json)
- `claims`: Claims extracted from the output and verified against evidence
- `user_notes_summary`: Issues flagged by the executor
- `eval_feedback`: (optional) Improvement suggestions for the evals; present only when the grader identifies issues worth raising

---

## metrics.json

Output from the executor agent. Located at `<run-dir>/outputs/metrics.json`.

```json
{
  "tool_calls": {
    "Read": 5,
    "Write": 2,
    "Bash": 8,
    "Edit": 1,
    "Glob": 2,
    "Grep": 0
  },
  "total_tool_calls": 18,
  "total_steps": 6,
  "files_created": ["filled_form.pdf", "field_values.json"],
  "errors_encountered": 0,
  "output_chars": 12450,
  "transcript_chars": 3200
}
```

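`total_tool_calls` should equal the sum of the per-tool counts. A small consistency check (a sketch; the helper name is illustrative):

```python
def metrics_consistent(metrics: dict) -> bool:
    """True when total_tool_calls matches the sum of per-tool counts."""
    return metrics["total_tool_calls"] == sum(metrics["tool_calls"].values())

metrics = {
    "tool_calls": {"Read": 5, "Write": 2, "Bash": 8, "Edit": 1, "Glob": 2, "Grep": 0},
    "total_tool_calls": 18,
}
print(metrics_consistent(metrics))  # → True
```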
**Fields:**

- `tool_calls`: Count per tool type
- `total_tool_calls`: Sum of all tool calls
- `total_steps`: Number of major execution steps
- `files_created`: List of output files created
- `errors_encountered`: Number of errors during execution
- `output_chars`: Total character count of the output files
- `transcript_chars`: Character count of the transcript

---

## timing.json

Wall-clock timing for a run. Located at `<run-dir>/timing.json`.

**How to capture:** When a subagent task completes, the task notification includes `total_tokens` and `duration_ms`. Save these immediately; they are not persisted anywhere else and cannot be recovered after the fact.

```json
{
  "total_tokens": 84852,
  "duration_ms": 23332,
  "total_duration_seconds": 23.3,
  "executor_start": "2026-01-15T10:30:00Z",
  "executor_end": "2026-01-15T10:32:45Z",
  "executor_duration_seconds": 165.0,
  "grader_start": "2026-01-15T10:32:46Z",
  "grader_end": "2026-01-15T10:33:12Z",
  "grader_duration_seconds": 26.0
}
```

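Since the notification values vanish once the message is gone, it helps to write them out the moment they arrive. A minimal sketch, assuming only the `total_tokens` and `duration_ms` fields named above (the `save_timing` helper is illustrative):

```python
import json
import tempfile
from pathlib import Path

def save_timing(run_dir: Path, total_tokens: int, duration_ms: int) -> None:
    """Persist task-notification metrics to <run-dir>/timing.json immediately."""
    timing = {
        "total_tokens": total_tokens,
        "duration_ms": duration_ms,
        "total_duration_seconds": round(duration_ms / 1000, 1),
    }
    (run_dir / "timing.json").write_text(json.dumps(timing, indent=2))

run_dir = Path(tempfile.mkdtemp())
save_timing(run_dir, total_tokens=84852, duration_ms=23332)
print(json.loads((run_dir / "timing.json").read_text())["total_duration_seconds"])  # → 23.3
```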
---

## benchmark.json

Output from Benchmark mode. Located at `benchmarks/<timestamp>/benchmark.json`.

```json
{
  "metadata": {
    "skill_name": "pdf",
    "skill_path": "/path/to/pdf",
    "executor_model": "claude-sonnet-4-20250514",
    "analyzer_model": "most-capable-model",
    "timestamp": "2026-01-15T10:30:00Z",
    "evals_run": [1, 2, 3],
    "runs_per_configuration": 3
  },

  "runs": [
    {
      "eval_id": 1,
      "eval_name": "Ocean",
      "configuration": "with_skill",
      "run_number": 1,
      "result": {
        "pass_rate": 0.85,
        "passed": 6,
        "failed": 1,
        "total": 7,
        "time_seconds": 42.5,
        "tokens": 3800,
        "tool_calls": 18,
        "errors": 0
      },
      "expectations": [
        {"text": "...", "passed": true, "evidence": "..."}
      ],
      "notes": [
        "Used 2023 data, may be stale",
        "Fell back to text overlay for non-fillable fields"
      ]
    }
  ],

  "run_summary": {
    "with_skill": {
      "pass_rate": {"mean": 0.85, "stddev": 0.05, "min": 0.80, "max": 0.90},
      "time_seconds": {"mean": 45.0, "stddev": 12.0, "min": 32.0, "max": 58.0},
      "tokens": {"mean": 3800, "stddev": 400, "min": 3200, "max": 4100}
    },
    "without_skill": {
      "pass_rate": {"mean": 0.35, "stddev": 0.08, "min": 0.28, "max": 0.45},
      "time_seconds": {"mean": 32.0, "stddev": 8.0, "min": 24.0, "max": 42.0},
      "tokens": {"mean": 2100, "stddev": 300, "min": 1800, "max": 2500}
    },
    "delta": {
      "pass_rate": "+0.50",
      "time_seconds": "+13.0",
      "tokens": "+1700"
    }
  },

  "notes": [
    "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
    "Eval 3 shows high variance (50% ± 40%) - may be flaky or model-dependent",
    "Without-skill runs consistently fail on table extraction expectations",
    "Skill adds 13s average execution time but improves pass rate by 50%"
  ]
}
```

**Fields:**

- `metadata`: Information about the benchmark run
  - `skill_name`: Name of the skill
  - `timestamp`: When the benchmark was run
  - `evals_run`: List of eval IDs (or names) that were run
  - `runs_per_configuration`: Number of runs per configuration (e.g. 3)
- `runs[]`: Individual run results
  - `eval_id`: Numeric eval identifier
  - `eval_name`: Human-readable eval name (used as a section header in the viewer)
  - `configuration`: Must be `"with_skill"` or `"without_skill"` (the viewer uses this exact string for grouping and color coding)
  - `run_number`: Integer run number (1, 2, 3, ...)
  - `result`: Nested object with `pass_rate`, `passed`, `failed`, `total`, `time_seconds`, `tokens`, `tool_calls`, `errors`
- `run_summary`: Statistical aggregates per configuration
  - `with_skill` / `without_skill`: Each contains `pass_rate`, `time_seconds`, and `tokens` objects with `mean`, `stddev`, `min`, `max` fields
  - `delta`: Difference strings like `"+0.50"`, `"+13.0"`, `"+1700"`
- `notes`: Freeform observations from the analyzer

**Important:** The viewer reads these field names exactly. Using `config` instead of `configuration`, or putting `pass_rate` at the top level of a run instead of nesting it under `result`, will cause the viewer to show empty or zero values. Always reference this schema when generating benchmark.json manually.

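A small pre-flight check against the most common mistakes can catch these before the viewer silently renders zeros. A sketch (the `validate_run` helper is illustrative, not part of the toolchain):

```python
def validate_run(run: dict) -> list[str]:
    """Return problems that would make the viewer render empty or zero values."""
    problems = []
    if run.get("configuration") not in ("with_skill", "without_skill"):
        problems.append("configuration must be exactly 'with_skill' or 'without_skill'")
    result = run.get("result")
    if not isinstance(result, dict):
        problems.append("per-run metrics must be nested under 'result'")
    else:
        for key in ("pass_rate", "passed", "failed", "total", "time_seconds", "tokens", "errors"):
            if key not in result:
                problems.append(f"result is missing '{key}'")
    return problems

bad_run = {"config": "with_skill", "pass_rate": 0.85}  # wrong key name, un-nested metric
print(len(validate_run(bad_run)))  # → 2
```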
---

## comparison.json

Output from the blind comparator. Located at `<grading-dir>/comparison-N.json`.

```json
{
  "winner": "A",
  "reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
  "rubric": {
    "A": {
      "content": {
        "correctness": 5,
        "completeness": 5,
        "accuracy": 4
      },
      "structure": {
        "organization": 4,
        "formatting": 5,
        "usability": 4
      },
      "content_score": 4.7,
      "structure_score": 4.3,
      "overall_score": 9.0
    },
    "B": {
      "content": {
        "correctness": 3,
        "completeness": 2,
        "accuracy": 3
      },
      "structure": {
        "organization": 3,
        "formatting": 2,
        "usability": 3
      },
      "content_score": 2.7,
      "structure_score": 2.7,
      "overall_score": 5.4
    }
  },
  "output_quality": {
    "A": {
      "score": 9,
      "strengths": ["Complete solution", "Well-formatted", "All fields present"],
      "weaknesses": ["Minor style inconsistency in header"]
    },
    "B": {
      "score": 5,
      "strengths": ["Readable output", "Correct basic structure"],
      "weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
    }
  },
  "expectation_results": {
    "A": {
      "passed": 4,
      "total": 5,
      "pass_rate": 0.80,
      "details": [
        {"text": "Output includes name", "passed": true}
      ]
    },
    "B": {
      "passed": 3,
      "total": 5,
      "pass_rate": 0.60,
      "details": [
        {"text": "Output includes name", "passed": true}
      ]
    }
  }
}
```

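The derived scores in the example are consistent with a simple rule: each `content_score`/`structure_score` is the mean of its three 1-5 sub-scores rounded to one decimal, and `overall_score` is their sum. A sketch of that inferred derivation (an assumption drawn from the example values, not a documented formula):

```python
def rubric_scores(content: dict, structure: dict) -> dict:
    """Roll 1-5 sub-scores up into content/structure/overall scores."""
    content_score = round(sum(content.values()) / len(content), 1)
    structure_score = round(sum(structure.values()) / len(structure), 1)
    return {
        "content_score": content_score,
        "structure_score": structure_score,
        "overall_score": round(content_score + structure_score, 1),
    }

a = rubric_scores(
    {"correctness": 5, "completeness": 5, "accuracy": 4},
    {"organization": 4, "formatting": 5, "usability": 4},
)
print(a)  # → {'content_score': 4.7, 'structure_score': 4.3, 'overall_score': 9.0}
```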
---

## analysis.json

Output from the post-hoc analyzer. Located at `<grading-dir>/analysis.json`.

```json
{
  "comparison_summary": {
    "winner": "A",
    "winner_skill": "path/to/winner/skill",
    "loser_skill": "path/to/loser/skill",
    "comparator_reasoning": "Brief summary of why comparator chose winner"
  },
  "winner_strengths": [
    "Clear step-by-step instructions for handling multi-page documents",
    "Included validation script that caught formatting errors"
  ],
  "loser_weaknesses": [
    "Vague instruction 'process the document appropriately' led to inconsistent behavior",
    "No script for validation, agent had to improvise"
  ],
  "instruction_following": {
    "winner": {
      "score": 9,
      "issues": ["Minor: skipped optional logging step"]
    },
    "loser": {
      "score": 6,
      "issues": [
        "Did not use the skill's formatting template",
        "Invented own approach instead of following step 3"
      ]
    }
  },
  "improvement_suggestions": [
    {
      "priority": "high",
      "category": "instructions",
      "suggestion": "Replace 'process the document appropriately' with explicit steps",
      "expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
    }
  ],
  "transcript_insights": {
    "winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script",
    "loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods"
  }
}
```
0	third_party/zeroclaw/.claude/skills/skill-creator/scripts/__init__.py (vendored, new file)
401	third_party/zeroclaw/.claude/skills/skill-creator/scripts/aggregate_benchmark.py (vendored, new executable file)
@@ -0,0 +1,401 @@
#!/usr/bin/env python3
"""
Aggregate individual run results into benchmark summary statistics.

Reads grading.json files from run directories and produces:
- run_summary with mean, stddev, min, max for each metric
- delta between with_skill and without_skill configurations

Usage:
    python aggregate_benchmark.py <benchmark_dir>

Example:
    python aggregate_benchmark.py benchmarks/2026-01-15T10-30-00/

The script supports two directory layouts:

Workspace layout (from skill-creator iterations):
    <benchmark_dir>/
    └── eval-N/
        ├── with_skill/
        │   ├── run-1/grading.json
        │   └── run-2/grading.json
        └── without_skill/
            ├── run-1/grading.json
            └── run-2/grading.json

Legacy layout (with runs/ subdirectory):
    <benchmark_dir>/
    └── runs/
        └── eval-N/
            ├── with_skill/
            │   └── run-1/grading.json
            └── without_skill/
                └── run-1/grading.json
"""

import argparse
import json
import math
import sys
from datetime import datetime, timezone
from pathlib import Path


def calculate_stats(values: list[float]) -> dict:
    """Calculate mean, stddev, min, max for a list of values."""
    if not values:
        return {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0}

    n = len(values)
    mean = sum(values) / n

    if n > 1:
        # Sample standard deviation (Bessel's correction)
        variance = sum((x - mean) ** 2 for x in values) / (n - 1)
        stddev = math.sqrt(variance)
    else:
        stddev = 0.0

    return {
        "mean": round(mean, 4),
        "stddev": round(stddev, 4),
        "min": round(min(values), 4),
        "max": round(max(values), 4)
    }


def load_run_results(benchmark_dir: Path) -> dict:
    """
    Load all run results from a benchmark directory.

    Returns dict keyed by config name (e.g. "with_skill"/"without_skill",
    or "new_skill"/"old_skill"), each containing a list of run results.
    """
    # Support both layouts: eval dirs directly under benchmark_dir, or under runs/
    runs_dir = benchmark_dir / "runs"
    if runs_dir.exists():
        search_dir = runs_dir
    elif list(benchmark_dir.glob("eval-*")):
        search_dir = benchmark_dir
    else:
        print(f"No eval directories found in {benchmark_dir} or {benchmark_dir / 'runs'}")
        return {}

    results: dict[str, list] = {}

    for eval_idx, eval_dir in enumerate(sorted(search_dir.glob("eval-*"))):
        metadata_path = eval_dir / "eval_metadata.json"
        if metadata_path.exists():
            try:
                with open(metadata_path) as mf:
                    eval_id = json.load(mf).get("eval_id", eval_idx)
            except (json.JSONDecodeError, OSError):
                eval_id = eval_idx
        else:
            try:
                eval_id = int(eval_dir.name.split("-")[1])
            except ValueError:
                eval_id = eval_idx

        # Discover config directories dynamically rather than hardcoding names
        for config_dir in sorted(eval_dir.iterdir()):
            if not config_dir.is_dir():
                continue
            # Skip non-config directories (inputs, outputs, etc.)
            if not list(config_dir.glob("run-*")):
                continue
            config = config_dir.name
            if config not in results:
                results[config] = []

            for run_dir in sorted(config_dir.glob("run-*")):
                run_number = int(run_dir.name.split("-")[1])
                grading_file = run_dir / "grading.json"

                if not grading_file.exists():
                    print(f"Warning: grading.json not found in {run_dir}")
                    continue

                try:
                    with open(grading_file) as f:
                        grading = json.load(f)
                except json.JSONDecodeError as e:
                    print(f"Warning: Invalid JSON in {grading_file}: {e}")
                    continue

                # Extract summary metrics
                result = {
                    "eval_id": eval_id,
                    "run_number": run_number,
                    "pass_rate": grading.get("summary", {}).get("pass_rate", 0.0),
                    "passed": grading.get("summary", {}).get("passed", 0),
                    "failed": grading.get("summary", {}).get("failed", 0),
                    "total": grading.get("summary", {}).get("total", 0),
                }

                # Extract timing: check grading.json first, then sibling timing.json
                timing = grading.get("timing", {})
                result["time_seconds"] = timing.get("total_duration_seconds", 0.0)
                timing_file = run_dir / "timing.json"
                if result["time_seconds"] == 0.0 and timing_file.exists():
                    try:
                        with open(timing_file) as tf:
                            timing_data = json.load(tf)
                        result["time_seconds"] = timing_data.get("total_duration_seconds", 0.0)
                        result["tokens"] = timing_data.get("total_tokens", 0)
                    except json.JSONDecodeError:
                        pass

                # Extract execution metrics if available
                metrics = grading.get("execution_metrics", {})
                result["tool_calls"] = metrics.get("total_tool_calls", 0)
                if not result.get("tokens"):
                    result["tokens"] = metrics.get("output_chars", 0)
                result["errors"] = metrics.get("errors_encountered", 0)

                # Extract expectations; the viewer requires fields: text, passed, evidence
                raw_expectations = grading.get("expectations", [])
                for exp in raw_expectations:
                    if "text" not in exp or "passed" not in exp:
                        print(f"Warning: expectation in {grading_file} missing required fields (text, passed, evidence): {exp}")
                result["expectations"] = raw_expectations

                # Extract notes from user_notes_summary
                notes_summary = grading.get("user_notes_summary", {})
                notes = []
                notes.extend(notes_summary.get("uncertainties", []))
                notes.extend(notes_summary.get("needs_review", []))
                notes.extend(notes_summary.get("workarounds", []))
                result["notes"] = notes

                results[config].append(result)

    return results


def aggregate_results(results: dict) -> dict:
    """
    Aggregate run results into summary statistics.

    Returns run_summary with stats for each configuration and delta.
    """
    run_summary = {}
    configs = list(results.keys())

    for config in configs:
        runs = results.get(config, [])

        if not runs:
            run_summary[config] = {
                "pass_rate": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0},
                "time_seconds": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0},
                "tokens": {"mean": 0, "stddev": 0, "min": 0, "max": 0}
            }
            continue

        pass_rates = [r["pass_rate"] for r in runs]
        times = [r["time_seconds"] for r in runs]
        tokens = [r.get("tokens", 0) for r in runs]

        run_summary[config] = {
            "pass_rate": calculate_stats(pass_rates),
            "time_seconds": calculate_stats(times),
            "tokens": calculate_stats(tokens)
        }

    # Calculate delta between the first two configs (if two exist)
    if len(configs) >= 2:
        primary = run_summary.get(configs[0], {})
        baseline = run_summary.get(configs[1], {})
    else:
        primary = run_summary.get(configs[0], {}) if configs else {}
        baseline = {}

    delta_pass_rate = primary.get("pass_rate", {}).get("mean", 0) - baseline.get("pass_rate", {}).get("mean", 0)
    delta_time = primary.get("time_seconds", {}).get("mean", 0) - baseline.get("time_seconds", {}).get("mean", 0)
    delta_tokens = primary.get("tokens", {}).get("mean", 0) - baseline.get("tokens", {}).get("mean", 0)

    run_summary["delta"] = {
        "pass_rate": f"{delta_pass_rate:+.2f}",
        "time_seconds": f"{delta_time:+.1f}",
        "tokens": f"{delta_tokens:+.0f}"
    }

    return run_summary


def generate_benchmark(benchmark_dir: Path, skill_name: str = "", skill_path: str = "") -> dict:
    """Generate complete benchmark.json from run results."""
    results = load_run_results(benchmark_dir)
    run_summary = aggregate_results(results)

    # Build runs array for benchmark.json
    runs = []
    for config in results:
        for result in results[config]:
            runs.append({
                "eval_id": result["eval_id"],
                "configuration": config,
                "run_number": result["run_number"],
                "result": {
                    "pass_rate": result["pass_rate"],
                    "passed": result["passed"],
                    "failed": result["failed"],
                    "total": result["total"],
                    "time_seconds": result["time_seconds"],
                    "tokens": result.get("tokens", 0),
                    "tool_calls": result.get("tool_calls", 0),
                    "errors": result.get("errors", 0)
                },
                "expectations": result["expectations"],
                "notes": result["notes"]
            })

    # Determine eval IDs from results
    eval_ids = sorted(set(
        r["eval_id"]
        for config in results.values()
        for r in config
    ))

    benchmark = {
        "metadata": {
            "skill_name": skill_name or "<skill-name>",
            "skill_path": skill_path or "<path/to/skill>",
            "executor_model": "<model-name>",
            "analyzer_model": "<model-name>",
            "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
            "evals_run": eval_ids,
            "runs_per_configuration": 3
        },
        "runs": runs,
        "run_summary": run_summary,
        "notes": []  # To be filled by the analyzer
    }

    return benchmark


def generate_markdown(benchmark: dict) -> str:
    """Generate human-readable benchmark.md from benchmark data."""
    metadata = benchmark["metadata"]
    run_summary = benchmark["run_summary"]

    # Determine config names (excluding "delta")
    configs = [k for k in run_summary if k != "delta"]
    config_a = configs[0] if len(configs) >= 1 else "config_a"
    config_b = configs[1] if len(configs) >= 2 else "config_b"
    label_a = config_a.replace("_", " ").title()
    label_b = config_b.replace("_", " ").title()

    lines = [
        f"# Skill Benchmark: {metadata['skill_name']}",
        "",
        f"**Model**: {metadata['executor_model']}",
        f"**Date**: {metadata['timestamp']}",
        f"**Evals**: {', '.join(map(str, metadata['evals_run']))} ({metadata['runs_per_configuration']} runs each per configuration)",
        "",
        "## Summary",
        "",
        f"| Metric | {label_a} | {label_b} | Delta |",
        "|--------|------------|---------------|-------|",
    ]

    a_summary = run_summary.get(config_a, {})
    b_summary = run_summary.get(config_b, {})
    delta = run_summary.get("delta", {})

    # Format pass rate
    a_pr = a_summary.get("pass_rate", {})
    b_pr = b_summary.get("pass_rate", {})
    lines.append(f"| Pass Rate | {a_pr.get('mean', 0)*100:.0f}% ± {a_pr.get('stddev', 0)*100:.0f}% | {b_pr.get('mean', 0)*100:.0f}% ± {b_pr.get('stddev', 0)*100:.0f}% | {delta.get('pass_rate', '—')} |")

    # Format time
    a_time = a_summary.get("time_seconds", {})
    b_time = b_summary.get("time_seconds", {})
    lines.append(f"| Time | {a_time.get('mean', 0):.1f}s ± {a_time.get('stddev', 0):.1f}s | {b_time.get('mean', 0):.1f}s ± {b_time.get('stddev', 0):.1f}s | {delta.get('time_seconds', '—')}s |")

    # Format tokens
    a_tokens = a_summary.get("tokens", {})
    b_tokens = b_summary.get("tokens", {})
    lines.append(f"| Tokens | {a_tokens.get('mean', 0):.0f} ± {a_tokens.get('stddev', 0):.0f} | {b_tokens.get('mean', 0):.0f} ± {b_tokens.get('stddev', 0):.0f} | {delta.get('tokens', '—')} |")

    # Notes section
    if benchmark.get("notes"):
        lines.extend([
            "",
            "## Notes",
            ""
        ])
        for note in benchmark["notes"]:
            lines.append(f"- {note}")

    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(
        description="Aggregate benchmark run results into summary statistics"
    )
    parser.add_argument(
        "benchmark_dir",
        type=Path,
        help="Path to the benchmark directory"
    )
    parser.add_argument(
        "--skill-name",
        default="",
        help="Name of the skill being benchmarked"
    )
    parser.add_argument(
        "--skill-path",
        default="",
        help="Path to the skill being benchmarked"
    )
    parser.add_argument(
        "--output", "-o",
        type=Path,
        help="Output path for benchmark.json (default: <benchmark_dir>/benchmark.json)"
    )

    args = parser.parse_args()

    if not args.benchmark_dir.exists():
        print(f"Directory not found: {args.benchmark_dir}")
        sys.exit(1)

    # Generate benchmark
    benchmark = generate_benchmark(args.benchmark_dir, args.skill_name, args.skill_path)

    # Determine output paths
    output_json = args.output or (args.benchmark_dir / "benchmark.json")
    output_md = output_json.with_suffix(".md")

    # Write benchmark.json
    with open(output_json, "w") as f:
        json.dump(benchmark, f, indent=2)
    print(f"Generated: {output_json}")

    # Write benchmark.md
    markdown = generate_markdown(benchmark)
    with open(output_md, "w") as f:
        f.write(markdown)
    print(f"Generated: {output_md}")

    # Print summary
    run_summary = benchmark["run_summary"]
    configs = [k for k in run_summary if k != "delta"]
    delta = run_summary.get("delta", {})

    print("\nSummary:")
    for config in configs:
        pr = run_summary[config]["pass_rate"]["mean"]
        label = config.replace("_", " ").title()
        print(f"  {label}: {pr*100:.1f}% pass rate")
    print(f"  Delta: {delta.get('pass_rate', '—')}")


if __name__ == "__main__":
    main()
326	third_party/zeroclaw/.claude/skills/skill-creator/scripts/generate_report.py (vendored, new executable file)
@@ -0,0 +1,326 @@
#!/usr/bin/env python3
"""Generate an HTML report from run_loop.py output.

Takes the JSON output from run_loop.py and generates a visual HTML report
showing each description attempt with check/x for each test case.
Distinguishes between train and test queries.
"""

import argparse
import html
import json
import sys
from pathlib import Path


def generate_html(data: dict, auto_refresh: bool = False, skill_name: str = "") -> str:
    """Generate HTML report from loop output data. If auto_refresh is True, adds a meta refresh tag."""
    history = data.get("history", [])
    holdout = data.get("holdout", 0)
    title_prefix = html.escape(skill_name + " \u2014 ") if skill_name else ""

    # Get all unique queries from train and test sets, with should_trigger info
    train_queries: list[dict] = []
    test_queries: list[dict] = []
    if history:
        for r in history[0].get("train_results", history[0].get("results", [])):
            train_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)})
        if history[0].get("test_results"):
            for r in history[0].get("test_results", []):
                test_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)})

    refresh_tag = '    <meta http-equiv="refresh" content="5">\n' if auto_refresh else ""

    html_parts = ["""<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
""" + refresh_tag + """    <title>""" + title_prefix + """Skill Description Optimization</title>
    <link rel="preconnect" href="https://fonts.googleapis.com">
    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
    <link href="https://fonts.googleapis.com/css2?family=Poppins:wght@500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
    <style>
        body {
            font-family: 'Lora', Georgia, serif;
            max-width: 100%;
            margin: 0 auto;
            padding: 20px;
            background: #faf9f5;
            color: #141413;
        }
        h1 { font-family: 'Poppins', sans-serif; color: #141413; }
        .explainer {
            background: white;
            padding: 15px;
            border-radius: 6px;
            margin-bottom: 20px;
            border: 1px solid #e8e6dc;
            color: #b0aea5;
            font-size: 0.875rem;
            line-height: 1.6;
        }
        .summary {
            background: white;
            padding: 15px;
            border-radius: 6px;
            margin-bottom: 20px;
            border: 1px solid #e8e6dc;
        }
        .summary p { margin: 5px 0; }
        .best { color: #788c5d; font-weight: bold; }
        .table-container {
            overflow-x: auto;
            width: 100%;
        }
        table {
            border-collapse: collapse;
            background: white;
            border: 1px solid #e8e6dc;
            border-radius: 6px;
            font-size: 12px;
            min-width: 100%;
        }
        th, td {
            padding: 8px;
            text-align: left;
            border: 1px solid #e8e6dc;
            white-space: normal;
            word-wrap: break-word;
        }
        th {
            font-family: 'Poppins', sans-serif;
            background: #141413;
            color: #faf9f5;
            font-weight: 500;
        }
        th.test-col {
            background: #6a9bcc;
        }
        th.query-col { min-width: 200px; }
        td.description {
            font-family: monospace;
            font-size: 11px;
            word-wrap: break-word;
            max-width: 400px;
        }
        td.result {
            text-align: center;
            font-size: 16px;
            min-width: 40px;
        }
        td.test-result {
            background: #f0f6fc;
        }
        .pass { color: #788c5d; }
        .fail { color: #c44; }
        .rate {
            font-size: 9px;
            color: #b0aea5;
            display: block;
        }
        tr:hover { background: #faf9f5; }
        .score {
            display: inline-block;
            padding: 2px 6px;
            border-radius: 4px;
            font-weight: bold;
            font-size: 11px;
        }
        .score-good { background: #eef2e8; color: #788c5d; }
        .score-ok { background: #fef3c7; color: #d97706; }
        .score-bad { background: #fceaea; color: #c44; }
        .train-label { color: #b0aea5; font-size: 10px; }
        .test-label { color: #6a9bcc; font-size: 10px; font-weight: bold; }
        .best-row { background: #f5f8f2; }
        th.positive-col { border-bottom: 3px solid #788c5d; }
        th.negative-col { border-bottom: 3px solid #c44; }
        th.test-col.positive-col { border-bottom: 3px solid #788c5d; }
        th.test-col.negative-col { border-bottom: 3px solid #c44; }
        .legend { font-family: 'Poppins', sans-serif; display: flex; gap: 20px; margin-bottom: 10px; font-size: 13px; align-items: center; }
        .legend-item { display: flex; align-items: center; gap: 6px; }
        .legend-swatch { width: 16px; height: 16px; border-radius: 3px; display: inline-block; }
        .swatch-positive { background: #141413; border-bottom: 3px solid #788c5d; }
        .swatch-negative { background: #141413; border-bottom: 3px solid #c44; }
|
||||||
|
.swatch-test { background: #6a9bcc; }
|
||||||
|
.swatch-train { background: #141413; }
|
||||||
|
</style>
|
||||||
|
</head>
|
||||||
|
<body>
|
||||||
|
<h1>""" + title_prefix + """Skill Description Optimization</h1>
|
||||||
|
<div class="explainer">
|
||||||
|
<strong>Optimizing your skill's description.</strong> This page updates automatically as Claude tests different versions of your skill's description. Each row is an iteration — a new description attempt. The columns show test queries: green checkmarks mean the skill triggered correctly (or correctly didn't trigger), red crosses mean it got it wrong. The "Train" score shows performance on queries used to improve the description; the "Test" score shows performance on held-out queries the optimizer hasn't seen. When it's done, Claude will apply the best-performing description to your skill.
|
||||||
|
</div>
|
||||||
|
"""]
|
||||||
|
|
||||||
|
    # Summary section
    best_test_score = data.get('best_test_score')
    best_train_score = data.get('best_train_score')
    # "is not None" rather than truthiness: a best test score of 0 is still a test score.
    html_parts.append(f"""
<div class="summary">
<p><strong>Original:</strong> {html.escape(data.get('original_description', 'N/A'))}</p>
<p class="best"><strong>Best:</strong> {html.escape(data.get('best_description', 'N/A'))}</p>
<p><strong>Best Score:</strong> {data.get('best_score', 'N/A')} {'(test)' if best_test_score is not None else '(train)'}</p>
<p><strong>Iterations:</strong> {data.get('iterations_run', 0)} | <strong>Train:</strong> {data.get('train_size', '?')} | <strong>Test:</strong> {data.get('test_size', '?')}</p>
</div>
""")

    # Legend
    html_parts.append("""
<div class="legend">
<span style="font-weight:600">Query columns:</span>
<span class="legend-item"><span class="legend-swatch swatch-positive"></span> Should trigger</span>
<span class="legend-item"><span class="legend-swatch swatch-negative"></span> Should NOT trigger</span>
<span class="legend-item"><span class="legend-swatch swatch-train"></span> Train</span>
<span class="legend-item"><span class="legend-swatch swatch-test"></span> Test</span>
</div>
""")

    # Table header
    html_parts.append("""
<div class="table-container">
<table>
<thead>
<tr>
<th>Iter</th>
<th>Train</th>
<th>Test</th>
<th class="query-col">Description</th>
""")

    # Add column headers for train queries
    for qinfo in train_queries:
        polarity = "positive-col" if qinfo["should_trigger"] else "negative-col"
        html_parts.append(f' <th class="{polarity}">{html.escape(qinfo["query"])}</th>\n')

    # Add column headers for test queries (different color)
    for qinfo in test_queries:
        polarity = "positive-col" if qinfo["should_trigger"] else "negative-col"
        html_parts.append(f' <th class="test-col {polarity}">{html.escape(qinfo["query"])}</th>\n')

    html_parts.append(""" </tr>
</thead>
<tbody>
""")

    # Find best iteration for highlighting
    if test_queries:
        best_iter = max(history, key=lambda h: h.get("test_passed") or 0).get("iteration")
    else:
        best_iter = max(history, key=lambda h: h.get("train_passed", h.get("passed", 0))).get("iteration")

    # Add rows for each iteration
    for h in history:
        iteration = h.get("iteration", "?")
        train_passed = h.get("train_passed", h.get("passed", 0))
        train_total = h.get("train_total", h.get("total", 0))
        test_passed = h.get("test_passed")
        test_total = h.get("test_total")
        description = h.get("description", "")
        train_results = h.get("train_results", h.get("results", []))
        test_results = h.get("test_results", [])

        # Create lookups for results by query
        train_by_query = {r["query"]: r for r in train_results}
        test_by_query = {r["query"]: r for r in test_results} if test_results else {}

        # Compute aggregate correct/total runs across all retries
        def aggregate_runs(results: list[dict]) -> tuple[int, int]:
            correct = 0
            total = 0
            for r in results:
                runs = r.get("runs", 0)
                triggers = r.get("triggers", 0)
                total += runs
                if r.get("should_trigger", True):
                    correct += triggers
                else:
                    correct += runs - triggers
            return correct, total

        train_correct, train_runs = aggregate_runs(train_results)
        test_correct, test_runs = aggregate_runs(test_results)

        # Determine score classes
        def score_class(correct: int, total: int) -> str:
            if total > 0:
                ratio = correct / total
                if ratio >= 0.8:
                    return "score-good"
                elif ratio >= 0.5:
                    return "score-ok"
            return "score-bad"

        train_class = score_class(train_correct, train_runs)
        test_class = score_class(test_correct, test_runs)

        row_class = "best-row" if iteration == best_iter else ""

        html_parts.append(f""" <tr class="{row_class}">
<td>{iteration}</td>
<td><span class="score {train_class}">{train_correct}/{train_runs}</span></td>
<td><span class="score {test_class}">{test_correct}/{test_runs}</span></td>
<td class="description">{html.escape(description)}</td>
""")

        # Add result for each train query
        for qinfo in train_queries:
            r = train_by_query.get(qinfo["query"], {})
            did_pass = r.get("pass", False)
            triggers = r.get("triggers", 0)
            runs = r.get("runs", 0)

            icon = "✓" if did_pass else "✗"
            css_class = "pass" if did_pass else "fail"

            html_parts.append(f' <td class="result {css_class}">{icon}<span class="rate">{triggers}/{runs}</span></td>\n')

        # Add result for each test query (with different background)
        for qinfo in test_queries:
            r = test_by_query.get(qinfo["query"], {})
            did_pass = r.get("pass", False)
            triggers = r.get("triggers", 0)
            runs = r.get("runs", 0)

            icon = "✓" if did_pass else "✗"
            css_class = "pass" if did_pass else "fail"

            html_parts.append(f' <td class="result test-result {css_class}">{icon}<span class="rate">{triggers}/{runs}</span></td>\n')

        html_parts.append(" </tr>\n")

    html_parts.append(""" </tbody>
</table>
</div>
""")

    html_parts.append("""
</body>
</html>
""")

    return "".join(html_parts)


def main():
    parser = argparse.ArgumentParser(description="Generate HTML report from run_loop output")
    parser.add_argument("input", help="Path to JSON output from run_loop.py (or - for stdin)")
    parser.add_argument("-o", "--output", default=None, help="Output HTML file (default: stdout)")
    parser.add_argument("--skill-name", default="", help="Skill name to include in the report title")
    args = parser.parse_args()

    if args.input == "-":
        data = json.load(sys.stdin)
    else:
        data = json.loads(Path(args.input).read_text())

    html_output = generate_html(data, skill_name=args.skill_name)

    if args.output:
        Path(args.output).write_text(html_output)
        print(f"Report written to {args.output}", file=sys.stderr)
    else:
        print(html_output)


if __name__ == "__main__":
    main()
247  third_party/zeroclaw/.claude/skills/skill-creator/scripts/improve_description.py  (vendored, executable file)
@@ -0,0 +1,247 @@
#!/usr/bin/env python3
"""Improve a skill description based on eval results.

Takes eval results (from run_eval.py) and generates an improved description
by calling `claude -p` as a subprocess (same auth pattern as run_eval.py —
uses the session's Claude Code auth, no separate ANTHROPIC_API_KEY needed).
"""

import argparse
import json
import os
import re
import subprocess
import sys
from pathlib import Path

from scripts.utils import parse_skill_md


def _call_claude(prompt: str, model: str | None, timeout: int = 300) -> str:
    """Run `claude -p` with the prompt on stdin and return the text response.

    Prompt goes over stdin (not argv) because it embeds the full SKILL.md
    body and can easily exceed comfortable argv length.
    """
    cmd = ["claude", "-p", "--output-format", "text"]
    if model:
        cmd.extend(["--model", model])

    # Remove CLAUDECODE env var to allow nesting claude -p inside a
    # Claude Code session. The guard is for interactive terminal conflicts;
    # programmatic subprocess usage is safe. Same pattern as run_eval.py.
    env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}

    result = subprocess.run(
        cmd,
        input=prompt,
        capture_output=True,
        text=True,
        env=env,
        timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(
            f"claude -p exited {result.returncode}\nstderr: {result.stderr}"
        )
    return result.stdout


def improve_description(
    skill_name: str,
    skill_content: str,
    current_description: str,
    eval_results: dict,
    history: list[dict],
    model: str,
    test_results: dict | None = None,
    log_dir: Path | None = None,
    iteration: int | None = None,
) -> str:
    """Call Claude to improve the description based on eval results."""
    failed_triggers = [
        r for r in eval_results["results"]
        if r["should_trigger"] and not r["pass"]
    ]
    false_triggers = [
        r for r in eval_results["results"]
        if not r["should_trigger"] and not r["pass"]
    ]

    # Build scores summary
    train_score = f"{eval_results['summary']['passed']}/{eval_results['summary']['total']}"
    if test_results:
        test_score = f"{test_results['summary']['passed']}/{test_results['summary']['total']}"
        scores_summary = f"Train: {train_score}, Test: {test_score}"
    else:
        scores_summary = f"Train: {train_score}"

    prompt = f"""You are optimizing a skill description for a Claude Code skill called "{skill_name}". A "skill" is sort of like a prompt, but with progressive disclosure -- there's a title and description that Claude sees when deciding whether to use the skill, and then if it does use the skill, it reads the .md file which has lots more details and potentially links to other resources in the skill folder like helper files and scripts and additional documentation or examples.

The description appears in Claude's "available_skills" list. When a user sends a query, Claude decides whether to invoke the skill based solely on the title and on this description. Your goal is to write a description that triggers for relevant queries, and doesn't trigger for irrelevant ones.

Here's the current description:
<current_description>
"{current_description}"
</current_description>

Current scores ({scores_summary}):
<scores_summary>
"""
    if failed_triggers:
        prompt += "FAILED TO TRIGGER (should have triggered but didn't):\n"
        for r in failed_triggers:
            prompt += f' - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
        prompt += "\n"

    if false_triggers:
        prompt += "FALSE TRIGGERS (triggered but shouldn't have):\n"
        for r in false_triggers:
            prompt += f' - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
        prompt += "\n"

    if history:
        prompt += "PREVIOUS ATTEMPTS (do NOT repeat these — try something structurally different):\n\n"
        for h in history:
            train_s = f"{h.get('train_passed', h.get('passed', 0))}/{h.get('train_total', h.get('total', 0))}"
            test_s = f"{h.get('test_passed', '?')}/{h.get('test_total', '?')}" if h.get('test_passed') is not None else None
            score_str = f"train={train_s}" + (f", test={test_s}" if test_s else "")
            prompt += f'<attempt {score_str}>\n'
            prompt += f'Description: "{h["description"]}"\n'
            if "results" in h:
                prompt += "Train results:\n"
                for r in h["results"]:
                    status = "PASS" if r["pass"] else "FAIL"
                    prompt += f' [{status}] "{r["query"][:80]}" (triggered {r["triggers"]}/{r["runs"]})\n'
            if h.get("note"):
                prompt += f'Note: {h["note"]}\n'
            prompt += "</attempt>\n\n"

    prompt += f"""</scores_summary>

Skill content (for context on what the skill does):
<skill_content>
{skill_content}
</skill_content>

Based on the failures, write a new and improved description that is more likely to trigger correctly. When I say "based on the failures", it's a bit of a tricky line to walk because we don't want to overfit to the specific cases you're seeing. So what I DON'T want you to do is produce an ever-expanding list of specific queries that this skill should or shouldn't trigger for. Instead, try to generalize from the failures to broader categories of user intent and situations where this skill would be useful or not useful. The reason for this is twofold:

1. Avoid overfitting
2. The list might get loooong and it's injected into ALL queries and there might be a lot of skills, so we don't want to blow too much space on any given description.

Concretely, your description should not be more than about 100-200 words, even if that comes at the cost of accuracy. There is a hard limit of 1024 characters — descriptions over that will be truncated, so stay comfortably under it.

Here are some tips that we've found to work well in writing these descriptions:
- The skill should be phrased in the imperative -- "Use this skill for" rather than "this skill does"
- The skill description should focus on the user's intent, what they are trying to achieve, vs. the implementation details of how the skill works.
- The description competes with other skills for Claude's attention — make it distinctive and immediately recognizable.
- If you're getting lots of failures after repeated attempts, change things up. Try different sentence structures or wordings.

I'd encourage you to be creative and mix up the style in different iterations since you'll have multiple opportunities to try different approaches and we'll just grab the highest-scoring one at the end.

Please respond with only the new description text in <new_description> tags, nothing else."""

    text = _call_claude(prompt, model)

    match = re.search(r"<new_description>(.*?)</new_description>", text, re.DOTALL)
    description = match.group(1).strip().strip('"') if match else text.strip().strip('"')

    transcript: dict = {
        "iteration": iteration,
        "prompt": prompt,
        "response": text,
        "parsed_description": description,
        "char_count": len(description),
        "over_limit": len(description) > 1024,
    }

    # Safety net: the prompt already states the 1024-char hard limit, but if
    # the model blew past it anyway, make one fresh single-turn call that
    # quotes the too-long version and asks for a shorter rewrite. (The old
    # SDK path did this as a true multi-turn; `claude -p` is one-shot, so we
    # inline the prior output into the new prompt instead.)
    if len(description) > 1024:
        shorten_prompt = (
            f"{prompt}\n\n"
            f"---\n\n"
            f"A previous attempt produced this description, which at "
            f"{len(description)} characters is over the 1024-character hard limit:\n\n"
            f'"{description}"\n\n'
            f"Rewrite it to be under 1024 characters while keeping the most "
            f"important trigger words and intent coverage. Respond with only "
            f"the new description in <new_description> tags."
        )
        shorten_text = _call_claude(shorten_prompt, model)
        match = re.search(r"<new_description>(.*?)</new_description>", shorten_text, re.DOTALL)
        shortened = match.group(1).strip().strip('"') if match else shorten_text.strip().strip('"')

        transcript["rewrite_prompt"] = shorten_prompt
        transcript["rewrite_response"] = shorten_text
        transcript["rewrite_description"] = shortened
        transcript["rewrite_char_count"] = len(shortened)
        description = shortened

    transcript["final_description"] = description

    if log_dir:
        log_dir.mkdir(parents=True, exist_ok=True)
        log_file = log_dir / f"improve_iter_{iteration or 'unknown'}.json"
        log_file.write_text(json.dumps(transcript, indent=2))

    return description


def main():
    parser = argparse.ArgumentParser(description="Improve a skill description based on eval results")
    parser.add_argument("--eval-results", required=True, help="Path to eval results JSON (from run_eval.py)")
    parser.add_argument("--skill-path", required=True, help="Path to skill directory")
    parser.add_argument("--history", default=None, help="Path to history JSON (previous attempts)")
    parser.add_argument("--model", required=True, help="Model for improvement")
    parser.add_argument("--verbose", action="store_true", help="Print thinking to stderr")
    args = parser.parse_args()

    skill_path = Path(args.skill_path)
    if not (skill_path / "SKILL.md").exists():
        print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
        sys.exit(1)

    eval_results = json.loads(Path(args.eval_results).read_text())
    history = []
    if args.history:
        history = json.loads(Path(args.history).read_text())

    name, _, content = parse_skill_md(skill_path)
    current_description = eval_results["description"]

    if args.verbose:
        print(f"Current: {current_description}", file=sys.stderr)
        print(f"Score: {eval_results['summary']['passed']}/{eval_results['summary']['total']}", file=sys.stderr)

    new_description = improve_description(
        skill_name=name,
        skill_content=content,
        current_description=current_description,
        eval_results=eval_results,
        history=history,
        model=args.model,
    )

    if args.verbose:
        print(f"Improved: {new_description}", file=sys.stderr)

    # Output as JSON with both the new description and updated history
    output = {
        "description": new_description,
        "history": history + [{
            "description": current_description,
            "passed": eval_results["summary"]["passed"],
            "failed": eval_results["summary"]["failed"],
            "total": eval_results["summary"]["total"],
            "results": eval_results["results"],
        }],
    }
    print(json.dumps(output, indent=2))


if __name__ == "__main__":
    main()
136  third_party/zeroclaw/.claude/skills/skill-creator/scripts/package_skill.py  (vendored, executable file)
@@ -0,0 +1,136 @@
#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable .skill file of a skill folder

Usage:
    python scripts/package_skill.py <path/to/skill-folder> [output-directory]

Example:
    python scripts/package_skill.py skills/public/my-skill
    python scripts/package_skill.py skills/public/my-skill ./dist
"""

import fnmatch
import sys
import zipfile
from pathlib import Path
from scripts.quick_validate import validate_skill

# Patterns to exclude when packaging skills.
EXCLUDE_DIRS = {"__pycache__", "node_modules"}
EXCLUDE_GLOBS = {"*.pyc"}
EXCLUDE_FILES = {".DS_Store"}
# Directories excluded only at the skill root (not when nested deeper).
ROOT_EXCLUDE_DIRS = {"evals"}


def should_exclude(rel_path: Path) -> bool:
    """Check if a path should be excluded from packaging."""
    parts = rel_path.parts
    if any(part in EXCLUDE_DIRS for part in parts):
        return True
    # rel_path is relative to skill_path.parent, so parts[0] is the skill
    # folder name and parts[1] (if present) is the first subdir.
    if len(parts) > 1 and parts[1] in ROOT_EXCLUDE_DIRS:
        return True
    name = rel_path.name
    if name in EXCLUDE_FILES:
        return True
    return any(fnmatch.fnmatch(name, pat) for pat in EXCLUDE_GLOBS)


def package_skill(skill_path, output_dir=None):
    """
    Package a skill folder into a .skill file.

    Args:
        skill_path: Path to the skill folder
        output_dir: Optional output directory for the .skill file (defaults to current directory)

    Returns:
        Path to the created .skill file, or None if error
    """
    skill_path = Path(skill_path).resolve()

    # Validate skill folder exists
    if not skill_path.exists():
        print(f"❌ Error: Skill folder not found: {skill_path}")
        return None

    if not skill_path.is_dir():
        print(f"❌ Error: Path is not a directory: {skill_path}")
        return None

    # Validate SKILL.md exists
    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        print(f"❌ Error: SKILL.md not found in {skill_path}")
        return None

    # Run validation before packaging
    print("🔍 Validating skill...")
    valid, message = validate_skill(skill_path)
    if not valid:
        print(f"❌ Validation failed: {message}")
        print("   Please fix the validation errors before packaging.")
        return None
    print(f"✅ {message}\n")

    # Determine output location
    skill_name = skill_path.name
    if output_dir:
        output_path = Path(output_dir).resolve()
        output_path.mkdir(parents=True, exist_ok=True)
    else:
        output_path = Path.cwd()

    skill_filename = output_path / f"{skill_name}.skill"

    # Create the .skill file (zip format)
    try:
        with zipfile.ZipFile(skill_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
            # Walk through the skill directory, excluding build artifacts
            for file_path in skill_path.rglob('*'):
                if not file_path.is_file():
                    continue
                arcname = file_path.relative_to(skill_path.parent)
                if should_exclude(arcname):
                    print(f"   Skipped: {arcname}")
                    continue
                zipf.write(file_path, arcname)
                print(f"   Added: {arcname}")

        print(f"\n✅ Successfully packaged skill to: {skill_filename}")
        return skill_filename

    except Exception as e:
        print(f"❌ Error creating .skill file: {e}")
        return None


def main():
    if len(sys.argv) < 2:
        print("Usage: python scripts/package_skill.py <path/to/skill-folder> [output-directory]")
        print("\nExample:")
        print("  python scripts/package_skill.py skills/public/my-skill")
        print("  python scripts/package_skill.py skills/public/my-skill ./dist")
        sys.exit(1)

    skill_path = sys.argv[1]
    output_dir = sys.argv[2] if len(sys.argv) > 2 else None

    print(f"📦 Packaging skill: {skill_path}")
    if output_dir:
        print(f"   Output directory: {output_dir}")
    print()

    result = package_skill(skill_path, output_dir)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()
103  third_party/zeroclaw/.claude/skills/skill-creator/scripts/quick_validate.py  (vendored, executable file)
@@ -0,0 +1,103 @@
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Quick validation script for skills - minimal version
|
||||||
|
"""
|
||||||
|
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import re
|
||||||
|
import yaml
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
def validate_skill(skill_path):
|
||||||
|
"""Basic validation of a skill"""
|
||||||
|
skill_path = Path(skill_path)
|
||||||
|
|
||||||
|
# Check SKILL.md exists
|
||||||
|
skill_md = skill_path / 'SKILL.md'
|
||||||
|
if not skill_md.exists():
|
||||||
|
return False, "SKILL.md not found"
|
||||||
|
|
||||||
|
# Read and validate frontmatter
|
||||||
|
content = skill_md.read_text()
|
||||||
|
if not content.startswith('---'):
|
||||||
|
return False, "No YAML frontmatter found"
|
||||||
|
|
||||||
|
# Extract frontmatter
|
||||||
|
match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
|
||||||
|
if not match:
|
||||||
|
return False, "Invalid frontmatter format"
|
||||||
|
|
||||||
|
frontmatter_text = match.group(1)
|
||||||
|
|
||||||
|
# Parse YAML frontmatter
|
||||||
|
try:
|
||||||
|
frontmatter = yaml.safe_load(frontmatter_text)
|
||||||
|
if not isinstance(frontmatter, dict):
|
||||||
|
return False, "Frontmatter must be a YAML dictionary"
|
||||||
|
except yaml.YAMLError as e:
|
||||||
|
        return False, f"Invalid YAML in frontmatter: {e}"

    # Define allowed properties
    ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata', 'compatibility'}

    # Check for unexpected properties (excluding nested keys under metadata)
    unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
    if unexpected_keys:
        return False, (
            f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
            f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
        )

    # Check required fields
    if 'name' not in frontmatter:
        return False, "Missing 'name' in frontmatter"
    if 'description' not in frontmatter:
        return False, "Missing 'description' in frontmatter"

    # Extract name for validation
    name = frontmatter.get('name', '')
    if not isinstance(name, str):
        return False, f"Name must be a string, got {type(name).__name__}"
    name = name.strip()
    if name:
        # Check naming convention (kebab-case: lowercase with hyphens)
        if not re.match(r'^[a-z0-9-]+$', name):
            return False, f"Name '{name}' should be kebab-case (lowercase letters, digits, and hyphens only)"
        if name.startswith('-') or name.endswith('-') or '--' in name:
            return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
        # Check name length (max 64 characters per spec)
        if len(name) > 64:
            return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."

    # Extract and validate description
    description = frontmatter.get('description', '')
    if not isinstance(description, str):
        return False, f"Description must be a string, got {type(description).__name__}"
    description = description.strip()
    if description:
        # Check for angle brackets
        if '<' in description or '>' in description:
            return False, "Description cannot contain angle brackets (< or >)"
        # Check description length (max 1024 characters per spec)
        if len(description) > 1024:
            return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."

    # Validate compatibility field if present (optional)
    compatibility = frontmatter.get('compatibility', '')
    if compatibility:
        if not isinstance(compatibility, str):
            return False, f"Compatibility must be a string, got {type(compatibility).__name__}"
        if len(compatibility) > 500:
            return False, f"Compatibility is too long ({len(compatibility)} characters). Maximum is 500 characters."

    return True, "Skill is valid!"


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python quick_validate.py <skill_directory>")
        sys.exit(1)

    valid, message = validate_skill(sys.argv[1])
    print(message)
    sys.exit(0 if valid else 1)
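The name rules enforced above can be exercised in isolation. This is a minimal sketch; `check_name` is a hypothetical helper, but the regex and limits are the ones from `quick_validate.py`:

```python
import re

def check_name(name: str) -> bool:
    # Mirrors quick_validate.py: kebab-case, no edge or double hyphens, max 64 chars
    if not re.match(r'^[a-z0-9-]+$', name):
        return False
    if name.startswith('-') or name.endswith('-') or '--' in name:
        return False
    return len(name) <= 64

print(check_name("pdf-tools"))   # valid kebab-case
print(check_name("PDF_Tools"))   # uppercase/underscore rejected by the regex
print(check_name("a--b"))        # consecutive hyphens rejected
```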
310 third_party/zeroclaw/.claude/skills/skill-creator/scripts/run_eval.py vendored Executable file
@@ -0,0 +1,310 @@
#!/usr/bin/env python3
"""Run trigger evaluation for a skill description.

Tests whether a skill's description causes Claude to trigger (read the skill)
for a set of queries. Outputs results as JSON.
"""

import argparse
import json
import os
import select
import subprocess
import sys
import time
import uuid
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path

from scripts.utils import parse_skill_md


def find_project_root() -> Path:
    """Find the project root by walking up from cwd looking for .claude/.

    Mimics how Claude Code discovers its project root, so the command file
    we create ends up where claude -p will look for it.
    """
    current = Path.cwd()
    for parent in [current, *current.parents]:
        if (parent / ".claude").is_dir():
            return parent
    return current


def run_single_query(
    query: str,
    skill_name: str,
    skill_description: str,
    timeout: int,
    project_root: str,
    model: str | None = None,
) -> bool:
    """Run a single query and return whether the skill was triggered.

    Creates a command file in .claude/commands/ so it appears in Claude's
    available_skills list, then runs `claude -p` with the raw query.
    Uses --include-partial-messages to detect triggering early from
    stream events (content_block_start) rather than waiting for the
    full assistant message, which only arrives after tool execution.
    """
    unique_id = uuid.uuid4().hex[:8]
    clean_name = f"{skill_name}-skill-{unique_id}"
    project_commands_dir = Path(project_root) / ".claude" / "commands"
    command_file = project_commands_dir / f"{clean_name}.md"

    try:
        project_commands_dir.mkdir(parents=True, exist_ok=True)
        # Use YAML block scalar to avoid breaking on quotes in description
        indented_desc = "\n  ".join(skill_description.split("\n"))
        command_content = (
            f"---\n"
            f"description: |\n"
            f"  {indented_desc}\n"
            f"---\n\n"
            f"# {skill_name}\n\n"
            f"This skill handles: {skill_description}\n"
        )
        command_file.write_text(command_content)

        cmd = [
            "claude",
            "-p", query,
            "--output-format", "stream-json",
            "--verbose",
            "--include-partial-messages",
        ]
        if model:
            cmd.extend(["--model", model])

        # Remove CLAUDECODE env var to allow nesting claude -p inside a
        # Claude Code session. The guard is for interactive terminal conflicts;
        # programmatic subprocess usage is safe.
        env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}

        process = subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.DEVNULL,
            cwd=project_root,
            env=env,
        )

        triggered = False
        start_time = time.time()
        buffer = ""
        # Track state for stream event detection
        pending_tool_name = None
        accumulated_json = ""

        try:
            while time.time() - start_time < timeout:
                if process.poll() is not None:
                    remaining = process.stdout.read()
                    if remaining:
                        buffer += remaining.decode("utf-8", errors="replace")
                    break

                ready, _, _ = select.select([process.stdout], [], [], 1.0)
                if not ready:
                    continue

                chunk = os.read(process.stdout.fileno(), 8192)
                if not chunk:
                    break
                buffer += chunk.decode("utf-8", errors="replace")

                while "\n" in buffer:
                    line, buffer = buffer.split("\n", 1)
                    line = line.strip()
                    if not line:
                        continue

                    try:
                        event = json.loads(line)
                    except json.JSONDecodeError:
                        continue

                    # Early detection via stream events
                    if event.get("type") == "stream_event":
                        se = event.get("event", {})
                        se_type = se.get("type", "")

                        if se_type == "content_block_start":
                            cb = se.get("content_block", {})
                            if cb.get("type") == "tool_use":
                                tool_name = cb.get("name", "")
                                if tool_name in ("Skill", "Read"):
                                    pending_tool_name = tool_name
                                    accumulated_json = ""
                                else:
                                    return False

                        elif se_type == "content_block_delta" and pending_tool_name:
                            delta = se.get("delta", {})
                            if delta.get("type") == "input_json_delta":
                                accumulated_json += delta.get("partial_json", "")
                                if clean_name in accumulated_json:
                                    return True

                        elif se_type in ("content_block_stop", "message_stop"):
                            if pending_tool_name:
                                return clean_name in accumulated_json
                            if se_type == "message_stop":
                                return False

                    # Fallback: full assistant message
                    elif event.get("type") == "assistant":
                        message = event.get("message", {})
                        for content_item in message.get("content", []):
                            if content_item.get("type") != "tool_use":
                                continue
                            tool_name = content_item.get("name", "")
                            tool_input = content_item.get("input", {})
                            if tool_name == "Skill" and clean_name in tool_input.get("skill", ""):
                                triggered = True
                            elif tool_name == "Read" and clean_name in tool_input.get("file_path", ""):
                                triggered = True
                        return triggered

                    elif event.get("type") == "result":
                        return triggered
        finally:
            # Clean up process on any exit path (return, exception, timeout)
            if process.poll() is None:
                process.kill()
                process.wait()

        return triggered
    finally:
        if command_file.exists():
            command_file.unlink()


def run_eval(
    eval_set: list[dict],
    skill_name: str,
    description: str,
    num_workers: int,
    timeout: int,
    project_root: Path,
    runs_per_query: int = 1,
    trigger_threshold: float = 0.5,
    model: str | None = None,
) -> dict:
    """Run the full eval set and return results."""
    results = []

    with ProcessPoolExecutor(max_workers=num_workers) as executor:
        future_to_info = {}
        for item in eval_set:
            for run_idx in range(runs_per_query):
                future = executor.submit(
                    run_single_query,
                    item["query"],
                    skill_name,
                    description,
                    timeout,
                    str(project_root),
                    model,
                )
                future_to_info[future] = (item, run_idx)

        query_triggers: dict[str, list[bool]] = {}
        query_items: dict[str, dict] = {}
        for future in as_completed(future_to_info):
            item, _ = future_to_info[future]
            query = item["query"]
            query_items[query] = item
            if query not in query_triggers:
                query_triggers[query] = []
            try:
                query_triggers[query].append(future.result())
            except Exception as e:
                print(f"Warning: query failed: {e}", file=sys.stderr)
                query_triggers[query].append(False)

    for query, triggers in query_triggers.items():
        item = query_items[query]
        trigger_rate = sum(triggers) / len(triggers)
        should_trigger = item["should_trigger"]
        if should_trigger:
            did_pass = trigger_rate >= trigger_threshold
        else:
            did_pass = trigger_rate < trigger_threshold
        results.append({
            "query": query,
            "should_trigger": should_trigger,
            "trigger_rate": trigger_rate,
            "triggers": sum(triggers),
            "runs": len(triggers),
            "pass": did_pass,
        })

    passed = sum(1 for r in results if r["pass"])
    total = len(results)

    return {
        "skill_name": skill_name,
        "description": description,
        "results": results,
        "summary": {
            "total": total,
            "passed": passed,
            "failed": total - passed,
        },
    }


def main():
    parser = argparse.ArgumentParser(description="Run trigger evaluation for a skill description")
    parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file")
    parser.add_argument("--skill-path", required=True, help="Path to skill directory")
    parser.add_argument("--description", default=None, help="Override description to test")
    parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers")
    parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds")
    parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query")
    parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold")
    parser.add_argument("--model", default=None, help="Model to use for claude -p (default: user's configured model)")
    parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
    args = parser.parse_args()

    eval_set = json.loads(Path(args.eval_set).read_text())
    skill_path = Path(args.skill_path)

    if not (skill_path / "SKILL.md").exists():
        print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
        sys.exit(1)

    name, original_description, content = parse_skill_md(skill_path)
    description = args.description or original_description
    project_root = find_project_root()

    if args.verbose:
        print(f"Evaluating: {description}", file=sys.stderr)

    output = run_eval(
        eval_set=eval_set,
        skill_name=name,
        description=description,
        num_workers=args.num_workers,
        timeout=args.timeout,
        project_root=project_root,
        runs_per_query=args.runs_per_query,
        trigger_threshold=args.trigger_threshold,
        model=args.model,
    )

    if args.verbose:
        summary = output["summary"]
        print(f"Results: {summary['passed']}/{summary['total']} passed", file=sys.stderr)
        for r in output["results"]:
            status = "PASS" if r["pass"] else "FAIL"
            rate_str = f"{r['triggers']}/{r['runs']}"
            print(f" [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:70]}", file=sys.stderr)

    print(json.dumps(output, indent=2))


if __name__ == "__main__":
    main()
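The pass/fail rule in `run_eval` above thresholds the observed trigger rate and inverts the comparison for queries that should not trigger. A standalone sketch of that rule (`did_pass` here is a hypothetical helper mirroring the script's logic):

```python
def did_pass(triggers: list[bool], should_trigger: bool, threshold: float = 0.5) -> bool:
    # Same rule as run_eval: positive queries must reach the threshold,
    # negative queries must stay strictly below it
    rate = sum(triggers) / len(triggers)
    return rate >= threshold if should_trigger else rate < threshold

print(did_pass([True, True, False], should_trigger=True))    # 2/3 >= 0.5 -> True
print(did_pass([True, False, False], should_trigger=False))  # 1/3 < 0.5 -> True
```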
328 third_party/zeroclaw/.claude/skills/skill-creator/scripts/run_loop.py vendored Executable file
@@ -0,0 +1,328 @@
#!/usr/bin/env python3
"""Run the eval + improve loop until all pass or max iterations reached.

Combines run_eval.py and improve_description.py in a loop, tracking history
and returning the best description found. Supports train/test split to prevent
overfitting.
"""

import argparse
import json
import random
import sys
import tempfile
import time
import webbrowser
from pathlib import Path

from scripts.generate_report import generate_html
from scripts.improve_description import improve_description
from scripts.run_eval import find_project_root, run_eval
from scripts.utils import parse_skill_md


def split_eval_set(eval_set: list[dict], holdout: float, seed: int = 42) -> tuple[list[dict], list[dict]]:
    """Split eval set into train and test sets, stratified by should_trigger."""
    random.seed(seed)

    # Separate by should_trigger
    trigger = [e for e in eval_set if e["should_trigger"]]
    no_trigger = [e for e in eval_set if not e["should_trigger"]]

    # Shuffle each group
    random.shuffle(trigger)
    random.shuffle(no_trigger)

    # Calculate split points
    n_trigger_test = max(1, int(len(trigger) * holdout))
    n_no_trigger_test = max(1, int(len(no_trigger) * holdout))

    # Split
    test_set = trigger[:n_trigger_test] + no_trigger[:n_no_trigger_test]
    train_set = trigger[n_trigger_test:] + no_trigger[n_no_trigger_test:]

    return train_set, test_set


def run_loop(
    eval_set: list[dict],
    skill_path: Path,
    description_override: str | None,
    num_workers: int,
    timeout: int,
    max_iterations: int,
    runs_per_query: int,
    trigger_threshold: float,
    holdout: float,
    model: str,
    verbose: bool,
    live_report_path: Path | None = None,
    log_dir: Path | None = None,
) -> dict:
    """Run the eval + improvement loop."""
    project_root = find_project_root()
    name, original_description, content = parse_skill_md(skill_path)
    current_description = description_override or original_description

    # Split into train/test if holdout > 0
    if holdout > 0:
        train_set, test_set = split_eval_set(eval_set, holdout)
        if verbose:
            print(f"Split: {len(train_set)} train, {len(test_set)} test (holdout={holdout})", file=sys.stderr)
    else:
        train_set = eval_set
        test_set = []

    history = []
    exit_reason = "unknown"

    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f"\n{'='*60}", file=sys.stderr)
            print(f"Iteration {iteration}/{max_iterations}", file=sys.stderr)
            print(f"Description: {current_description}", file=sys.stderr)
            print(f"{'='*60}", file=sys.stderr)

        # Evaluate train + test together in one batch for parallelism
        all_queries = train_set + test_set
        t0 = time.time()
        all_results = run_eval(
            eval_set=all_queries,
            skill_name=name,
            description=current_description,
            num_workers=num_workers,
            timeout=timeout,
            project_root=project_root,
            runs_per_query=runs_per_query,
            trigger_threshold=trigger_threshold,
            model=model,
        )
        eval_elapsed = time.time() - t0

        # Split results back into train/test by matching queries
        train_queries_set = {q["query"] for q in train_set}
        train_result_list = [r for r in all_results["results"] if r["query"] in train_queries_set]
        test_result_list = [r for r in all_results["results"] if r["query"] not in train_queries_set]

        train_passed = sum(1 for r in train_result_list if r["pass"])
        train_total = len(train_result_list)
        train_summary = {"passed": train_passed, "failed": train_total - train_passed, "total": train_total}
        train_results = {"results": train_result_list, "summary": train_summary}

        if test_set:
            test_passed = sum(1 for r in test_result_list if r["pass"])
            test_total = len(test_result_list)
            test_summary = {"passed": test_passed, "failed": test_total - test_passed, "total": test_total}
            test_results = {"results": test_result_list, "summary": test_summary}
        else:
            test_results = None
            test_summary = None

        history.append({
            "iteration": iteration,
            "description": current_description,
            "train_passed": train_summary["passed"],
            "train_failed": train_summary["failed"],
            "train_total": train_summary["total"],
            "train_results": train_results["results"],
            "test_passed": test_summary["passed"] if test_summary else None,
            "test_failed": test_summary["failed"] if test_summary else None,
            "test_total": test_summary["total"] if test_summary else None,
            "test_results": test_results["results"] if test_results else None,
            # For backward compat with report generator
            "passed": train_summary["passed"],
            "failed": train_summary["failed"],
            "total": train_summary["total"],
            "results": train_results["results"],
        })

        # Write live report if path provided
        if live_report_path:
            partial_output = {
                "original_description": original_description,
                "best_description": current_description,
                "best_score": "in progress",
                "iterations_run": len(history),
                "holdout": holdout,
                "train_size": len(train_set),
                "test_size": len(test_set),
                "history": history,
            }
            live_report_path.write_text(generate_html(partial_output, auto_refresh=True, skill_name=name))

        if verbose:
            def print_eval_stats(label, results, elapsed):
                pos = [r for r in results if r["should_trigger"]]
                neg = [r for r in results if not r["should_trigger"]]
                tp = sum(r["triggers"] for r in pos)
                pos_runs = sum(r["runs"] for r in pos)
                fn = pos_runs - tp
                fp = sum(r["triggers"] for r in neg)
                neg_runs = sum(r["runs"] for r in neg)
                tn = neg_runs - fp
                total = tp + tn + fp + fn
                precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
                recall = tp / (tp + fn) if (tp + fn) > 0 else 1.0
                accuracy = (tp + tn) / total if total > 0 else 0.0
                print(f"{label}: {tp+tn}/{total} correct, precision={precision:.0%} recall={recall:.0%} accuracy={accuracy:.0%} ({elapsed:.1f}s)", file=sys.stderr)
                for r in results:
                    status = "PASS" if r["pass"] else "FAIL"
                    rate_str = f"{r['triggers']}/{r['runs']}"
                    print(f" [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:60]}", file=sys.stderr)

            print_eval_stats("Train", train_results["results"], eval_elapsed)
            if test_summary:
                print_eval_stats("Test ", test_results["results"], 0)

        if train_summary["failed"] == 0:
            exit_reason = f"all_passed (iteration {iteration})"
            if verbose:
                print(f"\nAll train queries passed on iteration {iteration}!", file=sys.stderr)
            break

        if iteration == max_iterations:
            exit_reason = f"max_iterations ({max_iterations})"
            if verbose:
                print(f"\nMax iterations reached ({max_iterations}).", file=sys.stderr)
            break

        # Improve the description based on train results
        if verbose:
            print("\nImproving description...", file=sys.stderr)

        t0 = time.time()
        # Strip test scores from history so improvement model can't see them
        blinded_history = [
            {k: v for k, v in h.items() if not k.startswith("test_")}
            for h in history
        ]
        new_description = improve_description(
            skill_name=name,
            skill_content=content,
            current_description=current_description,
            eval_results=train_results,
            history=blinded_history,
            model=model,
            log_dir=log_dir,
            iteration=iteration,
        )
        improve_elapsed = time.time() - t0

        if verbose:
            print(f"Proposed ({improve_elapsed:.1f}s): {new_description}", file=sys.stderr)

        current_description = new_description

    # Find the best iteration by TEST score (or train if no test set)
    if test_set:
        best = max(history, key=lambda h: h["test_passed"] or 0)
        best_score = f"{best['test_passed']}/{best['test_total']}"
    else:
        best = max(history, key=lambda h: h["train_passed"])
        best_score = f"{best['train_passed']}/{best['train_total']}"

    if verbose:
        print(f"\nExit reason: {exit_reason}", file=sys.stderr)
        print(f"Best score: {best_score} (iteration {best['iteration']})", file=sys.stderr)

    return {
        "exit_reason": exit_reason,
        "original_description": original_description,
        "best_description": best["description"],
        "best_score": best_score,
        "best_train_score": f"{best['train_passed']}/{best['train_total']}",
        "best_test_score": f"{best['test_passed']}/{best['test_total']}" if test_set else None,
        "final_description": current_description,
        "iterations_run": len(history),
        "holdout": holdout,
        "train_size": len(train_set),
        "test_size": len(test_set),
        "history": history,
    }


def main():
    parser = argparse.ArgumentParser(description="Run eval + improve loop")
    parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file")
    parser.add_argument("--skill-path", required=True, help="Path to skill directory")
    parser.add_argument("--description", default=None, help="Override starting description")
    parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers")
    parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds")
    parser.add_argument("--max-iterations", type=int, default=5, help="Max improvement iterations")
    parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query")
    parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold")
    parser.add_argument("--holdout", type=float, default=0.4, help="Fraction of eval set to hold out for testing (0 to disable)")
    parser.add_argument("--model", required=True, help="Model for improvement")
    parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
    parser.add_argument("--report", default="auto", help="Generate HTML report at this path (default: 'auto' for temp file, 'none' to disable)")
    parser.add_argument("--results-dir", default=None, help="Save all outputs (results.json, report.html, log.txt) to a timestamped subdirectory here")
    args = parser.parse_args()

    eval_set = json.loads(Path(args.eval_set).read_text())
    skill_path = Path(args.skill_path)

    if not (skill_path / "SKILL.md").exists():
        print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
        sys.exit(1)

    name, _, _ = parse_skill_md(skill_path)

    # Set up live report path
    if args.report != "none":
        if args.report == "auto":
            timestamp = time.strftime("%Y%m%d_%H%M%S")
            live_report_path = Path(tempfile.gettempdir()) / f"skill_description_report_{skill_path.name}_{timestamp}.html"
        else:
            live_report_path = Path(args.report)
        # Open the report immediately so the user can watch
        live_report_path.write_text("<html><body><h1>Starting optimization loop...</h1><meta http-equiv='refresh' content='5'></body></html>")
        webbrowser.open(str(live_report_path))
    else:
        live_report_path = None

    # Determine output directory (create before run_loop so logs can be written)
    if args.results_dir:
        timestamp = time.strftime("%Y-%m-%d_%H%M%S")
        results_dir = Path(args.results_dir) / timestamp
        results_dir.mkdir(parents=True, exist_ok=True)
    else:
        results_dir = None

    log_dir = results_dir / "logs" if results_dir else None

    output = run_loop(
        eval_set=eval_set,
        skill_path=skill_path,
        description_override=args.description,
        num_workers=args.num_workers,
        timeout=args.timeout,
        max_iterations=args.max_iterations,
        runs_per_query=args.runs_per_query,
        trigger_threshold=args.trigger_threshold,
        holdout=args.holdout,
        model=args.model,
        verbose=args.verbose,
        live_report_path=live_report_path,
        log_dir=log_dir,
    )

    # Save JSON output
    json_output = json.dumps(output, indent=2)
    print(json_output)
    if results_dir:
        (results_dir / "results.json").write_text(json_output)

    # Write final HTML report (without auto-refresh)
    if live_report_path:
        live_report_path.write_text(generate_html(output, auto_refresh=False, skill_name=name))
        print(f"\nReport: {live_report_path}", file=sys.stderr)

    if results_dir and live_report_path:
        (results_dir / "report.html").write_text(generate_html(output, auto_refresh=False, skill_name=name))

    if results_dir:
        print(f"Results saved to: {results_dir}", file=sys.stderr)


if __name__ == "__main__":
    main()
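The stratified holdout used by `run_loop.py` always keeps at least one positive and one negative query in the test set. A quick check of the proportions with a toy eval set (`split` is a trimmed-down stand-in mirroring `split_eval_set`):

```python
import random

def split(eval_set, holdout, seed=42):
    # Stratified by should_trigger, as in split_eval_set
    random.seed(seed)
    pos = [e for e in eval_set if e["should_trigger"]]
    neg = [e for e in eval_set if not e["should_trigger"]]
    random.shuffle(pos)
    random.shuffle(neg)
    n_pos = max(1, int(len(pos) * holdout))
    n_neg = max(1, int(len(neg) * holdout))
    return pos[n_pos:] + neg[n_neg:], pos[:n_pos] + neg[:n_neg]

toy = [{"query": f"q{i}", "should_trigger": i % 2 == 0} for i in range(10)]
train, test = split(toy, holdout=0.4)
print(len(train), len(test))  # 6 4
```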
47 third_party/zeroclaw/.claude/skills/skill-creator/scripts/utils.py vendored Normal file
@@ -0,0 +1,47 @@
"""Shared utilities for skill-creator scripts."""

from pathlib import Path


def parse_skill_md(skill_path: Path) -> tuple[str, str, str]:
    """Parse a SKILL.md file, returning (name, description, full_content)."""
    content = (skill_path / "SKILL.md").read_text()
    lines = content.split("\n")

    if lines[0].strip() != "---":
        raise ValueError("SKILL.md missing frontmatter (no opening ---)")

    end_idx = None
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            end_idx = i
            break

    if end_idx is None:
        raise ValueError("SKILL.md missing frontmatter (no closing ---)")

    name = ""
    description = ""
    frontmatter_lines = lines[1:end_idx]
    i = 0
    while i < len(frontmatter_lines):
        line = frontmatter_lines[i]
        if line.startswith("name:"):
            name = line[len("name:"):].strip().strip('"').strip("'")
        elif line.startswith("description:"):
            value = line[len("description:"):].strip()
            # Handle YAML multiline indicators (>, |, >-, |-)
            if value in (">", "|", ">-", "|-"):
                continuation_lines: list[str] = []
                i += 1
                while i < len(frontmatter_lines) and (frontmatter_lines[i].startswith(" ") or frontmatter_lines[i].startswith("\t")):
                    continuation_lines.append(frontmatter_lines[i].strip())
                    i += 1
                description = " ".join(continuation_lines)
                continue
            else:
                description = value.strip('"').strip("'")
        i += 1

    return name, description, content
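The folded-description handling in `parse_skill_md` can be exercised with a trimmed-down version of the same loop (a sketch; `extract_description` is a hypothetical name, the logic mirrors the function above):

```python
def extract_description(frontmatter_lines: list[str]) -> str:
    # Mirrors parse_skill_md: a bare >, |, >- or |- starts a multi-line
    # value whose indented continuation lines are joined with spaces
    i = 0
    while i < len(frontmatter_lines):
        line = frontmatter_lines[i]
        if line.startswith("description:"):
            value = line[len("description:"):].strip()
            if value in (">", "|", ">-", "|-"):
                parts = []
                i += 1
                while i < len(frontmatter_lines) and frontmatter_lines[i][:1] in (" ", "\t"):
                    parts.append(frontmatter_lines[i].strip())
                    i += 1
                return " ".join(parts)
            return value.strip('"').strip("'")
        i += 1
    return ""

print(extract_description(["name: demo", "description: >", "  Does one thing", "  well"]))
# Does one thing well
```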
285 third_party/zeroclaw/.claude/skills/zeroclaw/SKILL.md vendored Normal file
@@ -0,0 +1,285 @@
|
|||||||
|
---
name: zeroclaw
description: "Help users operate and interact with their ZeroClaw agent instance — through both the CLI (`zeroclaw` commands) and the REST/WebSocket gateway API. Use this skill whenever the user wants to: send messages to ZeroClaw, manage memory or cron jobs, check system status, configure channels or providers, hit the gateway API, troubleshoot their ZeroClaw setup, build from source, or do anything involving the `zeroclaw` binary or its HTTP endpoints. Trigger this even if the user just says things like 'check my agent status', 'schedule a reminder', 'store this in memory', 'list my cron jobs', 'send a message to my bot', 'set up Telegram', 'build zeroclaw', or 'my bot is broken' — these are all ZeroClaw operations."
---

# ZeroClaw Skill

You are helping a user operate their ZeroClaw agent instance. ZeroClaw is an autonomous agent runtime with a CLI and an HTTP/WebSocket gateway.

Your job is to understand what the user wants to accomplish and then **execute it** — run the command, make the API call, report the result. Do not just show commands for the user to copy-paste. Actually run them via the Bash tool and tell the user what happened. The only exception is destructive operations (clearing all memory, estop kill-all), where you should confirm first.

## Adaptive Expertise

Pay attention to how the user talks. Someone who says "can you hit the webhook endpoint with a POST" is telling you they know what they're doing — be concise, skip explanations, just execute. Someone who says "how do I make my bot remember things" needs more context about what's happening under the hood.

Signals of technical comfort: mentions specific endpoints, HTTP methods, or JSON fields; talks about tokens/auth; uses CLI flags fluently; references config files directly.

Signals of less familiarity: asks "what does X do", uses casual language about the bot/agent, describes goals rather than mechanisms ("I want it to check something every morning").

Default to a middle ground — a brief explanation of what you're about to do, then do it. Dial up or down from there based on cues.

## Discovery — Before You Act

Before running any ZeroClaw operation, make sure you know where things are:

1. **Find the binary.** Search in this order:
   - `which zeroclaw` (PATH)
   - The current project's build output: `./target/release/zeroclaw` or `./target/debug/zeroclaw` — this is the right choice when the user is working inside the ZeroClaw source tree and may have local changes
   - Common install locations: `~/.cargo/bin/zeroclaw`, `~/Downloads/zeroclaw-bin/zeroclaw`

   If no binary is found anywhere, offer to build from source (see "Building from Source" below). If the user is a developer working on ZeroClaw itself, they'll likely want the local build — watch for cues like them editing source files, mentioning PRs, or being in the project directory.

2. **Check if the gateway is running** (only needed for REST/WebSocket operations). A quick `curl -sf http://127.0.0.1:42617/health` tells you. If it's not running and the user wants REST access, let them know and offer to start it (`zeroclaw gateway` or `zeroclaw daemon`).

3. **Check auth status.** If the gateway requires pairing (`require_pairing = true` is the default), REST calls need a bearer token. Run `zeroclaw status` to see the current state, or check `~/.zeroclaw/config.toml` for a stored token under `[gateway]`.

Cache these findings for the conversation — don't re-discover every time.

## Important: REPL Limitation

`zeroclaw agent` (interactive REPL) requires interactive stdin, which doesn't work through the Bash tool. When the user wants to chat with their agent, use single-message mode instead:

```bash
zeroclaw agent -m "the message"
```

Each `-m` invocation is independent (no conversation history between calls). If the user needs a multi-turn conversation, let them know they can run `zeroclaw agent` directly in their terminal, or use the WebSocket endpoint for programmatic streaming.

## First-Time Setup

If the user hasn't set up ZeroClaw yet (no `~/.zeroclaw/config.toml` exists), guide them through onboarding:

```bash
zeroclaw onboard                        # Guided wizard (default; provider defaults to OpenRouter)
zeroclaw onboard --provider anthropic   # Use Anthropic directly
```

After onboarding, verify everything works:
```bash
zeroclaw status
zeroclaw doctor
```

If they already have a config but something is broken, `zeroclaw onboard --channels-only` repairs just the channel configuration without overwriting everything else.

## Building from Source

If the user wants to build ZeroClaw (or no binary is installed):

```bash
cargo build --release
```

This produces `target/release/zeroclaw`. For faster iteration during development, `cargo build` (debug mode) compiles more quickly but produces a slower binary at `target/debug/zeroclaw`.

You can also run directly without a separate build step:
```bash
cargo run --release -- <subcommand> [args]
```

Before building, `cargo check` gives a quick compile validation without the full build.

## Choosing CLI vs REST

Both surfaces can do most things. Rules of thumb:

- **CLI is simpler** for one-off operations from the terminal. It handles auth internally and formats output nicely. Prefer the CLI when the user is working locally.
- **REST is needed** when the user is building an integration, scripting from another language, or accessing a remote ZeroClaw instance. Also needed for streaming (WebSocket, SSE).
- If unclear, **default to the CLI** — it's less setup.

## Core Operations

### Sending Messages

**CLI:** `zeroclaw agent -m "your message here"` — remember, always use `-m` mode, not bare `zeroclaw agent`.

**REST:**
```bash
curl -X POST http://127.0.0.1:42617/webhook \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"message": "your message here"}'
```
Response: `{"response": "...", "model": "..."}`

**WebSocket** (for streaming): connect to `ws://127.0.0.1:42617/ws/chat?token=<token>`, send `{"type": "message", "content": "..."}`, receive `{"type": "done", "full_response": "..."}`.
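For scripted integrations, the same `/webhook` call can be assembled in any language; a minimal Python sketch of the documented request shape (the helper name and gateway constant are illustrative, not part of ZeroClaw):

```python
import json

GATEWAY = "http://127.0.0.1:42617"  # default gateway bind


def build_webhook_request(token, message, idempotency_key=None):
    """Return (url, headers, body) for a POST /webhook call."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    if idempotency_key:
        # Optional de-duplication, see Gateway Defaults (300s TTL).
        headers["X-Idempotency-Key"] = idempotency_key
    return f"{GATEWAY}/webhook", headers, json.dumps({"message": message})


url, headers, body = build_webhook_request("abc123", "your message here")
```

Send the result with `curl`, `requests`, or any HTTP client; the gateway replies with `{"response": "...", "model": "..."}`.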
### System Status

Run `zeroclaw status` to see the provider, model, uptime, channels, and memory backend. For deeper diagnostics: `zeroclaw doctor`.

**REST:** `GET /api/status` (same info as JSON), `GET /health` (no auth, quick ok/not-ok).

### Memory

The CLI can list, get, and clear memories but **cannot store** them directly. To store a memory:
- Via agent: `zeroclaw agent -m "remember that my favorite color is blue"`
- Via REST: `POST /api/memory` with `{"key": "...", "content": "...", "category": "core"}`

**CLI (read/delete):**
- `zeroclaw memory list` — list all entries
- `zeroclaw memory list --category core --limit 10` — filtered
- `zeroclaw memory get "key-name"` — get a specific entry
- `zeroclaw memory stats` — usage statistics
- `zeroclaw memory clear --key "prefix" --yes` — delete entries (confirm with the user first)

**REST (full CRUD):**
- `GET /api/memory` — list all (optional: `?query=search+text&category=core`)
- `POST /api/memory` — store: `{"key": "...", "content": "...", "category": "core"}`
- `DELETE /api/memory/{key}` — delete an entry

Categories: `core`, `daily`, `conversation`, or any custom string.

### Cron / Scheduling

**CLI:**
- `zeroclaw cron list` — show all jobs
- `zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York` — recurring
- `zeroclaw cron add-at '2026-03-11T10:00:00Z' 'Remind me'` — one-time at a specific time
- `zeroclaw cron add-every 3600000 'Check health'` — interval in ms
- `zeroclaw cron once 30m 'Follow up'` — delay from now
- `zeroclaw cron pause <id>` / `zeroclaw cron resume <id>` / `zeroclaw cron remove <id>`

**REST:**
- `GET /api/cron` — list jobs
- `POST /api/cron` — add: `{"name": "...", "schedule": "0 9 * * *", "command": "..."}`
- `DELETE /api/cron/{id}` — remove a job
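`add-every` takes milliseconds, which is easy to get wrong by hand. A small helper like this (hypothetical, not part of the CLI) converts human-friendly durations into the expected argument:

```python
def duration_to_ms(spec):
    """Convert '30s', '15m', '2h', or '1d' to milliseconds for `cron add-every`."""
    units = {"s": 1_000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}
    value, unit = spec[:-1], spec[-1]
    if unit not in units:
        raise ValueError(f"unknown unit: {unit!r}")
    return int(value) * units[unit]


# An hourly check, equivalent to: zeroclaw cron add-every 3600000 'Check health'
ms = duration_to_ms("1h")  # 3600000
```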
### Tools

Tools are used automatically by the agent during conversations (shell, file ops, memory, browser, HTTP, web search, git, etc. — 30+ tools gated by security policy).

To see what's available: `GET /api/tools` (REST) lists all registered tools with descriptions and parameter schemas.

### Configuration

Edit `~/.zeroclaw/config.toml` directly, or re-run `zeroclaw onboard` to reconfigure.

**REST:**
- `GET /api/config` — get the current config (secrets masked as `***MASKED***`)
- `PUT /api/config` — update the config (send raw TOML as the body, 1MB limit)

### Providers & Models

- `zeroclaw providers` — list all supported providers
- `zeroclaw models list` — cached model catalog
- `zeroclaw models refresh --all` — refresh from providers
- `zeroclaw models set anthropic/claude-sonnet-4-6` — set the default model

Override per message: `zeroclaw agent -p anthropic --model claude-sonnet-4-6 -m "hello"`

### Real-Time Events (SSE)

REST only — useful for building dashboards or monitoring:
```bash
curl -N -H "Authorization: Bearer <token>" http://127.0.0.1:42617/api/events
```
Streams JSON events: `llm_request`, `tool_call_start`, `tool_call`, `agent_start`, `agent_end`, `error`.
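Consuming that stream from code mostly means splitting `data:` lines and filtering by event type. A minimal parser sketch (the function name, and the assumption that each event JSON carries a `type` field, are illustrative):

```python
import json


def filter_events(sse_lines, wanted=("tool_call",)):
    """Yield parsed JSON events of the wanted types from raw SSE lines."""
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and comment lines
        event = json.loads(line[len("data:"):].strip())
        if event.get("type") in wanted:
            yield event


stream = [
    'data: {"type": "agent_start"}',
    'data: {"type": "tool_call", "tool": "shell"}',
    '',
]
tool_calls = list(filter_events(stream))
# tool_calls == [{"type": "tool_call", "tool": "shell"}]
```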
### Cost Tracking

`GET /api/cost` — returns session/daily/monthly costs, token counts, and a per-model breakdown.

### Emergency Stop

Confirm with the user before running any estop command — these are disruptive.

- `zeroclaw estop --level kill-all` — stop everything
- `zeroclaw estop --level network-kill` — block all network
- `zeroclaw estop --level tool-freeze --tool shell` — freeze a specific tool
- `zeroclaw estop status` — check the current estop state
- `zeroclaw estop resume --network` — resume

### Gateway Lifecycle

- `zeroclaw gateway` — start the HTTP gateway (foreground)
- `zeroclaw gateway -p 8080 --host 127.0.0.1` — custom bind
- `zeroclaw daemon` — start gateway + channels + scheduler + heartbeat
- `zeroclaw service install/start/stop/status/uninstall` — OS service management

### Channels

ZeroClaw supports 21 messaging channels. To add one, edit `~/.zeroclaw/config.toml`. For example, to set up Telegram:

```toml
[channels]
telegram = true

[channels_config.telegram]
bot_token = "your-bot-token-from-botfather"
allowed_users = [123456789]
```

Then restart the daemon. Check channel health with `zeroclaw channels doctor`.

For the full list of channels and their config fields, read `references/cli-reference.md` (Channels section).

### Pairing (Authentication Setup)

When `require_pairing = true` (the default), REST clients need a bearer token:
```bash
curl -X POST http://127.0.0.1:42617/pair -H "X-Pairing-Code: <code>"
```
The response includes `{"token": "..."}` — save this for subsequent requests.

## Common Workflows

Here are multi-step sequences you're likely to need:

**"Is my agent healthy?"**
1. Run `zeroclaw status` — check provider, model, channels
2. Run `zeroclaw doctor` — check connectivity, diagnose issues
3. If the gateway is needed: `curl -sf http://127.0.0.1:42617/health`

**"Set up a new channel"**
1. Read the current config: `cat ~/.zeroclaw/config.toml`
2. Add the channel config (edit the TOML)
3. Restart: `zeroclaw service restart` (or restart the daemon manually)
4. Verify: `zeroclaw channels doctor`

**"Switch to a different model"**
1. Check what's available: `zeroclaw models list`
2. Set it: `zeroclaw models set <provider/model>`
3. Verify: `zeroclaw status`
4. Test: `zeroclaw agent -m "hello, what model are you?"`

## Gateway Defaults

- **Port:** 42617
- **Host:** 127.0.0.1
- **Auth:** Pairing required (bearer token)
- **Rate limits:** 60 webhook requests/min, 10 pairing attempts/min
- **Body limit:** 64KB (1MB for config updates)
- **Timeout:** 30 seconds
- **Idempotency:** Optional `X-Idempotency-Key` header on `/webhook` (300s TTL)
- **Config location:** `~/.zeroclaw/config.toml`

## Reference Files

For the complete API specification with every endpoint, field, and edge case, read `references/rest-api.md`.

For the full CLI command tree with all flags and options, read `references/cli-reference.md`.

Only load these when you need precise details beyond what's in this file — for most operations, the quick references above are sufficient.

## Troubleshooting

**"zeroclaw: command not found"** — Binary not in PATH. Check `./target/release/zeroclaw`, `~/.cargo/bin/zeroclaw`, or build from source with `cargo build --release`.

**"Connection refused" on REST calls** — The gateway isn't running. Start it with `zeroclaw gateway` or `zeroclaw daemon`.

**"Unauthorized" (401/403)** — Bearer token is missing or invalid. Re-pair via `POST /pair` with the pairing code, or check `~/.zeroclaw/config.toml` for the stored token.

**"LLM request failed" (500)** — Provider issue. Run `zeroclaw doctor` to check connectivity. Common causes: expired API key, provider outage, rate limiting on the provider side.

**"Too many requests" (429)** — You're hitting ZeroClaw's rate limit. Back off — the response includes `retry_after` with the number of seconds to wait.

**Agent not using tools / acting limited** — Check the autonomy settings in config.toml under `[autonomy]`. `level = "read_only"` disables most tools. Try `level = "supervised"` or `level = "full"`.

**Memory not persisting** — Check the `[memory]` config. If `backend = "none"`, nothing is stored. Switch to `"sqlite"` or `"markdown"`. Also verify `auto_save = true`.

**Channel not responding** — Run `zeroclaw channels doctor` for the specific channel. Common issues: expired bot token, wrong `allowed_users` list, channel not enabled in `[channels]`.

Report errors to the user with context appropriate to their expertise level. For beginners, explain what went wrong and suggest the fix. For experts, just show the error and the fix.
23
third_party/zeroclaw/.claude/skills/zeroclaw/evals/evals.json
vendored
Normal file
@@ -0,0 +1,23 @@
{
  "skill_name": "zeroclaw",
  "evals": [
    {
      "id": 0,
      "prompt": "how do i make my bot remember my name",
      "expected_output": "Executes a zeroclaw command to store a memory, explains what happened in beginner-friendly language",
      "files": []
    },
    {
      "id": 1,
      "prompt": "I want to schedule a daily health check on my ZeroClaw instance every morning at 9am ET",
      "expected_output": "Executes zeroclaw cron add with correct cron expression and timezone flag",
      "files": []
    },
    {
      "id": 2,
      "prompt": "Set up a Python script that monitors my ZeroClaw agent's activity via SSE and logs tool calls to a file",
      "expected_output": "Writes a Python script that connects to /api/events SSE endpoint with auth, filters for tool_call events, and logs to a file",
      "files": []
    }
  ]
}
277
third_party/zeroclaw/.claude/skills/zeroclaw/references/cli-reference.md
vendored
Normal file
@@ -0,0 +1,277 @@
# ZeroClaw CLI Reference

Complete command reference for the `zeroclaw` binary.

## Table of Contents

1. [Agent](#agent)
2. [Onboarding](#onboarding)
3. [Status & Diagnostics](#status--diagnostics)
4. [Memory](#memory)
5. [Cron](#cron)
6. [Providers & Models](#providers--models)
7. [Gateway & Daemon](#gateway--daemon)
8. [Service Management](#service-management)
9. [Channels](#channels)
10. [Security & Emergency Stop](#security--emergency-stop)
11. [Hardware Peripherals](#hardware-peripherals)
12. [Skills](#skills)
13. [Shell Completions](#shell-completions)

---

## Agent

Interactive chat or single-message mode.

```bash
zeroclaw agent                                          # Interactive REPL
zeroclaw agent -m "Summarize today's logs"              # Single message
zeroclaw agent -p anthropic --model claude-sonnet-4-6   # Override provider/model
zeroclaw agent -t 0.3                                   # Set temperature
zeroclaw agent --peripheral nucleo-f401re:/dev/ttyACM0  # Attach hardware
```

**Key flags:**
- `-m <message>` — single-message mode (no REPL)
- `-p <provider>` — override provider (openrouter, anthropic, openai, ollama)
- `--model <model>` — override model
- `-t <float>` — temperature (0.0–2.0)
- `--peripheral <name>:<port>` — attach a hardware peripheral

The agent has access to 30+ tools gated by security policy: shell, file_read, file_write, file_edit, glob_search, content_search, memory_store, memory_recall, memory_forget, browser, http_request, web_fetch, web_search, cron, delegate, git, and more. Max tool iterations defaults to 10.

---

## Onboarding

First-time setup or reconfiguration.

```bash
zeroclaw onboard                        # Guided wizard (default; provider defaults to openrouter)
zeroclaw onboard --provider anthropic   # Use a specific provider
zeroclaw onboard --memory sqlite        # Set memory backend
zeroclaw onboard --force                # Overwrite existing config
zeroclaw onboard --channels-only        # Repair channels only
```

**Key flags:**
- `--provider <name>` — openrouter (default), anthropic, openai, ollama
- `--model <model>` — default model
- `--memory <backend>` — sqlite, markdown, lucid, none
- `--force` — overwrite existing config.toml
- `--channels-only` — only repair channel configuration
- `--reinit` — start fresh (backs up the existing config)

Creates `~/.zeroclaw/config.toml` with `0600` permissions.

---

## Status & Diagnostics

```bash
zeroclaw status          # System overview
zeroclaw doctor          # Run all diagnostic checks
zeroclaw doctor models   # Probe model connectivity
zeroclaw doctor traces   # Query execution traces
```

---

## Memory

```bash
zeroclaw memory list                              # List all entries
zeroclaw memory list --category core --limit 10   # Filtered list
zeroclaw memory get "some-key"                    # Get a specific entry
zeroclaw memory stats                             # Usage statistics
zeroclaw memory clear --key "prefix" --yes        # Delete entries (requires --yes)
```

**Key flags:**
- `--category <name>` — filter by category (core, daily, conversation, custom)
- `--limit <n>` — limit results
- `--key <prefix>` — key prefix for clear operations
- `--yes` — skip confirmation (required for clear)

---

## Cron

```bash
zeroclaw cron list                                                     # List all jobs
zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York   # Recurring (cron expr)
zeroclaw cron add-at '2026-03-11T10:00:00Z' 'Remind me about meeting'  # One-time at a specific time
zeroclaw cron add-every 3600000 'Check server health'                  # Interval in milliseconds
zeroclaw cron once 30m 'Follow up on that task'                        # Delay from now
zeroclaw cron pause <id>                                               # Pause job
zeroclaw cron resume <id>                                              # Resume job
zeroclaw cron remove <id>                                              # Delete job
```

**Subcommands:**
- `add <cron-expr> <command>` — standard cron expression (5-field)
- `add-at <iso-datetime> <command>` — fire once at an exact time
- `add-every <ms> <command>` — repeating interval
- `once <duration> <command>` — delay from now (e.g., `30m`, `2h`, `1d`)

---

## Providers & Models

```bash
zeroclaw providers                                # List all 40+ supported providers
zeroclaw models list                              # Show cached model catalog
zeroclaw models refresh --all                     # Refresh catalogs from all providers
zeroclaw models set anthropic/claude-sonnet-4-6   # Set default model
zeroclaw models status                            # Current model info
```

Model routing in config.toml:
```toml
[[model_routes]]
hint = "reasoning"
provider = "openrouter"
model = "anthropic/claude-sonnet-4-6"
```

---

## Gateway & Daemon

```bash
zeroclaw gateway                            # Start HTTP gateway (foreground)
zeroclaw gateway -p 8080 --host 127.0.0.1   # Custom port/host

zeroclaw daemon                             # Gateway + channels + scheduler + heartbeat
zeroclaw daemon -p 8080 --host 0.0.0.0      # Custom bind
```

**Gateway defaults:**
- Port: 42617
- Host: 127.0.0.1
- Pairing required: true
- Public bind allowed: false

---

## Service Management

OS service lifecycle (systemd on Linux, launchd on macOS).

```bash
zeroclaw service install     # Install as a system service
zeroclaw service start       # Start the service
zeroclaw service status      # Check service status
zeroclaw service stop        # Stop the service
zeroclaw service restart     # Restart the service
zeroclaw service uninstall   # Remove the service
```

**Logs:**
- macOS: `~/.zeroclaw/logs/daemon.stdout.log`
- Linux: `journalctl -u zeroclaw`

---

## Channels

Channels are configured in `config.toml` under `[channels]` and `[channels_config.*]`.

```bash
zeroclaw channels list     # List configured channels
zeroclaw channels doctor   # Check channel health
```

Supported channels (21 total): Telegram, Discord, Slack, WhatsApp (Meta), WATI, Linq (iMessage/RCS/SMS), Email (IMAP/SMTP), IRC, Matrix, Nostr, Signal, Nextcloud Talk, and more.

Channel config example (Telegram):
```toml
[channels]
telegram = true

[channels_config.telegram]
bot_token = "..."
allowed_users = [123456789]
```

---

## Security & Emergency Stop

```bash
zeroclaw estop --level kill-all                                # Stop everything
zeroclaw estop --level network-kill                            # Block all network access
zeroclaw estop --level domain-block --domain "*.example.com"   # Block specific domains
zeroclaw estop --level tool-freeze --tool shell                # Freeze a specific tool
zeroclaw estop status                                          # Check estop state
zeroclaw estop resume --network                                # Resume (may require OTP)
```

**Estop levels:**
- `kill-all` — nuclear option, stops all agent activity
- `network-kill` — blocks all outbound network
- `domain-block` — blocks specific domain patterns
- `tool-freeze` — freezes individual tools

Autonomy config in config.toml:
```toml
[autonomy]
level = "supervised"   # read_only | supervised | full
workspace_only = true
allowed_commands = ["git", "cargo", "python"]
forbidden_paths = ["/etc", "/root", "~/.ssh"]
max_actions_per_hour = 20
max_cost_per_day_cents = 500
```

---

## Hardware Peripherals

```bash
zeroclaw hardware discover                            # Find USB devices
zeroclaw hardware introspect /dev/ttyACM0             # Probe device capabilities
zeroclaw peripheral list                              # List configured peripherals
zeroclaw peripheral add nucleo-f401re /dev/ttyACM0    # Add a peripheral
zeroclaw peripheral flash-nucleo                      # Flash STM32 firmware
zeroclaw peripheral flash --port /dev/cu.usbmodem101  # Flash Arduino firmware
```

**Supported boards:** STM32 Nucleo-F401RE, Arduino Uno R4, Raspberry Pi GPIO, ESP32.

Attach to an agent session: `zeroclaw agent --peripheral nucleo-f401re:/dev/ttyACM0`

---

## Skills

```bash
zeroclaw skills list                    # List installed skills
zeroclaw skills install <path-or-url>   # Install a skill
zeroclaw skills audit                   # Audit installed skills
zeroclaw skills remove <name>           # Remove a skill
```

---

## Shell Completions

```bash
zeroclaw completions zsh    # Generate Zsh completions
zeroclaw completions bash   # Generate Bash completions
zeroclaw completions fish   # Generate Fish completions
```

---

## Config File

Default location: `~/.zeroclaw/config.toml`

Config resolution order (first match wins):
1. `ZEROCLAW_CONFIG_DIR` environment variable
2. `ZEROCLAW_WORKSPACE` environment variable
3. `~/.zeroclaw/active_workspace.toml` marker file
4. `~/.zeroclaw/config.toml` (default)
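The resolution order reads as a first-match-wins lookup. A Python illustration of that logic (the real implementation is Rust; the helper name, and the assumption that the marker file's directory is the workspace, are both hypothetical):

```python
import os
from pathlib import Path


def resolve_config_dir(env=os.environ, home=Path.home()):
    """First match wins, mirroring the documented resolution order."""
    if "ZEROCLAW_CONFIG_DIR" in env:
        return Path(env["ZEROCLAW_CONFIG_DIR"])
    if "ZEROCLAW_WORKSPACE" in env:
        return Path(env["ZEROCLAW_WORKSPACE"])
    marker = home / ".zeroclaw" / "active_workspace.toml"
    if marker.exists():
        return marker.parent  # assumption: marker's directory holds the active config
    return home / ".zeroclaw"  # default
```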
505
third_party/zeroclaw/.claude/skills/zeroclaw/references/rest-api.md
vendored
Normal file
@@ -0,0 +1,505 @@
# ZeroClaw REST API Reference
|
||||||
|
|
||||||
|
Complete endpoint reference for the ZeroClaw gateway HTTP API.
|
||||||
|
|
||||||
|
## Table of Contents
|
||||||
|
|
||||||
|
1. [Authentication](#authentication)
|
||||||
|
2. [Public Endpoints](#public-endpoints)
|
||||||
|
3. [Webhook](#webhook)
|
||||||
|
4. [WebSocket Chat](#websocket-chat)
|
||||||
|
5. [Status & Health](#status--health)
|
||||||
|
6. [Memory](#memory)
|
||||||
|
7. [Cron](#cron)
|
||||||
|
8. [Tools](#tools)
|
||||||
|
9. [Configuration](#configuration)
|
||||||
|
10. [Integrations](#integrations)
|
||||||
|
11. [Cost](#cost)
|
||||||
|
12. [Events (SSE)](#events-sse)
|
||||||
|
13. [Channel Webhooks](#channel-webhooks)
|
||||||
|
14. [Rate Limiting](#rate-limiting)
|
||||||
|
15. [Error Responses](#error-responses)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Authentication
|
||||||
|
|
||||||
|
Three authentication mechanisms:
|
||||||
|
|
||||||
|
### Bearer Token (Primary)
|
||||||
|
```
|
||||||
|
Authorization: Bearer <token>
|
||||||
|
```
|
||||||
|
Obtained via `POST /pair`. Required for all `/api/*` endpoints when `require_pairing = true` (default).
|
||||||
|
|
||||||
|
### Webhook Secret
|
||||||
|
```
|
||||||
|
X-Webhook-Secret: <raw_secret>
|
||||||
|
```
|
||||||
|
Optional additional auth for `/webhook`. Server SHA-256 hashes and compares using constant-time comparison.
|
||||||
|
|
||||||
|
### WebSocket Token
|
||||||
|
```
|
||||||
|
ws://host:port/ws/chat?token=<bearer_token>
|
||||||
|
```
|
||||||
|
WebSocket connections pass the token as a query parameter (browsers can't set custom headers on WS handshake).
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Public Endpoints
|
||||||
|
|
||||||
|
### GET /health
|
||||||
|
No authentication required.
|
||||||
|
|
||||||
|
**Response 200:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"status": "ok",
|
||||||
|
"paired": true,
|
||||||
|
"require_pairing": true,
|
||||||
|
"runtime": {}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### GET /metrics
|
||||||
|
Prometheus text exposition format.
|
||||||
|
|
||||||
|
**Response 200:**
|
||||||
|
```
|
||||||
|
Content-Type: text/plain; version=0.0.4; charset=utf-8
|
||||||
|
```
|
||||||
|
|
||||||
|
### POST /pair
|
||||||
|
Exchange a one-time pairing code for a bearer token.
|
||||||
|
|
||||||
|
**Rate Limit:** Configurable per-minute limit per IP (default: 10/min).
|
||||||
|
|
||||||
|
**Headers:**
|
||||||
|
- `X-Pairing-Code: <code>` (required)
|
||||||
|
|
||||||
|
**Response 200 (success):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"paired": true,
|
||||||
|
"persisted": true,
|
||||||
|
"token": "<bearer_token>",
|
||||||
|
"message": "Save this token — use it as Authorization: Bearer <token>"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response 200 (persistence failure):**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"paired": true,
|
||||||
|
"persisted": false,
|
||||||
|
"token": "<bearer_token>",
|
||||||
|
"message": "Paired for this process, but failed to persist token to config.toml..."
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response 403:**
|
||||||
|
```json
|
||||||
|
{"error": "Invalid pairing code"}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response 429:**
|
||||||
|
```json
|
||||||
|
{"error": "Too many pairing requests. Please retry later.", "retry_after": 60}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Response 429 (lockout):**
|
||||||
|
```json
|
||||||
|
{"error": "Too many failed attempts. Try again in {lockout_secs}s.", "retry_after": 120}
|
||||||
|
```

---

## Webhook

### POST /webhook

Send a message to the agent and receive a response.

**Rate Limit:** Configurable per-minute limit per IP (default: 60/min).

**Headers:**

- `Authorization: Bearer <token>` (if pairing enabled)
- `Content-Type: application/json`
- `X-Webhook-Secret: <secret>` (optional)
- `X-Idempotency-Key: <uuid>` (optional)

**Request Body:**

```json
{"message": "your prompt here"}
```

**Response 200:**

```json
{"response": "<llm_response>", "model": "<model_name>"}
```

**Response 200 (duplicate — idempotency key match):**

```json
{"status": "duplicate", "idempotent": true, "message": "Request already processed for this idempotency key"}
```

**Response 401:**

```json
{"error": "Unauthorized — pair first via POST /pair, then send Authorization: Bearer <token>"}
```

**Response 429:**

```json
{"error": "Too many webhook requests. Please retry later.", "retry_after": 60}
```

**Response 500:**

```json
{"error": "LLM request failed"}
```

### Idempotency

- Header: `X-Idempotency-Key: <uuid>`
- TTL: configurable, default 300 seconds
- Max tracked keys: configurable, default 10,000
- Duplicate requests within TTL return `"status": "duplicate"` instead of re-processing

---

## WebSocket Chat

### GET /ws/chat?token=<bearer_token>

Streaming agent chat over WebSocket.

**Client → Server:**

```json
{"type": "message", "content": "Hello, what's the weather?"}
```

**Server → Client (complete response):**

```json
{"type": "done", "full_response": "The weather in San Francisco is sunny..."}
```

**Server → Client (error):**

```json
{"type": "error", "message": "Error message here"}
```

Unknown message types are ignored; invalid JSON triggers an error response.

---

## Status & Health

### GET /api/status

**Response 200:**

```json
{
  "provider": "openrouter",
  "model": "anthropic/claude-sonnet-4",
  "temperature": 0.7,
  "uptime_seconds": 3600,
  "gateway_port": 42617,
  "locale": "en",
  "memory_backend": "sqlite",
  "paired": true,
  "channels": {
    "telegram": false,
    "discord": true,
    "slack": false
  },
  "health": {}
}
```

### GET /api/health

Component health snapshot (requires auth).

```json
{"health": {}}
```

### GET or POST /api/doctor

Run system diagnostics.

```json
{
  "results": [
    {"name": "provider_connectivity", "severity": "ok", "message": "OpenRouter API reachable"}
  ],
  "summary": {"ok": 5, "warnings": 1, "errors": 0}
}
```
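The `summary` block is just a tally of the per-check severities in `results`. A sketch of that derivation (assuming severity values `ok`, `warning`, and `error`, which is an inference from the example above):

```python
from collections import Counter

def summarize(results: list[dict]) -> dict:
    """Derive doctor summary counts from the per-check results."""
    counts = Counter(r["severity"] for r in results)
    return {
        "ok": counts.get("ok", 0),
        "warnings": counts.get("warning", 0),
        "errors": counts.get("error", 0),
    }

results = [
    {"name": "provider_connectivity", "severity": "ok", "message": "OpenRouter API reachable"},
    {"name": "config", "severity": "warning", "message": "temperature unset"},
]
print(summarize(results))  # {'ok': 1, 'warnings': 1, 'errors': 0}
```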

---

## Memory

### GET /api/memory

List or search memory entries.

**Query Parameters:**

- `query` (string, optional) — search text; triggers search mode
- `category` (string, optional) — filter by category

**Response 200:**

```json
{
  "entries": [
    {
      "key": "memory_key",
      "content": "memory content",
      "category": "core",
      "timestamp": "2025-01-10T12:00:00Z"
    }
  ]
}
```

### POST /api/memory

Store a memory entry.

**Request Body:**

```json
{
  "key": "unique_key",
  "content": "memory content",
  "category": "core"
}
```

Category defaults to `"core"` if omitted. Other values: `daily`, `conversation`, or any custom string.

**Response 200:**

```json
{"status": "ok"}
```

### DELETE /api/memory/{key}

Delete a memory entry.

**Response 200:**

```json
{"status": "ok", "deleted": true}
```

---

## Cron

### GET /api/cron

List all scheduled jobs.

**Response 200:**

```json
{
  "jobs": [
    {
      "id": "<uuid>",
      "name": "daily-backup",
      "command": "backup.sh",
      "next_run": "2025-01-10T15:00:00Z",
      "last_run": "2025-01-09T15:00:00Z",
      "last_status": "success",
      "enabled": true
    }
  ]
}
```

### POST /api/cron

Add a new job.

**Request Body:**

```json
{
  "name": "job-name",
  "schedule": "0 9 * * *",
  "command": "command to run"
}
```

**Response 200:**

```json
{
  "status": "ok",
  "job": {"id": "<uuid>", "name": "job-name", "command": "command to run", "enabled": true}
}
```

### DELETE /api/cron/{id}

Remove a job.

**Response 200:**

```json
{"status": "ok"}
```
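The `schedule` field in the example is a five-field cron expression (minute, hour, day of month, month, day of week). A cheap client-side shape check before calling `POST /api/cron` might look like this; it is only a sanity filter, and the server's actual parser is presumably stricter:

```python
def looks_like_cron(schedule: str) -> bool:
    """Cheap sanity check: five whitespace-separated cron fields, each made
    of digits, '*', ',', '-', or '/'. Not a full cron parser."""
    fields = schedule.split()
    if len(fields) != 5:
        return False
    allowed = set("0123456789*,-/")
    return all(f and set(f) <= allowed for f in fields)

print(looks_like_cron("0 9 * * *"))  # True
print(looks_like_cron("every day"))  # False
```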

---

## Tools

### GET /api/tools

List all registered tools with descriptions and parameter schemas.

**Response 200:**

```json
{
  "tools": [
    {"name": "shell", "description": "Execute shell commands", "parameters": {}},
    {"name": "file_read", "description": "Read file contents", "parameters": {}}
  ]
}
```

---

## Configuration

### GET /api/config

Get the current config. Secrets are masked as `***MASKED***`.

**Response 200:**

```json
{"format": "toml", "content": "<toml_string>"}
```

### PUT /api/config

Update the config from a TOML body. Body limit: 1 MB.

**Request Body:** Raw TOML text.

**Response 200:**

```json
{"status": "ok"}
```

**Response 400:**

```json
{"error": "Invalid TOML: <details>"}
```

or

```json
{"error": "Invalid config: <validation_error>"}
```

---

## Integrations

### GET /api/integrations

List all integrations and their status.

**Response 200:**

```json
{
  "integrations": [
    {"name": "openrouter", "description": "OpenRouter LLM provider", "category": "providers", "status": "ok"},
    {"name": "telegram", "description": "Telegram messaging channel", "category": "channels", "status": "configured"}
  ]
}
```

---

## Cost

### GET /api/cost

Cost tracking summary.

**Response 200:**

```json
{
  "cost": {
    "session_cost_usd": 1.50,
    "daily_cost_usd": 5.00,
    "monthly_cost_usd": 150.00,
    "total_tokens": 50000,
    "request_count": 25,
    "by_model": {"anthropic/claude-sonnet-4": 1.50}
  }
}
```

---

## Events (SSE)

### GET /api/events

Server-Sent Events stream. Requires bearer token.

**Content-Type:** `text/event-stream`

**Event types:**

| Type | Fields | Description |
|------|--------|-------------|
| `llm_request` | provider, model, timestamp | LLM call started |
| `tool_call_start` | tool, timestamp | Tool execution started |
| `tool_call` | tool, duration_ms, success, timestamp | Tool execution completed |
| `agent_start` | provider, model, timestamp | Agent loop started |
| `agent_end` | provider, model, duration_ms, tokens_used, cost_usd, timestamp | Agent loop completed |
| `error` | component, message, timestamp | Error occurred |

**Example:**

```bash
curl -N -H "Authorization: Bearer <token>" http://127.0.0.1:42617/api/events
```
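Events on the stream arrive in the standard SSE wire format. A simplified consumer sketch that pulls JSON payloads out of `data:` lines (it assumes single-line data fields and skips comments and other SSE fields, so it is not a full SSE parser):

```python
import json

def parse_sse(lines: list[str]) -> list[dict]:
    """Collect `data:` payloads from SSE stream lines, one JSON object per event.
    Simplified: single-line data only; comments and other fields are skipped."""
    events = []
    for line in lines:
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

stream = [
    'data: {"type": "llm_request", "provider": "openrouter", "model": "m", "timestamp": 1}',
    "",
    'data: {"type": "agent_end", "duration_ms": 1200, "cost_usd": 0.01, "timestamp": 2}',
]
for event in parse_sse(stream):
    print(event["type"])  # llm_request, then agent_end
```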

---

## Channel Webhooks

These are incoming webhook endpoints for specific messaging channels. They are set up automatically when the corresponding channels are configured.

### WhatsApp (Meta Cloud API)

- `GET /whatsapp` — verification (echoes `hub.challenge`)
- `POST /whatsapp` — incoming messages (signature verified via `X-Hub-Signature-256`)

### WATI (WhatsApp Business)

- `GET /wati` — verification (echoes `challenge`)
- `POST /wati` — incoming messages

### Linq (iMessage/RCS/SMS)

- `POST /linq` — incoming messages (signature verified via `X-Webhook-Signature` + `X-Webhook-Timestamp`)

### Nextcloud Talk

- `POST /nextcloud-talk` — bot API webhook (signature verified via `X-Nextcloud-Talk-Signature`)

---

## Rate Limiting

Sliding window (60-second window), per client IP.

| Endpoint | Default Limit |
|----------|--------------|
| `POST /pair` | 10/min |
| `POST /webhook` | 60/min |

If `trust_forwarded_headers` is enabled, the client IP is taken from `X-Forwarded-For`.

Max tracked keys: configurable (default: 10,000).
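The sliding-window behavior can be sketched with a per-IP deque of hit timestamps; requests are allowed while fewer than the limit fall inside the trailing window. An illustrative model, not the server's implementation:

```python
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds for each client IP."""

    def __init__(self, limit: int, window: float = 60.0):
        self.limit = limit
        self.window = window
        self._hits: dict[str, deque] = defaultdict(deque)

    def allow(self, ip: str, now: float) -> bool:
        hits = self._hits[ip]
        while hits and now - hits[0] >= self.window:  # evict hits outside the window
            hits.popleft()
        if len(hits) >= self.limit:
            return False
        hits.append(now)
        return True

limiter = SlidingWindowLimiter(limit=2, window=60.0)
print(limiter.allow("1.2.3.4", now=0.0))   # True
print(limiter.allow("1.2.3.4", now=1.0))   # True
print(limiter.allow("1.2.3.4", now=2.0))   # False — third hit inside the window
print(limiter.allow("1.2.3.4", now=61.0))  # True — the first hit has slid out
```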

---

## Error Responses

**Standard format:**

```json
{"error": "Human-readable error message"}
```

**With retry info:**

```json
{"error": "...", "retry_after": 60}
```

**Status codes:**

| Code | Meaning |
|------|---------|
| 200 | Success |
| 400 | Invalid JSON, missing fields, invalid TOML |
| 401 | Invalid/missing bearer token or webhook secret |
| 403 | Pairing verification failed |
| 404 | Endpoint or channel not configured |
| 408 | Request timeout (30s) |
| 429 | Rate limited (check `retry_after`) |
| 500 | LLM error, database error, internal failure |
71
third_party/zeroclaw/.coderabbit.yaml
vendored
Normal file
@@ -0,0 +1,71 @@
# CodeRabbit configuration for ZeroClaw
# Documentation: https://docs.coderabbit.ai/reference/configuration

language: en-US
early_access: false

# Enable tone control for reviews
reviews:
  # Request changes workflow
  request_changes_workflow: false

  # High level summary of the PR
  high_level_summary: true

  # Generate sequence diagrams
  sequence_diagrams: true

  # Auto-review configuration
  auto_review:
    enabled: true
    # Only review PRs targeting these branches
    base_branches:
      - master
    # Skip reviews for draft PRs or WIP
    drafts: false

  # Poem feature toggle (must be a boolean, not an object)
  poem: false

  # Reviewer suggestions
  reviewer:
    # Suggest reviewers based on blame data
    enabled: true
    # Automatically assign suggested reviewers
    auto_assign: false

  # Enable finishing touches
  finishing_touches:
    # Generate docstrings
    docstrings:
      enabled: true
    # Generate unit tests
    unit_tests:
      enabled: true

# Tools configuration
tools:
  # Rust-specific tools
  cargo:
    enabled: true

# Chat configuration
chat:
  auto_reply: true

# Path filters - ignore generated files
path_filters:
  - "!**/target/**"
  - "!**/node_modules/**"
  - "!**/.cargo/**"
  - "!**/Cargo.lock"

# Review instructions specific to Rust and this project
review_instructions:
  - "Focus on Rust best practices and idiomatic code"
  - "Check for security vulnerabilities in encryption/crypto code"
  - "Ensure proper error handling with Result types"
  - "Verify memory safety and avoid unnecessary clones"
  - "Check for proper use of lifetimes and borrowing"
  - "Ensure tests cover critical security paths"
  - "Review configuration migration code carefully"
71
third_party/zeroclaw/.dockerignore
vendored
Normal file
@@ -0,0 +1,71 @@
# Git history (may contain old secrets)
.git
.gitignore
.githooks

# Rust build artifacts (can be multiple GB)
target

# Documentation and examples (not needed for runtime)
docs
examples
tests

# Markdown files (README, CHANGELOG, etc.)
*.md

# Images (unnecessary for build)
*.png
*.svg
*.jpg
*.jpeg
*.gif

# SQLite databases (conversation history, cron jobs)
*.db
*.db-journal

# macOS artifacts
.DS_Store
.AppleDouble
.LSOverride

# CI/CD configs (not needed in image)
.github

# Cargo deny config (lint tool, not runtime)
deny.toml

# License file (not needed for runtime)
LICENSE

# Temporary files
.tmp_*
*.tmp
*.bak
*.swp
*~

# IDE and editor configs
.idea
.vscode
*.iml

# Windsurf workflows
.windsurf

# Environment files (may contain secrets)
.env
.env.*
!.env.example

# Coverage and profiling
*.profraw
*.profdata
coverage
lcov.info

# Application and script directories (not needed for Docker runtime)
apps/
python/
scripts/
44
third_party/zeroclaw/.editorconfig
vendored
Normal file
@@ -0,0 +1,44 @@
# EditorConfig is awesome: https://EditorConfig.org

# top-most EditorConfig file
root = true

# All files
[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

# Rust files - match rustfmt.toml
[*.rs]
indent_size = 4
max_line_length = 100

# Markdown files
[*.md]
trim_trailing_whitespace = false
max_line_length = 80

# TOML files
[*.toml]
indent_size = 2

# YAML files
[*.{yml,yaml}]
indent_size = 2

# Python files
[*.py]
indent_size = 4
max_line_length = 100

# Shell scripts
[*.{sh,bash}]
indent_size = 2

# JSON files
[*.json]
indent_size = 2
122
third_party/zeroclaw/.env.example
vendored
Normal file
@@ -0,0 +1,122 @@
# ZeroClaw Environment Variables
# Copy this file to `.env` and fill in your local values.
# Never commit `.env` or any real secrets.

# ── Core Runtime ──────────────────────────────────────────────
# Provider key resolution at runtime:
#   1) explicit key passed from config/CLI
#   2) provider-specific env var (OPENROUTER_API_KEY, OPENAI_API_KEY, ...)
#   3) generic fallback env vars below

# Generic fallback API key (used when provider-specific key is absent)
API_KEY=your-api-key-here
# ZEROCLAW_API_KEY=your-api-key-here

# Default provider/model (can be overridden by CLI flags)
PROVIDER=openrouter
# ZEROCLAW_PROVIDER=openrouter
# ZEROCLAW_MODEL=anthropic/claude-sonnet-4-6
# ZEROCLAW_TEMPERATURE=0.7

# Workspace directory override
# ZEROCLAW_WORKSPACE=/path/to/workspace

# Reasoning mode (enables extended thinking for supported models)
# ZEROCLAW_REASONING_ENABLED=false
# REASONING_ENABLED=false

# ── Provider-Specific API Keys ────────────────────────────────
# OpenRouter
# OPENROUTER_API_KEY=sk-or-v1-...

# Anthropic
# ANTHROPIC_OAUTH_TOKEN=...
# ANTHROPIC_API_KEY=sk-ant-...

# OpenAI / Gemini
# OPENAI_API_KEY=sk-...
# GEMINI_API_KEY=...
# GOOGLE_API_KEY=...

# Other supported providers
# VENICE_API_KEY=...
# GROQ_API_KEY=...
# MISTRAL_API_KEY=...
# DEEPSEEK_API_KEY=...
# XAI_API_KEY=...
# TOGETHER_API_KEY=...
# FIREWORKS_API_KEY=...
# PERPLEXITY_API_KEY=...
# COHERE_API_KEY=...
# MOONSHOT_API_KEY=...
# GLM_API_KEY=...
# MINIMAX_OAUTH_TOKEN=...
# MINIMAX_API_KEY=...
# MINIMAX_OAUTH_REFRESH_TOKEN=...
# MINIMAX_OAUTH_REGION=global  # optional: global|cn
# QIANFAN_API_KEY=...
# DASHSCOPE_API_KEY=...
# ZAI_API_KEY=...
# SYNTHETIC_API_KEY=...
# OPENCODE_API_KEY=...
# OPENCODE_GO_API_KEY=...
# VERCEL_API_KEY=...
# CLOUDFLARE_API_KEY=...

# ── Gateway ──────────────────────────────────────────────────
# ZEROCLAW_GATEWAY_PORT=3000
# ZEROCLAW_GATEWAY_HOST=127.0.0.1
# ZEROCLAW_ALLOW_PUBLIC_BIND=false

# ── Storage ─────────────────────────────────────────────────
# Backend override for persistent storage (default: sqlite)
# ZEROCLAW_STORAGE_PROVIDER=sqlite

# ── Proxy ──────────────────────────────────────────────────
# Forward provider/service traffic through an HTTP(S) proxy.
# ZEROCLAW_PROXY_ENABLED=false
# ZEROCLAW_HTTP_PROXY=http://proxy.example.com:8080
# ZEROCLAW_HTTPS_PROXY=http://proxy.example.com:8080
# ZEROCLAW_ALL_PROXY=socks5://proxy.example.com:1080
# ZEROCLAW_NO_PROXY=localhost,127.0.0.1
# ZEROCLAW_PROXY_SCOPE=zeroclaw  # environment|zeroclaw|services
# ZEROCLAW_PROXY_SERVICES=openai,anthropic

# ── Optional Integrations ────────────────────────────────────
# Pushover notifications (`pushover` tool)
# PUSHOVER_TOKEN=your-pushover-app-token
# PUSHOVER_USER_KEY=your-pushover-user-key

# ── Docker Compose ───────────────────────────────────────────
# Host port mapping (used by docker-compose.yml)
# HOST_PORT=3000

# ── Z.AI GLM Coding Plan ───────────────────────────────────────
# Z.AI provides GLM models through OpenAI-compatible endpoints.
# API key format: id.secret (e.g., abc123.xyz789)
#
# Usage:
#   zeroclaw onboard --provider zai --api-key YOUR_ZAI_API_KEY
#
# Or set the environment variable:
#   ZAI_API_KEY=your-id.secret
#
# Common models: glm-5, glm-4.7, glm-4-plus, glm-4-flash
# See docs/zai-glm-setup.md for detailed configuration.

# ── Web Search ────────────────────────────────────────────────
# Web search tool for finding information on the internet.
# Enabled by default with DuckDuckGo (free, no API key required).
#
# WEB_SEARCH_ENABLED=true
# WEB_SEARCH_PROVIDER=duckduckgo
# WEB_SEARCH_MAX_RESULTS=5
# WEB_SEARCH_TIMEOUT_SECS=15
#
# Optional: Brave Search (requires API key from https://brave.com/search/api)
# WEB_SEARCH_PROVIDER=brave
# BRAVE_API_KEY=your-brave-search-api-key
#
# Optional: SearXNG (self-hosted, requires instance URL)
# WEB_SEARCH_PROVIDER=searxng
# SEARXNG_INSTANCE_URL=https://searx.example.com
1
third_party/zeroclaw/.envrc
vendored
Normal file
@@ -0,0 +1 @@
use flake
89
third_party/zeroclaw/.gemini/style-guide.md
vendored
Normal file
@@ -0,0 +1,89 @@
# ZeroClaw Code Style Guide

This style guide provides instructions for Gemini Code Assist when reviewing pull requests for the ZeroClaw project.

## Project Overview

ZeroClaw is a Rust-based, security-focused project that handles encryption, secrets management, and secure configuration. Code reviews should prioritize security, memory safety, and Rust best practices.

## General Principles

### Priority Levels

- **CRITICAL**: Security vulnerabilities, memory safety issues, data leaks
- **HIGH**: Logic errors, incorrect error handling, API misuse
- **MEDIUM**: Code quality, performance concerns, non-idiomatic Rust
- **LOW**: Style issues, documentation improvements, minor refactoring

## Rust-Specific Guidelines

### Memory Safety

1. **Borrowing and Lifetimes**: Verify proper use of borrowing and lifetime annotations
2. **Unsafe Code**: Flag any `unsafe` blocks for careful review - they should be minimal and well-justified
3. **Clone Usage**: Identify unnecessary `.clone()` calls that could be replaced with borrowing
4. **Memory Leaks**: Watch for potential memory leaks in long-running processes

### Error Handling

1. **Result Types**: All fallible operations should return `Result` types
2. **Error Propagation**: Use the `?` operator for clean error propagation
3. **Custom Errors**: Ensure custom error types implement appropriate traits
4. **Panic**: Flag any uses of `panic!`, `unwrap()`, or `expect()` in production code

### Security

1. **Cryptography**: Review all crypto code for:
   - Proper key generation and storage
   - Secure random number generation
   - No hardcoded secrets or keys
   - Use of well-vetted crypto libraries

2. **Secrets Management**:
   - Secrets should never be logged
   - Use secure memory wiping when appropriate
   - Validate encryption/decryption implementations

3. **Input Validation**: All external input must be validated

### Code Quality

1. **Documentation**: Public APIs should have doc comments with examples
2. **Tests**: Critical paths should have comprehensive test coverage
3. **Type Safety**: Prefer type-safe abstractions over primitive types
4. **Idiomatic Rust**: Follow Rust API guidelines and conventions

## Project-Specific Rules

### Configuration Management

- Configuration migrations must be backward compatible
- Validate all configuration before applying
- Test migration paths from legacy to new formats

### Dependencies

- Prefer well-maintained crates with a security audit history
- Avoid unnecessary dependencies
- Check for known vulnerabilities in dependencies

## Review Focus Areas

When reviewing PRs, pay special attention to:

1. Changes in `src/security/` - highest security scrutiny
2. Configuration migration code - ensure data integrity
3. Error handling paths - verify all edge cases
4. Public API changes - check for breaking changes
5. Test coverage - ensure critical code is tested

## Common Issues to Flag

- Unhandled errors or generic error messages
- Missing input validation
- Hardcoded credentials or secrets
- Unsafe code without justification
- Missing documentation on public APIs
- Inadequate test coverage on security-critical code
- Performance issues (unnecessary allocations, inefficient algorithms)
- Breaking API changes without deprecation warnings
61
third_party/zeroclaw/.gitattributes
vendored
Normal file
@@ -0,0 +1,61 @@
# Git attributes for ZeroClaw
# https://git-scm.com/docs/gitattributes

# Auto detect text files and perform LF normalization
* text=auto

# Source code
*.rs text eol=lf linguist-language=Rust
*.toml text eol=lf linguist-language=TOML
*.py text eol=lf linguist-language=Python
*.js text eol=lf linguist-language=JavaScript
*.ts text eol=lf linguist-language=TypeScript
*.html text eol=lf linguist-language=HTML
*.css text eol=lf linguist-language=CSS
*.scss text eol=lf linguist-language=SCSS
*.json text eol=lf linguist-language=JSON
*.yaml text eol=lf linguist-language=YAML
*.yml text eol=lf linguist-language=YAML
*.md text eol=lf linguist-language=Markdown
*.sh text eol=lf linguist-language=Shell
*.bash text eol=lf linguist-language=Shell
*.ps1 text eol=crlf linguist-language=PowerShell

# Documentation
*.txt text eol=lf
LICENSE* text eol=lf

# Configuration files
.editorconfig text eol=lf
.gitattributes text eol=lf
.gitignore text eol=lf
.dockerignore text eol=lf

# Rust-specific
Cargo.lock text eol=lf linguist-generated
Cargo.toml text eol=lf

# Declare files that will always have CRLF line endings on checkout
*.sln text eol=crlf

# Denote all files that are truly binary and should not be modified
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.ico binary
*.svg text
*.wasm binary
*.woff binary
*.woff2 binary
*.ttf binary
*.eot binary
*.mp3 binary
*.mp4 binary
*.webm binary
*.zip binary
*.tar binary
*.gz binary
*.bz2 binary
*.7z binary
*.db binary
8
third_party/zeroclaw/.githooks/pre-commit
vendored
Executable file
@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -euo pipefail

if command -v gitleaks >/dev/null 2>&1; then
  gitleaks protect --staged --redact
else
  echo "warning: gitleaks not found; skipping staged secret scan" >&2
fi
53
third_party/zeroclaw/.githooks/pre-push
vendored
Executable file
@@ -0,0 +1,53 @@
#!/usr/bin/env bash
#
# pre-push hook — runs fmt, clippy, and tests before every push.
# Install: git config core.hooksPath .githooks
# Skip: git push --no-verify

set -euo pipefail

echo "==> pre-push: running rust quality gate..."
./scripts/ci/rust_quality_gate.sh || {
  echo "FAIL: rust quality gate failed."
  exit 1
}

if [ "${ZEROCLAW_STRICT_LINT:-0}" = "1" ]; then
  echo "==> pre-push: running strict clippy warnings gate (ZEROCLAW_STRICT_LINT=1)..."
  ./scripts/ci/rust_quality_gate.sh --strict || {
    echo "FAIL: strict clippy warnings gate reported issues."
    exit 1
  }
fi

if [ "${ZEROCLAW_STRICT_DELTA_LINT:-0}" = "1" ]; then
  echo "==> pre-push: running strict delta lint gate (ZEROCLAW_STRICT_DELTA_LINT=1)..."
  ./scripts/ci/rust_strict_delta_gate.sh || {
    echo "FAIL: strict delta lint gate reported issues."
    exit 1
  }
fi

if [ "${ZEROCLAW_DOCS_LINT:-0}" = "1" ]; then
  echo "==> pre-push: running docs quality gate (ZEROCLAW_DOCS_LINT=1)..."
  ./scripts/ci/docs_quality_gate.sh || {
    echo "FAIL: docs quality gate reported issues."
    exit 1
  }
fi

if [ "${ZEROCLAW_DOCS_LINKS:-0}" = "1" ]; then
  echo "==> pre-push: running docs links gate (ZEROCLAW_DOCS_LINKS=1)..."
  ./scripts/ci/docs_links_gate.sh || {
    echo "FAIL: docs links gate reported issues."
    exit 1
  }
fi

echo "==> pre-push: running tests..."
cargo test --locked || {
  echo "FAIL: some tests did not pass."
  exit 1
}

echo "==> pre-push: all checks passed."
32
third_party/zeroclaw/.github/CODEOWNERS
vendored
Normal file
32
third_party/zeroclaw/.github/CODEOWNERS
vendored
Normal file
@@ -0,0 +1,32 @@
# Default owner for all files
* @theonlyhennygod @JordanTheJet @SimianAstronaut7

# Important functional modules
/src/agent/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/providers/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/channels/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/tools/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/gateway/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/runtime/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/memory/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/Cargo.toml @theonlyhennygod @JordanTheJet @SimianAstronaut7
/Cargo.lock @theonlyhennygod @JordanTheJet @SimianAstronaut7

# Security / tests / CI-CD ownership
/src/security/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/tests/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/workflows/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/codeql/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/dependabot.yml @theonlyhennygod @JordanTheJet @SimianAstronaut7
/SECURITY.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/actions-source-policy.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/ci-map.md @theonlyhennygod @JordanTheJet @SimianAstronaut7

# Docs & governance
/docs/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/AGENTS.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/CLAUDE.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/CONTRIBUTING.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/pr-workflow.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/reviewer-playbook.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
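GitHub resolves CODEOWNERS with last-match-wins semantics: for a given path, the last rule in the file whose pattern matches determines the owners, so the catch-all `*` rule at the top is overridden by the more specific rules below it (here every rule names the same three owners, so precedence has no visible effect yet). A minimal sketch of that resolution order, with simplified globbing and hypothetical rule data (this is not GitHub's implementation):

```python
from fnmatch import fnmatch

def resolve_owners(path, rules):
    """Return owners from the LAST rule whose pattern matches `path`.

    `rules` is an ordered list of (pattern, owners) pairs, mimicking
    CODEOWNERS top-to-bottom order. Simplifications: a leading '/' is
    treated as repo-root anchoring, and fnmatch's '*' crosses '/'.
    """
    owners = []
    for pattern, rule_owners in rules:
        pat = pattern.lstrip("/")
        if pat == "*" or fnmatch(path, pat):
            owners = rule_owners  # later matches overwrite earlier ones
    return owners

# Hypothetical rule set with DIFFERENT owners per rule, to make the
# precedence visible (the vendored file uses identical owners throughout).
rules = [
    ("*", ["@team-default"]),
    ("/src/agent/**", ["@agent-owners"]),
    ("/docs/**", ["@docs-owners"]),
]

print(resolve_owners("src/agent/mod.rs", rules))  # specific rule wins
print(resolve_owners("README.md", rules))         # falls back to catch-all
```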
138	third_party/zeroclaw/.github/ISSUE_TEMPLATE/bug_report.yml (vendored, new file)
@@ -0,0 +1,138 @@
name: Bug Report
description: Report a reproducible defect in ZeroClaw
title: "[Bug]: "
labels:
  - bug
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to report a bug.
        Please provide a minimal reproducible case so maintainers can triage quickly.
        Do not include personal/sensitive data; redact and anonymize all logs/payloads.

  - type: dropdown
    id: component
    attributes:
      label: Affected component
      options:
        - runtime/daemon
        - provider
        - channel
        - memory
        - security/sandbox
        - tooling/ci
        - docs
        - unknown
    validations:
      required: true

  - type: dropdown
    id: severity
    attributes:
      label: Severity
      options:
        - S0 - data loss / security risk
        - S1 - workflow blocked
        - S2 - degraded behavior
        - S3 - minor issue
    validations:
      required: true

  - type: textarea
    id: current
    attributes:
      label: Current behavior
      description: What is happening now?
      placeholder: The process exits with ...
    validations:
      required: true

  - type: textarea
    id: expected
    attributes:
      label: Expected behavior
      description: What should happen instead?
      placeholder: The daemon should stay alive and ...
    validations:
      required: true

  - type: textarea
    id: reproduce
    attributes:
      label: Steps to reproduce
      description: Please provide exact commands/config.
      placeholder: |
        1. zeroclaw onboard
        2. zeroclaw daemon
        3. Observe crash in logs
      render: bash
    validations:
      required: true

  - type: textarea
    id: impact
    attributes:
      label: Impact
      description: Who is affected, how often, and practical consequences (optional but helps triage).
      placeholder: |
        Affected users: ...
        Frequency: always/intermittent
        Consequence: ...
    validations:
      required: false

  - type: textarea
    id: logs
    attributes:
      label: Logs / stack traces
      description: Paste relevant logs (redact secrets, personal identifiers, and sensitive data).
      render: text
    validations:
      required: false

  - type: input
    id: version
    attributes:
      label: ZeroClaw version
      placeholder: v0.1.0 / commit SHA
    validations:
      required: true

  - type: input
    id: rust
    attributes:
      label: Rust version
      description: Required for runtime/build bugs; optional for docs/config issues.
      placeholder: rustc 1.xx.x
    validations:
      required: false

  - type: input
    id: os
    attributes:
      label: Operating system
      placeholder: Ubuntu 24.04 / macOS 15 / Windows 11
    validations:
      required: true

  - type: dropdown
    id: regression
    attributes:
      label: Regression?
      options:
        - Unknown
        - Yes, it worked before
        - No, first-time setup
    validations:
      required: true

  - type: checkboxes
    id: checks
    attributes:
      label: Pre-flight checks
      options:
        - label: I reproduced this on the latest master branch or latest release.
          required: true
        - label: I redacted secrets, tokens, and personal data from all submitted content.
          required: true
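Issue-form templates like the one above are plain YAML, so a typo in a `type:` key silently breaks the form on GitHub. A crude local sanity check can catch that before pushing; this stdlib-only sketch scans for the element types with a regex (a hypothetical helper; a real check should use an actual YAML parser and GitHub's issue-forms schema):

```python
import re

# The element types GitHub issue forms accept for `body` entries.
ALLOWED_TYPES = {"markdown", "dropdown", "textarea", "input", "checkboxes"}

def form_field_types(yaml_text):
    """Extract the `type:` of each body element from an issue-form
    template using a line scan (no third-party YAML dependency)."""
    return re.findall(r"^\s*-\s*type:\s*(\S+)", yaml_text, flags=re.M)

template = """\
body:
  - type: markdown
    attributes:
      value: |
        Thanks for reporting.
  - type: dropdown
    id: severity
  - type: textarea
    id: current
"""

types = form_field_types(template)
unknown = [t for t in types if t not in ALLOWED_TYPES]
print(types)    # extracted element types, in order
print(unknown)  # empty when every type is valid
```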
11	third_party/zeroclaw/.github/ISSUE_TEMPLATE/config.yml (vendored, new file)
@@ -0,0 +1,11 @@
blank_issues_enabled: false
contact_links:
  - name: Security vulnerability report
    url: https://github.com/zeroclaw-labs/zeroclaw/security/policy
    about: Please report security vulnerabilities privately via SECURITY.md policy.
  - name: Contribution guide
    url: https://github.com/zeroclaw-labs/zeroclaw/blob/master/CONTRIBUTING.md
    about: Please read contribution and PR requirements before opening an issue.
  - name: PR workflow & reviewer expectations
    url: https://github.com/zeroclaw-labs/zeroclaw/blob/master/docs/pr-workflow.md
    about: Read risk-based PR tracks, CI gates, and merge criteria before filing feature requests.
Some files were not shown because too many files have changed in this diff.