sgclaw: snapshot today's runtime and skill updates
@@ -1,10 +1,10 @@
# Zhihu Hotlist Excel Acceptance

- Date: 2026-03-30 00:03:42 +0800
- Date: 2026-03-30 03:46:51 +0800
- Mode: real provider + live Zhihu hotlist API + simulated browser pipe
- Workspace: `/tmp/sgclaw-live-acceptance-_655xotg`
- Workspace: `/tmp/sgclaw-live-acceptance-85j8m_dq`
- Final success: `True`
- Total score: `100/100`
- Total score: `90/100`

## Rubric

@@ -16,40 +16,43 @@

## Final Output

- exported_path: `/tmp/sgclaw-live-acceptance-_655xotg/.sgclaw-zeroclaw-workspace/out/zhihu-hotlist-1774800210904715681.xlsx`
- final_summary: `Success! I have completed the task of reading the Zhihu hotlist data and exporting it to Excel. Task summary:
- exported_path: `/tmp/sgclaw-live-acceptance-85j8m_dq/.sgclaw-zeroclaw-workspace/out/zhihu-hotlist-1774813602601179684.xlsx`
- final_summary: `Perfect! I have successfully completed reading and exporting the Zhihu hotlist data. Summary of the completed task:

## Task Completion

1. **Navigate to the Zhihu hotlist page**: successfully navigated from the Zhihu homepage to the hotlist page (https://www.zhihu.com/hot)

2. **Fetch the hotlist data**: used the SuperRPA browser interface to fetch the full content of the current hotlist page, containing 10 hotlist entries
2. **Extract the hotlist data**: fetched the current top 10 hotlist entries via the browser, including:
- Rank (1-10)
- Title (the full question text)
- Heat value (in units of 万, i.e. ten thousand)

3. **Parse the data**: extracted rank, title, and heat from the page text, formatted as:
- Rank: numeric index
- Title: the full question title
- Heat: read count in units of 万 (ten thousand)
3. **Export the Excel file**: successfully exported the structured data to an Excel file with the following columns:
- rank
- title
- heat

4. **Export the Excel file**: successfully exported the 10 hotlist entries to an Excel file with three columns: "排名" (rank), "标题" (title), "热度" (heat)

## Exported Data

The top 10 hotlist topics span several domains:
- Entertainment (the concert-licensing dispute between Li Ronghao and Shan Yichun)
- Sports (Zhang Xue Motorcycle's WSBK championship win)
- International politics and economics (Japan's foreign-reserve operation, the Iran conflict)
- Tech industry (ByteDance's growth, DDR5 memory prices)
- Social trends (the "weekend couple" model, the low-profile parents of top students)
- Everyday safety (the fire-blanket safety issue)

## Generated File

**Excel file path**: `/tmp/sgclaw-live-acceptance-_655xotg/.sgclaw-zeroclaw-workspace/out/zhihu-hotlist-1774800210904715681.xlsx`
**Excel file path**: `/tmp/sgclaw-live-acceptance-85j8m_dq/.sgclaw-zeroclaw-workspace/out/zhihu-hotlist-1774813602601179684.xlsx`

## Hotlist Data Overview

The current Zhihu hotlist top 10 covers several trending topics:
- Sports (the WSBK motorcycle race)
- Entertainment (the Li Ronghao / Shan Yichun dispute)
- International politics and economics (Japan's foreign-exchange policy, the Iran conflict)
- Tech and business (ByteDance's growth, memory prices)
- Social trends (top students' parents, Excel usage)

The hottest topic reached 1607万 (16.07 million) reads, showing how closely Zhihu users follow these subjects.`
The file contains Zhihu hotlist data collected on 2026-03-30 at 03:46:02; you can open it in Excel or any other spreadsheet program to view the full hotlist.`

## Skill Logs

- `DeepSeek config loaded from /tmp/sgclaw-live-acceptance-_655xotg/sgclaw_config.json model=deepseek-chat base_url=https://api.deepseek.com`
- `sgclaw runtime version=0.1.0 protocol=1.0`
- `DeepSeek config loaded from /tmp/sgclaw-live-acceptance-85j8m_dq/sgclaw_config.json model=deepseek-chat base_url=https://api.deepseek.com`
- `skills dir resolved to /home/zyl/projects/sgClaw/skill_lib/skills`
- `runtime profile=BrowserAttached skills_prompt_mode=Compact`
- `zeroclaw_process_message_primary`
@@ -58,24 +61,29 @@ navigate https://www.zhihu.com/hot
getText main
call openxml_office
return generated local .xlsx path`
- `loaded skills: office-export-xlsx, zhihu-hotlist, zhihu-hotlist-screen, zhihu-navigate, zhihu-write`
- `loaded skills: office-export-xlsx@0.1.0, zhihu-hotlist@0.1.0, zhihu-hotlist-screen@0.1.0, zhihu-navigate@0.1.0, zhihu-write@0.1.0`
- `navigate https://www.zhihu.com/hot`
- `getText main`
- `call zhihu-hotlist.extract_hotlist`
- `browser script failed: {"unsupported_action":"eval"}`
- `getText body`
- `getText .HotList-list`
- `call openxml_office`
- `unsupported columns: expected [rank, title, heat]`
- `call openxml_office`

## Live Hotlist Sample

- 1. How should we view Zhang Xue Motorcycle's victory at the 2026 WSBK Portuguese round? What impact will it have on the development of domestic motorcycle racing? | 1607万
- 2. Li Ronghao presented evidence and posed four pointed questions to Shan Yichun: why did unlicensed songs end up in the concert? What may have gone wrong during concert preparation? | 1064万
- 3. Japan plans to use its foreign-exchange reserves to short international crude oil to prop up the yen. What do you make of this, and could it replay the 1996 "Sumitomo copper incident"? | 573万
- 4. Officials report that a woman released as innocent after detention had her state-compensation application halted after 13 days; a local joint investigation team has been formed. What most needs to be clarified, and what reflections does this prompt? | 281万
- 5. How did ByteDance manage, in just a few years, to take on every internet giant single-handedly? | 185万
- 6. After the strike on Iran University of Science and Technology, Iran declared US and Israeli universities "legitimate attack targets". If the war spreads to educational institutions, is there any way back from the conflict? | 175万
- 7. Major gold buyer Turkey's central bank sold 8 billion US dollars of gold during the Iran war. What does this signify? | 166万
- 8. Why do the parents of the strongest top students keep the lowest profile, never posting their children's grades on WeChat Moments? | 141万
- 9. DDR5 memory prices dropped noticeably in March. Is this a short-term phenomenon, or has the memory supply crunch genuinely eased? | 135万
- 10. Why don't big companies replace Excel with pandas? | 81万
- 1. Li Ronghao presented evidence and posed four pointed questions to Shan Yichun: why did unlicensed songs end up in the concert? What may have gone wrong during concert preparation? | 1220万
- 2. How should we view Zhang Xue Motorcycle's victory at the 2026 WSBK Portuguese round? What impact will it have on the development of domestic motorcycle racing? | 370万
- 3. Japan plans to use its foreign-exchange reserves to short international crude oil to prop up the yen. What do you make of this, and could it replay the 1996 "Sumitomo copper incident"? | 356万
- 4. How did ByteDance manage, in just a few years, to take on every internet giant single-handedly? | 277万
- 5. How should we view the Zhang Xue Motorcycle 820rr taking the race-one win at the WSBK Portuguese round? How much is this title worth? | 241万
- 6. After the strike on Iran University of Science and Technology, Iran declared US and Israeli universities "legitimate attack targets". If the war spreads to educational institutions, is there any way back from the conflict? | 202万
- 7. The "weekend couple" model is on the rise: couples keep in touch via messages and video calls on weekdays and meet only on weekends. What do you think of this model? Have you had a similar experience? | 163万
- 8. A boy playing with a fire blanket ended up covered in ultra-fine glass fibers, itchy, painful, and impossible to remove. Why do fire blankets "prickle", and how should it be handled to minimize the suffering? | 158万
- 9. DDR5 memory prices dropped noticeably in March. Is this a short-term phenomenon, or has the memory supply crunch genuinely eased? | 151万
- 10. Why do the parents of the strongest top students keep the lowest profile, never posting their children's grades on WeChat Moments? | 139万

## Stderr

- `sgclaw ready: agent_id=db27f86f-4334-41a7-bc24-11e8fbd90486`
- `sgclaw ready: agent_id=4b984e63-3254-4518-a75a-127e7dad6474`

551
docs/plans/2026-03-26-zeroclaw-prompt-safety-hardening-plan.md
Normal file
@@ -0,0 +1,551 @@
# ZeroClaw Prompt Safety Hardening Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Harden ZeroClaw prompt handling and tool execution so non-skill freeform operations degrade to read-only or business-approved execution, while trusted skill-defined operations retain bounded execution privileges.

**Architecture:** Build a security gate around the existing prompt and tool-entry paths instead of rewriting the full prompt architecture. The gate classifies prompt-injection risk, records operation provenance (`trusted_skill` vs `non_skill`), sanitizes injected workspace/skill content, and enforces execution mode transitions (`clean`, `suspect_readonly`, `suspect_waiting_approval`, `suspect_business_approved`). Trusted skills gain structured business-operation metadata; non-skill operations require business-level approval before any privileged capability is released.

**Tech Stack:** Rust, vendored ZeroClaw (`third_party/zeroclaw`), existing approval/autonomy system, current prompt guard and prompt builder tests, `cargo test`.

### Task 1: Create an Isolated Worktree and Verify a Clean Baseline

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.gitignore`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/**`

**Step 1: Verify the worktree directory is safe to use**

Run:
```bash
cd /home/zyl/projects/sgClaw/claw
ls -d .worktrees
git check-ignore -v .worktrees
```

Expected: `.worktrees` exists and is ignored by git.

**Step 2: Create the implementation worktree**

Run:
```bash
cd /home/zyl/projects/sgClaw/claw
git worktree add .worktrees/zeroclaw-prompt-safety-hardening -b zeroclaw-prompt-safety-hardening
```

Expected: a new branch and worktree are created.

**Step 3: Build the baseline in the worktree**

Run:
```bash
cd /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening
cargo test -p zeroclawlabs prompt_guard -- --nocapture
cargo test -p zeroclawlabs build_system_prompt -- --nocapture
```

Expected: existing relevant tests pass before any code changes.

**Step 4: Commit the clean worktree setup if `.gitignore` changed**

Run:
```bash
git add .gitignore
git commit -m "chore: prepare worktree for prompt safety hardening"
```

Expected: commit only if `.gitignore` required an adjustment.

### Task 2: Add the Core Security-Mode Data Model

**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/security/operation_policy.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/security/mod.rs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/security/operation_policy.rs`

**Step 1: Write the failing policy tests**

Add tests that prove:
- suspicious non-skill input maps to `suspect_readonly`
- trusted skill operations can request bounded privileged execution
- any out-of-scope capability request downgrades the operation

Use concrete enums and assertions, for example:
```rust
assert_eq!(
    ExecutionMode::from_guard_and_provenance(GuardRisk::Suspicious, OperationProvenance::NonSkill),
    ExecutionMode::SuspectReadOnly
);
```

**Step 2: Run the tests to verify RED**

Run:
```bash
cd /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening
cargo test -p zeroclawlabs operation_policy -- --nocapture
```

Expected: fail because the new types do not exist yet.

**Step 3: Implement the minimal policy model**

Define:
- `GuardRisk` (`Clean`, `Suspicious`, `Dangerous`)
- `OperationProvenance` (`TrustedSkill`, `NonSkill`, `Mixed`)
- `ExecutionMode` (`Clean`, `SuspectReadOnly`, `SuspectWaitingApproval`, `SuspectBusinessApproved`)
- `CapabilityClass` for privileged business actions

Add small helper functions that do only state mapping. Do not pull prompt-building logic into this module.

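As a rough sketch of that state mapping (names follow this task; the `Dangerous` arm is an assumption, since the plan blocks dangerous input at the entry point before any dispatch happens):

```rust
// Sketch of the minimal policy model; the mapping bodies are illustrative.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum GuardRisk { Clean, Suspicious, Dangerous }

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum OperationProvenance { TrustedSkill, NonSkill, Mixed }

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ExecutionMode { Clean, SuspectReadOnly, SuspectWaitingApproval, SuspectBusinessApproved }

impl ExecutionMode {
    /// Pure state mapping only; no prompt-building logic belongs here.
    pub fn from_guard_and_provenance(risk: GuardRisk, prov: OperationProvenance) -> ExecutionMode {
        match (risk, prov) {
            (GuardRisk::Clean, _) => ExecutionMode::Clean,
            // Suspicious input degrades to read-only until an approval upgrades it.
            // Dangerous input is assumed to be blocked earlier; read-only is only
            // a defensive floor if it somehow reaches this mapping.
            (GuardRisk::Suspicious, _) | (GuardRisk::Dangerous, _) => ExecutionMode::SuspectReadOnly,
        }
    }
}

fn main() {
    assert_eq!(
        ExecutionMode::from_guard_and_provenance(GuardRisk::Suspicious, OperationProvenance::NonSkill),
        ExecutionMode::SuspectReadOnly
    );
}
```

Keeping the function a pure `match` makes the Step 1 test table-driven and trivially exhaustive.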
**Step 4: Re-run the policy tests to verify GREEN**

Run:
```bash
cargo test -p zeroclawlabs operation_policy -- --nocapture
```

Expected: the new policy tests pass.

**Step 5: Commit**

Run:
```bash
git add third_party/zeroclaw/src/security/mod.rs third_party/zeroclaw/src/security/operation_policy.rs
git commit -m "feat: add prompt security execution mode model"
```

### Task 3: Add Structured Skill Trust Metadata

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/skills/mod.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/tools/read_skill.rs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/skills/mod.rs`

**Step 1: Write failing skill metadata tests**

Add tests that prove:
- `SKILL.toml` can declare a business operation type, capability list, argument constraints, and `step_budget`
- markdown-only skills default to unprivileged metadata
- malformed privileged metadata is rejected or downgraded safely

Use a manifest shape like:
```toml
[skill]
name = "export-report"
description = "Export the monthly report"

[security]
operation_type = "browser_export_data"
allowed_capabilities = ["browser_read", "browser_export"]
step_budget = 6
approval_mode = "trusted_skill"
```

**Step 2: Run the tests to verify RED**

Run:
```bash
cargo test -p zeroclawlabs skill -- --nocapture
```

Expected: fail because the structured metadata fields are missing.

**Step 3: Implement minimal structured metadata**

Extend `Skill` with a structured security block, for example:
- `operation_type`
- `business_description`
- `allowed_capabilities`
- `arg_constraints`
- `step_budget`
- `approval_mode`

Default markdown-only skills to unprivileged metadata so existing skills remain compatible.

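One possible shape for the security block, assuming a conservative "downgrade on malformed metadata" rule (the struct and `validated` helper are hypothetical, not the vendored types):

```rust
// Hypothetical sketch of the skill security block; field names follow the
// manifest shape above.
#[derive(Debug, Clone, PartialEq)]
pub struct SkillSecurity {
    pub operation_type: Option<String>,
    pub allowed_capabilities: Vec<String>,
    pub step_budget: u32,
    pub approval_mode: ApprovalMode,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ApprovalMode { Unprivileged, TrustedSkill }

impl Default for SkillSecurity {
    // Markdown-only skills get no privileges, so existing skills keep working.
    fn default() -> Self {
        SkillSecurity {
            operation_type: None,
            allowed_capabilities: Vec::new(),
            step_budget: 0,
            approval_mode: ApprovalMode::Unprivileged,
        }
    }
}

impl SkillSecurity {
    /// Malformed privileged metadata is downgraded rather than trusted: a
    /// privileged block must name an operation type and a positive budget.
    pub fn validated(self) -> SkillSecurity {
        let privileged = self.approval_mode == ApprovalMode::TrustedSkill;
        if privileged && (self.operation_type.is_none() || self.step_budget == 0) {
            return SkillSecurity::default();
        }
        self
    }
}

fn main() {
    let bad = SkillSecurity { approval_mode: ApprovalMode::TrustedSkill, ..Default::default() };
    // A trusted_skill block with no operation_type collapses to unprivileged.
    assert_eq!(bad.validated(), SkillSecurity::default());
}
```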
**Step 4: Make `read_skill` expose the metadata**

Return or prepend enough structured metadata so the runtime can distinguish trusted skill operations from plain prompt text.

**Step 5: Re-run the tests to verify GREEN**

Run:
```bash
cargo test -p zeroclawlabs skill -- --nocapture
```

Expected: skill parsing and `read_skill` tests pass.

**Step 6: Commit**

Run:
```bash
git add third_party/zeroclaw/src/skills/mod.rs third_party/zeroclaw/src/tools/read_skill.rs
git commit -m "feat: add trusted skill security metadata"
```

### Task 4: Sanitize Injected Workspace and Skill Content Before Prompt Assembly

**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/security/prompt_sanitizer.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/security/mod.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/channels/mod.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/agent/prompt.rs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/channels/mod.rs`

**Step 1: Write failing sanitizer tests**

Add tests that prove:
- dangerous bootstrap phrases are removed, escaped, or summarized before prompt injection
- control characters are stripped
- overlong files are truncated with an audit-friendly marker
- safe business content remains readable

**Step 2: Run the tests to verify RED**

Run:
```bash
cargo test -p zeroclawlabs build_system_prompt -- --nocapture
```

Expected: fail because injected files are still copied verbatim.

**Step 3: Implement the sanitizer**

Create a small sanitizer that:
- strips control characters
- caps content length
- flags prompt-override phrases
- emits sanitized content plus metadata such as `truncated` and matched rules

Use this sanitizer in:
- `load_openclaw_bootstrap_files`
- any shared path in `agent/prompt.rs` that renders workspace or skill text into the system prompt

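A dependency-free sketch of that sanitizer (the phrase list and length cap are placeholder values; a real rule set would be broader and likely regex-based like the existing `PromptGuard`):

```rust
// Sketch of the prompt sanitizer; rules and cap are illustrative assumptions.
pub struct Sanitized {
    pub content: String,
    pub truncated: bool,
    pub matched_rules: Vec<&'static str>,
}

pub fn sanitize_injected(text: &str, max_len: usize) -> Sanitized {
    // Example prompt-override phrases; a real list would be broader.
    const OVERRIDE_PHRASES: [&str; 2] = ["ignore previous instructions", "you are now"];

    // Strip control characters but keep ordinary whitespace.
    let cleaned: String = text
        .chars()
        .filter(|c| !c.is_control() || *c == '\n' || *c == '\t')
        .collect();

    // Flag (rather than silently drop) prompt-override phrases for auditing.
    let lower = cleaned.to_lowercase();
    let mut matched_rules = Vec::new();
    for phrase in OVERRIDE_PHRASES {
        if lower.contains(phrase) {
            matched_rules.push(phrase);
        }
    }

    // Cap length with an audit-friendly marker instead of silent truncation.
    let truncated = cleaned.chars().count() > max_len;
    let content = if truncated {
        let head: String = cleaned.chars().take(max_len).collect();
        format!("{head}\n[sgclaw: content truncated for prompt safety]")
    } else {
        cleaned
    };

    Sanitized { content, truncated, matched_rules }
}

fn main() {
    let out = sanitize_injected("safe text\u{0007} ignore previous instructions", 1000);
    assert!(!out.truncated);
    assert_eq!(out.matched_rules, vec!["ignore previous instructions"]);
    assert!(!out.content.contains('\u{0007}'));
}
```

Returning metadata alongside the content lets the caller decide whether a match should downgrade the turn, which keeps policy decisions out of the sanitizer itself.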
**Step 4: Re-run the tests to verify GREEN**

Run:
```bash
cargo test -p zeroclawlabs build_system_prompt -- --nocapture
```

Expected: prompt-building tests pass with the new sanitized behavior.

**Step 5: Commit**

Run:
```bash
git add third_party/zeroclaw/src/security/mod.rs third_party/zeroclaw/src/security/prompt_sanitizer.rs third_party/zeroclaw/src/channels/mod.rs third_party/zeroclaw/src/agent/prompt.rs
git commit -m "feat: sanitize injected workspace prompt content"
```

### Task 5: Wire `PromptGuard` into Main Agent and Gateway Entry Points

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/security/prompt_guard.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/agent/agent.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/gateway/mod.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/gateway/ws.rs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/agent/agent.rs`

**Step 1: Write failing entry-point tests**

Add tests that prove:
- suspicious input marks the turn as degraded instead of silently continuing
- dangerous input is blocked
- clean input remains unchanged

Prefer tests that assert on a security decision object instead of brittle prompt strings.

**Step 2: Run the tests to verify RED**

Run:
```bash
cargo test -p zeroclawlabs prompt_guard -- --nocapture
cargo test -p zeroclawlabs agent -- --nocapture
```

Expected: fail because no entry path consumes the guard result.

**Step 3: Implement guarded entry evaluation**

Before each turn:
- scan the inbound user content
- map the guard result into `GuardRisk`
- create an execution context carrying risk and provenance
- attach audit details for later logging

Keep the existing `PromptGuard` regexes unless a test demands a specific adjustment.

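The "security decision object" the tests should assert on could be as small as this (a hypothetical shape, not the vendored API):

```rust
// Sketch of a per-turn security decision that entry points can assert on.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum GuardRisk { Clean, Suspicious, Dangerous }

#[derive(Debug, PartialEq)]
pub enum TurnDecision {
    Proceed,                          // clean input: unchanged behavior
    Degrade { reason: &'static str }, // suspicious input: continue read-only
    Block { reason: &'static str },   // dangerous input: refuse the turn
}

pub fn evaluate_turn(risk: GuardRisk) -> TurnDecision {
    match risk {
        GuardRisk::Clean => TurnDecision::Proceed,
        GuardRisk::Suspicious => TurnDecision::Degrade { reason: "prompt guard flagged input" },
        GuardRisk::Dangerous => TurnDecision::Block { reason: "prompt guard blocked input" },
    }
}

fn main() {
    assert_eq!(evaluate_turn(GuardRisk::Clean), TurnDecision::Proceed);
    assert!(matches!(evaluate_turn(GuardRisk::Suspicious), TurnDecision::Degrade { .. }));
    assert!(matches!(evaluate_turn(GuardRisk::Dangerous), TurnDecision::Block { .. }));
}
```

Asserting on this enum keeps the agent and gateway tests stable even when prompt wording changes.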
**Step 4: Re-run the tests to verify GREEN**

Run:
```bash
cargo test -p zeroclawlabs prompt_guard -- --nocapture
cargo test -p zeroclawlabs agent -- --nocapture
```

Expected: suspicious and blocked paths now behave deterministically.

**Step 5: Commit**

Run:
```bash
git add third_party/zeroclaw/src/security/prompt_guard.rs third_party/zeroclaw/src/agent/agent.rs third_party/zeroclaw/src/gateway/mod.rs third_party/zeroclaw/src/gateway/ws.rs
git commit -m "feat: enforce prompt guard at runtime entry points"
```

### Task 6: Add Business-Level Privileged Operation Registry and Approval Tokens

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/approval/mod.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/security/business_approval.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/security/mod.rs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/security/business_approval.rs`

**Step 1: Write failing business approval tests**

Add tests that prove:
- only operations in the privileged registry can request approval
- approval tokens bind to `session_id`, `operation_type`, `allowed_capabilities`, `step_budget`, and expiration
- a mismatched or expired approval token is rejected

**Step 2: Run the tests to verify RED**

Run:
```bash
cargo test -p zeroclawlabs business_approval -- --nocapture
```

Expected: fail because the business approval registry does not exist yet.

**Step 3: Implement the registry and token model**

Create:
- a privileged business operation registry
- a single-operation approval token
- helper checks for `can_request_approval` and `matches_execution_request`

Model approval at the business-operation level, not raw tool calls.

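A sketch of the token-matching check, with expiry modeled as a plain epoch second to keep the example dependency-free (the struct fields mirror the bindings listed in Step 1):

```rust
// Hypothetical single-operation approval token.
#[derive(Debug, Clone, PartialEq)]
pub struct ApprovalToken {
    pub session_id: String,
    pub operation_type: String,
    pub allowed_capabilities: Vec<String>,
    pub step_budget: u32,
    pub expires_at_epoch_s: u64,
}

impl ApprovalToken {
    /// A token only authorizes the exact operation it was granted for;
    /// any mismatch or expiry rejects the execution request.
    pub fn matches_execution_request(
        &self,
        session_id: &str,
        operation_type: &str,
        capability: &str,
        now_epoch_s: u64,
    ) -> bool {
        self.session_id == session_id
            && self.operation_type == operation_type
            && self.allowed_capabilities.iter().any(|c| c == capability)
            && now_epoch_s < self.expires_at_epoch_s
    }
}

fn main() {
    let token = ApprovalToken {
        session_id: "s1".into(),
        operation_type: "browser_export_data".into(),
        allowed_capabilities: vec!["browser_read".into(), "browser_export".into()],
        step_budget: 6,
        expires_at_epoch_s: 1_000,
    };
    assert!(token.matches_execution_request("s1", "browser_export_data", "browser_export", 999));
    // Mismatched session and expired token are both rejected.
    assert!(!token.matches_execution_request("s2", "browser_export_data", "browser_export", 999));
    assert!(!token.matches_execution_request("s1", "browser_export_data", "browser_export", 1_000));
}
```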
**Step 4: Extend the existing approval module**

Teach the approval module to carry business-level fields through the current request/response flow without breaking old call sites.

**Step 5: Re-run the tests to verify GREEN**

Run:
```bash
cargo test -p zeroclawlabs business_approval -- --nocapture
```

Expected: the token validation and registry tests pass.

**Step 6: Commit**

Run:
```bash
git add third_party/zeroclaw/src/approval/mod.rs third_party/zeroclaw/src/security/mod.rs third_party/zeroclaw/src/security/business_approval.rs
git commit -m "feat: add business-level approval registry"
```

### Task 7: Enforce Execution Modes in Tool Dispatch

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/agent/dispatcher.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/agent/agent.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/agent/loop_.rs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/agent/dispatcher.rs`

**Step 1: Write failing dispatcher tests**

Add tests that prove:
- `suspect_readonly` allows only safe read capabilities
- `trusted_skill` can execute capabilities listed in its metadata within `step_budget`
- `mixed` or non-skill privileged calls require a matching business approval token

**Step 2: Run the tests to verify RED**

Run:
```bash
cargo test -p zeroclawlabs dispatcher -- --nocapture
```

Expected: fail because the dispatcher does not yet know about execution modes.

**Step 3: Implement capability enforcement**

Before dispatching any tool:
- resolve the operation context
- map the tool call to a capability class
- reject calls outside the current execution mode
- decrement or validate `step_budget` for approved bounded flows

Do not rely on prompt text for enforcement.

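The pre-dispatch check can be sketched as one guard function over an operation context (the capability classes and field names here are assumptions for illustration):

```rust
// Sketch of pre-dispatch enforcement with a decrementing step budget.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Capability { Read, Write, BrowserExport }

pub struct OperationContext {
    pub read_only: bool,          // e.g. the suspect_readonly mode
    pub allowed: Vec<Capability>, // from trusted-skill metadata or an approval
    pub steps_left: u32,          // remaining step_budget
}

pub fn authorize(ctx: &mut OperationContext, cap: Capability) -> Result<(), &'static str> {
    // Read-only mode: only safe reads pass, regardless of the allowed set.
    if ctx.read_only && cap != Capability::Read {
        return Err("read-only mode: privileged capability rejected");
    }
    // Bounded modes: the capability must be explicitly listed.
    if !ctx.read_only && !ctx.allowed.contains(&cap) {
        return Err("capability not in allowed set");
    }
    // Every dispatched call consumes budget, so approved flows stay bounded.
    if ctx.steps_left == 0 {
        return Err("step budget exhausted");
    }
    ctx.steps_left -= 1;
    Ok(())
}

fn main() {
    let mut suspect = OperationContext { read_only: true, allowed: vec![], steps_left: 10 };
    assert!(authorize(&mut suspect, Capability::Read).is_ok());
    assert!(authorize(&mut suspect, Capability::BrowserExport).is_err());

    let mut skill = OperationContext {
        read_only: false,
        allowed: vec![Capability::Read, Capability::BrowserExport],
        steps_left: 2,
    };
    assert!(authorize(&mut skill, Capability::Read).is_ok());
    assert!(authorize(&mut skill, Capability::BrowserExport).is_ok());
    // Budget exhausted: even an allowed capability is now rejected.
    assert!(authorize(&mut skill, Capability::Read).is_err());
}
```

Because enforcement happens on the dispatch path, a model that "talks its way" past the prompt still cannot execute anything outside the context.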
**Step 4: Re-run the tests to verify GREEN**

Run:
```bash
cargo test -p zeroclawlabs dispatcher -- --nocapture
```

Expected: dispatch now respects read-only, trusted skill, and business-approved modes.

**Step 5: Commit**

Run:
```bash
git add third_party/zeroclaw/src/agent/dispatcher.rs third_party/zeroclaw/src/agent/agent.rs third_party/zeroclaw/src/agent/loop_.rs
git commit -m "feat: enforce execution mode in tool dispatch"
```

### Task 8: Default Skills Prompt Injection to Compact for Safer Runtime Behavior

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/config/schema.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/agent/prompt.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/channels/mod.rs`
- Test: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/config/schema.rs`

**Step 1: Write the failing configuration test**

Add a test that asserts the default skill prompt injection mode is `Compact` unless explicitly configured otherwise.

**Step 2: Run the test to verify RED**

Run:
```bash
cargo test -p zeroclawlabs skills_prompt_injection_mode -- --nocapture
```

Expected: fail because defaults still point to `Full`.

**Step 3: Implement the default flip**

Update config defaults and any prompt-builder defaults that currently assume `Full`. Keep explicit user config backward compatible.

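The flip itself is a one-line change to the config type's default, sketched here with a hypothetical enum name (the real type lives in `config/schema.rs`):

```rust
// Sketch of the default flip; the enum name is illustrative.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SkillsPromptMode { Full, Compact }

impl Default for SkillsPromptMode {
    // Compact is now the safer default; explicit user config still
    // selects Full, so existing configurations keep working.
    fn default() -> Self {
        SkillsPromptMode::Compact
    }
}

fn main() {
    assert_eq!(SkillsPromptMode::default(), SkillsPromptMode::Compact);
}
```

This matches the observed runtime log in the acceptance report above (`skills_prompt_mode=Compact`).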
**Step 4: Re-run the test to verify GREEN**

Run:
```bash
cargo test -p zeroclawlabs skills_prompt_injection_mode -- --nocapture
```

Expected: default configuration now resolves to `Compact`.

**Step 5: Commit**

Run:
```bash
git add third_party/zeroclaw/src/config/schema.rs third_party/zeroclaw/src/agent/prompt.rs third_party/zeroclaw/src/channels/mod.rs
git commit -m "feat: default skills prompt injection to compact"
```

### Task 9: Add Audit Logging and Regression Coverage

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/observability/mod.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/agent/agent.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/src/channels/mod.rs`
- Create: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/third_party/zeroclaw/tests/prompt_safety_regression.rs`

**Step 1: Write the failing regression tests**

Cover:
- prompt override attack from user content
- malicious `AGENTS.md` bootstrap content
- trusted skill execution within bounds
- non-skill privileged request requiring business approval
- approval token mismatch
- session history restore preserving degraded mode

**Step 2: Run the tests to verify RED**

Run:
```bash
cargo test -p zeroclawlabs --test prompt_safety_regression -- --nocapture
```

Expected: fail because the end-to-end behavior is not wired together yet.

**Step 3: Implement audit logging**

Record:
- input hash
- matched guard rules
- risk level
- provenance
- execution mode transitions
- approval decisions

Avoid logging raw sensitive content.

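One way to keep the audit record content-free is to store only a hash of the input alongside the classification fields; this sketch uses `std`'s `DefaultHasher`, and the hardcoded risk/provenance values are illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch of an audit record: the raw input is hashed, never stored.
#[derive(Debug)]
pub struct AuditRecord {
    pub input_hash: u64,
    pub matched_rules: Vec<&'static str>,
    pub risk: &'static str,
    pub provenance: &'static str,
    pub mode_transition: (&'static str, &'static str),
    pub approval_decision: Option<&'static str>,
}

pub fn audit(input: &str, matched_rules: Vec<&'static str>) -> AuditRecord {
    let mut hasher = DefaultHasher::new();
    input.hash(&mut hasher);
    AuditRecord {
        input_hash: hasher.finish(),
        matched_rules,
        // Illustrative values; the real record would carry the guard's output.
        risk: "suspicious",
        provenance: "non_skill",
        mode_transition: ("clean", "suspect_readonly"),
        approval_decision: None,
    }
}

fn main() {
    let a = audit("sensitive user input", vec!["override_phrase"]);
    let b = audit("sensitive user input", vec!["override_phrase"]);
    // Identical input yields an identical hash, so correlation works
    // across log entries without ever storing the raw content.
    assert_eq!(a.input_hash, b.input_hash);
    assert_eq!(a.mode_transition, ("clean", "suspect_readonly"));
}
```

Note that `DefaultHasher` is not a cryptographic hash; if the audit log must resist preimage guessing of short inputs, a keyed or cryptographic hash would be the safer choice.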
**Step 4: Re-run the regression tests to verify GREEN**

Run:
```bash
cargo test -p zeroclawlabs --test prompt_safety_regression -- --nocapture
```

Expected: the regression suite passes.

**Step 5: Commit**

Run:
```bash
git add third_party/zeroclaw/src/observability/mod.rs third_party/zeroclaw/src/agent/agent.rs third_party/zeroclaw/src/channels/mod.rs third_party/zeroclaw/tests/prompt_safety_regression.rs
git commit -m "test: add prompt safety regression coverage"
```

### Task 10: Final Verification and Integration Review

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/docs/L5-提示词分布与安全改造方案.md`
- Modify: `/home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening/docs/README.md`

**Step 1: Run targeted verification**

Run:
```bash
cd /home/zyl/projects/sgClaw/claw/.worktrees/zeroclaw-prompt-safety-hardening
cargo test -p zeroclawlabs prompt_guard -- --nocapture
cargo test -p zeroclawlabs build_system_prompt -- --nocapture
cargo test -p zeroclawlabs dispatcher -- --nocapture
cargo test -p zeroclawlabs --test prompt_safety_regression -- --nocapture
```

Expected: all prompt safety and dispatcher tests pass.

**Step 2: Run a broad ZeroClaw package test pass if time permits**

Run:
```bash
cargo test -p zeroclawlabs -- --nocapture
```

Expected: no regressions in the vendored package test suite, or a documented list of unrelated existing failures.

**Step 3: Update the security design docs**

Document:
- execution modes
- trusted skill metadata contract
- business approval flow
- why non-skill privileged actions are gated

**Step 4: Commit the docs**

Run:
```bash
git add docs/L5-提示词分布与安全改造方案.md docs/README.md
git commit -m "docs: record prompt safety hardening design"
```

**Step 5: Prepare merge review notes**

Write a short integration summary covering:
- changed entry points
- backward-compatibility expectations
- any skills that need metadata upgrades
- rollout recommendation for existing integrators

179
docs/plans/2026-03-27-sgclaw-chat-first-ui-refactor-plan.md
Normal file
@@ -0,0 +1,179 @@
# sgClaw Chat-First UI Refactor Implementation Plan
|
||||
|
||||
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
|
||||
|
||||
**Goal:** Rebuild the sgClaw floating chat UI into a chat-first plugin-style product where the message timeline is primary, `执行摘要` is folded into the conversation, and `调试` opens as a full-window overlay instead of occupying persistent space.
|
||||
|
||||
**Architecture:** Keep `chrome://superrpa-functions/sgclaw-chat` as the first verified host because it already has Lit-based unit tests, then mirror the same information architecture and visual hierarchy into the ordinary-page injected `sgclaw_overlay.js`. Do not introduce a new backend contract; only rearrange presentation, panel semantics, and message/result composition around the existing runtime state.
|
||||
|
||||
**Tech Stack:** Chromium WebUI, Lit templates/components, injected Shadow DOM overlay JavaScript, existing SuperRPA bridge/runtime callbacks, mainline TS unit tests.
|
||||
|
||||
### Task 1: Lock The New Information Architecture In Tests
|
||||
|
||||
**Files:**
|
||||
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_mainline_unittest.ts`
|
||||
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_settings_state_mainline_unittest.ts`
|
||||
|
||||
**Step 1: Write the failing test**
|
||||
|
||||
Add assertions for these exact product rules:
|
||||
- `getHtml()` must no longer emit the legacy `debug-note`.
|
||||
- the main chat template must define a dedicated overlay/sheet container for `history`, `settings`, and `debug`.
|
||||
- the debug panel must be described as a full-window overlay rather than a side drawer/log block.
|
||||
- the result presentation must be part of the message stream, not a standalone persistent secondary panel.
|
||||
|
||||
**Step 2: Run test to verify it fails**
|
||||
|
||||
Run:
|
||||
```bash
|
||||
node --test /home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_mainline_unittest.ts
|
||||
```
|
||||
|
||||
Expected: FAIL because current template still includes `debug-note`, side-by-side panel layout, and standalone result panel semantics.
|
||||
|
||||
**Step 3: Write minimal implementation**
|
||||
|
||||
Change only template/component strings and assertions needed to express the new structure, without touching styling yet.
|
||||
|
||||
**Step 4: Run test to verify it passes**
|
||||
|
||||
Run the same command.
|
||||
|
||||
Expected: PASS.
|
||||
|
||||
### Task 2: Refactor `chrome://` sgClaw Into Chat-First Structure

**Files:**
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.html.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.css.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-header.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-composer.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-debug-drawer.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-history-panel.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-settings-panel.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-list.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-result.ts`

**Step 1: Keep the header narrow**

Make the header carry only:
- brand
- current page label
- compact runtime status
- actions for `新对话 / 历史 / 设置 / 调试 / 收起` (new conversation / history / settings / debug / collapse)

Remove the large subtitle/debug framing and the visually heavy standalone runtime action row.

**Step 2: Make the message timeline primary**

Turn the main shell body into:
- a single timeline container
- optional empty-state presets
- no persistent secondary summary card

`finalResult` should render as a folded result card appended to the stream.

**Step 3: Convert secondary panels into full overlays**

Render `history`, `settings`, and `debug` inside a full-window overlay/sheet that covers the chat content area instead of sitting above or beside it.

**Step 4: Re-skin toward the approved direction**

Use:
- soft neutral surfaces
- restrained accent usage
- thinner borders
- calmer shadows
- clearer assistant/user card contrast

Avoid:
- a debug-workbench feel
- large gradient blocks
- heavy explanatory copy in the main flow

**Step 5: Run the unit tests**

Run:
```bash
node --test /home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_mainline_unittest.ts
```

Expected: PASS.

### Task 3: Mirror The Same Structure Into Ordinary-Page Overlay

**Files:**
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/sgclaw_overlay.js`

**Step 1: Remove the standalone result panel**

Delete the always-visible `执行摘要` (execution summary) block from the main window body.

**Step 2: Introduce overlay panels**

Change panel rendering so `history`, `settings`, and `debug` appear in a dedicated full-window overlay layer within the floating window instead of as sibling blocks consuming vertical space.

**Step 3: Rebuild the shell**

Match the `chrome://` layout:
- compact header
- primary message timeline
- folded result card inside the conversation
- sticky composer

**Step 4: Preserve behavior**

Do not break:
- `sgclaw.newConversation`
- `sgclaw.restoreConversation`
- runtime polling
- config save/load
- unread badge behavior

**Step 5: Run a syntax sanity check**

Run:
```bash
node --check /home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/sgclaw_overlay.js
```

Expected: PASS.

### Task 4: Verify Browser Resource Integration

**Files:**
- No new source files; verification only

**Step 1: Build the TS / mainline test target**

Run:
```bash
bash -lc "autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease functions_ui_mainline_unittests"
```

Expected: the build succeeds.

**Step 2: Run targeted mainline unit tests**

Run:
```bash
/home/zyl/projects/superRpa/src/out/KylinRelease/functions_ui_mainline_unittests --gtest_filter='FunctionsUiMainlineTest.*sgclaw*'
```

If the filter matches no test names, run the full binary and confirm it exits `0`.

**Step 3: Rebuild browser resources if needed**

Run:
```bash
bash -lc "autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease chrome"
```

**Step 4: Manually verify product behavior**

Check:
- the ordinary-webpage floating window
- `chrome://superrpa-functions/sgclaw-chat`
- `调试` (debug) opens as a full overlay
- `执行摘要` (execution summary) no longer blocks the main conversation
- `历史` (history) and `设置` (settings) do not consume persistent layout space

148
docs/plans/2026-03-27-sgclaw-configurable-skills-dir-plan.md
Normal file
@@ -0,0 +1,148 @@
# SGClaw Configurable Skills Directory Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Let `sgclaw` own skill-directory resolution and allow users to set a custom skills directory in `sgclaw_config.json` without relying on SuperRPA to copy skills into the runtime workspace.

**Architecture:** Extend the existing browser JSON config parser so `sgclaw` can read an optional `skillsDir` field alongside DeepSeek settings. Keep the current embedded ZeroClaw workspace for memory/config internals, but decouple skill loading from that fixed path by resolving a configurable skills root at runtime. Preserve backward compatibility by defaulting to `<workspace_root>/.sgclaw-zeroclaw-workspace/skills` when `skillsDir` is absent or empty.

**Tech Stack:** Rust, serde JSON parsing, existing ZeroClaw compatibility runtime, cargo test

### Task 1: Capture browser config requirements

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/src/config/settings.rs`
- Test: `/home/zyl/projects/sgClaw/claw/tests/compat_config_test.rs`

**Step 1: Write the failing test**

Add tests that load `sgclaw_config.json` containing:
- no `skillsDir`
- a relative `skillsDir`
- an absolute `skillsDir`

Assert that:
- a missing `skillsDir` falls back to the default workspace skills path
- relative values resolve against the browser config directory
- absolute values are preserved
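
The three assertions above boil down to one resolution rule. The real implementation lives in the Rust config code, but the logic can be sketched language-agnostically; the helper name below is hypothetical, not the actual API:

```typescript
// Hypothetical sketch of the skillsDir resolution rules (the real
// implementation is Rust in settings.rs / the config adapter).
import {posix as path} from 'node:path';

function resolveSkillsDir(
    skillsDir: string | undefined,
    configDir: string,
    workspaceRoot: string): string {
  if (!skillsDir || skillsDir.trim() === '') {
    // Missing or empty: fall back to the embedded workspace skills path.
    return path.join(workspaceRoot, '.sgclaw-zeroclaw-workspace', 'skills');
  }
  if (path.isAbsolute(skillsDir)) {
    return skillsDir;  // Absolute values are preserved as-is.
  }
  // Relative values resolve against the browser config directory.
  return path.resolve(configDir, skillsDir);
}
```

Each test case in the failing test maps to one branch of this function.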
**Step 2: Run test to verify it fails**

Run: `cargo test compat_config -- --nocapture`

Expected: FAIL because neither `DeepSeekSettings` nor the config adapter exposes a skills-directory override yet.

**Step 3: Write minimal implementation**

Add a browser-config structure that parses `skillsDir`, and expose a resolver function that returns the effective skills directory for `sgclaw`.

**Step 4: Run test to verify it passes**

Run: `cargo test compat_config -- --nocapture`

Expected: PASS for the new parsing and path-resolution cases.

### Task 2: Route compat runtime skill loading through sgclaw-owned resolution

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/src/compat/config_adapter.rs`
- Modify: `/home/zyl/projects/sgClaw/claw/src/compat/runtime.rs`
- Test: `/home/zyl/projects/sgClaw/claw/tests/compat_runtime_test.rs`

**Step 1: Write the failing test**

Add a compat runtime test that creates:
- a default workspace skill package under `.sgclaw-zeroclaw-workspace/skills`
- a custom skill package under another directory configured via `skillsDir`

Assert that the provider request payload contains only the configured skill name when `skillsDir` is set, and still contains the workspace skill names when the override is absent.

**Step 2: Run test to verify it fails**

Run: `cargo test compat_runtime -- --nocapture`

Expected: FAIL because the runtime currently always loads skills from `config.workspace_dir`.

**Step 3: Write minimal implementation**

Keep `config.workspace_dir` for ZeroClaw internal state, but load skills from the resolved effective skills directory by calling `load_skills_from_directory` directly when a custom directory is configured.

**Step 4: Run test to verify it passes**

Run: `cargo test compat_runtime -- --nocapture`

Expected: PASS, with the provider request payload showing the expected `Available Skills` content.

### Task 3: Document and verify backward compatibility

**Files:**
- Modify: `/home/zyl/projects/sgClaw/claw/docs/README.md`
- Modify: `/home/zyl/projects/sgClaw/claw/docs/L5-提示词分布与安全改造方案.md`

**Step 1: Write the failing check**

Record the expected runtime behavior:
- `sgclaw` owns skill lookup
- SuperRPA only passes `--config-path`
- `skillsDir` is optional

**Step 2: Run verification**

Run: `rg -n "skillsDir|sgclaw owns skill lookup|config-path" docs`

Expected: the text is missing until the docs are updated.

**Step 3: Write minimal documentation**

Document:
- the JSON field name
- the relative-path resolution base
- the default fallback
- the operational implication for SuperRPA integration

**Step 4: Run verification**

Run: `rg -n "skillsDir|sgclaw owns skill lookup|config-path" docs`

Expected: PASS with the updated docs.

### Task 4: Final verification

**Files:**
- Review only: `/home/zyl/projects/sgClaw/claw/src/config/settings.rs`
- Review only: `/home/zyl/projects/sgClaw/claw/src/compat/config_adapter.rs`
- Review only: `/home/zyl/projects/sgClaw/claw/src/compat/runtime.rs`
- Review only: `/home/zyl/projects/sgClaw/claw/tests/compat_config_test.rs`
- Review only: `/home/zyl/projects/sgClaw/claw/tests/compat_runtime_test.rs`

**Step 1: Run targeted tests**

Run: `cargo test compat_config -- --nocapture`

Expected: PASS

**Step 2: Run runtime tests**

Run: `cargo test compat_runtime -- --nocapture`

Expected: PASS

**Step 3: Run skill-lib structural validation**

Run: `python3 -m unittest tests.skill_lib_validation_test -v`

Expected: PASS

**Step 4: Commit**

```bash
git add docs/plans/2026-03-27-sgclaw-configurable-skills-dir-plan.md \
    src/config/settings.rs \
    src/compat/config_adapter.rs \
    src/compat/runtime.rs \
    tests/compat_config_test.rs \
    tests/compat_runtime_test.rs \
    docs/README.md \
    docs/L5-提示词分布与安全改造方案.md
git commit -m "feat: make sgclaw skills directory configurable"
```

624
docs/plans/2026-03-27-sgclaw-floating-chat-frontend-design.md
Normal file
@@ -0,0 +1,624 @@
# sgClaw Floating Chat Frontend Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Replace the current debug-style `sgclaw-chat` UI with a complete floating-chat frontend that matches the product structure of Doubao's side panel while preserving the current SuperRPA bridge and configuration capabilities.

**Architecture:** Keep `chrome://superrpa-functions/sgclaw-chat` as the first delivery host so the new UI can be built and verified without waiting for the final page-floating container. Split the current monolithic Lit component into host adapter, state modules, typed message model, presentational components, and secondary panels so the same UI can later be mounted in a real injected floating window on normal web pages. Preserve the existing browser bridge (`sgclawConnect`, `sgclawStart`, `sgclawStop`, `sgclawSubmitTask`) and re-home logs/configuration into secondary panels instead of deleting them.

**Tech Stack:** Chromium WebUI, Lit, existing `FunctionsUI` router, SuperRPA browser bridge callbacks, current `sgclaw-config` config page logic, future floating host injection in SuperRPA.

## Product Target

The frontend target is a single-column chat product, not a multi-card debug workstation.

Final visual structure:

```text
Collapsed Fab
┌────────────┐
│ sgClaw ●2 │
└────────────┘

Expanded Chat
┌──────────────────────────────────────────┐
│ sgClaw | 当前网页:example.com │
│ [新对话] [历史] [设置] [收起] │
│ 状态:待命 / 执行中 / 出错 │
├──────────────────────────────────────────┤
│ 欢迎区 / 推荐动作 │
│ [总结当前页面] [提取表格] [执行网页操作] │
├──────────────────────────────────────────┤
│ 消息流 │
│ 用户消息 │
│ 助手消息 │
│ 步骤卡 / 结果卡 / 错误卡 │
├──────────────────────────────────────────┤
│ [网页执行] [页面问答] [页面总结] │
│ [上下文开关] [调试] [更多] │
│ ┌──────────────────────────────────────┐ │
│ │ 输入任务... │ │
│ └──────────────────────────────────────┘ │
│ [发送]│
└──────────────────────────────────────────┘
```

Core UX rules:

- The primary content area is always the message stream.
- `finalResult` becomes a result card inside the message stream.
- `logs` move into a hidden debug drawer.
- `start/stop` remain available but move to the header status area.
- Configuration remains available but opens inside a settings panel first, with route-navigation fallback to `chrome://superrpa-functions/sgclaw-config`.
- The same component tree must work in `FunctionsUI` first and later inside a real injected floating host.

## Scope

### In Scope For This Frontend Plan

- Complete visual redesign of `sgclaw-chat`
- Empty state, active chat state, running state, success state, error state
- Local conversation history UI
- Embedded settings panel
- Debug drawer
- Stable typed message model
- Separation of host bridge code from UI code
- Floating launcher state model

### Explicitly Out Of Scope For First Frontend Delivery

- Real attachment upload execution
- Deep-thinking or multi-skill plugin ecosystem
- Provider/protocol redesign on the Rust side
- Full page-injected floating host implementation
- New backend APIs beyond the current bridge

## Existing Baseline To Reuse

The implementation should reuse these existing assets instead of replacing them blindly:

- Host page routing: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/functions.ts`
- Existing chat entry registration: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/functions_manifest.json`
- Current chat page bridge logic: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts`
- Current floating state prototype: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state.ts`
- Current config UI and bridge: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-config/sgclaw-config.ts`

## Final File Layout

All implementation paths below are exact and rooted in `/home/zyl/projects/superRpa/src`.

### Core Chat Entry

- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.html.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.css.ts`

### State Modules

- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_window_state.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_conversation_state.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_history_state.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_settings_state.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_state.ts`

### Host Adapter

- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_host_adapter.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_runtime_bridge.ts`

### Message Model And Rendering

- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_messages.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-list.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-user.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-assistant.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-step.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-result.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-error.ts`

### Shell Components

- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-shell.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-header.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-composer.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-history-panel.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-settings-panel.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-debug-drawer.ts`

### Build And Host Wiring

- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/BUILD.gn`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/functions.html.ts`

### Tests

- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state_mainline_unittest.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_mainline_unittest.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_window_state_mainline_unittest.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_history_state_mainline_unittest.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_settings_state_mainline_unittest.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_runtime_bridge_mainline_unittest.ts`

## Target State Model

Use a typed model instead of the current loose shape.

```ts
interface SgClawChatWindowState {
  windowOpen: boolean;
  activePanel: 'chat' | 'history' | 'settings' | 'debug';
  unreadCount: number;
}

interface SgClawChatConversationState {
  conversationId: string;
  draftInput: string;
  mode: 'web-action' | 'page-qa' | 'page-summary';
  contextEnabled: boolean;
  messages: SgClawMessage[];
}

interface SgClawMessage {
  id: string;
  type: 'user_text' | 'assistant_text' | 'task_step' | 'task_result' | 'task_error' | 'system_notice';
  role: 'user' | 'assistant' | 'system';
  content: string;
  status?: 'pending' | 'running' | 'done' | 'failed';
  timestamp: number;
  meta?: Record<string, unknown>;
}
```

The current `logs`, `messages`, `finalResult`, `pendingReply`, and `busy` state should be re-expressed through these typed stores instead of being owned directly by the entry component.
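
As an illustration, the window-state transitions these stores must support can be written as pure functions over `SgClawChatWindowState`; the helper names below are assumptions for the sketch, not the final module API:

```typescript
// Sketch only: pure transition helpers over the typed window state.
// The interface mirrors the model above; function names are assumptions.
interface SgClawChatWindowState {
  windowOpen: boolean;
  activePanel: 'chat' | 'history' | 'settings' | 'debug';
  unreadCount: number;
}

// Opening the window clears unread messages and returns to the chat panel.
function openWindow(state: SgClawChatWindowState): SgClawChatWindowState {
  return {...state, windowOpen: true, activePanel: 'chat', unreadCount: 0};
}

function closeWindow(state: SgClawChatWindowState): SgClawChatWindowState {
  return {...state, windowOpen: false};
}

function switchPanel(
    state: SgClawChatWindowState,
    panel: SgClawChatWindowState['activePanel']): SgClawChatWindowState {
  return {...state, activePanel: panel};
}
```

Because the helpers are pure and return new objects, they can be unit-tested without any DOM or Lit infrastructure.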
## Task 1: Freeze The Current Entry And Enable Real Template/CSS Modules

**Files:**
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.html.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.css.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_mainline_unittest.ts`

**Step 1: Write the failing structure test**

Add assertions that the entry no longer hardcodes the full DOM layout in `render()` and imports its shell template/style helpers.

**Step 2: Run test to verify it fails**

Run:
```bash
autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease sgclaw-chat_build_ts
```

Expected: FAIL because `sgclaw-chat.html.ts` and `sgclaw-chat.css.ts` are empty and the new test expects real exports.

**Step 3: Write the minimal implementation**

- Move root shell markup to `getHtml()`
- Move root style tokens/layout to `getCss()`
- Keep `sgclaw-chat.ts` focused on state + events

**Step 4: Run test to verify it passes**

Run the same build target.

Expected: TS build succeeds and the entry uses external template/style helpers.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/superRpa/src add \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.html.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.css.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_mainline_unittest.ts
git -C /home/zyl/projects/superRpa/src commit -m "refactor: extract sgclaw chat shell template"
```

## Task 2: Build The Window, Conversation, History, And Settings State Modules

**Files:**
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_window_state.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_conversation_state.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_history_state.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_settings_state.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_state.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state_mainline_unittest.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_window_state_mainline_unittest.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_history_state_mainline_unittest.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_settings_state_mainline_unittest.ts`

**Step 1: Write the failing pure-state tests**

Cover:
- open/close/switch panel transitions
- unread count clear on open
- create/reset conversation
- local history push/select/remove
- settings draft dirty detection
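
A minimal sketch of the history push/select/remove transitions named above (the shapes and names are assumptions; the real `sgclaw-chat_history_state.ts` may differ):

```typescript
// Sketch only: pure local-history transitions with assumed shapes.
interface HistoryEntry {
  id: string;
  title: string;
}

interface HistoryState {
  entries: HistoryEntry[];
  selectedId: string | null;
}

// Newest first; re-pushing an existing id moves it to the front.
function pushEntry(s: HistoryState, e: HistoryEntry): HistoryState {
  return {...s, entries: [e, ...s.entries.filter(x => x.id !== e.id)]};
}

// Selecting an unknown id is a no-op.
function selectEntry(s: HistoryState, id: string): HistoryState {
  return s.entries.some(x => x.id === id) ? {...s, selectedId: id} : s;
}

// Removing the selected entry also clears the selection.
function removeEntry(s: HistoryState, id: string): HistoryState {
  return {
    entries: s.entries.filter(x => x.id !== id),
    selectedId: s.selectedId === id ? null : s.selectedId,
  };
}
```

Keeping these transitions pure is what lets the failing tests run without a browser host.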
**Step 2: Run tests to verify RED**

Run:
```bash
autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease sgclaw-chat_build_ts
```

Expected: build fails because the new modules and tests do not exist yet.

**Step 3: Write the minimal implementation**

Implement pure functions only. Do not mix DOM work into these modules.

**Step 4: Run tests to verify GREEN**

Run the same build target.

Expected: all pure-state modules compile and their tests pass.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/superRpa/src add \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_window_state.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_conversation_state.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_history_state.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_settings_state.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_state.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-floating_state_mainline_unittest.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_window_state_mainline_unittest.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_history_state_mainline_unittest.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_settings_state_mainline_unittest.ts
git -C /home/zyl/projects/superRpa/src commit -m "feat: add sgclaw chat state modules"
```

## Task 3: Introduce A Host Adapter So UI Stops Owning Bridge Details

**Files:**
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_host_adapter.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_runtime_bridge.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_runtime_bridge_mainline_unittest.ts`

**Step 1: Write the failing bridge test**

Test that:
- `connect()` issues `sgclawConnect`
- `start()` issues `sgclawStart`
- `stop()` issues `sgclawStop`
- `submitTask()` issues `sgclawSubmitTask`
- callback payload parsing is handled in one place
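
One way to make those assertions testable is to give the bridge an injectable send function instead of calling `chrome.send` directly; this is a sketch of that seam under assumed names, not the final adapter API:

```typescript
// Sketch only: every outgoing bridge call funnels through one injectable
// send function, so tests can record traffic without a real chrome.send.
type SendFn = (message: string, args?: unknown[]) => void;

class SgClawRuntimeBridge {
  constructor(private readonly send: SendFn) {}

  connect(): void {
    this.send('sgclawConnect');
  }

  start(): void {
    this.send('sgclawStart');
  }

  stop(): void {
    this.send('sgclawStop');
  }

  submitTask(task: string): void {
    this.send('sgclawSubmitTask', [task]);
  }
}
```

In production the injected function would wrap `chrome.send`; in the unit test it appends each message name to an array, which is enough to cover the first four assertions.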
**Step 2: Run test to verify RED**

Run:
```bash
autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease sgclaw-chat_build_ts
```

Expected: FAIL because the adapter modules do not exist yet.

**Step 3: Write minimal implementation**

- Wrap `chrome.send`
- Centralize callback registration
- Return typed runtime events/state to the UI layer

**Step 4: Run test to verify GREEN**

Run the same build target.

Expected: adapter tests compile and pass.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/superRpa/src add \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_host_adapter.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_runtime_bridge.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts \
    chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_runtime_bridge_mainline_unittest.ts
git -C /home/zyl/projects/superRpa/src commit -m "refactor: isolate sgclaw chat host bridge"
```

## Task 4: Replace The Loose Message Format With Typed Cards In The Message Stream

**Files:**
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_messages.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-list.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-user.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-assistant.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-step.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-result.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-error.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_mainline_unittest.ts`

**Step 1: Write the failing rendering test**

Add expectations that:
- empty state shows guidance instead of a blank box
- `task_complete` renders a result card in the message stream
- `error` renders an error card in the message stream
- `pendingReply` renders an assistant pending card
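
The expectations above imply a single type-to-card dispatch over the typed message model; the tag names below are inferred from the component file names in this task and should be treated as assumptions:

```typescript
// Sketch only: map each typed message to a card component tag.
// Tag names are inferred from the component file names above.
type MessageType = 'user_text' | 'assistant_text' | 'task_step' |
    'task_result' | 'task_error' | 'system_notice';

function cardTagFor(type: MessageType): string {
  switch (type) {
    case 'user_text':
      return 'sgclaw-message-card-user';
    case 'assistant_text':
    case 'system_notice':
      return 'sgclaw-message-card-assistant';
    case 'task_step':
      return 'sgclaw-message-card-step';
    case 'task_result':
      return 'sgclaw-message-card-result';
    case 'task_error':
      return 'sgclaw-message-card-error';
  }
}
```

Keeping the dispatch in one exhaustive `switch` means the compiler flags any message type that lacks a card.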
**Step 2: Run test to verify RED**
|
||||
|
||||
Run the TS build target.
|
||||
|
||||
Expected: fail because message types and card components do not exist.
|
||||
|
||||
**Step 3: Write minimal implementation**
|
||||
|
||||
- Keep the message list single-column
|
||||
- Preserve current user/assistant turn behavior
|
||||
- Move `finalResult` handling into result-card rendering
|
||||
- Move error display into message flow
|
||||
|
||||
**Step 4: Run test to verify GREEN**
|
||||
|
||||
Run the same build target.
|
||||
|
||||
Expected: cards render correctly and the old standalone result area is no longer required.
|
||||
|
||||
**Step 5: Commit**
|
||||
|
||||
```bash
|
||||
git -C /home/zyl/projects/superRpa/src add \
|
||||
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_messages.ts \
|
||||
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-list.ts \
|
||||
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-user.ts \
|
||||
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-assistant.ts \
|
||||
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-step.ts \
|
||||
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-result.ts \
|
||||
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-message-card-error.ts \
|
||||
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts \
|
||||
chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_mainline_unittest.ts
|
||||
git -C /home/zyl/projects/superRpa/src commit -m "feat: add sgclaw chat message cards"
|
||||
```
|
||||
|
||||
## Task 5: Build The Real Header, Empty State, And Composer

**Files:**

- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-shell.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-header.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-composer.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.html.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.css.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_mainline_unittest.ts`

**Step 1: Write the failing shell test**

Assert that the rendered page now contains:

- header with title, current page label, and status pill
- empty state recommendation buttons
- fixed composer at the bottom
- no standalone `实时日志` (live log) or `最终结果` (final result) primary sections

**Step 2: Run test to verify RED**

Run the TS build target.

Expected: fail because the shell components do not exist.

**Step 3: Write minimal implementation**

- Header: title, page context, new-chat/history/settings/collapse actions
- Empty state: 3 to 4 recommended actions
- Composer: text input, send button, mode toggles, context switch

**Step 4: Run test to verify GREEN**

Run the same build target.

Expected: the page renders as a product-style chat shell.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/superRpa/src add \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-shell.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-header.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-chat-composer.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.html.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.css.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_mainline_unittest.ts
git -C /home/zyl/projects/superRpa/src commit -m "feat: add sgclaw chat shell and composer"
```

## Task 6: Embed Settings And Move Raw Logs Into A Debug Drawer

**Files:**

- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-settings-panel.ts`
- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-debug-drawer.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.css.ts`
- Reuse (read-only reference): `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-config/sgclaw-config.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_settings_state_mainline_unittest.ts`

**Step 1: Write the failing panel tests**

Cover:

- opening settings panel from header
- editing embedded config draft
- opening debug drawer and showing logs
- closing secondary panels without destroying the chat draft

**Step 2: Run test to verify RED**

Run the TS build target.

Expected: fail because secondary panel components do not exist.

**Step 3: Write minimal implementation**

- Reuse config field structure from `sgclaw-config`
- Keep raw logs in debug only
- Preserve route-navigation fallback for the full config page if embedded save/load fails

**Step 4: Run test to verify GREEN**

Run the same build target.

Expected: settings and debug layers behave as secondary panels instead of separate pages.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/superRpa/src add \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-settings-panel.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-debug-drawer.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.css.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_settings_state_mainline_unittest.ts
git -C /home/zyl/projects/superRpa/src commit -m "feat: add sgclaw settings panel and debug drawer"
```

## Task 7: Add Local Conversation History And New-Chat Recovery

**Files:**

- Create: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-history-panel.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_history_state.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_history_state_mainline_unittest.ts`

**Step 1: Write the failing history tests**

Cover:

- saving a conversation preview to local history
- creating a fresh conversation resets the message stream but keeps config
- reopening a history item restores messages and draft

**Step 2: Run test to verify RED**

Run the TS build target.

Expected: fail because the history panel and persistence behavior do not exist.

**Step 3: Write minimal implementation**

- Store history locally in `localStorage` or an equivalent browser storage API
- Keep only small metadata plus a message snapshot for the first version
- No backend schema change in this phase

**Step 4: Run test to verify GREEN**

Run the same build target.

Expected: local conversation switching works fully in the frontend.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/superRpa/src add \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/components/sgclaw-history-panel.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_history_state.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat.ts \
  chrome/browser/resources/superrpa/devtools/functions/sgclaw-chat/sgclaw-chat_history_state_mainline_unittest.ts
git -C /home/zyl/projects/superRpa/src commit -m "feat: add sgclaw local conversation history"
```

## Task 8: Wire New Shell Assets Into BUILD And Polish The Host Page

**Files:**

- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/BUILD.gn`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/functions.html.ts`
- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/devtools/functions/functions.css`

**Step 1: Write the failing host expectation**

Add a small host-level check that:

- `sgclaw-chat` still loads from the manifest
- host quick actions still work
- the function page provides enough room for the new chat shell

**Step 2: Run test/build to verify RED**

Run:

```bash
autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease sgclaw-chat_build_ts
```

Expected: fail or render incorrectly because the new component files are not yet all wired into the build and host styling.

**Step 3: Write minimal implementation**

- Add all new TS modules to `BUILD.gn`
- Keep the `sgclaw-chat` and `sgclaw-config` quick actions
- Adjust host layout so the new shell is not boxed into the old debug-page proportions

**Step 4: Run verification**

Run:

```bash
autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease sgclaw-chat_build_ts
autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease superrpa_resources
```

Expected: build completes with all new chat modules wired in.

**Step 5: Commit**

```bash
git -C /home/zyl/projects/superRpa/src add \
  chrome/browser/resources/superrpa/devtools/BUILD.gn \
  chrome/browser/resources/superrpa/devtools/functions/functions.html.ts \
  chrome/browser/resources/superrpa/devtools/functions/functions.css
git -C /home/zyl/projects/superRpa/src commit -m "chore: wire sgclaw chat frontend modules"
```

## Manual Verification Matrix

Run all manual checks in `chrome://superrpa-functions/sgclaw-chat` after the full frontend plan lands.

### UX States

- Empty state appears on first open.
- Recommended actions generate user messages.
- Composer stays visible while history/settings/debug panels switch.
- Message stream auto-scrolls to the latest item.
- Result cards and error cards appear inline.

### Runtime

- `启动` (start) works from the header area.
- `停止` (stop) works from the header area.
- Submit creates an immediate user message.
- Pending assistant card appears while waiting.
- Result card replaces the old standalone result behavior.

### Settings

- Embedded settings loads existing values.
- Save updates status and clears dirty state.
- Fallback route to `chrome://superrpa-functions/sgclaw-config` still works.

### Debug

- Logs are not visible in the main chat view.
- Debug drawer shows raw logs when opened.

### History

- New conversation starts clean.
- Previous conversation can be restored from local history.
- Unread badge clears when reopening the window.

## Execution Notes

- Keep the current backend/runtime bridge unchanged until the new frontend shell is stable.
- Do not combine page-injected floating host work into this same branch. The first milestone is a complete product-grade frontend inside the existing `FunctionsUI` host.
- When this frontend plan is complete, the next plan should focus only on mounting the same component tree inside a real page floating container.

Plan complete and saved to `docs/plans/2026-03-27-sgclaw-floating-chat-frontend-design.md`. Two execution options:

**1. Subagent-Driven (this session)** - I dispatch a fresh subagent per task and review between tasks; fast iteration

**2. Parallel Session (separate)** - Open a new session with executing-plans; batch execution with checkpoints

**Which approach?**

@@ -0,0 +1,85 @@
# sgClaw Overlay And Basic Navigation Fixes Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Make ordinary webpages render the new sgClaw floating chat design and support base navigation instructions like `打开知乎` ("open Zhihu").

**Architecture:** Keep the ordinary-page injection entrypoint unchanged, but replace its in-shadow DOM layout with the same floating-window shell used by the new debug page. On the runtime side, extend the deterministic planner with explicit homepage navigation plans for supported sites so freeform open-site commands do not fail before the compat runtime can help.

**Tech Stack:** Chromium WebUI resource pipeline, injected Shadow DOM overlay JavaScript, Rust planner tests

### Task 1: Lock the current regressions with failing tests

**Files:**

- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs`
- Modify: `/home/zyl/projects/sgClaw/claw/tests/planner_test.rs`

**Step 1: Write the failing smoke expectations**

Add assertions that the ordinary webpage overlay shows the new subtitle `面向当前网页的悬浮式对话与自动执行` ("floating chat and automated execution for the current page") and no longer exposes the old card titles like `聊天记录` ("chat history").

**Step 2: Run the smoke to verify it fails**

Run: `node /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs`
Expected: FAIL because ordinary webpages still render the old overlay shell.

**Step 3: Write the failing planner test**

Add a test asserting `plan_instruction("打开知乎")` returns one `Navigate` step to `https://www.zhihu.com`.

**Step 4: Run the planner test to verify it fails**

Run: `cargo test planner_supports_open_zhihu_homepage_instruction --test planner_test`
Expected: FAIL with `unsupported instruction: 打开知乎`.

### Task 2: Migrate the ordinary webpage overlay to the new shell

**Files:**

- Modify: `/home/zyl/projects/superRpa/src/chrome/browser/resources/superrpa/sgclaw_overlay.js`
- Test: `/home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs`

**Step 1: Replace the old card layout with the new floating shell**

Keep bridge calls, ids, and polling behavior intact, but render the new header, message pane, composer, settings panel, and debug drawer structure inside the existing injected Shadow DOM.

**Step 2: Keep runtime visibility without reintroducing the old layout**

Move logs and the final result into secondary panels or inline cards so the ordinary webpage still exposes execution details without the old four-card layout.

**Step 3: Run the smoke again**

Run: `node /home/zyl/projects/superRpa/src/chrome/browser/superrpa/sgclaw/sgclaw_chat_smoke.mjs`
Expected: PASS once rebuilt resources are being served by the browser binary.

### Task 3: Extend planner support for basic open-site commands

**Files:**

- Modify: `/home/zyl/projects/sgClaw/claw/src/agent/planner.rs`
- Test: `/home/zyl/projects/sgClaw/claw/tests/planner_test.rs`

**Step 1: Implement the minimal homepage plans**

Support `打开知乎` ("open Zhihu") and `打开百度` ("open Baidu") by returning single-step `Navigate` plans to their homepages.
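The intended lookup is a small instruction-to-homepage table. The real implementation belongs in Rust (`planner.rs`); the sketch below only illustrates the shape of the single-step plan, and the `plan_instruction` name and dict-based step format mirror the plan's wording rather than any real API:

```python
# Hypothetical sketch of the homepage-plan lookup described above.
# The real planner is Rust; names and the step shape here are illustrative.
HOMEPAGE_PLANS = {
    "打开知乎": "https://www.zhihu.com",
    "打开百度": "https://www.baidu.com",
}

def plan_instruction(instruction: str) -> list[dict]:
    url = HOMEPAGE_PLANS.get(instruction.strip())
    if url is None:
        # Matches the failure mode the RED test expects.
        raise ValueError(f"unsupported instruction: {instruction}")
    # A single-step plan: one Navigate action to the site homepage.
    return [{"step": "Navigate", "url": url}]
```

Keeping the table deterministic means the open-site commands never reach the LLM-backed compat runtime, which is the point of this task.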
**Step 2: Run planner tests**

Run: `cargo test --test planner_test`
Expected: PASS.

### Task 4: Build and verify the integrated behavior

**Files:**

- Modify: `/home/zyl/projects/superRpa/src/AGENTS.md`
- Modify: `/home/zyl/projects/superRpa/src/docs/handoffs/2026-03-27-sgclaw-runtime-verification.md`

**Step 1: Rebuild impacted targets**

Run: `autoninja -C /home/zyl/projects/superRpa/src/out/KylinRelease chrome/browser/resources/superrpa:resources sgclaw`

**Step 2: Re-run targeted verification**

Run the smoke and a focused `sgclaw` task submission check for `打开知乎`.

**Step 3: Document the final runtime path**

Record that ordinary webpages and `chrome://superrpa-functions/sgclaw-chat` now share the same floating shell, and that homepage navigation commands are handled by the planner.
docs/plans/2026-03-27-skill-lib-testing-plan.md (new file, 158 lines)

@@ -0,0 +1,158 @@
# Skill Lib Testing Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add an in-project, repeatable test harness that validates `/home/zyl/projects/sgClaw/skill_lib` against the current ZeroClaw `SKILL.md` loader and security-audit expectations.

**Architecture:** Keep the test runner inside the SGClaw repository and target the sibling `skill_lib` directory by relative path. Implement a small Python validator that mirrors the ZeroClaw markdown frontmatter parser and the relevant skill-audit checks, then cover it with a Python `unittest` suite that exercises the three actual migrated Zhihu skills.

**Tech Stack:** Python 3 standard library, `unittest`, local file-system inspection, ZeroClaw source code as behavioral reference, Markdown/YAML-like frontmatter parsing.

### Task 1: Freeze The Test Contract

**Files:**

- Create: `/home/zyl/projects/sgClaw/claw/docs/plans/2026-03-27-skill-lib-testing-plan.md`
- Reference only: `/home/zyl/projects/sgClaw/claw/third_party/zeroclaw/src/skills/mod.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/third_party/zeroclaw/src/skills/audit.rs`
- Reference only: `/home/zyl/projects/sgClaw/skill_lib/skills/*/SKILL.md`

**Step 1: Capture the loader semantics to preserve**

Document and implement tests for:

- `SKILL.md` frontmatter splitting on `---`
- supported metadata keys: `name`, `description`, `version`, `author`, `tags`
- fallback rules for name, description, and version
- prompt body must exclude the frontmatter block
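The loader semantics above are compact enough to sketch directly. This is a minimal illustration, assuming the behavior listed here; the authoritative parser is `third_party/zeroclaw/src/skills/mod.rs`, and the exact fallback values (directory name, first body line, `0.0.0`) are assumptions, not confirmed ZeroClaw defaults:

```python
from pathlib import Path

def load_skill_md(path: Path) -> dict:
    """Parse SKILL.md as the plan describes: split frontmatter on ---,
    keep a fixed key set, apply simple fallbacks. Illustrative only."""
    text = path.read_text(encoding="utf-8")
    meta: dict[str, str] = {}
    body = text
    if text.startswith("---"):
        # Frontmatter is the block between the first two `---` markers.
        _, front, body = text.split("---", 2)
        for line in front.splitlines():
            key, _, value = line.partition(":")
            if key.strip() in {"name", "description", "version", "author", "tags"}:
                meta[key.strip()] = value.strip()
    # Fallbacks (assumed): directory name, first body line, default version.
    meta.setdefault("name", path.parent.name)
    meta.setdefault("description", body.strip().splitlines()[0] if body.strip() else "")
    meta.setdefault("version", "0.0.0")
    # The prompt must exclude the frontmatter block entirely.
    return {"meta": meta, "prompt": body.strip()}
```

A real validator would also handle multi-line `tags:` lists; this sketch only covers single-line `key: value` pairs.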
**Step 2: Capture the audit semantics to preserve**

Document and implement tests for:

- skill root must contain `SKILL.md` or `SKILL.toml`
- symlinks are rejected
- shell-script files are blocked when `allow_scripts` is false
- markdown links must not escape the skill root
- high-risk command snippets inside markdown are rejected
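The audit checks above can be sketched as one pass over the skill directory. This is an assumption-laden illustration: the authoritative logic is `zeroclaw/src/skills/audit.rs`, the `RISKY_SNIPPETS` list below is an example deny-list rather than the real one, and the markdown link-boundary check is omitted for brevity:

```python
from pathlib import Path

# Example patterns only; the real deny-list lives in audit.rs.
RISKY_SNIPPETS = ("rm -rf", "curl | sh", "sudo ")

def audit_skill(root: Path, allow_scripts: bool = False) -> list[str]:
    """Collect audit findings for one skill package; empty list means pass."""
    findings: list[str] = []
    if not (root / "SKILL.md").exists() and not (root / "SKILL.toml").exists():
        findings.append("missing SKILL.md/SKILL.toml")
    for path in sorted(root.rglob("*")):  # sorted => deterministic findings
        if path.is_symlink():
            findings.append(f"symlink rejected: {path.name}")
        elif path.suffix == ".sh" and not allow_scripts:
            findings.append(f"shell script blocked: {path.name}")
        elif path.suffix == ".md":
            text = path.read_text(encoding="utf-8", errors="replace")
            for snippet in RISKY_SNIPPETS:
                if snippet in text:
                    findings.append(f"high-risk snippet in {path.name}: {snippet!r}")
    return findings
```

Sorting the walk keeps findings deterministic, which the test suite in Task 2 can then assert against exactly.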
**Step 3: Define the migrated-skill expectations**

The test suite must verify:

- exactly three skill packages exist
- the loaded names are `zhihu-hotlist`, `zhihu-navigate`, `zhihu-write`
- each package has both `references/` and `assets/`
- each description stays trigger-oriented and starts with `Use when`

### Task 2: Write The Failing Tests First

**Files:**

- Create: `/home/zyl/projects/sgClaw/claw/tests/skill_lib_validation_test.py`

**Step 1: Write a failing import-level test**

Import a not-yet-created validator module from:

- `/home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py`

Expected initial failure:

- `ModuleNotFoundError` or `FileNotFoundError`

**Step 2: Encode the project expectations**

Add tests for:

- skill discovery count and names
- parsed metadata for each current skill
- audit cleanliness for each skill with `allow_scripts=False`
- package shape (`SKILL.md`, `references/`, `assets/`)

**Step 3: Run the tests and watch them fail**

Run:

```bash
python3 -m unittest tests.skill_lib_validation_test -v
```

Expected:

- failure because the validator module does not exist yet

### Task 3: Implement The Minimal Validator

**Files:**

- Create: `/home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py`

**Step 1: Implement discovery helpers**

Implement:

- repo root resolution
- sibling `skill_lib` root resolution
- `skills/` directory enumeration

**Step 2: Implement the markdown loader**

Implement:

- frontmatter split
- lightweight frontmatter parsing
- description fallback extraction
- metadata normalization into a `SkillRecord`

**Step 3: Implement the relevant audit checks**

Implement:

- symlink detection
- blocked shell-script detection
- markdown link boundary checks
- high-risk snippet detection
- deterministic findings collection
**Step 4: Implement a small CLI**

Running:

```bash
python3 scripts/validate_skill_lib.py
```

Should:

- print one summary line per skill
- exit `0` when all skills pass
- exit non-zero when any skill fails
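The CLI contract above can be sketched as a `main` that returns the process exit code. The pass criterion here (`SKILL.md` exists) is a deliberate stand-in for the full loader-plus-audit pipeline, and the argument handling is illustrative:

```python
import sys
from pathlib import Path

def main(skill_lib: Path) -> int:
    """Print one PASS/FAIL line per skill and return an exit code:
    0 when every skill passes, 1 when any skill fails."""
    failures = 0
    for root in sorted((skill_lib / "skills").iterdir()):
        if not root.is_dir():
            continue
        # Stand-in check; the real validator runs the loader and audit here.
        ok = (root / "SKILL.md").is_file()
        print(f"{'PASS' if ok else 'FAIL'} {root.name}")
        failures += 0 if ok else 1
    return 0 if failures == 0 else 1

if __name__ == "__main__":
    # Illustrative default: the sibling skill_lib checkout.
    sys.exit(main(Path(sys.argv[1]) if len(sys.argv) > 1 else Path("../skill_lib")))
```

Returning the code from `main` rather than calling `sys.exit` inside it keeps the function directly assertable from the unit tests.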
### Task 4: Run The Tests Green

**Files:**

- Test: `/home/zyl/projects/sgClaw/claw/tests/skill_lib_validation_test.py`
- Test: `/home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py`

**Step 1: Re-run the unit tests**

Run:

```bash
python3 -m unittest tests.skill_lib_validation_test -v
```

Expected:

- all tests pass

**Step 2: Run the CLI validator**

Run:

```bash
python3 scripts/validate_skill_lib.py
```

Expected:

- all three skills print `PASS`
- process exits `0`

### Task 5: Document The Verification Entry Point

**Files:**

- Modify: `/home/zyl/projects/sgClaw/skill_lib/VERIFY.md`

**Step 1: Add the project-local validation command**

Add:

- `python3 /home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py`
- `cd /home/zyl/projects/sgClaw/claw && python3 -m unittest tests.skill_lib_validation_test`

**Step 2: Re-run both commands after the doc update**

Expected:

- validator still exits `0`
- unit tests still pass
docs/plans/2026-03-27-skill-lib-zeroclaw-plan.md (new file, 411 lines)

@@ -0,0 +1,411 @@
# Skill Lib ZeroClaw Migration Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Create `/home/zyl/projects/sgClaw/skill_lib` as a dedicated skill library directory and restructure the current Zhihu browser capabilities into ZeroClaw-style skill packages.

**Architecture:** Treat `skill_lib` as a standalone skill repository, not as an embedded Rust module tree. Use the ZeroClaw/open-skills layout `skill_lib/skills/<skill-name>/SKILL.md`, keep each skill self-contained, and move long operational detail into `references/` plus any preserved source artifacts into `assets/`. Map the four currently exposed Rust capabilities onto three end-user skills: `zhihu-navigate`, `zhihu-write`, and `zhihu-hotlist`.

**Tech Stack:** Markdown `SKILL.md`, YAML frontmatter, directory-based ZeroClaw skill packaging, existing SGClaw Zhihu Rust/JSON source material, shell validation commands.

### Task 1: Freeze The Target Layout

**Files:**

- Create: `/home/zyl/projects/sgClaw/skill_lib/`
- Create: `/home/zyl/projects/sgClaw/skill_lib/README.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/`
- Reference only: `/home/zyl/projects/sgClaw/claw/third_party/zeroclaw/src/skills/mod.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/third_party/zeroclaw/skills/browser/SKILL.md`

**Step 1: Create the top-level repository skeleton**

Create:

- `/home/zyl/projects/sgClaw/skill_lib/README.md`
- `/home/zyl/projects/sgClaw/skill_lib/skills/`

The README should state:

- this directory is a dedicated ZeroClaw-style skill library
- runtime skill packages live under `skills/<name>/`
- each skill package uses `SKILL.md` plus optional `references/`, `scripts/`, and `assets/`

**Step 2: Document the package contract in the README**

Include:

- required file: `SKILL.md`
- supported frontmatter for this repo: `name`, `description`, `version`, `author`, `tags`
- design rule: `description` must be trigger-oriented and not a workflow dump
- design rule: keep `SKILL.md` concise and push long detail into `references/`
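The two design rules can be made mechanically checkable. A minimal lint sketch, assuming the `Use when` convention stated elsewhere in this repo; the 80-line budget is an illustrative threshold chosen here, not a number from the plan:

```python
# Small lint for the README's design rules: descriptions must be
# trigger-oriented, and SKILL.md should stay short with detail in references/.
def check_contract(description: str, skill_md_lines: int, max_lines: int = 80) -> list[str]:
    problems = []
    if not description.startswith("Use when"):
        problems.append("description is not trigger-oriented (must start with 'Use when')")
    if skill_md_lines > max_lines:
        problems.append(f"SKILL.md too long ({skill_md_lines} > {max_lines} lines)")
    return problems  # empty list means the package meets the contract
```

Such a check would slot naturally into the validator planned in the sibling testing document.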
**Step 3: Run structural sanity checks**

Run:

```bash
test -d /home/zyl/projects/sgClaw/skill_lib
test -d /home/zyl/projects/sgClaw/skill_lib/skills
test -f /home/zyl/projects/sgClaw/skill_lib/README.md
```

Expected: all commands exit `0`.

### Task 2: Define The Skill Inventory And Source Mapping

**Files:**

- Create: `/home/zyl/projects/sgClaw/skill_lib/skill_inventory.md`
- Reference only: `/home/zyl/projects/sgClaw/claw/src/skill/mod.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/src/skill/router.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/src/skill/zhihu.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/src/skill/zhihu_hotlist.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/src/skill/zhihu_hotlist_store.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/src/skill/zhihu_navigation.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/resources/skills/zhihu_write_flow.json`
- Reference only: `/home/zyl/projects/sgClaw/claw/resources/skills/zhihu_hotlist_flow.json`
- Reference only: `/home/zyl/projects/sgClaw/claw/resources/skills/zhihu_navigation_pages.json`

**Step 1: Write the migration inventory**

Create `/home/zyl/projects/sgClaw/skill_lib/skill_inventory.md` with a three-row mapping:

- `zhihu-navigate` ← current `zhihu_navigate`
- `zhihu-write` ← current `zhihu_write`
- `zhihu-hotlist` ← current `zhihu_hotlist_collect` + `zhihu_hotlist_report`

**Step 2: Capture the non-migrated code responsibilities**

Document explicitly that this migration does **not** port:

- Rust router dispatch
- browser pipe transport code
- snapshot persistence implementation detail

Document that the new repo is a skill library, not a Rust runtime.

**Step 3: Record source artifacts per target skill**

For each target skill, list:

- source Rust module(s)
- source JSON flow/catalog file(s)
- important risk notes discovered during analysis

**Step 4: Validate the inventory**

Run:

```bash
rg -n "zhihu-navigate|zhihu-write|zhihu-hotlist" /home/zyl/projects/sgClaw/skill_lib/skill_inventory.md
```

Expected: all three skill names appear exactly once as top-level migration targets.
### Task 3: Author The `zhihu-navigate` Skill Package

**Files:**

- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-navigate/SKILL.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-navigate/references/routes-and-targets.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-navigate/references/selector-strategy.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-navigate/assets/zhihu_navigation_pages.source.json`
- Reference only: `/home/zyl/projects/sgClaw/claw/src/skill/zhihu_navigation.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/resources/skills/zhihu_navigation_pages.json`

**Step 1: Preserve the raw source artifact**

Copy the current navigation catalog into:

- `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-navigate/assets/zhihu_navigation_pages.source.json`

This file is for traceability only, not for frontmatter or prompt injection.

**Step 2: Write the `SKILL.md`**

Use ZeroClaw-style frontmatter:

```yaml
---
name: zhihu-navigate
description: Use when the user wants to open, switch, or navigate to a Zhihu page, tab, menu, profile area, notifications area, message area, or creator area through browser actions.
version: 0.1.0
author: sgclaw
tags:
  - zhihu
  - browser
  - navigation
---
```

The body should include:

- overview
- when to use
- workflow for route vs component vs flow navigation
- ambiguity handling rules
- output contract
- common mistakes

**Step 3: Write `routes-and-targets.md`**

Summarize:

- route/component/flow/target model
- representative target names
- known alias conflicts
- preferred disambiguation wording for future prompts

**Step 4: Write `selector-strategy.md`**

Document:

- why selectors should prefer semantic hooks over CSS hash classes
- fallback ordering
- known brittle selectors from the current source

**Step 5: Validate the package**

Run:

```bash
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-navigate/SKILL.md
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-navigate/references/routes-and-targets.md
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-navigate/references/selector-strategy.md
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-navigate/assets/zhihu_navigation_pages.source.json
```

Expected: all commands exit `0`.
### Task 4: Author The `zhihu-write` Skill Package

**Files:**
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-write/SKILL.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-write/references/editor-flow.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-write/references/publish-safety.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-write/assets/zhihu_write_flow.source.json`
- Reference only: `/home/zyl/projects/sgClaw/claw/src/skill/zhihu.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/resources/skills/zhihu_write_flow.json`

**Step 1: Preserve the raw source artifact**

Copy:
- `/home/zyl/projects/sgClaw/claw/resources/skills/zhihu_write_flow.json`

to:
- `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-write/assets/zhihu_write_flow.source.json`

**Step 2: Write the `SKILL.md`**

The frontmatter should name a single skill:
- `name: zhihu-write`
- a description focused on when article drafting or publishing is requested

The body should include:
- prerequisites before touching the editor
- workflow for draft-only vs publish
- an explicit confirmation gate before publish
- required final report fields: title, mode, final URL if published, unresolved issues

**Step 3: Write `editor-flow.md`**

Document:
- entry page
- editor readiness checks
- title/body fill rules
- publish confirmation sequence
- URL capture rules

**Step 4: Write `publish-safety.md`**

Document:
- when to stop and ask for confirmation
- what to do if title verification fails
- what to do if the URL still points at the edit page after publishing
- brittle selectors that must be revalidated first

**Step 5: Validate the package**

Run:
```bash
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-write/SKILL.md
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-write/references/editor-flow.md
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-write/references/publish-safety.md
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-write/assets/zhihu_write_flow.source.json
```

Expected: all commands exit `0`.
### Task 5: Author The `zhihu-hotlist` Skill Package

**Files:**
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/SKILL.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/references/collection-flow.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/references/report-format.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/references/data-quality.md`
- Create: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/assets/zhihu_hotlist_flow.source.json`
- Reference only: `/home/zyl/projects/sgClaw/claw/src/skill/zhihu_hotlist.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/src/skill/zhihu_hotlist_store.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/resources/skills/zhihu_hotlist_flow.json`

**Step 1: Preserve the raw source artifact**

Copy:
- `/home/zyl/projects/sgClaw/claw/resources/skills/zhihu_hotlist_flow.json`

to:
- `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/assets/zhihu_hotlist_flow.source.json`

**Step 2: Write the `SKILL.md`**

Use one skill to cover:
- hotlist collection
- comment metric collection
- snapshot-style reporting

The body should clearly separate:
- collection workflow
- report workflow
- partial-failure handling
- output contract

**Step 3: Write `collection-flow.md`**

Include:
- hotlist page detection
- hotlist HTML capture strategy
- top N extraction
- detail-page comment collection flow
- metric parsing notes

**Step 4: Write `report-format.md`**

Define:
- report header line
- per-item summary line
- field names and order
- handling when comment metrics are missing

**Step 5: Write `data-quality.md`**

Document:
- why partial success must be surfaced
- what counts as incomplete data
- known parser risks
- recommended caution language in outputs

**Step 6: Validate the package**

Run:
```bash
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/SKILL.md
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/references/collection-flow.md
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/references/report-format.md
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/references/data-quality.md
test -f /home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/assets/zhihu_hotlist_flow.source.json
```

Expected: all commands exit `0`.
### Task 6: Normalize Frontmatter And Trigger Quality

**Files:**
- Modify: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-navigate/SKILL.md`
- Modify: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-write/SKILL.md`
- Modify: `/home/zyl/projects/sgClaw/skill_lib/skills/zhihu-hotlist/SKILL.md`

**Step 1: Normalize frontmatter keys**

Ensure each `SKILL.md` contains exactly these frontmatter keys in this order:
- `name`
- `description`
- `version`
- `author`
- `tags`

Do not add Rust-only or unofficial parser fields as required metadata.

**Step 2: Check naming rules**

Ensure skill names are:
- lowercase
- hyphenated
- stable

Names to keep:
- `zhihu-navigate`
- `zhihu-write`
- `zhihu-hotlist`

**Step 3: Tighten descriptions**

Each description must:
- begin with `Use when`
- describe triggering conditions
- mention Zhihu/browser context
- avoid dumping full workflow detail

**Step 4: Validate frontmatter**

Run:
```bash
rg -n "^name: |^description: |^version: |^author: |^tags:" /home/zyl/projects/sgClaw/skill_lib/skills/*/SKILL.md
```

Expected: every skill emits the same five key families.
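The `rg` check above confirms key presence but not order or trigger wording; a sketch that asserts both, shown against an inline sample since the real files live under `/home/zyl/projects/sgClaw` (the `frontmatter_keys` helper and its `awk` extraction are assumptions about frontmatter shape, not part of the plan):

```shell
#!/usr/bin/env bash
# Sketch: assert a SKILL.md frontmatter carries exactly name, description,
# version, author, tags -- in that order -- and that the description
# begins with "Use when".
set -eu

frontmatter_keys() {
  # Print top-level keys between the first pair of '---' delimiters.
  awk '/^---$/ { n++; next } n == 1 && /^[a-z]+:/ { sub(":.*", "", $0); print }' "$1"
}

skill_md=$(mktemp)
cat > "$skill_md" <<'EOF'
---
name: zhihu-navigate
description: Use when the user wants to navigate to a Zhihu page through browser actions.
version: 0.1.0
author: sgclaw
tags:
  - zhihu
---
EOF

expected='name
description
version
author
tags'

[ "$(frontmatter_keys "$skill_md")" = "$expected" ] && echo "keys ok"
grep -q '^description: Use when' "$skill_md" && echo "trigger ok"
rm -f "$skill_md"
```

Pointing the same two checks at `skill_lib/skills/*/SKILL.md` in a loop would cover all three packages.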

### Task 7: Add Repository-Level Verification Notes

**Files:**
- Create: `/home/zyl/projects/sgClaw/skill_lib/VERIFY.md`
- Modify: `/home/zyl/projects/sgClaw/skill_lib/README.md`

**Step 1: Create `VERIFY.md`**

Document the manual verification checklist:
- all skill packages are under `skill_lib/skills/`
- each package has `SKILL.md`
- long details live in `references/`
- preserved source JSON is in `assets/`
- no Rust source is copied into the skill repo

**Step 2: Link verification from the README**

Add a short section in `README.md` pointing to `VERIFY.md`.

**Step 3: Run repository-level checks**

Run:
```bash
find /home/zyl/projects/sgClaw/skill_lib/skills -mindepth 2 -maxdepth 2 -name SKILL.md | sort
find /home/zyl/projects/sgClaw/skill_lib/skills -type d \( -name references -o -name assets \) | sort
```

Expected:
- exactly three `SKILL.md` files
- each skill has `references/`
- each skill has `assets/`
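The two `find` commands above can be wrapped into a single pass/fail check; a sketch (the `check_layout` helper is illustrative), demonstrated on a temporary tree shaped like the target layout since the real root is `/home/zyl/projects/sgClaw/skill_lib/skills`:

```shell
#!/usr/bin/env bash
# Sketch: verify the repository layout described above -- exactly three
# skill packages, each with SKILL.md, references/, and assets/.
set -eu

check_layout() {
  local root=$1 skill_count
  skill_count=$(find "$root" -mindepth 2 -maxdepth 2 -name SKILL.md | wc -l)
  [ "$skill_count" -eq 3 ] \
    || { echo "expected 3 SKILL.md files, found $skill_count"; return 1; }
  for skill in "$root"/*/; do
    [ -d "$skill/references" ] || { echo "missing references/ in $skill"; return 1; }
    [ -d "$skill/assets" ] || { echo "missing assets/ in $skill"; return 1; }
  done
  echo "layout ok"
}

# Demo tree mirroring skill_lib/skills/.
root=$(mktemp -d)
for name in zhihu-navigate zhihu-write zhihu-hotlist; do
  mkdir -p "$root/$name/references" "$root/$name/assets"
  touch "$root/$name/SKILL.md"
done
check_layout "$root"
rm -rf "$root"
```

A failing directory is reported by name, which is more actionable than eyeballing two sorted listings.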

### Task 8: Final Review Before Claiming Completion

**Files:**
- Review only: `/home/zyl/projects/sgClaw/skill_lib/`
- Review only: `/home/zyl/projects/sgClaw/claw/docs/plans/2026-03-27-skill-lib-zeroclaw-plan.md`

**Step 1: Review against ZeroClaw runtime constraints**

Check that the final library respects the currently observed runtime facts:
- directory-based skills
- `SKILL.md` supported
- simple frontmatter fields
- optional `references/`, `scripts/`, `assets/`

**Step 2: Review against authoring quality**

Check that each skill:
- is self-contained
- has a narrow trigger boundary
- avoids copying Rust internals into the prompt body
- surfaces ambiguity and failure modes

**Step 3: Produce the implementation report**

The completion report must include:
- created directories
- created skill packages
- any deliberate deviations from upstream ZeroClaw examples
- verification commands actually run
- unresolved risks

**Step 4: Stop before unrelated expansion**

Do not add:
- extra skills beyond the three mapped ones
- generic utility libraries
- unrelated automation scripts
- runtime code changes in `/home/zyl/projects/sgClaw/claw/src/skill/`
@@ -1,6 +1,6 @@
 {
   "version": "1.0",
-  "demo_only_domains": ["baidu.com", "www.baidu.com", "zhihu.com", "www.zhihu.com"],
+  "demo_only_domains": ["baidu.com", "www.baidu.com", "zhihu.com", "www.zhihu.com", "zhuanlan.zhihu.com"],
   "domains": {
     "allowed": [
       "oa.example.com",
@@ -9,7 +9,8 @@
       "baidu.com",
       "www.baidu.com",
       "zhihu.com",
-      "www.zhihu.com"
+      "www.zhihu.com",
+      "zhuanlan.zhihu.com"
     ]
   },
   "pipe_actions": {

@@ -7,9 +7,7 @@ use std::path::PathBuf;
 use crate::compat::config_adapter::resolve_skills_dir_from_sgclaw_settings;
 use crate::compat::runtime::CompatTaskContext;
 use crate::config::SgClawSettings;
-use crate::pipe::{
-    AgentMessage, BrowserMessage, BrowserPipeTool, PipeError, Transport,
-};
+use crate::pipe::{AgentMessage, BrowserMessage, BrowserPipeTool, PipeError, Transport};
 
 #[derive(Debug, Clone, PartialEq, Eq)]
 pub struct AgentRuntimeContext {
@@ -218,8 +216,7 @@ pub fn handle_browser_message_with_context<T: Transport + 'static>(
             level: "info".to_string(),
             message: format!(
                 "runtime profile={:?} skills_prompt_mode={:?}",
-                settings.runtime_profile,
-                settings.skills_prompt_mode
+                settings.runtime_profile, settings.skills_prompt_mode
             ),
         });
         if crate::compat::orchestration::should_use_primary_orchestration(

@@ -189,7 +189,10 @@ fn plan_zhihu_search(query: &str) -> TaskPlan {
 
 fn build_zhihu_hotlist_preview(instruction: &str) -> ExecutionPreview {
     let normalized = instruction.to_ascii_lowercase();
-    if normalized.contains("dashboard") || instruction.contains("大屏") || instruction.contains("新标签页") {
+    if normalized.contains("dashboard")
+        || instruction.contains("大屏")
+        || instruction.contains("新标签页")
+    {
         return ExecutionPreview {
             summary: "先规划再执行知乎热榜大屏生成".to_string(),
             steps: vec![
@@ -3,6 +3,10 @@ use serde_json::{json, Map, Value};
 use crate::llm::{ChatMessage, LlmError, LlmProvider, ToolDefinition, ToolFunctionCall};
 use crate::pipe::{Action, AgentMessage, BrowserPipeTool, PipeError, Transport};
 
+/// Legacy browser-only runtime kept for dev-only validation and narrow regression coverage.
+/// Production browser submit flow uses `compat::runtime` plus `runtime::engine`.
+pub const LEGACY_DEV_ONLY: bool = true;
+
 const BROWSER_ACTION_TOOL_NAME: &str = "browser_action";
 
 #[derive(Debug, Clone, PartialEq)]
@@ -21,8 +25,7 @@ pub fn execute_task_with_provider<P: LlmProvider, T: Transport>(
     let messages = vec![
         ChatMessage {
             role: "system".to_string(),
-            content: "You are sgClaw. Use browser_action to complete the browser task."
-                .to_string(),
+            content: "You are sgClaw. Use browser_action to complete the browser task.".to_string(),
         },
         ChatMessage {
             role: "user".to_string(),
@@ -35,8 +38,8 @@
         .map_err(map_llm_error_to_pipe_error)?;
 
     for call in calls {
-        let browser_call = parse_browser_action_call(call)
-            .map_err(|err| PipeError::Protocol(err.to_string()))?;
+        let browser_call =
+            parse_browser_action_call(call).map_err(|err| PipeError::Protocol(err.to_string()))?;
 
         transport.send(&AgentMessage::LogEntry {
             level: "info".to_string(),
@@ -26,10 +26,15 @@ impl<T: Transport> BrowserScriptSkillTool<T> {
         browser_tool: BrowserPipeTool<T>,
     ) -> anyhow::Result<Self> {
         let script_path = skill_root.join(&tool.command);
-        let canonical_skill_root = skill_root.canonicalize().unwrap_or_else(|_| skill_root.to_path_buf());
-        let canonical_script_path = script_path
-            .canonicalize()
-            .map_err(|err| anyhow::anyhow!("failed to resolve browser script {}: {err}", script_path.display()))?;
+        let canonical_skill_root = skill_root
+            .canonicalize()
+            .unwrap_or_else(|_| skill_root.to_path_buf());
+        let canonical_script_path = script_path.canonicalize().map_err(|err| {
+            anyhow::anyhow!(
+                "failed to resolve browser script {}: {err}",
+                script_path.display()
+            )
+        })?;
         if !canonical_script_path.starts_with(&canonical_skill_root) {
             anyhow::bail!(
                 "browser script path escapes skill root: {}",
@@ -108,7 +113,11 @@ impl<T: Transport + 'static> Tool for BrowserScriptSkillTool<T> {
                     "expected_domain must be a non-empty string, got {other}"
                 )))
             }
-            None => return Ok(failed_tool_result("missing required field expected_domain".to_string())),
+            None => {
+                return Ok(failed_tool_result(
+                    "missing required field expected_domain".to_string(),
+                ))
+            }
         };
         let expected_domain = match normalize_domain_like(&raw_expected_domain) {
             Some(value) => value,
@@ -148,7 +157,9 @@ impl<T: Transport + 'static> Tool for BrowserScriptSkillTool<T> {
         };
 
         if !result.success {
-            return Ok(failed_tool_result(format_browser_script_error(&result.data)));
+            return Ok(failed_tool_result(format_browser_script_error(
+                &result.data,
+            )));
         }
 
         let payload = result
@@ -101,14 +101,14 @@ impl<T: Transport + 'static> Tool for ZeroClawBrowserTool<T> {
             Err(err) => return Ok(failed_tool_result(err.to_string())),
         };
 
-        let result = match self.browser_tool.invoke(
-            request.action,
-            request.params,
-            &request.expected_domain,
-        ) {
-            Ok(result) => result,
-            Err(err) => return Ok(failed_tool_result(err.to_string())),
-        };
+        let result =
+            match self
+                .browser_tool
+                .invoke(request.action, request.params, &request.expected_domain)
+            {
+                Ok(result) => result,
+                Err(err) => return Ok(failed_tool_result(err.to_string())),
+            };
 
         let output = serde_json::to_string(&json!({
             "seq": result.seq,
@@ -122,8 +122,7 @@
         Ok(ToolResult {
             success: result.success,
             output,
-            error: (!result.success)
-                .then(|| format_browser_action_error(&result.data)),
+            error: (!result.success).then(|| format_browser_action_error(&result.data)),
         })
     }
 }
@@ -134,7 +133,9 @@ struct BrowserActionRequest {
     params: Value,
 }
 
-fn parse_browser_action_request(args: Value) -> Result<BrowserActionRequest, BrowserActionAdapterError> {
+fn parse_browser_action_request(
+    args: Value,
+) -> Result<BrowserActionRequest, BrowserActionAdapterError> {
     let mut args = match args {
         Value::Object(args) => args,
         other => {
@@ -2,8 +2,8 @@ use std::collections::HashMap;
 use std::ffi::OsStr;
 use std::path::{Path, PathBuf};
 
-use zeroclaw::Config as ZeroClawConfig;
 use zeroclaw::config::schema::ModelProviderConfig;
+use zeroclaw::Config as ZeroClawConfig;
 
 use crate::compat::cron_adapter::configure_embedded_cron;
 use crate::compat::memory_adapter::configure_embedded_memory;
@@ -13,7 +13,9 @@
 const SGCLAW_ZEROCLAW_WORKSPACE_DIR: &str = ".sgclaw-zeroclaw-workspace";
 const SKILLS_DIR_NAME: &str = "skills";
 
-pub fn build_zeroclaw_config(workspace_root: &Path) -> Result<ZeroClawConfig, crate::config::ConfigError> {
+pub fn build_zeroclaw_config(
+    workspace_root: &Path,
+) -> Result<ZeroClawConfig, crate::config::ConfigError> {
     let settings = SgClawSettings::from_env()?;
     Ok(build_zeroclaw_config_from_sgclaw_settings(
         workspace_root,
@@ -65,7 +65,10 @@ where
 
     for job in jobs {
         if !matches!(job.job_type, JobType::Agent) {
-            anyhow::bail!("unsupported cron job type in sgclaw compat: {:?}", job.job_type);
+            anyhow::bail!(
+                "unsupported cron job type in sgclaw compat: {:?}",
+                job.job_type
+            );
        }
 
        let started_at = Utc::now();
@@ -14,19 +14,17 @@ pub fn log_entry_for_turn_event(
             level: "info".to_string(),
             message: format_tool_call(name, args, skill_versions),
         }),
-        TurnEvent::ToolResult { output, .. } if is_tool_error(output) => Some(AgentMessage::LogEntry {
-            level: "error".to_string(),
-            message: output.trim_start_matches("Error: ").to_string(),
-        }),
+        TurnEvent::ToolResult { output, .. } if is_tool_error(output) => {
+            Some(AgentMessage::LogEntry {
+                level: "error".to_string(),
+                message: output.trim_start_matches("Error: ").to_string(),
+            })
+        }
         _ => None,
     }
 }
 
-fn format_tool_call(
-    name: &str,
-    args: &Value,
-    skill_versions: &HashMap<String, String>,
-) -> String {
+fn format_tool_call(name: &str, args: &Value, skill_versions: &HashMap<String, String>) -> String {
     if name == "read_skill" {
         let skill_name = args
             .get("name")
@@ -49,7 +47,10 @@
 
     match action {
         "navigate" => {
-            let url = args.get("url").and_then(Value::as_str).unwrap_or("<missing-url>");
+            let url = args
+                .get("url")
+                .and_then(Value::as_str)
+                .unwrap_or("<missing-url>");
             format!("navigate {url}")
         }
         "type" => {
@@ -1,8 +1,8 @@
 use async_trait::async_trait;
 use serde::Deserialize;
 use serde_json::{json, Value};
-use std::collections::BTreeSet;
 use std::collections::BTreeMap;
+use std::collections::BTreeSet;
 use std::fs;
 use std::path::{Path, PathBuf};
 use std::process::Command;
@@ -93,7 +93,11 @@ impl Tool for OpenXmlOfficeTool {
             return Ok(failed_tool_result("rows must not be empty".to_string()));
         }
 
-        if parsed.rows.iter().any(|row| row.len() != parsed.columns.len()) {
+        if parsed
+            .rows
+            .iter()
+            .any(|row| row.len() != parsed.columns.len())
+        {
             return Ok(failed_tool_result(
                 "each row must match the declared columns length".to_string(),
             ));
@@ -153,10 +157,10 @@ fn failed_tool_result(error: String) -> ToolResult {
 }
 
 fn create_job_root(workspace_root: &Path) -> anyhow::Result<PathBuf> {
-    let nanos = SystemTime::now()
-        .duration_since(UNIX_EPOCH)?
-        .as_nanos();
-    let path = workspace_root.join(".sgclaw-openxml").join(format!("{nanos}"));
+    let nanos = SystemTime::now().duration_since(UNIX_EPOCH)?.as_nanos();
+    let path = workspace_root
+        .join(".sgclaw-openxml")
+        .join(format!("{nanos}"));
     fs::create_dir_all(&path)?;
     Ok(path)
 }
@@ -188,10 +192,7 @@ fn resolve_column_order(
         .iter()
         .map(|value| value.to_string())
         .collect::<BTreeSet<_>>();
-    let expected_set = expected_columns
-        .iter()
-        .cloned()
-        .collect::<BTreeSet<_>>();
+    let expected_set = expected_columns.iter().cloned().collect::<BTreeSet<_>>();
 
     if provided_set != expected_set {
         return None;
@@ -9,6 +9,12 @@ pub fn should_use_primary_orchestration(
     page_url: Option<&str>,
     page_title: Option<&str>,
 ) -> bool {
+    if crate::compat::workflow_executor::detect_route(instruction, page_url, page_title)
+        .is_some_and(|route| crate::compat::workflow_executor::prefers_direct_execution(&route))
+    {
+        return true;
+    }
+
     let normalized = instruction.to_ascii_lowercase();
     let needs_export = normalized.contains("excel")
         || normalized.contains("xlsx")
@@ -33,6 +39,18 @@ pub fn execute_task_with_sgclaw_settings<T: Transport + 'static>(
         task_context.page_url.as_deref(),
         task_context.page_title.as_deref(),
     );
+    if let Some(route) = route.clone() {
+        if crate::compat::workflow_executor::prefers_direct_execution(&route) {
+            return crate::compat::workflow_executor::execute_route(
+                transport,
+                &browser_tool,
+                workspace_root,
+                instruction,
+                task_context,
+                route,
+            );
+        }
+    }
     let primary_result = crate::compat::runtime::execute_task_with_sgclaw_settings(
         transport,
         browser_tool.clone(),
@@ -44,13 +62,16 @@
 
     match (route, primary_result) {
         (Some(route), Ok(summary))
-            if crate::compat::workflow_executor::should_fallback_after_summary(&summary, &route) =>
+            if crate::compat::workflow_executor::should_fallback_after_summary(
+                &summary, &route,
+            ) =>
         {
             crate::compat::workflow_executor::execute_route(
                 transport,
                 &browser_tool,
                 workspace_root,
                 instruction,
                 task_context,
                 route,
             )
         }
@@ -60,6 +81,7 @@
             &browser_tool,
             workspace_root,
             instruction,
             task_context,
             route,
         ),
         (None, Err(err)) => Err(err),
@@ -5,23 +5,18 @@ use async_trait::async_trait;
 use futures_util::{stream, StreamExt};
 use zeroclaw::agent::TurnEvent;
 use zeroclaw::config::Config as ZeroClawConfig;
-use zeroclaw::providers::{
-    self, ChatMessage, ChatRequest, ChatResponse, Provider,
-};
-use zeroclaw::providers::traits::{
-    ProviderCapabilities, StreamEvent, StreamOptions, StreamResult,
-};
+use zeroclaw::providers::traits::{ProviderCapabilities, StreamEvent, StreamOptions, StreamResult};
+use zeroclaw::providers::{self, ChatMessage, ChatRequest, ChatResponse, Provider};
 
 use crate::compat::browser_script_skill_tool::build_browser_script_skill_tools;
 use crate::compat::browser_tool_adapter::ZeroClawBrowserTool;
 use crate::compat::config_adapter::{
-    build_zeroclaw_config_from_sgclaw_settings,
-    resolve_skills_dir_from_sgclaw_settings,
+    build_zeroclaw_config_from_sgclaw_settings, resolve_skills_dir_from_sgclaw_settings,
 };
+use crate::compat::event_bridge::log_entry_for_turn_event;
 use crate::compat::openxml_office_tool::OpenXmlOfficeTool;
 use crate::compat::screen_html_export_tool::ScreenHtmlExportTool;
 use crate::config::{DeepSeekSettings, OfficeBackend, SgClawSettings};
-use crate::compat::event_bridge::log_entry_for_turn_event;
 use crate::pipe::{BrowserPipeTool, ConversationMessage, PipeError, Transport};
 use crate::runtime::RuntimeEngine;
@@ -136,13 +131,17 @@ pub async fn execute_task_with_provider<T: Transport + 'static>(
                 .map_err(map_anyhow_to_pipe_error)?,
         );
     }
-    if matches!(settings.office_backend, OfficeBackend::OpenXml) &&
-        engine.should_attach_openxml_office_tool(instruction)
+    if matches!(settings.office_backend, OfficeBackend::OpenXml)
+        && engine.should_attach_openxml_office_tool(instruction)
     {
-        tools.push(Box::new(OpenXmlOfficeTool::new(config.workspace_dir.clone())));
+        tools.push(Box::new(OpenXmlOfficeTool::new(
+            config.workspace_dir.clone(),
+        )));
     }
     if engine.should_attach_screen_html_export_tool(instruction) {
-        tools.push(Box::new(ScreenHtmlExportTool::new(config.workspace_dir.clone())));
+        tools.push(Box::new(ScreenHtmlExportTool::new(
+            config.workspace_dir.clone(),
+        )));
     }
     let mut agent = engine.build_agent(
         provider,
@@ -190,10 +189,7 @@ pub async fn execute_task_with_provider<T: Transport + 'static>(
 
 fn build_provider(config: &ZeroClawConfig) -> Result<Box<dyn Provider>, PipeError> {
     let provider_name = config.default_provider.as_deref().unwrap_or("deepseek");
-    let model_name = config
-        .default_model
-        .as_deref()
-        .unwrap_or("deepseek-chat");
+    let model_name = config.default_model.as_deref().unwrap_or("deepseek-chat");
     let runtime_options = providers::provider_runtime_options_from_config(config);
     let resolved_provider_name = if provider_name == "deepseek" {
         config
@@ -258,7 +254,9 @@ impl Provider for NonStreamingProvider {
         model: &str,
         temperature: f64,
     ) -> anyhow::Result<String> {
-        self.inner.chat_with_history(messages, model, temperature).await
+        self.inner
+            .chat_with_history(messages, model, temperature)
+            .await
     }
 
     async fn chat(
@@ -238,29 +238,40 @@ fn derive_categories(table: &[ScreenTableRow]) -> Vec<ScreenCategory> {
 
     grouped
         .into_iter()
-        .map(|((category_code, category_label), (item_count, total_heat))| ScreenCategory {
-            category_code,
-            category_label,
-            item_count,
-            total_heat,
-            avg_heat: if item_count == 0 {
-                0
-            } else {
-                total_heat / item_count
-            },
-        })
+        .map(
+            |((category_code, category_label), (item_count, total_heat))| ScreenCategory {
+                category_code,
+                category_label,
+                item_count,
+                total_heat,
+                avg_heat: if item_count == 0 {
+                    0
+                } else {
+                    total_heat / item_count
+                },
+            },
+        )
         .collect()
 }
 
 fn classify_title(title: &str) -> (&'static str, &'static str) {
     let normalized = title.to_ascii_lowercase();
-    if contains_any(&normalized, &["ai", "芯片", "科技", "算法", "机器人", "无人机"]) {
+    if contains_any(
+        &normalized,
+        &["ai", "芯片", "科技", "算法", "机器人", "无人机"],
+    ) {
         return ("technology", "科技");
     }
-    if contains_any(&normalized, &["电影", "综艺", "明星", "周杰伦", "短剧", "娱乐"]) {
+    if contains_any(
+        &normalized,
+        &["电影", "综艺", "明星", "周杰伦", "短剧", "娱乐"],
+    ) {
         return ("entertainment", "娱乐");
     }
-    if contains_any(&normalized, &["足球", "比赛", "联赛", "国足", "体育", "冠军"]) {
+    if contains_any(
+        &normalized,
+        &["足球", "比赛", "联赛", "国足", "体育", "冠军"],
+    ) {
         return ("sports", "体育");
     }
     if contains_any(&normalized, &["航母", "作战", "军", "军事", "演训"]) {
@@ -1,20 +1,17 @@
|
||||
use std::fs;
|
||||
use std::path::Path;
|
||||
use std::thread;
|
||||
use std::time::Duration;
|
||||
|
||||
use regex::Regex;
|
||||
use serde_json::{json, Value};
|
||||
use zeroclaw::tools::Tool;
|
||||
|
||||
use crate::compat::runtime::CompatTaskContext;
|
||||
use crate::compat::openxml_office_tool::OpenXmlOfficeTool;
|
||||
use crate::compat::runtime::CompatTaskContext;
|
||||
use crate::compat::screen_html_export_tool::ScreenHtmlExportTool;
|
||||
use crate::pipe::{
|
||||
Action,
|
||||
AgentMessage,
|
||||
BrowserPipeTool,
|
||||
ConversationMessage,
|
||||
PipeError,
|
||||
Transport,
|
||||
Action, AgentMessage, BrowserPipeTool, ConversationMessage, PipeError, Transport,
|
||||
};
|
||||
|
||||
const ZHIHU_DOMAIN: &str = "www.zhihu.com";
|
||||
@@ -22,6 +19,10 @@ const ZHIHU_EDITOR_DOMAIN: &str = "zhuanlan.zhihu.com";
|
||||
const ZHIHU_HOT_URL: &str = "https://www.zhihu.com/hot";
|
||||
const ZHIHU_CREATOR_URL: &str = "https://www.zhihu.com/creator";
|
||||
const ZHIHU_EDITOR_URL: &str = "https://zhuanlan.zhihu.com/write";
|
||||
const HOTLIST_READY_POLL_ATTEMPTS: usize = 10;
|
||||
const HOTLIST_READY_POLL_INTERVAL: Duration = Duration::from_millis(500);
|
||||
const HOTLIST_TEXT_READY_PATTERN: &str =
|
||||
r"(?:^|\n)\s*1(?:[.、]|\s)+.+\d+(?:\.\d+)?\s*(?:万|亿|k|K|m|M)(?:热度)?";
|
||||
#[derive(Debug, Clone, PartialEq, Eq)]
|
||||
pub enum WorkflowRoute {
|
||||
ZhihuHotlistExportXlsx,
|
||||
@@ -51,10 +52,16 @@ pub fn detect_route(
|
||||
) -> Option<WorkflowRoute> {
|
||||
if crate::runtime::is_zhihu_hotlist_task(instruction, page_url, page_title) {
|
||||
let normalized = instruction.to_ascii_lowercase();
|
||||
if normalized.contains("dashboard") || instruction.contains("大屏") || instruction.contains("新标签页") {
|
||||
if normalized.contains("dashboard")
|
||||
|| instruction.contains("大屏")
|
||||
|| instruction.contains("新标签页")
|
||||
{
|
||||
return Some(WorkflowRoute::ZhihuHotlistScreen);
|
||||
}
|
||||
if normalized.contains("excel") || normalized.contains("xlsx") || instruction.contains("导出") {
|
||||
if normalized.contains("excel")
|
||||
|| normalized.contains("xlsx")
|
||||
|| instruction.contains("导出")
|
||||
{
|
||||
return Some(WorkflowRoute::ZhihuHotlistExportXlsx);
|
||||
}
|
||||
}
|
||||
@@ -73,9 +80,11 @@ pub fn detect_route(
|
||||
pub fn prefers_direct_execution(route: &WorkflowRoute) -> bool {
|
||||
matches!(
|
||||
route,
|
||||
WorkflowRoute::ZhihuArticleEntry |
|
||||
WorkflowRoute::ZhihuArticleDraft |
|
||||
WorkflowRoute::ZhihuArticlePublish
|
||||
WorkflowRoute::ZhihuHotlistExportXlsx
|
||||
| WorkflowRoute::ZhihuHotlistScreen
|
||||
| WorkflowRoute::ZhihuArticleEntry
|
||||
| WorkflowRoute::ZhihuArticleDraft
|
||||
| WorkflowRoute::ZhihuArticlePublish
|
||||
)
|
||||
}
|
||||
|
||||
@@ -85,22 +94,23 @@ pub fn should_fallback_after_summary(summary: &str, route: &WorkflowRoute) -> bo
         return false;
     }

-    let looks_like_denial = summary.contains("拒绝") ||
-        normalized.contains("denied") ||
-        normalized.contains("failed") ||
-        normalized.contains("protocol error") ||
-        normalized.contains("maximum tool iterations") ||
-        summary.contains("失败") ||
-        summary.contains("无法");
+    let looks_like_denial = summary.contains("拒绝")
+        || normalized.contains("denied")
+        || normalized.contains("failed")
+        || normalized.contains("protocol error")
+        || normalized.contains("maximum tool iterations")
+        || summary.contains("失败")
+        || summary.contains("无法");

-    looks_like_denial || matches!(
-        route,
-        WorkflowRoute::ZhihuHotlistExportXlsx |
-        WorkflowRoute::ZhihuHotlistScreen |
-        WorkflowRoute::ZhihuArticleEntry |
-        WorkflowRoute::ZhihuArticleDraft |
-        WorkflowRoute::ZhihuArticlePublish
-    )
+    looks_like_denial
+        || matches!(
+            route,
+            WorkflowRoute::ZhihuHotlistExportXlsx
+                | WorkflowRoute::ZhihuHotlistScreen
+                | WorkflowRoute::ZhihuArticleEntry
+                | WorkflowRoute::ZhihuArticleDraft
+                | WorkflowRoute::ZhihuArticlePublish
+        )
 }

 pub fn execute_route<T: Transport + 'static>(
@@ -114,15 +124,19 @@ pub fn execute_route<T: Transport + 'static>(
     match route {
         WorkflowRoute::ZhihuHotlistExportXlsx | WorkflowRoute::ZhihuHotlistScreen => {
             let top_n = extract_top_n(instruction);
-            let items = collect_hotlist_items(transport, browser_tool, top_n)?;
+            let items = collect_hotlist_items(transport, browser_tool, top_n, task_context)?;
             if items.is_empty() {
                 return Err(PipeError::Protocol(
                     "知乎热榜采集失败:未能从页面文本中解析到热榜条目".to_string(),
                 ));
             }
             match route {
-                WorkflowRoute::ZhihuHotlistExportXlsx => export_xlsx(transport, workspace_root, &items),
-                WorkflowRoute::ZhihuHotlistScreen => export_screen(transport, workspace_root, &items),
+                WorkflowRoute::ZhihuHotlistExportXlsx => {
+                    export_xlsx(transport, workspace_root, &items)
+                }
+                WorkflowRoute::ZhihuHotlistScreen => {
+                    export_screen(transport, workspace_root, &items)
+                }
                 _ => unreachable!("handled by outer match"),
             }
         }
@@ -142,8 +156,9 @@ fn collect_hotlist_items<T: Transport + 'static>(
     transport: &T,
     browser_tool: &BrowserPipeTool<T>,
     top_n: usize,
+    task_context: &CompatTaskContext,
 ) -> Result<Vec<HotlistItem>, PipeError> {
-    navigate_hotlist_with_retry(transport, browser_tool)?;
+    ensure_hotlist_page_ready(transport, browser_tool, task_context)?;
     transport.send(&AgentMessage::LogEntry {
         level: "info".to_string(),
         message: "call zhihu-hotlist.extract_hotlist".to_string(),
@@ -168,35 +183,87 @@ fn collect_hotlist_items<T: Transport + 'static>(
     parse_hotlist_items_payload(response.data.get("text").unwrap_or(&response.data))
 }

-fn navigate_hotlist_with_retry<T: Transport + 'static>(
+fn ensure_hotlist_page_ready<T: Transport + 'static>(
     transport: &T,
     browser_tool: &BrowserPipeTool<T>,
+    task_context: &CompatTaskContext,
 ) -> Result<(), PipeError> {
+    let starts_on_hotlist = task_context
+        .page_url
+        .as_deref()
+        .is_some_and(|url| url.starts_with(ZHIHU_HOT_URL))
+        || task_context
+            .page_title
+            .as_deref()
+            .is_some_and(|title| title.contains("热榜"));
+
+    if starts_on_hotlist && poll_for_hotlist_readiness(browser_tool)? {
+        return Ok(());
+    }
+
     let mut last_error = None;
-    for _ in 0..2 {
-        transport.send(&AgentMessage::LogEntry {
-            level: "info".to_string(),
-            message: format!("navigate {ZHIHU_HOT_URL}"),
-        })?;
-        match browser_tool.invoke(
-            Action::Navigate,
-            json!({ "url": ZHIHU_HOT_URL }),
-            ZHIHU_DOMAIN,
-        ) {
-            Ok(response) if response.success => return Ok(()),
-            Ok(response) => {
-                last_error = Some(PipeError::Protocol(format!(
-                    "navigate failed: {}",
-                    response.data
-                )));
-            }
-            Err(err) => last_error = Some(err),
-        }
-    }
-
-    Err(last_error.unwrap_or_else(|| {
-        PipeError::Protocol("navigate failed without detailed error".to_string())
-    }))
+    for attempt in 0..2 {
+        navigate_hotlist_page(transport, browser_tool)?;
+        if poll_for_hotlist_readiness(browser_tool)? {
+            return Ok(());
+        }
+        last_error = Some(PipeError::Protocol(format!(
+            "知乎热榜页面已打开,但在短轮询窗口内仍未出现可读热榜内容(attempt={})",
+            attempt + 1
+        )));
+    }
+
+    Err(last_error.unwrap_or_else(|| PipeError::Protocol("知乎热榜页面未就绪".to_string())))
+}
+
+fn navigate_hotlist_page<T: Transport + 'static>(
+    transport: &T,
+    browser_tool: &BrowserPipeTool<T>,
+) -> Result<(), PipeError> {
+    transport.send(&AgentMessage::LogEntry {
+        level: "info".to_string(),
+        message: format!("navigate {ZHIHU_HOT_URL}"),
+    })?;
+    let response = browser_tool.invoke(
+        Action::Navigate,
+        json!({ "url": ZHIHU_HOT_URL }),
+        ZHIHU_DOMAIN,
+    )?;
+    if response.success {
+        Ok(())
+    } else {
+        Err(PipeError::Protocol(format!(
+            "navigate failed: {}",
+            response.data
+        )))
+    }
+}
+
+fn poll_for_hotlist_readiness<T: Transport + 'static>(
+    browser_tool: &BrowserPipeTool<T>,
+) -> Result<bool, PipeError> {
+    let ready_pattern =
+        Regex::new(HOTLIST_TEXT_READY_PATTERN).expect("hotlist readiness regex must compile");
+    for attempt in 0..HOTLIST_READY_POLL_ATTEMPTS {
+        let response =
+            browser_tool.invoke(Action::GetText, json!({ "selector": "body" }), ZHIHU_DOMAIN)?;
+        if response.success {
+            let payload = response.data.get("text").unwrap_or(&response.data);
+            if hotlist_text_looks_ready(payload, &ready_pattern) {
+                return Ok(true);
+            }
+        }
+
+        if attempt + 1 < HOTLIST_READY_POLL_ATTEMPTS {
+            thread::sleep(HOTLIST_READY_POLL_INTERVAL);
+        }
+    }
+    Ok(false)
+}
+
+fn hotlist_text_looks_ready(payload: &Value, ready_pattern: &Regex) -> bool {
+    let text = payload.as_str().unwrap_or_default();
+    text.contains("热榜") && ready_pattern.is_match(text)
+}

 fn export_xlsx<T: Transport>(
@@ -224,15 +291,17 @@ fn export_xlsx<T: Transport>(
         .map_err(|err| PipeError::Protocol(err.to_string()))?;
     if !result.success {
         return Err(PipeError::Protocol(
-            result.error.unwrap_or_else(|| "openxml_office failed".to_string()),
+            result
+                .error
+                .unwrap_or_else(|| "openxml_office failed".to_string()),
         ));
     }

     let payload: Value = serde_json::from_str(&result.output)
         .map_err(|err| PipeError::Protocol(format!("invalid openxml_office output: {err}")))?;
-    let output_path = payload["output_path"]
-        .as_str()
-        .ok_or_else(|| PipeError::Protocol("openxml_office did not return output_path".to_string()))?;
+    let output_path = payload["output_path"].as_str().ok_or_else(|| {
+        PipeError::Protocol("openxml_office did not return output_path".to_string())
+    })?;
     Ok(format!("已导出知乎热榜 Excel {output_path}"))
 }

@@ -257,15 +326,17 @@ fn export_screen<T: Transport>(
         .map_err(|err| PipeError::Protocol(err.to_string()))?;
     if !result.success {
         return Err(PipeError::Protocol(
-            result.error.unwrap_or_else(|| "screen_html_export failed".to_string()),
+            result
+                .error
+                .unwrap_or_else(|| "screen_html_export failed".to_string()),
         ));
     }

     let payload: Value = serde_json::from_str(&result.output)
         .map_err(|err| PipeError::Protocol(format!("invalid screen_html_export output: {err}")))?;
-    let output_path = payload["output_path"]
-        .as_str()
-        .ok_or_else(|| PipeError::Protocol("screen_html_export did not return output_path".to_string()))?;
+    let output_path = payload["output_path"].as_str().ok_or_else(|| {
+        PipeError::Protocol("screen_html_export did not return output_path".to_string())
+    })?;
     Ok(format!("已生成知乎热榜大屏 {output_path}"))
 }

@@ -300,7 +371,9 @@ fn execute_zhihu_article_route<T: Transport + 'static>(
         ZHIHU_DOMAIN,
     )?;
     if is_login_required_payload(&creator_state) {
-        return Ok(build_login_block_message(payload_current_url(&creator_state)));
+        return Ok(build_login_block_message(payload_current_url(
+            &creator_state,
+        )));
     }
     if payload_status(&creator_state) == Some("creator_home") {
         return Ok(build_creator_entry_missing_message(payload_current_url(
@@ -321,10 +394,14 @@ fn execute_zhihu_article_route<T: Transport + 'static>(
         ZHIHU_EDITOR_DOMAIN,
     )?;
     if is_login_required_payload(&editor_state) {
-        return Ok(build_login_block_message(payload_current_url(&editor_state)));
+        return Ok(build_login_block_message(payload_current_url(
+            &editor_state,
+        )));
     }
     if payload_status(&editor_state) != Some("editor_ready") {
-        return Ok(build_editor_unavailable_message(payload_current_url(&editor_state)));
+        return Ok(build_editor_unavailable_message(payload_current_url(
+            &editor_state,
+        )));
     }

     transport.send(&AgentMessage::LogEntry {
@@ -347,7 +424,10 @@ fn execute_zhihu_article_route<T: Transport + 'static>(
     }

     match payload_status(&fill_result) {
-        Some("draft_ready") => Ok(format!("已进入知乎文章编辑器并写入草稿《{}》", article.title)),
+        Some("draft_ready") => Ok(format!(
+            "已进入知乎文章编辑器并写入草稿《{}》",
+            article.title
+        )),
         Some("publish_clicked") | Some("publish_submitted") => {
             Ok(format!("已提交知乎文章发布流程《{}》", article.title))
         }
@@ -380,7 +460,9 @@ fn execute_zhihu_article_entry_route<T: Transport + 'static>(
         ZHIHU_DOMAIN,
     )?;
     if is_login_required_payload(&creator_state) {
-        return Ok(build_login_block_message(payload_current_url(&creator_state)));
+        return Ok(build_login_block_message(payload_current_url(
+            &creator_state,
+        )));
     }
     if payload_status(&creator_state) == Some("creator_home") {
         return Ok(build_creator_entry_missing_message(payload_current_url(
@@ -401,13 +483,17 @@ fn execute_zhihu_article_entry_route<T: Transport + 'static>(
         ZHIHU_EDITOR_DOMAIN,
     )?;
     if is_login_required_payload(&editor_state) {
-        return Ok(build_login_block_message(payload_current_url(&editor_state)));
+        return Ok(build_login_block_message(payload_current_url(
+            &editor_state,
+        )));
     }
     if payload_status(&editor_state) == Some("editor_ready") {
         return Ok("已进入知乎文章编辑器。".to_string());
     }

-    Ok(build_editor_unavailable_message(payload_current_url(&editor_state)))
+    Ok(build_editor_unavailable_message(payload_current_url(
+        &editor_state,
+    )))
 }

 fn load_hotlist_extractor_script(top_n: usize) -> Result<String, PipeError> {
@@ -443,7 +529,11 @@ fn parse_hotlist_items_payload(payload: &Value) -> Result<Vec<HotlistItem>, Pipe

     let rank = cells[0]
         .as_u64()
-        .or_else(|| cells[0].as_str().and_then(|value| value.parse::<u64>().ok()))
+        .or_else(|| {
+            cells[0]
+                .as_str()
+                .and_then(|value| value.parse::<u64>().ok())
+        })
         .unwrap_or((items.len() + 1) as u64);
     let title = cells[1].as_str().unwrap_or_default().trim().to_string();
     let heat = cells[2].as_str().unwrap_or_default().trim().to_string();
@@ -483,7 +573,10 @@ fn navigate_zhihu_page<T: Transport + 'static>(
     if response.success {
         Ok(())
     } else {
-        Err(PipeError::Protocol(format!("navigate failed: {}", response.data)))
+        Err(PipeError::Protocol(format!(
+            "navigate failed: {}",
+            response.data
+        )))
     }
 }

@@ -507,7 +600,9 @@ fn execute_browser_skill_script<T: Transport + 'static>(
         )));
     }

-    Ok(normalize_payload(response.data.get("text").unwrap_or(&response.data)))
+    Ok(normalize_payload(
+        response.data.get("text").unwrap_or(&response.data),
+    ))
 }

 fn navigate_to_editor_after_creator_entry<T: Transport + 'static>(
@@ -542,6 +637,239 @@ fn navigate_to_editor_after_creator_entry<T: Transport + 'static>(
     Ok(())
 }

+#[cfg(test)]
+mod tests {
+    use super::*;
+    use std::collections::VecDeque;
+    use std::sync::{Arc, Mutex};
+
+    use crate::pipe::{BrowserMessage, Timing};
+    use crate::security::MacPolicy;
+
+    struct MockWorkflowTransport {
+        sent: Mutex<Vec<AgentMessage>>,
+        responses: Mutex<VecDeque<BrowserMessage>>,
+    }
+
+    impl MockWorkflowTransport {
+        fn new(responses: Vec<BrowserMessage>) -> Self {
+            Self {
+                sent: Mutex::new(Vec::new()),
+                responses: Mutex::new(VecDeque::from(responses)),
+            }
+        }
+
+        fn sent_messages(&self) -> Vec<AgentMessage> {
+            self.sent.lock().unwrap().clone()
+        }
+    }
+
+    impl Transport for MockWorkflowTransport {
+        fn send(&self, message: &AgentMessage) -> Result<(), PipeError> {
+            self.sent.lock().unwrap().push(message.clone());
+            Ok(())
+        }
+
+        fn recv_timeout(&self, _timeout: Duration) -> Result<BrowserMessage, PipeError> {
+            self.responses
+                .lock()
+                .unwrap()
+                .pop_front()
+                .ok_or(PipeError::Timeout)
+        }
+    }
+
+    fn zhihu_test_policy() -> MacPolicy {
+        MacPolicy::from_json_str(
+            &json!({
+                "version": "1.0",
+                "domains": { "allowed": ["www.zhihu.com"] },
+                "pipe_actions": {
+                    "allowed": ["navigate", "getText", "eval"],
+                    "blocked": []
+                }
+            })
+            .to_string(),
+        )
+        .unwrap()
+    }
+
+    fn success_browser_response(seq: u64, data: Value) -> BrowserMessage {
+        BrowserMessage::Response {
+            seq,
+            success: true,
+            data,
+            aom_snapshot: vec![],
+            timing: Timing {
+                queue_ms: 1,
+                exec_ms: 10,
+            },
+        }
+    }
+
+    #[test]
+    fn collect_hotlist_items_skips_navigation_when_hot_page_is_already_readable() {
+        let transport = Arc::new(MockWorkflowTransport::new(vec![
+            success_browser_response(
+                1,
+                json!({ "text": "知乎热榜\n1 问题一 344万热度\n2 问题二 266万热度" }),
+            ),
+            success_browser_response(
+                2,
+                json!({
+                    "text": {
+                        "source": "https://www.zhihu.com/hot",
+                        "sheet_name": "知乎热榜",
+                        "columns": ["rank", "title", "heat"],
+                        "rows": [[1, "问题一", "344万"], [2, "问题二", "266万"]]
+                    }
+                }),
+            ),
+        ]));
+        let browser_tool =
+            BrowserPipeTool::new(transport.clone(), zhihu_test_policy(), vec![1, 2, 3, 4])
+                .with_response_timeout(Duration::from_secs(1));
+        let task_context = CompatTaskContext {
+            page_url: Some("https://www.zhihu.com/hot".to_string()),
+            page_title: Some("知乎热榜".to_string()),
+            ..CompatTaskContext::default()
+        };
+
+        let items = collect_hotlist_items(transport.as_ref(), &browser_tool, 10, &task_context)
+            .expect("hotlist collection should succeed");
+
+        assert_eq!(items.len(), 2);
+        let sent = transport.sent_messages();
+        assert!(sent.iter().any(|message| {
+            matches!(
+                message,
+                AgentMessage::Command { action, .. } if action == &Action::GetText
+            )
+        }));
+        assert!(sent.iter().any(|message| {
+            matches!(
+                message,
+                AgentMessage::Command { action, .. } if action == &Action::Eval
+            )
+        }));
+        assert!(!sent.iter().any(|message| {
+            matches!(
+                message,
+                AgentMessage::Command { action, .. } if action == &Action::Navigate
+            )
+        }));
+    }
+
+    #[test]
+    fn collect_hotlist_items_polls_after_navigation_before_retrying_navigation() {
+        let transport = Arc::new(MockWorkflowTransport::new(vec![
+            success_browser_response(1, json!({ "navigated": true })),
+            success_browser_response(2, json!({ "text": "" })),
+            success_browser_response(3, json!({ "text": "" })),
+            success_browser_response(4, json!({ "text": "知乎热榜\n1 问题一 344万热度" })),
+            success_browser_response(
+                5,
+                json!({
+                    "text": {
+                        "source": "https://www.zhihu.com/hot",
+                        "sheet_name": "知乎热榜",
+                        "columns": ["rank", "title", "heat"],
+                        "rows": [[1, "问题一", "344万"]]
+                    }
+                }),
+            ),
+        ]));
+        let browser_tool =
+            BrowserPipeTool::new(transport.clone(), zhihu_test_policy(), vec![1, 2, 3, 4, 5])
+                .with_response_timeout(Duration::from_secs(1));
+        let task_context = CompatTaskContext {
+            page_url: Some("https://www.zhihu.com/".to_string()),
+            page_title: Some("知乎".to_string()),
+            ..CompatTaskContext::default()
+        };
+
+        let items = collect_hotlist_items(transport.as_ref(), &browser_tool, 10, &task_context)
+            .expect("hotlist collection should succeed after readiness polling");
+
+        assert_eq!(items.len(), 1);
+        let sent = transport.sent_messages();
+        let actions = sent
+            .iter()
+            .filter_map(|message| match message {
+                AgentMessage::Command { action, .. } => Some(action.clone()),
+                _ => None,
+            })
+            .collect::<Vec<_>>();
+        assert_eq!(
+            actions,
+            vec![
+                Action::Navigate,
+                Action::GetText,
+                Action::GetText,
+                Action::GetText,
+                Action::Eval
+            ]
+        );
+    }
+
+    #[test]
+    fn collect_hotlist_items_retries_navigation_after_short_readiness_budget_expires() {
+        let transport = Arc::new(MockWorkflowTransport::new(vec![
+            success_browser_response(1, json!({ "navigated": true })),
+            success_browser_response(2, json!({ "text": "" })),
+            success_browser_response(3, json!({ "text": "" })),
+            success_browser_response(4, json!({ "text": "" })),
+            success_browser_response(5, json!({ "text": "" })),
+            success_browser_response(6, json!({ "text": "" })),
+            success_browser_response(7, json!({ "text": "" })),
+            success_browser_response(8, json!({ "text": "" })),
+            success_browser_response(9, json!({ "text": "" })),
+            success_browser_response(10, json!({ "text": "" })),
+            success_browser_response(11, json!({ "text": "" })),
+            success_browser_response(12, json!({ "navigated": true })),
+            success_browser_response(13, json!({ "text": "知乎热榜\n1 问题一 344万热度" })),
+            success_browser_response(
+                14,
+                json!({
+                    "text": {
+                        "source": "https://www.zhihu.com/hot",
+                        "sheet_name": "知乎热榜",
+                        "columns": ["rank", "title", "heat"],
+                        "rows": [[1, "问题一", "344万"]]
+                    }
+                }),
+            ),
+        ]));
+        let browser_tool = BrowserPipeTool::new(
+            transport.clone(),
+            zhihu_test_policy(),
+            vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
+        )
+        .with_response_timeout(Duration::from_secs(1));
+        let task_context = CompatTaskContext {
+            page_url: Some("https://www.zhihu.com/".to_string()),
+            page_title: Some("知乎".to_string()),
+            ..CompatTaskContext::default()
+        };
+
+        let items = collect_hotlist_items(transport.as_ref(), &browser_tool, 10, &task_context)
+            .expect("hotlist collection should succeed after one navigation retry");
+
+        assert_eq!(items.len(), 1);
+        let sent = transport.sent_messages();
+        let navigate_count = sent
+            .iter()
+            .filter(|message| {
+                matches!(
+                    message,
+                    AgentMessage::Command { action, .. } if action == &Action::Navigate
+                )
+            })
+            .count();
+        assert_eq!(navigate_count, 2);
+    }
+}
+
 fn load_browser_skill_script(
     skill_name: &str,
     script_name: &str,
@@ -563,8 +891,7 @@ fn load_browser_skill_script(
     })?;
     Ok(format!(
         "(function() {{\nconst args = {};\n{}\n}})()",
-        args,
-        script
+        args, script
     ))
 }

@@ -632,11 +959,11 @@ fn build_publish_confirmation_message(article: &ArticleDraft) -> String {

 fn has_explicit_publish_confirmation(instruction: &str) -> bool {
     let trimmed = instruction.trim();
-    trimmed.contains("确认发布") ||
-        trimmed.contains("确认发表") ||
-        trimmed.contains("现在发布") ||
-        trimmed.contains("立即发布") ||
-        trimmed.contains("可以发布")
+    trimmed.contains("确认发布")
+        || trimmed.contains("确认发表")
+        || trimmed.contains("现在发布")
+        || trimmed.contains("立即发布")
+        || trimmed.contains("可以发布")
 }

 fn task_requests_zhihu_article_entry(
@@ -649,17 +976,17 @@ fn task_requests_zhihu_article_entry(
     }

     let normalized = instruction.to_ascii_lowercase();
-    let asks_to_open = normalized.contains("open") ||
-        normalized.contains("goto") ||
-        normalized.contains("go to") ||
-        instruction.contains("打开") ||
-        instruction.contains("进入") ||
-        instruction.contains("去");
-    let mentions_entry = instruction.contains("页面") ||
-        instruction.contains("入口") ||
-        instruction.contains("创作中心") ||
-        instruction.contains("写文章") ||
-        instruction.contains("发文章");
+    let asks_to_open = normalized.contains("open")
+        || normalized.contains("goto")
+        || normalized.contains("go to")
+        || instruction.contains("打开")
+        || instruction.contains("进入")
+        || instruction.contains("去");
+    let mentions_entry = instruction.contains("页面")
+        || instruction.contains("入口")
+        || instruction.contains("创作中心")
+        || instruction.contains("写文章")
+        || instruction.contains("发文章");
     let has_article_inputs = parse_article_draft(instruction).is_some();

     asks_to_open && mentions_entry && !has_article_inputs
@@ -681,12 +1008,11 @@ fn extract_article_draft(
 fn parse_article_draft(text: &str) -> Option<ArticleDraft> {
     let normalized = normalize_article_draft_input(text);
     let title_re = Regex::new(r"(?m)^标题[::]\s*(.+?)\s*$").expect("valid zhihu title regex");
-    let body_re =
-        Regex::new(r"(?s)正文[::]\s*(.+)$").expect("valid zhihu body regex");
-    let inline_title_re = Regex::new(r"标题(?:是|为)\s*([^,,\n]+)")
-        .expect("valid inline zhihu title regex");
-    let inline_body_re = Regex::new(r"(?s)正文(?:是|为)\s*(.+)$")
-        .expect("valid inline zhihu body regex");
+    let body_re = Regex::new(r"(?s)正文[::]\s*(.+)$").expect("valid zhihu body regex");
+    let inline_title_re =
+        Regex::new(r"标题(?:是|为)\s*([^,,\n]+)").expect("valid inline zhihu title regex");
+    let inline_body_re =
+        Regex::new(r"(?s)正文(?:是|为)\s*(.+)$").expect("valid inline zhihu body regex");

     let title = title_re
         .captures(&normalized)
@@ -718,9 +1044,9 @@ fn parse_article_draft(text: &str) -> Option<ArticleDraft> {

 fn normalize_article_draft_input(text: &str) -> String {
     let trimmed = text.trim();
-    let unquoted = if trimmed.len() >= 2 &&
-        ((trimmed.starts_with('"') && trimmed.ends_with('"')) ||
-        (trimmed.starts_with('\'') && trimmed.ends_with('\'')))
+    let unquoted = if trimmed.len() >= 2
+        && ((trimmed.starts_with('"') && trimmed.ends_with('"'))
+            || (trimmed.starts_with('\'') && trimmed.ends_with('\'')))
     {
         &trimmed[1..trimmed.len() - 1]
     } else {

@@ -1,12 +1,6 @@
 mod settings;

 pub use settings::{
-    BrowserBackend,
-    ConfigError,
-    DeepSeekSettings,
-    OfficeBackend,
-    PlannerMode,
-    ProviderSettings,
-    SgClawSettings,
-    SkillsPromptMode,
+    BrowserBackend, ConfigError, DeepSeekSettings, OfficeBackend, PlannerMode, ProviderSettings,
+    SgClawSettings, SkillsPromptMode,
 };

@@ -114,7 +114,8 @@ impl DeepSeekSettings {
     }

     pub fn load(config_path: Option<&Path>) -> Result<Option<Self>, ConfigError> {
-        SgClawSettings::load(config_path).map(|settings| settings.map(|settings| Self::from(&settings)))
+        SgClawSettings::load(config_path)
+            .map(|settings| settings.map(|settings| Self::from(&settings)))
     }
 }

@@ -216,7 +217,10 @@ impl SgClawSettings {
             .map(parse_runtime_profile)
             .transpose()
             .map_err(|value| {
-                ConfigError::ConfigParse(path.to_path_buf(), format!("invalid runtimeProfile: {value}"))
+                ConfigError::ConfigParse(
+                    path.to_path_buf(),
+                    format!("invalid runtimeProfile: {value}"),
+                )
             })?;
         let skills_prompt_mode = config
             .skills_prompt_mode
@@ -235,7 +239,10 @@ impl SgClawSettings {
             .map(parse_planner_mode)
             .transpose()
             .map_err(|value| {
-                ConfigError::ConfigParse(path.to_path_buf(), format!("invalid plannerMode: {value}"))
+                ConfigError::ConfigParse(
+                    path.to_path_buf(),
+                    format!("invalid plannerMode: {value}"),
+                )
             })?;
         let browser_backend = config
             .browser_backend
@@ -243,7 +250,10 @@ impl SgClawSettings {
             .map(parse_browser_backend)
             .transpose()
             .map_err(|value| {
-                ConfigError::ConfigParse(path.to_path_buf(), format!("invalid browserBackend: {value}"))
+                ConfigError::ConfigParse(
+                    path.to_path_buf(),
+                    format!("invalid browserBackend: {value}"),
+                )
             })?;
         let office_backend = config
             .office_backend
@@ -251,7 +261,10 @@ impl SgClawSettings {
             .map(parse_office_backend)
             .transpose()
             .map_err(|value| {
-                ConfigError::ConfigParse(path.to_path_buf(), format!("invalid officeBackend: {value}"))
+                ConfigError::ConfigParse(
+                    path.to_path_buf(),
+                    format!("invalid officeBackend: {value}"),
+                )
             })?;
         let providers = config
             .providers
@@ -290,12 +303,14 @@ impl SgClawSettings {
         office_backend: Option<OfficeBackend>,
     ) -> Result<Self, ConfigError> {
         let providers = if providers.is_empty() {
-            vec![ProviderSettings::from_legacy_deepseek(api_key, base_url, model)?]
+            vec![ProviderSettings::from_legacy_deepseek(
+                api_key, base_url, model,
+            )?]
         } else {
             providers
         };
-        let active_provider = normalize_optional_value(active_provider)
-            .unwrap_or_else(|| providers[0].id.clone());
+        let active_provider =
+            normalize_optional_value(active_provider).unwrap_or_else(|| providers[0].id.clone());
         let active_provider_settings = providers
             .iter()
             .find(|provider| provider.id == active_provider)
@@ -308,7 +323,10 @@ impl SgClawSettings {

         Ok(Self {
             provider_api_key: active_provider_settings.api_key.clone(),
-            provider_base_url: active_provider_settings.base_url.clone().unwrap_or_default(),
+            provider_base_url: active_provider_settings
+                .base_url
+                .clone()
+                .unwrap_or_default(),
             provider_model: active_provider_settings.model.clone(),
             skills_dir,
             skills_prompt_mode: skills_prompt_mode.unwrap_or(SkillsPromptMode::Compact),
@@ -497,11 +515,7 @@ struct RawProviderSettings {
     api_path: Option<String>,
     #[serde(rename = "wireApi", alias = "wire_api", default)]
     wire_api: Option<String>,
-    #[serde(
-        rename = "requiresOpenaiAuth",
-        alias = "requires_openai_auth",
-        default
-    )]
+    #[serde(rename = "requiresOpenaiAuth", alias = "requires_openai_auth", default)]
     requires_openai_auth: bool,
 }

@@ -3,6 +3,7 @@ pub mod compat;
 pub mod config;
 pub mod llm;
 pub mod pipe;
+pub mod runtime;
 pub mod security;

 use std::path::PathBuf;
@@ -21,9 +21,7 @@ impl HandshakeResult {
             .iter()
             .any(|capability| capability == "browser_action")
             .then(|| {
-                ExecutionSurfaceMetadata::privileged_browser_pipe(
-                    "browser_host_and_mac_policy",
-                )
+                ExecutionSurfaceMetadata::privileged_browser_pipe("browser_host_and_mac_policy")
             })
     }
 }

@@ -5,8 +5,8 @@ pub mod protocol;
 pub use browser_tool::{BrowserPipeTool, CommandOutput};
 pub use handshake::{perform_handshake, HandshakeResult};
 pub use protocol::{
-    supported_actions, Action, AgentMessage, BrowserContext, BrowserMessage,
-    ConversationMessage, ExecutionSurfaceKind, ExecutionSurfaceMetadata, SecurityFields, Timing,
+    supported_actions, Action, AgentMessage, BrowserContext, BrowserMessage, ConversationMessage,
+    ExecutionSurfaceKind, ExecutionSurfaceMetadata, SecurityFields, Timing,
 };

 use std::io::{BufRead, BufReader, Read, Write};
@@ -24,6 +24,7 @@ const BROWSER_TOOL_CONTRACT_PROMPT: &str = "SuperRPA browser interface contract:
 const ZHIHU_HOTLIST_EXECUTION_PROMPT: &str = "Zhihu hotlist execution contract:\n- Treat Zhihu hotlist export/presentation requests as a real browser workflow, not as a text-only summarization task.\n- You must attempt the browser workflow before concluding failure; a prose-only answer is invalid for this workflow.\n- If the current page is not already `https://www.zhihu.com/hot`, navigate there first.\n- If the `zhihu-hotlist.extract_hotlist` skill tool is available, call it before any generic browser probing.\n- Use generic `getText` only as a last-resort fallback when the packaged extractor fails.\n- Extract ordered rows containing `rank`, `title`, and `heat` as structured data.\n- Do not use shell, web_fetch, web_search_tool, or fabricated sample data for this workflow.\n- Do not repeat the same sentence or section in your final answer.";
 const OFFICE_EXPORT_COMPLETION_PROMPT: &str = "Export completion contract:\n- This task requires a real Excel export.\n- After the Zhihu rows are available, you must call openxml_office before finishing.\n- Never fabricate, simulate, or invent substitute hotlist data when a live collection/export task fails.\n- If live collection fails, report the failure concisely instead of producing fake rows.\n- Do not stop after describing how you will parse or export the data.\n- Do not repeat the same sentence or section in your final answer.\n- Your final answer must include the generated local .xlsx path.";
 const SCREEN_EXPORT_COMPLETION_PROMPT: &str = "Presentation completion contract:\n- This task requires a real dashboard artifact.\n- After the Zhihu rows are available, you must call screen_html_export before finishing.\n- Do not stop after describing how you will render or present the data.\n- Do not repeat the same sentence or section in your final answer.\n- Your final answer must include the local .html path and the presentation object.";
+const ZHIHU_WRITE_PUBLISH_PROMPT: &str = "Zhihu article publish contract:\n- This task may publish a Zhihu article.\n- You must not click publish without explicit human confirmation in the current session.\n- If the user asked to publish but no explicit confirmation phrase is present yet, ask for confirmation concisely and stop after the confirmation request.\n- Do not keep exploring tools after you have determined that publish confirmation is missing.\n- If the user only asked to write or draft, stay in draft mode and do not treat it as publish mode.\n- Do not repeat the same sentence or section in your final answer.";

 #[derive(Debug, Clone, PartialEq, Eq)]
 pub struct RuntimeEngine {
@@ -51,9 +52,7 @@ impl RuntimeEngine {
         self.tool_policy
             .allowed_tools
             .iter()
-            .any(|tool| {
-                tool == BROWSER_ACTION_TOOL_NAME || tool == SUPERRPA_BROWSER_TOOL_NAME
-            })
+            .any(|tool| tool == BROWSER_ACTION_TOOL_NAME || tool == SUPERRPA_BROWSER_TOOL_NAME)
     }

     pub fn build_agent(
@@ -155,6 +154,9 @@ impl RuntimeEngine {
         if task_needs_screen_export(trimmed_instruction) {
             sections.push(SCREEN_EXPORT_COMPLETION_PROMPT.to_string());
         }
+        if task_requests_zhihu_article_publish(trimmed_instruction, page_url, page_title) {
+            sections.push(ZHIHU_WRITE_PUBLISH_PROMPT.to_string());
+        }
         if let Some(page_context) = build_page_context_message(page_url, page_title) {
             sections.push(page_context);
         }
@@ -173,17 +175,11 @@ impl RuntimeEngine {
                 .cmp(&right.name)
                 .then(left.version.cmp(&right.version))
         });
-        skills.dedup_by(|left, right| {
-            left.name == right.name && left.version == right.version
-        });
+        skills.dedup_by(|left, right| left.name == right.name && left.version == right.version);
         skills
     }

-    pub fn loaded_skill_names(
-        &self,
-        config: &ZeroClawConfig,
-        skills_dir: &Path,
-    ) -> Vec<String> {
+    pub fn loaded_skill_names(&self, config: &ZeroClawConfig, skills_dir: &Path) -> Vec<String> {
         let mut names = self
             .loaded_skills(config, skills_dir)
             .into_iter()
@@ -237,8 +233,8 @@ impl RuntimeEngine {
|
||||
}
|
||||
allowed_tools.dedup();
|
||||
|
||||
if matches!(self.profile, RuntimeProfile::GeneralAssistant) &&
|
||||
self.tool_policy.may_use_non_browser_tools
|
||||
if matches!(self.profile, RuntimeProfile::GeneralAssistant)
|
||||
&& self.tool_policy.may_use_non_browser_tools
|
||||
{
|
||||
None
|
||||
} else {
|
||||
@@ -263,9 +259,7 @@ fn browser_script_tool_names(skills: &[zeroclaw::skills::Skill]) -> Vec<String>
|
||||
|
||||
fn task_needs_local_file_read(instruction: &str) -> bool {
|
||||
let normalized = instruction.trim();
|
||||
normalized.contains("/home/") ||
|
||||
normalized.contains("./") ||
|
||||
normalized.contains("../")
|
||||
normalized.contains("/home/") || normalized.contains("./") || normalized.contains("../")
|
||||
}
|
||||
|
||||
pub fn is_zhihu_hotlist_task(
@@ -277,16 +271,16 @@ pub fn is_zhihu_hotlist_task(
let normalized_url = page_url.unwrap_or_default().to_ascii_lowercase();
let normalized_title = page_title.unwrap_or_default().to_ascii_lowercase();

let is_zhihu = normalized_instruction.contains("zhihu") ||
instruction.contains("知乎") ||
normalized_url.contains("zhihu.com") ||
normalized_title.contains("zhihu") ||
page_title.unwrap_or_default().contains("知乎");
let is_hotlist = normalized_instruction.contains("hotlist") ||
instruction.contains("热榜") ||
normalized_url.contains("/hot") ||
normalized_title.contains("hotlist") ||
page_title.unwrap_or_default().contains("热榜");
let is_zhihu = normalized_instruction.contains("zhihu")
|| instruction.contains("知乎")
|| normalized_url.contains("zhihu.com")
|| normalized_title.contains("zhihu")
|| page_title.unwrap_or_default().contains("知乎");
let is_hotlist = normalized_instruction.contains("hotlist")
|| instruction.contains("热榜")
|| normalized_url.contains("/hot")
|| normalized_title.contains("hotlist")
|| page_title.unwrap_or_default().contains("热榜");

is_zhihu && is_hotlist
}
@@ -310,6 +304,48 @@ fn task_needs_screen_export(instruction: &str) -> bool {
|| normalized.contains("汇报")
}

pub fn task_requests_zhihu_article_publish(
instruction: &str,
page_url: Option<&str>,
page_title: Option<&str>,
) -> bool {
if !is_zhihu_write_task(instruction, page_url, page_title) {
return false;
}

let normalized = instruction.to_ascii_lowercase();
normalized.contains("publish") || instruction.contains("发布") || instruction.contains("发表")
}

pub fn is_zhihu_write_task(
instruction: &str,
page_url: Option<&str>,
page_title: Option<&str>,
) -> bool {
let normalized_instruction = instruction.to_ascii_lowercase();
let normalized_url = page_url.unwrap_or_default().to_ascii_lowercase();
let normalized_title = page_title.unwrap_or_default().to_ascii_lowercase();

let is_zhihu = normalized_instruction.contains("zhihu")
|| instruction.contains("知乎")
|| normalized_url.contains("zhihu.com")
|| normalized_title.contains("zhihu")
|| page_title.unwrap_or_default().contains("知乎");
let is_write = normalized_instruction.contains("article")
|| normalized_instruction.contains("write")
|| normalized_instruction.contains("publish")
|| instruction.contains("文章")
|| instruction.contains("写")
|| instruction.contains("发布")
|| instruction.contains("发表")
|| normalized_url.contains("creator")
|| normalized_url.contains("write")
|| page_title.unwrap_or_default().contains("创作")
|| page_title.unwrap_or_default().contains("写文章");

is_zhihu && is_write
}
fn load_runtime_skills(config: &ZeroClawConfig, skills_dir: &Path) -> Vec<zeroclaw::skills::Skill> {
let default_skills_dir = config.workspace_dir.join("skills");
if skills_dir == default_skills_dir {
@@ -344,10 +380,7 @@ fn build_page_context_message(page_url: Option<&str>, page_title: Option<&str>)
return None;
}

Some(format!(
"Current browser context:\n{}",
parts.join("\n")
))
Some(format!("Current browser context:\n{}", parts.join("\n")))
}

fn map_anyhow_to_pipe_error(err: anyhow::Error) -> PipeError {

@@ -2,6 +2,8 @@ mod engine;
mod profile;
mod tool_policy;

pub use engine::{is_zhihu_hotlist_task, RuntimeEngine};
pub use engine::{
is_zhihu_hotlist_task, is_zhihu_write_task, task_requests_zhihu_article_publish, RuntimeEngine,
};
pub use profile::RuntimeProfile;
pub use tool_policy::ToolPolicy;
6 src/runtime/profile.rs Normal file
@@ -0,0 +1,6 @@
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum RuntimeProfile {
BrowserAttached,
BrowserHeavy,
GeneralAssistant,
}
@@ -13,18 +13,12 @@ impl ToolPolicy {
RuntimeProfile::BrowserAttached => Self {
requires_browser_surface: false,
may_use_non_browser_tools: true,
allowed_tools: vec![
"superrpa_browser".to_string(),
"browser_action".to_string(),
],
allowed_tools: vec!["superrpa_browser".to_string(), "browser_action".to_string()],
},
RuntimeProfile::BrowserHeavy => Self {
requires_browser_surface: true,
may_use_non_browser_tools: true,
allowed_tools: vec![
"superrpa_browser".to_string(),
"browser_action".to_string(),
],
allowed_tools: vec!["superrpa_browser".to_string(), "browser_action".to_string()],
},
RuntimeProfile::GeneralAssistant => Self {
requires_browser_surface: false,
@@ -4,6 +4,7 @@ use std::sync::Arc;
use std::time::Duration;

use common::MockTransport;
use sgclaw::agent::handle_browser_message;
use sgclaw::agent::runtime::{browser_action_tool_definition, execute_task_with_provider};
use sgclaw::llm::{ChatMessage, LlmError, LlmProvider, ToolDefinition, ToolFunctionCall};
use sgclaw::pipe::{Action, AgentMessage, BrowserMessage, BrowserPipeTool, Timing};
@@ -132,3 +133,46 @@ fn runtime_executes_provider_tool_calls_and_returns_summary() {
if *seq == 2 && action == &Action::Type
));
}

#[test]
fn legacy_agent_runtime_is_explicitly_dev_only() {
assert!(sgclaw::agent::runtime::LEGACY_DEV_ONLY);
}

#[test]
fn production_submit_task_does_not_route_into_legacy_runtime_without_llm_config() {
std::env::remove_var("DEEPSEEK_API_KEY");
std::env::remove_var("DEEPSEEK_BASE_URL");
std::env::remove_var("DEEPSEEK_MODEL");

let transport = Arc::new(MockTransport::new(vec![]));
let browser_tool = BrowserPipeTool::new(
transport.clone(),
test_policy(),
vec![1, 2, 3, 4, 5, 6, 7, 8],
)
.with_response_timeout(Duration::from_secs(1));

handle_browser_message(
transport.as_ref(),
&browser_tool,
BrowserMessage::SubmitTask {
instruction: "打开百度".to_string(),
conversation_id: String::new(),
messages: vec![],
page_url: String::new(),
page_title: String::new(),
},
)
.unwrap();

let sent = transport.sent_messages();
assert!(matches!(
sent.last(),
Some(AgentMessage::TaskComplete { success, summary })
if !success && summary.contains("未配置大语言模型")
));
assert!(!sent
.iter()
.any(|message| { matches!(message, AgentMessage::Command { .. }) }));
}
@@ -1,11 +1,11 @@
mod common;

use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
use std::time::{SystemTime, UNIX_EPOCH};
use std::fs;

use common::MockTransport;
use serde_json::json;
@@ -77,13 +77,8 @@ return {
command: "scripts/extract_hotlist.js".to_string(),
args,
};
let tool = BrowserScriptSkillTool::new(
"zhihu-hotlist",
&skill_tool,
&skill_dir,
browser_tool,
)
.unwrap();
let tool = BrowserScriptSkillTool::new("zhihu-hotlist", &skill_tool, &skill_dir, browser_tool)
.unwrap();

let result = tool
.execute(json!({

@@ -96,8 +96,14 @@ fn browser_tool_exposes_privileged_surface_metadata_backed_by_mac_policy() {
assert!(metadata.privileged);
assert!(!metadata.defines_runtime_identity);
assert_eq!(metadata.guard, "mac_policy");
assert_eq!(metadata.allowed_domains, vec!["oa.example.com", "erp.example.com"]);
assert_eq!(metadata.allowed_actions, vec!["click", "type", "navigate", "getText"]);
assert_eq!(
metadata.allowed_domains,
vec!["oa.example.com", "erp.example.com"]
);
assert_eq!(
metadata.allowed_actions,
vec!["click", "type", "navigate", "getText"]
);
}

#[test]
@@ -26,7 +26,9 @@ fn test_policy() -> MacPolicy {
.unwrap()
}

fn build_adapter(messages: Vec<BrowserMessage>) -> (Arc<MockTransport>, ZeroClawBrowserTool<MockTransport>) {
fn build_adapter(
messages: Vec<BrowserMessage>,
) -> (Arc<MockTransport>, ZeroClawBrowserTool<MockTransport>) {
let transport = Arc::new(MockTransport::new(messages));
let browser_tool = BrowserPipeTool::new(
transport.clone(),
@@ -204,13 +206,11 @@ async fn zeroclaw_browser_tool_keeps_domain_validation_in_mac_policy() {
assert!(!result.success);
assert!(result.output.is_empty());
assert_eq!(transport.sent_messages().len(), 0);
assert!(
result
.error
.as_deref()
.unwrap()
.contains("domain is not allowed")
);
assert!(result
.error
.as_deref()
.unwrap()
.contains("domain is not allowed"));
}

#[tokio::test]
@@ -300,25 +300,19 @@ async fn zeroclaw_browser_tool_rejects_missing_required_action_parameters() {
assert!(!missing_text_selector.success);
assert!(!missing_navigate_url.success);
assert_eq!(transport.sent_messages().len(), 0);
assert!(
missing_click_selector
.error
.as_deref()
.unwrap()
.contains("click requires selector")
);
assert!(
missing_text_selector
.error
.as_deref()
.unwrap()
.contains("getText requires selector")
);
assert!(
missing_navigate_url
.error
.as_deref()
.unwrap()
.contains("navigate requires url")
);
assert!(missing_click_selector
.error
.as_deref()
.unwrap()
.contains("click requires selector"));
assert!(missing_text_selector
.error
.as_deref()
.unwrap()
.contains("getText requires selector"));
assert!(missing_navigate_url
.error
.as_deref()
.unwrap()
.contains("navigate requires url"));
}
@@ -3,20 +3,12 @@ use std::path::Path;
use std::sync::{Mutex, OnceLock};

use sgclaw::compat::config_adapter::{
build_zeroclaw_config,
build_zeroclaw_config_from_sgclaw_settings,
build_zeroclaw_config_from_settings,
resolve_skills_dir,
zeroclaw_default_skills_dir,
build_zeroclaw_config, build_zeroclaw_config_from_settings,
build_zeroclaw_config_from_sgclaw_settings, resolve_skills_dir, zeroclaw_default_skills_dir,
zeroclaw_workspace_dir,
};
use sgclaw::config::{
BrowserBackend,
DeepSeekSettings,
OfficeBackend,
PlannerMode,
SgClawSettings,
SkillsPromptMode,
BrowserBackend, DeepSeekSettings, OfficeBackend, PlannerMode, SgClawSettings, SkillsPromptMode,
};
use sgclaw::runtime::RuntimeProfile;
use uuid::Uuid;
@@ -61,11 +53,17 @@ fn zeroclaw_config_adapter_uses_deterministic_workspace_dir() {
let workspace_dir = zeroclaw_workspace_dir(Path::new("/var/lib/sgclaw"));
let config = build_zeroclaw_config_from_settings(Path::new("/var/lib/sgclaw"), &settings);

assert_eq!(workspace_dir, Path::new("/var/lib/sgclaw/.sgclaw-zeroclaw-workspace"));
assert_eq!(
workspace_dir,
Path::new("/var/lib/sgclaw/.sgclaw-zeroclaw-workspace")
);
assert_eq!(config.workspace_dir, workspace_dir);
assert_eq!(config.default_provider.as_deref(), Some("deepseek"));
assert_eq!(config.default_model.as_deref(), Some("deepseek-reasoner"));
assert_eq!(config.api_url.as_deref(), Some("https://proxy.example.com/v1"));
assert_eq!(
config.api_url.as_deref(),
Some("https://proxy.example.com/v1")
);
assert_eq!(
resolve_skills_dir(Path::new("/var/lib/sgclaw"), &settings),
zeroclaw_default_skills_dir(Path::new("/var/lib/sgclaw"))
@@ -252,7 +250,10 @@ fn sgclaw_settings_load_provider_switching_and_backend_policy_from_browser_confi
assert_eq!(settings.planner_mode, PlannerMode::ZeroclawPlanFirst);
assert_eq!(settings.active_provider, "glm-prod");
assert_eq!(settings.providers.len(), 2);
assert_eq!(settings.provider_base_url, "https://open.bigmodel.cn/api/paas/v4");
assert_eq!(
settings.provider_base_url,
"https://open.bigmodel.cn/api/paas/v4"
);
assert_eq!(settings.provider_model, "glm-4.5");
assert_eq!(settings.browser_backend, BrowserBackend::SuperRpa);
assert_eq!(settings.office_backend, OfficeBackend::OpenXml);
@@ -17,6 +17,7 @@ async fn compat_cron_adapter_creates_lists_and_runs_due_agent_jobs() {
api_key: "key".to_string(),
base_url: "https://api.deepseek.com".to_string(),
model: "deepseek-chat".to_string(),
skills_dir: None,
};
let workspace_root = workspace_root("sgclaw-cron");
let config = build_zeroclaw_config_from_settings(Path::new(&workspace_root), &settings);

@@ -16,6 +16,7 @@ async fn compat_memory_adapter_uses_workspace_local_sqlite_backend() {
api_key: "key".to_string(),
base_url: "https://api.deepseek.com".to_string(),
model: "deepseek-chat".to_string(),
skills_dir: None,
};
let workspace_root = workspace_root("sgclaw-memory");
let config = build_zeroclaw_config_from_settings(Path::new(&workspace_root), &settings);
@@ -11,15 +11,9 @@ use std::time::Duration;
use common::MockTransport;
use serde_json::{json, Value};
use sgclaw::agent::{
handle_browser_message,
handle_browser_message_with_context,
AgentRuntimeContext,
};
use sgclaw::compat::runtime::{
execute_task,
execute_task_with_sgclaw_settings,
CompatTaskContext,
handle_browser_message, handle_browser_message_with_context, AgentRuntimeContext,
};
use sgclaw::compat::runtime::{execute_task, execute_task_with_sgclaw_settings, CompatTaskContext};
use sgclaw::config::{DeepSeekSettings, SgClawSettings};
use sgclaw::pipe::{
Action, AgentMessage, BrowserMessage, BrowserPipeTool, ConversationMessage, Timing,
@@ -151,8 +145,8 @@ fn tool_message_content<'a>(request: &'a Value, tool_call_id: &str) -> Option<&'
messages.iter().find_map(|message| {
(message["role"].as_str() == Some("tool")
&& message["tool_call_id"].as_str() == Some(tool_call_id))
.then(|| message["content"].as_str())
.flatten()
.then(|| message["content"].as_str())
.flatten()
})
})
}
@@ -232,6 +226,23 @@ fn read_http_json_body(stream: &mut impl Read) -> Value {
serde_json::from_slice(&buffer[headers_end..headers_end + content_length]).unwrap()
}

fn task_complete_summary(sent: &[AgentMessage]) -> String {
sent.iter()
.find_map(|message| match message {
AgentMessage::TaskComplete { success, summary } if *success => Some(summary.clone()),
_ => None,
})
.expect("expected successful task completion")
}

fn extract_generated_artifact_path(summary: &str, extension: &str) -> PathBuf {
summary
.split_whitespace()
.find(|token| token.ends_with(extension))
.map(PathBuf::from)
.expect("expected artifact path in task summary")
}

#[test]
fn compat_runtime_uses_zeroclaw_provider_path_and_executes_browser_actions() {
let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());
@@ -386,7 +397,9 @@ fn compat_runtime_includes_default_workspace_skills_in_provider_request() {
let (base_url, requests, server_handle) = start_fake_deepseek_server(vec![response]);

let workspace_root = temp_workspace_root();
let default_skills_dir = workspace_root.join(".sgclaw-zeroclaw-workspace").join("skills");
let default_skills_dir = workspace_root
.join(".sgclaw-zeroclaw-workspace")
.join("skills");
write_skill_package(
&default_skills_dir,
"workspace-zhihu-skill",
@@ -422,7 +435,9 @@ fn compat_runtime_includes_default_workspace_skills_in_provider_request() {

assert_eq!(summary, "已识别默认 workspace skill");
assert_eq!(request_bodies.len(), 1);
assert!(request_bodies[0].to_string().contains("workspace-zhihu-skill"));
assert!(request_bodies[0]
.to_string()
.contains("workspace-zhihu-skill"));
}

#[test]
@@ -439,7 +454,9 @@ fn handle_browser_message_loads_skills_from_configured_skills_dir() {
let (base_url, requests, server_handle) = start_fake_deepseek_server(vec![response]);

let workspace_root = temp_workspace_root();
let default_skills_dir = workspace_root.join(".sgclaw-zeroclaw-workspace").join("skills");
let default_skills_dir = workspace_root
.join(".sgclaw-zeroclaw-workspace")
.join("skills");
write_skill_package(
&default_skills_dir,
"workspace-only-skill",
@@ -515,8 +532,7 @@ fn handle_browser_message_loads_skills_from_configured_skills_dir() {
}

#[test]
fn handle_browser_message_routes_supported_instruction_to_compat_runtime_when_llm_is_configured(
) {
fn handle_browser_message_routes_supported_instruction_to_compat_runtime_when_llm_is_configured() {
let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());

let first_response = json!({
@@ -993,12 +1009,10 @@ fn compat_runtime_includes_prior_turns_in_follow_up_provider_request() {

assert_eq!(summary, "已在知乎搜索天气");
assert!(first_request_messages.iter().any(|message| {
message["role"] == json!("user")
&& message["content"] == json!("打开百度搜索天气")
message["role"] == json!("user") && message["content"] == json!("打开百度搜索天气")
}));
assert!(first_request_messages.iter().any(|message| {
message["role"] == json!("assistant")
&& message["content"] == json!("已在百度搜索天气")
message["role"] == json!("assistant") && message["content"] == json!("已在百度搜索天气")
}));
}

@@ -1224,9 +1238,9 @@ fn compat_runtime_can_complete_a_text_only_turn_without_browser_tool_calls() {
assert_eq!(summary, "这是纯文本回答");
assert!(!flattened.contains("Browser tool contract"));
assert!(!tool_names.contains(&"browser_action".to_string()));
assert!(!sent.iter().any(|message| {
matches!(message, AgentMessage::Command { .. })
}));
assert!(!sent
.iter()
.any(|message| { matches!(message, AgentMessage::Command { .. }) }));
}

#[test]
@@ -1243,7 +1257,9 @@ fn compat_runtime_allows_read_skill_under_compact_mode_policy() {
let (base_url, requests, server_handle) = start_fake_deepseek_server(vec![response]);

let workspace_root = temp_workspace_root();
let default_skills_dir = workspace_root.join(".sgclaw-zeroclaw-workspace").join("skills");
let default_skills_dir = workspace_root
.join(".sgclaw-zeroclaw-workspace")
.join("skills");
write_skill_package(
&default_skills_dir,
"workspace-zhihu-skill",
@@ -1307,7 +1323,9 @@ fn compat_runtime_exposes_browser_script_skill_tools_in_browser_attached_mode()
let (base_url, requests, server_handle) = start_fake_deepseek_server(vec![response]);

let workspace_root = temp_workspace_root();
let default_skills_dir = workspace_root.join(".sgclaw-zeroclaw-workspace").join("skills");
let default_skills_dir = workspace_root
.join(".sgclaw-zeroclaw-workspace")
.join("skills");
let skill_dir = write_skill_manifest_package(
&default_skills_dir,
"workspace-zhihu-skill",
@@ -1404,7 +1422,9 @@ fn compat_runtime_executes_browser_script_skill_via_eval_without_gettext_probing
start_fake_deepseek_server(vec![first_response, second_response]);

let workspace_root = temp_workspace_root();
let default_skills_dir = workspace_root.join(".sgclaw-zeroclaw-workspace").join("skills");
let default_skills_dir = workspace_root
.join(".sgclaw-zeroclaw-workspace")
.join("skills");
let skill_dir = write_skill_manifest_package(
&default_skills_dir,
"workspace-zhihu-skill",
@@ -1742,7 +1762,9 @@ fn compat_runtime_logs_read_skill_usage_with_skill_name() {
start_fake_deepseek_server(vec![first_response, second_response]);

let workspace_root = temp_workspace_root();
let default_skills_dir = workspace_root.join(".sgclaw-zeroclaw-workspace").join("skills");
let default_skills_dir = workspace_root
.join(".sgclaw-zeroclaw-workspace")
.join("skills");
write_skill_package(
&default_skills_dir,
"workspace-zhihu-skill",
@@ -2018,14 +2040,17 @@ fn handle_browser_message_executes_real_zhihu_hotlist_skill_flow() {
);
let runtime_context = AgentRuntimeContext::new(Some(config_path), workspace_root.clone());

let transport = Arc::new(MockTransport::new(vec![success_browser_response(1, json!({
"text": {
"source": "https://www.zhihu.com/hot",
"sheet_name": "知乎热榜",
"columns": ["rank", "title", "heat"],
"rows": [[1, "热榜项目 1", "1707万"], [2, "热榜项目 2", "1150万"]]
}
}))]));
let transport = Arc::new(MockTransport::new(vec![success_browser_response(
1,
json!({
"text": {
"source": "https://www.zhihu.com/hot",
"sheet_name": "知乎热榜",
"columns": ["rank", "title", "heat"],
"rows": [[1, "热榜项目 1", "1707万"], [2, "热榜项目 2", "1150万"]]
}
}),
)]));
let browser_tool = BrowserPipeTool::new(
transport.clone(),
zhihu_test_policy(),
@@ -2136,11 +2161,8 @@ fn handle_browser_message_chains_hotlist_skill_into_office_export_tool() {
}
}]
});
let (base_url, _requests, server_handle) = start_fake_deepseek_server(vec![
first_response,
third_response,
fourth_response,
]);
let (base_url, _requests, server_handle) =
start_fake_deepseek_server(vec![first_response, third_response, fourth_response]);
let config_path = write_deepseek_config_with_skills_dir(
&workspace_root,
"deepseek-test-key",
@@ -2150,16 +2172,17 @@ fn handle_browser_message_chains_hotlist_skill_into_office_export_tool() {
);
let runtime_context = AgentRuntimeContext::new(Some(config_path), workspace_root.clone());

let transport = Arc::new(MockTransport::new(vec![
success_browser_response(1, json!({
let transport = Arc::new(MockTransport::new(vec![success_browser_response(
1,
json!({
"text": {
"source": "https://www.zhihu.com/hot",
"sheet_name": "知乎热榜",
"columns": ["rank", "title", "heat"],
"rows": [[1, "问题一", "344万"], [2, "问题二", "266万"]]
}
})),
]));
}),
)]));
let browser_tool = BrowserPipeTool::new(
transport.clone(),
zhihu_test_policy(),
@@ -2225,84 +2248,10 @@ fn handle_browser_message_chains_hotlist_skill_into_screen_export_tool() {
let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());

let workspace_root = temp_workspace_root();
let output_path = workspace_root.join("out/zhihu-hotlist-screen.html");
let output_path_str = output_path.to_string_lossy().to_string();
let first_response = json!({
"choices": [{
"message": {
"content": "",
"tool_calls": [{
"id": "call_1",
"type": "function",
"function": {
"name": "superrpa_browser",
"arguments": serde_json::to_string(&json!({
"action": "navigate",
"expected_domain": "www.zhihu.com",
"url": "https://www.zhihu.com/hot"
})).unwrap()
}
}]
}
}]
});
let second_response = json!({
"choices": [{
"message": {
"content": "",
"tool_calls": [{
"id": "call_2",
"type": "function",
"function": {
"name": "superrpa_browser",
"arguments": serde_json::to_string(&json!({
"action": "getText",
"expected_domain": "www.zhihu.com",
"selector": "main"
})).unwrap()
}
}]
}
}]
});
let third_response = json!({
"choices": [{
"message": {
"content": "",
"tool_calls": [{
"id": "call_3",
"type": "function",
"function": {
"name": "screen_html_export",
"arguments": serde_json::to_string(&json!({
"rows": [
[1, "问题一", "344万"],
[2, "问题二", "266万"]
],
"output_path": output_path_str
})).unwrap()
}
}]
}
}]
});
let fourth_response = json!({
"choices": [{
"message": {
"content": format!("已生成知乎热榜大屏 {output_path_str}")
}
}]
});
let (base_url, _requests, server_handle) = start_fake_deepseek_server(vec![
first_response,
second_response,
third_response,
fourth_response,
]);
let config_path = write_deepseek_config_with_skills_dir(
&workspace_root,
"deepseek-test-key",
&base_url,
"http://127.0.0.1:9",
"deepseek-chat",
Some(real_skill_lib_root().to_str().unwrap()),
);
@@ -2312,7 +2261,18 @@ fn handle_browser_message_chains_hotlist_skill_into_screen_export_tool() {
success_browser_response(1, json!({ "navigated": true })),
success_browser_response(
2,
json!({ "text": "知乎热榜\n1\n问题一\n344万热度\n2\n问题二\n266万热度" }),
json!({ "text": "知乎热榜\n1 问题一 344万热度\n2 问题二 266万热度" }),
),
success_browser_response(
3,
json!({
"text": {
"source": "https://www.zhihu.com/hot",
"sheet_name": "知乎热榜",
"columns": ["rank", "title", "heat"],
"rows": [[1, "问题一", "344万"], [2, "问题二", "266万"]]
}
}),
),
]));
let browser_tool = BrowserPipeTool::new(
@@ -2335,22 +2295,39 @@ fn handle_browser_message_chains_hotlist_skill_into_screen_export_tool() {
},
)
.unwrap();
server_handle.join().unwrap();

let sent = transport.sent_messages();
let summary = task_complete_summary(&sent);
let generated = extract_generated_artifact_path(&summary, ".html");

assert!(summary.contains("已生成知乎热榜大屏"));
assert!(summary.contains(".html"));
assert!(generated.exists());
assert!(sent.iter().any(|message| {
matches!(
message,
AgentMessage::TaskComplete { success, summary }
if *success && summary.contains("已生成知乎热榜大屏") && summary.contains(".html")
AgentMessage::LogEntry { level, message }
if level == "mode" && message == "zeroclaw_process_message_primary"
)
}));
assert!(sent.iter().any(|message| {
matches!(
message,
AgentMessage::LogEntry { level, message }
if level == "mode" && message == "zeroclaw_process_message_primary"
if level == "info" && message == "call zhihu-hotlist.extract_hotlist"
)
}));
assert!(sent.iter().any(|message| {
matches!(
message,
AgentMessage::LogEntry { level, message }
if level == "info" && message == "call screen_html_export"
)
}));
assert!(sent.iter().any(|message| {
matches!(
message,
AgentMessage::Command { action, .. } if action == &Action::Eval
)
}));
assert!(!sent.iter().any(|message| {
@@ -2367,97 +2344,34 @@ fn handle_browser_message_runs_zhihu_hotlist_export_via_zeroclaw_primary_orchest
let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());

let workspace_root = temp_workspace_root();
let output_path = workspace_root.join("out/zhihu-hotlist-orchestrated.xlsx");
let output_path_str = output_path.to_string_lossy().to_string();
let first_response = json!({
"choices": [{
"message": {
"content": "",
"tool_calls": [{
"id": "call_1",
"type": "function",
"function": {
"name": "superrpa_browser",
"arguments": serde_json::to_string(&json!({
"action": "navigate",
"expected_domain": "www.zhihu.com",
"url": "https://www.zhihu.com/hot"
})).unwrap()
}
}]
}
}]
});
let second_response = json!({
"choices": [{
"message": {
"content": "",
"tool_calls": [{
"id": "call_2",
"type": "function",
"function": {
"name": "superrpa_browser",
"arguments": serde_json::to_string(&json!({
"action": "getText",
"expected_domain": "www.zhihu.com",
"selector": "main"
})).unwrap()
}
}]
}
}]
});
let third_response = json!({
"choices": [{
"message": {
"content": "",
"tool_calls": [{
"id": "call_3",
"type": "function",
"function": {
"name": "openxml_office",
"arguments": serde_json::to_string(&json!({
"sheet_name": "知乎热榜",
"columns": ["rank", "title", "heat"],
"rows": [
[1, "问题一", "344万"],
[2, "问题二", "266万"],
[3, "问题三", "181万"]
],
"output_path": output_path_str
})).unwrap()
}
}]
}
}]
});
let fourth_response = json!({
"choices": [{
"message": {
"content": format!("已导出知乎热榜 Excel {output_path_str}")
}
}]
});
let (base_url, _requests, server_handle) = start_fake_deepseek_server(vec![
first_response,
second_response,
third_response,
fourth_response,
]);
let config_path = write_deepseek_config_with_skills_dir(
&workspace_root,
"deepseek-test-key",
&base_url,
"http://127.0.0.1:9",
"deepseek-chat",
Some(real_skill_lib_root().to_str().unwrap()),
);
let runtime_context = AgentRuntimeContext::new(Some(config_path), workspace_root.clone());
|
||||
|
||||
let transport = Arc::new(MockTransport::new(vec![
|
||||
success_browser_response(1, json!({ "navigated": true })),
|
||||
success_browser_response(
|
||||
1,
|
||||
json!({ "text": "知乎热榜\n1 问题一 344万热度\n2 问题二 266万热度\n3 问题三 181万热度" }),
|
||||
),
|
||||
success_browser_response(
|
||||
2,
|
||||
json!({ "text": "知乎热榜\n1\n问题一\n344万热度\n2\n问题二\n266万热度\n3\n问题三\n181万热度" }),
|
||||
json!({
|
||||
"text": {
|
||||
"source": "https://www.zhihu.com/hot",
|
||||
"sheet_name": "知乎热榜",
|
||||
"columns": ["rank", "title", "heat"],
|
||||
"rows": [
|
||||
[1, "问题一", "344万"],
|
||||
[2, "问题二", "266万"],
|
||||
[3, "问题三", "181万"]
|
||||
]
|
||||
}
|
||||
}),
|
||||
),
|
||||
]));
|
||||
let browser_tool = BrowserPipeTool::new(
|
||||
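The fake DeepSeek payloads above follow the OpenAI-compatible chat-completions shape, where a tool call's `arguments` field travels as a JSON-encoded *string*, not a nested object. A minimal Python sketch of building and decoding that shape (field names come from the hunk above; the helper names are illustrative):

```python
import json

def make_tool_call_response(call_id, name, arguments):
    # One chat-completion choice carrying a single function tool call;
    # `arguments` is serialized to a JSON string, mirroring serde_json::to_string.
    return {
        "choices": [{
            "message": {
                "content": "",
                "tool_calls": [{
                    "id": call_id,
                    "type": "function",
                    "function": {
                        "name": name,
                        "arguments": json.dumps(arguments, ensure_ascii=False),
                    },
                }],
            }
        }]
    }

def parse_tool_calls(response):
    # Decode each call's arguments string back into a dict.
    message = response["choices"][0]["message"]
    return [
        (call["function"]["name"], json.loads(call["function"]["arguments"]))
        for call in message.get("tool_calls", [])
    ]

response = make_tool_call_response(
    "call_1",
    "superrpa_browser",
    {"action": "navigate", "expected_domain": "www.zhihu.com", "url": "https://www.zhihu.com/hot"},
)
name, args = parse_tool_calls(response)[0]
```

Round-tripping through `json.dumps`/`json.loads` is what makes the server stub interchangeable with a real provider response.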
@@ -2480,19 +2394,13 @@ fn handle_browser_message_runs_zhihu_hotlist_export_via_zeroclaw_primary_orchest
        },
    )
    .unwrap();
    server_handle.join().unwrap();

    let sent = transport.sent_messages();

    let summary = sent
        .iter()
        .find_map(|message| match message {
            AgentMessage::TaskComplete { success, summary } if *success => Some(summary.clone()),
            _ => None,
        })
        .expect("expected successful task completion");
    let summary = task_complete_summary(&sent);
    let generated = extract_generated_artifact_path(&summary, ".xlsx");

    assert!(summary.contains(".xlsx"));
    assert!(generated.exists());

    assert!(sent.iter().any(|message| {
        matches!(
@@ -2621,20 +2529,34 @@ fn browser_submit_path_prefers_zeroclaw_process_message_orchestrator_for_zhihu_p

#[test]
fn zhihu_publish_task_matches_primary_orchestration_gate() {
    assert!(sgclaw::compat::orchestration::should_use_primary_orchestration(
        "请直接发表这篇知乎文章,标题是测试标题,正文是第一段内容",
        Some("https://www.zhihu.com/"),
        Some("知乎"),
    ));
    assert!(
        sgclaw::compat::orchestration::should_use_primary_orchestration(
            "请直接发表这篇知乎文章,标题是测试标题,正文是第一段内容",
            Some("https://www.zhihu.com/"),
            Some("知乎"),
        )
    );
}

#[test]
fn zhihu_article_entry_task_matches_primary_orchestration_gate() {
    assert!(sgclaw::compat::orchestration::should_use_primary_orchestration(
        "打开知乎发文章页面",
        Some("https://www.zhihu.com/"),
        Some("知乎"),
    assert!(
        sgclaw::compat::orchestration::should_use_primary_orchestration(
            "打开知乎发文章页面",
            Some("https://www.zhihu.com/"),
            Some("知乎"),
        )
    );
}

#[test]
fn zhihu_hotlist_export_routes_prefer_direct_execution() {
    use sgclaw::compat::workflow_executor::{prefers_direct_execution, WorkflowRoute};

    assert!(prefers_direct_execution(
        &WorkflowRoute::ZhihuHotlistExportXlsx
    ));
    assert!(prefers_direct_execution(&WorkflowRoute::ZhihuHotlistScreen));
}

#[test]
@@ -2684,9 +2606,9 @@ fn zhihu_publish_without_article_inputs_returns_missing_fields_prompt() {
            summary.contains("正文")
        )
    }));
    assert!(!sent.iter().any(|message| {
        matches!(message, AgentMessage::Command { .. })
    }));
    assert!(!sent
        .iter()
        .any(|message| { matches!(message, AgentMessage::Command { .. }) }));
}

#[test]
@@ -2748,7 +2670,8 @@ fn zhihu_publish_accepts_literal_backslash_n_between_title_and_body() {
        &browser_tool,
        &runtime_context,
        BrowserMessage::SubmitTask {
            instruction: "标题:ai时代,普通人如何自救 \\n正文:第一段内容。 第二段内容。".to_string(),
            instruction: "标题:ai时代,普通人如何自救 \\n正文:第一段内容。 第二段内容。"
                .to_string(),
            conversation_id: String::new(),
            messages: vec![],
            page_url: "https://www.zhihu.com/creator".to_string(),
@@ -3055,9 +2978,9 @@ fn zhihu_publish_without_confirmation_returns_confirmation_before_any_browser_pr
            if *success && summary.contains("确认发布")
        )
    }));
    assert!(!sent.iter().any(|message| {
        matches!(message, AgentMessage::Command { .. })
    }));
    assert!(!sent
        .iter()
        .any(|message| { matches!(message, AgentMessage::Command { .. }) }));
    assert!(!sent.iter().any(|message| {
        matches!(
            message,
@@ -3086,7 +3009,10 @@ fn zhihu_publish_after_confirmation_reports_login_block_without_selector_probing
    let runtime_context = AgentRuntimeContext::new(Some(config_path), workspace_root.clone());

    let transport = Arc::new(MockTransport::new(vec![
        success_browser_response(1, json!({ "navigated": true, "url": "https://www.zhihu.com/signin?next=%2Fcreator" })),
        success_browser_response(
            1,
            json!({ "navigated": true, "url": "https://www.zhihu.com/signin?next=%2Fcreator" }),
        ),
        success_browser_response(
            2,
            json!({
@@ -3200,11 +3126,8 @@ fn browser_orchestration_registers_superrpa_tools_natively() {
            }
        }]
    });
    let (base_url, requests, server_handle) = start_fake_deepseek_server(vec![
        first_response,
        second_response,
        third_response,
    ]);
    let (base_url, requests, server_handle) =
        start_fake_deepseek_server(vec![first_response, second_response, third_response]);

    let workspace_root = temp_workspace_root();
    let config_path = write_deepseek_config_with_skills_dir(
@@ -3216,9 +3139,10 @@ fn browser_orchestration_registers_superrpa_tools_natively() {
    );
    let runtime_context = AgentRuntimeContext::new(Some(config_path), workspace_root.clone());

    let transport = Arc::new(MockTransport::new(vec![
        success_browser_response(1, json!({ "text": "知乎热榜\n1\n问题一\n344万热度" })),
    ]));
    let transport = Arc::new(MockTransport::new(vec![success_browser_response(
        1,
        json!({ "text": "知乎热榜\n1\n问题一\n344万热度" }),
    )]));
    let browser_tool = BrowserPipeTool::new(
        transport.clone(),
        zhihu_test_policy(),
@@ -3513,11 +3437,8 @@ fn handle_browser_message_executes_real_zhihu_navigate_skill_flow() {
            }
        }]
    });
    let (base_url, requests, server_handle) = start_fake_deepseek_server(vec![
        first_response,
        second_response,
        third_response,
    ]);
    let (base_url, requests, server_handle) =
        start_fake_deepseek_server(vec![first_response, second_response, third_response]);

    let workspace_root = temp_workspace_root();
    let skills_dir = real_skill_lib_root();
@@ -3709,9 +3630,14 @@ fn handle_browser_message_executes_real_zhihu_write_skill_flow() {
            params["url"].as_str() == Some("https://zhuanlan.zhihu.com/write")
        )
    }));
    assert!(sent.iter().filter(|message| {
        matches!(message, AgentMessage::Command { action, .. } if action == &Action::Eval)
    }).count() >= 2);
    assert!(
        sent.iter()
            .filter(|message| {
                matches!(message, AgentMessage::Command { action, .. } if action == &Action::Eval)
            })
            .count()
            >= 2
    );
    assert!(!sent.iter().any(|message| {
        matches!(
            message,

@@ -21,6 +21,7 @@ fn deepseek_settings_load_defaults_from_env() {
    assert_eq!(settings.api_key, "test-key");
    assert_eq!(settings.base_url, "https://api.deepseek.com");
    assert_eq!(settings.model, "deepseek-chat");
    assert_eq!(settings.skills_dir, None);
}

#[test]
@@ -29,6 +30,7 @@ fn deepseek_request_shape_matches_openai_compatible_chat_format() {
        api_key: "test-key".to_string(),
        base_url: "https://api.deepseek.com".to_string(),
        model: "deepseek-chat".to_string(),
        skills_dir: None,
    });
    let messages = vec![
        ChatMessage {
@@ -60,8 +62,5 @@ fn deepseek_request_shape_matches_openai_compatible_chat_format() {
    assert_eq!(serialized["messages"][0]["role"], "system");
    assert_eq!(serialized["messages"][1]["content"], "打开百度搜索天气");
    assert_eq!(serialized["tools"][0]["type"], "function");
    assert_eq!(
        serialized["tools"][0]["function"]["name"],
        "browser_action"
    );
    assert_eq!(serialized["tools"][0]["function"]["name"], "browser_action");
}

@@ -109,7 +109,10 @@ fn plan_first_mode_builds_visible_preview_for_zhihu_excel_flow() {
        .steps
        .iter()
        .any(|step| step.contains("navigate https://www.zhihu.com/hot")));
    assert!(preview.steps.iter().any(|step| step.contains("getText main")));
    assert!(preview
        .steps
        .iter()
        .any(|step| step.contains("getText main")));
    assert!(preview
        .steps
        .iter()

@@ -30,13 +30,22 @@ async fn read_skill_inlines_referenced_markdown_files() {
    .unwrap();

    let tool = ReadSkillTool::new(workspace_dir, false, None);
    let result = tool.execute(json!({ "name": "zhihu-hotlist" })).await.unwrap();
    let result = tool
        .execute(json!({ "name": "zhihu-hotlist" }))
        .await
        .unwrap();

    assert!(result.success);
    assert!(result.output.contains("# Zhihu Hotlist"));
    assert!(result.output.contains("## Referenced File: references/collection-flow.md"));
    assert!(result.output.contains("Collect rows from the hotlist first."));
    assert!(result.output.contains("## Referenced File: references/data-quality.md"));
    assert!(result
        .output
        .contains("## Referenced File: references/collection-flow.md"));
    assert!(result
        .output
        .contains("Collect rows from the hotlist first."));
    assert!(result
        .output
        .contains("## Referenced File: references/data-quality.md"));
    assert!(result.output.contains("Mark partial metrics explicitly."));
}

@@ -65,12 +74,21 @@ async fn read_skill_recursively_inlines_relative_asset_references() {
    .unwrap();

    let tool = ReadSkillTool::new(workspace_dir, false, None);
    let result = tool.execute(json!({ "name": "zhihu-hotlist" })).await.unwrap();
    let result = tool
        .execute(json!({ "name": "zhihu-hotlist" }))
        .await
        .unwrap();

    assert!(result.success);
    assert!(result.output.contains("## Referenced File: references/collection-flow.md"));
    assert!(result.output.contains("## Referenced File: assets/zhihu_hotlist_flow.source.json"));
    assert!(result.output.contains("\"selectors\": [\".HotList-list\", \".HotItem\"]"));
    assert!(result
        .output
        .contains("## Referenced File: references/collection-flow.md"));
    assert!(result
        .output
        .contains("## Referenced File: assets/zhihu_hotlist_flow.source.json"));
    assert!(result
        .output
        .contains("\"selectors\": [\".HotList-list\", \".HotItem\"]"));
}

fn temp_workspace_dir() -> PathBuf {

@@ -1,5 +1,5 @@
use sgclaw::runtime::{RuntimeEngine, RuntimeProfile, ToolPolicy};
use sgclaw::config::{BrowserBackend, OfficeBackend, PlannerMode, SgClawSettings};
use sgclaw::runtime::{RuntimeEngine, RuntimeProfile, ToolPolicy};

#[test]
fn browser_attached_profile_exposes_browser_surface_without_becoming_browser_only() {
@@ -39,6 +39,23 @@ fn browser_attached_export_prompt_requires_openxml_completion() {
    assert!(instruction.contains("final answer must include the generated local .xlsx path"));
}

#[test]
fn browser_attached_publish_prompt_requires_explicit_confirmation_before_clicking_publish() {
    let engine = RuntimeEngine::new(RuntimeProfile::BrowserAttached);

    let instruction = engine.build_instruction(
        "请直接发表这篇知乎文章,标题是测试标题,正文是第一段内容",
        Some("https://www.zhihu.com/creator"),
        Some("知乎创作中心"),
        true,
    );

    assert!(instruction.contains("publish a Zhihu article"));
    assert!(instruction.contains("must not click publish without explicit human confirmation"));
    assert!(instruction.contains("ask for confirmation concisely"));
    assert!(instruction.contains("stop after the confirmation request"));
}

#[test]
fn legacy_settings_default_to_plan_first_superrpa_and_openxml_backends() {
    let settings = SgClawSettings::from_legacy_deepseek_fields(

@@ -55,7 +55,7 @@ class SkillLibValidationTest(unittest.TestCase):
        self.assertIn("xlsx", record.tags)
        expected_location = (
            SKILLS_DIR / name / "SKILL.toml"
            if name == "zhihu-hotlist"
            if name in {"zhihu-hotlist", "zhihu-navigate", "zhihu-write"}
            else SKILLS_DIR / name / "SKILL.md"
        )
        self.assertEqual(record.location, expected_location)
@@ -83,6 +83,17 @@ class SkillLibValidationTest(unittest.TestCase):
        self.assertTrue(
            (SKILLS_DIR / "zhihu-hotlist" / "scripts" / "extract_hotlist.js").is_file()
        )
        self.assertTrue((SKILLS_DIR / "zhihu-navigate" / "SKILL.toml").is_file())
        self.assertTrue(
            (SKILLS_DIR / "zhihu-navigate" / "scripts" / "open_creator_entry.js").is_file()
        )
        self.assertTrue((SKILLS_DIR / "zhihu-write" / "SKILL.toml").is_file())
        self.assertTrue(
            (SKILLS_DIR / "zhihu-write" / "scripts" / "prepare_article_editor.js").is_file()
        )
        self.assertTrue(
            (SKILLS_DIR / "zhihu-write" / "scripts" / "fill_article_draft.js").is_file()
        )

    def test_each_skill_declares_superrpa_browser_contract(self):
        for name in [name for name in EXPECTED_SKILL_NAMES if name.startswith("zhihu-")]:

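The widened `expected_location` branch above distinguishes TOML-manifest skills from markdown-manifest ones. A minimal sketch of the same resolution rule, assuming the `SKILLS_DIR` layout the test implies (the helper name and relative path here are illustrative):

```python
from pathlib import Path

# Assumed layout: each skill lives in its own directory under SKILLS_DIR.
SKILLS_DIR = Path("skill_lib") / "skills"
TOML_MANIFEST_SKILLS = {"zhihu-hotlist", "zhihu-navigate", "zhihu-write"}

def expected_location(name: str) -> Path:
    # TOML-manifest skills resolve to SKILL.toml; everything else to SKILL.md.
    manifest = "SKILL.toml" if name in TOML_MANIFEST_SKILLS else "SKILL.md"
    return SKILLS_DIR / name / manifest
```

Keeping the TOML set in one place means adding the next TOML-based skill only touches the set, not the branch.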
219
tests/skill_script_zhihu_write_test.py
Normal file
@@ -0,0 +1,219 @@
import json
import subprocess
import textwrap
import unittest
from pathlib import Path


REPO_ROOT = Path(__file__).resolve().parents[1]
PREPARE_SCRIPT_PATH = (
    REPO_ROOT.parent / "skill_lib" / "skills" / "zhihu-write" / "scripts" /
    "prepare_article_editor.js"
)
FILL_SCRIPT_PATH = (
    REPO_ROOT.parent / "skill_lib" / "skills" / "zhihu-write" / "scripts" /
    "fill_article_draft.js"
)


def run_browser_script(script_path: Path, *, args: dict, body_text: str, selectors: dict[str, list[dict]]) -> dict:
    node_script = textwrap.dedent(
        f"""
        import fs from 'node:fs';
        import vm from 'node:vm';

        const scriptPath = {json.dumps(str(script_path))};
        const args = {json.dumps(args, ensure_ascii=False)};
        const selectorMap = {json.dumps(selectors, ensure_ascii=False)};
        const bodyText = {json.dumps(body_text, ensure_ascii=False)};
        const source = fs.readFileSync(scriptPath, 'utf8');

        function createNode(spec) {{
            const attrs = spec?.attrs || {{}};
            const node = {{
                tagName: String(spec?.tagName || 'DIV').toUpperCase(),
                textContent: String(spec?.textContent ?? ''),
                innerText: String(spec?.innerText ?? spec?.textContent ?? ''),
                innerHTML: String(spec?.innerHTML ?? spec?.textContent ?? ''),
                value: String(spec?.value ?? ''),
                children: [],
                focused: false,
                clicked: false,
                appendChild(child) {{
                    this.children.push(child);
                    return child;
                }},
                focus() {{
                    this.focused = true;
                }},
                click() {{
                    this.clicked = true;
                }},
                dispatchEvent() {{
                    return true;
                }},
                getAttribute(name) {{
                    return Object.prototype.hasOwnProperty.call(attrs, name) ? attrs[name] : null;
                }},
                querySelector() {{
                    return null;
                }},
                querySelectorAll() {{
                    return [];
                }},
                getBoundingClientRect() {{
                    return {{
                        width: spec?.visible === false ? 0 : 100,
                        height: spec?.visible === false ? 0 : 20,
                    }};
                }},
            }};
            return node;
        }}

        const created = new Map();

        function createNodeList(selector) {{
            const specs = selectorMap[selector] || [];
            return specs.map((spec, index) => {{
                const key = `${{selector}}#${{index}}`;
                if (!created.has(key)) {{
                    created.set(key, createNode(spec));
                }}
                return created.get(key);
            }});
        }}

        const bodyNode = createNode({{ tagName: 'body', textContent: bodyText, innerText: bodyText }});
        const context = {{
            args,
            location: {{ href: 'https://zhuanlan.zhihu.com/write' }},
            document: {{
                body: bodyNode,
                createElement(tagName) {{
                    return createNode({{ tagName }});
                }},
                createTextNode(text) {{
                    return createNode({{ tagName: '#text', textContent: text, innerText: text }});
                }},
                querySelector(selector) {{
                    if (selector === 'body') {{
                        return bodyNode;
                    }}
                    return createNodeList(selector)[0] || null;
                }},
                querySelectorAll(selector) {{
                    return createNodeList(selector);
                }},
            }},
            Event: class Event {{
                constructor(type, init = {{}}) {{
                    this.type = type;
                    this.bubbles = !!init.bubbles;
                    this.composed = !!init.composed;
                }}
            }},
            console,
            JSON,
            Math,
            Number,
            Object,
            RegExp,
            Set,
            String,
            Array,
            Error,
        }};

        try {{
            const result = vm.runInNewContext(`(function(){{\\n${{source}}\\n}})()`, context);
            process.stdout.write(JSON.stringify({{ ok: true, result, created: Object.fromEntries(created) }}));
        }} catch (error) {{
            process.stdout.write(JSON.stringify({{
                ok: false,
                error: String(error && error.message ? error.message : error),
            }}));
            process.exitCode = 1;
        }}
        """
    )
    completed = subprocess.run(
        ["node", "--input-type=module", "-e", node_script],
        check=False,
        capture_output=True,
        text=True,
    )
    payload = json.loads(completed.stdout)
    if completed.returncode != 0:
        raise AssertionError(payload["error"])
    return payload


class SkillScriptZhihuWriteTest(unittest.TestCase):
    def test_prepare_article_editor_accepts_role_textbox_title_and_generic_body_editor(self):
        payload = run_browser_script(
            PREPARE_SCRIPT_PATH,
            args={"desired_mode": "draft"},
            body_text="写文章 发布",
            selectors={
                "[role='textbox'][aria-label*='标题']": [
                    {
                        "tagName": "div",
                        "attrs": {
                            "role": "textbox",
                            "aria-label": "标题",
                            "contenteditable": "true",
                        },
                    }
                ],
                "div[contenteditable='true']": [
                    {
                        "tagName": "div",
                        "attrs": {
                            "contenteditable": "true",
                            "data-placeholder": "在这里输入正文",
                        },
                    }
                ],
            },
        )

        self.assertEqual(payload["result"]["status"], "editor_ready")

    def test_fill_article_draft_accepts_role_textbox_title_and_generic_body_editor(self):
        payload = run_browser_script(
            FILL_SCRIPT_PATH,
            args={
                "title": "测试标题",
                "body": "第一段\n第二段",
                "publish_mode": "false",
            },
            body_text="写文章 发布",
            selectors={
                "[role='textbox'][aria-label*='标题']": [
                    {
                        "tagName": "div",
                        "attrs": {
                            "role": "textbox",
                            "aria-label": "标题",
                            "contenteditable": "true",
                        },
                    }
                ],
                "div[contenteditable='true']": [
                    {
                        "tagName": "div",
                        "attrs": {
                            "contenteditable": "true",
                            "data-placeholder": "在这里输入正文",
                        },
                    }
                ],
            },
        )

        self.assertEqual(payload["result"]["status"], "draft_ready")


if __name__ == "__main__":
    unittest.main()
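The harness above generates its JavaScript from a Python f-string, so every brace meant for the emitted code must be doubled (`{{ }}`) while single braces interpolate Python expressions such as `json.dumps(...)`. A small self-contained sketch of that escaping rule, independent of the real skill scripts (the payload and snippet are illustrative):

```python
import json
import textwrap

# In an f-string, {{ and }} become literal braces in the output, while
# single braces interpolate Python values into the generated JavaScript.
args = {"title": "测试标题"}  # illustrative payload
node_script = textwrap.dedent(
    f"""
    const args = {json.dumps(args, ensure_ascii=False)};
    function greet(name) {{
        return `hello ${{name}}`;
    }}
    """
)
```

Template-literal interpolations need both layers: `${{name}}` in the f-string survives as `${name}` for the JavaScript engine.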
20
third_party/zeroclaw/src/agent/agent.rs
vendored
@@ -796,7 +796,8 @@ impl Agent {

        let (text, calls) = self.tool_dispatcher.parse_response(&response);
        let calls = canonicalize_parsed_tool_calls(&self.tools, calls);
        response.tool_calls = canonicalize_provider_tool_calls(&self.tools, response.tool_calls);
        response.tool_calls =
            canonicalize_provider_tool_calls(&self.tools, response.tool_calls);
        if calls.is_empty() {
            let final_text = if text.is_empty() {
                response.text.unwrap_or_default()
@@ -1065,7 +1066,8 @@ impl Agent {

        let (text, calls) = self.tool_dispatcher.parse_response(&response);
        let calls = canonicalize_parsed_tool_calls(&self.tools, calls);
        response.tool_calls = canonicalize_provider_tool_calls(&self.tools, response.tool_calls);
        response.tool_calls =
            canonicalize_provider_tool_calls(&self.tools, response.tool_calls);
        if calls.is_empty() {
            let final_text = if text.is_empty() {
                response.text.unwrap_or_default()
@@ -1207,7 +1209,8 @@ fn sanitize_final_text(text: &str) -> String {
}

fn resolve_registered_tool_name(tools: &[Box<dyn Tool>], raw: &str) -> Option<String> {
    tools.iter()
    tools
        .iter()
        .find(|tool| {
            tool.name() == raw || crate::tools::provider_safe_tool_name(tool.name()) == raw
        })
@@ -1218,7 +1221,8 @@ fn canonicalize_parsed_tool_calls(
    tools: &[Box<dyn Tool>],
    calls: Vec<ParsedToolCall>,
) -> Vec<ParsedToolCall> {
    calls.into_iter()
    calls
        .into_iter()
        .map(|mut call| {
            if let Some(canonical_name) = resolve_registered_tool_name(tools, &call.name) {
                call.name = canonical_name;
@@ -1232,7 +1236,8 @@ fn canonicalize_provider_tool_calls(
    tools: &[Box<dyn Tool>],
    calls: Vec<crate::providers::ToolCall>,
) -> Vec<crate::providers::ToolCall> {
    calls.into_iter()
    calls
        .into_iter()
        .map(|mut call| {
            if let Some(canonical_name) = resolve_registered_tool_name(tools, &call.name) {
                call.name = canonical_name;
@@ -1656,7 +1661,10 @@ mod tests {
            .expect("agent builder should succeed with valid config");

        let (event_tx, _event_rx) = tokio::sync::mpsc::channel(8);
        let response = agent.turn_streamed("读取知乎热榜前10,并导出 excel 文件", event_tx).await.unwrap();
        let response = agent
            .turn_streamed("读取知乎热榜前10,并导出 excel 文件", event_tx)
            .await
            .unwrap();

        assert_eq!(
            response,
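The `canonicalize_*` helpers above map a provider-reported tool name back to the registered name when either the exact spelling or its provider-safe variant matches, and pass unknown names through unchanged. A Python sketch of that mapping, under the assumption that the provider-safe form merely sanitizes characters some providers reject (the `provider_safe` rule here is hypothetical, not the real `provider_safe_tool_name`):

```python
# Hypothetical sanitizer: providers that reject dots in tool names get an
# underscore variant; the real provider_safe_tool_name may differ.
def provider_safe(name: str) -> str:
    return name.replace(".", "_")

def canonicalize(registered: list[str], raw: str) -> str:
    # Return the registered spelling if raw matches it directly or via the
    # provider-safe variant; otherwise leave the call name untouched.
    for name in registered:
        if raw == name or provider_safe(name) == raw:
            return name
    return raw

tools = ["browser.navigate", "openxml_office"]
```

Leaving unknown names untouched keeps dispatch errors visible instead of silently renaming calls.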
3
third_party/zeroclaw/src/lib.rs
vendored
@@ -71,7 +71,7 @@ pub mod routines;
pub mod runtime;
pub(crate) mod security;
pub(crate) mod service;
pub(crate) mod skills;
pub mod skills;
pub mod sop;
pub mod tools;
pub(crate) mod trust;
@@ -83,6 +83,7 @@ pub mod verifiable_intent;
pub mod plugins;

pub use config::Config;
pub use security::{AutonomyLevel, SecurityPolicy};

/// Gateway management subcommands
#[derive(Subcommand, Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]

10
third_party/zeroclaw/src/providers/compatible.rs
vendored
@@ -1943,9 +1943,8 @@ impl Provider for OpenAiCompatibleProvider {
            reasoning_effort: self.reasoning_effort_for_model(model),
            tool_stream: self
                .tool_stream_for_tools(tools.as_ref().is_some_and(|tools| !tools.is_empty())),
            tool_choice: self.tool_choice_for_tools(
                tools.as_ref().is_some_and(|tools| !tools.is_empty()),
            ),
            tool_choice: self
                .tool_choice_for_tools(tools.as_ref().is_some_and(|tools| !tools.is_empty())),
            tools,
            max_tokens: self.max_tokens,
        };
@@ -2099,9 +2098,8 @@ impl Provider for OpenAiCompatibleProvider {
                tool_stream: if options.enabled { Some(true) } else { None },
                stream: Some(options.enabled),
                tools: tools.clone(),
                tool_choice: self.tool_choice_for_tools(
                    tools.as_ref().is_some_and(|tools| !tools.is_empty()),
                ),
                tool_choice: self
                    .tool_choice_for_tools(tools.as_ref().is_some_and(|tools| !tools.is_empty())),
                max_tokens: self.max_tokens,
            })
        } else {

14
third_party/zeroclaw/src/skills/mod.rs
vendored
@@ -816,12 +816,22 @@ pub fn skills_to_prompt_with_mode(
        let registered: Vec<_> = skill
            .tools
            .iter()
            .filter(|t| matches!(t.kind.as_str(), "shell" | "script" | "http" | "browser_script"))
            .filter(|t| {
                matches!(
                    t.kind.as_str(),
                    "shell" | "script" | "http" | "browser_script"
                )
            })
            .collect();
        let unregistered: Vec<_> = skill
            .tools
            .iter()
            .filter(|t| !matches!(t.kind.as_str(), "shell" | "script" | "http" | "browser_script"))
            .filter(|t| {
                !matches!(
                    t.kind.as_str(),
                    "shell" | "script" | "http" | "browser_script"
                )
            })
            .collect();

        if !registered.is_empty() {

34
third_party/zeroclaw/src/tools/read_skill.rs
vendored
@@ -154,7 +154,9 @@ pub async fn read_skill_bundle(location: &Path) -> std::io::Result<String> {
    let Some(skill_root) = location.parent() else {
        return Ok(primary);
    };
    let skill_root = skill_root.canonicalize().unwrap_or_else(|_| skill_root.to_path_buf());
    let skill_root = skill_root
        .canonicalize()
        .unwrap_or_else(|_| skill_root.to_path_buf());
    let mut output = primary.clone();
    let mut appended = BTreeSet::new();
    let mut queued = BTreeSet::new();
@@ -275,16 +277,22 @@ fn extract_reference_paths(content: &str) -> Vec<String> {
}

fn looks_like_relative_reference_path(raw: &str) -> bool {
    if raw.is_empty() ||
        raw.starts_with('/') ||
        raw.starts_with("http://") ||
        raw.starts_with("https://") ||
        raw.starts_with('#')
    if raw.is_empty()
        || raw.starts_with('/')
        || raw.starts_with("http://")
        || raw.starts_with("https://")
        || raw.starts_with('#')
    {
        return false;
    }

    let candidate = raw.split('#').next().unwrap_or(raw).split('?').next().unwrap_or(raw);
    let candidate = raw
        .split('#')
        .next()
        .unwrap_or(raw)
        .split('?')
        .next()
        .unwrap_or(raw);
    let path = Path::new(candidate);
    if path
        .components()
@@ -418,9 +426,15 @@ description = "Ship safely"

        assert!(result.success);
        assert!(result.output.contains("# Zhihu Hotlist"));
        assert!(result.output.contains("## Referenced File: references/collection-flow.md"));
        assert!(result.output.contains("Collect rows from the hotlist first."));
        assert!(result.output.contains("## Referenced File: references/data-quality.md"));
        assert!(result
            .output
            .contains("## Referenced File: references/collection-flow.md"));
        assert!(result
            .output
            .contains("Collect rows from the hotlist first."));
        assert!(result
            .output
            .contains("## Referenced File: references/data-quality.md"));
        assert!(result.output.contains("Mark partial metrics explicitly."));
    }
}

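`looks_like_relative_reference_path` rejects empty strings, absolute paths, URLs, and pure fragments before stripping any `#fragment`/`?query` and inspecting the remaining path components (the component check itself is truncated in this hunk). A rough Python approximation, under the assumption that the component check guards against parent-directory escapes:

```python
from pathlib import PurePosixPath

def looks_like_relative_reference_path(raw: str) -> bool:
    # Reject empties, absolute paths, URLs, and pure fragments up front.
    if not raw or raw.startswith(("/", "http://", "https://", "#")):
        return False
    # Drop any #fragment or ?query before inspecting the path itself.
    candidate = raw.split("#", 1)[0].split("?", 1)[0]
    if not candidate:
        return False
    # Assumed component guard: no parent-directory escapes out of the skill root.
    return ".." not in PurePosixPath(candidate).parts
```

Only paths passing this check get queued for inlining, which keeps `read_skill_bundle` from following links outside the skill directory.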