Compare commits

41 commits, `96c3bf1dee`...`feature/cl`:

044d38003d, f07f7d63ef, c60cd308ca, 6aa0c110bd, 390a431a4b, 0f70702914, 8decd9554c, adb64429ee, 32e2c59a40, fae2fd57d6, 899c670e5c, 583bb117cb, ad3778d4c5, 4d1070dff0, 0303111d5b, 7320fb7f79, dbbc5d030b, ce6b3e6749, a957712590, 0ebe060484, 695a888840, 733aee1e9a, f8f822e1f3, 3b156e4bd1, 645dc60bae, 007959b903, a8a470481d, 447457b7d3, 45b60e37f7, d230ff0389, 72b79feca9, dd7805d341, 883647dffc, b454fa3f54, 311cc1fee6, 7443b9da7f, 34035cdc9c, 4becf81066, 81de162756, 630190e4d3, 57b9be733d
3  .gitignore (vendored)

@@ -6,5 +6,8 @@ target/
 .qoder/
 .sgclaw_workspace/
 .sgclaw_workspace_dev1/
+.sgclaw-zeroclaw-workspace/
+sgclaw_config.json
+nul
 target-test/
 target-zhihu-nav/
502  Cargo.lock (generated; file diff suppressed because it is too large)
Cargo.toml

@@ -1,6 +1,6 @@
 [package]
 name = "sgclaw"
-version = "0.1.0"
+version = "0.1.0-2026.4.9"
 edition = "2021"

 [dependencies]
@@ -19,5 +19,5 @@ thiserror = "1"
 tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "macros"] }
 tungstenite = "0.29"
 uuid = { version = "1", features = ["v4"] }
+zip = { version = "0.6.6", default-features = false, features = ["deflate"] }
 zeroclaw = { package = "zeroclawlabs", path = "third_party/zeroclaw", default-features = false }
-zip = "8.4"
422  docs/collect_lineloss_troubleshooting_guide.md (new file)

@@ -0,0 +1,422 @@
# collect_lineloss.js: Complete Troubleshooting Record, from Generation to Working

This document records the full troubleshooting process that took the `tq-lineloss-report` skill script from its initial generation to a working state, covering every error encountered, its root-cause analysis, and the fix. It can serve as a troubleshooting template for similar skill development later.

---

## Background

### Architecture Overview

```
User input "兰州公司 月累计 2026-03。。。"
        │
        ▼
sgClaw Rust process
  ├── Parse instruction → DeterministicExecutionPlan
  ├── Read the collect_lineloss.js script
  ├── Wrap it in an IIFE: (function(){ const args = {...}; <script body> })()
  ├── Call sgBrowserExcuteJsCodeByDomain(domain, wrappedJs)
  │     inject into the browser page matching domain and execute
  ├── Wait for the callback: the script returns a JSON result via callBackJsToCpp
  ├── Parse the artifact JSON → extract status/rows/reasons
  └── Generate XLSX (Rust side) → return outcome
```
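The IIFE wrapping step in the flow above can be sketched in a few lines. This is an illustrative model only: `wrapAsIife` is a hypothetical helper name, and the real wrapper string is assembled on the Rust side.

```javascript
// Sketch of the IIFE wrapping described above. `wrapAsIife` is a
// hypothetical helper; the real wrapper string is built in Rust.
function wrapAsIife(scriptBody, args) {
  // args are serialized into the closure so the script can read them
  return '(function(){ const args = ' + JSON.stringify(args) + '; ' + scriptBody + ' })()';
}

const wrapped = wrapAsIife(
  'return args.org_name + " " + args.month;',
  { org_name: '国网兰州供电公司', month: '2026-03' }
);
// Evaluating the wrapped string runs the script with its args in scope.
const result = eval(wrapped);
// result === '国网兰州供电公司 2026-03'
```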
### Key Differences: Original Scene vs Skill Mode

| Aspect | Original scene (index.html) | Skill mode |
|--------|-----------------------------|------------|
| Script injection | `sgBrowserExcuteJsCode(exactURL, js)`: exact URL | `sgBrowserExcuteJsCodeByDomain(domain, js)`: domain match only |
| Execution page | business sub-page `/tqLinelossStatis/tqQualifyRateMonitor` | may hit the parent frame page `/gsllys` |
| `window.mac` | present (Vue instance; `window.mac = this` in `mounted()`) | absent (no Vue instance) |
| Excel export | JS calls `localhost:13313` (reachable from the local scene page) | JS cannot reach `localhost:13313` (blocked by CORS) |
| Result return | Rust only needs the `.then()` callback result | same, except the script is an async function and needs `.then()` handling |

---
## Troubleshooting Timeline

### Phase 1: Basic Pipeline Issues

#### Problem 1: `missing_expected_domain`

**Symptom**: `status=blocked reasons=missing_expected_domain`

**Root cause**: The Rust-side `deterministic_submit.rs` did not pass an `expected_domain` field when constructing args. `derive_expected_domain()` extracted the host (without port) from `page_url`, but the key did not match when the args were passed in.

**Fix**: Ensure `deterministic_submit_args()` inserts `expected_domain` into the args Map correctly.

**Files involved**: `src/compat/deterministic_submit.rs`

**Recompile required**: yes
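For reference, what `derive_expected_domain()` computes (hostname without the port) can be modeled in one line of JS; this is an illustrative equivalent, not the Rust code:

```javascript
// JS model of derive_expected_domain(): take the hostname (no port)
// from the page URL; return '' for unparseable input.
function deriveExpectedDomain(pageUrl) {
  try {
    return new URL(pageUrl).hostname;
  } catch (_) {
    return '';
  }
}

const domain = deriveExpectedDomain(
  'http://20.76.57.61:18080/gsllys/tqLinelossStatis/tqQualifyRateMonitor'
);
// domain === '20.76.57.61' (port stripped)
```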
---

#### Problem 2: `target_url` missing the port number

**Symptom**: Script injection fails, or the script is injected into the wrong page.

**Root cause**: `target_url` was set to `http://20.76.57.61` (no port), while the actual business page lives at `http://20.76.57.61:18080/gsllys/...`. `sgBrowserExcuteJsCodeByDomain` needs to be able to match the correct tab.

**Fix**: Set the full `target_url` in `deterministic_submit.rs`:

```rust
const LINELLOSS_TARGET_URL: &str = "http://20.76.57.61:18080/gsllys/tqLinelossStatis/tqQualifyRateMonitor";
```

**Files involved**: `src/compat/deterministic_submit.rs`

**Recompile required**: yes

---
#### Problem 3: Script returns an empty `{}` object

**Symptom**: The artifact received on the Rust side is `{}`, with no data at all.

**Root cause**: The entrypoint of `collect_lineloss.js`, `buildBrowserEntrypointResult()`, is an `async` function and returns a Promise. The Rust-side `build_eval_js` wrapper used to call `_s(v)` directly to send the result, but `v` was a Promise object, which JSON.stringify serializes to `{}`.

**Fix**: Add Promise detection in `build_eval_js` (`callback_backend.rs`):

```rust
// old code
"_s(v);"

// new code
"if(v&&typeof v.then==='function'){v.then(_s).catch(function(){});}else{_s(v);}"
```

If the return value is a thenable (Promise), wait for it to resolve before sending the callback.

**Files involved**: the `build_eval_js` function in `src/browser/callback_backend.rs`

**Recompile required**: yes

**Lesson**: Every browser_script skill whose entrypoint is async (returns a Promise) needs this `.then()` handling. This is a generic pipeline-level fix.
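The behavior of the generated snippet can be modeled as a tiny standalone function; `deliver` and `_s` below mirror the generated code shown above (illustrative, not the Rust-emitted string itself):

```javascript
// Model of the generated callback snippet: if the script returned a
// thenable (async entrypoint), wait for it; otherwise send immediately.
function deliver(v, _s) {
  if (v && typeof v.then === 'function') {
    v.then(_s).catch(function () {});
  } else {
    _s(v);
  }
}

const sent = [];
const _s = (v) => sent.push(JSON.stringify(v));
deliver({ rows: 12 }, _s);                  // plain value: sent at once
deliver(Promise.resolve({ rows: 12 }), _s); // Promise: unwrapped first
// Without the thenable check, the second call would serialize to "{}".
```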
---

### Phase 2: Page Context Issues

#### Problem 4: `page_context_unavailable` (mac_missing)

**Symptom**:
```
tq-lineloss-report 国网兰州供电公司 2026-03 status=blocked rows=0 reasons=page_context_unavailable
```

**Troubleshooting process**:

1. Add diagnostics to `validatePageContext`:
```javascript
// temporary diagnostic code
const diag = 'href=' + href + '|host=' + host + '|port=' + port + '|title=' + title + '|mac=' + hasMac;
return { ok: false, reason: 'page_context_unavailable:mac_missing|' + diag };
```

2. Diagnostic output returned by the page:
```
href=http://20.76.57.61:18080/gsllys
host=20.76.57.61
port=18080
title=台区线损大数据分析模块
mac=false
```

**Root cause**: `sgBrowserExcuteJsCodeByDomain("20.76.57.61")` matched the parent frame page `/gsllys`, not the business sub-page. `window.mac` is the business sub-page's Vue instance, set via `window.mac = this` in `mounted()`; the parent frame page has no such instance.

**Key insight**: In Skill mode there is no Vue instance, so the `window.mac` check is architecturally inapplicable. The script issues AJAX requests with absolute URLs and does not depend on page-local state.

**Fix**: Remove the `globalThis.mac` check and keep only the host match:

```javascript
// before
validatePageContext(args) {
  // ... mac check + diagnostic code
  if (!hasMac) {
    return { ok: false, reason: 'page_context_unavailable:mac_missing|' + diag };
  }
}

// after
validatePageContext(args) {
  const host = normalizeText(globalThis.location?.hostname);
  const expected = normalizeText(args.expected_domain);
  if (!host) {
    return { ok: false, reason: 'page_context_unavailable' };
  }
  if (host !== expected) {
    return { ok: false, reason: 'page_context_mismatch' };
  }
  return { ok: true };
},
```

**Files involved**: `collect_lineloss.js`, the `validatePageContext` function

**Recompile required**: no (the JS file is read at runtime)

**Troubleshooting tip**: Concatenate diagnostic information (href/host/port/title/mac) into the reasons string. No F12 console is needed; the Rust-side summary output shows it directly.

---
### Phase 3: API Request Issues

#### Problem 5: `api_query_failed` (HTML returned instead of JSON)

**Symptom**:
```
status=error rows=0 reasons=api_query_failed:month_api_failed: SyntaxError: Unexpected token '<', "<!DOCTYPE "... is not valid JSON
```

**Root cause**: The backend service detected that the request lacked the `X-Requested-With: XMLHttpRequest` header, decided it was not an AJAX request, and returned the HTML login page. jQuery's `$.ajax` does not add this header automatically here.

**Fix**: Add the header to the `$.ajax` calls in `queryMonthData` and `queryWeekData`:

```javascript
$.ajax({
  url,
  type: 'POST',
  dataType: 'json',
  crossDomain: true,
  headers: { 'X-Requested-With': 'XMLHttpRequest' }, // <-- added
  data: request,
  contentType: 'application/x-www-form-urlencoded;charset=UTF-8',
  success: resolve,
  error: (xhr, _status, err) => reject(new Error(
    `month_api_failed(${xhr.status}): ${String(err)}|body=${String(xhr.responseText || '').substring(0, 200)}`
  ))
});
```

**Files involved**: `collect_lineloss.js`, `queryMonthData` and `queryWeekData`

**Recompile required**: no

**Troubleshooting tip**: In the error handler, concatenate the first 200 characters of `xhr.responseText` into the reasons string. If it starts with `<!DOCTYPE`, the backend returned HTML rather than JSON.

**General rule**: Intranet Java backends typically rely on `X-Requested-With: XMLHttpRequest` to distinguish page requests from AJAX requests. Every `$.ajax` call to an intranet API should carry this header.
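In the same spirit, a small helper (hypothetical, not part of the current script) can classify the body up front so the reasons string names the failure directly instead of surfacing a raw SyntaxError:

```javascript
// Hypothetical helper: classify a response body before JSON.parse so
// diagnostics can say "html login page" instead of a raw SyntaxError.
function classifyBody(text) {
  const head = String(text || '').trimStart().slice(0, 9).toLowerCase();
  if (head.startsWith('<!doctype') || head.startsWith('<html')) return 'html';
  try {
    JSON.parse(text);
    return 'json';
  } catch (_) {
    return 'unknown';
  }
}

classifyBody('<!DOCTYPE html><html>...');  // 'html'
classifyBody('{"rows":[]}');               // 'json'
```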
---

### Phase 4: Data Normalization Issues

#### Problem 6: `row_normalization_failed` (column names do not match)

**Symptom**:
```
status=error rows=0 reasons=row_normalization_failed:rawRows=12|keys=YGDL,ORG_NO,YXSL,TG_NUM...
```

**Root cause**: The initially generated `MONTH_COLUMN_DEFS` used guessed column names:

```javascript
// wrong column names
['LINE_LOSS_RATE', '线损完成率(%)'],
['PPQ', '累计供电量'],
['UPQ', '累计售电量'],
```

while the column names the API actually returns are (see `cols2` in the original scene's `index.html`):

```javascript
// correct column names
['ORG_NAME', '供电单位'],
['YGDL', '累计供电量'],
['YYDL', '累计售电量'],
['YXSL', '线损完成率(%)'],
['RAT_SCOPE', '线损率累计目标值'],
['BLANK3', '目标完成率'],
['BLANK2', '排行']
```

**Fix**: Correct `MONTH_COLUMN_DEFS` to match the `cols2` definition in the original scene's `index.html`.

**Troubleshooting tip**: Concatenating `rawRows.length` and `Object.keys(rawRows[0]).join(',')` into `reasons` shows directly which fields the API returned.

**General rule**: When generating a skill script, column definitions must be copied exactly from the original scene code, never guessed. Look for `cols1`/`cols2` or the table-rendering code.

---
#### Problem 7: `row_normalization_failed` (incompatible value types)

**Symptom**: After fixing the column names, `row_normalization_failed:rawRows=12` persisted; all 12 rows were filtered out.

**Root cause**: The `pickFirstNonEmpty()` function only recognized string values:

```javascript
function pickFirstNonEmpty(...values) {
  for (const value of values) {
    if (isNonEmptyString(value)) { // isNonEmptyString: typeof value === 'string'
      return value.trim();
    }
  }
  return ''; // the API returns the number 12345.67; typeof === 'number', treated as empty
}
```

The field values the API returns are numbers (e.g. `YGDL: 12345.67`), not strings. `pickFirstNonEmpty` returns `''` for numbers, so every field of every row came out empty and all rows were filtered.

**Fix**: Stop using `pickFirstNonEmpty` in `normalizeMonthRow`; handle values of any type directly:

```javascript
// before
function normalizeMonthRow(rawRow) {
  const row = {};
  for (const key of MONTH_COLUMNS) {
    row[key] = pickFirstNonEmpty(rawRow?.[key]); // number → ''
  }
  return MONTH_COLUMNS.every((key) => row[key] !== '') ? row : null;
}

// after
function normalizeMonthRow(rawRow) {
  const row = {};
  for (const key of MONTH_COLUMNS) {
    const v = rawRow?.[key];
    row[key] = (v === null || v === undefined || v === '') ? '' : String(v).trim();
  }
  return MONTH_COLUMNS.every((key) => row[key] !== '') ? row : null;
}
```

**Files involved**: `collect_lineloss.js`, `normalizeMonthRow`

**Recompile required**: no

**General rule**: Numeric fields in JSON returned by intranet APIs are usually of type `number`, not strings. Row-normalization functions must convert with `String(v)` rather than rely on a `typeof === 'string'` check.
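A worked example of the fixed rule, using a reduced column subset for illustration (the real `MONTH_COLUMNS` is larger):

```javascript
// Reduced illustration of the fixed normalizeMonthRow: numeric API
// values survive String(v) coercion instead of being dropped as empty.
const MONTH_COLUMNS = ['ORG_NAME', 'YGDL', 'YXSL']; // subset for the example
function normalizeMonthRow(rawRow) {
  const row = {};
  for (const key of MONTH_COLUMNS) {
    const v = rawRow?.[key];
    row[key] = (v === null || v === undefined || v === '') ? '' : String(v).trim();
  }
  return MONTH_COLUMNS.every((key) => row[key] !== '') ? row : null;
}

const ok = normalizeMonthRow({ ORG_NAME: '国网兰州供电公司', YGDL: 12345.67, YXSL: 2.31 });
// ok.YGDL === '12345.67' (number kept as string, row survives)
const dropped = normalizeMonthRow({ ORG_NAME: '', YGDL: 1, YXSL: 2 });
// dropped === null (an empty field filters out the whole row)
```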
---

### Phase 5: Export Issues (architectural)

#### Problem 8: Export hangs forever

**Symptom**:
```
tq-lineloss-report 国网兰州供电公司 2026-03 status=pl rows=12
```
Data collection succeeded (12 rows), but nothing ever came back afterwards; the script hung at the export step.

**Troubleshooting process**:

1. `exportWorkbook` calls `fetch('http://localhost:13313/...')`: blocked by CORS
2. Switching to `$.ajax({ crossDomain: true })`: blocked just the same
3. Confirmed this is a browser security-model restriction, not a configuration problem

**Root cause**: The script runs on the remote page `http://20.76.57.61:18080`, and the browser forbids requests from a remote page to `localhost:13313` (same-origin policy plus mixed content). `crossDomain: true` merely tells jQuery to use cross-domain mode; it cannot bypass browser security policy.

The original scene solved this with a local scene page (`index.html` on `localhost`) acting as a proxy: collect data on the remote page first, pass it back to the local page via `postMessage` or a callback, and let the local page call `localhost:13313`.

Skill mode has no local scene page, so this proxy mechanism does not exist.

**Solution**: Move the export logic from browser JS to the Rust side (option A2: generate the XLSX locally in Rust).

**Final architecture**:
```
JS (browser): collect data → return artifact { rows, column_defs, status }
        ↓
Rust (local): parse artifact → extract rows + column_defs → generate the XLSX file
```

**Concrete changes**:

1. **JS side**: Delete the export-related code: `exportWorkbook()`, `writeReportLog()`, `postJson()`, `buildExportPayload()`, etc. Add a `column_defs` field to the artifact and set the export status to `deferred_to_rust`.

2. **Rust side**: Add `lineloss_xlsx_export.rs`, generating the XLSX with the `zip` crate plus OpenXML XML. In `deterministic_submit.rs`, call the XLSX generation after receiving the artifact.

**Files involved**:
- `collect_lineloss.js`: delete the export code, add `column_defs`
- `src/compat/lineloss_xlsx_export.rs`: new
- `src/compat/deterministic_submit.rs`: new export integration
- `src/compat/mod.rs`: register the new module

**Recompile required**: yes

**General rule**: Any call from a remote page to `localhost` is infeasible in Skill mode. Features that need access to local services, such as export and log writing, must be implemented on the Rust side.
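Putting the JS side's remaining contract together, the artifact handed back to Rust looks roughly like this. The field names `status`, `rows`, `column_defs`, `export`, and `reasons` come from the changes described above; the concrete values are purely illustrative:

```javascript
// Illustrative artifact returned by the JS side after the split:
// Rust reads rows + column_defs and generates the XLSX itself.
const artifact = {
  status: 'ok',
  rows: [
    { ORG_NAME: '国网兰州供电公司', YGDL: '12345.67', YXSL: '2.31' }
  ],
  column_defs: [
    ['ORG_NAME', '供电单位'],
    ['YGDL', '累计供电量'],
    ['YXSL', '线损完成率(%)']
  ],
  export: 'deferred_to_rust', // export happens on the Rust side now
  reasons: []
};
```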
---

## Troubleshooting Methodology Summary

### 1. Diagnostic-Injection Pattern

The script runs inside the browser, where the F12 console is not visible. The only information channel is the `reasons` field of the artifact JSON.

```javascript
// inject detailed errors in catch blocks
reasons: ['api_query_failed:' + String(error?.message || error || 'unknown')]

// inject a raw-data summary when normalization fails
reasons: ['row_normalization_failed:rawRows=' + rawRows.length + '|keys=' + Object.keys(rawRows[0]).join(',')]

// inject environment info in the page-context check
reason: 'page_context_unavailable:mac_missing|href=' + href + '|host=' + host + '|port=' + port
```

The Rust-side summary output includes these reasons, so they are visible directly in the logs.

### 2. Layer-by-Layer Checking Order

```
Layer 1: pipeline (Rust)
  ├── Are the args passed in correctly? (expected_domain, target_url, org_code, ...)
  ├── Is the script file read correctly?
  ├── Is the async return value handled correctly? (.then() pattern)
  └── Does the callback come back successfully?

Layer 2: page context (JS)
  ├── Which page was the script injected into? (href, title)
  ├── Does the page have the required globals? (window.mac, ...)
  └── Is the domain match correct?

Layer 3: API requests (JS)
  ├── Are the request headers complete? (X-Requested-With)
  ├── Is the response format correct? (JSON vs HTML)
  └── What is the response status code?

Layer 4: data processing (JS)
  ├── Do the API field names match the column definitions?
  ├── Are the value types compatible? (number vs string)
  └── Are there valid rows after normalization?

Layer 5: export (architecture)
  ├── Does it involve cross-origin requests?
  ├── Is localhost reachable?
  └── Does it need to be handled on the Rust side?
```

### 3. Post-Change Verification Checklist

- [ ] JS syntax check: `node -e "require('./collect_lineloss.js')"`
- [ ] If Rust code changed: `cargo build` compiles
- [ ] `cargo test` all pass (excluding known pre-existing failures)
- [ ] Copy the JS file into the deployment directory
- [ ] If Rust changed: redeploy the compiled sgclaw binary
---

## Final File Inventory

### JS file: `collect_lineloss.js`

**Location**: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js`

**Purpose**: Pure data collection. Injected into the browser, queries the line-loss platform API, and returns a structured artifact.

**What it does not do**: no calls to localhost:13313, no Excel export, no report-log writing.

### Rust files: change list

| File | Change | Type |
|------|--------|------|
| `src/browser/callback_backend.rs` | `build_eval_js` gains `.then()` handling for async return values | generic pipeline fix |
| `src/compat/deterministic_submit.rs` | full `target_url`; call XLSX export after parsing the artifact | business integration |
| `src/compat/lineloss_xlsx_export.rs` | XLSX generation (zip + OpenXML) | new |
| `src/compat/mod.rs` | register the `lineloss_xlsx_export` module | new |

---

## Quick Reuse Template

When building a similar skill, check these points directly:

1. **Does `build_eval_js` support async**: if the entrypoint is `async`, confirm `callback_backend.rs` has the `.then()` handling.
2. **`validatePageContext` must not check page-local state**: check only the host, not scene-page-specific globals like `window.mac` or `window.app`.
3. **API requests must carry `X-Requested-With: XMLHttpRequest`**: standard for intranet Java backends.
4. **Copy column definitions exactly from the original scene code**: look for `cols1`/`cols2` or the table `columns` config.
5. **`normalizeRow` uses `String(v)`, not `pickFirstNonEmpty`**: the API returns numbers, not strings.
6. **Export goes through Rust, not the browser**: JS returns rows + column_defs; Rust generates the XLSX.
@@ -1,455 +0,0 @@ (deleted file)
# Scene Skill Runtime Routing Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Add a first scene-routing slice that recognizes staged business scenes from natural language and dispatches them through browser-backed execution, with `fault-details-report` using direct browser execution and `95598-repair-city-dispatch` using agent-mediated browser execution.

**Architecture:** Introduce a small registry module that loads the first staged `scene.json` contracts plus runtime dispatch policy from the external `skill_staging` root. Route matched scenes through one of two paths: `direct_browser` scenes execute through compat orchestration without the model choosing tools, while `agent_browser` scenes stay in the existing agent flow but get scene-specific browser-first prompt injection. Both modes must still execute through the existing `BrowserScriptSkillTool` / browser backend path so the final business action uses browser-internal methods.

**Tech Stack:** Rust, serde/JSON metadata loading, existing compat orchestration/runtime/workflow layers, browser-backed skill tools, focused `cargo test` coverage.

---

## File Map

**Create:**
- `src/runtime/scene_registry.rs` — load staged scene metadata, attach runtime dispatch policy, expose matching helpers for the first slice.
- `tests/scene_registry_test.rs` — focused tests for registry loading, matching, and policy behavior.

**Modify:**
- `src/runtime/mod.rs` — export the new scene registry module/types used by runtime and compat layers.
- `src/compat/config_adapter.rs` — verify the moved external `skill_staging` root resolves to the staged `skills` child, and only change path resolution if a targeted regression proves it is insufficient.
- `src/runtime/engine.rs` — inject scene-specific browser-first contracts for `agent_browser` scenes and keep existing Zhihu prompt behavior intact.
- `src/compat/workflow_executor.rs` — extend route detection and direct execution support for `fault-details-report` using the browser-backed skill path.
- `src/compat/orchestration.rs` — let primary orchestration prefer direct execution for `direct_browser` scenes while leaving `agent_browser` scenes in the agent path.
- `src/compat/browser_script_skill_tool.rs` — expose the thinnest reusable browser-backed execution helper needed so direct scene execution can reuse the same `browser_script` semantics instead of drifting into a duplicate local path.
- `src/compat/runtime.rs` — ensure runtime sees the staged skills root and continues exposing browser-backed scene tools.
- `tests/compat_config_test.rs` — add path-resolution coverage for the staged external root.
- `tests/runtime_profile_test.rs` — add scene-specific instruction contract assertions.
- `tests/browser_script_skill_tool_test.rs` — add coverage for any new reusable direct-execution helper introduced in the browser-script layer.
- `tests/compat_runtime_test.rs` — add orchestration/direct-route coverage for the new scene behavior.

**Reference:**
- `docs/superpowers/specs/2026-04-06-scene-skill-runtime-routing-design.md`
- `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\fault-details-report\scene.json`
- `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\95598-repair-city-dispatch\scene.json`
- `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\fault-details-report\SKILL.toml`
- `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-repair-city-dispatch\SKILL.toml`
### Task 1: Add Scene Registry And Matching

**Files:**
- Create: `src/runtime/scene_registry.rs`
- Modify: `src/runtime/mod.rs`
- Test: `tests/scene_registry_test.rs`

- [ ] **Step 1: Write the failing registry tests**

Add tests that prove the first-slice registry can:
- load `fault-details-report` with `dispatch_mode = direct_browser`
- load `95598-repair-city-dispatch` with `dispatch_mode = agent_browser`
- match natural-language phrases like `导出故障明细` and `95598抢修市指监测`
- ignore missing/broken scene files without panicking

Example assertions to include:

```rust
assert_eq!(entry.scene_id, "fault-details-report");
assert_eq!(entry.dispatch_mode, DispatchMode::DirectBrowser);
assert_eq!(entry.tool_name(), "fault-details-report.collect_fault_details");
assert_eq!(entry.expected_domain, "__scene_fault_details__");
```

```rust
assert_eq!(matched.scene_id, "95598-repair-city-dispatch");
assert_eq!(matched.dispatch_mode, DispatchMode::AgentBrowser);
```

- [ ] **Step 2: Run the new registry tests to verify they fail**

Run:
```bash
cargo test scene_registry --test scene_registry_test -- --nocapture
```

Expected: FAIL because `src/runtime/scene_registry.rs` and the exported registry APIs do not exist yet.

- [ ] **Step 3: Implement the minimal scene registry module**

Create `src/runtime/scene_registry.rs` with:
- a small deserialized scene metadata struct for `scene.json`
- a `DispatchMode` enum
- a single runtime registry-entry struct combining scene metadata plus runtime policy
- first-slice hardcoded runtime policy for the two initial scenes
- helper methods like:

```rust
pub fn load_first_slice_scene_registry() -> Vec<SceneRegistryEntry>
pub fn match_scene_instruction(instruction: &str) -> Option<SceneRegistryEntry>
```

Use deterministic keyword/alias matching only. Do not add embeddings, fuzzy search, or generic scoring infrastructure beyond what the spec requires.
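The deterministic matching can be pictured with a small JS model (illustrative only; the real implementation is the Rust registry described above, and any keyword beyond the phrases named in this plan would be an assumption):

```javascript
// Illustrative model of first-slice deterministic matching; the real
// implementation is the Rust scene registry described above.
const REGISTRY = [
  { scene_id: 'fault-details-report', dispatch_mode: 'direct_browser',
    keywords: ['导出故障明细'] },
  { scene_id: '95598-repair-city-dispatch', dispatch_mode: 'agent_browser',
    keywords: ['95598抢修市指监测'] },
];

function matchSceneInstruction(instruction) {
  return REGISTRY.find((entry) =>
    entry.keywords.some((k) => instruction.includes(k))) || null;
}

const hit = matchSceneInstruction('请做95598抢修市指监测');
// hit.dispatch_mode === 'agent_browser'
const miss = matchSceneInstruction('总结一下今天的会议');
// miss === null: unrelated tasks fall through to normal routing
```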
- [ ] **Step 4: Export the registry from `src/runtime/mod.rs`**

Expose the new types/helpers needed by runtime and compat layers, for example:

```rust
mod scene_registry;

pub use scene_registry::{
    load_first_slice_scene_registry,
    match_scene_instruction,
    DispatchMode,
    SceneRegistryEntry,
};
```

- [ ] **Step 5: Run the registry tests to verify they pass**

Run:
```bash
cargo test scene_registry --test scene_registry_test -- --nocapture
```

Expected: PASS

- [ ] **Step 6: Commit Task 1**

```bash
git add src/runtime/scene_registry.rs src/runtime/mod.rs tests/scene_registry_test.rs
git commit -m "feat: add staged scene registry matching"
```
### Task 2: Verify Staged Skills Root Resolution

**Files:**
- Modify if needed: `src/compat/config_adapter.rs:94`
- Modify if needed: `src/compat/runtime.rs:152`
- Test: `tests/compat_config_test.rs`

- [ ] **Step 1: Write the targeted staged-root path test**

Add a focused test that proves an external configured `skill_staging` root resolves to its `skills` child and preserves current nested-skills behavior.

Add a test shape like:

```rust
let staged_root = root.join("external/skill_staging");
fs::create_dir_all(staged_root.join("skills")).unwrap();
fs::create_dir_all(staged_root.join("scenes")).unwrap();

let settings = DeepSeekSettings {
    api_key: "key".to_string(),
    base_url: "https://api.deepseek.com".to_string(),
    model: "deepseek-chat".to_string(),
    skills_dir: Some(staged_root.clone()),
};

assert_eq!(resolve_skills_dir(&root, &settings), staged_root.join("skills"));
```

- [ ] **Step 2: Run the focused config test and record the actual result**

Run:
```bash
cargo test --test compat_config_test resolve_skills_dir_ -- --nocapture
```

Expected: either
- PASS immediately, proving current path resolution already supports the staged-root contract, or
- FAIL with a concrete staged-root regression that justifies a minimal config fix.

- [ ] **Step 3: Only if the staged-root test fails, implement the narrowest config fix**

If the test fails, update `src/compat/config_adapter.rs` so configured external staged roots resolve to the staged skill package directory used by runtime skill loading. Keep the change narrow:
- preserve current behavior for normal `skills` roots
- add the smallest extra branch needed for the failing staged-root case
- do not create a broad path-discovery system

- [ ] **Step 4: Verify runtime alignment with the resolved staged skills root**

Confirm `src/compat/runtime.rs` still uses the resolved `skills_dir` as-is. If no runtime code change is needed after the test outcome, leave the file untouched and rely on test coverage.

- [ ] **Step 5: Run the focused config tests to verify they pass**

Run:
```bash
cargo test --test compat_config_test resolve_skills_dir_ -- --nocapture
```

Expected: PASS

- [ ] **Step 6: Commit Task 2**

```bash
git add src/compat/config_adapter.rs src/compat/runtime.rs tests/compat_config_test.rs
git commit -m "test: verify staged scene skill root resolution"
```
### Task 3: Inject Agent-Browser Scene Contract For 95598

**Files:**
- Modify: `src/runtime/engine.rs:135`
- Test: `tests/runtime_profile_test.rs`

- [ ] **Step 1: Write the failing instruction-contract tests**

Add focused tests proving that when the instruction matches `95598-repair-city-dispatch`, `RuntimeEngine::build_instruction(...)` includes a scene-specific browser contract requiring the tool `95598-repair-city-dispatch.collect_repair_orders` first.

Example assertion pattern:

```rust
let instruction = engine.build_instruction(
    "请做95598抢修市指监测",
    Some("https://example.invalid/dispatch"),
    Some("95598抢修-市指"),
    true,
);

assert!(instruction.contains("95598-repair-city-dispatch.collect_repair_orders"));
assert!(instruction.contains("browser workflow, not a text-only task"));
assert!(instruction.contains("generic browser probing only after"));
```

Also add a negative control showing unrelated tasks do not receive this scene contract.

- [ ] **Step 2: Run the focused runtime-profile tests to verify they fail**

Run:
```bash
cargo test --test runtime_profile_test 95598 -- --nocapture
```

Expected: FAIL because no scene-specific contract is injected yet.

- [ ] **Step 3: Implement minimal scene-aware prompt injection**

Update `src/runtime/engine.rs` to:
- query the new scene matcher
- when the matched scene is `agent_browser`, append a scene execution contract section
- preserve existing Zhihu prompt sections unchanged

Keep the contract explicit and narrow, for example:

```text
Scene execution contract:
- Matched scene: 95598-repair-city-dispatch
- Required tool: 95598-repair-city-dispatch.collect_repair_orders
- This is a browser workflow, not a text-only task.
- Business data must come from the matched browser-backed scene tool.
- Only use generic browser probing after the matched scene tool fails.
```

Do not add hard allowed-tool narrowing in this task; slice one only promises instruction-level enforcement.
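The injection shape can be sketched with a JS model of the intended Rust change (illustrative; the `tool_name` field name is an assumption consistent with Task 1's `tool_name()` helper):

```javascript
// JS model of the engine change: append the scene contract only for
// agent_browser scenes and leave every other instruction untouched.
function buildInstruction(base, matched) {
  if (!matched || matched.dispatch_mode !== 'agent_browser') return base;
  return base + '\n\nScene execution contract:\n' +
    '- Matched scene: ' + matched.scene_id + '\n' +
    '- Required tool: ' + matched.tool_name + '\n' +
    '- This is a browser workflow, not a text-only task.\n' +
    '- Only use generic browser probing after the matched scene tool fails.';
}

const out = buildInstruction('请做95598抢修市指监测', {
  scene_id: '95598-repair-city-dispatch',
  dispatch_mode: 'agent_browser',
  tool_name: '95598-repair-city-dispatch.collect_repair_orders',
});
// out contains the contract; unmatched input is returned unchanged
```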
- [ ] **Step 4: Run the focused runtime-profile tests to verify they pass**

Run:
```bash
cargo test --test runtime_profile_test 95598 -- --nocapture
```

Expected: PASS

- [ ] **Step 5: Commit Task 3**

```bash
git add src/runtime/engine.rs tests/runtime_profile_test.rs
git commit -m "feat: inject 95598 scene browser contract"
```
### Task 4: Add Direct Browser Route For Fault Details

**Files:**
- Modify: `src/compat/workflow_executor.rs:58`
- Modify: `src/compat/orchestration.rs:9`
- Modify: `src/compat/browser_script_skill_tool.rs:101`
- Test: `tests/browser_script_skill_tool_test.rs`
- Test: `tests/compat_runtime_test.rs`

- [ ] **Step 1: Write the failing route-detection tests**

Add focused tests that prove:
- natural language like `导出故障明细` is detected as a direct scene route
- primary orchestration is selected for that scene
- missing scene metadata leaves unrelated routing unchanged

Target the existing routing seams with test shapes like:

```rust
assert!(sgclaw::compat::orchestration::should_use_primary_orchestration(
    "导出故障明细",
    Some("https://example.invalid/fault"),
    Some("故障明细"),
));
```

and a focused route assertion using the new route enum variant.

- [ ] **Step 2: Run the focused route tests to verify they fail**

Run:
```bash
cargo test --test compat_runtime_test fault_details -- --nocapture
```

Expected: FAIL because no direct scene route exists yet.

- [ ] **Step 3: Write the failing browser-script helper tests**

Add focused tests in `tests/browser_script_skill_tool_test.rs` for the thinnest reusable helper needed by direct scene execution. The tests should prove that the helper:
- reads the packaged script from the skill root
- wraps args exactly like `BrowserScriptSkillTool`
- invokes browser `Eval`
- returns normalized serialized output
- fails clearly when required fields like `expected_domain` are missing

- [ ] **Step 4: Run the focused browser-script helper tests to verify they fail**
|
|
||||||
|
|
||||||
Run:
|
|
||||||
```bash
|
|
||||||
cargo test --test browser_script_skill_tool_test -- --nocapture
|
|
||||||
```
|
|
||||||
|
|
||||||
Expected: FAIL because the reusable helper does not exist yet.
|
|
||||||
|
|
||||||
- [ ] **Step 5: Implement the reusable browser-backed execution helper**
|
|
||||||
|
|
||||||
Update `src/compat/browser_script_skill_tool.rs` with the smallest reusable helper that direct scene execution can call while preserving the same `browser_script` semantics as normal skill execution. Keep it narrow:
|
|
||||||
- reuse the same script loading and wrapping rules
|
|
||||||
- require explicit `expected_domain`
|
|
||||||
- return normalized serialized output
|
|
||||||
- do not introduce a second browser-script execution model
|
|
||||||
|
|
||||||
- [ ] **Step 6: Implement the direct fault-details route on top of that helper**
|
|
||||||
|
|
||||||
Update `src/compat/workflow_executor.rs` to:
|
|
||||||
- introduce a new direct route variant for `fault-details-report`
|
|
||||||
- extend `detect_route(...)` to return it when the scene matcher says `direct_browser`
|
|
||||||
- build required args from scene runtime policy
|
|
||||||
- call the reusable browser-script execution helper
|
|
||||||
- return normalized serialized tool output
|
|
||||||
|
|
||||||
If required scene args cannot be derived safely, return a clear failure instead of guessing.
|
|
||||||
|
|
||||||
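The routing rules above can be sketched as one narrow enum plus a match. A minimal sketch under assumed names: the real `detect_route(...)` lives in `src/compat/workflow_executor.rs` and consults the scene matcher, while here the matcher's verdict arrives as plain arguments.

```rust
// Hypothetical sketch of the new direct route variant. `Route` and the
// argument shape are illustrative stand-ins for the workflow_executor types.
#[derive(Debug, PartialEq)]
enum Route {
    FaultDetailsReport, // new direct-browser route
    Agent,              // existing agent/compat path
}

fn detect_route(scene_id: Option<&str>, dispatch: Option<&str>) -> Route {
    match (scene_id, dispatch) {
        // Route directly only when the matched scene explicitly declares
        // direct_browser dispatch; everything else stays on the agent path.
        (Some("fault-details-report"), Some("direct_browser")) => Route::FaultDetailsReport,
        _ => Route::Agent,
    }
}
```

The important property is the fallthrough: anything that is not an explicit `direct_browser` match stays on the existing agent path, which is what keeps unrelated routing unchanged.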
- [ ] **Step 7: Wire primary orchestration to prefer the new direct scene route**

Update `src/compat/orchestration.rs` so `should_use_primary_orchestration(...)` and the direct execution branch treat the new `fault-details-report` route like the existing direct Zhihu routes.

- [ ] **Step 8: Run the focused direct-route and helper tests to verify they pass**

Run:
```bash
cargo test --test browser_script_skill_tool_test -- --nocapture && cargo test --test compat_runtime_test fault_details -- --nocapture
```

Expected: PASS

- [ ] **Step 9: Commit Task 4**

```bash
git add src/compat/browser_script_skill_tool.rs src/compat/workflow_executor.rs src/compat/orchestration.rs tests/browser_script_skill_tool_test.rs tests/compat_runtime_test.rs
git commit -m "feat: add direct fault-details scene routing"
```

### Task 5: Verify Tool Exposure, Browser-Surface Fallback, And Mixed Routing Together

**Files:**
- Modify if needed: `src/compat/runtime.rs:142`
- Test: `tests/compat_runtime_test.rs`
- Test: `tests/runtime_profile_test.rs`
- Test: `tests/scene_registry_test.rs`

- [ ] **Step 1: Write the failing integration-shape tests**

Add focused assertions that prove the mixed-mode design works together:
- staged browser-backed tool names are exposed
- `fault-details-report` uses direct routing
- `95598-repair-city-dispatch` stays in the agent path but gets scene-specific browser-first instruction injection
- browser-surface-disabled turns do not gain scene browser contracts
- browser-surface-disabled turns do not trigger `direct_browser` scene execution
- missing scene metadata preserves unchanged runtime behavior for unrelated tasks
- unrelated Zhihu behavior still works the same way

Use existing test seams instead of broad integration scaffolding.

- [ ] **Step 2: Run the focused mixed-routing tests to verify they fail**

Run:
```bash
cargo test --test scene_registry_test -- --nocapture && cargo test --test compat_runtime_test scene_ -- --nocapture && cargo test --test runtime_profile_test scene_ -- --nocapture
```

Expected: FAIL until the mixed-routing assertions are implemented.

- [ ] **Step 3: Make the minimum runtime adjustments needed**

Only if required by the tests, adjust `src/compat/runtime.rs` so the loaded staged skills from the resolved external root are visible in the same way as existing browser-backed skills. Keep the shape of `build_browser_script_skill_tools(...)` and runtime tool assembly intact.

- [ ] **Step 4: Run the focused mixed-routing tests to verify they pass**

Run:
```bash
cargo test --test scene_registry_test -- --nocapture && cargo test --test compat_runtime_test scene_ -- --nocapture && cargo test --test runtime_profile_test scene_ -- --nocapture
```

Expected: PASS

- [ ] **Step 5: Run the broader targeted verification sweep**

Run:
```bash
cargo test --test browser_script_skill_tool_test -- --nocapture && cargo test --test scene_registry_test -- --nocapture && cargo test --test compat_config_test resolve_skills_dir_ -- --nocapture && cargo test --test runtime_profile_test -- --nocapture && cargo test --test compat_runtime_test fault_details -- --nocapture
```

Expected: PASS

- [ ] **Step 6: Commit Task 5**

```bash
git add src/compat/runtime.rs tests/scene_registry_test.rs tests/compat_runtime_test.rs tests/runtime_profile_test.rs
git commit -m "feat: wire staged scene mixed routing"
```

### Task 6: Final Verification And Handoff

**Files:**
- Verify: `src/runtime/scene_registry.rs`
- Verify: `src/compat/config_adapter.rs`
- Verify: `src/runtime/engine.rs`
- Verify: `src/compat/workflow_executor.rs`
- Verify: `src/compat/orchestration.rs`
- Verify: `tests/scene_registry_test.rs`
- Verify: `tests/compat_config_test.rs`
- Verify: `tests/runtime_profile_test.rs`
- Verify: `tests/compat_runtime_test.rs`

- [ ] **Step 1: Run the full focused verification set**

Run:
```bash
cargo test --test scene_registry_test -- --nocapture && cargo test --test compat_config_test -- --nocapture && cargo test --test runtime_profile_test -- --nocapture && cargo test --test compat_runtime_test -- --nocapture
```

Expected: PASS

- [ ] **Step 2: If any test fails, fix only the minimal root cause and re-run the same command**

Do not broaden scope. Keep fixes limited to scene registry, path resolution, prompt injection, or direct routing.

- [ ] **Step 3: Review the resulting diff against the spec**

Manually verify:
- `fault-details-report` is direct-browser
- `95598-repair-city-dispatch` is agent-browser
- both still use browser-backed execution semantics
- no broad Zhihu refactor slipped in
- the new scene-routing abstraction stays registry-driven

- [ ] **Step 4: Commit the final verification pass**

```bash
git add src/runtime/scene_registry.rs src/runtime/mod.rs src/compat/config_adapter.rs src/runtime/engine.rs src/compat/workflow_executor.rs src/compat/orchestration.rs src/compat/runtime.rs tests/scene_registry_test.rs tests/compat_config_test.rs tests/runtime_profile_test.rs tests/compat_runtime_test.rs
git commit -m "test: verify scene skill runtime routing"
```

@@ -0,0 +1,281 @@

# Config-Owned Direct Skill Contract Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Validate the `directSubmitSkill` control surface early and prevent malformed direct-skill configs from entering the submit routing path, without changing the current happy-path direct execution behavior.

**Architecture:** Keep the existing direct-submit runtime and submit-task seam intact for valid configs. Move `directSubmitSkill` format validation into the normal `SgClawSettings` load path so malformed config fails before routing begins, while leaving valid-but-unresolvable `skill.tool` targets as direct runtime errors in the current direct path.

**Tech Stack:** Rust 2021, `serde` config parsing, current `BrowserMessage::SubmitTask` path, current direct skill runtime, Rust integration tests.

---

## Execution Context

- Follow @superpowers:test-driven-development for the Rust code changes in this plan.
- Follow @superpowers:verification-before-completion before claiming any task is done.
- Do **not** create a git worktree unless the user explicitly asks. This project prefers staying in the current checkout.
- Keep scope tight: this plan does **not** add per-skill dispatch metadata, docs changes, intent classification, or LLM routing changes.

## File Map

### Existing files to modify

- Modify: `src/config/settings.rs`
  - validate `directSubmitSkill` during config normalization
  - keep the stored field as `Option<String>` so the current direct runtime API stays stable
- Modify: `tests/compat_config_test.rs`
  - add a failing config-load regression for malformed `directSubmitSkill`
- Modify: `tests/agent_runtime_test.rs`
  - add a failing submit-path regression proving malformed config is rejected before direct routing begins

### Existing files to read but not broaden

- Reuse without redesign: `src/agent/mod.rs`
- Reuse without redesign: `src/compat/direct_skill_runtime.rs`
- Reuse without redesign: `docs/superpowers/specs/2026-04-09-config-owned-direct-skill-dispatch-design.md`

### No new files expected

This slice should fit in the existing config and tests surfaces only.

---

### Task 1: Validate `directSubmitSkill` Before Submit Routing

**Files:**
- Modify: `tests/compat_config_test.rs`
- Modify: `tests/agent_runtime_test.rs`
- Modify: `src/config/settings.rs`
- Read only: `src/agent/mod.rs`
- Read only: `src/compat/direct_skill_runtime.rs`

- [ ] **Step 1: Write the failing config test for malformed `directSubmitSkill`**

Add this focused test to `tests/compat_config_test.rs`:

```rust
#[test]
fn sgclaw_settings_reject_invalid_direct_submit_skill_format() {
    let root = std::env::temp_dir().join(format!(
        "sgclaw-invalid-direct-submit-skill-{}",
        Uuid::new_v4()
    ));
    fs::create_dir_all(&root).unwrap();
    let config_path = root.join("sgclaw_config.json");

    fs::write(
        &config_path,
        r#"{
            "providers": [],
            "skillsDir": "skill_lib",
            "directSubmitSkill": "fault-details-report"
        }"#,
    )
    .unwrap();

    let err = SgClawSettings::load(Some(config_path.as_path()))
        .expect_err("expected invalid directSubmitSkill format");
    let message = err.to_string();

    assert!(message.contains("directSubmitSkill"));
    assert!(message.contains("skill.tool"));
}
```

- [ ] **Step 2: Run the focused config test and verify it fails**

Run:

```bash
cargo test --test compat_config_test sgclaw_settings_reject_invalid_direct_submit_skill_format -- --nocapture
```

Expected: FAIL because the current config loader accepts the malformed string instead of rejecting it early.

- [ ] **Step 3: Write the failing agent regression for malformed config**

Add this focused test to `tests/agent_runtime_test.rs`:

```rust
#[test]
fn submit_task_rejects_invalid_direct_submit_skill_config_before_routing() {
    std::env::remove_var("DEEPSEEK_API_KEY");
    std::env::remove_var("DEEPSEEK_BASE_URL");
    std::env::remove_var("DEEPSEEK_MODEL");

    let skill_root = build_direct_runtime_skill_root();
    let workspace_root = std::env::temp_dir().join(format!(
        "sgclaw-invalid-direct-submit-workspace-{}",
        Uuid::new_v4()
    ));
    fs::create_dir_all(&workspace_root).unwrap();
    let config_path = workspace_root.join("sgclaw_config.json");
    fs::write(
        &config_path,
        serde_json::json!({
            "providers": [],
            "skillsDir": skill_root,
            "directSubmitSkill": "fault-details-report"
        })
        .to_string(),
    )
    .unwrap();

    let runtime_context = AgentRuntimeContext::new(Some(config_path), workspace_root);
    let transport = Arc::new(MockTransport::new(vec![]));
    let browser_tool = BrowserPipeTool::new(
        transport.clone(),
        direct_runtime_test_policy(),
        vec![1, 2, 3, 4, 5, 6, 7, 8],
    )
    .with_response_timeout(Duration::from_secs(1));

    handle_browser_message_with_context(
        transport.as_ref(),
        &browser_tool,
        &runtime_context,
        submit_fault_details_message(),
    )
    .unwrap();

    let sent = transport.sent_messages();
    assert!(matches!(
        sent.last(),
        Some(AgentMessage::TaskComplete { success, summary })
            if !success && summary.contains("skill.tool")
    ));
    assert!(direct_submit_mode_logs(&sent).is_empty());
    assert!(!sent.iter().any(|message| matches!(message, AgentMessage::Command { .. })));
}
```

- [ ] **Step 4: Run the focused agent test and verify it fails**

Run:

```bash
cargo test --test agent_runtime_test submit_task_rejects_invalid_direct_submit_skill_config_before_routing -- --nocapture
```

Expected: FAIL because the malformed config currently loads, enters the direct-submit branch, and emits `direct_skill_primary` before failing later.

- [ ] **Step 5: Implement the minimal config validation**

In `src/config/settings.rs`, add a small helper that validates the normalized `directSubmitSkill` string during `SgClawSettings::new(...)`.

Recommended implementation shape:

```rust
fn normalize_direct_submit_skill(raw: Option<String>) -> Result<Option<String>, ConfigError> {
    let value = normalize_optional_value(raw);
    let Some(value) = value.as_deref() else {
        return Ok(None);
    };

    let Some((skill_name, tool_name)) = value.split_once('.') else {
        return Err(ConfigError::InvalidValue(
            "directSubmitSkill",
            format!("must use skill.tool format, got {value}"),
        ));
    };

    if skill_name.trim().is_empty() || tool_name.trim().is_empty() {
        return Err(ConfigError::InvalidValue(
            "directSubmitSkill",
            format!("must use skill.tool format, got {value}"),
        ));
    }

    Ok(Some(value.to_string()))
}
```

Then use it here:

```rust
let direct_submit_skill = normalize_direct_submit_skill(direct_submit_skill)?;
```

Rules:
- do not change the public field type from `Option<String>`
- do not move parsing responsibility into `src/agent/mod.rs`
- do not redesign `src/compat/direct_skill_runtime.rs`
- keep valid-but-unresolvable `skill.tool` targets as runtime errors in the direct path

- [ ] **Step 6: Re-run the two focused tests and verify they pass**

Run:

```bash
cargo test --test compat_config_test sgclaw_settings_reject_invalid_direct_submit_skill_format -- --nocapture
cargo test --test agent_runtime_test submit_task_rejects_invalid_direct_submit_skill_config_before_routing -- --nocapture
```

Expected: PASS.

- [ ] **Step 7: Re-run the broader regression suites**

Run:

```bash
cargo test --test compat_config_test -- --nocapture
cargo test --test agent_runtime_test -- --nocapture
cargo test --test browser_script_skill_tool_test -- --nocapture
cargo build --bin sgclaw
```

Expected: PASS, including:
- the direct-submit happy path
- the existing no-LLM fallback behavior when `directSubmitSkill` is absent
- unchanged browser-script helper semantics
- clean binary build

---

## Verification Checklist

### Config validation

```bash
cargo test --test compat_config_test -- --nocapture
```

Expected: malformed `directSubmitSkill` is rejected early, while the existing direct-only config shape still loads.

### Submit-path behavior

```bash
cargo test --test agent_runtime_test -- --nocapture
```

Expected:
- malformed `directSubmitSkill` never reaches direct routing
- valid configured direct skill still succeeds without LLM config
- no direct skill configured still returns the existing no-LLM message

### Browser-script helper safety

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
```

Expected: current browser-script execution semantics remain unchanged.

### Build

```bash
cargo build --bin sgclaw
```

Expected: the main binary compiles cleanly.

---

## Notes For The Engineer

- The paired spec is `docs/superpowers/specs/2026-04-09-config-owned-direct-skill-dispatch-design.md`.
- Do **not** add sgClaw-specific dispatch metadata under `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging` in this slice.
- Do **not** turn this into a per-skill registry task yet. This plan only hardens the current config-owned bootstrap contract.
- Keep the current direct target example as `fault-details-report.collect_fault_details`; avoid hard-coding that name into new generic APIs.
- If you discover a need for broader policy routing (`direct_browser` / `llm_agent` by skill), stop and write a new spec/plan instead of expanding this one.

@@ -0,0 +1,520 @@

# Direct Skill Invocation Without LLM Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Let the current pipe submit-task flow accept natural-language input but directly invoke one fixed staged browser skill without calling any model, while reserving a clean switch back to LLM-based routing later.

**Architecture:** Keep the existing `BrowserMessage::SubmitTask` entrypoint and add one narrow pre-routing seam before the current compat/LLM chain. When a new config field points to a fixed direct-submit skill, sgClaw loads that skill package from the configured external skills root, finds the target `browser_script` tool, executes it through the existing browser-script wrapper, and returns the result directly. When the field is absent, the current behavior stays unchanged. This preserves a future path where each skill can later declare `direct_browser` or `llm_agent` dispatch without rewriting the submit pipeline again.

**Tech Stack:** Rust 2021, existing `BrowserPipeTool`, current submit-task agent entrypoint, current browser-script skill executor, current sgClaw JSON config loader, `zeroclaw` skill manifest loader.

---

## Recommended First Skill

Use `fault-details-report.collect_fault_details` from:
- `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/scenes/fault-details-report/scene.json`
- `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.toml`
- `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`

Why this one first:
- it is clearly a report/export skill
- it exposes exactly one browser-script tool: `collect_fault_details`
- it has the smallest contract surface (`period` only)
- its current JS is deterministic and simple, so the first slice can focus on plumbing instead of browser scraping complexity

## Scope Guardrails

- Do **not** redesign the existing submit-task protocol.
- Do **not** remove or rewrite the current LLM/compat path; leave it as the fallback/default path.
- Do **not** introduce generic NL intent routing in this slice; this is one fixed direct skill only.
- Do **not** modify `third_party/zeroclaw` skill manifest schema in phase 1.
- Do **not** add Excel export wiring in the first slice unless a test explicitly requires it.
- Do **not** invent a new browser-script execution model; reuse the existing wrapper semantics.

---

## File Map

### Existing files to modify

- Modify: `src/config/settings.rs`
  - add a minimal config field for one direct-submit skill name
- Modify: `src/agent/mod.rs`
  - add a narrow pre-routing branch before the current compat/LLM path
- Modify: `src/compat/browser_script_skill_tool.rs`
  - expose the smallest reusable helper for direct browser-script execution
- Modify: `src/compat/mod.rs` or the nearest module export surface
  - export the new narrow direct-skill runtime module if needed
- Modify: `tests/compat_config_test.rs`
  - add config coverage for the new direct-submit field
- Modify: `tests/browser_script_skill_tool_test.rs`
  - add coverage for the reusable direct-execution helper
- Modify: `tests/agent_runtime_test.rs`
  - prove submit-task can bypass the model and directly invoke the fixed skill

### New files to create

- Create: `src/compat/direct_skill_runtime.rs`
  - small runtime for loading one configured skill, resolving one configured tool, deriving minimal args, and executing it directly

### Files to reuse without changing behavior

- Reuse: `src/compat/runtime.rs`
- Reuse: `src/compat/orchestration.rs`
- Reuse: `src/compat/config_adapter.rs`
- Reuse: `third_party/zeroclaw/src/skills/mod.rs`

---

### Task 1: Add A Minimal Direct-Submit Skill Config Field

**Files:**
- Modify: `src/config/settings.rs`
- Modify: `tests/compat_config_test.rs`

- [ ] **Step 1: Write the failing config test for the new field**

In `tests/compat_config_test.rs`, add a focused config-load test proving the browser config file can declare one fixed direct-submit skill.

Test shape:

```rust
#[test]
fn sgclaw_settings_load_direct_submit_skill_from_browser_config() {
    let root = std::env::temp_dir().join(format!("sgclaw-direct-skill-{}", uuid::Uuid::new_v4()));
    std::fs::create_dir_all(&root).unwrap();
    let config_path = root.join("sgclaw_config.json");

    std::fs::write(
        &config_path,
        r#"{
            "apiKey": "sk-runtime",
            "baseUrl": "https://api.deepseek.com",
            "model": "deepseek-chat",
            "skillsDir": "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging",
            "directSubmitSkill": "fault-details-report.collect_fault_details"
        }"#,
    )
    .unwrap();

    let settings = sgclaw::config::SgClawSettings::load(Some(config_path.as_path()))
        .unwrap()
        .expect("expected sgclaw settings from config file");

    assert_eq!(
        settings.direct_submit_skill.as_deref(),
        Some("fault-details-report.collect_fault_details")
    );
}
```

- [ ] **Step 2: Run the focused config test and verify it fails**

Run:

```bash
cargo test --test compat_config_test sgclaw_settings_load_direct_submit_skill_from_browser_config -- --nocapture
```

Expected: FAIL because the config field does not exist yet.

- [ ] **Step 3: Implement the minimal config field**

In `src/config/settings.rs`, add:
- `direct_submit_skill: Option<String>` to `SgClawSettings`
- `direct_submit_skill: Option<String>` to `RawSgClawSettings`
- field normalization in `SgClawSettings::new(...)`

Recommended JSON key shape:

```rust
#[serde(rename = "directSubmitSkill", alias = "direct_submit_skill", default)]
direct_submit_skill: Option<String>,
```

Rules:
- trim empty values to `None`
- keep `DeepSeekSettings` unchanged for this slice unless a compile error proves it must mirror the field
- do not alter unrelated config semantics

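The "trim empty values to `None`" rule is ordinary optional-string normalization. A minimal sketch, with `normalize_optional` as a hypothetical helper name; the real normalization lives inside `SgClawSettings::new(...)` and may be spelled differently:

```rust
// Hypothetical sketch of optional-field normalization: whitespace-only and
// empty strings collapse to None, everything else is kept trimmed.
fn normalize_optional(raw: Option<String>) -> Option<String> {
    raw.and_then(|value| {
        let trimmed = value.trim();
        if trimmed.is_empty() {
            None
        } else {
            Some(trimmed.to_string())
        }
    })
}
```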
- [ ] **Step 4: Re-run the focused config test**

Run:

```bash
cargo test --test compat_config_test sgclaw_settings_load_direct_submit_skill_from_browser_config -- --nocapture
```

Expected: PASS.

- [ ] **Step 5: Re-run the broader config file tests**

Run:

```bash
cargo test --test compat_config_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 6: Commit Task 1**

```bash
git add src/config/settings.rs tests/compat_config_test.rs
git commit -m "feat: add direct submit skill config"
```

---

### Task 2: Extract A Reusable Browser-Script Direct Execution Helper

**Files:**
- Modify: `src/compat/browser_script_skill_tool.rs`
- Modify: `tests/browser_script_skill_tool_test.rs`

- [ ] **Step 1: Write the first failing helper test**

In `tests/browser_script_skill_tool_test.rs`, add a focused test proving direct code can execute a packaged browser script without constructing a full `Tool` object first.

Test shape:

```rust
#[tokio::test]
async fn execute_browser_script_tool_runs_packaged_script_with_expected_domain() {
    // build temp skill script
    // call the helper directly
    // assert Action::Eval was sent with wrapped args and normalized domain
}
```

Required assertions:
- the helper reads the packaged JS file
- it wraps args with `const args = ...`
- it normalizes URL-like `expected_domain`
- it returns the serialized payload string on success

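The wrapping and normalization behavior those assertions pin down can be sketched as below. Both function names and the exact host-extraction rule are assumptions for illustration; the real helper keeps whatever `BrowserScriptSkillTool` already does.

```rust
// Hypothetical sketch of the two behaviors the helper tests assert: the
// packaged script source gets a `const args = ...;` binding prepended, and a
// URL-like expected_domain is reduced to a bare host.
fn wrap_script(script_source: &str, args_json: &str) -> String {
    format!("const args = {args_json};\n{script_source}")
}

fn normalize_expected_domain(raw: &str) -> String {
    // "https://example.invalid/fault" -> "example.invalid"
    let without_scheme = raw.split("://").last().unwrap_or(raw);
    without_scheme
        .split('/')
        .next()
        .unwrap_or(without_scheme)
        .to_string()
}
```

Asserting on the wrapped prefix and the normalized host keeps the tests independent of the script body itself.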
- [ ] **Step 2: Run the helper test and verify it fails**

Run:

```bash
cargo test --test browser_script_skill_tool_test execute_browser_script_tool_runs_packaged_script_with_expected_domain -- --nocapture
```

Expected: FAIL because the helper does not exist yet.

- [ ] **Step 3: Add the second failing helper test for required-domain validation**

Add a focused failure-path test proving the helper rejects missing or invalid `expected_domain` before any browser command is sent.

- [ ] **Step 4: Run the validation test and verify it fails**

Run:

```bash
cargo test --test browser_script_skill_tool_test execute_browser_script_tool_rejects_missing_expected_domain -- --nocapture
```

Expected: FAIL because the helper does not exist yet.

- [ ] **Step 5: Implement the minimal reusable helper**

In `src/compat/browser_script_skill_tool.rs`, extract the smallest reusable function, for example:

```rust
pub async fn execute_browser_script_tool<T: Transport + 'static>(
    tool: &SkillTool,
    skill_root: &Path,
    browser_tool: BrowserPipeTool<T>,
    args: Value,
) -> anyhow::Result<ToolResult>
```

Rules:
- reuse the current path validation, script loading, wrapping, `Action::Eval`, and payload formatting logic already used by `BrowserScriptSkillTool::execute`
- do not change outward behavior of `BrowserScriptSkillTool`
- keep the helper narrow and browser-script-only
- [ ] **Step 6: Refactor `BrowserScriptSkillTool::execute` to call the helper**

Keep existing behavior and tests green while removing duplicate execution logic.

- [ ] **Step 7: Re-run the browser-script tests**

Run:

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 8: Commit Task 2**

```bash
git add src/compat/browser_script_skill_tool.rs tests/browser_script_skill_tool_test.rs
git commit -m "refactor: extract direct browser script execution helper"
```

---
### Task 3: Add A Narrow Direct Skill Runtime For One Fixed Skill

**Files:**
- Create: `src/compat/direct_skill_runtime.rs`
- Modify: `src/compat/mod.rs` or nearest module export point
- Reuse: `src/compat/config_adapter.rs`
- Reuse: `third_party/zeroclaw/src/skills/mod.rs`

- [ ] **Step 1: Write the first failing direct-runtime test**

Add a focused test in `tests/agent_runtime_test.rs` or a new narrow compat test proving code can resolve the configured external skills root, load `fault-details-report`, find `collect_fault_details`, and execute it directly.

Recommended shape:

```rust
#[test]
fn direct_skill_runtime_executes_fault_details_report_without_provider() {
    // config points at skill_staging root
    // direct_submit_skill points at fault-details-report.collect_fault_details
    // browser response returns report-artifact payload
    // assert no provider/http path is touched
}
```

- [ ] **Step 2: Run the focused direct-runtime test and verify it fails**

Run the narrowest test command for the new test.

Expected: FAIL because the direct runtime does not exist yet.
- [ ] **Step 3: Implement `src/compat/direct_skill_runtime.rs`**

Add a narrow runtime whose only responsibilities are to:
- resolve the configured skills dir with `resolve_skills_dir_from_sgclaw_settings(...)`
- load skills from that directory with `load_skills_from_directory(...)`
- parse the configured tool name into `skill_name` + `tool_name`
- find the matching skill and matching tool
- verify `tool.kind == "browser_script"`
- derive the minimal argument object
- call the new browser-script helper
- return the output string or a clear `PipeError`

Do **not** add generic routing, scenes, or model fallback here.
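The `skill_name` + `tool_name` parsing step above is small but worth pinning down. A minimal sketch, assuming the configured value uses a single `.` separator as in `fault-details-report.collect_fault_details` (the function name is illustrative):

```rust
/// Illustrative sketch: split "fault-details-report.collect_fault_details"
/// into ("fault-details-report", "collect_fault_details").
/// Returns None when the value has no dot or an empty half.
fn parse_direct_skill_tool(configured: &str) -> Option<(&str, &str)> {
    let (skill, tool) = configured.split_once('.')?;
    if skill.is_empty() || tool.is_empty() {
        return None;
    }
    Some((skill, tool))
}

fn main() {
    assert_eq!(
        parse_direct_skill_tool("fault-details-report.collect_fault_details"),
        Some(("fault-details-report", "collect_fault_details"))
    );
    assert_eq!(parse_direct_skill_tool("no-dot-here"), None);
    assert_eq!(parse_direct_skill_tool(".tool"), None);
}
```

Returning `None` for malformed values lets the runtime surface a clear `PipeError` instead of guessing.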
- [ ] **Step 4: Keep argument derivation intentionally minimal**

For the first slice, derive only:
- `expected_domain` from `page_url` when present, otherwise fail with a clear message
- `period` from the instruction using a narrow deterministic pattern such as `YYYY-MM`

If the period cannot be derived, return a concise error telling the user to provide it explicitly. Do not guess.
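The deterministic `YYYY-MM` derivation above can be a plain character scan with no regex dependency. A minimal std-only sketch (the function name is illustrative, and it intentionally returns `None` rather than guessing):

```rust
/// Illustrative sketch: find the first `YYYY-MM` substring in an instruction.
/// Returns None instead of guessing when no such pattern exists.
fn derive_period(instruction: &str) -> Option<String> {
    let bytes = instruction.as_bytes();
    // A match needs 7 bytes: 4 digits, '-', 2 digits (all ASCII).
    for start in 0..bytes.len().saturating_sub(6) {
        let window = &bytes[start..start + 7];
        let is_match = window[..4].iter().all(|b| b.is_ascii_digit())
            && window[4] == b'-'
            && window[5..].iter().all(|b| b.is_ascii_digit());
        if is_match {
            return Some(String::from_utf8_lossy(window).into_owned());
        }
    }
    None
}

fn main() {
    assert_eq!(
        derive_period("export the 2026-04 fault report"),
        Some("2026-04".to_string())
    );
    assert_eq!(derive_period("export last month's report"), None);
}
```

The `None` branch is where the runtime would return the "provide the period explicitly" error.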
- [ ] **Step 5: Re-run the focused direct-runtime test**

Run the same test command again.

Expected: PASS.

- [ ] **Step 6: Commit Task 3**

```bash
git add src/compat/direct_skill_runtime.rs src/compat/mod.rs tests/agent_runtime_test.rs
git commit -m "feat: add fixed direct skill runtime"
```

---
### Task 4: Insert The Pre-Routing Seam In Submit-Task Entry

**Files:**
- Modify: `src/agent/mod.rs`
- Modify: `tests/agent_runtime_test.rs`

- [ ] **Step 1: Write the first failing submit-path bypass test**

In `tests/agent_runtime_test.rs`, add a focused regression proving that when `directSubmitSkill` is configured, `BrowserMessage::SubmitTask` can succeed without any model/provider being configured.

Test shape:

```rust
#[test]
fn submit_task_uses_direct_skill_mode_without_llm_configuration() {
    // config contains skillsDir + directSubmitSkill, but no reachable provider
    // natural-language instruction includes period and page_url
    // expect TaskComplete success from direct skill result
}
```

Required assertions:
- the task succeeds even if the provider would be unavailable
- the output contains the report artifact payload
- no summary like `未配置大语言模型` ("no large language model configured") appears

- [ ] **Step 2: Run the bypass test and verify it fails**

Run:

```bash
cargo test --test agent_runtime_test submit_task_uses_direct_skill_mode_without_llm_configuration -- --nocapture
```

Expected: FAIL because submit-task still goes into the current LLM-oriented path.
- [ ] **Step 3: Add the second failing priority test**

Add one focused test proving the direct-submit branch runs before the existing compat/LLM branch.

The easiest assertion is that the mode log becomes something new like:
- `direct_skill_primary`

and that the normal mode logs do not appear for that turn.

- [ ] **Step 4: Run the priority test and verify it fails**

Run the narrow test command for the new test.

Expected: FAIL because the mode does not exist yet.

- [ ] **Step 5: Add the narrow pre-routing branch in `src/agent/mod.rs`**

In `handle_browser_message_with_context(...)`, after config load/logging and before the existing `should_use_primary_orchestration(...)` / `compat::runtime` path:
- check `settings.direct_submit_skill`
- if present, emit the mode log `direct_skill_primary`
- call the new direct runtime
- send `TaskComplete` and return immediately

Rules:
- if `direct_submit_skill` is absent, keep existing behavior byte-for-byte where possible
- do not modify `compat::runtime.rs` or `compat::orchestration.rs` for this slice
- do not silently fall back to the LLM when direct execution fails; return the direct error clearly so the first slice is debuggable
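The branch ordering above can be sketched with stand-in types. Everything below (the `Settings` struct, `Mode` enum, and `choose_mode` function) is illustrative, not the project's actual API:

```rust
/// Illustrative stand-ins for the real sgClaw settings type.
struct Settings {
    direct_submit_skill: Option<String>,
}

#[derive(Debug, PartialEq)]
enum Mode {
    DirectSkillPrimary,
    CompatLlm,
}

/// Sketch of the pre-routing seam: the direct-skill check runs first,
/// and only its absence falls through to the existing compat/LLM path.
fn choose_mode(settings: &Settings) -> Mode {
    if settings.direct_submit_skill.is_some() {
        // Here the real code would emit the `direct_skill_primary` mode log,
        // call the direct runtime, send TaskComplete, and return.
        return Mode::DirectSkillPrimary;
    }
    // Unchanged path: should_use_primary_orchestration(...) / compat runtime.
    Mode::CompatLlm
}

fn main() {
    let configured = Settings {
        direct_submit_skill: Some("fault-details-report.collect_fault_details".into()),
    };
    let unconfigured = Settings { direct_submit_skill: None };
    assert_eq!(choose_mode(&configured), Mode::DirectSkillPrimary);
    assert_eq!(choose_mode(&unconfigured), Mode::CompatLlm);
}
```

The early `return` is the whole point of the seam: nothing after it changes when the field is absent.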
- [ ] **Step 6: Re-run the focused submit-path tests**

Run:

```bash
cargo test --test agent_runtime_test submit_task_uses_direct_skill_mode_without_llm_configuration -- --nocapture
cargo test --test agent_runtime_test direct_skill_mode_logs_direct_skill_primary -- --nocapture
```

Expected: PASS.

- [ ] **Step 7: Re-run existing no-LLM submit regression coverage**

Run:

```bash
cargo test --test agent_runtime_test -- --nocapture
```

Expected: PASS, including existing cases where no direct skill is configured and the old no-LLM failure still applies.

- [ ] **Step 8: Commit Task 4**

```bash
git add src/agent/mod.rs tests/agent_runtime_test.rs
git commit -m "feat: route submit tasks through fixed direct skill mode"
```

---
### Task 5: Lock The Future Migration Seam Without Implementing LLM Dispatch Yet

**Files:**
- Modify only if needed: `src/config/settings.rs`
- Modify only if needed: `src/compat/direct_skill_runtime.rs`
- Reuse: docs/plan only, unless code needs one tiny naming fix

- [ ] **Step 1: Keep the config naming compatible with future per-skill dispatch**

Document and preserve this future meaning in code naming:
- current field: one fixed direct skill for submit-task bootstrap
- future model: each skill can declare a dispatch mode such as `direct_browser` or `llm_agent`

Prefer neutral names in helper code like:
- `direct skill mode`
- `direct submit skill`

Avoid hard-coding `fault_details` into generic APIs.
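The future per-skill model described above could eventually become a small dispatch enum. A hedged sketch of where the naming is headed; nothing below exists in the codebase yet, and the enum and parser are illustrative only:

```rust
/// Illustrative future shape: each skill declares how it should be dispatched.
#[derive(Debug, PartialEq)]
enum DispatchMode {
    /// Execute the packaged browser script directly, no model involved.
    DirectBrowser,
    /// Hand the task to the LLM agent loop.
    LlmAgent,
}

/// Sketch of parsing a dispatch-mode string from future skill metadata.
fn parse_dispatch_mode(raw: &str) -> Option<DispatchMode> {
    match raw {
        "direct_browser" => Some(DispatchMode::DirectBrowser),
        "llm_agent" => Some(DispatchMode::LlmAgent),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_dispatch_mode("direct_browser"), Some(DispatchMode::DirectBrowser));
    assert_eq!(parse_dispatch_mode("llm_agent"), Some(DispatchMode::LlmAgent));
    assert_eq!(parse_dispatch_mode("unknown"), None);
}
```

Keeping today's config field semantically equal to "one skill, `DirectBrowser`" makes this migration a rename, not a redesign.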
- [ ] **Step 2: Add one small negative test for fallback behavior**

Add a focused test proving that when `directSubmitSkill` is not configured, submit-task still behaves exactly as before and can still return the existing no-LLM message.

If an existing test already proves this, keep it and do not add another.

- [ ] **Step 3: Re-run the focused end-to-end verification set**

Run:

```bash
cargo test --test compat_config_test -- --nocapture
cargo test --test browser_script_skill_tool_test -- --nocapture
cargo test --test agent_runtime_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 4: Build the main binary**

Run:

```bash
cargo build --bin sgclaw
```

Expected: PASS.

- [ ] **Step 5: Commit Task 5**

```bash
git add src/config/settings.rs src/compat/direct_skill_runtime.rs src/compat/browser_script_skill_tool.rs src/agent/mod.rs tests/compat_config_test.rs tests/browser_script_skill_tool_test.rs tests/agent_runtime_test.rs
git commit -m "test: verify fixed direct skill submit path"
```

---
## Verification Checklist

### Config loading

```bash
cargo test --test compat_config_test -- --nocapture
```

Expected: `directSubmitSkill` loads correctly and existing config behavior remains intact.

### Browser-script helper

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
```

Expected: the direct helper preserves the existing browser-script execution semantics.

### Submit-path bypass

```bash
cargo test --test agent_runtime_test -- --nocapture
```

Expected: the configured direct skill bypasses the model path, while unconfigured submit-task behavior stays unchanged.

### Build

```bash
cargo build --bin sgclaw
```

Expected: the binary compiles cleanly.

---
## Notes For The Engineer

- The key to keeping this slice small is to avoid changing `compat::runtime.rs` and `compat::orchestration.rs`; they remain the future LLM path.
- `fault-details-report.collect_fault_details` is only the bootstrap skill. The plumbing must stay generic enough that the configured tool name can later point to another staged browser skill.
- Phase 1 should not add per-skill dispatch metadata to the external skill manifests yet. Keep that decision in sgClaw config first; move it into skill metadata only after the direct path is proven useful.
- Once the intranet model is ready, the clean next step is to add a dispatch policy layer that chooses between `direct_browser` and `llm_agent` before the current compat path is entered, reusing this same pre-routing seam.
# WS Branch Scene Cleanup Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Strip `feature/claw-ws` back to websocket plus Zhihu execution only by removing staged scene-skill routing, `skill_staging`-aware loading, and array-style `skillsDir` config behavior from this branch.

**Architecture:** Treat `feature/claw-ws` as a transport-focused branch, not a business-scene branch. Keep the browser websocket/callback submit path and the existing Zhihu direct workflows, but delete the fault-details / `95598` scene registry, scene-specific prompt injection, staged scene directory expansion, and scene-only docs/tests so the branch stays small and merges cleanly after the real scene implementation lands on `main`.

**Tech Stack:** Rust 2021, existing sgClaw compat/runtime/orchestration stack, websocket browser backend, callback-host service path, existing `cargo test` suite.

---

## Preconditions

- Execute this plan **only after** `main` already contains the desired clean scene-skill implementation.
- Run it on `feature/claw-ws`, not on `main`.
- Keep websocket and Zhihu behavior intact; this plan is cleanup, not a redesign.
- Keep `docs/_tmp_sgbrowser_ws_api_doc.txt`; it remains the browser integration contract for this branch.
## Scope Guardrails

- Do **not** change the working Zhihu websocket flow in `tests/agent_runtime_test.rs`.
- Do **not** remove `src/browser/ws_backend.rs`, `src/service/server.rs`, or Zhihu routes from `src/compat/workflow_executor.rs`.
- Do **not** add a replacement scene abstraction on this branch.
- Do **not** keep partial scene plumbing “for future use”; delete it completely if it is scene-only.
- Do **not** keep array-style `skillsDir` tests or docs on this branch once the single-path cleanup is complete.

---
## File Map

### Delete

- `src/runtime/scene_registry.rs`
  - staged scene registry, hard-coded `skill_staging` scene root, scene matching helpers
- `tests/scene_registry_test.rs`
  - scene-registry-specific coverage that should disappear with the feature
- `docs/superpowers/specs/2026-04-06-scene-skill-runtime-routing-design.md`
  - scene-routing design doc that no longer belongs on the ws-only branch
- `docs/superpowers/plans/2026-04-06-scene-skill-runtime-routing-plan.md`
  - scene-routing implementation plan that no longer belongs on the ws-only branch

### Modify

- `src/runtime/mod.rs`
  - stop exporting deleted scene registry APIs
- `src/runtime/engine.rs`
  - remove scene-contract prompt injection and staged scene skill loading
- `src/compat/workflow_executor.rs`
  - remove `FaultDetailsReport` route detection/execution while keeping Zhihu routes
- `src/compat/orchestration.rs`
  - keep direct Zhihu orchestration only; remove scene-driven primary routing triggers
- `src/config/settings.rs`
  - collapse `skillsDir` config handling back to single-path semantics
- `src/compat/config_adapter.rs`
  - remove scene-specific skills-dir helpers and keep one resolved skills dir
- `src/compat/runtime.rs`
  - stop carrying scene-expanded skills dirs through the compat runtime
- `src/agent/task_runner.rs`
  - update runtime logging and runtime calls to the single skills-dir contract
- `tests/compat_runtime_test.rs`
  - remove fault-details / `95598` assertions and keep Zhihu/direct-route coverage
- `tests/runtime_profile_test.rs`
  - remove `95598` scene-contract expectations and keep normal browser-profile coverage
- `tests/compat_config_test.rs`
  - remove scene-dir / array-config coverage and add single-path cleanup coverage
- `tests/agent_runtime_test.rs`
  - only extend if one extra Zhihu keep-path regression is needed after the config cleanup

### Keep As-Is Unless A Signature Change Forces A Tiny Edit

- `src/browser/ws_backend.rs`
- `src/browser/callback_backend.rs`
- `src/browser/callback_host.rs`
- `src/service/server.rs`
- `src/agent/mod.rs`
- `tests/browser_ws_backend_test.rs`
- `tests/service_ws_session_test.rs`
- `tests/task_runner_test.rs`

---
### Task 1: Lock The Cleanup Contract In Failing Tests

**Files:**
- Modify: `tests/compat_runtime_test.rs`
- Modify: `tests/runtime_profile_test.rs`
- Modify: `tests/compat_config_test.rs`
- Reuse: `tests/agent_runtime_test.rs`

- [ ] **Step 1: Add the first failing route-removal test**

In `tests/compat_runtime_test.rs`, add a focused assertion proving the ws branch no longer recognizes the fault-details scene as a direct route:

```rust
#[test]
fn ws_cleanup_no_longer_detects_fault_details_scene_route() {
    use sgclaw::compat::workflow_executor::detect_route;

    assert_eq!(
        detect_route(
            "导出故障明细", // "export the fault details"
            Some("https://example.invalid/workbench"),
            Some("业务台账"), // "business ledger"
        ),
        None,
    );
}
```

- [ ] **Step 2: Run the focused route test and verify it fails**

Run:

```bash
cargo test --test compat_runtime_test ws_cleanup_no_longer_detects_fault_details_scene_route -- --nocapture
```

Expected: FAIL because `FaultDetailsReport` is still detected today.
- [ ] **Step 3: Add the second failing orchestration-gate test**

In `tests/compat_runtime_test.rs`, add one focused assertion proving scene keywords no longer open the primary direct-orchestration path:

```rust
#[test]
fn ws_cleanup_scene_keywords_do_not_trigger_primary_orchestration() {
    assert!(!sgclaw::compat::orchestration::should_use_primary_orchestration(
        "请处理95598抢修市指监测", // "handle the 95598 repair city-dispatch monitor"
        Some("https://95598.example.invalid/dispatch"),
        Some("95598抢修市指监测"),
    ));
}
```

- [ ] **Step 4: Run the orchestration-gate test and verify it fails**

Run:

```bash
cargo test --test compat_runtime_test ws_cleanup_scene_keywords_do_not_trigger_primary_orchestration -- --nocapture
```

Expected: FAIL because the scene matcher still feeds primary orchestration today.
- [ ] **Step 5: Add the third failing runtime-instruction test**

In `tests/runtime_profile_test.rs`, add a focused negative assertion proving browser-attached turns no longer receive the `95598` scene execution contract:

```rust
#[test]
fn ws_cleanup_browser_profile_does_not_inject_95598_scene_contract() {
    let engine = RuntimeEngine::new(RuntimeProfile::BrowserAttached);
    // Instruction: "handle the 95598-repair-city-dispatch scene, review the
    // repair city-dispatch orders and summarize the current queue".
    let instruction = engine.build_instruction(
        "请处理95598-repair-city-dispatch场景,查看抢修市指派单并汇总当前队列",
        Some("https://95598.example.invalid/dispatch"),
        Some("95598抢修市指监测"),
        true,
    );

    assert!(!instruction.contains("95598-repair-city-dispatch.collect_repair_orders"));
}
```

- [ ] **Step 6: Run the runtime-profile test and verify it fails**

Run:

```bash
cargo test --test runtime_profile_test ws_cleanup_browser_profile_does_not_inject_95598_scene_contract -- --nocapture
```

Expected: FAIL because `src/runtime/engine.rs` still injects the scene contract today.
- [ ] **Step 7: Add the fourth failing config-shape test**

In `tests/compat_config_test.rs`, add one focused assertion proving ws cleanup goes back to a single configured skills path and no longer accepts array-style `skillsDir` JSON:

```rust
#[test]
fn ws_cleanup_rejects_array_style_skills_dir_config() {
    let root = std::env::temp_dir().join(format!("sgclaw-config-{}", uuid::Uuid::new_v4()));
    std::fs::create_dir_all(&root).unwrap();
    let config_path = root.join("sgclaw_config.json");
    std::fs::write(
        &config_path,
        r#"{
            "apiKey": "sk-test",
            "baseUrl": "https://api.deepseek.com",
            "model": "deepseek-chat",
            "skillsDir": ["skill_lib", "skill_staging"]
        }"#,
    )
    .unwrap();

    assert!(sgclaw::config::SgClawSettings::load(Some(config_path.as_path())).is_err());
}
```

- [ ] **Step 8: Run the config-shape test and verify it fails**

Run:

```bash
cargo test --test compat_config_test ws_cleanup_rejects_array_style_skills_dir_config -- --nocapture
```

Expected: FAIL because the current parser still accepts string-or-array `skillsDir` input.
- [ ] **Step 9: Re-run the existing Zhihu keep-path test as a safety baseline**

Run:

```bash
cargo test --test agent_runtime_test production_submit_task_routes_zhihu_through_ws_backend_without_helper_bootstrap -- --nocapture
```

Expected: PASS, proving the behavior we want to keep is already covered before deletion starts.

---
### Task 2: Remove Scene Registry, Scene Prompt Injection, And Fault-Details Routing

**Files:**
- Delete: `src/runtime/scene_registry.rs`
- Modify: `src/runtime/mod.rs`
- Modify: `src/runtime/engine.rs`
- Modify: `src/compat/workflow_executor.rs`
- Modify: `src/compat/orchestration.rs`
- Modify: `tests/compat_runtime_test.rs`
- Modify: `tests/runtime_profile_test.rs`
- Delete: `tests/scene_registry_test.rs`

- [ ] **Step 1: Remove the runtime scene module export surface**

Update `src/runtime/mod.rs` so it no longer declares or re-exports scene registry items.

Target shape:

```rust
mod engine;
mod profile;
mod tool_policy;

pub use engine::{
    is_zhihu_hotlist_task,
    is_zhihu_write_task,
    task_requests_zhihu_article_publish,
    RuntimeEngine,
};
pub use profile::RuntimeProfile;
pub use tool_policy::ToolPolicy;
```
- [ ] **Step 2: Delete `src/runtime/scene_registry.rs`**

Remove the file entirely. Do not leave a stub module or comments about future scene support.

- [ ] **Step 3: Remove scene-aware prompt injection from `src/runtime/engine.rs`**

Delete:
- the `resolve_scene_skills_dir_path` import
- the `DispatchMode` / `match_scene_instruction` imports
- `REPAIR_CITY_DISPATCH_EXECUTION_PROMPT`
- `build_scene_execution_contract(...)`
- the `if let Some(scene_contract) = ...` block inside `RuntimeEngine::build_instruction(...)`
- staged scene directory loading inside `load_runtime_skills(...)`

The resulting instruction assembly should keep:
- the browser tool contract
- the Zhihu hotlist/export prompts
- the Zhihu publish guard
- the page context

Do **not** change the Zhihu prompt text.
- [ ] **Step 4: Remove the fault-details route from `src/compat/workflow_executor.rs`**

Shrink `WorkflowRoute` back to the Zhihu-only variants:

```rust
pub enum WorkflowRoute {
    ZhihuHotlistExportXlsx,
    ZhihuHotlistScreen,
    ZhihuArticleEntry,
    ZhihuArticleDraft,
    ZhihuArticlePublish,
    ZhihuArticleAutoPublishGenerated,
}
```

Delete:
- `FAULT_DETAILS_SCENE_ID`
- the scene check at the top of `detect_route(...)`
- `WorkflowRoute::FaultDetailsReport`
- `execute_fault_details_route(...)`
- any scene-only helpers used only by that path

Keep the Zhihu route order unchanged.
- [ ] **Step 5: Simplify `src/compat/orchestration.rs` to Zhihu-only direct routing**

After the fault-details route is gone, keep `should_use_primary_orchestration(...)` and the two execute functions focused on:
- Zhihu direct routes detected by `detect_route(...)`
- the existing Zhihu export/dashboard fallback behavior

Do not add new conditions.

- [ ] **Step 6: Remove scene-only tests and replace them with cleanup assertions**

In `tests/compat_runtime_test.rs` and `tests/runtime_profile_test.rs`:
- delete the `fault-details` assertions that require the old route to exist
- delete the `95598` scene-contract assertions that require the old prompt injection to exist
- keep the new negative cleanup tests from Task 1
- keep the existing Zhihu assertions intact

Delete `tests/scene_registry_test.rs` completely.
- [ ] **Step 7: Run the focused cleanup tests**

Run:

```bash
cargo test --test compat_runtime_test ws_cleanup_no_longer_detects_fault_details_scene_route -- --nocapture
cargo test --test compat_runtime_test ws_cleanup_scene_keywords_do_not_trigger_primary_orchestration -- --nocapture
cargo test --test runtime_profile_test ws_cleanup_browser_profile_does_not_inject_95598_scene_contract -- --nocapture
```

Expected: PASS.

- [ ] **Step 8: Re-run the focused Zhihu runtime tests**

Run:

```bash
cargo test --test compat_runtime_test zhihu_ -- --nocapture
```

Expected: PASS, proving the Zhihu direct routes still work after the scene deletion.

- [ ] **Step 9: Commit Task 2**

```bash
git add src/runtime/mod.rs src/runtime/engine.rs src/compat/workflow_executor.rs src/compat/orchestration.rs tests/compat_runtime_test.rs tests/runtime_profile_test.rs
git rm src/runtime/scene_registry.rs tests/scene_registry_test.rs
git commit -m "refactor: remove scene routing from ws branch"
```

---
### Task 3: Collapse `skillsDir` Back To Single-Path Semantics

**Files:**
- Modify: `src/config/settings.rs`
- Modify: `src/compat/config_adapter.rs`
- Modify: `src/compat/runtime.rs`
- Modify: `src/agent/task_runner.rs`
- Modify if needed: `tests/agent_runtime_test.rs`
- Modify: `tests/compat_config_test.rs`

- [ ] **Step 1: Change config parsing to a single configured skills path**

In `src/config/settings.rs`, replace the string-or-array parser with a single optional string field.

Target shape:

```rust
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct DeepSeekSettings {
    pub api_key: String,
    pub base_url: String,
    pub model: String,
    pub skills_dir: Option<PathBuf>,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SgClawSettings {
    // ...
    pub skills_dir: Option<PathBuf>,
    // ...
}
```

And in `RawSgClawSettings`:

```rust
#[serde(rename = "skillsDir", alias = "skills_dir", default)]
skills_dir: Option<String>,
```

Delete `deserialize_skills_dirs(...)` entirely.
- [ ] **Step 2: Keep relative-path resolution, but only for one path**

Replace `resolve_configured_skills_dirs(...) -> Vec<PathBuf>` with a single-path helper such as:

```rust
fn resolve_configured_skills_dir(raw: Option<String>, config_dir: &Path) -> Option<PathBuf> {
    raw.map(|value| value.trim().to_string())
        .filter(|value| !value.is_empty())
        .map(PathBuf::from)
        .map(|path| if path.is_absolute() { path } else { config_dir.join(path) })
}
```
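The single-path helper above has three cases worth checking: absolute paths pass through, relative paths join onto the config directory, and blank input drops to `None`. A self-contained exercise of the same logic (the helper is reproduced standalone here so it runs without the surrounding config types):

```rust
use std::path::{Path, PathBuf};

// Reproduced from the plan's target shape so it can run standalone.
fn resolve_configured_skills_dir(raw: Option<String>, config_dir: &Path) -> Option<PathBuf> {
    raw.map(|value| value.trim().to_string())
        .filter(|value| !value.is_empty())
        .map(PathBuf::from)
        .map(|path| if path.is_absolute() { path } else { config_dir.join(path) })
}

fn main() {
    let config_dir = Path::new("/etc/sgclaw");

    // Relative paths resolve against the config file's directory.
    assert_eq!(
        resolve_configured_skills_dir(Some("skill_lib".into()), config_dir),
        Some(PathBuf::from("/etc/sgclaw/skill_lib"))
    );
    // Absolute paths pass through untouched.
    assert_eq!(
        resolve_configured_skills_dir(Some("/opt/skills".into()), config_dir),
        Some(PathBuf::from("/opt/skills"))
    );
    // Blank or missing input yields None rather than a bogus path.
    assert_eq!(resolve_configured_skills_dir(Some("   ".into()), config_dir), None);
    assert_eq!(resolve_configured_skills_dir(None, config_dir), None);
}
```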
- [ ] **Step 3: Collapse compat config adapter back to one resolved skills dir**
|
||||||
|
|
||||||
|
In `src/compat/config_adapter.rs`:
|
||||||
|
- keep `zeroclaw_default_skills_dir(...)`
|
||||||
|
- change `resolve_skills_dir(...)` and `resolve_skills_dir_from_sgclaw_settings(...)` to return a single `PathBuf`
|
||||||
|
- delete `resolve_scene_skills_dir_from_sgclaw_settings(...)`
|
||||||
|
- delete `resolve_scene_skills_dir_path(...)`
|
||||||
|
- delete any helper branches that append `skill_staging/skills`
|
||||||
|
|
||||||
|
Recommended shape:
|
||||||
|
|
||||||
|
```rust
|
||||||
|
pub fn resolve_skills_dir_from_sgclaw_settings(
|
||||||
|
workspace_root: &Path,
|
||||||
|
settings: &SgClawSettings,
|
||||||
|
) -> PathBuf {
|
||||||
|
settings
|
||||||
|
.skills_dir
|
||||||
|
.as_ref()
|
||||||
|
.map(|dir| normalize_configured_skills_dir(dir))
|
||||||
|
.unwrap_or_else(|| zeroclaw_default_skills_dir(workspace_root))
|
||||||
|
}
|
||||||
|
```
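
The recommended shape leans on a `normalize_configured_skills_dir` helper that this plan does not spell out. The Step 5 test expects a configured `skill_lib` root to resolve to `skill_lib/skills`, so one plausible sketch, assuming the helper simply appends the `skills` subdirectory when it is missing, is:

```rust
use std::path::{Path, PathBuf};

// Hypothetical normalization: configured roots point at a skill library,
// and the actual skill collection is assumed to live under `skills/`.
fn normalize_configured_skills_dir(dir: &Path) -> PathBuf {
    if dir.ends_with("skills") {
        dir.to_path_buf()
    } else {
        dir.join("skills")
    }
}

fn main() {
    // A bare library root gains the `skills` suffix.
    assert_eq!(
        normalize_configured_skills_dir(Path::new("/ws/skill_lib")),
        PathBuf::from("/ws/skill_lib/skills"),
    );
    // An already-normalized path is left alone.
    assert_eq!(
        normalize_configured_skills_dir(Path::new("/ws/skill_lib/skills")),
        PathBuf::from("/ws/skill_lib/skills"),
    );
}
```

The real helper may differ; whatever it does should match the Step 5 expectation below.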

- [ ] **Step 4: Update runtime callers to the single-path contract**

In `src/compat/runtime.rs` and `src/agent/task_runner.rs`:
- stop passing vectors of skills dirs around
- update logging from `skills dirs resolved to [...]` to a single-path message such as `skills dir resolved to ...`
- keep the rest of the runtime behavior unchanged

In `src/runtime/engine.rs`, if the method still needs a collection internally, convert the one path at the call site instead of preserving public multi-root plumbing.
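
One way to do that call-site conversion without reintroducing multi-root plumbing is to wrap the single path in a one-element slice. The `Engine`/`load_skills` names here are illustrative stand-ins, not the real signatures in `src/runtime/engine.rs`:

```rust
use std::path::PathBuf;

// Illustrative stand-in for an engine method that still iterates a slice
// of skill roots internally.
struct Engine;

impl Engine {
    fn load_skills(&self, roots: &[PathBuf]) -> usize {
        // Stand-in for the real loading loop over skill roots.
        roots.len()
    }
}

fn main() {
    let skills_dir = PathBuf::from("/ws/skills");
    // Wrap the single configured path at the call site, so the public
    // contract stays single-path even if the engine wants a slice.
    let loaded = Engine.load_skills(std::slice::from_ref(&skills_dir));
    assert_eq!(loaded, 1);
}
```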

- [ ] **Step 5: Replace config tests with single-path cleanup coverage**

In `tests/compat_config_test.rs`:
- keep single-string `skillsDir` resolution tests
- remove `resolve_scene_skills_dir_path_*` coverage
- remove array-acceptance expectations
- keep the new rejecting-array test from Task 1

Add one focused positive test like:

```rust
#[test]
fn ws_cleanup_resolves_single_configured_skills_dir() {
    let root = std::env::temp_dir().join(format!("sgclaw-skills-{}", uuid::Uuid::new_v4()));
    std::fs::create_dir_all(root.join("skill_lib/skills")).unwrap();

    let settings = DeepSeekSettings {
        api_key: "key".to_string(),
        base_url: "https://api.deepseek.com".to_string(),
        model: "deepseek-chat".to_string(),
        skills_dir: Some(root.join("skill_lib")),
    };

    assert_eq!(
        resolve_skills_dir(&root, &settings),
        root.join("skill_lib/skills"),
    );
}
```

- [ ] **Step 6: Run the focused config tests**

Run:

```bash
cargo test --test compat_config_test ws_cleanup_ -- --nocapture
```

Expected: PASS.

- [ ] **Step 7: Re-run the Zhihu websocket keep-path test**

Run:

```bash
cargo test --test agent_runtime_test production_submit_task_routes_zhihu_through_ws_backend_without_helper_bootstrap -- --nocapture
```

Expected: PASS.

- [ ] **Step 8: Commit Task 3**

```bash
git add src/config/settings.rs src/compat/config_adapter.rs src/compat/runtime.rs src/agent/task_runner.rs tests/compat_config_test.rs tests/agent_runtime_test.rs
git commit -m "refactor: restore single skills dir on ws branch"
```

---

### Task 4: Remove Scene-Only Docs And Residual Test References

**Files:**
- Delete: `docs/superpowers/specs/2026-04-06-scene-skill-runtime-routing-design.md`
- Delete: `docs/superpowers/plans/2026-04-06-scene-skill-runtime-routing-plan.md`
- Modify: `tests/compat_runtime_test.rs`
- Modify: `tests/runtime_profile_test.rs`
- Modify: `tests/compat_config_test.rs`

- [ ] **Step 1: Delete the two scene-only planning documents**

Remove exactly these files:
- `docs/superpowers/specs/2026-04-06-scene-skill-runtime-routing-design.md`
- `docs/superpowers/plans/2026-04-06-scene-skill-runtime-routing-plan.md`

Keep the websocket/browser docs and Zhihu docs.

- [ ] **Step 2: Sweep remaining tests for scene-only names**

Remove or rewrite any remaining test blocks that still require:
- `fault-details-report`
- `95598-repair-city-dispatch`
- `resolve_scene_skills_dir_path`
- `resolve_scene_skills_dir_from_sgclaw_settings`
- `scene_registry`

Do not delete Zhihu-related assertions during this sweep.

- [ ] **Step 3: Run a focused grep-style audit from the shell**

Run:

```bash
git grep -n "fault-details-report\|95598-repair-city-dispatch\|resolve_scene_skills_dir_path\|resolve_scene_skills_dir_from_sgclaw_settings\|scene_registry" -- src tests docs
```

Expected: no matches in `src/` or `tests/`; doc matches should be gone after the deletions.

- [ ] **Step 4: Commit Task 4**

```bash
git add tests/compat_runtime_test.rs tests/runtime_profile_test.rs tests/compat_config_test.rs
git rm docs/superpowers/specs/2026-04-06-scene-skill-runtime-routing-design.md docs/superpowers/plans/2026-04-06-scene-skill-runtime-routing-plan.md
git commit -m "docs: remove ws-only scene planning artifacts"
```

---

### Task 5: Verify The Branch Is Back To WS Plus Zhihu Only

**Files:**
- Verify only unless a failing test proves one tiny follow-up fix is needed

- [ ] **Step 1: Run the retained Zhihu websocket regression**

Run:

```bash
cargo test --test agent_runtime_test production_submit_task_routes_zhihu_through_ws_backend_without_helper_bootstrap -- --nocapture
```

Expected: PASS.

- [ ] **Step 2: Run websocket/backend focused coverage**

Run:

```bash
cargo test --test browser_ws_backend_test -- --nocapture && cargo test --test service_ws_session_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 3: Run direct-route/runtime Zhihu coverage**

Run:

```bash
cargo test --test compat_runtime_test zhihu_ -- --nocapture && cargo test --test task_runner_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 4: Run config/runtime verification after the single-dir cleanup**

Run:

```bash
cargo test --test compat_config_test -- --nocapture && cargo test --test runtime_profile_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 5: Build the affected binaries**

Run:

```bash
cargo build --bin sgclaw --bin sg_claw --bin sg_claw_client
```

Expected: PASS.

- [ ] **Step 6: Audit the remaining branch diff against `main`**

Run:

```bash
git diff --stat main...HEAD
```

Expected: the remaining meaningful differences are websocket/browser transport work and Zhihu-related behavior, not scene-routing or staged-scene config churn.

- [ ] **Step 7: Commit the final verification pass**

```bash
git add src/config/settings.rs src/compat/config_adapter.rs src/compat/runtime.rs src/compat/workflow_executor.rs src/compat/orchestration.rs src/runtime/mod.rs src/runtime/engine.rs tests/compat_config_test.rs tests/runtime_profile_test.rs tests/compat_runtime_test.rs tests/agent_runtime_test.rs tests/task_runner_test.rs
git commit -m "test: verify ws branch cleanup preserves zhihu websocket flow"
```

---

## Verification Checklist

### Cleanup regressions

```bash
cargo test --test compat_runtime_test ws_cleanup_ -- --nocapture
cargo test --test runtime_profile_test ws_cleanup_ -- --nocapture
cargo test --test compat_config_test ws_cleanup_ -- --nocapture
```

Expected: scene detection, scene prompt injection, and array-style `skillsDir` behavior are gone.

### Retained Zhihu websocket behavior

```bash
cargo test --test agent_runtime_test production_submit_task_routes_zhihu_through_ws_backend_without_helper_bootstrap -- --nocapture
cargo test --test browser_ws_backend_test -- --nocapture
cargo test --test service_ws_session_test -- --nocapture
cargo test --test compat_runtime_test zhihu_ -- --nocapture
```

Expected: websocket submit path and Zhihu direct workflows still pass.

### Runtime/config verification

```bash
cargo test --test compat_config_test -- --nocapture
cargo test --test runtime_profile_test -- --nocapture
cargo test --test task_runner_test -- --nocapture
```

Expected: runtime/config plumbing is stable after the single-dir cleanup.

### Build verification

```bash
cargo build --bin sgclaw --bin sg_claw --bin sg_claw_client
```

Expected: the branch still compiles cleanly.

---

## Notes For The Engineer

- The current scene support touches three different seams: runtime prompt injection, direct route detection/execution, and multi-root `skillsDir` plumbing. Remove all three; deleting only one leaves conflict-prone leftovers.
- If collapsing `skillsDir` to `Option<PathBuf>` creates more churn than expected, keep the internal representation temporarily as a one-element collection, but the public config contract and tests on this branch must still go back to a single configured path.
- Do not delete browser websocket or callback-host code just because it is adjacent to the scene work; this plan is about stripping scene behavior, not reworking transport.
- If `git diff --stat main...HEAD` still shows scene-specific files after Task 5, stop and remove them before merging `main` back into this branch.
@@ -0,0 +1,672 @@

# Fault Details Full Skill Alignment Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Upgrade `fault-details-report.collect_fault_details` into a real staged browser skill that matches the original fault-details workflow, and make `claw-new` interpret the returned artifact status correctly in the direct-submit path.

**Architecture:** Keep routing and direct-skill selection in `claw-new`, but move all fault-details collection, normalization, classification, summary, export, and report-log behavior into the staged skill under `skill_staging`. Implement the staged skill as a true browser-eval entrypoint that remains valid in page context, while exposing testable pure helpers through an environment-safe export guard for `node:test`; then add a narrow Rust artifact interpreter in `src/compat/direct_skill_runtime.rs` so `ok` / `partial` / `empty` map to successful task completion while `blocked` / `error` map to failed completion.
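
The status mapping that interpreter needs is small enough to sketch as a pure function. The function name is illustrative; the real code belongs in `src/compat/direct_skill_runtime.rs` and would read the `status` field out of the parsed artifact first:

```rust
// Illustrative interpreter core: maps the staged artifact's `status`
// string to task success. Unknown statuses are treated as failure.
fn artifact_status_indicates_success(status: &str) -> bool {
    matches!(status, "ok" | "partial" | "empty")
}

fn main() {
    // Collection succeeded, possibly with degraded downstream stages.
    assert!(artifact_status_indicates_success("ok"));
    assert!(artifact_status_indicates_success("partial"));
    assert!(artifact_status_indicates_success("empty"));
    // Collection could not run or broke mid-way.
    assert!(!artifact_status_indicates_success("blocked"));
    assert!(!artifact_status_indicates_success("error"));
    // Anything unrecognized should fail closed.
    assert!(!artifact_status_indicates_success("unknown"));
}
```

Failing closed on unrecognized statuses keeps a future skill-side status addition from silently reporting success.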

**Tech Stack:** Rust 2021, `serde_json`, existing `BrowserPipeTool` / `browser_script` runtime, `node:test`, staged skill fixtures, Cargo integration tests.

---

## Execution Context

- Follow @superpowers:test-driven-development for every behavior change.
- Follow @superpowers:verification-before-completion before claiming each task is done.
- Do **not** create a git worktree unless the user explicitly asks. This repo preference is already established.
- Keep scope tight. Do **not** add a new browser protocol, new dispatch metadata, new UI opener behavior, or Rust-side fault classification logic.
- Keep the current direct path bootstrap requirement intact: the user instruction must still include an explicit `YYYY-MM`, but the staged skill must treat the page-selected range as the source of truth for collection once execution begins.
- Preserve parity with the original package's real behavior: port the original classification table, `qxxcjl`-based reason heuristics, canonical detail mapping, summary aggregation rules, localhost export call, and report-log call into the staged skill rather than implementing a fixture-only subset.

## File Map

### Existing files to modify in `claw-new`

- Modify: `src/compat/direct_skill_runtime.rs`
  - add narrow structured artifact parsing and status-to-summary mapping
  - keep direct-skill routing/config ownership unchanged
- Modify: `tests/agent_runtime_test.rs`
  - add direct-submit regressions for `ok`, `partial`, `empty`, `blocked`, and `error`
- Modify: `tests/browser_script_skill_tool_test.rs`
  - add browser-script execution-shape regression for browser-eval return payloads used by fault-details

### Existing files to modify in `skill_staging`

- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`
  - replace empty shell with browser-eval entrypoint plus parity helpers
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js`
  - deterministic fixture coverage for normalization, classification, summary, artifact contract, export/logging degradation, and entrypoint shape helpers
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.toml`
  - align tool description with real collection/export/report-log behavior
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.md`
  - align written contract with actual runtime behavior and artifact fields
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/references/collection-flow.md`
  - align flow with page-range/query/export/report-log sequence
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/references/data-quality.md`
  - make canonical columns, original classification tables, reason heuristics, summary rules, and partial semantics explicit
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/scenes/fault-details-report/scene.json`
  - keep scene output/state contract aligned with real staged artifact behavior

### Existing files to read but not redesign

- Read only: `docs/superpowers/specs/2026-04-10-fault-details-full-skill-alignment-design.md`
- Read only: `src/agent/mod.rs`
- Read only: `src/compat/browser_script_skill_tool.rs`
- Read only: `D:/desk/智能体资料/大四区报告监测项/故障明细/index.html`

---

### Task 1: Add staged-skill red tests for normalization, summary, and artifact-contract semantics

**Files:**
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js`
- Read only: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`
- Read only: `D:/desk/智能体资料/大四区报告监测项/故障明细/index.html`

- [ ] **Step 1: Write the failing staged-skill test file**

Add `collect_fault_details.test.js` using `node:test` and `assert/strict`. Cover these behaviors with fixed fixtures:

```javascript
const test = require('node:test');
const assert = require('node:assert/strict');

const {
  DETAIL_COLUMNS,
  SUMMARY_COLUMNS,
  normalizeDetailRow,
  deriveSummaryRows,
  determineArtifactStatus,
  buildFaultDetailsArtifact,
  buildBrowserEntrypointResult
} = require('./collect_fault_details.js');

test('normalizeDetailRow maps canonical detail fields from raw repair rows', () => {
  const row = normalizeDetailRow({
    qxdbh: 'QX-1',
    bxsj: '2026-03-09 08:00:00',
    cityName: '国网兰州供电公司',
    maintOrgName: '城关供电服务班',
    maintGroupName: '抢修一班',
    bdzMc: '110kV东岗变',
    xlmc10: '10kV东岗线',
    byqmc: '东岗1号变',
    yjflMc: '电网故障',
    ejflMc: '线路故障',
    sjflMc: '低压线路',
    qxxcjl: '现场检查:低压线路断线,已处理完成',
    gzms: '客户报修停电'
  }, {
    companyName: '国网兰州供电公司'
  });

  assert.equal(row.slsj, '2026-03-09 08:00:00');
  assert.equal(row.gssgs, '甘肃省电力公司');
  assert.equal(row.gddw, '城关供电服务班');
  assert.equal(row.gds, '抢修一班');
  assert.equal(row.clzt, '处理完成');
  assert.equal(row.bdz, '110kV东岗变');
  assert.equal(row.line, '10kV东岗线');
  assert.equal(row.pb, '东岗1号变');
});

test('deriveSummaryRows groups normalized rows by gds and computes counters', () => {
  const rows = [
    { gds: '抢修一班', gddw: '城关供电服务班', sgs: '国网兰州供电公司', sxfl1: '无效', sxfl2: '无效', gzsb: '' },
    { gds: '抢修一班', gddw: '城关供电服务班', sgs: '国网兰州供电公司', sxfl1: '有效', sxfl2: '用户侧', gzsb: '表后线' },
    { gds: '抢修一班', gddw: '城关供电服务班', sgs: '国网兰州供电公司', sxfl1: '有效', sxfl2: '电网侧', dwcFl: '低压故障', gzsb: '低压线路' }
  ];

  const summaryRows = deriveSummaryRows(rows, { companyName: '国网兰州供电公司' });
  assert.equal(summaryRows.length, 1);
  assert.equal(summaryRows[0].className, '抢修一班');
  assert.equal(summaryRows[0].allCount, 3);
  assert.equal(summaryRows[0].wxCount, 1);
  assert.equal(summaryRows[0].khcCount, 0);
  assert.equal(summaryRows[0].dyGzCount, 1);
  assert.equal(summaryRows[0].dyxlCount, 1);
  assert.equal(summaryRows[0].bhxCount, 1);
});

test('determineArtifactStatus follows blocked > error > partial > empty > ok precedence', () => {
  assert.equal(determineArtifactStatus({ blockedReason: 'missing_session', fatalError: null, partialReasons: [], detailRows: [{}] }), 'blocked');
  assert.equal(determineArtifactStatus({ blockedReason: null, fatalError: 'parse_failed', partialReasons: [], detailRows: [{}] }), 'error');
  assert.equal(determineArtifactStatus({ blockedReason: null, fatalError: null, partialReasons: ['export_failed'], detailRows: [{}] }), 'partial');
  assert.equal(determineArtifactStatus({ blockedReason: null, fatalError: null, partialReasons: [], detailRows: [] }), 'empty');
  assert.equal(determineArtifactStatus({ blockedReason: null, fatalError: null, partialReasons: [], detailRows: [{}] }), 'ok');
});

test('buildFaultDetailsArtifact keeps canonical fields, selected range, counts, and downstream results', () => {
  const artifact = buildFaultDetailsArtifact({
    period: '2026-03',
    selectedRange: { start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' },
    detailRows: [{ qxdbh: 'QX-1' }],
    summaryRows: [{ index: 1 }],
    partialReasons: ['report_log_failed'],
    downstream: {
      export: { attempted: true, success: true, path: 'http://localhost/export.xlsx' },
      report_log: { attempted: true, success: false, error: '500' }
    }
  });

  assert.equal(artifact.type, 'report-artifact');
  assert.equal(artifact.status, 'partial');
  assert.deepEqual(artifact.selected_range, { start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' });
  assert.equal(artifact.counts.detail_rows, 1);
  assert.equal(artifact.counts.summary_rows, 1);
  assert.deepEqual(artifact.partial_reasons, ['report_log_failed']);
});

test('buildFaultDetailsArtifact keeps required top-level fields for blocked artifact', () => {
  const artifact = buildFaultDetailsArtifact({
    period: '2026-03',
    blockedReason: 'selected_range_unavailable',
    partialReasons: ['selected_range_unavailable']
  });

  assert.equal(artifact.type, 'report-artifact');
  assert.equal(artifact.report_name, 'fault-details-report');
  assert.equal(artifact.period, '2026-03');
  assert.equal(artifact.status, 'blocked');
  assert.deepEqual(artifact.partial_reasons, ['selected_range_unavailable']);
  assert.equal('downstream' in artifact, false);
});

test('buildFaultDetailsArtifact keeps known selected range and counts on late error', () => {
  const artifact = buildFaultDetailsArtifact({
    period: '2026-03',
    selectedRange: { start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' },
    detailRows: [],
    summaryRows: [],
    fatalError: 'summary_failed',
    partialReasons: ['summary_failed']
  });

  assert.equal(artifact.status, 'error');
  assert.deepEqual(artifact.selected_range, { start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' });
  assert.equal(artifact.counts.detail_rows, 0);
  assert.equal(artifact.counts.summary_rows, 0);
});

test('buildBrowserEntrypointResult returns blocked artifact when selected range is unavailable', async () => {
  const artifact = await buildBrowserEntrypointResult({
    period: '2026-03'
  }, {
    readSelectedRange: async () => null
  });

  assert.equal(artifact.status, 'blocked');
  assert.ok(artifact.partial_reasons.includes('selected_range_unavailable'));
});
```

- [ ] **Step 2: Run the staged-skill test file and verify it fails**

Run:

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: FAIL because `collect_fault_details.js` does not export these helpers yet and still only returns an empty shell.

---

### Task 2: Implement staged-skill parity helpers and a valid browser entrypoint

**Files:**
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`
- Test: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js`

- [ ] **Step 1: Implement the helper exports and browser entrypoint shape needed to satisfy the red tests**

Refactor `collect_fault_details.js` so the file remains a valid browser-eval script in page context while still supporting `node:test` through an environment-safe export guard.

Required implementation pieces:

```javascript
const DETAIL_COLUMNS = [/* existing canonical columns */];
const SUMMARY_COLUMNS = [/* existing summary columns */];

function normalizeDetailRow(raw, context) {
  // map qxdbh/gssgs/sgs/gddw/gds/slsj/clzt/bdz/line/pb
  // derive sxfl1/sxfl2/sxfl3/gzsb/gzyy from the original package rules
}

function deriveSummaryRows(detailRows, context) {
  // group by gds and compute all original package counters
}

function determineArtifactStatus({ blockedReason, fatalError, partialReasons, detailRows }) {
  // blocked > error > partial > empty > ok
}

function buildFaultDetailsArtifact({
  period,
  selectedRange,
  detailRows,
  summaryRows,
  partialReasons,
  blockedReason,
  fatalError,
  downstream
}) {
  // return report-artifact with columns, sections, counts, status, partial_reasons, downstream
}

async function buildBrowserEntrypointResult(input, deps = defaultBrowserDeps()) {
  // read selected range from page
  // collect raw rows from page query
  // normalize rows
  // derive summary
  // attempt export + report log
  // return final artifact
}

if (typeof module !== 'undefined' && module.exports) {
  module.exports = {
    DETAIL_COLUMNS,
    SUMMARY_COLUMNS,
    normalizeDetailRow,
    deriveSummaryRows,
    determineArtifactStatus,
    buildFaultDetailsArtifact,
    buildBrowserEntrypointResult
  };
}

return await buildBrowserEntrypointResult(args);
```

Rules:
- keep `DETAIL_COLUMNS` and `SUMMARY_COLUMNS` canonical and stable
- keep helper functions self-contained in this file unless a separate pure helper file becomes necessary for runtime validity
- keep the browser entrypoint compatible with current `eval` wrapper
- keep browser runtime free of unguarded Node-only assumptions
- do **not** invent a new protocol or callback surface

- [ ] **Step 2: Re-run the staged-skill test file and verify it now reaches deeper failures or passes the initial helper coverage**

Run:

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: either PASS for the Task 1 cases, or fail only on the still-missing full parity/export/history specifics added in Task 3.

---

### Task 3: Add red tests for full classification parity, downstream partials, and empty-result export semantics

**Files:**
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js`
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`
- Read only: `D:/desk/智能体资料/大四区报告监测项/故障明细/index.html`

- [ ] **Step 1: Extend the staged-skill tests with failing parity and downstream cases**

Add focused failing tests such as:

```javascript
test('normalizeDetailRow derives gzyy from qxxcjl text heuristics', () => {
  const row = normalizeDetailRow({
    qxxcjl: '现场检查:客户表后线烧损,已恢复送电',
    ejflMc: '客户侧故障',
    sjflMc: '表后线'
  }, { companyName: '国网兰州供电公司' });

  assert.equal(row.gzsb, '表后线');
  assert.equal(row.gzyy, '表后线烧损');
});

test('buildBrowserEntrypointResult returns partial when export fails after detail collection succeeds', async () => {
  const artifact = await buildBrowserEntrypointResult({ period: '2026-03' }, {
    readSelectedRange: async () => ({ start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' }),
    queryFaultRows: async () => [{ qxdbh: 'QX-1', bxsj: '2026-03-09 08:00:00', maintGroupName: '抢修一班' }],
    readCompanyContext: () => ({ companyName: '国网兰州供电公司' }),
    exportWorkbook: async () => {
      throw new Error('export_failed');
    },
    writeReportLog: async () => ({ success: true })
  });

  assert.equal(artifact.status, 'partial');
  assert.ok(artifact.partial_reasons.includes('export_failed'));
  assert.equal(artifact.counts.detail_rows, 1);
  assert.equal(artifact.downstream.export.attempted, true);
  assert.equal(artifact.downstream.export.success, false);
});

test('buildBrowserEntrypointResult returns error when normalized detail rows cannot be produced', async () => {
  const artifact = await buildBrowserEntrypointResult({ period: '2026-03' }, {
    readSelectedRange: async () => ({ start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' }),
    queryFaultRows: async () => [{ qxdbh: '', bxsj: '' }],
    readCompanyContext: () => ({ companyName: '国网兰州供电公司' })
  });

  assert.equal(artifact.status, 'error');
  assert.ok(artifact.partial_reasons.includes('detail_normalization_failed'));
});

test('buildBrowserEntrypointResult keeps canonical rows empty for empty result and omits downstream before attempts', async () => {
  const artifact = await buildBrowserEntrypointResult({ period: '2026-03' }, {
    readSelectedRange: async () => ({ start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' }),
    queryFaultRows: async () => [],
    readCompanyContext: () => ({ companyName: '国网兰州供电公司' })
  });

  assert.equal(artifact.status, 'empty');
  assert.deepEqual(artifact.rows, []);
  assert.equal('downstream' in artifact, false);
});
```
|
||||||
|
|
||||||
|
Also add fixture cases derived from the original package’s full classification table and summary counters so the staged skill is forced toward parity, not a subset implementation.

- [ ] **Step 2: Run the staged-skill test file and verify it fails on the new cases**

Run:

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: FAIL on missing full classification parity or downstream partial/error behavior.

- [ ] **Step 3: Implement the full business logic needed to satisfy the new tests**

In `collect_fault_details.js`:

- port the original classification table and `qxxcjl` text heuristics for `sxfl1`, `sxfl2`, `sxfl3`, `gzsb`, `gzyy`
- port the original summary derivation rules and counters completely
- add required-field validation so structurally unusable normalized rows escalate to `error`
- add downstream `exportWorkbook` and `writeReportLog` stages that record `{attempted, success, path, error}`
- keep collection success distinct from downstream failures so export/logging failures become `partial`, not full failure
- keep placeholder rows, if needed for downstream empty-export payloads, downstream-only and never in canonical returned `rows`
- include both `period` and `selected_range` in the artifact
- omit `downstream` when export/report-log have not been attempted yet
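The downstream bookkeeping and status-escalation rules listed above can be sketched compactly. This is a hedged Rust illustration only — the real logic belongs in `collect_fault_details.js`, and every name here (`StageOutcome`, `overall_status`) is hypothetical, not part of the sgClaw codebase:

```rust
// Hypothetical sketch of the downstream-stage bookkeeping described above.
// A stage records {attempted, success, error}; any attempted-but-failed stage
// downgrades a successful collection to `partial` instead of full failure.
#[derive(Default)]
struct StageOutcome {
    attempted: bool,
    success: bool,
    error: Option<String>,
}

fn overall_status(collection_ok: bool, stages: &[(&str, StageOutcome)]) -> (String, Vec<String>) {
    if !collection_ok {
        // structurally unusable rows or a failed query escalate to `error`
        return ("error".to_string(), Vec::new());
    }
    let partial_reasons: Vec<String> = stages
        .iter()
        .filter(|(_, stage)| stage.attempted && !stage.success)
        .map(|(name, _)| format!("{name}_failed"))
        .collect();
    let status = if partial_reasons.is_empty() { "ok" } else { "partial" };
    (status.to_string(), partial_reasons)
}
```

Under this sketch, a failed `report_log` stage after a successful collection yields `("partial", ["report_log_failed"])`, matching the rule that export/logging failures never turn a successful collection into a full failure.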
- [ ] **Step 4: Re-run the staged-skill test file and verify it passes**

Run:

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: PASS.

---
### Task 4: Align staged-skill metadata and reference docs with the implemented behavior

**Files:**

- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.toml`
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.md`
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/references/collection-flow.md`
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/references/data-quality.md`
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/scenes/fault-details-report/scene.json`

- [ ] **Step 1: Update the staged metadata/docs to match the implemented runtime contract**

Required changes:

- `SKILL.toml`: description must say the tool collects rows, derives summary, attempts localhost export, and records report history
- `SKILL.md`: artifact example must include `selected_range`, `counts`, `status`, `partial_reasons`, and `downstream`
- `references/collection-flow.md`: sequence must explicitly include page-selected range -> raw query -> normalization -> summary -> export -> report-log
- `references/data-quality.md`: document the original classification tables, `qxxcjl` heuristics, summary rules, partial/error escalation rules, and empty-result semantics explicitly enough to match the implemented helpers
- `scene.json`: keep inputs/outputs/status semantics aligned with the richer artifact; do not add routing policy there

- [ ] **Step 2: Read the updated staged docs and verify they match the implemented JS behavior**

Read and confirm:

- descriptions no longer claim “artifact shell” behavior
- docs do not move routing ownership out of `claw-new`
- docs do not promise auto-opening/downloading behavior in this slice
- docs reflect blocked/error field-presence rules and downstream-attempt semantics

Expected: staged metadata/docs accurately reflect the implemented collector.

---
### Task 5: Add Rust red tests for artifact-status interpretation in the direct-submit runtime

**Files:**

- Modify: `tests/agent_runtime_test.rs`
- Modify: `tests/browser_script_skill_tool_test.rs`
- Modify: `src/compat/direct_skill_runtime.rs`
- Read only: `src/compat/browser_script_skill_tool.rs`

- [ ] **Step 1: Add failing direct-submit runtime tests for structured artifact statuses**

Extend `tests/agent_runtime_test.rs` with focused regressions that use the existing temp skill-root harness but return real `report-artifact` payloads:

```rust
#[test]
fn submit_task_treats_partial_report_artifact_as_success_with_warning_summary() {
    let skill_root = build_direct_runtime_skill_root();
    let runtime_context = direct_submit_runtime_context(&skill_root);
    let transport = Arc::new(MockTransport::new(vec![success_browser_response(
        1,
        serde_json::json!({
            "text": {
                "type": "report-artifact",
                "report_name": "fault-details-report",
                "period": "2026-03",
                "selected_range": { "start": "2026-03-08 16:00:00", "end": "2026-03-09 16:00:00" },
                "columns": ["qxdbh"],
                "rows": [{ "qxdbh": "QX-1" }],
                "sections": [{ "name": "summary-sheet", "columns": ["index"], "rows": [{ "index": 1 }] }],
                "counts": { "detail_rows": 1, "summary_rows": 1 },
                "status": "partial",
                "partial_reasons": ["report_log_failed"],
                "downstream": {
                    "export": { "attempted": true, "success": true, "path": "http://localhost/export.xlsx" },
                    "report_log": { "attempted": true, "success": false, "error": "500" }
                }
            }
        }),
    )]));
    // ... invoke handle_browser_message_with_context(...)
    // assert TaskComplete.success == true
    // assert summary contains partial/report_log_failed/detail_rows=1
}

#[test]
fn submit_task_treats_empty_report_artifact_as_success() { /* status=empty => success=true */ }

#[test]
fn submit_task_treats_blocked_report_artifact_as_failure() { /* status=blocked => success=false */ }

#[test]
fn submit_task_treats_error_report_artifact_as_failure() { /* status=error => success=false */ }
```

Also add one focused helper regression to `tests/browser_script_skill_tool_test.rs` that proves the browser-script helper can return a structured object payload used by the fault-details path without flattening required fields away.

Suggested test name:

```rust
#[tokio::test]
async fn execute_browser_script_tool_preserves_structured_report_artifact_payload() { /* ... */ }
```

- [ ] **Step 2: Run the focused Rust tests and verify they fail**

Run:

```bash
cargo test --test agent_runtime_test submit_task_treats_partial_report_artifact_as_success_with_warning_summary -- --nocapture
cargo test --test browser_script_skill_tool_test execute_browser_script_tool_preserves_structured_report_artifact_payload -- --nocapture
```

Expected: the new `agent_runtime_test` case fails because `execute_direct_submit_skill` still returns raw JSON text and `src/agent/mod.rs` still marks all direct-submit results as success when no Rust-side interpretation exists.

---
### Task 6: Implement narrow Rust artifact interpretation without moving business rules into Rust

**Files:**

- Modify: `src/compat/direct_skill_runtime.rs`
- Modify: `tests/agent_runtime_test.rs`
- Modify: `tests/browser_script_skill_tool_test.rs`

- [ ] **Step 1: Implement a narrow structured-artifact interpreter in `src/compat/direct_skill_runtime.rs`**

Add a small internal result type and parser, for example:

```rust
struct DirectSubmitOutcome {
    success: bool,
    summary: String,
}

fn interpret_direct_submit_output(output: &str) -> DirectSubmitOutcome {
    // parse JSON if possible
    // if type == "report-artifact", read status/counts/partial_reasons/downstream
    // map ok/partial/empty => success=true
    // map blocked/error => success=false
    // build concise summary with report_name, period, detail_rows, summary_rows, status, partial reasons
    // fall back to raw output text when payload is not a recognized artifact
}
```
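The skeleton above leaves the status mapping itself implicit. As a hedged, self-contained sketch of just that rule (names are illustrative and JSON parsing is deliberately omitted so the snippet stays dependency-free; the real interpreter would apply this after parsing the `report-artifact` payload):

```rust
// Hypothetical sketch of the status-to-success rule the interpreter applies:
// ok/partial/empty are treated as success, blocked/error as failure, and a
// partial result carries its reasons into the summary text.
#[derive(Debug, PartialEq)]
struct DirectSubmitOutcome {
    success: bool,
    summary: String,
}

fn outcome_for_status(status: &str, partial_reasons: &[&str]) -> DirectSubmitOutcome {
    let success = matches!(status, "ok" | "partial" | "empty");
    let mut summary = format!("status={status}");
    if status == "partial" && !partial_reasons.is_empty() {
        summary.push_str(" reasons=");
        summary.push_str(&partial_reasons.join(","));
    }
    DirectSubmitOutcome { success, summary }
}
```

Unrecognized payloads would bypass this mapping entirely and fall back to the raw output text, as the skeleton's last comment requires.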
Then change the public entrypoint shape from `Result<String, PipeError>` to a narrow result carrying `success` and `summary`, or add a second helper that `src/agent/mod.rs` can use without changing routing ownership.

Rules:

- do **not** reimplement fault normalization/classification/summary in Rust
- do **not** add fault-specific branching in `src/agent/mod.rs`
- keep unrecognized non-artifact outputs working as before
- keep explicit `YYYY-MM` derivation and configured `skill.tool` resolution unchanged
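The last rule protects the existing explicit `YYYY-MM` handling. A hedged sketch of what such a derivation amounts to — the function name and signature are hypothetical, not the sgclaw API:

```rust
// Hypothetical sketch of explicit `YYYY-MM` period handling: prefer an
// explicitly supplied period, otherwise derive it from a `YYYY-MM-DD ...`
// range start, and refuse malformed inputs instead of guessing.
fn derive_period(explicit: Option<&str>, range_start: &str) -> Option<String> {
    if let Some(period) = explicit {
        return Some(period.to_string());
    }
    // "2026-03-08 16:00:00" -> "2026-03"
    let candidate = range_start.get(..7)?;
    let bytes = candidate.as_bytes();
    let well_formed = bytes[4] == b'-'
        && bytes.iter().enumerate().all(|(i, b)| i == 4 || b.is_ascii_digit());
    well_formed.then(|| candidate.to_string())
}
```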
- [ ] **Step 2: Update the submit-path caller to use the interpreted success flag**

Adjust the direct-submit branch so `TaskComplete.success` comes from the artifact interpretation instead of blindly treating every `Ok(summary)` as success.

Implementation target:

- keep the direct path in `src/agent/mod.rs`
- keep error handling narrow
- if needed, return a dedicated direct-submit outcome from `execute_direct_submit_skill`

- [ ] **Step 3: Re-run the focused Rust tests and verify they pass**

Run:

```bash
cargo test --test agent_runtime_test submit_task_treats_partial_report_artifact_as_success_with_warning_summary -- --nocapture
cargo test --test agent_runtime_test submit_task_treats_empty_report_artifact_as_success -- --nocapture
cargo test --test agent_runtime_test submit_task_treats_blocked_report_artifact_as_failure -- --nocapture
cargo test --test agent_runtime_test submit_task_treats_error_report_artifact_as_failure -- --nocapture
cargo test --test browser_script_skill_tool_test execute_browser_script_tool_preserves_structured_report_artifact_payload -- --nocapture
```

Expected: PASS.

---
### Task 7: Run the full verification sweep for the staged skill and direct runtime

**Files:**

- Verify only

- [ ] **Step 1: Run the staged-skill deterministic test file**

Run:

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: PASS.

- [ ] **Step 2: Run the relevant Rust regression suites**

Run:

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
cargo test --test agent_runtime_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 3: Run the broader compatibility coverage and build**

Run:

```bash
cargo test --test compat_runtime_test -- --nocapture
cargo test --test compat_config_test -- --nocapture
cargo build --bin sgclaw
```

Expected: PASS.

- [ ] **Step 4: Manually verify the requirements against the approved spec**

Checklist:

- staged skill now reads page-selected range instead of inventing a month window after entry
- staged skill returns canonical detail rows and summary rows
- staged skill ports the original classification table, `qxxcjl` heuristics, and summary counters with parity coverage
- staged skill records downstream export/report-log outcome
- staged skill distinguishes `ok` / `partial` / `empty` / `blocked` / `error`
- `blocked` / `error` artifacts keep the required top-level fields, and preserve known `selected_range` / `counts` when failure happens late enough
- `downstream` is omitted when export/report-log were not attempted and included with attempted/success flags once they were attempted
- empty-result canonical `rows` stay empty even if downstream export uses a placeholder transport row
- `claw-new` maps `ok` / `partial` / `empty` to success and `blocked` / `error` to failure
- no new routing metadata was added to `SKILL.toml` or `scene.json`
- no new browser protocol or opener/UI behavior was introduced

Expected: all checklist items satisfied before calling the work complete.

---
## Verification Checklist

### Staged skill behavior

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: deterministic fixture coverage passes for normalization, full classification parity, summary derivation, artifact shape, empty semantics, and downstream partial semantics.

### Direct-submit runtime mapping

```bash
cargo test --test agent_runtime_test -- --nocapture
```

Expected:

- valid artifact `ok` / `partial` / `empty` completes successfully
- valid artifact `blocked` / `error` completes as failure
- existing invalid config regression still passes
- existing direct-submit happy path still passes

### Browser-script helper safety

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
```

Expected: current browser-script execution semantics remain intact while returning structured artifact payloads.

### Compatibility/build

```bash
cargo test --test compat_runtime_test -- --nocapture
cargo test --test compat_config_test -- --nocapture
cargo build --bin sgclaw
```

Expected: no regressions in compat execution/config loading; main binary builds cleanly.

---
## Notes For The Engineer

- The paired spec is `docs/superpowers/specs/2026-04-10-fault-details-full-skill-alignment-design.md`.
- Keep all fault business transforms in `skill_staging`, not in Rust.
- Keep direct routing config-owned via `skillsDir` + `directSubmitSkill`.
- Do **not** broaden this slice into LLM routing, generic dispatch policy, new browser opcodes, or export auto-open behavior.
- If the original package reveals extra classification rules that are needed for parity, add them only inside `collect_fault_details.js` and its staged references/tests, not in `claw-new`.
551
docs/superpowers/plans/2026-04-11-main-into-ws-merge-v2-plan.md
Normal file
@@ -0,0 +1,551 @@
# Main → WS Merge v2 Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Merge the latest `origin/main` into `feature/claw-ws` so that the `ws` branch ends up keeping **both** the pipe and ws communication paths, preserves the Zhihu behavior, and replaces the old duplicate implementation already deleted by the `ws` cleanup with the official fault-details implementation from `main`.

**Architecture:** This merge is not "permanently keep the cleanup state of having no fault-details"; it is "first delete the old duplicate implementation on `ws`, then absorb the official implementation from `main`". The conflict-adjudication priority is: **protect pipe first, then ws, then Zhihu, while rejecting any backflow of the old duplicate scene/fault-details implementation from `ws`**. The whole process uses `git merge --no-commit --no-ff origin/main`; after conflicts are resolved, run only focused verification and stop in the uncommitted state.

**Tech Stack:** Git, Rust 2021, Cargo test, sgClaw pipe transport, ws transport, compat/runtime/orchestration stack, Zhihu direct workflow tests.

---
## Preconditions

- the current branch must be `feature/claw-ws`
- `2026-04-09-ws-branch-scene-cleanup-plan.md` has been completed
- no merge is currently in progress
- there are no uncommitted tracked changes
- this run does **not create a worktree**; execute in the current repository
- the end state of this run is: **merged, verified, uncommitted**

---
## Final Merge Principles

### 1) `main` is the pipe mainline

The merge must not break the pipe transport that currently works on `main`.

### 2) The `ws` branch must end up keeping both pipe and ws

The merge must not make the `ws` branch lose the websocket path, nor leave it with pipe only.

### 3) Both sides have Zhihu

The merge must not break existing Zhihu behavior, especially the ws→Zhihu keep-path.

### 4) fault-details follows the official `main` implementation

- the old duplicate implementation on `ws`: **must not flow back**
- the official implementation on `main`: **should be merged in**
- the end state is not "no fault-details" but "no old ws fault-details; only the official `main` version remains"

### 5) No backflow of old scene plumbing

The following old surfaces must not survive in the final result:

- ws's own old scene registry / old scene plumbing
- old duplicate routes/contracts already deleted by the ws cleanup
- leftover logic that only served the old `skill_staging` scene assembly

---
## File Map

### A. Shared / high-risk files to watch closely during the merge

- `Cargo.toml`
- `Cargo.lock`
- `src/agent/mod.rs`
- `src/agent/task_runner.rs`
- `src/config/settings.rs`
- `src/compat/config_adapter.rs`
- `src/compat/runtime.rs`
- `src/compat/orchestration.rs`
- `src/compat/workflow_executor.rs`
- `src/compat/browser_script_skill_tool.rs`
- `src/compat/direct_skill_runtime.rs`
- `src/compat/openxml_office_tool.rs`

### B. pipe / ws / Zhihu protection surface

- `src/compat/runtime.rs`
- `src/compat/orchestration.rs`
- `src/compat/workflow_executor.rs`
- `src/agent/task_runner.rs`
- `tests/agent_runtime_test.rs`
- `tests/browser_ws_backend_test.rs`
- `tests/service_ws_session_test.rs`
- `tests/task_runner_test.rs`

### C. Files that must still block old-implementation backflow after cleanup

- `src/runtime/mod.rs`
- `src/runtime/engine.rs`
- `src/config/settings.rs`
- `src/compat/config_adapter.rs`
- `tests/compat_runtime_test.rs`
- `tests/runtime_profile_test.rs`
- `tests/compat_config_test.rs`

### D. Test surface that may need updating together with main's official fault-details

- `tests/compat_runtime_test.rs`
- `tests/compat_config_test.rs`
- `tests/browser_script_skill_tool_test.rs`
- `tests/compat_openxml_office_tool_test.rs`

---
## Conflict Resolution Rule Table

| Category | Final retention principle |
|---|---|
| pipe main path | **prefer the working `main` version**; it must not be broken by ws |
| ws path | **ws capability must be preserved**; it must not be lost while absorbing main |
| Zhihu | neither side's capability may be broken; at minimum preserve the existing keep-path |
| fault-details | **keep main's official implementation**; do not keep ws's old duplicate implementation |
| old scene/95598 cleanup leftovers | must not flow back in the form of ws's old duplicate implementation |
| `skillsDir` / config | follow what the final product needs; if main's official implementation does not require the old array-style/scene expansion, do not bring it back |
| temporary merge patches | never keep |

---
### Task 1: Confirm Merge Preconditions And Diff Surface

**Files:**

- No code changes expected
- Observe repo state and branch diff only

- [ ] **Step 1: Confirm current branch**

Run:

```bash
git rev-parse --abbrev-ref HEAD
```

Expected:

```text
feature/claw-ws
```

- [ ] **Step 2: Confirm no merge is in progress**

Run:

```bash
git rev-parse -q --verify MERGE_HEAD
```

Expected: exit code `1`.

- [ ] **Step 3: Confirm no tracked local changes**

Run:

```bash
git diff --name-only && printf '\n---STAGED---\n' && git diff --cached --name-only
```

Expected:

```text

---STAGED---
```

- [ ] **Step 4: List current untracked files**

Run:

```bash
git status --short
```

Expected: only known local untracked items, or a clearly understood list.

- [ ] **Step 5: Update `origin/main`**

Run:

```bash
git fetch origin main
```

- [ ] **Step 6: Show ws vs main diff surface before merge**

Run:

```bash
git diff --name-status HEAD...origin/main
```

Expected: a clear file list for estimating the likely merge surface.

- [ ] **Step 7: Stop if preconditions fail**

Stop if:

- branch is wrong
- merge is in progress
- tracked changes exist
- an untracked file collision with `origin/main` is found and unresolved

---
### Task 2: Start The Merge Without Committing

**Files:**

- Merge index / working tree only

- [ ] **Step 1: Start no-commit merge**

Run:

```bash
git merge --no-commit --no-ff origin/main
```

Expected:

- either auto-merge pauses before commit
- or Git reports conflicts

- [ ] **Step 2: Capture merge surface immediately**

Run:

```bash
git status --short
```

- [ ] **Step 3: Separate results into three buckets**

Create a working list of conflicted files under:

1. pipe-critical
2. ws/Zhihu-critical
3. shared infra / tests

- [ ] **Step 4: If no conflicts, proceed directly to Task 4 verification**

- [ ] **Step 5: If conflicts exist, proceed to Task 3**

---
### Task 3: Resolve Conflicts By System Role, Not By Branch Bias

**Files:**

- Only files reported by Git as conflicted

#### Global conflict policy

For every conflicted hunk, answer these four questions in order:

1. Does this hunk affect **pipe** correctness?
2. Does this hunk affect **ws** correctness?
3. Does this hunk affect **Zhihu** correctness?
4. Is this hunk part of **ws old duplicate fault-details/scene logic** or **main official implementation**?

Then apply the rule:

- **pipe cannot break**
- **ws cannot break**
- **Zhihu cannot break**
- **ws old duplicate fault-details must stay deleted**
- **main official fault-details should come in**

---
#### Task 3A: Resolve pipe-critical shared runtime files

**Files:**

- `src/compat/runtime.rs`
- `src/agent/task_runner.rs`
- `src/agent/mod.rs`
- `src/config/settings.rs`
- `src/compat/config_adapter.rs`

- [ ] **Step 1: For each conflict, keep the side that preserves main’s pipe behavior**

- [ ] **Step 2: Reject ws-only duplicate business logic that main already owns**

- [ ] **Step 3: Keep ws support if the file also serves the ws path**

This is additive preservation, not “main wins everything”.

- [ ] **Step 4: Verify each resolved file has no conflict markers**

Run per file:

```bash
git diff --check -- <path>
```

---
#### Task 3B: Resolve ws / Zhihu-critical routing files

**Files:**

- `src/compat/workflow_executor.rs`
- `src/compat/orchestration.rs`

- [ ] **Step 1: Bring in main’s official fault-details path if it lives here**

- [ ] **Step 2: Do not reintroduce ws’s old duplicate fault-details path**

- [ ] **Step 3: Preserve the ws submit/browser websocket path**

- [ ] **Step 4: Preserve the Zhihu routing path**

- [ ] **Step 5: Verify each resolved file has no conflict markers**

Run per file:

```bash
git diff --check -- <path>
```

---
#### Task 3C: Resolve shared infra files minimally

**Files:**

- `Cargo.toml`
- `Cargo.lock`
- `src/compat/browser_script_skill_tool.rs`
- `src/compat/direct_skill_runtime.rs`
- `src/compat/openxml_office_tool.rs`

- [ ] **Step 1: Keep only the dependency/code shape needed by the merged result**

- [ ] **Step 2: Do not keep lines from prior failed merge attempts**

- [ ] **Step 3: Accept main fixes unless they break pipe/ws/Zhihu behavior**

- [ ] **Step 4: Verify each resolved file has no conflict markers**

Run per file:

```bash
git diff --check -- <path>
```

---
#### Task 3D: Resolve tests to reflect the final intended product

**Files:**

- `tests/compat_runtime_test.rs`
- `tests/runtime_profile_test.rs`
- `tests/compat_config_test.rs`
- `tests/agent_runtime_test.rs`
- `tests/browser_script_skill_tool_test.rs`
- `tests/compat_openxml_office_tool_test.rs`

- [ ] **Step 1: Keep tests proving the pipe path still works**

- [ ] **Step 2: Keep tests proving the ws path still works**

- [ ] **Step 3: Keep the Zhihu keep-path regression**

- [ ] **Step 4: Replace cleanup-only “fault-details absent” assertions if the final intended state is now “fault-details present via main’s official implementation”**

- [ ] **Step 5: Do not keep assertions that only prove ws’s old duplicate implementation is absent if they now contradict the intended merged product**

- [ ] **Step 6: Verify each resolved test file has no conflict markers**

Run per file:

```bash
git diff --check -- <path>
```

---
#### Task 3E: Confirm merge is fully resolved

**Files:**

- No code changes expected

- [ ] **Step 1: Confirm no unmerged entries remain**

Run:

```bash
git diff --name-only --diff-filter=U
```

Expected: no output.

- [ ] **Step 2: Show final resolved file list**

Run:

```bash
git diff --cached --name-only
```

---
### Task 4: Verify Final Product Behavior, Not Cleanup Intermediate State

**Files:**

- Test: `tests/agent_runtime_test.rs`
- Test: `tests/browser_ws_backend_test.rs`
- Test: `tests/service_ws_session_test.rs`
- Test: `tests/task_runner_test.rs`
- Test: `tests/compat_runtime_test.rs`
- Test: `tests/runtime_profile_test.rs`
- Test: `tests/compat_config_test.rs`
- Conditional: `tests/browser_script_skill_tool_test.rs`
- Conditional: `tests/compat_openxml_office_tool_test.rs`

#### Verification goals

This task must prove all four:

1. **pipe path still works**
2. **ws path still works**
3. **Zhihu still works**
4. **final fault-details implementation is the main version, not ws’s old duplicate**

---
#### Task 4A: Verify pipe-related behavior

- [ ] **Step 1: Run task runner coverage**

Run:

```bash
cargo test --test task_runner_test -- --nocapture
```

- [ ] **Step 2: Run the compat runtime suite relevant to the main path**

Run:

```bash
cargo test --test compat_runtime_test -- --nocapture
```

- [ ] **Step 3: If pipe-specific tests fail, stop and fix merge resolution before continuing**

---
#### Task 4B: Verify ws-related behavior

- [ ] **Step 1: Run the browser websocket backend suite**

Run:

```bash
cargo test --test browser_ws_backend_test -- --nocapture
```

- [ ] **Step 2: Run the service websocket session suite**

Run:

```bash
cargo test --test service_ws_session_test -- --nocapture
```

- [ ] **Step 3: If ws-specific tests fail, stop and fix merge resolution before continuing**

---
#### Task 4C: Verify Zhihu behavior

- [ ] **Step 1: Re-run the ws→Zhihu keep-path regression**

Run:

```bash
cargo test --test agent_runtime_test production_submit_task_routes_zhihu_through_ws_backend_without_helper_bootstrap -- --nocapture
```

Expected:

```text
1 passed; 0 failed
```

- [ ] **Step 2: If additional Zhihu tests were touched by conflicts, run the smallest affected test target**

Run as needed:

```bash
cargo test --test agent_runtime_test -- --nocapture
```

---
#### Task 4D: Verify config/runtime contracts

- [ ] **Step 1: Run the runtime profile suite**

Run:

```bash
cargo test --test runtime_profile_test -- --nocapture
```

- [ ] **Step 2: Run the compat config suite**

Run:

```bash
cargo test --test compat_config_test -- --nocapture
```

- [ ] **Step 3: Ensure contracts now reflect the final merged product, not the cleanup-only intermediate**

---
#### Task 4E: Verify shared infra if touched

- [ ] **Step 1: If browser-script tool files were touched**

Run:

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
```

- [ ] **Step 2: If openxml files were touched**

Run:

```bash
cargo test --test compat_openxml_office_tool_test -- --nocapture
```

---
#### Task 4F: Compile guard

- [ ] **Step 1: Run a compile-only full test build**

Run:

```bash
cargo test --no-run
```

Expected: exit code `0`.

---
### Task 5: Confirm The Merge Outcome Matches The Principle

**Files:**

- No code changes expected

- [ ] **Step 1: Show final status**

Run:

```bash
git status --short
```

Expected:

- no `UU` / `AA` / `DD`
- merged, validated, uncommitted state only

- [ ] **Step 2: Show final staged summary**

Run:

```bash
git diff --cached --stat
```
- [ ] **Step 3: Report the four required facts with command-backed evidence**

Only if verified:

1. pipe is not broken
2. ws is not broken
3. Zhihu is not broken
4. the final fault-details comes from the official implementation on main, not from the old duplicate ws implementation

- [ ] **Step 4: Stop here**

Do **not** run:

```bash
git commit
git push
```

---
## Stop Conditions

Stop immediately if any of the following occurs; do not expand the scope of work on your own:

- the official fault-details implementation on `origin/main` depends on a contract the cleanup already deleted, which puts the work beyond a simple merge
- pipe and ws depend on the same piece of shared code, but the two sides' requirements are in structural conflict
- the Zhihu keep-path fails
- `cargo test --no-run` fails and the problem is outside this merge surface
- the pipe/ws coexistence design needs to be reworked rather than simply merged

---
## One-line Execution Rule

**The final standard for this merge is not "keep ws free of fault-details"; it is "preserve pipe, preserve ws, preserve Zhihu, and let main's official fault-details implementation replace the old duplicate ws implementation".**

---
# TQ Lineloss Deterministic Skill Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Add a staged `tq-lineloss-report.collect_lineloss` browser-script skill plus a `。。。` deterministic submit path in `claw-new` that extracts and normalizes company/month/week parameters without LLM, executes through the existing pipe browser-script seam, and does not regress Zhihu hotlist behavior.

**Architecture:** Keep the new behavior behind a narrow deterministic branch that activates only when the raw instruction ends with the exact suffix `。。。`. `claw-new` owns deterministic trigger detection, explicit scene matching, semantic extraction, canonical normalization, prompt-or-execute control flow, and artifact interpretation; the staged skill owns page inspection, source/API collection, row normalization, export/report-log behavior, and final artifact generation. Reuse the existing `browser_script` execution seam already used by the direct browser path so the backend can later swap from pipe to ws without changing the deterministic contract.

**Tech Stack:** Rust 2021, Cargo tests, existing `BrowserPipeTool` / `execute_browser_script_tool` seam, staged skill packaging under `claw/claw/skills/skill_staging`, browser-side JavaScript, deterministic string parsing and normalization.

---
## Execution Context

- Follow @superpowers:test-driven-development for every behavior change.
- Follow @superpowers:verification-before-completion before claiming each task is done.
- Do **not** create a git worktree unless the user explicitly asks.
- Keep the new behavior as a narrow branch; do **not** redesign the whole runtime into a general registry engine in this slice.
- Preserve `src/runtime/engine.rs:147-159` and `src/runtime/engine.rs:265-286` behavior unless a failing regression test proves a change is required.
- Do **not** add ws runtime requirements on `main`; keep ws-readiness isolated to backend-neutral contracts only.
- Never fall back to page defaults for missing company, mode, or period in deterministic mode.
- If a deterministic request does not match the lineloss whitelist scene, return a deterministic mismatch prompt instead of falling through to ordinary orchestration.
## File Map

### New or modified files in `claw-new`

- Create: `src/compat/deterministic_submit.rs`
  - suffix detection, deterministic scene match, prompt-or-execute decision
- Create: `src/compat/tq_lineloss/mod.rs`
  - public normalization and artifact helpers
- Create: `src/compat/tq_lineloss/contracts.rs`
  - canonical request/result data structures and status semantics
- Create: `src/compat/tq_lineloss/org_resolver.rs`
  - alias generation, canonical label/code resolution, ambiguity handling
- Create: `src/compat/tq_lineloss/period_resolver.rs`
  - month/week extraction, contradiction detection, canonical payload building
- Create: `src/compat/tq_lineloss/org_units.rs`
  - checked-in canonical unit dictionary derived from the real source tree data
- Modify: `src/compat/mod.rs`
  - export the deterministic and lineloss modules
- Modify: `src/agent/mod.rs`
  - insert the deterministic branch before ordinary LLM interpretation, but only when the exact suffix is present
- Modify only if code duplication would otherwise occur: `src/compat/direct_skill_runtime.rs`
  - extract narrow shared browser-script execution helpers without changing current configured direct-submit behavior
- Read but avoid changing unless tests force it: `src/runtime/engine.rs`
  - existing Zhihu hotlist routing/prompt logic must remain intact
### New staged skill package in `claw`

- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/SKILL.md`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/SKILL.toml`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/references/collection-flow.md`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/references/data-quality.md`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/assets/scene-snapshot/index.html`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.test.js`
- Create if staging conventions require it: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/scenes/tq-lineloss-report/scene.json`
### Tests

- Create: `tests/deterministic_submit_test.rs`
- Modify: `tests/compat_runtime_test.rs`
- Modify only if end-to-end submit coverage requires it: `tests/runtime_task_flow_test.rs`

---
## Locked contracts

### Deterministic trigger contract

- Trigger only when the raw instruction ends with the exact suffix `。。。`.
- No suffix: current behavior unchanged.
- Suffix + unsupported scene: explicit deterministic mismatch prompt.
- Suffix is not permission for arbitrary browser actions; only fixed deterministic scenes are allowed.
- Negative cases must stay non-deterministic or mismatched exactly as designed:
  - ASCII `...` is not the trigger
  - `。。。。` is not the trigger
  - `。。。` appearing in the middle of the instruction is not the trigger
  - any trailing whitespace after `。。。` is not the trigger in this slice
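A minimal sketch of the trigger check that satisfies every negative case above, assuming detection operates on the raw instruction string (the function name is illustrative, not the real module API):

```rust
const DETERMINISTIC_SUFFIX: &str = "。。。";

/// Illustrative suffix check: the instruction must end with exactly three
/// ideographic full stops, with nothing (not even whitespace) after them.
fn is_deterministic_trigger(raw: &str) -> bool {
    match raw.strip_suffix(DETERMINISTIC_SUFFIX) {
        // `。。。。` leaves a remainder that still ends with `。`, so it is
        // rejected; ASCII `...`, a mid-string `。。。`, and trailing
        // whitespace all fail the strip itself.
        Some(rest) => !rest.ends_with('。'),
        None => false,
    }
}
```

Because the check is a pure function of the raw string, all the negative cases in the contract become one-line unit tests.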
### Canonical org contract

The resolver must output both display and backend values:

```rust
pub struct ResolvedOrg {
    pub label: String,
    pub code: String,
}
```

Required supported inputs include:

- `兰州公司`
- `天水公司`
- `国网兰州供电公司`
- `城关供电分公司`
- `榆中县供电公司`
- normalized shorthand such as `榆中县公司`

Rules:

- derive aliases from the real unit tree data
- require uniqueness before execution
- ambiguous aliases prompt and stop
- missing company prompts and stop
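The rules above can be sketched as a uniqueness-gated lookup. The alias derivation here (strip the `国网` prefix, collapse `供电公司` to `公司`) and the unit codes are illustrative assumptions; the real dictionary and its codes must come from the source tree data:

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
enum OrgResolution {
    Unique { label: String, code: String },
    Ambiguous(Vec<String>),
    NotFound,
}

/// Hypothetical alias derivation from a formal label.
fn aliases(label: &str) -> Vec<String> {
    let mut out = vec![label.to_string()];
    if let Some(rest) = label.strip_prefix("国网") {
        out.push(rest.to_string());
    }
    for base in out.clone() {
        if let Some(stem) = base.strip_suffix("供电公司") {
            out.push(format!("{stem}公司"));
        }
    }
    out
}

/// Resolve only when exactly one canonical unit matches the input alias;
/// anything else is a prompt-and-stop condition for the caller.
fn resolve_org(units: &[(&str, &str)], input: &str) -> OrgResolution {
    let mut hits: BTreeMap<&str, &str> = BTreeMap::new();
    for &(label, code) in units {
        if aliases(label).iter().any(|a| a == input) {
            hits.insert(label, code);
        }
    }
    match hits.len() {
        0 => OrgResolution::NotFound,
        1 => {
            let (label, code) = hits.into_iter().next().unwrap();
            OrgResolution::Unique { label: label.to_string(), code: code.to_string() }
        }
        _ => OrgResolution::Ambiguous(hits.into_keys().map(String::from).collect()),
    }
}
```

Note that this scheme makes `榆中县公司` resolve to `榆中县供电公司` automatically, which is exactly the normalized-shorthand case the contract requires.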
### Canonical period contract

```rust
pub enum PeriodMode {
    Month,
    Week,
}

pub struct ResolvedPeriod {
    pub mode: PeriodMode,
    pub mode_code: String,
    pub value: String,
    pub payload: serde_json::Value,
}
```

Required supported inputs include:

- `月累计 2026-03`
- `月累计 2026年3月`
- `周累计 2026年第12周`

Rules:

- month and week intent are mutually exclusive
- missing mode prompts and stop
- missing period prompts and stop
- bare `第12周` is incomplete in this slice and must prompt for year instead of guessing
- derive the real backend `period_mode_code` values and request payload field names from the source page/API contract before implementation; do not ship placeholder enum echoes such as `month`/`week` unless the source materials prove those are the real backend codes
- never use page-selected defaults in deterministic mode
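A minimal parsing sketch for the extraction half of this contract (mode markers plus ordered digit runs); it stops at typed year/month/week values and deliberately leaves `mode_code` and the payload field names to the real backend contract:

```rust
#[derive(Debug, PartialEq)]
enum PeriodParse {
    Month { year: u16, month: u8 },
    Week { year: u16, week: u8 },
    NeedMode,
    NeedYear,
    Contradictory,
}

/// Illustrative extraction: detect `月累计`/`周累计`, then read the digit
/// runs in order, e.g. "2026年3月" -> [2026, 3], "2026-03" -> [2026, 3].
fn parse_period(text: &str) -> PeriodParse {
    let has_month = text.contains("月累计");
    let has_week = text.contains("周累计");
    match (has_month, has_week) {
        (true, true) => return PeriodParse::Contradictory,
        (false, false) => return PeriodParse::NeedMode,
        _ => {}
    }
    let mut nums: Vec<u32> = Vec::new();
    let mut run = String::new();
    // Trailing space forces the last digit run to be flushed.
    for ch in text.chars().chain(std::iter::once(' ')) {
        if ch.is_ascii_digit() {
            run.push(ch);
        } else if !run.is_empty() {
            nums.push(run.parse().unwrap());
            run.clear();
        }
    }
    match nums.as_slice() {
        // A four-digit year followed by exactly one month/week number.
        [y, v] if *y >= 1000 => {
            if has_month {
                PeriodParse::Month { year: *y as u16, month: *v as u8 }
            } else {
                PeriodParse::Week { year: *y as u16, week: *v as u8 }
            }
        }
        // e.g. a bare `第12周`: never guess the year, prompt instead.
        _ => PeriodParse::NeedYear,
    }
}
```

The mutual-exclusion rule and the no-guessing rule both fall out of the enum: a caller cannot build a request from `NeedMode`, `NeedYear`, or `Contradictory` without prompting first.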
### Artifact contract

Lock the field names now so `claw-new` can interpret status without re-embedding business logic:

```json
{
  "type": "report-artifact",
  "report_name": "tq-lineloss-report",
  "status": "ok",
  "org": {
    "label": "国网兰州供电公司",
    "code": "008df5db70319f73e0508eoac23e0c3c"
  },
  "period": {
    "mode": "month",
    "mode_code": "<real-backend-mode-code>",
    "value": "2026-03",
    "payload": {
      "<real-backend-field>": "<real-backend-value>"
    }
  },
  "columns": [],
  "rows": [],
  "counts": {
    "rows": 0
  },
  "export": {
    "attempted": false,
    "status": "skipped",
    "message": null
  },
  "reasons": []
}
```

Status mapping in `claw-new`:

- `ok` -> task success
- `partial` -> task success with partial summary
- `blocked` -> task failure
- `error` -> task failure
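The mapping above can be sketched as one total function, treating anything outside the locked vocabulary as failure rather than guessing (the function name is illustrative):

```rust
/// Map the locked artifact `status` field to (task_succeeded, summary kind).
fn artifact_outcome(status: &str) -> (bool, &'static str) {
    match status {
        "ok" => (true, "task success"),
        "partial" => (true, "task success with partial summary"),
        "blocked" | "error" => (false, "task failure"),
        // Unknown statuses are outside the locked contract: fail loudly.
        _ => (false, "task failure (unrecognized status)"),
    }
}
```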
---

### Task 1: Scaffold the staged skill package and written contract

**Files:**

- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/SKILL.md`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/SKILL.toml`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/references/collection-flow.md`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/references/data-quality.md`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/assets/scene-snapshot/index.html`
- Create if required: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/scenes/tq-lineloss-report/scene.json`

- [ ] **Step 1: Write the failing package contract files**

Create the package using `fault-details-report` as the structure reference. Lock one tool only:

```toml
[[tools]]
name = "collect_lineloss"
kind = "browser_script"
description = "Collect 台区线损月/周累计线损率 rows using normalized company and period parameters and return a structured report artifact."
```

Declare required args in `SKILL.toml`:

- `expected_domain`
- `org_label`
- `org_code`
- `period_mode`
- `period_mode_code`
- `period_value`
- `period_payload`
- [ ] **Step 2: Write `SKILL.md` before implementation**

Document:

- when to use / when not to use
- required normalized args only
- blocked/error semantics
- exact returned artifact fields
- no raw natural-language values passed to backend requests

- [ ] **Step 3: Write the reference docs**

`references/collection-flow.md` must describe:

- relevant page state
- month request mapping
- week request mapping
- export/report-log flow if retained

`references/data-quality.md` must define:

- canonical output columns
- required field coverage
- status semantics
- partial/error rules
- org/period normalization assumptions

- [ ] **Step 4: Add scene metadata if the current staging registry needs it**

Keep it narrow: one scene, one tool, one artifact type.
- [ ] **Step 5: Add an automated staged-skill load/resolve check**

Add `tests/deterministic_submit_test.rs` coverage that loads the staged skills root used by runtime tests, resolves `tq-lineloss-report.collect_lineloss`, and asserts the tool is discoverable with the required args:

- `expected_domain`
- `org_label`
- `org_code`
- `period_mode`
- `period_mode_code`
- `period_value`
- `period_payload`

Run:

```bash
cargo test deterministic_submit_discovers_tq_lineloss_skill_contract -- --exact
```

Expected: FAIL before the package is fully wired, PASS once the staged skill contract is discoverable and complete.

- [ ] **Step 6: Verify structural parity with `fault-details-report`**

Run a manual file-layout diff and confirm there are no placeholder descriptions or missing required docs.

- [ ] **Step 7: Commit**

```bash
git add "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report" "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/scenes/tq-lineloss-report/scene.json"
git commit -m "feat: scaffold tq lineloss staged skill contract"
```

---
### Task 2: Add browser-side JS red tests and implement the staged collector

**Files:**

- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js`
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.test.js`

- [ ] **Step 1: Write the failing JS tests first**

Cover deterministic pure helpers for:

- missing normalized args -> blocked/error contract
- month request shape uses `org_code` + canonical month payload
- week request shape uses `org_code` + canonical week payload
- artifact field names and counts
- partial/error status shaping
- no raw user-entered org text leakage into request fields

Example test skeleton:

```javascript
const test = require('node:test');
const assert = require('node:assert/strict');

const {
  validateArgs,
  buildMonthRequest,
  buildWeekRequest,
  normalizeRows,
  buildArtifact
} = require('./collect_lineloss.js');

test('buildMonthRequest uses canonical org code and month payload', () => {
  const request = buildMonthRequest({
    org_code: 'ORG-1',
    period_payload: { year: 2026, month: 3 }
  });

  assert.equal(request.orgCode, 'ORG-1');
  assert.equal(request.year, 2026);
  assert.equal(request.month, 3);
});

test('buildArtifact locks field names and partial semantics', () => {
  const artifact = buildArtifact({
    org_label: '国网兰州供电公司',
    org_code: 'ORG-1',
    period_mode: 'month',
    period_mode_code: 'month',
    period_value: '2026-03',
    period_payload: { year: 2026, month: 3 },
    rows: [{ id: 1 }],
    status: 'partial',
    reasons: ['export_failed']
  });

  assert.equal(artifact.report_name, 'tq-lineloss-report');
  assert.equal(artifact.org.code, 'ORG-1');
  assert.equal(artifact.period.value, '2026-03');
  assert.deepEqual(artifact.reasons, ['export_failed']);
});
```
- [ ] **Step 2: Run the JS test file to confirm failure**

Run:

```bash
node --test "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.test.js"
```

Expected: FAIL because the script/helpers do not exist yet.
- [ ] **Step 3: Write the minimal browser-side implementation**

Required structure:

```javascript
function validateArgs(args) { /* require normalized canonical args */ }
function buildMonthRequest(args) { /* build month request from canonical values */ }
function buildWeekRequest(args) { /* build week request from canonical values */ }
function normalizeRows(rawRows) { /* canonical columns only */ }
function buildArtifact(input) { /* locked artifact shape */ }

return (async () => {
  const args = __SKILL_ARGS__;
  validateArgs(args);
  // validate page context
  // collect from page/API
  // normalize rows
  // optionally attempt export/report-log if the real business flow requires it
  return buildArtifact(result);
})();
```

Keep test exports behind an environment-safe guard so the file still works as browser-eval code.
- [ ] **Step 4: Re-run the JS tests until they pass**

Run:

```bash
node --test "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.test.js"
```

Expected: PASS.

- [ ] **Step 5: Commit**

```bash
git add "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js" "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.test.js"
git commit -m "feat: add tq lineloss browser collection script"
```

---
### Task 3: Add deterministic suffix detection and explicit scene routing

**Files:**

- Create: `src/compat/deterministic_submit.rs`
- Modify: `src/compat/mod.rs`
- Modify: `src/agent/mod.rs`
- Create: `tests/deterministic_submit_test.rs`

- [ ] **Step 1: Write failing routing tests**

Add Rust tests for:

- exact raw `。。。` suffix enables deterministic mode
- no suffix leaves current routing untouched
- suffix + unsupported deterministic request returns supported-scene prompt
- when page URL/title context is available and does not match the lineloss scene, deterministic routing returns a mismatch/block prompt instead of proceeding
- Zhihu hotlist request without suffix keeps the current route
- ASCII `...` does not trigger deterministic mode
- `。。。。` does not trigger deterministic mode
- `。。。` in the middle of the instruction does not trigger deterministic mode
- trailing whitespace after `。。。` does not trigger deterministic mode in this slice

Suggested tests:

```rust
#[test]
fn deterministic_submit_requires_exact_suffix() {}

#[test]
fn deterministic_submit_nonmatch_returns_supported_scene_message() {}

#[test]
fn deterministic_submit_rejects_page_context_mismatch() {}

#[test]
fn zhihu_hotlist_request_without_suffix_keeps_existing_route() {}

#[test]
fn deterministic_submit_rejects_non_exact_suffix_variants() {}
```
- [ ] **Step 2: Run the targeted routing tests and confirm failure**

Run:

```bash
cargo test deterministic_submit_requires_exact_suffix -- --exact
cargo test deterministic_submit_nonmatch_returns_supported_scene_message -- --exact
cargo test zhihu_hotlist_request_without_suffix_keeps_existing_route -- --exact
```

Expected: FAIL because the deterministic routing seam does not exist yet.

- [ ] **Step 3: Implement the narrow deterministic routing module**

Recommended public shape:

```rust
pub enum DeterministicSubmitDecision {
    NotDeterministic,
    Prompt { summary: String },
    Execute(DeterministicExecutionPlan),
}
```

`src/agent/mod.rs` should:

1. detect deterministic suffix
2. if not deterministic, continue current flow untouched
3. if prompt, return `TaskComplete`
4. if execute, pass the plan into the browser-script execution seam
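The four-step dispatch above can be sketched as a single exhaustive match; `DeterministicExecutionPlan` and `AgentOutcome` are stand-ins here for whatever the real agent types turn out to be:

```rust
pub struct DeterministicExecutionPlan {
    pub tool: String, // e.g. "tq-lineloss-report.collect_lineloss"
}

pub enum DeterministicSubmitDecision {
    NotDeterministic,
    Prompt { summary: String },
    Execute(DeterministicExecutionPlan),
}

#[derive(Debug, PartialEq)]
enum AgentOutcome {
    ContinueNormalFlow,
    TaskComplete(String),
    RunBrowserScript(String),
}

/// Exhaustive dispatch: no deterministic decision can fall through to an
/// unrelated path, and prompts terminate the task without browser execution.
fn route(decision: DeterministicSubmitDecision) -> AgentOutcome {
    match decision {
        DeterministicSubmitDecision::NotDeterministic => AgentOutcome::ContinueNormalFlow,
        DeterministicSubmitDecision::Prompt { summary } => AgentOutcome::TaskComplete(summary),
        DeterministicSubmitDecision::Execute(plan) => AgentOutcome::RunBrowserScript(plan.tool),
    }
}
```

Because the match is exhaustive, adding a new decision variant later forces the compiler to flag every routing site that must handle it.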
- [ ] **Step 4: Re-run the routing tests**

Run:

```bash
cargo test deterministic_submit_requires_exact_suffix -- --exact
cargo test deterministic_submit_nonmatch_returns_supported_scene_message -- --exact
cargo test zhihu_hotlist_request_without_suffix_keeps_existing_route -- --exact
```

Expected: PASS.

- [ ] **Step 5: Commit**

```bash
git add src/compat/deterministic_submit.rs src/compat/mod.rs src/agent/mod.rs tests/deterministic_submit_test.rs
git commit -m "feat: add deterministic submit routing seam"
```

---
### Task 4: Implement company/unit normalization from real source data

**Files:**

- Create: `src/compat/tq_lineloss/mod.rs`
- Create: `src/compat/tq_lineloss/contracts.rs`
- Create: `src/compat/tq_lineloss/org_resolver.rs`
- Create: `src/compat/tq_lineloss/org_units.rs`
- Modify: `tests/deterministic_submit_test.rs`

- [ ] **Step 1: Write failing org resolver tests**

Cover:

- `兰州公司` -> canonical `国网兰州供电公司` + correct code
- `天水公司` -> canonical `国网天水供电公司` + correct code
- `城关供电分公司` -> lower-level direct match
- `榆中县公司` -> normalized county alias match
- ambiguous alias prompts instead of guessing
- missing company prompts instead of executing

Example skeleton:

```rust
#[test]
fn lineloss_org_resolver_matches_city_alias() {}

#[test]
fn lineloss_org_resolver_matches_county_alias() {}

#[test]
fn lineloss_org_resolver_prompts_on_ambiguity() {}
```
- [ ] **Step 2: Run the org tests and confirm failure**

Run:

```bash
cargo test lineloss_org_resolver_matches_city_alias -- --exact
cargo test lineloss_org_resolver_matches_county_alias -- --exact
cargo test lineloss_org_resolver_prompts_on_ambiguity -- --exact
```

Expected: FAIL because the resolver and checked-in unit dictionary do not exist yet.

- [ ] **Step 3: Check in the canonical unit dictionary and implement alias resolution**

Rules:

- derive data from the real source materials, not guessed literals
- keep canonical `label` and `code`
- generate normalized aliases from formal names
- support both city-company and district/county/sub-company levels
- require uniqueness before execution

- [ ] **Step 4: Implement explicit prompt messages**

Examples (literal product strings, kept in Chinese):

- `已命中台区线损报表技能,但缺少供电单位,请补充如“兰州公司”或“城关供电分公司”。` (scene matched, but the power-supply unit is missing; ask the user to add one, e.g. "兰州公司" or "城关供电分公司")
- `已命中台区线损报表技能,但供电单位存在歧义,请补充更完整名称。` (scene matched, but the unit name is ambiguous; ask for a more complete name)
- [ ] **Step 5: Re-run the org tests**

Run:

```bash
cargo test lineloss_org_resolver_matches_city_alias -- --exact
cargo test lineloss_org_resolver_matches_county_alias -- --exact
cargo test lineloss_org_resolver_prompts_on_ambiguity -- --exact
```

Expected: PASS.

- [ ] **Step 6: Commit**

```bash
git add src/compat/tq_lineloss/mod.rs src/compat/tq_lineloss/contracts.rs src/compat/tq_lineloss/org_resolver.rs src/compat/tq_lineloss/org_units.rs tests/deterministic_submit_test.rs
git commit -m "feat: add tq lineloss org normalization"
```

---
### Task 5: Implement period extraction and canonical payload building

**Files:**

- Create: `src/compat/tq_lineloss/period_resolver.rs`
- Modify: `src/compat/tq_lineloss/mod.rs`
- Modify: `tests/deterministic_submit_test.rs`

- [ ] **Step 1: Write failing period resolver tests**

Cover:

- `月累计 2026-03`
- `月累计 2026年3月`
- `周累计 2026年第12周`
- contradictory month/week expressions prompt
- missing mode prompts
- missing period prompts
- bare `第12周` prompts for year in this slice
- real backend month/week mode codes and request payload field names are derived from source materials instead of placeholder values

Example skeleton:

```rust
#[test]
fn lineloss_period_resolver_parses_month_text() {}

#[test]
fn lineloss_period_resolver_parses_week_text() {}

#[test]
fn lineloss_period_resolver_prompts_for_missing_year_on_week() {}

#[test]
fn lineloss_period_resolver_rejects_contradictory_mode() {}
```
- [ ] **Step 2: Run the period tests and confirm failure**

Run:

```bash
cargo test lineloss_period_resolver_parses_month_text -- --exact
cargo test lineloss_period_resolver_parses_week_text -- --exact
cargo test lineloss_period_resolver_prompts_for_missing_year_on_week -- --exact
cargo test lineloss_period_resolver_rejects_contradictory_mode -- --exact
```

Expected: FAIL because the period resolver does not exist yet.

- [ ] **Step 3: Implement the minimal resolver**

Output contract:

```rust
pub struct ResolvedPeriod {
    pub mode: PeriodMode,
    pub mode_code: String,
    pub value: String,
    pub payload: serde_json::Value,
}
```

Rules:

- no page-default fallback
- no implicit current-year assumptions
- no mixed month/week execution
- [ ] **Step 4: Re-run the period tests**

Run:

```bash
cargo test lineloss_period_resolver_parses_month_text -- --exact
cargo test lineloss_period_resolver_parses_week_text -- --exact
cargo test lineloss_period_resolver_prompts_for_missing_year_on_week -- --exact
cargo test lineloss_period_resolver_rejects_contradictory_mode -- --exact
```

Expected: PASS.

- [ ] **Step 5: Commit**

```bash
git add src/compat/tq_lineloss/period_resolver.rs src/compat/tq_lineloss/mod.rs tests/deterministic_submit_test.rs
git commit -m "feat: add tq lineloss period normalization"
```

---
### Task 6: Wire deterministic execution through the existing browser-script seam

**Files:**

- Modify: `src/compat/deterministic_submit.rs`
- Modify: `src/agent/mod.rs`
- Modify if needed: `src/compat/direct_skill_runtime.rs`
- Modify: `tests/deterministic_submit_test.rs`
- Modify: `tests/compat_runtime_test.rs`

- [ ] **Step 1: Write failing execution tests**

Cover:

- successful deterministic lineloss request builds canonical tool args
- missing company/mode/period returns prompt without browser execution
- `partial` artifact maps to successful partial summary
- `blocked` and `error` artifacts map to failed completion

Example skeleton:

```rust
#[test]
fn deterministic_lineloss_execution_passes_canonical_args() {}

#[test]
fn deterministic_lineloss_missing_company_does_not_invoke_browser() {}

#[test]
fn deterministic_lineloss_partial_artifact_maps_to_partial_summary() {}
```
- [ ] **Step 2: Run the execution tests and confirm failure**

Run:

```bash
cargo test deterministic_lineloss_execution_passes_canonical_args -- --exact
cargo test deterministic_lineloss_missing_company_does_not_invoke_browser -- --exact
cargo test deterministic_lineloss_partial_artifact_maps_to_partial_summary -- --exact
```

Expected: FAIL because the deterministic execution plan is not wired yet.
- [ ] **Step 3: Implement execution via the existing `browser_script` seam**
|
||||||
|
|
||||||
|
Build tool args only from normalized values:
|
||||||
|
- `expected_domain`
|
||||||
|
- `org_label`
|
||||||
|
- `org_code`
|
||||||
|
- `period_mode`
|
||||||
|
- `period_mode_code`
|
||||||
|
- `period_value`
|
||||||
|
- `period_payload`
|
||||||
|
|
||||||
|
Resolve the tool explicitly to:
|
||||||
|
- `tq-lineloss-report.collect_lineloss`
|
||||||
|
|
||||||
|
Do not introduce a new browser opcode family or second browser protocol.
- [ ] **Step 4: Implement central artifact interpretation**

Recommended helper:

```rust
fn summarize_lineloss_artifact(artifact: &serde_json::Value) -> (bool, String)
```

Summary must include canonical org/period and row counts, and surface blocked/partial/error reasons.
- [ ] **Step 5: Re-run the execution tests**

Run:

```bash
cargo test deterministic_lineloss_execution_passes_canonical_args -- --exact
cargo test deterministic_lineloss_missing_company_does_not_invoke_browser -- --exact
cargo test deterministic_lineloss_partial_artifact_maps_to_partial_summary -- --exact
```

Expected: PASS.

- [ ] **Step 6: Commit**

```bash
git add src/compat/deterministic_submit.rs src/agent/mod.rs src/compat/direct_skill_runtime.rs tests/deterministic_submit_test.rs tests/compat_runtime_test.rs
git commit -m "feat: execute deterministic tq lineloss skill through browser script seam"
```

---
### Task 7: Add Zhihu regression coverage and run the full verification set

**Files:**

- Modify: `tests/compat_runtime_test.rs`
- Modify only if required: `tests/runtime_task_flow_test.rs`
- Reuse: `tests/deterministic_submit_test.rs`

- [ ] **Step 1: Add focused Zhihu regression tests**

Required assertions:

- ordinary Zhihu hotlist requests without `。。。` still use the current path
- existing export/presentation requests still preserve their current behavior
- the deterministic suffix does not silently route unmatched requests into Zhihu logic
- an existing non-lineloss direct `browser_script` path outside the new scene still behaves unchanged

- [ ] **Step 2: Add end-to-end deterministic submit coverage**

Required assertions:

- suffix detection
- scene match
- page-context mismatch prompt/block behavior when URL/title contradict the lineloss scene
- missing/ambiguous prompts
- canonical args passed to the browser-script tool
- returned summary shows canonical org and period
- execution stays on the existing pipe-backed browser-script seam with no ws-only dependency introduced on `main`
- [ ] **Step 3: Run the focused Rust tests**

Run:

```bash
cargo test --test deterministic_submit_test
cargo test --test compat_runtime_test
cargo test --test runtime_task_flow_test
```

Expected: PASS.

- [ ] **Step 4: Run the whole Rust suite**

Run:

```bash
cargo test
```

Expected: PASS.

- [ ] **Step 5: Re-run the staged skill JS tests**

Run:

```bash
node --test "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.test.js"
```

Expected: PASS.
- [ ] **Step 6: Commit**

```bash
git add tests/deterministic_submit_test.rs tests/compat_runtime_test.rs tests/runtime_task_flow_test.rs
git commit -m "test: cover deterministic tq lineloss routing and zhihu regression"
```

---
## Final verification checklist

- [ ] `。。。` is the only deterministic trigger.
- [ ] Non-`。。。` requests preserve current routing.
- [ ] Deterministic page-context mismatch prompts or blocks before execution when URL/title contradict the lineloss scene.
- [ ] Zhihu hotlist behavior is unchanged.
- [ ] Existing non-lineloss direct `browser_script` behavior is unchanged.
- [ ] Deterministic non-match returns an explicit supported-scene message.
- [ ] Missing company prompts.
- [ ] Ambiguous company prompts.
- [ ] Missing mode prompts.
- [ ] Missing period prompts.
- [ ] A bare `第12周` ("week 12" with no year) prompts for the year.
- [ ] Canonical org code is passed to the staged skill.
- [ ] Canonical period mode code and payload are passed to the staged skill.
- [ ] The staged skill returns the locked artifact shape.
- [ ] Execution uses the existing `browser_script` seam only.
- [ ] No ws-specific runtime dependency is added on `main`.

## Implementation notes

- Prefer extracting a tiny shared execution helper from `src/compat/direct_skill_runtime.rs` if needed instead of duplicating tool lookup or browser-script invocation code.
- Keep deterministic whitelist configuration in one place, but do not expand this slice into a full general scene-registry redesign.
- If a failing test suggests changing Zhihu behavior, fix the deterministic branch or test harness instead of weakening the existing Zhihu path.
- The checked-in unit dictionary is part of the deterministic contract; treat updates to that data as explicit behavior changes and cover them with tests.
@@ -0,0 +1,448 @@

# TQ Lineloss WS Dual-Transport Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Add ws communication support for the existing `tq-lineloss-report.collect_lineloss` deterministic browser_script path on the `feature/claw-ws` branch while preserving the current pipe path and validated Zhihu ws behavior.

**Architecture:** Reuse the backend-neutral execution seam that already exists for deterministic submit and browser_script execution. Keep lineloss business parsing, canonical args, and artifact interpretation unchanged; only make the ws backend/protocol and submit-path verification complete enough for the same lineloss skill contract to run over both pipe and ws.

**Tech Stack:** Rust 2021, Cargo tests, existing `BrowserBackend` abstraction, `WsBrowserBackend`, `ws_protocol`, browser websocket contract in `docs/_tmp_sgbrowser_ws_api_doc.txt`, existing staged `browser_script` skill execution seam.

---
## Execution Context

- Follow @superpowers:test-driven-development for each behavior change.
- Follow @superpowers:verification-before-completion before claiming each task is done.
- Do **not** create a git worktree unless the user explicitly asks.
- This plan is **ws enablement only** for the already-added lineloss deterministic skill path.
- Do **not** redesign deterministic routing, org parsing, period parsing, staged skill packaging, or artifact contracts unless a failing ws-specific test proves a minimal compatibility fix is required.
- Do **not** modify validated Zhihu hotlist/export business behavior; only add regression coverage around it.
- Preserve the current pipe execution path as the control implementation.
- Preserve the current `BrowserBackend` seam; do not introduce a second lineloss-specific ws execution path.

## Scope Boundary

### In scope

- Make the existing lineloss deterministic `browser_script` skill path run through ws on this branch.
- Keep the same canonical tool args and returned artifact interpretation for both pipe and ws.
- Verify ws browser-script execution against the documented browser ws contract.
- Add focused tests for ws lineloss execution and regressions for Zhihu ws + pipe lineloss.

### Out of scope

- Changing lineloss trigger semantics (`。。。`).
- Changing org/unit normalization semantics or source dictionary shape.
- Changing period normalization semantics.
- Reworking staged skill docs or JS business collection logic beyond ws-compatibility necessities.
- Any Zhihu feature work.
- Any pipe-only cleanup/refactor.
- Any general scene-registry redesign.
## File Map

### Expected code changes

- Modify: `src/pipe/protocol.rs:49-78,130-165,192-209`
  - keep `Action::Eval` encoding aligned with the current transport contract and lineloss skill expectations
- Modify: `src/pipe/browser_tool.rs:62-125`
  - ensure eval response correlation and payload handling remain sufficient for deterministic lineloss execution
- Modify only if a focused test proves it is necessary: `src/compat/browser_script_skill_tool.rs:135-255`
  - preserve the browser_script contract; only make minimal output-shape handling fixes if eval payloads differ from the pipe baseline in a way current code cannot consume
- Modify only if a focused parity test proves it is necessary: `src/compat/direct_skill_runtime.rs:50-129`
  - preserve shared backend-neutral execution helper behavior; no business logic changes
- Read and normally leave unchanged: `src/compat/deterministic_submit.rs:96-157`
  - this is the business contract baseline and should not be rewritten for transport parity work
- Read and normally leave unchanged: `src/agent/mod.rs:242-285`
  - this contains the current deterministic dispatch split used by this branch

### Expected test changes

- Modify: `tests/agent_runtime_test.rs`
  - add/extend deterministic lineloss runtime coverage and parity assertions using the current runtime path
- Modify: `tests/compat_runtime_test.rs`
  - add/extend focused pipe lineloss regression assertions so transport work cannot silently break pipe
- Modify only if end-to-end submit coverage truly needs it: `tests/runtime_task_flow_test.rs`
  - verify broader submit-flow expectations remain intact

### Reference-only files

- Read only: `docs/superpowers/plans/2026-04-11-tq-lineloss-deterministic-skill-plan.md`
- Read only: `docs/superpowers/specs/2026-04-11-tq-lineloss-deterministic-skill-design.md`
- Read only: `docs/_tmp_sgbrowser_ws_api_doc.txt`

---
## Locked contracts

### Contract 1: Same lineloss deterministic business contract on both transports

The ws path must reuse the existing values produced by `src/compat/deterministic_submit.rs:84-95` and `src/compat/deterministic_submit.rs:135-166`:

- `expected_domain`
- `org_label`
- `org_code`
- `period_mode`
- `period_mode_code`
- `period_value`
- `period_payload`

No ws-specific lineloss args may be introduced in this slice.

### Contract 2: Same browser_script execution seam on both transports

The ws path must continue to use `execute_browser_script_skill_raw_output_with_browser_backend(...)` from `src/compat/direct_skill_runtime.rs:95-112`, which in turn uses the same browser_script tool path as pipe. Do not add a second lineloss-only ws runner.

### Contract 3: Same artifact interpretation on both transports

The ws path must produce output that remains consumable by `summarize_lineloss_output(...)` / `summarize_lineloss_artifact(...)` in `src/compat/deterministic_submit.rs:168-257` without transport-specific branching.

### Contract 4: Zhihu ws behavior must stay unchanged

The existing ws browser-script / export path already validated by `tests/agent_runtime_test.rs` and `tests/compat_runtime_test.rs` is a hard regression boundary. If a change breaks Zhihu tests, fix the ws seam instead of weakening Zhihu expectations.

### Contract 5: Pipe remains the baseline

For identical lineloss deterministic inputs, the pipe path should continue to succeed without requiring ws configuration.

---
### Task 1: Lock the ws contract with failing transport-level tests

**Files:**

- Modify: `tests/agent_runtime_test.rs`
- Modify: `tests/compat_runtime_test.rs`
- Read: `docs/_tmp_sgbrowser_ws_api_doc.txt`

- [ ] **Step 1: Add a failing ws lineloss deterministic runtime test**

Model it after the existing ws harness in `tests/agent_runtime_test.rs:69-166`, but target lineloss deterministic execution instead of Zhihu. The test should:

- configure `browserWsUrl`
- submit a deterministic lineloss instruction ending with `。。。`
- return a ws callback payload representing a lineloss `report-artifact`
- assert the success summary includes canonical org, period, status, and rows

Suggested skeleton:

```rust
#[test]
fn ws_deterministic_lineloss_submit_executes_browser_script_and_summarizes_artifact() {
    // arrange ws config + ws server + lineloss artifact callback
    // act handle_browser_message_with_context(... SubmitTask ...)
    // assert TaskComplete success summary contains canonical org/period/rows
}
```
- [ ] **Step 2: Add a failing pipe regression test for the same lineloss contract**

In `tests/compat_runtime_test.rs`, add a focused pipe-side assertion that the same deterministic lineloss instruction still succeeds through the current pipe seam and uses the same summary contract.

Suggested skeleton:

```rust
#[test]
fn pipe_deterministic_lineloss_submit_preserves_existing_summary_contract() {
    // arrange MockTransport responses for browser_script eval
    // act handle_browser_message_with_context(...)
    // assert success summary matches canonical contract
}
```

- [ ] **Step 3: Add a failing ws regression assertion for Zhihu**

Add or tighten a Zhihu ws assertion proving ordinary Zhihu requests still use the existing ws path and do not get intercepted by lineloss deterministic logic.

- [ ] **Step 4: Run the three focused tests to confirm failure**

Run:

```bash
cargo test ws_deterministic_lineloss_submit_executes_browser_script_and_summarizes_artifact -- --exact
cargo test pipe_deterministic_lineloss_submit_preserves_existing_summary_contract -- --exact
cargo test ws_zhihu_submit_path_remains_unchanged_after_lineloss_transport_work -- --exact
```

Expected: at least the new ws lineloss test fails before the seam is completed.

- [ ] **Step 5: Commit**

```bash
git add tests/agent_runtime_test.rs tests/compat_runtime_test.rs
git commit -m "test: lock ws and pipe lineloss transport contracts"
```

---
### Task 2: Make the current eval transport contract explicitly satisfy browser-script requirements

**Files:**

- Modify: `src/pipe/protocol.rs:49-78,130-165,192-209`
- Modify: `src/pipe/browser_tool.rs:62-124`
- Modify only if tests prove necessary: `src/compat/browser_script_skill_tool.rs:99-180,214-255`
- Modify: `tests/pipe_protocol_test.rs`
- Modify: `tests/browser_tool_test.rs`
- Modify: `tests/browser_script_skill_tool_test.rs`

- [ ] **Step 1: Add failing protocol/result-contract tests first**

Extend or add focused tests to lock the current branch's real transport contract:

- `Action::Eval` remains supported by the line protocol and command encoding
- eval request/response correlation remains stable via `seq` matching for lineloss-style target URLs
- eval/browser_script result handling preserves the full JSON artifact string without truncation before deterministic lineloss summarization consumes it

Suggested skeletons:

```rust
#[test]
fn eval_action_remains_supported_in_protocol() {}

#[test]
fn browser_tool_matches_eval_response_by_seq_for_lineloss_flow() {}

#[test]
fn browser_script_tool_preserves_json_artifact_string_for_lineloss() {}
```
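To make the `seq`-matching requirement concrete, here is a minimal standalone sketch of seq-based correlation. The types and names (`Correlator`, `EvalResponse`) are hypothetical, not the real ones in `src/pipe/browser_tool.rs`:

```rust
use std::collections::HashMap;

// Hypothetical sketch of seq-based eval correlation: each request gets a
// monotonically increasing seq, and a response is matched back to its
// pending request by that seq alone, never by URL or arrival order.
pub struct EvalResponse {
    pub seq: u64,
    pub text: String, // raw payload, e.g. a JSON artifact string
}

pub struct Correlator {
    next_seq: u64,
    pending: HashMap<u64, String>, // seq -> originating target URL
}

impl Correlator {
    pub fn new() -> Self {
        Self { next_seq: 0, pending: HashMap::new() }
    }

    // Register an outgoing eval request and return its seq.
    pub fn send(&mut self, url: &str) -> u64 {
        let seq = self.next_seq;
        self.next_seq += 1;
        self.pending.insert(seq, url.to_string());
        seq
    }

    // Resolve a response to its originating URL; unknown seqs yield None.
    pub fn receive(&mut self, resp: &EvalResponse) -> Option<String> {
        self.pending.remove(&resp.seq)
    }
}
```

Matching strictly by seq is what keeps two in-flight evals (say, a lineloss report and an unrelated page) from swapping results when responses arrive out of order.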
- [ ] **Step 2: Run the focused Task 2 tests to confirm failure**

Run:

```bash
cargo test eval_action_remains_supported_in_protocol -- --exact
cargo test browser_tool_matches_eval_response_by_seq_for_lineloss_flow -- --exact
cargo test browser_script_tool_preserves_json_artifact_string_for_lineloss -- --exact
```

Expected: at least one test fails if the current protocol/correlation/result handling is still insufficient for the lineloss artifact path.

- [ ] **Step 3: Implement the minimal transport-contract fix**

Allowed changes:

- adjust only the `Action::Eval` protocol/encoding support in `src/pipe/protocol.rs`
- adjust only request/response correlation in `src/pipe/browser_tool.rs`
- if and only if tests still prove it necessary, make a tiny result-shape/stringification fix in `src/compat/browser_script_skill_tool.rs`
- keep existing Zhihu-compatible behavior intact

Not allowed:

- adding lineloss-only transport fields
- adding a second lineloss-specific execution path
- changing deterministic lineloss business parsing or summary rules

- [ ] **Step 4: Re-run the focused Task 2 tests**

Run:

```bash
cargo test eval_action_remains_supported_in_protocol -- --exact
cargo test browser_tool_matches_eval_response_by_seq_for_lineloss_flow -- --exact
cargo test browser_script_tool_preserves_json_artifact_string_for_lineloss -- --exact
```

Expected: PASS.

- [ ] **Step 5: Re-run the focused ws lineloss runtime test from Task 1**

Run:

```bash
cargo test ws_deterministic_lineloss_submit_executes_browser_script_and_summarizes_artifact -- --exact
```

Expected: PASS.

- [ ] **Step 6: Commit**

```bash
git add src/pipe/protocol.rs src/pipe/browser_tool.rs src/compat/browser_script_skill_tool.rs tests/pipe_protocol_test.rs tests/browser_tool_test.rs tests/browser_script_skill_tool_test.rs
git commit -m "fix: align eval transport contract with lineloss browser script flow"
```

---
### Task 3: Make eval result-shape handling surface the lineloss artifact cleanly

**Files:**

- Modify: `src/pipe/browser_tool.rs:62-125`
- Modify only if tests prove necessary: `src/compat/browser_script_skill_tool.rs:159-180,248-255`
- Modify: `tests/browser_script_skill_tool_test.rs`

- [ ] **Step 1: Add a failing result-shape test**

Lock that an eval response carrying a JSON string report artifact is surfaced as the same browser_script tool output shape expected by `execute_browser_script_tool(...)`.

Suggested skeleton:

```rust
#[test]
fn ws_backend_eval_returns_text_payload_consumable_by_browser_script_tool() {
    // arrange an eval response whose data.text is a JSON string artifact
    // assert execute_browser_script_tool(...) returns the full artifact text without truncation
}
```
- [ ] **Step 2: Run the result-shape test to confirm failure**

Run:

```bash
cargo test ws_backend_eval_returns_text_payload_consumable_by_browser_script_tool -- --exact
```

Expected: FAIL only if current eval/result handling is not sufficient for full lineloss artifact output.

- [ ] **Step 3: Implement the minimal result-shape fix**

Allowed fixes:

- adjust `BrowserPipeTool::invoke(...)` only if response packaging itself is wrong
- if and only if still required, make a tiny output-shape compatibility fix in `src/compat/browser_script_skill_tool.rs` so JSON string `data.text` payloads are preserved identically to the pipe baseline

Not allowed:

- transport-specific lineloss parsing
- changes to deterministic business logic
- adding a second lineloss-specific execution path

- [ ] **Step 4: Re-run the result-shape test**

Run:

```bash
cargo test ws_backend_eval_returns_text_payload_consumable_by_browser_script_tool -- --exact
```

Expected: PASS.

- [ ] **Step 5: Re-run the focused ws lineloss runtime test from Task 1**

Run:

```bash
cargo test ws_deterministic_lineloss_submit_executes_browser_script_and_summarizes_artifact -- --exact
```

Expected: PASS.

- [ ] **Step 6: Commit**

```bash
git add src/pipe/browser_tool.rs src/compat/browser_script_skill_tool.rs tests/browser_script_skill_tool_test.rs
git commit -m "fix: make eval result shape match browser script contract"
```

---
### Task 4: Verify the current backend-neutral deterministic execution path without changing business rules

**Files:**

- Read baseline: `src/agent/mod.rs:242-285`
- Read baseline: `src/compat/deterministic_submit.rs:96-157`
- Modify only if a focused parity test proves it is necessary: `src/compat/direct_skill_runtime.rs:50-129`
- Modify: `tests/agent_runtime_test.rs`
- Modify: `tests/compat_runtime_test.rs`

- [ ] **Step 1: Add a failing integration test for backend-neutral parity**

Add a test proving these two current-branch paths produce the same lineloss summary contract for equivalent artifact payloads:

- pipe path via the existing deterministic submit flow in `tests/compat_runtime_test.rs`
- runtime path via `handle_browser_message_with_context(...)` deterministic submit routing in `tests/agent_runtime_test.rs`

Suggested skeleton:

```rust
#[test]
fn deterministic_lineloss_pipe_and_ws_paths_share_summary_contract() {}
```

- [ ] **Step 2: Run the parity test to confirm failure or gap**

Run:

```bash
cargo test deterministic_lineloss_pipe_and_ws_paths_share_summary_contract -- --exact
```

Expected: FAIL only if a remaining shared execution seam gap still exists.

- [ ] **Step 3: Apply the smallest shared execution fix if needed**

Allowed changes:

- tiny helper extraction or result handling in `src/compat/direct_skill_runtime.rs`
- no new lineloss-specific branch
- no change to deterministic lineloss business parsing or summary rules
- no change to configured direct-submit behavior for non-lineloss skills

- [ ] **Step 4: Re-run the parity test**

Run:

```bash
cargo test deterministic_lineloss_pipe_and_ws_paths_share_summary_contract -- --exact
```

Expected: PASS.

- [ ] **Step 5: Commit**

```bash
git add src/compat/direct_skill_runtime.rs tests/agent_runtime_test.rs tests/compat_runtime_test.rs
git commit -m "fix: preserve shared deterministic execution across pipe and ws"
```

---
### Task 5: Run the full focused verification set and stop if any Zhihu or pipe regression appears

**Files:**

- Reuse: `tests/agent_runtime_test.rs`
- Reuse: `tests/compat_runtime_test.rs`
- Reuse: `tests/runtime_task_flow_test.rs`

- [ ] **Step 1: Run focused ws + lineloss + Zhihu regression tests**

Run:

```bash
cargo test --test agent_runtime_test
cargo test --test compat_runtime_test
cargo test --test runtime_task_flow_test
```

Expected: PASS.

- [ ] **Step 2: Run targeted protocol/backend unit tests**

Run:

```bash
cargo test eval_action_remains_supported_in_protocol -- --exact
cargo test browser_tool_matches_eval_response_by_seq_for_lineloss_flow -- --exact
cargo test browser_script_tool_preserves_json_artifact_string_for_lineloss -- --exact
cargo test ws_backend_eval_returns_text_payload_consumable_by_browser_script_tool -- --exact
cargo test deterministic_lineloss_pipe_and_ws_paths_share_summary_contract -- --exact
```

Expected: PASS.

- [ ] **Step 3: Run the full Rust suite**

Run:

```bash
cargo test
```

Expected: PASS.

- [ ] **Step 4: Manual review of diff scope**

Confirm the diff only touches:

- current transport/result seam files (`src/pipe/protocol.rs`, `src/pipe/browser_tool.rs`)
- narrow shared browser_script/result compatibility helpers if strictly necessary
- tests

If the diff includes Zhihu business logic, lineloss parsing rules, staged skill business JS, or unrelated cleanup, remove those changes before completion.

- [ ] **Step 5: Commit**

```bash
git add src/pipe/protocol.rs src/pipe/browser_tool.rs src/compat/browser_script_skill_tool.rs src/compat/direct_skill_runtime.rs tests/pipe_protocol_test.rs tests/browser_tool_test.rs tests/browser_script_skill_tool_test.rs tests/agent_runtime_test.rs tests/compat_runtime_test.rs
git commit -m "test: verify lineloss ws transport without regressing pipe or zhihu"
```

---
## Final verification checklist

- [ ] The same lineloss deterministic instruction works on pipe and ws.
- [ ] Pipe still works without any ws configuration.
- [ ] Eval transport support remains available for deterministic lineloss execution.
- [ ] Eval response payloads preserve the full lineloss artifact JSON string.
- [ ] `src/compat/deterministic_submit.rs` business rules remain transport-neutral.
- [ ] No ws-specific lineloss args were introduced.
- [ ] Zhihu ws tests still pass unchanged in behavior.
- [ ] No ordinary Zhihu request is intercepted by lineloss deterministic routing.
- [ ] No new transport-specific business branch was added for lineloss.

## Implementation notes

- Default to changing the current transport/result seam first: `src/pipe/protocol.rs` and `src/pipe/browser_tool.rs`.
- Treat `src/compat/browser_script_skill_tool.rs` and `src/compat/direct_skill_runtime.rs` as shared seams: change them only if a focused failing test shows a transport-neutral compatibility bug.
- If a proposed fix requires changing `src/compat/deterministic_submit.rs` business logic, stop and re-evaluate; that likely means the seam fix is happening at the wrong layer.
- If a proposed fix changes Zhihu expectations, stop and repair the seam instead.
@@ -0,0 +1,228 @@

# Async Browser Script Support Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Modify the `build_eval_js` function to support async scripts, fixing the problem where a Promise gets serialized to `{}` by JSON.stringify.

**Architecture:** Change the JavaScript generated by `build_eval_js` from a synchronous IIFE to an async IIFE, await the script's result, and detect Promise-like objects for a second await.

**Tech Stack:** Rust, JavaScript (generated code)

---

## File Structure

| File | Action | Notes |
|------|------|------|
| `src/browser/callback_backend.rs` | Modify | Change the `build_eval_js` function |
| `tests/browser_script_skill_tool_test.rs` | Add tests | Add async-script test cases |

---
### Task 1: Modify build_eval_js to support async scripts

**Files:**

- Modify: `src/browser/callback_backend.rs:433-447`

**Current code:**

```rust
fn build_eval_js(source_url: &str, script: &str) -> String {
    let escaped_source_url = escape_js_single_quoted(source_url);
    let callback = EVAL_CALLBACK_NAME;
    let events_url = escape_js_single_quoted(&events_endpoint_url(source_url));

    format!(
        "(function(){{try{{var v=(function(){{return {script}}})();\
        var t=(typeof v==='string')?v:JSON.stringify(v);\
        try{{callBackJsToCpp('{escaped_source_url}@_@'+window.location.href+'@_@{callback}@_@sgBrowserExcuteJsCodeByDomain@_@'+(t??''))}}catch(_){{}}\
        var j=JSON.stringify({{type:'callback',callback:'{callback}',request_url:'{escaped_source_url}',payload:{{value:(t??'')}}}});\
        try{{var r=new XMLHttpRequest();r.open('POST','{events_url}',true);r.setRequestHeader('Content-Type','application/json');r.send(j)}}catch(_){{}}\
        try{{navigator.sendBeacon('{events_url}',new Blob([j],{{type:'application/json'}}))}}catch(_){{}}\
        }}catch(e){{}}}})()"
    )
}
```
**Updated code:**

```rust
fn build_eval_js(source_url: &str, script: &str) -> String {
    let escaped_source_url = escape_js_single_quoted(source_url);
    let callback = EVAL_CALLBACK_NAME;
    let events_url = escape_js_single_quoted(&events_endpoint_url(source_url));

    format!(
        "(async function(){{try{{\
        var v=await (async function(){{return {script}}})();\
        if(v&&typeof v.then==='function'){{v=await v;}}\
        var t=(typeof v==='string')?v:JSON.stringify(v);\
        try{{callBackJsToCpp('{escaped_source_url}@_@'+window.location.href+'@_@{callback}@_@sgBrowserExcuteJsCodeByDomain@_@'+(t??''))}}catch(_){{}}\
        var j=JSON.stringify({{type:'callback',callback:'{callback}',request_url:'{escaped_source_url}',payload:{{value:(t??'')}}}});\
        try{{var r=new XMLHttpRequest();r.open('POST','{events_url}',true);r.setRequestHeader('Content-Type','application/json');r.send(j)}}catch(_){{}}\
        try{{navigator.sendBeacon('{events_url}',new Blob([j],{{type:'application/json'}}))}}catch(_){{}}\
        }}catch(e){{}}}})()"
    )
}
```
**Key changes:**

1. `(function()` → `(async function()` - the whole IIFE becomes async
2. `var v=(function(){return {script}})()` → `var v=await (async function(){return {script}})()` - the inner wrapper also becomes async and is awaited
3. New: `if(v&&typeof v.then==='function'){v=await v;}` - detects and awaits Promise-like values
- [ ] **Step 1: Modify the build_eval_js function**

Edit `src/browser/callback_backend.rs` lines 433-447 and replace them with the new code above.

- [ ] **Step 2: Verify the build**

Run: `cargo build`
Expected: compiles with no errors

- [ ] **Step 3: Run the existing tests**

Run: `cargo test browser_script_skill_tool`
Expected: all tests pass

- [ ] **Step 4: Commit**

```bash
git add src/browser/callback_backend.rs
git commit -m "fix: support async browser scripts in build_eval_js

Wrap eval script in async IIFE and await Promise-like results.
Fixes Promise serialization returning '{}' for async skill scripts.

🤖 Generated with [Qoder](https://qoder.com)"
```
---

### Task 2: Add an Async-Script Test Case

**Files:**
- Modify: `tests/browser_script_skill_tool_test.rs`

- [ ] **Step 1: Add the async-script test case**

Append a new test at the end of `tests/browser_script_skill_tool_test.rs`:

```rust
#[tokio::test]
async fn execute_browser_script_tool_awaits_async_script() {
    let skill_dir = unique_temp_dir("sgclaw-browser-script-async");
    let scripts_dir = skill_dir.join("scripts");
    fs::create_dir_all(&scripts_dir).unwrap();
    // Async script that returns a Promise
    fs::write(
        scripts_dir.join("async_extract.js"),
        "return (async function() { return { async: true, args: args }; })();\n",
    )
    .unwrap();

    let transport = Arc::new(MockTransport::new(vec![BrowserMessage::Response {
        seq: 1,
        success: true,
        data: json!({
            "text": {
                "async": true,
                "args": { "expected_domain": "example.com" }
            }
        }),
        aom_snapshot: vec![],
        timing: Timing {
            queue_ms: 1,
            exec_ms: 5,
        },
    }]));

    // Policy that allows example.com
    let policy_json = MacPolicy::from_json_str(
        r#"{
            "version": "1.0",
            "domains": { "allowed": ["www.zhihu.com", "example.com"] },
            "pipe_actions": {
                "allowed": ["click", "type", "navigate", "getText", "eval"],
                "blocked": []
            }
        }"#,
    )
    .unwrap();

    let browser_tool = BrowserPipeTool::new(
        transport.clone(),
        policy_json,
        vec![1, 2, 3, 4, 5, 6, 7, 8],
    )
    .with_response_timeout(Duration::from_secs(1));

    let skill_tool = SkillTool {
        name: "async_extract".to_string(),
        description: "Extract data asynchronously".to_string(),
        kind: "browser_script".to_string(),
        command: "scripts/async_extract.js".to_string(),
        args: HashMap::new(),
    };

    let result = execute_browser_script_tool(
        &skill_tool,
        &skill_dir,
        &PipeBrowserBackend::from_inner(browser_tool),
        json!({
            "expected_domain": "example.com"
        }),
    )
    .await
    .unwrap();

    assert!(result.success);
    let output = serde_json::from_str::<serde_json::Value>(&result.output).unwrap();
    assert_eq!(output["async"], true);
}
```
- [ ] **Step 2: Run the new test**

Run: `cargo test execute_browser_script_tool_awaits_async_script`
Expected: test passes

- [ ] **Step 3: Commit**

```bash
git add tests/browser_script_skill_tool_test.rs
git commit -m "test: add async browser script test case

🤖 Generated with [Qoder](https://qoder.com)"
```
---

### Task 3: End-to-End Verification

**Files:**
- No file changes; verification only

- [ ] **Step 1: Full build**

Run: `cargo build`
Expected: compiles successfully

- [ ] **Step 2: Run the full test suite**

Run: `cargo test`
Expected: all tests pass

- [ ] **Step 3: Manual end-to-end test**

Test `tq-lineloss-report.collect_lineloss` from the service console:
1. Start sgclaw: `target/debug/sg_claw.exe`
2. In the service console, enter: `兰州公司 台区线损大数据 月累计线损率统计分析。。。`
3. Expected result: actual report data is returned, not `{}`

---

## Self-Check List

- [x] Spec coverage: every point in the design doc has a corresponding task
- [x] No placeholders: all code is complete
- [x] Type consistency: no function signatures change
73	docs/superpowers/plans/2026-04-13-async-eval-then-fix.md	Normal file
@@ -0,0 +1,73 @@
# Async Eval .then() Fix Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Fix `build_eval_js` to handle async script return values using `.then()` instead of an `async` IIFE.

**Architecture:** Extract the callback-sending logic into a `_s` helper function inside the generated JS. If the script returns a Promise, call `_s` via `.then()`; otherwise call `_s` synchronously. This keeps the outer IIFE synchronous for C++ injection compatibility.

**Tech Stack:** Rust, JavaScript

---

## Files

- Modify: `src/browser/callback_backend.rs:433-447` - `build_eval_js` function

---
### Task 1: Modify build_eval_js to Support Async via .then()

**Files:**
- Modify: `src/browser/callback_backend.rs:433-447`

- [ ] **Step 1: Replace the build_eval_js implementation**

Replace the entire `build_eval_js` function body (lines 433-447) with:

```rust
fn build_eval_js(source_url: &str, script: &str) -> String {
    let escaped_source_url = escape_js_single_quoted(source_url);
    let callback = EVAL_CALLBACK_NAME;
    let events_url = escape_js_single_quoted(&events_endpoint_url(source_url));

    format!(
        "(function(){{try{{\
        var v=(function(){{return {script}}})();\
        function _s(v){{\
        var t=(typeof v==='string')?v:JSON.stringify(v);\
        try{{callBackJsToCpp('{escaped_source_url}@_@'+window.location.href+'@_@{callback}@_@sgBrowserExcuteJsCodeByDomain@_@'+(t??''))}}catch(_){{}}\
        var j=JSON.stringify({{type:'callback',callback:'{callback}',request_url:'{escaped_source_url}',payload:{{value:(t??'')}}}});\
        try{{var r=new XMLHttpRequest();r.open('POST','{events_url}',true);r.setRequestHeader('Content-Type','application/json');r.send(j)}}catch(_){{}}\
        try{{navigator.sendBeacon('{events_url}',new Blob([j],{{type:'application/json'}}))}}catch(_){{}}\
        }}\
        if(v&&typeof v.then==='function'){{v.then(_s).catch(function(){{}});}}else{{_s(v);}}\
        }}catch(e){{}}}})()"
    )
}
```
- [ ] **Step 2: Run tests**

Run: `cargo test browser_script_skill_tool --no-fail-fast`

Expected: All tests pass.

- [ ] **Step 3: Run full test suite**

Run: `cargo test`

Expected: All tests pass (except the pre-existing, unrelated `lineloss_period_resolver_prompts_for_missing_period` failure).

- [ ] **Step 4: Build**

Run: `cargo build`

Expected: Compiles with no errors.

- [ ] **Step 5: Commit**

```bash
git add src/browser/callback_backend.rs
git commit -m "fix: support async browser scripts via .then() in build_eval_js"
```
52	docs/superpowers/plans/2026-04-13-expected-domain-arg-fix.md	Normal file
@@ -0,0 +1,52 @@
# Expected Domain Arg Fix Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Fix browser_script_skill_tool to pass expected_domain to wrapped JS scripts.

**Architecture:** Insert the normalized expected_domain back into the args HashMap after domain normalization, before script wrapping.

**Tech Stack:** Rust, serde_json

---

## Files

- Modify: `src/compat/browser_script_skill_tool.rs:210` - Insert expected_domain back into args

---
### Task 1: Insert expected_domain into args

**Files:**
- Modify: `src/compat/browser_script_skill_tool.rs:210`

- [ ] **Step 1: Add expected_domain to args after normalization**

Edit `src/compat/browser_script_skill_tool.rs`, inserting after line 209 (`eprintln!("[execute_browser_script_impl] expected_domain: {}", expected_domain);`):

```rust
args.insert("expected_domain".to_string(), Value::String(expected_domain.clone()));
```

The context around lines 209-211 should look like this after the edit:

```rust
eprintln!("[execute_browser_script_impl] expected_domain: {}", expected_domain);
args.insert("expected_domain".to_string(), Value::String(expected_domain.clone()));

for required_arg in tool.args.keys() {
```
- [ ] **Step 2: Run tests to verify the fix**

Run: `cargo test browser_script_skill_tool --no-fail-fast -- --nocapture`

Expected: All tests pass, including `execute_browser_script_tool_runs_packaged_script_with_expected_domain`

- [ ] **Step 3: Commit**

```bash
git add src/compat/browser_script_skill_tool.rs
git commit -m "fix: pass expected_domain to wrapped browser scripts"
```
163	docs/superpowers/plans/2026-04-13-lineloss-requesturl-fix.md	Normal file
@@ -0,0 +1,163 @@
# Lineloss requesturl Quick Fix Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Add a lineloss URL mapping in `derive_request_url_from_instruction` so the `sgHideBrowerserOpenPage` command can execute correctly.

**Architecture:** Append a hard-coded mapping for the lineloss scenario after the existing Zhihu URL mapping patterns.

**Tech Stack:** Rust

---
### Task 1: Add a Test Case

**Files:**
- Modify: `src/service/server.rs:828` (tests module)

- [ ] **Step 1: Add a lineloss URL mapping test to the tests module**

Add a new test after the `initial_request_url_falls_back_to_zhihu_origin_for_generated_article_publish_routes` test:

```rust
#[test]
fn initial_request_url_falls_back_to_lineloss_origin_for_lineloss_instructions() {
    let request = SubmitTaskRequest {
        instruction: "兰州公司 台区线损大数据 月累计线损率统计分析。。。".to_string(),
        ..SubmitTaskRequest::default()
    };

    assert_eq!(
        initial_request_url_for_submit_task(&request),
        "http://20.76.57.61:18080"
    );
}
```
- [ ] **Step 2: Run the test and verify it fails**

Run: `cargo test initial_request_url_falls_back_to_lineloss_origin_for_lineloss_instructions -- --nocapture`

Expected: FAIL - the test should fail because the mapping logic is not implemented yet

- [ ] **Step 3: Commit the test file**

```bash
git add src/service/server.rs
git commit -m "test: add lineloss requesturl mapping test"
```

---
### Task 2: Implement the Lineloss URL Mapping

**Files:**
- Modify: `src/service/server.rs:354-382` (the `derive_request_url_from_instruction` function)

- [ ] **Step 1: Add the lineloss mapping in derive_request_url_from_instruction**

After the second Zhihu branch and before the final `None`, add:

```rust
// Lineloss-related
// TODO: temporary workaround; this should later come from skill config
// or the deterministic_submit parse result
if instruction.contains("线损") || instruction.contains("lineloss") {
    return Some("http://20.76.57.61:18080".to_string());
}

None
```
The full function becomes:

```rust
fn derive_request_url_from_instruction(instruction: &str) -> Option<String> {
    if crate::compat::workflow_executor::detect_route(instruction, None, None)
        .is_some_and(|route| {
            matches!(
                route,
                crate::compat::workflow_executor::WorkflowRoute::ZhihuHotlistExportXlsx
                    | crate::compat::workflow_executor::WorkflowRoute::ZhihuHotlistScreen
                    | crate::compat::workflow_executor::WorkflowRoute::ZhihuArticleEntry
                    | crate::compat::workflow_executor::WorkflowRoute::ZhihuArticleAutoPublishGenerated
            )
        })
    {
        return Some("https://www.zhihu.com".to_string());
    }

    if crate::compat::workflow_executor::detect_route(instruction, None, None)
        .is_some_and(|route| {
            matches!(
                route,
                crate::compat::workflow_executor::WorkflowRoute::ZhihuArticleDraft
                    | crate::compat::workflow_executor::WorkflowRoute::ZhihuArticlePublish
            )
        })
    {
        return Some("https://zhuanlan.zhihu.com".to_string());
    }

    // Lineloss-related
    // TODO: temporary workaround; this should later come from skill config
    // or the deterministic_submit parse result
    if instruction.contains("线损") || instruction.contains("lineloss") {
        return Some("http://20.76.57.61:18080".to_string());
    }

    None
}
```
- [ ] **Step 2: Run the test and verify it passes**

Run: `cargo test initial_request_url_falls_back_to_lineloss_origin_for_lineloss_instructions -- --nocapture`

Expected: PASS

- [ ] **Step 3: Run all related tests**

Run: `cargo test initial_request_url -- --nocapture`

Expected: all tests pass

- [ ] **Step 4: Build the project**

Run: `cargo build`

Expected: compiles with no errors

- [ ] **Step 5: Commit the implementation**

```bash
git add src/service/server.rs
git commit -m "feat: add lineloss URL mapping in derive_request_url_from_instruction

Temporary workaround: return the lineloss platform URL when the instruction contains '线损' or 'lineloss'

🤖 Generated with [Qoder](https://qoder.com)"
```
---

### Task 3: End-to-End Verification

**Files:**
- No file changes; verification only

- [ ] **Step 1: Stop any running sgclaw process**

Make sure no `sg_claw.exe` is running

- [ ] **Step 2: Start the sgclaw service**

Run: `target\debug\sg_claw.exe --config-path ..\sgclaw_config.json service`

- [ ] **Step 3: Send a test instruction from the service console**

Instruction: `兰州公司 台区线损大数据 月累计线损率统计分析。。。`

Expected: the log shows `bootstrap_url=http://20.76.57.61:18080` instead of `about:blank`

- [ ] **Step 4: Verify the helper page opens**

Expected: the log shows `helper_loaded=true, ready=true` with no more timeouts
76	docs/superpowers/plans/2026-04-13-lineloss-target-url-fix.md	Normal file
@@ -0,0 +1,76 @@
# Lineloss Missing target_url Fix Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Add a `target_url` parameter to the `Action::Eval` invocation in `browser_script_skill_tool.rs`.

**Architecture:** Build a full URL from `expected_domain` (`http://{expected_domain}`) and add it to the invoke params.
**Tech Stack:** Rust, serde_json

---

### Task 1: Add the target_url Parameter

**Files:**
- Modify: `src/compat/browser_script_skill_tool.rs:238-241` (the invoke call)

- [ ] **Step 1: Modify the invoke call to add target_url**

Change:

```rust
let result = match browser_tool.invoke(
    Action::Eval,
    json!({ "script": wrapped_script }),
    &expected_domain,
) {
```
to:

```rust
let target_url = format!("http://{}", expected_domain);
let result = match browser_tool.invoke(
    Action::Eval,
    json!({
        "script": wrapped_script,
        "target_url": target_url,
    }),
    &expected_domain,
) {
```
- [ ] **Step 2: Build the project**

Run: `cargo build`

Expected: compiles with no errors

- [ ] **Step 3: Commit the change**

```bash
git add src/compat/browser_script_skill_tool.rs
git commit -m "fix: add target_url param for Action::Eval in browser_script_skill_tool

🤖 Generated with [Qoder](https://qoder.com)"
```

---

### Task 2: End-to-End Verification

**Files:**
- No file changes; verification only

- [ ] **Step 1: Stop any running sgclaw process**

Make sure no `sg_claw.exe` is running

- [ ] **Step 2: Start the sgclaw service**

Run: `target\debug\sg_claw.exe --config-path ..\sgclaw_config.json service`

- [ ] **Step 3: Send a test instruction from the service console**

Instruction: `兰州公司 台区线损大数据 月累计线损率统计分析。。。`

Expected: the log shows a successful invoke, with no more `target_url is required for eval` errors
@@ -0,0 +1,912 @@
# Rust-Side Lineloss XLSX Export Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Move XLSX export from browser JS (blocked by CORS) to the Rust side, so `collect_lineloss.js` only collects data and Rust generates the `.xlsx` file locally.

**Architecture:** JS collects API data and returns a `report-artifact` JSON with `rows`, `column_defs`, and metadata. Rust parses the artifact, extracts rows + column definitions, and generates a standard `.xlsx` file using the `zip` crate + OpenXML XML strings (same pattern as `openxml_office_tool.rs`). Report log is deferred.
**Tech Stack:** Rust, `zip` 0.6.6, `serde_json`, OpenXML Spreadsheet ML, JavaScript (browser-injected)

**Spec:** `docs/superpowers/specs/2026-04-13-rust-side-lineloss-xlsx-export.md`

---

## File Structure

| File | Responsibility |
|------|---------------|
| `src/compat/lineloss_xlsx_export.rs` | **New.** Pure XLSX generation: takes column defs + row data, produces a `.xlsx` file. No business logic. |
| `src/compat/deterministic_submit.rs` | **Modify.** After receiving the JS artifact, extract rows + column_defs, call XLSX export, attach the path to the outcome. |
| `src/compat/mod.rs` | **Modify.** Register the `lineloss_xlsx_export` module. |
| `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js` | **Modify.** Remove `exportWorkbook`/`writeReportLog` calls. Add `column_defs` to the artifact. |
| `tests/lineloss_xlsx_export_test.rs` | **New.** Unit tests for XLSX generation. |

---
### Task 1: Create `lineloss_xlsx_export.rs` with Tests

**Files:**
- Create: `src/compat/lineloss_xlsx_export.rs`
- Create: `tests/lineloss_xlsx_export_test.rs`
- Modify: `src/compat/mod.rs`

- [ ] **Step 1: Register the new module in `src/compat/mod.rs`**

Add the module declaration in alphabetical order. In `src/compat/mod.rs`, insert after `pub mod event_bridge;`:

```rust
pub mod lineloss_xlsx_export;
```

The full file becomes:

```rust
pub mod artifact_open;
pub mod browser_script_skill_tool;
pub mod browser_tool_adapter;
pub mod config_adapter;
pub mod cron_adapter;
pub mod deterministic_submit;
pub mod direct_skill_runtime;
pub mod event_bridge;
pub mod lineloss_xlsx_export;
pub mod memory_adapter;
pub mod openxml_office_tool;
pub mod orchestration;
pub mod runtime;
pub mod screen_html_export_tool;
pub mod tq_lineloss;
pub mod workflow_executor;
```
- [ ] **Step 2: Write the failing test for XLSX generation**

Create `tests/lineloss_xlsx_export_test.rs`:

```rust
use std::fs;
use std::path::PathBuf;

use serde_json::json;
use sgclaw::compat::lineloss_xlsx_export::{export_lineloss_xlsx, LinelossExportRequest};

fn temp_output_path(name: &str) -> PathBuf {
    let dir = std::env::temp_dir().join("sgclaw-test-xlsx");
    fs::create_dir_all(&dir).unwrap();
    dir.join(name)
}

#[test]
fn export_month_lineloss_produces_valid_xlsx() {
    let output_path = temp_output_path("month-test.xlsx");
    if output_path.exists() {
        fs::remove_file(&output_path).unwrap();
    }

    let request = LinelossExportRequest {
        sheet_name: "国网兰州供电公司月度线损分析报表(2026-03)".to_string(),
        column_defs: vec![
            ("ORG_NAME".to_string(), "供电单位".to_string()),
            ("YGDL".to_string(), "累计供电量".to_string()),
            ("YYDL".to_string(), "累计售电量".to_string()),
            ("YXSL".to_string(), "线损完成率(%)".to_string()),
            ("RAT_SCOPE".to_string(), "线损率累计目标值".to_string()),
            ("BLANK3".to_string(), "目标完成率".to_string()),
            ("BLANK2".to_string(), "排行".to_string()),
        ],
        rows: vec![
            serde_json::from_value(json!({
                "ORG_NAME": "城关供电",
                "YGDL": "12345.67",
                "YYDL": "11234.56",
                "YXSL": "9.00",
                "RAT_SCOPE": "9.50",
                "BLANK3": "94.74",
                "BLANK2": "1"
            }))
            .unwrap(),
            serde_json::from_value(json!({
                "ORG_NAME": "七里河供电",
                "YGDL": "9876.54",
                "YYDL": "8765.43",
                "YXSL": "11.24",
                "RAT_SCOPE": "10.00",
                "BLANK3": "112.40",
                "BLANK2": "2"
            }))
            .unwrap(),
        ],
        output_path: output_path.clone(),
    };

    let result_path = export_lineloss_xlsx(&request).unwrap();
    assert_eq!(result_path, output_path);
    assert!(output_path.exists());

    // Verify it's a valid ZIP (xlsx is a zip archive)
    let file = fs::File::open(&output_path).unwrap();
    let mut archive = zip::ZipArchive::new(file).unwrap();

    // Must contain the standard OpenXML entries
    let entry_names: Vec<String> = (0..archive.len())
        .map(|i| archive.by_index(i).unwrap().name().to_string())
        .collect();

    assert!(entry_names.contains(&"[Content_Types].xml".to_string()));
    assert!(entry_names.contains(&"xl/worksheets/sheet1.xml".to_string()));
    assert!(entry_names.contains(&"xl/workbook.xml".to_string()));

    // Read sheet1.xml and verify it contains our data
    let mut sheet = archive.by_name("xl/worksheets/sheet1.xml").unwrap();
    let mut xml = String::new();
    std::io::Read::read_to_string(&mut sheet, &mut xml).unwrap();

    assert!(xml.contains("供电单位"), "header row should contain 供电单位");
    assert!(xml.contains("累计供电量"), "header row should contain 累计供电量");
    assert!(xml.contains("城关供电"), "data should contain 城关供电");
    assert!(xml.contains("12345.67"), "data should contain 12345.67");
    assert!(xml.contains("七里河供电"), "data should contain second row");

    // Cleanup
    fs::remove_file(&output_path).unwrap();
}

#[test]
fn export_empty_rows_returns_error() {
    let output_path = temp_output_path("empty-test.xlsx");

    let request = LinelossExportRequest {
        sheet_name: "test".to_string(),
        column_defs: vec![("A".to_string(), "ColA".to_string())],
        rows: vec![],
        output_path: output_path.clone(),
    };

    let result = export_lineloss_xlsx(&request);
    assert!(result.is_err());
    assert!(
        result.unwrap_err().to_string().contains("rows must not be empty"),
        "should reject empty rows"
    );
}
```
- [ ] **Step 3: Run the test to verify it fails**

Run: `cargo test --test lineloss_xlsx_export_test -- --nocapture`

Expected: compilation error - the `lineloss_xlsx_export` module doesn't exist yet, so `export_lineloss_xlsx` / `LinelossExportRequest` are not defined.
- [ ] **Step 4: Implement `src/compat/lineloss_xlsx_export.rs`**

```rust
use std::fs;
use std::io::Write;
use std::path::{Path, PathBuf};

use serde_json::{Map, Value};
use zip::write::FileOptions;
use zip::{CompressionMethod, ZipWriter};

pub struct LinelossExportRequest {
    pub sheet_name: String,
    pub column_defs: Vec<(String, String)>,
    pub rows: Vec<Map<String, Value>>,
    pub output_path: PathBuf,
}

pub fn export_lineloss_xlsx(request: &LinelossExportRequest) -> anyhow::Result<PathBuf> {
    if request.rows.is_empty() {
        anyhow::bail!("rows must not be empty");
    }
    if request.column_defs.is_empty() {
        anyhow::bail!("column_defs must not be empty");
    }

    let sheet_xml = build_worksheet_xml(&request.column_defs, &request.rows);

    write_xlsx(
        &request.output_path,
        &request.sheet_name,
        &sheet_xml,
    )?;

    Ok(request.output_path.clone())
}

fn build_worksheet_xml(
    column_defs: &[(String, String)],
    rows: &[Map<String, Value>],
) -> String {
    let mut xml_rows = Vec::with_capacity(rows.len() + 1);

    // Header row (row 1)
    let header_cells: Vec<String> = column_defs
        .iter()
        .enumerate()
        .map(|(col_idx, (_key, label))| {
            let col_letter = column_letter(col_idx);
            format!(
                "<c r=\"{col_letter}1\" t=\"inlineStr\"><is><t>{}</t></is></c>",
                xml_escape(label)
            )
        })
        .collect();
    xml_rows.push(format!("<row r=\"1\">{}</row>", header_cells.join("")));

    // Data rows (row 2+)
    for (row_idx, row) in rows.iter().enumerate() {
        let excel_row = row_idx + 2;
        let cells: Vec<String> = column_defs
            .iter()
            .enumerate()
            .map(|(col_idx, (key, _label))| {
                let col_letter = column_letter(col_idx);
                let value = row
                    .get(key)
                    .map(|v| value_to_string(v))
                    .unwrap_or_default();
                format!(
                    "<c r=\"{col_letter}{excel_row}\" t=\"inlineStr\"><is><t>{}</t></is></c>",
                    xml_escape(&value)
                )
            })
            .collect();
        xml_rows.push(format!("<row r=\"{excel_row}\">{}</row>", cells.join("")));
    }

    format!(
        "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>\
        <worksheet xmlns=\"http://schemas.openxmlformats.org/spreadsheetml/2006/main\">\
        <sheetData>{}</sheetData>\
        </worksheet>",
        xml_rows.join("")
    )
}

fn column_letter(index: usize) -> String {
    let mut result = String::new();
    let mut n = index;
    loop {
        result.insert(0, (b'A' + (n % 26) as u8) as char);
        if n < 26 {
            break;
        }
        n = n / 26 - 1;
    }
    result
}

fn value_to_string(value: &Value) -> String {
    match value {
        Value::String(text) => text.clone(),
        Value::Number(number) => number.to_string(),
        Value::Bool(flag) => flag.to_string(),
        Value::Null => String::new(),
        other => other.to_string(),
    }
}

fn xml_escape(value: &str) -> String {
    value
        .replace('&', "&amp;")
        .replace('<', "&lt;")
        .replace('>', "&gt;")
}

fn write_xlsx(output_path: &Path, sheet_name: &str, sheet_xml: &str) -> anyhow::Result<()> {
    if let Some(parent) = output_path.parent() {
        fs::create_dir_all(parent)?;
    }
    if output_path.exists() {
        fs::remove_file(output_path)?;
    }

    let file = fs::File::create(output_path)?;
    let mut zip = ZipWriter::new(file);
    let options = FileOptions::default().compression_method(CompressionMethod::Stored);

    zip.start_file("[Content_Types].xml", options)?;
    zip.write_all(content_types_xml().as_bytes())?;

    zip.start_file("_rels/.rels", options)?;
    zip.write_all(root_rels_xml().as_bytes())?;

    zip.start_file("docProps/app.xml", options)?;
    zip.write_all(app_xml().as_bytes())?;

    zip.start_file("docProps/core.xml", options)?;
    zip.write_all(core_xml().as_bytes())?;

    zip.start_file("xl/workbook.xml", options)?;
    zip.write_all(workbook_xml(&xml_escape(sheet_name)).as_bytes())?;

    zip.start_file("xl/_rels/workbook.xml.rels", options)?;
    zip.write_all(workbook_rels_xml().as_bytes())?;

    zip.start_file("xl/worksheets/sheet1.xml", options)?;
    zip.write_all(sheet_xml.as_bytes())?;

    zip.finish()?;
    Ok(())
}

fn content_types_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
<Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
<Default Extension="xml" ContentType="application/xml"/>
|
||||||
|
<Override PartName="/xl/workbook.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.main+xml"/>
|
||||||
|
<Override PartName="/xl/worksheets/sheet1.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml"/>
|
||||||
|
<Override PartName="/docProps/core.xml" ContentType="application/vnd.openxmlformats-package.core-properties+xml"/>
|
||||||
|
<Override PartName="/docProps/app.xml" ContentType="application/vnd.openxmlformats-officedocument.extended-properties+xml"/>
|
||||||
|
</Types>"#
|
||||||
|
}
|
||||||
|
|
||||||
|
fn root_rels_xml() -> &'static str {
|
||||||
|
r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
|
||||||
|
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
|
||||||
|
<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="xl/workbook.xml"/>
|
||||||
|
<Relationship Id="rId2" Type="http://schemas.openxmlformats.org/package/2006/relationships/metadata/core-properties" Target="docProps/core.xml"/>
|
||||||
|
<Relationship Id="rId3" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/extended-properties" Target="docProps/app.xml"/>
|
||||||
|
</Relationships>"#
|
||||||
|
}
|
||||||
|
|
||||||
|
fn app_xml() -> &'static str {
|
||||||
|
r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
|
||||||
|
<Properties xmlns="http://schemas.openxmlformats.org/officeDocument/2006/extended-properties"
|
||||||
|
xmlns:vt="http://schemas.openxmlformats.org/officeDocument/2006/docPropsVTypes">
|
||||||
|
<Application>sgClaw</Application>
|
||||||
|
</Properties>"#
|
||||||
|
}
|
||||||
|
|
||||||
|
fn core_xml() -> &'static str {
|
||||||
|
r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
|
||||||
|
<cp:coreProperties xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties"
|
||||||
|
xmlns:dc="http://purl.org/dc/elements/1.1/"
|
||||||
|
xmlns:dcterms="http://purl.org/dc/terms/"
|
||||||
|
xmlns:dcmitype="http://purl.org/dc/dcmitype/"
|
||||||
|
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
|
||||||
|
<dc:title>台区线损报表</dc:title>
|
||||||
|
</cp:coreProperties>"#
|
||||||
|
}
|
||||||
|
|
||||||
|
fn workbook_xml(sheet_name: &str) -> String {
|
||||||
|
format!(
|
||||||
|
r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
|
||||||
|
<workbook xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"
|
||||||
|
xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">
|
||||||
|
<sheets>
|
||||||
|
<sheet name="{sheet_name}" sheetId="1" r:id="rId1"/>
|
||||||
|
</sheets>
|
||||||
|
</workbook>"#
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn workbook_rels_xml() -> &'static str {
|
||||||
|
r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
|
||||||
|
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
|
||||||
|
<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/worksheet" Target="worksheets/sheet1.xml"/>
|
||||||
|
</Relationships>"#
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use super::column_letter;
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn column_letter_maps_indices_correctly() {
|
||||||
|
assert_eq!(column_letter(0), "A");
|
||||||
|
assert_eq!(column_letter(1), "B");
|
||||||
|
assert_eq!(column_letter(6), "G");
|
||||||
|
assert_eq!(column_letter(25), "Z");
|
||||||
|
assert_eq!(column_letter(26), "AA");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
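To make the cell layout concrete, here is a self-contained sketch (std only) that mirrors `column_letter` and the inline-string cell format above; the header/data values are made up for illustration:

```rust
// Bijective base-26 column naming: 0 -> "A", 25 -> "Z", 26 -> "AA".
fn column_letter(index: usize) -> String {
    let mut result = String::new();
    let mut n = index;
    loop {
        result.insert(0, (b'A' + (n % 26) as u8) as char);
        if n < 26 {
            break;
        }
        n = n / 26 - 1;
    }
    result
}

// One inline-string cell, e.g. <c r="A1" t="inlineStr"><is><t>…</t></is></c>.
fn inline_cell(col: usize, row: usize, text: &str) -> String {
    format!(
        "<c r=\"{}{}\" t=\"inlineStr\"><is><t>{}</t></is></c>",
        column_letter(col),
        row,
        text
    )
}

fn main() {
    // Header row (row 1) followed by the first data row (row 2).
    let header = format!(
        "<row r=\"1\">{}{}</row>",
        inline_cell(0, 1, "ORG_NAME"),
        inline_cell(1, 1, "YGDL")
    );
    let data = format!(
        "<row r=\"2\">{}{}</row>",
        inline_cell(0, 2, "Org A"),
        inline_cell(1, 2, "42")
    );
    println!("{header}");
    println!("{data}");
}
```

Inline strings avoid the shared-strings table entirely, which keeps the writer to a single worksheet part at the cost of slightly larger files.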
- [ ] **Step 5: Run the tests to verify they pass**

Run: `cargo test --test lineloss_xlsx_export_test -- --nocapture`

Expected: both `export_month_lineloss_produces_valid_xlsx` and `export_empty_rows_returns_error` PASS.

Also run the internal unit test:

Run: `cargo test lineloss_xlsx_export -- --nocapture`

Expected: `column_letter_maps_indices_correctly` PASS.

- [ ] **Step 6: Commit**

```bash
git add src/compat/lineloss_xlsx_export.rs src/compat/mod.rs tests/lineloss_xlsx_export_test.rs
git commit -m "feat(lineloss): add Rust-side XLSX generation for lineloss reports"
```

---

### Task 2: Integrate XLSX Export into `deterministic_submit.rs`

**Files:**
- Modify: `src/compat/deterministic_submit.rs`

- [ ] **Step 1: Add imports and a helper function to extract export data from the artifact**

At the top of `src/compat/deterministic_submit.rs`, add the import:

```rust
use crate::compat::lineloss_xlsx_export::{export_lineloss_xlsx, LinelossExportRequest};
```

Then add a new helper function after `summarize_lineloss_artifact`:

```rust
struct LinelossArtifactExportData {
    sheet_name: String,
    column_defs: Vec<(String, String)>,
    rows: Vec<Map<String, Value>>,
}

fn extract_export_data(output: &str) -> Option<LinelossArtifactExportData> {
    let payload: Value = serde_json::from_str(output).ok()?;
    let artifact = payload
        .as_object()
        .and_then(|object| object.get("text"))
        .unwrap_or(&payload);
    let artifact = artifact.as_object()?;

    if artifact.get("type").and_then(Value::as_str) != Some("report-artifact") {
        return None;
    }

    let status = artifact.get("status").and_then(Value::as_str).unwrap_or("");
    if !matches!(status, "ok" | "partial") {
        return None;
    }

    let rows = artifact
        .get("rows")
        .and_then(Value::as_array)?;
    if rows.is_empty() {
        return None;
    }
    let rows: Vec<Map<String, Value>> = rows
        .iter()
        .filter_map(|row| row.as_object().cloned())
        .collect();
    if rows.is_empty() {
        return None;
    }

    let column_defs: Vec<(String, String)> = artifact
        .get("column_defs")
        .and_then(Value::as_array)
        .map(|defs| {
            defs.iter()
                .filter_map(|def| {
                    let arr = def.as_array()?;
                    let key = arr.first()?.as_str()?.to_string();
                    let label = arr.get(1)?.as_str()?.to_string();
                    Some((key, label))
                })
                .collect()
        })
        .unwrap_or_default();

    // Fallback: if column_defs not in artifact, try "columns" array as keys
    let column_defs = if column_defs.is_empty() {
        let columns = artifact
            .get("columns")
            .and_then(Value::as_array)?;
        columns
            .iter()
            .filter_map(|col| {
                let key = col.as_str()?.to_string();
                Some((key.clone(), key))
            })
            .collect()
    } else {
        column_defs
    };

    if column_defs.is_empty() {
        return None;
    }

    let org_label = artifact
        .get("org")
        .and_then(Value::as_object)
        .and_then(|org| org.get("label"))
        .and_then(Value::as_str)
        .unwrap_or("lineloss");
    let period_mode = artifact
        .get("period")
        .and_then(Value::as_object)
        .and_then(|p| p.get("mode"))
        .and_then(Value::as_str)
        .unwrap_or("month");
    let period_value = artifact
        .get("period")
        .and_then(Value::as_object)
        .and_then(|p| p.get("value"))
        .and_then(Value::as_str)
        .unwrap_or("");
    // 周度 = "weekly", 月度 = "monthly"; 线损分析报表 = "line-loss analysis report"
    let mode_label = if period_mode == "week" { "周度" } else { "月度" };
    let sheet_name = format!("{org_label}{mode_label}线损分析报表({period_value})");

    Some(LinelossArtifactExportData {
        sheet_name,
        column_defs,
        rows,
    })
}
```

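For reference, a payload that passes all of the guards above looks roughly like the following (field values are illustrative; the parser also accepts the artifact wrapped under a top-level `text` key, and falls back to `columns` when `column_defs` is absent):

```json
{
  "type": "report-artifact",
  "status": "ok",
  "org": { "label": "Example Org" },
  "period": { "mode": "month", "value": "2026-03" },
  "column_defs": [["ORG_NAME", "Org Name"], ["YGDL", "Active Energy"]],
  "columns": ["ORG_NAME", "YGDL"],
  "rows": [{ "ORG_NAME": "Org A", "YGDL": 123.4 }]
}
```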
- [ ] **Step 2: Add the export-after-collection function**

Add a new function that wraps the existing flow with XLSX export:

```rust
fn try_export_lineloss_xlsx(
    output: &str,
    workspace_root: &Path,
) -> Option<PathBuf> {
    let data = extract_export_data(output)?;
    let nanos = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .map(|d| d.as_nanos())
        .unwrap_or_default();
    let out_dir = workspace_root.join("out");
    let output_path = out_dir.join(format!("tq-lineloss-{nanos}.xlsx"));

    let request = LinelossExportRequest {
        sheet_name: data.sheet_name,
        column_defs: data.column_defs,
        rows: data.rows,
        output_path,
    };

    match export_lineloss_xlsx(&request) {
        Ok(path) => {
            eprintln!("[deterministic_submit] XLSX exported to: {}", path.display());
            Some(path)
        }
        Err(err) => {
            eprintln!("[deterministic_submit] XLSX export failed: {err}");
            None
        }
    }
}
```

- [ ] **Step 3: Modify `execute_deterministic_submit_with_browser_backend` to call export**

Replace the body of `execute_deterministic_submit_with_browser_backend` (lines 119-136 of the original file):

```rust
pub fn execute_deterministic_submit_with_browser_backend(
    browser_backend: Arc<dyn BrowserBackend>,
    plan: &DeterministicExecutionPlan,
    workspace_root: &Path,
    settings: &SgClawSettings,
) -> Result<DirectSubmitOutcome, PipeError> {
    let args = deterministic_submit_args(plan);
    let output =
        crate::compat::direct_skill_runtime::execute_browser_script_skill_raw_output_with_browser_backend(
            browser_backend,
            &plan.tool_name,
            workspace_root,
            settings,
            args,
        )?;

    let export_path = try_export_lineloss_xlsx(&output, workspace_root);
    Ok(summarize_lineloss_output_with_export(&output, export_path.as_deref()))
}
```

Apply the same change to `execute_deterministic_submit` (the non-backend variant, lines 101-117):

```rust
pub fn execute_deterministic_submit<T: Transport + 'static>(
    browser_tool: BrowserPipeTool<T>,
    plan: &DeterministicExecutionPlan,
    workspace_root: &Path,
    settings: &SgClawSettings,
) -> Result<DirectSubmitOutcome, PipeError> {
    let args = deterministic_submit_args(plan);
    let output = crate::compat::direct_skill_runtime::execute_browser_script_skill_raw_output(
        browser_tool,
        &plan.tool_name,
        workspace_root,
        settings,
        args,
    )?;

    let export_path = try_export_lineloss_xlsx(&output, workspace_root);
    Ok(summarize_lineloss_output_with_export(&output, export_path.as_deref()))
}
```

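The nanosecond timestamp keeps consecutive exports from clobbering each other without any extra bookkeeping; a std-only sketch of the naming scheme (the `tq-lineloss-` prefix matches the plan's code, everything else is illustrative):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Builds a practically collision-free file name from nanoseconds
// since the Unix epoch, as try_export_lineloss_xlsx does.
fn export_file_name() -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_nanos())
        .unwrap_or_default();
    format!("tq-lineloss-{nanos}.xlsx")
}

fn main() {
    println!("{}", export_file_name());
}
```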
- [ ] **Step 4: Add the `summarize_lineloss_output_with_export` function**

Add this new function. It wraps the existing `summarize_lineloss_output` and appends the export path:

```rust
fn summarize_lineloss_output_with_export(output: &str, export_path: Option<&Path>) -> DirectSubmitOutcome {
    let mut outcome = summarize_lineloss_output(output);

    if let Some(path) = export_path {
        outcome.summary.push_str(&format!(" export_path={}", path.display()));
    }

    outcome
}
```

- [ ] **Step 5: Run existing tests to ensure nothing breaks**

Run: `cargo test --test deterministic_submit_test -- --nocapture`

Expected: all existing tests PASS. The tests don't call `execute_deterministic_submit`; they exercise `decide_deterministic_submit` and parsing logic, which is unchanged.

Run: `cargo test deterministic_submit -- --nocapture`

Expected: PASS.

- [ ] **Step 6: Commit**

```bash
git add src/compat/deterministic_submit.rs
git commit -m "feat(lineloss): integrate Rust-side XLSX export into deterministic submit pipeline"
```

---

### Task 3: Modify `collect_lineloss.js` to Skip Browser-Side Export

**Files:**
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js`

- [ ] **Step 1: Add `column_defs` to the artifact returned by `buildArtifact`**

In the `buildArtifact` function (around line 198), the `columns` field currently contains just the column keys (e.g., `["ORG_NAME", "YGDL", ...]`). Add a `column_defs` field that carries the full key+label pairs, and change `buildArtifact` to accept and emit it.

Find this block in `buildArtifact` (lines 198-242):

```javascript
function buildArtifact({
  status,
  blockedReason = '',
  fatalError = '',
  org_label = '',
  org_code = '',
  period_mode = '',
  period_mode_code = '',
  period_value = '',
  period_payload = {},
  columns = [],
  rows = [],
  export: exportState,
  reasons = []
}) {
```

Replace with:

```javascript
function buildArtifact({
  status,
  blockedReason = '',
  fatalError = '',
  org_label = '',
  org_code = '',
  period_mode = '',
  period_mode_code = '',
  period_value = '',
  period_payload = {},
  columns = [],
  column_defs = [],
  rows = [],
  export: exportState,
  reasons = []
}) {
```

In the returned object (the `return { ... }` block inside `buildArtifact`), add `column_defs` after `columns`:

```javascript
  columns: [...columns],
  column_defs: [...column_defs],
  rows: [...rows],
```

- [ ] **Step 2: Pass `column_defs` from `buildBrowserEntrypointResult`**

In `buildBrowserEntrypointResult`, after the `columns` assignment (around line 452), add:

```javascript
const columns = normalizedArgs.period_mode === 'week' ? WEEK_COLUMNS : MONTH_COLUMNS;
const columnDefs = normalizedArgs.period_mode === 'week' ? WEEK_COLUMN_DEFS : MONTH_COLUMN_DEFS;
```

Then add `column_defs: columnDefs` alongside `columns` in every `buildArtifact` call inside `buildBrowserEntrypointResult` that carries column data. Four calls need the change:

**Call 1** (API error, around line 466):
```javascript
columns,
column_defs: columnDefs,
rows: [],
```

**Call 2** (empty rows, around line 483):
```javascript
columns,
column_defs: columnDefs,
rows: []
```

**Call 3** (normalization failure, around line 497):
```javascript
columns,
column_defs: columnDefs,
rows: [],
```

**Call 4** (success, around line 558):
```javascript
columns,
column_defs: columnDefs,
rows,
```

Note: the two `buildArtifact` calls that run before the `columns` variable is assigned (validation failure and page-context failure, around lines 422 and 439) don't need `column_defs`, since they carry no data.

- [ ] **Step 3: Remove the `exportWorkbook` and `writeReportLog` calls from the success path**

In `buildBrowserEntrypointResult`, replace the entire export block (lines 518-556) with a simplified version.

Find:
```javascript
const exportState = {
  attempted: false,
  status: 'skipped',
  message: null
};

if (typeof deps.exportWorkbook === 'function') {
  exportState.attempted = true;
  try {
    const exportPayload = buildExportPayload({
      mode: normalizedArgs.period_mode,
      orgLabel: normalizedArgs.org_label,
      periodValue: normalizedArgs.period_value,
      rows
    });
    const exportResult = await deps.exportWorkbook(exportPayload);
    const exportPath = pickFirstNonEmpty(exportResult?.path, exportResult?.data?.path, exportResult?.data?.data);
    if (!exportPath) {
      throw new Error('export_failed');
    }
    exportState.status = 'ok';
    exportState.message = exportPath;

    if (typeof deps.writeReportLog === 'function') {
      try {
        const reportLog = await deps.writeReportLog(buildReportName(normalizedArgs), exportPath);
        if (reportLog?.success === false) {
          reasons.push('report_log_failed');
        }
      } catch (_error) {
        reasons.push('report_log_failed');
      }
    }
  } catch (error) {
    reasons.push('export_failed');
    exportState.status = 'failed';
    exportState.message = pickFirstNonEmpty(error?.message, 'export_failed');
  }
}
```

Replace with:
```javascript
// Export is handled by the Rust side after receiving the artifact.
// JS only provides rows + column_defs in the artifact.
const exportState = {
  attempted: false,
  status: 'deferred_to_rust',
  message: null
};
```

- [ ] **Step 4: Remove unused constants and functions**

Remove these constants (lines 5-6), since they are no longer referenced from JS:

```javascript
const EXPORT_SERVICE_URL = 'http://localhost:13313/SurfaceServices/personalBread/export/faultDetailsExportXLSX';
const REPORT_LOG_URL = 'http://localhost:13313/ReportServices/Api/setReportLog';
```

Remove the `postJson` function (lines 264-294); it is no longer needed now that JS makes no HTTP calls to localhost.

Remove these functions from `defaultBrowserDeps()`:
- `exportWorkbook` (lines 350-373)
- `writeReportLog` (lines 375-409)

Remove these now-unused functions:
- `buildExportTitles` (lines 244-254)
- `buildExportPayload` (lines 256-262)
- `buildReportName` (lines 413-415)

- [ ] **Step 5: Update `module.exports` to remove unused exports**

Update the `module.exports` block (lines 572-586). Remove `buildBrowserEntrypointResult` from the exports if it was only used for testing with full deps, or keep it for test compatibility. The final exports block:

```javascript
if (typeof module !== 'undefined' && module.exports) {
  module.exports = {
    MONTH_COLUMNS,
    WEEK_COLUMNS,
    MONTH_COLUMN_DEFS,
    WEEK_COLUMN_DEFS,
    validateArgs,
    buildMonthRequest,
    buildWeekRequest,
    normalizeRows,
    determineArtifactStatus,
    buildArtifact,
    buildBrowserEntrypointResult
  };
} else {
  return buildBrowserEntrypointResult(args);
}
```

- [ ] **Step 6: Verify the JS file has no syntax errors**

Run: `node -c "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js"`

Expected: no syntax errors. (Note: the file uses `return` at the top level inside a wrapped IIFE when injected into the browser, so the Node syntax check may warn; the important thing is no parse errors.)

Alternatively, check that the test file still works:

Run: `node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.test.js"`

Expected: tests pass (or at least no JS parse errors).

- [ ] **Step 7: Commit**

```bash
git add "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js"
git commit -m "feat(lineloss): remove browser-side export, defer to Rust-side XLSX generation"
```

---

### Task 4: Full Build Verification

**Files:** None (verification only)

- [ ] **Step 1: Run the full cargo build**

Run: `cargo build`

Expected: successful compilation with no errors.

- [ ] **Step 2: Run all tests**

Run: `cargo test -- --nocapture`

Expected: all tests pass, including:
- `lineloss_xlsx_export_test::export_month_lineloss_produces_valid_xlsx`
- `lineloss_xlsx_export_test::export_empty_rows_returns_error`
- `lineloss_xlsx_export::tests::column_letter_maps_indices_correctly`
- All existing `deterministic_submit_test` tests

- [ ] **Step 3: Commit (only if fixups were needed)**

Only if compilation or test fixes were required in this step.

---

# Helper Page Lifecycle Fix v2 — Same-Connection Close + Open

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Prevent orphaned helper pages across process restarts by closing existing ones before opening new ones, all on the same WebSocket connection.

**Architecture:** In `bootstrap_helper_page`, after registering with the browser WS, send `sgHideBrowerserClosePage` (best-effort, silently ignored if no page exists), then send `sgHideBrowerserOpenPage`. Change `use_hidden_domain` to `true`.

**Tech Stack:** Rust, tungstenite, SuperRPA browser WS protocol

---

### Task 1: Add close-before-open in `bootstrap_helper_page`

**Files:**
- Modify: `src/browser/callback_host.rs:345-374` (the `bootstrap_helper_page` function)

- [ ] **Step 1: Add the close command before the open command in `bootstrap_helper_page`**

Replace the current `bootstrap_helper_page` function. After `recv_bootstrap_prelude`, send the close command first, then the open command:

```rust
fn bootstrap_helper_page(
    browser_ws_url: &str,
    request_url: &str,
    helper_url: &str,
    use_hidden_domain: bool,
) -> Result<(), PipeError> {
    let (mut websocket, _) = connect(browser_ws_url)
        .map_err(|err| PipeError::Protocol(format!("browser websocket connect failed: {err}")))?;
    configure_bootstrap_socket(&mut websocket)?;
    websocket
        .send(Message::Text(
            r#"{"type":"register","role":"web"}"#.to_string().into(),
        ))
        .map_err(|err| PipeError::Protocol(format!("browser websocket register failed: {err}")))?;
    let _ = recv_bootstrap_prelude(&mut websocket);

    // Close any orphaned helper page from a previous process run.
    // Best-effort: if no page exists, the browser silently ignores this.
    let (open_action, close_action) = if use_hidden_domain {
        ("sgHideBrowerserOpenPage", "sgHideBrowerserClosePage")
    } else {
        ("sgBrowerserOpenPage", "sgBrowserClosePage")
    };
    let close_payload = json!([request_url, close_action, helper_url]).to_string();
    let _ = websocket.send(Message::Text(close_payload.into()));

    let payload = json!([
        request_url,
        open_action,
        helper_url,
    ])
    .to_string();
    websocket
        .send(Message::Text(payload.into()))
        .map_err(|err| PipeError::Protocol(format!("helper bootstrap send failed: {err}")))?;
    Ok(())
}
```

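On the wire, both commands are bare JSON arrays of `[request_url, action, helper_url]` sent over the already-registered socket. With illustrative URLs (the hosts and paths here are assumptions, not the real deployment values), the close-then-open sequence looks like:

```
["http://127.0.0.1:18080/cb", "sgHideBrowerserClosePage", "http://127.0.0.1:18080/browser-helper.html"]
["http://127.0.0.1:18080/cb", "sgHideBrowerserOpenPage", "http://127.0.0.1:18080/browser-helper.html"]
```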
Key changes from the current code:
- After `recv_bootstrap_prelude`, add the close command (best-effort, ignore errors)
- Compute both `open_action` and `close_action` from the `use_hidden_domain` flag
- Send close first, then open, on the same WebSocket connection

- [ ] **Step 2: Change `use_hidden_domain` to `true` in server.rs**

In `src/service/server.rs`, at the `start_with_browser_ws_url` call, change `false` to `true`:

```rust
match LiveBrowserCallbackHost::start_with_browser_ws_url(
    browser_ws_url,
    &bootstrap_url,
    Duration::from_secs(15),
    BROWSER_RESPONSE_TIMEOUT,
    true, // use_hidden_domain: hidden domain for invisible helper
) {
```

- [ ] **Step 3: Build**

Run: `cargo build 2>&1`
Expected: 0 errors.

- [ ] **Step 4: Run callback_host tests**

Run: `cargo test --lib -- callback_host 2>&1`
Expected: 12 tests pass, including `live_callback_host_sends_bootstrap_open_page_command` (which still checks for `sgBrowerserOpenPage`, because the test passes `false`) and `live_callback_host_hidden_domain_sends_hide_open_page_command` (which passes `true`).

Note: the first test passes `false` for `use_hidden_domain`, so the close command will use `sgBrowserClosePage`. The test's fake WebSocket server receives both the close and open frames, but the test only checks that `sgBrowerserOpenPage` is present, which remains true.

- [ ] **Step 5: Commit**

```bash
git add src/browser/callback_host.rs src/service/server.rs
git commit -m "fix(callback_host): close orphaned helper page before opening new one on same WS"
```

---

### Task 2: Full verification

**Files:** None (verification only)

- [ ] **Step 1: Full test suite**

Run: `cargo test 2>&1`
Expected: all tests pass except the pre-existing `lineloss_period_resolver_prompts_for_missing_period` failure.

- [ ] **Step 2: Verify key behavioral changes**

Manually confirm:
1. `bootstrap_helper_page` sends the close command before the open command (both on the same WS connection)
2. `use_hidden_domain` is `true` in `server.rs`; the helper page opens in the hidden domain
3. `Drop for LiveBrowserCallbackHost` remains simple (shutdown only, no close attempt)
4. `cached_host` is still in the `mod.rs` outer loop (process-internal deduplication)

---

# Helper Page Lifecycle Fix & Hidden Domain Support — Implementation Plan
|
||||||
|
|
||||||
|
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
|
||||||
|
|
||||||
|
**Goal:** Fix duplicate browser-helper.html pages caused by WebSocket reconnections, add cleanup on Drop, and introduce a config switch for hidden-domain page opening.
|
||||||
|
|
||||||
|
**Architecture:** Three changes: (1) lift `cached_host` from `serve_client()` to the outer `run()` loop so reconnections share one host, (2) enhance `Drop for LiveBrowserCallbackHost` to send a close-page command via browser WS, (3) add `use_hidden_domain: bool` parameter that selects between `sgBrowerserOpenPage`/`sgHideBrowerserOpenPage` and their corresponding close APIs.
|
||||||
|
|
||||||
|
**Tech Stack:** Rust, tungstenite WebSocket crate, SuperRPA browser WS protocol
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Task 1: Add `use_hidden_domain` field and update `bootstrap_helper_page`

**Files:**
- Modify: `src/browser/callback_host.rs:28` (constant), `:44-51` (struct), `:215-252` (constructor), `:340-359` (bootstrap fn)

- [ ] **Step 1: Change `HELPER_BOOTSTRAP_ACTION` from a constant to a function of `use_hidden_domain`**

Replace the constant and update `bootstrap_helper_page` to accept and use the flag:

```rust
// DELETE this line:
// const HELPER_BOOTSTRAP_ACTION: &str = "sgBrowerserOpenPage";

// REPLACE bootstrap_helper_page signature and body:
fn bootstrap_helper_page(
    browser_ws_url: &str,
    request_url: &str,
    helper_url: &str,
    use_hidden_domain: bool,
) -> Result<(), PipeError> {
    let (mut websocket, _) = connect(browser_ws_url)
        .map_err(|err| PipeError::Protocol(format!("browser websocket connect failed: {err}")))?;
    configure_bootstrap_socket(&mut websocket)?;
    websocket
        .send(Message::Text(
            r#"{"type":"register","role":"web"}"#.to_string().into(),
        ))
        .map_err(|err| PipeError::Protocol(format!("browser websocket register failed: {err}")))?;
    let _ = recv_bootstrap_prelude(&mut websocket);
    let open_action = if use_hidden_domain {
        "sgHideBrowerserOpenPage"
    } else {
        "sgBrowerserOpenPage"
    };
    let payload = json!([
        request_url,
        open_action,
        helper_url,
    ])
    .to_string();
    websocket
        .send(Message::Text(payload.into()))
        .map_err(|err| PipeError::Protocol(format!("helper bootstrap send failed: {err}")))?;
    Ok(())
}
```
- [ ] **Step 2: Add new fields to `LiveBrowserCallbackHost`**

```rust
#[derive(Debug)]
pub(crate) struct LiveBrowserCallbackHost {
    host: Arc<BrowserCallbackHost>,
    shutdown: Arc<AtomicBool>,
    server_thread: Mutex<Option<JoinHandle<()>>>,
    command_lock: Mutex<()>,
    result_timeout: Duration,
    browser_ws_url: String,
    use_hidden_domain: bool,
}
```
- [ ] **Step 3: Update `start_with_browser_ws_url` to accept and store the new parameter**

```rust
impl LiveBrowserCallbackHost {
    pub(crate) fn start_with_browser_ws_url(
        browser_ws_url: &str,
        bootstrap_request_url: &str,
        ready_timeout: Duration,
        result_timeout: Duration,
        use_hidden_domain: bool,
    ) -> Result<Self, PipeError> {
        let listener = TcpListener::bind("127.0.0.1:0").map_err(|err| {
            PipeError::Protocol(format!("failed to bind callback host listener: {err}"))
        })?;
        listener.set_nonblocking(true).map_err(|err| {
            PipeError::Protocol(format!("failed to configure callback host listener: {err}"))
        })?;
        let origin = format!(
            "http://{}",
            listener.local_addr().map_err(|err| {
                PipeError::Protocol(format!(
                    "failed to resolve callback host listener address: {err}"
                ))
            })?
        );
        let host = Arc::new(BrowserCallbackHost::with_urls(&origin, browser_ws_url));
        let shutdown = Arc::new(AtomicBool::new(false));
        let thread_host = host.clone();
        let thread_shutdown = shutdown.clone();
        let server_thread = thread::spawn(move || serve_loop(listener, thread_host, thread_shutdown));

        bootstrap_helper_page(browser_ws_url, bootstrap_request_url, host.helper_url(), use_hidden_domain)?;
        wait_for_helper_ready(host.as_ref(), ready_timeout)?;

        let live_host = Self {
            host,
            shutdown,
            server_thread: Mutex::new(Some(server_thread)),
            command_lock: Mutex::new(()),
            result_timeout,
            browser_ws_url: browser_ws_url.to_string(),
            use_hidden_domain,
        };
        Ok(live_host)
    }
```
- [ ] **Step 4: Fix the inline test struct literal that constructs `LiveBrowserCallbackHost` directly**

In the `live_callback_host_treats_simulated_mouse_command_as_fire_and_forget` test (around line 1110), add the new fields:

```rust
let host = LiveBrowserCallbackHost {
    host: Arc::new(BrowserCallbackHost::new()),
    shutdown: Arc::new(AtomicBool::new(false)),
    server_thread: Mutex::new(None),
    command_lock: Mutex::new(()),
    result_timeout: Duration::from_millis(10),
    browser_ws_url: "ws://127.0.0.1:12345".to_string(),
    use_hidden_domain: false,
};
```

- [ ] **Step 5: Run build to verify compilation**

Run: `cargo build 2>&1`
Expected: 0 errors. The `HELPER_BOOTSTRAP_ACTION` constant removal and the signature changes should all be internally consistent.

- [ ] **Step 6: Run tests to verify existing behavior is preserved**

Run: `cargo test -- callback_host 2>&1`
Expected: All existing callback_host tests pass (including `live_callback_host_sends_bootstrap_open_page_command`, which still checks for `sgBrowerserOpenPage` since no caller passes `true` yet).

- [ ] **Step 7: Commit**

```bash
git add src/browser/callback_host.rs
git commit -m "feat(callback_host): add use_hidden_domain param to bootstrap_helper_page"
```

---
### Task 2: Enhance `Drop` to close the helper page

**Files:**
- Modify: `src/browser/callback_host.rs:321-328` (Drop impl)

- [ ] **Step 1: Add a `close_helper_page` helper function**

Add this function near `bootstrap_helper_page` (after line ~360):

```rust
/// Best-effort attempt to close the helper page tab via the browser WebSocket.
/// Silently ignores all errors — this runs during Drop and must not panic.
fn close_helper_page(browser_ws_url: &str, helper_url: &str, use_hidden_domain: bool) {
    let close_action = if use_hidden_domain {
        "sgHideBrowerserClosePage"
    } else {
        "sgBrowserClosePage"
    };

    let result: Result<(), Box<dyn std::error::Error>> = (|| {
        // Use a raw TcpStream with timeouts instead of tungstenite::connect,
        // which does not expose a connection timeout.
        let addr = browser_ws_url
            .trim_start_matches("ws://")
            .trim_start_matches("wss://");
        // Keep only the host:port portion in case the URL carries a path.
        let addr = addr.split('/').next().unwrap_or(addr);
        let stream = TcpStream::connect_timeout(
            &addr.parse().map_err(|e| format!("addr parse: {e}"))?,
            Duration::from_millis(100),
        )?;
        stream.set_read_timeout(Some(Duration::from_millis(200)))?;
        stream.set_write_timeout(Some(Duration::from_millis(200)))?;
        let (mut websocket, _) = tungstenite::client(
            browser_ws_url,
            stream,
        )?;
        websocket.send(Message::Text(
            r#"{"type":"register","role":"web"}"#.to_string().into(),
        ))?;
        // Drain the welcome prelude (best-effort, ignore timeout).
        let _ = websocket.read();
        let payload = json!([helper_url, close_action, helper_url]).to_string();
        websocket.send(Message::Text(payload.into()))?;
        Ok(())
    })();

    if let Err(err) = result {
        eprintln!("close_helper_page best-effort failed (harmless): {err}");
    }
}
```
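The host:port extraction above is the subtle part: `SocketAddr::parse` rejects anything with a path component, so the scheme and any trailing path both have to be stripped before `connect_timeout`. A std-only sketch of that step (`ws_host_port` is a hypothetical helper written for illustration, not a function in this codebase):

```rust
// Reduce a ws:// or wss:// URL to the "host:port" string that
// SocketAddr::parse / TcpStream::connect_timeout can accept.
fn ws_host_port(url: &str) -> &str {
    let rest = url
        .trim_start_matches("ws://")
        .trim_start_matches("wss://");
    // Drop any path suffix such as "/ws"; split() always yields at
    // least one item, so unwrap_or is just a defensive fallback.
    rest.split('/').next().unwrap_or(rest)
}

fn main() {
    assert_eq!(ws_host_port("ws://127.0.0.1:12345"), "127.0.0.1:12345");
    assert_eq!(ws_host_port("wss://example.local:9222/devtools"), "example.local:9222");
    // Already-bare input passes through unchanged.
    assert_eq!(ws_host_port("127.0.0.1:80"), "127.0.0.1:80");
    println!("ok");
}
```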
- [ ] **Step 2: Update `Drop for LiveBrowserCallbackHost` to call `close_helper_page`**

```rust
impl Drop for LiveBrowserCallbackHost {
    fn drop(&mut self) {
        // Best-effort: tell the browser to close the helper page tab.
        close_helper_page(
            &self.browser_ws_url,
            self.host.helper_url(),
            self.use_hidden_domain,
        );

        self.shutdown.store(true, Ordering::Relaxed);
        if let Some(handle) = self.server_thread.lock().unwrap().take() {
            let _ = handle.join();
        }
    }
}
```

- [ ] **Step 3: Run build to verify compilation**

Run: `cargo build 2>&1`
Expected: 0 errors. `close_helper_page` uses types already imported (`TcpStream`, `Duration`, `json!`, `Message`).

- [ ] **Step 4: Run tests**

Run: `cargo test -- callback_host 2>&1`
Expected: All pass. The Drop enhancement is best-effort, and the test helper constructs hosts with `server_thread: Mutex::new(None)`, so Drop completes cleanly.

- [ ] **Step 5: Commit**

```bash
git add src/browser/callback_host.rs
git commit -m "feat(callback_host): close helper page on Drop via browser WS"
```

---
### Task 3: Lift `cached_host` to the outer loop and update the `serve_client` signature

**Files:**
- Modify: `src/service/mod.rs:72-96` (run loop)
- Modify: `src/service/server.rs:232-241` (serve_client signature and cached_host init)

- [ ] **Step 1: Change `serve_client` to accept `cached_host` as a parameter**

In `src/service/server.rs`, change the function signature and remove the local `cached_host` variable:

```rust
pub fn serve_client(
    context: &AgentRuntimeContext,
    session: &ServiceSession,
    sink: Arc<ServiceEventSink>,
    browser_ws_url: &str,
    mac_policy: &MacPolicy,
    cached_host: &mut Option<Arc<LiveBrowserCallbackHost>>,
) -> Result<(), PipeError> {
    // DELETE the line: let mut cached_host: Option<Arc<LiveBrowserCallbackHost>> = None;

    loop {
        // ... rest of function body unchanged; `cached_host` is now the parameter
```

The body's references to `cached_host` remain identical — they just operate on the borrowed mutable reference instead of a local variable.
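The essence of this task is that the `Option<Arc<_>>` cache outlives each client connection because it is owned by the caller. A std-only sketch of the pattern, where `Host` and both functions are simplified stand-ins for `LiveBrowserCallbackHost`, `serve_client`, and the `run()` loop:

```rust
use std::sync::Arc;

// Stand-in for LiveBrowserCallbackHost; we only track construction count.
struct Host;

fn start_host(starts: &mut u32) -> Arc<Host> {
    *starts += 1; // each real start would open a new browser-helper.html page
    Arc::new(Host)
}

// Mirrors the new serve_client shape: the cache is a borrowed &mut Option,
// so a reconnecting client reuses the host instead of opening a new page.
fn serve_client(cached_host: &mut Option<Arc<Host>>, starts: &mut u32) -> Arc<Host> {
    cached_host
        .get_or_insert_with(|| start_host(starts))
        .clone()
}

fn main() {
    // Mirrors run(): the cache is declared BEFORE the accept loop.
    let mut cached_host: Option<Arc<Host>> = None;
    let mut starts = 0u32;
    for _ in 0..3 {
        // three WebSocket reconnections
        let _host = serve_client(&mut cached_host, &mut starts);
    }
    assert_eq!(starts, 1); // host (and helper page) created exactly once
    println!("starts = {starts}");
}
```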
- [ ] **Step 2: Update the `start_with_browser_ws_url` call to pass `false` for `use_hidden_domain`**

In `src/service/server.rs`, at the `LiveBrowserCallbackHost::start_with_browser_ws_url` call (around line 288), add the `false` argument:

```rust
match LiveBrowserCallbackHost::start_with_browser_ws_url(
    browser_ws_url,
    &bootstrap_url,
    Duration::from_secs(15),
    BROWSER_RESPONSE_TIMEOUT,
    false, // use_hidden_domain: visible tab for now
) {
```
- [ ] **Step 3: Lift `cached_host` into `run()` in `mod.rs`**

In `src/service/mod.rs`, declare `cached_host` before the loop and pass it to `serve_client`:

```rust
// Add this import at the top of the function or file:
use crate::browser::callback_host::LiveBrowserCallbackHost;

// Before the loop (after line 64, after `let session = ...`):
let mut cached_host: Option<Arc<LiveBrowserCallbackHost>> = None;

loop {
    let (stream, _) = listener.accept()?;
    let websocket = accept(stream)
        .map_err(|err| PipeError::Protocol(format!("service websocket accept failed: {err}")))?;
    let sink = Arc::new(ServiceEventSink::from_websocket(websocket));
    match session.try_attach_client() {
        Ok(()) => {
            let result = serve_client(
                &runtime_context,
                &session,
                sink.clone(),
                browser_ws_url,
                &mac_policy,
                &mut cached_host,
            );
            session.detach_client();
            match result {
                Ok(()) | Err(PipeError::PipeClosed) => {}
                Err(err) => return Err(err),
            }
        }
        Err(message) => {
            sink.send_service_message(message)?;
        }
    }
}
```
- [ ] **Step 4: Update the `pub use` export if needed**

Check `src/service/mod.rs:17`:

```rust
pub use server::{serve_client, ServiceEventSink, ServiceSession};
```

The signature change is compatible — `serve_client` is still public, with an added parameter. Any external callers will get a compile error guiding them to add the parameter, which is the desired behavior.

- [ ] **Step 5: Run build to verify compilation**

Run: `cargo build 2>&1`
Expected: 0 errors. If there are external test files calling `serve_client`, they will fail here and need the new parameter added.

- [ ] **Step 6: Run full test suite**

Run: `cargo test 2>&1`
Expected: All tests pass. External test files that exercise `serve_client` indirectly through the service protocol tests should still work because they use the WS protocol layer, not `serve_client` directly. (Verified: grep found 0 test files referencing `serve_client` or `LiveBrowserCallbackHost`.)

- [ ] **Step 7: Commit**

```bash
git add src/service/mod.rs src/service/server.rs
git commit -m "fix(service): lift cached_host to outer loop to prevent duplicate helper pages"
```

---
### Task 4: Add tests for hidden-domain bootstrap

**Files:**
- Modify: `src/browser/callback_host.rs` (inline tests module, around line 1071)

- [ ] **Step 1: Update the existing `live_callback_host_sends_bootstrap_open_page_command` test**

The test currently calls `start_with_browser_ws_url` with 4 args. Add the 5th arg, `false`:

```rust
#[test]
fn live_callback_host_sends_bootstrap_open_page_command() {
    let (ws_url, frames, handle) = start_fake_browser_status_server();

    let result = LiveBrowserCallbackHost::start_with_browser_ws_url(
        &ws_url,
        "https://www.zhihu.com",
        Duration::from_millis(100),
        Duration::from_millis(50),
        false,
    );
    assert!(result.is_err(), "expected timeout because no real helper page loads");
    drop(result);
    handle.join().unwrap();

    let sent = frames.lock().unwrap().clone();
    assert!(
        sent.iter().any(|frame| frame.contains("sgBrowerserOpenPage")),
        "bootstrap should send sgBrowerserOpenPage to the browser WS; sent frames: {sent:?}"
    );
    assert!(
        sent.iter().any(|frame| frame.contains("/sgclaw/browser-helper.html")),
        "bootstrap should include the helper page URL; sent frames: {sent:?}"
    );
    assert!(
        sent.iter().any(|frame| frame.contains("https://www.zhihu.com")),
        "bootstrap requestUrl should be the provided page URL; sent frames: {sent:?}"
    );
}
```
- [ ] **Step 2: Add a new test for hidden-domain bootstrap**

Add this test after the existing one:

```rust
#[test]
fn live_callback_host_hidden_domain_sends_hide_open_page_command() {
    let (ws_url, frames, handle) = start_fake_browser_status_server();

    let result = LiveBrowserCallbackHost::start_with_browser_ws_url(
        &ws_url,
        "https://www.zhihu.com",
        Duration::from_millis(100),
        Duration::from_millis(50),
        true,
    );
    assert!(result.is_err(), "expected timeout because no real helper page loads");
    drop(result);
    handle.join().unwrap();

    let sent = frames.lock().unwrap().clone();
    assert!(
        sent.iter().any(|frame| frame.contains("sgHideBrowerserOpenPage")),
        "hidden domain bootstrap should send sgHideBrowerserOpenPage; sent frames: {sent:?}"
    );
    assert!(
        !sent.iter().any(|frame| frame.contains("\"sgBrowerserOpenPage\"")),
        "hidden domain bootstrap should NOT send visible sgBrowerserOpenPage; sent frames: {sent:?}"
    );
    assert!(
        sent.iter().any(|frame| frame.contains("/sgclaw/browser-helper.html")),
        "bootstrap should include the helper page URL; sent frames: {sent:?}"
    );
}
```
- [ ] **Step 3: Run all callback_host tests**

Run: `cargo test -- callback_host 2>&1`
Expected: All 3 tests pass:
- `live_callback_host_sends_bootstrap_open_page_command` — regression, visible domain
- `live_callback_host_hidden_domain_sends_hide_open_page_command` — new, hidden domain
- `live_callback_host_treats_simulated_mouse_command_as_fire_and_forget` — unchanged

- [ ] **Step 4: Run full test suite**

Run: `cargo test 2>&1`
Expected: All tests pass.

- [ ] **Step 5: Commit**

```bash
git add src/browser/callback_host.rs
git commit -m "test(callback_host): add hidden domain bootstrap test"
```

---
### Task 5: Full build verification

**Files:** None (verification only)

- [ ] **Step 1: Clean build**

Run: `cargo build 2>&1`
Expected: 0 errors. Warnings about dead code in unrelated modules are acceptable.

- [ ] **Step 2: Full test suite**

Run: `cargo test 2>&1`
Expected: All tests pass. The pre-existing `lineloss_period_resolver_prompts_for_missing_period` failure (from previous work) is known and unrelated.

- [ ] **Step 3: Verify the key behavioral changes in code**

Manually confirm:
1. `src/service/mod.rs` — `cached_host` is declared BEFORE the `loop`, not inside `serve_client`
2. `src/browser/callback_host.rs` — `Drop::drop` calls `close_helper_page` before shutdown
3. `src/browser/callback_host.rs` — `bootstrap_helper_page` uses `"sgHideBrowerserOpenPage"` when `use_hidden_domain == true` and `"sgBrowerserOpenPage"` when `false`
4. `src/service/server.rs` — the `start_with_browser_ws_url` call passes `false` as `use_hidden_domain`
762 docs/superpowers/plans/2026-04-14-service-console-enhancement.md Normal file

@@ -0,0 +1,762 @@
# Service Console Enhancement Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Add auto-connect on page load and a settings panel to sg_claw_service_console.html, with config save via WebSocket to the sgClaw service.

**Architecture:** The HTML page auto-connects on load and provides a settings modal. When the user saves, the page sends an `update_config` WebSocket message. The Rust service receives it, merges it with the existing config, writes to `sgclaw_config.json`, and responds.

**Tech Stack:** Rust (serde, tungstenite), vanilla JavaScript/HTML/CSS

---
### Task 1: Add `UpdateConfig` and `ConfigUpdated` protocol types

**Files:**
- Modify: `src/service/protocol.rs`

- [ ] **Step 1: Add a `ConfigUpdatePayload` struct and an `UpdateConfig` variant to `ClientMessage`**

Add this struct above the `ClientMessage` enum, and add the `UpdateConfig` variant to the enum:

```rust
use std::path::PathBuf;

#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ConfigUpdatePayload {
    #[serde(rename = "apiKey", default)]
    pub api_key: Option<String>,
    #[serde(rename = "baseUrl", default)]
    pub base_url: Option<String>,
    #[serde(default)]
    pub model: Option<String>,
    #[serde(rename = "skillsDir", default)]
    pub skills_dir: Option<String>,
    #[serde(rename = "directSubmitSkill", default)]
    pub direct_submit_skill: Option<String>,
    #[serde(rename = "runtimeProfile", default)]
    pub runtime_profile: Option<String>,
    #[serde(rename = "browserBackend", default)]
    pub browser_backend: Option<String>,
}
```

Add the `UpdateConfig` variant to the `ClientMessage` enum (after `Ping`):

```rust
UpdateConfig {
    config: ConfigUpdatePayload,
},
```
- [ ] **Step 2: Add a `ConfigUpdated` variant to `ServiceMessage`**

Add after `Pong`:

```rust
ConfigUpdated {
    success: bool,
    message: String,
},
```

- [ ] **Step 3: Update `into_submit_task_request` to handle `UpdateConfig`**

In the match arm, add `ClientMessage::UpdateConfig { .. }` to the list that returns `None`:

```rust
ClientMessage::Connect
| ClientMessage::Start
| ClientMessage::Stop
| ClientMessage::Ping
| ClientMessage::UpdateConfig { .. } => None,
```

- [ ] **Step 4: Run tests to verify the protocol compiles**

Run: `cargo test --lib service::protocol`
Expected: PASS (no protocol-specific tests yet, but it should compile)
### Task 2: Add a `config_path()` getter to `AgentRuntimeContext`

**Files:**
- Modify: `src/agent/task_runner.rs`

- [ ] **Step 1: Add a public getter method**

In the `impl AgentRuntimeContext` block, add after `load_sgclaw_settings()`:

```rust
pub fn config_path(&self) -> Option<&Path> {
    self.config_path.as_deref()
}
```

Add the import at the top of the file if not present:

```rust
use std::path::Path;
```

- [ ] **Step 2: Run tests to verify**

Run: `cargo test agent::task_runner`
Expected: PASS
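The getter's `as_deref()` is what lets it return `Option<&Path>` without cloning the stored `PathBuf`. A self-contained sketch of the same shape (`Ctx` is a hypothetical stand-in for `AgentRuntimeContext`):

```rust
use std::path::{Path, PathBuf};

// Hypothetical stand-in for AgentRuntimeContext with the same field shape.
struct Ctx {
    config_path: Option<PathBuf>,
}

impl Ctx {
    // as_deref: Option<PathBuf> -> Option<&Path>, borrowing instead of cloning.
    fn config_path(&self) -> Option<&Path> {
        self.config_path.as_deref()
    }
}

fn main() {
    let ctx = Ctx { config_path: Some(PathBuf::from("sgclaw_config.json")) };
    assert_eq!(ctx.config_path(), Some(Path::new("sgclaw_config.json")));

    let none = Ctx { config_path: None };
    assert!(none.config_path().is_none());
    println!("ok");
}
```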
### Task 3: Add a `save_to_path()` method to `SgClawSettings`

**Files:**
- Modify: `src/config/settings.rs`

- [ ] **Step 1: Decide how to serialize `SgClawSettings`**

The `RawSgClawSettings` struct derives `Deserialize` only. The obvious move would be to add `Serialize` to `SgClawSettings` for writing (with `use serde::Serialize;` at the top):

```rust
#[derive(Debug, Clone, PartialEq, Eq, Serialize)]
pub struct SgClawSettings {
```

However, `SgClawSettings` has enum fields (`RuntimeProfile`, `SkillsPromptMode`, `PlannerMode`, `BrowserBackend`, `OfficeBackend`) that don't implement `Serialize`, so this would also require `Serialize` derives on all of those types.

Instead, the simpler approach is to write a `to_serializable()` method that converts `SgClawSettings` into a dedicated serializable struct, then serialize that.
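The mirror-struct approach reduces to explicit enum-to-string mappings like the ones in Step 3 below. A std-only sketch of one such mapping (variant and string names taken from this plan; the free function is illustrative, the plan inlines the `match`):

```rust
// Variants and config-file strings as listed in this plan's to_serializable().
#[derive(Debug, Clone, Copy)]
enum BrowserBackend {
    SuperRpa,
    AgentBrowser,
    RustNative,
    ComputerUse,
    Auto,
}

fn browser_backend_str(backend: BrowserBackend) -> &'static str {
    match backend {
        BrowserBackend::SuperRpa => "super-rpa",
        BrowserBackend::AgentBrowser => "agent-browser",
        BrowserBackend::RustNative => "rust-native",
        BrowserBackend::ComputerUse => "computer-use",
        BrowserBackend::Auto => "auto",
    }
}

fn main() {
    // An exhaustive match means adding a variant forces updating the mapping.
    assert_eq!(browser_backend_str(BrowserBackend::SuperRpa), "super-rpa");
    println!("{}", browser_backend_str(BrowserBackend::Auto));
}
```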
- [ ] **Step 2: Create a serializable raw config struct**

Add a new struct at the bottom of the file (before tests, if any):

```rust
#[derive(Debug, Serialize)]
struct SerializableRawSgClawSettings {
    #[serde(rename = "apiKey")]
    api_key: String,
    #[serde(rename = "baseUrl")]
    base_url: String,
    model: String,
    #[serde(rename = "skillsDir", skip_serializing_if = "Option::is_none")]
    skills_dir: Option<String>,
    #[serde(rename = "directSubmitSkill", skip_serializing_if = "Option::is_none")]
    direct_submit_skill: Option<String>,
    #[serde(rename = "skillsPromptMode", skip_serializing_if = "Option::is_none")]
    skills_prompt_mode: Option<String>,
    #[serde(rename = "runtimeProfile", skip_serializing_if = "Option::is_none")]
    runtime_profile: Option<String>,
    #[serde(rename = "plannerMode", skip_serializing_if = "Option::is_none")]
    planner_mode: Option<String>,
    #[serde(rename = "activeProvider", skip_serializing_if = "Option::is_none")]
    active_provider: Option<String>,
    #[serde(rename = "browserBackend", skip_serializing_if = "Option::is_none")]
    browser_backend: Option<String>,
    #[serde(rename = "officeBackend", skip_serializing_if = "Option::is_none")]
    office_backend: Option<String>,
    #[serde(rename = "browserWsUrl", skip_serializing_if = "Option::is_none")]
    browser_ws_url: Option<String>,
    #[serde(rename = "serviceWsListenAddr", skip_serializing_if = "Option::is_none")]
    service_ws_listen_addr: Option<String>,
    providers: Vec<SerializableProviderSettings>,
}

#[derive(Debug, Serialize)]
struct SerializableProviderSettings {
    id: String,
    provider: Option<String>,
    #[serde(rename = "apiKey")]
    api_key: String,
    #[serde(rename = "baseUrl", skip_serializing_if = "Option::is_none")]
    base_url: Option<String>,
    model: String,
    #[serde(rename = "apiPath", skip_serializing_if = "Option::is_none")]
    api_path: Option<String>,
    #[serde(rename = "wireApi", skip_serializing_if = "Option::is_none")]
    wire_api: Option<String>,
    #[serde(rename = "requiresOpenaiAuth")]
    requires_openai_auth: bool,
}
```

Add `use serde::Serialize;` at the top of the file (combine with the existing `use serde::Deserialize;`):

```rust
use serde::{Deserialize, Serialize};
```
- [ ] **Step 3: Add a `to_serializable()` method to `SgClawSettings`**

In the `impl SgClawSettings` block, add:

```rust
fn to_serializable(&self) -> SerializableRawSgClawSettings {
    let format_enum_value = |s: &str| s.to_string();

    SerializableRawSgClawSettings {
        api_key: self.provider_api_key.clone(),
        base_url: self.provider_base_url.clone(),
        model: self.provider_model.clone(),
        skills_dir: self.skills_dir.as_ref().map(|p| p.to_string_lossy().into_owned()),
        direct_submit_skill: self.direct_submit_skill.clone(),
        skills_prompt_mode: Some(format_enum_value(match self.skills_prompt_mode {
            SkillsPromptMode::Full => "full",
            SkillsPromptMode::Compact => "compact",
        })),
        runtime_profile: Some(format_enum_value(match self.runtime_profile {
            RuntimeProfile::BrowserAttached => "browser-attached",
            RuntimeProfile::BrowserHeavy => "browser-heavy",
            RuntimeProfile::GeneralAssistant => "general-assistant",
        })),
        planner_mode: Some(format_enum_value(match self.planner_mode {
            PlannerMode::ZeroclawPlanFirst => "zeroclaw-plan-first",
            PlannerMode::LegacyDeterministic => "legacy-deterministic",
        })),
        active_provider: Some(self.active_provider.clone()),
        browser_backend: Some(format_enum_value(match self.browser_backend {
            BrowserBackend::SuperRpa => "super-rpa",
            BrowserBackend::AgentBrowser => "agent-browser",
            BrowserBackend::RustNative => "rust-native",
            BrowserBackend::ComputerUse => "computer-use",
            BrowserBackend::Auto => "auto",
        })),
        office_backend: Some(format_enum_value(match self.office_backend {
            OfficeBackend::OpenXml => "openxml",
            OfficeBackend::Disabled => "disabled",
        })),
        browser_ws_url: self.browser_ws_url.clone(),
        service_ws_listen_addr: self.service_ws_listen_addr.clone(),
        providers: self
            .providers
            .iter()
            .map(|p| SerializableProviderSettings {
                id: p.id.clone(),
                provider: Some(p.provider.clone()),
                api_key: p.api_key.clone(),
                base_url: p.base_url.clone(),
                model: p.model.clone(),
                api_path: p.api_path.clone(),
                wire_api: p.wire_api.clone(),
                requires_openai_auth: p.requires_openai_auth,
            })
            .collect(),
    }
}
```
- [ ] **Step 4: Add the `save_to_path()` method**

In the same `impl SgClawSettings` block, add:

```rust
pub fn save_to_path(&self, path: &Path) -> Result<(), ConfigError> {
    let serializable = self.to_serializable();
    let json = serde_json::to_string_pretty(&serializable)
        .map_err(|err| ConfigError::ConfigParse(path.to_path_buf(), err.to_string()))?;
    std::fs::write(path, json)
        .map_err(|err| ConfigError::ConfigRead(path.to_path_buf(), err.to_string()))
}
```

- [ ] **Step 5: Run tests to verify compilation**

Run: `cargo test --lib config::settings`
Expected: PASS
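`save_to_path` above overwrites the config file in place; if the process dies mid-write, the file can be left truncated. If crash-safety ever matters here, a write-then-rename variant is one option (std only; `write_atomically` is a hypothetical alternative, not part of this plan):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Hypothetical crash-safer save: write a sibling temp file, then rename it
// over the target. rename() is atomic within a single filesystem, so readers
// see either the old file or the complete new one, never a partial write.
fn write_atomically(path: &Path, contents: &str) -> std::io::Result<()> {
    let tmp = path.with_extension("json.tmp");
    {
        let mut file = fs::File::create(&tmp)?;
        file.write_all(contents.as_bytes())?;
        file.sync_all()?; // flush data to disk before the rename makes it visible
    }
    fs::rename(&tmp, path)
}

fn main() -> std::io::Result<()> {
    let target = std::env::temp_dir().join("sgclaw_config_demo.json");
    write_atomically(&target, "{\"model\":\"demo\"}")?;
    assert_eq!(fs::read_to_string(&target)?, "{\"model\":\"demo\"}");
    fs::remove_file(&target)?;
    println!("ok");
    Ok(())
}
```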
### Task 4: Handle `UpdateConfig` in the service server

**Files:**
- Modify: `src/service/server.rs`
- Modify: `src/service/mod.rs` (if needed for imports)

- [ ] **Step 1: Add an `UpdateConfig` match arm in `serve_client`**

In the `match message` block in `serve_client`, after the `SubmitTask` arm, add:

```rust
ClientMessage::UpdateConfig { config } => {
    let Some(config_path) = context.config_path() else {
        sink.send_service_message(ServiceMessage::ConfigUpdated {
            success: false,
            message: "未找到配置文件路径。请通过 --config-path 参数启动 sg_claw 后再使用此功能。".to_string(),
        })?;
        continue;
    };

    if !config_path.exists() {
        sink.send_service_message(ServiceMessage::ConfigUpdated {
            success: false,
            message: format!("配置文件不存在: {}", config_path.display()),
        })?;
        continue;
    }

    let result = update_config_file(config_path, config);
    match result {
        Ok(()) => {
            sink.send_service_message(ServiceMessage::ConfigUpdated {
                success: true,
                message: "配置已保存。重启 sg_claw 以应用新配置。".to_string(),
            })?;
        }
        Err(err) => {
            sink.send_service_message(ServiceMessage::ConfigUpdated {
                success: false,
                message: format!("保存配置失败: {}", err),
            })?;
        }
    }
}
```
- [ ] **Step 2: Add `update_config_file` helper function**

Add this function above `serve_client` in `server.rs`:

```rust
use crate::config::settings::{ConfigError, SgClawSettings};
use crate::service::protocol::ConfigUpdatePayload;
use std::path::Path;

fn update_config_file(config_path: &Path, config: ConfigUpdatePayload) -> Result<(), String> {
    let mut settings = SgClawSettings::load(Some(config_path))
        .map_err(|e| e.to_string())?
        .ok_or_else(|| "无法读取现有配置".to_string())?;

    if let Some(v) = config.api_key {
        settings.provider_api_key = v;
    }
    if let Some(v) = config.base_url {
        settings.provider_base_url = v;
    }
    if let Some(v) = config.model {
        settings.provider_model = v;
    }
    if let Some(v) = config.skills_dir {
        settings.skills_dir = Some(PathBuf::from(&v));
    }
    if let Some(v) = config.direct_submit_skill {
        settings.direct_submit_skill = Some(v);
    }
    if let Some(v) = config.runtime_profile {
        settings.runtime_profile = match v.as_str() {
            "browser-attached" => crate::config::settings::RuntimeProfile::BrowserAttached,
            "browser-heavy" => crate::config::settings::RuntimeProfile::BrowserHeavy,
            "general-assistant" => crate::config::settings::RuntimeProfile::GeneralAssistant,
            _ => return Err(format!("无效的 runtimeProfile: {}", v)),
        };
    }
    if let Some(v) = config.browser_backend {
        settings.browser_backend = match v.as_str() {
            "super-rpa" => crate::config::settings::BrowserBackend::SuperRpa,
            "agent-browser" => crate::config::settings::BrowserBackend::AgentBrowser,
            "rust-native" => crate::config::settings::BrowserBackend::RustNative,
            "computer-use" => crate::config::settings::BrowserBackend::ComputerUse,
            "auto" => crate::config::settings::BrowserBackend::Auto,
            _ => return Err(format!("无效的 browserBackend: {}", v)),
        };
    }

    settings
        .save_to_path(config_path)
        .map_err(|e| format!("写入配置文件失败: {}", e))
}
```

Add the import at the top of `server.rs`:

```rust
use std::path::PathBuf;
```

- [ ] **Step 3: Run tests to verify compilation**

Run: `cargo build`
Expected: SUCCESS

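The helper above follows one simple merge rule: every payload field is optional, and only fields the client actually sent overwrite the loaded settings. A minimal standalone sketch of that pattern, using simplified stand-in types (`Settings`, `Payload`) rather than the real `SgClawSettings` and `ConfigUpdatePayload`:

```rust
// Simplified stand-ins for the real types in src/config/settings.rs and
// src/service/protocol.rs; the field names here are illustrative.
#[derive(Debug, PartialEq)]
struct Settings {
    model: String,
    base_url: String,
}

#[derive(Default)]
struct Payload {
    model: Option<String>,
    base_url: Option<String>,
}

// Overlay only the fields that were actually sent, leaving the rest intact.
fn apply(settings: &mut Settings, payload: Payload) {
    if let Some(v) = payload.model {
        settings.model = v;
    }
    if let Some(v) = payload.base_url {
        settings.base_url = v;
    }
}
```

This is what lets a partial payload update one setting without clobbering the others.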
### Task 5: Add auto-connect and settings UI to the service console HTML

**Files:**
- Modify: `frontend/service-console/sg_claw_service_console.html`

- [ ] **Step 1: Add auto-connect on page load**

At the very end of the `<script>` section, after the existing event listeners and `updateUiState()`, add:

```javascript
// Auto-connect on page load
window.addEventListener("DOMContentLoaded", () => {
  connectOrDisconnectService(true);
});
```

- [ ] **Step 2: Add Settings button HTML**

In the sidebar section of the HTML, after the connect button and before the "Composer" section label, add:

```html
<button id="settingsBtn" class="ghost-btn" style="margin-top: 8px;">⚙ 设置</button>
```

- [ ] **Step 3: Add Settings modal HTML**

Before the closing `</body>` tag, add the modal HTML:

```html
<!-- Settings Modal -->
<div id="settingsModal" style="display: none; position: fixed; top: 0; left: 0; width: 100%; height: 100%; background: rgba(0,0,0,0.5); z-index: 1000; align-items: center; justify-content: center;">
  <div style="background: var(--panel); border-radius: 20px; padding: 28px; width: min(520px, 90%); max-height: 85vh; overflow-y: auto; box-shadow: var(--shadow);">
    <h3 style="margin: 0 0 20px; font-size: 1.2rem;">sgClaw 配置</h3>

    <div class="field">
      <label for="settingApiKey">API 密钥 *</label>
      <input id="settingApiKey" type="password" placeholder="输入模型 API 密钥" />
    </div>

    <div class="field">
      <label for="settingBaseUrl">模型服务地址 *</label>
      <input id="settingBaseUrl" type="url" placeholder="例如:https://api.deepseek.com" />
    </div>

    <div class="field">
      <label for="settingModel">模型名称 *</label>
      <input id="settingModel" type="text" placeholder="例如:deepseek-chat" />
    </div>

    <div class="field">
      <label for="settingSkillsDir">Skills 目录路径</label>
      <input id="settingSkillsDir" type="text" placeholder="例如:D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills" />
    </div>

    <div class="field">
      <label for="settingDirectSubmitSkill">直接提交技能</label>
      <input id="settingDirectSubmitSkill" type="text" placeholder="例如:tq-lineloss-report.collect_lineloss" />
    </div>

    <div class="field">
      <label for="settingRuntimeProfile">运行模式</label>
      <select id="settingRuntimeProfile" style="width: 100%; border: 1px solid var(--line); border-radius: 16px; padding: 14px 16px; background: rgba(255, 255, 255, 0.92); color: var(--text); font: inherit;">
        <option value="browser-attached">browser-attached</option>
        <option value="browser-heavy">browser-heavy</option>
        <option value="general-assistant">general-assistant</option>
      </select>
    </div>

    <div class="field">
      <label for="settingBrowserBackend">浏览器后端</label>
      <select id="settingBrowserBackend" style="width: 100%; border: 1px solid var(--line); border-radius: 16px; padding: 14px 16px; background: rgba(255, 255, 255, 0.92); color: var(--text); font: inherit;">
        <option value="super-rpa">super-rpa</option>
        <option value="agent-browser">agent-browser</option>
        <option value="rust-native">rust-native</option>
        <option value="computer-use">computer-use</option>
        <option value="auto">auto</option>
      </select>
    </div>

    <div id="settingsValidation" style="color: var(--error); font-size: 0.92rem; min-height: 1.4em; margin: 10px 0;"></div>

    <div style="display: flex; gap: 12px; margin-top: 16px;">
      <button id="settingsSaveBtn" class="primary-btn" style="flex: 1;">保存</button>
      <button id="settingsCancelBtn" class="ghost-btn" style="flex: 1;">取消</button>
    </div>
  </div>
</div>
```

- [ ] **Step 4: Add settings modal CSS**

Add these CSS rules inside the `<style>` block, before the `@media` query:

```css
/* Settings modal elements */
select {
  width: 100%;
  border: 1px solid var(--line);
  border-radius: 16px;
  padding: 14px 16px;
  background: rgba(255, 255, 255, 0.92);
  color: var(--text);
  font: inherit;
  outline: none;
  cursor: pointer;
}

select:focus {
  border-color: rgba(15, 118, 110, 0.5);
  box-shadow: 0 0 0 4px rgba(15, 118, 110, 0.12);
}
```

- [ ] **Step 5: Add settings modal JavaScript logic**

Add this JavaScript at the end of the `<script>` section, before the closing `</script>` tag:

```javascript
// Settings modal state
const settingsElements = {
  modal: document.getElementById("settingsModal"),
  apiKey: document.getElementById("settingApiKey"),
  baseUrl: document.getElementById("settingBaseUrl"),
  model: document.getElementById("settingModel"),
  skillsDir: document.getElementById("settingSkillsDir"),
  directSubmitSkill: document.getElementById("settingDirectSubmitSkill"),
  runtimeProfile: document.getElementById("settingRuntimeProfile"),
  browserBackend: document.getElementById("settingBrowserBackend"),
  validation: document.getElementById("settingsValidation"),
  saveBtn: document.getElementById("settingsSaveBtn"),
  cancelBtn: document.getElementById("settingsCancelBtn"),
};
let settingsOpenBtn = null; // will be set below

function openSettingsModal() {
  // Reset the form; the service does not echo current values back, so the
  // modal always opens with defaults.
  settingsElements.apiKey.value = "";
  settingsElements.baseUrl.value = "";
  settingsElements.model.value = "";
  settingsElements.skillsDir.value = "";
  settingsElements.directSubmitSkill.value = "";
  settingsElements.runtimeProfile.value = "browser-attached";
  settingsElements.browserBackend.value = "super-rpa";
  settingsElements.validation.textContent = "";
  // Restore the default validation color (it may still be the success color
  // from a previous save).
  settingsElements.validation.style.color = "var(--error)";
  settingsElements.modal.style.display = "flex";
}

function closeSettingsModal() {
  settingsElements.modal.style.display = "none";
}

function validateSettings() {
  const apiKey = settingsElements.apiKey.value.trim();
  const baseUrl = settingsElements.baseUrl.value.trim();
  const model = settingsElements.model.value.trim();

  if (!apiKey) {
    return "API 密钥不能为空";
  }
  if (!model) {
    return "模型名称不能为空";
  }
  if (!baseUrl) {
    return "模型服务地址不能为空";
  }
  try {
    new URL(baseUrl);
  } catch {
    return "模型服务地址格式无效,请输入有效的 URL";
  }
  return "";
}

function saveSettings() {
  const error = validateSettings();
  if (error) {
    settingsElements.validation.textContent = error;
    return;
  }

  if (!socket || socket.readyState !== WebSocket.OPEN) {
    settingsElements.validation.textContent = "请先连接服务";
    return;
  }

  settingsElements.validation.textContent = "";
  settingsElements.saveBtn.disabled = true;
  settingsElements.saveBtn.textContent = "保存中...";

  const config = {
    apiKey: settingsElements.apiKey.value.trim(),
    baseUrl: settingsElements.baseUrl.value.trim(),
    model: settingsElements.model.value.trim(),
  };

  const skillsDir = settingsElements.skillsDir.value.trim();
  if (skillsDir) config.skillsDir = skillsDir;

  const directSubmitSkill = settingsElements.directSubmitSkill.value.trim();
  if (directSubmitSkill) config.directSubmitSkill = directSubmitSkill;

  config.runtimeProfile = settingsElements.runtimeProfile.value;
  config.browserBackend = settingsElements.browserBackend.value;

  socket.send(JSON.stringify({
    type: "update_config",
    config,
  }));
}

function handleConfigResponse(message) {
  settingsElements.saveBtn.disabled = false;
  settingsElements.saveBtn.textContent = "保存";

  if (message.success) {
    settingsElements.validation.textContent = message.message;
    settingsElements.validation.style.color = "var(--success)";
    // Auto-close after 2 seconds on success
    setTimeout(closeSettingsModal, 2000);
  } else {
    settingsElements.validation.textContent = message.message;
    settingsElements.validation.style.color = "var(--error)";
  }
}

// Event listeners for settings
settingsOpenBtn = document.getElementById("settingsBtn");
settingsOpenBtn.addEventListener("click", openSettingsModal);
settingsElements.cancelBtn.addEventListener("click", closeSettingsModal);
settingsElements.saveBtn.addEventListener("click", saveSettings);

// Close modal on background click
settingsElements.modal.addEventListener("click", (e) => {
  if (e.target === settingsElements.modal) {
    closeSettingsModal();
  }
});
```

- [ ] **Step 6: Handle `config_updated` message in `handleMessage`**

In the existing `handleMessage` function, add a new case to the switch statement:

```javascript
case "config_updated":
  handleConfigResponse(message);
  break;
```

- [ ] **Step 7: Verify the HTML is well-formed**

Open the file in a browser and visually check that:
- The settings button appears below the connect button
- Clicking it opens the modal
- The modal closes on Cancel or background click

### Task 6: Add protocol tests for new message types

**Files:**
- Modify: `tests/service_console_html_test.rs`
- Create: `tests/service_protocol_update_config_test.rs`

- [ ] **Step 1: Create protocol serialization test**

Create `tests/service_protocol_update_config_test.rs`:

```rust
use sgclaw::service::protocol::{ClientMessage, ConfigUpdatePayload, ServiceMessage};

#[test]
fn update_config_serializes_correctly() {
    let config = ConfigUpdatePayload {
        api_key: Some("test-key".to_string()),
        base_url: Some("https://api.example.com".to_string()),
        model: Some("test-model".to_string()),
        skills_dir: Some("/path/to/skills".to_string()),
        direct_submit_skill: Some("my-skill.my-tool".to_string()),
        runtime_profile: Some("browser-attached".to_string()),
        browser_backend: Some("super-rpa".to_string()),
    };

    let msg = ClientMessage::UpdateConfig { config };
    let json = serde_json::to_string(&msg).unwrap();

    assert!(json.contains("\"type\":\"update_config\""));
    assert!(json.contains("\"apiKey\":\"test-key\""));
    assert!(json.contains("\"baseUrl\":\"https://api.example.com\""));
    assert!(json.contains("\"model\":\"test-model\""));
}

#[test]
fn update_config_deserializes_correctly() {
    let json = r#"{
        "type": "update_config",
        "config": {
            "apiKey": "key123",
            "baseUrl": "https://api.test.com",
            "model": "gpt-4"
        }
    }"#;

    let msg: ClientMessage = serde_json::from_str(json).unwrap();
    match msg {
        ClientMessage::UpdateConfig { config } => {
            assert_eq!(config.api_key, Some("key123".to_string()));
            assert_eq!(config.base_url, Some("https://api.test.com".to_string()));
            assert_eq!(config.model, Some("gpt-4".to_string()));
            assert!(config.skills_dir.is_none());
        }
        _ => panic!("expected UpdateConfig variant"),
    }
}

#[test]
fn config_updated_serializes_correctly() {
    let msg = ServiceMessage::ConfigUpdated {
        success: true,
        message: "配置已保存".to_string(),
    };
    let json = serde_json::to_string(&msg).unwrap();

    assert!(json.contains("\"type\":\"config_updated\""));
    assert!(json.contains("\"success\":true"));
    assert!(json.contains("配置已保存"));
}

#[test]
fn config_updated_deserializes_correctly() {
    let json = r#"{"type":"config_updated","success":false,"message":"保存失败"}"#;
    let msg: ServiceMessage = serde_json::from_str(json).unwrap();

    match msg {
        ServiceMessage::ConfigUpdated { success, message } => {
            assert!(!success);
            assert_eq!(message, "保存失败");
        }
        _ => panic!("expected ConfigUpdated variant"),
    }
}
```

- [ ] **Step 2: Update service console HTML test**

Add to `tests/service_console_html_test.rs`, at the end of the existing test:

```rust
// New enhancement assertions
assert!(source.contains("DOMContentLoaded"));
assert!(source.contains("settingsBtn"));
assert!(source.contains("settingsModal"));
assert!(source.contains("update_config"));
assert!(source.contains("config_updated"));
assert!(source.contains("settingApiKey"));
assert!(source.contains("settingBaseUrl"));
assert!(source.contains("settingModel"));
```

- [ ] **Step 3: Run all new tests**

Run: `cargo test --test service_protocol_update_config_test`
Run: `cargo test --test service_console_html_test`
Expected: All PASS

### Task 7: Full build and test verification

- [ ] **Step 1: Run full test suite**

Run: `cargo test 2>&1`
Expected: All tests pass (except the pre-existing `lineloss_period_resolver_prompts_for_missing_period` failure, which was already failing before these changes)

- [ ] **Step 2: Build release binary**

Run: `cargo build --release 2>&1`
Expected: SUCCESS

### Task 8: Manual smoke test instructions

After implementation, verify manually:

1. Start sg_claw with a config path: `sg_claw.exe --config-path sgclaw_config.json`
2. Open `sg_claw_service_console.html` in a browser
3. Verify: the page auto-connects (it should show "已连接" within a few seconds)
4. Click the "设置" button
5. Fill in API Key, Base URL, and Model
6. Click "保存"
7. Verify: the modal shows "配置已保存。重启 sg_claw 以应用新配置。" and auto-closes after 2 seconds
8. Verify: `sgclaw_config.json` now contains the new values
9. Verify: existing task submission still works (send a test instruction)

# Scene Skill Runtime Routing Design

**Goal:** Add a minimal, extensible scene-routing layer so staged business scenes can be triggered from natural language while still executing through the existing browser-backed skill path.

**Architecture:** Introduce a registry-driven scene contract loader that reads staged `scene.json` metadata, matches user instructions to a scene, and chooses one of two dispatch modes: direct browser execution or agent-mediated browser execution. Both modes must reuse the same browser-backed skill tool path so scene skills continue to execute through browser-internal methods rather than text-only responses or local fake execution.

**Tech Stack:** Rust, serde/JSON scene metadata loading, existing `BrowserScriptSkillTool`, existing compat runtime / runtime engine / workflow executor layers, focused Rust unit tests.

---

## Problem Statement

The codebase already supports two useful but separate ideas:

1. **Zhihu special-case runtime routing**
   - `src/compat/workflow_executor.rs` detects a narrow set of Zhihu tasks and can execute them directly without relying on the model to choose tools.
   - This is stable, but not extensible for a growing set of business scenes.

2. **Browser-backed skills**
   - `src/compat/runtime.rs` loads skills and exposes `browser_script` tools through `BrowserScriptSkillTool`.
   - `src/compat/browser_script_skill_tool.rs` executes those tools by calling the browser backend with `Action::Eval`, so actual execution already happens through browser-internal methods.
   - This is extensible, but tool choice currently depends too heavily on generic agent behavior.

The staged business scenes under `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging` already provide most of the metadata needed to bridge these two ideas. We need a first integration slice that uses scene metadata to improve routing without turning every scene into a hardcoded Zhihu-style exception.

## Design Goals

- Support natural-language triggering for staged scenes.
- Preserve the current browser-backed execution contract: both scene modes must end in browser-internal execution via the existing browser tool path.
- Support both dispatch styles discussed with the user:
  - one scene that can execute without the model
  - one scene that still uses the model for orchestration
- Keep the first slice small, covering only:
  - `fault-details-report`
  - `95598-repair-city-dispatch`
- Keep the design extensible so more scene skills can be added in the same directory later without more ad hoc routing branches.
- Avoid broad refactors or a new generic workflow platform in this slice.

## Non-Goals

- Do not build a scene editor, scene UI, or registry authoring workflow.
- Do not implement a full artifact post-processing platform for all report/monitor types.
- Do not convert every staged scene into a direct Rust executor.
- Do not replace the existing Zhihu-specific runtime path in this slice.

## Source of Truth and Paths

### Staged scene source
The new staged scene source for this work is:

- `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging`

The runtime integration must read scene metadata from this location for the initial slice.

### Existing runtime integration points
- `src/compat/config_adapter.rs` — current skills-dir resolution logic
- `src/compat/runtime.rs` — current skill loading and browser-script tool exposure
- `src/runtime/engine.rs` — runtime instruction building and allowed-tool shaping
- `src/compat/workflow_executor.rs` — existing direct execution routing pattern
- `src/compat/browser_script_skill_tool.rs` — browser-backed execution path for `browser_script` tools

## Scene Contract Model

Introduce a small internal scene contract model derived from `scene.json` and paired runtime policy. The loader should extract only the fields needed for the first slice:

- `id`
- `name`
- `summary`
- `tags`
- `inputs`
- `outputs`
- `skill.package`
- `skill.tool`
- `skill.artifact_type`

Add a runtime-only dispatch policy associated with each enabled scene inside the same internal registry entry used at runtime:

- `dispatch_mode`
  - `direct_browser`
  - `agent_browser`
- `expected_domain`
  - bare hostname required by the underlying browser-backed skill tool
- optional `aliases`
  - additional deterministic keywords/phrases when `id`/`name`/`summary`/`tags` are not enough for first-slice matching
- optional `default_args`
  - runtime-supplied tool arguments when a scene needs fixed/default values for first execution

This runtime policy may be hardcoded in Rust for the first slice, but it must be represented through one consistent scene-routing abstraction so future scenes can join the same path without rewriting the whole design. The abstraction should be a single registry entry type that combines scene metadata with runtime dispatch policy, rather than a metadata loader plus a separate ad hoc match table.

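The lists above can be collapsed into one registry entry type that pairs `scene.json` metadata with runtime dispatch policy. A sketch under the assumption that string-typed fields suffice for the first slice; the names mirror the lists above but are not a final definition:

```rust
use std::collections::BTreeMap;

/// Dispatch policy chosen at runtime for an enabled scene.
#[derive(Debug, Clone, PartialEq)]
enum DispatchMode {
    DirectBrowser,
    AgentBrowser,
}

/// Metadata extracted from a staged `scene.json` (first-slice fields only).
#[derive(Debug, Clone)]
struct SceneMetadata {
    id: String,
    name: String,
    summary: String,
    tags: Vec<String>,
    skill_package: String,
    skill_tool: String,
}

/// One registry entry: scene metadata paired with runtime dispatch policy,
/// so future scenes join the same path without a separate match table.
#[derive(Debug, Clone)]
struct SceneRegistryEntry {
    metadata: SceneMetadata,
    dispatch_mode: DispatchMode,
    expected_domain: String,
    aliases: Vec<String>,
    default_args: BTreeMap<String, String>,
}
```

Keeping the policy inside the entry (rather than a side table) is what makes adding a third scene a data change instead of a new routing branch.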
## Dispatch Modes

### 1. `direct_browser`
This mode is for scenes whose collection flow is deterministic enough to bypass the model once the scene is recognized.

**Initial scene:** `fault-details-report`

**Behavior:**
- Detect scene from natural language.
- Resolve the corresponding browser-backed skill tool.
- Execute it directly through the existing browser-backed skill path.
- Return the collected artifact result without delegating tool choice to the model.

**Important constraint:**
This is not a local fake implementation. Even in direct mode, the actual collection must still go through the existing browser-backed execution path, meaning it ultimately uses browser-internal methods through the browser backend.

### 2. `agent_browser`
This mode is for scenes that still benefit from agent orchestration, explanation, or downstream reasoning, but whose business data must still come from browser-backed execution.

**Initial scene:** `95598-repair-city-dispatch`

**Behavior:**
- Detect scene from natural language.
- Inject a strong scene execution contract into the runtime instruction.
- Treat calling the matching browser-backed skill tool first as a policy requirement for the scene.
- In slice one, enforce that policy through scene-specific instruction injection rather than a hard runtime gate.
- Allow generic browser probing only as a fallback after the scene tool fails.
- Keep final explanation/summarization in the agent path, but never let the model invent business data.

## Matching Strategy

Implement a minimal matcher that scores user instructions against enabled scenes using:

- scene `id`
- scene `name`
- scene `summary`
- scene `tags`
- optional runtime aliases for the first slice

The matcher should be intentionally simple and deterministic in this slice. Avoid semantic embedding or fuzzy retrieval infrastructure.

Expected first-slice matches:

- `fault-details-report`
  - phrases like `故障明细`, `故障明细报表`, `导出故障明细`
- `95598-repair-city-dispatch`
  - phrases like `95598抢修市指`, `市指抢修监测`, `95598抢修队列`

If no scene matches, runtime behavior must remain unchanged.

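A matcher this simple can be plain substring counting over each scene's deterministic phrases. An illustrative sketch (the function names and the scoring rule are assumptions, not a fixed design):

```rust
/// Score a user instruction against one scene's deterministic phrases.
/// Returns the number of matched phrases; 0 means no match.
fn scene_score(instruction: &str, keywords: &[&str]) -> usize {
    keywords.iter().filter(|k| instruction.contains(*k)).count()
}

/// Pick the best-scoring scene id, or None so runtime behavior stays
/// unchanged when nothing matches.
fn match_scene<'a>(instruction: &str, scenes: &[(&'a str, Vec<&str>)]) -> Option<&'a str> {
    scenes
        .iter()
        .map(|(id, kws)| (*id, scene_score(instruction, kws)))
        .filter(|(_, score)| *score > 0)
        .max_by_key(|(_, score)| *score)
        .map(|(id, _)| id)
}
```

Ties resolve arbitrarily under `max_by_key` here; a real slice might prefer the longest matched phrase or require a minimum score.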
## Runtime Loading Design

### Scene registry loading
Add a small loader that reads enabled scenes from the staged scene directory. For the first slice, it is acceptable to read the concrete scene files directly instead of implementing a full generic registry parser, as long as the resulting module boundary is registry-oriented rather than one-off.

The loader should:
- resolve the staged scene root
- read the two initial `scene.json` files
- deserialize them into a small internal scene metadata struct
- pair them with dispatch policy in the same in-memory registry entry
- ignore malformed or missing scenes safely
- never fail runtime startup solely because one or both initial scene files are absent

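The fail-safe rule above (skip what is missing or unreadable, never abort startup) can be sketched as `Option`-based filtering. The `scene.json` deserialization step is elided here, and the directory layout is an assumption:

```rust
use std::path::Path;

/// First-slice scene ids the loader looks for.
const INITIAL_SCENES: [&str; 2] = ["fault-details-report", "95598-repair-city-dispatch"];

/// Load whatever scene files exist; a missing or unreadable scene is simply
/// dropped from the registry instead of failing runtime startup.
fn load_enabled_scene_ids(scene_root: &Path) -> Vec<String> {
    INITIAL_SCENES
        .iter()
        .filter_map(|id| {
            // Assumed layout: <scene_root>/<scene-id>/scene.json
            let path = scene_root.join(id).join("scene.json");
            // A read failure yields None via .ok(), which filter_map skips;
            // nothing here can propagate an error up to startup.
            std::fs::read_to_string(&path).ok().map(|_| id.to_string())
        })
        .collect()
}
```

The same `.ok()`-and-skip shape extends naturally to the deserialization step: a malformed file parses to `None` and is dropped, not fatal.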
### Skill loading alignment
The corresponding skill packages must still be loaded into runtime skill exposure so the browser-backed tools are available to the runtime.

For this slice, the staged scene source and staged skill packages should be treated as coming from the same external root:
- staged scenes under `.../skill_staging/scenes`
- staged skill packages under `.../skill_staging/skills`

The implementation must make that staged skill package root visible to runtime skill loading. If current `skills_dir` resolution cannot express that directly, the design should extend configuration/path resolution to support a staged external skills root explicitly rather than relying on implicit mirroring.

## Execution Design

### Direct browser path (`fault-details-report`)
Add a direct execution route that is scene-driven rather than Zhihu-specific.

High-level flow:
1. Runtime receives the user instruction.
2. Scene matcher recognizes `fault-details-report`.
3. Runtime resolves the browser-backed tool name `fault-details-report.collect_fault_details`.
4. Runtime builds the required tool arguments, including:
   - `expected_domain` from the matched scene's runtime policy
   - any first-slice scene inputs that can be deterministically derived from the current request/context
   - any fixed/default args declared in runtime policy
5. Runtime executes that skill through the existing browser-backed mechanism.
6. Runtime returns normalized tool output as the direct route result.

Input/argument rules for the first slice:
- Direct execution is only allowed when all required tool arguments are available.
- `expected_domain` must always come from runtime scene policy, not from model inference.
- If a required scene/tool input cannot be derived from the user request or current browser context, the direct route must fail clearly instead of fabricating values.
- The first slice may keep direct-mode argument mapping intentionally narrow; unsupported requests should fall back safely rather than guessing.

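The argument rules reduce to a guard: seed `expected_domain` from policy, then refuse direct mode whenever a required input was not derived. A hypothetical sketch (the function name and string-map argument shape are illustrative):

```rust
use std::collections::BTreeMap;

/// Build direct-mode tool arguments. Direct execution is only allowed when
/// every required input is available; otherwise fail clearly instead of
/// fabricating values.
fn build_direct_args(
    expected_domain: &str,              // always from runtime scene policy
    required_inputs: &[&str],
    derived: &BTreeMap<String, String>, // inputs derived from the request/context
) -> Result<BTreeMap<String, String>, String> {
    let mut args = BTreeMap::new();
    args.insert("expected_domain".to_string(), expected_domain.to_string());
    for name in required_inputs {
        match derived.get(*name) {
            Some(v) => {
                args.insert((*name).to_string(), v.clone());
            }
            // A missing input is a clear error, which lets the caller fall
            // back safely rather than guess a value.
            None => return Err(format!("missing required scene input: {}", name)),
        }
    }
    Ok(args)
}
```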
Return-shape rule for the first slice:
- The direct route should return normalized serialized tool output (for example, the tool payload string or normalized JSON text), not a model-authored prose summary. This keeps direct mode deterministic and makes the browser-backed result explicit.

Implementation note:
The cleanest first slice is to add a small scene direct-execution helper in the compat runtime/workflow area that invokes the already-loaded browser-backed skill tool abstraction rather than duplicating browser request logic.

### Agent browser path (`95598-repair-city-dispatch`)

This path stays inside the agent flow.

High-level flow:

1. Runtime receives user instruction.
2. Scene matcher recognizes `95598-repair-city-dispatch`.
3. `RuntimeEngine::build_instruction` injects a scene execution contract containing:
   - the matched scene name
   - the required tool name `95598-repair-city-dispatch.collect_repair_orders`
   - explicit requirement that this is a browser workflow, not a text-only task
   - explicit requirement that business data must come from the browser-backed scene tool
   - fallback rules for generic browser probing only after tool failure
4. Agent runs and chooses the required tool.
5. Tool executes through the existing browser-backed skill path.
6. Agent may summarize the result, but cannot fabricate data.
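The injected scene execution contract described in step 3 could look like the sketch below. The field names here are assumptions for illustration, not the runtime's actual schema; only the scene and tool names come from the design above.

```javascript
// Illustrative shape of the injected scene execution contract;
// field names are assumptions, not the runtime's actual schema.
function buildSceneContract(sceneName, toolName) {
  return {
    scene: sceneName,
    required_tool: toolName,
    execution_kind: "browser-workflow",       // not a text-only task
    data_source_rule: "browser-backed-tool",  // business data must come from the tool
    fallback: "generic-browser-probing-after-tool-failure",
  };
}

const contract = buildSceneContract(
  "95598-repair-city-dispatch",
  "95598-repair-city-dispatch.collect_repair_orders"
);
```

Rendering the contract as structured data (rather than free prose) makes the prompt-injection step testable: assertions can check that a matched scene always carries the required tool name and the browser-workflow requirement.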
Enforcement note for the first slice:

- The `agent_browser` guarantee is primarily an instruction-contract guarantee in slice one.
- If allowed-tool shaping can narrow the exposed tool set for a matched scene without destabilizing existing behavior, that is a valid enhancement, but it is not required for the first slice.
- The minimum guaranteed behavior for slice one is strong scene-specific prompt injection plus preservation of the rule that the model must not invent collected business data.
## Browser Execution Contract

This requirement is non-negotiable for both dispatch modes:

- scene skills must execute like the Zhihu flow in the sense that the final business action is performed through browser-internal methods
- scene skills must not devolve into text-only pseudo execution
- direct mode and agent mode both reuse the existing browser-backed skill execution path

Concretely, the final path for scene skill execution should remain compatible with:

- `BrowserScriptSkillTool`
- browser backend invocation
- browser-side `Eval` / browser action execution semantics
## Error Handling

- **Scene metadata missing or invalid:** skip that scene and continue with normal runtime behavior.
- **Scene matched but skill/tool unavailable:** do not crash; log enough context for diagnosis and fall back safely.
- **Browser surface unavailable:** disable scene browser routing for that turn and fall back to current non-scene behavior.
- **Tool execution fails in `agent_browser` mode:** allow existing fallback prompt behavior to continue, but preserve the rule that the model cannot invent collected data.
- **Tool execution fails in `direct_browser` mode:** return a concise execution failure instead of pretending collection succeeded.
## Extensibility Rules

This slice should be built so future scene additions only need:

- a new scene metadata file under the staged scene path
- a matching skill package/tool
- a dispatch-mode declaration/policy
- optional aliases if the natural-language names are not sufficiently explicit

Avoid these anti-patterns:

- per-scene `if user said X then do Y` branches scattered across runtime files
- duplicating browser execution code for each scene
- binding future scenes to Zhihu-specific assumptions
## Testing Strategy

### Scene registry tests

- load valid metadata for `fault-details-report`
- load valid metadata for `95598-repair-city-dispatch`
- ignore broken/missing scene files safely

### Matching tests

- instruction variants match `fault-details-report`
- instruction variants match `95598-repair-city-dispatch`
- unrelated instructions do not match

### Instruction-building tests

- `agent_browser` scene injects the required browser-first scene contract
- unmatched instructions do not gain scene-specific constraints
- Zhihu-specific instruction behavior remains unchanged

### Tool exposure tests

- staged skills from the moved path are loaded into runtime
- browser-backed tool names include:
  - `fault-details-report.collect_fault_details`
  - `95598-repair-city-dispatch.collect_repair_orders`

### Direct execution tests

- `fault-details-report` direct route invokes the browser-backed tool path rather than bypassing the browser layer
- direct route returns failure cleanly when tool execution fails
## Recommended First Implementation Slice

1. Add a tiny scene metadata loader and dispatch-mode policy module.
2. Extend runtime path resolution so the moved staged skills/scenes are visible.
3. Add deterministic scene matching for the two initial scenes.
4. Implement `agent_browser` instruction injection for `95598-repair-city-dispatch`.
5. Implement `direct_browser` execution for `fault-details-report` using the browser-backed skill path.
6. Add focused tests for matching, loading, tool exposure, and direct-vs-agent behavior.
## Open Design Constraint Captured From Discussion

The user explicitly requires the following combined behavior:

- support both kinds of scene execution in the same architecture
- one initial scene should be able to execute without the model
- one initial scene should execute through the model
- both must still use browser-internal execution methods like the Zhihu path
- the design must stay extensible because more staged skills may be added under the same path later

This design is built around those exact constraints.
@@ -0,0 +1,217 @@
# 95598-repair-city-dispatch Operation Analysis

## 1. Scene Overview

`95598-repair-city-dispatch` corresponds to the "95598抢修-市指" (95598 repair, city dispatch) scene. Its goal is to monitor the repair work-order queue and, when necessary, trigger follow-up actions such as reminders, log writes, and automatic dispatch. Based on `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\95598-repair-city-dispatch\scene.json`, `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-repair-city-dispatch\SKILL.md`, `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-repair-city-dispatch\scripts\collect_repair_orders.js`, and the two rule assets, the most rigorous current conclusion is: the scene's packaged JS collector already implements input-driven `monitor-snapshot` normalization and comparison logic — it buckets repair orders by status, parses monitor/dispose logs, derives `pending_ids` / `new_pending_ids`, yields a `success/partial/empty/blocked` status, and attaches desk rule sources, the config base page, and known-issue metadata. The stronger business monitoring, reminder, and auto-dispatch workflow evidence, however, still lives mainly in the desk rule assets. Evidence level for each: `code-confirmed`.

Three evidence layers must be distinguished explicitly:

1. Packaged runtime-snapshot-collector: `collect_repair_orders.js` directly implements repair-order classification, history comparison, status determination, and normalized snapshot output, and explicitly carries `workflow_rule_sources`, `config_base_page`, `config_base_role`, `packaged_collector_role`, and `known_issues`. Evidence level: `code-confirmed`.
2. Business monitoring logic: `D:\desk\智能体资料\大四区报告监测项\95598抢修-市指_业务检测配置.txt` directly shows queue collection, status classification, monitor-log comparison, audio reminders, and monitor-log writes. Evidence level: `code-confirmed`.
3. Auto-dispatch / reminder logic: `D:\desk\智能体资料\大四区报告监测项\95598抢修-市指_自动处理配置.txt` directly shows deduplication, crew matching, auto-dispatch requests, audio reminders, SMS sending, outbound-call triggering, and dispose-log writes. Evidence level: `code-confirmed`.

These `code-confirmed` labels only mean "these implementation branches or action definitions exist in code or rule assets", not "verified successful at runtime". This document makes no inflated claims about runtime success.
## 2. Evidence Sources

This analysis uses exactly four evidence-level labels: `code-confirmed`, `contract-defined`, `implementation intent exists but not rigorous / buggy`, and `no direct evidence / candidate only`.

1. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-repair-city-dispatch\scripts\collect_repair_orders.js`
   - Directly defines `STATUS_GROUPS`, `LOCAL_SERVICE_ENDPOINTS`, `WORKFLOW_RULE_SOURCES`, `CONFIG_BASE_PAGE`, and `KNOWN_ISSUES`, and implements repair-order classification, monitor/dispose-log parsing and comparison, `new_pending_ids` derivation, `success/partial/empty/blocked` status determination, and `monitor-snapshot` output carrying `evidence` / `known_issues`. Evidence level: `code-confirmed`.
2. `D:\desk\智能体资料\大四区报告监测项\95598抢修-市指_业务检测配置.txt`
   - Directly implements work-order queue collection, bucketing by status, pending-list comparison, audio reminders, and monitor-log writes, and exposes a pending-classification bug. Evidence level: `code-confirmed`.
3. `D:\desk\智能体资料\大四区报告监测项\95598抢修-市指_自动处理配置.txt`
   - Directly implements dispose-log deduplication, crew-scope matching, auto-dispatch requests, auto-dispatch success/failure/exception/unmatched branches, audio logging, SMS logging, outbound-call triggering, and `setDisposeLog` writes. Evidence level: `code-confirmed`.
4. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-repair-city-dispatch\SKILL.md`
   - Defines the runtime contract of "prefer the packaged collector, separate monitor snapshots from downstream actions, allow partial". Evidence level: `contract-defined`.
5. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-repair-city-dispatch\references\collection-flow.md`
   - Defines the first-order flow: enter through the page configuration, understand semantics with the rule assets, collect statuses `00/01/06/08`, and compare monitor/dispose logs. Evidence level: `contract-defined`.
6. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-repair-city-dispatch\references\data-quality.md`
   - Defines status classification, partial rules, the empty/failure distinction, and downstream side-effect boundaries. Evidence level: `contract-defined`.
7. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\95598-repair-city-dispatch\scene.json`
   - Declares the scene classification, input `time`, dependencies, and action types. Evidence level: `code-confirmed`.
8. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\95598-repair-city-dispatch\scene.draft.json`
   - Shows early hesitation over whether `trigger-alert` and `configServices` should be split out; a candidate-level artifact. Evidence level: `no direct evidence / candidate only`.
## 3. Actual Entry Point and Runtime Boundary

The actual entry point is fixed in `scene.json`: the scene page entry is `index.html`, the skill tool name is `95598-repair-city-dispatch.collect_repair_orders`, the output type is `monitor-snapshot`, and the input is `time`; all of this is `code-confirmed`.

`assets/scene-snapshot/index.html` should be treated only as a configuration base page (for example crew, contact, and scope maintenance), not as primary execution evidence for the rule workflow.

On the runtime boundary, two distinctions are mandatory:

- The actual boundary of the packaged JS runtime collector: given the inputs `repair_orders`, `monitor_logs`, and `dispose_logs`, it can classify statuses, compare history, derive `new_pending_ids`, determine `success/partial/empty/blocked`, and return a standard `monitor-snapshot`; but it remains an input-driven normalization collector — it neither issues browser requests itself nor carries the full business workflow. Evidence level: `code-confirmed`.
- The rule-asset behavior boundary: the business monitoring rule and the auto-processing rule respectively show browser requests, log comparison, reminder side effects, and auto-dispatch side effects. Evidence level: `code-confirmed`.

In other words, this scene cannot be summarized in one sentence as "a unified packaged collector fully implements real-time queue monitoring and auto-dispatch". The more rigorous statement is: the packaged collector implements testable, input-driven snapshot normalization and comparison logic, while the stronger evidence for real-time monitoring and auto-processing still comes from the desk rule assets. Evidence level: `code-confirmed`.

Meanwhile, `SKILL.md` and the references explicitly require that "snapshot collection succeeded" be expressed separately from "downstream effects such as audio, SMS, outbound calls, and auto-dispatch"; this is a runtime-contract constraint. Evidence level: `contract-defined`.
## 4. Operation Flow Directly Confirmed by Code

### 4.1 Confirmed flow of the packaged runtime-snapshot-collector

The following can now be strictly confirmed in `collect_repair_orders.js`:

1. `collectRepairOrders(input)` is called and reads `input.repair_orders`, `input.monitor_logs || input.monitor_log`, `input.dispose_logs || input.dispose_log`, `input.local_write_failures`, `input.blocked_reason`, and related inputs.
2. `classifyRepairOrders(...)` buckets repair orders by `STATUS_GROUPS.pending = ["00", "01"]`, `STATUS_GROUPS.audit = ["06"]`, and `STATUS_GROUPS.processed = ["08"]`, and records unknown statuses.
3. `pending_ids` is extracted from the pending orders, monitor/dispose logs are parsed, malformed payloads are identified, and `new_pending_ids` is derived from the comparison.
4. `status` is computed with the priority `blocked > partial > empty > success`, and unknown statuses, missing logs, log-parse failures, and local write failures are recorded in `partial_reasons`.
5. The return value carries `type: "monitor-snapshot"`, `scene: "95598-repair-city-dispatch"`, `pending`, `audit`, `processed`, `pending_ids`, `new_pending_ids`, `status`, and `partial_reasons`.
6. The return object also carries `evidence.workflow_rule_sources`, `evidence.config_base_page`, `evidence.config_base_role`, `evidence.packaged_collector_role = "runtime-snapshot-collector"`, and `known_issues`.
7. The module additionally exports `STATUS_GROUPS`, `LOCAL_SERVICE_ENDPOINTS`, `WORKFLOW_RULE_SOURCES`, `CONFIG_BASE_PAGE`, and `KNOWN_ISSUES`.

All of the above is `code-confirmed`.
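A simplified re-sketch of the classification and status-priority behavior described above is shown below. The status groups and the `blocked > partial > empty > success` priority follow the confirmed values; everything else is reduced to an illustration and is not the packaged script itself.

```javascript
// Simplified re-sketch of the confirmed bucketing and status-priority logic.
const STATUS_GROUPS = { pending: ["00", "01"], audit: ["06"], processed: ["08"] };

function classifyRepairOrders(orders) {
  const buckets = { pending: [], audit: [], processed: [], unknown: [] };
  for (const order of orders) {
    if (STATUS_GROUPS.pending.includes(order.status)) buckets.pending.push(order);
    else if (STATUS_GROUPS.audit.includes(order.status)) buckets.audit.push(order);
    else if (STATUS_GROUPS.processed.includes(order.status)) buckets.processed.push(order);
    else buckets.unknown.push(order); // unknown statuses are recorded, not dropped
  }
  return buckets;
}

function deriveStatus({ blocked, partialReasons, pendingIds }) {
  // Priority order confirmed in the script: blocked > partial > empty > success.
  if (blocked) return "blocked";
  if (partialReasons.length > 0) return "partial";
  if (pendingIds.length === 0) return "empty";
  return "success";
}
```

Keeping the priority in a single function makes the `partial_reasons` accumulation independent of the final status decision, which matches the separation the packaged collector is described as having.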
### 4.2 Confirmed flow of the business monitoring rule

The following can be confirmed directly in `95598抢修-市指_业务检测配置.txt`:

1. `BrowserAction("sgBrowerserJsAjax2", ...)` requests `repairOrder/list`, with query conditions including `statusName=00,01,06,08` and the current day's time window. Evidence level: `code-confirmed`.
2. The returned list is split by status into `list`, `shlist`, and `ycjList`, and `pending/audit/processed` plus `pendingList` are constructed. Evidence level: `code-confirmed`.
3. `getMonitorLog` is read, and the pending-list comparison decides whether to play an audio reminder. Evidence level: `code-confirmed`.
4. Monitoring results are written via `setMonitorData` and `setMonitorLog`. Evidence level: `code-confirmed`.
5. Audio-reminder outcomes are written to `setAudioPlayLog` in three states: success, failure, and exception. Evidence level: `code-confirmed`.

There is also one directly visible bug: the pending check is written as `item.status == "00" && item.status == "01"`, which can never hold for a single status value, so the rule's `pending` list construction is not rigorous. Evidence level: `implementation intent exists but not rigorous / buggy`.
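The bug quoted above can be reproduced in isolation. The snippet below contrasts the buggy condition with the evidently intended one; `isPendingBuggy` / `isPendingIntended` are illustrative names, not names from the rule asset.

```javascript
// The pending-classification bug quoted above, reproduced in isolation:
// a single status value can never equal "00" and "01" at the same time.
function isPendingBuggy(item) {
  return item.status == "00" && item.status == "01"; // always false
}

// The evidently intended check uses OR instead of AND.
function isPendingIntended(item) {
  return item.status == "00" || item.status == "01";
}
```

Because the buggy predicate is constant-false, any `pending` list built with it stays empty regardless of the input, which is why the rule's pending counts cannot be treated as stable facts.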
### 4.3 Confirmed flow of the auto-processing rule

The following can be confirmed directly in `95598抢修-市指_自动处理配置.txt`:

1. A monitor-log entry marking "entering auto-dispatch" is written first, then `getDisposeLog` is read to deduplicate already-dispatched orders. Evidence level: `code-confirmed`.
2. For pending orders not yet dispatched, `getClassList` is read and the fault location `gzdd` is matched against crew `scope`. Evidence level: `code-confirmed`.
3. On a successful match, `repairOrder/initProcess` is requested to auto-dispatch. Evidence level: `code-confirmed`.
4. On auto-dispatch success, a success audio announcement, SMS sending, and an outbound call are triggered, and `setDisposeLog(state="成功")` is written. Evidence level: `code-confirmed`.
5. On auto-dispatch failure, a failure audio announcement is triggered and `setDisposeLog(state="失败")` is written. Evidence level: `code-confirmed`.
6. On an auto-dispatch exception, an exception audio announcement is triggered and `setDisposeLog(state="异常")` is written. Evidence level: `code-confirmed`.
7. When no crew matches, an unmatched audio announcement is triggered and `setDisposeLog(state="未匹配")` is written. Evidence level: `code-confirmed`.

All of these actions are direct evidence only that "the implementation branches exist at the rule layer"; they do not mean runtime success has been verified.
## 5. Standardized Abstract Flow

For a rigorous command-center abstraction, this scene's standardized flow is better written as:

1. Receive the monitoring task input `time`.
2. Collect the 95598 repair queue using the browser requests defined by the rule assets.
3. Split the source data into `pending`, `audit`, and `processed`, preserving the rule layer's pending-list semantics.
4. Use the monitor log / dispose log as comparison context to derive "newly pending" orders or the set awaiting auto-processing.
5. If the standard-configuration normalization layer is entered, map these results to canonical fields such as `pending_ids` and `new_pending_ids`.
6. Return or retain the monitor-snapshot semantics first.
7. Only then execute downstream actions such as audio reminders, SMS, outbound calls, auto-dispatch, and log writes.
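The steps above can be sketched as one pipeline. This is a hedged illustration only: the `collectQueue`, `compareLogs`, and `runDownstream` hooks are placeholder names standing in for the rule-asset requests and side effects, which the packaged collector does not itself perform.

```javascript
// Hedged sketch of the standardized monitor-scene flow as a pipeline.
// Hook names are illustrative placeholders, not real APIs.
function runMonitorScene({ time, collectQueue, compareLogs, runDownstream }) {
  const orders = collectQueue(time);                         // steps 1-2
  const buckets = { pending: [], audit: [], processed: [] };
  for (const o of orders) {                                  // step 3
    if (o.status === "00" || o.status === "01") buckets.pending.push(o);
    else if (o.status === "06") buckets.audit.push(o);
    else if (o.status === "08") buckets.processed.push(o);
  }
  const newPendingIds = compareLogs(buckets.pending.map((o) => o.id)); // steps 4-5
  const snapshot = {                                         // step 6: snapshot first
    type: "monitor-snapshot",
    time,
    pending: buckets.pending.length,
    audit: buckets.audit.length,
    processed: buckets.processed.length,
    new_pending_ids: newPendingIds,
  };
  runDownstream(snapshot);                                   // step 7: side effects last
  return snapshot;
}
```

The ordering encodes the contract constraint discussed above: snapshot semantics are produced before any downstream side effect runs.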
Of these, step 1 is supported by the packaged collector's explicit `time` input, and steps 3, 4, 5, and 6 by its input-driven normalization and comparison logic, evidence level `code-confirmed`; steps 2 and 7 are supported mainly by the rule assets, evidence level `code-confirmed`; the boundary "the snapshot must be expressed before downstream side effects" comes from `SKILL.md` / the references, evidence level `contract-defined`.

Claiming further that this abstract flow is "already rigorously carried by a unified packaged collector for live browser collection and auto-dispatch side effects" would not be rigorous, because that stronger workflow evidence still lives in the desk rule assets rather than the packaged collector; such a claim can only be downgraded to `implementation intent exists but not rigorous / buggy`.
## 6. Inputs, Context, and Dependencies

### Inputs

- `time` is the explicit input declared by both the scene and the packaged script. Evidence level: `code-confirmed`.
- The same-day time-window concatenation, `00:00:00` through `23:59:59`, appears in the business monitoring rule. Evidence level: `code-confirmed`.
- "The current queue window is usually the current day" is stated explicitly in the references. Evidence level: `contract-defined`.
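The same-day window concatenation noted above can be illustrated as follows; `buildDayWindow` is a hypothetical helper name, and the date-prefix assumption (a `YYYY-MM-DD ...` string) is mine, not the rule asset's.

```javascript
// Sketch of the same-day query window described in the monitoring rule:
// the window runs from 00:00:00 to 23:59:59 of the input's calendar day.
function buildDayWindow(time) {
  const day = time.slice(0, 10); // assumes a "YYYY-MM-DD ..." prefix
  return { start: `${day} 00:00:00`, end: `${day} 23:59:59` };
}
```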
### Runtime Context

- A platform session, org/user context, and a browser able to execute `BrowserAction` are prerequisites shared by the rule assets and the references. Evidence levels: `code-confirmed` and `contract-defined` respectively.
- The page itself is closer to a configuration page, while the real monitoring semantics come from the rule assets; `collection-flow.md` states this explicitly. Evidence level: `contract-defined`.

### Dependencies

- `scene.json` declares `browser`, `local-service`, `repair-order-source`, `history-log`, and `status-classification`. Evidence level: `code-confirmed`.
- The business monitoring rule directly uses `repairOrder/list`, `MonitorServices/getMonitorLog`, `setMonitorData`, `setMonitorLog`, and `setAudioPlayLog`. Evidence level: `code-confirmed`.
- The auto-processing rule directly uses `getDisposeLog`, `getClassList`, `repairOrder/initProcess`, `setDisposeLog`, `setSendMessageLog`, and the outbound-call trigger `mac.callOutLogin`. Evidence level: `code-confirmed`.
- Whether `configServices` should be promoted to a formal dependency is still an open item in `scene.draft.json`. Evidence level: `no direct evidence / candidate only`.
## 7. Output Structure

The current output structure must be described in layers.

### 7.1 Output directly defined by the packaged runtime collector

`collect_repair_orders.js` directly defines:

- `type: "monitor-snapshot"`
- `scene: "95598-repair-city-dispatch"`
- `time`
- `pending`
- `audit`
- `processed`
- `pending_ids`
- `new_pending_ids`
- `status`
- `partial_reasons`
- `evidence.workflow_rule_sources`
- `evidence.config_base_page`
- `evidence.config_base_role`
- `evidence.packaged_collector_role`
- `known_issues`

All of the above is `code-confirmed`.
### 7.2 Actual snapshot field semantics shown by the rule assets

The business monitoring rule directly constructs:

- `time`
- `type: "95598抢修-市指"`
- `pending`
- `pendingList`
- `audit`
- `processed`

This shows that the rule layer's actual snapshot object does not fully match the packaged stub's field names — notably `pendingList` vs `pending_ids` and `type` vs `scene`. Evidence level: `code-confirmed`.

### 7.3 Evidence strength of `new_pending_ids`

`SKILL.md`, the references, and `data-quality.md` all treat `new_pending_ids` as part of the expected output; evidence level `contract-defined`. In the rule assets read so far, what is directly visible is "compare against the monitor log / dispose log and decide whether to remind or enter auto-dispatch", not an explicit field named `new_pending_ids` being written out. So "a history-comparison intent exists" is `code-confirmed`, while "`new_pending_ids` is rigorously produced by the current implementation" can only be labeled `implementation intent exists but not rigorous / buggy`.
## 8. Downstream Action Evidence Table

| Downstream action | Current evidence | Evidence level | Rigorous conclusion |
| --- | --- | --- | --- |
| Return `monitor-snapshot` runtime collector output | `collect_repair_orders.js` directly returns the object | `code-confirmed` | The packaged JS directly proves the standard snapshot fields, status determination, and collector metadata exist. |
| Queue collection request | The monitoring rule calls `repairOrder/list` | `code-confirmed` | Queue-collection logic exists directly in the rule assets. |
| Audio reminder calls | Both the monitoring and auto-processing rules call `mac.audioPlay(...)` | `code-confirmed` | Only confirms the calls exist at the rule layer, not runtime success. |
| SMS sending call | The auto-processing rule calls `mac.sendMessages(request)` | `code-confirmed` | Only confirms an SMS call exists at the rule layer. |
| Phone / outbound-call trigger | The auto-processing rule calls `mac.callOutLogin(params)` | `code-confirmed` | Only confirms an outbound-call branch exists at the rule layer. |
| Auto-dispatch request call | The auto-processing rule requests `repairOrder/initProcess` | `code-confirmed` | The auto-dispatch request branch is directly locatable. |
| `setDisposeLog` success write | The success branch writes `state="成功"` | `code-confirmed` | The success-path dispose-log write is clearly defined. |
| `setDisposeLog` failure write | The failure branch writes `state="失败"` | `code-confirmed` | The failure-path dispose-log write is clearly defined. |
| `setDisposeLog` exception write | The exception branch writes `state="异常"` | `code-confirmed` | The exception-path dispose-log write is clearly defined. |
| `setDisposeLog` unmatched write | The unmatched branch writes `state="未匹配"` | `code-confirmed` | The unmatched-path dispose-log write is clearly defined. |
| Rigorous production of `new_pending_ids` | Only required in skill/reference/data-quality | `implementation intent exists but not rigorous / buggy` | The target semantics are clear, but the rule assets read so far do not directly produce a field of that name. |
| Equating downstream action results with collection success | Explicitly forbidden by skill/reference | `contract-defined` | The contract requires separating snapshot success from side-effect success. |
## 9. Current Code Concerns / Non-Rigorous Points

1. The most obvious known bug is the pending-classification condition in the monitoring rule, written as `item.status == "00" && item.status == "01"`. The `pending` bucketing can never work as the author intended. Evidence level: `implementation intent exists but not rigorous / buggy`.
2. Output naming between the packaged collector and the rule assets is still inconsistent: the collector uses `scene`, `pending_ids`, and `new_pending_ids`, while the rule object uses `type` and `pendingList`. Evidence level: `code-confirmed`.
3. `SKILL.md` highlights `new_pending_ids` as an output focus, but the stronger direct evidence is "compare logs and decide on reminders/auto-dispatch", not "explicitly produce a field of that name". Evidence level: `implementation intent exists but not rigorous / buggy`.
4. `scene.draft.json` is still undecided on whether to split `trigger-alert` into audio-alert, message-alert, and callout actions, so standard action modeling has not fully converged. Evidence level: `no direct evidence / candidate only`.
5. Although audio, SMS, outbound-call, auto-dispatch, and log-write definitions exist at the rule layer, this document cannot claim these actions have passed runtime verification; any such inflation is not rigorous.
## 10. Revision Suggestions for the Command-Center Standard Configuration

1. This scene should explicitly split its implementation evidence into two layers:
   - `packaged_collector`: the runtime snapshot collector in `collect_repair_orders.js`, with its status determination, history comparison, and metadata (rule sources, config-base-page role, known issues). Evidence level: `code-confirmed`.
   - `rule_asset_workflow`: the real flow branches in the monitoring and auto-processing rule assets. Evidence level: `code-confirmed`.
2. Split the monitoring and auto-processing behavior into two sub-flows in the standard configuration:
   - `monitoring_flow` for `95598抢修-市指_业务检测配置.txt`;
   - `auto_processing_flow` for `95598抢修-市指_自动处理配置.txt`.
   This avoids collapsing the two rule files into a single collector.
3. The output schema should distinguish:
   - `canonical_snapshot_fields`: standard fields such as `pending_ids` / `new_pending_ids`;
   - `observed_rule_fields`: rule-layer fields such as `pendingList` / `type`.
   The two naming schemes currently coexist. Evidence level: `code-confirmed`.
4. Add a `known_bug_note` to the status classification, explicitly recording the `status == "00" && status == "01"` pending-classification bug, so later documents do not present pending counts as a stable fact. Evidence level: `implementation intent exists but not rigorous / buggy`.
5. Add an `effect_channels` breakdown for downstream actions, at least `audio-reminder`, `sms-send`, `callout-trigger`, `auto-dispatch-request`, and `dispose-log-write`, since all of these appear directly in the rule assets. Evidence level: `code-confirmed`.
## 11. Final Rigorous Conclusion

For `95598-repair-city-dispatch`, the most reliable current conclusion is: the repository contains both a testable packaged JS runtime collector and two stronger desk rule-script implementations (`D:\desk\智能体资料\大四区报告监测项\95598抢修-市指_业务检测配置.txt`, `D:\desk\智能体资料\大四区报告监测项\95598抢修-市指_自动处理配置.txt`). The packaged collector directly implements repair-order classification, monitor/dispose-log comparison, `new_pending_ids` derivation, and `success/partial/empty/blocked` status determination; the monitoring rule directly confirms queue collection, log comparison, audio reminders, and monitor-log writes; and the auto-processing rule directly confirms deduplication, crew matching, auto-dispatch requests, SMS sending, outbound-call triggering, and `setDisposeLog` writes on the success / failure / exception / unmatched paths. Evidence level: `code-confirmed`.

It must equally be stated strictly: these `code-confirmed` labels only prove "these implementation branches exist in code or the rule layer", not that runtime success has been verified. In addition, the desk monitoring rule still contains the `status == "00" && status == "01"` pending-classification bug, so the rule workflow itself cannot be described as rigorously correct. For the command-center, this scene is best modeled as a monitor scene where "the packaged collector has input-driven snapshot-normalization capability, the desk rule-asset workflow evidence is stronger, and the monitoring flow and auto-processing flow must be expressed separately".
@@ -0,0 +1,155 @@
# 95598-weekly-monitor-report Operation Analysis

## 1. Scene Overview

`95598-weekly-monitor-report` corresponds to the "95598、12398及配网设备监控情况周统计" (weekly statistics for 95598, 12398, and distribution-network device monitoring) scene; its goal is to aggregate multi-source weekly statistics from 95598, 12398, and distribution-network devices into one unified weekly report. Based on `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\95598-weekly-monitor-report\scene.json`, `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-weekly-monitor-report\SKILL.md`, and `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-weekly-monitor-report\scripts\collect_weekly_metrics.js`, the hardest direct evidence today is: the script defines six section templates, an empty artifact, `period`, `status: "ok"`, and `partial_reasons: []`. Evidence level: `code-confirmed`.

It must also be made clear: the packaged script's definition of the artifact schema / section templates is far stronger than any proof of live browser collection, multi-source weekly aggregation, dual-period alignment, or export behavior. In other words, the scene is currently closer to a "weekly-report structure template script" than a "live browser collector rigorously proven by code". Evidence level: `code-confirmed`.
## 2. Evidence Sources

This analysis uses exactly four evidence-level labels: `code-confirmed`, `contract-defined`, `implementation intent exists but not rigorous / buggy`, and `no direct evidence / candidate only`. Artifact schemas / section templates directly defined by the script are `code-confirmed`; dual-period semantics, collection logic, and downstream actions not directly implemented by the script must not be inflated above their weaker labels.

1. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-weekly-monitor-report\scripts\collect_weekly_metrics.js`
   - Directly defines six section templates and returns an empty artifact. Evidence level: `code-confirmed`.
2. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-weekly-monitor-report\SKILL.md`
   - Describes reading current-period and cumulative-period, validating sessions, collecting multi-source source groups, normalizing section data, and returning both periods, included source groups, and period-alignment issues in the output; this is closer to a runtime contract plus implementation direction. Evidence levels: mainly `contract-defined` and `implementation intent exists but not rigorous / buggy`.
3. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\95598-weekly-monitor-report\scene.json`
   - Defines the scene input `period`, dependencies `browser` / `multi-source` / `period-alignment` / `local-report-service`, and actions `query` / `collect-report` / `aggregate-sections` / `align-periods`. Evidence level: `code-confirmed`.
4. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-weekly-monitor-report\references\collection-flow.md`
   - States that the entry page provides two date ranges, current-period and cumulative-period, and that both ranges must be read first, then source groups collected, then data normalized by section. Evidence level: `contract-defined`.
5. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\95598-weekly-monitor-report\references\data-quality.md`
   - Specifies complete results, partial rules, common weaknesses, and the empty/failure distinction. Evidence level: `contract-defined`.
6. `D:\data\ideaSpace\rust\sgClaw\claw-new\docs\superpowers\specs\2026-04-08-command-center-virtual-employee-inventory.json`
   - Already organizes the scene into `workflow`, `status_model`, `hidden_dependencies`, `open_questions`, and similar command-center views; part of this is re-abstraction and must not be fed back as implementation evidence. Evidence level: `no direct evidence / candidate only` (the inventory alone cannot prove what the packaged script implements).
## 3. Actual Entry Point and Runtime Boundary

The actual entry point is fixed in `scene.json`: the browser scene is `index.html`, the skill tool name is `95598-weekly-monitor-report.collect_weekly_metrics`, and the output artifact is `report-artifact`; all of this is `code-confirmed`.

The runtime boundary is where this scene's most obvious non-rigorous point appears:

- The scene and the script both keep only a single `period` field. Evidence level: `code-confirmed`.
- `SKILL.md`, `collection-flow.md`, and the inventory all state the page actually has two sets of inputs, `current-period` and `cumulative-period`. Evidence level: `contract-defined`.
- The scene also declares `period-alignment` as a dependency and `align-periods` as an action, yet the script contains no corresponding execution logic. The metadata's existence is `code-confirmed`, but "period alignment is implemented" can only be labeled `implementation intent exists but not rigorous / buggy`.

The most rigorous boundary judgment is therefore: the upper-layer metadata and the references all describe this scene as a "dual-period, multi-source, alignment-requiring sectioned weekly report", while the packaged script actually provides only an empty artifact template shell and has not yet proven live collection behavior.
## 4. Operation Flow Directly Confirmed by Code

The only flow directly confirmable in the current script is:

1. Call `collectWeeklyMetrics(input)`.
2. Read `input.period || ""` into the returned object's `period`.
3. Construct an empty main table: `columns: []`, `rows: []`.
4. Copy 6 sections from `SECTION_TEMPLATES`, each starting with `rows: []`.
5. Return `type: "report-artifact"`, `report_name`, `status: "ok"`, and `partial_reasons: []`.

All of this is `code-confirmed`.
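The confirmed template-shell behavior can be re-sketched as below. The section names and documented fields follow the analysis above; the internal structure is reduced to an illustration and is not the packaged script verbatim.

```javascript
// Simplified re-sketch of the confirmed template-shell behavior of
// collect_weekly_metrics.js: fixed sections, empty data, status always "ok".
const SECTION_TEMPLATES = [
  "fault-repair", "frequent-outage", "full-aperture-work-orders",
  "key-opinion-control", "device-monitoring", "proactive-dispatch",
];

function collectWeeklyMetrics(input) {
  return {
    type: "report-artifact",
    report_name: "95598-weekly-monitor-report",
    period: (input && input.period) || "",
    columns: [],                                   // empty main table
    rows: [],
    sections: SECTION_TEMPLATES.map((id) => ({ id, rows: [] })),
    status: "ok",            // fixed; partial/blocked are not yet modeled
    partial_reasons: [],
  };
}
```

The fixed `status: "ok"` is exactly the concern raised later in this document: the shell cannot express the partial / empty / blocked breakdown the references require.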
As for "reading the current-period / cumulative-period date ranges", "validating multi-system sessions", "collecting 95598 / 12398 / distribution-device metrics by source group", "performing period alignment", and "exporting the weekly report or writing a report log": these behaviors are described only in `SKILL.md` and the references and do not appear as executable logic in the packaged script, so they cannot count as "operation flow confirmed by code".
## 5. Standardized Abstract Flow

For a command-center standardization, the scene can be organized as:

1. Receive the weekly-report task request.
2. Parse current-period and cumulative-period.
3. Validate multi-system access and session context.
4. Collect weekly statistics by source groups.
5. Merge the results into the six sections.
6. Consistency-check or align current-period and cumulative-period.
7. Produce the `report-artifact`.
8. Optionally run downstream actions such as export/logging.

Step 5 ("the six section schemas exist") and step 7 ("an artifact shell is returned") are `code-confirmed`. Steps 2, 3, 4, 6, and 8 come mainly from the target-flow descriptions in the skill/references/scene, evidence level `contract-defined`; claiming the packaged script has landed these steps can only be downgraded to `implementation intent exists but not rigorous / buggy`.
## 6. Inputs, Context, and Dependencies

### Inputs

- `period` is the explicit input shared by the scene and the script. Evidence level: `code-confirmed`.
- `currentPeriod` / `cumulativePeriod` (or current-period / cumulative-period) are the real business inputs implied or explicitly required by `SKILL.md`, the references, and the inventory. Evidence level: `contract-defined`.
- This means the current input modeling has a clear conflict: the unified configuration exposes only `period`, while the scene semantics actually depend on two periods. Evidence level: `implementation intent exists but not rigorous / buggy`.

### Runtime Context

- `session`, multi-source system accounts / cached tokens, a visible browser page, and period-alignment context are described in the scene/references; the scene-metadata declarations are `code-confirmed`, the concrete business semantics `contract-defined`.
- "period-alignment-context" is organized by the inventory as runtime_context; it is a re-abstraction of the scene/references, acceptable as a proposed structure but not to be inflated into an implemented script capability.

### Dependencies

- `browser`, `multi-source`, `period-alignment`, and `local-report-service` are directly locatable in `scene.json`. Evidence level: `code-confirmed`.
- Concrete dependencies such as `/a_js/YPTAPI.js` and `http://localhost:13313/ReportServices/*` come from the references. Evidence level: `contract-defined`.
## 7. Output Structure

The output structure directly confirmed by the current script includes:

- `type: "report-artifact"`
- `report_name: "95598-weekly-monitor-report"`
- `period`
- `columns: []`
- `rows: []`
- 6 fixed section templates
- `status: "ok"`
- `partial_reasons: []`

All of the above is `code-confirmed`.

The six sections directly defined by the script are:

1. `fault-repair`
2. `frequent-outage`
3. `full-aperture-work-orders`
4. `key-opinion-control`
5. `device-monitoring`
6. `proactive-dispatch`

Of these, the first three use the value columns `current_period`, `cumulative`, and `year_over_year`, while the last three use only `value`. Evidence level: `code-confirmed`. A modeling ambiguity also shows up here:

- The artifact's top level keeps only a single `period`.
- The section internals already imply the dual-period view of `current_period` and `cumulative`.
- The skill/references in turn stress the two inputs current-period and cumulative-period in prose.

So "how the dual-period inputs map to the artifact's top-level period and the section column structure" is currently not rigorous. Evidence level: `implementation intent exists but not rigorous / buggy`.
## 8. Downstream Action Evidence Table

| Downstream action | Current evidence | Evidence level | Rigorous conclusion |
| --- | --- | --- | --- |
| Return sectioned `report-artifact` | `collect_weekly_metrics.js` directly returns the object | `code-confirmed` | A weekly-report artifact template shell exists, but the data is still empty. |
| Six section templates exist | The script directly defines `SECTION_TEMPLATES` | `code-confirmed` | Only confirms the output partition schema exists, not real data collection. |
| Dual-period reading | Described only in `SKILL.md` / `collection-flow.md` | `contract-defined` | The contract clearly requires current-period and cumulative-period, but the script does not implement them. |
| Multi-source weekly collection | Described only in skill/references | `contract-defined` | The target flow is clear; the current packaged script does not directly prove it. |
| Period alignment | Scene action/dependency + skill/reference descriptions | `implementation intent exists but not rigorous / buggy` | Metadata and docs both express the need for alignment, but the script has no alignment logic and the modeling is still vague. |
| Weekly-report export | References mention localhost report services | `contract-defined` | Only confirms a downstream service constraint, not that the current skill performs an export. |
| Report-log writes | Skill/references mention report-log | `contract-defined` | Only system-level conceptual evidence; the current script makes no direct call. |
| partial / blocked / empty status breakdown | Defined in the references; the script fixes `status: "ok"` | `implementation intent exists but not rigorous / buggy` | The status-model intent is clear, but the packaged script does not yet carry it. |
## 9. Current Code Concerns / Non-Rigorous Points

1. The modeling conflict between `period` and `currentPeriod/cumulativePeriod` stands out most. The scene and script keep only a top-level `period`, yet the skill/references clearly require dual-period input and the first three sections' column structure implies two periods as well; the existing standard input design is not rigorous. Evidence level: `implementation intent exists but not rigorous / buggy`.
2. `period-alignment` is declared both as a dependency and as the action `align-periods`, yet the script has no alignment implementation; "period-alignment capability is implemented" does not hold. Evidence level: `implementation intent exists but not rigorous / buggy`.
3. The first three sections use the column name `cumulative`, while the skill/output prose says `cumulative period`; column names, input names, and top-level field names do not form one unified model. Evidence level: `implementation intent exists but not rigorous / buggy`.
4. `status` is fixed at `"ok"`, which conflicts with the references' partial / empty / blocked breakdown requirement. Evidence level: `code-confirmed` for the current state.
5. Although the scene/skill clearly describe multi-source weekly statistics, the script has no source-group collection or mapping logic at all, so "the weekly-statistics collector has landed" cannot be promoted to a current code fact. Evidence level: `no direct evidence / candidate only` (for the live collection layer).
## 10. Revision Suggestions for the Command-Center Standard Configuration

1. Revise this scene's standard input from a single `period` to an explicit dual-period structure, for example `currentPeriod` and `cumulativePeriod`. If a unified routing entry is still needed, an upper-level `period` summary can be kept, but it must not replace the execution-layer dual-period fields. Evidence level: `implementation intent exists but not rigorous / buggy`.
2. Split `period-alignment` into two parts in the standard configuration:
   - `period_model`: the dual-period input structure;
   - `alignment_rule`: how the two period sets are consistency-checked.
   The scene currently only expresses the need for alignment without a strict model, so this revision is necessary.
3. Distinguish in the artifact configuration:
   - `implemented_section_templates`: the six sections the script already implements directly, evidence level `code-confirmed`;
   - `implemented_collection_logic`: not directly proven by the packaged script; must be explicitly marked lower.
4. Unify the first three sections' column names into consistent configuration naming, such as `current_period` / `cumulative_period` / `year_over_year`, to avoid mixing three vocabularies across script columns, skill prose, and standard configuration.
5. Split the status model into a "contract layer" and an "implementation layer", so the command-center does not mistake `partial` / `blocked` for stable determinations the current collector already makes.
## 11. 最终严谨结论
|
||||||
|
|
||||||
|
关于 `95598-weekly-monitor-report`,当前最可靠的结论是:仓库已经存在一个六分区周报 artifact 模板实现,明确给出了 section 名称、列 schema、顶层 `period` 字段以及基础状态字段,证据等级:`code-confirmed`。
|
||||||
|
|
||||||
|
但当前证据并不足以把它描述成“已严格实现双周期、多来源、含 period alignment 的真实浏览器周统计 collector”。相关双周期读取、source group 采集、period alignment、导出与日志行为,主要存在于 `SKILL.md`、`collection-flow.md`、`data-quality.md` 与 scene 元数据的目标描述中。尤其是 `period` vs `currentPeriod/cumulativePeriod` 以及 `period-alignment` 的建模仍明显含糊,说明本场景现在最适合被归类为“section schema 已定义,但 live browser collection 行为尚未被脚本严格证实”的 staged report scene。
|
||||||
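上面第 10 节建议的双周期结构与 `alignment_rule`,可以用一小段草稿表达最基本的校验语义。以下为假设性示例:`currentPeriod` / `cumulativePeriod` 字段名来自本文提案而非当前 packaged script 已实现的输入,具体对齐规则也只是示意,不代表最终口径。

```python
# 假设性草稿:按第 10 节提案,把单一 period 展开为显式双周期结构,
# 并给出一个最小的 alignment_rule 校验;字段名与规则均为提案,非现状实现。
from datetime import date

def check_period_alignment(current_period: dict, cumulative_period: dict) -> list:
    """返回对齐问题列表;为空表示两组周期满足最基本的一致性约束。"""
    issues = []
    cur_start = date.fromisoformat(current_period["start"])
    cur_end = date.fromisoformat(current_period["end"])
    cum_start = date.fromisoformat(cumulative_period["start"])
    cum_end = date.fromisoformat(cumulative_period["end"])
    if cur_start > cur_end:
        issues.append("currentPeriod 起止倒置")
    if cum_start > cum_end:
        issues.append("cumulativePeriod 起止倒置")
    if cum_start > cur_start:
        issues.append("累计周期应覆盖当前周期的起点")
    if cum_end != cur_end:
        issues.append("两组周期的结束日期不一致")
    return issues

print(check_period_alignment(
    {"start": "2026-04-06", "end": "2026-04-12"},
    {"start": "2026-01-01", "end": "2026-04-12"},
))  # → []
```

若后续真的落地 `alignment_rule`,这类校验应发生在执行层读入双周期之后、采集动作之前,问题列表可直接进入 `partial_reasons`。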
@@ -0,0 +1,203 @@
# 指挥中心规格文档证据分级规则

## 目的

这份文档用于统一指挥中心相关规格文档中的证据表达方式,明确区分:

- 已被代码或规则资产直接证实的事实
- 已被外部接口或文档契约明确约束的事实
- 代码中表达了实现方向,但实现质量、完整性或正确性仍不充分的内容
- 当前没有直接证据、只能作为候选判断的内容

目标不是让规格文档写得更保守,而是让“观察到的事实”“归纳后的结构”“目标态设计”之间的边界始终可追溯、可复核、可讨论。

## 为什么必须分级

如果不做证据分级,指挥中心文档很容易把三类内容混写在一起:

1. 代码里已经存在并可直接定位的行为
2. 为了便于抽象而做出的归一化整理
3. 未来希望达成、但当前未被运行时或资产严格证明的目标结构

混写的直接问题是:

- 读者会把“推断出的整理结果”误认为“当前已实现事实”
- 后续实现或重构时,无法判断某一条到底是在复述现状,还是在提出目标
- 多份规格文档之间会出现证据强弱不一致、措辞口径不一致的问题

因此,所有指挥中心规格文档都必须对关键判断显式标注证据等级。
## 证据标签

以下 4 个标签为唯一允许使用的标准标签,必须按原文书写,不得改写,不得替换为同义词。

### 1. `code-confirmed`

定义:该结论可由当前仓库中的代码、规则资产、静态配置或可直接定位的实现内容直接支持。

适用场景:

- 某个字段、流程步骤、状态分类、规则动作在代码或规则资产中可直接定位
- 某个输出结构、配置项、动作通道已被实现内容明确写出
- 某条成功路径虽然未证明线上真实跑通,但“存在该逻辑分支”这一事实已被代码直接证实

使用边界:

- `code-confirmed` 只证明“代码/资产中存在该实现或定义”
- 不自动等于“生产可用”“运行时已验证成功”“端到端已闭环”

### 2. `contract-defined`

定义:该结论不是直接来自仓库实现,而是由当前被认可的接口契约、协议文档、外部约束文档明确规定。

适用场景:

- 浏览器侧/服务侧接口字段、消息格式、状态码语义由契约文档定义
- 某一能力边界来自明确的外部 API 文档或经项目认可的集成约束

使用边界:

- `contract-defined` 证明“契约如此定义”
- 不自动等于“本仓库已实现”
- 如果代码实现与契约不一致,应分别描述,不得互相覆盖

### 3. `implementation intent exists but not rigorous / buggy`

定义:代码中已经出现实现意图、雏形或局部链路,但当前证据不足以把它写成稳定事实;或者已知实现不严谨、存在缺口、疑似有 bug、成功语义未被严格证明。

适用场景:

- 能看到相关函数、分支、调用点、配置项或动作名,但缺少足够证据证明其稳定成立
- 逻辑存在,但状态语义混乱、异常处理不足、前后约束不完整
- 只能证明“作者想做这件事”,不能证明“这件事已经被可靠实现”

使用边界:

- 该标签用于承认“实现方向存在”
- 同时明确指出“不能把它提升为已确认事实”
- 这是指挥中心文档中承接“代码里有影子,但证据不够硬”的唯一合法标签

### 4. `no direct evidence / candidate only`

定义:当前没有找到代码、规则资产、契约文档或其他直接证据;该内容只能作为候选结构、候选能力、候选拆分或待确认项。

适用场景:

- 为了统一配置结构而提出的候选字段
- 为了后续架构演进而提出的候选能力名称
- 仅由推测、命名习惯、经验归纳得到的判断

使用边界:

- 该标签明确表示“目前只是候选,不是事实”
- 不能把它写成“已有但待接入”“已支持但未启用”之类更强说法,除非另有直接证据
## 推荐表述模板

### `code-confirmed`

可用表述:

- “根据当前代码/规则资产,可直接确认……,证据等级:`code-confirmed`。”
- “文档中的……来自现有实现直接证据,证据等级:`code-confirmed`。”
- “这里只能确认代码层存在该成功路径/动作定义,证据等级:`code-confirmed`;不代表运行时已验证。”

### `contract-defined`

可用表述:

- “根据当前接口契约,……被定义为……,证据等级:`contract-defined`。”
- “该字段/消息结构来自认可的集成契约,证据等级:`contract-defined`。”
- “这里描述的是契约约束,不等于仓库内实现已完成,证据等级:`contract-defined`。”

### `implementation intent exists but not rigorous / buggy`

可用表述:

- “当前实现中可以看到……的意图,但证据尚不足以将其写成稳定事实,证据等级:`implementation intent exists but not rigorous / buggy`。”
- “代码存在相关链路,但实现不够严谨/疑似有缺口,因此仅标为 `implementation intent exists but not rigorous / buggy`。”
- “目前最多只能确认作者试图支持……,不能确认其已被可靠实现,证据等级:`implementation intent exists but not rigorous / buggy`。”

### `no direct evidence / candidate only`

可用表述:

- “……目前没有直接证据,只能作为候选项,证据等级:`no direct evidence / candidate only`。”
- “该拆分/命名属于归一化建议,不代表现状事实,证据等级:`no direct evidence / candidate only`。”
- “除非后续补到代码或契约证据,否则这里只能保持为 `no direct evidence / candidate only`。”
## 禁止表述模式

以下表述在指挥中心规格文档中禁止使用,除非同时给出更低证据等级并明确限定范围。

### 1. 禁止把代码存在误写为运行时已验证

禁止示例:

- “系统已经稳定支持……”
- “该链路已完成闭环……”
- “运行时已证明可以成功……”

问题:这些表述把“代码里有逻辑”错误提升成“真实运行已被验证”。

### 2. 禁止把推断结构误写为既有事实

禁止示例:

- “当前配置结构就是……”
- “系统已有统一能力模型……”
- “所有任务已经按该 schema 实现……”

问题:如果只是为了整理而归纳出的标准结构,应标为候选或目标态,不能写成现状。

### 3. 禁止使用模糊强化词替代证据标签

禁止示例:

- “基本可以认为……”
- “大概率就是……”
- “看起来已经支持……”
- “应该算是实现了……”

问题:模糊判断会绕开证据分级,导致读者无法判断结论强度。

### 4. 禁止自造同义标签或混用近义词

禁止示例:

- “代码已确认”
- “契约已定义”
- “半实现”
- “待验证”
- “候选”

问题:这些中文近义词会破坏跨文档一致性。必须使用本文规定的 4 个精确标签原文。
## 示例:`95598-repair-city-dispatch`

示例结论:

- 对 `95598-repair-city-dispatch` 而言,音频提醒、短信/消息提醒、外呼、处置日志等成功路径行为,如果能够在规则资产或实现内容中直接定位,应写为 `code-confirmed`。
- 但这只能说明“代码或规则里存在这些成功路径定义”。
- 不能据此直接写成“运行时已经稳定成功触发音频/短信/外呼/处置日志”。
- 如果当前没有端到端运行验证证据,那么“运行时成功”只能写为 `implementation intent exists but not rigorous / buggy`,或者在证据更弱时写为 `no direct evidence / candidate only`;不能提升为 `code-confirmed`。

推荐写法:

“在 `95598-repair-city-dispatch` 中,音频提醒、短信/消息提醒、外呼、处置日志相关成功路径可在规则资产中直接定位,因此这些‘规则层已定义的成功路径行为’可标注为 `code-confirmed`。但目前没有同等强度证据证明这些动作在真实运行时已稳定成功,因此‘运行时成功已验证’这一结论不能标为 `code-confirmed`;在缺少严格运行证据时,应标为 `implementation intent exists but not rigorous / buggy`。”

## 执行规则

- 所有指挥中心相关规格文档,必须使用本文定义的 4 个精确标签。
- 不允许使用任何同义词、中文替代词、缩写或自定义等级名。
- 一条关键结论如果没有证据等级,就视为表达不合格。
- 当同一主题同时涉及“代码事实”和“目标结构”时,必须拆句分别标注,不能合并成一个模糊结论。

## 最短落地准则

写每一条关键判断前,先问两个问题:

1. 我是在复述当前已被直接证据支持的事实,还是在做归一化整理/目标设计?
2. 我手上的证据,到底支撑的是代码存在、契约约束、实现意图,还是根本没有直接证据?

只有先回答这两个问题,指挥中心规格文档才能保持严格、可复核和可持续重写。
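执行规则要求只使用 4 个精确标签原文。这类约束很适合用一个最小的 lint 草稿来机械检查;以下脚本只是假设性示例(不是仓库现有工具),假设文档统一使用“证据等级:`…`”这一行内写法:

```python
# 假设性草稿:扫描规格文档文本,找出所有不在 4 个标准标签集中的证据等级标注。
import re

ALLOWED_LABELS = {
    "code-confirmed",
    "contract-defined",
    "implementation intent exists but not rigorous / buggy",
    "no direct evidence / candidate only",
}

def find_invalid_labels(markdown_text: str) -> list:
    """返回所有不符合标准标签原文的证据等级标注(按出现顺序)。"""
    tags = re.findall(r"证据等级:`([^`]+)`", markdown_text)
    return [t for t in tags if t not in ALLOWED_LABELS]

sample = "该字段可直接定位,证据等级:`code-confirmed`。该拆分仅为候选,证据等级:`待验证`。"
print(find_invalid_labels(sample))  # → ['待验证']
```

这种检查只能抓住“标签拼写不合规”,抓不住“结论没有标注证据等级”;后者仍需按执行规则做人工评审。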
@@ -0,0 +1,639 @@
# 指挥中心虚拟员工标准配置清单建议结构

> 免责声明:本文件描述的是“未来可采用的规范化目标配置结构”,不是当前 staged runtime 已稳定实现的结构,也不是对现状的直接复述。文中所有“目标 schema 字段”都必须与当前证据分级文档一起阅读;凡缺乏静态资产直接支撑的字段,只能视为 normalization choice 或 open / candidate 字段,不能表述为当前已稳定存在。

## 目标

这份结构文档的用途,是把当前 evidence-graded 现状文档中的信息,逐步映射为后续可维护、可扩展、可复用的目标配置清单。

因此必须同时保持两条边界:

1. 当前已观察到的事实,来自 evidence-graded current-state docs。
2. 这里提出的统一 schema,则是为后续 command-center 配置治理而做的 normalization proposal。

它们不能混写,更不能把 normalization proposal 误写成当前实现事实。

---

## 一、当前证据文档与目标配置的关系

当前已经存在三类文档角色:

1. `2026-04-08-command-center-virtual-employee-inventory-table.md`
   - 作用:给人读的 current-state 总览
   - 性质:当前观察结果,不是配置 schema
2. `2026-04-08-command-center-virtual-employee-inventory.json`
   - 作用:给机器读的 current-state inventory
   - 性质:机器可消费的盘点结果,不是目标配置
3. 各 scene 的 `*-operation-analysis.md`
   - 作用:记录每个场景的证据来源、强弱、已知问题和边界
   - 性质:最关键的证据支撑层

本文件提出的目标配置结构,是在这些 current-state 文档之上的“规范化目标层”。

### 映射原则

- operation-analysis 文档中的 `code-confirmed` 结论,可优先映射为目标 schema 中的“evidence-derived fields”。
- `contract-defined` 结论,可映射为“declared / contract-backed fields”,但不能默认等于当前 runtime 已实现。
- `implementation intent exists but not rigorous / buggy` 的内容,应进入目标 schema 的 `known_issues`、`implementation_gap`、`notes` 或 `open_questions`,而不是被包装成稳定主字段。
- `no direct evidence / candidate only` 的内容,只能作为 normalization choice、candidate field 或未来扩展项保留。

简言之:evidence-graded current-state docs 告诉我们“现在能严谨说什么”,本文件只负责说明“未来若要统一配置,可怎样承接这些信息”。

---
## 二、推荐文件组织

```text
command-center/
  employee.json
  capabilities.json
  tasks/
    fault-details-report.json
    jinchang-business-environment-weekly-report.json
    95598-weekly-monitor-report.json
    95598-repair-city-dispatch.json
    jiayuguan-meter-outage.json
```

### 文件职责

- `employee.json`
  - 描述这个虚拟员工是谁、职责范围是什么、默认采用什么证据口径
- `capabilities.json`
  - 维护归一化能力词表
  - 明确哪些能力来自现有证据,哪些只是规范化命名
- `tasks/*.json`
  - 每个场景一份目标配置
  - 承接当前证据与未来标准字段的映射关系

### 为什么仍然推荐三层拆分

这类拆分仍然成立,但要加一条限定:

- 这是一种 target architecture proposal
- 不是当前仓库已存在的稳定目录结构
- 尤其 `capabilities.json` 代表“统一能力词表”的目标态,而不是当前 staged assets 已实现的统一能力注册表

因此,三层拆分本身属于 normalization choice,证据等级不应高于 `no direct evidence / candidate only`,除非未来真的落地成文件结构。

---
## 三、`employee.json` 目标结构

### 3.1 推荐示例

```json
{
  "id": "command-center-virtual-employee",
  "name": "指挥中心虚拟员工",
  "domain": "电力业务指挥中心",
  "positioning": "负责业务监测、统计报表、异常识别与后续提醒/处置支撑的虚拟运营员工",
  "mission": [
    "采集业务数据并生成结构化报表",
    "监测工单/事件并识别待处理对象",
    "比较历史记录识别新增待办",
    "为提醒、外呼、自动派单、自动处理等下游动作提供输入"
  ],
  "task_ids": [
    "fault-details-report",
    "jinchang-business-environment-weekly-report",
    "95598-weekly-monitor-report",
    "95598-repair-city-dispatch",
    "jiayuguan-meter-outage"
  ],
  "default_evidence_model": [
    "code-confirmed",
    "contract-defined",
    "implementation intent exists but not rigorous / buggy",
    "no direct evidence / candidate only"
  ],
  "default_status_model": [
    "success",
    "partial",
    "empty",
    "blocked"
  ]
}
```

### 3.2 字段分层说明

#### A. 可直接由当前证据承接的字段

- `name`
- `domain`
- `task_ids`(前提是仅映射当前已盘点的 5 个 scene)
- `default_evidence_model`

这些字段之所以较容易承接,是因为 current-state inventory 已经稳定整理出对应对象和场景清单。

但仍要注意:这只是“可从当前文档整理得到”,不是说仓库里已经存在一个运行中的 `employee.json`。

#### B. normalization choices

- `id`
- `positioning`
- `mission`
- `default_status_model`

这些字段主要是为了让目标配置更易治理、更可复用,属于规范化整理,不应表述为 staged runtime 现状。

#### C. open / candidate 字段

建议预留但暂不稳定化:

- `default_runtime_requirements`
- `default_result_types`
- `default_downstream_policy`
- `org_scope`
- `region_scope`

原因是:当前不同 scene 在“上下文依赖、输出类型、地区语义、下游策略”上并不一致,过早把这些做成员工级稳定字段会拔高现状。

---
## 四、`capabilities.json` 目标结构

### 4.1 推荐示例

```json
{
  "catalog_version": 1,
  "evidence_method": "evidence-graded",
  "core": [
    {
      "id": "browser-collection",
      "name": "浏览器采集",
      "kind": "normalized-capability",
      "evidence_basis": "derived-from-multiple-scenes"
    },
    {
      "id": "report-generation",
      "name": "报表生成",
      "kind": "normalized-capability",
      "evidence_basis": "derived-from-report-scenes"
    },
    {
      "id": "monitor-snapshot",
      "name": "监测快照",
      "kind": "normalized-capability",
      "evidence_basis": "derived-from-monitor-scenes"
    }
  ],
  "channels": [
    {
      "id": "audio-remind",
      "name": "音频提醒",
      "kind": "normalized-channel",
      "observed_in": [
        "95598-repair-city-dispatch",
        "jiayuguan-meter-outage"
      ]
    },
    {
      "id": "message-remind",
      "name": "消息提醒",
      "kind": "normalized-channel",
      "observed_in": [
        "95598-repair-city-dispatch"
      ],
      "notes": "在 jiayuguan-meter-outage 中只看到保留意图,不应等同视为稳定现状。"
    }
  ],
  "actions": [
    {
      "id": "auto-dispatch",
      "name": "自动派单",
      "kind": "normalized-action"
    }
  ]
}
```

### 4.2 字段分层说明

#### A. 可由当前证据承接的字段

- `observed_in`
- `notes`
- `evidence_basis`

如果后续真的落地 `capabilities.json`,最应该优先保留的不是“能力名本身”,而是能力和 scene 之间的 evidence mapping。因为当前场景的能力证据强弱明显不同:

- 3 个报表 scene 多为 schema/template stub
- 2 个监测 scene 的更强 workflow 证据主要来自规则资产
- `message-remind`、`callout`、`auto-dispatch` 等通道在不同 scene 中强度不一致

#### B. normalization choices

- `core`
- `channels`
- `actions`
- `id`
- `name`
- `kind`

这些统一词表字段本身就是规范化选择。当前没有直接证据表明仓库中已经存在统一 capability registry。

#### C. open / candidate 字段

建议保持候选态:

- `required_contexts`
- `result_semantics`
- `stability_level`
- `implemented_by`
- `runtime_owner`

这些字段看起来很有用,但 staged assets 还不足以稳定支撑它们。

### 4.3 对能力词表的关键限制

- 不要把 `report-export`、`audio-remind`、`callout` 之类词条本身写成“已全局统一支持”。
- 不要因为某个规则资产里出现了调用,就把它提升为所有 scene 的稳定 capability。
- `email` 目前仍应保持 candidate,不应进入“已支持通道”集合。

---
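4.3 的限制可以用一个最小的判定草稿表达:一个 channel 的“可声称范围”只由 `observed_in` 的证据映射决定,在单个 scene 出现不会被自动提升为全局稳定能力。以下为假设性示例,`scene-scoped` / `observed-in-all-scenes` 这两个分类名是本示例临时起的,不是标准词表:

```python
# 假设性草稿:根据 observed_in 的覆盖范围给 channel 一个最保守的声称口径;
# 任何结果都只代表“存在证据”,不代表运行时验证。
def classify_channel(observed_in: list, total_scenes: int) -> str:
    if not observed_in:
        return "no direct evidence / candidate only"
    if len(observed_in) < total_scenes:
        return "scene-scoped"        # 仅部分 scene 有证据,不得写成“已全局统一支持”
    return "observed-in-all-scenes"  # 仍只代表证据覆盖全部 scene,不是运行时结论

print(classify_channel(["95598-repair-city-dispatch"], total_scenes=5))  # → scene-scoped
print(classify_channel([], total_scenes=5))  # → no direct evidence / candidate only
```

`email` 这类通道在该判定下会落入 `no direct evidence / candidate only`,与 4.3 的要求一致。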
## 五、`tasks/*.json` 目标结构

### 5.1 统一推荐骨架

```json
{
  "id": "95598-repair-city-dispatch",
  "name": "95598抢修-市指",
  "category": "monitor",
  "current_state": {
    "primary_evidence_summary": "rule assets stronger than packaged JS stub",
    "source_refs": [],
    "known_issues": []
  },
  "binding": {
    "scene_id": "95598-repair-city-dispatch",
    "skill_package": "95598-repair-city-dispatch",
    "tool": "collect_repair_orders"
  },
  "trigger": {
    "observed": {},
    "normalized": {},
    "open_questions": []
  },
  "inputs": {
    "observed": {},
    "normalized": {},
    "open_questions": []
  },
  "systems": {
    "observed": {},
    "normalized": {},
    "open_questions": []
  },
  "workflow": {
    "observed": [],
    "normalized": [],
    "open_questions": []
  },
  "result": {
    "observed": {},
    "normalized": {},
    "open_questions": []
  },
  "downstream_effects": {
    "observed": [],
    "normalized": [],
    "open_questions": []
  },
  "required_capabilities": {
    "normalized": [],
    "open_questions": []
  },
  "status_model": {
    "declared": {},
    "implemented_notes": []
  },
  "evidence_grades": {},
  "open_questions": []
}
```

这个骨架的核心目标不是“把所有字段都填满”,而是强制区分:

- `observed`
- `normalized`
- `open_questions`

这样可避免把 future-facing target config 误写成 current-state。

---
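“每个 major group 必须保留 observed / normalized / open_questions 三分结构”这一约束可以机械校验。以下为假设性草稿(不是仓库现有校验器),group 列表取自上面的骨架:

```python
# 假设性草稿:校验 tasks/*.json 草案是否保留了骨架要求的三分结构;
# 返回每个 group 缺失的键,全部齐备时返回空 dict。
REQUIRED_KEYS = {"observed", "normalized", "open_questions"}
GROUPS = ["trigger", "inputs", "systems", "workflow", "result", "downstream_effects"]

def missing_splits(task: dict) -> dict:
    problems = {}
    for group in GROUPS:
        keys = set(task.get(group, {}))
        if not REQUIRED_KEYS <= keys:
            problems[group] = sorted(REQUIRED_KEYS - keys)
    return problems

draft = {"trigger": {"observed": {}, "normalized": {}, "open_questions": []}}
print(missing_splits(draft)["inputs"])  # → ['normalized', 'observed', 'open_questions']
```

这类校验可以在生成 `tasks/*.json` 草案时作为 CI 检查,防止新增 scene 再次把现状与目标混写进同一字段。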
## 六、报表类任务在目标 schema 中应如何表达

适用对象:

- `fault-details-report`
- `jinchang-business-environment-weekly-report`
- `95598-weekly-monitor-report`

### 6.1 当前证据对目标 schema 的约束

这 3 个任务当前最强直接证据主要是:

- 已有 `report-artifact` 结构壳
- 已有 section/template 定义
- 已有 `status` / `partial_reasons` 字段壳

但它们共同缺少同等强度的 live collection 证据。因此若采用该目标 schema,建议保留一个明确的 current-state 提示,例如:

```json
"current_state": {
  "primary_evidence_summary": "packaged script mainly confirms artifact schema / section template; live collection remains contract-defined or weaker"
}
```

### 6.2 报表类字段分层

#### A. evidence-derived fields

- `binding.scene_id`
- `binding.skill_package`
- `binding.tool`
- `result.observed.artifact_type`
- `result.observed.key_fields`
- `systems.observed.browser_pages`
- `source_refs`

#### B. normalization choices

- `trigger.normalized.natural_language_examples`
- `inputs.normalized.runtime_context`
- `workflow.normalized`
- `required_capabilities.normalized`
- `downstream_effects.normalized`

#### C. open / candidate fields

- `period_model`
- `section_semantics`
- `region_scope`
- `alignment_rule`
- `report_export_policy`

### 6.3 各报表任务的特别约束

#### `fault-details-report`

- 若采用该目标 schema,建议对外保留 `period`,但执行层最好允许展开为 `startTime/endTime`。
- `summary-sheet` 建议标记为“template confirmed”,不要误写成“summary derivation implemented”。

#### `jinchang-business-environment-weekly-report`

- 若采用该目标 schema,建议把“4 个固定 section 模板已观察到”与“真实多源采集已实现”分开表达。
- `region` 是否成为稳定字段,目前仍是 open item。

#### `95598-weekly-monitor-report`

- 若采用该目标 schema,建议预留 `currentPeriod` 与 `cumulativePeriod`,但必须注明这属于对当前建模冲突的修正提案。
- `period alignment` 建议单列为 schema group 或 `alignment_rule`,而不是默认已经在 runtime 中稳定存在。

---
## 七、监测类任务在目标 schema 中应如何表达

适用对象:

- `95598-repair-city-dispatch`
- `jiayuguan-meter-outage`

### 7.1 当前证据对目标 schema 的约束

这两个任务与报表类不同:

- packaged JS collector 已具备输入驱动的 `monitor-snapshot` 归一化 / 比较逻辑,并会附带规则来源、配置基础页角色、已知问题/身份模型说明
- 更强 workflow 证据主要来自规则资产(当前按盘点口径以 `D:/desk/智能体资料/大四区报告监测项/*.txt` 规则脚本为主)
- `assets/scene-snapshot/index.html` 仅属于配置基础层,不应计入 workflow 主执行证据

因此若采用该目标 schema,建议显式区分:

```json
"current_state": {
  "packaged_stub_strength": "code-confirmed",
  "rule_asset_workflow_strength": "code-confirmed",
  "notes": "workflow evidence is stronger in rule assets than in packaged JS stub"
}
```

### 7.2 监测类字段分层

#### A. evidence-derived fields

- `binding.*`
- `inputs.observed.explicit`
- `systems.observed.upstream_apis`
- `systems.observed.local_services`
- `workflow.observed`
- `result.observed`
- `downstream_effects.observed`
- `current_state.known_issues`

#### B. normalization choices

- `workflow.normalized`
- `required_capabilities.normalized`
- `canonical_snapshot_fields`
- `effect_channels`

#### C. open / candidate fields

- `identity_model`
- `downstream_policy`
- `alert_channel_split`
- `auto_processing_policy`
- `dependency_promotion_rules`

### 7.3 各监测任务的特别约束

#### `95598-repair-city-dispatch`

若采用该目标 schema,建议保留以下说明:

- workflow 强证据主要来自规则资产(当前盘点以 `D:/desk/智能体资料/大四区报告监测项/95598抢修-市指_业务检测配置.txt` 与 `D:/desk/智能体资料/大四区报告监测项/95598抢修-市指_自动处理配置.txt` 为主),而不是 packaged JS stub
- `pending` 分类存在 `status == "00" && status == "01"` bug
- `pending_ids/new_pending_ids` 更像 canonical target fields,而不是当前规则层已严格同名产出字段

建议把这个 bug 直接纳入:

```json
"current_state": {
  "known_issues": [
    "pending classification bug: status == \"00\" && status == \"01\""
  ]
}
```
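上面 `known_issues` 记录的待处理分类 bug 值得展开一句:原 desk 规则是 JS 表达式 `status == "00" && status == "01"`,对任何单一 `status` 恒为假,因此该分支永远不会命中。下面用 Python 还原其逻辑形态并给出成员判断式的修正写法;修正只是推测的原始意图,不代表规则资产已被修改:

```python
# 假设性草稿:还原 desk 规则中待处理分类 bug 的逻辑形态(原规则为 JS)。
def pending_buggy(status: str) -> bool:
    return status == "00" and status == "01"   # 同一变量不可能同时等于两个值,恒为 False

def pending_fixed(status: str) -> bool:
    return status in ("00", "01")              # 推测的原始意图:两种状态码都算待处理

print(any(pending_buggy(s) for s in ("00", "01", "02")))      # → False
print([s for s in ("00", "01", "02") if pending_fixed(s)])    # → ['00', '01']
```

在证据口径上,这正是 `implementation intent exists but not rigorous / buggy` 的典型实例:实现意图明确,但谓词写法使其不可能成立。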
#### `jiayuguan-meter-outage`

若采用该目标 schema,建议保留以下说明:

- workflow 强证据主要来自规则资产(当前盘点以 `D:/desk/智能体资料/大四区报告监测项/户表失电-嘉峪关_业务监测配置.txt` 与 `D:/desk/智能体资料/大四区报告监测项/户表失电-嘉峪关_自动处理配置.txt` 为主),而不是 packaged JS stub
- marketing token 是自动处理链路的强依赖
- monitor pending list 用 `consNo`,dispose dedupe 用 `eventId`,身份模型不一致

因此在该目标 schema 提案中,建议单列:

```json
"identity_model": {
  "monitor_pending_identity": "consNo",
  "dispose_dedupe_identity": "eventId",
  "status": "implementation intent exists but not rigorous / buggy"
}
```

这类字段不应被伪装成“已经统一好的 snapshot identity model”。

---
## 八、推荐统一字段清单与证据边界

下面给出一个更严格的统一字段视图。

### 1. 元数据层

较适合作为稳定 target schema 的字段:

- `id`
- `name`
- `category`
- `binding.scene_id`
- `binding.skill_package`
- `binding.tool`

其中:

- `binding.*` 更偏 evidence-derived
- `id/name/category` 更偏 normalization choice

### 2. 现状映射层

建议新增并长期保留:

- `current_state.primary_evidence_summary`
- `current_state.source_refs`
- `current_state.known_issues`
- `current_state.notes`

这是本次重写后最重要的新增设计点之一。没有这层,target schema 很容易再次把“目标结构”和“现状证据”混在一起。

### 3. 触发层

- `trigger.observed`
- `trigger.normalized`
- `trigger.open_questions`

### 4. 输入层

- `inputs.observed`
- `inputs.normalized`
- `inputs.open_questions`

### 5. 系统层

- `systems.observed`
- `systems.normalized`
- `systems.open_questions`

### 6. 流程层

- `workflow.observed`
- `workflow.normalized`
- `workflow.open_questions`

### 7. 结果层

- `result.observed`
- `result.normalized`
- `result.open_questions`

### 8. 下游动作层

- `downstream_effects.observed`
- `downstream_effects.normalized`
- `downstream_effects.open_questions`

### 9. 能力层

- `required_capabilities.normalized`
- `required_capabilities.open_questions`

### 10. 证据层

- `evidence_grades`
- `source_refs`

### 11. 人工确认层

- `open_questions`
- `known_issues`

---
## 九、为什么这次建议在 target schema 中显式保留“现状层”

旧版结构容易出现的问题是:

- 把 aggregate inventory 直接写成“标准配置已经长这样”
- 把 `required_capabilities`、`downstream_effects` 这样的归一化字段误读成 runtime 现状
- 把规则资产中的 workflow 直接等价成 packaged script 实现

因此这次建议最关键的修订不是多加几个字段,而是要求 target schema 同时携带:

1. `observed current state`
2. `normalized target structure`
3. `open / candidate items`

只有这样,后续继续扩展新 scene 时,文档才不会再次把三类内容混在一起。

---
## 十、建议的落地顺序

1. 先把 current-state inventory 保持为证据分级后的事实盘点。
2. 再基于 inventory 生成目标态 `employee.json` / `capabilities.json` / `tasks/*.json` 草案。
3. 落地草案时,强制为每个 major group 补齐:
   - `observed`
   - `normalized`
   - `open_questions`
4. 先优先收敛已知关键不严谨点:
   - `fault-details-report` 的 `period` vs `startTime/endTime`
   - `95598-weekly-monitor-report` 的双周期 / period alignment
   - `95598-repair-city-dispatch` 的 pending classification bug
   - `jiayuguan-meter-outage` 的 `consNo` vs `eventId` 身份不一致
5. 最后再考虑是否把能力词表与 target config 接入真实消费链路。

注意:在这些问题未收敛前,不应把目标配置字段写成“已经稳定”。

---
## 十一、推荐结论

如果目标是形成“指挥中心虚拟员工的标准配置清单”,那么未来仍然可以采用:

- `employee.json`
- `capabilities.json`
- `tasks/*.json`

这样的三层结构。

但和旧版不同的是,这套结构必须显式承认:

- 它是 target architecture proposal,不是现状复述
- 每个 major schema group 都要区分 evidence-derived fields、normalization choices、open / candidate fields
- evidence-graded current-state docs 才是现状依据
- 报表类 3 个 scene 当前主要是 schema/template stub
- `95598-repair-city-dispatch` 与 `jiayuguan-meter-outage` 的 workflow 强证据主要在规则资产
- `95598-repair-city-dispatch` 存在 pending classification bug
- `jiayuguan-meter-outage` 存在 `consNo` / `eventId` 身份不一致问题
- 任何地方都不应宣称 runtime verification

只有在保持这些边界的前提下,这份“标准配置结构”才是严谨可持续的目标态提案,而不是再次把现状、推断和目标混写在一起。
@@ -0,0 +1,121 @@
# 指挥中心虚拟员工业务盘点清单(表格版)

> 说明:本文件是“当前状态总览”,不是目标配置 schema。自本次重写起,所有判断统一采用 `code-confirmed`、`contract-defined`、`implementation intent exists but not rigorous / buggy`、`no direct evidence / candidate only` 四级证据模型;结论仅基于已暂存/已落库资产的静态检查结果,不代表任何运行时验证。

## 盘点范围

本表覆盖当前已整理的 5 个 staged scene / skill:

- `fault-details-report`
- `jinchang-business-environment-weekly-report`
- `95598-weekly-monitor-report`
- `95598-repair-city-dispatch`
- `jiayuguan-meter-outage`

## 虚拟员工定位

以下“虚拟员工定位”是对当前 5 个 scene 的归一化汇总视角,不是当前仓库里已存在统一员工对象的直接事实;证据等级:`no direct evidence / candidate only`。在这个归一化视角下,可把它理解为“面向电力业务指挥中心的任务型虚拟运营员工”,其职责边界可概括为:

- 以报表模板或监测快照形式承载结构化结果
- 对工单/事件队列做规则化监测与历史比较
- 为提醒、日志、外呼、自动派单、自动处理等下游动作提供输入语义
- 为未来统一配置清单提供归一化抽象基础

但必须强调:以上职责并不等于所有场景都已由统一 packaged runtime 严格实现,更不等于已完成运行时验证。

## 证据标签速记

| 标签 | 严格含义 |
| --- | --- |
| `code-confirmed` | 当前仓库代码、规则资产、静态配置中可直接定位到的事实 |
| `contract-defined` | 由场景说明、参考流程、接口/文档契约明确规定的事实 |
| `implementation intent exists but not rigorous / buggy` | 已看到实现方向或局部链路,但不够严谨、存在缺口或已知 bug |
| `no direct evidence / candidate only` | 当前没有直接证据,只能作为候选抽象、候选结构或待确认项 |
## 业务盘点表

| 名称 | 场景 ID | 类别 | 当前任务目标 | 已观察系统 / 证据基础 | 证据分级摘要 | 严格说明 / 未解决问题 | 对应分析文档 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 故障明细 | `fault-details-report` | 报表 | 以“故障明细主表 + summary-sheet 分区”形式承载故障明细报表结果。 | `scene.json`、`SKILL.md`、`scripts/collect_fault_details.js`、`references/collection-flow.md`、`references/data-quality.md` | `code-confirmed`:已直接定义 `report-artifact` 外壳、主表列、`summary-sheet` 模板、`status`/`partial_reasons` 字段。`contract-defined`:页面时间读取、故障查询、字段归一、汇总派生、导出/日志语义。`implementation intent exists but not rigorous / buggy`:`period` 与 `startTime/endTime` 建模不严谨,状态细分只停留在契约层。 | 当前更像“报表 schema/template stub”,不能写成已严格实现实时浏览器采集器;不得表述为已运行验证。 | `D:/data/ideaSpace/rust/sgClaw/claw-new/docs/superpowers/specs/2026-04-08-fault-details-report-operation-analysis.md` |
| 国网金昌供电公司营商环境周例会报告 | `jinchang-business-environment-weekly-report` | 报表 | 以四个固定 section 模板承载营商环境周报。 | `scene.json`、`SKILL.md`、`scripts/collect_business_environment_metrics.js`、`references/collection-flow.md`、`references/data-quality.md` | `code-confirmed`:四个 section template、空 artifact、`period`、基础状态字段已存在。`contract-defined`:多来源指标采集、周范围读取、section 聚合、导出/日志语义。`implementation intent exists but not rigorous / buggy`:`region` 仅在文案层出现,未进入稳定 schema。 | 这是“分区化周报模板”而不是已证实的 live collector;不能写成已稳定采集多个业务系统。 | `D:/data/ideaSpace/rust/sgClaw/claw-new/docs/superpowers/specs/2026-04-08-jinchang-business-environment-weekly-report-operation-analysis.md` |
| 95598、12398及配网设备监控情况周统计 | `95598-weekly-monitor-report` | 报表 | 以六个固定 section 模板承载周统计结果。 | `scene.json`、`SKILL.md`、`scripts/collect_weekly_metrics.js`、`references/collection-flow.md`、`references/data-quality.md` | `code-confirmed`:六个 section template、空 artifact、顶层 `period`、基础状态字段已存在。`contract-defined`:双周期输入、period alignment、多来源周统计采集。`implementation intent exists but not rigorous / buggy`:`period` vs `currentPeriod/cumulativePeriod` 冲突明显,period alignment 只在元数据/文档层被要求。 | 三个报表 scene 都更接近“已打包的 schema/template stub”,不应写成已实现 live collector;本场景还存在双周期建模未闭合问题。 | `D:/data/ideaSpace/rust/sgClaw/claw-new/docs/superpowers/specs/2026-04-08-95598-weekly-monitor-report-operation-analysis.md` |
| 95598抢修-市指 | `95598-repair-city-dispatch` | 监测 | 监测抢修工单队列,识别待处理/审核/已处理,并为提醒、日志、自动派单等链路提供输入。 | `scene.json`、`SKILL.md`、`scripts/collect_repair_orders.js`、`D:/desk/智能体资料/大四区报告监测项/95598抢修-市指_业务检测配置.txt`、`D:/desk/智能体资料/大四区报告监测项/95598抢修-市指_自动处理配置.txt` | `code-confirmed`:packaged JS 现已直接实现输入驱动的 `monitor-snapshot` collector,可做 repair-order 分类、monitor/dispose log 比较、`new_pending_ids` 推导、`success/partial/empty/blocked` 状态判定,并携带 `workflow_rule_sources`、`config_base_page/config_base_role`、`known_issues` 元数据;更强的队列采集、日志比较、音频提醒、短信、外呼、自动派单、处置日志写入证据直接存在于 desk 规则脚本。`contract-defined`:快照语义与下游副作用需分开表达。`implementation intent exists but not rigorous / buggy`:desk 规则内存在 `status == "00" && status == "01"` 的待处理分类 bug;规则层 `new_pending_ids` 仍更像归一化目标而非同名稳定字段。 | 本场景 desk workflow 证据仍强于 packaged collector,且当前实际定时执行证据以 desk 规则脚本为主;`assets/scene-snapshot/index.html` 仅是配置基础页。仍不能宣称任何运行时成功。 | `D:/data/ideaSpace/rust/sgClaw/claw-new/docs/superpowers/specs/2026-04-08-95598-repair-city-dispatch-operation-analysis.md` |
| 户表失电-嘉峪关 | `jiayuguan-meter-outage` | 监测 | 监测户表失电事件,结合服务工单状态与历史日志识别待处理对象,并为自动处理链路提供输入。 | `scene.json`、`SKILL.md`、`scripts/collect_outage_events.js`、`D:/desk/智能体资料/大四区报告监测项/户表失电-嘉峪关_业务监测配置.txt`、`D:/desk/智能体资料/大四区报告监测项/户表失电-嘉峪关_自动处理配置.txt` | `code-confirmed`:packaged JS 现已直接实现输入驱动的 `monitor-snapshot` collector,可从 outage/service-order 数据计算 `pending/audit/processed`、比较 monitor/dispose logs、推导 `new_pending_ids`、输出 `success/partial/empty/blocked`,并携带 `workflow_rule_sources`、`config_base_page/config_base_role`、`identity_model` 元数据;更强的 outage collection、service-order enrichment、monitor/dispose log 比较、营销 token 依赖自动处理与派单分支直接存在于 desk 规则脚本。`contract-defined`:快照与下游自动处理需分开理解。`implementation intent exists but not rigorous / buggy`:监测 pending 列表用 `consNo`,处置去重用 `eventId`,身份模型不一致;短信通道只看到保留意图/注释代码。 | 本场景 desk workflow 证据也强于 packaged collector,且当前实际定时执行证据以 desk 规则脚本为主;`assets/scene-snapshot/index.html` 仅是配置基础页。必须保留身份不一致问题,不能把 `pending_ids/new_pending_ids` 写成已被严格统一定义。 | `D:/data/ideaSpace/rust/sgClaw/claw-new/docs/superpowers/specs/2026-04-08-jiayuguan-meter-outage-operation-analysis.md` |
## Current status summary

### 1. Common conclusions for the report scenes

- `fault-details-report`
- `jinchang-business-environment-weekly-report`
- `95598-weekly-monitor-report`

For these 3 scenes, the strongest direct evidence is currently concentrated in "the packaged scripts define the artifact schema / section templates / basic status fields".

The rigorous description of these 3 scenes is therefore:

- `code-confirmed`: structural templates, field shells, and section definitions exist
- `contract-defined`: an explicit target collection flow and quality requirements exist
- `implementation intent exists but not rigorous / buggy`: runtime collection, period alignment, status refinement, and the export/logging chains are not confirmed by the packaged JS with equal strength

In other words, these are currently "structured report-template scenes" and should not be described as "verified live collectors".

### 2. Common conclusions for the monitoring scenes

- `95598-repair-city-dispatch`
- `jiayuguan-meter-outage`

These 2 scenes differ from the report scenes:

- the packaged JS collectors already implement input-driven `monitor-snapshot` normalization / comparison logic
- the stronger workflow evidence lives mainly in the desk rule assets
- the rule assets directly show the collection, comparison, alerting, logging, and dispatch branches

The rigorous description of these 2 scenes is therefore:

- `code-confirmed`: the rule assets do contain fairly strong monitoring / auto-processing chain definitions
- but this only proves that these implementation branches exist at the rule layer
- it must not be escalated to "runtime is already stably succeeding"

### 3. Currently open global issues
- `fault-details-report`: the relationship between `period` and `startTime/endTime` is unresolved
- `jinchang-business-environment-weekly-report`: `region` semantics appear only in copy text and have not become a stable field
- `95598-weekly-monitor-report`: the relationship between `period`, `currentPeriod/cumulativePeriod`, and period alignment is unresolved
- `95598-repair-city-dispatch`: the pending-classification rule contains a `status == "00" && status == "01"` bug
- `jiayuguan-meter-outage`: monitor pending uses `consNo` while dispose dedupe uses `eventId`; the identity model is inconsistent
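
The `status == "00" && status == "01"` bug above can be shown with a minimal sketch; the surrounding names are hypothetical, and only the predicate shape comes from the desk rule:

```javascript
// The predicate reported in the desk rule: it can never be true,
// because a single status value cannot equal both "00" and "01".
const isPendingBuggy = (order) => order.status === "00" && order.status === "01";

// The presumably intended predicate (an assumption, not confirmed by the rule source):
const isPendingFixed = (order) => order.status === "00" || order.status === "01";
```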
## Capability view by evidence level

### `code-confirmed`

- the basic structural shells of the report artifact / monitor snapshot
- the fixed section/template definitions of the report scenes
- the existence of collection, comparison, logging, alerting, and dispatch branches in the rule assets of the two monitoring scenes

### `contract-defined`

- the target collection flows, export semantics, and quality constraints of the report scenes
- the principle that "snapshot success" and "side-effect success" are separate things for the monitoring scenes
- the dual-period / multi-source / alignment semantics of the weekly reports

### `implementation intent exists but not rigorous / buggy`

- the intent in the report scenes to implement live collectors, period alignment, and status refinement
- the pending-classification bug in `95598-repair-city-dispatch`
- the inconsistent identity keys in `jiayuguan-meter-outage`
- several downstream channels that exist as definitions or commented-out code, which is not enough to promote them to stable current behavior

### `no direct evidence / candidate only`

- the unified capability names themselves
- the field-splitting proposal in the future standard configuration
- candidate channels such as `email` with no direct evidence so far

## Usage boundaries

This file only helps a human quickly understand the currently observed business-inventory status. If you need to:

- see each scene's evidence sources and grading rationale, read the corresponding operation-analysis document
- see the machine-readable inventory structure, read `2026-04-08-command-center-virtual-employee-inventory.json`
- see the proposed future target configuration structure, read `2026-04-08-command-center-standard-config-structure.md`
# fault-details-report Operation Analysis

## 1. Scene overview

`fault-details-report` corresponds to the "fault details" (故障明细) scene; its stated goal is to query fault details and generate a structured report containing detail and summary sections. Based on `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\fault-details-report\scene.json`, `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\fault-details-report\SKILL.md`, and `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\fault-details-report\scripts\collect_fault_details.js`, the strongest direct evidence at present is that the packaged script specifies the report artifact's column structure, the summary section name, the empty-result shape, and the default `status: "ok"`. Evidence level: `code-confirmed`.

However, the same body of evidence shows no actual execution code for real browser page scraping, request triggering, row-level normalization, or summary derivation. In other words, the packaged script's definition of the artifact schema / section template is clearly stronger than its proof of live browser collection behavior. Evidence level: `code-confirmed`.

## 2. Evidence sources

This analysis uses only four evidence-level labels: `code-confirmed`, `contract-defined`, `implementation intent exists but not rigorous / buggy`, and `no direct evidence / candidate only`. Artifact schemas / section templates defined directly by scripts are graded `code-confirmed`; runtime semantics and downstream actions not directly implemented by the scripts are not raised above their corresponding weaker labels.
1. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\fault-details-report\scripts\collect_fault_details.js`
   - Directly defines `DETAIL_COLUMNS`, `SUMMARY_COLUMNS`, the returned object's fields, an empty `rows`, an empty `sections[0].rows`, `status: "ok"`, and `partial_reasons: []`. Evidence level: `code-confirmed`.
2. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\fault-details-report\SKILL.md`
   - Describes the expected workflow as: read the time range, collect raw fault details, normalize to the canonical column order, derive the summary sheet, and return the artifact. This is skill documentation and a target runtime contract: it proves intent and expected output but cannot by itself prove the script implements every step; describing it as a mix of `contract-defined` and `implementation intent exists but not rigorous / buggy` is more rigorous.
3. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\fault-details-report\scene.json`
   - Defines the scene input as `period`, the dependencies as `browser` / `report-history` / `local-report-service`, and the actions as `query` / `collect-report` / `build-summary-section`. Scene-metadata definition; evidence level: `code-confirmed`.
4. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\fault-details-report\references\collection-flow.md`
   - Gives the reference flow "read the start/end times, trigger the repair-order query, collect details, normalize against `excleIni[0].cols`, derive the summary-sheet, then return the artifact". It defines the intended collection semantics; evidence level: `contract-defined`.
5. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\fault-details-report\references\data-quality.md`
   - Gives required columns, nullable columns, summary-derivation expectations, partial rules, and the empty/failure distinction. Quality-constraint reference; evidence level: `contract-defined`.
6. `D:\data\ideaSpace\rust\sgClaw\claw-new\docs\superpowers\specs\2026-04-08-command-center-virtual-employee-inventory.json`
   - Already organizes this scene into `workflow`, `result.key_fields`, `status_model`, and `open_questions`. It serves as the current command-center-side summary, but parts of it re-organize scene/skill/reference content and must not be read backwards as new implementation evidence. Evidence level: `no direct evidence / candidate only` (the inventory alone cannot prove what the packaged script implements).
## 3. Actual entry point and runtime boundary

The actual entry point is declared in `scene.json`: the scene is a browser scene with entry `index.html`, the skill-package tool name is `fault-details-report.collect_fault_details`, and the artifact type is `report-artifact`. All of these can be located directly in the current repository; evidence level: `code-confirmed`.

On the runtime boundary:

- The scene metadata declares only `inputs: ["period"]`; evidence level: `code-confirmed`.
- The reference flow, however, explicitly requires reading the `start` / `end` times from the page's datetime range control; evidence level: `contract-defined`.
- The most rigorous conclusion is therefore that the external unified input is called `period` while the page's real business input is closer to a `startTime/endTime` pair; the inventory file also lists this under `open_questions`. Evidence level: `implementation intent exists but not rigorous / buggy`.

It must also be stressed that the currently runnable packaged script contains no browser operations, request calls, page parsing, or localhost export calls, so its actual boundary is closer to "a schema stub that returns a predefined empty artifact" than "a rigorously implemented end-to-end browser collector". Evidence level: `code-confirmed`.
## 4. Operation flow actually confirmed by code

The only minimal loop the current code strictly confirms is:

1. Call `collectFaultDetails(input)`.
2. Read `input.period || ""` into the `period` field of the returned object.
3. Write `DETAIL_COLUMNS` into the main table's `columns`.
4. Write an empty array into the main table's `rows`.
5. Construct a section named `summary-sheet` and write `SUMMARY_COLUMNS` and an empty `rows` into it.
6. Return `type: "report-artifact"`, `report_name: "fault-details-report"`, `status: "ok"`, `partial_reasons: []`.

Each of these steps can be located directly in `collect_fault_details.js`; evidence level: `code-confirmed`.
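
The minimal loop above can be sketched as follows; the column contents are placeholders (the actual `DETAIL_COLUMNS` / `SUMMARY_COLUMNS` lists are not reproduced here), and only the structure mirrors what is code-confirmed:

```javascript
// Hypothetical reconstruction of the confirmed schema-stub behavior.
// Column contents are placeholders; only the structure is code-confirmed.
const DETAIL_COLUMNS = ["col-a", "col-b"];     // placeholder column names
const SUMMARY_COLUMNS = ["metric", "count"];   // placeholder column names

function collectFaultDetails(input) {
  return {
    type: "report-artifact",
    report_name: "fault-details-report",
    period: (input && input.period) || "",  // step 2: pass period through
    columns: DETAIL_COLUMNS,                // step 3: main-table column schema
    rows: [],                               // step 4: no live collection yet
    sections: [
      { name: "summary-sheet", columns: SUMMARY_COLUMNS, rows: [] }, // step 5
    ],
    status: "ok",          // step 6: fixed status, no partial/empty/blocked branches
    partial_reasons: [],
  };
}
```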

As for the following operations — reading the page time range, triggering the repair-order query, scraping fault rows, normalizing fields, deriving the summary from the details, deciding partial/empty/blocked, and calling the export or report-log services — the packaged script currently contains no corresponding implementation code; they appear only in skill/reference text. Their evidence level is at most `contract-defined` or `implementation intent exists but not rigorous / buggy`, and they must not be written up as flow the current code has confirmed.

## 5. Standardized abstract flow

For a rigorous command-center abstraction, the standardized flow should currently be written as:

1. Parse the external task input.
2. Map the business time range onto the page query parameters.
3. Run the in-browser query and collect the fault detail rows.
4. Normalize the main-table data to the agreed column order.
5. Derive the `summary-sheet` from the detail results.
6. Produce the `report-artifact`.
7. If needed, run downstream actions such as export/logging.

Of these, step 6 — producing an artifact with a main table plus `summary-sheet` — is directly supported by the script; evidence level: `code-confirmed`. Steps 2, 3, 4, 5, and 7 come mainly from the scene description and reference docs rather than from what the current script implements, and should be labeled `contract-defined` or `implementation intent exists but not rigorous / buggy` accordingly.
## 6. Inputs, context, and dependencies

### Inputs

- `period` is used directly by the scene metadata and the script's input parameter; evidence level: `code-confirmed`.
- "The page actually reads a start time and an end time" comes from the workflow descriptions in `references/collection-flow.md` and `SKILL.md`; evidence level: `contract-defined`.
- The relationship between `period` and `startTime/endTime` is therefore not rigorous at present: `period` is most likely just the upper-layer unified abstraction, while the real underlying collector needs both time fields. Evidence level: `implementation intent exists but not rigorous / buggy`.

### Runtime context

- "The browser page is reachable, the page date control exists, the session is logged in" comes from the combined scene/inventory/reference descriptions; the evidence level holds jointly as `code-confirmed` (the metadata exists) and `contract-defined` (the concrete semantics).
- `report-history` and `local-report-service` are declared as dependencies, but the references also stress that historical reports are not the primary data source and that the localhost service is a downstream dependency; evidence levels: `code-confirmed` and `contract-defined`.

### Dependencies

- Dependency names or inventory entries such as `browser`, `fault-detail-query-source`, and `local-report-service` can be located directly in the scene or inventory; evidence level: `code-confirmed`.
- More specific dependencies such as `/a_js/YPTAPI.js`, `http://localhost:13313/ReportServices/*`, and `faultDetailsExportXLSXS` come from the references; evidence level: `contract-defined`.
## 7. Output structure

The output structure is this scene's hardest direct evidence. `collect_fault_details.js` directly defines:

- `type: "report-artifact"`
- `report_name: "fault-details-report"`
- `period`
- main-table `columns` = `DETAIL_COLUMNS`
- main-table `rows` = `[]`
- `sections[0].name = "summary-sheet"`
- `sections[0].columns = SUMMARY_COLUMNS`
- `sections[0].rows = []`
- `status = "ok"`
- `partial_reasons = []`

All of the above are `code-confirmed`.

However, `SKILL.md` and `data-quality.md` also require the output to carry diagnostics such as the detail row count, summary row count, required-column coverage, complete/partial status, missing columns, weak mappings, and downstream failures. Apart from the `status` and `partial_reasons` field shells, none of this diagnostic content is implemented in the script; evidence level: `implementation intent exists but not rigorous / buggy`.
## 8. Downstream-action evidence table

| Downstream action | Current evidence | Evidence level | Rigorous conclusion |
| --- | --- | --- | --- |
| Produce a `report-artifact` for upstream | `collect_fault_details.js` returns the object directly | `code-confirmed` | A stable artifact-structure stub exists, but it currently returns an empty data template. |
| Detail column-order standardization | `DETAIL_COLUMNS` is explicitly defined | `code-confirmed` | Only the column schema definition is confirmed, not that real row data is actually mapped into this order. |
| `summary-sheet` section exists | `summary-sheet` is constructed directly in `sections` | `code-confirmed` | Only the section template is confirmed, not real summary-derivation logic. |
| Page collection of fault detail rows | Described only in `SKILL.md` / `collection-flow.md` | `contract-defined` | A clear target flow exists, but the packaged script does not directly prove it is implemented. |
| Summary derivation | Described only in `SKILL.md` / `collection-flow.md` / `data-quality.md` | `contract-defined` | Contract and quality requirements exist, but there is no script-level derivation code. |
| Excel export | Scene dependency and references mention a localhost export service | `contract-defined` | This is a downstream dependency definition, not proof that this skill currently performs the export. |
| Report-log write | Scene depends on `report-history`; references mention a report-log | `contract-defined` | Only the downstream concept is confirmed; the current script does not implement the write. |
| partial / empty / blocked status refinement | Rules exist in skill/reference; the script hard-codes `status: "ok"` | `implementation intent exists but not rigorous / buggy` | The status-model intent exists, but the packaged script does not yet rigorously carry those branches. |
## 9. Current code doubts / rigor gaps

1. Inconsistent modeling of `period` vs `startTime/endTime`. The scene and script keep only `period`, while the references explicitly require reading start/end times; this makes it hard for the command center to decide whether the standard input is one string or two independent time fields. Evidence level: `implementation intent exists but not rigorous / buggy`.
2. The script hard-codes `status` to `"ok"`, while the references and `SKILL.md` explicitly distinguish success / partial / empty / blocked; the current implementation cannot carry those semantics. `code-confirmed` holds for the status quo; "refined statuses should be supported" is `contract-defined`.
3. `partial_reasons` exists as a field, but the script never populates it, so it is only a schema placeholder. Evidence level: `code-confirmed`.
4. `DETAIL_COLUMNS` and `SUMMARY_COLUMNS` are defined, but there is no mapping code from page data to column values; "field normalization has landed" does not hold. Evidence level: at most `implementation intent exists but not rigorous / buggy`.
5. Downstream export and logging exist in the reference material, but the current skill script does not call those services, so "the report can directly produce Excel" cannot be stated as current code fact. Evidence level: `no direct evidence / candidate only` (with respect to what the packaged script actually executes).
## 10. Revision suggestions for the command-center standard configuration

1. Revise this scene's input from a single `period` into a more rigorous two-layer form:
   - the external unified layer may keep `period` for routing;
   - the execution layer should explicitly expand `startTime` / `endTime`.
   The conclusion that expansion is needed comes from reconciling the scene/reference conflict; evidence level: `implementation intent exists but not rigorous / buggy`.
2. Keep "artifact schema confirmed, live collector unproven" as a separate field or note in the standard configuration, so the command center does not mistake the schema stub for an implemented collector; evidence level: `code-confirmed`.
3. Mark `summary-sheet` as `section template confirmed` rather than `summary derivation implemented`. The former is `code-confirmed`; the latter currently has no evidence of equal strength.
4. Split the status model into two layers:
   - `declared_status_model`: success / partial / empty / blocked, from skill/reference; evidence level: `contract-defined`;
   - `implemented_status_behavior`: currently only the fixed `ok` success shell; evidence level: `code-confirmed`.
5. Add an `evidence_note` to downstream actions, making explicit that report-export / report-log currently come mainly from scene and reference definitions, not from behavior the packaged script has confirmed.
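
Suggestion 4 above could be carried in the standard configuration as a fragment of roughly this shape; the key names are proposals from this analysis, not an existing schema:

```javascript
// Hypothetical standard-config fragment splitting the declared status
// model (contract-defined) from the implemented behavior (code-confirmed).
const faultDetailsStatusModel = {
  declared_status_model: ["success", "partial", "empty", "blocked"], // from SKILL.md / references
  implemented_status_behavior: "fixed-ok", // the packaged script always returns "ok"
};
```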

## 11. Final rigorous conclusion

The most reliable conclusion about `fault-details-report` is: the repository already has a clear report-artifact template implementation that stably returns the fault-detail main-table column definitions, the `summary-sheet` section template, empty result arrays, and the basic status fields; evidence level: `code-confirmed`.

But raising this to "real browser fault-detail collection, column normalization, summary derivation, export, and log closure are already implemented" is not supported by the evidence. Those behaviors exist mainly in `SKILL.md`, `references/collection-flow.md`, `references/data-quality.md`, and the scene metadata, which prove the target flow and contract requirements, not that the current packaged script has rigorously completed that logic. This scene should therefore be described as a staged report scene with "strong artifact schema / section template definition, weak live browser collection evidence", not as a rigorously landed live collector.
# jiayuguan-meter-outage Operation Analysis

## 1. Scene overview

`jiayuguan-meter-outage` corresponds to the "户表失电-嘉峪关" (household-meter outage, Jiayuguan) scene. Its goal is to collect meter-outage events, join service work-order states, compare historical monitor / dispose logs, and when necessary trigger follow-up actions such as audio alerts or auto-dispatch. Based on `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\jiayuguan-meter-outage\scene.json`, `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jiayuguan-meter-outage\SKILL.md`, `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jiayuguan-meter-outage\scripts\collect_outage_events.js`, and the two rule assets, the most rigorous current conclusion is: the packaged JS collector already implements input-driven `monitor-snapshot` normalization / comparison logic — it computes `pending/audit/processed` from outage events and service orders, parses monitor/dispose logs, derives `pending_ids` / `new_pending_ids`, outputs `success/partial/empty/blocked` status, and attaches source-endpoint constants, localhost endpoints, desk rule sources, the configuration-base-page flag, and identity-model metadata — while the stronger business-workflow evidence lives mainly in the desk rule assets. Evidence level for each: `code-confirmed`.

The following layers must be clearly distinguished:
1. packaged runtime-snapshot-collector: `collect_outage_events.js` directly implements outage/service-order normalization, history comparison, identity-model exposure, and standard snapshot output, explicitly carrying `workflow_rule_sources`, `config_base_page`, `config_base_role`, `packaged_collector_role`, and `identity_model` metadata. Evidence level: `code-confirmed`.
2. outage collection: the business monitoring rule directly requests `outage/dhsd/dhsdList` to collect outage events. Evidence level: `code-confirmed`.
3. service-order enrichment: the rule then requests `gdgl/active/service/order/list` to collect service-order states and fill in `audit` / `processed`. Evidence level: `code-confirmed`.
4. monitor-log comparison: the rule compares the historical pending list via `getMonitorLog` and decides whether to play an audio alert. Evidence level: `code-confirmed`.
5. dispose-log dedupe: the rule dedupes already-dispatched items via `getDisposeLog` and decides whether to enter auto-processing. Evidence level: `code-confirmed`.
6. marketing-token-dependent auto-processing and dispatch: the auto-processing rule explicitly reads the marketing-system token and advances dispatch based on marketing-system query results, team configuration, and the auto-dispatch API. Evidence level: `code-confirmed`.

But these `code-confirmed` grades only prove that these implementation chains exist in code or rule assets; they do not mean runtime success has been verified. This document makes no runtime-verification claims.
## 2. Evidence sources

This analysis uses only four evidence-level labels: `code-confirmed`, `contract-defined`, `implementation intent exists but not rigorous / buggy`, and `no direct evidence / candidate only`.

1. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jiayuguan-meter-outage\scripts\collect_outage_events.js`
   - Directly defines `SOURCE_GROUPS`, `LOCAL_SERVICE_ENDPOINTS`, `WORKFLOW_RULE_SOURCES`, `CONFIG_BASE_PAGE`, and `IDENTITY_MODEL`, and implements outage/service-order classification, monitor/dispose log parsing and comparison, `new_pending_ids` derivation, `success/partial/empty/blocked` status determination, and the `monitor-snapshot` output with `evidence` / `identity_model`. Evidence level: `code-confirmed`.
2. `D:\desk\智能体资料\大四区报告监测项\户表失电-嘉峪关_业务监测配置.txt`
   - Directly implements outage-event collection, service-order state enrichment, monitor-log comparison, dispose-log dedupe, audio alerts, and monitor-log writes. Evidence level: `code-confirmed`.
3. `D:\desk\智能体资料\大四区报告监测项\户表失电-嘉峪关_自动处理配置.txt`
   - Directly implements marketing-token reading, marketing-system user lookup, order-number retrieval, team assignment, the auto-dispatch request, audio alerts, dispose-log writes, and a standby SMS function definition. Evidence level: `code-confirmed`.
4. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jiayuguan-meter-outage\SKILL.md`
   - Defines the runtime contract that outage-event collection and order-state collection must be kept separate and then combined into a single snapshot, and that downstream alerting and auto-dispatch must not redefine collection success. Evidence level: `contract-defined`.
5. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jiayuguan-meter-outage\references\collection-flow.md`
   - Defines the flow that takes the configuration page as the entry point and combines outage-event collection, service-order enrichment, history comparison, and the auto-processing context. Evidence level: `contract-defined`.
6. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jiayuguan-meter-outage\references\data-quality.md`
   - Defines the source semantics of pending / audit / processed, the partial rules, and the dependency warnings. Evidence level: `contract-defined`.
7. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\jiayuguan-meter-outage\scene.json`
   - Declares the scene category, the input `time`, and the dependencies and actions. Evidence level: `code-confirmed`.
8. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\jiayuguan-meter-outage\scene.draft.json`
   - Exposes the pending decisions on the marketing token context and on whether `trigger-alert` / `auto-processing` should be split further. Evidence level: `no direct evidence / candidate only`.
## 3. Actual entry point and runtime boundary

The actual entry point is fixed in `scene.json`: the scene page entry is `index.html`, the skill tool name is `jiayuguan-meter-outage.collect_outage_events`, the output type is `monitor-snapshot`, and the input is `time`. All of this is `code-confirmed`.

`assets/scene-snapshot/index.html` should be treated only as a configuration base page (e.g. team, contact, and scope maintenance), not as primary execution evidence for the rule workflow.

On the runtime boundary, the layering between the packaged collector and the rule workflow deserves particular emphasis:

- The packaged JS runtime collector's direct capability boundary: given `outage_events`, `service_orders`, `monitor_logs`, and `dispose_logs` inputs, it performs `pending/audit/processed` normalization, history comparison, `new_pending_ids` derivation, and `success/partial/empty/blocked` determination, and exposes the two upstream source endpoints, a set of localhost endpoints, the desk rule sources, the configuration-base-page role, and the identity-model metadata. It remains, however, an input-driven normalization collector: it neither issues browser requests itself nor carries the full business workflow. Evidence level: `code-confirmed`.
- The stronger business-flow boundary shows up mainly in the desk rule assets: first collect the meter-outage events, then request the service-order list to fill in states, then compare the monitor/dispose logs, and only then decide on alerts or auto-processing. Evidence level: `code-confirmed`.

This scene therefore must not be described as "the packaged collector fully implements the Jiayuguan meter-outage live workflow". More rigorously: the packaged collector implements testable input-driven snapshot normalization / comparison logic, while the stronger workflow evidence lives mainly in the desk rule assets. Evidence level: `code-confirmed`.

In addition, `collection-flow.md` and `SKILL.md` both explicitly require outage collection, service-order enrichment, history comparison, and downstream auto-processing to be understood separately; this is a runtime-boundary contract. Evidence level: `contract-defined`.
## 4. Operation flow actually confirmed by code

### 4.1 Confirmed flow of the packaged runtime-snapshot-collector

The following can now be strictly confirmed in `collect_outage_events.js`:

1. Call `collectOutageEvents(input)`, reading inputs such as `input.outage_events`, `input.service_orders`, `input.monitor_logs || input.monitor_log`, `input.dispose_logs || input.dispose_log`, `input.local_write_failures`, and `input.blocked_reason`.
2. Extract `pending_ids`, `eventIds`, and `eventIdsByConsNo` from the outage events via `buildOutageContext(...)`, and compute `audit` / `processed` from `gdztmc` via `classifyServiceOrders(...)`.
3. Parse the monitor/dispose logs, flag malformed payloads, and derive `new_pending_ids` using the `consNo`-to-`eventId` mapping.
4. Record `partial_reasons` for unknown order states, missing logs, log-parse failures, missing event identity, identity-crosswalk ambiguity, local write failures, and similar conditions.
5. Compute `status` with the priority `blocked > partial > empty > success`, and return `type: "monitor-snapshot"`, `scene: "jiayuguan-meter-outage"`, `pending`, `audit`, `processed`, `pending_ids`, `new_pending_ids`, `status`, and `partial_reasons`.
6. Attach `evidence.workflow_rule_sources`, `evidence.config_base_page`, `evidence.config_base_role`, `evidence.packaged_collector_role = "runtime-snapshot-collector"`, and `identity_model` to the returned object.
7. The module additionally exports `SOURCE_GROUPS`, `LOCAL_SERVICE_ENDPOINTS`, `WORKFLOW_RULE_SOURCES`, `CONFIG_BASE_PAGE`, and `IDENTITY_MODEL`.

All of the above are `code-confirmed`.
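
The `blocked > partial > empty > success` priority in step 5 can be sketched as below; the function and parameter names are hypothetical and do not reproduce the actual `collect_outage_events.js` source:

```javascript
// Hypothetical sketch of the status priority described above:
// a blocking reason wins, then any partial reason, then the empty case.
function resolveStatus({ blockedReason, partialReasons, pendingIds }) {
  if (blockedReason) return "blocked";              // hard failure dominates
  if (partialReasons.length > 0) return "partial";  // degraded but usable snapshot
  if (pendingIds.length === 0) return "empty";      // nothing pending this round
  return "success";
}
```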

### 4.2 Confirmed flow of the business monitoring rule

`户表失电-嘉峪关_业务监测配置.txt` directly confirms the following staged flow:

1. outage collection: query outage events from two days ago through today via `BrowserAction(... outage/dhsd/dhsdList ...)` and push each record's `consNo` into `idList`. Evidence level: `code-confirmed`.
2. service-order enrichment: then query the day's work-order list via `BrowserAction(... gdgl/active/service/order/list ...)`, accumulating `audit` for `gdztmc == "待审核"` and `processed` for `gdztmc == "已归档"`. Evidence level: `code-confirmed`.
3. monitor-log comparison: read the historical `pendingList` via `getMonitorLog`, compare it against the current `idList`, trigger an audio alert when new pending items appear, and write the snapshot via `setMonitorData` / `setMonitorLog`. Evidence level: `code-confirmed`.
4. dispose-log dedupe: read the historical dispose log via `getDisposeLog`, parse `orderID` and extract its `id` values, then filter the current outage events by `eventId` into the undisposed `pendingList`. Evidence level: `code-confirmed`.
5. If undisposed events exist, assign `pendingList` to `_this.queueObj.pendingList` and trigger `_this.autoTask()`; otherwise call `_this.processQueue()` directly. Evidence level: `code-confirmed`.
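
The `gdztmc` bucketing in step 2 can be sketched as follows; the status-name values come from the rule text, while the surrounding function is hypothetical:

```javascript
// Sketch of the audit/processed bucketing by order status name (gdztmc).
function classifyServiceOrders(orders) {
  let audit = 0;
  let processed = 0;
  for (const order of orders) {
    if (order.gdztmc === "待审核") audit += 1;          // pending review
    else if (order.gdztmc === "已归档") processed += 1; // archived
  }
  return { audit, processed };
}
```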

### 4.3 Confirmed flow of the auto-processing rule

`户表失电-嘉峪关_自动处理配置.txt` directly confirms:

1. Auto-processing depends on the marketing-system token: the code reads `token` and `loginUserInfo` from `localStorage["markYXObj"]`. Evidence level: `code-confirmed`.
2. Auto-processing first merges events by `eqPsrName`, then reads the team configuration via `getClassList`. Evidence level: `code-confirmed`.
3. It queries each user's marketing affiliation by `consNo` via the marketing-system API `queryEleCust`, and from that determines `ecssMgtOrgCode`. Evidence level: `code-confirmed`.
4. It then calls `gdgl/zdfw/tgforderzdfw/gdbh` to obtain the order number, and `gdgl/active/service/order/saveAndSend` to issue the auto-dispatch. Evidence level: `code-confirmed`.
5. The dispatch success / failure / exception branches each trigger different audio alerts and write `setDisposeLog`. Evidence level: `code-confirmed`.
6. The SMS function `msgFC` is defined in the auto-processing rule, but the SMS-sending code in the current success branch is commented out entirely, so "the SMS channel is part of the currently effective workflow" cannot be stated as a stable fact. Evidence level: `implementation intent exists but not rigorous / buggy`.
## 5. Standardized abstract flow

For a rigorous command-center abstraction, this scene's standardized flow should be written as:

1. Receive the monitoring-task input `time`.
2. Collect the outage events separately.
3. Collect the service-order states separately and use them to fill in `audit` / `processed`.
4. Compare pending items against the monitor log to decide the alert semantics.
5. Dedupe against the dispose log to select the event set that needs auto-processing.
6. First form or retain the monitor-snapshot semantics.
7. Only when the conditions are met, enter the marketing-token-dependent auto-processing / dispatch flow.
8. Record downstream actions such as audio, logs, and dispose results.

Step 1 is supported by the packaged collector's explicit `time` input, and steps 2-6 by its input-driven normalization / comparison logic; evidence level: `code-confirmed`. Steps 7-8 are supported mainly by the rule assets; evidence level: `code-confirmed`. The boundary that these steps must be understood separately and that downstream actions must not override collection-success semantics comes from `SKILL.md` / references; evidence level: `contract-defined`.

Going further and claiming that this flow is "already rigorously and uniformly carried by the packaged collector, including live outage requests, service-order queries, and auto-dispatch side effects" would be inaccurate, because that stronger workflow evidence comes mainly from the desk rule assets rather than the packaged collector; such a claim can only be graded `implementation intent exists but not rigorous / buggy`.
## 6. Inputs, context, and dependencies

### Inputs

- `time` is the explicit input jointly declared by the scene and the packaged script. Evidence level: `code-confirmed`.
- The business monitoring rule uses an `offTime` query window of "two days ago through today" for outage events and a same-day `createTime` window for service orders. Evidence level: `code-confirmed`.
- That the outage and service-order query windows are both part of the actual input is stated explicitly in the references. Evidence level: `contract-defined`.

### Runtime context

- The platform session, org/user context, and the browser `BrowserAction` capability are used directly in the rule assets. Evidence level: `code-confirmed`.
- The marketing token context is an actual dependency of the auto-processing rule, not merely a documentation claim. Evidence level: `code-confirmed`.
- The references also explicitly list the marketing token context as a downstream enrichment / dispatch dependency. Evidence level: `contract-defined`.

### Dependencies

- `scene.json` declares `browser`, `local-service`, `outage-source`, `service-order-source`, and `history-log`. Evidence level: `code-confirmed`.
- The business monitoring rule directly uses `outage/dhsd/dhsdList`, `gdgl/active/service/order/list`, `getMonitorLog`, `setMonitorData`, `setMonitorLog`, `getDisposeLog`, and `setAudioPlayLog`. Evidence level: `code-confirmed`.
- The auto-processing rule directly uses the marketing-system `queryEleCust`, the order-number API `gdgl/zdfw/tgforderzdfw/gdbh`, the auto-dispatch API `gdgl/active/service/order/saveAndSend`, `setDisposeLog`, and `setAudioPlayLog`. Evidence level: `code-confirmed`.
- Whether the marketing token context in `scene.draft.json` should be promoted to a formal dependency is still an open item, so at the standard-configuration level it is `no direct evidence / candidate only`.
## 7. Output structure

The current output structure needs a layered description.

### 7.1 Outputs directly defined by the packaged runtime collector

`collect_outage_events.js` directly defines:

- `type: "monitor-snapshot"`
- `scene: "jiayuguan-meter-outage"`
- `time`
- `pending`
- `audit`
- `processed`
- `pending_ids`
- `new_pending_ids`
- `status`
- `partial_reasons`
- `evidence.workflow_rule_sources`
- `evidence.config_base_page`
- `evidence.config_base_role`
- `evidence.packaged_collector_role`
- `identity_model`

All of the above are `code-confirmed`.
### 7.2 Actual snapshot field semantics shown by the business monitoring rule

The business monitoring rule directly constructs:

- `time`
- `type: "户表失电-嘉峪关"`
- `pending`
- `pendingList`
- `audit`
- `processed`

This shows that the rule-layer snapshot object and the packaged stub's canonical field names do not fully match — notably `pendingList` vs `pending_ids` and `type` vs `scene`. Evidence level: `code-confirmed`.

### 7.3 Evidence strength of `new_pending_ids` and the identity inconsistency

`SKILL.md`, the references, and `data-quality.md` treat `new_pending_ids` as part of the target output; evidence level: `contract-defined`. But the stronger direct facts in the current rule assets are:

- the monitor pending list uses `consNo`, i.e. `idList.push(item.consNo)`; evidence level: `code-confirmed`;
- the dispose dedupe uses `eventId`, i.e. it compares `resList.indexOf(y.eventId)`; evidence level: `code-confirmed`.

This means the current implementation has a clear identity inconsistency: the monitor pending list takes the `consNo` view, while the dispose dedupe takes the `eventId` view. Therefore the claim that `pending_ids` / `new_pending_ids` are rigorously and uniformly defined by the current implementation does not hold. Evidence level: `implementation intent exists but not rigorous / buggy`.
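
The consequence of the mixed keys can be illustrated with a hypothetical derivation; the function below is not the rule source, it only demonstrates that the dedupe filters by `eventId` while the pending history filters by `consNo`:

```javascript
// Hypothetical illustration of the consNo-vs-eventId mismatch: the two
// filters are keyed differently, and the output key stays consNo unless
// an explicit crosswalk (e.g. eventIdsByConsNo) is introduced.
function deriveNewPendingIds(outageEvents, disposedEventIds, monitoredConsNos) {
  return outageEvents
    .filter((e) => !disposedEventIds.includes(e.eventId))  // dispose view: eventId
    .filter((e) => !monitoredConsNos.includes(e.consNo))   // monitor view: consNo
    .map((e) => e.consNo);
}
```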

## 8. Downstream-action evidence table

| Downstream action | Current evidence | Evidence level | Rigorous conclusion |
| --- | --- | --- | --- |
| Return the `monitor-snapshot` runtime-collector output | `collect_outage_events.js` returns the object directly | `code-confirmed` | The packaged JS directly proves that the canonical snapshot fields, status determination, identity notes, and collector metadata exist. |
| Outage-event collection | The business monitoring rule calls `outage/dhsd/dhsdList` | `code-confirmed` | Outage collection exists directly in the rule assets. |
| Service-order state enrichment | The rule calls `service/order/list` and buckets by `gdztmc` | `code-confirmed` | Service-order enrichment exists directly. |
| Monitor-log comparison | The rule calls `getMonitorLog` and compares the `consNo` list | `code-confirmed` | The history-comparison logic exists directly. |
| Dispose-log dedupe | The rule calls `getDisposeLog` and filters by `eventId` | `code-confirmed` | The dedupe logic exists directly, but its identity key differs from the monitor pending list's. |
| Audio alert calls | Both the monitoring and auto-processing rules call `mac.audioPlay(...)` | `code-confirmed` | Only the existence of audio-alert calls at the rule layer is confirmed. |
| Auto-dispatch request | The auto-processing rule calls `service/order/saveAndSend` | `code-confirmed` | The auto-dispatch request branch can be located directly. |
| Marketing-token-dependent user lookup | The auto-processing rule calls the marketing `queryEleCust` with an `auth_token` request header | `code-confirmed` | Auto-processing has a clear hard dependency on the marketing token. |
| `setDisposeLog` success / failure / exception writes | Every auto-processing branch writes `setDisposeLog` | `code-confirmed` | The dispose-log write branches exist. |
| SMS channel | The auto-processing rule defines `msgFC`, but the SMS code in the success branch is commented out | `implementation intent exists but not rigorous / buggy` | SMS intent exists, but the currently observed effective workflow does not rigorously enable it. |
| Strict unification of `pending_ids` / `new_pending_ids` | Skill/reference state the goal, but the rule layer mixes `consNo` and `eventId` | `implementation intent exists but not rigorous / buggy` | The identity model is not unified and cannot be written up as an established fact. |
## 9. Current code doubts / rigor gaps

1. The most critical gap is the identity inconsistency: the monitor pending list identifies pending items by `consNo`, while the dispose dedupe identifies them by `eventId`. This makes it hard to rigorously align the semantics of `pending_ids`, `new_pending_ids`, and the "already-disposed set". Evidence level: `implementation intent exists but not rigorous / buggy`.
2. The packaged collector and the rule assets still use inconsistent output naming: the collector uses `scene`, `pending_ids`, `new_pending_ids`, while the rule object uses `type`, `pendingList`. Evidence level: `code-confirmed`.
3. `SKILL.md` explicitly requires outage collection and service-order enrichment to be understood separately; the current rule does this, but the packaged stub does not carry that structure, so a command center reading only the packaged stub would underestimate the real workflow. Evidence level: `code-confirmed`.
4. Auto-processing depends hard on the marketing token, but the formal dependencies in `scene.json` do not list it explicitly; `scene.draft.json` already records this as a pending decision, showing that standard dependency modeling is not closed. Evidence level: `implementation intent exists but not rigorous / buggy`.
5. The SMS-sending function exists in the auto-processing rule, but the SMS code on the main success path is commented out, so the SMS channel looks more like a reserved intent than a currently reliable workflow. Evidence level: `implementation intent exists but not rigorous / buggy`.
6. This document must not claim that the auto-dispatch and audio branches have passed runtime verification merely because they exist in the rules; any such phrasing should be avoided.
## 10. Revision suggestions for the command-center standard configuration

1. Explicitly split this scene's evidence into two layers:
   - `packaged_collector`: the runtime snapshot collector, status determination, history comparison, and metadata (rule sources, configuration-base-page role, identity model) in `collect_outage_events.js`; evidence level: `code-confirmed`;
   - `rule_asset_workflow`: the outage collection, service-order enrichment, history comparison, and auto-processing flow in the rule assets; evidence level: `code-confirmed`.
2. The standard workflow should be forcibly split into five stages:
   - `outage_collection`
   - `service_order_enrichment`
   - `monitor_log_comparison`
   - `dispose_log_dedupe`
   - `marketing_token_dependent_auto_processing`
   All of these splits are directly supported by the existing rule assets; evidence level: `code-confirmed`.
3. The standard configuration should add a separate `identity_model_note` stating that the current monitor pending list is keyed by `consNo` while the dispose dedupe is keyed by `eventId`, and that the two are not yet unified. Evidence level: `implementation intent exists but not rigorous / buggy`.
4. For the dependencies, `marketing-token-context` should be promoted to an explicit dependency, since the auto-processing rule does read and use the marketing token directly (evidence level: `code-confirmed`); how to express this in the standard scene schema remains a configuration-modeling question (evidence level: `implementation intent exists but not rigorous / buggy`).
5. The output schema should distinguish:
   - `canonical_snapshot_fields`: canonical fields such as `pending_ids` / `new_pending_ids`;
   - `observed_rule_fields`: rule-layer fields such as `pendingList` / `type`;
   and additionally record the `pending_identity = consNo` vs `dispose_identity = eventId` difference to avoid mismodeling.
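
Suggestion 5 above could be carried as a configuration fragment of roughly this shape; the key names are proposals from this analysis, not an existing schema:

```javascript
// Hypothetical standard-config fragment recording the canonical vs
// observed field names and the two different identity keys.
const jiayuguanOutputSchemaNote = {
  canonical_snapshot_fields: ["scene", "pending_ids", "new_pending_ids"],
  observed_rule_fields: ["type", "pendingList"],
  pending_identity: "consNo",  // key of the monitor pending list
  dispose_identity: "eventId", // key of the dispose dedupe
};
```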

## 11. Final rigorous conclusion

The most reliable conclusion about `jiayuguan-meter-outage` is: the repository contains a testable packaged JS runtime collector plus two stronger desk rule-script implementations (`D:\desk\智能体资料\大四区报告监测项\户表失电-嘉峪关_业务监测配置.txt` and `D:\desk\智能体资料\大四区报告监测项\户表失电-嘉峪关_自动处理配置.txt`). The packaged collector directly implements outage/service-order normalization, monitor/dispose log comparison, `new_pending_ids` derivation, and `success/partial/empty/blocked` status determination; the business monitoring rule directly confirms outage collection, service-order enrichment, monitor-log comparison, dispose-log dedupe, and the audio alert / monitor-log writes; and the auto-processing rule directly confirms the marketing-token-dependent user-affiliation lookup, order-number retrieval, auto-dispatch request, and the audio / dispose-log side-effect branches. Evidence level: `code-confirmed`.

Equally strictly, however: the stronger workflow evidence lives mainly in the desk rule assets, not the packaged collector, so this scene must not be described as "the packaged collector rigorously implements the full live business flow". The implementation also retains the key identity inconsistency: the monitor pending list uses `consNo` while the dispose dedupe uses `eventId`. So although the workflow evidence is comparatively strong, the unified identity model behind `pending_ids` / `new_pending_ids` is still not rigorous, and the scene is best described as a monitor scene where "the packaged collector has input-driven snapshot normalization, the desk rule-asset workflow is strong, and the identity keys need explicit clarification in the command-center standard configuration".
# jinchang-business-environment-weekly-report Operation Analysis

## 1. Scene overview

`jinchang-business-environment-weekly-report` corresponds to the "State Grid Jinchang Power Supply Company business-environment weekly-meeting report" scene; the goal is to collect multi-source metrics and assemble them into a sectioned, structured weekly report. Based on `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\jinchang-business-environment-weekly-report\scene.json`, `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jinchang-business-environment-weekly-report\SKILL.md`, and `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jinchang-business-environment-weekly-report\scripts\collect_business_environment_metrics.js`, what the code currently confirms directly is: the packaged script defines four section templates, an empty main table, the `period` field, `status: "ok"`, and `partial_reasons: []`. Evidence level: `code-confirmed`.

At the same time it must be made explicit that the packaged script defines the artifact schema / section templates much more strongly than any real browser collection, cross-system queries, period alignment, or export-execution logic. In other words, this scene is currently closer to a "structured weekly-report template script" than to "a multi-source live collector rigorously implemented by the script". Evidence level: `code-confirmed`.

## 2. Evidence sources

This analysis uses only four evidence-level labels: `code-confirmed`, `contract-defined`, `implementation intent exists but not rigorous / buggy`, and `no direct evidence / candidate only`. Anything the script defines directly as a schema / section template is graded `code-confirmed`; any runtime semantics that map real collection results into those structures, when not directly implemented by the script, are graded no higher than `contract-defined`.
1. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jinchang-business-environment-weekly-report\scripts\collect_business_environment_metrics.js`
|
||||||
|
- 直接定义四个 section template:`abnormal-transformer-monitoring`、`power-outage-monitoring`、`work-order-acceptance`、`dispatch-summary`,并返回空 artifact,证据等级:`code-confirmed`。
|
||||||
|
2. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jinchang-business-environment-weekly-report\SKILL.md`
|
||||||
|
- 说明应读取周范围、校验会话、收集多个 metric group、映射到 report sections、必要时标记 partial,并在输出里返回 `region`、`period`、缺失 section、周期对齐问题等。它主要定义目标契约与运行意图,证据等级以 `contract-defined` 和 `implementation intent exists but not rigorous / buggy` 为主。
|
||||||
|
3. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\scenes\jinchang-business-environment-weekly-report\scene.json`
|
||||||
|
- 声明场景输入为 `period`,依赖包括 `browser`、`multi-source`、`local-report-service`,动作包括 `query` / `collect-report` / `aggregate-sections`,证据等级:`code-confirmed`。
|
||||||
|
4. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jinchang-business-environment-weekly-report\references\collection-flow.md`
|
||||||
|
- 描述周范围读取、跨系统会话校验、多指标组采集、section 装配与下游导出关系,证据等级:`contract-defined`。
|
||||||
|
5. `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\jinchang-business-environment-weekly-report\references\data-quality.md`
|
||||||
|
- 描述完整结果、partial 规则、弱点区域与 empty/failure 区分,证据等级:`contract-defined`。
|
||||||
|
6. `D:\data\ideaSpace\rust\sgClaw\claw-new\docs\superpowers\specs\2026-04-08-command-center-virtual-employee-inventory.json`
|
||||||
|
- 归纳出 workflow、key_fields、status_model 等 command-center 视图;它能帮助识别当前整理结果,但不应被当成比原始 scene/skill/script 更强的实现证据,证据等级:`no direct evidence / candidate only`(仅限 inventory 不能单独证明 packaged script 已实现的部分)。
|
||||||
|
|
||||||
|
## 3. Actual Entry Point and Runtime Boundary

The actual entry point is fixed in `scene.json`: the scene page entry is `index.html`, the skill invocation is `jinchang-business-environment-weekly-report.collect_business_environment_metrics`, and the output artifact type is `report-artifact`. All of this is `code-confirmed`.

On the runtime boundary, what the repository can currently confirm is:

- The external input is named `period`. Evidence level: `code-confirmed`.
- A browser page, multi-source system access, and the local report service are required. Evidence level: `code-confirmed`.
- The references require collecting multiple metric groups by weekly range and assembling sections. Evidence level: `contract-defined`.

But the claim that "a real collector already implements multi-source access, login-state validation, and period-consistency checks in the packaged script" does not hold. The current script only returns empty section templates, so the runtime boundary it can directly prove is still a schema stub; multi-source collection and assembly exist only as explicit implementation intent, not as rigorously landed logic. Evidence level: `implementation intent exists but not rigorous / buggy`.
## 4. Actual Operation Flow Confirmed by Code

The operation flow the current code can strictly confirm is:

1. Call `collectBusinessEnvironmentMetrics(input)`.
2. Read `input.period || ""` into the artifact's `period`.
3. Build an empty main table: `columns: []`, `rows: []`.
4. Copy 4 sections from `SECTION_TEMPLATES`, ensuring each section has `rows: []`.
5. Return `type: "report-artifact"`, `report_name`, `status: "ok"`, `partial_reasons: []`.

Each of these steps can be located directly in `collect_business_environment_metrics.js`. Evidence level: `code-confirmed`.
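The confirmed steps can be sketched as follows. This is a reconstruction consistent with the confirmed behavior, not the packaged source itself; the section column lists are elided as empty arrays because only their existence, not their contents, is being illustrated.

```javascript
// Reconstruction sketch of the confirmed stub behavior (assumption: shape
// only; the real SECTION_TEMPLATES carries concrete column definitions).
const SECTION_TEMPLATES = [
  { name: "abnormal-transformer-monitoring", columns: [] },
  { name: "power-outage-monitoring", columns: [] },
  { name: "work-order-acceptance", columns: [] },
  { name: "dispatch-summary", columns: [] },
];

function collectBusinessEnvironmentMetrics(input) {
  return {
    type: "report-artifact",
    report_name: "jinchang-business-environment-weekly-report",
    period: (input && input.period) || "",   // step 2
    columns: [],                             // step 3: empty main table
    rows: [],
    sections: SECTION_TEMPLATES.map((t) => ({ ...t, rows: [] })), // step 4
    status: "ok",                            // step 5: fixed status
    partial_reasons: [],
  };
}

const artifact = collectBusinessEnvironmentMetrics({ period: "2026-W14" });
```

The sketch makes the analysis concrete: every field is statically determined, so the script cannot distinguish partial, empty, or blocked runs.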

The following steps appear repeatedly in `SKILL.md` and the references but are not directly implemented by the script: reading the page's weekly range, validating multi-source tokens/sessions, collecting real data for transformer monitoring / outage monitoring / work-order acceptance / dispatch summary, checking period alignment, and generating the final document or export result. These must not be written up as a "code-confirmed actual flow"; at most they can be marked `contract-defined` or `implementation intent exists but not rigorous / buggy` respectively.
## 5. Standardized Abstract Flow

For a command-center standardized abstraction, this scene can be organized as:

1. Receive the weekly-report task input.
2. Parse the page's weekly range and bind the session context.
3. Access multiple business sources and collect data by metric group.
4. Carry the results in the four section templates / column structures.
5. Form a unified `report-artifact`.
6. Optionally run downstream actions such as export/logging.

In step 4, only "the four section names and column structures exist" is `code-confirmed`; "real collection results are mapped into the four sections" remains a `contract-defined` process agreement. Steps 2, 3, and 6 come mainly from the skill/reference runtime descriptions and should be rated `contract-defined`. Writing these steps up as "reliably executed by the current packaged script" would be over-inference, and the evidence level could then only drop to `implementation intent exists but not rigorous / buggy`.
## 6. Inputs, Context, and Dependencies

### Inputs

- `period` is the business input jointly declared by the scene and the script. Evidence level: `code-confirmed`.
- `SKILL.md` also requires `region` in the output, but neither the scene input nor the script's return structure explicitly declares a `region` field. Evidence level: `implementation intent exists but not rigorous / buggy`.

### Runtime context

- `session`, multi-source system reachability, and cached-token availability are described in the scene/references; their existence at the scene-metadata level is `code-confirmed`, while the more specific business semantics are `contract-defined`.
- The page's report-history area and execution-log area are mentioned in the references, but explicitly as downstream history/auxiliary areas rather than primary data sources. Evidence level: `contract-defined`.

### Dependencies

- `browser`, `multi-source`, and `local-report-service` can be located directly in the scene. Evidence level: `code-confirmed`.
- `/a_js/YPTAPI.js`, `http://localhost:13313/ReportServices/*`, and the export or surface services come from the references. Evidence level: `contract-defined`.
## 7. Output Structure

The output structure the current script directly confirms includes:

- `type: "report-artifact"`
- `report_name: "jinchang-business-environment-weekly-report"`
- `period`
- `columns: []`
- `rows: []`
- `sections` containing 4 fixed templates
- `status: "ok"`
- `partial_reasons: []`

All of these are `code-confirmed`.

The four fixed section templates are:

1. `abnormal-transformer-monitoring`
2. `power-outage-monitoring`
3. `work-order-acceptance`
4. `dispatch-summary`

Their column structures are also explicitly defined in the script. Evidence level: `code-confirmed`.

However, the output section of `SKILL.md` says the skill should return `region`, missing sections, period-alignment issues, and downstream export/logging failures. Apart from `period` and the empty `partial_reasons` field, none of this diagnostic information is explicitly modeled in the script. In particular, `region` appears in the output wording yet never enters the artifact schema, which is a scene-specific inconsistency. Evidence level: `implementation intent exists but not rigorous / buggy`.
## 8. Downstream Action Evidence Table

| Downstream action | Current evidence | Evidence level | Rigorous conclusion |
| --- | --- | --- | --- |
| Return sectioned `report-artifact` | `collect_business_environment_metrics.js` returns the object directly | `code-confirmed` | A stable artifact shell exists, but its content is an empty template. |
| Four section templates exist | Script directly defines `SECTION_TEMPLATES` | `code-confirmed` | Only confirms the section schema is fixed, not that section data collection is implemented. |
| Multi-source metric collection | Described only in `SKILL.md` / `collection-flow.md` | `contract-defined` | The contract clearly requires multi-source collection, but the current packaged script does not directly prove it. |
| Period-consistency check | `SKILL.md` / `data-quality.md` mention period alignment | `contract-defined` | A quality requirement exists, but the script has no period-alignment logic. |
| Weekly-report document export | References mention localhost export/surface services | `contract-defined` | A downstream dependency definition, not evidence that the current skill already performs document export. |
| Report-log write | `SKILL.md` / references mention report-log | `contract-defined` | Only the downstream concept is confirmed; the current script shows no call evidence. |
| `partial` result modeling | Script keeps `partial_reasons`; references define partial semantics | `implementation intent exists but not rigorous / buggy` | The field shell exists, but there is no real partial branch. |
| `region` output | Appears only in the `SKILL.md` output description | `implementation intent exists but not rigorous / buggy` | Region semantics exist in wording but never enter the scene input or artifact schema. |
## 9. Current Code Doubts / Non-Rigorous Points

1. `region` appears among the output items in `SKILL.md`, but neither scene.json nor the script schema has an explicit `region` field; "Jinchang" may therefore only be semantics implied by the scene name, not a traceable output field. Evidence level: `implementation intent exists but not rigorous / buggy`.
2. The script always returns empty `columns` and `rows`, which shows the main table is not the core structure; the real core is the 4 section templates. If the command-center still models this as a generic main-table report, it will be mis-modeled. Evidence level: `code-confirmed`.
3. `status` is fixed at `"ok"`, inconsistent with the partial / empty / blocked distinction required by the skill/references. Evidence level: `code-confirmed` for the current state, while the target status model is only `contract-defined`.
4. The references emphasize multi-source sessions and token caching, but the script has no execution path for these dependencies, so "multi-source collection capability has landed" cannot be promoted to a current code fact. Evidence level: `implementation intent exists but not rigorous / buggy`.
5. Export and the report-history area exist in the references but are not directly wired into the script; configuring the command-center as if "Word/Excel export" were available today would be over-inference. Evidence level: `no direct evidence / candidate only` (as far as packaged-script execution is concerned).
## 10. Revision Suggestions for the Command-Center Standard Configuration

1. Model this scene's core output as a `section-based report artifact`, not an ordinary two-dimensional table, because the script's definition of the four section templates is clearly stronger than its definition of the main table. Evidence level: `code-confirmed`.
2. Add a `region_semantics` or `fixed_region` field to the standard configuration to clarify whether "Jinchang" is merely scene naming or should become part of the displayable/auditable output. This is currently an open question. Evidence level: `implementation intent exists but not rigorous / buggy`.
3. Split the status model:
   - the contract layer declares success / partial / empty / blocked, evidence level `contract-defined`;
   - the implementation layer currently has only a fixed `ok` artifact stub, evidence level `code-confirmed`.
4. Add a `collection_evidence` note to the configuration stating that the current packaged script leans toward a section-schema template rather than a live browser collector, so later schedulers do not mistake it for a finished real-time collection skill.
5. For `downstream_effects`, add markers such as `implemented: false / not-directly-proven` to distinguish "the scene requires export" from "the script already performs export".
## 11. Final Rigorous Conclusion

Regarding `jinchang-business-environment-weekly-report`, the most reliable judgment of the current state is: the repository already contains a four-section structured weekly-report artifact template, and the names and column schemas of the four sections are directly defined by the packaged script. Evidence level: `code-confirmed`.

But the stronger claim that "real multi-source browser collection, period-consistency validation, section data assembly, and the final export-and-log loop are already implemented" is not directly proven by the script. Those behaviors are defined mainly by `SKILL.md`, `collection-flow.md`, `data-quality.md`, and the scene metadata as target flow and quality requirements, so the scene should be understood as "clear contract and implementation intent, but the current packaged script is still mainly a schema/section stub". In addition, `region` appearing in the output wording without entering the artifact schema is the non-rigorous point this scene most needs clarified in the command-center standard configuration.
# Config-Owned Direct Skill Dispatch Design

**Goal:** Preserve the current minimal submit flow where sgClaw accepts natural-language input, directly invokes one configured staged browser skill without calling an LLM, and keeps dispatch ownership in sgClaw configuration rather than external skill metadata.

**Status:** Approved design direction for the next slice. The current minimal direct-submit path already works; this document records the ownership boundary that future dispatch-policy work should follow.

---

## Decision Summary

1. Keep direct-skill selection in sgClaw configuration.
2. Continue using `skillsDir` plus `directSubmitSkill` as the only control surface for the no-LLM direct path.
3. Do not add sgClaw-specific dispatch fields to files under `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging` in this slice.
4. Keep the currently bound skill as `fault-details-report.collect_fault_details`.
5. When dispatch expands beyond one fixed skill, add the next policy layer on the sgClaw side first, not in `scene.json` or `SKILL.toml`.

---

## Current Minimal Flow

The intended user experience stays unchanged:

- the user types natural language into the input box
- sgClaw receives `BrowserMessage::SubmitTask`
- sgClaw loads runtime config
- if `directSubmitSkill` is configured, sgClaw bypasses LLM routing and directly resolves the configured staged skill from `skillsDir`
- sgClaw executes the target `browser_script` tool through the browser runtime and returns the result
- if `directSubmitSkill` is absent, sgClaw falls back to the existing orchestration / compat behavior

This keeps the first slice small while preserving a clear seam for future expansion.

---

## Ownership Boundary

### sgClaw configuration owns dispatch choice

sgClaw configuration is responsible for deciding whether submit-task should bypass the LLM path and which direct skill should run.

For the current slice, that means:

- `skillsDir` tells sgClaw where to load staged skills from
- `directSubmitSkill` tells sgClaw which `skill.tool` should be used for the direct path

Example:

```json
{
  "skillsDir": "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging",
  "directSubmitSkill": "fault-details-report.collect_fault_details"
}
```
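As a hedged illustration of how a runtime might interpret this binding (the real parsing lives in sgClaw and may use a different convention), the `directSubmitSkill` value can be split into a skill-package name and a tool name at the last `.`:

```javascript
// Hypothetical helper: split "package.tool" into its two parts.
// The actual sgClaw config parser is an assumption here, not quoted code.
function parseDirectSubmitSkill(value) {
  const dot = value.lastIndexOf(".");
  if (dot <= 0 || dot === value.length - 1) {
    return null; // malformed binding; the caller should surface a config error
  }
  return {
    skill: value.slice(0, dot),  // e.g. "fault-details-report"
    tool: value.slice(dot + 1),  // e.g. "collect_fault_details"
  };
}

const binding = parseDirectSubmitSkill("fault-details-report.collect_fault_details");
```

Returning `null` on a malformed value mirrors the design rule elsewhere in these documents: fail loudly on bad configuration instead of guessing.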

### skill_staging owns skill identity and execution assets

Files under `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging` remain responsible for describing the skill package, tool identity, and browser-script implementation.

For the current bound skill:

- `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/scenes/fault-details-report/scene.json`
- `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.toml`
- `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`

These files already provide enough information for sgClaw to locate the package and run the tool. This slice does not add a new dispatch field inside them.

---

## Why This Boundary Is Recommended

### One source of truth for routing

If sgClaw configuration owns the direct-skill decision, the operator can switch the direct skill by changing config only. There is no need to edit code and no need to mutate external skill assets just to change routing.

### Avoid freezing external manifest semantics too early

`skill_staging` is an external skill asset set. Adding sgClaw-specific dispatch metadata now would couple the staged-skill format to one integration strategy before the policy model is stable.

### Preserve a clean migration path

The current minimal path is intentionally narrow: one fixed configured direct skill, no LLM dispatch, no per-skill policy registry yet. Keeping dispatch control in sgClaw makes it easier to add a broader policy layer later without rewriting the staged-skill package format first.

---

## Explicit Non-Goals

This design does not do the following:

- redesign the submit-task protocol
- move dispatch control into `scene.json` or `SKILL.toml`
- require every staged skill to declare `direct_browser` or `llm_agent` right now
- expand the current direct path into generic natural-language intent classification
- change the browser-script execution model
- change the current fallback orchestration / compat execution semantics when `directSubmitSkill` is not configured

---

## Current Skill Contract

The current direct path remains intentionally deterministic.

For `fault-details-report.collect_fault_details`, sgClaw derives only the minimum required arguments:

- `expected_domain` from the current `page_url`
- `period` from an explicit `YYYY-MM` token in the user's natural-language input

That means the UX still looks like natural-language submission, but the runtime does not ask an LLM to infer intent or invent missing parameters. If the period is missing, sgClaw should return a clear error instead of guessing.
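The period-token rule can be sketched as a small extraction helper. This is an assumption-level illustration; the real derivation lives in claw-new's direct path and may differ.

```javascript
// Hypothetical sketch: pull an explicit YYYY-MM token out of the user's
// natural-language input, returning null instead of guessing when absent.
function extractPeriod(userText) {
  const match = /\b(\d{4})-(0[1-9]|1[0-2])\b/.exec(userText);
  return match ? match[0] : null;
}

extractPeriod("collect the 2026-03 fault details"); // finds "2026-03"
extractPeriod("collect this month's fault details"); // null: caller must error
```

The `null` branch is where the "clear error instead of guessing" rule applies.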

---

## Future Dispatch Policy Direction

When more than one staged skill needs routing control, the next layer should still begin on the sgClaw side.

Recommended direction:

- keep `directSubmitSkill` as the current bootstrap switch for the minimal fixed-skill path
- introduce an sgClaw-owned registry or config mapping that can later express `skill.tool -> direct_browser | llm_agent`
- keep external skill manifests unchanged until the policy surface proves stable in real use

Only after the routing model is stable should we consider whether external skill metadata needs a default dispatch hint.

---

## Resulting Design Rule

For this project, the direct-skill decision remains config-owned:

- sgClaw config decides whether submit-task bypasses the LLM path
- staged skill metadata identifies what the skill is and how its browser tool runs
- future per-skill dispatch policy should be added in sgClaw first, not in `skill_staging`

This is the approved baseline for the next dispatch-policy slice.
# Fault Details Full Skill Alignment Design

**Goal:** Upgrade `fault-details-report.collect_fault_details` from an empty artifact shell into a real staged business skill that matches the original fault-details package's collection, normalization, summary, export, and report-history behavior, while keeping direct-skill routing config-owned in `claw-new`.

**Status:** Approved design direction for the next remediation slice.

---

## Decision Summary

1. Keep direct-skill selection in `claw-new` via `skillsDir` + `directSubmitSkill`; do not move dispatch ownership into `skill_staging` manifests.
2. Put the fault-details business logic in `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging`, not in `claw-new`.
3. Align the staged skill with the original package's real behavior: query raw rows, normalize detail columns, derive summary rows, call localhost export, and write report history.
4. Keep the current browser-execution seam narrow: use the existing `browser_script` / browser-eval path, not a new browser protocol or new opcodes.
5. Add a narrow artifact interpreter in `claw-new` so structured fault-results map cleanly to `TaskComplete.success` and a readable completion summary.

---

## Why This Slice Exists

The current staged skill contract and the current staged skill implementation do not match.

### What the original package actually does

The original package under `D:/desk/智能体资料/大四区报告监测项/故障明细` does all of the following:

- reads the selected date range from the page UI
- queries the D4 repair-order data source
- filters and normalizes raw rows into the canonical detail export schema
- derives grouped summary rows by `gds`
- calls `http://localhost:13313/SurfaceServices/personalBread/export/faultDetailsExportXLSXS`
- auto-opens/downloads the generated file
- writes report history through `http://localhost:13313/ReportServices/Api/setReportLog`

### What the staged skill currently does

The current staged `collect_fault_details.js` only returns an empty `report-artifact` shell with empty `rows` and empty summary `sections`.

It also still uses a Node-style export shape instead of the browser-eval entrypoint shape that the current `browser_script` runtime expects. In practice, this means the staged script is not yet aligned with the real runtime contract even before business behavior is considered.

This slice closes that gap by making the staged skill actually perform the work the original package performs, but through the current sgClaw direct-skill runtime.

---

## Design Rules

### 1. `claw-new` owns routing, not business transforms

`claw-new` stays responsible for:

- loading config
- deciding whether submit-task takes the direct-skill path
- resolving the configured staged skill
- executing the staged browser-script tool
- turning the returned artifact into `TaskComplete.success` + human-readable summary

`claw-new` must **not** become the place where the original fault classification table, detail-row field mapping, or summary aggregation rules are reimplemented.

### 2. `skill_staging` owns fault-details business behavior

The staged skill package owns:

- query orchestration inside the browser page context
- raw-row extraction
- canonical detail-row normalization
- classification and derived fields
- summary-sheet derivation
- localhost export request
- localhost report-log request
- structured result payload

### 3. Keep the current browser seam narrow

Do not introduce a new browser bridge, callback protocol, or skill-specific browser opcode for this slice.

The implementation should continue using the current `browser_script` execution seam already wired through `claw-new/src/compat/browser_script_skill_tool.rs` and `claw-new/src/compat/direct_skill_runtime.rs`.

### 4. Match business behavior, not the original shell verbatim

The original package is a local HTML/Vue shell that uses `BrowserAction(...)`, timers, and hidden-browser choreography. That shell does **not** need to be recreated inside `claw-new`.

What must be preserved is the business outcome:

- same canonical detail columns
- same key field mappings
- same classification rules
- same summary metrics
- same downstream export/history behavior
- same distinction between empty, partial, blocked, and failed work

---

## Ownership Boundary and Landing Zones

### Staged skill changes

These changes land in `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging`.

Primary files:

- `skills/fault-details-report/scripts/collect_fault_details.js`
  - becomes the real browser-eval entrypoint
  - must directly `return` the final structured artifact from the wrapped browser script
  - may contain internal helper functions, but should remain self-contained for the current runtime
- `skills/fault-details-report/SKILL.toml`
  - keep `browser_script`
  - tighten the tool description so it matches the real behavior
  - do not turn `SKILL.toml` into the source of truth for classification rules or routing policy
- `skills/fault-details-report/SKILL.md`
  - align the written contract with the implemented runtime behavior
- `skills/fault-details-report/references/collection-flow.md`
  - align the staged flow with the implemented query/export/history sequence
- `skills/fault-details-report/references/data-quality.md`
  - stay authoritative for canonical columns, required fields, classification tables, `qxxcjl`-based reason heuristics, summary rules, and partial semantics
- `scenes/fault-details-report/scene.json`
  - keep the scene contract aligned with the actual output and state semantics
  - do not move classification or routing policy into scene metadata

### Caller/runtime changes

These changes land in `D:/data/ideaSpace/rust/sgClaw/claw-new`.

Primary files:

- `src/compat/direct_skill_runtime.rs`
  - keep configured direct-skill execution here
  - add narrow structured-artifact interpretation after the browser-script returns
- `src/agent/mod.rs`
  - keep the current direct-submit routing seam here
  - do not add fault-specific business logic here
- `src/compat/browser_script_skill_tool.rs`
  - keep the browser-script contract strict: browser-eval entrypoint, no Node-only assumptions
- `tests/agent_runtime_test.rs`
  - direct-submit path and result-surface regressions
- `tests/browser_script_skill_tool_test.rs`
  - browser-script execution-shape regressions

If a new helper is needed in `claw-new`, it should be a narrow artifact-format/parser helper, not a new business-rules module.

---

## Target Runtime Flow

### Step 1: Submit-task stays config-owned

The user still types natural language into the current sgClaw input.

`claw-new`:

- receives `BrowserMessage::SubmitTask`
- loads `SgClawSettings`
- sees `directSubmitSkill = "fault-details-report.collect_fault_details"`
- bypasses LLM routing exactly as it does now
- resolves the staged skill from `skillsDir`

This preserves the already approved config-owned routing boundary.

### Step 2: Browser-script tool executes as a true browser entrypoint

`collect_fault_details.js` must be shaped for the current runtime:

- the script runs inside the current browser page context through `eval`
- it must not rely on `module.exports`
- it must directly `return collectFaultDetails(args)` from the wrapped script body

This is required because the current sgClaw browser-script runtime reads one script file and wraps it in a browser-side IIFE.
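The required shape can be illustrated with a hedged approximation of the wrapping; the real wrapper lives in claw-new and may differ in detail.

```javascript
// The staged file's body must end in a plain `return`, because the runtime
// wraps the raw file contents in a function body and evaluates it in the page.
const scriptFileContents = `
  function collectFaultDetails(args) {
    // ...real collection work would happen here...
    return { type: "report-artifact", period: args.period || "" };
  }
  return collectFaultDetails(args);
`;

// Approximation of the runtime-side wrapping (assumption, not the actual
// claw-new code): one args parameter, direct return value, no module system.
const wrapped = new Function("args", scriptFileContents);
const result = wrapped({ period: "2026-03" });
```

A `module.exports` assignment inside `scriptFileContents` would throw in this shape, which is exactly why the current Node-style export must be replaced.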

### Step 3: The skill reads the page-selected time range

The source-of-truth query window should come from the current page state, matching the original package behavior.

Design rule:

- read the selected start and end time from the business page controls or page state
- include that exact selected range in the returned artifact
- keep `period` as a bootstrap label from `claw-new`, not as a license to silently guess a different business range

Compatibility rule with the current direct-submit seam:

- the current `claw-new` direct path still requires an explicit `YYYY-MM` token in the user's instruction in order to enter the configured direct-skill flow
- that requirement remains in place for this slice
- once inside the skill, the browser page's selected start/end range is the source of truth for collection
- the returned artifact should include both the user-visible `period` label and the exact selected page range so mismatches are observable instead of hidden

If the page-selected range cannot be read reliably, the skill should return `blocked` instead of inventing a month-wide query window from `period` alone.

### Step 4: The skill collects raw rows and normalizes detail fields

The staged skill must reproduce the original package's detail normalization logic inside the browser-executed script.

That includes preserving the canonical detail schema from the original `excleIni[0].cols`, including the key transforms already present in the original package, such as:

- `slsj = bxsj`
- `gssgs = "甘肃省电力公司"`
- `sgs` derived from the current company/city context
- `gddw = maintOrgName`
- `gds = maintGroupName`
- `clzt = "处理完成"`
- `bdz = bdzMc`
- `line = xlmc10`
- `pb = byqmc`

The staged skill must also port the original classification/derivation logic that fills:

- `sxfl1`
- `sxfl2`
- `sxfl3`
- `gzsb`
- `gzyy`

That includes the original matching table and the `qxxcjl`-based text extraction heuristics that derive the fault reason.
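The listed transforms can be sketched as a single normalization function. The raw field names (`bxsj`, `maintOrgName`, ...) come from the original package as quoted above; the exact raw payload shape and the sample values are assumptions for illustration.

```javascript
// Hedged sketch of the detail-row normalization described above. Only the
// quoted transforms are shown; classification (sxfl1..sxfl3, gzsb, gzyy)
// would be filled by the ported matching table and qxxcjl heuristics.
function normalizeDetailRow(raw, ctx) {
  return {
    slsj: raw.bxsj,               // acceptance time mirrors the report time
    gssgs: "甘肃省电力公司",       // fixed provincial company name
    sgs: ctx.companyName,         // derived from the company/city context
    gddw: raw.maintOrgName,
    gds: raw.maintGroupName,
    clzt: "处理完成",              // fixed handling status
    bdz: raw.bdzMc,
    line: raw.xlmc10,
    pb: raw.byqmc,
    qxxcjl: raw.qxxcjl,           // kept for the reason-heuristic derivation
  };
}

// Sample raw row (hypothetical values, shape assumed from the field list).
const row = normalizeDetailRow(
  { bxsj: "2026-03-09 10:00:00", maintOrgName: "配电运检", maintGroupName: "城区供电所",
    bdzMc: "XX变", xlmc10: "10kV XX线", byqmc: "XX公变", qxxcjl: "更换熔丝" },
  { companyName: "国网XX供电公司" }
);
```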

### Step 5: The skill derives summary rows from normalized detail rows

The staged skill must derive the summary sheet from grouped detail rows, keyed around the same business totals the original package computes.

At minimum that includes:

- `index`
- `gsName`
- `fwDept`
- `className`
- `allCount`
- `wxCount`
- `khcCount`
- `sbdSbCount`
- `gyGzCount`
- `dyGzCount`
- `tqdzCount`
- `tqbxCount`
- `dyxlCount`
- `bqxCount`
- `jllCount`
- `bhxCount`
- `qftdCount`

The summary derivation must stay in the staged skill so the same package can later be routed by LLM without moving business logic back into `claw-new`.
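The grouping shape can be sketched as follows. This is an assumption-level illustration: rows are grouped by `gds`, and only `allCount` plus one category counter are shown; the real skill fills all the counters listed above, and the classification label used here is hypothetical.

```javascript
// Hedged sketch of summary derivation: group normalized detail rows by `gds`
// and compute per-group totals.
function deriveSummaryRows(detailRows) {
  const groups = new Map(); // Map preserves insertion order for `index`
  for (const row of detailRows) {
    if (!groups.has(row.gds)) {
      groups.set(row.gds, { gds: row.gds, allCount: 0, dyGzCount: 0 });
    }
    const g = groups.get(row.gds);
    g.allCount += 1;
    if (row.sxfl1 === "低压故障") g.dyGzCount += 1; // label is an assumption
  }
  return [...groups.values()].map((g, i) => ({ index: i + 1, ...g }));
}

const summary = deriveSummaryRows([
  { gds: "城区供电所", sxfl1: "低压故障" },
  { gds: "城区供电所", sxfl1: "高压故障" },
  { gds: "郊区供电所", sxfl1: "低压故障" },
]);
```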

### Step 6: The skill performs downstream export and report logging

After detail rows and summary rows are available, the staged skill should reproduce the original package's downstream behavior:

- build the export payload for `faultDetailsExportXLSXS`
- call the localhost export endpoint
- capture the returned export path/URL
- write report history via `setReportLog`

Important boundary:

- export/report-log are downstream side effects
- they do not redefine whether collection itself succeeded
- if collection succeeds but export/logging fails, the result is `partial`, not a full collection failure
- auto-opening/downloading the exported file is out of scope for this slice; this slice records the export path/result in the artifact but does not add new opener/UI behavior in `claw-new`
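The "side effects degrade to partial" boundary can be sketched as follows. This is a hedged illustration: the exporter is injected so the sketch stays runnable without the localhost services, and the artifact/downstream field names follow the contract later in this document.

```javascript
// Hedged sketch: collection success is decided first; a failed export call
// downgrades the run to `partial` instead of failing it. Report-log handling
// would follow the same pattern.
function runExportSideEffect(artifact, exporter) {
  try {
    const exp = exporter(artifact); // e.g. POST to faultDetailsExportXLSXS
    artifact.downstream = { export: { attempted: true, success: true, path: exp.path } };
  } catch (err) {
    artifact.downstream = { export: { attempted: true, success: false } };
    artifact.status = "partial";    // collection already succeeded; only downgrade
    artifact.partial_reasons.push("export failed: " + err.message);
  }
  return artifact;
}

const ok = runExportSideEffect(
  { status: "ok", partial_reasons: [] },
  () => ({ path: "http://localhost:13313/.../fault-details.xlsx" })
);
const degraded = runExportSideEffect(
  { status: "ok", partial_reasons: [] },
  () => { throw new Error("service unreachable"); }
);
```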

### Step 7: The skill returns one structured artifact

The staged skill should return one self-describing JSON artifact containing:

- business identity (`type`, `report_name`)
- selected period label
- exact selected start/end range
- canonical detail columns + normalized rows
- summary section columns + rows
- counts
- business status
- partial reasons if any
- downstream export outcome
- downstream report-log outcome

### Step 8: `claw-new` interprets the artifact, not the business rules

After the browser-script returns, `claw-new` should parse the JSON artifact and map it into final submit-task behavior.

Recommended mapping:

- `status = ok` -> `TaskComplete.success = true`
- `status = partial` -> `TaskComplete.success = true`, with warnings in summary
- `status = empty` -> `TaskComplete.success = true`, clearly reported as empty-result
- `status = blocked` -> `TaskComplete.success = false`
- `status = error` -> `TaskComplete.success = false`

This keeps business classification in the staged skill while preventing false-positive success in the direct path.
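The recommended mapping can be sketched in one function. In claw-new this logic would live in Rust next to `direct_skill_runtime.rs`; the shape is shown here in JavaScript for illustration only, and the summary wording is an assumption.

```javascript
// Hedged sketch of the artifact-status -> TaskComplete mapping above.
function mapStatusToTaskComplete(artifact) {
  switch (artifact.status) {
    case "ok":
      return { success: true, note: "completed" };
    case "partial":
      return { success: true, note: "completed with warnings: " + artifact.partial_reasons.join("; ") };
    case "empty":
      return { success: true, note: "completed with an empty result set" };
    case "blocked":
      return { success: false, note: "blocked before collection could run" };
    default: // "error" and anything unrecognized fail closed
      return { success: false, note: "error" };
  }
}
```

Treating unknown statuses as failure keeps the direct path from reporting false-positive success when the artifact contract drifts.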
|
||||||
|
|

---

## Artifact Contract

The returned payload should stay `type = "report-artifact"`, but it must become rich enough to describe the real run.

Recommended contract:

```json
{
  "type": "report-artifact",
  "report_name": "fault-details-report",
  "period": "2026-03",
  "selected_range": {
    "start": "2026-03-08 16:00:00",
    "end": "2026-03-09 16:00:00"
  },
  "columns": ["qxdbh", "gssgs", "sgs", "gddw", "gds", "slsj", "yjflMc", "ejflMc", "sjflMc", "gzms", "yhbh", "yhmc", "lxr", "gzdd", "lxdh", "bxsj", "gdsj", "clzt", "qxxcjl", "bdz", "line", "pb", "sxfl1", "sxfl2", "sxfl3", "gzsb", "gzyy", "bz"],
  "rows": [],
  "sections": [
    {
      "name": "summary-sheet",
      "columns": ["index", "gsName", "fwDept", "className", "allCount", "wxCount", "khcCount", "sbdSbCount", "gyGzCount", "dyGzCount", "tqdzCount", "tqbxCount", "dyxlCount", "bqxCount", "jllCount", "bhxCount", "qftdCount"],
      "rows": []
    }
  ],
  "counts": {
    "detail_rows": 0,
    "summary_rows": 0
  },
  "status": "ok",
  "partial_reasons": [],
  "downstream": {
    "export": {
      "attempted": true,
      "success": true,
      "path": "http://localhost:13313/.../fault-details.xlsx"
    },
    "report_log": {
      "attempted": true,
      "success": true,
      "report_name": "国网XX故障报修明细表(03月09日)",
      "path": "http://localhost:13313/.../fault-details.xlsx"
    }
  }
}
```

### Contract notes

- `rows` is the canonical returned detail table, not the export-service transport payload.
- If the export service still requires a placeholder row for an empty spreadsheet, that placeholder should be synthesized only for the downstream export call, not as the canonical returned `rows` contract.
- `counts` should be computed from the canonical returned tables.
- `selected_range`, `columns`, `sections`, `counts`, `status`, and `partial_reasons` should always be present for `ok`, `partial`, and `empty`.
- For `blocked` and `error`, the artifact should still include `type`, `report_name`, `period`, `status`, and `partial_reasons`; `selected_range`, `columns`, `sections`, and `counts` should be included whenever they were already known before the failure point.
- `downstream` should be omitted only when export/report-log were not attempted yet; otherwise include it with `attempted` / `success` flags and any available path or failure detail.

---

## Error Handling and Status Semantics

### `ok`

Use `ok` when all of the following are true:

- raw collection succeeded
- required detail-field normalization succeeded
- summary derivation succeeded
- export succeeded
- report-log write succeeded

### `partial`

Use `partial` when detail collection succeeded but at least one downstream stage degraded, including:

- one or more required fields could not be normalized, but the row set still remains exportable and summary derivation can proceed with explicit gaps recorded
- summary derivation was incomplete, but the detail table is still available
- export failed after rows were available
- report-log write failed after rows/export were available

Escalation rule:

- if the raw query succeeds but required fields are missing so broadly that the canonical detail table cannot be produced at all, use `error`, not `partial`
- if summary derivation cannot even start because the normalized detail rows are structurally unusable, use `error`, not `partial`

`partial_reasons` must name the degraded stage instead of hiding it.
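
One way to read the escalation rule is as a pure classification over per-stage outcomes. The sketch below is hypothetical: the field names (`raw_rows`, `detail_rows`, and so on) are invented for illustration, and the separate `empty` case (raw query succeeded with zero rows) is intentionally left out.

```rust
// Illustrative classification of ok / partial / error from stage outcomes.
// Field and type names are hypothetical; `empty` is decided separately and
// not modeled here.
#[derive(Debug, PartialEq)]
enum Status { Ok, Partial, Error }

struct StageOutcomes {
    raw_rows: usize,       // rows returned by the raw query
    detail_rows: usize,    // rows that survived required-field normalization
    summary_complete: bool,
    export_ok: bool,
    report_log_ok: bool,
}

fn classify(o: &StageOutcomes) -> Status {
    // Escalation rule: the raw query succeeded but no canonical detail
    // table could be produced at all -> error, not partial.
    if o.raw_rows > 0 && o.detail_rows == 0 {
        return Status::Error;
    }
    let degraded = o.detail_rows < o.raw_rows
        || !o.summary_complete
        || !o.export_ok
        || !o.report_log_ok;
    if degraded { Status::Partial } else { Status::Ok }
}

fn main() {
    let mut o = StageOutcomes {
        raw_rows: 10, detail_rows: 10,
        summary_complete: true, export_ok: true, report_log_ok: true,
    };
    assert_eq!(classify(&o), Status::Ok);
    o.export_ok = false; // downstream degradation after rows -> partial
    assert_eq!(classify(&o), Status::Partial);
    o = StageOutcomes { raw_rows: 10, detail_rows: 0,
        summary_complete: false, export_ok: false, report_log_ok: false };
    assert_eq!(classify(&o), Status::Error);
}
```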

### `empty`

Use `empty` when:

- the query succeeds for the selected range
- zero real detail rows match

This is not a failure.

If the business flow still wants an empty export file or placeholder export payload, that happens downstream without changing the semantic meaning of the result.

### `blocked`

Use `blocked` when the page/session preconditions are not met, for example:

- expected page/session is not available
- required page controls cannot be read
- login/session state is missing or expired
- required browser-visible APIs are unavailable in the current page context

### `error`

Use `error` when the run starts but fails due to operational or parsing problems, for example:

- request failure
- page script failure
- raw response parse failure
- malformed export response

### `claw-new` completion mapping

`claw-new` should convert structured status into final submit completion behavior:

- `ok` / `partial` / `empty`: return a success completion with a concise human summary
- `blocked` / `error`: return a failed completion with a concise human summary

This avoids the current risk where a structured error-like payload could still be surfaced as a nominal success string.

---

## Testing and Acceptance Strategy

### Skill-side deterministic coverage

Add deterministic coverage around the staged skill's business logic in `skill_staging` for:

- canonical detail field mapping
- classification table parity
- `gzyy` extraction heuristics
- summary aggregation parity
- empty-result handling
- partial-result generation when downstream export/logging fails
- browser-script entrypoint shape (`return ...`, not `module.exports`)

The classification/summary tests should use fixed raw-row fixtures so the business rules are validated without a live browser session.

### `claw-new` runtime regressions

Add Rust coverage in `claw-new` for:

- direct-submit success with a populated `report-artifact`
- `partial` artifact mapping to `TaskComplete.success = true`
- `empty` artifact mapping to `TaskComplete.success = true`
- `blocked` / `error` artifact mapping to `TaskComplete.success = false`
- browser-script helper behavior for a real browser-eval return payload

### Manual acceptance

The live manual acceptance bar for this slice should be:

1. Configure `skillsDir` to the staged skill root and `directSubmitSkill` to `fault-details-report.collect_fault_details`.
2. Attach sgClaw to the real target browser page/session.
3. Submit a natural-language fault-details request without LLM routing.
4. Verify the staged skill:
   - reads the selected page range
   - queries real fault rows
   - produces populated detail rows
   - produces populated summary rows
   - exports the workbook through localhost
   - writes report history
5. Verify the final sgClaw completion message reports the correct status, counts, and downstream file/log outcome.

### Acceptance matrix

At minimum, acceptance should cover:

- normal populated result
- empty result with no matching rows
- partial result where export or report-log fails after collection
- blocked result where page/session preconditions are missing
- error result where parsing/query execution fails

---

## Explicit Non-Goals

This slice does **not**:

- move routing ownership out of `claw-new`
- require LLM routing to be available first
- add per-skill dispatch metadata to external manifests for routing policy
- introduce a new browser protocol or browser opcode
- recreate the original Vue shell inside `claw-new`
- move fault classification logic into Rust
- redesign the submit-task protocol beyond better interpretation of the returned artifact

---

## Resulting Design Rule

For the fault-details path:

- `claw-new` decides whether to invoke the fixed staged skill
- the staged skill performs the real fault business workflow
- the staged skill returns a structured artifact that describes collection + downstream outcomes
- `claw-new` interprets that artifact for submit-task success/failure and summary output

That keeps routing config-owned, keeps business logic with the staged skill, and makes `fault-details-report.collect_fault_details` ready for both the current no-LLM path and a later LLM-routed path.

---

## Document Landing Zones

- Approved spec: `docs/superpowers/specs/2026-04-10-fault-details-full-skill-alignment-design.md`
- Follow-up implementation plan: `docs/superpowers/plans/2026-04-10-fault-details-full-skill-alignment-plan.md`
# TQ Line-Loss Deterministic Skill Design

**Goal:** Add a staged business skill for `台区线损大数据-月_周累计线损率统计分析` and a deterministic natural-language routing path in `claw-new` that can bypass the LLM when the instruction ends with `。。。`, while preserving the existing Zhihu hotlist behavior and keeping the execution seam pipe-first but ws-ready.

**Status:** Approved design direction for implementation planning.

---

## Decision Summary

1. Add a new staged skill package `tq-lineloss-report` under `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/`, following the same packaging discipline as `fault-details-report`.
2. In `claw-new`, add a deterministic submit path triggered only when the instruction ends with the three-Chinese-dot suffix `。。。`.
3. In deterministic mode, route only through a fixed whitelist of staged skills; for this slice the new target is `tq-lineloss-report.collect_lineloss`.
4. Deterministic mode must extract business parameters from natural language without using an LLM: company/unit, month-vs-week mode, and period text.
5. Parsed natural-language parameters are not the final backend parameters. They must be normalized into the canonical codes required by the source page / source APIs (for example company code and period mode code).
6. If required parameters are missing or ambiguous, the runtime must stop and ask the user to provide them explicitly. It must **not** silently fall back to page defaults in this slice.
7. Skill execution must reuse the existing browser-script → pipe injection seam already proven by the Zhihu hotlist path. Do not create a second browser execution protocol.
8. The design must not regress or weaken the existing Zhihu hotlist direct path, browser-script path, export path, or current routing behavior.
9. The main branch implementation remains pipe-only, but all new deterministic-routing and skill contracts must stay backend-neutral so the execution backend can later be swapped to ws on the ws branch.

---

## Non-Negotiable Boundaries

### 1. Do not break the existing Zhihu hotlist flow

This is the top safety boundary for the slice.

The new deterministic routing for `tq-lineloss-report` must not break, narrow, or silently change:

- current Zhihu hotlist routing
- current Zhihu direct browser-script execution
- current Zhihu export behavior
- current browser-script skill loading/execution
- existing direct-submit configuration behavior

Design implication:

- The new deterministic path must be added as a narrow, explicit branch.
- Existing Zhihu logic must keep its current trigger semantics and current execution seam.
- Verification for this slice must include targeted Zhihu regression coverage before implementation is considered complete.

### 2. Current main branch is pipe-only

The implementation landing on `main` must execute browser-script skills through the current pipe-backed browser execution seam.

Do not introduce ws as an active runtime requirement for this slice.

### 3. Future ws migration must stay cheap

Although `main` remains pipe-only, the new work must leave a clean extension seam so that after this slice is merged into `ws`, the browser backend can be switched without redesigning:

- the staged skill package
- the deterministic trigger contract
- the parameter extraction contract
- the parameter normalization contract
- the returned artifact contract

---

## Why This Slice Exists

The user wants a staged business skill for `台区线损大数据-月_周累计线损率统计分析` that behaves like a deterministic business operation, not a free-form LLM task.

The desired operator experience is:

- ordinary instructions continue to use the current normal routing / LLM path
- an instruction ending in `。。。` switches to deterministic business execution
- deterministic execution targets a fixed staged skill
- business parameters are extracted from the instruction
- those parameters are normalized to the real coded values the source page/API needs
- the staged browser-script is injected into the third-party browser through the existing pipe seam

This provides an inner-network-safe path that can work without a model today, while reserving an upgrade path for future semantic fallback.

---

## Terminology

### Deterministic mode

A submit-task mode enabled only when the instruction ends with `。。。`.

### Natural-language business parameters

Values expressed by the user in text, such as:

- `兰州公司`
- `天水公司`
- `月累计`
- `周累计`
- `2026-03`
- `2026年第12周`

These are intermediate semantic values, not final backend parameters.

### Canonical execution parameters

The normalized values required by the source page / source API, such as:

- canonical company label
- canonical company code
- period mode code (month/week)
- canonical request period payload

---

## Ownership Boundary and Landing Zones

### Staged skill changes

These land in:

`D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging`

Primary landing zone:

- `skills/tq-lineloss-report/`

Target package structure:

- `SKILL.md`
- `SKILL.toml`
- `references/collection-flow.md`
- `references/data-quality.md`
- `assets/scene-snapshot/index.html`
- `scripts/collect_lineloss.js`
- `scripts/collect_lineloss.test.js`

Potential aligned scene metadata (if included in this slice):

- `scenes/tq-lineloss-report/scene.json`
- optional scene registry updates if the current staging conventions require it

### Caller/runtime changes

These land in:

`D:/data/ideaSpace/rust/sgClaw/claw-new`

Likely ownership areas:

- deterministic instruction detection and deterministic skill matching
- parameter extraction and normalization
- deterministic skill dispatch to the existing browser-script seam
- narrow result interpretation for the returned artifact
- focused regression tests

Design rule:

`claw-new` owns routing, extraction, normalization, and dispatch.

`claw-new` must **not** absorb the line-loss business logic itself.

The staged skill package owns:

- page inspection
- page-side state reading
- page/API data collection
- row normalization
- export/report-log behavior
- final artifact generation

---

## Target Runtime Flow

### Step 1: Submit-task enters deterministic mode only on `。。。`

When the user instruction does **not** end in `。。。`:

- keep the current runtime behavior unchanged
- preserve existing Zhihu hotlist behavior exactly
- preserve existing direct-submit and compat/LLM flows

When the instruction **does** end in `。。。`:

- enter deterministic mode
- do not run the ordinary LLM interpretation branch for this request
- evaluate only the deterministic skill whitelist

### Step 2: Deterministic whitelist match

The runtime should match the instruction against deterministic business scenes.

For this slice the new required deterministic scene is:

- `tq-lineloss-report.collect_lineloss`

The matching layer should remain narrow and explicit. It should not become a general scene-registry runtime in this slice.

Matching should use a deterministic combination of:

- instruction keywords
- optional page URL/title constraints when available

The runtime must not accidentally steal instructions that should still go down the Zhihu path.

### Step 3: Extract semantic business parameters from natural language

After `tq-lineloss-report` is matched, the runtime extracts semantic business parameters from the instruction.

Required semantic categories:

- company/unit expression
- period mode (`month` vs `week`)
- period text/value

Examples of accepted user-facing expressions include:

- `兰州公司`
- `天水公司`
- `国网兰州供电公司`
- `城关供电分公司`
- `2026-03`
- `2026年3月`
- `2026年第12周`
- `第12周`
- `月累计`
- `周累计`

### Step 4: Normalize semantic values into canonical coded values

This is a required separate design step.

The runtime must not pass raw natural-language company text directly to the business request layer.

Instead it must normalize semantic values into canonical execution parameters, including:

- `org_label` — canonical unit label
- `org_code` — the actual code/value required by the business page/API
- `period_mode` — canonical mode (`month` or `week`)
- `period_mode_code` — the page/API code (for example `timeChage`-style encoded mode)
- canonical time payload required by the source APIs/page state

This normalization should be derived from the actual source materials, including page-side dictionaries such as the existing unit tree data.

### Step 5: Missing and ambiguous parameters must stop execution

This slice must not silently infer missing parameters from page defaults.

If a required parameter is missing, execution must stop with an explicit prompt to the user.

If a parameter is ambiguous, execution must stop with an explicit ambiguity prompt.

Examples:

- no company matched
- no month/week mode matched
- no period value matched when required
- a short company alias matches multiple canonical units
- both monthly and weekly intent appear in the same instruction

This is preferable to silently using the wrong company code or the wrong query period.

### Step 6: Execute the staged skill through the existing pipe seam

If and only if parameters are present and successfully normalized:

- resolve `tq-lineloss-report.collect_lineloss`
- build the args object
- execute it through the current `browser_script` runtime
- inject the script into the browser through the existing pipe-backed browser tool seam

This slice must reuse the execution pattern already proven by the current browser-script/direct-skill infrastructure and the current Zhihu hotlist path.

Do not introduce a second browser protocol, new browser opcode family, or parallel execution harness.

### Step 7: Skill JS performs page-side work and returns one artifact

The staged script owns the actual line-loss business behavior:

- reading page-side state when needed
- validating the page context
- using normalized codes/parameters from args
- building source API requests
- collecting/normalizing rows
- export/report logging behavior if required by the final business contract
- returning a structured artifact

---

## Deterministic Trigger Contract

### Trigger rule

Deterministic mode is activated only when the raw instruction ends with the exact three-Chinese-dot suffix:

- `。。。`

This suffix is a user-controlled explicit mode switch.
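
The trigger rule above amounts to a single suffix check. A minimal sketch (the function name is illustrative, and trimming trailing whitespace before the test is an assumption rather than a stated requirement):

```rust
// Minimal sketch of the deterministic-mode trigger check.
// Trimming trailing whitespace first is an assumption of this sketch.
const DETERMINISTIC_SUFFIX: &str = "。。。";

fn is_deterministic(instruction: &str) -> bool {
    instruction.trim_end().ends_with(DETERMINISTIC_SUFFIX)
}

fn main() {
    assert!(is_deterministic("导出兰州公司月累计线损。。。"));
    assert!(!is_deterministic("导出兰州公司月累计线损。"));
    // ASCII dots are not the Chinese suffix and must not trigger.
    assert!(!is_deterministic("export lineloss..."));
}
```

Because `ends_with` compares the exact UTF-8 bytes of `。。。`, ASCII ellipses and shorter runs of `。` never flip the mode.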

### Why the suffix exists

It lets the user force business-deterministic behavior without relying on a model, while preserving the normal LLM path for ordinary requests.

### Scope rule

The suffix is not a free pass to run arbitrary browser actions.

It only selects among the deterministic skill whitelist.

If no deterministic scene matches, the runtime should return a deterministic-mode mismatch error that explains the currently supported deterministic scenes, rather than silently dropping into another behavior.

---

## Company / Unit Matching Contract

### Accepted input style

The user does **not** need to type the exact full canonical label.

The runtime should support business shorthand such as:

- `兰州公司`
- `天水公司`
- `白银公司`
- `城关供电分公司`
- `榆中县供电公司`

### Matching approach

Do not use regex alone as the primary company-resolution mechanism.

Use a three-stage resolution strategy:

1. text normalization
2. alias/candidate generation from canonical unit names
3. uniqueness resolution against the real unit dictionary

### Normalization examples

Canonical names such as:

- `国网兰州供电公司`
- `国网天水供电公司`
- `国网榆中县供电公司`

should be matchable from business shorthand forms such as:

- `兰州公司`
- `天水公司`
- `榆中县公司`
- `榆中供电公司`

### Data source for canonical mapping

The company/unit resolver should derive canonical mappings from the real source materials used by the business page, such as the current unit tree dictionary embedded in the source page resources.

Design implication:

- the resolver should produce the real `value`/code required downstream
- the resolver should also keep the canonical label for display/auditability

### Ambiguity rule

If a short alias resolves to more than one valid unit, execution must stop and ask the user to be more specific.

Do not auto-guess.
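
The three stages above can be sketched against a tiny hard-coded dictionary. Everything below is illustrative: the codes (`LZ01`, `TS01`) are invented, and the alias generator only strips the `国网` prefix and the `供电` infix, so a real resolver reading the page's unit tree would need richer variants (for example to accept `榆中供电公司`).

```rust
// Sketch of the three-stage unit resolution. Labels mirror the document's
// examples; the codes are made up for illustration.
#[derive(Debug, Clone, PartialEq)]
struct Unit { label: &'static str, code: &'static str }

#[derive(Debug, PartialEq)]
enum Resolved { One(Unit), Ambiguous(Vec<Unit>), None }

// Stage 2 helper: derive shorthand aliases from a canonical label by
// stripping the 国网 prefix and the 供电 infix.
fn aliases(label: &str) -> Vec<String> {
    let base = label.strip_prefix("国网").unwrap_or(label).to_string();
    let short = base.replace("供电", "");
    vec![label.to_string(), base, short]
}

fn resolve_unit(input: &str, dict: &[Unit]) -> Resolved {
    // Stage 1: text normalization (here: just strip whitespace).
    let text: String = input.chars().filter(|c| !c.is_whitespace()).collect();
    // Stage 2: candidate generation from canonical names.
    let hits: Vec<Unit> = dict
        .iter()
        .filter(|u| aliases(u.label).iter().any(|a| a == &text))
        .cloned()
        .collect();
    // Stage 3: uniqueness resolution — never auto-guess among multiple hits.
    match hits.len() {
        0 => Resolved::None,
        1 => Resolved::One(hits.into_iter().next().unwrap()),
        _ => Resolved::Ambiguous(hits),
    }
}

fn main() {
    let dict = [
        Unit { label: "国网兰州供电公司", code: "LZ01" },
        Unit { label: "国网天水供电公司", code: "TS01" },
    ];
    match resolve_unit("兰州公司", &dict) {
        Resolved::One(u) => assert_eq!(u.code, "LZ01"),
        other => panic!("unexpected: {other:?}"),
    }
    assert_eq!(resolve_unit("金昌公司", &dict), Resolved::None);
}
```

Returning `Ambiguous` with all candidates gives the prompting layer exactly what it needs to ask the user to be more specific.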

### Supported granularity

The first implementation must support both:

- city-company level
- district/county/sub-company level

This includes forms like:

- `兰州公司`
- `天水公司`
- `城关供电分公司`
- `榆中县供电公司`

---

## Period Extraction and Normalization Contract

### Required period dimensions

The runtime must identify:

- mode: `month` or `week`
- actual requested period value in a canonical form

### Accepted user-facing patterns

At minimum the design should account for patterns such as:

- `月累计`
- `周累计`
- `2026-03`
- `2026年3月`
- `2026年第12周`
- `第12周`

### Normalization output

The resolver should produce:

- a canonical mode enum/string
- a mode code required by the page/API
- a canonical period payload consumable by the script/business request layer
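
A mode and period resolver for the patterns above might look like the following sketch. The output shapes (`"YYYY-MM"`, a week number) are assumptions for illustration; the real mode codes (the `timeChage`-style values) and request payloads must come from the source page, and a week-only input like `第12周` still needs a year from elsewhere or a follow-up prompt.

```rust
// Sketch of month/week mode detection and period normalization.
// Output shapes are illustrative, not the real page/API codes.
#[derive(Debug, PartialEq)]
enum Period {
    Month { ym: String },                  // canonical "YYYY-MM"
    Week { year: Option<u32>, week: u32 },
}

// `None` means: stop and ask the user (missing or contradictory intent).
fn parse_mode(text: &str) -> Option<&'static str> {
    match (text.contains("月累计"), text.contains("周累计")) {
        (true, false) => Some("month"),
        (false, true) => Some("week"),
        _ => None,
    }
}

// Collect each run of ASCII digits as one number.
fn digit_runs(s: &str) -> Vec<u32> {
    let mut out = Vec::new();
    let mut cur = String::new();
    for c in s.chars() {
        if c.is_ascii_digit() {
            cur.push(c);
        } else if !cur.is_empty() {
            out.push(cur.parse().unwrap());
            cur.clear();
        }
    }
    if !cur.is_empty() {
        out.push(cur.parse().unwrap());
    }
    out
}

fn parse_period(text: &str) -> Option<Period> {
    let nums = digit_runs(text);
    if text.contains('周') {
        // `2026年第12周` -> year + week; `第12周` -> week only.
        return match nums.as_slice() {
            [y, w] => Some(Period::Week { year: Some(*y), week: *w }),
            [w] => Some(Period::Week { year: None, week: *w }),
            _ => None,
        };
    }
    // `2026-03` / `2026年3月` -> canonical "YYYY-MM".
    match nums.as_slice() {
        [y, m] if (1..=12).contains(m) => {
            Some(Period::Month { ym: format!("{y:04}-{m:02}") })
        }
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_mode("兰州公司月累计线损"), Some("month"));
    assert_eq!(parse_mode("月累计和周累计"), None); // contradictory -> ask
    assert_eq!(parse_period("2026年3月"),
               Some(Period::Month { ym: "2026-03".into() }));
    assert_eq!(parse_period("第12周"),
               Some(Period::Week { year: None, week: 12 }));
}
```

Note how the `None` results feed directly into the stop-and-prompt rules: contradictory month/week intent and unparseable periods both surface as "ask the user" rather than a guessed default.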

### Ambiguity rule

If both month and week intent appear, stop and ask the user to clarify.

### Missing-period rule

If the selected line-loss query requires a time period and the instruction does not provide enough information to construct one, stop and ask the user to provide it.

Do not default to the page-selected period in this slice.

---

## Parameter Prompting Contract

When deterministic mode matches `tq-lineloss-report` but one or more required parameters are missing or ambiguous, the runtime should return a user-facing prompt rather than executing.

Expected prompting cases include:

- missing company/unit
- missing month/week mode
- missing period value
- ambiguous company alias
- contradictory period expressions

The prompt should be specific enough to let the user correct only the missing field(s).

Example style:

- `已命中台区线损报表技能,但缺少供电单位,请补充如“兰州公司”或“城关供电分公司”。`
- `已命中台区线损报表技能,但未识别到月/周类型,请补充“月累计”或“周累计”。`

---

## Skill Package Contract

### SKILL.toml

The new skill package must declare a single deterministic collection entrypoint:

- tool name: `collect_lineloss`
- kind: `browser_script`

The tool description must reflect the real staged behavior, not a placeholder shell.

### SKILL.md

The written contract should cover:

- when to use the skill
- when not to use it
- collection workflow
- runtime contract
- explicit missing/partial/error semantics
- returned artifact contract

### references/collection-flow.md

Must explain:

- the source page state used by the skill
- how company and period parameters map to business requests
- which page/API calls are used for month vs week
- export/report-log sequencing if retained in the business flow

### references/data-quality.md

Must define:

- canonical output columns
- required field coverage
- status semantics
- partial/error conditions
- company/period normalization assumptions that the script relies on

### scripts/collect_lineloss.js

This is the real browser-side entrypoint. It should:

- accept normalized args
- validate page context
- execute deterministic page/API data collection
- normalize rows
- perform downstream export/report-history behavior if required
- directly return the final artifact from the browser-script runtime entrypoint shape

### scripts/collect_lineloss.test.js

Must cover the business transforms that can be tested off-browser, especially:

- company normalization assumptions consumed by the script
- monthly vs weekly request-shape logic
- status semantics
- artifact shaping

---

## Returned Artifact Contract

The final line-loss skill should return one structured artifact object rather than free-form prose.

At minimum it should expose:

- artifact type
- report name
- canonical company label/code used for the query
- period mode and canonical period value used for the query
- columns
- rows
- status
- counts
- downstream export/report-log status when applicable
- clear reasons for blocked/partial/error states

The exact field names may be finalized during implementation planning, but the contract must be structured enough for `claw-new` to interpret success vs partial vs blocked without re-embedding business logic.

---

## Pipe-First / Ws-Ready Execution Seam

### Current requirement

The first implementation on `main` must use the existing pipe-backed browser execution path.

### Future requirement

The design must allow later ws adoption without redesigning the skill or routing contract.

### Practical design rule

Keep these backend-neutral:

- deterministic trigger contract
- skill matching contract
- parameter extraction contract
- parameter normalization contract
- tool args contract
- artifact contract

Keep backend-specific code isolated to the execution seam only.

That way the later ws migration can replace the browser backend beneath the same deterministic skill contract.
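
One conventional way to keep the seam backend-neutral is a single trait boundary. The sketch below is hypothetical (the trait, struct, and method names are not claw-new's real types, and `PipeBackend` is stubbed): routing and the skill contracts talk only to the trait, so a later `WsBackend` can be slotted in without touching call sites.

```rust
// Sketch of a backend-neutral execution seam. Names are illustrative.
trait BrowserBackend {
    /// Inject a browser-script with JSON args; return the JSON artifact text.
    fn eval_script(&self, script: &str, args_json: &str) -> Result<String, String>;
}

/// Pipe-backed seam used on `main` today (stubbed here for shape only).
struct PipeBackend;

impl BrowserBackend for PipeBackend {
    fn eval_script(&self, _script: &str, args_json: &str) -> Result<String, String> {
        // The real implementation would write through the pipe seam.
        Ok(format!("{{\"type\":\"report-artifact\",\"args\":{args_json}}}"))
    }
}

// Deterministic dispatch stays backend-agnostic behind `&dyn BrowserBackend`.
fn run_skill(
    backend: &dyn BrowserBackend,
    script: &str,
    args_json: &str,
) -> Result<String, String> {
    backend.eval_script(script, args_json)
}

fn main() {
    let out = run_skill(&PipeBackend, "return { ok: true }", "{}").unwrap();
    assert!(out.contains("report-artifact"));
}
```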
||||||
|
---
|
||||||
|
|
||||||
|
## Caller/Runtime Design Rules
|
||||||
|
|
||||||
|
### 1. Keep new business logic out of broad orchestration
|
||||||
|
|
||||||
|
Do not thread line-loss-specific business behavior through the general orchestration/runtime path.
|
||||||
|
|
||||||
|
### 2. Add a narrow deterministic-routing seam
|
||||||
|
|
||||||
|
This slice should add a narrow deterministic branch around submit-task routing, rather than rewriting the whole runtime decision tree.
|
||||||
|
|
||||||
|
### 3. Separate extraction from normalization
|
||||||
|
|
||||||
|
Do not mix “what the user typed” with “what the backend needs”.
|
||||||
|
|
||||||
|
There must be a distinct normalization step.
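
A minimal sketch of the normalization half, assuming a hypothetical `normalize_month` helper that maps a typed phrase like `2026年3月` (extraction output) to the canonical `2026-03` form (backend input):

```rust
// Extraction captures what the user typed (e.g. "2026年3月");
// normalization maps it to what the backend needs (e.g. "2026-03").
// The input format handled here is an assumption for illustration.
fn normalize_month(raw: &str) -> Option<String> {
    let cleaned = raw.trim().trim_end_matches('月');
    let mut parts = cleaned.split('年');
    let year: u32 = parts.next()?.parse().ok()?;
    let month: u32 = parts.next()?.parse().ok()?;
    if !(1..=12).contains(&month) {
        // Out-of-range months are rejected instead of silently passed through.
        return None;
    }
    Some(format!("{year:04}-{month:02}"))
}
```

Keeping this step separate means the deterministic matcher can report "extracted but not normalizable" as a distinct prompt-worthy state.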

### 4. Keep the direct-skill browser seam narrow

Reuse the current `browser_script` execution seam instead of inventing a new browser bridge.

### 5. Preserve Zhihu behavior by design, not by hope

The design should assume new deterministic routing can accidentally steal or alter existing Zhihu behavior unless explicitly guarded against.

This is why focused Zhihu regression coverage is mandatory.

---

## Verification Requirements for the Future Implementation Plan

Implementation planning must include explicit verification for:

1. deterministic suffix detection
2. deterministic lineloss scene matching
3. company alias normalization to canonical code
4. support for both company-level and district/county/sub-company-level units
5. month/week extraction and normalization
6. missing-parameter prompt behavior
7. ambiguous-company prompt behavior
8. pipe-backed browser-script execution for the new skill
9. no regression to the existing Zhihu hotlist path
10. preserved direct-skill/browser-script behavior outside the new line-loss scene

---

## Out of Scope for This Slice

- enabling ws execution on `main`
- replacing the current Zhihu routing model
- general scene-registry runtime architecture redesign
- full free-form semantic understanding of arbitrary business language
- typo-tolerant fuzzy NLP beyond deterministic business-safe matching
- making page defaults the hidden source of truth when the user omitted parameters

---

## Planning Notes

The implementation plan should likely split into distinct work items for:

1. staged skill package creation and business contract definition
2. deterministic trigger + scene match in `claw-new`
3. company/unit normalization and ambiguity handling
4. period extraction/normalization and ambiguity handling
5. pipe-backed direct execution integration
6. returned artifact interpretation
7. Zhihu regression verification
8. ws-readiness seam verification

The plan should explicitly keep the "do not break Zhihu hotlist" boundary visible in every execution and verification stage.

@@ -0,0 +1,69 @@

# Async Browser Script Support Design

## Problem

`buildBrowserEntrypointResult` in `collect_lineloss.js` is an async function, but the execution code generated by `build_eval_js` is synchronous, so the Promise it returns is serialized by JSON.stringify as `{}`.

**Log symptom**:
```
[execute_browser_script_impl] returned success, payload length: 4
```
The call returns `{}` (4 characters) instead of the actual report data.

## Root Cause

The `build_eval_js` function in `callback_backend.rs`:

```javascript
var v=(function(){return {script}})(); // executes synchronously
var t=(typeof v==='string')?v:JSON.stringify(v); // Promise -> "{}"
```

When the script returns a Promise, `JSON.stringify(Promise)` returns `{}`.

## Solution

Change `build_eval_js` to support Promises:

1. `await` the result of executing the script
2. Detect whether the result is a Promise and, if so, wait for it to resolve
3. Keep backward compatibility with synchronous scripts

## Implementation Details

Modify the `build_eval_js` function in `src/browser/callback_backend.rs`:

```javascript
(async function(){
  try {
    var v = await (function(){return {script}})();
    // wait for the Promise to resolve
    if (v && typeof v.then === 'function') {
      v = await v;
    }
    var t = (typeof v === 'string') ? v : JSON.stringify(v);
    // ... callback logic unchanged
  } catch(e) {}
})()
```

Key points:
- Wrap the whole IIFE as async
- `await` the script execution
- Detect Promise-like objects (`typeof v.then === 'function'`)
- Backward compatible: sync scripts return their value directly; async scripts return a Promise that is then awaited

## Impact

- `src/browser/callback_backend.rs`: modify the `build_eval_js` function
- All `browser_script`-type skills automatically gain async support

## Test Verification

1. Run `cargo test` to confirm existing tests pass
2. End-to-end test that `tq-lineloss-report.collect_lineloss` returns real data rather than `{}`
3. Verify that synchronous scripts (e.g. the Zhihu hotlist) still work

## Out of Scope

- No change to `wrap_browser_script` (the approach taken by option C)
- No change to the skill scripts themselves

docs/superpowers/specs/2026-04-13-async-eval-then-fix.md (new file, 47 lines)
@@ -0,0 +1,47 @@

# Fix build_eval_js Async Support + validatePageContext Diagnostic Logging

## Problem Description

1. `buildBrowserEntrypointResult` in `collect_lineloss.js` is an async function and returns a Promise
2. In the current synchronous `build_eval_js`, `JSON.stringify(Promise)` = `"{}"`
3. The earlier async-IIFE approach caused `page_context_unavailable` (cause still under investigation)

## Approach

### Change 1: build_eval_js uses a .then() branch

File: `src/browser/callback_backend.rs`, the `build_eval_js` function

Logic:
1. The outer IIFE stays synchronous (compatible with the C++ injection layer)
2. Extract the callback-sending logic into an `_s` function
3. If the return value is a Promise (it has a `.then` method), wait for the result asynchronously with `.then(_s)`
4. Otherwise call `_s(v)` synchronously

```javascript
(function(){try{
  var v=(function(){return {script}})();
  function _s(v){
    var t=(typeof v==='string')?v:JSON.stringify(v);
    try{callBackJsToCpp(...);}catch(_){}
    var j=JSON.stringify({...});
    try{XHR...}catch(_){}
    try{sendBeacon...}catch(_){}
  }
  if(v&&typeof v.then==='function'){v.then(_s).catch(function(){});}
  else{_s(v);}
}catch(e){}})()
```
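
On the Rust side, `build_eval_js` could assemble this wrapper roughly as sketched below; the callback-sending body of `_s` is elided, and the real function also embeds the callback endpoints:

```rust
// Simplified sketch of build_eval_js: wrap the skill script so a Promise
// return value is awaited via .then(_s) while synchronous scripts still call
// _s(v) directly. The sender body of _s is elided (marked "send t").
fn build_eval_js(script: &str) -> String {
    format!(
        "(function(){{try{{\
         var v=(function(){{return {script}}})();\
         function _s(v){{var t=(typeof v==='string')?v:JSON.stringify(v);/* send t */}}\
         if(v&&typeof v.then==='function'){{v.then(_s).catch(function(){{}});}}\
         else{{_s(v);}}\
         }}catch(e){{}}}})()"
    )
}
```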

### Change 2: add diagnostic logging to validatePageContext

File: `D:\data\ideaSpace\rust\sgClaw\claw\claw\skills\skill_staging\skills\tq-lineloss-report\scripts\collect_lineloss.js`

Add a console.log at each check point in `validatePageContext`, recording host, expected_domain, and mac status.

## Verification

1. `cargo test` passes
2. Copy the compiled exe to the live environment
3. Execute the skill and confirm it no longer returns `{}`
4. If `page_context_unavailable` appears, inspect the browser console logs

docs/superpowers/specs/2026-04-13-expected-domain-arg-fix.md (new file, 55 lines)
@@ -0,0 +1,55 @@

# Fix Missing expected_domain Argument in the Browser Script Skill Tool

## Problem Description

Executing the `tq-lineloss-report.collect_lineloss` skill returns a `status=blocked row=0 reasons=missing_expected_domain` error.

## Root Cause

The `execute_browser_script_impl` function in `src/compat/browser_script_skill_tool.rs`:

```rust
// line 183: remove expected_domain from args
let raw_expected_domain = match args.remove("expected_domain") {
    Some(Value::String(value)) if !value.trim().is_empty() => value,
    // ...
};

// line 200: normalize the domain (strip scheme, port, etc.)
let expected_domain = match normalize_domain_like(&raw_expected_domain) {
    Some(value) => value,
    // ...
};

// line 234: by the time the script is wrapped, args no longer contains expected_domain!
let wrapped_script = wrap_browser_script(&script_body, &Value::Object(args.clone()));
```

`args.remove()` deletes the key/value pair from the map, so the args later passed to `wrap_browser_script()` no longer include `expected_domain`, and the `const args = {...}` in the JS script is missing that field.

## Solution

After normalizing the domain, re-insert `expected_domain` into args.

### Where to Change

File: `src/compat/browser_script_skill_tool.rs`
Location: after line 209 (after the `expected_domain` assignment and before the `for required_arg` loop)

### Change

```rust
// add after line 209:
args.insert("expected_domain".to_string(), Value::String(expected_domain.clone()));
```
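
The remove, normalize, re-insert flow can be sketched with a plain string map (the real code works on `serde_json` values; `normalize_domain_like` here is a simplified stand-in):

```rust
use std::collections::HashMap;

// Simplified stand-in for normalize_domain_like: strip an http(s) scheme
// and any trailing slash. The real normalization does more.
fn normalize_domain_like(raw: &str) -> Option<String> {
    let s = raw
        .trim()
        .trim_start_matches("https://")
        .trim_start_matches("http://")
        .trim_end_matches('/');
    if s.is_empty() { None } else { Some(s.to_string()) }
}

// Sketch of the fix: the raw value is removed for normalization, so the
// normalized value must be put back before the script is wrapped.
fn fix_args(args: &mut HashMap<String, String>) -> Option<String> {
    let raw = args.remove("expected_domain")?; // removed here...
    let expected_domain = normalize_domain_like(&raw)?;
    // ...so re-insert it, otherwise the wrapped JS never sees expected_domain.
    args.insert("expected_domain".to_string(), expected_domain.clone());
    Some(expected_domain)
}
```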

## Impact

- Only `browser_script_skill_tool.rs` is affected
- Every browser_script skill that uses `expected_domain` benefits
- No breaking change

## Verification

1. Run the existing tests: `cargo test browser_script_skill_tool`
2. Intranet verification: execute the `tq-lineloss-report.collect_lineloss` skill

docs/superpowers/specs/2026-04-13-lineloss-requesturl-fix.md (new file, 48 lines)
@@ -0,0 +1,48 @@

# Station-Area Line-Loss (台区线损) Skill - requesturl Quick Fix

## Background

The `sgHideBrowerserOpenPage` command requires a `requesturl` parameter (the URL of the page issuing the call), but parsing the current station-area line-loss instruction returns `about:blank`, so the browser does not execute the command.

The Zhihu hotlist scenario works because `derive_request_url_from_instruction` returns `https://www.zhihu.com`.

## Design

**Approach: add a station-area line-loss URL mapping in `derive_request_url_from_instruction`**

### Where to Change

`src/service/server.rs` - the `derive_request_url_from_instruction` function

### Change

```rust
fn derive_request_url_from_instruction(instruction: &str) -> Option<String> {
    // existing: Zhihu-related (unchanged)
    if crate::compat::workflow_executor::detect_route(instruction, None, None)
        .is_some_and(|route| { ... })
    {
        return Some("https://www.zhihu.com".to_string());
    }

    // new: station-area line-loss
    // TODO: temporary; should later come from skill config or the deterministic_submit parse result
    if instruction.contains("线损") || instruction.contains("lineloss") {
        return Some("http://20.76.57.61:18080".to_string());
    }

    None
}
```
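
Stripped of the Zhihu branch (which depends on the real `detect_route` types), the new line-loss branch alone can be exercised in isolation:

```rust
// Minimal runnable version of only the new branch; the Zhihu branch from the
// real function is omitted because it depends on detect_route.
fn derive_lineloss_request_url(instruction: &str) -> Option<String> {
    if instruction.contains("线损") || instruction.contains("lineloss") {
        return Some("http://20.76.57.61:18080".to_string());
    }
    None
}
```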

### Constraints

- The URL is hard-coded; this needs to be refactored into a generic mechanism later
- Only matches instructions that contain "线损" or "lineloss"

## Follow-up Plan

A generic mechanism will be implemented that will:
- Build the full URL from `DeterministicExecutionPlan.expected_domain`
- Or read the target URL from the skill configuration file
- Or reorder the flow so the skill is resolved before the helper page is opened

docs/superpowers/specs/2026-04-13-lineloss-target-url-fix.md (new file, 36 lines)
@@ -0,0 +1,36 @@

# Station-Area Line-Loss (台区线损) Skill - Missing target_url Fix

## Background

`browser_script_skill_tool.rs` calls `Action::Eval` passing only the `script` parameter and no `target_url`. The `target_url` method in `callback_backend.rs` takes its value from either params or `current_target_url`; when neither is set, it errors.

The Zhihu hotlist works because it first executes `Action::Navigate`, which sets `current_target_url`.

## Design

**Approach: add `target_url` to the params in `browser_script_skill_tool.rs`**

### Where to Change

`src/compat/browser_script_skill_tool.rs` - the `execute_browser_script_impl` function

### Change

When calling `browser_tool.invoke(Action::Eval, ...)`, build the full URL from `expected_domain` and add it to the params:

```rust
let target_url = format!("http://{}", expected_domain);
let result = match browser_tool.invoke(
    Action::Eval,
    json!({
        "script": wrapped_script,
        "target_url": target_url,
    }),
    &expected_domain,
) {
```

### Constraints

- Uses the `http://` scheme prefix
- `expected_domain` may include a port (e.g. `20.76.57.61:18080`); plain concatenation works

@@ -0,0 +1,84 @@

# Remove mac Guard from validatePageContext

## Date

2026-04-13

## Problem

`tq-lineloss-report` skill execution reports `status=blocked rows=0 reasons=page_context_unavailable`.

Diagnostic instrumentation confirmed:

```
href=http://20.76.57.61:18080/gsllys
host=20.76.57.61
port=18080
title=台区线损大数据分析模块
mac=false
```

The script executes on the correct domain but `globalThis.mac` does not exist, triggering the `page_context_unavailable` guard.

## Root Cause

`window.mac` is a Vue instance created by the **original scene page** (`index.html`), assigned via `window.mac = this` in `mounted()`. The original scene page acts as a controller that injects JS into the business page via `BrowserAction('sgBrowserExcuteJsCode', exactURL, jsCode)`.

In the skill execution model, there is no scene page. The script is injected directly via `sgBrowserExcuteJsCodeByDomain` onto a page matching the domain. No Vue instance is created, so `globalThis.mac` is always `undefined`. The `mac` check is architecturally invalid for the skill model.

Additionally, `sgBrowserExcuteJsCodeByDomain("20.76.57.61")` matches the parent frame page (`/gsllys`) rather than the business sub-page (`/gsllys/tqLinelossStatis/tqQualifyRateMonitor`). This is acceptable because the skill script makes direct HTTP requests with absolute URLs and does not depend on page-local state.

## Design

Remove the `globalThis.mac` existence check from `validatePageContext` in `collect_lineloss.js`. Retain the `host` matching check as a basic domain guard.

Also clean up the temporary diagnostic code (`diag` variable, `console.log` statements, enriched reason strings) added during debugging.

### Before

```javascript
validatePageContext(args) {
  const host = normalizeText(globalThis.location?.hostname);
  const port = normalizeText(globalThis.location?.port);
  const href = normalizeText(globalThis.location?.href);
  const title = normalizeText(globalThis.document?.title);
  const expected = normalizeText(args.expected_domain);
  const hasMac = !!globalThis.mac;
  const diag = 'href=' + href + '|host=' + host + '|port=' + port + '|title=' + title + '|mac=' + hasMac;
  console.log('[validatePageContext] ' + diag);
  if (!host) {
    return { ok: false, reason: 'page_context_unavailable:host_empty|' + diag };
  }
  if (host !== expected) {
    return { ok: false, reason: 'page_context_mismatch:host=' + host + ',expected=' + expected + '|' + diag };
  }
  if (!hasMac) {
    return { ok: false, reason: 'page_context_unavailable:mac_missing|' + diag };
  }
  return { ok: true };
},
```

### After

```javascript
validatePageContext(args) {
  const host = normalizeText(globalThis.location?.hostname);
  const expected = normalizeText(args.expected_domain);
  if (!host) {
    return { ok: false, reason: 'page_context_unavailable' };
  }
  if (host !== expected) {
    return { ok: false, reason: 'page_context_mismatch' };
  }
  return { ok: true };
},
```

## Files Changed

- `claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js` — `validatePageContext` function only

## No Recompilation Required

The JS file is read at runtime via `fs::read_to_string`. No Rust code changes.

@@ -0,0 +1,111 @@

# Rust-Side Lineloss XLSX Export

## Problem

`collect_lineloss.js` runs on a remote page (`http://20.76.57.61:18080/gsllys`). The script successfully queries API data (12 rows), but cannot call `http://localhost:13313/.../faultDetailsExportXLSX` because the browser blocks cross-origin requests from a remote page to `localhost`.

The original scene architecture had a local scene page acting as a proxy, but skill mode has no local page, so export is architecturally impossible from the browser side.

## Decision

Move XLSX generation to the Rust side. JS only collects data; Rust generates the `.xlsx` file locally after receiving the artifact.

Report log (`setReportLog`) is deferred to a later iteration.

## Design

### JS Changes (`collect_lineloss.js`)

1. Remove the `exportWorkbook()` call and the `writeReportLog()` call
2. Return an artifact with a `rows` array and a `column_defs` array
3. Status is `ok` when rows > 0, `empty` when rows == 0; `error`/`blocked` unchanged

Artifact shape:
```json
{
  "type": "report-artifact",
  "report_name": "tq-lineloss-report",
  "status": "ok",
  "org": { "label": "...", "code": "..." },
  "period": { "mode": "month", "value": "2026-03" },
  "column_defs": [["ORG_NAME","供电单位"], ["YGDL","累计供电量"], ...],
  "rows": [
    {"ORG_NAME":"xxx", "YGDL":"12345.67", ...}
  ],
  "counts": { "rows": 12 }
}
```

### Rust Changes

#### New file: `src/compat/lineloss_xlsx_export.rs`

Generates a standard `.xlsx` file using the `zip` crate + OpenXML XML strings. Follows the pattern established in `openxml_office_tool.rs`.

Public API:
```rust
pub struct LinelossExportRequest {
    pub column_defs: Vec<(String, String)>, // (key, chinese_header)
    pub rows: Vec<Map<String, Value>>,
    pub sheet_name: String,
    pub output_path: PathBuf,
}

pub fn export_lineloss_xlsx(request: &LinelossExportRequest) -> anyhow::Result<PathBuf>;
```

Internals:
- Build the header row from `column_defs[*].1` (Chinese names)
- Build data rows by looking up the `column_defs[*].0` keys in each row map
- Generate `worksheet_xml` with inline-string cells
- Package with standard OpenXML boilerplate (content_types, rels, workbook)
- Write to `output_path`

#### Modified: `src/compat/deterministic_submit.rs`

In `execute_deterministic_submit_with_browser_backend` (and the non-backend variant):

```
let output = execute_browser_script_skill_raw_output_with_browser_backend(...)?;
let artifact = parse_lineloss_artifact(&output);

if artifact has rows > 0 && column_defs present:
    let export_path = workspace_root/out/tq-lineloss-{timestamp}.xlsx
    export_lineloss_xlsx(LinelossExportRequest { ... })?
    // attach export_path to outcome summary

Ok(summarize_lineloss_output_with_export(&output, export_path))
```

#### Modified: `src/compat/mod.rs`

Add `pub mod lineloss_xlsx_export;`

### Output Path

`{workspace_root}/out/tq-lineloss-{org_label}-{period}-{timestamp_nanos}.xlsx`

### Error Handling

- XLSX generation failure: outcome status = `partial`, reason = `xlsx_export_failed`
- Artifact parse failure: fall through to the existing `summarize_lineloss_output`

## Files Changed

| File | Change Type |
|------|-------------|
| `collect_lineloss.js` | Modify: remove export/log calls, add rows+column_defs to artifact |
| `src/compat/lineloss_xlsx_export.rs` | New: XLSX generation |
| `src/compat/deterministic_submit.rs` | Modify: post-process artifact, call XLSX export |
| `src/compat/mod.rs` | Modify: register new module |

## Requires Recompilation

Yes. Rust code changes require `cargo build`.

@@ -0,0 +1,55 @@

# Helper Page Lifecycle Fix v2 — Same-Connection Close + Open

**Date:** 2026-04-14
**Status:** Approved

## Problem

Two issues remain after v1:

1. **Process restart leaves orphaned helper pages**: When the sg_claw process restarts, the old helper page tab remains open in the browser. The new process opens another one.
2. **Helper page is visible**: Uses `sgBrowerserOpenPage` (visible tab API) instead of `sgHideBrowerserOpenPage` (hidden domain API).

## Root Cause of v1 Failure

The v1 `close_helper_page` function created a **second** WebSocket connection to the browser during `Drop`. This likely conflicted with the existing bootstrap connection, causing the browser's WebSocket state to become confused.

## Solution

Send the close command on the **same** WebSocket connection used for bootstrap, before sending the open command:

1. Connect to the browser WS
2. Register as the "web" role
3. **Blindly send** `sgHideBrowerserClosePage(helper_url)` — closes any orphaned page from a previous process run
4. Send `sgHideBrowerserOpenPage(helper_url)` — opens the new helper page
5. Poll `/sgclaw/callback/ready` for page readiness

Both `use_hidden_domain = true` and the close+open logic are combined into a single change.
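
Given the `[requesturl, action, url]` wire format, the close+open pair can be sketched as below; the JSON-array encoding here is naive (no escaping), and the real `build_command` lives in `callback_backend.rs`:

```rust
// Naive sketch of a wire frame: ["requesturl", "action", "url"].
// Real code should JSON-escape the values.
fn build_command(requesturl: &str, action: &str, url: &str) -> String {
    format!("[\"{}\",\"{}\",\"{}\"]", requesturl, action, url)
}

// The two frames sent, in order, on the single bootstrap WebSocket connection.
fn bootstrap_frames(helper_url: &str) -> Vec<String> {
    vec![
        // Best-effort close of any orphaned helper page from a previous run;
        // silently ignored by the browser if no such page exists.
        build_command(helper_url, "sgHideBrowerserClosePage", helper_url),
        // Then open the new hidden-domain helper page.
        build_command(helper_url, "sgHideBrowerserOpenPage", helper_url),
    ]
}
```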

## Why This Works

- **Same connection**: Only one WebSocket connection to the browser. No conflict with existing connections.
- **Best-effort close**: If no orphaned page exists (first run ever), the close command is silently ignored by the browser. This does not affect the subsequent open command.
- **Fire-and-forget**: Both close and open commands use the same fire-and-forget semantics as the existing bootstrap command.

## API Reference

| API | Wire format | Effect |
|-----|------------|--------|
| `sgHideBrowerserOpenPage` (API #6) | `[requesturl, "sgHideBrowerserOpenPage", url]` | Opens in hidden domain |
| `sgHideBrowerserClosePage` (API #68) | `[requesturl, "sgHideBrowerserClosePage", url]` | Closes hidden domain page |

## Affected Files

| File | Change |
|------|--------|
| `src/browser/callback_host.rs` | In `bootstrap_helper_page`: add close command before open command |
| `src/service/server.rs` | Change `use_hidden_domain` from `false` to `true` |

## What Does NOT Change

- `callback_backend.rs` — `SHOW_AREA`, `build_command` unchanged
- `sgBrowserExcuteJsCodeByDomain` area parameter — stays `"show"`
- Helper page HTML content — unchanged
- `Drop for LiveBrowserCallbackHost` — remains simple (shutdown only, no close attempt)
- `cached_host` in `mod.rs` — remains lifted to outer loop

@@ -0,0 +1,99 @@

# Helper Page Lifecycle Fix & Hidden Domain Support

**Date:** 2026-04-14
**Status:** Approved

## Problem Statement

Two bugs in the browser-helper.html page management:

1. **Duplicate helper pages**: Every WebSocket client reconnection triggers a new `serve_client()` call, which creates a new `LiveBrowserCallbackHost` and opens a new helper page via `sgBrowerserOpenPage`. The old helper page tab is never closed, causing accumulation of orphaned tabs.
2. **Helper page is visible**: The bootstrap uses `sgBrowerserOpenPage` (visible tab API) instead of `sgHideBrowerserOpenPage` (hidden domain API). The helper page should not be visible to the user.

## Root Cause Analysis

### Duplicate pages

Call chain:
- `src/service/mod.rs:72` — outer `loop` accepts new WebSocket connections
- `src/service/mod.rs:79` — each connection calls `serve_client()`
- `src/service/server.rs:241` — `cached_host` declared as a local variable, re-initialized to `None` each call
- `src/service/server.rs:288` → `callback_host.rs:241` — `bootstrap_helper_page()` opens a new helper tab

`Drop for LiveBrowserCallbackHost` (`callback_host.rs:321-328`) only shuts down the HTTP server thread. It does not send a browser close command for the helper tab.

### Visible page

`callback_host.rs:28`: `HELPER_BOOTSTRAP_ACTION = "sgBrowerserOpenPage"` — this is the visible-domain open API (API #7). The hidden-domain equivalent is `sgHideBrowerserOpenPage` (API #6).

## Solution: Approach C — Incremental Fix

### Step 1: Fix lifecycle (immediate, deterministic fix)

#### 1a. Lift `cached_host` to outer loop

Move `cached_host: Option<Arc<LiveBrowserCallbackHost>>` from inside `serve_client()` to before the `loop` in `run_service()` (`mod.rs`). Change the `serve_client()` signature to accept `&mut Option<Arc<LiveBrowserCallbackHost>>` instead of creating its own.

Effect: Multiple WebSocket reconnections share the same host. The helper page opens once per process lifetime.

#### 1b. Close helper page on Drop

Enhance `Drop for LiveBrowserCallbackHost`:
- Add a `browser_ws_url: String` field to `LiveBrowserCallbackHost` (stored at construction time)
- Add a `use_hidden_domain: bool` field (stored at construction time)
- In `Drop::drop`, before shutting down the server thread:
  1. Connect to `browser_ws_url` with a 100ms connection timeout
  2. Send the register message
  3. Send the close command: `[helper_url, close_api, helper_url]`
     - `close_api` = `"sgBrowserClosePage"` when `use_hidden_domain == false`
     - `close_api` = `"sgHideBrowerserClosePage"` when `use_hidden_domain == true`
  4. All steps are best-effort: failures are silently ignored
  5. Total timeout cap: 500ms

### Step 2: Hidden domain config switch (for testing/gradual rollout)

#### 2a. Parameter plumbing

- `LiveBrowserCallbackHost::start_with_browser_ws_url` gains a parameter `use_hidden_domain: bool`
- `bootstrap_helper_page` selects the API based on this flag:
  - `true` → `"sgHideBrowerserOpenPage"`
  - `false` → `"sgBrowerserOpenPage"` (current behavior, default)
- `LiveBrowserCallbackHost` stores the flag for Drop close-command selection

#### 2b. Caller changes

- `mod.rs` / `server.rs` pass `false` as the default
- To enable the hidden domain, change the call site to pass `true`

## What Does NOT Change

- `callback_backend.rs` `SHOW_AREA = "show"` — JS injection targets visible business pages, not the helper itself
- `sgBrowserExcuteJsCodeByDomain` area parameter — stays `"show"` regardless of helper domain
- Helper page HTML content — WebSocket connection and command polling JS remain the same
- `collect_lineloss.js` — not affected

## Affected Files

| File | Change |
|------|--------|
| `src/browser/callback_host.rs` | New fields on `LiveBrowserCallbackHost`, `start_with_browser_ws_url` signature change, `Drop` enhancement, new `close_helper_page` helper fn |
| `src/service/mod.rs` | `cached_host` lifted to outer loop, passed to `serve_client` |
| `src/service/server.rs` | `serve_client` signature change to accept `&mut Option<Arc<LiveBrowserCallbackHost>>` |
| Existing test files | Adapt `start_with_browser_ws_url` calls with the new `use_hidden_domain` parameter |

## Testing

- Existing `callback_host` tests: adapt to the new signature (add a `false` parameter)
- New unit test: `use_hidden_domain = true` → bootstrap sends `sgHideBrowerserOpenPage`
- New unit test: `use_hidden_domain = false` → bootstrap sends `sgBrowerserOpenPage` (regression)
- `cargo build` + `cargo test` full verification

## Browser API Reference

| API | Wire format | Effect |
|-----|------------|--------|
| `sgBrowerserOpenPage` (API #7) | `[requesturl, "sgBrowerserOpenPage", url]` | Opens visible tab |
| `sgHideBrowerserOpenPage` (API #6) | `[requesturl, "sgHideBrowerserOpenPage", url]` | Opens in hidden domain |
| `sgBrowserClosePage` (API #64) | `[requesturl, "sgBrowserClosePage", url]` | Closes visible tab |
| `sgHideBrowerserClosePage` (API #68) | `[requesturl, "sgHideBrowerserClosePage", url]` | Closes hidden domain page |

@@ -0,0 +1,284 @@

# sgClaw Service Console Enhancement Design

## Background

The current `sg_claw_service_console.html` provides a basic UI for connecting to the sgClaw service WebSocket and submitting tasks. However, it requires manual connection on first load and has no way to configure the sgClaw settings (API key, model, base URL, skills directory) from the UI.

Users need to manually edit `sgclaw_config.json` before using the console, which is inconvenient for routine operations.

## Problem Statement

1. Page requires a manual "Connect" button click on first load
2. No UI for configuring sgClaw runtime settings (model, API key, base URL, skills dir)
3. Users must manually edit the `sgclaw_config.json` file to change configuration

## Goal

Enhance the service console page with:

1. **Auto-connect on page load** - attempt WebSocket connection immediately
2. **Settings panel** - edit sgClaw configuration fields through a friendly UI
3. **Config save via WebSocket** - send configuration updates to the running sgClaw service, which writes them to `sgclaw_config.json`
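
On the wire, the config-save message could look like the sketch below; the field names and values are illustrative placeholders, and the real schema is whatever the Rust `ClientMessage::UpdateConfig` variant defines:

```json
{
  "type": "update_config",
  "config": {
    "api_key": "sk-placeholder",
    "base_url": "https://api.example.com/v1",
    "model": "model-name",
    "skills_dir": "./skills",
    "runtime_profile": "default",
    "browser_backend": "pipe"
  }
}
```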
|
||||||
|
|
||||||
|
## Non-goals
|
||||||
|
|
||||||
|
- Auto-starting `sg_claw.exe` process (browser security limitation, deferred)
|
||||||
|
- Changing existing `submit_task` protocol or execution flow
|
||||||
|
- Modifying browser-helper.html or browser execution logic
|
||||||
|
- Adding authentication or multi-user support
|
||||||
|
- Configuration validation beyond basic field checks
|
||||||
|
|
||||||
|
## Architecture
|
||||||
|
|
||||||
|
### Component Overview
|
||||||
|
|
||||||
|
```
┌─────────────────────────────────────────┐
│ sg_claw_service_console.html            │
│ ┌───────────────────────────────────┐   │
│ │ Auto-connect on load              │   │
│ │ (ws://127.0.0.1:42321 default)    │   │
│ └───────────────────────────────────┘   │
│ ┌───────────────────────────────────┐   │
│ │ Settings Panel (Modal)            │   │
│ │ - API Key                         │   │
│ │ - Base URL                        │   │
│ │ - Model                           │   │
│ │ - Skills Directory                │   │
│ │ - Direct Submit Skill (optional)  │   │
│ │ - Runtime Profile (dropdown)      │   │
│ │ - Browser Backend (dropdown)      │   │
│ │ [Save] [Cancel]                   │   │
│ └───────────────────────────────────┘   │
│ ┌───────────────────────────────────┐   │
│ │ Existing: Connection + Composer   │   │
│ └───────────────────────────────────┘   │
└──────────────┬──────────────────────────┘
               │ WebSocket
               │ submit_task / update_config
               ▼
┌─────────────────────────────────────────┐
│ sg_claw.exe (service)                   │
│ ┌───────────────────────────────────┐   │
│ │ ClientMessage handler             │   │
│ │ - SubmitTask (existing)           │   │
│ │ - UpdateConfig (new)              │   │
│ └───────────────────────────────────┘   │
│ ┌───────────────────────────────────┐   │
│ │ Config writer                     │   │
│ │ Writes to sgclaw_config.json      │   │
│ └───────────────────────────────────┘   │
└─────────────────────────────────────────┘
```

### Data Flow

1. **Auto-connect flow:**
   - Page loads → JavaScript calls `connect()` automatically
   - If the WS opens → show the "已连接" (connected) chip and enable the send button
   - If the WS fails → show the "未连接" (disconnected) chip and keep send disabled
   - Reconnect logic remains unchanged (existing heartbeat/reconnect)

2. **Config save flow:**
   - User clicks the "设置" (Settings) button → modal opens with current config values
   - User edits fields → clicks "保存" (Save)
   - Page sends an `update_config` message via WS:

     ```json
     {
       "type": "update_config",
       "config": {
         "apiKey": "...",
         "baseUrl": "...",
         "model": "...",
         "skillsDir": "...",
         "directSubmitSkill": "...",
         "runtimeProfile": "...",
         "browserBackend": "..."
       }
     }
     ```

   - The sgClaw service receives the message → validates → writes to `sgclaw_config.json`
   - The service responds with success/error → the page shows a notification
   - The service reloads config in-memory (or requires a restart - see below)

### Protocol Changes

#### New ClientMessage variant

Add to `src/service/protocol.rs`:

```rust
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ClientMessage {
    Connect,
    Start,
    Stop,
    SubmitTask { ... },
    Ping,
    UpdateConfig { // NEW
        config: ConfigUpdatePayload,
    },
}

#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")] // matches the camelCase keys the console sends (apiKey, baseUrl, ...)
pub struct ConfigUpdatePayload {
    pub api_key: Option<String>,
    pub base_url: Option<String>,
    pub model: Option<String>,
    pub skills_dir: Option<String>,
    pub direct_submit_skill: Option<String>,
    pub runtime_profile: Option<String>,
    pub browser_backend: Option<String>,
}
```

#### New ServiceMessage variant (optional)

Add to `src/service/protocol.rs`:

```rust
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ServiceMessage {
    StatusChanged { state: String },
    LogEntry { level: String, message: String },
    TaskComplete { success: bool, summary: String },
    Busy { message: String },
    Pong,
    ConfigUpdated { success: bool, message: String }, // NEW
}
```
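
The internally-tagged serde representation (`tag = "type"`) folds the variant name into the JSON object, which is what the console's JSON shape above relies on. A minimal sketch of the wire format from the client side (illustrative only; `buildUpdateConfig` is a hypothetical helper, not repository code — unset `Option` fields can simply be omitted from the object):

```javascript
// Sketch of the update_config wire shape implied by the tagged enum:
// the variant name becomes the "type" field, the payload nests under "config".
function buildUpdateConfig(config) {
  return JSON.stringify({ type: "update_config", config });
}

const wire = buildUpdateConfig({ apiKey: "sk-demo", model: "deepseek-chat" });
// wire === '{"type":"update_config","config":{"apiKey":"sk-demo","model":"deepseek-chat"}}'
```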

### Config Persistence

The service will:

1. Load the current `sgclaw_config.json` from the config path (derived from process args)
2. Merge the incoming `ConfigUpdatePayload` fields (only non-null fields are updated)
3. Write the merged config back to the same file
4. Respond with a success/error message
5. **Hot reload**: the service should reload config in-memory without requiring a restart

**Important:** If the config file path cannot be resolved (no `--config-path` arg), the service should respond with an error message indicating that config updates are not supported in env-var-only mode.
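
The merge rule in step 2 — a payload field overwrites the stored value only when it is actually present — can be sketched as follows. This is an illustrative sketch, not repository code; keys follow the console's camelCase payload:

```javascript
// Step-2 merge rule: present (non-null/undefined) payload fields overwrite,
// everything else keeps its current value.
function mergeConfig(current, payload) {
  const merged = { ...current };
  for (const [key, value] of Object.entries(payload)) {
    if (value !== null && value !== undefined) {
      merged[key] = value;
    }
  }
  return merged;
}

const current = { apiKey: "old-key", model: "deepseek-chat" };
const merged = mergeConfig(current, { apiKey: "new-key", model: null });
// merged.apiKey === "new-key", merged.model === "deepseek-chat"
```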

### UI Design

#### Settings Button

- Add a "设置" (Settings) button in the sidebar, below the existing connect button
- Styled as a ghost button with a gear icon (using the Unicode ⚙ character or a CSS-only icon)

#### Settings Modal

- Overlay modal with a centered card
- Form fields with labels in Chinese:
  - `API 密钥` (apiKey) - password input type with show/hide toggle
  - `模型服务地址` (baseUrl) - text input
  - `模型名称` (model) - text input
  - `Skills 目录路径` (skillsDir) - text input with path validation
  - `直接提交技能` (directSubmitSkill) - text input (optional, can be empty)
  - `运行模式` (runtimeProfile) - dropdown: `browser-attached` / `browser-heavy` / `general-assistant`
  - `浏览器后端` (browserBackend) - dropdown: `super-rpa` / `agent-browser` / `rust-native` / `computer-use` / `auto`
- [保存] (Save) primary button, [取消] (Cancel) ghost button
- Validation:
  - API Key and Model are required (show a red error if empty on save)
  - Base URL must be a valid URL format
  - Skills Dir must be a valid path format
  - Other fields are optional

#### Connection State Auto-detection

- On page load, call `connect()` automatically
- The connection state chip updates as before
- Reconnect logic (existing) remains unchanged

### File Changes

| File | Change |
|------|--------|
| `frontend/service-console/sg_claw_service_console.html` | Add auto-connect on load, settings modal UI, save logic |
| `src/service/protocol.rs` | Add `UpdateConfig` variant and `ConfigUpdatePayload` struct |
| `src/service/protocol.rs` | Add `ConfigUpdated` service message variant |
| `src/service/server.rs` | Handle `UpdateConfig` message, merge config, write file |
| `src/agent/task_runner.rs` | Add `pub fn config_path(&self) -> Option<&Path>` getter to `AgentRuntimeContext` |
| `src/config/settings.rs` | Add `save_to_path()` method for writing config to file |
| `tests/service_console_html_test.rs` | Add assertions for settings modal and update_config message |

### Config Save Implementation

In `src/service/server.rs`, when handling `UpdateConfig`:

```rust
ClientMessage::UpdateConfig { config } => {
    // 1. Load current config from config_path
    let config_path = runtime_context.config_path(); // needs to be exposed
    let current = SgClawSettings::load(config_path.as_deref()).ok();

    // 2. Merge: only overwrite fields that are Some in the payload
    let mut merged = current.unwrap_or_default();
    if let Some(v) = config.api_key { merged.provider_api_key = v; }
    if let Some(v) = config.base_url { merged.provider_base_url = v; }
    if let Some(v) = config.model { merged.provider_model = v; }
    if let Some(v) = config.skills_dir { merged.skills_dir = Some(PathBuf::from(v)); }
    // ... etc for other fields

    // 3. Write back to file
    merged.save_to_path(config_path.as_ref().ok_or("no config path")?)?;

    // 4. Respond
    sink.send_service_message(ServiceMessage::ConfigUpdated {
        success: true,
        message: "配置已保存".to_string(),
    })?;
}
```

### Hot Reload Consideration

After saving config, the service should reload its in-memory settings. This requires either:

1. Storing the loaded `SgClawSettings` in a reloadable container (e.g., `Arc<Mutex<SgClawSettings>>` or `Arc<RwLock<...>>`)
2. Or responding with "配置已保存,请重启 sg_claw 以应用更改" ("Config saved; restart sg_claw to apply the changes") — simpler, and it avoids hot-reload complexity

**Recommended:** Start with the "requires restart" approach. Hot reload can be added later if needed.

### Error Handling

| Scenario | Response |
|----------|----------|
| WS not connected when saving | Show inline error: "请先连接服务" (connect to the service first) |
| Config file not found | Service responds: "未找到配置文件,请通过 --config-path 指定" (config file not found; specify it via --config-path) |
| Invalid config values | Service validates and responds with the specific error |
| Write permission denied | Service responds: "无法写入配置文件,请检查文件权限" (cannot write config file; check file permissions) |
| WS disconnected during save | Show error: "连接断开,保存失败,请重试" (connection lost, save failed, please retry) |

### Test Strategy

1. **Integration test** (`tests/service_console_html_test.rs`):
   - Assert the page contains the settings modal HTML
   - Assert the page contains the "设置" (Settings) button
   - Assert the page sends the `update_config` message shape
   - Assert the page auto-connects on load (contains `window.onload` or equivalent)

2. **Protocol test** (new or existing test file):
   - Assert `ClientMessage::UpdateConfig` serializes correctly
   - Assert `ServiceMessage::ConfigUpdated` deserializes correctly

3. **Config save test** (new test in `tests/compat_config_test.rs` or a new file):
   - Create a temp config file
   - Send an UpdateConfig message
   - Verify the file contents match the expected merged config
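
The integration-test assertions above amount to marker checks against the console page source. A minimal sketch of the idea (illustrative only — the real test is a Rust test in `tests/service_console_html_test.rs` over the actual HTML file; the marker list here is an assumption):

```javascript
// The page must contain the settings-modal ids, the update_config message
// name, and the auto-connect hook for the integration test to pass.
function hasConsoleEnhancements(html) {
  const markers = ["settingsModal", "settingsBtn", "update_config", "DOMContentLoaded"];
  return markers.every((marker) => html.includes(marker));
}

const sample = `
  <div id="settingsModal"></div>
  <button id="settingsBtn">设置</button>
  <script>
    window.addEventListener("DOMContentLoaded", () => {});
    socket.send(JSON.stringify({ type: "update_config", config }));
  </scr` + `ipt>`;
```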

## Acceptance Criteria

1. Page auto-connects to the WS on load without a manual button click
2. Settings button visible in the sidebar
3. Settings modal opens with form fields for all configurable options
4. Clicking "保存" (Save) sends the `update_config` message via WS
5. Service receives the message and writes to `sgclaw_config.json`
6. Service responds with a success/error message
7. Page displays a save-result notification
8. Existing task submission flow unchanged
9. Existing heartbeat/reconnect logic unchanged
10. Automated tests pass
BIN  docs/多核浏览器管道API接口文档.docx  (new file; binary file not shown)
@@ -309,6 +309,24 @@
        }
      }

      /* Settings modal elements */
      select {
        width: 100%;
        border: 1px solid var(--line);
        border-radius: 16px;
        padding: 14px 16px;
        background: rgba(255, 255, 255, 0.92);
        color: var(--text);
        font: inherit;
        outline: none;
        cursor: pointer;
      }

      select:focus {
        border-color: rgba(15, 118, 110, 0.5);
        box-shadow: 0 0 0 4px rgba(15, 118, 110, 0.12);
      }

      @media (max-width: 900px) {
        body {
          padding: 16px;
@@ -347,6 +365,7 @@
        <input id="wsUrl" value="ws://127.0.0.1:42321" />
      </div>
      <button id="connectBtn" class="ghost-btn">连接</button>
      <button id="settingsBtn" class="ghost-btn" style="margin-top: 8px;">⚙ 设置</button>

      <p class="section-label" style="margin-top: 26px;">Composer</p>
      <div class="field">
@@ -372,6 +391,65 @@
      </div>
    </div>

    <!-- Settings Modal -->
    <div id="settingsModal" style="display: none; position: fixed; top: 0; left: 0; width: 100%; height: 100%; background: rgba(0,0,0,0.5); z-index: 1000; align-items: center; justify-content: center;">
      <div style="background: var(--panel); border-radius: 20px; padding: 28px; width: min(520px, 90%); max-height: 85vh; overflow-y: auto; box-shadow: var(--shadow);">
        <h3 style="margin: 0 0 20px; font-size: 1.2rem;">sgClaw 配置</h3>

        <div class="field">
          <label for="settingApiKey">API 密钥 *</label>
          <input id="settingApiKey" type="password" placeholder="输入模型 API 密钥" />
        </div>

        <div class="field">
          <label for="settingBaseUrl">模型服务地址 *</label>
          <input id="settingBaseUrl" type="url" placeholder="例如:https://api.deepseek.com" />
        </div>

        <div class="field">
          <label for="settingModel">模型名称 *</label>
          <input id="settingModel" type="text" placeholder="例如:deepseek-chat" />
        </div>

        <div class="field">
          <label for="settingSkillsDir">Skills 目录路径</label>
          <input id="settingSkillsDir" type="text" placeholder="例如:D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills" />
        </div>

        <div class="field">
          <label for="settingDirectSubmitSkill">直接提交技能</label>
          <input id="settingDirectSubmitSkill" type="text" placeholder="例如:tq-lineloss-report.collect_lineloss" />
        </div>

        <div class="field">
          <label for="settingRuntimeProfile">运行模式</label>
          <select id="settingRuntimeProfile" style="width: 100%; border: 1px solid var(--line); border-radius: 16px; padding: 14px 16px; background: rgba(255, 255, 255, 0.92); color: var(--text); font: inherit;">
            <option value="browser-attached">browser-attached</option>
            <option value="browser-heavy">browser-heavy</option>
            <option value="general-assistant">general-assistant</option>
          </select>
        </div>

        <div class="field">
          <label for="settingBrowserBackend">浏览器后端</label>
          <select id="settingBrowserBackend" style="width: 100%; border: 1px solid var(--line); border-radius: 16px; padding: 14px 16px; background: rgba(255, 255, 255, 0.92); color: var(--text); font: inherit;">
            <option value="super-rpa">super-rpa</option>
            <option value="agent-browser">agent-browser</option>
            <option value="rust-native">rust-native</option>
            <option value="computer-use">computer-use</option>
            <option value="auto">auto</option>
          </select>
        </div>

        <div id="settingsValidation" style="color: var(--error); font-size: 0.92rem; min-height: 1.4em; margin: 10px 0;"></div>

        <div style="display: flex; gap: 12px; margin-top: 16px;">
          <button id="settingsSaveBtn" class="primary-btn" style="flex: 1;">保存</button>
          <button id="settingsCancelBtn" class="ghost-btn" style="flex: 1;">取消</button>
        </div>
      </div>
    </div>

    <script>
      const defaultWsUrl = "ws://127.0.0.1:42321";
      const elements = {
@@ -386,6 +464,17 @@
      };

      let socket = null;
      let reconnectTimer = null;
      let connectTimeoutTimer = null;
      let heartbeatTimer = null;
      let shouldReconnect = false;
      let lastHeartbeatAt = 0;
      const reconnectDelayMs = 1500;
      const reconnectCloseCode = 4000;
      const reconnectCloseReason = "manual_disconnect";
      const heartbeatIntervalMs = 15000;
      const heartbeatTimeoutMs = 30000;
      const connectTimeoutMs = 5000;

      function appendRow(kind, text) {
        if (elements.emptyState) {
@@ -410,6 +499,59 @@
        elements.messageStream.scrollTop = elements.messageStream.scrollHeight;
      }

      function clearReconnectTimer() {
        if (reconnectTimer) {
          clearTimeout(reconnectTimer);
          reconnectTimer = null;
        }
      }

      function clearConnectTimeoutTimer() {
        if (connectTimeoutTimer) {
          clearTimeout(connectTimeoutTimer);
          connectTimeoutTimer = null;
        }
      }

      function stopHeartbeat() {
        if (heartbeatTimer) {
          clearInterval(heartbeatTimer);
          heartbeatTimer = null;
        }
      }

      function startHeartbeat() {
        stopHeartbeat();
        lastHeartbeatAt = Date.now();
        heartbeatTimer = setInterval(() => {
          if (!socket || socket.readyState !== WebSocket.OPEN) {
            return;
          }
          if (Date.now() - lastHeartbeatAt > heartbeatTimeoutMs) {
            appendRow("error", "heartbeat missed, forcing reconnect");
            const activeSocket = socket;
            socket = null;
            stopHeartbeat();
            clearConnectTimeoutTimer();
            activeSocket.close();
            scheduleReconnect();
            return;
          }
          socket.send(JSON.stringify({ type: "ping" }));
        }, heartbeatIntervalMs);
      }

      function scheduleReconnect() {
        clearReconnectTimer();
        clearConnectTimeoutTimer();
        if (!shouldReconnect) {
          return;
        }
        appendRow("status", "service websocket disconnected, retrying");
        reconnectTimer = setTimeout(() => connectOrDisconnectService(true), reconnectDelayMs);
        updateUiState();
      }

      function setValidation(message) {
        elements.validationText.textContent = message;
      }
@@ -417,7 +559,7 @@
      function updateUiState() {
        const readyState = socket ? socket.readyState : WebSocket.CLOSED;
        const connected = readyState === WebSocket.OPEN;
        const connecting = readyState === WebSocket.CONNECTING || Boolean(reconnectTimer);
        let stateText = "未连接";
        let stateValue = "disconnected";

@@ -435,35 +577,68 @@
        elements.connectionState.dataset.state = stateValue;
      }

      function connectOrDisconnectService(forceConnect = false) {
        if (!forceConnect && socket && (socket.readyState === WebSocket.OPEN || socket.readyState === WebSocket.CONNECTING)) {
          shouldReconnect = false;
          clearReconnectTimer();
          clearConnectTimeoutTimer();
          stopHeartbeat();
          socket.close(reconnectCloseCode, reconnectCloseReason);
          return;
        }

        clearReconnectTimer();
        clearConnectTimeoutTimer();
        const url = elements.wsUrl.value.trim() || defaultWsUrl;
        elements.wsUrl.value = url;
        shouldReconnect = true;
        const nextSocket = new WebSocket(url);
        socket = nextSocket;
        updateUiState();

        connectTimeoutTimer = setTimeout(() => {
          if (socket !== nextSocket || nextSocket.readyState !== WebSocket.CONNECTING) {
            return;
          }
          appendRow("error", "service websocket connect timed out");
          socket = null;
          nextSocket.close();
          scheduleReconnect();
        }, connectTimeoutMs);

        nextSocket.addEventListener("open", () => {
          if (socket !== nextSocket) {
            return;
          }
          clearReconnectTimer();
          clearConnectTimeoutTimer();
          lastHeartbeatAt = Date.now();
          startHeartbeat();
          appendRow("status", "service websocket connected");
          updateUiState();
        });

        nextSocket.addEventListener("close", (event) => {
          if (socket !== nextSocket) {
            return;
          }
          socket = null;
          clearConnectTimeoutTimer();
          stopHeartbeat();
          const manualClose = event.code === reconnectCloseCode || event.reason === reconnectCloseReason;
          if (manualClose) {
            shouldReconnect = false;
            appendRow("status", "service websocket disconnected");
            updateUiState();
            return;
          }
          scheduleReconnect();
        });

        nextSocket.addEventListener("error", () => {
          if (socket !== nextSocket) {
            return;
          }
          appendRow("error", "service websocket error");
        });
@@ -471,6 +646,7 @@
      }

      function handleMessage(event) {
        lastHeartbeatAt = Date.now();
        let message;
        try {
          message = JSON.parse(event.data);
@@ -492,6 +668,11 @@
          case "busy":
            appendRow("error", message.message);
            break;
          case "pong":
            break;
          case "config_updated":
            handleConfigResponse(message);
            break;
          default:
            appendRow("error", "unknown service message: " + event.data);
        }
@@ -527,6 +708,128 @@
      });

      updateUiState();

      // Auto-connect on page load
      window.addEventListener("DOMContentLoaded", () => {
        connectOrDisconnectService(true);
      });

      // Settings modal state
      const settingsElements = {
        modal: document.getElementById("settingsModal"),
        apiKey: document.getElementById("settingApiKey"),
        baseUrl: document.getElementById("settingBaseUrl"),
        model: document.getElementById("settingModel"),
        skillsDir: document.getElementById("settingSkillsDir"),
        directSubmitSkill: document.getElementById("settingDirectSubmitSkill"),
        runtimeProfile: document.getElementById("settingRuntimeProfile"),
        browserBackend: document.getElementById("settingBrowserBackend"),
        validation: document.getElementById("settingsValidation"),
        saveBtn: document.getElementById("settingsSaveBtn"),
        cancelBtn: document.getElementById("settingsCancelBtn"),
      };
      let settingsOpenBtn = null;

      function openSettingsModal() {
        settingsElements.apiKey.value = "";
        settingsElements.baseUrl.value = "";
        settingsElements.model.value = "";
        settingsElements.skillsDir.value = "";
        settingsElements.directSubmitSkill.value = "";
        settingsElements.runtimeProfile.value = "browser-attached";
        settingsElements.browserBackend.value = "super-rpa";
        settingsElements.validation.textContent = "";
        settingsElements.modal.style.display = "flex";
      }

      function closeSettingsModal() {
        settingsElements.modal.style.display = "none";
      }

      function validateSettings() {
        const apiKey = settingsElements.apiKey.value.trim();
        const baseUrl = settingsElements.baseUrl.value.trim();
        const model = settingsElements.model.value.trim();

        if (!apiKey) {
          return "API 密钥不能为空";
        }
        if (!model) {
          return "模型名称不能为空";
        }
        if (!baseUrl) {
          return "模型服务地址不能为空";
        }
        try {
          new URL(baseUrl);
        } catch {
          return "模型服务地址格式无效,请输入有效的 URL";
        }
        return "";
      }

      function saveSettings() {
        const error = validateSettings();
        if (error) {
          settingsElements.validation.textContent = error;
          return;
        }

        if (!socket || socket.readyState !== WebSocket.OPEN) {
          settingsElements.validation.textContent = "请先连接服务";
          return;
        }

        settingsElements.validation.textContent = "";
        settingsElements.saveBtn.disabled = true;
        settingsElements.saveBtn.textContent = "保存中...";

        const config = {
          apiKey: settingsElements.apiKey.value.trim(),
          baseUrl: settingsElements.baseUrl.value.trim(),
          model: settingsElements.model.value.trim(),
        };

        const skillsDir = settingsElements.skillsDir.value.trim();
        if (skillsDir) config.skillsDir = skillsDir;

        const directSubmitSkill = settingsElements.directSubmitSkill.value.trim();
        if (directSubmitSkill) config.directSubmitSkill = directSubmitSkill;

        config.runtimeProfile = settingsElements.runtimeProfile.value;
        config.browserBackend = settingsElements.browserBackend.value;

        socket.send(JSON.stringify({
          type: "update_config",
          config,
        }));
      }

      function handleConfigResponse(message) {
        settingsElements.saveBtn.disabled = false;
        settingsElements.saveBtn.textContent = "保存";

        if (message.success) {
          settingsElements.validation.textContent = message.message;
          settingsElements.validation.style.color = "var(--success)";
          setTimeout(closeSettingsModal, 2000);
        } else {
          settingsElements.validation.textContent = message.message;
          settingsElements.validation.style.color = "var(--error)";
        }
      }

      // Event listeners for settings
      settingsOpenBtn = document.getElementById("settingsBtn");
      settingsOpenBtn.addEventListener("click", openSettingsModal);
      settingsElements.cancelBtn.addEventListener("click", closeSettingsModal);
      settingsElements.saveBtn.addEventListener("click", saveSettings);

      settingsElements.modal.addEventListener("click", (e) => {
        if (e.target === settingsElements.modal) {
          closeSettingsModal();
        }
      });
    </script>
  </body>
</html>
637  resources/zhihu-hotlist-echarts.html  (new file)
@@ -0,0 +1,637 @@
<!doctype html>
<html lang="zh-CN">
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <title>知乎热榜图表驾驶舱</title>
  <script src="https://cdn.jsdelivr.net/npm/echarts@5/dist/echarts.min.js"></script>
  <style>
    :root {
      --bg: #06111f;
      --bg-2: #0a1f37;
      --panel: rgba(8, 25, 42, 0.88);
      --panel-strong: rgba(10, 32, 55, 0.95);
      --line: rgba(101, 187, 255, 0.18);
      --line-strong: rgba(236, 186, 81, 0.26);
      --text: #eef6ff;
      --muted: #8ea6c2;
      --accent: #62d0ff;
      --accent-2: #ecba51;
      --accent-3: #6df0c2;
      --danger: #ff8b7e;
      --shadow: 0 20px 48px rgba(0, 0, 0, 0.34);
      --font-heading: "DIN Alternate", "Bahnschrift", "Microsoft YaHei UI", sans-serif;
      --font-body: "Segoe UI Variable Text", "Microsoft YaHei", "PingFang SC", sans-serif;
    }

    * {
      box-sizing: border-box;
    }

    html,
    body {
      margin: 0;
      min-height: 100%;
      background:
        radial-gradient(circle at 16% 10%, rgba(98, 208, 255, 0.18), transparent 22%),
        radial-gradient(circle at 86% 12%, rgba(236, 186, 81, 0.14), transparent 18%),
        linear-gradient(145deg, var(--bg) 0%, var(--bg-2) 42%, #030910 100%);
      color: var(--text);
      font-family: var(--font-body);
    }

    body::before {
      content: "";
      position: fixed;
      inset: 0;
      pointer-events: none;
      background-image:
        linear-gradient(rgba(101, 187, 255, 0.05) 1px, transparent 1px),
        linear-gradient(90deg, rgba(101, 187, 255, 0.05) 1px, transparent 1px);
      background-size: 44px 44px;
      mask-image: radial-gradient(circle at center, black 34%, rgba(0, 0, 0, 0.22) 88%, transparent 100%);
    }

    .page {
      min-height: 100vh;
      padding: 18px;
      display: grid;
      grid-template-rows: auto auto 1fr auto;
      gap: 14px;
    }

    .panel {
      position: relative;
      overflow: hidden;
      background:
        linear-gradient(180deg, rgba(255, 255, 255, 0.035), rgba(255, 255, 255, 0.01)),
        linear-gradient(145deg, rgba(9, 30, 51, 0.97), rgba(6, 20, 34, 0.92));
      border: 1px solid var(--line);
      border-radius: 22px;
      box-shadow: var(--shadow);
    }

    .panel::before {
      content: "";
      position: absolute;
      left: 18px;
      right: 18px;
      top: 0;
      height: 2px;
      background: linear-gradient(90deg, transparent, var(--accent), var(--accent-2), transparent);
      opacity: 0.95;
    }

    .hero {
      padding: 18px 24px;
      display: grid;
      grid-template-columns: minmax(0, 1fr) 360px;
      gap: 16px;
      align-items: center;
    }

    .eyebrow {
      color: var(--accent);
      letter-spacing: 2px;
      text-transform: uppercase;
      font-size: 12px;
      margin-bottom: 8px;
    }

    h1 {
      margin: 0;
      font-family: var(--font-heading);
      font-size: 38px;
      line-height: 1.08;
      letter-spacing: 1px;
    }

    #snapshot-meta {
      margin: 10px 0 0;
      color: var(--muted);
      font-size: 14px;
    }

    .hero-notes {
      display: grid;
      gap: 10px;
    }

    .note-card {
      padding: 14px 16px;
      border-radius: 16px;
      background: linear-gradient(135deg, rgba(98, 208, 255, 0.08), rgba(236, 186, 81, 0.08));
      border: 1px solid rgba(255, 255, 255, 0.05);
    }

    .note-card strong {
      display: block;
      margin-bottom: 6px;
      font-size: 14px;
    }

    .note-card span {
      color: var(--muted);
      font-size: 12px;
      line-height: 1.5;
    }

    .metrics {
      display: grid;
      grid-template-columns: repeat(4, 1fr);
      gap: 12px;
    }

    .metric {
      padding: 18px 18px 16px;
    }

    .metric-label {
      color: var(--muted);
      font-size: 12px;
|
||||||
|
letter-spacing: 1px;
|
||||||
|
text-transform: uppercase;
|
||||||
|
}
|
||||||
|
|
||||||
|
.metric-value {
|
||||||
|
margin-top: 10px;
|
||||||
|
font-family: var(--font-heading);
|
||||||
|
font-size: 34px;
|
||||||
|
color: var(--text);
|
||||||
|
}
|
||||||
|
|
||||||
|
.metric-sub {
|
||||||
|
margin-top: 8px;
|
||||||
|
color: var(--accent);
|
||||||
|
font-size: 12px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.charts {
|
||||||
|
min-height: 0;
|
||||||
|
display: grid;
|
||||||
|
grid-template-columns: 1.2fr 1fr 0.95fr;
|
||||||
|
grid-template-rows: 360px 320px;
|
||||||
|
gap: 14px;
|
||||||
|
grid-template-areas:
|
||||||
|
"bar top pie"
|
||||||
|
"bubble table table";
|
||||||
|
}
|
||||||
|
|
||||||
|
.chart-panel {
|
||||||
|
padding: 14px 16px 12px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.bar-panel { grid-area: bar; }
|
||||||
|
.top-panel { grid-area: top; }
|
||||||
|
.pie-panel { grid-area: pie; }
|
||||||
|
.bubble-panel { grid-area: bubble; }
|
||||||
|
.table-panel { grid-area: table; padding: 14px 16px 10px; }
|
||||||
|
|
||||||
|
.section-head {
|
||||||
|
display: flex;
|
||||||
|
align-items: end;
|
||||||
|
justify-content: space-between;
|
||||||
|
gap: 12px;
|
||||||
|
margin-bottom: 10px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.section-head h2 {
|
||||||
|
margin: 0;
|
||||||
|
font-size: 22px;
|
||||||
|
font-family: var(--font-heading);
|
||||||
|
letter-spacing: 1px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.section-head span {
|
||||||
|
color: var(--muted);
|
||||||
|
font-size: 12px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.chart {
|
||||||
|
width: 100%;
|
||||||
|
height: calc(100% - 42px);
|
||||||
|
}
|
||||||
|
|
||||||
|
.table-wrap {
|
||||||
|
height: calc(100% - 42px);
|
||||||
|
overflow: auto;
|
||||||
|
padding-right: 4px;
|
||||||
|
}
|
||||||
|
|
||||||
|
table {
|
||||||
|
width: 100%;
|
||||||
|
border-collapse: collapse;
|
||||||
|
}
|
||||||
|
|
||||||
|
thead th {
|
||||||
|
position: sticky;
|
||||||
|
top: 0;
|
||||||
|
z-index: 1;
|
||||||
|
background: rgba(6, 19, 32, 0.96);
|
||||||
|
padding: 10px 8px;
|
||||||
|
text-align: left;
|
||||||
|
font-size: 12px;
|
||||||
|
color: var(--muted);
|
||||||
|
letter-spacing: 1px;
|
||||||
|
text-transform: uppercase;
|
||||||
|
border-bottom: 1px solid var(--line-strong);
|
||||||
|
}
|
||||||
|
|
||||||
|
tbody td {
|
||||||
|
padding: 11px 8px;
|
||||||
|
border-bottom: 1px solid rgba(255, 255, 255, 0.05);
|
||||||
|
font-size: 13px;
|
||||||
|
vertical-align: top;
|
||||||
|
}
|
||||||
|
|
||||||
|
tbody tr:nth-child(odd) {
|
||||||
|
background: rgba(255, 255, 255, 0.016);
|
||||||
|
}
|
||||||
|
|
||||||
|
.rank {
|
||||||
|
font-family: var(--font-heading);
|
||||||
|
color: var(--accent-2);
|
||||||
|
white-space: nowrap;
|
||||||
|
}
|
||||||
|
|
||||||
|
.heat {
|
||||||
|
color: var(--accent-3);
|
||||||
|
font-family: var(--font-heading);
|
||||||
|
white-space: nowrap;
|
||||||
|
}
|
||||||
|
|
||||||
|
.tag {
|
||||||
|
display: inline-flex;
|
||||||
|
align-items: center;
|
||||||
|
padding: 4px 10px;
|
||||||
|
border-radius: 999px;
|
||||||
|
background: rgba(98, 208, 255, 0.12);
|
||||||
|
color: var(--accent);
|
||||||
|
font-size: 12px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.footer {
|
||||||
|
padding: 10px 16px;
|
||||||
|
color: var(--muted);
|
||||||
|
font-size: 12px;
|
||||||
|
}
|
||||||
|
|
||||||
|
@media (max-width: 1440px) {
|
||||||
|
.hero {
|
||||||
|
grid-template-columns: 1fr;
|
||||||
|
}
|
||||||
|
|
||||||
|
.metrics {
|
||||||
|
grid-template-columns: repeat(2, 1fr);
|
||||||
|
}
|
||||||
|
|
||||||
|
.charts {
|
||||||
|
grid-template-columns: 1fr;
|
||||||
|
grid-template-rows: 320px 320px 320px 320px 420px;
|
||||||
|
grid-template-areas:
|
||||||
|
"bar"
|
||||||
|
"top"
|
||||||
|
"pie"
|
||||||
|
"bubble"
|
||||||
|
"table";
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
@media (max-width: 760px) {
|
||||||
|
.page {
|
||||||
|
padding: 12px;
|
||||||
|
}
|
||||||
|
|
||||||
|
h1 {
|
||||||
|
font-size: 28px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.metrics {
|
||||||
|
grid-template-columns: 1fr;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
</style>
|
||||||
|
</head>
|
||||||
|
<body>
  <div class="page">
    <section class="panel hero">
      <div>
        <div class="eyebrow">Zhihu Hotlist Visual Command Center</div>
        <h1>知乎热榜图表驾驶舱</h1>
        <p id="snapshot-meta">由 sgClaw screen_html_export 生成的本地静态展示页</p>
      </div>
      <div class="hero-notes">
        <div class="note-card">
          <strong>图表表达</strong>
          <span>同一份热榜数据同时映射为分类热度、头部热点、结构占比和热度散点,适合现场讲解图表能力。</span>
        </div>
        <div class="note-card">
          <strong>演示建议</strong>
          <span id="lead-summary">优先讲解榜首热点、分类分布与热度层级,再向下展开全量榜单细节。</span>
        </div>
      </div>
    </section>

    <section class="metrics">
      <article class="panel metric">
        <div class="metric-label">热榜条目数</div>
        <div id="metric-total" class="metric-value">0</div>
        <div class="metric-sub">Tracked items</div>
      </article>
      <article class="panel metric">
        <div class="metric-label">主题分类数</div>
        <div id="metric-categories" class="metric-value">0</div>
        <div class="metric-sub">Topic groups</div>
      </article>
      <article class="panel metric">
        <div class="metric-label">累计热度</div>
        <div id="metric-heat" class="metric-value">0</div>
        <div class="metric-sub">Total heat</div>
      </article>
      <article class="panel metric">
        <div class="metric-label">头部峰值</div>
        <div id="metric-peak" class="metric-value">0</div>
        <div class="metric-sub">Peak topic heat</div>
      </article>
    </section>

    <section class="charts">
      <section class="panel chart-panel bar-panel">
        <div class="section-head">
          <h2>分类总热度</h2>
          <span>横向对比</span>
        </div>
        <div id="bar-chart" class="chart"></div>
      </section>

      <section class="panel chart-panel top-panel">
        <div class="section-head">
          <h2>Top10 热点</h2>
          <span>柱状排行</span>
        </div>
        <div id="top-chart" class="chart"></div>
      </section>

      <section class="panel chart-panel pie-panel">
        <div class="section-head">
          <h2>分类占比</h2>
          <span>环形结构</span>
        </div>
        <div id="pie-chart" class="chart"></div>
      </section>

      <section class="panel chart-panel bubble-panel">
        <div class="section-head">
          <h2>热度分层</h2>
          <span>散点气泡</span>
        </div>
        <div id="bubble-chart" class="chart"></div>
      </section>

      <section class="panel table-panel">
        <div class="section-head">
          <h2>热榜明细</h2>
          <span id="table-note">按原始顺序保留</span>
        </div>
        <div class="table-wrap">
          <table>
            <thead>
              <tr>
                <th>排名</th>
                <th>标题</th>
                <th>分类</th>
                <th>热度</th>
              </tr>
            </thead>
            <tbody id="table-body"></tbody>
          </table>
        </div>
      </section>
    </section>

    <section class="panel footer">
      本页由 `screen_html_export` 生成,适合在系统浏览器中直接打开进行展示。
    </section>
  </div>
  <script>
    const defaultPayload = {
      "snapshot_id": "template-snapshot",
      "generated_at_ms": 0,
      "categories": [],
      "table": []
    }

    const themeMeta = {
      title: "知乎热榜图表驾驶舱",
      renderer: "screen_html_export"
    };

    const chartColors = ["#62d0ff", "#ecba51", "#6df0c2", "#7f8cff", "#ff8b7e", "#9fcbff", "#58a6ff"];
    const charts = {};

    function formatNumber(value) {
      return new Intl.NumberFormat("zh-CN").format(Number(value || 0));
    }

    function getTotalHeat(categories) {
      return (categories || []).reduce((sum, item) => sum + Number(item.total_heat || 0), 0);
    }

    function getPeakHeat(table) {
      return (table || []).reduce((max, row) => Math.max(max, Number(row.heat_value || 0)), 0);
    }

    function buildLeadSummary(table, categories) {
      const top = (table || [])[0];
      const category = (categories || []).slice().sort((a, b) => (b.total_heat || 0) - (a.total_heat || 0))[0];
      const parts = [];
      if (top) {
        parts.push(`榜首是“${top.title}”`);
      }
      if (category) {
        parts.push(`主导分类为“${category.category_label}”`);
      }
      parts.push(`共覆盖 ${(table || []).length} 条热点`);
      return parts.join(",");
    }

    function ensureCharts() {
      if (!window.echarts) {
        return;
      }
      charts.bar = charts.bar || echarts.init(document.getElementById("bar-chart"));
      charts.top = charts.top || echarts.init(document.getElementById("top-chart"));
      charts.pie = charts.pie || echarts.init(document.getElementById("pie-chart"));
      charts.bubble = charts.bubble || echarts.init(document.getElementById("bubble-chart"));
    }

    function renderBarChart(categories) {
      const sorted = (categories || []).slice().sort((a, b) => Number(a.total_heat || 0) - Number(b.total_heat || 0));
      charts.bar.setOption({
        animationDuration: 700,
        grid: {left: 90, right: 18, top: 10, bottom: 20},
        xAxis: {
          type: "value",
          axisLabel: {color: "#8ea6c2"},
          splitLine: {lineStyle: {color: "rgba(255,255,255,0.06)"}}
        },
        yAxis: {
          type: "category",
          data: sorted.map((item) => item.category_label),
          axisLabel: {color: "#eef6ff"},
          axisLine: {lineStyle: {color: "rgba(255,255,255,0.1)"}}
        },
        tooltip: {trigger: "axis", axisPointer: {type: "shadow"}},
        series: [{
          type: "bar",
          data: sorted.map((item, index) => ({
            value: Number(item.total_heat || 0),
            itemStyle: {color: chartColors[index % chartColors.length], borderRadius: [0, 8, 8, 0]}
          })),
          label: {show: true, position: "right", color: "#dfeeff"}
        }]
      });
    }

    function renderTopChart(table) {
      const top = (table || []).slice(0, 10);
      charts.top.setOption({
        animationDuration: 700,
        grid: {left: 42, right: 12, top: 26, bottom: 46},
        tooltip: {trigger: "axis", axisPointer: {type: "shadow"}},
        xAxis: {
          type: "category",
          data: top.map((row) => `#${row.rank}`),
          axisLabel: {color: "#8ea6c2"},
          axisLine: {lineStyle: {color: "rgba(255,255,255,0.1)"}}
        },
        yAxis: {
          type: "value",
          axisLabel: {color: "#8ea6c2"},
          splitLine: {lineStyle: {color: "rgba(255,255,255,0.06)"}}
        },
        series: [{
          type: "bar",
          data: top.map((row, index) => ({
            value: Number(row.heat_value || 0),
            itemStyle: {color: chartColors[index % chartColors.length], borderRadius: [8, 8, 0, 0]}
          })),
          label: {show: true, position: "top", color: "#eef6ff", formatter: ({dataIndex}) => top[dataIndex].heat_text}
        }]
      });
    }

    function renderPieChart(categories) {
      charts.pie.setOption({
        animationDuration: 700,
        color: chartColors,
        tooltip: {trigger: "item"},
        legend: {
          bottom: 2,
          textStyle: {color: "#8ea6c2", fontSize: 11},
          itemWidth: 12,
          itemHeight: 8
        },
        series: [{
          type: "pie",
          radius: ["44%", "72%"],
          center: ["50%", "44%"],
          itemStyle: {borderColor: "#081a2c", borderWidth: 2},
          label: {
            color: "#eef6ff",
            formatter: "{b}\n{d}%"
          },
          data: (categories || []).map((item) => ({
            name: item.category_label,
            value: Number(item.total_heat || 0)
          }))
        }]
      });
    }

    function renderBubbleChart(table) {
      const top = (table || []).slice(0, 12);
      charts.bubble.setOption({
        animationDuration: 700,
        color: chartColors,
        grid: {left: 44, right: 18, top: 16, bottom: 36},
        xAxis: {
          type: "value",
          name: "排名",
          inverse: true,
          min: 0,
          max: Math.max(...top.map((row) => Number(row.rank || 0)), 10) + 1,
          nameTextStyle: {color: "#8ea6c2"},
          axisLabel: {color: "#8ea6c2"},
          splitLine: {lineStyle: {color: "rgba(255,255,255,0.06)"}}
        },
        yAxis: {
          type: "value",
          name: "热度值",
          nameTextStyle: {color: "#8ea6c2"},
          axisLabel: {color: "#8ea6c2"},
          splitLine: {lineStyle: {color: "rgba(255,255,255,0.06)"}}
        },
        tooltip: {
          formatter: (params) => {
            const row = params.data.raw;
            return `${row.title}<br/>排名 #${row.rank}<br/>热度 ${row.heat_text}<br/>分类 ${row.category_label}`;
          }
        },
        series: [{
          type: "scatter",
          symbolSize: (value) => Math.max(18, Math.min(56, value[2] / 80000)),
          data: top.map((row, index) => ({
            value: [Number(row.rank || 0), Number(row.heat_value || 0), Number(row.heat_value || 0)],
            raw: row,
            itemStyle: {color: chartColors[index % chartColors.length], opacity: 0.82}
          }))
        }]
      });
    }

    function renderTable(table) {
      document.getElementById("table-body").innerHTML = (table || []).map((row) => `
        <tr>
          <td class="rank">#${row.rank}</td>
          <td>${row.title}</td>
          <td><span class="tag">${row.category_label}</span></td>
          <td class="heat">${row.heat_text}</td>
        </tr>
      `).join("");
    }

    function render(payload) {
      const data = payload || defaultPayload;
      const categories = data.categories || [];
      const table = data.table || [];

      document.title = themeMeta.title;
      document.getElementById("snapshot-meta").textContent =
        `${data.snapshot_id} | 生成时间 ${new Date(data.generated_at_ms || 0).toLocaleString()}`;
      document.getElementById("metric-total").textContent = formatNumber(table.length);
      document.getElementById("metric-categories").textContent = formatNumber(categories.length);
      document.getElementById("metric-heat").textContent = formatNumber(getTotalHeat(categories));
      document.getElementById("metric-peak").textContent = formatNumber(getPeakHeat(table));
      document.getElementById("lead-summary").textContent = buildLeadSummary(table, categories);
      document.getElementById("table-note").textContent =
        table.length > 0 ? `当前展示 ${table.length} 条热点` : "暂无热榜数据";

      renderTable(table);
      ensureCharts();
      if (window.echarts) {
        renderBarChart(categories);
        renderTopChart(table);
        renderPieChart(categories);
        renderBubbleChart(table);
      }
    }

    window.addEventListener("resize", () => {
      Object.values(charts).forEach((chart) => chart && chart.resize());
    });

    render(defaultPayload);
  </script>
</body>
</html>
@@ -1,10 +0,0 @@
-{
-  "version": 1,
-  "skills": {
-    "ui-ux-pro-max": {
-      "source": "nextlevelbuilder/ui-ux-pro-max-skill",
-      "sourceType": "github",
-      "computedHash": "6337038fe1fe6bbe1b9f252ab678ee575859190bab6f0f246f4061824eb40875"
-    }
-  }
-}
@@ -95,8 +95,18 @@ pub fn handle_browser_message_with_context<T: Transport + 'static>(
|
|||||||
page_url: normalize_optional_submit_field(page_url),
|
page_url: normalize_optional_submit_field(page_url),
|
||||||
page_title: normalize_optional_submit_field(page_title),
|
page_title: normalize_optional_submit_field(page_title),
|
||||||
};
|
};
|
||||||
let browser_backend = browser_backend_for_submit(browser_tool, context, &request)?;
|
if configured_browser_ws_url(context).is_some() {
|
||||||
run_submit_task_with_browser_backend(transport, transport, browser_backend, context, request)
|
let browser_backend = browser_backend_for_submit(browser_tool, context, &request)?;
|
||||||
|
run_submit_task_with_browser_backend(
|
||||||
|
transport,
|
||||||
|
transport,
|
||||||
|
browser_backend,
|
||||||
|
context,
|
||||||
|
request,
|
||||||
|
)
|
||||||
|
} else {
|
||||||
|
run_submit_task(transport, transport, browser_tool, context, request)
|
||||||
|
}
|
||||||
}
|
}
|
||||||
BrowserMessage::Init { .. } => {
|
BrowserMessage::Init { .. } => {
|
||||||
eprintln!("ignoring duplicate init after handshake");
|
eprintln!("ignoring duplicate init after handshake");
|
||||||
|
|||||||
@@ -1,5 +1,5 @@
|
|||||||
use std::ffi::OsString;
|
use std::ffi::OsString;
|
||||||
use std::path::PathBuf;
|
use std::path::{Path, PathBuf};
|
||||||
use std::sync::Arc;
|
use std::sync::Arc;
|
||||||
|
|
||||||
use crate::browser::BrowserBackend;
|
use crate::browser::BrowserBackend;
|
||||||
@@ -64,6 +64,10 @@ impl AgentRuntimeContext {
|
|||||||
.map_err(|err| PipeError::Protocol(err.to_string()))
|
.map_err(|err| PipeError::Protocol(err.to_string()))
|
||||||
}
|
}
|
||||||
|
|
||||||
|
pub fn config_path(&self) -> Option<&Path> {
|
||||||
|
self.config_path.as_deref()
|
||||||
|
}
|
||||||
|
|
||||||
fn settings_source_label(&self) -> String {
|
fn settings_source_label(&self) -> String {
|
||||||
match &self.config_path {
|
match &self.config_path {
|
||||||
Some(path) if path.exists() => path.display().to_string(),
|
Some(path) if path.exists() => path.display().to_string(),
|
||||||
@@ -132,6 +136,40 @@ impl<T: Transport + ?Sized> AgentEventSink for T {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
fn resolve_submit_instruction(
|
||||||
|
instruction: String,
|
||||||
|
page_url: Option<&str>,
|
||||||
|
page_title: Option<&str>,
|
||||||
|
) -> Result<(String, Option<crate::compat::deterministic_submit::DeterministicExecutionPlan>), AgentMessage> {
|
||||||
|
let raw_instruction = instruction;
|
||||||
|
let trimmed_instruction = raw_instruction.trim().to_string();
|
||||||
|
if trimmed_instruction.is_empty() {
|
||||||
|
return Err(AgentMessage::TaskComplete {
|
||||||
|
success: false,
|
||||||
|
summary: "请输入任务内容。".to_string(),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
match crate::compat::deterministic_submit::decide_deterministic_submit(
|
||||||
|
&raw_instruction,
|
||||||
|
page_url,
|
||||||
|
page_title,
|
||||||
|
) {
|
||||||
|
crate::compat::deterministic_submit::DeterministicSubmitDecision::NotDeterministic => {
|
||||||
|
Ok((trimmed_instruction, None))
|
||||||
|
}
|
||||||
|
crate::compat::deterministic_submit::DeterministicSubmitDecision::Prompt { summary } => {
|
||||||
|
Err(AgentMessage::TaskComplete {
|
||||||
|
success: false,
|
||||||
|
summary,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
crate::compat::deterministic_submit::DeterministicSubmitDecision::Execute(plan) => {
|
||||||
|
Ok((plan.instruction.clone(), Some(plan)))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
pub fn run_submit_task<T: Transport + 'static>(
|
pub fn run_submit_task<T: Transport + 'static>(
|
||||||
transport: &T,
|
transport: &T,
|
||||||
sink: &dyn AgentEventSink,
|
sink: &dyn AgentEventSink,
|
||||||
@@ -146,13 +184,6 @@ pub fn run_submit_task<T: Transport + 'static>(
|
|||||||
page_url,
|
page_url,
|
||||||
page_title,
|
page_title,
|
||||||
} = request;
|
} = request;
|
||||||
let instruction = instruction.trim().to_string();
|
|
||||||
if instruction.is_empty() {
|
|
||||||
return sink.send(&AgentMessage::TaskComplete {
|
|
||||||
success: false,
|
|
||||||
summary: "请输入任务内容。".to_string(),
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
let task_context = CompatTaskContext {
|
let task_context = CompatTaskContext {
|
||||||
conversation_id,
|
conversation_id,
|
||||||
@@ -160,6 +191,14 @@ pub fn run_submit_task<T: Transport + 'static>(
|
|||||||
page_url,
|
page_url,
|
||||||
page_title,
|
page_title,
|
||||||
};
|
};
|
||||||
|
let (instruction, deterministic_plan) = match resolve_submit_instruction(
|
||||||
|
instruction,
|
||||||
|
task_context.page_url.as_deref(),
|
||||||
|
task_context.page_title.as_deref(),
|
||||||
|
) {
|
||||||
|
Ok(resolved) => resolved,
|
||||||
|
Err(completion) => return sink.send(&completion),
|
||||||
|
};
|
||||||
let _ = sink.send(&AgentMessage::LogEntry {
|
let _ = sink.send(&AgentMessage::LogEntry {
|
||||||
level: "info".to_string(),
|
level: "info".to_string(),
|
||||||
message: runtime_version_log_message(),
|
message: runtime_version_log_message(),
|
||||||
@@ -176,7 +215,7 @@ pub fn run_submit_task<T: Transport + 'static>(
|
|||||||
|
|
||||||
let completion = match context.load_sgclaw_settings() {
|
let completion = match context.load_sgclaw_settings() {
|
||||||
Ok(Some(settings)) => {
|
Ok(Some(settings)) => {
|
||||||
let resolved_skills_dirs =
|
let resolved_skills_dir =
|
||||||
resolve_skills_dir_from_sgclaw_settings(&context.workspace_root, &settings);
|
resolve_skills_dir_from_sgclaw_settings(&context.workspace_root, &settings);
|
||||||
let _ = sink.send(&AgentMessage::LogEntry {
|
let _ = sink.send(&AgentMessage::LogEntry {
|
||||||
level: "info".to_string(),
|
level: "info".to_string(),
|
||||||
@@ -189,7 +228,7 @@ pub fn run_submit_task<T: Transport + 'static>(
|
|||||||
});
|
});
|
||||||
let _ = sink.send(&AgentMessage::LogEntry {
|
let _ = sink.send(&AgentMessage::LogEntry {
|
||||||
level: "info".to_string(),
|
level: "info".to_string(),
|
||||||
message: format!("skills dirs resolved to [{}]", resolved_skills_dirs.iter().map(|d| d.display().to_string()).collect::<Vec<_>>().join(", ")),
|
message: format!("skills dir resolved to {}", resolved_skills_dir.display()),
|
||||||
});
|
});
|
||||||
let _ = sink.send(&AgentMessage::LogEntry {
|
let _ = sink.send(&AgentMessage::LogEntry {
|
||||||
level: "info".to_string(),
|
level: "info".to_string(),
|
||||||
@@ -198,6 +237,26 @@ pub fn run_submit_task<T: Transport + 'static>(
|
|||||||
settings.runtime_profile, settings.skills_prompt_mode
|
settings.runtime_profile, settings.skills_prompt_mode
|
||||||
),
|
),
|
||||||
});
|
});
|
||||||
|
if let Some(plan) = deterministic_plan.as_ref() {
|
||||||
|
let _ = send_mode_log(sink, "direct_skill_primary");
|
||||||
|
let completion =
|
||||||
|
match crate::compat::deterministic_submit::execute_deterministic_submit(
|
||||||
|
browser_tool.clone(),
|
||||||
|
plan,
|
||||||
|
&context.workspace_root,
|
||||||
|
&settings,
|
||||||
|
) {
|
||||||
|
Ok(outcome) => AgentMessage::TaskComplete {
|
||||||
|
success: outcome.success,
|
||||||
|
summary: outcome.summary,
|
||||||
|
},
|
||||||
|
Err(err) => AgentMessage::TaskComplete {
|
||||||
|
success: false,
|
||||||
|
summary: err.to_string(),
|
||||||
|
},
|
||||||
|
};
|
||||||
|
return sink.send(&completion);
|
||||||
|
}
|
||||||
if RuntimeEngine::new(settings.runtime_profile).browser_surface_enabled()
|
if RuntimeEngine::new(settings.runtime_profile).browser_surface_enabled()
|
||||||
&& crate::compat::orchestration::should_use_primary_orchestration(
|
&& crate::compat::orchestration::should_use_primary_orchestration(
|
||||||
&instruction,
|
&instruction,
|
||||||
@@ -228,6 +287,42 @@ pub fn run_submit_task<T: Transport + 'static>(
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
if settings
|
||||||
|
.direct_submit_skill
|
||||||
|
.as_deref()
|
||||||
|
.map(str::trim)
|
||||||
|
.is_some_and(|value| !value.is_empty())
|
||||||
|
{
|
||||||
|
match crate::compat::direct_skill_runtime::execute_direct_submit_skill(
|
||||||
|
browser_tool.clone(),
|
||||||
|
&instruction,
|
||||||
|
&task_context,
|
||||||
|
&context.workspace_root,
|
||||||
|
&settings,
|
||||||
|
) {
|
||||||
|
Ok(outcome) => {
|
||||||
|
let _ = send_mode_log(sink, "direct_skill_primary");
|
||||||
|
return sink.send(&AgentMessage::TaskComplete {
|
||||||
|
success: outcome.success,
|
||||||
|
summary: outcome.summary,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
Err(PipeError::Protocol(message))
|
||||||
|
if message.contains("must use skill.tool format") =>
|
||||||
|
{
|
||||||
|
return sink.send(&AgentMessage::TaskComplete {
|
||||||
|
success: false,
|
||||||
|
summary: message,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
Err(err) => {
|
||||||
|
return sink.send(&AgentMessage::TaskComplete {
|
||||||
|
success: false,
|
||||||
|
summary: err.to_string(),
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
let _ = send_mode_log(sink, "compat_llm_primary");
|
let _ = send_mode_log(sink, "compat_llm_primary");
|
||||||
match crate::compat::runtime::execute_task_with_sgclaw_settings(
|
match crate::compat::runtime::execute_task_with_sgclaw_settings(
|
||||||
transport,
|
transport,
|
||||||
@@ -280,13 +375,6 @@ pub fn run_submit_task_with_browser_backend<T: Transport + 'static>(
|
|||||||
page_url,
|
page_url,
|
||||||
page_title,
|
page_title,
|
||||||
} = request;
|
} = request;
|
||||||
let instruction = instruction.trim().to_string();
|
|
||||||
if instruction.is_empty() {
|
|
||||||
return sink.send(&AgentMessage::TaskComplete {
|
|
||||||
success: false,
|
|
||||||
summary: "请输入任务内容。".to_string(),
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
let task_context = CompatTaskContext {
|
let task_context = CompatTaskContext {
|
||||||
conversation_id,
|
conversation_id,
|
||||||
@@ -294,6 +382,14 @@ pub fn run_submit_task_with_browser_backend<T: Transport + 'static>(
|
|||||||
page_url,
|
page_url,
|
||||||
page_title,
|
page_title,
|
||||||
};
|
};
|
||||||
|
let (instruction, deterministic_plan) = match resolve_submit_instruction(
|
||||||
|
instruction,
|
||||||
|
task_context.page_url.as_deref(),
|
||||||
|
task_context.page_title.as_deref(),
|
||||||
|
) {
|
||||||
|
Ok(resolved) => resolved,
|
||||||
|
Err(completion) => return sink.send(&completion),
|
||||||
|
};
|
||||||
let _ = sink.send(&AgentMessage::LogEntry {
|
let _ = sink.send(&AgentMessage::LogEntry {
|
||||||
level: "info".to_string(),
|
level: "info".to_string(),
|
||||||
message: runtime_version_log_message(),
|
message: runtime_version_log_message(),
|
||||||
@@ -310,7 +406,7 @@ pub fn run_submit_task_with_browser_backend<T: Transport + 'static>(
|
|||||||
|
|
||||||
let completion = match context.load_sgclaw_settings() {
|
let completion = match context.load_sgclaw_settings() {
|
||||||
Ok(Some(settings)) => {
|
Ok(Some(settings)) => {
|
||||||
let resolved_skills_dirs =
|
let resolved_skills_dir =
|
||||||
resolve_skills_dir_from_sgclaw_settings(&context.workspace_root, &settings);
|
resolve_skills_dir_from_sgclaw_settings(&context.workspace_root, &settings);
|
||||||
let _ = sink.send(&AgentMessage::LogEntry {
|
let _ = sink.send(&AgentMessage::LogEntry {
|
||||||
level: "info".to_string(),
|
level: "info".to_string(),
|
||||||
@@ -323,7 +419,7 @@ pub fn run_submit_task_with_browser_backend<T: Transport + 'static>(
|
|||||||
 });
 let _ = sink.send(&AgentMessage::LogEntry {
 level: "info".to_string(),
-message: format!("skills dirs resolved to [{}]", resolved_skills_dirs.iter().map(|d| d.display().to_string()).collect::<Vec<_>>().join(", ")),
+message: format!("skills dir resolved to {}", resolved_skills_dir.display()),
 });
 let _ = sink.send(&AgentMessage::LogEntry {
 level: "info".to_string(),
@@ -332,6 +428,25 @@ pub fn run_submit_task_with_browser_backend<T: Transport + 'static>(
 settings.runtime_profile, settings.skills_prompt_mode
 ),
 });
+if let Some(plan) = deterministic_plan.as_ref() {
+let _ = send_mode_log(sink, "direct_skill_primary");
+let completion = match crate::compat::deterministic_submit::execute_deterministic_submit_with_browser_backend(
+browser_backend.clone(),
+plan,
+&context.workspace_root,
+&settings,
+) {
+Ok(outcome) => AgentMessage::TaskComplete {
+success: outcome.success,
+summary: outcome.summary,
+},
+Err(err) => AgentMessage::TaskComplete {
+success: false,
+summary: err.to_string(),
+},
+};
+return sink.send(&completion);
+}
 if RuntimeEngine::new(settings.runtime_profile).browser_surface_enabled()
 && crate::compat::orchestration::should_use_primary_orchestration(
 &instruction,
@@ -362,6 +477,42 @@ pub fn run_submit_task_with_browser_backend<T: Transport + 'static>(
 }
 }
 }
+if settings
+.direct_submit_skill
+.as_deref()
+.map(str::trim)
+.is_some_and(|value| !value.is_empty())
+{
+match crate::compat::direct_skill_runtime::execute_direct_submit_skill_with_browser_backend(
+browser_backend.clone(),
+&instruction,
+&task_context,
+&context.workspace_root,
+&settings,
+) {
+Ok(outcome) => {
+let _ = send_mode_log(sink, "direct_skill_primary");
+return sink.send(&AgentMessage::TaskComplete {
+success: outcome.success,
+summary: outcome.summary,
+});
+}
+Err(PipeError::Protocol(message))
+if message.contains("must use skill.tool format") =>
+{
+return sink.send(&AgentMessage::TaskComplete {
+success: false,
+summary: message,
+});
+}
+Err(err) => {
+return sink.send(&AgentMessage::TaskComplete {
+success: false,
+summary: err.to_string(),
+});
+}
+}
+}
 let _ = send_mode_log(sink, "compat_llm_primary");
 match crate::compat::runtime::execute_task_with_browser_backend(
 sink,
@@ -83,6 +83,14 @@ fn run() -> Result<(), String> {
 eprintln!("busy: {message}");
 break;
 }
+ServiceMessage::Pong => {}
+ServiceMessage::ConfigUpdated { success, message } => {
+if success {
+println!("config updated: {message}");
+} else {
+eprintln!("config update failed: {message}");
+}
+}
 }
 }
 Message::Close(_) => {
@@ -436,12 +436,16 @@ fn build_eval_js(source_url: &str, script: &str) -> String {
 let events_url = escape_js_single_quoted(&events_endpoint_url(source_url));

 format!(
-"(function(){{try{{var v=(function(){{return {script}}})();\
+"(function(){{try{{\
+var v=(function(){{return {script}}})();\
+function _s(v){{\
 var t=(typeof v==='string')?v:JSON.stringify(v);\
 try{{callBackJsToCpp('{escaped_source_url}@_@'+window.location.href+'@_@{callback}@_@sgBrowserExcuteJsCodeByDomain@_@'+(t??''))}}catch(_){{}}\
 var j=JSON.stringify({{type:'callback',callback:'{callback}',request_url:'{escaped_source_url}',payload:{{value:(t??'')}}}});\
 try{{var r=new XMLHttpRequest();r.open('POST','{events_url}',true);r.setRequestHeader('Content-Type','application/json');r.send(j)}}catch(_){{}}\
 try{{navigator.sendBeacon('{events_url}',new Blob([j],{{type:'application/json'}}))}}catch(_){{}}\
+}}\
+if(v&&typeof v.then==='function'){{v.then(_s).catch(function(){{}});}}else{{_s(v);}}\
 }}catch(e){{}}}})()"
 )
 }
@@ -25,7 +25,6 @@ const COMMANDS_ENDPOINT_PATH: &str = "/sgclaw/callback/commands/next";
 const COMMAND_ACK_ENDPOINT_PATH: &str = "/sgclaw/callback/commands/ack";
 const COMMAND_POLL_INTERVAL: Duration = Duration::from_millis(25);
 const HELPER_POLL_INTERVAL: Duration = Duration::from_millis(50);
-const HELPER_BOOTSTRAP_ACTION: &str = "sgBrowerserOpenPage";
 const NAVIGATE_CALLBACK_NAME: &str = "sgclawOnLoaded";
 const CLICK_PROBE_CALLBACK_NAME: &str = "sgclawOnClickProbe";
 const CLICK_CALLBACK_NAME: &str = "sgclawOnClick";
@@ -42,12 +41,15 @@ pub(crate) struct BrowserCallbackHost {
 }

 #[derive(Debug)]
+#[allow(dead_code)]
 pub(crate) struct LiveBrowserCallbackHost {
 host: Arc<BrowserCallbackHost>,
 shutdown: Arc<AtomicBool>,
 server_thread: Mutex<Option<JoinHandle<()>>>,
 command_lock: Mutex<()>,
 result_timeout: Duration,
+browser_ws_url: String,
+use_hidden_domain: bool,
 }

 #[derive(Debug, Default)]
@@ -217,6 +219,7 @@ impl LiveBrowserCallbackHost {
 bootstrap_request_url: &str,
 ready_timeout: Duration,
 result_timeout: Duration,
+use_hidden_domain: bool,
 ) -> Result<Self, PipeError> {
 let listener = TcpListener::bind("127.0.0.1:0").map_err(|err| {
 PipeError::Protocol(format!("failed to bind callback host listener: {err}"))
@@ -238,7 +241,7 @@ impl LiveBrowserCallbackHost {
 let thread_shutdown = shutdown.clone();
 let server_thread = thread::spawn(move || serve_loop(listener, thread_host, thread_shutdown));

-bootstrap_helper_page(browser_ws_url, bootstrap_request_url, host.helper_url())?;
+bootstrap_helper_page(browser_ws_url, bootstrap_request_url, host.helper_url(), use_hidden_domain)?;
 wait_for_helper_ready(host.as_ref(), ready_timeout)?;

 let live_host = Self {
@@ -247,6 +250,8 @@ impl LiveBrowserCallbackHost {
 server_thread: Mutex::new(Some(server_thread)),
 command_lock: Mutex::new(()),
 result_timeout,
+browser_ws_url: browser_ws_url.to_string(),
+use_hidden_domain,
 };
 Ok(live_host)
 }
@@ -337,7 +342,12 @@ fn normalize_loopback_origin(origin: &str) -> String {
 origin.trim_end_matches('/').to_string()
 }

-fn bootstrap_helper_page(browser_ws_url: &str, request_url: &str, helper_url: &str) -> Result<(), PipeError> {
+fn bootstrap_helper_page(
+browser_ws_url: &str,
+request_url: &str,
+helper_url: &str,
+use_hidden_domain: bool,
+) -> Result<(), PipeError> {
 let (mut websocket, _) = connect(browser_ws_url)
 .map_err(|err| PipeError::Protocol(format!("browser websocket connect failed: {err}")))?;
 configure_bootstrap_socket(&mut websocket)?;
@@ -347,9 +357,20 @@ fn bootstrap_helper_page(browser_ws_url: &str, request_url: &str, helper_url: &s
 ))
 .map_err(|err| PipeError::Protocol(format!("browser websocket register failed: {err}")))?;
 let _ = recv_bootstrap_prelude(&mut websocket);
+
+// Close any orphaned helper page from a previous process run.
+// Best-effort: if no page exists, the browser silently ignores this.
+let (open_action, close_action) = if use_hidden_domain {
+("sgHideBrowerserOpenPage", "sgHideBrowerserClosePage")
+} else {
+("sgBrowerserOpenPage", "sgBrowserClosePage")
+};
+let close_payload = json!([request_url, close_action, helper_url]).to_string();
+let _ = websocket.send(Message::Text(close_payload.into()));
+
 let payload = json!([
 request_url,
-HELPER_BOOTSTRAP_ACTION,
+open_action,
 helper_url,
 ])
 .to_string();
@@ -667,7 +688,7 @@ fn normalize_callback_result(
 }))
 }
 "eval" if result.callback == EVAL_CALLBACK_NAME => {
-let value = result.payload.get("value").and_then(Value::as_str)?;
+let value = result.payload.get("value")?.clone();
 Some(BrowserCallbackResponse::Success(BrowserCallbackSuccess {
 success: true,
 data: json!({ "text": value }),
@@ -896,36 +917,66 @@ window.sgclawOnEval = sgclawOnEval;
 window.callBackJsToCpp = callBackJsToCpp;

 document.getElementById('wi').textContent = SGCLAW_BROWSER_WS_URL;
-_log('Connecting to browser WebSocket\u2026');
-
-const sgclawSocket = new WebSocket(SGCLAW_BROWSER_WS_URL);
-sgclawSocket.addEventListener('open', async () => {{
-document.getElementById('sd').classList.add('on');
-document.getElementById('stx').textContent = 'Connected';
-_log('<span class="ok">\u2713</span> WebSocket connected');
-_task('Connected to browser');
-sgclawSocket.send(JSON.stringify({{ type: 'register', role: 'web' }}));
-await sgclawReady();
-_log('<span class="ok">\u2713</span> Ready signal sent');
-_task('Ready \u2014 waiting for commands');
-}});
-
-sgclawSocket.addEventListener('close', () => {{
-document.getElementById('sd').classList.remove('on');
-document.getElementById('stx').textContent = 'Disconnected';
-_log('<span class="er">\u2717</span> WebSocket disconnected');
-_task('Disconnected');
-}});
-
-sgclawSocket.addEventListener('message', (event) => {{
-console.debug('sgclaw helper received browser frame', event.data);
-try {{
-var data = String(event.data || '');
-if (data.indexOf('@_@') !== -1) {{
-sgclawEmitCallback('callBackJsToCpp', {{ raw: data }});
-}}
-}} catch (_e) {{}}
-}});
+let sgclawSocket = null;
+let sgclawReconnectTimer = null;
+let sgclawDeferredCommandLogged = false;
+
+function connectSocket() {{
+if (sgclawSocket && (sgclawSocket.readyState === WebSocket.OPEN || sgclawSocket.readyState === WebSocket.CONNECTING)) {{
+return;
+}}
+_log('Connecting to browser WebSocket\u2026');
+document.getElementById('stx').textContent = 'Connecting…';
+_task('Connecting to browser');
+const socket = new WebSocket(SGCLAW_BROWSER_WS_URL);
+sgclawSocket = socket;
+socket.addEventListener('open', async () => {{
+if (sgclawSocket !== socket) {{
+return;
+}}
+if (sgclawReconnectTimer) {{
+clearTimeout(sgclawReconnectTimer);
+sgclawReconnectTimer = null;
+}}
+sgclawDeferredCommandLogged = false;
+document.getElementById('sd').classList.add('on');
+document.getElementById('stx').textContent = 'Connected';
+_log('<span class="ok">\u2713</span> WebSocket connected');
+_task('Connected to browser');
+socket.send(JSON.stringify({{ type: 'register', role: 'web' }}));
+await sgclawReady();
+_log('<span class="ok">\u2713</span> Ready signal sent');
+_task('Ready \u2014 waiting for commands');
+}});
+
+socket.addEventListener('close', () => {{
+if (sgclawSocket !== socket) {{
+return;
+}}
+sgclawSocket = null;
+document.getElementById('sd').classList.remove('on');
+document.getElementById('stx').textContent = 'Disconnected';
+_log('<span class="er">\u2717</span> WebSocket disconnected');
+_task('Disconnected — reconnecting');
+if (!sgclawReconnectTimer) {{
+sgclawReconnectTimer = setTimeout(connectSocket, 1000);
+}}
+}});
+
+socket.addEventListener('message', (event) => {{
+if (sgclawSocket !== socket) {{
+return;
+}}
+console.debug('sgclaw helper received browser frame', event.data);
+try {{
+var data = String(event.data || '');
+if (data.indexOf('@_@') !== -1) {{
+sgclawEmitCallback('callBackJsToCpp', {{ raw: data }});
+}}
+}} catch (_e) {{}}
+}});
+}}
+
 async function sgclawPollCommands() {{
 try {{
@@ -936,22 +987,29 @@ async function sgclawPollCommands() {{
 const envelope = await response.json();
 const command = envelope && envelope.command;
 if (!command || !command.action) {{
+sgclawDeferredCommandLogged = false;
 return;
 }}
+if (!sgclawSocket || sgclawSocket.readyState !== WebSocket.OPEN) {{
+if (!sgclawDeferredCommandLogged) {{
+_log('<span class="er">!</span> Browser connection lost — command deferred');
+sgclawDeferredCommandLogged = true;
+}}
+return;
+}}
+sgclawDeferredCommandLogged = false;
 _nc++;
 const args = Array.isArray(command.args) ? command.args : [];
 _lastCmd=Date.now();_setIdle(false);
 _log('<span class="a">\u2192</span> execute <span class="a">'+command.action+'</span>'+(args.length>1?' <span class="u">'+String(args[1]||'').substring(0,50)+'</span>':''));
 _task('Executing: '+command.action);
-if (sgclawSocket.readyState !== WebSocket.OPEN) {{
-return;
-}}
 sgclawSocket.send(JSON.stringify([window.location.href || SGCLAW_HELPER_URL, command.action, ...args]));
 await sgclawPostJson(SGCLAW_COMMAND_ACK_ENDPOINT, {{ type: 'command_ack' }});
 }} catch (_error) {{
 }}
 }}
+
+connectSocket();
 setInterval(sgclawPollCommands, 250);
 _log('sgClaw Runtime Console initialized');
 </script>
@@ -1043,6 +1101,7 @@ mod tests {
 "https://www.zhihu.com",
 Duration::from_millis(100),
 Duration::from_millis(50),
+false,
 );
 assert!(result.is_err(), "expected timeout because no real helper page loads");
 drop(result);
@@ -1063,6 +1122,38 @@ mod tests {
 );
 }
+
+#[test]
+fn live_callback_host_hidden_domain_sends_hide_open_page_command() {
+let (ws_url, frames, handle) = start_fake_browser_status_server();
+
+let result = LiveBrowserCallbackHost::start_with_browser_ws_url(
+&ws_url,
+"https://www.zhihu.com",
+Duration::from_millis(100),
+Duration::from_millis(50),
+true,
+);
+assert!(result.is_err(), "expected timeout because no real helper page loads");
+drop(result);
+handle.join().unwrap();
+
+let sent = frames.lock().unwrap().clone();
+assert!(
+sent.iter().any(|frame| frame.contains("sgHideBrowerserOpenPage")),
+"hidden domain bootstrap should send sgHideBrowerserOpenPage; sent frames: {sent:?}"
+);
+assert!(
+!sent.iter().any(|frame| {
+frame.contains("\"sgBrowerserOpenPage\"")
+}),
+"hidden domain bootstrap should NOT send visible sgBrowerserOpenPage; sent frames: {sent:?}"
+);
+assert!(
+sent.iter().any(|frame| frame.contains("/sgclaw/browser-helper.html")),
+"bootstrap should include the helper page URL; sent frames: {sent:?}"
+);
+}
+
 #[test]
 fn live_callback_host_treats_simulated_mouse_command_as_fire_and_forget() {
 use crate::browser::callback_backend::{
@@ -1076,6 +1167,8 @@ mod tests {
 server_thread: Mutex::new(None),
 command_lock: Mutex::new(()),
 result_timeout: Duration::from_millis(10),
+browser_ws_url: "ws://127.0.0.1:12345".to_string(),
+use_hidden_domain: false,
 };

 let response = host.execute(BrowserCallbackRequest {
@@ -1110,6 +1203,11 @@ mod tests {
 assert!(html.contains("ws://127.0.0.1:12345"));
 assert!(html.contains(r#"JSON.stringify({ type: 'register', role: 'web' })"#));
 assert!(html.contains("sgclawReady"));
+assert!(html.contains("connectSocket()"));
+assert!(html.contains("setTimeout(connectSocket, 1000)"));
+assert!(html.contains("if (!sgclawSocket || sgclawSocket.readyState !== WebSocket.OPEN)"));
+assert!(html.contains("Browser connection lost — command deferred"));
+assert!(html.contains("sgclawSocket = null;"));
 assert!(html.contains("sgclawOnLoaded"));
 assert!(html.contains("sgclawOnClickProbe"));
 assert!(html.contains("sgclawOnClick"));
@@ -1361,4 +1459,36 @@ mod tests {
 other => panic!("expected Success, got {other:?}"),
 }
 }
+
+#[test]
+fn normalize_callback_result_path_a_eval_accepts_structured_value_payload() {
+let request = make_request("eval");
+let result = CallbackResult {
+callback: "sgclawOnEval".to_string(),
+request_url: "http://127.0.0.1:17888/sgclaw/browser-helper.html".to_string(),
+target_url: Some("https://www.zhihu.com/hot".to_string()),
+action: Some("sgBrowserExcuteJsCodeByDomain".to_string()),
+payload: json!({
+"value": {
+"source": "https://www.zhihu.com/hot",
+"rows": [[1, "问题一", "344万"]]
+}
+}),
+};
+
+let response = normalize_callback_result(&request, result, Duration::from_millis(10));
+assert!(response.is_some(), "Path A eval should accept structured values");
+match response.unwrap() {
+super::super::callback_backend::BrowserCallbackResponse::Success(s) => {
+assert_eq!(
+s.data.get("text").unwrap(),
+&json!({
+"source": "https://www.zhihu.com/hot",
+"rows": [[1, "问题一", "344万"]]
+})
+);
+}
+other => panic!("expected Success, got {other:?}"),
+}
+}
 }
@@ -12,47 +12,15 @@ use zeroclaw::tools::{Tool, ToolResult};
 use crate::browser::BrowserBackend;
 use crate::pipe::Action;

-pub struct BrowserScriptInvocation<'a> {
-pub tool: &'a SkillTool,
-pub skill_root: &'a Path,
-}
-
 pub struct BrowserScriptSkillTool {
 tool_name: String,
 tool_description: String,
-tool: SkillTool,
 skill_root: PathBuf,
+script_path: PathBuf,
 args: HashMap<String, String>,
 browser_tool: Arc<dyn BrowserBackend>,
 }

-impl BrowserScriptInvocation<'_> {
-fn script_path(&self) -> PathBuf {
-self.skill_root.join(&self.tool.command)
-}
-
-fn canonical_script_path(&self) -> anyhow::Result<PathBuf> {
-let script_path = self.script_path();
-let canonical_skill_root = self
-.skill_root
-.canonicalize()
-.unwrap_or_else(|_| self.skill_root.to_path_buf());
-let canonical_script_path = script_path.canonicalize().map_err(|err| {
-anyhow::anyhow!(
-"failed to resolve browser script {}: {err}",
-script_path.display()
-)
-})?;
-if !canonical_script_path.starts_with(&canonical_skill_root) {
-anyhow::bail!(
-"browser script path escapes skill root: {}",
-canonical_script_path.display()
-);
-}
-Ok(canonical_script_path)
-}
-}
-
 impl BrowserScriptSkillTool {
 pub fn new(
 skill_name: &str,
@@ -60,14 +28,13 @@ impl BrowserScriptSkillTool {
 skill_root: &Path,
 browser_tool: Arc<dyn BrowserBackend>,
 ) -> anyhow::Result<Self> {
-let invocation = BrowserScriptInvocation { tool, skill_root };
-invocation.canonical_script_path()?;
+let script_path = resolve_browser_script_path(skill_root, &tool.command)?;

 Ok(Self {
 tool_name: format!("{}.{}", skill_name, tool.name),
 tool_description: tool.description.clone(),
-tool: tool.clone(),
 skill_root: skill_root.to_path_buf(),
+script_path,
 args: tool.args.clone(),
 browser_tool,
 })
@@ -119,12 +86,15 @@ impl Tool for BrowserScriptSkillTool {
 }

 async fn execute(&self, args: Value) -> anyhow::Result<ToolResult> {
-execute_browser_script_impl(
-&self.tool,
-&self.skill_root,
-self.browser_tool.clone(),
-args,
-)
+let tool = SkillTool {
+name: self.tool_name.clone(),
+description: self.tool_description.clone(),
+kind: "browser_script".to_string(),
+command: self.script_path.to_string_lossy().into_owned(),
+args: self.args.clone(),
+};
+
+execute_browser_script_tool(&tool, &self.skill_root, self.browser_tool.as_ref(), args).await
 }
 }

@@ -165,24 +135,45 @@ pub fn build_browser_script_skill_tools(
 pub async fn execute_browser_script_tool(
 tool: &SkillTool,
 skill_root: &Path,
-browser_tool: Arc<dyn BrowserBackend>,
+browser_tool: &dyn BrowserBackend,
 args: Value,
 ) -> anyhow::Result<ToolResult> {
+if tool.kind != "browser_script" {
+return Ok(failed_tool_result(format!(
+"browser script tool kind must be browser_script, got {}",
+tool.kind
+)));
+}
+
 execute_browser_script_impl(tool, skill_root, browser_tool, args)
 }

 fn execute_browser_script_impl(
 tool: &SkillTool,
 skill_root: &Path,
-browser_tool: Arc<dyn BrowserBackend>,
+browser_tool: &dyn BrowserBackend,
 args: Value,
 ) -> anyhow::Result<ToolResult> {
-let invocation = BrowserScriptInvocation { tool, skill_root };
-let script_path = invocation.canonical_script_path()?;
+eprintln!("[execute_browser_script_impl] starting execution");
+eprintln!("[execute_browser_script_impl] tool.name: {}", tool.name);
+eprintln!("[execute_browser_script_impl] tool.command: {}", tool.command);
+eprintln!("[execute_browser_script_impl] skill_root: {:?}", skill_root);
+eprintln!("[execute_browser_script_impl] args: {:?}", args);
+
+let script_path = resolve_browser_script_path(skill_root, &tool.command)?;
+eprintln!("[execute_browser_script_impl] script_path: {:?}", script_path);
+
+// Check whether the script file exists
+if !script_path.exists() {
+eprintln!("[execute_browser_script_impl] script file does not exist!");
+} else {
+eprintln!("[execute_browser_script_impl] script file exists");
+}
+
 let mut args = match args {
 Value::Object(args) => args,
 other => {
+eprintln!("[execute_browser_script_impl] args is not an Object: {:?}", other);
 return Ok(failed_tool_result(format!(
 "expected object arguments, got {other}"
 )))
@@ -192,27 +183,35 @@ fn execute_browser_script_impl(
 let raw_expected_domain = match args.remove("expected_domain") {
 Some(Value::String(value)) if !value.trim().is_empty() => value,
 Some(other) => {
+eprintln!("[execute_browser_script_impl] expected_domain has wrong format: {:?}", other);
 return Ok(failed_tool_result(format!(
 "expected_domain must be a non-empty string, got {other}"
 )))
 }
 None => {
+eprintln!("[execute_browser_script_impl] missing expected_domain");
 return Ok(failed_tool_result(
 "missing required field expected_domain".to_string(),
 ))
 }
 };
+eprintln!("[execute_browser_script_impl] raw_expected_domain: {}", raw_expected_domain);
+
 let expected_domain = match normalize_domain_like(&raw_expected_domain) {
 Some(value) => value,
 None => {
+eprintln!("[execute_browser_script_impl] failed to parse expected_domain");
 return Ok(failed_tool_result(format!(
 "expected_domain must resolve to a hostname, got {raw_expected_domain:?}"
 )))
 }
 };
+eprintln!("[execute_browser_script_impl] expected_domain: {}", expected_domain);
+args.insert("expected_domain".to_string(), Value::String(expected_domain.clone()));
+
 for required_arg in tool.args.keys() {
 if !args.contains_key(required_arg) {
+eprintln!("[execute_browser_script_impl] missing required parameter: {}", required_arg);
 return Ok(failed_tool_result(format!(
 "missing required field {required_arg}"
 )));
@@ -220,8 +219,12 @@ fn execute_browser_script_impl(
 }

 let script_body = match fs::read_to_string(&script_path) {
-Ok(value) => value,
+Ok(value) => {
+eprintln!("[execute_browser_script_impl] script read OK, length: {} bytes", value.len());
+value
+}
 Err(err) => {
+eprintln!("[execute_browser_script_impl] script read failed: {}", err);
 return Ok(failed_tool_result(format!(
 "failed to read browser script {}: {err}",
 script_path.display()
@@ -230,16 +233,36 @@ fn execute_browser_script_impl(
 };

 let wrapped_script = wrap_browser_script(&script_body, &Value::Object(args.clone()));
+eprintln!("[execute_browser_script_impl] wrapped script length: {} bytes", wrapped_script.len());
+eprintln!("[execute_browser_script_impl] first 500 chars of wrapped script: {}",
+if wrapped_script.len() > 500 { &wrapped_script[..500] } else { &wrapped_script });
+eprintln!("[execute_browser_script_impl] calling browser_tool.invoke(Action::Eval)...");
+
+let target_url = args.get("target_url")
+.and_then(|v| v.as_str())
+.map(|s| s.to_string())
+.unwrap_or_else(|| format!("http://{}", expected_domain));
+eprintln!("[execute_browser_script_impl] target_url: {}", target_url);
 let result = match browser_tool.invoke(
 Action::Eval,
-json!({ "script": wrapped_script }),
+json!({
+"script": wrapped_script,
+"target_url": target_url,
+}),
 &expected_domain,
 ) {
-Ok(result) => result,
-Err(err) => return Ok(failed_tool_result(err.to_string())),
+Ok(result) => {
+eprintln!("[execute_browser_script_impl] invoke succeeded, result.success: {}", result.success);
+result
+}
+Err(err) => {
+eprintln!("[execute_browser_script_impl] invoke failed: {}", err);
+return Ok(failed_tool_result(err.to_string()))
+}
 };

 if !result.success {
+eprintln!("[execute_browser_script_impl] result.success=false, data: {:?}", result.data);
 return Ok(failed_tool_result(format_browser_script_error(&result.data)));
 }

@@ -248,6 +271,7 @@ fn execute_browser_script_impl(
 .get("text")
 .cloned()
 .unwrap_or_else(|| result.data.clone());
+eprintln!("[execute_browser_script_impl] returning success, payload length: {:?}", payload.to_string().len());
 Ok(ToolResult {
 success: true,
|
||||||
output: stringify_tool_payload(&payload)?,
|
output: stringify_tool_payload(&payload)?,
|
||||||
@@ -263,6 +287,32 @@ fn wrap_browser_script(script_body: &str, args: &Value) -> String {
|
|||||||
)
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
fn resolve_browser_script_path(skill_root: &Path, command: &str) -> anyhow::Result<PathBuf> {
|
||||||
|
let script_path = PathBuf::from(command);
|
||||||
|
let script_path = if script_path.is_absolute() {
|
||||||
|
script_path
|
||||||
|
} else {
|
||||||
|
skill_root.join(script_path)
|
||||||
|
};
|
||||||
|
let canonical_skill_root = skill_root
|
||||||
|
.canonicalize()
|
||||||
|
.unwrap_or_else(|_| skill_root.to_path_buf());
|
||||||
|
let canonical_script_path = script_path.canonicalize().map_err(|err| {
|
||||||
|
anyhow::anyhow!(
|
||||||
|
"failed to resolve browser script {}: {err}",
|
||||||
|
script_path.display()
|
||||||
|
)
|
||||||
|
})?;
|
||||||
|
if !canonical_script_path.starts_with(&canonical_skill_root) {
|
||||||
|
anyhow::bail!(
|
||||||
|
"browser script path escapes skill root: {}",
|
||||||
|
canonical_script_path.display()
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
Ok(canonical_script_path)
|
||||||
|
}
|
||||||
|
|
||||||
fn stringify_tool_payload(payload: &Value) -> anyhow::Result<String> {
|
fn stringify_tool_payload(payload: &Value) -> anyhow::Result<String> {
|
||||||
Ok(match payload {
|
Ok(match payload {
|
||||||
Value::String(value) => value.clone(),
|
Value::String(value) => value.clone(),
|
||||||
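The `target_url` fallback added in this hunk can be isolated as a small sketch: take the caller-supplied `target_url` argument when present, otherwise synthesize `http://{expected_domain}`. This is a stand-alone illustration of the logic in the diff (the real code reads the value out of a `serde_json` args map; here it is reduced to an `Option<&str>` to stay self-contained).

```rust
// Sketch of the target_url fallback from the diff above: prefer an explicit
// target_url argument, otherwise derive one from the expected domain.
fn derive_target_url(target_url_arg: Option<&str>, expected_domain: &str) -> String {
    target_url_arg
        .map(|s| s.to_string())
        .unwrap_or_else(|| format!("http://{}", expected_domain))
}

fn main() {
    // Explicit argument wins.
    assert_eq!(
        derive_target_url(
            Some("http://20.76.57.61:18080/gsllys/tqLinelossStatis/tqQualifyRateMonitor"),
            "20.76.57.61"
        ),
        "http://20.76.57.61:18080/gsllys/tqLinelossStatis/tqQualifyRateMonitor"
    );
    // Otherwise fall back to http://{expected_domain}.
    assert_eq!(derive_target_url(None, "20.76.57.61"), "http://20.76.57.61");
    println!("ok");
}
```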
@@ -12,7 +12,6 @@ use crate::runtime::RuntimeProfile;

const SGCLAW_ZEROCLAW_WORKSPACE_DIR: &str = ".sgclaw-zeroclaw-workspace";
const SKILLS_DIR_NAME: &str = "skills";
-const STAGED_SKILLS_DIR_NAME: &str = "skill_staging";

pub fn build_zeroclaw_config(
    workspace_root: &Path,
@@ -88,41 +87,23 @@ pub fn zeroclaw_default_skills_dir(workspace_root: &Path) -> PathBuf {
    zeroclaw_workspace_dir(workspace_root).join(SKILLS_DIR_NAME)
}

-pub fn resolve_skills_dir(workspace_root: &Path, settings: &DeepSeekSettings) -> Vec<PathBuf> {
-   resolve_skills_dir_paths(workspace_root, &settings.skills_dir)
+pub fn resolve_skills_dir(workspace_root: &Path, settings: &DeepSeekSettings) -> PathBuf {
+   settings
+       .skills_dir
+       .as_deref()
+       .map(normalize_configured_skills_dir)
+       .unwrap_or_else(|| zeroclaw_default_skills_dir(workspace_root))
}

pub fn resolve_skills_dir_from_sgclaw_settings(
    workspace_root: &Path,
    settings: &SgClawSettings,
-) -> Vec<PathBuf> {
-   resolve_skills_dir_paths(workspace_root, &settings.skills_dir)
-}
-
-pub fn resolve_scene_skills_dir_from_sgclaw_settings(
-   workspace_root: &Path,
-   settings: &SgClawSettings,
-) -> Vec<PathBuf> {
-   resolve_skills_dir_from_sgclaw_settings(workspace_root, settings)
-       .into_iter()
-       .flat_map(|dir| {
-           let scene_dir = resolve_scene_skills_dir_path(dir.clone());
-           if scene_dir != dir {
-               vec![dir, scene_dir]
-           } else {
-               vec![dir]
-           }
-       })
-       .collect()
-}
-
-pub fn resolve_scene_skills_dir_path(skills_dir: PathBuf) -> PathBuf {
-   let staged_skills_dir = skills_dir.join(STAGED_SKILLS_DIR_NAME).join(SKILLS_DIR_NAME);
-   if staged_skills_dir.is_dir() {
-       staged_skills_dir
-   } else {
-       skills_dir
-   }
+) -> PathBuf {
+   settings
+       .skills_dir
+       .as_deref()
+       .map(normalize_configured_skills_dir)
+       .unwrap_or_else(|| zeroclaw_default_skills_dir(workspace_root))
}

fn normalize_configured_skills_dir(configured_dir: &Path) -> PathBuf {
@@ -138,13 +119,3 @@ fn normalize_configured_skills_dir(configured_dir: &Path) -> PathBuf {
    }
}
-
-fn resolve_skills_dir_paths(workspace_root: &Path, configured_dirs: &[PathBuf]) -> Vec<PathBuf> {
-   if configured_dirs.is_empty() {
-       vec![zeroclaw_default_skills_dir(workspace_root)]
-   } else {
-       configured_dirs
-           .iter()
-           .map(|d| normalize_configured_skills_dir(d))
-           .collect()
-   }
-}
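The hunk above replaces the multi-directory `Vec<PathBuf>` resolution with a single-directory fallback chain: a configured `skills_dir` wins, otherwise the zeroclaw default under the workspace is used. The chain can be sketched with hypothetical stand-ins (the `Settings` struct and `normalize`/`default_dir` helpers below are simplified placeholders for the real `SgClawSettings`, `normalize_configured_skills_dir`, and `zeroclaw_default_skills_dir`):

```rust
use std::path::{Path, PathBuf};

// Hypothetical stand-in for SgClawSettings: only the field the resolver reads.
struct Settings {
    skills_dir: Option<PathBuf>,
}

// Placeholder for normalize_configured_skills_dir: identity here.
fn normalize(dir: &Path) -> PathBuf {
    dir.to_path_buf()
}

// Placeholder for zeroclaw_default_skills_dir, mirroring the constants in the diff.
fn default_dir(workspace_root: &Path) -> PathBuf {
    workspace_root.join(".sgclaw-zeroclaw-workspace").join("skills")
}

// The fallback chain from the diff: configured dir, else the zeroclaw default.
fn resolve_skills_dir(workspace_root: &Path, settings: &Settings) -> PathBuf {
    settings
        .skills_dir
        .as_deref()
        .map(normalize)
        .unwrap_or_else(|| default_dir(workspace_root))
}

fn main() {
    let root = Path::new("/ws");
    assert_eq!(
        resolve_skills_dir(root, &Settings { skills_dir: None }),
        PathBuf::from("/ws/.sgclaw-zeroclaw-workspace/skills")
    );
    assert_eq!(
        resolve_skills_dir(root, &Settings { skills_dir: Some(PathBuf::from("/custom")) }),
        PathBuf::from("/custom")
    );
    println!("ok");
}
```

Returning a single `PathBuf` instead of a `Vec` also removes the staged-skills scanning that `resolve_scene_skills_dir_from_sgclaw_settings` performed in the deleted code.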
src/compat/deterministic_submit.rs — Normal file, 457 lines added
@@ -0,0 +1,457 @@
+use std::path::{Path, PathBuf};
+use std::sync::Arc;
+
+use serde_json::{Map, Value};
+
+use crate::browser::BrowserBackend;
+use crate::compat::artifact_open::{open_exported_xlsx, PostExportOpen};
+use crate::compat::direct_skill_runtime::DirectSubmitOutcome;
+use crate::compat::lineloss_xlsx_export::{export_lineloss_xlsx, LinelossExportRequest};
+use crate::config::SgClawSettings;
+use crate::pipe::{BrowserPipeTool, PipeError, Transport};
+
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct DeterministicExecutionPlan {
+    pub instruction: String,
+    pub tool_name: String,
+    pub expected_domain: String,
+    pub target_url: String,
+    pub org_label: String,
+    pub org_code: String,
+    pub period_mode: String,
+    pub period_mode_code: String,
+    pub period_value: String,
+    pub period_payload: String,
+}
+
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub enum DeterministicSubmitDecision {
+    NotDeterministic,
+    Prompt { summary: String },
+    Execute(DeterministicExecutionPlan),
+}
+
+const DETERMINISTIC_SUFFIX: &str = "。。。";
+const LINELLOSS_HOST: &str = "20.76.57.61";
+const LINELLOSS_TARGET_URL: &str = "http://20.76.57.61:18080/gsllys/tqLinelossStatis/tqQualifyRateMonitor";
+const LINELLOSS_TOOL: &str = "tq-lineloss-report.collect_lineloss";
+
+pub fn decide_deterministic_submit(
+    raw_instruction: &str,
+    page_url: Option<&str>,
+    page_title: Option<&str>,
+) -> DeterministicSubmitDecision {
+    let Some(instruction) = strip_exact_deterministic_suffix(raw_instruction) else {
+        return DeterministicSubmitDecision::NotDeterministic;
+    };
+
+    let normalized_instruction = instruction.trim();
+    if normalized_instruction.is_empty() {
+        return unsupported_scene_prompt();
+    }
+
+    if !matches_lineloss_scene(normalized_instruction) {
+        return unsupported_scene_prompt();
+    }
+
+    let resolved_org = match crate::compat::tq_lineloss::org_resolver::resolve_org_from_instruction(
+        normalized_instruction,
+    ) {
+        Ok(Some(resolved_org)) => resolved_org,
+        Ok(None) => {
+            return DeterministicSubmitDecision::Prompt {
+                summary: crate::compat::tq_lineloss::contracts::missing_company_prompt(),
+            };
+        }
+        Err(summary) => {
+            return DeterministicSubmitDecision::Prompt { summary };
+        }
+    };
+
+    let resolved_period = match crate::compat::tq_lineloss::period_resolver::resolve_period(
+        normalized_instruction,
+    ) {
+        Ok(resolved_period) => resolved_period,
+        Err(summary) => {
+            return DeterministicSubmitDecision::Prompt { summary };
+        }
+    };
+
+    if page_context_conflicts_with_lineloss(page_url, page_title) {
+        return DeterministicSubmitDecision::Prompt {
+            summary:
+                "已命中台区线损报表技能,但当前页面与台区线损场景不匹配,请切换到线损页面后重试。"
+                    .to_string(),
+        };
+    }
+
+    DeterministicSubmitDecision::Execute(DeterministicExecutionPlan {
+        instruction: normalized_instruction.to_string(),
+        tool_name: LINELLOSS_TOOL.to_string(),
+        expected_domain: LINELLOSS_HOST.to_string(),
+        target_url: LINELLOSS_TARGET_URL.to_string(),
+        org_label: resolved_org.label,
+        org_code: resolved_org.code,
+        period_mode: period_mode_name(&resolved_period.mode).to_string(),
+        period_mode_code: resolved_period.mode_code,
+        period_value: resolved_period.value,
+        period_payload: serde_json::to_string(&resolved_period.payload)
+            .unwrap_or_else(|_| "{}".to_string()),
+    })
+}
+
+pub fn execute_deterministic_submit<T: Transport + 'static>(
+    browser_tool: BrowserPipeTool<T>,
+    plan: &DeterministicExecutionPlan,
+    workspace_root: &Path,
+    settings: &SgClawSettings,
+) -> Result<DirectSubmitOutcome, PipeError> {
+    let args = deterministic_submit_args(plan);
+    let output = crate::compat::direct_skill_runtime::execute_browser_script_skill_raw_output(
+        browser_tool,
+        &plan.tool_name,
+        workspace_root,
+        settings,
+        args,
+    )?;
+
+    let export_path = try_export_lineloss_xlsx(&output, workspace_root);
+    Ok(summarize_lineloss_output_with_export(&output, export_path.as_deref()))
+}
+
+pub fn execute_deterministic_submit_with_browser_backend(
+    browser_backend: Arc<dyn BrowserBackend>,
+    plan: &DeterministicExecutionPlan,
+    workspace_root: &Path,
+    settings: &SgClawSettings,
+) -> Result<DirectSubmitOutcome, PipeError> {
+    let args = deterministic_submit_args(plan);
+    let output =
+        crate::compat::direct_skill_runtime::execute_browser_script_skill_raw_output_with_browser_backend(
+            browser_backend,
+            &plan.tool_name,
+            workspace_root,
+            settings,
+            args,
+        )?;
+
+    let export_path = try_export_lineloss_xlsx(&output, workspace_root);
+    Ok(summarize_lineloss_output_with_export(&output, export_path.as_deref()))
+}
+
+fn deterministic_submit_args(plan: &DeterministicExecutionPlan) -> Map<String, Value> {
+    let mut args = Map::new();
+    args.insert(
+        "expected_domain".to_string(),
+        Value::String(plan.expected_domain.clone()),
+    );
+    args.insert(
+        "target_url".to_string(),
+        Value::String(plan.target_url.clone()),
+    );
+    args.insert(
+        "org_label".to_string(),
+        Value::String(plan.org_label.clone()),
+    );
+    args.insert(
+        "org_code".to_string(),
+        Value::String(plan.org_code.clone()),
+    );
+    args.insert(
+        "period_mode".to_string(),
+        Value::String(plan.period_mode.clone()),
+    );
+    args.insert(
+        "period_mode_code".to_string(),
+        Value::String(plan.period_mode_code.clone()),
+    );
+    args.insert(
+        "period_value".to_string(),
+        Value::String(plan.period_value.clone()),
+    );
+    args.insert(
+        "period_payload".to_string(),
+        Value::String(plan.period_payload.clone()),
+    );
+    args
+}
+
+fn summarize_lineloss_output(output: &str) -> DirectSubmitOutcome {
+    let Some(payload) = serde_json::from_str::<Value>(output).ok() else {
+        return DirectSubmitOutcome {
+            success: true,
+            summary: output.to_string(),
+        };
+    };
+
+    let artifact = payload
+        .as_object()
+        .and_then(|object| object.get("text"))
+        .unwrap_or(&payload);
+
+    summarize_lineloss_artifact(artifact)
+}
+
+fn summarize_lineloss_artifact(artifact: &Value) -> DirectSubmitOutcome {
+    let Some(artifact) = artifact.as_object() else {
+        return DirectSubmitOutcome {
+            success: true,
+            summary: artifact.to_string(),
+        };
+    };
+
+    if artifact.get("type").and_then(Value::as_str) != Some("report-artifact") {
+        return DirectSubmitOutcome {
+            success: true,
+            summary: Value::Object(artifact.clone()).to_string(),
+        };
+    }
+
+    let status = artifact
+        .get("status")
+        .and_then(Value::as_str)
+        .unwrap_or("ok");
+    let success = matches!(status, "ok" | "partial" | "empty");
+    let report_name = artifact
+        .get("report_name")
+        .and_then(Value::as_str)
+        .unwrap_or("tq-lineloss-report");
+    let org_label = artifact
+        .get("org")
+        .and_then(Value::as_object)
+        .and_then(|org| org.get("label"))
+        .and_then(Value::as_str)
+        .unwrap_or("");
+    let period_value = artifact
+        .get("period")
+        .and_then(Value::as_object)
+        .and_then(|period| period.get("value"))
+        .and_then(Value::as_str)
+        .unwrap_or("");
+    let rows = artifact
+        .get("counts")
+        .and_then(Value::as_object)
+        .and_then(|counts| counts.get("rows"))
+        .and_then(Value::as_u64)
+        .map(|value| value as usize)
+        .or_else(|| artifact.get("rows").and_then(Value::as_array).map(Vec::len))
+        .unwrap_or(0);
+    let reasons = artifact
+        .get("reasons")
+        .and_then(Value::as_array)
+        .map(|reasons| {
+            reasons
+                .iter()
+                .filter_map(Value::as_str)
+                .filter(|value| !value.trim().is_empty())
+                .collect::<Vec<_>>()
+        })
+        .unwrap_or_default();
+
+    let mut parts = vec![report_name.to_string()];
+    if !org_label.is_empty() {
+        parts.push(org_label.to_string());
+    }
+    if !period_value.is_empty() {
+        parts.push(period_value.to_string());
+    }
+    parts.push(format!("status={status}"));
+    parts.push(format!("rows={rows}"));
+    if !reasons.is_empty() {
+        parts.push(format!("reasons={}", reasons.join(",")));
+    }
+
+    DirectSubmitOutcome {
+        success,
+        summary: parts.join(" "),
+    }
+}
+
+fn summarize_lineloss_output_with_export(output: &str, export_path: Option<&Path>) -> DirectSubmitOutcome {
+    let mut outcome = summarize_lineloss_output(output);
+
+    if let Some(path) = export_path {
+        outcome.summary.push_str(&format!(" export_path={}", path.display()));
+        match open_exported_xlsx(path) {
+            PostExportOpen::Opened => {
+                outcome.summary.push_str(" 已自动打开Excel");
+            }
+            PostExportOpen::Failed(reason) => {
+                outcome.summary.push_str(&format!(" 自动打开Excel失败: {}", reason));
+            }
+        }
+    }
+
+    outcome
+}
+
+struct LinelossArtifactExportData {
+    sheet_name: String,
+    column_defs: Vec<(String, String)>,
+    rows: Vec<Map<String, Value>>,
+}
+
+fn extract_export_data(output: &str) -> Option<LinelossArtifactExportData> {
+    let payload: Value = serde_json::from_str(output).ok()?;
+    let artifact = payload
+        .as_object()
+        .and_then(|object| object.get("text"))
+        .unwrap_or(&payload);
+    let artifact = artifact.as_object()?;
+
+    if artifact.get("type").and_then(Value::as_str) != Some("report-artifact") {
+        return None;
+    }
+
+    let status = artifact.get("status").and_then(Value::as_str).unwrap_or("");
+    if !matches!(status, "ok" | "partial") {
+        return None;
+    }
+
+    let rows = artifact
+        .get("rows")
+        .and_then(Value::as_array)?;
+    if rows.is_empty() {
+        return None;
+    }
+    let rows: Vec<Map<String, Value>> = rows
+        .iter()
+        .filter_map(|row| row.as_object().cloned())
+        .collect();
+    if rows.is_empty() {
+        return None;
+    }
+
+    let column_defs: Vec<(String, String)> = artifact
+        .get("column_defs")
+        .and_then(Value::as_array)
+        .map(|defs| {
+            defs.iter()
+                .filter_map(|def| {
+                    let arr = def.as_array()?;
+                    let key = arr.first()?.as_str()?.to_string();
+                    let label = arr.get(1)?.as_str()?.to_string();
+                    Some((key, label))
+                })
+                .collect()
+        })
+        .unwrap_or_default();
+
+    // Fallback: if column_defs not in artifact, try "columns" array as keys
+    let column_defs = if column_defs.is_empty() {
+        let columns = artifact
+            .get("columns")
+            .and_then(Value::as_array)?;
+        columns
+            .iter()
+            .filter_map(|col| {
+                let key = col.as_str()?.to_string();
+                Some((key.clone(), key))
+            })
+            .collect()
+    } else {
+        column_defs
+    };
+
+    if column_defs.is_empty() {
+        return None;
+    }
+
+    let org_label = artifact
+        .get("org")
+        .and_then(Value::as_object)
+        .and_then(|org| org.get("label"))
+        .and_then(Value::as_str)
+        .unwrap_or("lineloss");
+    let period_mode = artifact
+        .get("period")
+        .and_then(Value::as_object)
+        .and_then(|p| p.get("mode"))
+        .and_then(Value::as_str)
+        .unwrap_or("month");
+    let period_value = artifact
+        .get("period")
+        .and_then(Value::as_object)
+        .and_then(|p| p.get("value"))
+        .and_then(Value::as_str)
+        .unwrap_or("");
+    let mode_label = if period_mode == "week" { "周度" } else { "月度" };
+    let sheet_name = format!("{org_label}{mode_label}线损分析报表({period_value})");
+
+    Some(LinelossArtifactExportData {
+        sheet_name,
+        column_defs,
+        rows,
+    })
+}
+
+fn try_export_lineloss_xlsx(
+    output: &str,
+    workspace_root: &Path,
+) -> Option<PathBuf> {
+    let data = extract_export_data(output)?;
+    let nanos = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)
+        .map(|d| d.as_nanos())
+        .unwrap_or_default();
+    let out_dir = workspace_root.join("out");
+    let output_path = out_dir.join(format!("tq-lineloss-{nanos}.xlsx"));
+
+    let request = LinelossExportRequest {
+        sheet_name: data.sheet_name,
+        column_defs: data.column_defs,
+        rows: data.rows,
+        output_path,
+    };
+
+    match export_lineloss_xlsx(&request) {
+        Ok(path) => {
+            eprintln!("[deterministic_submit] XLSX exported to: {}", path.display());
+            Some(path)
+        }
+        Err(err) => {
+            eprintln!("[deterministic_submit] XLSX export failed: {err}");
+            None
+        }
+    }
+}
+
+fn strip_exact_deterministic_suffix(raw_instruction: &str) -> Option<&str> {
+    let without_suffix = raw_instruction.strip_suffix(DETERMINISTIC_SUFFIX)?;
+    if without_suffix.ends_with('。') {
+        return None;
+    }
+    Some(without_suffix)
+}
+
+fn matches_lineloss_scene(instruction: &str) -> bool {
+    instruction.contains("线损") || instruction.contains("月累计") || instruction.contains("周累计")
+}
+
+fn page_context_conflicts_with_lineloss(page_url: Option<&str>, page_title: Option<&str>) -> bool {
+    let url = page_url.unwrap_or_default().to_ascii_lowercase();
+    let title = page_title.unwrap_or_default();
+    let has_context = !url.is_empty() || !title.is_empty();
+    if !has_context {
+        return false;
+    }
+
+    let url_matches = url.contains(LINELLOSS_HOST) || url.contains("lineloss");
+    let title_matches = title.contains("线损");
+
+    !(url_matches || title_matches)
+}
+
+fn period_mode_name(mode: &crate::compat::tq_lineloss::contracts::PeriodMode) -> &'static str {
+    match mode {
+        crate::compat::tq_lineloss::contracts::PeriodMode::Month => "month",
+        crate::compat::tq_lineloss::contracts::PeriodMode::Week => "week",
+    }
+}
+
+fn unsupported_scene_prompt() -> DeterministicSubmitDecision {
+    DeterministicSubmitDecision::Prompt {
+        summary: "确定性提交当前只支持台区线损月/周累计线损率报表场景,请补充台区线损请求。"
+            .to_string(),
+    }
+}
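The suffix gate in `strip_exact_deterministic_suffix` above is worth a stand-alone sketch: the instruction must end with exactly three ideographic full stops (`。。。`). A fourth trailing `。` means the user's own sentence-ending period precedes the marker, so the deterministic path is declined and the normal flow takes over.

```rust
// Self-contained copy of the suffix gate from deterministic_submit.rs above.
const DETERMINISTIC_SUFFIX: &str = "。。。";

fn strip_exact_deterministic_suffix(raw_instruction: &str) -> Option<&str> {
    // Require the trailing "。。。" marker...
    let without_suffix = raw_instruction.strip_suffix(DETERMINISTIC_SUFFIX)?;
    // ...and reject if stripping it still leaves a trailing "。"
    // (i.e. the instruction ended with four or more full stops).
    if without_suffix.ends_with('。') {
        return None;
    }
    Some(without_suffix)
}

fn main() {
    // Exactly three trailing full stops: marker stripped, instruction kept.
    assert_eq!(strip_exact_deterministic_suffix("查询线损。。。"), Some("查询线损"));
    // Four trailing full stops: not the exact marker, declined.
    assert_eq!(strip_exact_deterministic_suffix("查询线损。。。。"), None);
    // No marker at all: declined.
    assert_eq!(strip_exact_deterministic_suffix("查询线损"), None);
    println!("ok");
}
```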
src/compat/direct_skill_runtime.rs — Normal file, 450 lines added
@@ -0,0 +1,450 @@
+use std::path::Path;
+use std::sync::Arc;
+
+use reqwest::Url;
+use serde_json::{Map, Value};
+use zeroclaw::skills::{load_skills_from_directory, SkillTool};
+
+use crate::browser::{BrowserBackend, PipeBrowserBackend};
+use crate::compat::browser_script_skill_tool::execute_browser_script_tool;
+use crate::compat::config_adapter::resolve_skills_dir_from_sgclaw_settings;
+use crate::compat::runtime::CompatTaskContext;
+use crate::config::SgClawSettings;
+use crate::pipe::{BrowserPipeTool, PipeError, Transport};
+
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct DirectSubmitOutcome {
+    pub success: bool,
+    pub summary: String,
+}
+
+pub fn execute_direct_submit_skill<T: Transport + 'static>(
+    browser_tool: BrowserPipeTool<T>,
+    instruction: &str,
+    task_context: &CompatTaskContext,
+    workspace_root: &Path,
+    settings: &SgClawSettings,
+) -> Result<DirectSubmitOutcome, PipeError> {
+    let configured_tool = settings
+        .direct_submit_skill
+        .as_deref()
+        .map(str::trim)
+        .filter(|value| !value.is_empty())
+        .ok_or_else(|| PipeError::Protocol("direct submit skill is not configured".to_string()))?;
+    let expected_domain = derive_expected_domain(task_context)?;
+    let period = derive_period(instruction)?;
+
+    let mut args = Map::new();
+    args.insert("expected_domain".to_string(), Value::String(expected_domain));
+    args.insert("period".to_string(), Value::String(period));
+
+    let output = execute_browser_script_skill_raw_output(
+        browser_tool,
+        configured_tool,
+        workspace_root,
+        settings,
+        args,
+    )?;
+
+    Ok(interpret_direct_submit_output(&output))
+}
+
+pub fn execute_direct_submit_skill_with_browser_backend(
+    browser_backend: Arc<dyn BrowserBackend>,
+    instruction: &str,
+    task_context: &CompatTaskContext,
+    workspace_root: &Path,
+    settings: &SgClawSettings,
+) -> Result<DirectSubmitOutcome, PipeError> {
+    let configured_tool = settings
+        .direct_submit_skill
+        .as_deref()
+        .map(str::trim)
+        .filter(|value| !value.is_empty())
+        .ok_or_else(|| PipeError::Protocol("direct submit skill is not configured".to_string()))?;
+    let expected_domain = derive_expected_domain(task_context)?;
+    let period = derive_period(instruction)?;
+
+    let mut args = Map::new();
+    args.insert("expected_domain".to_string(), Value::String(expected_domain));
+    args.insert("period".to_string(), Value::String(period));
+
+    let output = execute_browser_script_skill_raw_output_with_browser_backend(
+        browser_backend,
+        configured_tool,
+        workspace_root,
+        settings,
+        args,
+    )?;
+
+    Ok(interpret_direct_submit_output(&output))
+}
+
+pub fn execute_browser_script_skill_raw_output<T: Transport + 'static>(
+    browser_tool: BrowserPipeTool<T>,
+    configured_tool: &str,
+    workspace_root: &Path,
+    settings: &SgClawSettings,
+    args: Map<String, Value>,
+) -> Result<String, PipeError> {
+    let (tool, skill_root) = resolve_browser_script_skill(configured_tool, workspace_root, settings)?;
+
+    execute_browser_script_tool_output(browser_tool, configured_tool, &tool, &skill_root, args)
+}
+
+pub fn execute_browser_script_skill_raw_output_with_browser_backend(
+    browser_backend: Arc<dyn BrowserBackend>,
+    configured_tool: &str,
+    workspace_root: &Path,
+    settings: &SgClawSettings,
+    args: Map<String, Value>,
+) -> Result<String, PipeError> {
+    let (tool, skill_root) =
+        resolve_browser_script_skill(configured_tool, workspace_root, settings)?;
+
+    execute_browser_script_tool_output_with_backend(
+        browser_backend.as_ref(),
+        configured_tool,
+        &tool,
+        &skill_root,
+        args,
+    )
+}
+
+fn resolve_browser_script_skill(
+    configured_tool: &str,
+    workspace_root: &Path,
+    settings: &SgClawSettings,
+) -> Result<(SkillTool, std::path::PathBuf), PipeError> {
+    let (skill_name, tool_name) = parse_configured_tool_name(configured_tool)?;
+    let skills_dir = resolve_skills_dir_from_sgclaw_settings(workspace_root, settings);
+    let skills = load_skills_from_directory(&skills_dir, true);
+    let skill = skills
+        .into_iter()
+        .find(|skill| skill.name == skill_name)
+        .ok_or_else(|| {
+            PipeError::Protocol(format!(
+                "direct submit skill {skill_name} was not found in {}",
+                skills_dir.display()
+            ))
+        })?;
+    let skill_root = skill
+        .location
+        .as_deref()
+        .and_then(Path::parent)
+        .map(Path::to_path_buf)
+        .ok_or_else(|| {
+            PipeError::Protocol(format!(
+                "direct submit skill {skill_name} is missing a resolvable location"
+            ))
+        })?;
+    let tool = skill
+        .tools
+        .iter()
+        .find(|tool| tool.name == tool_name)
+        .cloned()
+        .ok_or_else(|| {
+            PipeError::Protocol(format!(
+                "direct submit tool {configured_tool} was not found"
+            ))
+        })?;
+
+    Ok((tool, skill_root))
+}
+
+fn execute_browser_script_tool_output<T: Transport + 'static>(
+    browser_tool: BrowserPipeTool<T>,
+    configured_tool: &str,
+    tool: &SkillTool,
+    skill_root: &Path,
+    args: Map<String, Value>,
+) -> Result<String, PipeError> {
+    let browser_backend = PipeBrowserBackend::from_inner(browser_tool);
+    execute_browser_script_tool_output_with_backend(
+        &browser_backend,
+        configured_tool,
+        tool,
+        skill_root,
+        args,
+    )
+}
+
+fn execute_browser_script_tool_output_with_backend(
+    browser_backend: &dyn BrowserBackend,
+    configured_tool: &str,
+    tool: &SkillTool,
+    skill_root: &Path,
+    args: Map<String, Value>,
+) -> Result<String, PipeError> {
+    if tool.kind != "browser_script" {
+        return Err(PipeError::Protocol(format!(
+            "direct submit tool {configured_tool} must be browser_script, got {}",
+            tool.kind
+        )));
+    }
+
+    let mut tool = tool.clone();
+    tool.args.remove("expected_domain");
+
+    let runtime = tokio::runtime::Runtime::new()
+        .map_err(|err| PipeError::Protocol(format!("failed to create tokio runtime: {err}")))?;
+    let result = runtime
+        .block_on(execute_browser_script_tool(
+            &tool,
+            skill_root,
+            browser_backend,
+            Value::Object(args),
+        ))
+        .map_err(|err| PipeError::Protocol(err.to_string()))?;
+
+    if result.success {
+        Ok(result.output)
+    } else {
+        Err(PipeError::Protocol(
+            result
+                .error
+                .unwrap_or_else(|| "direct submit skill execution failed".to_string()),
+        ))
+    }
+}
+
+fn interpret_direct_submit_output(output: &str) -> DirectSubmitOutcome {
+    let Some(payload) = serde_json::from_str::<Value>(output).ok() else {
+        return DirectSubmitOutcome {
+            success: true,
+            summary: output.to_string(),
+        };
+    };
+
+    let Some(artifact) = payload.as_object() else {
+        return DirectSubmitOutcome {
+            success: true,
+            summary: output.to_string(),
+        };
+    };
+
+    if artifact.get("type").and_then(Value::as_str) != Some("report-artifact") {
+        return DirectSubmitOutcome {
+            success: true,
+            summary: output.to_string(),
+        };
+    }
+
+    let status = artifact
+        .get("status")
+        .and_then(Value::as_str)
+        .unwrap_or("ok");
+    let success = matches!(status, "ok" | "partial" | "empty");
+    let report_name = artifact
+        .get("report_name")
+        .and_then(Value::as_str)
+        .unwrap_or("report-artifact");
+    let period = artifact
+        .get("period")
+        .and_then(Value::as_str)
+        .unwrap_or("");
+    let detail_rows = count_rows(artifact.get("counts"), artifact.get("rows"), "detail_rows");
+    let summary_rows = count_summary_rows(artifact.get("counts"), artifact.get("sections"));
+    let partial_reasons = artifact
+        .get("partial_reasons")
+        .and_then(Value::as_array)
+        .map(|reasons| {
+            reasons
+                .iter()
+                .filter_map(Value::as_str)
+                .filter(|value| !value.trim().is_empty())
+                .collect::<Vec<_>>()
+        })
+        .unwrap_or_default();
+
+    let mut parts = vec![report_name.to_string()];
+    if !period.trim().is_empty() {
+        parts.push(period.to_string());
+    }
+    parts.push(format!("status={status}"));
+    parts.push(format!("detail_rows={detail_rows}"));
+    parts.push(format!("summary_rows={summary_rows}"));
+    if !partial_reasons.is_empty() {
+        parts.push(format!("partial_reasons={}", partial_reasons.join(",")));
|
||||||
|
}
|
||||||
|
|
||||||
|
DirectSubmitOutcome {
|
||||||
|
success,
|
||||||
|
summary: parts.join(" "),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
fn count_rows(counts: Option<&Value>, rows: Option<&Value>, key: &str) -> usize {
|
||||||
|
counts
|
||||||
|
.and_then(Value::as_object)
|
||||||
|
.and_then(|counts| counts.get(key))
|
||||||
|
.and_then(Value::as_u64)
|
||||||
|
.map(|count| count as usize)
|
||||||
|
.or_else(|| rows.and_then(Value::as_array).map(Vec::len))
|
||||||
|
.unwrap_or(0)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn count_summary_rows(counts: Option<&Value>, sections: Option<&Value>) -> usize {
|
||||||
|
counts
|
||||||
|
.and_then(Value::as_object)
|
||||||
|
.and_then(|counts| counts.get("summary_rows"))
|
||||||
|
.and_then(Value::as_u64)
|
||||||
|
.map(|count| count as usize)
|
||||||
|
.or_else(|| {
|
||||||
|
sections
|
||||||
|
.and_then(Value::as_array)
|
||||||
|
.and_then(|sections| {
|
||||||
|
sections.iter().find_map(|section| {
|
||||||
|
section
|
||||||
|
.as_object()
|
||||||
|
.and_then(|section| section.get("rows"))
|
||||||
|
.and_then(Value::as_array)
|
||||||
|
.map(Vec::len)
|
||||||
|
})
|
||||||
|
})
|
||||||
|
})
|
||||||
|
.unwrap_or(0)
|
||||||
|
}
|
||||||
|
|
||||||
|
fn parse_configured_tool_name(configured_tool: &str) -> Result<(&str, &str), PipeError> {
|
||||||
|
let (skill_name, tool_name) = configured_tool.split_once('.').ok_or_else(|| {
|
||||||
|
PipeError::Protocol(format!(
|
||||||
|
"direct submit skill must use skill.tool format, got {configured_tool}"
|
||||||
|
))
|
||||||
|
})?;
|
||||||
|
let skill_name = skill_name.trim();
|
||||||
|
let tool_name = tool_name.trim();
|
||||||
|
if skill_name.is_empty() || tool_name.is_empty() {
|
||||||
|
return Err(PipeError::Protocol(format!(
|
||||||
|
"direct submit skill must use skill.tool format, got {configured_tool}"
|
||||||
|
)));
|
||||||
|
}
|
||||||
|
Ok((skill_name, tool_name))
|
||||||
|
}
|
||||||
|
|
||||||
|
fn derive_expected_domain(task_context: &CompatTaskContext) -> Result<String, PipeError> {
|
||||||
|
let page_url = task_context
|
||||||
|
.page_url
|
||||||
|
.as_deref()
|
||||||
|
.map(str::trim)
|
||||||
|
.filter(|value| !value.is_empty())
|
||||||
|
.ok_or_else(|| {
|
||||||
|
PipeError::Protocol(
|
||||||
|
"当前命令需要浏览器页面上下文才能执行。请在浏览器中打开目标页面后重试,或在指令末尾添加'。。。'使用确定性提交。".to_string(),
|
||||||
|
)
|
||||||
|
})?;
|
||||||
|
|
||||||
|
Url::parse(page_url)
|
||||||
|
.ok()
|
||||||
|
.and_then(|url| url.host_str().map(|host| host.to_ascii_lowercase()))
|
||||||
|
.ok_or_else(|| {
|
||||||
|
PipeError::Protocol(format!(
|
||||||
|
"direct submit skill could not derive expected_domain from page_url {page_url:?}"
|
||||||
|
))
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
fn derive_period(instruction: &str) -> Result<String, PipeError> {
|
||||||
|
let chars = instruction.chars().collect::<Vec<_>>();
|
||||||
|
if chars.len() < 7 {
|
||||||
|
return Err(PipeError::Protocol(
|
||||||
|
"direct submit skill requires an explicit YYYY-MM period in the instruction"
|
||||||
|
.to_string(),
|
||||||
|
));
|
||||||
|
}
|
||||||
|
|
||||||
|
for start in 0..=chars.len() - 7 {
|
||||||
|
let candidate = chars[start..start + 7].iter().collect::<String>();
|
||||||
|
if is_year_month(&candidate) {
|
||||||
|
return Ok(candidate);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
Err(PipeError::Protocol(
|
||||||
|
"direct submit skill requires an explicit YYYY-MM period in the instruction"
|
||||||
|
.to_string(),
|
||||||
|
))
|
||||||
|
}
|
||||||
|
|
||||||
|
fn is_year_month(candidate: &str) -> bool {
|
||||||
|
let bytes = candidate.as_bytes();
|
||||||
|
bytes.len() == 7
|
||||||
|
&& bytes[0..4].iter().all(u8::is_ascii_digit)
|
||||||
|
&& bytes[4] == b'-'
|
||||||
|
&& bytes[5..7].iter().all(u8::is_ascii_digit)
|
||||||
|
&& matches!((bytes[5] - b'0') * 10 + (bytes[6] - b'0'), 1..=12)
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod tests {
|
||||||
|
use super::{
|
||||||
|
count_rows, count_summary_rows, derive_period, interpret_direct_submit_output,
|
||||||
|
is_year_month, parse_configured_tool_name,
|
||||||
|
};
|
||||||
|
use serde_json::json;
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn parse_configured_tool_name_requires_skill_and_tool() {
|
||||||
|
assert_eq!(
|
||||||
|
parse_configured_tool_name("fault-details-report.collect_fault_details")
|
||||||
|
.unwrap(),
|
||||||
|
("fault-details-report", "collect_fault_details")
|
||||||
|
);
|
||||||
|
assert!(parse_configured_tool_name("fault-details-report").is_err());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn derive_period_requires_explicit_year_month() {
|
||||||
|
assert_eq!(derive_period("收集 2026-03 故障明细").unwrap(), "2026-03");
|
||||||
|
assert!(derive_period("收集三月故障明细").is_err());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn year_month_validation_rejects_invalid_month() {
|
||||||
|
assert!(is_year_month("2026-12"));
|
||||||
|
assert!(!is_year_month("2026-00"));
|
||||||
|
assert!(!is_year_month("2026-13"));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn interpret_direct_submit_output_maps_report_artifact_statuses() {
|
||||||
|
let partial = interpret_direct_submit_output(
|
||||||
|
&json!({
|
||||||
|
"type": "report-artifact",
|
||||||
|
"report_name": "fault-details-report",
|
||||||
|
"period": "2026-03",
|
||||||
|
"counts": { "detail_rows": 1, "summary_rows": 1 },
|
||||||
|
"status": "partial",
|
||||||
|
"partial_reasons": ["report_log_failed"]
|
||||||
|
})
|
||||||
|
.to_string(),
|
||||||
|
);
|
||||||
|
assert!(partial.success);
|
||||||
|
assert!(partial.summary.contains("status=partial"));
|
||||||
|
assert!(partial.summary.contains("report_log_failed"));
|
||||||
|
|
||||||
|
let blocked = interpret_direct_submit_output(
|
||||||
|
&json!({
|
||||||
|
"type": "report-artifact",
|
||||||
|
"report_name": "fault-details-report",
|
||||||
|
"status": "blocked",
|
||||||
|
"partial_reasons": ["selected_range_unavailable"]
|
||||||
|
})
|
||||||
|
.to_string(),
|
||||||
|
);
|
||||||
|
assert!(!blocked.success);
|
||||||
|
assert!(blocked.summary.contains("status=blocked"));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn row_count_helpers_fall_back_to_payload_shapes() {
|
||||||
|
assert_eq!(
|
||||||
|
count_rows(None, Some(&json!([{ "qxdbh": "QX-1" }, { "qxdbh": "QX-2" }])), "detail_rows"),
|
||||||
|
2
|
||||||
|
);
|
||||||
|
assert_eq!(
|
||||||
|
count_summary_rows(None, Some(&json!([{ "name": "summary-sheet", "rows": [{ "index": 1 }] }]))),
|
||||||
|
1
|
||||||
|
);
|
||||||
|
}
|
||||||
|
}
|
||||||
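The `derive_period` scan above slides a 7-character window over the instruction and accepts the first window that matches `YYYY-MM` with a month in 1..=12. A standalone sketch of the same approach, with `Option<String>` substituted for the crate's `PipeError` so the example is self-contained (that substitution is an assumption for illustration):

```rust
// Standalone sketch of the windowed YYYY-MM scan used by derive_period above.
// Option<String> replaces PipeError to keep the example self-contained.
fn is_year_month(candidate: &str) -> bool {
    let bytes = candidate.as_bytes();
    bytes.len() == 7
        && bytes[0..4].iter().all(u8::is_ascii_digit)
        && bytes[4] == b'-'
        && bytes[5..7].iter().all(u8::is_ascii_digit)
        && matches!((bytes[5] - b'0') * 10 + (bytes[6] - b'0'), 1..=12)
}

fn derive_period(instruction: &str) -> Option<String> {
    // Collect chars first so multibyte (e.g. CJK) text can't split a window
    // mid-codepoint; a byte-slice window would panic on such input.
    let chars: Vec<char> = instruction.chars().collect();
    if chars.len() < 7 {
        return None;
    }
    (0..=chars.len() - 7)
        .map(|start| chars[start..start + 7].iter().collect::<String>())
        .find(|candidate| is_year_month(candidate))
}

fn main() {
    assert_eq!(derive_period("collect 2026-03 faults"), Some("2026-03".to_string()));
    assert_eq!(derive_period("no month here"), None);
    println!("ok");
}
```

Scanning over `char`s rather than bytes is why the crate's tests can pass Chinese instructions like "收集 2026-03 故障明细" safely.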
223 src/compat/lineloss_xlsx_export.rs Normal file
@@ -0,0 +1,223 @@
use std::fs;
use std::io::Write;
use std::path::{Path, PathBuf};

use serde_json::{Map, Value};
use zip::write::FileOptions;
use zip::{CompressionMethod, ZipWriter};

pub struct LinelossExportRequest {
    pub sheet_name: String,
    pub column_defs: Vec<(String, String)>,
    pub rows: Vec<Map<String, Value>>,
    pub output_path: PathBuf,
}

pub fn export_lineloss_xlsx(request: &LinelossExportRequest) -> anyhow::Result<PathBuf> {
    if request.rows.is_empty() {
        anyhow::bail!("rows must not be empty");
    }
    if request.column_defs.is_empty() {
        anyhow::bail!("column_defs must not be empty");
    }

    let sheet_xml = build_worksheet_xml(&request.column_defs, &request.rows);

    write_xlsx(
        &request.output_path,
        &request.sheet_name,
        &sheet_xml,
    )?;

    Ok(request.output_path.clone())
}

fn build_worksheet_xml(
    column_defs: &[(String, String)],
    rows: &[Map<String, Value>],
) -> String {
    let mut xml_rows = Vec::with_capacity(rows.len() + 1);

    // Header row (row 1)
    let header_cells: Vec<String> = column_defs
        .iter()
        .enumerate()
        .map(|(col_idx, (_key, label))| {
            let col_letter = column_letter(col_idx);
            format!(
                "<c r=\"{col_letter}1\" t=\"inlineStr\"><is><t>{}</t></is></c>",
                xml_escape(label)
            )
        })
        .collect();
    xml_rows.push(format!("<row r=\"1\">{}</row>", header_cells.join("")));

    // Data rows (row 2+)
    for (row_idx, row) in rows.iter().enumerate() {
        let excel_row = row_idx + 2;
        let cells: Vec<String> = column_defs
            .iter()
            .enumerate()
            .map(|(col_idx, (key, _label))| {
                let col_letter = column_letter(col_idx);
                let value = row
                    .get(key)
                    .map(|v| value_to_string(v))
                    .unwrap_or_default();
                format!(
                    "<c r=\"{col_letter}{excel_row}\" t=\"inlineStr\"><is><t>{}</t></is></c>",
                    xml_escape(&value)
                )
            })
            .collect();
        xml_rows.push(format!("<row r=\"{excel_row}\">{}</row>", cells.join("")));
    }

    format!(
        "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>\
        <worksheet xmlns=\"http://schemas.openxmlformats.org/spreadsheetml/2006/main\">\
        <sheetData>{}</sheetData>\
        </worksheet>",
        xml_rows.join("")
    )
}

fn column_letter(index: usize) -> String {
    let mut result = String::new();
    let mut n = index;
    loop {
        result.insert(0, (b'A' + (n % 26) as u8) as char);
        if n < 26 {
            break;
        }
        n = n / 26 - 1;
    }
    result
}

fn value_to_string(value: &Value) -> String {
    match value {
        Value::String(text) => text.clone(),
        Value::Number(number) => number.to_string(),
        Value::Bool(flag) => flag.to_string(),
        Value::Null => String::new(),
        other => other.to_string(),
    }
}

fn xml_escape(value: &str) -> String {
    value
        .replace('&', "&amp;")
        .replace('<', "&lt;")
        .replace('>', "&gt;")
}

fn write_xlsx(output_path: &Path, sheet_name: &str, sheet_xml: &str) -> anyhow::Result<()> {
    if let Some(parent) = output_path.parent() {
        fs::create_dir_all(parent)?;
    }
    if output_path.exists() {
        fs::remove_file(output_path)?;
    }

    let file = fs::File::create(output_path)?;
    let mut zip = ZipWriter::new(file);
    let options = FileOptions::default().compression_method(CompressionMethod::Stored);

    zip.start_file("[Content_Types].xml", options)?;
    zip.write_all(content_types_xml().as_bytes())?;

    zip.start_file("_rels/.rels", options)?;
    zip.write_all(root_rels_xml().as_bytes())?;

    zip.start_file("docProps/app.xml", options)?;
    zip.write_all(app_xml().as_bytes())?;

    zip.start_file("docProps/core.xml", options)?;
    zip.write_all(core_xml().as_bytes())?;

    zip.start_file("xl/workbook.xml", options)?;
    zip.write_all(workbook_xml(&xml_escape(sheet_name)).as_bytes())?;

    zip.start_file("xl/_rels/workbook.xml.rels", options)?;
    zip.write_all(workbook_rels_xml().as_bytes())?;

    zip.start_file("xl/worksheets/sheet1.xml", options)?;
    zip.write_all(sheet_xml.as_bytes())?;

    zip.finish()?;
    Ok(())
}

fn content_types_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
<Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
<Default Extension="xml" ContentType="application/xml"/>
<Override PartName="/xl/workbook.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.main+xml"/>
<Override PartName="/xl/worksheets/sheet1.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml"/>
<Override PartName="/docProps/core.xml" ContentType="application/vnd.openxmlformats-package.core-properties+xml"/>
<Override PartName="/docProps/app.xml" ContentType="application/vnd.openxmlformats-officedocument.extended-properties+xml"/>
</Types>"#
}

fn root_rels_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="xl/workbook.xml"/>
<Relationship Id="rId2" Type="http://schemas.openxmlformats.org/package/2006/relationships/metadata/core-properties" Target="docProps/core.xml"/>
<Relationship Id="rId3" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/extended-properties" Target="docProps/app.xml"/>
</Relationships>"#
}

fn app_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Properties xmlns="http://schemas.openxmlformats.org/officeDocument/2006/extended-properties"
xmlns:vt="http://schemas.openxmlformats.org/officeDocument/2006/docPropsVTypes">
<Application>sgClaw</Application>
</Properties>"#
}

fn core_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cp:coreProperties xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:dcmitype="http://purl.org/dc/dcmitype/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<dc:title>台区线损报表</dc:title>
</cp:coreProperties>"#
}

fn workbook_xml(sheet_name: &str) -> String {
    format!(
        r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<workbook xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"
xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">
<sheets>
<sheet name="{sheet_name}" sheetId="1" r:id="rId1"/>
</sheets>
</workbook>"#
    )
}

fn workbook_rels_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/worksheet" Target="worksheets/sheet1.xml"/>
</Relationships>"#
}

#[cfg(test)]
mod tests {
    use super::column_letter;

    #[test]
    fn column_letter_maps_indices_correctly() {
        assert_eq!(column_letter(0), "A");
        assert_eq!(column_letter(1), "B");
        assert_eq!(column_letter(6), "G");
        assert_eq!(column_letter(25), "Z");
        assert_eq!(column_letter(26), "AA");
    }
}
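The `column_letter` helper above implements bijective base-26 (Excel column naming): digits run A..Z with no zero, so after emitting `index % 26` the quotient must be decremented by one before the next iteration. A standalone restatement of that conversion, with comments spelling out the decrement:

```rust
// Standalone sketch of Excel-style bijective base-26 column lettering,
// matching the column_letter helper above (0 -> "A", 25 -> "Z", 26 -> "AA").
fn column_letter(index: usize) -> String {
    let mut result = String::new();
    let mut n = index;
    loop {
        // Emit the lowest "digit" at the front of the string.
        result.insert(0, (b'A' + (n % 26) as u8) as char);
        if n < 26 {
            break;
        }
        // Bijective base-26 has no zero digit, so the plain quotient
        // overshoots by one; subtracting 1 corrects it (e.g. 26 -> "AA").
        n = n / 26 - 1;
    }
    result
}

fn main() {
    assert_eq!(column_letter(0), "A");
    assert_eq!(column_letter(26), "AA");
    assert_eq!(column_letter(27), "AB");
    assert_eq!(column_letter(701), "ZZ");
    assert_eq!(column_letter(702), "AAA");
    println!("ok");
}
```

Without the `- 1`, index 26 would map to "BA" instead of "AA", so every cell reference past column Z would point at the wrong column.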
@@ -3,10 +3,14 @@ pub mod browser_script_skill_tool;
 pub mod browser_tool_adapter;
 pub mod config_adapter;
 pub mod cron_adapter;
+pub mod deterministic_submit;
+pub mod direct_skill_runtime;
 pub mod event_bridge;
+pub mod lineloss_xlsx_export;
 pub mod memory_adapter;
 pub mod openxml_office_tool;
 pub mod orchestration;
 pub mod runtime;
 pub mod screen_html_export_tool;
+pub mod tq_lineloss;
 pub mod workflow_executor;
@@ -4,12 +4,12 @@ use serde_json::{json, Value};
 use std::collections::BTreeMap;
 use std::collections::BTreeSet;
 use std::fs;
-use std::io::Write;
+use std::io::{Read, Write};
 use std::path::{Path, PathBuf};
 use std::process::Command;
 use std::time::{SystemTime, UNIX_EPOCH};
 use zeroclaw::tools::{Tool, ToolResult};
-use zip::write::SimpleFileOptions;
+use zip::write::FileOptions;
 use zip::{CompressionMethod, ZipWriter};

 const OPENXML_OFFICE_TOOL_NAME: &str = "openxml_office";
@@ -131,9 +131,8 @@ impl Tool for OpenXmlOfficeTool {
         write_payload_json(&payload_path, &normalized_rows)?;
         write_request_json(&request_path, &template_path, &payload_path, &output_path)?;

-        let rendered = run_openxml_cli(&request_path).or_else(|_| {
-            render_locally(&template_path, &payload_path, &output_path)
-        })?;
+        let rendered = run_openxml_cli(&request_path)
+            .or_else(|_| render_locally(&template_path, &payload_path, &output_path))?;
         let artifact_path = rendered["data"]["artifact"]["path"]
             .as_str()
             .map(str::to_string)
@@ -163,9 +162,7 @@ fn failed_tool_result(error: String) -> ToolResult {

 fn create_job_root(workspace_root: &Path) -> anyhow::Result<PathBuf> {
     let nanos = SystemTime::now().duration_since(UNIX_EPOCH)?.as_nanos();
-    let path = workspace_root
-        .join(".sgclaw-openxml")
-        .join(format!("{nanos}"));
+    let path = workspace_root.join(".sgclaw-openxml").join(format!("{nanos}"));
     fs::create_dir_all(&path)?;
     Ok(path)
 }
@@ -223,10 +220,7 @@ fn canonicalize_column_name(value: &str) -> Option<&'static str> {
 }

 fn reorder_row(row: &[Value], column_order: &[usize]) -> Vec<Value> {
-    column_order
-        .iter()
-        .map(|index| row[*index].clone())
-        .collect()
+    column_order.iter().map(|index| row[*index].clone()).collect()
 }

 fn write_payload_json(path: &Path, rows: &[Vec<Value>]) -> anyhow::Result<()> {
@@ -285,18 +279,8 @@ fn run_openxml_cli(request_path: &Path) -> anyhow::Result<Value> {
         .parent()
         .map(|path| path.join("openxml_cli").join("Cargo.toml"))
         .ok_or_else(|| anyhow::anyhow!("failed to resolve openxml_cli manifest path"))?;
-    let binary_name = if cfg!(windows) {
-        "openxml-cli.exe"
-    } else {
-        "openxml-cli"
-    };
-    let binary_path = manifest_path
-        .parent()
-        .map(|path| path.join("target").join("debug").join(binary_name))
-        .ok_or_else(|| anyhow::anyhow!("failed to resolve openxml_cli binary path"))?;
-
-    let output = if binary_path.exists() {
-        Command::new(&binary_path)
+    let output = if let Some(binary_path) = resolve_openxml_cli_binary(&manifest_path) {
+        Command::new(binary_path)
             .args([
                 "template",
                 "render",
@@ -358,14 +342,11 @@ fn worksheet_xml_from_xlsx(path: &Path) -> anyhow::Result<String> {
     let mut archive = zip::ZipArchive::new(file)?;
     let mut sheet = archive.by_name("xl/worksheets/sheet1.xml")?;
     let mut xml = String::new();
-    std::io::Read::read_to_string(&mut sheet, &mut xml)?;
+    sheet.read_to_string(&mut xml)?;
     Ok(xml)
 }

-fn render_template_xml(
-    template: &str,
-    variables: &serde_json::Map<String, Value>,
-) -> String {
+fn render_template_xml(template: &str, variables: &serde_json::Map<String, Value>) -> String {
     let mut rendered = template.to_string();
     for (key, value) in variables {
         let placeholder = format!("{{{{{key}}}}}");
@@ -392,7 +373,7 @@ fn write_rendered_xlsx(
     let mut archive = zip::ZipArchive::new(input)?;
     let output = fs::File::create(output_path)?;
     let mut writer = ZipWriter::new(output);
-    let options = SimpleFileOptions::default().compression_method(CompressionMethod::Stored);
+    let options = FileOptions::default().compression_method(CompressionMethod::Stored);

     for index in 0..archive.len() {
         let mut entry = archive.by_index(index)?;
@@ -416,6 +397,34 @@ fn xml_escape(value: &str) -> String {
         .replace('>', "&gt;")
 }

+fn resolve_openxml_cli_binary(manifest_path: &Path) -> Option<PathBuf> {
+    let cli_dir = manifest_path.parent()?;
+    openxml_cli_candidate_paths(cli_dir)
+        .into_iter()
+        .find(|path| path.exists())
+}
+
+fn openxml_cli_candidate_paths(cli_dir: &Path) -> Vec<PathBuf> {
+    let mut paths = Vec::new();
+    for profile in ["release", "debug"] {
+        paths.push(
+            cli_dir
+                .join("target")
+                .join(profile)
+                .join(openxml_cli_binary_name()),
+        );
+    }
+    paths
+}
+
+fn openxml_cli_binary_name() -> &'static str {
+    if cfg!(windows) {
+        "openxml-cli.exe"
+    } else {
+        "openxml-cli"
+    }
+}
+
 fn value_to_string(value: &Value) -> String {
     match value {
         Value::String(text) => text.clone(),
@@ -427,34 +436,39 @@ fn value_to_string(value: &Value) -> String {
 }

 fn write_hotlist_template(path: &Path, row_count: usize) -> anyhow::Result<()> {
-    write_zip_file(&path, &[Content {
-        path: "[Content_Types].xml",
-        body: content_types_xml().to_string(),
-    },
-    Content {
-        path: "_rels/.rels",
-        body: root_rels_xml().to_string(),
-    },
-    Content {
-        path: "docProps/app.xml",
-        body: app_xml().to_string(),
-    },
-    Content {
-        path: "docProps/core.xml",
-        body: core_xml().to_string(),
-    },
-    Content {
-        path: "xl/workbook.xml",
-        body: workbook_xml().to_string(),
-    },
-    Content {
-        path: "xl/_rels/workbook.xml.rels",
-        body: workbook_rels_xml().to_string(),
-    },
-    Content {
-        path: "xl/worksheets/sheet1.xml",
-        body: worksheet_xml(row_count),
-    }])?;
+    write_zip_file(
+        &path,
+        &[
+            Content {
+                path: "[Content_Types].xml",
+                body: content_types_xml().to_string(),
+            },
+            Content {
+                path: "_rels/.rels",
+                body: root_rels_xml().to_string(),
+            },
+            Content {
+                path: "docProps/app.xml",
+                body: app_xml().to_string(),
+            },
+            Content {
+                path: "docProps/core.xml",
+                body: core_xml().to_string(),
+            },
+            Content {
+                path: "xl/workbook.xml",
+                body: workbook_xml().to_string(),
+            },
+            Content {
+                path: "xl/_rels/workbook.xml.rels",
+                body: workbook_rels_xml().to_string(),
+            },
+            Content {
+                path: "xl/worksheets/sheet1.xml",
+                body: worksheet_xml(row_count),
+            },
+        ],
+    )?;
     Ok(())
 }
@@ -473,7 +487,7 @@ fn write_zip_file(path: &Path, entries: &[Content<'_>]) -> anyhow::Result<()> {

     let file = fs::File::create(path)?;
     let mut zip = ZipWriter::new(file);
-    let options = SimpleFileOptions::default().compression_method(CompressionMethod::Stored);
+    let options = FileOptions::default().compression_method(CompressionMethod::Stored);
     for entry in entries {
         zip.start_file(entry.path, options)?;
         zip.write_all(entry.body.as_bytes())?;
@@ -482,6 +496,42 @@ fn write_zip_file(path: &Path, entries: &[Content<'_>]) -> anyhow::Result<()> {
     Ok(())
 }

+#[cfg(test)]
+mod tests {
+    use super::{openxml_cli_binary_name, openxml_cli_candidate_paths, zip_entry_name};
+    use std::path::Path;
+
+    #[test]
+    fn openxml_cli_candidates_prefer_release_before_debug() {
+        let paths = openxml_cli_candidate_paths(Path::new("E:\\coding\\codex\\openxml_cli"));
+        assert_eq!(paths.len(), 2);
+        assert_eq!(
+            paths[0],
+            Path::new("E:\\coding\\codex\\openxml_cli")
+                .join("target")
+                .join("release")
+                .join(openxml_cli_binary_name())
+        );
+        assert_eq!(
+            paths[1],
+            Path::new("E:\\coding\\codex\\openxml_cli")
+                .join("target")
+                .join("debug")
+                .join(openxml_cli_binary_name())
+        );
+    }
+
+    #[test]
+    fn zip_entry_name_normalizes_windows_separators() {
+        let rel = Path::new("xl\\worksheets\\sheet1.xml");
+        assert_eq!(zip_entry_name(rel), "xl/worksheets/sheet1.xml");
+    }
+}
+
+fn zip_entry_name(path: &Path) -> String {
+    path.to_string_lossy().replace('\\', "/")
+}
+
 fn worksheet_xml(row_count: usize) -> String {
     let mut rows = Vec::new();
     rows.push(
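The hunks above replace a hard-coded debug-binary path with `resolve_openxml_cli_binary`, which probes a release build first, then debug, before the caller falls back to `cargo run`. A standalone sketch of that probing order (the `/tmp/openxml_cli` directory is an illustrative placeholder, not a path from the repository):

```rust
use std::path::{Path, PathBuf};

// Sketch of the candidate probing introduced above: build both profile
// paths in preference order, then take the first one that exists on disk.
fn candidate_paths(cli_dir: &Path) -> Vec<PathBuf> {
    let name = if cfg!(windows) { "openxml-cli.exe" } else { "openxml-cli" };
    ["release", "debug"]
        .iter()
        .map(|profile| cli_dir.join("target").join(profile).join(name))
        .collect()
}

fn resolve_binary(cli_dir: &Path) -> Option<PathBuf> {
    // find() preserves the Vec order, so release wins when both builds exist.
    candidate_paths(cli_dir).into_iter().find(|path| path.exists())
}

fn main() {
    let paths = candidate_paths(Path::new("/tmp/openxml_cli"));
    assert_eq!(paths.len(), 2);
    assert!(paths[0].to_string_lossy().contains("release"));
    assert!(paths[1].to_string_lossy().contains("debug"));
    // On a machine without either build this prints None.
    println!("{:?}", resolve_binary(Path::new("/tmp/openxml_cli")));
}
```

Splitting path construction (`candidate_paths`) from filesystem probing (`resolve_binary`) is what makes the ordering unit-testable without any binaries present, as the new test module in the diff does.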
@@ -2,6 +2,7 @@ use std::path::Path;
 use std::sync::Arc;

 use crate::browser::BrowserBackend;
+use crate::compat::config_adapter::resolve_skills_dir_from_sgclaw_settings;
 use crate::compat::runtime::CompatTaskContext;
 use crate::config::SgClawSettings;
 use crate::pipe::{BrowserPipeTool, PipeError, Transport};
@@ -36,6 +37,7 @@ pub fn execute_task_with_browser_backend(
     workspace_root: &Path,
     settings: &SgClawSettings,
 ) -> Result<String, PipeError> {
+    let skills_dir = resolve_skills_dir_from_sgclaw_settings(workspace_root, settings);
     let route = crate::compat::workflow_executor::detect_route(
         instruction,
         task_context.page_url.as_deref(),
@@ -47,6 +49,7 @@ pub fn execute_task_with_browser_backend(
             transport,
             browser_backend.clone(),
             workspace_root,
+            &skills_dir,
             instruction,
             task_context,
             route,
@@ -73,6 +76,7 @@ pub fn execute_task_with_browser_backend(
             transport,
             browser_backend,
             workspace_root,
+            &skills_dir,
             instruction,
             task_context,
             route,
@@ -84,6 +88,7 @@ pub fn execute_task_with_browser_backend(
             transport,
             browser_backend,
             workspace_root,
+            &skills_dir,
             instruction,
             task_context,
             route,
@@ -101,6 +106,7 @@ pub fn execute_task_with_sgclaw_settings<T: Transport + 'static>(
     workspace_root: &Path,
     settings: &SgClawSettings,
 ) -> Result<String, PipeError> {
+    let skills_dir = resolve_skills_dir_from_sgclaw_settings(workspace_root, settings);
     let route = crate::compat::workflow_executor::detect_route(
         instruction,
         task_context.page_url.as_deref(),
@@ -112,6 +118,7 @@ pub fn execute_task_with_sgclaw_settings<T: Transport + 'static>(
             transport,
             &browser_tool,
             workspace_root,
+            &skills_dir,
             instruction,
             task_context,
             route,
@@ -138,6 +145,7 @@ pub fn execute_task_with_sgclaw_settings<T: Transport + 'static>(
             transport,
             &browser_tool,
             workspace_root,
+            &skills_dir,
             instruction,
             task_context,
             route,
@@ -149,6 +157,7 @@ pub fn execute_task_with_sgclaw_settings<T: Transport + 'static>(
             transport,
             &browser_tool,
             workspace_root,
+            &skills_dir,
             instruction,
             task_context,
             route,
@@ -146,12 +146,12 @@ pub async fn execute_task_with_provider(
     instruction: &str,
     task_context: &CompatTaskContext,
     config: ZeroClawConfig,
-    skills_dir: Vec<PathBuf>,
+    skills_dir: PathBuf,
     settings: SgClawSettings,
 ) -> Result<String, PipeError> {
     let engine = RuntimeEngine::new(settings.runtime_profile);
     let browser_surface_present = engine.browser_surface_enabled();
-    let loaded_skills = engine.loaded_skills(&config, &skills_dir);
+    let loaded_skills = engine.loaded_skills(&config, std::slice::from_ref(&skills_dir));
     let loaded_skill_versions = loaded_skills
         .iter()
         .map(|skill| (skill.name.clone(), skill.version.clone()))
@@ -198,7 +198,7 @@ pub async fn execute_task_with_provider(
     let mut agent = engine.build_agent(
         provider,
        &config,
-        &skills_dir,
+        std::slice::from_ref(&skills_dir),
         tools,
         browser_surface_present,
         instruction,
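The hunk above narrows `skills_dir` from `Vec<PathBuf>` to a single `PathBuf` while keeping the slice-taking engine APIs unchanged, by wrapping the path with `std::slice::from_ref`. A minimal sketch of that pattern (the `count_dirs` function here is an illustrative stand-in, not part of this codebase):

```rust
use std::path::{Path, PathBuf};

// Hypothetical stand-in for an API that expects a slice of directories.
fn count_dirs(dirs: &[PathBuf]) -> usize {
    dirs.len()
}

fn main() {
    let skills_dir: PathBuf = PathBuf::from("skills");
    // std::slice::from_ref turns &PathBuf into a &[PathBuf] of length 1,
    // so a caller holding a single path can still use slice-based APIs
    // without allocating a one-element Vec.
    let as_slice: &[PathBuf] = std::slice::from_ref(&skills_dir);
    assert_eq!(count_dirs(as_slice), 1);
    assert_eq!(as_slice[0].as_path(), Path::new("skills"));
    println!("ok");
}
```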
@@ -12,10 +12,10 @@ const SCREEN_HTML_EXPORT_TOOL_NAME: &str = "screen_html_export";
 const DEFAULT_SCREEN_TITLE: &str = "知乎热榜主题分类分析大屏";
 const TEMPLATE: &str = include_str!(concat!(
     env!("CARGO_MANIFEST_DIR"),
-    "/../skill_lib/skills/zhihu-hotlist-screen/assets/zhihu-hotlist-echarts.html"
+    "/resources/zhihu-hotlist-echarts.html"
 ));
 const PAYLOAD_START_MARKER: &str = " const defaultPayload = ";
-const PAYLOAD_END_MARKER: &str = "\n\n const themeMeta = {";
+const PAYLOAD_END_MARKER: &str = "const themeMeta = {";

 pub struct ScreenHtmlExportTool {
     workspace_root: PathBuf,
50  src/compat/tq_lineloss/contracts.rs  Normal file
@@ -0,0 +1,50 @@
+use serde_json::Value;
+
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct ResolvedOrg {
+    pub label: String,
+    pub code: String,
+}
+
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub enum PeriodMode {
+    Month,
+    Week,
+}
+
+#[derive(Debug, Clone, PartialEq)]
+pub struct ResolvedPeriod {
+    pub mode: PeriodMode,
+    pub mode_code: String,
+    pub value: String,
+    pub payload: Value,
+}
+
+pub fn missing_company_prompt() -> String {
+    "已命中台区线损报表技能,但缺少供电单位,请补充如“兰州公司”或“城关供电分公司”。"
+        .to_string()
+}
+
+pub fn ambiguous_company_prompt() -> String {
+    "已命中台区线损报表技能,但供电单位存在歧义,请补充更完整名称。".to_string()
+}
+
+pub fn missing_period_mode_prompt() -> String {
+    "已命中台区线损报表技能,但未识别到月/周类型,请补充“月累计”或“周累计”。"
+        .to_string()
+}
+
+pub fn missing_period_prompt() -> String {
+    "已命中台区线损报表技能,但缺少统计周期,请补充如“2026-03”或“2026年第12周”。"
+        .to_string()
+}
+
+pub fn contradictory_period_mode_prompt() -> String {
+    "已命中台区线损报表技能,但月/周类型存在冲突,请只保留“月累计”或“周累计”之一。"
+        .to_string()
+}
+
+pub fn missing_week_year_prompt() -> String {
+    "已命中台区线损报表技能,但周累计缺少年份,请补充如“2026年第12周”。"
+        .to_string()
+}
4  src/compat/tq_lineloss/mod.rs  Normal file
@@ -0,0 +1,4 @@
+pub mod contracts;
+pub mod org_resolver;
+pub mod org_units;
+pub mod period_resolver;
71  src/compat/tq_lineloss/org_resolver.rs  Normal file
@@ -0,0 +1,71 @@
+use super::contracts::{ambiguous_company_prompt, ResolvedOrg};
+use super::org_units::{OrgUnit, ORG_UNITS};
+
+fn normalize(value: &str) -> String {
+    value.chars().filter(|ch| !ch.is_whitespace()).collect()
+}
+
+fn candidate_names(unit: &'static OrgUnit) -> impl Iterator<Item = &'static str> {
+    std::iter::once(unit.label).chain(unit.aliases.iter().copied())
+}
+
+fn to_resolved_org(unit: &OrgUnit) -> ResolvedOrg {
+    ResolvedOrg {
+        label: unit.label.to_string(),
+        code: unit.code.to_string(),
+    }
+}
+
+pub fn resolve_org(input: &str) -> Result<ResolvedOrg, String> {
+    let normalized = normalize(input);
+    if normalized.is_empty() {
+        return Err(super::contracts::missing_company_prompt());
+    }
+
+    let exact_matches: Vec<&OrgUnit> = ORG_UNITS
+        .iter()
+        .filter(|unit| candidate_names(unit).any(|name| normalize(name) == normalized))
+        .collect();
+    if exact_matches.len() == 1 {
+        return Ok(to_resolved_org(exact_matches[0]));
+    }
+    if exact_matches.len() > 1 {
+        return Err(ambiguous_company_prompt());
+    }
+
+    let fuzzy_matches: Vec<&OrgUnit> = ORG_UNITS
+        .iter()
+        .filter(|unit| {
+            candidate_names(unit).any(|name| {
+                let normalized_name = normalize(name);
+                normalized_name.contains(&normalized) || normalized.contains(&normalized_name)
+            })
+        })
+        .collect();
+    if fuzzy_matches.len() == 1 {
+        return Ok(to_resolved_org(fuzzy_matches[0]));
+    }
+    if fuzzy_matches.len() > 1 {
+        return Err(ambiguous_company_prompt());
+    }
+
+    Err(super::contracts::missing_company_prompt())
+}
+
+pub fn resolve_org_from_instruction(instruction: &str) -> Result<Option<ResolvedOrg>, String> {
+    let normalized_instruction = normalize(instruction);
+    let direct_matches: Vec<&OrgUnit> = ORG_UNITS
+        .iter()
+        .filter(|unit| {
+            candidate_names(unit).any(|name| normalized_instruction.contains(&normalize(name)))
+        })
+        .collect();
+    if direct_matches.len() == 1 {
+        return Ok(Some(to_resolved_org(direct_matches[0])));
+    }
+    if direct_matches.len() > 1 {
+        return Err(ambiguous_company_prompt());
+    }
+
+    Ok(None)
+}
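The matching order in org_resolver.rs above is: whitespace-normalize, try an exact alias match, then fall back to bidirectional substring ("fuzzy") matching, treating more than one hit at either stage as ambiguous. A self-contained sketch of that order, using a tiny illustrative two-entry table rather than the real ORG_UNITS list:

```rust
// Strip all whitespace so "兰州 供电公司" matches "兰州供电公司".
fn normalize(value: &str) -> String {
    value.chars().filter(|ch| !ch.is_whitespace()).collect()
}

// (name, code) pairs; illustrative subset only.
const UNITS: &[(&str, &str)] = &[("兰州供电公司", "62401"), ("白银供电公司", "62402")];

fn resolve(input: &str) -> Result<&'static str, &'static str> {
    let needle = normalize(input);
    if needle.is_empty() {
        return Err("missing");
    }
    // Pass 1: exact match on the normalized name.
    let exact: Vec<&str> = UNITS
        .iter()
        .filter(|(name, _)| normalize(name) == needle)
        .map(|&(_, code)| code)
        .collect();
    match exact.len() {
        1 => return Ok(exact[0]),
        n if n > 1 => return Err("ambiguous"),
        _ => {}
    }
    // Pass 2: fuzzy match — either string may contain the other.
    let fuzzy: Vec<&str> = UNITS
        .iter()
        .filter(|(name, _)| {
            let n = normalize(name);
            n.contains(needle.as_str()) || needle.contains(n.as_str())
        })
        .map(|&(_, code)| code)
        .collect();
    match fuzzy.len() {
        1 => Ok(fuzzy[0]),
        0 => Err("missing"),
        _ => Err("ambiguous"),
    }
}

fn main() {
    assert_eq!(resolve("兰州 供电公司"), Ok("62401")); // exact after normalization
    assert_eq!(resolve("兰州"), Ok("62401"));          // unique fuzzy substring hit
    assert_eq!(resolve("供电公司"), Err("ambiguous")); // substring of both entries
    println!("ok");
}
```

Running the exact pass first means a fully spelled-out name never gets shadowed by a broader substring collision in the fuzzy pass.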
602  src/compat/tq_lineloss/org_units.rs  Normal file
@@ -0,0 +1,602 @@
+pub(crate) struct OrgUnit {
+    pub(crate) label: &'static str,
+    pub(crate) code: &'static str,
+    pub(crate) aliases: &'static [&'static str],
+}
+
+pub(crate) const ORG_UNITS: &[OrgUnit] = &[
+    // ===== Province-level =====
+    OrgUnit { label: "国网甘肃省电力公司", code: "62101", aliases: &["国网甘肃省电力公司", "甘肃省电力公司", "甘肃电力公司", "甘肃省公司"] },
+
+    // ===== City-level (lv=2) =====
+    OrgUnit { label: "国网兰州供电公司", code: "62401", aliases: &["国网兰州供电公司", "兰州供电公司", "兰州公司"] },
+    OrgUnit { label: "国网白银供电公司", code: "62402", aliases: &["国网白银供电公司", "白银供电公司", "白银公司"] },
+    OrgUnit { label: "国网天水供电公司", code: "62403", aliases: &["国网天水供电公司", "天水供电公司", "天水公司"] },
+    OrgUnit { label: "国网平凉供电公司", code: "62404", aliases: &["国网平凉供电公司", "平凉供电公司", "平凉公司"] },
+    OrgUnit { label: "国网金昌供电公司", code: "62405", aliases: &["国网金昌供电公司", "金昌供电公司", "金昌公司"] },
+    OrgUnit { label: "国网张掖供电公司", code: "62406", aliases: &["国网张掖供电公司", "张掖供电公司", "张掖公司"] },
+    OrgUnit { label: "国网陇南供电公司", code: "62407", aliases: &["国网陇南供电公司", "陇南供电公司", "陇南公司"] },
+    OrgUnit { label: "国网定西供电公司", code: "62408", aliases: &["国网定西供电公司", "定西供电公司", "定西公司"] },
+    OrgUnit { label: "国网庆阳供电公司", code: "62409", aliases: &["国网庆阳供电公司", "庆阳供电公司", "庆阳公司"] },
+    OrgUnit { label: "国网武威供电公司", code: "62410", aliases: &["国网武威供电公司", "武威供电公司", "武威公司"] },
+    OrgUnit { label: "国网酒泉供电公司", code: "62411", aliases: &["国网酒泉供电公司", "酒泉供电公司", "酒泉公司"] },
+    OrgUnit { label: "国网临夏供电公司", code: "62412", aliases: &["国网临夏供电公司", "临夏供电公司", "临夏公司"] },
+    OrgUnit { label: "国网甘南供电公司", code: "62413", aliases: &["国网甘南供电公司", "甘南供电公司", "甘南公司"] },
+    OrgUnit { label: "国网嘉峪关供电公司", code: "62414", aliases: &["国网嘉峪关供电公司", "嘉峪关供电公司", "嘉峪关公司"] },
+    OrgUnit { label: "国网兰州新区供电公司", code: "62415", aliases: &["国网兰州新区供电公司", "兰州新区供电公司", "兰州新区公司"] },
+
+    // ===== 兰州供电公司 children (lv=3) =====
+    OrgUnit { label: "城关供电分公司", code: "6240108", aliases: &["城关供电分公司", "城关分公司"] },
+    OrgUnit { label: "七里河供电分公司", code: "6240109", aliases: &["七里河供电分公司", "七里河分公司"] },
+    OrgUnit { label: "西固供电分公司", code: "6240107", aliases: &["西固供电分公司", "西固分公司"] },
+    OrgUnit { label: "安宁供电分公司", code: "6240111", aliases: &["安宁供电分公司", "安宁分公司"] },
+    OrgUnit { label: "红古供电分公司", code: "6240102", aliases: &["红古供电分公司", "红古分公司"] },
+    OrgUnit { label: "东岗供电分公司", code: "6240110", aliases: &["东岗供电分公司", "东岗分公司"] },
+    OrgUnit { label: "国网永登县供电公司", code: "6240122", aliases: &["国网永登县供电公司", "永登县供电公司", "永登县公司"] },
+    OrgUnit { label: "国网榆中县供电公司", code: "6240121", aliases: &["国网榆中县供电公司", "榆中县供电公司", "榆中县公司"] },
+    OrgUnit { label: "国网永靖县供电公司", code: "6240123", aliases: &["国网永靖县供电公司", "永靖县供电公司", "永靖县公司"] },
+    OrgUnit { label: "兰州客户服务中心", code: "6240101", aliases: &["兰州客户服务中心", "兰州客服中心"] },
+
+    // ===== 白银供电公司 children (lv=3) =====
+    OrgUnit { label: "城区供电分公司", code: "6240201", aliases: &["城区供电分公司", "城区分公司"] },
+    OrgUnit { label: "国网白银市城区供电分公司", code: "6240201", aliases: &["国网白银市城区供电分公司", "白银市城区供电分公司", "白银城区分公司"] },
+    OrgUnit { label: "国网皋兰县供电公司", code: "6240223", aliases: &["国网皋兰县供电公司", "皋兰县供电公司", "皋兰县公司"] },
+    OrgUnit { label: "国网靖远县供电公司", code: "6240221", aliases: &["国网靖远县供电公司", "靖远县供电公司", "靖远县公司"] },
+    OrgUnit { label: "国网景泰县供电公司", code: "6240222", aliases: &["国网景泰县供电公司", "景泰县供电公司", "景泰县公司"] },
+    OrgUnit { label: "国网会宁县供电公司", code: "6240225", aliases: &["国网会宁县供电公司", "会宁县供电公司", "会宁县公司"] },
+    OrgUnit { label: "国网白银市平川区供电公司", code: "6240224", aliases: &["国网白银市平川区供电公司", "白银市平川区供电公司", "平川区供电公司", "平川区公司"] },
+    OrgUnit { label: "白银客户服务中心", code: "6240207", aliases: &["白银客户服务中心", "白银客服中心"] },
+
+    // ===== 天水供电公司 children (lv=3) =====
+    OrgUnit { label: "国网天水市秦州区供电公司", code: "6240323", aliases: &["国网天水市秦州区供电公司", "天水市秦州区供电公司", "秦州区供电公司", "秦州区公司"] },
+    OrgUnit { label: "秦州区供电公司", code: "6240323", aliases: &["秦州区供电公司", "秦州区公司"] },
+    OrgUnit { label: "国网天水市麦积区供电公司", code: "6240305", aliases: &["国网天水市麦积区供电公司", "天水市麦积区供电公司", "麦积区供电公司", "麦积区公司"] },
+    OrgUnit { label: "国网麦积区供电公司", code: "6240305", aliases: &["国网麦积区供电公司", "麦积区供电公司", "麦积区公司"] },
+    OrgUnit { label: "国网武山县供电公司", code: "6240321", aliases: &["国网武山县供电公司", "武山县供电公司", "武山县公司"] },
+    OrgUnit { label: "武山县供电公司", code: "6240321", aliases: &["武山县供电公司", "武山县公司"] },
+    OrgUnit { label: "国网甘谷县供电公司", code: "6240322", aliases: &["国网甘谷县供电公司", "甘谷县供电公司", "甘谷县公司"] },
+    OrgUnit { label: "甘谷县供电公司", code: "6240322", aliases: &["甘谷县供电公司", "甘谷县公司"] },
+    OrgUnit { label: "国网秦安县供电公司", code: "6240324", aliases: &["国网秦安县供电公司", "秦安县供电公司", "秦安县公司"] },
+    OrgUnit { label: "清水县供电公司", code: "6240325", aliases: &["清水县供电公司", "清水县公司"] },
+    OrgUnit { label: "张家川县供电公司", code: "6240326", aliases: &["张家川县供电公司", "张家川县公司"] },
+    OrgUnit { label: "天水客户服务中心", code: "6240306", aliases: &["天水客户服务中心", "天水客服中心"] },
+
+    // ===== 平凉供电公司 children (lv=3) =====
+    OrgUnit { label: "国网崇信县供电公司", code: "6240401", aliases: &["国网崇信县供电公司", "崇信县供电公司", "崇信县公司"] },
+    OrgUnit { label: "国网庄浪县供电公司", code: "6240402", aliases: &["国网庄浪县供电公司", "庄浪县供电公司", "庄浪县公司"] },
+    OrgUnit { label: "国网泾川县供电公司", code: "6240403", aliases: &["国网泾川县供电公司", "泾川县供电公司", "泾川县公司"] },
+    OrgUnit { label: "国网静宁县供电公司", code: "6240404", aliases: &["国网静宁县供电公司", "静宁县供电公司", "静宁县公司"] },
+    OrgUnit { label: "国网崆峒区供电公司", code: "6240405", aliases: &["国网崆峒区供电公司", "崆峒区供电公司", "崆峒区公司"] },
+    OrgUnit { label: "国网华亭市公司", code: "6240407", aliases: &["国网华亭市公司", "华亭市公司"] },
+    OrgUnit { label: "国网灵台县供电公司", code: "6240408", aliases: &["国网灵台县供电公司", "灵台县供电公司", "灵台县公司"] },
+
+    // ===== 金昌供电公司 children (lv=3) =====
+    OrgUnit { label: "金川区供电公司", code: "6240522", aliases: &["金川区供电公司", "金川区公司"] },
+    OrgUnit { label: "国网永昌县供电公司", code: "6240523", aliases: &["国网永昌县供电公司", "永昌县供电公司", "永昌县公司"] },
+    OrgUnit { label: "城区供电服务中心", code: "6240505", aliases: &["城区供电服务中心"] },
+    OrgUnit { label: "金昌客户服务中心", code: "6240507", aliases: &["金昌客户服务中心", "金昌客服中心"] },
+
+    // ===== 张掖供电公司 children (lv=3) =====
+    OrgUnit { label: "国网甘州区供电公司", code: "6240621", aliases: &["国网甘州区供电公司", "甘州区供电公司", "甘州区公司"] },
+    OrgUnit { label: "肃南县供电公司", code: "6240622", aliases: &["肃南县供电公司", "肃南县公司"] },
+    OrgUnit { label: "国网高台县供电公司", code: "6240623", aliases: &["国网高台县供电公司", "高台县供电公司", "高台县公司"] },
+    OrgUnit { label: "国网山丹县供电公司", code: "6240624", aliases: &["国网山丹县供电公司", "山丹县供电公司", "山丹县公司"] },
+    OrgUnit { label: "国网民乐县供电公司", code: "6240625", aliases: &["国网民乐县供电公司", "民乐县供电公司", "民乐县公司"] },
+    OrgUnit { label: "国网临泽县供电公司", code: "6240626", aliases: &["国网临泽县供电公司", "临泽县供电公司", "临泽县公司"] },
+
+    // ===== 陇南供电公司 children (lv=3) =====
+    OrgUnit { label: "国网武都区供电公司", code: "6240701", aliases: &["国网武都区供电公司", "武都区供电公司", "武都区公司"] },
+    OrgUnit { label: "国网宕昌县供电公司", code: "6240702", aliases: &["国网宕昌县供电公司", "宕昌县供电公司", "宕昌县公司"] },
+    OrgUnit { label: "国网文县供电公司", code: "6240703", aliases: &["国网文县供电公司", "文县供电公司", "文县公司"] },
+    OrgUnit { label: "国网康县供电公司", code: "6240704", aliases: &["国网康县供电公司", "康县供电公司", "康县公司"] },
+    OrgUnit { label: "国网西和县供电公司", code: "6240705", aliases: &["国网西和县供电公司", "西和县供电公司", "西和县公司"] },
+    OrgUnit { label: "国网礼县供电公司", code: "6240706", aliases: &["国网礼县供电公司", "礼县供电公司", "礼县公司"] },
+    OrgUnit { label: "国网成县供电公司", code: "6240707", aliases: &["国网成县供电公司", "成县供电公司", "成县公司"] },
+    OrgUnit { label: "国网徽县供电公司", code: "6240708", aliases: &["国网徽县供电公司", "徽县供电公司", "徽县公司"] },
+    OrgUnit { label: "国网两当县供电公司", code: "6240709", aliases: &["国网两当县供电公司", "两当县供电公司", "两当县公司"] },
+
+    // ===== 定西供电公司 children (lv=3) =====
+    OrgUnit { label: "国网定西市安定区供电公司", code: "6240801", aliases: &["国网定西市安定区供电公司", "定西市安定区供电公司", "安定区供电公司", "安定区公司"] },
+    OrgUnit { label: "国网通渭县供电公司", code: "6240802", aliases: &["国网通渭县供电公司", "通渭县供电公司", "通渭县公司"] },
+    OrgUnit { label: "国网陇西县供电公司", code: "6240803", aliases: &["国网陇西县供电公司", "陇西县供电公司", "陇西县公司"] },
+    OrgUnit { label: "国网渭源县供电公司", code: "6240804", aliases: &["国网渭源县供电公司", "渭源县供电公司", "渭源县公司"] },
+    OrgUnit { label: "国网临洮县供电公司", code: "6240805", aliases: &["国网临洮县供电公司", "临洮县供电公司", "临洮县公司"] },
+    OrgUnit { label: "国网漳县供电公司", code: "6240806", aliases: &["国网漳县供电公司", "漳县供电公司", "漳县公司"] },
+    OrgUnit { label: "国网岷县供电公司", code: "6240807", aliases: &["国网岷县供电公司", "岷县供电公司", "岷县公司"] },
+
+    // ===== 庆阳供电公司 children (lv=3) =====
+    OrgUnit { label: "西峰区供电公司", code: "6240901", aliases: &["西峰区供电公司", "西峰区公司"] },
+    OrgUnit { label: "国网庆城县供电公司", code: "6240902", aliases: &["国网庆城县供电公司", "庆城县供电公司", "庆城县公司"] },
+    OrgUnit { label: "国网正宁县供电公司", code: "6240903", aliases: &["国网正宁县供电公司", "正宁县供电公司", "正宁县公司"] },
+    OrgUnit { label: "国网镇原县供电公司", code: "6240904", aliases: &["国网镇原县供电公司", "镇原县供电公司", "镇原县公司"] },
+    OrgUnit { label: "国网环县供电公司", code: "6240905", aliases: &["国网环县供电公司", "环县供电公司", "环县公司"] },
+    OrgUnit { label: "国网华池县供电公司", code: "6240906", aliases: &["国网华池县供电公司", "华池县供电公司", "华池县公司"] },
+    OrgUnit { label: "国网合水县供电公司", code: "6240907", aliases: &["国网合水县供电公司", "合水县供电公司", "合水县公司"] },
+    OrgUnit { label: "国网宁县供电公司", code: "6240908", aliases: &["国网宁县供电公司", "宁县供电公司", "宁县公司"] },
+
+    // ===== 武威供电公司 children (lv=3) =====
+    OrgUnit { label: "国网古浪县供电公司", code: "6241001", aliases: &["国网古浪县供电公司", "古浪县供电公司", "古浪县公司"] },
+    OrgUnit { label: "国网凉州区供电公司", code: "6241002", aliases: &["国网凉州区供电公司", "凉州区供电公司", "凉州区公司"] },
+    OrgUnit { label: "国网民勤县供电公司", code: "6241003", aliases: &["国网民勤县供电公司", "民勤县供电公司", "民勤县公司"] },
+    OrgUnit { label: "国网天祝县供电公司", code: "6241004", aliases: &["国网天祝县供电公司", "天祝县供电公司", "天祝县公司"] },
+
+    // ===== 酒泉供电公司 children (lv=3) =====
+    OrgUnit { label: "国网酒泉市肃州区供电公司", code: "6241101", aliases: &["国网酒泉市肃州区供电公司", "酒泉市肃州区供电公司", "肃州区供电公司", "肃州区公司"] },
+    OrgUnit { label: "国网金塔县供电公司", code: "6241102", aliases: &["国网金塔县供电公司", "金塔县供电公司", "金塔县公司"] },
+    OrgUnit { label: "国网玉门市供电公司", code: "6241103", aliases: &["国网玉门市供电公司", "玉门市供电公司", "玉门市公司"] },
+    OrgUnit { label: "国网瓜州县供电公司", code: "6241104", aliases: &["国网瓜州县供电公司", "瓜州县供电公司", "瓜州县公司"] },
+    OrgUnit { label: "国网敦煌市供电公司", code: "6241105", aliases: &["国网敦煌市供电公司", "敦煌市供电公司", "敦煌市公司"] },
+    OrgUnit { label: "国网肃北县供电公司", code: "6241106", aliases: &["国网肃北县供电公司", "肃北县供电公司", "肃北县公司"] },
+    OrgUnit { label: "国网阿克塞县供电公司", code: "6241107", aliases: &["国网阿克塞县供电公司", "阿克塞县供电公司", "阿克塞县公司"] },
+
+    // ===== 临夏供电公司 children (lv=3) =====
+    OrgUnit { label: "临夏市城关营业班", code: "6241201", aliases: &["临夏市城关营业班"] },
+    OrgUnit { label: "国网临夏县供电公司", code: "6241202", aliases: &["国网临夏县供电公司", "临夏县供电公司", "临夏县公司"] },
+    OrgUnit { label: "国网东乡县供电公司", code: "6241203", aliases: &["国网东乡县供电公司", "东乡县供电公司", "东乡县公司"] },
+    OrgUnit { label: "国网和政县供电公司", code: "6241204", aliases: &["国网和政县供电公司", "和政县供电公司", "和政县公司"] },
+    OrgUnit { label: "国网广河县供电公司", code: "6241205", aliases: &["国网广河县供电公司", "广河县供电公司", "广河县公司"] },
+    OrgUnit { label: "国网积石山县供电公司", code: "6241206", aliases: &["国网积石山县供电公司", "积石山县供电公司", "积石山县公司"] },
+    OrgUnit { label: "国网康乐县供电公司", code: "6241207", aliases: &["国网康乐县供电公司", "康乐县供电公司", "康乐县公司"] },
+
+    // ===== 甘南供电公司 children (lv=3) =====
+    OrgUnit { label: "国网合作市供电公司", code: "6241301", aliases: &["国网合作市供电公司", "合作市供电公司", "合作市公司"] },
+    OrgUnit { label: "国网夏河县供电公司", code: "6241302", aliases: &["国网夏河县供电公司", "夏河县供电公司", "夏河县公司"] },
+    OrgUnit { label: "国网卓尼县供电公司", code: "6241303", aliases: &["国网卓尼县供电公司", "卓尼县供电公司", "卓尼县公司"] },
+    OrgUnit { label: "国网临潭县供电公司", code: "6241304", aliases: &["国网临潭县供电公司", "临潭县供电公司", "临潭县公司"] },
+    OrgUnit { label: "国网碌曲县供电公司", code: "6241305", aliases: &["国网碌曲县供电公司", "碌曲县供电公司", "碌曲县公司"] },
+    OrgUnit { label: "国网玛曲县供电公司", code: "6241306", aliases: &["国网玛曲县供电公司", "玛曲县供电公司", "玛曲县公司"] },
+    OrgUnit { label: "国网迭部县供电公司", code: "6241307", aliases: &["国网迭部县供电公司", "迭部县供电公司", "迭部县公司"] },
+    OrgUnit { label: "国网舟曲县供电公司", code: "6241308", aliases: &["国网舟曲县供电公司", "舟曲县供电公司", "舟曲县公司"] },
+];
244  src/compat/tq_lineloss/period_resolver.rs  Normal file
@@ -0,0 +1,244 @@
+use chrono::{Datelike, Duration, Local, NaiveDate};
+use serde_json::json;
+
+use super::contracts::{
+    contradictory_period_mode_prompt, missing_period_mode_prompt, missing_period_prompt,
+    missing_week_year_prompt, PeriodMode, ResolvedPeriod,
+};
+
+pub fn resolve_period(input: &str) -> Result<ResolvedPeriod, String> {
+    let has_month = input.contains("月累计");
+    let has_week = input.contains("周累计");
+
+    match (has_month, has_week) {
+        (true, true) => return Err(contradictory_period_mode_prompt()),
+        (false, false) => return Err(missing_period_mode_prompt()),
+        (true, false) => resolve_month_period(input),
+        (false, true) => resolve_week_period(input),
+    }
+}
+
+fn resolve_month_period(input: &str) -> Result<ResolvedPeriod, String> {
+    if let Some(value) = extract_year_month_dash(input) {
+        return Ok(ResolvedPeriod {
+            mode: PeriodMode::Month,
+            mode_code: "1".to_string(),
+            value: value.clone(),
+            payload: json!({ "fdate": value }),
+        });
+    }
+
+    if let Some(value) = extract_year_month_cn(input) {
+        return Ok(ResolvedPeriod {
+            mode: PeriodMode::Month,
+            mode_code: "1".to_string(),
+            value: value.clone(),
+            payload: json!({ "fdate": value }),
+        });
+    }
+
+    if contains_explicit_month_period_hint(input) {
+        return Err(missing_period_prompt());
+    }
+
+    Ok(default_month_period())
+}
+
+fn resolve_week_period(input: &str) -> Result<ResolvedPeriod, String> {
+    if input.contains('第') && input.contains('周') && !input.contains('年') {
+        return Err(missing_week_year_prompt());
+    }
+
+    if let Some((year, week)) = extract_year_week(input) {
+        let Some(week_start) = week_start_date(year, week) else {
+            return Err(missing_period_prompt());
+        };
+        let week_end = week_start + Duration::days(6);
+
+        return Ok(ResolvedPeriod {
+            mode: PeriodMode::Week,
+            mode_code: "2".to_string(),
+            value: format!("{year}-W{week:02}"),
+            payload: json!({
+                "tjzq": "week",
+                "level": "00",
+                "weekSfdate": week_start.format("%Y-%m-%d").to_string(),
+                "weekEfdate": week_end.format("%Y-%m-%d").to_string(),
+            }),
+        });
+    }
+
+    if contains_explicit_week_period_hint(input) {
+        return Err(missing_period_prompt());
+    }
+
+    Ok(default_week_period())
+}
+
+fn default_month_period() -> ResolvedPeriod {
+    let today = Local::now().date_naive();
+    let (year, month) = if today.month() == 1 {
+        (today.year() - 1, 12)
+    } else {
+        (today.year(), today.month() - 1)
+    };
+    let value = format!("{year}-{month:02}");
+
+    ResolvedPeriod {
+        mode: PeriodMode::Month,
+        mode_code: "1".to_string(),
+        value: value.clone(),
+        payload: json!({ "fdate": value }),
+    }
+}
+
+fn default_week_period() -> ResolvedPeriod {
+    let today = Local::now().date_naive();
+    let month_start = today.with_day(1).expect("current month should have day 1");
+    let start = month_start.format("%Y-%m-%d").to_string();
+    let end = today.format("%Y-%m-%d").to_string();
+
+    ResolvedPeriod {
+        mode: PeriodMode::Week,
+        mode_code: "2".to_string(),
+        value: format!("{start}至{end}"),
+        payload: json!({
+            "tjzq": "week",
+            "level": "00",
+            "weekSfdate": start,
+            "weekEfdate": end,
+        }),
+    }
+}
+
+fn contains_explicit_month_period_hint(input: &str) -> bool {
+    let trimmed = input.replace("月累计", "");
+    trimmed.contains('年')
+        || trimmed.contains('月')
+        || trimmed.contains('-')
+        || trimmed.chars().any(|ch| ch.is_ascii_digit())
+}
+
+fn contains_explicit_week_period_hint(input: &str) -> bool {
+    let trimmed = input.replace("周累计", "");
+    trimmed.contains('年')
+        || trimmed.contains('第')
+        || trimmed.contains('周')
+        || trimmed.contains('-')
+        || trimmed.chars().any(|ch| ch.is_ascii_digit())
+}
+
+fn extract_year_month_dash(input: &str) -> Option<String> {
+    let chars: Vec<char> = input.chars().collect();
+    for window in chars.windows(7) {
+        let candidate: String = window.iter().collect();
+        if is_year_month_dash(&candidate) {
+            return Some(candidate);
+        }
+    }
+    None
+}
+
+fn is_year_month_dash(candidate: &str) -> bool {
+    let bytes = candidate.as_bytes();
+    bytes.len() == 7
+        && bytes[0..4].iter().all(u8::is_ascii_digit)
+        && bytes[4] == b'-'
+        && bytes[5..7].iter().all(u8::is_ascii_digit)
+        && matches!((bytes[5] - b'0') * 10 + (bytes[6] - b'0'), 1..=12)
+}
+
+fn extract_year_month_cn(input: &str) -> Option<String> {
+    let chars: Vec<char> = input.chars().collect();
+    for index in 0..chars.len() {
+        if index + 6 >= chars.len() {
+            break;
+        }
+        if !chars[index..index + 4].iter().all(|ch| ch.is_ascii_digit()) {
+            continue;
+        }
+        if chars[index + 4] != '年' {
+            continue;
+        }
+
+        let mut month_digits = String::new();
+        let mut cursor = index + 5;
+        while cursor < chars.len() && chars[cursor].is_ascii_digit() && month_digits.len() < 2 {
+            month_digits.push(chars[cursor]);
+            cursor += 1;
+        }
+        if month_digits.is_empty() || cursor >= chars.len() || chars[cursor] != '月' {
+            continue;
+        }
+
+        let month: u32 = month_digits.parse().ok()?;
+        if !(1..=12).contains(&month) {
+            continue;
+        }
+        let year: String = chars[index..index + 4].iter().collect();
+        return Some(format!("{year}-{month:02}"));
+    }
+    None
+}
+
+fn extract_year_week(input: &str) -> Option<(i32, u32)> {
|
||||||
|
let chars: Vec<char> = input.chars().collect();
|
||||||
|
for index in 0..chars.len() {
|
||||||
|
if index + 7 >= chars.len() {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
if !chars[index..index + 4].iter().all(|ch| ch.is_ascii_digit()) {
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
if chars[index + 4] != '年' || chars[index + 5] != '第' {
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
let mut week_digits = String::new();
|
||||||
|
let mut cursor = index + 6;
|
||||||
|
while cursor < chars.len() && chars[cursor].is_ascii_digit() && week_digits.len() < 2 {
|
||||||
|
week_digits.push(chars[cursor]);
|
||||||
|
cursor += 1;
|
||||||
|
}
|
||||||
|
if week_digits.is_empty() || cursor >= chars.len() || chars[cursor] != '周' {
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
let year: i32 = chars[index..index + 4].iter().collect::<String>().parse().ok()?;
|
||||||
|
let week: u32 = week_digits.parse().ok()?;
|
||||||
|
if !(1..=53).contains(&week) {
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
return Some((year, week));
|
||||||
|
}
|
||||||
|
None
|
||||||
|
}
|
||||||
|
|
||||||
|
fn week_start_date(year: i32, week: u32) -> Option<NaiveDate> {
|
||||||
|
let jan4 = NaiveDate::from_ymd_opt(year, 1, 4)?;
|
||||||
|
let iso_week1_monday = jan4 - Duration::days(jan4.weekday().num_days_from_monday() as i64);
|
||||||
|
let candidate = iso_week1_monday + Duration::weeks((week - 1) as i64);
|
||||||
|
let iso = candidate.iso_week();
|
||||||
|
(iso.year() == year && iso.week() == week).then_some(candidate)
|
||||||
|
}
|
||||||
|
|
||||||
|
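The `extract_year_month_cn` scan above walks the input character by character: four ASCII digits, `年`, one or two digits, `月`, with the month constrained to 1..=12. The same rule can be restated in a few lines of Python for quick experimentation; the regex here is my restatement of the char-window walk, not the shipped implementation:

```python
import re

def extract_year_month_cn(text):
    """Find a 'YYYY年M月' span and normalise it to 'YYYY-MM'.

    Mirrors the Rust scan: four digits, '年', one or two digits, '月',
    with the month value restricted to 1..=12. Returns None otherwise.
    """
    for match in re.finditer(r"(\d{4})年(\d{1,2})月", text):
        month = int(match.group(2))
        if 1 <= month <= 12:
            return f"{match.group(1)}-{month:02d}"
    return None

print(extract_year_month_cn("月累计 2026年3月"))  # 2026-03
print(extract_year_month_cn("2026年13月"))        # None (month out of range)
```

As in the Rust version, an out-of-range month such as `13` is rejected rather than reinterpreted, so the caller falls through to the explicit-period prompt.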
#[cfg(test)]
mod tests {
    use super::resolve_period;
    use crate::compat::tq_lineloss::contracts::PeriodMode;

    #[test]
    fn resolves_dash_month() {
        let resolved = resolve_period("月累计 2026-03").unwrap();
        assert_eq!(resolved.mode, PeriodMode::Month);
        assert_eq!(resolved.payload["fdate"], "2026-03");
    }

    #[test]
    fn resolves_week_range() {
        let resolved = resolve_period("周累计 2026年第12周").unwrap();
        assert_eq!(resolved.mode, PeriodMode::Week);
        assert_eq!(resolved.payload["weekSfdate"], "2026-03-16");
        assert_eq!(resolved.payload["weekEfdate"], "2026-03-22");
    }
}
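The week-range expectation in `resolves_week_range` (2026年第12周 → 2026-03-16..2026-03-22) follows from the ISO 8601 rule that week 1 is the week containing January 4, which is what `week_start_date` encodes via chrono. A standalone Python sketch of the same computation (for verification only; names mirror the Rust helper):

```python
from datetime import date, timedelta

def week_start_date(year, week):
    """Monday of the given ISO week, or None if the week is out of range."""
    # ISO 8601: week 1 contains January 4. Step back from Jan 4 to its
    # Monday, then jump forward (week - 1) whole weeks.
    jan4 = date(year, 1, 4)
    week1_monday = jan4 - timedelta(days=jan4.weekday())  # weekday(): Mon == 0
    candidate = week1_monday + timedelta(weeks=week - 1)
    iso = candidate.isocalendar()
    # Reject e.g. week 53 in a 52-week ISO year, as the Rust version does.
    return candidate if (iso.year, iso.week) == (year, week) else None

start = week_start_date(2026, 12)
print(start, start + timedelta(days=6))  # 2026-03-16 2026-03-22
```

The trailing round-trip check through `isocalendar()` is what lets the function return `None` for a nonexistent week instead of silently spilling into the next ISO year.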
@@ -5,15 +5,11 @@ use std::thread;
 use std::time::Duration;
 
 use regex::Regex;
-use reqwest::Url;
 use serde_json::{json, Value};
-use zeroclaw::skills::load_skills_from_directory;
 use zeroclaw::tools::Tool;
 
 use crate::browser::{BrowserBackend, PipeBrowserBackend};
 use crate::compat::artifact_open::{open_exported_xlsx, open_local_dashboard, PostExportOpen};
-use crate::compat::browser_script_skill_tool::execute_browser_script_tool;
-use crate::compat::config_adapter::resolve_scene_skills_dir_from_sgclaw_settings;
 use crate::compat::openxml_office_tool::OpenXmlOfficeTool;
 use crate::compat::runtime::CompatTaskContext;
 use crate::compat::screen_html_export_tool::ScreenHtmlExportTool;
@@ -27,7 +23,6 @@ const ZHIHU_EDITOR_DOMAIN: &str = "zhuanlan.zhihu.com";
 const ZHIHU_HOT_URL: &str = "https://www.zhihu.com/hot";
 const ZHIHU_CREATOR_URL: &str = "https://www.zhihu.com/creator";
 const ZHIHU_EDITOR_URL: &str = "https://zhuanlan.zhihu.com/write";
-const FAULT_DETAILS_SCENE_ID: &str = "fault-details-report";
 const HOTLIST_READY_POLL_ATTEMPTS: usize = 10;
 const HOTLIST_READY_POLL_INTERVAL: Duration = Duration::from_millis(500);
 const EDITOR_READY_POLL_ATTEMPTS: usize = 12;
@@ -39,7 +34,6 @@ const HOTLIST_TEXT_READY_PATTERN: &str =
     r"\d+(?:\.\d+)?\s*(?:万|亿|k|K|m|M)\s*热度";
 #[derive(Debug, Clone, PartialEq, Eq)]
 pub enum WorkflowRoute {
-    FaultDetailsReport,
     ZhihuHotlistExportXlsx,
     ZhihuHotlistScreen,
     ZhihuArticleEntry,
@@ -66,13 +60,6 @@ pub fn detect_route(
     page_url: Option<&str>,
     page_title: Option<&str>,
 ) -> Option<WorkflowRoute> {
-    if let Some(scene) = crate::runtime::match_scene_instruction(instruction) {
-        if scene.id == FAULT_DETAILS_SCENE_ID
-            && matches!(scene.dispatch_mode, crate::runtime::DispatchMode::DirectBrowser)
-        {
-            return Some(WorkflowRoute::FaultDetailsReport);
-        }
-    }
     if crate::runtime::is_zhihu_hotlist_task(instruction, page_url, page_title) {
         let normalized = instruction.to_ascii_lowercase();
         if normalized.contains("dashboard")
@@ -106,8 +93,7 @@ pub fn detect_route(
 pub fn prefers_direct_execution(route: &WorkflowRoute) -> bool {
     matches!(
         route,
-        WorkflowRoute::FaultDetailsReport
-            | WorkflowRoute::ZhihuHotlistExportXlsx
+        WorkflowRoute::ZhihuHotlistExportXlsx
            | WorkflowRoute::ZhihuHotlistScreen
            | WorkflowRoute::ZhihuArticleEntry
            | WorkflowRoute::ZhihuArticleDraft
@@ -133,8 +119,7 @@ pub fn should_fallback_after_summary(summary: &str, route: &WorkflowRoute) -> bo
     looks_like_denial
         || matches!(
             route,
-            WorkflowRoute::FaultDetailsReport
-                | WorkflowRoute::ZhihuHotlistExportXlsx
+            WorkflowRoute::ZhihuHotlistExportXlsx
                | WorkflowRoute::ZhihuHotlistScreen
                | WorkflowRoute::ZhihuArticleEntry
                | WorkflowRoute::ZhihuArticleDraft
@@ -147,22 +132,22 @@ pub fn execute_route_with_browser_backend(
     transport: &dyn crate::agent::AgentEventSink,
     browser_backend: Arc<dyn BrowserBackend>,
     workspace_root: &Path,
+    skills_dir: &Path,
     instruction: &str,
     task_context: &CompatTaskContext,
     route: WorkflowRoute,
     settings: &SgClawSettings,
 ) -> Result<String, PipeError> {
     match route {
-        WorkflowRoute::FaultDetailsReport => execute_fault_details_route(
-            browser_backend.clone(),
-            instruction,
-            workspace_root,
-            settings,
-            task_context.page_url.as_deref(),
-        ),
         WorkflowRoute::ZhihuHotlistExportXlsx | WorkflowRoute::ZhihuHotlistScreen => {
             let top_n = extract_top_n(instruction);
-            let items = collect_hotlist_items(transport, browser_backend.as_ref(), top_n, task_context)?;
+            let items = collect_hotlist_items(
+                transport,
+                browser_backend.as_ref(),
+                skills_dir,
+                top_n,
+                task_context,
+            )?;
             if items.is_empty() {
                 return Err(PipeError::Protocol(
                     "知乎热榜采集失败:未能从页面文本中解析到热榜条目".to_string(),
@@ -177,11 +162,12 @@ pub fn execute_route_with_browser_backend(
             }
         }
         WorkflowRoute::ZhihuArticleEntry => {
-            execute_zhihu_article_entry_route(transport, browser_backend.as_ref())
+            execute_zhihu_article_entry_route(transport, browser_backend.as_ref(), skills_dir)
         }
         WorkflowRoute::ZhihuArticleDraft => execute_zhihu_article_route(
             transport,
             browser_backend.as_ref(),
+            skills_dir,
             instruction,
             task_context,
             false,
@@ -191,6 +177,7 @@ pub fn execute_route_with_browser_backend(
         WorkflowRoute::ZhihuArticlePublish => execute_zhihu_article_route(
             transport,
             browser_backend.as_ref(),
+            skills_dir,
             instruction,
             task_context,
             true,
@@ -201,6 +188,7 @@ pub fn execute_route_with_browser_backend(
             execute_generated_zhihu_article_publish_route(
                 transport,
                 browser_backend.as_ref(),
+                skills_dir,
                 instruction,
                 task_context,
                 workspace_root,
@@ -214,6 +202,7 @@ pub fn execute_route<T: Transport + 'static>(
     transport: &T,
     browser_tool: &BrowserPipeTool<T>,
     workspace_root: &Path,
+    skills_dir: &Path,
     instruction: &str,
     task_context: &CompatTaskContext,
     route: WorkflowRoute,
@@ -225,6 +214,7 @@ pub fn execute_route<T: Transport + 'static>(
         transport,
         browser_backend,
         workspace_root,
+        skills_dir,
         instruction,
         task_context,
         route,
@@ -232,164 +222,16 @@ pub fn execute_route<T: Transport + 'static>(
     )
 }
 
-fn execute_fault_details_route(
-    browser_backend: Arc<dyn BrowserBackend>,
-    instruction: &str,
-    workspace_root: &Path,
-    settings: &SgClawSettings,
-    page_url: Option<&str>,
-) -> Result<String, PipeError> {
-    let scene = crate::runtime::match_scene_instruction(instruction).ok_or_else(|| {
-        PipeError::Protocol("故障明细直连路由失败:未找到场景元数据。".to_string())
-    })?;
-    if scene.id != FAULT_DETAILS_SCENE_ID {
-        return Err(PipeError::Protocol(format!(
-            "故障明细直连路由失败:场景不匹配,got {}",
-            scene.id
-        )));
-    }
-
-    let period = derive_fault_details_period(instruction).ok_or_else(|| {
-        PipeError::Protocol(
-            "故障明细直连路由失败:无法从当前指令安全推导必填参数 period,请明确提供例如“导出 2026-04 故障明细”。"
-                .to_string(),
-        )
-    })?;
-
-    let skills_dirs = resolve_scene_skills_dir_from_sgclaw_settings(workspace_root, settings);
-    let skill = skills_dirs
-        .iter()
-        .flat_map(|dir| load_skills_from_directory(dir, true))
-        .find(|skill| skill.name == scene.skill_package)
-        .ok_or_else(|| {
-            PipeError::Protocol(format!(
-                "故障明细直连路由失败:未找到技能包 {} in [{}]",
-                scene.skill_package,
-                skills_dirs.iter().map(|d| d.display().to_string()).collect::<Vec<_>>().join(", ")
-            ))
-        })?;
-    let skill_root = skill
-        .location
-        .as_deref()
-        .and_then(Path::parent)
-        .ok_or_else(|| {
-            PipeError::Protocol(format!(
-                "故障明细直连路由失败:技能包 {} 缺少有效位置元数据",
-                scene.skill_package
-            ))
-        })?;
-    let tool = skill
-        .tools
-        .iter()
-        .find(|tool| tool.name == scene.skill_tool)
-        .ok_or_else(|| {
-            PipeError::Protocol(format!(
-                "故障明细直连路由失败:技能包 {} 缺少工具 {}",
-                scene.skill_package, scene.skill_tool
-            ))
-        })?;
-    if tool.kind != "browser_script" {
-        return Err(PipeError::Protocol(format!(
-            "故障明细直连路由失败:工具 {} 必须是 browser_script,当前为 {}",
-            scene.skill_tool, tool.kind
-        )));
-    }
-
-    let expected_domain = fault_details_expected_domain(page_url, &scene.expected_domain)
-        .ok_or_else(|| {
-            PipeError::Protocol(
-                "故障明细直连路由失败:无法从当前页面上下文解析可用域名。".to_string(),
-            )
-        })?;
-
-    let runtime = tokio::runtime::Runtime::new()
-        .map_err(|err| PipeError::Protocol(format!("failed to create tokio runtime: {err}")))?;
-    let result = runtime
-        .block_on(execute_browser_script_tool(
-            tool,
-            skill_root,
-            browser_backend,
-            json!({
-                "expected_domain": expected_domain,
-                "period": period,
-            }),
-        ))
-        .map_err(|err| PipeError::Protocol(err.to_string()))?;
-    if !result.success {
-        return Err(PipeError::Protocol(
-            result
-                .error
-                .unwrap_or_else(|| "fault-details-report browser script failed".to_string()),
-        ));
-    }
-
-    Ok(result.output)
-}
-
-fn fault_details_expected_domain(page_url: Option<&str>, fallback: &str) -> Option<String> {
-    page_url
-        .and_then(host_from_url)
-        .or_else(|| host_from_url(fallback))
-}
-
-fn host_from_url(raw: &str) -> Option<String> {
-    let trimmed = raw.trim();
-    if trimmed.is_empty() {
-        return None;
-    }
-
-    if let Ok(url) = Url::parse(trimmed) {
-        return url.host_str().map(|host| host.to_ascii_lowercase());
-    }
-
-    let host = trimmed
-        .trim_start_matches("https://")
-        .trim_start_matches("http://")
-        .split(['/', '?', '#'])
-        .next()
-        .unwrap_or_default()
-        .split(':')
-        .next()
-        .unwrap_or_default()
-        .trim()
-        .to_ascii_lowercase();
-
-    (!host.is_empty()).then_some(host)
-}
-
-fn derive_fault_details_period(instruction: &str) -> Option<String> {
-    let month_re = Regex::new(r"(20\d{2})[-/年](0?[1-9]|1[0-2])").expect("valid fault details month regex");
-    let derived = month_re.captures_iter(instruction).find_map(|capture| {
-        let matched = capture.get(0)?;
-        let before_is_digit = instruction[..matched.start()]
-            .chars()
-            .next_back()
-            .is_some_and(|ch| ch.is_ascii_digit());
-        let after_is_digit = instruction[matched.end()..]
-            .chars()
-            .next()
-            .is_some_and(|ch| ch.is_ascii_digit());
-        if before_is_digit || after_is_digit {
-            return None;
-        }
-
-        let year = capture.get(1).map(|m| m.as_str()).unwrap_or_default();
-        let month = capture
-            .get(2)
-            .and_then(|m| m.as_str().parse::<u32>().ok())
-            .unwrap_or(1);
-        Some(format!("{year}-{month:02}"))
-    });
-    derived
-}
-
 fn collect_hotlist_items(
     transport: &dyn crate::agent::AgentEventSink,
     browser_tool: &dyn BrowserBackend,
+    skills_dir: &Path,
     top_n: usize,
     task_context: &CompatTaskContext,
 ) -> Result<Vec<HotlistItem>, PipeError> {
-    if let Some(items) = ensure_hotlist_page_ready(transport, browser_tool, top_n, task_context)? {
+    if let Some(items) =
+        ensure_hotlist_page_ready(transport, browser_tool, skills_dir, top_n, task_context)?
+    {
         return Ok(items);
     }
     transport.send(&AgentMessage::LogEntry {
@@ -398,7 +240,7 @@ fn collect_hotlist_items(
     })?;
     let response = browser_tool.invoke(
         Action::Eval,
-        json!({ "script": load_hotlist_extractor_script(top_n)? }),
+        json!({ "script": load_hotlist_extractor_script(skills_dir, top_n)? }),
         ZHIHU_DOMAIN,
     )?;
     if !response.success {
@@ -419,6 +261,7 @@ fn collect_hotlist_items(
 fn ensure_hotlist_page_ready(
     transport: &dyn crate::agent::AgentEventSink,
     browser_tool: &dyn BrowserBackend,
+    skills_dir: &Path,
     top_n: usize,
     task_context: &CompatTaskContext,
 ) -> Result<Option<Vec<HotlistItem>>, PipeError> {
@@ -441,7 +284,7 @@ fn ensure_hotlist_page_ready(
         // Best-effort wait for content to appear; ignore the boolean result –
         // we always follow up with the probe.
         let _ = poll_for_hotlist_readiness(browser_tool);
-        if let Some(items) = probe_hotlist_extractor(transport, browser_tool, top_n)? {
+        if let Some(items) = probe_hotlist_extractor(transport, browser_tool, skills_dir, top_n)? {
            return Ok(Some(items));
        }
    }
@@ -450,7 +293,7 @@ fn ensure_hotlist_page_ready(
     for attempt in 0..2 {
         navigate_hotlist_page(transport, browser_tool)?;
         let _ = poll_for_hotlist_readiness(browser_tool);
-        if let Some(items) = probe_hotlist_extractor(transport, browser_tool, top_n)? {
+        if let Some(items) = probe_hotlist_extractor(transport, browser_tool, skills_dir, top_n)? {
            return Ok(Some(items));
        }
        last_error = Some(format!(
@@ -477,6 +320,7 @@ fn ensure_hotlist_page_ready(
 /// reports "editor_unavailable".
 fn poll_for_editor_readiness(
     browser_tool: &dyn BrowserBackend,
+    skills_dir: &Path,
     desired_mode: &str,
 ) -> Result<Value, PipeError> {
     let args = json!({ "desired_mode": desired_mode });
@@ -485,6 +329,7 @@ fn poll_for_editor_readiness(
     for attempt in 0..EDITOR_READY_POLL_ATTEMPTS {
         match execute_browser_skill_script(
             browser_tool,
+            skills_dir,
             "zhihu-write",
             "prepare_article_editor.js",
             args.clone(),
@@ -498,9 +343,7 @@ fn poll_for_editor_readiness(
                 last_state = Some(state);
             }
             Err(PipeError::PipeClosed) => return Err(PipeError::PipeClosed),
-            Err(_) => {
-                // Script may fail while the page is still navigating; tolerate.
-            }
+            Err(_) => {}
         }
 
         if attempt + 1 < EDITOR_READY_POLL_ATTEMPTS {
@@ -508,12 +351,11 @@ fn poll_for_editor_readiness(
         }
     }
 
-    // Return the last observed state so the caller can surface the
-    // "editor_unavailable" message; or make one final attempt.
     match last_state {
         Some(state) => Ok(state),
         None => execute_browser_skill_script(
             browser_tool,
+            skills_dir,
             "zhihu-write",
             "prepare_article_editor.js",
             args,
@@ -525,6 +367,7 @@ fn poll_for_editor_readiness(
 fn probe_hotlist_extractor(
     transport: &dyn crate::agent::AgentEventSink,
     browser_tool: &dyn BrowserBackend,
+    skills_dir: &Path,
     top_n: usize,
 ) -> Result<Option<Vec<HotlistItem>>, PipeError> {
     transport.send(&AgentMessage::LogEntry {
@@ -533,7 +376,7 @@ fn probe_hotlist_extractor(
     })?;
     let response = browser_tool.invoke(
         Action::Eval,
-        json!({ "script": load_hotlist_extractor_script(top_n)? }),
+        json!({ "script": load_hotlist_extractor_script(skills_dir, top_n)? }),
         ZHIHU_DOMAIN,
     )?;
     if !response.success {
@@ -708,6 +551,7 @@ pub fn finalize_screen_export(
 fn execute_zhihu_article_route(
     transport: &dyn crate::agent::AgentEventSink,
     browser_tool: &dyn BrowserBackend,
+    skills_dir: &Path,
     instruction: &str,
     task_context: &CompatTaskContext,
     publish_mode: bool,
@@ -732,6 +576,7 @@ fn execute_zhihu_article_route(
     })?;
     let creator_state = execute_browser_skill_script(
         browser_tool,
+        skills_dir,
         "zhihu-navigate",
         "open_creator_entry.js",
         json!({ "desired_target": "article_editor" }),
@@ -755,6 +600,7 @@ fn execute_zhihu_article_route(
     })?;
     let editor_state = poll_for_editor_readiness(
         browser_tool,
+        skills_dir,
         if publish_mode { "publish" } else { "draft" },
     )?;
     if is_login_required_payload(&editor_state) {
@@ -773,10 +619,11 @@ fn execute_zhihu_article_route(
         message: "call zhihu-write.fill_article_draft".to_string(),
     })?;
     let fill_result = if browser_tool.supports_live_input() {
-        execute_zhihu_fill_via_live_input(browser_tool, &article, publish_mode)?
+        execute_zhihu_fill_via_live_input(browser_tool, skills_dir, &article, publish_mode)?
     } else {
         execute_browser_skill_script(
             browser_tool,
+            skills_dir,
             "zhihu-write",
             "fill_article_draft.js",
             json!({
@@ -814,6 +661,7 @@ fn execute_zhihu_article_route(
 fn execute_generated_zhihu_article_publish_route(
     transport: &dyn crate::agent::AgentEventSink,
     browser_tool: &dyn BrowserBackend,
+    skills_dir: &Path,
     instruction: &str,
     task_context: &CompatTaskContext,
     workspace_root: &Path,
@@ -834,6 +682,7 @@ fn execute_generated_zhihu_article_publish_route(
     execute_zhihu_article_route(
         transport,
         browser_tool,
+        skills_dir,
         instruction,
         task_context,
         true,
@@ -874,6 +723,7 @@ fn task_requests_zhihu_generated_article_publish(
 fn execute_zhihu_article_entry_route(
     transport: &dyn crate::agent::AgentEventSink,
     browser_tool: &dyn BrowserBackend,
+    skills_dir: &Path,
 ) -> Result<String, PipeError> {
     navigate_zhihu_page(transport, browser_tool, ZHIHU_CREATOR_URL)?;
     transport.send(&AgentMessage::LogEntry {
@@ -882,6 +732,7 @@ fn execute_zhihu_article_entry_route(
     })?;
     let creator_state = execute_browser_skill_script(
         browser_tool,
+        skills_dir,
         "zhihu-navigate",
         "open_creator_entry.js",
         json!({ "desired_target": "article_editor" }),
@@ -903,10 +754,7 @@ fn execute_zhihu_article_entry_route(
         level: "info".to_string(),
         message: "call zhihu-write.prepare_article_editor".to_string(),
     })?;
-    let editor_state = poll_for_editor_readiness(
-        browser_tool,
-        "draft",
-    )?;
+    let editor_state = poll_for_editor_readiness(browser_tool, skills_dir, "draft")?;
     if is_login_required_payload(&editor_state) {
         return Ok(build_login_block_message(payload_current_url(
             &editor_state,
@@ -921,8 +769,9 @@ fn execute_zhihu_article_entry_route(
     )))
 }
 
-fn load_hotlist_extractor_script(top_n: usize) -> Result<String, PipeError> {
+fn load_hotlist_extractor_script(skills_dir: &Path, top_n: usize) -> Result<String, PipeError> {
     load_browser_skill_script(
+        skills_dir,
         "zhihu-hotlist",
         "extract_hotlist.js",
         json!({ "top_n": top_n.to_string() }),
@@ -1007,12 +856,14 @@ fn navigate_zhihu_page(
 
 fn execute_browser_skill_script(
     browser_tool: &dyn BrowserBackend,
+    skills_dir: &Path,
     skill_name: &str,
     script_name: &str,
     args: Value,
     expected_domain: &str,
 ) -> Result<Value, PipeError> {
-    let wrapped_script = load_browser_skill_script(skill_name, script_name, args)?;
+    let wrapped_script =
+        load_browser_skill_script(skills_dir, skill_name, script_name, args)?;
     let response = browser_tool.invoke(
         Action::Eval,
         json!({ "script": wrapped_script }),
@@ -1039,6 +890,7 @@ fn live_input_probe_script(selector_candidates: &[&str]) -> String {
 
 fn execute_zhihu_fill_via_live_input(
     browser_tool: &dyn BrowserBackend,
+    skills_dir: &Path,
     article: &ArticleDraft,
     publish_mode: bool,
 ) -> Result<Value, PipeError> {
@@ -1176,6 +1028,7 @@ return JSON.stringify({{status:'ok',chunks:chunks.length}});
     // enable the button after the content fill updates the editor state.
     let fill_result = execute_browser_skill_script(
         browser_tool,
+        skills_dir,
         "zhihu-write",
         "fill_article_draft.js",
         json!({
@@ -1275,11 +1128,15 @@ mod tests {
             "test-key".to_string(),
             "http://127.0.0.1:9".to_string(),
             "deepseek-chat".to_string(),
-            Vec::new(),
+            None,
        )
        .unwrap()
    }
 
+    fn test_skills_dir() -> &'static Path {
+        Path::new("D:/data/ideaSpace/rust/sgClaw/claw/claw/skills")
+    }
+
     struct MockWorkflowTransport {
         sent: Mutex<Vec<AgentMessage>>,
         responses: Mutex<VecDeque<BrowserMessage>>,
@@ -1439,6 +1296,7 @@ mod tests {
             transport.as_ref(),
             backend.clone(),
             Path::new("."),
+            test_skills_dir(),
             "打开知乎写文章页面",
             &CompatTaskContext::default(),
|
||||||
WorkflowRoute::ZhihuArticleEntry,
|
WorkflowRoute::ZhihuArticleEntry,
|
||||||
@@ -1459,6 +1317,7 @@ mod tests {
|
|||||||
Action::Eval,
|
Action::Eval,
|
||||||
json!({
|
json!({
|
||||||
"script": load_browser_skill_script(
|
"script": load_browser_skill_script(
|
||||||
|
test_skills_dir(),
|
||||||
"zhihu-navigate",
|
"zhihu-navigate",
|
||||||
"open_creator_entry.js",
|
"open_creator_entry.js",
|
||||||
json!({ "desired_target": "article_editor" })
|
json!({ "desired_target": "article_editor" })
|
||||||
@@ -1471,6 +1330,7 @@ mod tests {
|
|||||||
Action::Eval,
|
Action::Eval,
|
||||||
json!({
|
json!({
|
||||||
"script": load_browser_skill_script(
|
"script": load_browser_skill_script(
|
||||||
|
test_skills_dir(),
|
||||||
"zhihu-write",
|
"zhihu-write",
|
||||||
"prepare_article_editor.js",
|
"prepare_article_editor.js",
|
||||||
json!({ "desired_mode": "draft" })
|
json!({ "desired_mode": "draft" })
|
||||||
@@ -1543,6 +1403,7 @@ mod tests {
|
|||||||
transport.as_ref(),
|
transport.as_ref(),
|
||||||
backend.clone(),
|
backend.clone(),
|
||||||
Path::new("."),
|
Path::new("."),
|
||||||
|
test_skills_dir(),
|
||||||
"打开知乎写文章页面",
|
"打开知乎写文章页面",
|
||||||
&CompatTaskContext::default(),
|
&CompatTaskContext::default(),
|
||||||
WorkflowRoute::ZhihuArticleEntry,
|
WorkflowRoute::ZhihuArticleEntry,
|
||||||
@@ -1563,6 +1424,7 @@ mod tests {
|
|||||||
Action::Eval,
|
Action::Eval,
|
||||||
json!({
|
json!({
|
||||||
"script": load_browser_skill_script(
|
"script": load_browser_skill_script(
|
||||||
|
test_skills_dir(),
|
||||||
"zhihu-navigate",
|
"zhihu-navigate",
|
||||||
"open_creator_entry.js",
|
"open_creator_entry.js",
|
||||||
json!({ "desired_target": "article_editor" })
|
json!({ "desired_target": "article_editor" })
|
||||||
@@ -1580,6 +1442,7 @@ mod tests {
|
|||||||
Action::Eval,
|
Action::Eval,
|
||||||
json!({
|
json!({
|
||||||
"script": load_browser_skill_script(
|
"script": load_browser_skill_script(
|
||||||
|
test_skills_dir(),
|
||||||
"zhihu-write",
|
"zhihu-write",
|
||||||
"prepare_article_editor.js",
|
"prepare_article_editor.js",
|
||||||
json!({ "desired_mode": "draft" })
|
json!({ "desired_mode": "draft" })
|
||||||
@@ -1668,6 +1531,7 @@ mod tests {
|
|||||||
let summary = execute_zhihu_article_route(
|
let summary = execute_zhihu_article_route(
|
||||||
transport.as_ref(),
|
transport.as_ref(),
|
||||||
backend.as_ref(),
|
backend.as_ref(),
|
||||||
|
test_skills_dir(),
|
||||||
"标题:测试标题\n正文:第一段内容",
|
"标题:测试标题\n正文:第一段内容",
|
||||||
&CompatTaskContext::default(),
|
&CompatTaskContext::default(),
|
||||||
false,
|
false,
|
||||||
@@ -1798,6 +1662,7 @@ mod tests {
|
|||||||
let summary = execute_zhihu_article_route(
|
let summary = execute_zhihu_article_route(
|
||||||
transport.as_ref(),
|
transport.as_ref(),
|
||||||
backend.as_ref(),
|
backend.as_ref(),
|
||||||
|
test_skills_dir(),
|
||||||
"标题:测试标题\n正文:第一段内容",
|
"标题:测试标题\n正文:第一段内容",
|
||||||
&CompatTaskContext::default(),
|
&CompatTaskContext::default(),
|
||||||
false,
|
false,
|
||||||
@@ -1828,6 +1693,7 @@ mod tests {
|
|||||||
assert_eq!(invocations[8].0, Action::Eval);
|
assert_eq!(invocations[8].0, Action::Eval);
|
||||||
assert_eq!(invocations[8].1["script"], json!(
|
assert_eq!(invocations[8].1["script"], json!(
|
||||||
load_browser_skill_script(
|
load_browser_skill_script(
|
||||||
|
test_skills_dir(),
|
||||||
"zhihu-write",
|
"zhihu-write",
|
||||||
"fill_article_draft.js",
|
"fill_article_draft.js",
|
||||||
json!({
|
json!({
|
||||||
@@ -1926,6 +1792,7 @@ mod tests {
|
|||||||
let _ = execute_zhihu_article_route(
|
let _ = execute_zhihu_article_route(
|
||||||
transport.as_ref(),
|
transport.as_ref(),
|
||||||
backend.as_ref(),
|
backend.as_ref(),
|
||||||
|
test_skills_dir(),
|
||||||
"标题:测试标题\n正文:第一段内容\n第二段内容",
|
"标题:测试标题\n正文:第一段内容\n第二段内容",
|
||||||
&CompatTaskContext::default(),
|
&CompatTaskContext::default(),
|
||||||
false,
|
false,
|
||||||
@@ -1944,6 +1811,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn zhihu_fill_script_checks_live_input_before_dom_fill_fallback() {
|
fn zhihu_fill_script_checks_live_input_before_dom_fill_fallback() {
|
||||||
let script = load_browser_skill_script(
|
let script = load_browser_skill_script(
|
||||||
|
test_skills_dir(),
|
||||||
"zhihu-write",
|
"zhihu-write",
|
||||||
"fill_article_draft.js",
|
"fill_article_draft.js",
|
||||||
json!({
|
json!({
|
||||||
@@ -1978,6 +1846,7 @@ mod tests {
|
|||||||
#[test]
|
#[test]
|
||||||
fn zhihu_fill_script_live_input_uses_editor_content_instead_of_whole_page_text() {
|
fn zhihu_fill_script_live_input_uses_editor_content_instead_of_whole_page_text() {
|
||||||
let script = load_browser_skill_script(
|
let script = load_browser_skill_script(
|
||||||
|
test_skills_dir(),
|
||||||
"zhihu-write",
|
"zhihu-write",
|
||||||
"fill_article_draft.js",
|
"fill_article_draft.js",
|
||||||
json!({
|
json!({
|
||||||
@@ -2070,6 +1939,7 @@ mod tests {
|
|||||||
transport.as_ref(),
|
transport.as_ref(),
|
||||||
backend.clone(),
|
backend.clone(),
|
||||||
Path::new("."),
|
Path::new("."),
|
||||||
|
test_skills_dir(),
|
||||||
"打开知乎写文章页面",
|
"打开知乎写文章页面",
|
||||||
&CompatTaskContext::default(),
|
&CompatTaskContext::default(),
|
||||||
WorkflowRoute::ZhihuArticleEntry,
|
WorkflowRoute::ZhihuArticleEntry,
|
||||||
@@ -2090,6 +1960,7 @@ mod tests {
|
|||||||
Action::Eval,
|
Action::Eval,
|
||||||
json!({
|
json!({
|
||||||
"script": load_browser_skill_script(
|
"script": load_browser_skill_script(
|
||||||
|
test_skills_dir(),
|
||||||
"zhihu-navigate",
|
"zhihu-navigate",
|
||||||
"open_creator_entry.js",
|
"open_creator_entry.js",
|
||||||
json!({ "desired_target": "article_editor" })
|
json!({ "desired_target": "article_editor" })
|
||||||
@@ -2107,6 +1978,7 @@ mod tests {
|
|||||||
Action::Eval,
|
Action::Eval,
|
||||||
json!({
|
json!({
|
||||||
"script": load_browser_skill_script(
|
"script": load_browser_skill_script(
|
||||||
|
test_skills_dir(),
|
||||||
"zhihu-write",
|
"zhihu-write",
|
||||||
"prepare_article_editor.js",
|
"prepare_article_editor.js",
|
||||||
json!({ "desired_mode": "draft" })
|
json!({ "desired_mode": "draft" })
|
||||||
@@ -2148,7 +2020,13 @@ mod tests {
|
|||||||
};
|
};
|
||||||
|
|
||||||
let browser_backend = PipeBrowserBackend::from_inner(browser_tool);
|
let browser_backend = PipeBrowserBackend::from_inner(browser_tool);
|
||||||
let items = collect_hotlist_items(transport.as_ref(), &browser_backend, 10, &task_context)
|
let items = collect_hotlist_items(
|
||||||
|
transport.as_ref(),
|
||||||
|
&browser_backend,
|
||||||
|
test_skills_dir(),
|
||||||
|
10,
|
||||||
|
&task_context,
|
||||||
|
)
|
||||||
.expect("hotlist collection should succeed");
|
.expect("hotlist collection should succeed");
|
||||||
|
|
||||||
assert_eq!(items.len(), 2);
|
assert_eq!(items.len(), 2);
|
||||||
@@ -2202,7 +2080,13 @@ mod tests {
|
|||||||
};
|
};
|
||||||
|
|
||||||
let browser_backend = PipeBrowserBackend::from_inner(browser_tool);
|
let browser_backend = PipeBrowserBackend::from_inner(browser_tool);
|
||||||
let items = collect_hotlist_items(transport.as_ref(), &browser_backend, 10, &task_context)
|
let items = collect_hotlist_items(
|
||||||
|
transport.as_ref(),
|
||||||
|
&browser_backend,
|
||||||
|
test_skills_dir(),
|
||||||
|
10,
|
||||||
|
&task_context,
|
||||||
|
)
|
||||||
.expect("hotlist collection should succeed after readiness polling");
|
.expect("hotlist collection should succeed after readiness polling");
|
||||||
|
|
||||||
assert_eq!(items.len(), 1);
|
assert_eq!(items.len(), 1);
|
||||||
@@ -2271,7 +2155,13 @@ mod tests {
|
|||||||
};
|
};
|
||||||
|
|
||||||
let browser_backend = PipeBrowserBackend::from_inner(browser_tool);
|
let browser_backend = PipeBrowserBackend::from_inner(browser_tool);
|
||||||
let items = collect_hotlist_items(transport.as_ref(), &browser_backend, 10, &task_context)
|
let items = collect_hotlist_items(
|
||||||
|
transport.as_ref(),
|
||||||
|
&browser_backend,
|
||||||
|
test_skills_dir(),
|
||||||
|
10,
|
||||||
|
&task_context,
|
||||||
|
)
|
||||||
.expect("hotlist collection should succeed after one navigation retry");
|
.expect("hotlist collection should succeed after one navigation retry");
|
||||||
|
|
||||||
assert_eq!(items.len(), 1);
|
assert_eq!(items.len(), 1);
|
||||||
@@ -2338,7 +2228,13 @@ mod tests {
|
|||||||
};
|
};
|
||||||
|
|
||||||
let browser_backend = PipeBrowserBackend::from_inner(browser_tool);
|
let browser_backend = PipeBrowserBackend::from_inner(browser_tool);
|
||||||
let items = collect_hotlist_items(transport.as_ref(), &browser_backend, 10, &task_context)
|
let items = collect_hotlist_items(
|
||||||
|
transport.as_ref(),
|
||||||
|
&browser_backend,
|
||||||
|
test_skills_dir(),
|
||||||
|
10,
|
||||||
|
&task_context,
|
||||||
|
)
|
||||||
.expect("hotlist collection should succeed via extractor probe");
|
.expect("hotlist collection should succeed via extractor probe");
|
||||||
|
|
||||||
assert_eq!(items.len(), 1);
|
assert_eq!(items.len(), 1);
|
||||||
@@ -2357,15 +2253,12 @@ mod tests {
|
|||||||
}
|
}
|
||||||
|
|
||||||
fn load_browser_skill_script(
|
fn load_browser_skill_script(
|
||||||
|
skills_dir: &Path,
|
||||||
skill_name: &str,
|
skill_name: &str,
|
||||||
script_name: &str,
|
script_name: &str,
|
||||||
args: Value,
|
args: Value,
|
||||||
) -> Result<String, PipeError> {
|
) -> Result<String, PipeError> {
|
||||||
let script_path = Path::new(env!("CARGO_MANIFEST_DIR"))
|
let script_path = skills_dir
|
||||||
.parent()
|
|
||||||
.unwrap_or_else(|| Path::new(env!("CARGO_MANIFEST_DIR")))
|
|
||||||
.join("skill_lib")
|
|
||||||
.join("skills")
|
|
||||||
.join(skill_name)
|
.join(skill_name)
|
||||||
.join("scripts")
|
.join("scripts")
|
||||||
.join(script_name);
|
.join(script_name);
|
||||||
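After this change, the script path is built purely from the injected `skills_dir` instead of walking up from `CARGO_MANIFEST_DIR` into `skill_lib/skills`. A minimal sketch of the new resolution rule (`skill_script_path` is a hypothetical stand-in for the project function, not its actual code):

```rust
use std::path::{Path, PathBuf};

// Hypothetical stand-in for the patched path logic:
// <skills_dir>/<skill_name>/scripts/<script_name>, no manifest-dir fallback.
fn skill_script_path(skills_dir: &Path, skill_name: &str, script_name: &str) -> PathBuf {
    skills_dir
        .join(skill_name)
        .join("scripts")
        .join(script_name)
}

fn main() {
    let path = skill_script_path(Path::new("skills"), "zhihu-write", "fill_article_draft.js");
    println!("{}", path.display());
}
```

Because the base directory is now a runtime value, the tests above pin it with `test_skills_dir()` rather than relying on the build-time crate layout.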
@@ -1,5 +1,7 @@
 mod settings;
 
+pub use crate::runtime::RuntimeProfile;
+
 pub use settings::{
     BrowserBackend, ConfigError, DeepSeekSettings, OfficeBackend, PlannerMode, ProviderSettings,
     SgClawSettings, SkillsPromptMode,
@@ -1,7 +1,6 @@
 use std::path::{Path, PathBuf};
 
-use serde::Deserialize;
-use serde::de;
+use serde::{Deserialize, Serialize};
 use thiserror::Error;
 
 use crate::runtime::RuntimeProfile;
@@ -11,6 +10,10 @@ pub use zeroclaw::config::SkillsPromptInjectionMode as SkillsPromptMode;
 const DEFAULT_DEEPSEEK_BASE_URL: &str = "https://api.deepseek.com";
 const DEFAULT_DEEPSEEK_MODEL: &str = "deepseek-chat";
 const DEFAULT_PROVIDER_ID: &str = "deepseek";
+const DIRECT_SUBMIT_PROVIDER_ID: &str = "direct-submit";
+const DIRECT_SUBMIT_BASE_URL: &str = "http://127.0.0.1/direct-submit";
+const DIRECT_SUBMIT_MODEL: &str = "direct-submit-placeholder-model";
+const DIRECT_SUBMIT_API_KEY: &str = "direct-submit-placeholder-key";
 
 #[derive(Debug, Clone, Copy, PartialEq, Eq)]
 pub enum PlannerMode {
@@ -67,6 +70,19 @@ impl ProviderSettings {
         })
     }
 
+    fn direct_submit_placeholder() -> Self {
+        Self {
+            id: DIRECT_SUBMIT_PROVIDER_ID.to_string(),
+            provider: DIRECT_SUBMIT_PROVIDER_ID.to_string(),
+            api_key: DIRECT_SUBMIT_API_KEY.to_string(),
+            base_url: Some(DIRECT_SUBMIT_BASE_URL.to_string()),
+            model: DIRECT_SUBMIT_MODEL.to_string(),
+            api_path: None,
+            wire_api: None,
+            requires_openai_auth: false,
+        }
+    }
+
     fn from_raw(raw: RawProviderSettings) -> Result<Self, ConfigError> {
         let id = raw.id.trim().to_string();
         if id.is_empty() {
@@ -106,7 +122,7 @@ pub struct DeepSeekSettings {
     pub api_key: String,
     pub base_url: String,
     pub model: String,
-    pub skills_dir: Vec<PathBuf>,
+    pub skills_dir: Option<PathBuf>,
 }
 
 impl DeepSeekSettings {
@@ -125,7 +141,8 @@ pub struct SgClawSettings {
     pub provider_api_key: String,
     pub provider_base_url: String,
     pub provider_model: String,
-    pub skills_dir: Vec<PathBuf>,
+    pub skills_dir: Option<PathBuf>,
+    pub direct_submit_skill: Option<String>,
     pub skills_prompt_mode: SkillsPromptMode,
     pub runtime_profile: RuntimeProfile,
     pub planner_mode: PlannerMode,
@@ -156,7 +173,7 @@ impl SgClawSettings {
         api_key: String,
         base_url: String,
         model: String,
-        skills_dir: Vec<PathBuf>,
+        skills_dir: Option<PathBuf>,
     ) -> Result<Self, ConfigError> {
         Self::new(
             api_key,
@@ -166,6 +183,7 @@ impl SgClawSettings {
             None,
             None,
             None,
+            None,
             Vec::new(),
             None,
             None,
@@ -182,6 +200,65 @@ impl SgClawSettings {
             .expect("active_provider should always resolve to a configured provider")
     }
 
+    fn to_serializable(&self) -> SerializableRawSgClawSettings {
+        SerializableRawSgClawSettings {
+            api_key: self.provider_api_key.clone(),
+            base_url: self.provider_base_url.clone(),
+            model: self.provider_model.clone(),
+            skills_dir: self.skills_dir.as_ref().map(|p| p.to_string_lossy().into_owned()),
+            direct_submit_skill: self.direct_submit_skill.clone(),
+            skills_prompt_mode: Some(match self.skills_prompt_mode {
+                SkillsPromptMode::Full => "full".to_string(),
+                SkillsPromptMode::Compact => "compact".to_string(),
+            }),
+            runtime_profile: Some(match self.runtime_profile {
+                RuntimeProfile::BrowserAttached => "browser-attached".to_string(),
+                RuntimeProfile::BrowserHeavy => "browser-heavy".to_string(),
+                RuntimeProfile::GeneralAssistant => "general-assistant".to_string(),
+            }),
+            planner_mode: Some(match self.planner_mode {
+                PlannerMode::ZeroclawPlanFirst => "zeroclaw-plan-first".to_string(),
+                PlannerMode::LegacyDeterministic => "legacy-deterministic".to_string(),
+            }),
+            active_provider: Some(self.active_provider.clone()),
+            browser_backend: Some(match self.browser_backend {
+                BrowserBackend::SuperRpa => "super-rpa".to_string(),
+                BrowserBackend::AgentBrowser => "agent-browser".to_string(),
+                BrowserBackend::RustNative => "rust-native".to_string(),
+                BrowserBackend::ComputerUse => "computer-use".to_string(),
+                BrowserBackend::Auto => "auto".to_string(),
+            }),
+            office_backend: Some(match self.office_backend {
+                OfficeBackend::OpenXml => "openxml".to_string(),
+                OfficeBackend::Disabled => "disabled".to_string(),
+            }),
+            browser_ws_url: self.browser_ws_url.clone(),
+            service_ws_listen_addr: self.service_ws_listen_addr.clone(),
+            providers: self
+                .providers
+                .iter()
+                .map(|p| SerializableProviderSettings {
+                    id: p.id.clone(),
+                    provider: Some(p.provider.clone()),
+                    api_key: p.api_key.clone(),
+                    base_url: p.base_url.clone(),
+                    model: p.model.clone(),
+                    api_path: p.api_path.clone(),
+                    wire_api: p.wire_api.clone(),
+                    requires_openai_auth: p.requires_openai_auth,
+                })
+                .collect(),
+        }
+    }
+
+    pub fn save_to_path(&self, path: &Path) -> Result<(), ConfigError> {
+        let serializable = self.to_serializable();
+        let json = serde_json::to_string_pretty(&serializable)
+            .map_err(|err| ConfigError::ConfigParse(path.to_path_buf(), err.to_string()))?;
+        std::fs::write(path, json)
+            .map_err(|err| ConfigError::ConfigRead(path.to_path_buf(), err.to_string()))
+    }
+
     fn maybe_from_env() -> Result<Option<Self>, ConfigError> {
         let api_key = match std::env::var("DEEPSEEK_API_KEY") {
             Ok(value) => value,
@@ -199,7 +276,8 @@ impl SgClawSettings {
             api_key,
             base_url,
             model,
-            Vec::new(),
+            None,
+            None,
             None,
             None,
             None,
@@ -284,7 +362,8 @@ impl SgClawSettings {
             config.api_key,
             config.base_url,
             config.model,
-            resolve_configured_skills_dirs(config.skills_dir, config_dir),
+            resolve_configured_skills_dir(config.skills_dir, config_dir),
+            config.direct_submit_skill,
             skills_prompt_mode,
             runtime_profile,
             planner_mode,
@@ -302,7 +381,8 @@ impl SgClawSettings {
         api_key: String,
         base_url: String,
         model: String,
-        skills_dir: Vec<PathBuf>,
+        skills_dir: Option<PathBuf>,
+        direct_submit_skill: Option<String>,
         skills_prompt_mode: Option<SkillsPromptMode>,
         runtime_profile: Option<RuntimeProfile>,
         planner_mode: Option<PlannerMode>,
@@ -313,10 +393,15 @@ impl SgClawSettings {
         browser_ws_url: Option<String>,
         service_ws_listen_addr: Option<String>,
     ) -> Result<Self, ConfigError> {
+        let direct_submit_skill = normalize_direct_submit_skill(direct_submit_skill)?;
         let providers = if providers.is_empty() {
-            vec![ProviderSettings::from_legacy_deepseek(
-                api_key, base_url, model,
-            )?]
+            if direct_submit_skill.is_some() {
+                vec![ProviderSettings::direct_submit_placeholder()]
+            } else {
+                vec![ProviderSettings::from_legacy_deepseek(
+                    api_key, base_url, model,
+                )?]
+            }
         } else {
             providers
         };
@@ -340,6 +425,7 @@ impl SgClawSettings {
                 .unwrap_or_default(),
             provider_model: active_provider_settings.model.clone(),
             skills_dir,
+            direct_submit_skill,
             skills_prompt_mode: skills_prompt_mode.unwrap_or(SkillsPromptMode::Compact),
             runtime_profile: runtime_profile.unwrap_or(RuntimeProfile::BrowserAttached),
             planner_mode: planner_mode.unwrap_or(PlannerMode::ZeroclawPlanFirst),
@@ -433,18 +519,11 @@ fn parse_office_backend(raw: &str) -> Result<OfficeBackend, String> {
     }
 }
 
-fn resolve_configured_skills_dirs(raw: Vec<String>, config_dir: &Path) -> Vec<PathBuf> {
-    raw.into_iter()
-        .filter(|s| !s.trim().is_empty())
-        .map(|s| {
-            let path = PathBuf::from(s.trim());
-            if path.is_absolute() {
-                path
-            } else {
-                config_dir.join(path)
-            }
-        })
-        .collect()
+fn resolve_configured_skills_dir(raw: Option<String>, config_dir: &Path) -> Option<PathBuf> {
+    raw.map(|value| value.trim().to_string())
+        .filter(|value| !value.is_empty())
+        .map(PathBuf::from)
+        .map(|path| if path.is_absolute() { path } else { config_dir.join(path) })
 }
 
 fn normalize_required_value(field: &'static str, raw: String) -> Result<String, ConfigError> {
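The rewritten resolver collapses the old `Vec` loop into an `Option` combinator chain. A self-contained sketch of the same rule (an illustrative copy, not the project item itself): blank or missing input becomes `None`, and a relative path is anchored at the config directory.

```rust
use std::path::{Path, PathBuf};

// Illustrative copy of the Option-based resolution: trim, drop empty values,
// then anchor relative paths at the config directory.
fn resolve_skills_dir(raw: Option<String>, config_dir: &Path) -> Option<PathBuf> {
    raw.map(|value| value.trim().to_string())
        .filter(|value| !value.is_empty())
        .map(PathBuf::from)
        .map(|path| if path.is_absolute() { path } else { config_dir.join(path) })
}

fn main() {
    assert_eq!(resolve_skills_dir(None, Path::new("/etc/sgclaw")), None);
    assert_eq!(resolve_skills_dir(Some("   ".into()), Path::new("/etc/sgclaw")), None);
    assert_eq!(
        resolve_skills_dir(Some("skills".into()), Path::new("/etc/sgclaw")),
        Some(PathBuf::from("/etc/sgclaw/skills"))
    );
}
```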
@@ -460,6 +539,29 @@ fn normalize_optional_value(raw: Option<String>) -> Option<String> {
         .filter(|value| !value.is_empty())
 }
 
+fn normalize_direct_submit_skill(raw: Option<String>) -> Result<Option<String>, ConfigError> {
+    let value = normalize_optional_value(raw);
+    let Some(value) = value.as_deref() else {
+        return Ok(None);
+    };
+
+    let Some((skill_name, tool_name)) = value.split_once('.') else {
+        return Err(ConfigError::InvalidValue(
+            "directSubmitSkill",
+            format!("must use skill.tool format, got {value}"),
+        ));
+    };
+
+    if skill_name.trim().is_empty() || tool_name.trim().is_empty() {
+        return Err(ConfigError::InvalidValue(
+            "directSubmitSkill",
+            format!("must use skill.tool format, got {value}"),
+        ));
+    }
+
+    Ok(Some(value.to_string()))
+}
+
 fn normalize_base_url(raw: String) -> String {
     let trimmed = raw.trim();
     if trimmed.is_empty() {
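`normalize_direct_submit_skill` accepts only a non-empty `skill.tool` pair. The check can be sketched standalone (a hypothetical helper that returns the split halves instead of the original string):

```rust
// Hypothetical helper mirroring the `skill.tool` validation: split on the
// first '.', then require both halves to be non-blank.
fn validate_direct_submit(value: &str) -> Result<(&str, &str), String> {
    match value.split_once('.') {
        Some((skill, tool)) if !skill.trim().is_empty() && !tool.trim().is_empty() => {
            Ok((skill, tool))
        }
        _ => Err(format!("must use skill.tool format, got {value}")),
    }
}

fn main() {
    assert_eq!(
        validate_direct_submit("my-skill.my_tool"),
        Ok(("my-skill", "my_tool"))
    );
    assert!(validate_direct_submit("no-dot").is_err());
    assert!(validate_direct_submit(".tool").is_err());
    assert!(validate_direct_submit("skill.").is_err());
}
```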
@@ -486,47 +588,52 @@ fn normalize_enum_token(raw: &str) -> String {
         .to_ascii_lowercase()
 }
 
-fn deserialize_skills_dirs<'de, D>(deserializer: D) -> Result<Vec<String>, D::Error>
-where
-    D: de::Deserializer<'de>,
-{
-    struct StringOrVec;
-
-    impl<'de> de::Visitor<'de> for StringOrVec {
-        type Value = Vec<String>;
-
-        fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
-            formatter.write_str("a string or array of strings")
-        }
-
-        fn visit_str<E: de::Error>(self, value: &str) -> Result<Vec<String>, E> {
-            if value.trim().is_empty() {
-                Ok(Vec::new())
-            } else {
-                Ok(vec![value.to_string()])
-            }
-        }
-
-        fn visit_seq<A: de::SeqAccess<'de>>(self, mut seq: A) -> Result<Vec<String>, A::Error> {
-            let mut dirs = Vec::new();
-            while let Some(value) = seq.next_element::<String>()? {
-                if !value.trim().is_empty() {
-                    dirs.push(value);
-                }
-            }
-            Ok(dirs)
-        }
-
-        fn visit_none<E: de::Error>(self) -> Result<Vec<String>, E> {
-            Ok(Vec::new())
-        }
-
-        fn visit_unit<E: de::Error>(self) -> Result<Vec<String>, E> {
-            Ok(Vec::new())
-        }
-    }
-
-    deserializer.deserialize_any(StringOrVec)
+#[derive(Debug, Serialize)]
+struct SerializableRawSgClawSettings {
+    #[serde(rename = "apiKey")]
+    api_key: String,
+    #[serde(rename = "baseUrl")]
+    base_url: String,
+    model: String,
+    #[serde(rename = "skillsDir", skip_serializing_if = "Option::is_none")]
+    skills_dir: Option<String>,
+    #[serde(rename = "directSubmitSkill", skip_serializing_if = "Option::is_none")]
+    direct_submit_skill: Option<String>,
+    #[serde(rename = "skillsPromptMode", skip_serializing_if = "Option::is_none")]
+    skills_prompt_mode: Option<String>,
+    #[serde(rename = "runtimeProfile", skip_serializing_if = "Option::is_none")]
+    runtime_profile: Option<String>,
+    #[serde(rename = "plannerMode", skip_serializing_if = "Option::is_none")]
+    planner_mode: Option<String>,
+    #[serde(rename = "activeProvider", skip_serializing_if = "Option::is_none")]
+    active_provider: Option<String>,
+    #[serde(rename = "browserBackend", skip_serializing_if = "Option::is_none")]
+    browser_backend: Option<String>,
+    #[serde(rename = "officeBackend", skip_serializing_if = "Option::is_none")]
+    office_backend: Option<String>,
+    #[serde(rename = "browserWsUrl", skip_serializing_if = "Option::is_none")]
+    browser_ws_url: Option<String>,
+    #[serde(rename = "serviceWsListenAddr", skip_serializing_if = "Option::is_none")]
+    service_ws_listen_addr: Option<String>,
+    #[serde(default)]
+    providers: Vec<SerializableProviderSettings>,
+}
+
+#[derive(Debug, Serialize)]
+struct SerializableProviderSettings {
+    id: String,
+    provider: Option<String>,
+    #[serde(rename = "apiKey")]
+    api_key: String,
+    #[serde(rename = "baseUrl", skip_serializing_if = "Option::is_none")]
+    base_url: Option<String>,
+    model: String,
+    #[serde(rename = "apiPath", skip_serializing_if = "Option::is_none")]
+    api_path: Option<String>,
+    #[serde(rename = "wireApi", skip_serializing_if = "Option::is_none")]
+    wire_api: Option<String>,
+    #[serde(rename = "requiresOpenaiAuth")]
+    requires_openai_auth: bool,
 }
 
 #[derive(Debug, Deserialize)]
@@ -537,8 +644,10 @@ struct RawSgClawSettings {
     base_url: String,
     #[serde(default)]
     model: String,
-    #[serde(rename = "skillsDir", alias = "skills_dir", default, deserialize_with = "deserialize_skills_dirs")]
-    skills_dir: Vec<String>,
+    #[serde(rename = "skillsDir", alias = "skills_dir", default)]
+    skills_dir: Option<String>,
+    #[serde(rename = "directSubmitSkill", alias = "direct_submit_skill", default)]
+    direct_submit_skill: Option<String>,
     #[serde(rename = "skillsPromptMode", alias = "skills_prompt_mode", default)]
     skills_prompt_mode: Option<String>,
     #[serde(rename = "runtimeProfile", alias = "runtime_profile", default)]
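Taken together, the `#[serde(rename = ...)]` attributes above imply a camelCase on-disk layout for the saved config. As an illustration only (every value below is an invented placeholder, and the exact file written by `save_to_path` depends on which optional fields are set), a saved `sgclaw_config.json` might look like:

```json
{
  "apiKey": "sk-placeholder",
  "baseUrl": "https://api.deepseek.com",
  "model": "deepseek-chat",
  "skillsDir": "skills",
  "directSubmitSkill": "my-skill.my_tool",
  "skillsPromptMode": "compact",
  "runtimeProfile": "browser-attached",
  "plannerMode": "zeroclaw-plan-first",
  "providers": [
    {
      "id": "deepseek",
      "provider": "deepseek",
      "apiKey": "sk-placeholder",
      "baseUrl": "https://api.deepseek.com",
      "model": "deepseek-chat",
      "requiresOpenaiAuth": false
    }
  ]
}
```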
@@ -12,9 +12,8 @@ use zeroclaw::tools::{self, ReadSkillTool};
 use zeroclaw::SecurityPolicy;
 
 use crate::compat::memory_adapter::build_memory;
-use crate::compat::config_adapter::resolve_scene_skills_dir_path;
 use crate::pipe::PipeError;
-use crate::runtime::{match_scene_instruction, DispatchMode, RuntimeProfile, ToolPolicy};
+use crate::runtime::{RuntimeProfile, ToolPolicy};
 
 const BROWSER_ACTION_TOOL_NAME: &str = "browser_action";
 const SUPERRPA_BROWSER_TOOL_NAME: &str = "superrpa_browser";
@@ -26,7 +25,6 @@ const ZHIHU_HOTLIST_EXECUTION_PROMPT: &str = "Zhihu hotlist execution contract:\
 const OFFICE_EXPORT_COMPLETION_PROMPT: &str = "Export completion contract:\n- This task requires a real Excel export.\n- After the Zhihu rows are available, you must call openxml_office before finishing.\n- Never fabricate, simulate, or invent substitute hotlist data when a live collection/export task fails.\n- If live collection fails, report the failure concisely instead of producing fake rows.\n- Do not stop after describing how you will parse or export the data.\n- Do not repeat the same sentence or section in your final answer.\n- Your final answer must include the generated local .xlsx path.";
 const SCREEN_EXPORT_COMPLETION_PROMPT: &str = "Presentation completion contract:\n- This task requires a real dashboard artifact.\n- After the Zhihu rows are available, you must call screen_html_export before finishing.\n- Do not stop after describing how you will render or present the data.\n- Do not repeat the same sentence or section in your final answer.\n- Your final answer must include the local .html path and the presentation object.";
 const ZHIHU_WRITE_PUBLISH_PROMPT: &str = "Zhihu article publish contract:\n- This task may publish a Zhihu article.\n- You must not click publish without explicit human confirmation in the current session.\n- If the user asked to publish but no explicit confirmation phrase is present yet, ask for confirmation concisely and stop after the confirmation request.\n- Do not keep exploring tools after you have determined that publish confirmation is missing.\n- If the user only asked to write or draft, stay in draft mode and do not treat it as publish mode.\n- Do not repeat the same sentence or section in your final answer.";
-const REPAIR_CITY_DISPATCH_EXECUTION_PROMPT: &str = "95598 repair city dispatch execution contract:\n- Treat this as a browser workflow, not a text-only task.\n- You must call `95598-repair-city-dispatch.collect_repair_orders` first when the tool is available.\n- Use generic browser probing only after the scene-specific collection tool fails or is unavailable.\n- Collect the live repair order queue before summarizing or reporting status.
|
|
||||||
|
|
||||||
#[derive(Debug, Clone, PartialEq, Eq)]
|
#[derive(Debug, Clone, PartialEq, Eq)]
|
||||||
pub struct RuntimeEngine {
|
pub struct RuntimeEngine {
|
||||||
@@ -153,9 +151,6 @@ impl RuntimeEngine {
|
|||||||
}
|
}
|
||||||
|
|
||||||
let mut sections = vec![BROWSER_TOOL_CONTRACT_PROMPT.to_string()];
|
let mut sections = vec![BROWSER_TOOL_CONTRACT_PROMPT.to_string()];
|
||||||
if let Some(scene_contract) = build_scene_execution_contract(trimmed_instruction) {
|
|
||||||
sections.push(scene_contract);
|
|
||||||
}
|
|
||||||
if is_zhihu_hotlist_task(trimmed_instruction, page_url, page_title) {
|
if is_zhihu_hotlist_task(trimmed_instruction, page_url, page_title) {
|
||||||
sections.push(ZHIHU_HOTLIST_EXECUTION_PROMPT.to_string());
|
sections.push(ZHIHU_HOTLIST_EXECUTION_PROMPT.to_string());
|
||||||
}
|
}
|
||||||
@@ -276,17 +271,6 @@ fn task_needs_local_file_read(instruction: &str) -> bool {
|
|||||||
normalized.contains("/home/") || normalized.contains("./") || normalized.contains("../")
|
normalized.contains("/home/") || normalized.contains("./") || normalized.contains("../")
|
||||||
}
|
}
|
||||||
|
|
||||||
fn build_scene_execution_contract(instruction: &str) -> Option<String> {
|
|
||||||
let scene = match_scene_instruction(instruction)?;
|
|
||||||
if scene.id == "95598-repair-city-dispatch"
|
|
||||||
&& matches!(scene.dispatch_mode, DispatchMode::AgentBrowser)
|
|
||||||
{
|
|
||||||
Some(REPAIR_CITY_DISPATCH_EXECUTION_PROMPT.to_string())
|
|
||||||
} else {
|
|
||||||
None
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn is_zhihu_hotlist_task(
|
pub fn is_zhihu_hotlist_task(
|
||||||
instruction: &str,
|
instruction: &str,
|
||||||
page_url: Option<&str>,
|
page_url: Option<&str>,
|
||||||
@@ -402,14 +386,6 @@ fn load_runtime_skills(config: &ZeroClawConfig, skills_dirs: &[PathBuf]) -> Vec<
|
|||||||
dir,
|
dir,
|
||||||
config.skills.allow_scripts,
|
config.skills.allow_scripts,
|
||||||
));
|
));
|
||||||
|
|
||||||
let scene_skills_dir = resolve_scene_skills_dir_path(dir.clone());
|
|
||||||
if scene_skills_dir != *dir {
|
|
||||||
skills.extend(zeroclaw::skills::load_skills_from_directory(
|
|
||||||
&scene_skills_dir,
|
|
||||||
config.skills.allow_scripts,
|
|
||||||
));
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
skills
|
skills
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -1,14 +1,9 @@
|
|||||||
mod engine;
|
mod engine;
|
||||||
mod profile;
|
mod profile;
|
||||||
mod scene_registry;
|
|
||||||
mod tool_policy;
|
mod tool_policy;
|
||||||
|
|
||||||
pub use engine::{
|
pub use engine::{
|
||||||
is_zhihu_hotlist_task, is_zhihu_write_task, task_requests_zhihu_article_publish, RuntimeEngine,
|
is_zhihu_hotlist_task, is_zhihu_write_task, task_requests_zhihu_article_publish, RuntimeEngine,
|
||||||
};
|
};
|
||||||
pub use profile::RuntimeProfile;
|
pub use profile::RuntimeProfile;
|
||||||
pub use scene_registry::{
|
|
||||||
load_first_slice_scene_registry, load_scene_registry_from_root, match_scene_instruction,
|
|
||||||
match_scene_instruction_in_registry, DispatchMode, SceneRegistryEntry,
|
|
||||||
};
|
|
||||||
pub use tool_policy::ToolPolicy;
|
pub use tool_policy::ToolPolicy;
|
||||||
|
|||||||
@@ -1,242 +0,0 @@
|
|||||||
use std::fs;
|
|
||||||
use std::path::{Path, PathBuf};
|
|
||||||
|
|
||||||
use serde::Deserialize;
|
|
||||||
use serde_json::{Map, Value};
|
|
||||||
|
|
||||||
const STAGED_SCENE_ROOT: &str = "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging";
|
|
||||||
|
|
||||||
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
|
|
||||||
pub enum DispatchMode {
|
|
||||||
DirectBrowser,
|
|
||||||
AgentBrowser,
|
|
||||||
}
|
|
||||||
|
|
||||||
#[derive(Debug, Clone, PartialEq, Eq)]
|
|
||||||
pub struct SceneRegistryEntry {
|
|
||||||
pub id: String,
|
|
||||||
pub name: String,
|
|
||||||
pub summary: String,
|
|
||||||
pub tags: Vec<String>,
|
|
||||||
pub inputs: Vec<String>,
|
|
||||||
pub outputs: Vec<String>,
|
|
||||||
pub skill_package: String,
|
|
||||||
pub skill_tool: String,
|
|
||||||
pub skill_artifact_type: String,
|
|
||||||
pub dispatch_mode: DispatchMode,
|
|
||||||
pub expected_domain: String,
|
|
||||||
pub aliases: Vec<String>,
|
|
||||||
pub default_args: Map<String, Value>,
|
|
||||||
}
|
|
||||||
|
|
||||||
#[derive(Debug, Deserialize)]
|
|
||||||
struct SceneMetadata {
|
|
||||||
id: String,
|
|
||||||
name: String,
|
|
||||||
summary: String,
|
|
||||||
#[serde(default)]
|
|
||||||
tags: Vec<String>,
|
|
||||||
#[serde(default)]
|
|
||||||
inputs: Vec<String>,
|
|
||||||
#[serde(default)]
|
|
||||||
outputs: Vec<String>,
|
|
||||||
skill: SceneSkillMetadata,
|
|
||||||
}
|
|
||||||
|
|
||||||
#[derive(Debug, Deserialize)]
|
|
||||||
struct SceneSkillMetadata {
|
|
||||||
package: String,
|
|
||||||
tool: String,
|
|
||||||
artifact_type: String,
|
|
||||||
}
|
|
||||||
|
|
||||||
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
|
|
||||||
struct SceneMatchScore {
|
|
||||||
matched_terms: usize,
|
|
||||||
longest_term: usize,
|
|
||||||
}
|
|
||||||
|
|
||||||
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
|
|
||||||
struct SceneMatchResult {
|
|
||||||
score: SceneMatchScore,
|
|
||||||
has_strong_phrase_hit: bool,
|
|
||||||
}
|
|
||||||
|
|
||||||
#[derive(Debug, Clone)]
|
|
||||||
struct SceneRuntimePolicy {
|
|
||||||
scene_id: &'static str,
|
|
||||||
dispatch_mode: DispatchMode,
|
|
||||||
expected_domain: &'static str,
|
|
||||||
aliases: &'static [&'static str],
|
|
||||||
}
|
|
||||||
|
|
||||||
const FIRST_SLICE_POLICIES: [SceneRuntimePolicy; 2] = [
|
|
||||||
SceneRuntimePolicy {
|
|
||||||
scene_id: "fault-details-report",
|
|
||||||
dispatch_mode: DispatchMode::DirectBrowser,
|
|
||||||
expected_domain: "sgcc.example.invalid",
|
|
||||||
aliases: &["故障明细", "故障明细报表", "导出故障明细"],
|
|
||||||
},
|
|
||||||
SceneRuntimePolicy {
|
|
||||||
scene_id: "95598-repair-city-dispatch",
|
|
||||||
dispatch_mode: DispatchMode::AgentBrowser,
|
|
||||||
expected_domain: "95598.example.invalid",
|
|
||||||
aliases: &["95598抢修市指", "市指抢修监测", "95598抢修队列", "95598抢修市指监测"],
|
|
||||||
},
|
|
||||||
];
|
|
||||||
|
|
||||||
pub fn load_first_slice_scene_registry() -> Vec<SceneRegistryEntry> {
|
|
||||||
load_scene_registry_from_root(Path::new(STAGED_SCENE_ROOT))
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn load_scene_registry_from_root(root: &Path) -> Vec<SceneRegistryEntry> {
|
|
||||||
let mut registry = Vec::new();
|
|
||||||
for policy in FIRST_SLICE_POLICIES {
|
|
||||||
if let Some(entry) = load_scene_entry(root, &policy) {
|
|
||||||
registry.push(entry);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
registry
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn match_scene_instruction(instruction: &str) -> Option<SceneRegistryEntry> {
|
|
||||||
let registry = load_first_slice_scene_registry();
|
|
||||||
match_scene_instruction_in_registry(®istry, instruction)
|
|
||||||
}
|
|
||||||
|
|
||||||
pub fn match_scene_instruction_in_registry(
|
|
||||||
registry: &[SceneRegistryEntry],
|
|
||||||
instruction: &str,
|
|
||||||
) -> Option<SceneRegistryEntry> {
|
|
||||||
let normalized_instruction = normalize_for_match(instruction);
|
|
||||||
if normalized_instruction.is_empty() {
|
|
||||||
return None;
|
|
||||||
}
|
|
||||||
|
|
||||||
let mut best_match: Option<(SceneMatchResult, &SceneRegistryEntry)> = None;
|
|
||||||
let mut ambiguous = false;
|
|
||||||
let mut strong_phrase_hits = 0;
|
|
||||||
|
|
||||||
for entry in registry {
|
|
||||||
let Some(result) = score_scene_instruction(entry, &normalized_instruction) else {
|
|
||||||
continue;
|
|
||||||
};
|
|
||||||
if result.has_strong_phrase_hit {
|
|
||||||
strong_phrase_hits += 1;
|
|
||||||
}
|
|
||||||
|
|
||||||
match &best_match {
|
|
||||||
None => {
|
|
||||||
best_match = Some((result, entry));
|
|
||||||
ambiguous = false;
|
|
||||||
}
|
|
||||||
Some((current_result, _)) if result.score > current_result.score => {
|
|
||||||
best_match = Some((result, entry));
|
|
||||||
ambiguous = false;
|
|
||||||
}
|
|
||||||
Some((current_result, current_entry)) if result.score == current_result.score => {
|
|
||||||
if result.has_strong_phrase_hit && current_result.has_strong_phrase_hit {
|
|
||||||
if current_entry.id != entry.id {
|
|
||||||
ambiguous = true;
|
|
||||||
}
|
|
||||||
} else if result.has_strong_phrase_hit && !current_result.has_strong_phrase_hit {
|
|
||||||
best_match = Some((result, entry));
|
|
||||||
ambiguous = false;
|
|
||||||
} else if current_result.has_strong_phrase_hit && !result.has_strong_phrase_hit {
|
|
||||||
ambiguous = false;
|
|
||||||
} else if current_entry.id != entry.id {
|
|
||||||
ambiguous = true;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
Some(_) => {}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if ambiguous || strong_phrase_hits > 1 {
|
|
||||||
None
|
|
||||||
} else {
|
|
||||||
best_match.map(|(_, entry)| entry.clone())
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
fn load_scene_entry(root: &Path, policy: &SceneRuntimePolicy) -> Option<SceneRegistryEntry> {
|
|
||||||
let scene_path = scene_json_path(root, policy.scene_id);
|
|
||||||
let contents = fs::read_to_string(scene_path).ok()?;
|
|
||||||
let metadata: SceneMetadata = serde_json::from_str(&contents).ok()?;
|
|
||||||
if metadata.id != policy.scene_id {
|
|
||||||
return None;
|
|
||||||
}
|
|
||||||
|
|
||||||
Some(SceneRegistryEntry {
|
|
||||||
id: policy.scene_id.to_string(),
|
|
||||||
name: metadata.name,
|
|
||||||
summary: metadata.summary,
|
|
||||||
tags: metadata.tags,
|
|
||||||
inputs: metadata.inputs,
|
|
||||||
outputs: metadata.outputs,
|
|
||||||
skill_package: metadata.skill.package,
|
|
||||||
skill_tool: metadata.skill.tool,
|
|
||||||
skill_artifact_type: metadata.skill.artifact_type,
|
|
||||||
dispatch_mode: policy.dispatch_mode,
|
|
||||||
expected_domain: policy.expected_domain.to_string(),
|
|
||||||
aliases: policy.aliases.iter().map(|alias| (*alias).to_string()).collect(),
|
|
||||||
default_args: Map::new(),
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
fn scene_json_path(root: &Path, scene_id: &str) -> PathBuf {
|
|
||||||
root.join("scenes").join(scene_id).join("scene.json")
|
|
||||||
}
|
|
||||||
|
|
||||||
fn score_scene_instruction(
|
|
||||||
entry: &SceneRegistryEntry,
|
|
||||||
normalized_instruction: &str,
|
|
||||||
) -> Option<SceneMatchResult> {
|
|
||||||
let mut matched_terms = 0;
|
|
||||||
let mut longest_term = 0;
|
|
||||||
let mut has_strong_phrase_hit = false;
|
|
||||||
|
|
||||||
for term in candidate_match_terms(entry) {
|
|
||||||
let normalized_term = normalize_for_match(&term);
|
|
||||||
if normalized_term.len() < 2 {
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
if normalized_instruction.contains(&normalized_term) {
|
|
||||||
matched_terms += 1;
|
|
||||||
longest_term = longest_term.max(normalized_term.len());
|
|
||||||
if normalized_term.len() >= 6 {
|
|
||||||
has_strong_phrase_hit = true;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if matched_terms == 0 {
|
|
||||||
None
|
|
||||||
} else {
|
|
||||||
Some(SceneMatchResult {
|
|
||||||
score: SceneMatchScore {
|
|
||||||
matched_terms,
|
|
||||||
longest_term,
|
|
||||||
},
|
|
||||||
has_strong_phrase_hit,
|
|
||||||
})
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
fn candidate_match_terms(entry: &SceneRegistryEntry) -> Vec<String> {
|
|
||||||
let mut terms = Vec::new();
|
|
||||||
terms.push(entry.id.clone());
|
|
||||||
terms.push(entry.name.clone());
|
|
||||||
terms.push(entry.summary.clone());
|
|
||||||
terms.extend(entry.tags.iter().cloned());
|
|
||||||
terms.extend(entry.aliases.iter().cloned());
|
|
||||||
terms
|
|
||||||
}
|
|
||||||
|
|
||||||
fn normalize_for_match(value: &str) -> String {
|
|
||||||
value
|
|
||||||
.chars()
|
|
||||||
.filter(|ch| !ch.is_whitespace() && *ch != '-' && *ch != '_' && *ch != ':')
|
|
||||||
.flat_map(|ch| ch.to_lowercase())
|
|
||||||
.collect()
|
|
||||||
}
|
|
||||||
@@ -7,14 +7,15 @@ use std::sync::Arc;
|
|||||||
use tungstenite::accept;
|
use tungstenite::accept;
|
||||||
|
|
||||||
use crate::agent::AgentRuntimeContext;
|
use crate::agent::AgentRuntimeContext;
|
||||||
|
use crate::browser::callback_host::LiveBrowserCallbackHost;
|
||||||
use crate::pipe::PipeError;
|
use crate::pipe::PipeError;
|
||||||
use crate::security::MacPolicy;
|
use crate::security::MacPolicy;
|
||||||
|
|
||||||
const DEFAULT_BROWSER_WS_URL: &str = "ws://127.0.0.1:12345";
|
const DEFAULT_BROWSER_WS_URL: &str = "ws://127.0.0.1:12345";
|
||||||
const DEFAULT_SERVICE_WS_LISTEN_ADDR: &str = "127.0.0.1:42321";
|
const DEFAULT_SERVICE_WS_LISTEN_ADDR: &str = "127.0.0.1:42321";
|
||||||
|
|
||||||
pub use protocol::{ClientMessage, ServiceMessage};
|
pub use protocol::{ClientMessage, ConfigUpdatePayload, ServiceMessage};
|
||||||
pub use server::{serve_client, ServiceEventSink, ServiceSession};
|
pub use server::{ServiceEventSink, ServiceSession};
|
||||||
|
|
||||||
pub(crate) mod browser_ws_client {
|
pub(crate) mod browser_ws_client {
|
||||||
pub(crate) use super::server::{initial_request_url_for_submit_task, ServiceWsClient};
|
pub(crate) use super::server::{initial_request_url_for_submit_task, ServiceWsClient};
|
||||||
@@ -69,6 +70,11 @@ pub fn run() -> Result<(), PipeError> {
|
|||||||
browser_ws_url,
|
browser_ws_url,
|
||||||
);
|
);
|
||||||
|
|
||||||
|
// Cache the browser callback host across client sessions so the helper
|
||||||
|
// page tab is opened only once per process lifetime instead of once per
|
||||||
|
// WebSocket reconnection.
|
||||||
|
let mut cached_host: Option<Arc<LiveBrowserCallbackHost>> = None;
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
let (stream, _) = listener.accept()?;
|
let (stream, _) = listener.accept()?;
|
||||||
let websocket = accept(stream)
|
let websocket = accept(stream)
|
||||||
@@ -76,12 +82,13 @@ pub fn run() -> Result<(), PipeError> {
|
|||||||
let sink = Arc::new(ServiceEventSink::from_websocket(websocket));
|
let sink = Arc::new(ServiceEventSink::from_websocket(websocket));
|
||||||
match session.try_attach_client() {
|
match session.try_attach_client() {
|
||||||
Ok(()) => {
|
Ok(()) => {
|
||||||
let result = serve_client(
|
let result = server::serve_client(
|
||||||
&runtime_context,
|
&runtime_context,
|
||||||
&session,
|
&session,
|
||||||
sink.clone(),
|
sink.clone(),
|
||||||
browser_ws_url,
|
browser_ws_url,
|
||||||
&mac_policy,
|
&mac_policy,
|
||||||
|
&mut cached_host,
|
||||||
);
|
);
|
||||||
session.detach_client();
|
session.detach_client();
|
||||||
match result {
|
match result {
|
||||||
|
|||||||
@@ -3,6 +3,24 @@ use serde::{Deserialize, Serialize};
|
|||||||
use crate::agent::SubmitTaskRequest;
|
use crate::agent::SubmitTaskRequest;
|
||||||
use crate::pipe::ConversationMessage;
|
use crate::pipe::ConversationMessage;
|
||||||
|
|
||||||
|
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
|
||||||
|
pub struct ConfigUpdatePayload {
|
||||||
|
#[serde(rename = "apiKey", default)]
|
||||||
|
pub api_key: Option<String>,
|
||||||
|
#[serde(rename = "baseUrl", default)]
|
||||||
|
pub base_url: Option<String>,
|
||||||
|
#[serde(default)]
|
||||||
|
pub model: Option<String>,
|
||||||
|
#[serde(rename = "skillsDir", default)]
|
||||||
|
pub skills_dir: Option<String>,
|
||||||
|
#[serde(rename = "directSubmitSkill", default)]
|
||||||
|
pub direct_submit_skill: Option<String>,
|
||||||
|
#[serde(rename = "runtimeProfile", default)]
|
||||||
|
pub runtime_profile: Option<String>,
|
||||||
|
#[serde(rename = "browserBackend", default)]
|
||||||
|
pub browser_backend: Option<String>,
|
||||||
|
}
|
||||||
|
|
||||||
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
|
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
|
||||||
#[serde(tag = "type", rename_all = "snake_case")]
|
#[serde(tag = "type", rename_all = "snake_case")]
|
||||||
pub enum ClientMessage {
|
pub enum ClientMessage {
|
||||||
@@ -21,6 +39,9 @@ pub enum ClientMessage {
|
|||||||
page_title: String,
|
page_title: String,
|
||||||
},
|
},
|
||||||
Ping,
|
Ping,
|
||||||
|
UpdateConfig {
|
||||||
|
config: ConfigUpdatePayload,
|
||||||
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
impl ClientMessage {
|
impl ClientMessage {
|
||||||
@@ -39,7 +60,11 @@ impl ClientMessage {
|
|||||||
page_url: normalize_optional_field(page_url),
|
page_url: normalize_optional_field(page_url),
|
||||||
page_title: normalize_optional_field(page_title),
|
page_title: normalize_optional_field(page_title),
|
||||||
}),
|
}),
|
||||||
ClientMessage::Connect | ClientMessage::Start | ClientMessage::Stop | ClientMessage::Ping => None,
|
ClientMessage::Connect
|
||||||
|
| ClientMessage::Start
|
||||||
|
| ClientMessage::Stop
|
||||||
|
| ClientMessage::Ping
|
||||||
|
| ClientMessage::UpdateConfig { .. } => None,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -51,6 +76,11 @@ pub enum ServiceMessage {
|
|||||||
LogEntry { level: String, message: String },
|
LogEntry { level: String, message: String },
|
||||||
TaskComplete { success: bool, summary: String },
|
TaskComplete { success: bool, summary: String },
|
||||||
Busy { message: String },
|
Busy { message: String },
|
||||||
|
Pong,
|
||||||
|
ConfigUpdated {
|
||||||
|
success: bool,
|
||||||
|
message: String,
|
||||||
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
fn normalize_optional_field(value: String) -> Option<String> {
|
fn normalize_optional_field(value: String) -> Option<String> {
|
||||||
|
|||||||
@@ -1,4 +1,6 @@
|
|||||||
use std::net::TcpStream;
|
use std::net::TcpStream;
|
||||||
|
use std::path::Path;
|
||||||
|
use std::path::PathBuf;
|
||||||
use std::sync::{Arc, Mutex};
|
use std::sync::{Arc, Mutex};
|
||||||
use std::time::Duration;
|
use std::time::Duration;
|
||||||
|
|
||||||
@@ -126,7 +128,7 @@ impl ServiceEventSink {
|
|||||||
.lock()
|
.lock()
|
||||||
.map_err(|_| PipeError::Protocol("service websocket writer lock poisoned".to_string()))?
|
.map_err(|_| PipeError::Protocol("service websocket writer lock poisoned".to_string()))?
|
||||||
.send(Message::Text(payload.into()))
|
.send(Message::Text(payload.into()))
|
||||||
.map_err(|err| PipeError::Protocol(format!("service websocket send failed: {err}")))?;
|
.map_err(|err| map_service_websocket_error(err, "send"))?;
|
||||||
}
|
}
|
||||||
Ok(())
|
Ok(())
|
||||||
}
|
}
|
||||||
@@ -229,17 +231,60 @@ fn send_status_changed(sink: &ServiceEventSink, state: &str) -> Result<(), PipeE
|
|||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
pub fn serve_client(
|
fn update_config_file(config_path: &Path, config: crate::service::protocol::ConfigUpdatePayload) -> Result<(), String> {
|
||||||
|
use crate::config::{BrowserBackend, RuntimeProfile, SgClawSettings};
|
||||||
|
|
||||||
|
let mut settings = SgClawSettings::load(Some(config_path))
|
||||||
|
.map_err(|e| e.to_string())?
|
||||||
|
.ok_or_else(|| "无法读取现有配置".to_string())?;
|
||||||
|
|
||||||
|
if let Some(v) = config.api_key {
|
||||||
|
settings.provider_api_key = v;
|
||||||
|
}
|
||||||
|
if let Some(v) = config.base_url {
|
||||||
|
settings.provider_base_url = v;
|
||||||
|
}
|
||||||
|
if let Some(v) = config.model {
|
||||||
|
settings.provider_model = v;
|
||||||
|
}
|
||||||
|
if let Some(v) = config.skills_dir {
|
||||||
|
settings.skills_dir = Some(PathBuf::from(&v));
|
||||||
|
}
|
||||||
|
if let Some(v) = config.direct_submit_skill {
|
||||||
|
settings.direct_submit_skill = Some(v);
|
||||||
|
}
|
||||||
|
if let Some(v) = config.runtime_profile {
|
||||||
|
settings.runtime_profile = match v.as_str() {
|
||||||
|
"browser-attached" => RuntimeProfile::BrowserAttached,
|
||||||
|
"browser-heavy" => RuntimeProfile::BrowserHeavy,
|
||||||
|
"general-assistant" => RuntimeProfile::GeneralAssistant,
|
||||||
|
_ => return Err(format!("无效的 runtimeProfile: {}", v)),
|
||||||
|
};
|
||||||
|
}
|
||||||
|
if let Some(v) = config.browser_backend {
|
||||||
|
settings.browser_backend = match v.as_str() {
|
||||||
|
"super-rpa" => BrowserBackend::SuperRpa,
|
||||||
|
"agent-browser" => BrowserBackend::AgentBrowser,
|
||||||
|
"rust-native" => BrowserBackend::RustNative,
|
||||||
|
"computer-use" => BrowserBackend::ComputerUse,
|
||||||
|
"auto" => BrowserBackend::Auto,
|
||||||
|
_ => return Err(format!("无效的 browserBackend: {}", v)),
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
settings
|
||||||
|
.save_to_path(config_path)
|
||||||
|
.map_err(|e| format!("写入配置文件失败: {}", e))
|
||||||
|
}
|
||||||
|
|
||||||
|
pub(crate) fn serve_client(
|
||||||
context: &AgentRuntimeContext,
|
context: &AgentRuntimeContext,
|
||||||
session: &ServiceSession,
|
session: &ServiceSession,
|
||||||
sink: Arc<ServiceEventSink>,
|
sink: Arc<ServiceEventSink>,
|
||||||
browser_ws_url: &str,
|
browser_ws_url: &str,
|
||||||
mac_policy: &MacPolicy,
|
mac_policy: &MacPolicy,
|
||||||
|
cached_host: &mut Option<Arc<LiveBrowserCallbackHost>>,
|
||||||
) -> Result<(), PipeError> {
|
) -> Result<(), PipeError> {
|
||||||
// Cache the browser callback host across tasks so the helper page tab is
|
|
||||||
// opened only once per client session instead of once per task.
|
|
||||||
let mut cached_host: Option<Arc<LiveBrowserCallbackHost>> = None;
|
|
||||||
|
|
||||||
loop {
|
loop {
|
||||||
let Some(message) = sink.recv_client_message()? else {
|
let Some(message) = sink.recv_client_message()? else {
|
||||||
return Ok(());
|
return Ok(());
|
||||||
@@ -249,6 +294,7 @@ pub fn serve_client(
|
|||||||
ClientMessage::Connect => send_status_changed(sink.as_ref(), "connected")?,
|
ClientMessage::Connect => send_status_changed(sink.as_ref(), "connected")?,
|
||||||
ClientMessage::Start => send_status_changed(sink.as_ref(), "started")?,
|
ClientMessage::Start => send_status_changed(sink.as_ref(), "started")?,
|
||||||
ClientMessage::Stop => send_status_changed(sink.as_ref(), "stopped")?,
|
ClientMessage::Stop => send_status_changed(sink.as_ref(), "stopped")?,
|
||||||
|
ClientMessage::Ping => sink.send_service_message(ServiceMessage::Pong)?,
|
||||||
ClientMessage::SubmitTask {
|
ClientMessage::SubmitTask {
|
||||||
instruction,
|
instruction,
|
||||||
conversation_id,
|
conversation_id,
|
||||||
@@ -289,9 +335,10 @@ pub fn serve_client(
|
|||||||
&bootstrap_url,
|
&bootstrap_url,
|
||||||
Duration::from_secs(15),
|
Duration::from_secs(15),
|
||||||
BROWSER_RESPONSE_TIMEOUT,
|
BROWSER_RESPONSE_TIMEOUT,
|
||||||
|
true, // use_hidden_domain: hidden domain for invisible helper
|
||||||
) {
|
) {
|
||||||
Ok(host) => {
|
Ok(host) => {
|
||||||
cached_host = Some(Arc::new(host));
|
*cached_host = Some(Arc::new(host));
|
||||||
}
|
}
|
||||||
Err(err) => {
|
Err(err) => {
|
||||||
session.finish_task();
|
session.finish_task();
|
||||||
@@ -335,7 +382,39 @@ pub fn serve_client(
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
ClientMessage::Ping => {}
|
ClientMessage::UpdateConfig { config } => {
|
||||||
|
let Some(config_path) = context.config_path() else {
|
||||||
|
sink.send_service_message(ServiceMessage::ConfigUpdated {
|
||||||
|
success: false,
|
||||||
|
message: "未找到配置文件路径。请通过 --config-path 参数启动 sg_claw 后再使用此功能。".to_string(),
|
||||||
|
})?;
|
||||||
|
continue;
|
||||||
|
};
|
||||||
|
|
||||||
|
if !config_path.exists() {
|
||||||
|
sink.send_service_message(ServiceMessage::ConfigUpdated {
|
||||||
|
success: false,
|
||||||
|
message: format!("配置文件不存在: {}", config_path.display()),
|
||||||
|
})?;
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
let result = update_config_file(config_path, config);
|
||||||
|
match result {
|
||||||
|
Ok(()) => {
|
||||||
|
sink.send_service_message(ServiceMessage::ConfigUpdated {
|
||||||
|
success: true,
|
||||||
|
message: "配置已保存。重启 sg_claw 以应用新配置。".to_string(),
|
||||||
|
})?;
|
||||||
|
}
|
||||||
|
Err(err) => {
|
||||||
|
sink.send_service_message(ServiceMessage::ConfigUpdated {
|
||||||
|
success: false,
|
||||||
|
message: format!("保存配置失败: {}", err),
|
||||||
|
})?;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -378,6 +457,12 @@ fn derive_request_url_from_instruction(instruction: &str) -> Option<String> {
|
|||||||
return Some("https://zhuanlan.zhihu.com".to_string());
|
return Some("https://zhuanlan.zhihu.com".to_string());
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// 台区线损相关
|
||||||
|
// TODO: 临时方案,后续应从 skill 配置或 deterministic_submit 解析结果中获取
|
||||||
|
if instruction.contains("线损") || instruction.contains("lineloss") {
|
||||||
|
return Some("http://20.76.57.61:18080".to_string());
|
||||||
|
}
|
||||||
|
|
||||||
None
|
None
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -471,6 +556,23 @@ impl Transport for NoopTransport {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
mod pipe_closed_mapping_tests {
|
||||||
|
use super::*;
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn map_service_websocket_error_treats_connection_aborted_send_as_pipe_closed() {
|
||||||
|
let err = tungstenite::Error::Io(std::io::Error::from(std::io::ErrorKind::ConnectionAborted));
|
||||||
|
assert!(matches!(map_service_websocket_error(err, "send"), PipeError::PipeClosed));
|
||||||
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn map_service_websocket_error_treats_send_after_closing_as_pipe_closed() {
|
||||||
|
let err = tungstenite::Error::Protocol(tungstenite::error::ProtocolError::SendAfterClosing);
|
||||||
|
assert!(matches!(map_service_websocket_error(err, "send"), PipeError::PipeClosed));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
struct ServiceBridgeTransport {
|
struct ServiceBridgeTransport {
|
||||||
bridge_base_url: String,
|
bridge_base_url: String,
|
||||||
@@ -796,6 +898,19 @@ mod tests {
|
|||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn initial_request_url_falls_back_to_lineloss_origin_for_lineloss_instructions() {
|
||||||
|
let request = SubmitTaskRequest {
|
||||||
|
instruction: "兰州公司 台区线损大数据 月累计线损率统计分析。。。".to_string(),
|
||||||
|
..SubmitTaskRequest::default()
|
||||||
|
};
|
||||||
|
|
||||||
|
assert_eq!(
|
||||||
|
initial_request_url_for_submit_task(&request),
|
||||||
|
"http://20.76.57.61:18080"
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
#[test]
|
#[test]
|
||||||
fn bridge_base_url_defaults_local_browser_ws_endpoint_to_http_bridge() {
|
fn bridge_base_url_defaults_local_browser_ws_endpoint_to_http_bridge() {
|
||||||
assert_eq!(
|
assert_eq!(
|
||||||
|
|||||||
@@ -7,15 +7,18 @@ use std::sync::{Arc, Mutex, OnceLock};
|
|||||||
use std::thread;
|
use std::thread;
|
||||||
use std::time::Duration;
|
use std::time::Duration;
|
||||||
|
|
||||||
use common::MockTransport;
|
|
||||||
use serde_json::{json, Value};
|
use serde_json::{json, Value};
|
||||||
|
use uuid::Uuid;
|
||||||
|
|
||||||
|
use common::MockTransport;
|
||||||
use sgclaw::agent::{
|
use sgclaw::agent::{
|
||||||
handle_browser_message, handle_browser_message_with_context, AgentRuntimeContext,
|
handle_browser_message, handle_browser_message_with_context, AgentRuntimeContext,
|
||||||
};
|
};
|
||||||
use sgclaw::pipe::{AgentMessage, BrowserMessage, BrowserPipeTool, Timing};
|
use sgclaw::compat::runtime::CompatTaskContext;
|
||||||
|
use sgclaw::config::SgClawSettings;
|
||||||
|
use sgclaw::pipe::{Action, AgentMessage, BrowserMessage, BrowserPipeTool, Timing};
|
||||||
use sgclaw::security::MacPolicy;
|
use sgclaw::security::MacPolicy;
|
||||||
use tungstenite::{accept, Message};
|
use tungstenite::{accept, error::ProtocolError, Message};
|
||||||
use uuid::Uuid;
|
|
||||||
|
|
||||||
fn env_lock() -> &'static Mutex<()> {
|
fn env_lock() -> &'static Mutex<()> {
|
||||||
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
|
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
|
||||||
@@ -34,6 +37,8 @@ fn write_config(
|
|||||||
base_url: &str,
|
base_url: &str,
|
||||||
model: &str,
|
model: &str,
|
||||||
skills_dir: Option<&str>,
|
skills_dir: Option<&str>,
|
||||||
|
browser_ws_url: Option<&str>,
|
||||||
|
direct_submit_skill: Option<&str>,
|
||||||
) -> PathBuf {
|
) -> PathBuf {
|
||||||
let config_path = root.join("sgclaw_config.json");
|
let config_path = root.join("sgclaw_config.json");
|
||||||
let mut payload = json!({
|
let mut payload = json!({
|
||||||
@@ -45,6 +50,12 @@ fn write_config(
|
|||||||
if let Some(skills_dir) = skills_dir {
|
if let Some(skills_dir) = skills_dir {
|
||||||
payload["skillsDir"] = json!(skills_dir);
|
payload["skillsDir"] = json!(skills_dir);
|
||||||
}
|
}
|
||||||
|
if let Some(browser_ws_url) = browser_ws_url {
|
||||||
|
payload["browserWsUrl"] = json!(browser_ws_url);
|
||||||
|
}
|
||||||
|
if let Some(direct_submit_skill) = direct_submit_skill {
|
||||||
|
payload["directSubmitSkill"] = json!(direct_submit_skill);
|
||||||
|
}
|
||||||
fs::write(&config_path, serde_json::to_string_pretty(&payload).unwrap()).unwrap();
|
fs::write(&config_path, serde_json::to_string_pretty(&payload).unwrap()).unwrap();
|
||||||
config_path
|
config_path
|
||||||
}
|
}
|
||||||
@@ -80,7 +91,10 @@ fn start_browser_ws_server() -> (String, Arc<Mutex<Vec<String>>>, thread::JoinHa
|
|||||||
let message = match socket.read() {
|
let message = match socket.read() {
|
||||||
Ok(message) => message,
|
Ok(message) => message,
|
||||||
Err(tungstenite::Error::ConnectionClosed)
|
Err(tungstenite::Error::ConnectionClosed)
|
||||||
| Err(tungstenite::Error::AlreadyClosed) => break,
|
| Err(tungstenite::Error::AlreadyClosed)
|
||||||
|
| Err(tungstenite::Error::Protocol(
|
||||||
|
ProtocolError::ResetWithoutClosingHandshake,
|
||||||
|
)) => break,
|
||||||
Err(err) => panic!("browser ws test server read failed: {err}"),
|
Err(err) => panic!("browser ws test server read failed: {err}"),
|
||||||
};
|
};
|
||||||
let payload = match message {
|
let payload = match message {
|
||||||
@@ -155,20 +169,688 @@ fn start_browser_ws_server() -> (String, Arc<Mutex<Vec<String>>>, thread::JoinHa
     (format!("ws://{address}"), frames, handle)
 }
+
+#[test]
+fn browser_ws_server_treats_reset_without_closing_handshake_as_disconnect() {
+    let err = tungstenite::Error::Protocol(ProtocolError::ResetWithoutClosingHandshake);
+    assert!(matches!(
+        err,
+        tungstenite::Error::Protocol(ProtocolError::ResetWithoutClosingHandshake)
+    ));
+}
+
+fn provider_path_test_policy() -> MacPolicy {
+    policy_for_domains(&["www.baidu.com"])
+}
+
+fn direct_runtime_test_policy() -> MacPolicy {
+    policy_for_domains(&["95598.sgcc.com.cn"])
+}
+
 fn test_policy() -> MacPolicy {
+    policy_for_domains(&["www.zhihu.com"])
+}
+
+fn policy_for_domains(domains: &[&str]) -> MacPolicy {
     MacPolicy::from_json_str(
-        r#"{
+        &serde_json::json!({
             "version": "1.0",
-            "domains": { "allowed": ["www.baidu.com", "www.zhihu.com"] },
+            "domains": { "allowed": domains },
             "pipe_actions": {
                 "allowed": ["click", "type", "navigate", "getText", "eval"],
                 "blocked": []
             }
-        }"#,
+        })
+        .to_string(),
     )
     .unwrap()
 }
+
+fn build_direct_runtime_skill_root() -> PathBuf {
+    let root = std::env::temp_dir().join(format!(
+        "sgclaw-agent-runtime-skill-root-{}",
+        Uuid::new_v4()
+    ));
+    let skill_dir = root.join("fault-details-report");
+    let script_dir = skill_dir.join("scripts");
+
+    fs::create_dir_all(&script_dir).unwrap();
+    fs::write(
+        skill_dir.join("SKILL.toml"),
+        r#"
+[skill]
+name = "fault-details-report"
+description = "Collect 95598 fault detail data via browser eval."
+version = "0.1.0"
+
+[[tools]]
+name = "collect_fault_details"
+description = "Collect structured fault detail rows for a specific period."
+kind = "browser_script"
+command = "scripts/collect_fault_details.js"
+
+[tools.args]
+period = "YYYY-MM period to collect."
+"#,
+    )
+    .unwrap();
+    fs::write(
+        script_dir.join("collect_fault_details.js"),
+        r#"
+return {
+    fault_type: "outage",
+    observed_at: `${args.period}-15 09:00`,
+    affected_scope: "line-7",
+    expected_domain: args.expected_domain,
+    artifact_payload: "report artifact payload"
+};
+"#,
+    )
+    .unwrap();
+
+    root
+}
+
+fn write_direct_submit_config(workspace_root: &std::path::Path, skill_root: &std::path::Path) -> PathBuf {
+    let config_path = workspace_root.join("sgclaw_config.json");
+    fs::write(
+        &config_path,
+        serde_json::json!({
+            "providers": [],
+            "skillsDir": skill_root,
+            "directSubmitSkill": "fault-details-report.collect_fault_details"
+        })
+        .to_string(),
+    )
+    .unwrap();
+    config_path
+}
+
+fn direct_submit_runtime_context(skill_root: &std::path::Path) -> AgentRuntimeContext {
+    let workspace_root = std::env::temp_dir().join(format!(
+        "sgclaw-agent-runtime-workspace-{}",
+        Uuid::new_v4()
+    ));
+    fs::create_dir_all(&workspace_root).unwrap();
+    let config_path = write_direct_submit_config(&workspace_root, skill_root);
+    AgentRuntimeContext::new(Some(config_path), workspace_root)
+}
+
+fn submit_fault_details_message() -> BrowserMessage {
+    BrowserMessage::SubmitTask {
+        instruction: "请采集 2026-03 的故障明细并返回结果".to_string(),
+        conversation_id: String::new(),
+        messages: vec![],
+        page_url: "https://95598.sgcc.com.cn/".to_string(),
+        page_title: "网上国网".to_string(),
+    }
+}
+
+fn submit_zhihu_hotlist_export_message() -> BrowserMessage {
+    BrowserMessage::SubmitTask {
+        instruction: "打开知乎热榜,获取前10条数据,并导出 Excel".to_string(),
+        conversation_id: String::new(),
+        messages: vec![],
+        page_url: String::new(),
+        page_title: String::new(),
+    }
+}
+
+fn direct_submit_mode_logs(sent: &[AgentMessage]) -> Vec<String> {
+    sent.iter()
+        .filter_map(|message| match message {
+            AgentMessage::LogEntry { level, message } if level == "mode" => Some(message.clone()),
+            _ => None,
+        })
+        .collect()
+}
+
+fn direct_submit_completion(sent: &[AgentMessage]) -> Option<(bool, String)> {
+    sent.iter().find_map(|message| match message {
+        AgentMessage::TaskComplete { success, summary } => Some((*success, summary.clone())),
+        _ => None,
+    })
+}
+
+fn success_browser_response(seq: u64, data: serde_json::Value) -> BrowserMessage {
+    BrowserMessage::Response {
+        seq,
+        success: true,
+        data,
+        aom_snapshot: vec![],
+        timing: Timing {
+            queue_ms: 1,
+            exec_ms: 10,
+        },
+    }
+}
+
+fn report_artifact_browser_response(
+    seq: u64,
+    status: &str,
+    partial_reasons: &[&str],
+    detail_rows: Vec<serde_json::Value>,
+    summary_rows: Vec<serde_json::Value>,
+) -> BrowserMessage {
+    success_browser_response(
+        seq,
+        serde_json::json!({
+            "text": {
+                "type": "report-artifact",
+                "report_name": "fault-details-report",
+                "period": "2026-03",
+                "selected_range": {
+                    "start": "2026-03-08 16:00:00",
+                    "end": "2026-03-09 16:00:00"
+                },
+                "columns": ["qxdbh"],
+                "rows": detail_rows,
+                "sections": [{
+                    "name": "summary-sheet",
+                    "columns": ["index"],
+                    "rows": summary_rows
+                }],
+                "counts": {
+                    "detail_rows": detail_rows.len(),
+                    "summary_rows": summary_rows.len()
+                },
+                "status": status,
+                "partial_reasons": partial_reasons,
+                "downstream": {
+                    "export": {
+                        "attempted": true,
+                        "success": status != "blocked" && status != "error",
+                        "path": "http://localhost/export.xlsx"
+                    },
+                    "report_log": {
+                        "attempted": true,
+                        "success": partial_reasons.is_empty(),
+                        "error": partial_reasons
+                            .first()
+                            .copied()
+                            .unwrap_or("")
+                    }
+                }
+            }
+        }),
+    )
+}
+
+#[test]
+fn direct_submit_runtime_executes_fault_details_skill_without_provider_path() {
+    let skill_root = build_direct_runtime_skill_root();
+    let transport = Arc::new(MockTransport::new(vec![success_browser_response(
+        1,
+        serde_json::json!({
+            "text": {
+                "fault_type": "outage",
+                "observed_at": "2026-03-15 09:00",
+                "affected_scope": "line-7"
+            }
+        }),
+    )]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        direct_runtime_test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+    let mut settings = SgClawSettings::from_legacy_deepseek_fields(
+        "unused-key".to_string(),
+        "http://127.0.0.1:9".to_string(),
+        "unused-model".to_string(),
+        Some(skill_root.clone()),
+    )
+    .unwrap();
+    settings.direct_submit_skill = Some("fault-details-report.collect_fault_details".to_string());
+
+    let summary = sgclaw::compat::direct_skill_runtime::execute_direct_submit_skill(
+        browser_tool,
+        "请采集 2026-03 的故障明细并返回结果",
+        &CompatTaskContext {
+            page_url: Some("https://95598.sgcc.com.cn/".to_string()),
+            ..CompatTaskContext::default()
+        },
+        PathBuf::from(env!("CARGO_MANIFEST_DIR")).as_path(),
+        &settings,
+    )
+    .unwrap();
+
+    assert!(summary.success);
+    assert!(summary.summary.contains("fault_type"));
+    let sent = transport.sent_messages();
+    assert!(sent.iter().all(|message| !matches!(message, AgentMessage::LogEntry { level, message } if level == "info" && message.contains("DeepSeek config loaded"))));
+    assert!(matches!(
+        &sent[0],
+        AgentMessage::Command {
+            seq,
+            action,
+            params,
+            security,
+        } if *seq == 1
+            && action == &Action::Eval
+            && security.expected_domain == "95598.sgcc.com.cn"
+            && params["script"].as_str().is_some_and(|script| script.contains("2026-03"))
+    ));
+}
+
+#[test]
+fn submit_task_uses_direct_skill_mode_without_llm_configuration() {
+    std::env::remove_var("DEEPSEEK_API_KEY");
+    std::env::remove_var("DEEPSEEK_BASE_URL");
+    std::env::remove_var("DEEPSEEK_MODEL");
+
+    let skill_root = build_direct_runtime_skill_root();
+    let runtime_context = direct_submit_runtime_context(&skill_root);
+    let transport = Arc::new(MockTransport::new(vec![success_browser_response(
+        1,
+        serde_json::json!({
+            "text": {
+                "fault_type": "outage",
+                "observed_at": "2026-03-15 09:00",
+                "affected_scope": "line-7",
+                "artifact_payload": "report artifact payload"
+            }
+        }),
+    )]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        direct_runtime_test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    handle_browser_message_with_context(
+        transport.as_ref(),
+        &browser_tool,
+        &runtime_context,
+        submit_fault_details_message(),
+    )
+    .unwrap();
+
+    let sent = transport.sent_messages();
+    let completion = direct_submit_completion(&sent).expect("task completion");
+
+    assert!(completion.0, "expected direct submit task to succeed: {sent:?}");
+    assert!(
+        completion.1.contains("report artifact payload"),
+        "expected report artifact payload in summary: {}",
+        completion.1
+    );
+    assert!(
+        !completion.1.contains("未配置大语言模型"),
+        "did not expect missing-llm summary: {}",
+        completion.1
+    );
+}
+
+#[test]
+fn submit_task_rejects_invalid_direct_submit_skill_config_before_routing() {
+    std::env::remove_var("DEEPSEEK_API_KEY");
+    std::env::remove_var("DEEPSEEK_BASE_URL");
+    std::env::remove_var("DEEPSEEK_MODEL");
+
+    let skill_root = build_direct_runtime_skill_root();
+    let workspace_root = std::env::temp_dir().join(format!(
+        "sgclaw-invalid-direct-submit-workspace-{}",
+        Uuid::new_v4()
+    ));
+    fs::create_dir_all(&workspace_root).unwrap();
+    let config_path = workspace_root.join("sgclaw_config.json");
+    fs::write(
+        &config_path,
+        serde_json::json!({
+            "providers": [],
+            "skillsDir": skill_root,
+            "directSubmitSkill": "fault-details-report"
+        })
+        .to_string(),
+    )
+    .unwrap();
+
+    let runtime_context = AgentRuntimeContext::new(Some(config_path), workspace_root);
+    let transport = Arc::new(MockTransport::new(vec![]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        direct_runtime_test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    handle_browser_message_with_context(
+        transport.as_ref(),
+        &browser_tool,
+        &runtime_context,
+        submit_fault_details_message(),
+    )
+    .unwrap();
+
+    let sent = transport.sent_messages();
+    assert!(matches!(
+        sent.last(),
+        Some(AgentMessage::TaskComplete { success, summary })
+            if !success && summary.contains("skill.tool")
+    ));
+    assert!(direct_submit_mode_logs(&sent).is_empty());
+    assert!(!sent.iter().any(|message| matches!(message, AgentMessage::Command { .. })));
+}
+
+#[test]
+fn submit_task_treats_partial_report_artifact_as_success_with_warning_summary() {
+    std::env::remove_var("DEEPSEEK_API_KEY");
+    std::env::remove_var("DEEPSEEK_BASE_URL");
+    std::env::remove_var("DEEPSEEK_MODEL");
+
+    let skill_root = build_direct_runtime_skill_root();
+    let runtime_context = direct_submit_runtime_context(&skill_root);
+    let transport = Arc::new(MockTransport::new(vec![report_artifact_browser_response(
+        1,
+        "partial",
+        &["report_log_failed"],
+        vec![serde_json::json!({ "qxdbh": "QX-1" })],
+        vec![serde_json::json!({ "index": 1 })],
+    )]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        direct_runtime_test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    handle_browser_message_with_context(
+        transport.as_ref(),
+        &browser_tool,
+        &runtime_context,
+        submit_fault_details_message(),
+    )
+    .unwrap();
+
+    let sent = transport.sent_messages();
+    let completion = direct_submit_completion(&sent).expect("task completion");
+
+    assert!(completion.0, "expected partial artifact to succeed: {sent:?}");
+    assert!(completion.1.contains("fault-details-report"));
+    assert!(completion.1.contains("2026-03"));
+    assert!(completion.1.contains("status=partial"));
+    assert!(completion.1.contains("detail_rows=1"));
+    assert!(completion.1.contains("summary_rows=1"));
+    assert!(completion.1.contains("report_log_failed"));
+}
+
+#[test]
+fn submit_task_treats_empty_report_artifact_as_success() {
+    std::env::remove_var("DEEPSEEK_API_KEY");
+    std::env::remove_var("DEEPSEEK_BASE_URL");
+    std::env::remove_var("DEEPSEEK_MODEL");
+
+    let skill_root = build_direct_runtime_skill_root();
+    let runtime_context = direct_submit_runtime_context(&skill_root);
+    let transport = Arc::new(MockTransport::new(vec![report_artifact_browser_response(
+        1,
+        "empty",
+        &[],
+        vec![],
+        vec![],
+    )]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        direct_runtime_test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    handle_browser_message_with_context(
+        transport.as_ref(),
+        &browser_tool,
+        &runtime_context,
+        submit_fault_details_message(),
+    )
+    .unwrap();
+
+    let sent = transport.sent_messages();
+    let completion = direct_submit_completion(&sent).expect("task completion");
+
+    assert!(completion.0, "expected empty artifact to succeed: {sent:?}");
+    assert!(completion.1.contains("status=empty"));
+    assert!(completion.1.contains("detail_rows=0"));
+}
+
+#[test]
+fn submit_task_treats_blocked_report_artifact_as_failure() {
+    std::env::remove_var("DEEPSEEK_API_KEY");
+    std::env::remove_var("DEEPSEEK_BASE_URL");
+    std::env::remove_var("DEEPSEEK_MODEL");
+
+    let skill_root = build_direct_runtime_skill_root();
+    let runtime_context = direct_submit_runtime_context(&skill_root);
+    let transport = Arc::new(MockTransport::new(vec![report_artifact_browser_response(
+        1,
+        "blocked",
+        &["selected_range_unavailable"],
+        vec![],
+        vec![],
+    )]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        direct_runtime_test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    handle_browser_message_with_context(
+        transport.as_ref(),
+        &browser_tool,
+        &runtime_context,
+        submit_fault_details_message(),
+    )
+    .unwrap();
+
+    let sent = transport.sent_messages();
+    let completion = direct_submit_completion(&sent).expect("task completion");
+
+    assert!(!completion.0, "expected blocked artifact to fail: {sent:?}");
+    assert!(completion.1.contains("status=blocked"));
+    assert!(completion.1.contains("selected_range_unavailable"));
+}
+
+#[test]
+fn submit_task_treats_error_report_artifact_as_failure() {
+    std::env::remove_var("DEEPSEEK_API_KEY");
+    std::env::remove_var("DEEPSEEK_BASE_URL");
+    std::env::remove_var("DEEPSEEK_MODEL");
+
+    let skill_root = build_direct_runtime_skill_root();
+    let runtime_context = direct_submit_runtime_context(&skill_root);
+    let transport = Arc::new(MockTransport::new(vec![report_artifact_browser_response(
+        1,
+        "error",
+        &["detail_normalization_failed"],
+        vec![],
+        vec![],
+    )]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        direct_runtime_test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    handle_browser_message_with_context(
+        transport.as_ref(),
+        &browser_tool,
+        &runtime_context,
+        submit_fault_details_message(),
+    )
+    .unwrap();
+
+    let sent = transport.sent_messages();
+    let completion = direct_submit_completion(&sent).expect("task completion");
+
+    assert!(!completion.0, "expected error artifact to fail: {sent:?}");
+    assert!(completion.1.contains("status=error"));
+    assert!(completion.1.contains("detail_normalization_failed"));
+}
+
+#[test]
+fn submit_task_routes_zhihu_hotlist_export_before_direct_submit() {
+    std::env::remove_var("DEEPSEEK_API_KEY");
+    std::env::remove_var("DEEPSEEK_BASE_URL");
+    std::env::remove_var("DEEPSEEK_MODEL");
+
+    let skill_root = build_direct_runtime_skill_root();
+    let runtime_context = direct_submit_runtime_context(&skill_root);
+    let transport = Arc::new(MockTransport::new(vec![]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        policy_for_domains(&["www.zhihu.com"]),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    handle_browser_message_with_context(
+        transport.as_ref(),
+        &browser_tool,
+        &runtime_context,
+        submit_zhihu_hotlist_export_message(),
+    )
+    .unwrap();
+
+    let sent = transport.sent_messages();
+    let mode_logs = direct_submit_mode_logs(&sent);
+    let completion = direct_submit_completion(&sent).expect("task completion");
+
+    assert_eq!(mode_logs, vec!["zeroclaw_process_message_primary".to_string()]);
+    assert!(
+        !completion.0,
+        "expected zhihu export without page context to fail before browser actions: {sent:?}"
+    );
+    assert!(
+        !completion
+            .1
+            .contains("direct submit skill requires page_url so expected_domain can be derived"),
+        "unexpected direct submit fallback: {sent:?}"
+    );
+}
+
+#[test]
+fn direct_skill_mode_logs_direct_skill_primary() {
+    std::env::remove_var("DEEPSEEK_API_KEY");
+    std::env::remove_var("DEEPSEEK_BASE_URL");
+    std::env::remove_var("DEEPSEEK_MODEL");
+
+    let skill_root = build_direct_runtime_skill_root();
+    let runtime_context = direct_submit_runtime_context(&skill_root);
+    let transport = Arc::new(MockTransport::new(vec![success_browser_response(
+        1,
+        serde_json::json!({
+            "text": {
+                "fault_type": "outage",
+                "observed_at": "2026-03-15 09:00",
+                "affected_scope": "line-7",
+                "artifact_payload": "report artifact payload"
+            }
+        }),
+    )]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        direct_runtime_test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    handle_browser_message_with_context(
+        transport.as_ref(),
+        &browser_tool,
+        &runtime_context,
+        submit_fault_details_message(),
+    )
+    .unwrap();
+
+    let sent = transport.sent_messages();
+    let mode_logs = direct_submit_mode_logs(&sent);
+
+    assert_eq!(mode_logs, vec!["direct_skill_primary".to_string()]);
+    assert!(
+        !mode_logs.iter().any(|mode| mode == "compat_llm_primary"),
+        "unexpected compat mode logs: {mode_logs:?}"
+    );
+    assert!(
+        !mode_logs
+            .iter()
+            .any(|mode| mode == "zeroclaw_process_message_primary"),
+        "unexpected zeroclaw mode logs: {mode_logs:?}"
+    );
+}
+
+#[test]
+fn production_submit_task_with_ws_and_direct_submit_config_routes_zhihu_before_direct_submit() {
+    let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());
+    std::env::set_var("SGCLAW_DISABLE_POST_EXPORT_OPEN", "1");
+    std::env::remove_var("DEEPSEEK_API_KEY");
+    std::env::remove_var("DEEPSEEK_BASE_URL");
+    std::env::remove_var("DEEPSEEK_MODEL");
+
+    let workspace_root = temp_workspace_root();
+    let (ws_url, _frames, ws_handle) = start_browser_ws_server();
+    let config_path = write_config(
+        &workspace_root,
+        "deepseek-test-key",
+        "http://127.0.0.1:9",
+        "deepseek-chat",
+        Some(real_skill_lib_root().to_str().unwrap()),
+        Some(&ws_url),
+        Some("fault-details-report.collect_fault_details"),
+    );
+
+    let transport = Arc::new(MockTransport::new(vec![]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+    let runtime_context = AgentRuntimeContext::new(Some(config_path), workspace_root);
+
+    handle_browser_message_with_context(
+        transport.as_ref(),
+        &browser_tool,
+        &runtime_context,
+        BrowserMessage::SubmitTask {
+            instruction: "打开知乎热榜,获取前10条数据,并导出 Excel".to_string(),
+            conversation_id: String::new(),
+            messages: vec![],
+            page_url: String::new(),
+            page_title: String::new(),
+        },
+    )
+    .unwrap();
+
+    ws_handle.join().unwrap();
+
+    let sent = transport.sent_messages();
+    let mode_logs = direct_submit_mode_logs(&sent);
+    let completion = direct_submit_completion(&sent).expect("task completion");
+
+    assert!(
+        mode_logs
+            .iter()
+            .any(|mode| mode == "zeroclaw_process_message_primary"),
+        "expected orchestration mode log before direct submit: {sent:?}"
+    );
+    assert!(
+        !mode_logs.iter().any(|mode| mode == "direct_skill_primary"),
+        "unexpected direct submit mode log for zhihu ws submit: {sent:?}"
+    );
+    assert!(completion.0, "expected zhihu ws submit to succeed: {sent:?}");
+    assert!(
+        !completion
+            .1
+            .contains("direct submit skill requires page_url so expected_domain can be derived"),
+        "unexpected direct-submit page_url failure: {sent:?}"
+    );
+
+    std::env::remove_var("SGCLAW_DISABLE_POST_EXPORT_OPEN");
+}
+
 #[test]
 fn production_submit_task_routes_zhihu_through_ws_backend_without_helper_bootstrap() {
     let _guard = env_lock().lock().unwrap_or_else(|err| err.into_inner());
@@ -179,17 +861,17 @@ fn production_submit_task_routes_zhihu_through_ws_backend_without_helper_bootstr
     std::env::remove_var("DEEPSEEK_MODEL");

     let workspace_root = temp_workspace_root();
+    let (ws_url, frames, ws_handle) = start_browser_ws_server();
     let config_path = write_config(
         &workspace_root,
         "deepseek-test-key",
         "http://127.0.0.1:9",
         "deepseek-chat",
         Some(real_skill_lib_root().to_str().unwrap()),
+        Some(&ws_url),
+        None,
     );
-
-    let (ws_url, frames, ws_handle) = start_browser_ws_server();
-    std::env::set_var("SGCLAW_BROWSER_WS_URL", &ws_url);

     let transport = Arc::new(MockTransport::new(vec![]));
     let browser_tool = BrowserPipeTool::new(
         transport.clone(),
@@ -306,7 +988,7 @@ fn production_submit_task_does_not_route_into_legacy_runtime_without_llm_config(
     let transport = Arc::new(MockTransport::new(vec![]));
     let browser_tool = BrowserPipeTool::new(
         transport.clone(),
-        test_policy(),
+        provider_path_test_policy(),
         vec![1, 2, 3, 4, 5, 6, 7, 8],
     )
     .with_response_timeout(Duration::from_secs(1));
@@ -32,6 +32,175 @@ fn test_policy() -> MacPolicy {
|
|||||||
.unwrap()
|
.unwrap()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[tokio::test]
|
||||||
|
async fn execute_browser_script_tool_runs_packaged_script_with_expected_domain() {
|
||||||
|
let skill_dir = unique_temp_dir("sgclaw-browser-script-helper");
|
||||||
|
let scripts_dir = skill_dir.join("scripts");
|
||||||
|
fs::create_dir_all(&scripts_dir).unwrap();
|
||||||
|
fs::write(
|
||||||
|
scripts_dir.join("extract_hotlist.js"),
|
||||||
|
"return { wrapped_args: args, source: \"packaged script\" };\n",
|
||||||
|
)
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let transport = Arc::new(MockTransport::new(vec![BrowserMessage::Response {
|
||||||
|
seq: 1,
|
||||||
|
success: true,
|
||||||
|
data: json!({
|
||||||
|
"text": {
|
||||||
|
"sheet_name": "知乎热榜",
|
||||||
|
"rows": [[1, "标题", "10条"]]
|
||||||
|
}
|
||||||
|
}),
|
||||||
|
aom_snapshot: vec![],
|
||||||
|
timing: Timing {
|
||||||
|
queue_ms: 1,
|
||||||
|
exec_ms: 5,
|
||||||
|
},
|
||||||
|
}]));
|
||||||
|
let browser_tool = BrowserPipeTool::new(
|
||||||
|
transport.clone(),
|
||||||
|
test_policy(),
|
||||||
|
vec![1, 2, 3, 4, 5, 6, 7, 8],
|
||||||
|
)
|
||||||
|
.with_response_timeout(Duration::from_secs(1));
|
||||||
|
|
||||||
|
let mut tool_args = HashMap::new();
|
||||||
|
tool_args.insert("top_n".to_string(), "How many rows to extract".to_string());
|
||||||
|
let skill_tool = SkillTool {
|
||||||
|
name: "extract_hotlist".to_string(),
|
||||||
|
description: "Extract structured hotlist rows".to_string(),
|
||||||
|
kind: "browser_script".to_string(),
|
||||||
|
command: "scripts/extract_hotlist.js".to_string(),
|
||||||
|
args: tool_args,
|
||||||
|
};
|
||||||
|
|
||||||
|
let result = execute_browser_script_tool(
|
||||||
|
&skill_tool,
|
||||||
|
&skill_dir,
|
||||||
|
&PipeBrowserBackend::from_inner(browser_tool),
|
||||||
|
json!({
|
||||||
|
"expected_domain": "https://WWW.ZHIHU.COM/hot?foo=bar",
|
||||||
|
"top_n": "10"
|
||||||
|
}),
|
||||||
|
)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let sent = transport.sent_messages();
|
||||||
|
assert!(result.success);
|
||||||
|
assert_eq!(
|
||||||
|
serde_json::from_str::<serde_json::Value>(&result.output).unwrap(),
|
||||||
|
json!({
|
||||||
|
"sheet_name": "知乎热榜",
|
||||||
|
"rows": [[1, "标题", "10条"]]
|
||||||
|
})
|
||||||
|
);
|
||||||
|
assert!(matches!(
|
||||||
|
&sent[0],
|
||||||
|
AgentMessage::Command {
|
||||||
|
action,
|
||||||
|
params,
|
||||||
|
security,
|
||||||
|
..
|
||||||
|
} if action == &Action::Eval
|
||||||
|
&& security.expected_domain == "www.zhihu.com"
|
||||||
|
&& params["script"].as_str().unwrap().contains("\"expected_domain\":\"www.zhihu.com\"")
|
||||||
|
&& params["script"].as_str().unwrap().contains("\"top_n\":\"10\"")
|
||||||
|
&& params["script"].as_str().unwrap().contains("source: \"packaged script\"")
|
||||||
|
));
|
||||||
|
}
+
+#[tokio::test]
+async fn execute_browser_script_tool_rejects_non_browser_script_tool_kind() {
+    let skill_dir = unique_temp_dir("sgclaw-browser-script-helper-invalid-kind");
+    let scripts_dir = skill_dir.join("scripts");
+    fs::create_dir_all(&scripts_dir).unwrap();
+    fs::write(scripts_dir.join("extract_hotlist.js"), "return 'unused';\n").unwrap();
+
+    let transport = Arc::new(MockTransport::new(vec![]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    let mut tool_args = HashMap::new();
+    tool_args.insert("top_n".to_string(), "How many rows to extract".to_string());
+    let skill_tool = SkillTool {
+        name: "extract_hotlist".to_string(),
+        description: "Extract structured hotlist rows".to_string(),
+        kind: "shell".to_string(),
+        command: "scripts/extract_hotlist.js".to_string(),
+        args: tool_args,
+    };
+
+    let result = execute_browser_script_tool(
+        &skill_tool,
+        &skill_dir,
+        &PipeBrowserBackend::from_inner(browser_tool),
+        json!({
+            "expected_domain": "www.zhihu.com",
+            "top_n": "10"
+        }),
+    )
+    .await
+    .unwrap();
+
+    assert!(!result.success);
+    assert_eq!(
+        result.error.as_deref(),
+        Some("browser script tool kind must be browser_script, got shell")
+    );
+    assert!(transport.sent_messages().is_empty());
+}
+
+#[tokio::test]
+async fn execute_browser_script_tool_rejects_missing_expected_domain() {
+    let skill_dir = unique_temp_dir("sgclaw-browser-script-helper-invalid-domain");
+    let scripts_dir = skill_dir.join("scripts");
+    fs::create_dir_all(&scripts_dir).unwrap();
+    fs::write(scripts_dir.join("extract_hotlist.js"), "return 'unused';\n").unwrap();
+
+    let transport = Arc::new(MockTransport::new(vec![]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    let mut tool_args = HashMap::new();
+    tool_args.insert("top_n".to_string(), "How many rows to extract".to_string());
+    let skill_tool = SkillTool {
+        name: "extract_hotlist".to_string(),
+        description: "Extract structured hotlist rows".to_string(),
+        kind: "browser_script".to_string(),
+        command: "scripts/extract_hotlist.js".to_string(),
+        args: tool_args,
+    };
+
+    let result = execute_browser_script_tool(
+        &skill_tool,
+        &skill_dir,
+        &PipeBrowserBackend::from_inner(browser_tool),
+        json!({
+            "expected_domain": " ",
+            "top_n": "10"
+        }),
+    )
+    .await
+    .unwrap();
+
+    assert!(!result.success);
+    assert_eq!(
+        result.error.as_deref(),
+        Some("expected_domain must be a non-empty string, got \" \"")
+    );
+    assert!(transport.sent_messages().is_empty());
+}
+
||||||
 #[tokio::test]
 async fn browser_script_skill_tool_executes_packaged_script_via_eval() {
     let skill_dir = unique_temp_dir("sgclaw-browser-script-skill");
@@ -110,14 +279,98 @@ return {
             ..
         } if action == &Action::Eval
             && security.expected_domain == "www.zhihu.com"
-            && params["script"].as_str().unwrap().contains("const args = {\"top_n\":\"10\"};")
+            && params["script"].as_str().unwrap().contains("\"expected_domain\":\"www.zhihu.com\"")
+            && params["script"].as_str().unwrap().contains("\"top_n\":\"10\"")
             && params["script"].as_str().unwrap().contains("return {")
     ));
 }
+
+#[tokio::test]
+async fn browser_script_skill_tool_executes_script_directly_under_skill_root() {
+    let skill_root = unique_temp_dir("sgclaw-browser-script-direct-root");
+    let script_name = "extract_hotlist_direct.js";
+    let script_path = skill_root.join(script_name);
+    fs::write(
+        &script_path,
+        r#"
+return {
+    sheet_name: "知乎热榜",
+    rows: [[1, "标题", args.top_n]]
+};
+"#,
+    )
+    .unwrap();
+
+    let transport = Arc::new(MockTransport::new(vec![BrowserMessage::Response {
+        seq: 1,
+        success: true,
+        data: json!({
+            "text": {
+                "sheet_name": "知乎热榜",
+                "rows": [[1, "标题", "10条"]]
+            }
+        }),
+        aom_snapshot: vec![],
+        timing: Timing {
+            queue_ms: 1,
+            exec_ms: 5,
+        },
+    }]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+    let backend: Arc<dyn BrowserBackend> = Arc::new(PipeBrowserBackend::from_inner(browser_tool));
+
+    let mut args = HashMap::new();
+    args.insert("top_n".to_string(), "How many rows to extract".to_string());
+    let skill_tool = SkillTool {
+        name: "extract_hotlist".to_string(),
+        description: "Extract structured hotlist rows".to_string(),
+        kind: "browser_script".to_string(),
+        command: script_name.to_string(),
+        args,
+    };
+    let tool = BrowserScriptSkillTool::new("zhihu-hotlist", &skill_tool, &skill_root, backend)
+        .unwrap();
+
+    let result = tool
+        .execute(json!({
+            "expected_domain": "https://www.zhihu.com/hot",
+            "top_n": "10条"
+        }))
+        .await
+        .unwrap();
+
+    let sent = transport.sent_messages();
+    assert!(result.success);
+    assert_eq!(
+        serde_json::from_str::<serde_json::Value>(&result.output).unwrap(),
+        json!({
+            "sheet_name": "知乎热榜",
+            "rows": [[1, "标题", "10条"]]
+        })
+    );
+    assert!(matches!(
+        &sent[0],
+        AgentMessage::Command {
+            action,
+            params,
+            security,
+            ..
+        } if action == &Action::Eval
+            && security.expected_domain == "www.zhihu.com"
+            && params["script"].as_str().unwrap().contains("\"expected_domain\":\"www.zhihu.com\"")
+            && params["script"].as_str().unwrap().contains("\"top_n\":\"10条\"")
+            && params["script"].as_str().unwrap().contains("rows: [[1, \"标题\", args.top_n]]")
+    ));
+}
+
 #[tokio::test]
 async fn browser_script_helper_executes_packaged_script_via_eval() {
-    let skill_dir = unique_temp_dir("sgclaw-browser-script-helper");
+    let skill_dir = unique_temp_dir("sgclaw-browser-script-helper-fault-details");
     let scripts_dir = skill_dir.join("scripts");
     fs::create_dir_all(&scripts_dir).unwrap();
     fs::write(
@@ -152,7 +405,7 @@ return {
         vec![1, 2, 3, 4, 5, 6, 7, 8],
     )
     .with_response_timeout(Duration::from_secs(1));
-    let backend: Arc<dyn BrowserBackend> = Arc::new(PipeBrowserBackend::from_inner(browser_tool));
+    let backend = PipeBrowserBackend::from_inner(browser_tool);
 
     let mut args = HashMap::new();
     args.insert("period".to_string(), "Target report period".to_string());
@@ -164,10 +417,15 @@ return {
         args,
     };
 
-    let result = execute_browser_script_tool(&skill_tool, &skill_dir, backend, json!({
-        "expected_domain": "https://www.zhihu.com/hot",
-        "period": "2026-04"
-    }))
+    let result = execute_browser_script_tool(
+        &skill_tool,
+        &skill_dir,
+        &backend,
+        json!({
+            "expected_domain": "https://www.zhihu.com/hot",
+            "period": "2026-04"
+        }),
+    )
     .await
     .unwrap();
 
@@ -189,7 +447,8 @@ return {
             ..
         } if action == &Action::Eval
            && security.expected_domain == "www.zhihu.com"
-            && params["script"].as_str().unwrap().contains("const args = {\"period\":\"2026-04\"};")
+            && params["script"].as_str().unwrap().contains("\"expected_domain\":\"www.zhihu.com\"")
+            && params["script"].as_str().unwrap().contains("\"period\":\"2026-04\"")
             && params["script"].as_str().unwrap().contains("sheet_name")
     ));
 }
@@ -208,7 +467,7 @@ async fn browser_script_helper_requires_expected_domain() {
         vec![1, 2, 3, 4, 5, 6, 7, 8],
     )
     .with_response_timeout(Duration::from_secs(1));
-    let backend: Arc<dyn BrowserBackend> = Arc::new(PipeBrowserBackend::from_inner(browser_tool));
+    let backend = PipeBrowserBackend::from_inner(browser_tool);
 
     let mut args = HashMap::new();
     args.insert("period".to_string(), "Target report period".to_string());
@@ -220,9 +479,14 @@ async fn browser_script_helper_requires_expected_domain() {
         args,
     };
 
-    let result = execute_browser_script_tool(&skill_tool, &skill_dir, backend, json!({
-        "period": "2026-04"
-    }))
+    let result = execute_browser_script_tool(
+        &skill_tool,
+        &skill_dir,
+        &backend,
+        json!({
+            "period": "2026-04"
+        }),
+    )
     .await
     .unwrap();
 
@@ -234,6 +498,193 @@ async fn browser_script_helper_requires_expected_domain() {
     assert!(transport.sent_messages().is_empty());
 }
+
+#[tokio::test]
+async fn execute_browser_script_tool_preserves_structured_report_artifact_payload() {
+    let skill_dir = unique_temp_dir("sgclaw-browser-script-helper-report-artifact");
+    let scripts_dir = skill_dir.join("scripts");
+    fs::create_dir_all(&scripts_dir).unwrap();
+    fs::write(
+        scripts_dir.join("collect_fault_details.js"),
+        r#"
+return {
+    type: "report-artifact",
+    report_name: "fault-details-report",
+    period: args.period,
+    selected_range: {
+        start: "2026-03-08 16:00:00",
+        end: "2026-03-09 16:00:00"
+    },
+    columns: ["qxdbh"],
+    rows: [{ qxdbh: "QX-1" }],
+    sections: [{ name: "summary-sheet", columns: ["index"], rows: [{ index: 1 }] }],
+    counts: { detail_rows: 1, summary_rows: 1 },
+    status: "partial",
+    partial_reasons: ["report_log_failed"],
+    downstream: {
+        export: { attempted: true, success: true, path: "http://localhost/export.xlsx" },
+        report_log: { attempted: true, success: false, error: "500" }
+    }
+};
+"#,
+    )
+    .unwrap();
+
+    let transport = Arc::new(MockTransport::new(vec![BrowserMessage::Response {
+        seq: 1,
+        success: true,
+        data: json!({
+            "text": {
+                "type": "report-artifact",
+                "report_name": "fault-details-report",
+                "period": "2026-03",
+                "selected_range": {
+                    "start": "2026-03-08 16:00:00",
+                    "end": "2026-03-09 16:00:00"
+                },
+                "columns": ["qxdbh"],
+                "rows": [{ "qxdbh": "QX-1" }],
+                "sections": [{ "name": "summary-sheet", "columns": ["index"], "rows": [{ "index": 1 }] }],
+                "counts": { "detail_rows": 1, "summary_rows": 1 },
+                "status": "partial",
+                "partial_reasons": ["report_log_failed"],
+                "downstream": {
+                    "export": { "attempted": true, "success": true, "path": "http://localhost/export.xlsx" },
+                    "report_log": { "attempted": true, "success": false, "error": "500" }
+                }
+            }
+        }),
+        aom_snapshot: vec![],
+        timing: Timing {
+            queue_ms: 1,
+            exec_ms: 5,
+        },
+    }]));
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        test_policy(),
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+    let backend = PipeBrowserBackend::from_inner(browser_tool);
+
+    let mut tool_args = HashMap::new();
+    tool_args.insert("period".to_string(), "YYYY-MM period to collect".to_string());
+    let skill_tool = SkillTool {
+        name: "collect_fault_details".to_string(),
+        description: "Collect structured fault details".to_string(),
+        kind: "browser_script".to_string(),
+        command: "scripts/collect_fault_details.js".to_string(),
+        args: tool_args,
+    };
+
+    let result = execute_browser_script_tool(
+        &skill_tool,
+        &skill_dir,
+        &backend,
+        json!({
+            "expected_domain": "https://www.zhihu.com/",
+            "period": "2026-03"
+        }),
+    )
+    .await
+    .unwrap();
+
+    assert!(result.success);
+    assert_eq!(
+        serde_json::from_str::<serde_json::Value>(&result.output).unwrap(),
+        json!({
+            "type": "report-artifact",
+            "report_name": "fault-details-report",
+            "period": "2026-03",
+            "selected_range": {
+                "start": "2026-03-08 16:00:00",
+                "end": "2026-03-09 16:00:00"
+            },
+            "columns": ["qxdbh"],
+            "rows": [{ "qxdbh": "QX-1" }],
+            "sections": [{ "name": "summary-sheet", "columns": ["index"], "rows": [{ "index": 1 }] }],
+            "counts": { "detail_rows": 1, "summary_rows": 1 },
+            "status": "partial",
+            "partial_reasons": ["report_log_failed"],
+            "downstream": {
+                "export": { "attempted": true, "success": true, "path": "http://localhost/export.xlsx" },
+                "report_log": { "attempted": true, "success": false, "error": "500" }
+            }
+        })
+    );
+}
+
+#[tokio::test]
+async fn execute_browser_script_tool_awaits_async_script() {
+    let skill_dir = unique_temp_dir("sgclaw-browser-script-async");
+    let scripts_dir = skill_dir.join("scripts");
+    fs::create_dir_all(&scripts_dir).unwrap();
+    // Async script that returns a Promise.
+    fs::write(
+        scripts_dir.join("async_extract.js"),
+        "return (async function() { return { async: true, args: args }; })();\n",
+    )
+    .unwrap();
+
+    let transport = Arc::new(MockTransport::new(vec![BrowserMessage::Response {
+        seq: 1,
+        success: true,
+        data: json!({
+            "text": {
+                "async": true,
+                "args": { "expected_domain": "example.com" }
+            }
+        }),
+        aom_snapshot: vec![],
+        timing: Timing {
+            queue_ms: 1,
+            exec_ms: 5,
+        },
+    }]));
+
+    let policy_json = MacPolicy::from_json_str(
+        r#"{
+            "version": "1.0",
+            "domains": { "allowed": ["www.zhihu.com", "example.com"] },
+            "pipe_actions": {
+                "allowed": ["click", "type", "navigate", "getText", "eval"],
+                "blocked": []
+            }
+        }"#,
+    )
+    .unwrap();
+
+    let browser_tool = BrowserPipeTool::new(
+        transport.clone(),
+        policy_json,
+        vec![1, 2, 3, 4, 5, 6, 7, 8],
+    )
+    .with_response_timeout(Duration::from_secs(1));
+
+    let skill_tool = SkillTool {
+        name: "async_extract".to_string(),
+        description: "Extract data asynchronously".to_string(),
+        kind: "browser_script".to_string(),
+        command: "scripts/async_extract.js".to_string(),
+        args: HashMap::new(),
+    };
+
+    let result = execute_browser_script_tool(
+        &skill_tool,
+        &skill_dir,
+        &PipeBrowserBackend::from_inner(browser_tool),
+        json!({
+            "expected_domain": "example.com"
+        }),
+    )
+    .await
+    .unwrap();
+
+    assert!(result.success);
+    let output = serde_json::from_str::<serde_json::Value>(&result.output).unwrap();
+    assert_eq!(output["async"], true);
+}
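The async test above relies on the packaged script being evaluated as a single expression whose result may be a Promise, with the merged arguments available as `args`. A minimal sketch of how such a script body could be wrapped for `eval`, assuming the args object is serialized to JSON and bound inside an async IIFE (`wrap_packaged_script` is a hypothetical name, not the crate's actual wrapper):

```rust
// Hypothetical sketch: serialize the merged args to JSON, bind them to
// `args`, and run the body inside an async IIFE so scripts that return
// a Promise are awaited by the browser-side eval machinery.
fn wrap_packaged_script(body: &str, args_json: &str) -> String {
    format!("(async function() {{ const args = {args_json}; {body} }})()")
}

fn main() {
    let script = wrap_packaged_script("return args.top_n;", "{\"top_n\":\"10\"}");
    // The serialized args and the original body both survive verbatim.
    assert!(script.contains("const args = {\"top_n\":\"10\"};"));
    assert!(script.contains("return args.top_n;"));
    assert!(script.starts_with("(async function()"));
}
```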
 
 fn unique_temp_dir(prefix: &str) -> PathBuf {
     let nanos = SystemTime::now()
         .duration_since(UNIX_EPOCH)
@@ -4,8 +4,8 @@ use std::sync::{Mutex, OnceLock};
 
 use sgclaw::compat::config_adapter::{
     build_zeroclaw_config, build_zeroclaw_config_from_settings,
-    build_zeroclaw_config_from_sgclaw_settings, resolve_scene_skills_dir_path, resolve_skills_dir,
-    zeroclaw_default_skills_dir, zeroclaw_workspace_dir,
+    build_zeroclaw_config_from_sgclaw_settings, resolve_skills_dir, zeroclaw_default_skills_dir,
+    zeroclaw_workspace_dir,
 };
 use sgclaw::config::{
     BrowserBackend, DeepSeekSettings, OfficeBackend, PlannerMode, SgClawSettings, SkillsPromptMode,
@@ -47,7 +47,7 @@ fn zeroclaw_config_adapter_uses_deterministic_workspace_dir() {
         api_key: "key".to_string(),
         base_url: "https://proxy.example.com/v1".to_string(),
         model: "deepseek-reasoner".to_string(),
-        skills_dir: Vec::new(),
+        skills_dir: None,
     };
 
     let workspace_dir = zeroclaw_workspace_dir(Path::new("/var/lib/sgclaw"));
@@ -66,7 +66,7 @@ fn zeroclaw_config_adapter_uses_deterministic_workspace_dir() {
     );
     assert_eq!(
         resolve_skills_dir(Path::new("/var/lib/sgclaw"), &settings),
-        vec![zeroclaw_default_skills_dir(Path::new("/var/lib/sgclaw"))]
+        zeroclaw_default_skills_dir(Path::new("/var/lib/sgclaw"))
     );
 }
 
@@ -92,7 +92,7 @@ fn deepseek_settings_reload_from_browser_config_path_after_file_changes() {
     assert_eq!(first.api_key, "sk-first");
     assert_eq!(first.base_url, "https://api.deepseek.com");
     assert_eq!(first.model, "deepseek-chat");
-    assert!(first.skills_dir.is_empty());
+    assert!(first.skills_dir.is_none());
 
     fs::write(
         &config_path,
@@ -111,23 +111,23 @@ fn deepseek_settings_reload_from_browser_config_path_after_file_changes() {
     assert_eq!(second.api_key, "sk-second");
     assert_eq!(second.base_url, "https://proxy.example.com/v1");
     assert_eq!(second.model, "deepseek-reasoner");
-    assert_eq!(second.skills_dir, vec![root.join("skill_lib")]);
+    assert_eq!(second.skills_dir, Some(root.join("skill_lib")));
 }
 
 #[test]
-fn resolve_skills_dir_prefers_nested_skills_subdirectory_for_configured_repo_root() {
+fn ws_cleanup_resolves_single_configured_skills_dir() {
     let root = std::env::temp_dir().join(format!("sgclaw-skills-{}", Uuid::new_v4()));
     fs::create_dir_all(root.join("skill_lib/skills")).unwrap();
     let settings = DeepSeekSettings {
         api_key: "key".to_string(),
         base_url: "https://api.deepseek.com".to_string(),
         model: "deepseek-chat".to_string(),
-        skills_dir: vec![root.join("skill_lib")],
+        skills_dir: Some(root.join("skill_lib")),
     };
 
     let resolved = resolve_skills_dir(&root, &settings);
 
-    assert_eq!(resolved, vec![root.join("skill_lib/skills")]);
+    assert_eq!(resolved, root.join("skill_lib/skills"));
 }
 
 #[test]
@@ -139,41 +139,12 @@ fn resolve_skills_dir_preserves_absolute_configured_skills_directory() {
         api_key: "key".to_string(),
         base_url: "https://api.deepseek.com".to_string(),
         model: "deepseek-chat".to_string(),
-        skills_dir: vec![external_skills.clone()],
+        skills_dir: Some(external_skills.clone()),
     };
 
     let resolved = resolve_skills_dir(&root, &settings);
 
-    assert_eq!(resolved, vec![external_skills]);
+    assert_eq!(resolved, external_skills);
-}
-
-#[test]
-fn resolve_skills_dir_uses_skills_child_for_external_staged_root() {
-    let root = std::env::temp_dir().join(format!("sgclaw-skills-{}", Uuid::new_v4()));
-    let staged_root = root.join("external/skill_staging");
-    fs::create_dir_all(staged_root.join("skills")).unwrap();
-    fs::create_dir_all(staged_root.join("scenes")).unwrap();
-    let settings = DeepSeekSettings {
-        api_key: "key".to_string(),
-        base_url: "https://api.deepseek.com".to_string(),
-        model: "deepseek-chat".to_string(),
-        skills_dir: vec![staged_root.clone()],
-    };
-
-    let resolved = resolve_skills_dir(&root, &settings);
-
-    assert_eq!(resolved, vec![staged_root.join("skills")]);
-}
-
-#[test]
-fn resolve_scene_skills_dir_path_prefers_staged_skills_child_under_project_root() {
-    let root = std::env::temp_dir().join(format!("sgclaw-scene-skills-{}", Uuid::new_v4()));
-    let top_level_skills = root.join("project/skills");
-    fs::create_dir_all(top_level_skills.join("skill_staging/skills")).unwrap();
-
-    let resolved = resolve_scene_skills_dir_path(top_level_skills.clone());
-
-    assert_eq!(resolved, top_level_skills.join("skill_staging/skills"));
 }
 
 #[test]
@@ -182,7 +153,7 @@ fn sgclaw_settings_default_to_compact_skills_and_browser_attached_profile() {
         "sk-test".to_string(),
         "https://api.deepseek.com".to_string(),
         "deepseek-chat".to_string(),
-        Vec::new(),
+        None,
     )
     .unwrap();
 
@@ -190,6 +161,60 @@ fn sgclaw_settings_default_to_compact_skills_and_browser_attached_profile() {
     assert_eq!(settings.skills_prompt_mode, SkillsPromptMode::Compact);
 }
+
+#[test]
+fn sgclaw_settings_load_direct_submit_only_config_and_resolve_relative_skills_dir() {
+    let root = std::env::temp_dir().join(format!("sgclaw-direct-submit-only-config-{}", Uuid::new_v4()));
+    fs::create_dir_all(&root).unwrap();
+    let config_path = root.join("sgclaw_config.json");
+
+    fs::write(
+        &config_path,
+        r#"{
+            "providers": [],
+            "skillsDir": "skill_lib",
+            "directSubmitSkill": "fault-details-report.collect_fault_details"
+        }"#,
+    )
+    .unwrap();
+
+    let settings = SgClawSettings::load(Some(config_path.as_path()))
+        .unwrap()
+        .expect("expected sgclaw settings from config file");
+
+    assert_eq!(
+        settings.direct_submit_skill.as_deref(),
+        Some("fault-details-report.collect_fault_details")
+    );
+    assert_eq!(settings.skills_dir, Some(root.join("skill_lib")));
+}
+
+#[test]
+fn sgclaw_settings_reject_invalid_direct_submit_skill_format() {
+    let root = std::env::temp_dir().join(format!(
+        "sgclaw-invalid-direct-submit-skill-{}",
+        Uuid::new_v4()
+    ));
+    fs::create_dir_all(&root).unwrap();
+    let config_path = root.join("sgclaw_config.json");
+
+    fs::write(
+        &config_path,
+        r#"{
+            "providers": [],
+            "skillsDir": "skill_lib",
+            "directSubmitSkill": "fault-details-report"
+        }"#,
+    )
+    .unwrap();
+
+    let err = SgClawSettings::load(Some(config_path.as_path()))
+        .expect_err("expected invalid directSubmitSkill format");
+    let message = err.to_string();
+
+    assert!(message.contains("directSubmitSkill"));
+    assert!(message.contains("skill.tool"));
+}
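The two config tests above imply a `skill.tool` shape for `directSubmitSkill`: a skill name and a tool name joined by a single dot, with an error mentioning both the field and the expected format otherwise. A minimal sketch of such a check, assuming both halves must be non-empty (`validate_direct_submit_skill` is a hypothetical name, not the crate's API):

```rust
// Hypothetical sketch of the "skill.tool" format validation the config
// tests imply: split on the first '.', require non-empty skill and tool.
fn validate_direct_submit_skill(value: &str) -> Result<(), String> {
    let mut parts = value.splitn(2, '.');
    match (parts.next(), parts.next()) {
        (Some(skill), Some(tool)) if !skill.is_empty() && !tool.is_empty() => Ok(()),
        _ => Err(format!(
            "directSubmitSkill must use the skill.tool format, got {value:?}"
        )),
    }
}

fn main() {
    // Matches the accepted and rejected fixtures used by the tests above.
    assert!(validate_direct_submit_skill("fault-details-report.collect_fault_details").is_ok());
    assert!(validate_direct_submit_skill("fault-details-report").is_err());
}
```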
|
||||||
|
|
||||||
#[test]
|
#[test]
|
||||||
fn sgclaw_settings_load_new_runtime_fields_from_browser_config() {
|
fn sgclaw_settings_load_new_runtime_fields_from_browser_config() {
|
||||||
let root = std::env::temp_dir().join(format!("sgclaw-runtime-config-{}", Uuid::new_v4()));
|
let root = std::env::temp_dir().join(format!("sgclaw-runtime-config-{}", Uuid::new_v4()));
|
||||||
@@ -216,10 +241,29 @@ fn sgclaw_settings_load_new_runtime_fields_from_browser_config() {
|
|||||||
|
|
||||||
assert_eq!(settings.runtime_profile, RuntimeProfile::GeneralAssistant);
|
assert_eq!(settings.runtime_profile, RuntimeProfile::GeneralAssistant);
|
||||||
assert_eq!(settings.skills_prompt_mode, SkillsPromptMode::Full);
|
assert_eq!(settings.skills_prompt_mode, SkillsPromptMode::Full);
|
||||||
assert_eq!(settings.skills_dir, vec![root.join("skill_lib")]);
|
assert_eq!(settings.skills_dir, Some(root.join("skill_lib")));
|
||||||
assert_eq!(config.skills.prompt_injection_mode, SkillsPromptMode::Full);
|
assert_eq!(config.skills.prompt_injection_mode, SkillsPromptMode::Full);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[test]
|
||||||
|
fn ws_cleanup_rejects_array_style_skills_dir_config() {
|
||||||
|
let root = std::env::temp_dir().join(format!("sgclaw-config-{}", uuid::Uuid::new_v4()));
|
||||||
|
std::fs::create_dir_all(&root).unwrap();
|
||||||
|
let config_path = root.join("sgclaw_config.json");
|
||||||
|
std::fs::write(
|
||||||
|
&config_path,
|
||||||
|
r#"{
|
||||||
|
"apiKey": "sk-test",
|
||||||
|
"baseUrl": "https://api.deepseek.com",
|
||||||
|
"model": "deepseek-chat",
|
||||||
|
"skillsDir": ["skill_lib", "skill_staging"]
|
||||||
|
}"#,
|
||||||
|
)
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
assert!(sgclaw::config::SgClawSettings::load(Some(config_path.as_path())).is_err());
|
||||||
|
}
|
||||||
|
|
||||||
#[test]
|
#[test]
|
||||||
fn sgclaw_settings_load_browser_ws_url_from_browser_config() {
|
fn sgclaw_settings_load_browser_ws_url_from_browser_config() {
|
||||||
let root = std::env::temp_dir().join(format!("sgclaw-browser-ws-config-{}", Uuid::new_v4()));
|
let root = std::env::temp_dir().join(format!("sgclaw-browser-ws-config-{}", Uuid::new_v4()));
|
||||||
@@ -280,7 +324,7 @@ fn browser_attached_config_uses_low_temperature_for_deterministic_execution() {
|
|||||||
"sk-test".to_string(),
|
"sk-test".to_string(),
|
||||||
"https://api.deepseek.com".to_string(),
|
"https://api.deepseek.com".to_string(),
|
||||||
"deepseek-chat".to_string(),
|
"deepseek-chat".to_string(),
|
||||||
Vec::new(),
|
None,
|
||||||
)
|
)
|
||||||
.unwrap();
|
.unwrap();
|
||||||
|
|
||||||
|
|||||||
@@ -17,7 +17,7 @@ async fn compat_cron_adapter_creates_lists_and_runs_due_agent_jobs() {
|
|||||||
api_key: "key".to_string(),
|
api_key: "key".to_string(),
|
||||||
base_url: "https://api.deepseek.com".to_string(),
|
base_url: "https://api.deepseek.com".to_string(),
|
||||||
model: "deepseek-chat".to_string(),
|
model: "deepseek-chat".to_string(),
|
||||||
skills_dir: Vec::new(),
|
skills_dir: None,
|
||||||
};
|
};
|
||||||
let workspace_root = workspace_root("sgclaw-cron");
|
let workspace_root = workspace_root("sgclaw-cron");
|
||||||
let config = build_zeroclaw_config_from_settings(Path::new(&workspace_root), &settings);
|
let config = build_zeroclaw_config_from_settings(Path::new(&workspace_root), &settings);
|
||||||
|
|||||||
@@ -16,7 +16,7 @@ async fn compat_memory_adapter_uses_workspace_local_sqlite_backend() {
|
|||||||
api_key: "key".to_string(),
|
api_key: "key".to_string(),
|
||||||
base_url: "https://api.deepseek.com".to_string(),
|
base_url: "https://api.deepseek.com".to_string(),
|
||||||
model: "deepseek-chat".to_string(),
|
model: "deepseek-chat".to_string(),
|
||||||
skills_dir: Vec::new(),
|
skills_dir: None,
|
||||||
};
|
};
|
||||||
let workspace_root = workspace_root("sgclaw-memory");
|
let workspace_root = workspace_root("sgclaw-memory");
|
||||||
let config = build_zeroclaw_config_from_settings(Path::new(&workspace_root), &settings);
|
let config = build_zeroclaw_config_from_settings(Path::new(&workspace_root), &settings);
|
||||||
|
@@ -1,10 +1,12 @@
+use std::fs::File;
+use std::io::Read;
 use std::path::PathBuf;
-use std::process::Command as ProcessCommand;

 use serde_json::json;
 use sgclaw::compat::openxml_office_tool::OpenXmlOfficeTool;
 use uuid::Uuid;
 use zeroclaw::tools::Tool;
+use zip::ZipArchive;

 fn temp_workspace_root() -> PathBuf {
     let root = std::env::temp_dir().join(format!("sgclaw-openxml-office-{}", Uuid::new_v4()));
@@ -12,6 +14,15 @@ fn temp_workspace_root() -> PathBuf {
     root
 }

+fn read_sheet_xml(output_path: &std::path::Path) -> String {
+    let file = File::open(output_path).unwrap();
+    let mut archive = ZipArchive::new(file).unwrap();
+    let mut entry = archive.by_name("xl/worksheets/sheet1.xml").unwrap();
+    let mut xml = String::new();
+    entry.read_to_string(&mut xml).unwrap();
+    xml
+}
+
 #[tokio::test]
 async fn openxml_office_tool_renders_hotlist_xlsx_from_rows() {
     let workspace_root = temp_workspace_root();
@@ -33,20 +44,12 @@ async fn openxml_office_tool_renders_hotlist_xlsx_from_rows() {

     assert!(result.success, "{result:?}");
     assert!(output_path.exists());
-    let payload: serde_json::Value = serde_json::from_str(&result.output).unwrap();
-    assert_eq!(payload["output_path"], json!(output_path.to_str().unwrap()));
+    let output_json: serde_json::Value = serde_json::from_str(&result.output).unwrap();
+    assert_eq!(output_json["row_count"], 2);
+    assert_eq!(output_json["renderer"], "openxml_office");
+    assert_eq!(output_json["output_path"], json!(output_path.to_str().unwrap()));

-    let unzip = ProcessCommand::new("unzip")
-        .args([
-            "-p",
-            output_path.to_str().unwrap(),
-            "xl/worksheets/sheet1.xml",
-        ])
-        .output()
-        .unwrap();
-    assert!(unzip.status.success());
-
-    let xml = String::from_utf8(unzip.stdout).unwrap();
+    let xml = read_sheet_xml(&output_path);
     assert!(xml.contains("问题一"));
     assert!(xml.contains("344万"));
     assert!(xml.contains("问题二"));
@@ -75,17 +78,7 @@ async fn openxml_office_tool_accepts_reordered_columns_when_rows_are_structured(
     assert!(result.success, "{result:?}");
     assert!(output_path.exists());

-    let unzip = ProcessCommand::new("unzip")
-        .args([
-            "-p",
-            output_path.to_str().unwrap(),
-            "xl/worksheets/sheet1.xml",
-        ])
-        .output()
-        .unwrap();
-    assert!(unzip.status.success());
-
-    let xml = String::from_utf8(unzip.stdout).unwrap();
+    let xml = read_sheet_xml(&output_path);
     assert!(xml.contains("问题一"));
     assert!(xml.contains("344万"));
     assert!(xml.contains(">1<"));
@@ -113,17 +106,7 @@ async fn openxml_office_tool_accepts_localized_hotlist_column_aliases() {
     assert!(result.success, "{result:?}");
     assert!(output_path.exists());

-    let unzip = ProcessCommand::new("unzip")
-        .args([
-            "-p",
-            output_path.to_str().unwrap(),
-            "xl/worksheets/sheet1.xml",
-        ])
-        .output()
-        .unwrap();
-    assert!(unzip.status.success());
-
-    let xml = String::from_utf8(unzip.stdout).unwrap();
+    let xml = read_sheet_xml(&output_path);
     assert!(xml.contains("问题一"));
     assert!(xml.contains("344万"));
     assert!(xml.contains(">1<"));
(one file's diff suppressed because it is too large)
@@ -43,17 +43,18 @@ async fn screen_html_export_tool_renders_dashboard_html_with_presentation_contra
         .as_str()
         .unwrap()
         .starts_with("file://"));
+    assert!(html.contains("知乎热榜图表驾驶舱"));
     assert!(html.contains("snapshot-20260329"));
     assert!(html.contains("问题一"));
     assert!(html.contains("344万"));
     assert!(html.contains("const defaultPayload ="));
-    assert!(html.contains("汇报摘要"));
-    assert!(html.contains("fitScreenToViewport"));
-    assert!(html.contains("dashboard-canvas"));
-    assert!(html.contains("themeSwitcher"));
-    assert!(html.contains("gov_blue_gold"));
-    assert!(html.contains("tech_cyan_blue"));
-    assert!(html.contains("industry_ink_green"));
-    assert!(html.contains("meeting_red_gold"));
-    assert!(html.contains("localStorage.setItem(\"zhihu-hotlist-theme\""));
+    assert!(html.contains("lead-summary"));
+    assert!(html.contains("bar-chart"));
+    assert!(html.contains("top-chart"));
+    assert!(html.contains("pie-chart"));
+    assert!(html.contains("bubble-chart"));
+    assert!(html.contains("metric-categories"));
+    assert!(html.contains("themeMeta"));
+    assert!(html.contains("screen_html_export"));
+    assert!(html.contains("table-note"));
 }
@@ -21,7 +21,7 @@ fn deepseek_settings_load_defaults_from_env() {
     assert_eq!(settings.api_key, "test-key");
     assert_eq!(settings.base_url, "https://api.deepseek.com");
     assert_eq!(settings.model, "deepseek-chat");
-    assert!(settings.skills_dir.is_empty());
+    assert!(settings.skills_dir.is_none());
 }

 #[test]
@@ -30,7 +30,7 @@ fn deepseek_request_shape_matches_openai_compatible_chat_format() {
         api_key: "test-key".to_string(),
         base_url: "https://api.deepseek.com".to_string(),
         model: "deepseek-chat".to_string(),
-        skills_dir: Vec::new(),
+        skills_dir: None,
     });
     let messages = vec![
         ChatMessage {
tests/deterministic_submit_test.rs (new file, 452 lines)
@@ -0,0 +1,452 @@
+mod common;
+
+use std::path::PathBuf;
+
+use chrono::{Datelike, Local};
+use zeroclaw::skills::load_skills_from_directory;
+
+use sgclaw::compat::deterministic_submit::{
+    decide_deterministic_submit, DeterministicSubmitDecision,
+};
+use sgclaw::compat::tq_lineloss::{
+    contracts::{PeriodMode, ResolvedOrg, ResolvedPeriod},
+    org_resolver::resolve_org,
+    period_resolver::resolve_period,
+};
+use sgclaw::runtime::is_zhihu_hotlist_task;
+
+fn expected_default_month() -> String {
+    let today = Local::now().date_naive();
+    let (year, month) = if today.month() == 1 {
+        (today.year() - 1, 12)
+    } else {
+        (today.year(), today.month() - 1)
+    };
+    format!("{year}-{month:02}")
+}
+
+fn expected_default_week_range() -> (String, String, String) {
+    let today = Local::now().date_naive();
+    let month_start = today.with_day(1).expect("current month should have day 1");
+    let start = month_start.format("%Y-%m-%d").to_string();
+    let end = today.format("%Y-%m-%d").to_string();
+    (format!("{start}至{end}"), start, end)
+}
+
+#[test]
+fn deterministic_submit_discovers_tq_lineloss_skill_contract() {
+    let skills_root = PathBuf::from("D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills");
+    let skills = load_skills_from_directory(&skills_root, true);
+
+    let skill = skills
+        .iter()
+        .find(|skill| skill.name == "tq-lineloss-report")
+        .expect("tq-lineloss-report should be discoverable from staged skills root");
+
+    let tool = skill
+        .tools
+        .iter()
+        .find(|tool| tool.name == "collect_lineloss")
+        .expect("collect_lineloss tool should be discoverable");
+
+    assert_eq!(tool.kind, "browser_script");
+    assert_eq!(tool.command, "scripts/collect_lineloss.js");
+
+    let required_args = [
+        "expected_domain",
+        "org_label",
+        "org_code",
+        "period_mode",
+        "period_mode_code",
+        "period_value",
+        "period_payload",
+    ];
+
+    for arg in required_args {
+        assert!(
+            tool.args.contains_key(arg),
+            "expected required arg {arg} in tq-lineloss-report.collect_lineloss"
+        );
+    }
+
+    assert_eq!(tool.args.len(), required_args.len());
+}
+
+#[test]
+fn deterministic_submit_requires_exact_suffix() {
+    assert!(matches!(
+        decide_deterministic_submit("兰州公司 月累计 2026-03。。。", None, None),
+        DeterministicSubmitDecision::Execute(_)
+    ));
+
+    assert!(matches!(
+        decide_deterministic_submit("兰州公司 月累计 2026-03", None, None),
+        DeterministicSubmitDecision::NotDeterministic
+    ));
+}
+
+#[test]
+fn deterministic_submit_nonmatch_returns_supported_scene_message() {
+    let decision = decide_deterministic_submit("帮我打开百度。。。", None, None);
+
+    match decision {
+        DeterministicSubmitDecision::Prompt { summary } => {
+            assert!(summary.contains("台区线损") || summary.contains("支持场景"));
+        }
+        other => panic!("expected deterministic prompt for unsupported scene, got {other:?}"),
+    }
+}
+
+#[test]
+fn deterministic_submit_rejects_page_context_mismatch() {
+    let decision = decide_deterministic_submit(
+        "兰州公司 月累计 2026-03。。。",
+        Some("https://www.zhihu.com/hot"),
+        Some("知乎热榜"),
+    );
+
+    match decision {
+        DeterministicSubmitDecision::Prompt { summary } => {
+            assert!(summary.contains("台区线损") || summary.contains("页面") || summary.contains("不匹配"));
+        }
+        other => panic!("expected deterministic mismatch prompt, got {other:?}"),
+    }
+}
+
+#[test]
+fn zhihu_hotlist_request_without_suffix_keeps_existing_route() {
+    assert!(is_zhihu_hotlist_task(
+        "打开知乎热榜",
+        Some("https://www.zhihu.com/hot"),
+        Some("知乎热榜")
+    ));
+
+    assert!(matches!(
+        decide_deterministic_submit(
+            "打开知乎热榜",
+            Some("https://www.zhihu.com/hot"),
+            Some("知乎热榜")
+        ),
+        DeterministicSubmitDecision::NotDeterministic
+    ));
+}
+
+#[test]
+fn deterministic_submit_rejects_non_exact_suffix_variants() {
+    for instruction in [
+        "兰州公司 月累计 2026-03...",
+        "兰州公司 月累计 2026-03。。。。",
+        "兰州公司。。。月累计 2026-03",
+        "兰州公司 月累计 2026-03。。。 ",
+    ] {
+        assert!(matches!(
+            decide_deterministic_submit(instruction, None, None),
+            DeterministicSubmitDecision::NotDeterministic
+        ));
+    }
+}
+
+#[test]
+fn lineloss_org_resolver_matches_city_alias() {
+    assert_eq!(
+        resolve_org("兰州公司").unwrap(),
+        ResolvedOrg {
+            label: "国网兰州供电公司".to_string(),
+            code: "62401".to_string(),
+        }
+    );
+
+    assert_eq!(
+        resolve_org("天水公司").unwrap(),
+        ResolvedOrg {
+            label: "国网天水供电公司".to_string(),
+            code: "62403".to_string(),
+        }
+    );
+}
+
+#[test]
+fn lineloss_org_resolver_matches_county_alias() {
+    assert_eq!(
+        resolve_org("榆中县公司").unwrap(),
+        ResolvedOrg {
+            label: "国网榆中县供电公司".to_string(),
+            code: "6240121".to_string(),
+        }
+    );
+
+    assert_eq!(
+        resolve_org("城关供电分公司").unwrap(),
+        ResolvedOrg {
+            label: "城关供电分公司".to_string(),
+            code: "6240108".to_string(),
+        }
+    );
+}
+
+#[test]
+fn lineloss_org_resolver_prompts_on_ambiguity() {
+    let summary = resolve_org("城关")
+        .expect_err("ambiguous alias should prompt instead of guessing");
+
+    assert!(summary.contains("供电单位存在歧义") || summary.contains("更完整名称"));
+}
+
+#[test]
+fn deterministic_submit_lineloss_missing_company_prompts() {
+    let decision = decide_deterministic_submit("月累计 2026-03。。。", None, None);
+
+    match decision {
+        DeterministicSubmitDecision::Prompt { summary } => {
+            assert!(summary.contains("缺少供电单位") || summary.contains("兰州公司"));
+        }
+        other => panic!("expected missing-company prompt, got {other:?}"),
+    }
+}
+
+#[test]
+fn lineloss_period_resolver_parses_month_text() {
+    assert_eq!(
+        resolve_period("月累计 2026-03").unwrap(),
+        ResolvedPeriod {
+            mode: PeriodMode::Month,
+            mode_code: "1".to_string(),
+            value: "2026-03".to_string(),
+            payload: serde_json::json!({
+                "fdate": "2026-03",
+            }),
+        }
+    );
+
+    assert_eq!(
+        resolve_period("月累计 2026年3月").unwrap(),
+        ResolvedPeriod {
+            mode: PeriodMode::Month,
+            mode_code: "1".to_string(),
+            value: "2026-03".to_string(),
+            payload: serde_json::json!({
+                "fdate": "2026-03",
+            }),
+        }
+    );
+}
+
+#[test]
+fn lineloss_period_resolver_parses_week_text() {
+    let resolved = resolve_period("周累计 2026年第12周").unwrap();
+
+    assert_eq!(resolved.mode, PeriodMode::Week);
+    assert_eq!(resolved.mode_code, "2");
+    assert_eq!(resolved.value, "2026-W12");
+    assert_eq!(resolved.payload["tjzq"], "week");
+    assert_eq!(resolved.payload["level"], "00");
+    assert_eq!(resolved.payload["weekSfdate"], "2026-03-16");
+    assert_eq!(resolved.payload["weekEfdate"], "2026-03-22");
+}
+
+#[test]
+fn lineloss_period_resolver_defaults_month_period_from_page_semantics() {
+    let expected_month = expected_default_month();
+
+    assert_eq!(
+        resolve_period("兰州公司 月累计").unwrap(),
+        ResolvedPeriod {
+            mode: PeriodMode::Month,
+            mode_code: "1".to_string(),
+            value: expected_month.clone(),
+            payload: serde_json::json!({
+                "fdate": expected_month,
+            }),
+        }
+    );
+}
+
+#[test]
+fn lineloss_period_resolver_defaults_week_period_from_page_semantics() {
+    let (expected_value, expected_start, expected_end) = expected_default_week_range();
+
+    assert_eq!(
+        resolve_period("兰州公司 周累计").unwrap(),
+        ResolvedPeriod {
+            mode: PeriodMode::Week,
+            mode_code: "2".to_string(),
+            value: expected_value,
+            payload: serde_json::json!({
+                "tjzq": "week",
+                "level": "00",
+                "weekSfdate": expected_start,
+                "weekEfdate": expected_end,
+            }),
+        }
+    );
+}
+
+#[test]
+fn lineloss_period_resolver_prompts_for_missing_year_on_week() {
+    let summary = resolve_period("周累计 第12周")
+        .expect_err("bare week should prompt for year instead of guessing");
+
+    assert!(summary.contains("年份") || summary.contains("第12周"));
+}
+
+#[test]
+fn lineloss_period_resolver_rejects_contradictory_mode() {
+    let summary = resolve_period("月累计 周累计 2026-03")
+        .expect_err("contradictory month/week intent should not execute");
+
+    assert!(summary.contains("月/周") || summary.contains("冲突") || summary.contains("歧义"));
+}
+
+#[test]
+fn lineloss_period_resolver_prompts_for_missing_mode() {
+    let summary = resolve_period("兰州公司 2026-03")
+        .expect_err("missing mode should prompt instead of guessing");
+
+    assert!(summary.contains("月/周类型") || summary.contains("月累计") || summary.contains("周累计"));
+}
+
+#[test]
+fn lineloss_period_resolver_prompts_for_missing_period() {
+    let summary = resolve_period("兰州公司 月累计")
+        .expect_err("missing period should prompt instead of guessing");
+
+    assert!(summary.contains("周期") || summary.contains("时间") || summary.contains("2026-03"));
+}
+
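The "default to the previous month" rule that `expected_default_month` encodes above can be sketched without chrono, using plain (year, month) integers. The helper names `previous_month` and `format_fdate` are illustrative only, not the crate's API; the sketch mirrors the January-rolls-back-to-December branch and the `"fdate"` formatting the tests assert on.

```rust
// Illustrative sketch (not sgclaw's implementation) of the previous-month
// defaulting rule: January rolls back to December of the previous year.
fn previous_month(year: i32, month: u32) -> (i32, u32) {
    if month == 1 {
        (year - 1, 12)
    } else {
        (year, month - 1)
    }
}

// Matches the zero-padded "fdate" payload shape asserted above, e.g. "2026-03".
fn format_fdate(year: i32, month: u32) -> String {
    format!("{year}-{month:02}")
}

fn main() {
    assert_eq!(previous_month(2026, 4), (2026, 3));
    assert_eq!(previous_month(2026, 1), (2025, 12));
    assert_eq!(format_fdate(2026, 3), "2026-03");
    println!("previous-month defaulting holds");
}
```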
+#[test]
+fn deterministic_lineloss_execution_plan_contains_canonical_args() {
+    let decision = decide_deterministic_submit(
+        "兰州公司 月累计 2026-03。。。",
+        Some("http://20.76.57.61:8080/#/lineloss"),
+        Some("台区线损报表"),
+    );
+
+    match decision {
+        DeterministicSubmitDecision::Execute(plan) => {
+            let debug = format!("{plan:?}");
+            assert!(debug.contains("国网兰州供电公司"), "missing canonical org label: {debug}");
+            assert!(debug.contains("62401"), "missing canonical org code: {debug}");
+            assert!(debug.contains("2026-03"), "missing canonical period value: {debug}");
+            assert!(debug.contains("month") || debug.contains("Month"), "missing canonical month mode: {debug}");
+            assert!(debug.contains("fdate"), "missing canonical month payload: {debug}");
+        }
+        other => panic!("expected deterministic execute plan, got {other:?}"),
+    }
+}
+
+#[test]
+fn deterministic_lineloss_missing_period_uses_default_month_execution_plan() {
+    let expected_month = expected_default_month();
+    let decision = decide_deterministic_submit("兰州公司 月累计。。。", None, None);
+
+    match decision {
+        DeterministicSubmitDecision::Execute(plan) => {
+            assert_eq!(plan.period_mode, "month");
+            assert_eq!(plan.period_mode_code, "1");
+            assert_eq!(plan.period_value, expected_month);
+            assert!(plan.period_payload.contains("fdate"));
+        }
+        other => panic!("expected missing month period to default into execution, got {other:?}"),
+    }
+}
+
+#[test]
+fn deterministic_lineloss_missing_period_uses_default_week_execution_plan() {
+    let (expected_value, expected_start, expected_end) = expected_default_week_range();
+    let decision = decide_deterministic_submit("兰州公司 周累计。。。", None, None);
+
+    match decision {
+        DeterministicSubmitDecision::Execute(plan) => {
+            assert_eq!(plan.period_mode, "week");
+            assert_eq!(plan.period_mode_code, "2");
+            assert_eq!(plan.period_value, expected_value);
+            assert!(plan.period_payload.contains(&expected_start));
+            assert!(plan.period_payload.contains(&expected_end));
+        }
+        other => panic!("expected missing week period to default into execution, got {other:?}"),
+    }
+}
+
+#[test]
+fn deterministic_lineloss_partial_artifact_summary_contract_is_locked() {
+    let artifact = serde_json::json!({
+        "type": "report-artifact",
+        "report_name": "tq-lineloss-report",
+        "status": "partial",
+        "org": {
+            "label": "国网兰州供电公司",
+            "code": "62401"
+        },
+        "period": {
+            "mode": "month",
+            "mode_code": "1",
+            "value": "2026-03",
+            "payload": {
+                "fdate": "2026-03"
+            }
+        },
+        "columns": ["ORG_NAME", "LINE_LOSS_RATE"],
+        "rows": [
+            { "ORG_NAME": "国网兰州供电公司", "LINE_LOSS_RATE": "3.21" }
+        ],
+        "counts": {
+            "rows": 1
+        },
+        "export": {
+            "attempted": true,
+            "status": "failed",
+            "message": "report_log_failed"
+        },
+        "reasons": ["report_log_failed"]
+    });
+
+    assert_eq!(artifact["type"], "report-artifact");
+    assert_eq!(artifact["report_name"], "tq-lineloss-report");
+    assert_eq!(artifact["status"], "partial");
+    assert_eq!(artifact["org"]["label"], "国网兰州供电公司");
+    assert_eq!(artifact["period"]["value"], "2026-03");
+    assert_eq!(artifact["counts"]["rows"], 1);
+    assert_eq!(artifact["reasons"][0], "report_log_failed");
+}
+
+#[test]
+fn deterministic_lineloss_blocked_and_error_artifact_statuses_are_failure_contracts() {
+    for status in ["blocked", "error"] {
+        let artifact = serde_json::json!({
+            "type": "report-artifact",
+            "report_name": "tq-lineloss-report",
+            "status": status,
+            "org": {
+                "label": "国网兰州供电公司",
+                "code": "62401"
+            },
+            "period": {
+                "mode": "week",
+                "mode_code": "2",
+                "value": "2026-W12",
+                "payload": {
+                    "tjzq": "week",
+                    "level": "00",
+                    "weekSfdate": "2026-03-16",
+                    "weekEfdate": "2026-03-22"
+                }
+            },
+            "columns": [],
+            "rows": [],
+            "counts": {
+                "rows": 0
+            },
+            "export": {
+                "attempted": false,
+                "status": "skipped",
+                "message": null
+            },
+            "reasons": ["selected_range_unavailable"]
+        });
+
+        assert_eq!(artifact["status"], status);
+        assert_eq!(artifact["type"], "report-artifact");
+        assert_eq!(artifact["period"]["mode"], "week");
+        assert_eq!(artifact["reasons"][0], "selected_range_unavailable");
+    }
+}
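The deterministic-submit tests above pin an exact-suffix contract: an instruction only qualifies when it ends with exactly three ideographic full stops (。。。), with no ASCII `...`, no fourth `。`, and no trailing whitespace. A standalone sketch of that gate is below; `has_deterministic_suffix` is a hypothetical helper for illustration, not the crate's actual implementation.

```rust
// Hypothetical sketch of the exact-suffix gate the tests above lock in.
// strip_suffix only succeeds when the string ends with the literal "。。。",
// which already rejects ASCII "..." and any trailing whitespace; rejecting
// a fourth "。" is the one extra check needed.
fn has_deterministic_suffix(instruction: &str) -> bool {
    let Some(stem) = instruction.strip_suffix("。。。") else {
        return false;
    };
    !stem.ends_with('。')
}

fn main() {
    assert!(has_deterministic_suffix("兰州公司 月累计 2026-03。。。"));
    assert!(!has_deterministic_suffix("兰州公司 月累计 2026-03"));
    assert!(!has_deterministic_suffix("兰州公司 月累计 2026-03..."));
    assert!(!has_deterministic_suffix("兰州公司 月累计 2026-03。。。。"));
    assert!(!has_deterministic_suffix("兰州公司 月累计 2026-03。。。 "));
    println!("suffix contract holds");
}
```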
tests/lineloss_xlsx_export_test.rs (new file, 105 lines)
@@ -0,0 +1,105 @@
+use std::fs;
+use std::path::PathBuf;
+
+use serde_json::json;
+use sgclaw::compat::lineloss_xlsx_export::{export_lineloss_xlsx, LinelossExportRequest};
+
+fn temp_output_path(name: &str) -> PathBuf {
+    let dir = std::env::temp_dir().join("sgclaw-test-xlsx");
+    fs::create_dir_all(&dir).unwrap();
+    dir.join(name)
+}
+
+#[test]
+fn export_month_lineloss_produces_valid_xlsx() {
+    let output_path = temp_output_path("month-test.xlsx");
+    if output_path.exists() {
+        fs::remove_file(&output_path).unwrap();
+    }
+
+    let request = LinelossExportRequest {
+        sheet_name: "国网兰州供电公司月度线损分析报表(2026-03)".to_string(),
+        column_defs: vec![
+            ("ORG_NAME".to_string(), "供电单位".to_string()),
+            ("YGDL".to_string(), "累计供电量".to_string()),
+            ("YYDL".to_string(), "累计售电量".to_string()),
+            ("YXSL".to_string(), "线损完成率(%)".to_string()),
+            ("RAT_SCOPE".to_string(), "线损率累计目标值".to_string()),
+            ("BLANK3".to_string(), "目标完成率".to_string()),
+            ("BLANK2".to_string(), "排行".to_string()),
+        ],
+        rows: vec![
+            serde_json::from_value(json!({
+                "ORG_NAME": "城关供电",
+                "YGDL": "12345.67",
+                "YYDL": "11234.56",
+                "YXSL": "9.00",
+                "RAT_SCOPE": "9.50",
+                "BLANK3": "94.74",
+                "BLANK2": "1"
+            }))
+            .unwrap(),
+            serde_json::from_value(json!({
+                "ORG_NAME": "七里河供电",
+                "YGDL": "9876.54",
+                "YYDL": "8765.43",
+                "YXSL": "11.24",
+                "RAT_SCOPE": "10.00",
+                "BLANK3": "112.40",
+                "BLANK2": "2"
+            }))
+            .unwrap(),
+        ],
+        output_path: output_path.clone(),
+    };
+
+    let result_path = export_lineloss_xlsx(&request).unwrap();
+    assert_eq!(result_path, output_path);
+    assert!(output_path.exists());
+
+    // Verify it's a valid ZIP (xlsx is a zip archive)
+    let file = fs::File::open(&output_path).unwrap();
+    let mut archive = zip::ZipArchive::new(file).unwrap();
+
+    // Must contain the standard OpenXML entries
+    let entry_names: Vec<String> = (0..archive.len())
+        .map(|i| archive.by_index(i).unwrap().name().to_string())
+        .collect();
+
+    assert!(entry_names.contains(&"[Content_Types].xml".to_string()));
+    assert!(entry_names.contains(&"xl/worksheets/sheet1.xml".to_string()));
+    assert!(entry_names.contains(&"xl/workbook.xml".to_string()));
+
+    // Read sheet1.xml and verify it contains our data
+    let mut sheet = archive.by_name("xl/worksheets/sheet1.xml").unwrap();
+    let mut xml = String::new();
+    std::io::Read::read_to_string(&mut sheet, &mut xml).unwrap();
+
+    assert!(xml.contains("供电单位"), "header row should contain 供电单位");
+    assert!(xml.contains("累计供电量"), "header row should contain 累计供电量");
+    assert!(xml.contains("城关供电"), "data should contain 城关供电");
+    assert!(xml.contains("12345.67"), "data should contain 12345.67");
+    assert!(xml.contains("七里河供电"), "data should contain second row");
+
+    // Cleanup
+    fs::remove_file(&output_path).unwrap();
+}
+
+#[test]
+fn export_empty_rows_returns_error() {
+    let output_path = temp_output_path("empty-test.xlsx");
+
+    let request = LinelossExportRequest {
+        sheet_name: "test".to_string(),
+        column_defs: vec![("A".to_string(), "ColA".to_string())],
+        rows: vec![],
+        output_path: output_path.clone(),
+    };
+
+    let result = export_lineloss_xlsx(&request);
+    assert!(result.is_err());
+    assert!(
+        result.unwrap_err().to_string().contains("rows must not be empty"),
+        "should reject empty rows"
+    );
+}
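The export test above validates the .xlsx as what it physically is: a ZIP archive containing OpenXML parts. A dependency-free sanity check along the same lines is to look for the ZIP local-file-header magic `PK\x03\x04` at the start of the file; the real test goes further and opens the archive with the zip crate. The `looks_like_zip` helper below is a minimal sketch, not part of the crate.

```rust
// Every ZIP archive (and therefore every .xlsx) starts with the local
// file header signature 0x50 0x4B 0x03 0x04 ("PK\x03\x04").
fn looks_like_zip(bytes: &[u8]) -> bool {
    bytes.starts_with(&[0x50, 0x4B, 0x03, 0x04])
}

fn main() {
    // First bytes of a real archive vs. a plain-XML file.
    let zip_like = [0x50u8, 0x4B, 0x03, 0x04, 0x14, 0x00];
    let not_zip = b"<?xml version=\"1.0\"?>";
    assert!(looks_like_zip(&zip_like));
    assert!(!looks_like_zip(not_zip));
    println!("zip magic check holds");
}
```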
@@ -50,13 +50,13 @@ fn loaded_skills_excludes_browser_script_tools_when_browser_surface_is_unavailab
     ));
     fs::create_dir_all(&workspace_root).unwrap();
     let skill_root = temp_skill_root();
-    write_browser_script_skill(&skill_root, "fault-details-report");
+    write_browser_script_skill(&skill_root, "workspace-browser-skill");

     let mut settings = SgClawSettings::from_legacy_deepseek_fields(
         "sk-test".to_string(),
         "https://api.deepseek.com".to_string(),
         "deepseek-chat".to_string(),
-        vec![skill_root.clone()],
+        Some(skill_root.clone()),
     )
     .unwrap();
     settings.runtime_profile = RuntimeProfile::GeneralAssistant;
@@ -64,7 +64,7 @@ fn loaded_skills_excludes_browser_script_tools_when_browser_surface_is_unavailab
     let skills_dir = resolve_skills_dir_from_sgclaw_settings(&workspace_root, &settings);
     let engine = RuntimeEngine::new(RuntimeProfile::GeneralAssistant);

-    let loaded_skills = engine.loaded_skills(&config, &skills_dir);
+    let loaded_skills = engine.loaded_skills(&config, std::slice::from_ref(&skills_dir));

     assert!(loaded_skills.is_empty());
 }
@@ -125,19 +125,16 @@ fn browser_attached_publish_prompt_requires_explicit_confirmation_before_clickin
 }

 #[test]
-fn browser_attached_95598_scene_prompt_requires_scene_tool_before_generic_browser_probing() {
+fn ws_cleanup_browser_profile_does_not_inject_95598_scene_contract() {
     let engine = RuntimeEngine::new(RuntimeProfile::BrowserAttached);

     let instruction = engine.build_instruction(
-        "请处理95598-repair-city-dispatch场景,查看抢修市指派单并汇总当前队列",
+        "请处理95598抢修市指监测,查看抢修市指派单并汇总当前队列",
         Some("https://95598.example.invalid/dispatch"),
         Some("95598抢修市指监测"),
         true,
     );

-    assert!(instruction.contains("95598-repair-city-dispatch.collect_repair_orders"));
-    assert!(instruction.contains("browser workflow, not a text-only task"));
-    assert!(instruction.contains("generic browser probing only after"));
+    assert!(!instruction.contains("collect_repair_orders"));
 }

 #[test]
@@ -151,7 +148,7 @@ fn browser_attached_unrelated_task_does_not_receive_95598_scene_contract() {
         true,
     );

-    assert!(!instruction.contains("95598-repair-city-dispatch.collect_repair_orders"));
+    assert!(!instruction.contains("collect_repair_orders"));
     assert!(!instruction.contains("browser workflow, not a text-only task"));
     assert!(!instruction.contains("generic browser probing only after"));
 }
@@ -161,13 +158,13 @@ fn general_assistant_95598_scene_prompt_does_not_receive_browser_scene_contract(
     let engine = RuntimeEngine::new(RuntimeProfile::GeneralAssistant);

     let instruction = engine.build_instruction(
-        "请处理95598-repair-city-dispatch场景,查看抢修市指派单并汇总当前队列",
+        "请处理95598抢修市指监测,查看抢修市指派单并汇总当前队列",
         Some("https://95598.example.invalid/dispatch"),
         Some("95598抢修市指监测"),
         false,
     );

-    assert!(!instruction.contains("95598-repair-city-dispatch.collect_repair_orders"));
+    assert!(!instruction.contains("collect_repair_orders"));
     assert!(!instruction.contains("browser workflow, not a text-only task"));
     assert!(!instruction.contains("generic browser probing only after"));
 }
@@ -178,7 +175,7 @@ fn legacy_settings_default_to_plan_first_superrpa_and_openxml_backends() {
         "sk-test".to_string(),
         "https://api.deepseek.com".to_string(),
         "deepseek-chat".to_string(),
-        Vec::new(),
+        None,
     )
     .unwrap();
 
@@ -51,7 +51,9 @@ fn submit_task_without_llm_configuration_returns_clear_error() {
     assert!(matches!(
         &sent[0],
         AgentMessage::LogEntry { level, message }
-            if level == "info" && message == "sgclaw runtime version=0.1.0 protocol=1.0"
+            if level == "info"
+                && message.starts_with("sgclaw runtime version=")
+                && message.ends_with(" protocol=1.0")
     ));
     assert!(matches!(
         &sent[1],
@@ -1,223 +0,0 @@
-use std::fs;
-use std::path::{Path, PathBuf};
-
-use sgclaw::runtime::{
-    load_scene_registry_from_root, match_scene_instruction_in_registry, DispatchMode,
-};
-use uuid::Uuid;
-
-#[test]
-fn scene_registry_loads_first_slice_dispatch_policies() {
-    let root = TempSceneRoot::new();
-    write_first_slice_scenes(&root);
-
-    let registry = load_scene_registry_from_root(root.path());
-
-    let fault_details = registry
-        .iter()
-        .find(|entry| entry.id == "fault-details-report")
-        .expect("fault-details-report scene should load");
-    assert_eq!(fault_details.dispatch_mode, DispatchMode::DirectBrowser);
-    assert_eq!(fault_details.expected_domain, "sgcc.example.invalid");
-    assert_eq!(fault_details.skill_package, "fault-details-report");
-    assert_eq!(fault_details.skill_tool, "collect_fault_details");
-
-    let repair_dispatch = registry
-        .iter()
-        .find(|entry| entry.id == "95598-repair-city-dispatch")
-        .expect("95598-repair-city-dispatch scene should load");
-    assert_eq!(repair_dispatch.dispatch_mode, DispatchMode::AgentBrowser);
-    assert_eq!(repair_dispatch.skill_package, "95598-repair-city-dispatch");
-    assert_eq!(repair_dispatch.skill_tool, "collect_repair_orders");
-}
-
-#[test]
-fn scene_registry_matches_fault_details_natural_language_instruction() {
-    let root = TempSceneRoot::new();
-    write_first_slice_scenes(&root);
-    let registry = load_scene_registry_from_root(root.path());
-
-    let matched =
-        match_scene_instruction_in_registry(&registry, "请帮我导出故障明细").expect("scene should match");
-
-    assert_eq!(matched.id, "fault-details-report");
-    assert_eq!(matched.dispatch_mode, DispatchMode::DirectBrowser);
-}
-
-#[test]
-fn scene_registry_matches_city_dispatch_natural_language_instruction() {
-    let root = TempSceneRoot::new();
-    write_first_slice_scenes(&root);
-    let registry = load_scene_registry_from_root(root.path());
-
-    let matched = match_scene_instruction_in_registry(&registry, "帮我看一下95598抢修市指监测")
-        .expect("scene should match");
-
-    assert_eq!(matched.id, "95598-repair-city-dispatch");
-    assert_eq!(matched.dispatch_mode, DispatchMode::AgentBrowser);
-}
-
-#[test]
-fn scene_registry_matches_rephrased_instruction_via_alias_terms() {
-    let root = TempSceneRoot::new();
-    write_first_slice_scenes(&root);
-    let registry = load_scene_registry_from_root(root.path());
-
-    let matched = match_scene_instruction_in_registry(&registry, "想看市指那边的95598抢修队列")
-        .expect("scene should match");
-
-    assert_eq!(matched.id, "95598-repair-city-dispatch");
-}
-
-#[test]
-fn scene_registry_returns_none_for_unrelated_instruction() {
-    let root = TempSceneRoot::new();
-    write_first_slice_scenes(&root);
-    let registry = load_scene_registry_from_root(root.path());
-
-    assert!(match_scene_instruction_in_registry(&registry, "今天上海天气怎么样").is_none());
-}
-
-#[test]
-fn scene_registry_ignores_missing_or_broken_scene_files() {
-    let root = TempSceneRoot::new();
-    root.write_scene(
-        "fault-details-report",
-        r#"{
-            "id": "fault-details-report",
-            "name": "故障明细",
-            "summary": "查询故障明细行并生成结构化报表。",
-            "inputs": ["period"],
-            "outputs": ["report-artifact"],
-            "tags": ["fault", "report"],
-            "skill": {
-                "package": "fault-details-report",
-                "tool": "collect_fault_details",
-                "artifact_type": "report-artifact"
-            }
-        }"#,
-    );
-    root.write_scene("95598-repair-city-dispatch", "{ broken json");
-
-    let registry = load_scene_registry_from_root(root.path());
-
-    assert_eq!(registry.len(), 1);
-    assert_eq!(registry[0].id, "fault-details-report");
-    assert_eq!(registry[0].dispatch_mode, DispatchMode::DirectBrowser);
-}
-
-#[test]
-fn scene_registry_ignores_mismatched_scene_metadata_id() {
-    let root = TempSceneRoot::new();
-    root.write_scene(
-        "fault-details-report",
-        r#"{
-            "id": "wrong-scene-id",
-            "name": "故障明细",
-            "summary": "查询故障明细行并生成结构化报表。",
-            "inputs": ["period"],
-            "outputs": ["report-artifact"],
-            "tags": ["fault", "report"],
-            "skill": {
-                "package": "fault-details-report",
-                "tool": "collect_fault_details",
-                "artifact_type": "report-artifact"
-            }
-        }"#,
-    );
-    root.write_scene(
-        "95598-repair-city-dispatch",
-        r#"{
-            "id": "95598-repair-city-dispatch",
-            "name": "95598抢修市指监测",
-            "summary": "采集95598抢修市指监测列表。",
-            "inputs": ["period"],
-            "outputs": ["repair-orders"],
-            "tags": ["95598", "repair", "dispatch"],
-            "skill": {
-                "package": "95598-repair-city-dispatch",
-                "tool": "collect_repair_orders",
-                "artifact_type": "repair-orders"
-            }
-        }"#,
-    );
-
-    let registry = load_scene_registry_from_root(root.path());
-
-    assert_eq!(registry.len(), 1);
-    assert_eq!(registry[0].id, "95598-repair-city-dispatch");
-}
-
-#[test]
-fn scene_registry_returns_none_for_ambiguous_instruction() {
-    let root = TempSceneRoot::new();
-    write_first_slice_scenes(&root);
-    let registry = load_scene_registry_from_root(root.path());
-
-    assert!(
-        match_scene_instruction_in_registry(&registry, "请同时处理导出故障明细和95598抢修市指监测").is_none()
-    );
-}
-
-struct TempSceneRoot {
-    root: PathBuf,
-}
-
-impl TempSceneRoot {
-    fn new() -> Self {
-        let root = std::env::temp_dir().join(format!("scene-registry-test-{}", Uuid::new_v4()));
-        fs::create_dir_all(root.join("scenes")).expect("temp scene root should be created");
-        Self { root }
-    }
-
-    fn path(&self) -> &Path {
-        &self.root
-    }
-
-    fn write_scene(&self, scene_id: &str, contents: &str) {
-        let scene_dir = self.root.join("scenes").join(scene_id);
-        fs::create_dir_all(&scene_dir).expect("scene directory should be created");
-        fs::write(scene_dir.join("scene.json"), contents).expect("scene file should be written");
-    }
-}
-
-fn write_first_slice_scenes(root: &TempSceneRoot) {
-    root.write_scene(
-        "fault-details-report",
-        r#"{
-            "id": "fault-details-report",
-            "name": "故障明细",
-            "summary": "查询故障明细行并生成结构化报表。",
-            "inputs": ["period"],
-            "outputs": ["report-artifact"],
-            "tags": ["fault", "report"],
-            "skill": {
-                "package": "fault-details-report",
-                "tool": "collect_fault_details",
-                "artifact_type": "report-artifact"
-            }
-        }"#,
-    );
-    root.write_scene(
-        "95598-repair-city-dispatch",
-        r#"{
-            "id": "95598-repair-city-dispatch",
-            "name": "95598抢修市指监测",
-            "summary": "采集95598抢修市指监测列表。",
-            "inputs": ["period"],
-            "outputs": ["repair-orders"],
-            "tags": ["95598", "repair", "dispatch"],
-            "skill": {
-                "package": "95598-repair-city-dispatch",
-                "tool": "collect_repair_orders",
-                "artifact_type": "repair-orders"
-            }
-        }"#,
-    );
-}
-
-impl Drop for TempSceneRoot {
-    fn drop(&mut self) {
-        let _ = fs::remove_dir_all(&self.root);
-    }
-}
@@ -12,10 +12,26 @@ fn service_console_html_stays_on_service_ws_boundary() {
 
     assert!(source.contains("ws://127.0.0.1:42321"));
     assert!(source.contains("submit_task"));
+    assert!(source.contains("addEventListener(\"close\""));
+    assert!(source.contains("setTimeout(() => connectOrDisconnectService(true)"));
+    assert!(source.contains("connectTimeoutTimer"));
+    assert!(source.contains("lastHeartbeatAt"));
+    assert!(source.contains("heartbeat missed, forcing reconnect"));
+    assert!(source.contains("service websocket connect timed out"));
     assert!(!source.contains("/sgclaw/browser-helper.html"));
     assert!(!source.contains("/sgclaw/callback/ready"));
     assert!(!source.contains("/sgclaw/callback/events"));
     assert!(!source.contains("/sgclaw/callback/commands/next"));
     assert!(!source.contains("/sgclaw/callback/commands/ack"));
     assert!(!source.contains("ws://127.0.0.1:12345"));
 
+    // Auto-connect and settings enhancement assertions
+    assert!(source.contains("DOMContentLoaded"));
+    assert!(source.contains("settingsBtn"));
+    assert!(source.contains("settingsModal"));
+    assert!(source.contains("update_config"));
+    assert!(source.contains("config_updated"));
+    assert!(source.contains("settingApiKey"));
+    assert!(source.contains("settingBaseUrl"));
+    assert!(source.contains("settingModel"));
 }
72 tests/service_protocol_update_config_test.rs (new file)
@@ -0,0 +1,72 @@
+use sgclaw::service::{ClientMessage, ConfigUpdatePayload, ServiceMessage};
+
+#[test]
+fn update_config_serializes_correctly() {
+    let config = ConfigUpdatePayload {
+        api_key: Some("test-key".to_string()),
+        base_url: Some("https://api.example.com".to_string()),
+        model: Some("test-model".to_string()),
+        skills_dir: Some("/path/to/skills".to_string()),
+        direct_submit_skill: Some("my-skill.my-tool".to_string()),
+        runtime_profile: Some("browser-attached".to_string()),
+        browser_backend: Some("super-rpa".to_string()),
+    };
+
+    let msg = ClientMessage::UpdateConfig { config };
+    let json = serde_json::to_string(&msg).unwrap();
+
+    assert!(json.contains("\"type\":\"update_config\""));
+    assert!(json.contains("\"apiKey\":\"test-key\""));
+    assert!(json.contains("\"baseUrl\":\"https://api.example.com\""));
+    assert!(json.contains("\"model\":\"test-model\""));
+}
+
+#[test]
+fn update_config_deserializes_correctly() {
+    let json = r#"{
+        "type": "update_config",
+        "config": {
+            "apiKey": "key123",
+            "baseUrl": "https://api.test.com",
+            "model": "gpt-4"
+        }
+    }"#;
+
+    let msg: ClientMessage = serde_json::from_str(json).unwrap();
+    match msg {
+        ClientMessage::UpdateConfig { config } => {
+            assert_eq!(config.api_key, Some("key123".to_string()));
+            assert_eq!(config.base_url, Some("https://api.test.com".to_string()));
+            assert_eq!(config.model, Some("gpt-4".to_string()));
+            assert!(config.skills_dir.is_none());
+        }
+        _ => panic!("expected UpdateConfig variant"),
+    }
+}
+
+#[test]
+fn config_updated_serializes_correctly() {
+    let msg = ServiceMessage::ConfigUpdated {
+        success: true,
+        message: "配置已保存".to_string(),
+    };
+    let json = serde_json::to_string(&msg).unwrap();
+
+    assert!(json.contains("\"type\":\"config_updated\""));
+    assert!(json.contains("\"success\":true"));
+    assert!(json.contains("配置已保存"));
+}
+
+#[test]
+fn config_updated_deserializes_correctly() {
+    let json = r#"{"type":"config_updated","success":false,"message":"保存失败"}"#;
+    let msg: ServiceMessage = serde_json::from_str(json).unwrap();
+
+    match msg {
+        ServiceMessage::ConfigUpdated { success, message } => {
+            assert!(!success);
+            assert_eq!(message, "保存失败");
+        }
+        _ => panic!("expected ConfigUpdated variant"),
+    }
+}
@@ -12,6 +12,8 @@ use tungstenite::{accept, Message};
 const RUNTIME_DROP_PANIC_TEXT: &str =
     "Cannot drop a runtime in a context where blocking is not allowed";
 
+const TEST_ZHIHU_SKILLS_DIR: &str = "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills";
+
 fn read_ws_text(stream: &mut tungstenite::WebSocket<std::net::TcpStream>) -> String {
     match stream.read().unwrap() {
         Message::Text(text) => text.to_string(),
@@ -162,6 +164,7 @@ fn start_callback_host_hotlist_browser_server(
         .to_string();
     let helper_client = Client::builder()
         .timeout(Duration::from_secs(2))
+        .pool_max_idle_per_host(0)
         .build()
         .unwrap();
     let helper_html = helper_client
@@ -213,14 +216,18 @@ fn start_callback_host_hotlist_browser_server(
     let mut saw_eval = false;
 
     while Instant::now() < deadline {
-        let envelope: Value = helper_client
+        let envelope: Value = match helper_client
             .get(format!("{helper_origin}/sgclaw/callback/commands/next"))
             .send()
-            .unwrap()
-            .error_for_status()
-            .unwrap()
-            .json()
-            .unwrap();
+            .and_then(|response| response.error_for_status())
+            .and_then(|response| response.json())
+        {
+            Ok(envelope) => envelope,
+            Err(_) => {
+                thread::sleep(Duration::from_millis(20));
+                continue;
+            }
+        };
         let Some(command) = envelope.get("command").and_then(Value::as_object) else {
             thread::sleep(Duration::from_millis(20));
             continue;
@@ -751,6 +758,7 @@ fn client_to_service_regression_routes_zhihu_through_callback_host_without_inval
     "apiKey": "sk-runtime",
     "baseUrl": "http://127.0.0.1:9",
     "model": "deepseek-chat",
+    "skillsDir": "{TEST_ZHIHU_SKILLS_DIR}",
     "browserWsUrl": "{browser_ws_url}",
     "serviceWsListenAddr": "{service_addr}"
 }}"#
@@ -14,6 +14,7 @@ use sgclaw::service::{ClientMessage, ServiceEventSink, ServiceMessage, ServiceSe
 
 const RUNTIME_DROP_PANIC_TEXT: &str =
     "Cannot drop a runtime in a context where blocking is not allowed";
+const TEST_ZHIHU_SKILLS_DIR: &str = "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills";
 
 fn read_ws_text<S>(stream: &mut tungstenite::WebSocket<S>) -> String
 where
@@ -213,6 +214,7 @@ fn start_callback_host_hotlist_browser_server(
         .to_string();
     let helper_client = Client::builder()
         .timeout(Duration::from_secs(2))
+        .pool_max_idle_per_host(0)
         .build()
         .unwrap();
     let helper_html = helper_client
@@ -264,14 +266,18 @@ fn start_callback_host_hotlist_browser_server(
     let mut saw_eval = false;
 
     while Instant::now() < deadline {
-        let envelope: Value = helper_client
+        let envelope: Value = match helper_client
            .get(format!("{helper_origin}/sgclaw/callback/commands/next"))
            .send()
-            .unwrap()
-            .error_for_status()
-            .unwrap()
-            .json()
-            .unwrap();
+            .and_then(|response| response.error_for_status())
+            .and_then(|response| response.json())
+        {
+            Ok(envelope) => envelope,
+            Err(_) => {
+                thread::sleep(Duration::from_millis(20));
+                continue;
+            }
+        };
         let Some(command) = envelope.get("command").and_then(Value::as_object) else {
             thread::sleep(Duration::from_millis(20));
             continue;
@@ -737,7 +743,7 @@ fn service_binary_survives_real_client_disconnect_after_task_complete() {
         .stderr(std::process::Stdio::piped())
         .spawn()
         .unwrap();
-    client.stdin.as_mut().unwrap().write_all(" \n".as_bytes()).unwrap();
+    client.stdin.as_mut().unwrap().write_all("你好\n".as_bytes()).unwrap();
     let client_output = client.wait_with_output().unwrap();
 
     assert!(
@@ -747,7 +753,7 @@ fn service_binary_survives_real_client_disconnect_after_task_complete() {
         String::from_utf8_lossy(&client_output.stderr)
     );
     assert!(
-        String::from_utf8_lossy(&client_output.stdout).contains("请输入任务内容。"),
+        String::from_utf8_lossy(&client_output.stdout).contains("任务执行失败:"),
         "client did not receive TaskComplete summary: stdout={} stderr={}",
         String::from_utf8_lossy(&client_output.stdout),
         String::from_utf8_lossy(&client_output.stderr)
@@ -803,6 +809,7 @@ fn service_binary_submit_flow_routes_zhihu_through_callback_host() {
     "apiKey": "sk-runtime",
     "baseUrl": "http://127.0.0.1:9",
     "model": "deepseek-chat",
+    "skillsDir": "{TEST_ZHIHU_SKILLS_DIR}",
     "browserWsUrl": "{browser_ws_url}",
     "serviceWsListenAddr": "{service_addr}"
 }}"#
@@ -976,6 +983,7 @@ fn service_binary_submit_flow_uses_callback_host_command_semantics_for_zhihu() {
     "apiKey": "sk-runtime",
     "baseUrl": "http://127.0.0.1:9",
     "model": "deepseek-chat",
+    "skillsDir": "{TEST_ZHIHU_SKILLS_DIR}",
     "browserWsUrl": "{browser_ws_url}",
     "serviceWsListenAddr": "{service_addr}"
 }}"#
@@ -1282,6 +1290,105 @@ fn service_binary_accepts_connect_request_without_starting_browser_task() {
     );
 }
 
+#[test]
+fn service_binary_survives_client_disconnect_during_task_completion_send() {
+    let service_listener = TcpListener::bind("127.0.0.1:0").unwrap();
+    let service_addr = service_listener.local_addr().unwrap();
+    drop(service_listener);
+
+    let root = std::env::temp_dir().join(format!("sgclaw-service-disconnect-{}", uuid::Uuid::new_v4()));
+    std::fs::create_dir_all(&root).unwrap();
+    let config_path = root.join("sgclaw_config.json");
+    std::fs::write(
+        &config_path,
+        format!(
+            r#"{{
+    "apiKey": "sk-runtime",
+    "baseUrl": "https://api.deepseek.com",
+    "model": "deepseek-chat",
+    "browserWsUrl": "ws://127.0.0.1:12345",
+    "serviceWsListenAddr": "{service_addr}"
+}}"#
+        ),
+    )
+    .unwrap();
+
+    let mut service = std::process::Command::new(
+        std::env::var("CARGO_BIN_EXE_sg_claw").expect("sg_claw test binary path"),
+    )
+    .env("SGCLAW_DISABLE_POST_EXPORT_OPEN", "1")
+    .arg("--config-path")
+    .arg(&config_path)
+    .stdout(std::process::Stdio::null())
+    .stderr(std::process::Stdio::piped())
+    .spawn()
+    .unwrap();
+
+    let ws_url = format!("ws://{service_addr}");
+    let connect_deadline = Instant::now() + Duration::from_secs(2);
+    let mut websocket = None;
+    while Instant::now() < connect_deadline {
+        match connect(ws_url.as_str()) {
+            Ok((socket, _)) => {
+                websocket = Some(socket);
+                break;
+            }
+            Err(_) => {
+                if service.try_wait().unwrap().is_some() {
+                    break;
+                }
+                thread::sleep(Duration::from_millis(50));
+            }
+        }
+    }
+
+    let mut websocket = websocket.expect("service ws listener never became available");
+    websocket
+        .send(Message::Text(
+            serde_json::to_string(&ClientMessage::SubmitTask {
+                instruction: "你好".to_string(),
+                conversation_id: String::new(),
+                messages: vec![],
+                page_url: String::new(),
+                page_title: String::new(),
+            })
+            .unwrap()
+            .into(),
+        ))
+        .unwrap();
+    drop(websocket);
+
+    let exit_deadline = Instant::now() + Duration::from_secs(1);
+    let mut service_status = None;
+    while Instant::now() < exit_deadline {
+        if let Some(status) = service.try_wait().unwrap() {
+            service_status = Some(status);
+            break;
+        }
+        thread::sleep(Duration::from_millis(20));
+    }
+    if service_status.is_none() {
+        service.kill().unwrap();
+        let _ = service.wait();
+    }
+
+    let stderr = service
+        .stderr
+        .take()
+        .map(|mut stream| {
+            let mut buf = Vec::new();
+            use std::io::Read;
+            let _ = stream.read_to_end(&mut buf);
+            String::from_utf8_lossy(&buf).into_owned()
+        })
+        .unwrap_or_default();
+
+    assert!(
+        service_status.is_none(),
+        "sg_claw exited after client disconnected mid-task; stderr={stderr}"
+    );
+}
+
 #[test]
 fn submit_task_client_message_converts_into_shared_runner_request() {
     let message = ClientMessage::SubmitTask {
@@ -1333,6 +1440,28 @@ fn lifecycle_client_messages_round_trip_with_stable_tags() {
     }
 }
 
+#[test]
+fn service_messages_round_trip_with_stable_tags() {
+    let cases = [
+        (
+            ServiceMessage::StatusChanged {
+                state: "started".to_string(),
+            },
+            r#"{"type":"status_changed","state":"started"}"#,
+        ),
+        (
+            ServiceMessage::Pong,
+            r#"{"type":"pong"}"#,
+        ),
+    ];
+
+    for (message, raw) in cases {
+        assert_eq!(serde_json::to_string(&message).unwrap(), raw);
+        let decoded: ServiceMessage = serde_json::from_str(raw).unwrap();
+        assert_eq!(decoded, message);
+    }
+}
+
 #[test]
 fn service_event_sink_maps_log_completion_and_status_messages() {
     let sink = ServiceEventSink::default();