feat: service console auto-connect, settings panel, and batch of enhancements

- Auto-connect WebSocket on page load in service console
- Settings modal for editing sgclaw_config.json (API key, base URL, model, skills dir, etc.)
- UpdateConfig/ConfigUpdated protocol messages for remote config save
- save_to_path() for SgClawSettings serialization
- ConfigUpdated handler in sg_claw_client binary
- Protocol serialization tests for new message types
- HTML test assertions for auto-connect and settings UI
- Additional pending changes: deterministic submit, org units, lineloss xlsx export, browser script tool, and docs

🤖 Generated with [Qoder](https://qoder.com)

# TQ Lineloss WS Dual-Transport Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Add WebSocket (ws) communication support for the existing `tq-lineloss-report.collect_lineloss` deterministic browser_script path on the `feature/claw-ws` branch, while preserving the current pipe path and the validated Zhihu ws behavior.

**Architecture:** Reuse the backend-neutral execution seam that already exists for deterministic submit and browser_script execution. Keep lineloss business parsing, canonical args, and artifact interpretation unchanged; only make the ws backend/protocol and submit-path verification complete enough for the same lineloss skill contract to run over both pipe and ws.

**Tech Stack:** Rust 2021, Cargo tests, the existing `BrowserBackend` abstraction, `WsBrowserBackend`, `ws_protocol`, the browser websocket contract in `docs/_tmp_sgbrowser_ws_api_doc.txt`, the existing staged `browser_script` skill execution seam.

---

## Execution Context

- Follow @superpowers:test-driven-development for each behavior change.
- Follow @superpowers:verification-before-completion before claiming each task is done.
- Do **not** create a git worktree unless the user explicitly asks.
- This plan is **ws enablement only** for the already-added lineloss deterministic skill path.
- Do **not** redesign deterministic routing, org parsing, period parsing, staged skill packaging, or artifact contracts unless a failing ws-specific test proves a minimal compatibility fix is required.
- Do **not** modify validated Zhihu hotlist/export business behavior; only add regression coverage around it.
- Preserve the current pipe execution path as the control implementation.
- Preserve the current `BrowserBackend` seam; do not introduce a second lineloss-specific ws execution path.

## Scope Boundary

### In scope

- Make the existing lineloss deterministic `browser_script` skill path run through ws on this branch.
- Keep the same canonical tool args and returned artifact interpretation for both pipe and ws.
- Verify ws browser-script execution against the documented browser ws contract.
- Add focused tests for ws lineloss execution and regressions for Zhihu ws + pipe lineloss.

### Out of scope

- Changing lineloss trigger semantics (`。。。`).
- Changing org/unit normalization semantics or the source dictionary shape.
- Changing period normalization semantics.
- Reworking staged skill docs or JS business collection logic beyond ws-compatibility necessities.
- Any Zhihu feature work.
- Any pipe-only cleanup/refactor.
- Any general scene-registry redesign.

## File Map

### Expected code changes

- Modify: `src/pipe/protocol.rs:49-78,130-165,192-209`
  - keep `Action::Eval` encoding aligned with the current transport contract and lineloss skill expectations
- Modify: `src/pipe/browser_tool.rs:62-125`
  - ensure eval response correlation and payload handling remain sufficient for deterministic lineloss execution
- Modify only if a focused test proves it is necessary: `src/compat/browser_script_skill_tool.rs:135-255`
  - preserve the browser_script contract; only make minimal output-shape handling fixes if eval payloads differ from the pipe baseline in a way the current code cannot consume
- Modify only if a focused parity test proves it is necessary: `src/compat/direct_skill_runtime.rs:50-129`
  - preserve shared backend-neutral execution helper behavior; no business logic changes
- Read and normally leave unchanged: `src/compat/deterministic_submit.rs:96-157`
  - this is the business contract baseline and should not be rewritten for transport parity work
- Read and normally leave unchanged: `src/agent/mod.rs:242-285`
  - this contains the current deterministic dispatch split used by this branch

### Expected test changes

- Modify: `tests/agent_runtime_test.rs`
  - add/extend deterministic lineloss runtime coverage and parity assertions using the current runtime path
- Modify: `tests/compat_runtime_test.rs`
  - add/extend focused pipe lineloss regression assertions so transport work cannot silently break pipe
- Modify only if end-to-end submit coverage truly needs it: `tests/runtime_task_flow_test.rs`
  - verify broader submit-flow expectations remain intact

### Reference-only files

- Read only: `docs/superpowers/plans/2026-04-11-tq-lineloss-deterministic-skill-plan.md`
- Read only: `docs/superpowers/specs/2026-04-11-tq-lineloss-deterministic-skill-design.md`
- Read only: `docs/_tmp_sgbrowser_ws_api_doc.txt`

---

## Locked contracts

### Contract 1: Same lineloss deterministic business contract on both transports

The ws path must reuse the existing values produced by `src/compat/deterministic_submit.rs:84-95` and `src/compat/deterministic_submit.rs:135-166`:

- `expected_domain`
- `org_label`
- `org_code`
- `period_mode`
- `period_mode_code`
- `period_value`
- `period_payload`

No ws-specific lineloss args may be introduced in this slice.
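
The seven canonical values above can be pictured as a single transport-neutral map that both paths carry unchanged. The sketch below is illustrative only: the key names come from the contract list, but the builder function and the sample values are invented here, not taken from `deterministic_submit.rs`.

```rust
use std::collections::HashMap;

// Hypothetical illustration of the canonical lineloss args as one
// transport-neutral map. Keys are from the contract above; the values
// and this builder are invented for the sketch.
fn canonical_lineloss_args() -> HashMap<&'static str, String> {
    let mut args = HashMap::new();
    args.insert("expected_domain", "example.internal".to_string());
    args.insert("org_label", "城关供电".to_string());
    args.insert("org_code", "ORG-001".to_string());
    args.insert("period_mode", "month".to_string());
    args.insert("period_mode_code", "M".to_string());
    args.insert("period_value", "2026-03".to_string());
    args.insert("period_payload", "{\"month\":\"2026-03\"}".to_string());
    args
}

fn main() {
    let args = canonical_lineloss_args();
    // Both the pipe and the ws path must carry exactly these keys.
    for key in [
        "expected_domain", "org_label", "org_code", "period_mode",
        "period_mode_code", "period_value", "period_payload",
    ] {
        assert!(args.contains_key(key), "missing canonical arg: {key}");
    }
    println!("all {} canonical args present", args.len());
}
```

Introducing a ws-only eighth key here is exactly what the contract forbids.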

### Contract 2: Same browser_script execution seam on both transports

The ws path must continue to use `execute_browser_script_skill_raw_output_with_browser_backend(...)` from `src/compat/direct_skill_runtime.rs:95-112`, which in turn uses the same browser_script tool path as pipe. Do not add a second lineloss-only ws runner.

### Contract 3: Same artifact interpretation on both transports

The ws path must produce output that remains consumable by `summarize_lineloss_output(...)` / `summarize_lineloss_artifact(...)` in `src/compat/deterministic_submit.rs:168-257` without transport-specific branching.

### Contract 4: Zhihu ws behavior must stay unchanged

The existing ws browser-script / export path already validated by `tests/agent_runtime_test.rs` and `tests/compat_runtime_test.rs` is a hard regression boundary. If a change breaks Zhihu tests, fix the ws seam instead of weakening Zhihu expectations.

### Contract 5: Pipe remains the baseline

For identical lineloss deterministic inputs, the pipe path should continue to succeed without requiring ws configuration.

---

### Task 1: Lock the ws contract with failing transport-level tests

**Files:**

- Modify: `tests/agent_runtime_test.rs`
- Modify: `tests/compat_runtime_test.rs`
- Read: `docs/_tmp_sgbrowser_ws_api_doc.txt`

- [ ] **Step 1: Add a failing ws lineloss deterministic runtime test**

Model it after the existing ws harness in `tests/agent_runtime_test.rs:69-166`, but target lineloss deterministic execution instead of Zhihu. The test should:

- configure `browserWsUrl`
- submit a deterministic lineloss instruction ending with `。。。`
- return a ws callback payload representing a lineloss `report-artifact`
- assert that the success summary includes canonical org, period, status, and rows

Suggested skeleton:

```rust
#[test]
fn ws_deterministic_lineloss_submit_executes_browser_script_and_summarizes_artifact() {
    // arrange: ws config + ws server + lineloss artifact callback
    // act: handle_browser_message_with_context(... SubmitTask ...)
    // assert: TaskComplete success summary contains canonical org/period/rows
}
```

- [ ] **Step 2: Add a failing pipe regression test for the same lineloss contract**

In `tests/compat_runtime_test.rs`, add a focused pipe-side assertion that the same deterministic lineloss instruction still succeeds through the current pipe seam and uses the same summary contract.

Suggested skeleton:

```rust
#[test]
fn pipe_deterministic_lineloss_submit_preserves_existing_summary_contract() {
    // arrange: MockTransport responses for browser_script eval
    // act: handle_browser_message_with_context(...)
    // assert: success summary matches the canonical contract
}
```

- [ ] **Step 3: Add a failing ws regression assertion for Zhihu**

Add or tighten a Zhihu ws assertion proving that ordinary Zhihu requests still use the existing ws path and are not intercepted by lineloss deterministic logic.

- [ ] **Step 4: Run the three focused tests to confirm failure**

Run:

```bash
cargo test ws_deterministic_lineloss_submit_executes_browser_script_and_summarizes_artifact -- --exact
cargo test pipe_deterministic_lineloss_submit_preserves_existing_summary_contract -- --exact
cargo test ws_zhihu_submit_path_remains_unchanged_after_lineloss_transport_work -- --exact
```

Expected: at least the new ws lineloss test fails before the seam is completed.

- [ ] **Step 5: Commit**

```bash
git add tests/agent_runtime_test.rs tests/compat_runtime_test.rs
git commit -m "test: lock ws and pipe lineloss transport contracts"
```

---

### Task 2: Make the current eval transport contract explicitly satisfy browser-script requirements

**Files:**

- Modify: `src/pipe/protocol.rs:49-78,130-165,192-209`
- Modify: `src/pipe/browser_tool.rs:62-124`
- Modify only if tests prove necessary: `src/compat/browser_script_skill_tool.rs:99-180,214-255`
- Modify: `tests/pipe_protocol_test.rs`
- Modify: `tests/browser_tool_test.rs`
- Modify: `tests/browser_script_skill_tool_test.rs`

- [ ] **Step 1: Add failing protocol/result-contract tests first**

Extend or add focused tests to lock the current branch's real transport contract:

- `Action::Eval` remains supported by the line protocol and command encoding
- eval request/response correlation remains stable via `seq` matching for lineloss-style target URLs
- eval/browser_script result handling preserves the full JSON artifact string, without truncation, before deterministic lineloss summarization consumes it

Suggested skeletons:

```rust
#[test]
fn eval_action_remains_supported_in_protocol() {}

#[test]
fn browser_tool_matches_eval_response_by_seq_for_lineloss_flow() {}

#[test]
fn browser_script_tool_preserves_json_artifact_string_for_lineloss() {}
```
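
The `seq`-matching property the second test locks can be sketched independently of the real transport. The type and method names below (`PendingEvals`, `register`, `complete`) are invented for illustration; the actual correlation logic lives in `src/pipe/browser_tool.rs`.

```rust
use std::collections::HashMap;

// Illustrative sketch of seq-based request/response correlation:
// each outgoing eval gets a seq, and a response only completes the
// request whose seq it carries, even when responses arrive out of order.
struct PendingEvals {
    next_seq: u64,
    waiting: HashMap<u64, Option<String>>,
}

impl PendingEvals {
    fn new() -> Self {
        Self { next_seq: 0, waiting: HashMap::new() }
    }

    // Register an outgoing eval request and return its seq.
    fn register(&mut self) -> u64 {
        let seq = self.next_seq;
        self.next_seq += 1;
        self.waiting.insert(seq, None);
        seq
    }

    // Deliver a response; it only completes the matching seq.
    fn complete(&mut self, seq: u64, payload: String) -> bool {
        match self.waiting.get_mut(&seq) {
            Some(slot) => {
                *slot = Some(payload);
                true
            }
            None => false, // unknown seq is ignored, never misrouted
        }
    }

    fn take(&mut self, seq: u64) -> Option<String> {
        self.waiting.remove(&seq).flatten()
    }
}

fn main() {
    let mut pending = PendingEvals::new();
    let a = pending.register();
    let b = pending.register();
    // Out-of-order responses still land on the right request.
    assert!(pending.complete(b, "artifact-b".into()));
    assert!(pending.complete(a, "artifact-a".into()));
    assert!(!pending.complete(999, "stray".into()));
    assert_eq!(pending.take(a).as_deref(), Some("artifact-a"));
    assert_eq!(pending.take(b).as_deref(), Some("artifact-b"));
}
```

The stray-seq case is the one worth asserting in the real tests: a late or unrelated response must not complete a lineloss eval.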

- [ ] **Step 2: Run the focused Task 2 tests to confirm failure**

Run:

```bash
cargo test eval_action_remains_supported_in_protocol -- --exact
cargo test browser_tool_matches_eval_response_by_seq_for_lineloss_flow -- --exact
cargo test browser_script_tool_preserves_json_artifact_string_for_lineloss -- --exact
```

Expected: at least one test fails if the current protocol/correlation/result handling is still insufficient for the lineloss artifact path.

- [ ] **Step 3: Implement the minimal transport-contract fix**

Allowed changes:

- adjust only the `Action::Eval` protocol/encoding support in `src/pipe/protocol.rs`
- adjust only request/response correlation in `src/pipe/browser_tool.rs`
- if and only if tests still prove it necessary, make a tiny result-shape/stringification fix in `src/compat/browser_script_skill_tool.rs`
- keep existing Zhihu-compatible behavior intact

Not allowed:

- adding lineloss-only transport fields
- adding a second lineloss-specific execution path
- changing deterministic lineloss business parsing or summary rules

- [ ] **Step 4: Re-run the focused Task 2 tests**

Run:

```bash
cargo test eval_action_remains_supported_in_protocol -- --exact
cargo test browser_tool_matches_eval_response_by_seq_for_lineloss_flow -- --exact
cargo test browser_script_tool_preserves_json_artifact_string_for_lineloss -- --exact
```

Expected: PASS.

- [ ] **Step 5: Re-run the focused ws lineloss runtime test from Task 1**

Run:

```bash
cargo test ws_deterministic_lineloss_submit_executes_browser_script_and_summarizes_artifact -- --exact
```

Expected: PASS.

- [ ] **Step 6: Commit**

```bash
git add src/pipe/protocol.rs src/pipe/browser_tool.rs src/compat/browser_script_skill_tool.rs tests/pipe_protocol_test.rs tests/browser_tool_test.rs tests/browser_script_skill_tool_test.rs
git commit -m "fix: align eval transport contract with lineloss browser script flow"
```

---

### Task 3: Make eval result-shape handling surface the lineloss artifact cleanly

**Files:**

- Modify: `src/pipe/browser_tool.rs:62-125`
- Modify only if tests prove necessary: `src/compat/browser_script_skill_tool.rs:159-180,248-255`
- Modify: `tests/browser_script_skill_tool_test.rs`

- [ ] **Step 1: Add a failing result-shape test**

Lock in that an eval response carrying a JSON-string report artifact is surfaced in the same browser_script tool output shape expected by `execute_browser_script_tool(...)`.

Suggested skeleton:

```rust
#[test]
fn ws_backend_eval_returns_text_payload_consumable_by_browser_script_tool() {
    // arrange: an eval response whose data.text is a JSON string artifact
    // assert: execute_browser_script_tool(...) returns the full artifact text without truncation
}
```

- [ ] **Step 2: Run the result-shape test to confirm failure**

Run:

```bash
cargo test ws_backend_eval_returns_text_payload_consumable_by_browser_script_tool -- --exact
```

Expected: FAIL only if the current eval/result handling is not sufficient for full lineloss artifact output.

- [ ] **Step 3: Implement the minimal result-shape fix**

Allowed fixes:

- adjust `BrowserPipeTool::invoke(...)` only if the response packaging itself is wrong
- if and only if still required, make a tiny output-shape compatibility fix in `src/compat/browser_script_skill_tool.rs` so JSON-string `data.text` payloads are preserved identically to the pipe baseline

Not allowed:

- transport-specific lineloss parsing
- changes to deterministic business logic
- adding a second lineloss-specific execution path

- [ ] **Step 4: Re-run the result-shape test**

Run:

```bash
cargo test ws_backend_eval_returns_text_payload_consumable_by_browser_script_tool -- --exact
```

Expected: PASS.

- [ ] **Step 5: Re-run the focused ws lineloss runtime test from Task 1**

Run:

```bash
cargo test ws_deterministic_lineloss_submit_executes_browser_script_and_summarizes_artifact -- --exact
```

Expected: PASS.

- [ ] **Step 6: Commit**

```bash
git add src/pipe/browser_tool.rs src/compat/browser_script_skill_tool.rs tests/browser_script_skill_tool_test.rs
git commit -m "fix: make eval result shape match browser script contract"
```

---

### Task 4: Verify the current backend-neutral deterministic execution path without changing business rules

**Files:**

- Read baseline: `src/agent/mod.rs:242-285`
- Read baseline: `src/compat/deterministic_submit.rs:96-157`
- Modify only if a focused parity test proves it is necessary: `src/compat/direct_skill_runtime.rs:50-129`
- Modify: `tests/agent_runtime_test.rs`
- Modify: `tests/compat_runtime_test.rs`

- [ ] **Step 1: Add a failing integration test for backend-neutral parity**

Add a test proving that these two current-branch paths produce the same lineloss summary contract for equivalent artifact payloads:

- the pipe path, via the existing deterministic submit flow in `tests/compat_runtime_test.rs`
- the runtime path, via `handle_browser_message_with_context(...)` deterministic submit routing in `tests/agent_runtime_test.rs`

Suggested skeleton:

```rust
#[test]
fn deterministic_lineloss_pipe_and_ws_paths_share_summary_contract() {}
```
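
The backend-neutral seam this parity test exercises can be sketched as one summarization function sitting behind a backend trait. All names here (`BrowserBackendSketch`, `PipeSketch`, `WsSketch`, `summarize`) are invented for illustration; the real seam is the `BrowserBackend` abstraction plus `execute_browser_script_skill_raw_output_with_browser_backend(...)`.

```rust
// Illustrative sketch: two transports, one summarization path, no
// transport branching (Contracts 2 and 3). The artifact payload and
// summary format are placeholders.
trait BrowserBackendSketch {
    // Returns the raw artifact text, whatever the transport.
    fn eval(&self, script: &str) -> String;
}

struct PipeSketch;
struct WsSketch;

impl BrowserBackendSketch for PipeSketch {
    fn eval(&self, _script: &str) -> String {
        r#"{"kind":"report-artifact","rows":2}"#.to_string()
    }
}

impl BrowserBackendSketch for WsSketch {
    fn eval(&self, _script: &str) -> String {
        r#"{"kind":"report-artifact","rows":2}"#.to_string()
    }
}

// One summary function, shared by both backends.
fn summarize(artifact: &str) -> String {
    format!("summary of {} bytes", artifact.len())
}

fn run(backend: &dyn BrowserBackendSketch) -> String {
    summarize(&backend.eval("collect_lineloss()"))
}

fn main() {
    // Parity: identical artifact payloads yield identical summaries.
    assert_eq!(run(&PipeSketch), run(&WsSketch));
    println!("pipe/ws parity holds");
}
```

If the real parity test can only pass by branching on the transport inside the summary path, that is the signal to fix the seam instead.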

- [ ] **Step 2: Run the parity test to confirm failure or gap**

Run:

```bash
cargo test deterministic_lineloss_pipe_and_ws_paths_share_summary_contract -- --exact
```

Expected: FAIL only if a shared execution seam gap still remains.

- [ ] **Step 3: Apply the smallest shared execution fix if needed**

Allowed changes:

- tiny helper extraction or result handling in `src/compat/direct_skill_runtime.rs`
- no new lineloss-specific branch
- no change to deterministic lineloss business parsing or summary rules
- no change to configured direct-submit behavior for non-lineloss skills

- [ ] **Step 4: Re-run the parity test**

Run:

```bash
cargo test deterministic_lineloss_pipe_and_ws_paths_share_summary_contract -- --exact
```

Expected: PASS.

- [ ] **Step 5: Commit**

```bash
git add src/compat/direct_skill_runtime.rs tests/agent_runtime_test.rs tests/compat_runtime_test.rs
git commit -m "fix: preserve shared deterministic execution across pipe and ws"
```

---

### Task 5: Run the full focused verification set and stop if any Zhihu or pipe regression appears

**Files:**

- Reuse: `tests/agent_runtime_test.rs`
- Reuse: `tests/compat_runtime_test.rs`
- Reuse: `tests/runtime_task_flow_test.rs`

- [ ] **Step 1: Run focused ws + lineloss + Zhihu regression tests**

Run:

```bash
cargo test --test agent_runtime_test
cargo test --test compat_runtime_test
cargo test --test runtime_task_flow_test
```

Expected: PASS.

- [ ] **Step 2: Run targeted protocol/backend unit tests**

Run:

```bash
cargo test eval_action_remains_supported_in_protocol -- --exact
cargo test browser_tool_matches_eval_response_by_seq_for_lineloss_flow -- --exact
cargo test browser_script_tool_preserves_json_artifact_string_for_lineloss -- --exact
cargo test ws_backend_eval_returns_text_payload_consumable_by_browser_script_tool -- --exact
cargo test deterministic_lineloss_pipe_and_ws_paths_share_summary_contract -- --exact
```

Expected: PASS.

- [ ] **Step 3: Run the full Rust suite**

Run:

```bash
cargo test
```

Expected: PASS.

- [ ] **Step 4: Manually review the diff scope**

Confirm the diff only touches:

- the current transport/result seam files (`src/pipe/protocol.rs`, `src/pipe/browser_tool.rs`)
- narrow shared browser_script/result compatibility helpers, if strictly necessary
- tests

If the diff includes Zhihu business logic, lineloss parsing rules, staged skill business JS, or unrelated cleanup, remove those changes before completion.

- [ ] **Step 5: Commit**

```bash
git add src/pipe/protocol.rs src/pipe/browser_tool.rs src/compat/browser_script_skill_tool.rs src/compat/direct_skill_runtime.rs tests/pipe_protocol_test.rs tests/browser_tool_test.rs tests/browser_script_skill_tool_test.rs tests/agent_runtime_test.rs tests/compat_runtime_test.rs
git commit -m "test: verify lineloss ws transport without regressing pipe or zhihu"
```

---

## Final verification checklist

- [ ] The same lineloss deterministic instruction works on pipe and ws.
- [ ] Pipe still works without any ws configuration.
- [ ] Eval transport support remains available for deterministic lineloss execution.
- [ ] Eval response payloads preserve the full lineloss artifact JSON string.
- [ ] `src/compat/deterministic_submit.rs` business rules remain transport-neutral.
- [ ] No ws-specific lineloss args were introduced.
- [ ] Zhihu ws tests still pass with unchanged behavior.
- [ ] No ordinary Zhihu request is intercepted by lineloss deterministic routing.
- [ ] No new transport-specific business branch was added for lineloss.

## Implementation notes

- Default to changing the current transport/result seam first: `src/pipe/protocol.rs` and `src/pipe/browser_tool.rs`.
- Treat `src/compat/browser_script_skill_tool.rs` and `src/compat/direct_skill_runtime.rs` as shared seams: change them only if a focused failing test shows a transport-neutral compatibility bug.
- If a proposed fix requires changing `src/compat/deterministic_submit.rs` business logic, stop and re-evaluate; that likely means the seam fix is happening at the wrong layer.
- If a proposed fix changes Zhihu expectations, stop and repair the seam instead.

docs/superpowers/plans/2026-04-13-async-eval-then-fix.md (new file, 73 lines)

# Async Eval .then() Fix Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Fix `build_eval_js` to handle async script return values using `.then()` instead of an async IIFE.

**Architecture:** Extract the callback-sending logic into an `_s` helper function inside the generated JS. If the script returns a Promise, call `_s` via `.then()`; otherwise call `_s` synchronously. This keeps the outer IIFE synchronous for C++ injection compatibility.

**Tech Stack:** Rust, JavaScript

---
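
The wrapper shape this plan prescribes can be sketched with a stripped-down builder. This is illustrative only: `build_eval_js_sketch` is a hypothetical name, and the real `build_eval_js` in `callback_backend.rs` additionally wires in the escaped callback URLs shown in Task 1.

```rust
// Simplified sketch of the generated-JS pattern: a synchronous outer
// IIFE, an `_s` sender helper (body elided here), and `.then(_s)` only
// when the script's value is a Promise.
fn build_eval_js_sketch(script: &str) -> String {
    format!(
        "(function(){{try{{\
         var v=(function(){{return {script}}})();\
         function _s(v){{/* send v to the host */}}\
         if(v&&typeof v.then==='function'){{v.then(_s).catch(function(){{}});}}\
         else{{_s(v);}}\
         }}catch(e){{}}}})()"
    )
}

fn main() {
    let js = build_eval_js_sketch("fetch('/api').then(r=>r.text())");
    // The outer IIFE stays synchronous for C++ injection compatibility,
    assert!(js.starts_with("(function(){try{"));
    // while Promise results are forwarded through .then(_s).
    assert!(js.contains("v.then(_s)"));
    assert!(js.contains("else{_s(v);}"));
    println!("{js}");
}
```

The key design point is that the Promise check happens inside the generated JS at runtime, so synchronous scripts keep their existing immediate-callback behavior.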

## Files

- Modify: `src/browser/callback_backend.rs:433-447` - `build_eval_js` function

---

### Task 1: Modify build_eval_js to support async via .then()

**Files:**

- Modify: `src/browser/callback_backend.rs:433-447`

- [ ] **Step 1: Replace the build_eval_js implementation**

Replace the entire `build_eval_js` function body (lines 433-447) with:

```rust
fn build_eval_js(source_url: &str, script: &str) -> String {
    let escaped_source_url = escape_js_single_quoted(source_url);
    let callback = EVAL_CALLBACK_NAME;
    let events_url = escape_js_single_quoted(&events_endpoint_url(source_url));

    format!(
        "(function(){{try{{\
         var v=(function(){{return {script}}})();\
         function _s(v){{\
         var t=(typeof v==='string')?v:JSON.stringify(v);\
         try{{callBackJsToCpp('{escaped_source_url}@_@'+window.location.href+'@_@{callback}@_@sgBrowserExcuteJsCodeByDomain@_@'+(t??''))}}catch(_){{}}\
         var j=JSON.stringify({{type:'callback',callback:'{callback}',request_url:'{escaped_source_url}',payload:{{value:(t??'')}}}});\
         try{{var r=new XMLHttpRequest();r.open('POST','{events_url}',true);r.setRequestHeader('Content-Type','application/json');r.send(j)}}catch(_){{}}\
         try{{navigator.sendBeacon('{events_url}',new Blob([j],{{type:'application/json'}}))}}catch(_){{}}\
         }}\
         if(v&&typeof v.then==='function'){{v.then(_s).catch(function(){{}});}}else{{_s(v);}}\
         }}catch(e){{}}}})()"
    )
}
```

- [ ] **Step 2: Run tests**

Run: `cargo test browser_script_skill_tool --no-fail-fast`

Expected: All tests pass.

- [ ] **Step 3: Run the full test suite**

Run: `cargo test`

Expected: All tests pass (except the pre-existing, unrelated `lineloss_period_resolver_prompts_for_missing_period` failure).

- [ ] **Step 4: Build**

Run: `cargo build`

Expected: Compiles with no errors.

- [ ] **Step 5: Commit**

```bash
git add src/browser/callback_backend.rs
git commit -m "fix: support async browser scripts via .then() in build_eval_js"
```

# Rust-Side Lineloss XLSX Export Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Move XLSX export from browser JS (blocked by CORS) to the Rust side, so that `collect_lineloss.js` only collects data and Rust generates the `.xlsx` file locally.

**Architecture:** JS collects API data and returns a `report-artifact` JSON with `rows`, `column_defs`, and metadata. Rust parses the artifact, extracts the rows + column definitions, and generates a standard `.xlsx` file using the `zip` crate + OpenXML XML strings (the same pattern as `openxml_office_tool.rs`). The report log is deferred.

**Tech Stack:** Rust, `zip` 0.6.6, `serde_json`, OpenXML SpreadsheetML, JavaScript (browser-injected)

**Spec:** `docs/superpowers/specs/2026-04-13-rust-side-lineloss-xlsx-export.md`

---
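
The "OpenXML XML strings" part of the architecture can be previewed with a minimal sketch of one worksheet row built from inline-string cells. The function names and the single-letter column scheme are invented for illustration; the real module will follow the `openxml_office_tool.rs` pattern and handle full workbook packaging.

```rust
// Hypothetical helpers sketching the SpreadsheetML shape the exporter
// emits: <row> elements of inline-string <c> cells.
fn xml_escape(s: &str) -> String {
    s.replace('&', "&amp;").replace('<', "&lt;").replace('>', "&gt;")
}

fn build_row_xml(row_index: usize, cells: &[&str]) -> String {
    let mut xml = format!("<row r=\"{}\">", row_index);
    for (col, value) in cells.iter().enumerate() {
        // Single-letter column refs: good enough for <= 26 columns in a sketch.
        let col_letter = (b'A' + col as u8) as char;
        xml.push_str(&format!(
            "<c r=\"{}{}\" t=\"inlineStr\"><is><t>{}</t></is></c>",
            col_letter, row_index, xml_escape(value)
        ));
    }
    xml.push_str("</row>");
    xml
}

fn main() {
    let header = build_row_xml(1, &["供电单位", "累计供电量"]);
    assert!(header.contains("<c r=\"A1\" t=\"inlineStr\">"));
    assert!(header.contains("<t>供电单位</t>"));
    let data = build_row_xml(2, &["城关供电", "12345.67"]);
    assert!(data.starts_with("<row r=\"2\">"));
    assert!(data.contains("<t>12345.67</t>"));
    println!("{header}\n{data}");
}
```

Inline strings (`t="inlineStr"`) avoid the shared-strings table entirely, which keeps the generated archive simple at the cost of some duplication; that trade-off fits a single-sheet report like this one.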

## File Structure

| File | Responsibility |
|------|---------------|
| `src/compat/lineloss_xlsx_export.rs` | **New.** Pure XLSX generation: takes column defs + row data, produces the `.xlsx` file. No business logic. |
| `src/compat/deterministic_submit.rs` | **Modify.** After receiving the JS artifact, extract rows + column_defs, call the XLSX export, attach the path to the outcome. |
| `src/compat/mod.rs` | **Modify.** Register the `lineloss_xlsx_export` module. |
| `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js` | **Modify.** Remove the `exportWorkbook`/`writeReportLog` calls. Add `column_defs` to the artifact. |
| `tests/lineloss_xlsx_export_test.rs` | **New.** Unit tests for XLSX generation. |

---

### Task 1: Create `lineloss_xlsx_export.rs` with Tests

**Files:**

- Create: `src/compat/lineloss_xlsx_export.rs`
- Create: `tests/lineloss_xlsx_export_test.rs`
- Modify: `src/compat/mod.rs`

- [ ] **Step 1: Register the new module in `src/compat/mod.rs`**

Add the module declaration in alphabetical order. In `src/compat/mod.rs`, insert after `pub mod event_bridge;`:

```rust
pub mod lineloss_xlsx_export;
```

The full file becomes:

```rust
pub mod artifact_open;
pub mod browser_script_skill_tool;
pub mod browser_tool_adapter;
pub mod config_adapter;
pub mod cron_adapter;
pub mod deterministic_submit;
pub mod direct_skill_runtime;
pub mod event_bridge;
pub mod lineloss_xlsx_export;
pub mod memory_adapter;
pub mod openxml_office_tool;
pub mod orchestration;
pub mod runtime;
pub mod screen_html_export_tool;
pub mod tq_lineloss;
pub mod workflow_executor;
```
- [ ] **Step 2: Write the failing test for XLSX generation**
|
||||
|
||||
Create `tests/lineloss_xlsx_export_test.rs`:
|
||||
|
||||
```rust
|
||||
use std::fs;
|
||||
use std::path::PathBuf;
|
||||
|
||||
use serde_json::json;
|
||||
use sgclaw::compat::lineloss_xlsx_export::{export_lineloss_xlsx, LinelossExportRequest};
|
||||
|
||||
fn temp_output_path(name: &str) -> PathBuf {
|
||||
let dir = std::env::temp_dir().join("sgclaw-test-xlsx");
|
||||
fs::create_dir_all(&dir).unwrap();
|
||||
dir.join(name)
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn export_month_lineloss_produces_valid_xlsx() {
|
||||
let output_path = temp_output_path("month-test.xlsx");
|
||||
if output_path.exists() {
|
||||
fs::remove_file(&output_path).unwrap();
|
||||
}
|
||||
|
||||
let request = LinelossExportRequest {
|
||||
sheet_name: "国网兰州供电公司月度线损分析报表(2026-03)".to_string(),
|
||||
column_defs: vec![
|
||||
("ORG_NAME".to_string(), "供电单位".to_string()),
|
||||
("YGDL".to_string(), "累计供电量".to_string()),
|
||||
("YYDL".to_string(), "累计售电量".to_string()),
|
||||
            ("YXSL".to_string(), "线损完成率(%)".to_string()),
            ("RAT_SCOPE".to_string(), "线损率累计目标值".to_string()),
            ("BLANK3".to_string(), "目标完成率".to_string()),
            ("BLANK2".to_string(), "排行".to_string()),
        ],
        rows: vec![
            serde_json::from_value(json!({
                "ORG_NAME": "城关供电",
                "YGDL": "12345.67",
                "YYDL": "11234.56",
                "YXSL": "9.00",
                "RAT_SCOPE": "9.50",
                "BLANK3": "94.74",
                "BLANK2": "1"
            }))
            .unwrap(),
            serde_json::from_value(json!({
                "ORG_NAME": "七里河供电",
                "YGDL": "9876.54",
                "YYDL": "8765.43",
                "YXSL": "11.24",
                "RAT_SCOPE": "10.00",
                "BLANK3": "112.40",
                "BLANK2": "2"
            }))
            .unwrap(),
        ],
        output_path: output_path.clone(),
    };

    let result_path = export_lineloss_xlsx(&request).unwrap();
    assert_eq!(result_path, output_path);
    assert!(output_path.exists());

    // Verify it's a valid ZIP (xlsx is a zip archive)
    let file = fs::File::open(&output_path).unwrap();
    let mut archive = zip::ZipArchive::new(file).unwrap();

    // Must contain the standard OpenXML entries
    let entry_names: Vec<String> = (0..archive.len())
        .map(|i| archive.by_index(i).unwrap().name().to_string())
        .collect();

    assert!(entry_names.contains(&"[Content_Types].xml".to_string()));
    assert!(entry_names.contains(&"xl/worksheets/sheet1.xml".to_string()));
    assert!(entry_names.contains(&"xl/workbook.xml".to_string()));

    // Read sheet1.xml and verify it contains our data
    let mut sheet = archive.by_name("xl/worksheets/sheet1.xml").unwrap();
    let mut xml = String::new();
    std::io::Read::read_to_string(&mut sheet, &mut xml).unwrap();

    assert!(xml.contains("供电单位"), "header row should contain 供电单位");
    assert!(xml.contains("累计供电量"), "header row should contain 累计供电量");
    assert!(xml.contains("城关供电"), "data should contain 城关供电");
    assert!(xml.contains("12345.67"), "data should contain 12345.67");
    assert!(xml.contains("七里河供电"), "data should contain second row");

    // Cleanup
    fs::remove_file(&output_path).unwrap();
}

#[test]
fn export_empty_rows_returns_error() {
    let output_path = temp_output_path("empty-test.xlsx");

    let request = LinelossExportRequest {
        sheet_name: "test".to_string(),
        column_defs: vec![("A".to_string(), "ColA".to_string())],
        rows: vec![],
        output_path: output_path.clone(),
    };

    let result = export_lineloss_xlsx(&request);
    assert!(result.is_err());
    assert!(
        result.unwrap_err().to_string().contains("rows must not be empty"),
        "should reject empty rows"
    );
}
```

- [ ] **Step 3: Run the test to verify it fails**

Run: `cargo test --test lineloss_xlsx_export_test -- --nocapture`

Expected: compilation error — the `lineloss_xlsx_export` module doesn't exist yet, so `export_lineloss_xlsx` / `LinelossExportRequest` are undefined.

- [ ] **Step 4: Implement `src/compat/lineloss_xlsx_export.rs`**

```rust
use std::fs;
use std::io::Write;
use std::path::{Path, PathBuf};

use serde_json::{Map, Value};
use zip::write::FileOptions;
use zip::{CompressionMethod, ZipWriter};

pub struct LinelossExportRequest {
    pub sheet_name: String,
    pub column_defs: Vec<(String, String)>,
    pub rows: Vec<Map<String, Value>>,
    pub output_path: PathBuf,
}

pub fn export_lineloss_xlsx(request: &LinelossExportRequest) -> anyhow::Result<PathBuf> {
    if request.rows.is_empty() {
        anyhow::bail!("rows must not be empty");
    }
    if request.column_defs.is_empty() {
        anyhow::bail!("column_defs must not be empty");
    }

    let sheet_xml = build_worksheet_xml(&request.column_defs, &request.rows);

    write_xlsx(
        &request.output_path,
        &request.sheet_name,
        &sheet_xml,
    )?;

    Ok(request.output_path.clone())
}

fn build_worksheet_xml(
    column_defs: &[(String, String)],
    rows: &[Map<String, Value>],
) -> String {
    let mut xml_rows = Vec::with_capacity(rows.len() + 1);

    // Header row (row 1)
    let header_cells: Vec<String> = column_defs
        .iter()
        .enumerate()
        .map(|(col_idx, (_key, label))| {
            let col_letter = column_letter(col_idx);
            format!(
                "<c r=\"{col_letter}1\" t=\"inlineStr\"><is><t>{}</t></is></c>",
                xml_escape(label)
            )
        })
        .collect();
    xml_rows.push(format!("<row r=\"1\">{}</row>", header_cells.join("")));

    // Data rows (row 2+)
    for (row_idx, row) in rows.iter().enumerate() {
        let excel_row = row_idx + 2;
        let cells: Vec<String> = column_defs
            .iter()
            .enumerate()
            .map(|(col_idx, (key, _label))| {
                let col_letter = column_letter(col_idx);
                let value = row
                    .get(key)
                    .map(|v| value_to_string(v))
                    .unwrap_or_default();
                format!(
                    "<c r=\"{col_letter}{excel_row}\" t=\"inlineStr\"><is><t>{}</t></is></c>",
                    xml_escape(&value)
                )
            })
            .collect();
        xml_rows.push(format!("<row r=\"{excel_row}\">{}</row>", cells.join("")));
    }

    format!(
        "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>\
         <worksheet xmlns=\"http://schemas.openxmlformats.org/spreadsheetml/2006/main\">\
         <sheetData>{}</sheetData>\
         </worksheet>",
        xml_rows.join("")
    )
}

fn column_letter(index: usize) -> String {
    let mut result = String::new();
    let mut n = index;
    loop {
        result.insert(0, (b'A' + (n % 26) as u8) as char);
        if n < 26 {
            break;
        }
        n = n / 26 - 1;
    }
    result
}

fn value_to_string(value: &Value) -> String {
    match value {
        Value::String(text) => text.clone(),
        Value::Number(number) => number.to_string(),
        Value::Bool(flag) => flag.to_string(),
        Value::Null => String::new(),
        other => other.to_string(),
    }
}

fn xml_escape(value: &str) -> String {
    value
        .replace('&', "&amp;")
        .replace('<', "&lt;")
        .replace('>', "&gt;")
}

fn write_xlsx(output_path: &Path, sheet_name: &str, sheet_xml: &str) -> anyhow::Result<()> {
    if let Some(parent) = output_path.parent() {
        fs::create_dir_all(parent)?;
    }
    if output_path.exists() {
        fs::remove_file(output_path)?;
    }

    let file = fs::File::create(output_path)?;
    let mut zip = ZipWriter::new(file);
    let options = FileOptions::default().compression_method(CompressionMethod::Stored);

    zip.start_file("[Content_Types].xml", options)?;
    zip.write_all(content_types_xml().as_bytes())?;

    zip.start_file("_rels/.rels", options)?;
    zip.write_all(root_rels_xml().as_bytes())?;

    zip.start_file("docProps/app.xml", options)?;
    zip.write_all(app_xml().as_bytes())?;

    zip.start_file("docProps/core.xml", options)?;
    zip.write_all(core_xml().as_bytes())?;

    zip.start_file("xl/workbook.xml", options)?;
    zip.write_all(workbook_xml(&xml_escape(sheet_name)).as_bytes())?;

    zip.start_file("xl/_rels/workbook.xml.rels", options)?;
    zip.write_all(workbook_rels_xml().as_bytes())?;

    zip.start_file("xl/worksheets/sheet1.xml", options)?;
    zip.write_all(sheet_xml.as_bytes())?;

    zip.finish()?;
    Ok(())
}

fn content_types_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
<Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
<Default Extension="xml" ContentType="application/xml"/>
<Override PartName="/xl/workbook.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.main+xml"/>
<Override PartName="/xl/worksheets/sheet1.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.worksheet+xml"/>
<Override PartName="/docProps/core.xml" ContentType="application/vnd.openxmlformats-package.core-properties+xml"/>
<Override PartName="/docProps/app.xml" ContentType="application/vnd.openxmlformats-officedocument.extended-properties+xml"/>
</Types>"#
}

fn root_rels_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="xl/workbook.xml"/>
<Relationship Id="rId2" Type="http://schemas.openxmlformats.org/package/2006/relationships/metadata/core-properties" Target="docProps/core.xml"/>
<Relationship Id="rId3" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/extended-properties" Target="docProps/app.xml"/>
</Relationships>"#
}

fn app_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Properties xmlns="http://schemas.openxmlformats.org/officeDocument/2006/extended-properties"
xmlns:vt="http://schemas.openxmlformats.org/officeDocument/2006/docPropsVTypes">
<Application>sgClaw</Application>
</Properties>"#
}

fn core_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cp:coreProperties xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:dcmitype="http://purl.org/dc/dcmitype/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<dc:title>台区线损报表</dc:title>
</cp:coreProperties>"#
}

fn workbook_xml(sheet_name: &str) -> String {
    format!(
        r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<workbook xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"
xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">
<sheets>
<sheet name="{sheet_name}" sheetId="1" r:id="rId1"/>
</sheets>
</workbook>"#
    )
}

fn workbook_rels_xml() -> &'static str {
    r#"<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
<Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/worksheet" Target="worksheets/sheet1.xml"/>
</Relationships>"#
}

#[cfg(test)]
mod tests {
    use super::column_letter;

    #[test]
    fn column_letter_maps_indices_correctly() {
        assert_eq!(column_letter(0), "A");
        assert_eq!(column_letter(1), "B");
        assert_eq!(column_letter(6), "G");
        assert_eq!(column_letter(25), "Z");
        assert_eq!(column_letter(26), "AA");
    }
}
```

- [ ] **Step 5: Run the tests to verify they pass**

Run: `cargo test --test lineloss_xlsx_export_test -- --nocapture`

Expected: both `export_month_lineloss_produces_valid_xlsx` and `export_empty_rows_returns_error` PASS.

Also run the internal unit test:

Run: `cargo test lineloss_xlsx_export -- --nocapture`

Expected: `column_letter_maps_indices_correctly` PASS.

- [ ] **Step 6: Commit**

```bash
git add src/compat/lineloss_xlsx_export.rs src/compat/mod.rs tests/lineloss_xlsx_export_test.rs
git commit -m "feat(lineloss): add Rust-side XLSX generation for lineloss reports"
```

---

### Task 2: Integrate XLSX Export into `deterministic_submit.rs`

**Files:**
- Modify: `src/compat/deterministic_submit.rs`

- [ ] **Step 1: Add imports and helper function to extract export data from artifact**

At the top of `src/compat/deterministic_submit.rs`, add the import:

```rust
use crate::compat::lineloss_xlsx_export::{export_lineloss_xlsx, LinelossExportRequest};
```

Then add a new helper function after `summarize_lineloss_artifact`:

```rust
struct LinelossArtifactExportData {
    sheet_name: String,
    column_defs: Vec<(String, String)>,
    rows: Vec<Map<String, Value>>,
}

fn extract_export_data(output: &str) -> Option<LinelossArtifactExportData> {
    let payload: Value = serde_json::from_str(output).ok()?;
    let artifact = payload
        .as_object()
        .and_then(|object| object.get("text"))
        .unwrap_or(&payload);
    let artifact = artifact.as_object()?;

    if artifact.get("type").and_then(Value::as_str) != Some("report-artifact") {
        return None;
    }

    let status = artifact.get("status").and_then(Value::as_str).unwrap_or("");
    if !matches!(status, "ok" | "partial") {
        return None;
    }

    let rows = artifact
        .get("rows")
        .and_then(Value::as_array)?;
    if rows.is_empty() {
        return None;
    }
    let rows: Vec<Map<String, Value>> = rows
        .iter()
        .filter_map(|row| row.as_object().cloned())
        .collect();
    if rows.is_empty() {
        return None;
    }

    let column_defs: Vec<(String, String)> = artifact
        .get("column_defs")
        .and_then(Value::as_array)
        .map(|defs| {
            defs.iter()
                .filter_map(|def| {
                    let arr = def.as_array()?;
                    let key = arr.first()?.as_str()?.to_string();
                    let label = arr.get(1)?.as_str()?.to_string();
                    Some((key, label))
                })
                .collect()
        })
        .unwrap_or_default();

    // Fallback: if column_defs not in artifact, try "columns" array as keys
    let column_defs = if column_defs.is_empty() {
        let columns = artifact
            .get("columns")
            .and_then(Value::as_array)?;
        columns
            .iter()
            .filter_map(|col| {
                let key = col.as_str()?.to_string();
                Some((key.clone(), key))
            })
            .collect()
    } else {
        column_defs
    };

    if column_defs.is_empty() {
        return None;
    }

    let org_label = artifact
        .get("org")
        .and_then(Value::as_object)
        .and_then(|org| org.get("label"))
        .and_then(Value::as_str)
        .unwrap_or("lineloss");
    let period_mode = artifact
        .get("period")
        .and_then(Value::as_object)
        .and_then(|p| p.get("mode"))
        .and_then(Value::as_str)
        .unwrap_or("month");
    let period_value = artifact
        .get("period")
        .and_then(Value::as_object)
        .and_then(|p| p.get("value"))
        .and_then(Value::as_str)
        .unwrap_or("");
    let mode_label = if period_mode == "week" { "周度" } else { "月度" };
    let sheet_name = format!("{org_label}{mode_label}线损分析报表({period_value})");

    Some(LinelossArtifactExportData {
        sheet_name,
        column_defs,
        rows,
    })
}
```
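
For reference, a minimal artifact that `extract_export_data` accepts can be sketched from the JS side. The field names, the `report-artifact` type, and the `ok`/`partial` status come from the helper above; the concrete values and the `text` envelope are illustrative assumptions:

```javascript
// Minimal report artifact accepted by the Rust-side extract_export_data.
// Values here are made up for illustration.
const artifact = {
  type: 'report-artifact',
  status: 'ok',
  org: { label: '城关供电' },
  period: { mode: 'month', value: '2026-03' },
  columns: ['ORG_NAME', 'YGDL'],
  column_defs: [['ORG_NAME', '供电单位'], ['YGDL', '累计供电量']],
  rows: [{ ORG_NAME: '城关供电', YGDL: '12345.67' }]
};

// Some transports wrap the artifact in a "text" envelope;
// the extractor unwraps it when present.
const output = JSON.stringify({ text: artifact });
```

When `column_defs` is present the `columns` fallback never fires; when it is absent, each key doubles as its own label.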

- [ ] **Step 2: Add the export-after-collection function**

Add a new function that wraps the existing flow with XLSX export:

```rust
fn try_export_lineloss_xlsx(
    output: &str,
    workspace_root: &Path,
) -> Option<PathBuf> {
    let data = extract_export_data(output)?;
    let nanos = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .map(|d| d.as_nanos())
        .unwrap_or_default();
    let out_dir = workspace_root.join("out");
    let output_path = out_dir.join(format!("tq-lineloss-{nanos}.xlsx"));

    let request = LinelossExportRequest {
        sheet_name: data.sheet_name,
        column_defs: data.column_defs,
        rows: data.rows,
        output_path,
    };

    match export_lineloss_xlsx(&request) {
        Ok(path) => {
            eprintln!("[deterministic_submit] XLSX exported to: {}", path.display());
            Some(path)
        }
        Err(err) => {
            eprintln!("[deterministic_submit] XLSX export failed: {err}");
            None
        }
    }
}
```

- [ ] **Step 3: Modify `execute_deterministic_submit_with_browser_backend` to call export**

Replace the body of `execute_deterministic_submit_with_browser_backend` (lines 119-136 of the original file):

```rust
pub fn execute_deterministic_submit_with_browser_backend(
    browser_backend: Arc<dyn BrowserBackend>,
    plan: &DeterministicExecutionPlan,
    workspace_root: &Path,
    settings: &SgClawSettings,
) -> Result<DirectSubmitOutcome, PipeError> {
    let args = deterministic_submit_args(plan);
    let output =
        crate::compat::direct_skill_runtime::execute_browser_script_skill_raw_output_with_browser_backend(
            browser_backend,
            &plan.tool_name,
            workspace_root,
            settings,
            args,
        )?;

    let export_path = try_export_lineloss_xlsx(&output, workspace_root);
    Ok(summarize_lineloss_output_with_export(&output, export_path.as_deref()))
}
```

Apply the same change to `execute_deterministic_submit` (the non-backend variant, lines 101-117):

```rust
pub fn execute_deterministic_submit<T: Transport + 'static>(
    browser_tool: BrowserPipeTool<T>,
    plan: &DeterministicExecutionPlan,
    workspace_root: &Path,
    settings: &SgClawSettings,
) -> Result<DirectSubmitOutcome, PipeError> {
    let args = deterministic_submit_args(plan);
    let output = crate::compat::direct_skill_runtime::execute_browser_script_skill_raw_output(
        browser_tool,
        &plan.tool_name,
        workspace_root,
        settings,
        args,
    )?;

    let export_path = try_export_lineloss_xlsx(&output, workspace_root);
    Ok(summarize_lineloss_output_with_export(&output, export_path.as_deref()))
}
```

- [ ] **Step 4: Add `summarize_lineloss_output_with_export` function**

Add this new function. It wraps the existing `summarize_lineloss_output` and appends the export path:

```rust
fn summarize_lineloss_output_with_export(output: &str, export_path: Option<&Path>) -> DirectSubmitOutcome {
    let mut outcome = summarize_lineloss_output(output);

    if let Some(path) = export_path {
        outcome.summary.push_str(&format!(" export_path={}", path.display()));
    }

    outcome
}
```

- [ ] **Step 5: Run existing tests to ensure nothing breaks**

Run: `cargo test --test deterministic_submit_test -- --nocapture`

Expected: all existing tests PASS (the tests don't call `execute_deterministic_submit`; they test `decide_deterministic_submit` and parsing logic, which is unchanged).

Run: `cargo test deterministic_submit -- --nocapture`

Expected: PASS.

- [ ] **Step 6: Commit**

```bash
git add src/compat/deterministic_submit.rs
git commit -m "feat(lineloss): integrate Rust-side XLSX export into deterministic submit pipeline"
```

---

### Task 3: Modify `collect_lineloss.js` to Skip Browser-Side Export

**Files:**
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js`

- [ ] **Step 1: Add `column_defs` to the artifact returned by `buildArtifact`**

In the `buildArtifact` function (around line 198), the `columns` field currently contains just column keys (e.g., `["ORG_NAME", "YGDL", ...]`). Add a `column_defs` field that includes the full key+label pairs. Change the `buildArtifact` function to also accept and emit `column_defs`.

Find this block in `buildArtifact` (lines 198-242):

```javascript
function buildArtifact({
  status,
  blockedReason = '',
  fatalError = '',
  org_label = '',
  org_code = '',
  period_mode = '',
  period_mode_code = '',
  period_value = '',
  period_payload = {},
  columns = [],
  rows = [],
  export: exportState,
  reasons = []
}) {
```

Replace with:

```javascript
function buildArtifact({
  status,
  blockedReason = '',
  fatalError = '',
  org_label = '',
  org_code = '',
  period_mode = '',
  period_mode_code = '',
  period_value = '',
  period_payload = {},
  columns = [],
  column_defs = [],
  rows = [],
  export: exportState,
  reasons = []
}) {
```

In the returned object (the `return { ... }` block inside `buildArtifact`), add `column_defs` after `columns`:

```javascript
columns: [...columns],
column_defs: [...column_defs],
rows: [...rows],
```

- [ ] **Step 2: Pass `column_defs` from `buildBrowserEntrypointResult`**

In `buildBrowserEntrypointResult`, after the `columns` assignment (around line 452), add:

```javascript
const columns = normalizedArgs.period_mode === 'week' ? WEEK_COLUMNS : MONTH_COLUMNS;
const columnDefs = normalizedArgs.period_mode === 'week' ? WEEK_COLUMN_DEFS : MONTH_COLUMN_DEFS;
```

Then, in every call to `buildArtifact` inside `buildBrowserEntrypointResult` that passes `columns`, add `column_defs: columnDefs` alongside it:

**Call 1** (API error, around line 466):
```javascript
columns,
column_defs: columnDefs,
rows: [],
```

**Call 2** (empty rows, around line 483):
```javascript
columns,
column_defs: columnDefs,
rows: []
```

**Call 3** (normalization failure, around line 497):
```javascript
columns,
column_defs: columnDefs,
rows: [],
```

**Call 4** (success, around line 558):
```javascript
columns,
column_defs: columnDefs,
rows,
```

Note: the two `buildArtifact` calls before the `columns` variable is assigned (validation failure and page context failure, around lines 422 and 439) don't need `column_defs` since they don't have data.

- [ ] **Step 3: Remove the `exportWorkbook` and `writeReportLog` calls from the success path**

In `buildBrowserEntrypointResult`, replace the entire export block (lines 518-556) with a simplified version:

Find:
```javascript
const exportState = {
  attempted: false,
  status: 'skipped',
  message: null
};

if (typeof deps.exportWorkbook === 'function') {
  exportState.attempted = true;
  try {
    const exportPayload = buildExportPayload({
      mode: normalizedArgs.period_mode,
      orgLabel: normalizedArgs.org_label,
      periodValue: normalizedArgs.period_value,
      rows
    });
    const exportResult = await deps.exportWorkbook(exportPayload);
    const exportPath = pickFirstNonEmpty(exportResult?.path, exportResult?.data?.path, exportResult?.data?.data);
    if (!exportPath) {
      throw new Error('export_failed');
    }
    exportState.status = 'ok';
    exportState.message = exportPath;

    if (typeof deps.writeReportLog === 'function') {
      try {
        const reportLog = await deps.writeReportLog(buildReportName(normalizedArgs), exportPath);
        if (reportLog?.success === false) {
          reasons.push('report_log_failed');
        }
      } catch (_error) {
        reasons.push('report_log_failed');
      }
    }
  } catch (error) {
    reasons.push('export_failed');
    exportState.status = 'failed';
    exportState.message = pickFirstNonEmpty(error?.message, 'export_failed');
  }
}
```

Replace with:
```javascript
// Export is handled by the Rust side after receiving the artifact.
// JS only provides rows + column_defs in the artifact.
const exportState = {
  attempted: false,
  status: 'deferred_to_rust',
  message: null
};
```

- [ ] **Step 4: Remove unused constants and functions**

Remove these constants (lines 5-6) since they are no longer called from JS:

```javascript
const EXPORT_SERVICE_URL = 'http://localhost:13313/SurfaceServices/personalBread/export/faultDetailsExportXLSX';
const REPORT_LOG_URL = 'http://localhost:13313/ReportServices/Api/setReportLog';
```

Remove the `postJson` function (lines 264-294) — it is no longer needed since no JS-side HTTP calls are made to localhost.

Remove these functions from `defaultBrowserDeps()`:
- `exportWorkbook` (lines 350-373)
- `writeReportLog` (lines 375-409)

Remove these now-unused functions:
- `buildExportTitles` (lines 244-254)
- `buildExportPayload` (lines 256-262)
- `buildReportName` (lines 413-415)

- [ ] **Step 5: Update the module.exports to remove unused exports**

Update the `module.exports` block (lines 572-586). Keep `buildBrowserEntrypointResult` exported for test compatibility, and drop any exports that referenced the removed helpers. The final exports block:

```javascript
if (typeof module !== 'undefined' && module.exports) {
  module.exports = {
    MONTH_COLUMNS,
    WEEK_COLUMNS,
    MONTH_COLUMN_DEFS,
    WEEK_COLUMN_DEFS,
    validateArgs,
    buildMonthRequest,
    buildWeekRequest,
    normalizeRows,
    determineArtifactStatus,
    buildArtifact,
    buildBrowserEntrypointResult
  };
} else {
  return buildBrowserEntrypointResult(args);
}
```

- [ ] **Step 6: Verify the JS file has no syntax errors**

Run: `node -c "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js"`

Expected: no syntax errors. (Note: the file uses `return` at top level inside a wrapped IIFE when injected into the browser, so the Node syntax check may warn — the important thing is no parse errors.)

Alternatively, check the test file still works:

Run: `node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.test.js"`

Expected: tests pass (or at least no JS parse errors).

- [ ] **Step 7: Commit**

```bash
git add "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js"
git commit -m "feat(lineloss): remove browser-side export, defer to Rust-side XLSX generation"
```

---

### Task 4: Full Build Verification

**Files:** None (verification only)

- [ ] **Step 1: Run full cargo build**

Run: `cargo build`

Expected: successful compilation with no errors.

- [ ] **Step 2: Run all tests**

Run: `cargo test -- --nocapture`

Expected: all tests pass, including:
- `lineloss_xlsx_export_test::export_month_lineloss_produces_valid_xlsx`
- `lineloss_xlsx_export_test::export_empty_rows_returns_error`
- `lineloss_xlsx_export::tests::column_letter_maps_indices_correctly`
- All existing `deterministic_submit_test` tests

- [ ] **Step 3: Commit (if any fixups needed)**

Only if compilation or test fixes were required in this step.

@@ -0,0 +1,117 @@
|
||||
# Helper Page Lifecycle Fix v2 — Same-Connection Close + Open
|
||||
|
||||
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
|
||||
|
||||
**Goal:** Prevent orphaned helper pages across process restarts by closing existing ones before opening new ones, all on the same WebSocket connection.
|
||||
|
||||
**Architecture:** In `bootstrap_helper_page`, after registering with the browser WS, send `sgHideBrowerserClosePage` (best-effort, silently ignored if no page exists), then send `sgHideBrowerserOpenPage`. Change `use_hidden_domain` to `true`.
|
||||
|
||||
**Tech Stack:** Rust, tungstenite, SuperRPA browser WS protocol
|
||||
|
||||
---
|
||||
|
||||
### Task 1: Add close-before-open in bootstrap_helper_page
|
||||
|
||||
**Files:**
|
||||
- Modify: `src/browser/callback_host.rs:345-374` (bootstrap_helper_page function)
|
||||
|
||||
- [ ] **Step 1: Add close command before open command in bootstrap_helper_page**
|
||||
|
||||
Replace the current `bootstrap_helper_page` function. After `recv_bootstrap_prelude`, send the close command first, then the open command:
|
||||
|
||||
```rust
|
||||
fn bootstrap_helper_page(
|
||||
browser_ws_url: &str,
|
||||
request_url: &str,
|
||||
helper_url: &str,
|
||||
use_hidden_domain: bool,
|
||||
) -> Result<(), PipeError> {
|
||||
let (mut websocket, _) = connect(browser_ws_url)
|
||||
.map_err(|err| PipeError::Protocol(format!("browser websocket connect failed: {err}")))?;
|
||||
configure_bootstrap_socket(&mut websocket)?;
|
||||
websocket
|
||||
.send(Message::Text(
|
||||
r#"{"type":"register","role":"web"}"#.to_string().into(),
|
||||
))
|
||||
.map_err(|err| PipeError::Protocol(format!("browser websocket register failed: {err}")))?;
|
||||
let _ = recv_bootstrap_prelude(&mut websocket);
|
||||
|
||||
// Close any orphaned helper page from a previous process run.
|
||||
// Best-effort: if no page exists, the browser silently ignores this.
|
||||
let (open_action, close_action) = if use_hidden_domain {
|
||||
("sgHideBrowerserOpenPage", "sgHideBrowerserClosePage")
|
||||
} else {
|
||||
("sgBrowerserOpenPage", "sgBrowserClosePage")
|
||||
};
|
||||
let close_payload = json!([request_url, close_action, helper_url]).to_string();
|
||||
let _ = websocket.send(Message::Text(close_payload.into()));
|
||||
|
||||
let payload = json!([
|
||||
request_url,
|
||||
open_action,
|
||||
helper_url,
|
||||
])
|
||||
.to_string();
|
||||
websocket
|
||||
.send(Message::Text(payload.into()))
|
||||
.map_err(|err| PipeError::Protocol(format!("helper bootstrap send failed: {err}")))?;
|
||||
Ok(())
|
||||
}
|
||||
```
|
||||
|
||||
Key changes from current code:
|
||||
- After `recv_bootstrap_prelude`, add the close command (best-effort, ignore errors)
|
||||
- Compute both `open_action` and `close_action` from `use_hidden_domain` flag
|
||||
- Send close first, then open on the same WebSocket connection
|
||||
|
||||
- [ ] **Step 2: Change `use_hidden_domain` to `true` in server.rs**
|
||||
|
||||
In `src/service/server.rs`, at the `start_with_browser_ws_url` call, change `false` to `true`:
|
||||
|
||||
```rust
|
||||
match LiveBrowserCallbackHost::start_with_browser_ws_url(
    browser_ws_url,
    &bootstrap_url,
    Duration::from_secs(15),
    BROWSER_RESPONSE_TIMEOUT,
    true, // use_hidden_domain: hidden domain for invisible helper
) {
```

- [ ] **Step 3: Build**

Run: `cargo build 2>&1`
Expected: 0 errors.

- [ ] **Step 4: Run callback_host tests**

Run: `cargo test --lib -- callback_host 2>&1`
Expected: 12 tests pass (including `live_callback_host_sends_bootstrap_open_page_command`, which still checks for `sgBrowerserOpenPage` because the test passes `false`, and `live_callback_host_hidden_domain_sends_hide_open_page_command`, which passes `true`).

Note: The test passes `false` for `use_hidden_domain`, so the close command will use `sgBrowserClosePage`. The test's fake WebSocket server will receive both the close and open frames. The test only checks that `sgBrowerserOpenPage` is present, which is still true.

- [ ] **Step 5: Commit**

```bash
git add src/browser/callback_host.rs src/service/server.rs
git commit -m "fix(callback_host): close orphaned helper page before opening new one on same WS"
```

---

### Task 2: Full verification

**Files:** None (verification only)

- [ ] **Step 1: Full test suite**

Run: `cargo test 2>&1`
Expected: All tests pass except the pre-existing `lineloss_period_resolver_prompts_for_missing_period` failure.

- [ ] **Step 2: Verify key behavioral changes**

Manually confirm:

1. `bootstrap_helper_page` sends the close command before the open command (both on the same WS connection)
2. `use_hidden_domain` is `true` in `server.rs`, so the helper page opens in the hidden domain
3. `Drop for LiveBrowserCallbackHost` remains simple (shutdown only, no close attempt)
4. `cached_host` is still in the `mod.rs` outer loop (process-internal deduplication)
docs/superpowers/plans/2026-04-14-service-console-enhancement.md (new file, 762 lines)
@@ -0,0 +1,762 @@
# Service Console Enhancement Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Add auto-connect on page load and a settings panel to sg_claw_service_console.html, with config save via WebSocket to the sgClaw service.

**Architecture:** The HTML page auto-connects on load and provides a settings modal. When the user saves, the page sends an `update_config` WebSocket message. The Rust service receives it, merges it with the existing config, writes the result to `sgclaw_config.json`, and responds with a `config_updated` message.

**Tech Stack:** Rust (serde, tungstenite), vanilla JavaScript/HTML/CSS

---
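The round trip described in the architecture can be sketched as two plain JSON frames. This is a minimal sketch, assuming an internally tagged `type` field and camelCase payload keys as used throughout this plan; the field values are placeholders:

```javascript
// Hypothetical client request frame for a config save.
const request = {
  type: "update_config",
  config: {
    apiKey: "sk-example", // placeholder value, not a real key
    baseUrl: "https://api.deepseek.com",
    model: "deepseek-chat",
    runtimeProfile: "browser-attached",
  },
};

// Expected service reply on success.
const response = {
  type: "config_updated",
  success: true,
  message: "配置已保存。重启 sg_claw 以应用新配置。",
};

// Round-trip through JSON, as the WebSocket layer would.
const decoded = JSON.parse(JSON.stringify(request));
console.log(decoded.type, decoded.config.apiKey);
```

The protocol tests in Task 6 assert exactly these key names, so the HTML page and the Rust service must agree on this shape.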
### Task 1: Add `UpdateConfig` and `ConfigUpdated` protocol types

**Files:**
- Modify: `src/service/protocol.rs`

- [ ] **Step 1: Add `ConfigUpdatePayload` struct and `UpdateConfig` variant to `ClientMessage`**

Add this struct above the `ClientMessage` enum, and add the `UpdateConfig` variant to the enum:

```rust
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ConfigUpdatePayload {
    #[serde(rename = "apiKey", default)]
    pub api_key: Option<String>,
    #[serde(rename = "baseUrl", default)]
    pub base_url: Option<String>,
    #[serde(default)]
    pub model: Option<String>,
    #[serde(rename = "skillsDir", default)]
    pub skills_dir: Option<String>,
    #[serde(rename = "directSubmitSkill", default)]
    pub direct_submit_skill: Option<String>,
    #[serde(rename = "runtimeProfile", default)]
    pub runtime_profile: Option<String>,
    #[serde(rename = "browserBackend", default)]
    pub browser_backend: Option<String>,
}
```

Add the `UpdateConfig` variant to the `ClientMessage` enum (after `Ping`):

```rust
UpdateConfig {
    config: ConfigUpdatePayload,
},
```

- [ ] **Step 2: Add `ConfigUpdated` variant to `ServiceMessage`**

Add after `Pong`:

```rust
ConfigUpdated {
    success: bool,
    message: String,
},
```

- [ ] **Step 3: Update `into_submit_task_request` to handle `UpdateConfig`**

In the match arm, add `ClientMessage::UpdateConfig { .. }` to the list that returns `None`:

```rust
ClientMessage::Connect
| ClientMessage::Start
| ClientMessage::Stop
| ClientMessage::Ping
| ClientMessage::UpdateConfig { .. } => None,
```

- [ ] **Step 4: Run tests to verify the protocol compiles**

Run: `cargo test --lib service::protocol`
Expected: PASS (no protocol-specific tests yet, but the module should compile)
### Task 2: Add `config_path()` getter to `AgentRuntimeContext`

**Files:**
- Modify: `src/agent/task_runner.rs`

- [ ] **Step 1: Add a public getter method**

In the `impl AgentRuntimeContext` block, add after `load_sgclaw_settings()`:

```rust
pub fn config_path(&self) -> Option<&Path> {
    self.config_path.as_deref()
}
```

Add the import at the top of the file if not present:

```rust
use std::path::Path;
```

- [ ] **Step 2: Run tests to verify**

Run: `cargo test agent::task_runner`
Expected: PASS
### Task 3: Add `save_to_path()` method to `SgClawSettings`

**Files:**
- Modify: `src/config/settings.rs`

- [ ] **Step 1: Choose a serialization strategy for `SgClawSettings`**

The `RawSgClawSettings` struct derives `Deserialize` only, and `SgClawSettings` has enum fields (`RuntimeProfile`, `SkillsPromptMode`, `PlannerMode`, `BrowserBackend`, `OfficeBackend`) that do not implement `Serialize`. Rather than spreading `Serialize` derives across all of those types, take the simpler approach: convert `SgClawSettings` into a dedicated serializable struct, then serialize that.

- [ ] **Step 2: Create a serializable raw config struct**

Add a new struct at the bottom of the file (before tests, if any):

```rust
#[derive(Debug, Serialize)]
struct SerializableRawSgClawSettings {
    #[serde(rename = "apiKey")]
    api_key: String,
    #[serde(rename = "baseUrl")]
    base_url: String,
    model: String,
    #[serde(rename = "skillsDir", skip_serializing_if = "Option::is_none")]
    skills_dir: Option<String>,
    #[serde(rename = "directSubmitSkill", skip_serializing_if = "Option::is_none")]
    direct_submit_skill: Option<String>,
    #[serde(rename = "skillsPromptMode", skip_serializing_if = "Option::is_none")]
    skills_prompt_mode: Option<String>,
    #[serde(rename = "runtimeProfile", skip_serializing_if = "Option::is_none")]
    runtime_profile: Option<String>,
    #[serde(rename = "plannerMode", skip_serializing_if = "Option::is_none")]
    planner_mode: Option<String>,
    #[serde(rename = "activeProvider", skip_serializing_if = "Option::is_none")]
    active_provider: Option<String>,
    #[serde(rename = "browserBackend", skip_serializing_if = "Option::is_none")]
    browser_backend: Option<String>,
    #[serde(rename = "officeBackend", skip_serializing_if = "Option::is_none")]
    office_backend: Option<String>,
    #[serde(rename = "browserWsUrl", skip_serializing_if = "Option::is_none")]
    browser_ws_url: Option<String>,
    #[serde(rename = "serviceWsListenAddr", skip_serializing_if = "Option::is_none")]
    service_ws_listen_addr: Option<String>,
    providers: Vec<SerializableProviderSettings>,
}

#[derive(Debug, Serialize)]
struct SerializableProviderSettings {
    id: String,
    provider: Option<String>,
    #[serde(rename = "apiKey")]
    api_key: String,
    #[serde(rename = "baseUrl", skip_serializing_if = "Option::is_none")]
    base_url: Option<String>,
    model: String,
    #[serde(rename = "apiPath", skip_serializing_if = "Option::is_none")]
    api_path: Option<String>,
    #[serde(rename = "wireApi", skip_serializing_if = "Option::is_none")]
    wire_api: Option<String>,
    #[serde(rename = "requiresOpenaiAuth")]
    requires_openai_auth: bool,
}
```

Update the serde import at the top of the file (combine with the existing `use serde::Deserialize;`):

```rust
use serde::{Deserialize, Serialize};
```

- [ ] **Step 3: Add `to_serializable()` method to `SgClawSettings`**

In the `impl SgClawSettings` block, add:

```rust
fn to_serializable(&self) -> SerializableRawSgClawSettings {
    SerializableRawSgClawSettings {
        api_key: self.provider_api_key.clone(),
        base_url: self.provider_base_url.clone(),
        model: self.provider_model.clone(),
        skills_dir: self
            .skills_dir
            .as_ref()
            .map(|p| p.to_string_lossy().into_owned()),
        direct_submit_skill: self.direct_submit_skill.clone(),
        skills_prompt_mode: Some(
            match self.skills_prompt_mode {
                SkillsPromptMode::Full => "full",
                SkillsPromptMode::Compact => "compact",
            }
            .to_string(),
        ),
        runtime_profile: Some(
            match self.runtime_profile {
                RuntimeProfile::BrowserAttached => "browser-attached",
                RuntimeProfile::BrowserHeavy => "browser-heavy",
                RuntimeProfile::GeneralAssistant => "general-assistant",
            }
            .to_string(),
        ),
        planner_mode: Some(
            match self.planner_mode {
                PlannerMode::ZeroclawPlanFirst => "zeroclaw-plan-first",
                PlannerMode::LegacyDeterministic => "legacy-deterministic",
            }
            .to_string(),
        ),
        active_provider: Some(self.active_provider.clone()),
        browser_backend: Some(
            match self.browser_backend {
                BrowserBackend::SuperRpa => "super-rpa",
                BrowserBackend::AgentBrowser => "agent-browser",
                BrowserBackend::RustNative => "rust-native",
                BrowserBackend::ComputerUse => "computer-use",
                BrowserBackend::Auto => "auto",
            }
            .to_string(),
        ),
        office_backend: Some(
            match self.office_backend {
                OfficeBackend::OpenXml => "openxml",
                OfficeBackend::Disabled => "disabled",
            }
            .to_string(),
        ),
        browser_ws_url: self.browser_ws_url.clone(),
        service_ws_listen_addr: self.service_ws_listen_addr.clone(),
        providers: self
            .providers
            .iter()
            .map(|p| SerializableProviderSettings {
                id: p.id.clone(),
                provider: Some(p.provider.clone()),
                api_key: p.api_key.clone(),
                base_url: p.base_url.clone(),
                model: p.model.clone(),
                api_path: p.api_path.clone(),
                wire_api: p.wire_api.clone(),
                requires_openai_auth: p.requires_openai_auth,
            })
            .collect(),
    }
}
```

- [ ] **Step 4: Add `save_to_path()` method**

In the same `impl SgClawSettings` block, add:

```rust
pub fn save_to_path(&self, path: &Path) -> Result<(), ConfigError> {
    let serializable = self.to_serializable();
    let json = serde_json::to_string_pretty(&serializable)
        .map_err(|err| ConfigError::ConfigParse(path.to_path_buf(), err.to_string()))?;
    std::fs::write(path, json)
        .map_err(|err| ConfigError::ConfigRead(path.to_path_buf(), err.to_string()))
}
```

- [ ] **Step 5: Run tests to verify compilation**

Run: `cargo test --lib config::settings`
Expected: PASS
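With `skip_serializing_if` in place, optional fields that are `None` disappear from the saved file entirely instead of being written as `null`. The same output rule can be sketched in JavaScript; the key set here is illustrative:

```javascript
// Mirror of skip_serializing_if: drop keys whose value is undefined before
// writing, so the saved config only contains fields that are actually set.
function toSavedConfig(settings) {
  const out = {};
  for (const [key, value] of Object.entries(settings)) {
    if (value !== undefined) out[key] = value;
  }
  return JSON.stringify(out, null, 2);
}

const json = toSavedConfig({
  apiKey: "sk-example", // placeholder value
  baseUrl: "https://api.deepseek.com",
  model: "deepseek-chat",
  skillsDir: undefined, // None on the Rust side: omitted from the output
});
console.log(json.includes("skillsDir")); // false
```

This keeps hand-edited configs stable across saves: fields the user never set do not get materialized.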
### Task 4: Handle `UpdateConfig` in the service server

**Files:**
- Modify: `src/service/server.rs`
- Modify: `src/service/mod.rs` (if needed for imports)

- [ ] **Step 1: Add an `UpdateConfig` match arm in `serve_client`**

In the `match message` block in `serve_client`, after the `SubmitTask` arm, add:

```rust
ClientMessage::UpdateConfig { config } => {
    let Some(config_path) = context.config_path() else {
        sink.send_service_message(ServiceMessage::ConfigUpdated {
            success: false,
            message: "未找到配置文件路径。请通过 --config-path 参数启动 sg_claw 后再使用此功能。".to_string(),
        })?;
        continue;
    };

    if !config_path.exists() {
        sink.send_service_message(ServiceMessage::ConfigUpdated {
            success: false,
            message: format!("配置文件不存在: {}", config_path.display()),
        })?;
        continue;
    }

    match update_config_file(config_path, config) {
        Ok(()) => {
            sink.send_service_message(ServiceMessage::ConfigUpdated {
                success: true,
                message: "配置已保存。重启 sg_claw 以应用新配置。".to_string(),
            })?;
        }
        Err(err) => {
            sink.send_service_message(ServiceMessage::ConfigUpdated {
                success: false,
                message: format!("保存配置失败: {}", err),
            })?;
        }
    }
}
```

- [ ] **Step 2: Add an `update_config_file` helper function**

Add this function above `serve_client` in `server.rs`, together with the imports it needs at the top of the file:

```rust
use crate::config::settings::SgClawSettings;
use crate::service::protocol::ConfigUpdatePayload;
use std::path::{Path, PathBuf};

fn update_config_file(config_path: &Path, config: ConfigUpdatePayload) -> Result<(), String> {
    let mut settings = SgClawSettings::load(Some(config_path))
        .map_err(|e| e.to_string())?
        .ok_or_else(|| "无法读取现有配置".to_string())?;

    if let Some(v) = config.api_key {
        settings.provider_api_key = v;
    }
    if let Some(v) = config.base_url {
        settings.provider_base_url = v;
    }
    if let Some(v) = config.model {
        settings.provider_model = v;
    }
    if let Some(v) = config.skills_dir {
        settings.skills_dir = Some(PathBuf::from(v));
    }
    if let Some(v) = config.direct_submit_skill {
        settings.direct_submit_skill = Some(v);
    }
    if let Some(v) = config.runtime_profile {
        settings.runtime_profile = match v.as_str() {
            "browser-attached" => crate::config::settings::RuntimeProfile::BrowserAttached,
            "browser-heavy" => crate::config::settings::RuntimeProfile::BrowserHeavy,
            "general-assistant" => crate::config::settings::RuntimeProfile::GeneralAssistant,
            _ => return Err(format!("无效的 runtimeProfile: {}", v)),
        };
    }
    if let Some(v) = config.browser_backend {
        settings.browser_backend = match v.as_str() {
            "super-rpa" => crate::config::settings::BrowserBackend::SuperRpa,
            "agent-browser" => crate::config::settings::BrowserBackend::AgentBrowser,
            "rust-native" => crate::config::settings::BrowserBackend::RustNative,
            "computer-use" => crate::config::settings::BrowserBackend::ComputerUse,
            "auto" => crate::config::settings::BrowserBackend::Auto,
            _ => return Err(format!("无效的 browserBackend: {}", v)),
        };
    }

    settings
        .save_to_path(config_path)
        .map_err(|e| format!("写入配置文件失败: {}", e))
}
```

- [ ] **Step 3: Build to verify compilation**

Run: `cargo build`
Expected: SUCCESS
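The merge rule implemented by `update_config_file` is: only fields present in the payload overwrite the stored settings, and everything else is preserved. The same rule can be sketched outside Rust; the field set here is illustrative:

```javascript
// Mirror of the merge semantics: absent payload fields leave settings untouched.
function mergeConfig(settings, payload) {
  const merged = { ...settings };
  for (const key of ["apiKey", "baseUrl", "model", "skillsDir"]) {
    if (payload[key] !== undefined) merged[key] = payload[key];
  }
  return merged;
}

const current = { apiKey: "old", baseUrl: "https://api.deepseek.com", model: "deepseek-chat" };
const merged = mergeConfig(current, { apiKey: "new" });
console.log(merged.apiKey, merged.model); // prints "new deepseek-chat"
```

This is why the console can send a partial form: saving only a new API key does not clobber the model or provider list already on disk.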
### Task 5: Add auto-connect and settings UI to the service console HTML

**Files:**
- Modify: `frontend/service-console/sg_claw_service_console.html`

- [ ] **Step 1: Add auto-connect on page load**

At the very end of the `<script>` section, after the existing event listeners and `updateUiState()`, add:

```javascript
// Auto-connect on page load
window.addEventListener("DOMContentLoaded", () => {
  connectOrDisconnectService(true);
});
```

- [ ] **Step 2: Add the Settings button HTML**

In the sidebar section of the HTML, after the connect button and before the "Composer" section label, add:

```html
<button id="settingsBtn" class="ghost-btn" style="margin-top: 8px;">⚙ 设置</button>
```

- [ ] **Step 3: Add the Settings modal HTML**

Before the closing `</body>` tag, add the modal HTML (the `<select>` elements are styled by the CSS added in Step 4):

```html
<!-- Settings Modal -->
<div id="settingsModal" style="display: none; position: fixed; top: 0; left: 0; width: 100%; height: 100%; background: rgba(0,0,0,0.5); z-index: 1000; align-items: center; justify-content: center;">
  <div style="background: var(--panel); border-radius: 20px; padding: 28px; width: min(520px, 90%); max-height: 85vh; overflow-y: auto; box-shadow: var(--shadow);">
    <h3 style="margin: 0 0 20px; font-size: 1.2rem;">sgClaw 配置</h3>

    <div class="field">
      <label for="settingApiKey">API 密钥 *</label>
      <input id="settingApiKey" type="password" placeholder="输入模型 API 密钥" />
    </div>

    <div class="field">
      <label for="settingBaseUrl">模型服务地址 *</label>
      <input id="settingBaseUrl" type="url" placeholder="例如:https://api.deepseek.com" />
    </div>

    <div class="field">
      <label for="settingModel">模型名称 *</label>
      <input id="settingModel" type="text" placeholder="例如:deepseek-chat" />
    </div>

    <div class="field">
      <label for="settingSkillsDir">Skills 目录路径</label>
      <input id="settingSkillsDir" type="text" placeholder="例如:D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills" />
    </div>

    <div class="field">
      <label for="settingDirectSubmitSkill">直接提交技能</label>
      <input id="settingDirectSubmitSkill" type="text" placeholder="例如:tq-lineloss-report.collect_lineloss" />
    </div>

    <div class="field">
      <label for="settingRuntimeProfile">运行模式</label>
      <select id="settingRuntimeProfile">
        <option value="browser-attached">browser-attached</option>
        <option value="browser-heavy">browser-heavy</option>
        <option value="general-assistant">general-assistant</option>
      </select>
    </div>

    <div class="field">
      <label for="settingBrowserBackend">浏览器后端</label>
      <select id="settingBrowserBackend">
        <option value="super-rpa">super-rpa</option>
        <option value="agent-browser">agent-browser</option>
        <option value="rust-native">rust-native</option>
        <option value="computer-use">computer-use</option>
        <option value="auto">auto</option>
      </select>
    </div>

    <div id="settingsValidation" style="color: var(--error); font-size: 0.92rem; min-height: 1.4em; margin: 10px 0;"></div>

    <div style="display: flex; gap: 12px; margin-top: 16px;">
      <button id="settingsSaveBtn" class="primary-btn" style="flex: 1;">保存</button>
      <button id="settingsCancelBtn" class="ghost-btn" style="flex: 1;">取消</button>
    </div>
  </div>
</div>
```

- [ ] **Step 4: Add settings modal CSS**

Add these CSS rules inside the `<style>` block, before the `@media` query:

```css
/* Settings modal elements */
select {
  width: 100%;
  border: 1px solid var(--line);
  border-radius: 16px;
  padding: 14px 16px;
  background: rgba(255, 255, 255, 0.92);
  color: var(--text);
  font: inherit;
  outline: none;
  cursor: pointer;
}

select:focus {
  border-color: rgba(15, 118, 110, 0.5);
  box-shadow: 0 0 0 4px rgba(15, 118, 110, 0.12);
}
```

- [ ] **Step 5: Add settings modal JavaScript logic**

Add this JavaScript at the end of the `<script>` section, before the closing `</script>` tag:

```javascript
// Settings modal state
const settingsElements = {
  modal: document.getElementById("settingsModal"),
  apiKey: document.getElementById("settingApiKey"),
  baseUrl: document.getElementById("settingBaseUrl"),
  model: document.getElementById("settingModel"),
  skillsDir: document.getElementById("settingSkillsDir"),
  directSubmitSkill: document.getElementById("settingDirectSubmitSkill"),
  runtimeProfile: document.getElementById("settingRuntimeProfile"),
  browserBackend: document.getElementById("settingBrowserBackend"),
  validation: document.getElementById("settingsValidation"),
  saveBtn: document.getElementById("settingsSaveBtn"),
  cancelBtn: document.getElementById("settingsCancelBtn"),
};

function openSettingsModal() {
  // The service does not expose current config values, so the form opens
  // with blank fields and default selections.
  settingsElements.apiKey.value = "";
  settingsElements.baseUrl.value = "";
  settingsElements.model.value = "";
  settingsElements.skillsDir.value = "";
  settingsElements.directSubmitSkill.value = "";
  settingsElements.runtimeProfile.value = "browser-attached";
  settingsElements.browserBackend.value = "super-rpa";
  settingsElements.validation.textContent = "";
  settingsElements.modal.style.display = "flex";
}

function closeSettingsModal() {
  settingsElements.modal.style.display = "none";
}

function validateSettings() {
  const apiKey = settingsElements.apiKey.value.trim();
  const baseUrl = settingsElements.baseUrl.value.trim();
  const model = settingsElements.model.value.trim();

  if (!apiKey) {
    return "API 密钥不能为空";
  }
  if (!model) {
    return "模型名称不能为空";
  }
  if (!baseUrl) {
    return "模型服务地址不能为空";
  }
  try {
    new URL(baseUrl);
  } catch {
    return "模型服务地址格式无效,请输入有效的 URL";
  }
  return "";
}

function saveSettings() {
  const error = validateSettings();
  if (error) {
    settingsElements.validation.textContent = error;
    return;
  }

  if (!socket || socket.readyState !== WebSocket.OPEN) {
    settingsElements.validation.textContent = "请先连接服务";
    return;
  }

  settingsElements.validation.textContent = "";
  settingsElements.saveBtn.disabled = true;
  settingsElements.saveBtn.textContent = "保存中...";

  const config = {
    apiKey: settingsElements.apiKey.value.trim(),
    baseUrl: settingsElements.baseUrl.value.trim(),
    model: settingsElements.model.value.trim(),
  };

  const skillsDir = settingsElements.skillsDir.value.trim();
  if (skillsDir) config.skillsDir = skillsDir;

  const directSubmitSkill = settingsElements.directSubmitSkill.value.trim();
  if (directSubmitSkill) config.directSubmitSkill = directSubmitSkill;

  config.runtimeProfile = settingsElements.runtimeProfile.value;
  config.browserBackend = settingsElements.browserBackend.value;

  socket.send(JSON.stringify({
    type: "update_config",
    config,
  }));
}

function handleConfigResponse(message) {
  settingsElements.saveBtn.disabled = false;
  settingsElements.saveBtn.textContent = "保存";

  if (message.success) {
    settingsElements.validation.textContent = message.message;
    settingsElements.validation.style.color = "var(--success)";
    // Auto-close after 2 seconds on success
    setTimeout(closeSettingsModal, 2000);
  } else {
    settingsElements.validation.textContent = message.message;
    settingsElements.validation.style.color = "var(--error)";
  }
}

// Event listeners for settings
const settingsOpenBtn = document.getElementById("settingsBtn");
settingsOpenBtn.addEventListener("click", openSettingsModal);
settingsElements.cancelBtn.addEventListener("click", closeSettingsModal);
settingsElements.saveBtn.addEventListener("click", saveSettings);

// Close modal on background click
settingsElements.modal.addEventListener("click", (e) => {
  if (e.target === settingsElements.modal) {
    closeSettingsModal();
  }
});
```

- [ ] **Step 6: Handle `config_updated` messages in `handleMessage`**

In the existing `handleMessage` function, add a new case to the switch statement:

```javascript
case "config_updated":
  handleConfigResponse(message);
  break;
```

- [ ] **Step 7: Verify the HTML is well-formed**

Open the file in a browser and visually check that:
- The settings button appears below the connect button
- Clicking it opens the modal
- The modal closes on Cancel or background click
### Task 6: Add protocol tests for new message types

**Files:**
- Modify: `tests/service_console_html_test.rs`
- Create: `tests/service_protocol_update_config_test.rs`

- [ ] **Step 1: Create the protocol serialization test**

Create `tests/service_protocol_update_config_test.rs`:

```rust
use sgclaw::service::protocol::{ClientMessage, ConfigUpdatePayload, ServiceMessage};

#[test]
fn update_config_serializes_correctly() {
    let config = ConfigUpdatePayload {
        api_key: Some("test-key".to_string()),
        base_url: Some("https://api.example.com".to_string()),
        model: Some("test-model".to_string()),
        skills_dir: Some("/path/to/skills".to_string()),
        direct_submit_skill: Some("my-skill.my-tool".to_string()),
        runtime_profile: Some("browser-attached".to_string()),
        browser_backend: Some("super-rpa".to_string()),
    };

    let msg = ClientMessage::UpdateConfig { config };
    let json = serde_json::to_string(&msg).unwrap();

    assert!(json.contains("\"type\":\"update_config\""));
    assert!(json.contains("\"apiKey\":\"test-key\""));
    assert!(json.contains("\"baseUrl\":\"https://api.example.com\""));
    assert!(json.contains("\"model\":\"test-model\""));
}

#[test]
fn update_config_deserializes_correctly() {
    let json = r#"{
        "type": "update_config",
        "config": {
            "apiKey": "key123",
            "baseUrl": "https://api.test.com",
            "model": "gpt-4"
        }
    }"#;

    let msg: ClientMessage = serde_json::from_str(json).unwrap();
    match msg {
        ClientMessage::UpdateConfig { config } => {
            assert_eq!(config.api_key, Some("key123".to_string()));
            assert_eq!(config.base_url, Some("https://api.test.com".to_string()));
            assert_eq!(config.model, Some("gpt-4".to_string()));
            assert!(config.skills_dir.is_none());
        }
        _ => panic!("expected UpdateConfig variant"),
    }
}

#[test]
fn config_updated_serializes_correctly() {
    let msg = ServiceMessage::ConfigUpdated {
        success: true,
        message: "配置已保存".to_string(),
    };
    let json = serde_json::to_string(&msg).unwrap();

    assert!(json.contains("\"type\":\"config_updated\""));
    assert!(json.contains("\"success\":true"));
    assert!(json.contains("配置已保存"));
}

#[test]
fn config_updated_deserializes_correctly() {
    let json = r#"{"type":"config_updated","success":false,"message":"保存失败"}"#;
    let msg: ServiceMessage = serde_json::from_str(json).unwrap();

    match msg {
        ServiceMessage::ConfigUpdated { success, message } => {
            assert!(!success);
            assert_eq!(message, "保存失败");
        }
        _ => panic!("expected ConfigUpdated variant"),
    }
}
```

- [ ] **Step 2: Update the service console HTML test**

Add to `tests/service_console_html_test.rs`, at the end of the existing test:

```rust
// New enhancement assertions
assert!(source.contains("DOMContentLoaded"));
assert!(source.contains("settingsBtn"));
assert!(source.contains("settingsModal"));
assert!(source.contains("update_config"));
assert!(source.contains("config_updated"));
assert!(source.contains("settingApiKey"));
assert!(source.contains("settingBaseUrl"));
assert!(source.contains("settingModel"));
```

- [ ] **Step 3: Run all new tests**

Run: `cargo test --test service_protocol_update_config_test`
Run: `cargo test --test service_console_html_test`
Expected: All PASS

### Task 7: Full build and test verification

- [ ] **Step 1: Run the full test suite**

Run: `cargo test 2>&1`
Expected: All tests pass (except the pre-existing `lineloss_period_resolver_prompts_for_missing_period` failure, which predates these changes)

- [ ] **Step 2: Build the release binary**

Run: `cargo build --release 2>&1`
Expected: SUCCESS
### Task 8: Manual smoke test instructions

After implementation, verify manually:

1. Start sg_claw with a config path: `sg_claw.exe --config-path sgclaw_config.json`
2. Open `sg_claw_service_console.html` in a browser
3. Verify: the page auto-connects (it should show "已连接" within a few seconds)
4. Click the "设置" button
5. Fill in API Key, Base URL, and Model
6. Click "保存"
7. Verify: the modal shows "配置已保存。重启 sg_claw 以应用新配置。" and auto-closes after 2 seconds
8. Verify: `sgclaw_config.json` contains the new values
9. Verify: existing task submission still works (send a test instruction)
@@ -0,0 +1,84 @@
|
||||
# Remove mac Guard from validatePageContext
|
||||
|
||||
## Date
|
||||
|
||||
2026-04-13
|
||||
|
||||
## Problem
|
||||
|
||||
`tq-lineloss-report` skill execution reports `status=blocked rows=0 reasons=page_context_unavailable`.
|
||||
|
||||
Diagnostic instrumentation confirmed:
|
||||
|
||||
```
|
||||
href=http://20.76.57.61:18080/gsllys
|
||||
host=20.76.57.61
|
||||
port=18080
|
||||
title=台区线损大数据分析模块
|
||||
mac=false
|
||||
```
|
||||
|
||||
The script executes on the correct domain but `globalThis.mac` does not exist, triggering the `page_context_unavailable` guard.
|
||||
|
||||
## Root Cause
|
||||
|
||||
`window.mac` is a Vue instance created by the **original scene page** (`index.html`), assigned via `window.mac = this` in `mounted()`. The original scene page acts as a controller that injects JS into the business page via `BrowserAction('sgBrowserExcuteJsCode', exactURL, jsCode)`.
|
||||
|
||||
In the skill execution model, there is no scene page. The script is injected directly via `sgBrowserExcuteJsCodeByDomain` onto a page matching the domain. No Vue instance is created, so `globalThis.mac` is always `undefined`. The `mac` check is architecturally invalid for the skill model.
|
||||
|
||||
Additionally, `sgBrowserExcuteJsCodeByDomain("20.76.57.61")` matches the parent frame page (`/gsllys`) rather than the business sub-page (`/gsllys/tqLinelossStatis/tqQualifyRateMonitor`). This is acceptable because the skill script makes direct HTTP requests with absolute URLs and does not depend on page-local state.
|
||||
|
||||
## Design
|
||||
|
||||
Remove the `globalThis.mac` existence check from `validatePageContext` in `collect_lineloss.js`. Retain the `host` matching check as a basic domain guard.
|
||||
|
||||
Also clean up the temporary diagnostic code (`diag` variable, `console.log` statements, enriched reason strings) added during debugging.
|
||||
|
||||
### Before

```javascript
validatePageContext(args) {
  const host = normalizeText(globalThis.location?.hostname);
  const port = normalizeText(globalThis.location?.port);
  const href = normalizeText(globalThis.location?.href);
  const title = normalizeText(globalThis.document?.title);
  const expected = normalizeText(args.expected_domain);
  const hasMac = !!globalThis.mac;
  const diag = 'href=' + href + '|host=' + host + '|port=' + port + '|title=' + title + '|mac=' + hasMac;
  console.log('[validatePageContext] ' + diag);
  if (!host) {
    return { ok: false, reason: 'page_context_unavailable:host_empty|' + diag };
  }
  if (host !== expected) {
    return { ok: false, reason: 'page_context_mismatch:host=' + host + ',expected=' + expected + '|' + diag };
  }
  if (!hasMac) {
    return { ok: false, reason: 'page_context_unavailable:mac_missing|' + diag };
  }
  return { ok: true };
},
```
### After

```javascript
validatePageContext(args) {
  const host = normalizeText(globalThis.location?.hostname);
  const expected = normalizeText(args.expected_domain);
  if (!host) {
    return { ok: false, reason: 'page_context_unavailable' };
  }
  if (host !== expected) {
    return { ok: false, reason: 'page_context_mismatch' };
  }
  return { ok: true };
},
```
## Files Changed

- `claw/claw/skills/skill_staging/skills/tq-lineloss-report/scripts/collect_lineloss.js` — `validatePageContext` function only

## No Recompilation Required

The JS file is read at runtime via `fs::read_to_string`. No Rust code changes.

@@ -0,0 +1,111 @@

# Rust-Side Lineloss XLSX Export

## Problem

`collect_lineloss.js` runs on a remote page (`http://20.76.57.61:18080/gsllys`). The script successfully queries API data (12 rows), but cannot call `http://localhost:13313/.../faultDetailsExportXLSX` because the browser blocks cross-origin requests from a remote page to `localhost`.

The original scene architecture had a local scene page acting as a proxy, but skill mode has no local page -- so export is architecturally impossible from the browser side.
## Decision

Move XLSX generation to the Rust side. JS only collects data; Rust generates the `.xlsx` file locally after receiving the artifact.

Report log (`setReportLog`) is deferred to a later iteration.
## Design

### JS Changes (`collect_lineloss.js`)

1. Remove the `exportWorkbook()` and `writeReportLog()` calls
2. Return an artifact with a `rows` array and a `column_defs` array
3. Status is `ok` when `rows > 0`, `empty` when `rows == 0`; `error`/`blocked` are unchanged

Artifact shape:

```json
{
  "type": "report-artifact",
  "report_name": "tq-lineloss-report",
  "status": "ok",
  "org": { "label": "...", "code": "..." },
  "period": { "mode": "month", "value": "2026-03" },
  "column_defs": [["ORG_NAME","供电单位"], ["YGDL","累计供电量"], ...],
  "rows": [
    {"ORG_NAME":"xxx", "YGDL":"12345.67", ...}
  ],
  "counts": { "rows": 12 }
}
```
### Rust Changes

#### New file: `src/compat/lineloss_xlsx_export.rs`

Generates a standard `.xlsx` file using the `zip` crate + OpenXML XML strings. Follows the pattern established in `openxml_office_tool.rs`.

Public API:

```rust
pub struct LinelossExportRequest {
    pub column_defs: Vec<(String, String)>, // (key, chinese_header)
    pub rows: Vec<Map<String, Value>>,
    pub sheet_name: String,
    pub output_path: PathBuf,
}

pub fn export_lineloss_xlsx(request: &LinelossExportRequest) -> anyhow::Result<PathBuf>;
```
Internals:

- Build the header row from `column_defs[*].1` (the Chinese display names)
- Build data rows by looking up `column_defs[*].0` keys in each row map
- Generate the worksheet XML with inline-string cells
- Package with standard OpenXML boilerplate (content types, rels, workbook)
- Write to `output_path`
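The inline-string cell generation above can be sketched as pure string building. This is a simplified illustration, not the actual module: the real code uses `Map<String, Value>` rows and packages the XML with the `zip` crate, while this sketch uses plain `BTreeMap<String, String>` rows and stops at the sheet XML.

```rust
use std::collections::BTreeMap;

/// Build a minimal sheet-data XML fragment with inline-string cells:
/// one header row from the display names, then one row per data map,
/// with cells looked up by the column key.
fn worksheet_rows_xml(
    column_defs: &[(String, String)],
    rows: &[BTreeMap<String, String>],
) -> String {
    let mut xml = String::new();
    // Header row: column_defs[*].1 (display names).
    xml.push_str("<row>");
    for (_, header) in column_defs {
        xml.push_str(&format!(
            "<c t=\"inlineStr\"><is><t>{}</t></is></c>",
            xml_escape(header)
        ));
    }
    xml.push_str("</row>");
    // Data rows: look up column_defs[*].0 (keys) in each row map;
    // missing keys become empty cells.
    for row in rows {
        xml.push_str("<row>");
        for (key, _) in column_defs {
            let value = row.get(key).map(String::as_str).unwrap_or("");
            xml.push_str(&format!(
                "<c t=\"inlineStr\"><is><t>{}</t></is></c>",
                xml_escape(value)
            ));
        }
        xml.push_str("</row>");
    }
    xml
}

/// Escape the XML special characters that can appear in cell text.
fn xml_escape(raw: &str) -> String {
    raw.replace('&', "&amp;").replace('<', "&lt;").replace('>', "&gt;")
}
```

The real module wraps this fragment in `<worksheet><sheetData>…</sheetData></worksheet>` and zips it together with the OpenXML boilerplate parts.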
#### Modified: `src/compat/deterministic_submit.rs`

In `execute_deterministic_submit_with_browser_backend` (and the non-backend variant):
```
let output = execute_browser_script_skill_raw_output_with_browser_backend(...)?;
let artifact = parse_lineloss_artifact(&output);

if artifact has rows > 0 && column_defs present:
    let export_path = workspace_root/out/tq-lineloss-{timestamp}.xlsx
    export_lineloss_xlsx(LinelossExportRequest { ... })?
    // attach export_path to outcome summary

Ok(summarize_lineloss_output_with_export(&output, export_path))
```
#### Modified: `src/compat/mod.rs`

Add `pub mod lineloss_xlsx_export;`

### Output Path

`{workspace_root}/out/tq-lineloss-{org_label}-{period}-{timestamp_nanos}.xlsx`
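A minimal sketch of assembling this path with stdlib primitives. The function name and the label sanitization are illustrative assumptions, not the actual code; the real implementation may sanitize differently.

```rust
use std::path::{Path, PathBuf};
use std::time::{SystemTime, UNIX_EPOCH};

/// Assemble `{workspace_root}/out/tq-lineloss-{org_label}-{period}-{nanos}.xlsx`.
/// NOTE: name and sanitization rules are hypothetical, for illustration only.
fn lineloss_export_path(workspace_root: &Path, org_label: &str, period: &str) -> PathBuf {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_nanos())
        .unwrap_or(0);
    // Keep the file name filesystem-safe: strip path separators from labels.
    let sanitize = |s: &str| s.replace('/', "_").replace('\\', "_").replace(':', "_");
    workspace_root.join("out").join(format!(
        "tq-lineloss-{}-{}-{}.xlsx",
        sanitize(org_label),
        sanitize(period),
        nanos
    ))
}
```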

### Error Handling

- XLSX generation failure: outcome status = `partial`, reason = `xlsx_export_failed`
- Artifact parse failure: fall through to the existing `summarize_lineloss_output`
## Files Changed

| File | Change Type |
|------|-------------|
| `collect_lineloss.js` | Modify: remove export/log calls, add rows + column_defs to artifact |
| `src/compat/lineloss_xlsx_export.rs` | New: XLSX generation |
| `src/compat/deterministic_submit.rs` | Modify: post-process artifact, call XLSX export |
| `src/compat/mod.rs` | Modify: register new module |

## Requires Recompilation

Yes. Rust code changes require `cargo build`.

@@ -0,0 +1,55 @@

# Helper Page Lifecycle Fix v2 — Same-Connection Close + Open

**Date:** 2026-04-14
**Status:** Approved

## Problem

Two issues remain after v1:

1. **Process restart leaves orphaned helper pages**: when the sg_claw process restarts, the old helper page tab remains open in the browser, and the new process opens another one.
2. **Helper page is visible**: the page is opened via `sgBrowerserOpenPage` (the visible tab API) instead of `sgHideBrowerserOpenPage` (the hidden domain API).
## Root Cause of v1 Failure

The v1 `close_helper_page` function created a **second** WebSocket connection to the browser during `Drop`. This likely conflicted with the existing bootstrap connection, causing the browser's WebSocket state to become confused.

## Solution

Send the close command on the **same** WebSocket connection used for bootstrap, before sending the open command:

1. Connect to the browser WS
2. Register as the "web" role
3. **Blindly send** `sgHideBrowerserClosePage(helper_url)` — closes any orphaned page from a previous process run
4. Send `sgHideBrowerserOpenPage(helper_url)` — opens the new helper page
5. Poll `/sgclaw/callback/ready` for page readiness

Both `use_hidden_domain = true` and the close+open logic are combined into a single change.
## Why This Works

- **Same connection**: only one WebSocket connection to the browser, so there is no conflict with existing connections.
- **Best-effort close**: if no orphaned page exists (the first run ever), the browser silently ignores the close command; this does not affect the subsequent open command.
- **Fire-and-forget**: both the close and open commands use the same fire-and-forget semantics as the existing bootstrap command.

## API Reference

| API | Wire format | Effect |
|-----|------------|--------|
| `sgHideBrowerserOpenPage` (API #6) | `[requesturl, "sgHideBrowerserOpenPage", url]` | Opens in hidden domain |
| `sgHideBrowerserClosePage` (API #68) | `[requesturl, "sgHideBrowerserClosePage", url]` | Closes hidden domain page |
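The wire format above is a three-element JSON array. A minimal sketch of building the close-then-open frame pair (function names are illustrative, not the actual `callback_host.rs` code, and it assumes the URLs contain no characters that need JSON escaping):

```rust
/// Build a `[requesturl, api_name, target_url]` command frame as a JSON
/// array string, matching the wire format in the API table.
/// ASSUMPTION: inputs contain no quotes or backslashes needing escaping.
fn build_command_frame(request_url: &str, api_name: &str, target_url: &str) -> String {
    format!("[\"{}\",\"{}\",\"{}\"]", request_url, api_name, target_url)
}

/// Close-then-open sequence for the helper page, to be sent in order on
/// the same bootstrap connection; the close is best-effort and may be
/// silently ignored by the browser when no orphaned page exists.
fn helper_page_frames(request_url: &str, helper_url: &str) -> Vec<String> {
    vec![
        build_command_frame(request_url, "sgHideBrowerserClosePage", helper_url),
        build_command_frame(request_url, "sgHideBrowerserOpenPage", helper_url),
    ]
}
```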

## Affected Files

| File | Change |
|------|--------|
| `src/browser/callback_host.rs` | In `bootstrap_helper_page`: add close command before open command |
| `src/service/server.rs` | Change `use_hidden_domain` from `false` to `true` |

## What Does NOT Change

- `callback_backend.rs` — `SHOW_AREA`, `build_command` unchanged
- `sgBrowserExcuteJsCodeByDomain` area parameter — stays `"show"`
- Helper page HTML content — unchanged
- `Drop for LiveBrowserCallbackHost` — remains simple (shutdown only, no close attempt)
- `cached_host` in `mod.rs` — remains lifted to the outer loop

@@ -0,0 +1,284 @@

# sgClaw Service Console Enhancement Design

## Background

The current `sg_claw_service_console.html` provides a basic UI for connecting to the sgClaw service WebSocket and submitting tasks. However, it requires a manual connection on first load and offers no way to configure the sgClaw settings (API key, model, base URL, skills directory) from the UI.

Users must manually edit `sgclaw_config.json` before using the console, which is inconvenient for routine operations.

## Problem Statement

1. The page requires a manual "Connect" button click on first load
2. There is no UI for configuring sgClaw runtime settings (model, API key, base URL, skills dir)
3. Users must manually edit the `sgclaw_config.json` file to change configuration
## Goal

Enhance the service console page with:

1. **Auto-connect on page load** - attempt the WebSocket connection immediately
2. **Settings panel** - edit sgClaw configuration fields through a friendly UI
3. **Config save via WebSocket** - send configuration updates to the running sgClaw service, which writes them to `sgclaw_config.json`

## Non-goals

- Auto-starting the `sg_claw.exe` process (browser security limitation, deferred)
- Changing the existing `submit_task` protocol or execution flow
- Modifying browser-helper.html or browser execution logic
- Adding authentication or multi-user support
- Configuration validation beyond basic field checks
## Architecture

### Component Overview

```
┌─────────────────────────────────────────┐
│  sg_claw_service_console.html           │
│  ┌───────────────────────────────────┐  │
│  │ Auto-connect on load              │  │
│  │ (ws://127.0.0.1:42321 default)    │  │
│  └───────────────────────────────────┘  │
│  ┌───────────────────────────────────┐  │
│  │ Settings Panel (Modal)            │  │
│  │  - API Key                        │  │
│  │  - Base URL                       │  │
│  │  - Model                          │  │
│  │  - Skills Directory               │  │
│  │  - Direct Submit Skill (optional) │  │
│  │  - Runtime Profile (dropdown)     │  │
│  │  - Browser Backend (dropdown)     │  │
│  │  [Save] [Cancel]                  │  │
│  └───────────────────────────────────┘  │
│  ┌───────────────────────────────────┐  │
│  │ Existing: Connection + Composer   │  │
│  └───────────────────────────────────┘  │
└──────────────┬──────────────────────────┘
               │ WebSocket
               │ submit_task / update_config
               ▼
┌─────────────────────────────────────────┐
│  sg_claw.exe (service)                  │
│  ┌───────────────────────────────────┐  │
│  │ ClientMessage handler             │  │
│  │  - SubmitTask (existing)          │  │
│  │  - UpdateConfig (new)             │  │
│  └───────────────────────────────────┘  │
│  ┌───────────────────────────────────┐  │
│  │ Config writer                     │  │
│  │  Writes to sgclaw_config.json     │  │
│  └───────────────────────────────────┘  │
└─────────────────────────────────────────┘
```
### Data Flow

1. **Auto-connect flow:**
   - Page loads → JavaScript calls `connect()` automatically
   - If the WS opens → show the "已连接" (connected) chip, enable the send button
   - If the WS fails → show the "未连接" (disconnected) chip, keep send disabled
   - Reconnect logic remains unchanged (existing heartbeat/reconnect)

2. **Config save flow:**
   - User clicks the "设置" (settings) button → modal opens with current config values
   - User edits fields → clicks "保存" (save)
   - Page sends an `update_config` message via WS:

     ```json
     {
       "type": "update_config",
       "config": {
         "apiKey": "...",
         "baseUrl": "...",
         "model": "...",
         "skillsDir": "...",
         "directSubmitSkill": "...",
         "runtimeProfile": "...",
         "browserBackend": "..."
       }
     }
     ```
   - The sgClaw service receives the message → validates → writes to `sgclaw_config.json`
   - The service responds with success/error → the page shows a notification
   - The service reloads the config in-memory (or requires a restart - see below)
### Protocol Changes

#### New ClientMessage variant

Add to `src/service/protocol.rs`:

```rust
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ClientMessage {
    Connect,
    Start,
    Stop,
    SubmitTask { ... },
    Ping,
    UpdateConfig { // NEW
        config: ConfigUpdatePayload,
    },
}

#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ConfigUpdatePayload {
    pub api_key: Option<String>,
    pub base_url: Option<String>,
    pub model: Option<String>,
    pub skills_dir: Option<String>,
    pub direct_submit_skill: Option<String>,
    pub runtime_profile: Option<String>,
    pub browser_backend: Option<String>,
}
```
#### New ServiceMessage variant (optional)

Add to `src/service/protocol.rs`:

```rust
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ServiceMessage {
    StatusChanged { state: String },
    LogEntry { level: String, message: String },
    TaskComplete { success: bool, summary: String },
    Busy { message: String },
    Pong,
    ConfigUpdated { success: bool, message: String }, // NEW
}
```
### Config Persistence

The service will:

1. Load the current `sgclaw_config.json` from the config path (derived from process args)
2. Merge the incoming `ConfigUpdatePayload` fields (only non-null fields are updated)
3. Write the merged config back to the same file
4. Respond with a success/error message
5. **Hot reload**: the service should reload the config in-memory without requiring a restart

**Important:** If the config file path cannot be resolved (no `--config-path` arg), the service should respond with an error message indicating that config updates are not supported in env-var-only mode.
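The merge in step 2 can be sketched with simplified stand-in structs (field names shortened for illustration; the real `SgClawSettings` and `ConfigUpdatePayload` have more fields and serde attributes):

```rust
/// Simplified stand-in for the persisted settings.
#[derive(Default, Debug, PartialEq)]
struct Settings {
    api_key: String,
    base_url: String,
    model: String,
}

/// Simplified stand-in for the incoming payload: `None` means
/// "leave this field unchanged".
struct Payload {
    api_key: Option<String>,
    base_url: Option<String>,
    model: Option<String>,
}

/// Only `Some` fields overwrite; `None` fields keep the current value.
fn merge(mut current: Settings, payload: Payload) -> Settings {
    if let Some(v) = payload.api_key { current.api_key = v; }
    if let Some(v) = payload.base_url { current.base_url = v; }
    if let Some(v) = payload.model { current.model = v; }
    current
}
```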

### UI Design

#### Settings Button

- Add a "设置" (settings) button in the sidebar, below the existing connect button
- Styled as a ghost button with a gear icon (using unicode ⚙ or a CSS-only icon)

#### Settings Modal

- Overlay modal with a centered card
- Form fields with labels in Chinese:
  - `API 密钥` (apiKey) - password input type with a show/hide toggle
  - `模型服务地址` (baseUrl) - text input
  - `模型名称` (model) - text input
  - `Skills 目录路径` (skillsDir) - text input with path validation
  - `直接提交技能` (directSubmitSkill) - text input (optional, can be empty)
  - `运行模式` (runtimeProfile) - dropdown: `browser-attached` / `service-standalone`
  - `浏览器后端` (browserBackend) - dropdown: `super-rpa` / `pipe` / `none`
- [保存] (save) primary button, [取消] (cancel) ghost button
- Validation:
  - API Key and Model are required (show a red error if empty on save)
  - Base URL must be a valid URL format
  - Skills Dir must be a valid path format
  - Other fields are optional
#### Connection State Auto-detection

- On page load, call `connect()` automatically
- The connection state chip updates as before
- Reconnect logic (existing) remains unchanged

### File Changes

| File | Change |
|------|--------|
| `frontend/service-console/sg_claw_service_console.html` | Add auto-connect on load, settings modal UI, save logic |
| `src/service/protocol.rs` | Add `UpdateConfig` variant and `ConfigUpdatePayload` struct |
| `src/service/protocol.rs` | Add `ConfigUpdated` service message variant |
| `src/service/server.rs` | Handle `UpdateConfig` message, merge config, write file |
| `src/agent/task_runner.rs` | Add `pub fn config_path(&self) -> Option<&Path>` getter to `AgentRuntimeContext` |
| `src/config/settings.rs` | Add `save_to_path()` method for writing config to file |
| `tests/service_console_html_test.rs` | Add assertions for settings modal and update_config message |
### Config Save Implementation

In `src/service/server.rs`, when handling `UpdateConfig`:

```rust
ClientMessage::UpdateConfig { config } => {
    // 1. Load current config from config_path
    let config_path = runtime_context.config_path(); // needs to be exposed
    let current = SgClawSettings::load(config_path.as_deref())?;

    // 2. Merge: only overwrite fields that are Some in the payload
    let mut merged = current.unwrap_or_default();
    if let Some(v) = config.api_key { merged.provider_api_key = v; }
    if let Some(v) = config.base_url { merged.provider_base_url = v; }
    if let Some(v) = config.model { merged.provider_model = v; }
    if let Some(v) = config.skills_dir { merged.skills_dir = Some(PathBuf::from(v)); }
    // ... etc for other fields

    // 3. Write back to file
    merged.save_to_path(config_path.as_ref().ok_or("no config path")?)?;

    // 4. Respond
    sink.send_service_message(ServiceMessage::ConfigUpdated {
        success: true,
        message: "配置已保存".to_string(),
    })?;
}
```
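The internals of `save_to_path()` are not specified here. One robust option it could use is an atomic write (write a sibling temp file, then rename) so a crash never leaves a half-written `sgclaw_config.json`. A minimal sketch of that pattern, assuming serialization to a JSON string happens elsewhere (the function name is illustrative):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

/// Write `json` to `path` atomically: write a sibling temp file first,
/// flush it to disk, then rename it over the target.
/// NOTE: illustrative helper, not the actual `save_to_path()` code.
fn write_config_atomically(path: &Path, json: &str) -> std::io::Result<()> {
    let tmp = path.with_extension("json.tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(json.as_bytes())?;
        f.sync_all()?; // ensure bytes hit disk before the rename
    }
    fs::rename(&tmp, path)
}
```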

### Hot Reload Consideration

After saving the config, the service should reload its in-memory settings. This requires either:

1. Storing the loaded `SgClawSettings` in a reloadable container (e.g., `Arc<Mutex<SgClawSettings>>` or `Arc<RwLock<...>>`), or
2. Having the service respond with "配置已保存,请重启 sg_claw 以应用更改" ("config saved, restart sg_claw to apply") - simpler, and it avoids hot-reload complexity.

**Recommended:** Start with the "requires restart" approach. Hot reload can be added later if needed.
### Error Handling

| Scenario | Response |
|----------|----------|
| WS not connected when saving | Show inline error: "请先连接服务" (connect to the service first) |
| Config file not found | Service responds: "未找到配置文件,请通过 --config-path 指定" (config file not found; specify it via --config-path) |
| Invalid config values | Service validates and responds with a specific error |
| Write permission denied | Service responds: "无法写入配置文件,请检查文件权限" (cannot write the config file; check file permissions) |
| WS disconnected during save | Show error: "连接断开,保存失败,请重试" (connection lost; save failed, please retry) |
### Test Strategy

1. **Integration test** (`tests/service_console_html_test.rs`):
   - Assert the page contains the settings modal HTML
   - Assert the page contains the "设置" button
   - Assert the page sends the `update_config` message shape
   - Assert the page auto-connects on load (contains `window.onload` or equivalent)

2. **Protocol test** (new or existing test file):
   - Assert `ClientMessage::UpdateConfig` serializes correctly
   - Assert `ServiceMessage::ConfigUpdated` deserializes correctly

3. **Config save test** (new test in `tests/compat_config_test.rs` or a new file):
   - Create a temp config file
   - Send an `UpdateConfig` message
   - Verify the file contents match the expected merged config
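The HTML assertions in item 1 boil down to substring checks over the page source. A minimal sketch of that check as a reusable helper (the marker list and helper name are illustrative; the real test reads `sg_claw_service_console.html` from disk and may assert different markers):

```rust
/// Return the markers that are MISSING from the page source.
/// An empty result means the page contains the settings button label,
/// the update_config message type, and the auto-connect hook.
fn missing_console_markers(html: &str) -> Vec<&'static str> {
    let required = [
        "设置",           // settings button label
        "update_config",  // message type sent on save
        "window.onload",  // auto-connect hook ("or equivalent" per item 1)
    ];
    required
        .iter()
        .copied()
        .filter(|marker| !html.contains(marker))
        .collect()
}
```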

## Acceptance Criteria

1. The page auto-connects to the WS on load without a manual button click
2. The settings button is visible in the sidebar
3. The settings modal opens with form fields for all configurable options
4. Clicking "保存" (save) sends an `update_config` message via WS
5. The service receives the message and writes to `sgclaw_config.json`
6. The service responds with a success/error message
7. The page displays a save-result notification
8. The existing task submission flow is unchanged
9. The existing heartbeat/reconnect logic is unchanged
10. Automated tests pass