feat: add config-owned direct submit runtime
Keep browser-attached workflows on the configured direct-skill path and align the Zhihu export/browser regression contracts with the current ws merge state.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This commit is contained in:
@@ -0,0 +1,281 @@
# Config-Owned Direct Skill Contract Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Validate the `directSubmitSkill` control surface early and prevent malformed direct-skill configs from entering the submit routing path, without changing the current happy-path direct execution behavior.

**Architecture:** Keep the existing direct-submit runtime and submit-task seam intact for valid configs. Move `directSubmitSkill` format validation into the normal `SgClawSettings` load path so malformed config fails before routing begins, while leaving valid-but-unresolvable `skill.tool` targets as direct runtime errors in the current direct path.

**Tech Stack:** Rust 2021, `serde` config parsing, current `BrowserMessage::SubmitTask` path, current direct skill runtime, Rust integration tests.

---
## Execution Context

- Follow @superpowers:test-driven-development for the Rust code changes in this plan.
- Follow @superpowers:verification-before-completion before claiming any task is done.
- Do **not** create a git worktree unless the user explicitly asks. This project prefers staying in the current checkout.
- Keep scope tight: this plan does **not** add per-skill dispatch metadata, docs changes, intent classification, or LLM routing changes.

## File Map
### Existing files to modify

- Modify: `src/config/settings.rs`
  - validate `directSubmitSkill` during config normalization
  - keep the stored field as `Option<String>` so the current direct runtime API stays stable
- Modify: `tests/compat_config_test.rs`
  - add a failing config-load regression for malformed `directSubmitSkill`
- Modify: `tests/agent_runtime_test.rs`
  - add a failing submit-path regression proving malformed config is rejected before direct routing begins

### Existing files to read but not broaden

- Reuse without redesign: `src/agent/mod.rs`
- Reuse without redesign: `src/compat/direct_skill_runtime.rs`
- Reuse without redesign: `docs/superpowers/specs/2026-04-09-config-owned-direct-skill-dispatch-design.md`

### No new files expected

This slice should fit within the existing config and test surfaces.

---
### Task 1: Validate `directSubmitSkill` Before Submit Routing

**Files:**

- Modify: `tests/compat_config_test.rs`
- Modify: `tests/agent_runtime_test.rs`
- Modify: `src/config/settings.rs`
- Read only: `src/agent/mod.rs`
- Read only: `src/compat/direct_skill_runtime.rs`
- [ ] **Step 1: Write the failing config test for malformed `directSubmitSkill`**

Add this focused test to `tests/compat_config_test.rs`:

```rust
#[test]
fn sgclaw_settings_reject_invalid_direct_submit_skill_format() {
    let root = std::env::temp_dir().join(format!(
        "sgclaw-invalid-direct-submit-skill-{}",
        Uuid::new_v4()
    ));
    fs::create_dir_all(&root).unwrap();
    let config_path = root.join("sgclaw_config.json");

    fs::write(
        &config_path,
        r#"{
            "providers": [],
            "skillsDir": "skill_lib",
            "directSubmitSkill": "fault-details-report"
        }"#,
    )
    .unwrap();

    let err = SgClawSettings::load(Some(config_path.as_path()))
        .expect_err("expected invalid directSubmitSkill format");
    let message = err.to_string();

    assert!(message.contains("directSubmitSkill"));
    assert!(message.contains("skill.tool"));
}
```
- [ ] **Step 2: Run the focused config test and verify it fails**

Run:

```bash
cargo test --test compat_config_test sgclaw_settings_reject_invalid_direct_submit_skill_format -- --nocapture
```

Expected: FAIL because the current config loader accepts the malformed string instead of rejecting it early.
- [ ] **Step 3: Write the failing agent regression for malformed config**

Add this focused test to `tests/agent_runtime_test.rs`:

```rust
#[test]
fn submit_task_rejects_invalid_direct_submit_skill_config_before_routing() {
    std::env::remove_var("DEEPSEEK_API_KEY");
    std::env::remove_var("DEEPSEEK_BASE_URL");
    std::env::remove_var("DEEPSEEK_MODEL");

    let skill_root = build_direct_runtime_skill_root();
    let workspace_root = std::env::temp_dir().join(format!(
        "sgclaw-invalid-direct-submit-workspace-{}",
        Uuid::new_v4()
    ));
    fs::create_dir_all(&workspace_root).unwrap();
    let config_path = workspace_root.join("sgclaw_config.json");
    fs::write(
        &config_path,
        serde_json::json!({
            "providers": [],
            "skillsDir": skill_root,
            "directSubmitSkill": "fault-details-report"
        })
        .to_string(),
    )
    .unwrap();

    let runtime_context = AgentRuntimeContext::new(Some(config_path), workspace_root);
    let transport = Arc::new(MockTransport::new(vec![]));
    let browser_tool = BrowserPipeTool::new(
        transport.clone(),
        direct_runtime_test_policy(),
        vec![1, 2, 3, 4, 5, 6, 7, 8],
    )
    .with_response_timeout(Duration::from_secs(1));

    handle_browser_message_with_context(
        transport.as_ref(),
        &browser_tool,
        &runtime_context,
        submit_fault_details_message(),
    )
    .unwrap();

    let sent = transport.sent_messages();
    assert!(matches!(
        sent.last(),
        Some(AgentMessage::TaskComplete { success, summary })
            if !success && summary.contains("skill.tool")
    ));
    assert!(direct_submit_mode_logs(&sent).is_empty());
    assert!(!sent.iter().any(|message| matches!(message, AgentMessage::Command { .. })));
}
```
- [ ] **Step 4: Run the focused agent test and verify it fails**

Run:

```bash
cargo test --test agent_runtime_test submit_task_rejects_invalid_direct_submit_skill_config_before_routing -- --nocapture
```

Expected: FAIL because the malformed config currently loads, enters the direct-submit branch, and emits `direct_skill_primary` before failing later.
- [ ] **Step 5: Implement the minimal config validation**

In `src/config/settings.rs`, add a small helper that validates the normalized `directSubmitSkill` string during `SgClawSettings::new(...)`.

Recommended implementation shape:

```rust
fn normalize_direct_submit_skill(raw: Option<String>) -> Result<Option<String>, ConfigError> {
    let value = normalize_optional_value(raw);
    let Some(value) = value.as_deref() else {
        return Ok(None);
    };

    let Some((skill_name, tool_name)) = value.split_once('.') else {
        return Err(ConfigError::InvalidValue(
            "directSubmitSkill",
            format!("must use skill.tool format, got {value}"),
        ));
    };

    if skill_name.trim().is_empty() || tool_name.trim().is_empty() {
        return Err(ConfigError::InvalidValue(
            "directSubmitSkill",
            format!("must use skill.tool format, got {value}"),
        ));
    }

    Ok(Some(value.to_string()))
}
```
Then use it here:

```rust
let direct_submit_skill = normalize_direct_submit_skill(direct_submit_skill)?;
```

Rules:

- do not change the public field type from `Option<String>`
- do not move parsing responsibility into `src/agent/mod.rs`
- do not redesign `src/compat/direct_skill_runtime.rs`
- keep valid-but-unresolvable `skill.tool` targets as runtime errors in the direct path
- [ ] **Step 6: Re-run the two focused tests and verify they pass**

Run:

```bash
cargo test --test compat_config_test sgclaw_settings_reject_invalid_direct_submit_skill_format -- --nocapture
cargo test --test agent_runtime_test submit_task_rejects_invalid_direct_submit_skill_config_before_routing -- --nocapture
```

Expected: PASS.
- [ ] **Step 7: Re-run the broader regression suites**

Run:

```bash
cargo test --test compat_config_test -- --nocapture
cargo test --test agent_runtime_test -- --nocapture
cargo test --test browser_script_skill_tool_test -- --nocapture
cargo build --bin sgclaw
```

Expected: PASS, including:

- the direct-submit happy path
- the existing no-LLM fallback behavior when `directSubmitSkill` is absent
- unchanged browser-script helper semantics
- a clean binary build

---
## Verification Checklist

### Config validation

```bash
cargo test --test compat_config_test -- --nocapture
```

Expected: malformed `directSubmitSkill` is rejected early, while the existing direct-only config shape still loads.

### Submit-path behavior

```bash
cargo test --test agent_runtime_test -- --nocapture
```

Expected:

- malformed `directSubmitSkill` never reaches direct routing
- a valid configured direct skill still succeeds without LLM config
- no direct skill configured still returns the existing no-LLM message

### Browser-script helper safety

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
```

Expected: current browser-script execution semantics remain unchanged.

### Build

```bash
cargo build --bin sgclaw
```

Expected: the main binary compiles cleanly.
---

## Notes For The Engineer

- The paired spec is `docs/superpowers/specs/2026-04-09-config-owned-direct-skill-dispatch-design.md`.
- Do **not** add sgClaw-specific dispatch metadata under `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging` in this slice.
- Do **not** turn this into a per-skill registry task yet. This plan only hardens the current config-owned bootstrap contract.
- Keep the current direct target example as `fault-details-report.collect_fault_details`; avoid hard-coding that name into new generic APIs.
- If you discover a need for broader policy routing (`direct_browser` / `llm_agent` by skill), stop and write a new spec/plan instead of expanding this one.
@@ -0,0 +1,520 @@
# Direct Skill Invocation Without LLM Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Let the current pipe submit-task flow accept natural-language input but directly invoke one fixed staged browser skill without calling any model, while reserving a clean switch back to LLM-based routing later.

**Architecture:** Keep the existing `BrowserMessage::SubmitTask` entrypoint and add one narrow pre-routing seam before the current compat/LLM chain. When a new config field points to a fixed direct-submit skill, sgClaw loads that skill package from the configured external skills root, finds the target `browser_script` tool, executes it through the existing browser-script wrapper, and returns the result directly. When the field is absent, the current behavior stays unchanged. This preserves a future path where each skill can later declare `direct_browser` or `llm_agent` dispatch without rewriting the submit pipeline again.

**Tech Stack:** Rust 2021, existing `BrowserPipeTool`, current submit-task agent entrypoint, current browser-script skill executor, current sgClaw JSON config loader, `zeroclaw` skill manifest loader.

---
## Recommended First Skill

Use `fault-details-report.collect_fault_details` from:

- `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/scenes/fault-details-report/scene.json`
- `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.toml`
- `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`

Why this one first:

- it is clearly a report/export skill
- it exposes exactly one browser-script tool: `collect_fault_details`
- it has the smallest contract surface (`period` only)
- its current JS is deterministic and simple, so the first slice can focus on plumbing instead of browser-scraping complexity
## Scope Guardrails

- Do **not** redesign the existing submit-task protocol.
- Do **not** remove or rewrite the current LLM/compat path; leave it as the fallback/default path.
- Do **not** introduce generic NL intent routing in this slice; this is one fixed direct skill only.
- Do **not** modify the `third_party/zeroclaw` skill manifest schema in phase 1.
- Do **not** add Excel export wiring in the first slice unless a test explicitly requires it.
- Do **not** invent a new browser-script execution model; reuse the existing wrapper semantics.

---
## File Map

### Existing files to modify

- Modify: `src/config/settings.rs`
  - add a minimal config field for one direct-submit skill name
- Modify: `src/agent/mod.rs`
  - add a narrow pre-routing branch before the current compat/LLM path
- Modify: `src/compat/browser_script_skill_tool.rs`
  - expose the smallest reusable helper for direct browser-script execution
- Modify: `src/compat/mod.rs` or the nearest module export surface
  - export the new narrow direct-skill runtime module if needed
- Modify: `tests/compat_config_test.rs`
  - add config coverage for the new direct-submit field
- Modify: `tests/browser_script_skill_tool_test.rs`
  - add coverage for the reusable direct-execution helper
- Modify: `tests/agent_runtime_test.rs`
  - prove submit-task can bypass the model and directly invoke the fixed skill

### New files to create

- Create: `src/compat/direct_skill_runtime.rs`
  - small runtime for loading one configured skill, resolving one configured tool, deriving minimal args, and executing it directly

### Files to reuse without changing behavior

- Reuse: `src/compat/runtime.rs`
- Reuse: `src/compat/orchestration.rs`
- Reuse: `src/compat/config_adapter.rs`
- Reuse: `third_party/zeroclaw/src/skills/mod.rs`

---
### Task 1: Add A Minimal Direct-Submit Skill Config Field

**Files:**

- Modify: `src/config/settings.rs`
- Modify: `tests/compat_config_test.rs`

- [ ] **Step 1: Write the failing config test for the new field**

In `tests/compat_config_test.rs`, add a focused config-load test proving the browser config file can declare one fixed direct-submit skill.

Test shape:

```rust
#[test]
fn sgclaw_settings_load_direct_submit_skill_from_browser_config() {
    let root = std::env::temp_dir().join(format!("sgclaw-direct-skill-{}", uuid::Uuid::new_v4()));
    std::fs::create_dir_all(&root).unwrap();
    let config_path = root.join("sgclaw_config.json");

    std::fs::write(
        &config_path,
        r#"{
            "apiKey": "sk-runtime",
            "baseUrl": "https://api.deepseek.com",
            "model": "deepseek-chat",
            "skillsDir": "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging",
            "directSubmitSkill": "fault-details-report.collect_fault_details"
        }"#,
    )
    .unwrap();

    let settings = sgclaw::config::SgClawSettings::load(Some(config_path.as_path()))
        .unwrap()
        .expect("expected sgclaw settings from config file");

    assert_eq!(
        settings.direct_submit_skill.as_deref(),
        Some("fault-details-report.collect_fault_details")
    );
}
```
- [ ] **Step 2: Run the focused config test and verify it fails**

Run:

```bash
cargo test --test compat_config_test sgclaw_settings_load_direct_submit_skill_from_browser_config -- --nocapture
```

Expected: FAIL because the config field does not exist yet.
- [ ] **Step 3: Implement the minimal config field**

In `src/config/settings.rs`, add:

- `direct_submit_skill: Option<String>` to `SgClawSettings`
- `direct_submit_skill: Option<String>` to `RawSgClawSettings`
- field normalization in `SgClawSettings::new(...)`

Recommended JSON key shape:

```rust
#[serde(rename = "directSubmitSkill", alias = "direct_submit_skill", default)]
direct_submit_skill: Option<String>,
```

Rules:

- trim empty values to `None`
- keep `DeepSeekSettings` unchanged for this slice unless a compile error proves it must mirror the field
- do not alter unrelated config semantics
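The trim-to-`None` rule can be captured in one tiny helper. This is a minimal sketch; the function name `normalize_optional_value` is an assumption borrowed from the companion plan and may differ in the real `src/config/settings.rs`:

```rust
// Hypothetical helper: collapse missing, empty, and whitespace-only
// config values into None so downstream code sees one "absent" shape.
fn normalize_optional_value(raw: Option<String>) -> Option<String> {
    match raw {
        Some(value) => {
            let trimmed = value.trim();
            if trimmed.is_empty() {
                None
            } else {
                Some(trimmed.to_string())
            }
        }
        None => None,
    }
}
```

Keeping this as a free function means the same normalization can later be reused for other optional string fields without touching their serde attributes.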
- [ ] **Step 4: Re-run the focused config test**

Run:

```bash
cargo test --test compat_config_test sgclaw_settings_load_direct_submit_skill_from_browser_config -- --nocapture
```

Expected: PASS.

- [ ] **Step 5: Re-run the broader config file tests**

Run:

```bash
cargo test --test compat_config_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 6: Commit Task 1**

```bash
git add src/config/settings.rs tests/compat_config_test.rs
git commit -m "feat: add direct submit skill config"
```

---
### Task 2: Extract A Reusable Browser-Script Direct Execution Helper

**Files:**

- Modify: `src/compat/browser_script_skill_tool.rs`
- Modify: `tests/browser_script_skill_tool_test.rs`

- [ ] **Step 1: Write the first failing helper test**

In `tests/browser_script_skill_tool_test.rs`, add a focused test proving direct code can execute a packaged browser script without constructing a full `Tool` object first.

Test shape:

```rust
#[tokio::test]
async fn execute_browser_script_tool_runs_packaged_script_with_expected_domain() {
    // build temp skill script
    // call the helper directly
    // assert Action::Eval was sent with wrapped args and normalized domain
}
```
Required assertions:

- the helper reads the packaged JS file
- it wraps args with `const args = ...`
- it normalizes URL-like `expected_domain`
- it returns the serialized payload string on success
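The wrapping rule the second assertion checks can be sketched in a few lines. This is illustrative only: the function name is hypothetical, and the real helper lives inside `browser_script_skill_tool.rs` alongside path validation and payload formatting:

```rust
// Sketch of the args-wrapping step: prepend the tool's JSON arguments as a
// `const args = ...` binding so the packaged script sees them in page scope.
fn wrap_browser_script(args_json: &str, script_source: &str) -> String {
    // The browser eval receives one self-contained script: the argument
    // binding first, then the unmodified packaged script body.
    format!("const args = {};\n{}", args_json, script_source)
}
```

A test can then assert on the exact prefix of the string handed to `Action::Eval` rather than on browser side effects.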
- [ ] **Step 2: Run the helper test and verify it fails**

Run:

```bash
cargo test --test browser_script_skill_tool_test execute_browser_script_tool_runs_packaged_script_with_expected_domain -- --nocapture
```

Expected: FAIL because the helper does not exist yet.

- [ ] **Step 3: Add the second failing helper test for required-domain validation**

Add a focused failure-path test proving the helper rejects missing or invalid `expected_domain` before any browser command is sent.

- [ ] **Step 4: Run the validation test and verify it fails**

Run:

```bash
cargo test --test browser_script_skill_tool_test execute_browser_script_tool_rejects_missing_expected_domain -- --nocapture
```

Expected: FAIL because the helper does not exist yet.
- [ ] **Step 5: Implement the minimal reusable helper**

In `src/compat/browser_script_skill_tool.rs`, extract the smallest reusable function, for example:

```rust
pub async fn execute_browser_script_tool<T: Transport + 'static>(
    tool: &SkillTool,
    skill_root: &Path,
    browser_tool: BrowserPipeTool<T>,
    args: Value,
) -> anyhow::Result<ToolResult>
```

Rules:

- reuse the current path validation, script loading, wrapping, `Action::Eval`, and payload formatting logic already used by `BrowserScriptSkillTool::execute`
- do not change the outward behavior of `BrowserScriptSkillTool`
- keep the helper narrow and browser-script-only
- [ ] **Step 6: Refactor `BrowserScriptSkillTool::execute` to call the helper**

Keep existing behavior and tests green while removing duplicate execution logic.

- [ ] **Step 7: Re-run the browser-script tests**

Run:

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 8: Commit Task 2**

```bash
git add src/compat/browser_script_skill_tool.rs tests/browser_script_skill_tool_test.rs
git commit -m "refactor: extract direct browser script execution helper"
```

---
### Task 3: Add A Narrow Direct Skill Runtime For One Fixed Skill

**Files:**

- Create: `src/compat/direct_skill_runtime.rs`
- Modify: `src/compat/mod.rs` or the nearest module export point
- Reuse: `src/compat/config_adapter.rs`
- Reuse: `third_party/zeroclaw/src/skills/mod.rs`

- [ ] **Step 1: Write the first failing direct-runtime test**

Add a focused test in `tests/agent_runtime_test.rs`, or a new narrow compat test, proving code can resolve the configured external skills root, load `fault-details-report`, find `collect_fault_details`, and execute it directly.

Recommended shape:

```rust
#[test]
fn direct_skill_runtime_executes_fault_details_report_without_provider() {
    // config points at skill_staging root
    // direct_submit_skill points at fault-details-report.collect_fault_details
    // browser response returns report-artifact payload
    // assert no provider/http path is touched
}
```
- [ ] **Step 2: Run the focused direct-runtime test and verify it fails**

Run the narrowest test command for the new test.

Expected: FAIL because the direct runtime does not exist yet.

- [ ] **Step 3: Implement `src/compat/direct_skill_runtime.rs`**

Add a narrow runtime whose responsibilities are only to:

- resolve the configured skills dir with `resolve_skills_dir_from_sgclaw_settings(...)`
- load skills from that directory with `load_skills_from_directory(...)`
- parse the configured tool name into `skill_name` + `tool_name`
- find the matching skill and matching tool
- verify `tool.kind == "browser_script"`
- derive the minimal argument object
- call the new browser-script helper
- return the output string or a clear `PipeError`

Do **not** add generic routing, scenes, or model fallback here.
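The `skill_name` + `tool_name` parsing step above mirrors the `skill.tool` format validated in the companion contract plan. A minimal sketch, with a hypothetical function name and a plain `String` standing in for the real `PipeError`:

```rust
// Sketch: split a configured "skill.tool" target into its two halves,
// rejecting anything without exactly one non-empty part on each side
// of the first dot.
fn parse_direct_target(configured: &str) -> Result<(String, String), String> {
    match configured.split_once('.') {
        Some((skill, tool)) if !skill.is_empty() && !tool.is_empty() => {
            Ok((skill.to_string(), tool.to_string()))
        }
        _ => Err(format!(
            "directSubmitSkill must use skill.tool format, got {configured}"
        )),
    }
}
```

Note that `split_once('.')` splits on the first dot, so a tool name containing dots passes through unchanged, which matches how the config-side validator treats the value.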
- [ ] **Step 4: Keep argument derivation intentionally minimal**

For the first slice, derive only:

- `expected_domain` from `page_url` when present; otherwise fail with a clear message
- `period` from the instruction using a narrow deterministic pattern such as `YYYY-MM`

If the period cannot be derived, return a concise error telling the user to provide it explicitly. Do not guess.
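Both derivations can stay deterministic and dependency-free. This sketch is an assumption about shape, not the plan's mandated implementation; the helper names are illustrative:

```rust
// Sketch: keep only the host portion of a URL-like page_url.
fn derive_expected_domain(page_url: &str) -> Option<String> {
    let rest = page_url
        .strip_prefix("https://")
        .or_else(|| page_url.strip_prefix("http://"))
        .unwrap_or(page_url);
    let host = rest
        .split(|c| c == '/' || c == '?' || c == '#')
        .next()?
        .trim();
    if host.is_empty() { None } else { Some(host.to_string()) }
}

// Sketch: find the first deterministic YYYY-MM run in the instruction.
// Scanning chars (not bytes) keeps this safe for non-ASCII instructions.
fn derive_period(instruction: &str) -> Option<String> {
    let chars: Vec<char> = instruction.chars().collect();
    for window in chars.windows(7) {
        if window[..4].iter().all(|c| c.is_ascii_digit())
            && window[4] == '-'
            && window[5..].iter().all(|c| c.is_ascii_digit())
        {
            return Some(window.iter().collect());
        }
    }
    None
}
```

Returning `None` rather than a default keeps the "do not guess" rule intact: the caller turns `None` into the explicit user-facing error.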
- [ ] **Step 5: Re-run the focused direct-runtime test**

Run the same test command again.

Expected: PASS.

- [ ] **Step 6: Commit Task 3**

```bash
git add src/compat/direct_skill_runtime.rs src/compat/mod.rs tests/agent_runtime_test.rs
git commit -m "feat: add fixed direct skill runtime"
```

---
### Task 4: Insert The Pre-Routing Seam In Submit-Task Entry

**Files:**

- Modify: `src/agent/mod.rs`
- Modify: `tests/agent_runtime_test.rs`

- [ ] **Step 1: Write the first failing submit-path bypass test**

In `tests/agent_runtime_test.rs`, add a focused regression proving that when `directSubmitSkill` is configured, `BrowserMessage::SubmitTask` can succeed without any model/provider being configured.

Test shape:

```rust
#[test]
fn submit_task_uses_direct_skill_mode_without_llm_configuration() {
    // config contains skillsDir + directSubmitSkill, but no reachable provider
    // natural-language instruction includes period and page_url
    // expect TaskComplete success from direct skill result
}
```
Required assertions:

- the task succeeds even if the provider would be unavailable
- the output contains the report artifact payload
- no summary like `未配置大语言模型` ("no large language model configured")
- [ ] **Step 2: Run the bypass test and verify it fails**

Run:

```bash
cargo test --test agent_runtime_test submit_task_uses_direct_skill_mode_without_llm_configuration -- --nocapture
```

Expected: FAIL because submit-task still goes into the current LLM-oriented path.

- [ ] **Step 3: Add the second failing priority test**

Add one focused test proving the direct-submit branch runs before the existing compat/LLM branch.

The easiest assertion is that the mode log becomes something new like:

- `direct_skill_primary`

and the normal mode logs do not appear for that turn.

- [ ] **Step 4: Run the priority test and verify it fails**

Run the narrow test command for the new test.

Expected: FAIL because the mode does not exist yet.
- [ ] **Step 5: Add the narrow pre-routing branch in `src/agent/mod.rs`**

In `handle_browser_message_with_context(...)`, after config load/logging and before the existing `should_use_primary_orchestration(...)` / `compat::runtime` path:

- check `settings.direct_submit_skill`
- if present, emit mode log `direct_skill_primary`
- call the new direct runtime
- send `TaskComplete` and return immediately

Rules:

- if `direct_submit_skill` is absent, keep existing behavior byte-for-byte where possible
- do not modify `compat::runtime.rs` or `compat::orchestration.rs` for this slice
- do not silently fall through to LLM when direct execution fails; return the direct error clearly so the first slice is debuggable
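The branch decision itself is tiny and worth isolating so the priority test can assert on it directly. A minimal sketch, assuming names: `direct_skill_primary` comes from the plan's mode log, while the fallback label is an invented placeholder for whatever the compat path currently logs:

```rust
// Sketch of the pre-routing decision: a configured, non-blank
// direct_submit_skill wins; everything else keeps the existing path.
fn submit_mode(direct_submit_skill: Option<&str>) -> &'static str {
    match direct_submit_skill {
        Some(value) if !value.trim().is_empty() => "direct_skill_primary",
        _ => "compat_llm_path", // placeholder label for the existing route
    }
}
```

Keeping this as a pure function means the "direct branch runs first" regression can test routing priority without constructing a transport or runtime context.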
- [ ] **Step 6: Re-run the focused submit-path tests**

Run:

```bash
cargo test --test agent_runtime_test submit_task_uses_direct_skill_mode_without_llm_configuration -- --nocapture
cargo test --test agent_runtime_test direct_skill_mode_logs_direct_skill_primary -- --nocapture
```

Expected: PASS.

- [ ] **Step 7: Re-run existing no-LLM submit regression coverage**

Run:

```bash
cargo test --test agent_runtime_test -- --nocapture
```

Expected: PASS, including existing cases where no direct skill is configured and the old no-LLM failure still applies.

- [ ] **Step 8: Commit Task 4**

```bash
git add src/agent/mod.rs tests/agent_runtime_test.rs
git commit -m "feat: route submit tasks through fixed direct skill mode"
```

---
### Task 5: Lock The Future Migration Seam Without Implementing LLM Dispatch Yet

**Files:**

- Modify only if needed: `src/config/settings.rs`
- Modify only if needed: `src/compat/direct_skill_runtime.rs`
- Reuse: docs/plan text only, unless code needs one tiny naming fix

- [ ] **Step 1: Keep the config naming compatible with future per-skill dispatch**

Document and preserve this future meaning in code naming:

- current field: one fixed direct skill for submit-task bootstrap
- future model: each skill can declare a dispatch mode such as `direct_browser` or `llm_agent`

Prefer neutral names in helper code, such as:

- `direct skill mode`
- `direct submit skill`

Avoid hard-coding `fault_details` into generic APIs.
- [ ] **Step 2: Add one small negative test for fallback behavior**

Add a focused test proving that when `directSubmitSkill` is not configured, submit-task still behaves exactly as before and can still return the existing no-LLM message.

If an existing test already proves this, keep it and do not add another.

- [ ] **Step 3: Re-run the focused end-to-end verification set**

Run:

```bash
cargo test --test compat_config_test -- --nocapture
cargo test --test browser_script_skill_tool_test -- --nocapture
cargo test --test agent_runtime_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 4: Build the main binary**

Run:

```bash
cargo build --bin sgclaw
```

Expected: PASS.

- [ ] **Step 5: Commit Task 5**

```bash
git add src/config/settings.rs src/compat/direct_skill_runtime.rs src/compat/browser_script_skill_tool.rs src/agent/mod.rs tests/compat_config_test.rs tests/browser_script_skill_tool_test.rs tests/agent_runtime_test.rs
git commit -m "test: verify fixed direct skill submit path"
```

---
## Verification Checklist

### Config loading

```bash
cargo test --test compat_config_test -- --nocapture
```

Expected: `directSubmitSkill` loads correctly and existing config behavior remains intact.

### Browser-script helper

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
```

Expected: the direct helper preserves the existing browser-script execution semantics.

### Submit-path bypass

```bash
cargo test --test agent_runtime_test -- --nocapture
```

Expected: a configured direct skill bypasses the model path, while unconfigured submit-task behavior stays unchanged.

### Build

```bash
cargo build --bin sgclaw
```

Expected: the binary compiles cleanly.

---
## Notes For The Engineer

- The key to keeping this slice small is to avoid changing `compat::runtime.rs` and `compat::orchestration.rs`; they remain the future LLM path.
- `fault-details-report.collect_fault_details` is only the bootstrap skill. The plumbing must stay generic enough that the configured tool name can later point to another staged browser skill.
- Phase 1 should not add per-skill dispatch metadata to the external skill manifests yet. Keep that decision in sgClaw config first; move it into skill metadata only after the direct path is proven useful.
- Once the intranet model is ready, the clean next step is to add a dispatch policy layer that chooses between `direct_browser` and `llm_agent` before the current compat path is entered, reusing this same pre-routing seam.
@@ -0,0 +1,672 @@

# Fault Details Full Skill Alignment Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Upgrade `fault-details-report.collect_fault_details` into a real staged browser skill that matches the original fault-details workflow, and make `claw-new` interpret the returned artifact status correctly in the direct-submit path.

**Architecture:** Keep routing and direct-skill selection in `claw-new`, but move all fault-details collection, normalization, classification, summary, export, and report-log behavior into the staged skill under `skill_staging`. Implement the staged skill as a true browser-eval entrypoint that remains valid in page context, while exposing testable pure helpers through an environment-safe export guard for `node:test`. Then add a narrow Rust artifact interpreter in `src/compat/direct_skill_runtime.rs` so that `ok` / `partial` / `empty` map to successful task completion while `blocked` / `error` map to failed completion.

**Tech Stack:** Rust 2021, `serde_json`, existing `BrowserPipeTool` / `browser_script` runtime, `node:test`, staged skill fixtures, Cargo integration tests.

---

## Execution Context

- Follow @superpowers:test-driven-development for every behavior change.
- Follow @superpowers:verification-before-completion before claiming each task is done.
- Do **not** create a git worktree unless the user explicitly asks. This repo preference is already established.
- Keep scope tight. Do **not** add a new browser protocol, new dispatch metadata, new UI opener behavior, or Rust-side fault classification logic.
- Keep the current direct-path bootstrap requirement intact: the user instruction must still include an explicit `YYYY-MM`, but once execution begins the staged skill must treat the page-selected range as the source of truth for collection.
- Preserve parity with the original package's real behavior: port the original classification table, the `qxxcjl`-based reason heuristics, the canonical detail mapping, the summary aggregation rules, the localhost export call, and the report-log call into the staged skill rather than implementing a fixture-only subset.

## File Map

### Existing files to modify in `claw-new`

- Modify: `src/compat/direct_skill_runtime.rs`
  - add narrow structured artifact parsing and status-to-summary mapping
  - keep direct-skill routing/config ownership unchanged
- Modify: `tests/agent_runtime_test.rs`
  - add direct-submit regressions for `ok`, `partial`, `empty`, `blocked`, and `error`
- Modify: `tests/browser_script_skill_tool_test.rs`
  - add browser-script execution-shape regression for browser-eval return payloads used by fault-details

### Existing files to modify in `skill_staging`

- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`
  - replace empty shell with browser-eval entrypoint plus parity helpers
- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js`
  - deterministic fixture coverage for normalization, classification, summary, artifact contract, export/logging degradation, and entrypoint shape helpers
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.toml`
  - align tool description with real collection/export/report-log behavior
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.md`
  - align written contract with actual runtime behavior and artifact fields
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/references/collection-flow.md`
  - align flow with page-range/query/export/report-log sequence
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/references/data-quality.md`
  - make canonical columns, original classification tables, reason heuristics, summary rules, and partial semantics explicit
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/scenes/fault-details-report/scene.json`
  - keep scene output/state contract aligned with real staged artifact behavior

### Existing files to read but not redesign

- Read only: `docs/superpowers/specs/2026-04-10-fault-details-full-skill-alignment-design.md`
- Read only: `src/agent/mod.rs`
- Read only: `src/compat/browser_script_skill_tool.rs`
- Read only: `D:/desk/智能体资料/大四区报告监测项/故障明细/index.html`

---

### Task 1: Add staged-skill red tests for normalization, summary, and artifact-contract semantics

**Files:**

- Create: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js`
- Read only: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`
- Read only: `D:/desk/智能体资料/大四区报告监测项/故障明细/index.html`

- [ ] **Step 1: Write the failing staged-skill test file**

Add `collect_fault_details.test.js` using `node:test` and `assert/strict`. Cover these behaviors with fixed fixtures:

```javascript
const test = require('node:test');
const assert = require('node:assert/strict');

const {
  DETAIL_COLUMNS,
  SUMMARY_COLUMNS,
  normalizeDetailRow,
  deriveSummaryRows,
  determineArtifactStatus,
  buildFaultDetailsArtifact,
  buildBrowserEntrypointResult
} = require('./collect_fault_details.js');

test('normalizeDetailRow maps canonical detail fields from raw repair rows', () => {
  const row = normalizeDetailRow({
    qxdbh: 'QX-1',
    bxsj: '2026-03-09 08:00:00',
    cityName: '国网兰州供电公司',
    maintOrgName: '城关供电服务班',
    maintGroupName: '抢修一班',
    bdzMc: '110kV东岗变',
    xlmc10: '10kV东岗线',
    byqmc: '东岗1号变',
    yjflMc: '电网故障',
    ejflMc: '线路故障',
    sjflMc: '低压线路',
    qxxcjl: '现场检查:低压线路断线,已处理完成',
    gzms: '客户报修停电'
  }, {
    companyName: '国网兰州供电公司'
  });

  assert.equal(row.slsj, '2026-03-09 08:00:00');
  assert.equal(row.gssgs, '甘肃省电力公司');
  assert.equal(row.gddw, '城关供电服务班');
  assert.equal(row.gds, '抢修一班');
  assert.equal(row.clzt, '处理完成');
  assert.equal(row.bdz, '110kV东岗变');
  assert.equal(row.line, '10kV东岗线');
  assert.equal(row.pb, '东岗1号变');
});

test('deriveSummaryRows groups normalized rows by gds and computes counters', () => {
  const rows = [
    { gds: '抢修一班', gddw: '城关供电服务班', sgs: '国网兰州供电公司', sxfl1: '无效', sxfl2: '无效', gzsb: '' },
    { gds: '抢修一班', gddw: '城关供电服务班', sgs: '国网兰州供电公司', sxfl1: '有效', sxfl2: '用户侧', gzsb: '表后线' },
    { gds: '抢修一班', gddw: '城关供电服务班', sgs: '国网兰州供电公司', sxfl1: '有效', sxfl2: '电网侧', dwcFl: '低压故障', gzsb: '低压线路' }
  ];

  const summaryRows = deriveSummaryRows(rows, { companyName: '国网兰州供电公司' });
  assert.equal(summaryRows.length, 1);
  assert.equal(summaryRows[0].className, '抢修一班');
  assert.equal(summaryRows[0].allCount, 3);
  assert.equal(summaryRows[0].wxCount, 1);
  assert.equal(summaryRows[0].khcCount, 0);
  assert.equal(summaryRows[0].dyGzCount, 1);
  assert.equal(summaryRows[0].dyxlCount, 1);
  assert.equal(summaryRows[0].bhxCount, 1);
});

test('determineArtifactStatus follows blocked > error > partial > empty > ok precedence', () => {
  assert.equal(determineArtifactStatus({ blockedReason: 'missing_session', fatalError: null, partialReasons: [], detailRows: [{}] }), 'blocked');
  assert.equal(determineArtifactStatus({ blockedReason: null, fatalError: 'parse_failed', partialReasons: [], detailRows: [{}] }), 'error');
  assert.equal(determineArtifactStatus({ blockedReason: null, fatalError: null, partialReasons: ['export_failed'], detailRows: [{}] }), 'partial');
  assert.equal(determineArtifactStatus({ blockedReason: null, fatalError: null, partialReasons: [], detailRows: [] }), 'empty');
  assert.equal(determineArtifactStatus({ blockedReason: null, fatalError: null, partialReasons: [], detailRows: [{}] }), 'ok');
});

test('buildFaultDetailsArtifact keeps canonical fields, selected range, counts, and downstream results', () => {
  const artifact = buildFaultDetailsArtifact({
    period: '2026-03',
    selectedRange: { start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' },
    detailRows: [{ qxdbh: 'QX-1' }],
    summaryRows: [{ index: 1 }],
    partialReasons: ['report_log_failed'],
    downstream: {
      export: { attempted: true, success: true, path: 'http://localhost/export.xlsx' },
      report_log: { attempted: true, success: false, error: '500' }
    }
  });

  assert.equal(artifact.type, 'report-artifact');
  assert.equal(artifact.status, 'partial');
  assert.deepEqual(artifact.selected_range, { start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' });
  assert.equal(artifact.counts.detail_rows, 1);
  assert.equal(artifact.counts.summary_rows, 1);
  assert.deepEqual(artifact.partial_reasons, ['report_log_failed']);
});

test('buildFaultDetailsArtifact keeps required top-level fields for blocked artifact', () => {
  const artifact = buildFaultDetailsArtifact({
    period: '2026-03',
    blockedReason: 'selected_range_unavailable',
    partialReasons: ['selected_range_unavailable']
  });

  assert.equal(artifact.type, 'report-artifact');
  assert.equal(artifact.report_name, 'fault-details-report');
  assert.equal(artifact.period, '2026-03');
  assert.equal(artifact.status, 'blocked');
  assert.deepEqual(artifact.partial_reasons, ['selected_range_unavailable']);
  assert.equal('downstream' in artifact, false);
});

test('buildFaultDetailsArtifact keeps known selected range and counts on late error', () => {
  const artifact = buildFaultDetailsArtifact({
    period: '2026-03',
    selectedRange: { start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' },
    detailRows: [],
    summaryRows: [],
    fatalError: 'summary_failed',
    partialReasons: ['summary_failed']
  });

  assert.equal(artifact.status, 'error');
  assert.deepEqual(artifact.selected_range, { start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' });
  assert.equal(artifact.counts.detail_rows, 0);
  assert.equal(artifact.counts.summary_rows, 0);
});

test('buildBrowserEntrypointResult returns blocked artifact when selected range is unavailable', async () => {
  const artifact = await buildBrowserEntrypointResult({
    period: '2026-03'
  }, {
    readSelectedRange: async () => null
  });

  assert.equal(artifact.status, 'blocked');
  assert.ok(artifact.partial_reasons.includes('selected_range_unavailable'));
});
```

- [ ] **Step 2: Run the staged-skill test file and verify it fails**

Run:

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: FAIL, because `collect_fault_details.js` does not export these helpers yet and still only returns an empty shell.

---

### Task 2: Implement staged-skill parity helpers and a valid browser entrypoint

**Files:**

- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`
- Test: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js`

- [ ] **Step 1: Implement the helper exports and browser entrypoint shape needed to satisfy the red tests**

Refactor `collect_fault_details.js` so the file remains a valid browser-eval script in page context while still supporting `node:test` through an environment-safe export guard.

Required implementation pieces:

```javascript
const DETAIL_COLUMNS = [/* existing canonical columns */];
const SUMMARY_COLUMNS = [/* existing summary columns */];

function normalizeDetailRow(raw, context) {
  // map qxdbh/gssgs/sgs/gddw/gds/slsj/clzt/bdz/line/pb
  // derive sxfl1/sxfl2/sxfl3/gzsb/gzyy from the original package rules
}

function deriveSummaryRows(detailRows, context) {
  // group by gds and compute all original package counters
}

function determineArtifactStatus({ blockedReason, fatalError, partialReasons, detailRows }) {
  // blocked > error > partial > empty > ok
}

function buildFaultDetailsArtifact({
  period,
  selectedRange,
  detailRows,
  summaryRows,
  partialReasons,
  blockedReason,
  fatalError,
  downstream
}) {
  // return report-artifact with columns, sections, counts, status, partial_reasons, downstream
}

async function buildBrowserEntrypointResult(input, deps = defaultBrowserDeps()) {
  // read selected range from page
  // collect raw rows from page query
  // normalize rows
  // derive summary
  // attempt export + report log
  // return final artifact
}

if (typeof module !== 'undefined' && module.exports) {
  module.exports = {
    DETAIL_COLUMNS,
    SUMMARY_COLUMNS,
    normalizeDetailRow,
    deriveSummaryRows,
    determineArtifactStatus,
    buildFaultDetailsArtifact,
    buildBrowserEntrypointResult
  };
}

return await buildBrowserEntrypointResult(args);
```

Rules:

- keep `DETAIL_COLUMNS` and `SUMMARY_COLUMNS` canonical and stable
- keep helper functions self-contained in this file unless a separate pure-helper file becomes necessary for runtime validity
- keep the browser entrypoint compatible with the current `eval` wrapper
- keep the browser runtime free of unguarded Node-only assumptions
- do **not** invent a new protocol or callback surface

- [ ] **Step 2: Re-run the staged-skill test file and verify it now reaches deeper failures or passes the initial helper coverage**

Run:

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: either PASS for the Task 1 cases, or failures only on the still-missing full parity/export/history specifics added in Task 3.

---

### Task 3: Add red tests for full classification parity, downstream partials, and empty-result export semantics

**Files:**

- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js`
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.js`
- Read only: `D:/desk/智能体资料/大四区报告监测项/故障明细/index.html`

- [ ] **Step 1: Extend the staged-skill tests with failing parity and downstream cases**

Add focused failing tests such as:

```javascript
test('normalizeDetailRow derives gzyy from qxxcjl text heuristics', () => {
  const row = normalizeDetailRow({
    qxxcjl: '现场检查:客户表后线烧损,已恢复送电',
    ejflMc: '客户侧故障',
    sjflMc: '表后线'
  }, { companyName: '国网兰州供电公司' });

  assert.equal(row.gzsb, '表后线');
  assert.equal(row.gzyy, '表后线烧损');
});

test('buildBrowserEntrypointResult returns partial when export fails after detail collection succeeds', async () => {
  const artifact = await buildBrowserEntrypointResult({ period: '2026-03' }, {
    readSelectedRange: async () => ({ start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' }),
    queryFaultRows: async () => [{ qxdbh: 'QX-1', bxsj: '2026-03-09 08:00:00', maintGroupName: '抢修一班' }],
    readCompanyContext: () => ({ companyName: '国网兰州供电公司' }),
    exportWorkbook: async () => {
      throw new Error('export_failed');
    },
    writeReportLog: async () => ({ success: true })
  });

  assert.equal(artifact.status, 'partial');
  assert.ok(artifact.partial_reasons.includes('export_failed'));
  assert.equal(artifact.counts.detail_rows, 1);
  assert.equal(artifact.downstream.export.attempted, true);
  assert.equal(artifact.downstream.export.success, false);
});

test('buildBrowserEntrypointResult returns error when normalized detail rows cannot be produced', async () => {
  const artifact = await buildBrowserEntrypointResult({ period: '2026-03' }, {
    readSelectedRange: async () => ({ start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' }),
    queryFaultRows: async () => [{ qxdbh: '', bxsj: '' }],
    readCompanyContext: () => ({ companyName: '国网兰州供电公司' })
  });

  assert.equal(artifact.status, 'error');
  assert.ok(artifact.partial_reasons.includes('detail_normalization_failed'));
});

test('buildBrowserEntrypointResult keeps canonical rows empty for empty result and omits downstream before attempts', async () => {
  const artifact = await buildBrowserEntrypointResult({ period: '2026-03' }, {
    readSelectedRange: async () => ({ start: '2026-03-08 16:00:00', end: '2026-03-09 16:00:00' }),
    queryFaultRows: async () => [],
    readCompanyContext: () => ({ companyName: '国网兰州供电公司' })
  });

  assert.equal(artifact.status, 'empty');
  assert.deepEqual(artifact.rows, []);
  assert.equal('downstream' in artifact, false);
});
```

Also add fixture cases derived from the original package's full classification table and summary counters, so the staged skill is forced toward parity rather than a subset implementation.

- [ ] **Step 2: Run the staged-skill test file and verify it fails on the new cases**

Run:

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: FAIL on missing full classification parity or downstream partial/error behavior.

- [ ] **Step 3: Implement the full business logic needed to satisfy the new tests**

In `collect_fault_details.js`:

- port the original classification table and `qxxcjl` text heuristics for `sxfl1`, `sxfl2`, `sxfl3`, `gzsb`, and `gzyy`
- port the original summary derivation rules and counters completely
- add required-field validation so structurally unusable normalized rows escalate to `error`
- add downstream `exportWorkbook` and `writeReportLog` stages that record `{attempted, success, path, error}`
- keep collection success distinct from downstream failures, so export/logging failures become `partial`, not full failure
- if placeholder rows are needed for downstream empty-export payloads, keep them downstream-only; never include them in the canonical returned `rows`
- include both `period` and `selected_range` in the artifact
- omit `downstream` when export/report-log have not been attempted yet

- [ ] **Step 4: Re-run the staged-skill test file and verify it passes**

Run:

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: PASS.

---

### Task 4: Align staged-skill metadata and reference docs with the implemented behavior

**Files:**

- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.toml`
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/SKILL.md`
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/references/collection-flow.md`
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/references/data-quality.md`
- Modify: `D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/scenes/fault-details-report/scene.json`

- [ ] **Step 1: Update the staged metadata/docs to match the implemented runtime contract**

Required changes:

- `SKILL.toml`: description must say the tool collects rows, derives summary, attempts localhost export, and records report history
- `SKILL.md`: artifact example must include `selected_range`, `counts`, `status`, `partial_reasons`, and `downstream`
- `references/collection-flow.md`: sequence must explicitly include page-selected range -> raw query -> normalization -> summary -> export -> report-log
- `references/data-quality.md`: document the original classification tables, `qxxcjl` heuristics, summary rules, partial/error escalation rules, and empty-result semantics explicitly enough to match the implemented helpers
- `scene.json`: keep inputs/outputs/status semantics aligned with the richer artifact; do not add routing policy there

- [ ] **Step 2: Read the updated staged docs and verify they match the implemented JS behavior**

Read and confirm:

- descriptions no longer claim "artifact shell" behavior
- docs do not move routing ownership out of `claw-new`
- docs do not promise auto-opening/downloading behavior in this slice
- docs reflect blocked/error field-presence rules and downstream-attempt semantics

Expected: staged metadata/docs accurately reflect the implemented collector.

---

### Task 5: Add Rust red tests for artifact-status interpretation in the direct-submit runtime

**Files:**

- Modify: `tests/agent_runtime_test.rs`
- Modify: `tests/browser_script_skill_tool_test.rs`
- Modify: `src/compat/direct_skill_runtime.rs`
- Read only: `src/compat/browser_script_skill_tool.rs`

- [ ] **Step 1: Add failing direct-submit runtime tests for structured artifact statuses**

Extend `tests/agent_runtime_test.rs` with focused regressions that use the existing temp skill-root harness but return real `report-artifact` payloads:

```rust
#[test]
fn submit_task_treats_partial_report_artifact_as_success_with_warning_summary() {
    let skill_root = build_direct_runtime_skill_root();
    let runtime_context = direct_submit_runtime_context(&skill_root);
    let transport = Arc::new(MockTransport::new(vec![success_browser_response(
        1,
        serde_json::json!({
            "text": {
                "type": "report-artifact",
                "report_name": "fault-details-report",
                "period": "2026-03",
                "selected_range": { "start": "2026-03-08 16:00:00", "end": "2026-03-09 16:00:00" },
                "columns": ["qxdbh"],
                "rows": [{ "qxdbh": "QX-1" }],
                "sections": [{ "name": "summary-sheet", "columns": ["index"], "rows": [{ "index": 1 }] }],
                "counts": { "detail_rows": 1, "summary_rows": 1 },
                "status": "partial",
                "partial_reasons": ["report_log_failed"],
                "downstream": {
                    "export": { "attempted": true, "success": true, "path": "http://localhost/export.xlsx" },
                    "report_log": { "attempted": true, "success": false, "error": "500" }
                }
            }
        }),
    )]));
    // ... invoke handle_browser_message_with_context(...)
    // assert TaskComplete.success == true
    // assert summary contains partial/report_log_failed/detail_rows=1
}

#[test]
fn submit_task_treats_empty_report_artifact_as_success() { /* status=empty => success=true */ }

#[test]
fn submit_task_treats_blocked_report_artifact_as_failure() { /* status=blocked => success=false */ }

#[test]
fn submit_task_treats_error_report_artifact_as_failure() { /* status=error => success=false */ }
```

Also add one focused helper regression to `tests/browser_script_skill_tool_test.rs` that proves the browser-script helper can return a structured object payload used by the fault-details path without flattening required fields away.

Suggested test name:

```rust
#[tokio::test]
async fn execute_browser_script_tool_preserves_structured_report_artifact_payload() { /* ... */ }
```

- [ ] **Step 2: Run the focused Rust tests and verify they fail**

Run:

```bash
cargo test --test agent_runtime_test submit_task_treats_partial_report_artifact_as_success_with_warning_summary -- --nocapture
cargo test --test browser_script_skill_tool_test execute_browser_script_tool_preserves_structured_report_artifact_payload -- --nocapture
```

Expected: the new `agent_runtime_test` case fails because no Rust-side interpretation exists yet: `execute_direct_submit_skill` still returns raw JSON text, and `src/agent/mod.rs` still marks every direct-submit result as success.

---

### Task 6: Implement narrow Rust artifact interpretation without moving business rules into Rust

**Files:**

- Modify: `src/compat/direct_skill_runtime.rs`
- Modify: `tests/agent_runtime_test.rs`
- Modify: `tests/browser_script_skill_tool_test.rs`

- [ ] **Step 1: Implement a narrow structured-artifact interpreter in `src/compat/direct_skill_runtime.rs`**

Add a small internal result type and parser, for example:

```rust
struct DirectSubmitOutcome {
    success: bool,
    summary: String,
}

fn interpret_direct_submit_output(output: &str) -> DirectSubmitOutcome {
    // parse JSON if possible
    // if type == "report-artifact", read status/counts/partial_reasons/downstream
    // map ok/partial/empty => success=true
    // map blocked/error => success=false
    // build concise summary with report_name, period, detail_rows, summary_rows, status, partial reasons
    // fall back to raw output text when payload is not a recognized artifact
}
```

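The status table at the heart of that interpreter is small enough to sketch directly; `status_to_success` is an illustrative name, and the real code would first pull `status` out of the parsed `report-artifact` JSON:

```rust
// Sketch of the status-to-success table; the surrounding interpreter
// (JSON parsing, summary building) is omitted here.
fn status_to_success(status: &str) -> Option<bool> {
    match status {
        "ok" | "partial" | "empty" => Some(true), // task completes successfully
        "blocked" | "error" => Some(false),       // task completes as failure
        _ => None, // unrecognized status: fall back to raw-output handling
    }
}
```

Keeping this table total over the five statuses makes the blocked/error regressions in `agent_runtime_test` a direct check of one function.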
Then change the public entrypoint shape from `Result<String, PipeError>` to a narrow result carrying `success` and `summary`, or add a second helper that `src/agent/mod.rs` can use without changing routing ownership.

Rules:

- do **not** reimplement fault normalization/classification/summary in Rust
- do **not** add fault-specific branching in `src/agent/mod.rs`
- keep unrecognized non-artifact outputs working as before
- keep explicit `YYYY-MM` derivation and configured `skill.tool` resolution unchanged

- [ ] **Step 2: Update the submit-path caller to use the interpreted success flag**

Adjust the direct-submit branch so `TaskComplete.success` comes from the artifact interpretation instead of blindly treating every `Ok(summary)` as success.

Implementation target:

- keep the direct path in `src/agent/mod.rs`
- keep error handling narrow
- if needed, return a dedicated direct-submit outcome from `execute_direct_submit_skill`

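As an illustration of the concise summary those regressions grep for, one possible formatter; the name `build_outcome_summary` and the exact layout are assumptions, not the real seam:

```rust
// Hypothetical summary formatter: surfaces report_name, period, counts,
// status, and partial reasons in one line for TaskComplete.
fn build_outcome_summary(
    report_name: &str,
    period: &str,
    status: &str,
    detail_rows: u64,
    summary_rows: u64,
    partial_reasons: &[String],
) -> String {
    let mut summary = format!(
        "{report_name} {period}: status={status} detail_rows={detail_rows} summary_rows={summary_rows}"
    );
    if !partial_reasons.is_empty() {
        summary.push_str(" partial_reasons=");
        summary.push_str(&partial_reasons.join(","));
    }
    summary
}
```

Whatever the final format, it should keep the fields the Task 5 assertions look for (`partial`, `report_log_failed`, `detail_rows=1`) greppable in one string.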
- [ ] **Step 3: Re-run the focused Rust tests and verify they pass**

Run:

```bash
cargo test --test agent_runtime_test submit_task_treats_partial_report_artifact_as_success_with_warning_summary -- --nocapture
cargo test --test agent_runtime_test submit_task_treats_empty_report_artifact_as_success -- --nocapture
cargo test --test agent_runtime_test submit_task_treats_blocked_report_artifact_as_failure -- --nocapture
cargo test --test agent_runtime_test submit_task_treats_error_report_artifact_as_failure -- --nocapture
cargo test --test browser_script_skill_tool_test execute_browser_script_tool_preserves_structured_report_artifact_payload -- --nocapture
```

Expected: PASS.

---

### Task 7: Run the full verification sweep for the staged skill and direct runtime

**Files:**

- Verify only

- [ ] **Step 1: Run the staged-skill deterministic test file**

Run:

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: PASS.

- [ ] **Step 2: Run the relevant Rust regression suites**

Run:

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
cargo test --test agent_runtime_test -- --nocapture
```

Expected: PASS.

- [ ] **Step 3: Run the broader compatibility coverage and build**
|
||||
|
||||
Run:
|
||||
|
||||
```bash
|
||||
cargo test --test compat_runtime_test -- --nocapture
|
||||
cargo test --test compat_config_test -- --nocapture
|
||||
cargo build --bin sgclaw
|
||||
```
|
||||
|
||||
Expected: PASS.
|
||||
|
||||
- [ ] **Step 4: Manually verify the requirements against the approved spec**
|
||||
|
||||
Checklist:
|
||||
- staged skill now reads page-selected range instead of inventing a month window after entry
|
||||
- staged skill returns canonical detail rows and summary rows
|
||||
- staged skill ports the original classification table, `qxxcjl` heuristics, and summary counters with parity coverage
|
||||
- staged skill records downstream export/report-log outcome
|
||||
- staged skill distinguishes `ok` / `partial` / `empty` / `blocked` / `error`
|
||||
- `blocked` / `error` artifacts keep the required top-level fields, and preserve known `selected_range` / `counts` when failure happens late enough
|
||||
- `downstream` is omitted when export/report-log were not attempted and included with attempted/success flags once they were attempted
|
||||
- empty-result canonical `rows` stay empty even if downstream export uses a placeholder transport row
|
||||
- `claw-new` maps `ok` / `partial` / `empty` to success and `blocked` / `error` to failure
|
||||
- no new routing metadata was added to `SKILL.toml` or `scene.json`
|
||||
- no new browser protocol or opener/UI behavior was introduced
|
||||
|
||||
Expected: all checklist items satisfied before calling the work complete.
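
The `ok` / `partial` / `empty` → success and `blocked` / `error` → failure rule from the checklist can be sketched as a tiny shell helper. This is illustrative only: the function name is hypothetical, and only the five status strings come from this plan, not from actual `claw-new` code.

```bash
# Hypothetical sketch of the claw-new outcome mapping described above.
# Only the five status strings come from this plan; the function name
# and shape are illustrative.
map_artifact_status() {
  case "$1" in
    ok|partial|empty) echo "success" ;;
    blocked|error)    echo "failure" ;;
    *) echo "unknown status: $1" >&2; return 1 ;;
  esac
}
```

For example, `map_artifact_status partial` prints `success`, while an unrecognized status fails with a non-zero exit code.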

---

## Verification Checklist

### Staged skill behavior

```bash
node "D:/data/ideaSpace/rust/sgClaw/claw/claw/skills/skill_staging/skills/fault-details-report/scripts/collect_fault_details.test.js"
```

Expected: deterministic fixture coverage passes for normalization, full classification parity, summary derivation, artifact shape, empty semantics, and downstream partial semantics.

### Direct-submit runtime mapping

```bash
cargo test --test agent_runtime_test -- --nocapture
```

Expected:
- valid artifact `ok` / `partial` / `empty` completes successfully
- valid artifact `blocked` / `error` completes as failure
- existing invalid config regression still passes
- existing direct-submit happy path still passes

### Browser-script helper safety

```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
```

Expected: current browser-script execution semantics remain intact while returning structured artifact payloads.

### Compatibility/build

```bash
cargo test --test compat_runtime_test -- --nocapture
cargo test --test compat_config_test -- --nocapture
cargo build --bin sgclaw
```

Expected: no regressions in compat execution/config loading; main binary builds cleanly.

---

## Notes For The Engineer

- The paired spec is `docs/superpowers/specs/2026-04-10-fault-details-full-skill-alignment-design.md`.
- Keep all fault business transforms in `skill_staging`, not in Rust.
- Keep direct routing config-owned via `skillsDir` + `directSubmitSkill`.
- Do **not** broaden this slice into LLM routing, generic dispatch policy, new browser opcodes, or export auto-open behavior.
- If the original package reveals extra classification rules that are needed for parity, add them only inside `collect_fault_details.js` and its staged references/tests, not in `claw-new`.

551 docs/superpowers/plans/2026-04-11-main-into-ws-merge-v2-plan.md (new file)
@@ -0,0 +1,551 @@

# Main → WS Merge v2 Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Merge the latest `origin/main` into `feature/claw-ws` so that the `ws` branch ends up keeping **both pipe + ws** communication paths, preserves the Zhihu behavior, and replaces the old duplicate fault-details implementation (already deleted by the ws cleanup) with the official implementation from `main`.

**Architecture:** This merge is not "keep the cleanup state permanently without fault-details"; it is "first drop the old duplicate implementation on ws, then absorb the official implementation from `main`". The conflict arbitration priority is: **protect pipe first, then ws, then Zhihu, while refusing any backflow of the old duplicate scene/fault-details implementation from ws**. The whole process uses `git merge --no-commit --no-ff origin/main`; after conflicts are resolved, run focused verification only and stop in the uncommitted state.

**Tech Stack:** Git, Rust 2021, Cargo test, sgClaw pipe transport, ws transport, compat/runtime/orchestration stack, Zhihu direct workflow tests.

---

## Preconditions

- The current branch must be `feature/claw-ws`
- `2026-04-09-ws-branch-scene-cleanup-plan.md` is complete
- No merge is currently in progress
- There are no tracked uncommitted changes
- This run does **not** create a worktree; execute in the current checkout
- The end state of this run is: **merged, verified, not committed**

---

## Final Merge Principles

### 1) `main` is the pipe mainline
The merge must not break the pipe transport that currently works on `main`.

### 2) The `ws` branch must end up keeping both pipe + ws
The merge must not make the `ws` branch lose the websocket path, nor leave it with pipe only.

### 3) Both sides have Zhihu
The merge must not break existing Zhihu behavior, especially the ws→Zhihu keep-path.

### 4) fault-details follows the official `main` implementation
- The old duplicate implementation on `ws`: **must not flow back**
- The official implementation on `main`: **should be merged in**
- The final result is not "no fault-details" but "no old ws fault-details, only the official `main` version"

### 5) Do not let old scene plumbing flow back
None of the following old surfaces may survive in the final result:
- the ws branch's own old scene registry / old scene plumbing
- old duplicate routes/contracts already deleted by the ws cleanup
- leftover logic that only served the old `skill_staging` scene assembly
---

---

## File Map

### A. Shared / high-risk files to watch closely during the merge
- `Cargo.toml`
- `Cargo.lock`
- `src/agent/mod.rs`
- `src/agent/task_runner.rs`
- `src/config/settings.rs`
- `src/compat/config_adapter.rs`
- `src/compat/runtime.rs`
- `src/compat/orchestration.rs`
- `src/compat/workflow_executor.rs`
- `src/compat/browser_script_skill_tool.rs`
- `src/compat/direct_skill_runtime.rs`
- `src/compat/openxml_office_tool.rs`

### B. pipe / ws / Zhihu protection surface
- `src/compat/runtime.rs`
- `src/compat/orchestration.rs`
- `src/compat/workflow_executor.rs`
- `src/agent/task_runner.rs`
- `tests/agent_runtime_test.rs`
- `tests/browser_ws_backend_test.rs`
- `tests/service_ws_session_test.rs`
- `tests/task_runner_test.rs`

### C. Files that must still keep the old implementation from flowing back after cleanup
- `src/runtime/mod.rs`
- `src/runtime/engine.rs`
- `src/config/settings.rs`
- `src/compat/config_adapter.rs`
- `tests/compat_runtime_test.rs`
- `tests/runtime_profile_test.rs`
- `tests/compat_config_test.rs`

### D. Test surface that may need updating along with main's official fault-details
- `tests/compat_runtime_test.rs`
- `tests/compat_config_test.rs`
- `tests/browser_script_skill_tool_test.rs`
- `tests/compat_openxml_office_tool_test.rs`

---

## Conflict Resolution Rule Table

| Category | Final retention principle |
|---|---|
| pipe main path | **Prefer the working `main` version**; it must not be broken by ws changes |
| ws path | **ws capability must be preserved**; it must not be lost by absorbing `main` |
| Zhihu | Neither side's Zhihu capability may be merged broken; at minimum keep the existing keep-path |
| fault-details | **Keep the official `main` implementation**; do not keep the old duplicate ws implementation |
| old scene/95598 cleanup leftovers | Must not flow back in the form of the old duplicate ws implementation |
| `skillsDir` / config | Follow what the final product needs; if main's official implementation does not require the old array-style/scene expansion, do not bring it back |
| temporary merge patches | Never keep |
---

---

### Task 1: Confirm Merge Preconditions And Diff Surface

**Files:**
- No code changes expected
- Observe repo state and branch diff only

- [ ] **Step 1: Confirm current branch**

Run:
```bash
git rev-parse --abbrev-ref HEAD
```

Expected:
```text
feature/claw-ws
```

- [ ] **Step 2: Confirm no merge is in progress**

Run:
```bash
git rev-parse -q --verify MERGE_HEAD
```

Expected: exit code `1`.

- [ ] **Step 3: Confirm no tracked local changes**

Run:
```bash
git diff --name-only && printf '\n---STAGED---\n' && git diff --cached --name-only
```

Expected:
```text

---STAGED---
```

- [ ] **Step 4: List current untracked files**

Run:
```bash
git status --short
```

Expected: only known local untracked items, or a clearly understood list.

- [ ] **Step 5: Update `origin/main`**

Run:
```bash
git fetch origin main
```

- [ ] **Step 6: Show ws vs main diff surface before merge**

Run:
```bash
git diff --name-status HEAD...origin/main
```

Expected: clear file list to compare likely merge surface.

- [ ] **Step 7: Stop if preconditions fail**

Stop if:
- branch is wrong
- merge is in progress
- tracked changes exist
- untracked file collision with `origin/main` is found and unresolved

---

### Task 2: Start The Merge Without Committing

**Files:**
- Merge index / working tree only

- [ ] **Step 1: Start no-commit merge**

Run:
```bash
git merge --no-commit --no-ff origin/main
```

Expected:
- either auto-merge pauses before commit
- or Git reports conflicts

- [ ] **Step 2: Capture merge surface immediately**

Run:
```bash
git status --short
```

- [ ] **Step 3: Separate results into three buckets**
Create a working list of conflicted files under:
1. pipe-critical
2. ws/Zhihu-critical
3. shared infra / tests
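
The three buckets above can be pre-sorted mechanically before manual review. The sketch below is an assumption: its glob patterns are drawn from this plan's File Map, not from an authoritative classifier.

```bash
# Illustrative only: bucket one conflicted path into the three working
# lists above. The patterns are assumptions based on the File Map.
bucket_conflict() {
  case "$1" in
    src/agent/*|src/config/settings.rs|src/compat/runtime.rs|src/compat/config_adapter.rs)
      echo "pipe-critical" ;;
    src/compat/workflow_executor.rs|src/compat/orchestration.rs)
      echo "ws-zhihu-critical" ;;
    *)
      echo "shared-infra-or-tests" ;;
  esac
}
```

Feed it the live conflict list with `git diff --name-only --diff-filter=U | while read -r f; do printf '%s\t%s\n' "$(bucket_conflict "$f")" "$f"; done`.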

- [ ] **Step 4: If no conflicts, proceed directly to Task 4 verification**

- [ ] **Step 5: If conflicts exist, proceed to Task 3**

---

### Task 3: Resolve Conflicts By System Role, Not By Branch Bias

**Files:**
- Only files reported by Git as conflicted

#### Global conflict policy
For every conflicted hunk, answer these four questions in order:

1. Does this hunk affect **pipe** correctness?
2. Does this hunk affect **ws** correctness?
3. Does this hunk affect **Zhihu** correctness?
4. Is this hunk part of **ws old duplicate fault-details/scene logic** or **main official implementation**?

Then apply the rule:
- **pipe cannot break**
- **ws cannot break**
- **Zhihu cannot break**
- **ws old duplicate fault-details must stay deleted**
- **main official fault-details should come in**

---

#### Task 3A: Resolve pipe-critical shared runtime files

**Files:**
- `src/compat/runtime.rs`
- `src/agent/task_runner.rs`
- `src/agent/mod.rs`
- `src/config/settings.rs`
- `src/compat/config_adapter.rs`

- [ ] **Step 1: For each conflict, keep the side that preserves main’s pipe behavior**

- [ ] **Step 2: Reject ws-only duplicate business logic that main already owns**

- [ ] **Step 3: Keep ws support if the file also serves ws path**
This is additive preservation, not “main wins everything”.

- [ ] **Step 4: Verify each resolved file has no conflict markers**

Run per file:
```bash
git diff --check -- <path>
```
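
`git diff --check` reports leftover conflict markers along with whitespace errors. As a belt-and-braces manual scan, a hypothetical helper can grep for the standard `<<<<<<<` / `=======` / `>>>>>>>` syntax directly:

```bash
# Hypothetical double-check: exit 0 if the file still contains standard
# Git conflict markers, non-zero if it looks clean.
has_conflict_markers() {
  grep -qE '^(<{7}( |$)|={7}$|>{7}( |$))' "$1"
}
```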

---

#### Task 3B: Resolve ws / Zhihu-critical routing files

**Files:**
- `src/compat/workflow_executor.rs`
- `src/compat/orchestration.rs`

- [ ] **Step 1: Bring in main’s official fault-details path if it lives here**

- [ ] **Step 2: Do not reintroduce ws’s old duplicate fault-details path**

- [ ] **Step 3: Preserve ws submit/browser websocket path**

- [ ] **Step 4: Preserve Zhihu routing path**

- [ ] **Step 5: Verify each resolved file has no conflict markers**

Run per file:
```bash
git diff --check -- <path>
```

---

#### Task 3C: Resolve shared infra files minimally

**Files:**
- `Cargo.toml`
- `Cargo.lock`
- `src/compat/browser_script_skill_tool.rs`
- `src/compat/direct_skill_runtime.rs`
- `src/compat/openxml_office_tool.rs`

- [ ] **Step 1: Keep only the dependency/code shape needed by the merged result**

- [ ] **Step 2: Do not keep lines from prior failed merge attempts**

- [ ] **Step 3: Accept main fixes unless they break pipe/ws/Zhihu behavior**

- [ ] **Step 4: Verify each resolved file has no conflict markers**

Run per file:
```bash
git diff --check -- <path>
```

---

#### Task 3D: Resolve tests to reflect final intended product

**Files:**
- `tests/compat_runtime_test.rs`
- `tests/runtime_profile_test.rs`
- `tests/compat_config_test.rs`
- `tests/agent_runtime_test.rs`
- `tests/browser_script_skill_tool_test.rs`
- `tests/compat_openxml_office_tool_test.rs`

- [ ] **Step 1: Keep tests proving pipe path still works**

- [ ] **Step 2: Keep tests proving ws path still works**

- [ ] **Step 3: Keep Zhihu keep-path regression**

- [ ] **Step 4: Replace cleanup-only “fault-details absent” assertions if final intended state is now “fault-details present via main official implementation”**

- [ ] **Step 5: Do not keep assertions that only prove ws’s old duplicate implementation is absent if they now contradict the intended merged product**

- [ ] **Step 6: Verify each resolved test file has no conflict markers**

Run per file:
```bash
git diff --check -- <path>
```

---

#### Task 3E: Confirm merge is fully resolved

**Files:**
- No code changes expected

- [ ] **Step 1: Confirm no unmerged entries remain**

Run:
```bash
git diff --name-only --diff-filter=U
```

Expected: no output.
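
In a scripted run, the “no output” expectation can become a hard stop. This guard is hypothetical; it takes the command output as an argument so the check itself stays mechanical:

```bash
# Hypothetical guard: fail when the unmerged-path list is non-empty.
# Intended use: assert_no_unmerged "$(git diff --name-only --diff-filter=U)"
assert_no_unmerged() {
  if [ -n "$1" ]; then
    printf 'unresolved conflicts remain:\n%s\n' "$1" >&2
    return 1
  fi
}
```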

- [ ] **Step 2: Show final resolved file list**

Run:
```bash
git diff --cached --name-only
```

---

### Task 4: Verify Final Product Behavior, Not Cleanup Intermediate State

**Files:**
- Test: `tests/agent_runtime_test.rs`
- Test: `tests/browser_ws_backend_test.rs`
- Test: `tests/service_ws_session_test.rs`
- Test: `tests/task_runner_test.rs`
- Test: `tests/compat_runtime_test.rs`
- Test: `tests/runtime_profile_test.rs`
- Test: `tests/compat_config_test.rs`
- Conditional: `tests/browser_script_skill_tool_test.rs`
- Conditional: `tests/compat_openxml_office_tool_test.rs`

#### Verification goals
This task must prove all four:

1. **pipe path still works**
2. **ws path still works**
3. **Zhihu still works**
4. **final fault-details implementation is the main version, not ws’s old duplicate**

---

#### Task 4A: Verify pipe-related behavior

- [ ] **Step 1: Run task runner coverage**

Run:
```bash
cargo test --test task_runner_test -- --nocapture
```

- [ ] **Step 2: Run compat runtime suite relevant to main path**

Run:
```bash
cargo test --test compat_runtime_test -- --nocapture
```

- [ ] **Step 3: If pipe-specific tests fail, stop and fix merge resolution before continuing**

---

#### Task 4B: Verify ws-related behavior

- [ ] **Step 1: Run browser websocket backend suite**

Run:
```bash
cargo test --test browser_ws_backend_test -- --nocapture
```

- [ ] **Step 2: Run service websocket session suite**

Run:
```bash
cargo test --test service_ws_session_test -- --nocapture
```

- [ ] **Step 3: If ws-specific tests fail, stop and fix merge resolution before continuing**

---

#### Task 4C: Verify Zhihu behavior

- [ ] **Step 1: Re-run ws→Zhihu keep-path regression**

Run:
```bash
cargo test --test agent_runtime_test production_submit_task_routes_zhihu_through_ws_backend_without_helper_bootstrap -- --nocapture
```

Expected:
```text
1 passed; 0 failed
```

- [ ] **Step 2: If additional Zhihu tests were touched by conflicts, run the smallest affected test target**

Run as needed:
```bash
cargo test --test agent_runtime_test -- --nocapture
```

---

#### Task 4D: Verify config/runtime contracts

- [ ] **Step 1: Run runtime profile suite**

Run:
```bash
cargo test --test runtime_profile_test -- --nocapture
```

- [ ] **Step 2: Run compat config suite**

Run:
```bash
cargo test --test compat_config_test -- --nocapture
```

- [ ] **Step 3: Ensure contracts now reflect final merged product, not the cleanup-only intermediate**

---

#### Task 4E: Verify shared infra if touched

- [ ] **Step 1: If browser-script tool files were touched**

Run:
```bash
cargo test --test browser_script_skill_tool_test -- --nocapture
```

- [ ] **Step 2: If openxml files were touched**

Run:
```bash
cargo test --test compat_openxml_office_tool_test -- --nocapture
```

---

#### Task 4F: Compile guard

- [ ] **Step 1: Run compile-only full test build**

Run:
```bash
cargo test --no-run
```

Expected: exit code `0`.

---

### Task 5: Confirm The Merge Outcome Matches The Principle

**Files:**
- No code changes expected

- [ ] **Step 1: Show final status**

Run:
```bash
git status --short
```

Expected:
- no `UU` / `AA` / `DD`
- merged, validated, uncommitted state only

- [ ] **Step 2: Show final staged summary**

Run:
```bash
git diff --cached --stat
```

- [ ] **Step 3: Report the four required facts with command-backed evidence**
Only if verified:

1. pipe is not broken
2. ws is not broken
3. Zhihu is not broken
4. the final fault-details comes from the official `main` implementation, not from the old duplicate ws implementation

- [ ] **Step 4: Stop here**
Do **not** run:
```bash
git commit
git push
```

---

## Stop Conditions

If any of the following occurs, stop immediately; do not expand the scope on your own:

- `origin/main`’s official fault-details implementation depends on a contract the cleanup already deleted, which is beyond the scope of a simple merge
- pipe and ws depend on the same shared code, but their requirements are now structurally in conflict
- the Zhihu keep-path fails
- `cargo test --no-run` fails and the problem is outside this merge surface
- the pipe/ws coexistence design needs to be rethought rather than simply merged

---

## One-line Execution Rule

**The final standard for this merge is not “keep ws without fault-details”; it is “keep pipe intact, keep ws intact, keep Zhihu intact, and let main’s official fault-details replace the old duplicate ws implementation.”**