sgclaw: snapshot today's runtime and skill updates

This commit is contained in:
zyl
2026-03-30 15:05:39 +08:00
parent c793bfc6a1
commit f51d6b7659
50 changed files with 3473 additions and 621 deletions


@@ -0,0 +1,158 @@
# Skill Lib Testing Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Add an in-project, repeatable test harness that validates `/home/zyl/projects/sgClaw/skill_lib` against the current ZeroClaw `SKILL.md` loader and security-audit expectations.
**Architecture:** Keep the test runner inside the SGClaw repository and target the sibling `skill_lib` directory by relative path. Implement a small Python validator that mirrors the ZeroClaw markdown frontmatter parser and the relevant skill-audit checks, then cover it with a Python `unittest` suite that exercises the actual three migrated Zhihu skills.
**Tech Stack:** Python 3 standard library, `unittest`, local file-system inspection, ZeroClaw source code as behavioral reference, Markdown/YAML-like frontmatter parsing.
### Task 1: Freeze The Test Contract
**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/docs/plans/2026-03-27-skill-lib-testing-plan.md`
- Reference only: `/home/zyl/projects/sgClaw/claw/third_party/zeroclaw/src/skills/mod.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/third_party/zeroclaw/src/skills/audit.rs`
- Reference only: `/home/zyl/projects/sgClaw/skill_lib/skills/*/SKILL.md`
**Step 1: Capture the loader semantics to preserve**
Document and implement tests for:
- `SKILL.md` frontmatter splitting on `---`
- supported metadata keys: `name`, `description`, `version`, `author`, `tags`
- fallback rules for name, description, and version
- prompt body must exclude the frontmatter block
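A minimal sketch of the split semantics to preserve, assuming the `---` fence convention above (the function name and lightweight key parsing are illustrative, not the ZeroClaw code itself):

```python
# Hypothetical mirror of the SKILL.md frontmatter split; names are
# illustrative assumptions, not the ZeroClaw implementation.
def split_frontmatter(text: str) -> tuple[dict, str]:
    """Split a SKILL.md into (metadata, prompt_body) on `---` fences."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no frontmatter: the whole file is the prompt body
    meta: dict = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            # the prompt body must exclude the frontmatter block entirely
            return meta, "\n".join(lines[i + 1:])
        if ":" in line:
            key, _, value = line.partition(":")
            if key.strip() in {"name", "description", "version", "author", "tags"}:
                meta[key.strip()] = value.strip()
    return {}, text  # unterminated frontmatter: treat as plain body

meta, body = split_frontmatter(
    "---\nname: zhihu-hotlist\nversion: 1.0\n---\n# Prompt\nbody text"
)
```

The unterminated-fence fallback is an assumption to be confirmed against `mod.rs` before freezing the contract.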
**Step 2: Capture the audit semantics to preserve**
Document and implement tests for:
- skill root must contain `SKILL.md` or `SKILL.toml`
- symlinks are rejected
- shell-script files are blocked when `allow_scripts` is false
- markdown links must not escape the skill root
- high-risk command snippets inside markdown are rejected
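For the link-boundary rule, a sketch of the check the Python validator would mirror (regex and skip rules are illustrative assumptions; the authoritative behavior is `audit.rs`):

```python
from pathlib import Path
import re

# Hypothetical mirror of the markdown link-boundary audit check.
_LINK_RE = re.compile(r"\[[^\]]*\]\(([^)]+)\)")

def links_escape_root(markdown: str, skill_root: Path) -> list[str]:
    """Return relative link targets that resolve outside the skill root."""
    findings = []
    root = skill_root.resolve()
    for target in _LINK_RE.findall(markdown):
        if target.startswith(("http://", "https://", "#")):
            continue  # external links and in-page anchors are out of scope here
        resolved = (root / target).resolve()
        if not str(resolved).startswith(str(root)):
            findings.append(target)
    return findings
```

Example: `links_escape_root("[x](../../etc/passwd)", Path("/srv/skills/demo"))` flags the traversal, while `references/x.md` stays clean.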
**Step 3: Define the migrated-skill expectations**
The test suite must verify:
- exactly three skill packages exist
- the loaded names are `zhihu-hotlist`, `zhihu-navigate`, `zhihu-write`
- each package has both `references/` and `assets/`
- each description stays trigger-oriented and starts with `Use when`
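These expectations can be encoded as a pure check over parsed records, which keeps the tests filesystem-independent. The `SkillRecord` shape and function name below are assumptions about the eventual validator API:

```python
from dataclasses import dataclass

# Hypothetical record shape and contract check; illustrative only.
@dataclass
class SkillRecord:
    name: str
    description: str

EXPECTED_NAMES = {"zhihu-hotlist", "zhihu-navigate", "zhihu-write"}

def check_migrated_skills(records: list[SkillRecord]) -> list[str]:
    """Return human-readable violations of the migrated-skill contract."""
    problems = []
    names = {r.name for r in records}
    if len(records) != 3 or names != EXPECTED_NAMES:
        problems.append(f"expected exactly {sorted(EXPECTED_NAMES)}, got {sorted(names)}")
    for r in records:
        if not r.description.startswith("Use when"):
            problems.append(f"{r.name}: description must start with 'Use when'")
    return problems
```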
### Task 2: Write The Failing Tests First
**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/tests/skill_lib_validation_test.py`
**Step 1: Write a failing import-level test**
Import a not-yet-created validator module from:
- `/home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py`
Expected initial failure:
- `ModuleNotFoundError` or `FileNotFoundError`
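One way the failing test can load the validator from an explicit path (so the first run fails with `FileNotFoundError` rather than a `sys.path` accident) is a small helper like this; the helper name is an illustrative assumption:

```python
import importlib.util
from pathlib import Path

# Hypothetical import helper for the failing import-level test.
def load_validator(script: Path):
    """Import the validator module from an explicit file path, or raise."""
    if not script.exists():
        raise FileNotFoundError(script)  # the expected first failure
    spec = importlib.util.spec_from_file_location("validate_skill_lib", script)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # raises if the module is broken
    return module
```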
**Step 2: Encode the project expectations**
Add tests for:
- skill discovery count and names
- parsed metadata for each current skill
- audit cleanliness for each skill with `allow_scripts=False`
- package shape (`SKILL.md`, `references/`, `assets/`)
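The package-shape test reduces to a small directory check; a sketch, assuming the validator exposes something like the following (the function name is hypothetical):

```python
from pathlib import Path

# Hypothetical package-shape check; directory names come from the plan.
def package_shape_problems(skill_dir: Path) -> list[str]:
    """List missing pieces of the required SKILL.md + references/ + assets/ shape."""
    problems = []
    if not (skill_dir / "SKILL.md").is_file():
        problems.append("missing SKILL.md")
    for sub in ("references", "assets"):
        if not (skill_dir / sub).is_dir():
            problems.append(f"missing {sub}/ directory")
    return problems
```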
**Step 3: Run the tests and watch them fail**
Run:
```bash
cd /home/zyl/projects/sgClaw/claw
python3 -m unittest tests.skill_lib_validation_test -v
```
Expected:
- failure because the validator module does not exist yet
### Task 3: Implement The Minimal Validator
**Files:**
- Create: `/home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py`
**Step 1: Implement discovery helpers**
Implement:
- repo root resolution
- sibling `skill_lib` root resolution
- `skills/` directory enumeration
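The three helpers follow directly from the repo layout (`claw/` and `skill_lib/` as siblings under `sgClaw/`); the function names below are assumptions:

```python
from pathlib import Path

# Hypothetical discovery helpers; the relative layout comes from the plan.
def repo_root(start: Path) -> Path:
    """Resolve the claw repo root from a file inside scripts/ or tests/."""
    return start.resolve().parents[1]

def skill_lib_root(claw_root: Path) -> Path:
    """The sibling skill_lib checkout, reached by relative path."""
    return claw_root.parent / "skill_lib"

def discover_skills(lib_root: Path) -> list[Path]:
    """Enumerate skill package directories under skills/."""
    skills_dir = lib_root / "skills"
    return sorted(p for p in skills_dir.iterdir() if p.is_dir())
```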
**Step 2: Implement the markdown loader**
Implement:
- frontmatter split
- lightweight frontmatter parsing
- description fallback extraction
- metadata normalization into a `SkillRecord`
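A sketch of the normalization step; the concrete fallback values here (directory name, first non-heading body line, `"0.0.0"`) are assumptions to be checked against the ZeroClaw loader before implementation:

```python
from dataclasses import dataclass, field
from pathlib import Path

# Hypothetical SkillRecord; field names mirror the supported metadata keys.
@dataclass
class SkillRecord:
    name: str
    description: str
    version: str
    author: str = ""
    tags: list[str] = field(default_factory=list)

def normalize(meta: dict, body: str, skill_dir: Path) -> SkillRecord:
    # fallbacks (illustrative): directory name, first non-heading line, "0.0.0"
    first_line = next((l for l in body.splitlines() if l and not l.startswith("#")), "")
    return SkillRecord(
        name=meta.get("name") or skill_dir.name,
        description=meta.get("description") or first_line,
        version=meta.get("version") or "0.0.0",
        author=meta.get("author", ""),
        tags=[t.strip() for t in meta.get("tags", "").split(",") if t.strip()],
    )
```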
**Step 3: Implement the relevant audit checks**
Implement:
- symlink detection
- blocked shell-script detection
- markdown link boundary checks
- high-risk snippet detection
- deterministic findings collection
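A sketch of the audit pass; the blocked extensions and high-risk snippet list are placeholder assumptions, and the real values must be copied from `audit.rs`:

```python
from pathlib import Path

# Hypothetical audit pass; snippet and suffix lists are illustrative only.
RISKY_SNIPPETS = ("rm -rf", "curl | sh", "sudo ")
BLOCKED_SUFFIXES = {".sh", ".bash", ".zsh"}

def audit_skill(skill_dir: Path, allow_scripts: bool = False) -> list[str]:
    """Collect findings deterministically (sorted paths, stable order)."""
    findings = []
    for path in sorted(skill_dir.rglob("*")):
        rel = path.relative_to(skill_dir)
        if path.is_symlink():
            findings.append(f"symlink rejected: {rel}")
        elif not allow_scripts and path.suffix in BLOCKED_SUFFIXES:
            findings.append(f"shell script blocked: {rel}")
        elif path.suffix == ".md":
            text = path.read_text(encoding="utf-8", errors="replace")
            for snippet in RISKY_SNIPPETS:
                if snippet in text:
                    findings.append(f"high-risk snippet {snippet!r} in {rel}")
    return findings
```

Sorting the walk up front is what makes the findings list deterministic across runs.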
**Step 4: Implement a small CLI**
Running:
```bash
cd /home/zyl/projects/sgClaw/claw
python3 scripts/validate_skill_lib.py
```
Should:
- print one summary line per skill
- exit `0` when all skills pass
- exit non-zero when any skill fails
### Task 4: Run The Tests Green
**Files:**
- Test: `/home/zyl/projects/sgClaw/claw/tests/skill_lib_validation_test.py`
- Run: `/home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py`
**Step 1: Re-run the unit tests**
Run:
```bash
cd /home/zyl/projects/sgClaw/claw
python3 -m unittest tests.skill_lib_validation_test -v
```
Expected:
- all tests pass
**Step 2: Run the CLI validator**
Run:
```bash
cd /home/zyl/projects/sgClaw/claw
python3 scripts/validate_skill_lib.py
```
Expected:
- all three skills print `PASS`
- process exits `0`
### Task 5: Document The Verification Entry Point
**Files:**
- Modify: `/home/zyl/projects/sgClaw/skill_lib/VERIFY.md`
**Step 1: Add the project-local validation command**
Add:
- `python3 /home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py`
- `cd /home/zyl/projects/sgClaw/claw && python3 -m unittest tests.skill_lib_validation_test` (unittest takes module paths relative to the working directory, so run it from the repo root)
**Step 2: Re-run both commands after the doc update**
Expected:
- validator still exits `0`
- unit tests still pass