Skill Lib Testing Implementation Plan
For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
Goal: Add an in-project, repeatable test harness that validates /home/zyl/projects/sgClaw/skill_lib against the current ZeroClaw SKILL.md loader and security-audit expectations.
Architecture: Keep the test runner inside the SGClaw repository and target the sibling skill_lib directory by relative path. Implement a small Python validator that mirrors the ZeroClaw markdown frontmatter parser and the relevant skill-audit checks, then cover it with a Python unittest suite that exercises the actual three migrated Zhihu skills.
Tech Stack: Python 3 standard library, unittest, local file-system inspection, ZeroClaw source code as behavioral reference, Markdown/YAML-like frontmatter parsing.
Task 1: Freeze The Test Contract
Files:
- Create: `/home/zyl/projects/sgClaw/claw/docs/plans/2026-03-27-skill-lib-testing-plan.md`
- Reference only: `/home/zyl/projects/sgClaw/claw/third_party/zeroclaw/src/skills/mod.rs`
- Reference only: `/home/zyl/projects/sgClaw/claw/third_party/zeroclaw/src/skills/audit.rs`
- Reference only: `/home/zyl/projects/sgClaw/skill_lib/skills/*/SKILL.md`
Step 1: Capture the loader semantics to preserve
Document and implement tests for:
- `SKILL.md` frontmatter splitting on `---`
- supported metadata keys: `name`, `description`, `version`, `author`, `tags`
- fallback rules for `name`, `description`, and `version`
- prompt body must exclude the frontmatter block
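A minimal sketch of the splitting contract above, assuming `---` delimiter lines and simple `key: value` metadata; the function name and edge-case behavior here are illustrative and must be confirmed against the parser in `src/skills/mod.rs`:

```python
# Sketch of the frontmatter contract the tests must pin down.
# Assumption: the loader splits SKILL.md on `---` delimiter lines and
# reads `key: value` pairs as metadata; names here are illustrative.
def split_frontmatter(text: str):
    """Return (metadata_dict, body) for a SKILL.md document."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no frontmatter: the whole file is the prompt body
    meta = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            body = "\n".join(lines[i + 1:])
            return meta, body
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, ""  # unterminated frontmatter: no body

doc = "---\nname: zhihu-hotlist\nversion: 1.0\n---\nUse when fetching the hotlist."
meta, body = split_frontmatter(doc)
# meta == {"name": "zhihu-hotlist", "version": "1.0"}
# body == "Use when fetching the hotlist."
```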
Step 2: Capture the audit semantics to preserve
Document and implement tests for:
- skill root must contain `SKILL.md` or `SKILL.toml`
- symlinks are rejected
- shell-script files are blocked when `allow_scripts` is false
- markdown links must not escape the skill root
- high-risk command snippets inside markdown are rejected
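The link-boundary rule is the subtlest of these; one way to sketch it (requires Python 3.9+ for `Path.is_relative_to`; the helper name, regex, and skipped-scheme list are our assumptions, not ZeroClaw's exact behavior from `audit.rs`):

```python
# Minimal sketch of the link-boundary check: a markdown link target,
# resolved against the skill root, must stay inside that root.
import re
from pathlib import Path

LINK_RE = re.compile(r"\[[^\]]*\]\(([^)]+)\)")

def link_escapes_root(skill_root: Path, markdown: str) -> bool:
    root = skill_root.resolve()
    for target in LINK_RE.findall(markdown):
        if target.startswith(("http://", "https://", "#", "mailto:")):
            continue  # external links and anchors are out of scope here
        if not (root / target).resolve().is_relative_to(root):
            return True  # link climbs out of the skill package
    return False

# Hypothetical root; no files need to exist for the path arithmetic.
root = Path("/tmp/skill-under-test")
assert link_escapes_root(root, "[x](../../etc/passwd)")
assert not link_escapes_root(root, "[notes](references/notes.md)")
```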
Step 3: Define the migrated-skill expectations
The test suite must verify:
- exactly three skill packages exist
- the loaded names are `zhihu-hotlist`, `zhihu-navigate`, `zhihu-write`
- each package has both `references/` and `assets/`
- each description stays trigger-oriented and starts with "Use when"
Task 2: Write The Failing Tests First
Files:
- Create:
/home/zyl/projects/sgClaw/claw/tests/skill_lib_validation_test.py
Step 1: Write a failing import-level test
Import a not-yet-created validator module from:
/home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py
Expected initial failure:
`ModuleNotFoundError` or `FileNotFoundError`
Step 2: Encode the project expectations
Add tests for:
- skill discovery count and names
- parsed metadata for each current skill
- audit cleanliness for each skill with `allow_scripts=False`
- package shape (`SKILL.md`, `references/`, `assets/`)
Step 3: Run the tests and watch them fail
Run:
python3 -m unittest tests.skill_lib_validation_test -v
Expected:
- failure because the validator module does not exist yet
Task 3: Implement The Minimal Validator
Files:
- Create:
/home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py
Step 1: Implement discovery helpers
Implement:
- repo root resolution
- sibling `skill_lib` root resolution
- `skills/` directory enumeration
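A sketch of those helpers, assuming the layout this plan describes (the script at `claw/scripts/validate_skill_lib.py`, with `skill_lib/` a sibling of `claw/` under `sgClaw/`); helper names are ours:

```python
# Discovery helpers: resolve sgClaw/ from claw/scripts/, then the
# sibling skill_lib/skills/ directory. Layout assumed from the plan.
from pathlib import Path

def repo_root() -> Path:
    # scripts/validate_skill_lib.py -> claw/ -> sgClaw/
    return Path(__file__).resolve().parent.parent.parent

def skill_lib_root() -> Path:
    return repo_root() / "skill_lib"

def skill_dirs() -> list:
    """Sorted skill package directories, for deterministic ordering."""
    skills = skill_lib_root() / "skills"
    return sorted(p for p in skills.iterdir() if p.is_dir())
```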
Step 2: Implement the markdown loader
Implement:
- frontmatter split
- lightweight frontmatter parsing
- description fallback extraction
- metadata normalization into a `SkillRecord`
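One possible `SkillRecord` shape, with the fallback rules from Task 1 sketched in (the default version string and the comma-separated `tags` convention are assumptions to verify against the ZeroClaw loader):

```python
# Normalization target for parsed frontmatter. Field set mirrors the
# metadata keys in Task 1; fallback values are assumptions.
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class SkillRecord:
    root: Path
    name: str
    description: str
    version: str = "0.0.0"
    author: str = ""
    tags: list = field(default_factory=list)

def normalize(root: Path, meta: dict) -> SkillRecord:
    return SkillRecord(
        root=root,
        name=meta.get("name", root.name),  # fallback: directory name
        description=meta.get("description", ""),
        version=meta.get("version", "0.0.0"),
        author=meta.get("author", ""),
        # assumption: tags arrive as a comma-separated frontmatter string
        tags=[t.strip() for t in meta.get("tags", "").split(",") if t.strip()],
    )

rec = normalize(Path("/tmp/zhihu-hotlist"), {"description": "Use when browsing."})
# rec.name falls back to "zhihu-hotlist"; rec.version to "0.0.0"
```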
Step 3: Implement the relevant audit checks
Implement:
- symlink detection
- blocked shell-script detection
- markdown link boundary checks
- high-risk snippet detection
- deterministic findings collection
Step 4: Implement a small CLI
Running:
python3 scripts/validate_skill_lib.py
Should:
- print one summary line per skill
- exit `0` when all skills pass
- exit non-zero when any skill fails
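A possible CLI skeleton for that behavior; the `(name, findings)` pairing and the `collect_results` entry point are hypothetical glue over the discovery and audit helpers from the earlier steps:

```python
# CLI sketch: one summary line per skill, exit 0 only when clean.
import sys

def main(results) -> int:
    """results: iterable of (skill_name, findings) pairs."""
    exit_code = 0
    for name, findings in results:
        status = "PASS" if not findings else "FAIL"
        if findings:
            exit_code = 1
        print(f"{status} {name} ({len(findings)} finding(s))")
    return exit_code

# In the real script: sys.exit(main(collect_results()))
# where collect_results() is the assumed discovery+audit pipeline.
```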
Task 4: Run The Tests Green
Files:
- Test: `/home/zyl/projects/sgClaw/claw/tests/skill_lib_validation_test.py`
- Test: `/home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py`
Step 1: Re-run the unit tests
Run:
python3 -m unittest tests.skill_lib_validation_test -v
Expected:
- all tests pass
Step 2: Run the CLI validator
Run:
python3 scripts/validate_skill_lib.py
Expected:
- all three skills print `PASS`
- process exits `0`
Task 5: Document The Verification Entry Point
Files:
- Modify:
/home/zyl/projects/sgClaw/skill_lib/VERIFY.md
Step 1: Add the project-local validation command
Add:
python3 /home/zyl/projects/sgClaw/claw/scripts/validate_skill_lib.py
python3 -m unittest /home/zyl/projects/sgClaw/claw/tests/skill_lib_validation_test.py
Step 2: Re-run both commands after the doc update
Expected:
- validator still exits `0`
- unit tests still pass