feat: refactor sgclaw around zeroclaw compat runtime
20 third_party/zeroclaw/docs/contributing/README.md vendored Normal file
@@ -0,0 +1,20 @@
# Contributing, Review, and CI Docs

For contributors, reviewers, and maintainers.

## Core Policies

- Contribution guide: [../../CONTRIBUTING.md](../../CONTRIBUTING.md)
- PR workflow rules: [./pr-workflow.md](./pr-workflow.md)
- Reviewer playbook: [./reviewer-playbook.md](./reviewer-playbook.md)
- CI map and ownership: [./ci-map.md](./ci-map.md)
- Actions source policy: [./actions-source-policy.md](./actions-source-policy.md)
- Extension examples: [./extension-examples.md](./extension-examples.md)
- Testing guide: [./testing.md](./testing.md)

## Suggested Reading Order

1. `CONTRIBUTING.md`
2. `pr-workflow.md`
3. `reviewer-playbook.md`
4. `ci-map.md`
82 third_party/zeroclaw/docs/contributing/actions-source-policy.md vendored Normal file
@@ -0,0 +1,82 @@
# Actions Source Policy

This document defines the current GitHub Actions source-control policy for this repository.

## Current Policy

- Repository Actions permissions: enabled
- Allowed actions mode: selected

Selected allowlist (all actions currently used across the Quality Gate, Release Beta, and Release Stable workflows):

| Action | Used In | Purpose |
|--------|---------|---------|
| `actions/checkout@v4` | All workflows | Repository checkout |
| `actions/upload-artifact@v4` | release, promote-release | Upload build artifacts |
| `actions/download-artifact@v4` | release, promote-release | Download build artifacts for packaging |
| `dtolnay/rust-toolchain@stable` | All workflows | Install Rust toolchain (1.92.0) |
| `Swatinem/rust-cache@v2` | All workflows | Cargo build/dependency caching |
| `softprops/action-gh-release@v2` | release, promote-release | Create GitHub Releases |
| `docker/setup-buildx-action@v3` | release, promote-release | Docker Buildx setup |
| `docker/login-action@v3` | release, promote-release | GHCR authentication |
| `docker/build-push-action@v6` | release, promote-release | Multi-platform Docker image build and push |
| `actions/labeler@v5` | pr-path-labeler | Apply path/scope labels from `labeler.yml` |

Equivalent allowlist patterns:

- `actions/*`
- `dtolnay/rust-toolchain@*`
- `Swatinem/rust-cache@*`
- `softprops/action-gh-release@*`
- `docker/*`

## Workflows

| Workflow | File | Trigger |
|----------|------|---------|
| Quality Gate | `.github/workflows/checks-on-pr.yml` | Pull requests to `master` |
| Release Beta | `.github/workflows/release-beta-on-push.yml` | Push to `master` |
| Release Stable | `.github/workflows/release-stable-manual.yml` | Manual `workflow_dispatch` |
| PR Path Labeler | `.github/workflows/pr-path-labeler.yml` | `pull_request_target` (opened, synchronize, reopened) |

## Change Control

Record each policy change with:

- change date/time (UTC)
- actor
- reason
- allowlist delta (added/removed patterns)
- rollback note

Use these commands to export the current effective policy:

```bash
gh api repos/zeroclaw-labs/zeroclaw/actions/permissions
gh api repos/zeroclaw-labs/zeroclaw/actions/permissions/selected-actions
```
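The same `selected-actions` endpoint also accepts a `PUT` to apply an updated allowlist. A sketch of the payload, using the GitHub REST API field names for this endpoint (the pattern list mirrors the allowlist above; `github_owned_allowed: true` covers the `actions/*` pattern):

```json
{
  "github_owned_allowed": true,
  "verified_allowed": false,
  "patterns_allowed": [
    "dtolnay/rust-toolchain@*",
    "Swatinem/rust-cache@*",
    "softprops/action-gh-release@*",
    "docker/*"
  ]
}
```

Applied with, for example, `gh api -X PUT repos/zeroclaw-labs/zeroclaw/actions/permissions/selected-actions --input payload.json`.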
## Guardrails

- Any PR that adds or changes `uses:` action sources must include an allowlist impact note.
- New third-party actions require explicit maintainer review before allowlisting.
- Expand the allowlist only for verified missing actions; avoid broad wildcard exceptions.

## Change Log

- 2026-03-23: Added PR Path Labeler (`pr-path-labeler.yml`) using `actions/labeler@v5`. No allowlist change needed — covered by the existing `actions/*` pattern.
- 2026-03-10: Renamed workflows — CI → Quality Gate (`checks-on-pr.yml`), Beta Release → Release Beta (`release-beta-on-push.yml`), Promote Release → Release Stable (`release-stable-manual.yml`). Added `lint` and `security` jobs to Quality Gate. Added Cross-Platform Build (`cross-platform-build-manual.yml`).
- 2026-03-05: Complete workflow overhaul — replaced 22 workflows with 3 (CI, Beta Release, Promote Release).
  - Removed patterns no longer in use: `DavidAnson/markdownlint-cli2-action@*`, `lycheeverse/lychee-action@*`, `EmbarkStudios/cargo-deny-action@*`, `rustsec/audit-check@*`, `rhysd/actionlint@*`, `sigstore/cosign-installer@*`, `Checkmarx/vorpal-reviewdog-github-action@*`, `useblacksmith/*`
  - Added: `Swatinem/rust-cache@*` (replaces the `useblacksmith/*` rust-cache fork)
  - Retained: `actions/*`, `dtolnay/rust-toolchain@*`, `softprops/action-gh-release@*`, `docker/*`
- 2026-03-05: CI build optimization — added mold linker, cargo-nextest, `CARGO_INCREMENTAL=0`.
  - sccache removed due to its fragile GHA cache backend causing build failures.

## Rollback

Emergency unblock path:

1. Temporarily set the Actions policy back to `all`.
2. Restore the selected allowlist after identifying missing entries.
3. Record the incident and final allowlist delta.
116 third_party/zeroclaw/docs/contributing/adding-boards-and-tools.md vendored Normal file
@@ -0,0 +1,116 @@
# Adding Boards and Tools — ZeroClaw Hardware Guide

This guide explains how to add new hardware boards and custom tools to ZeroClaw.

## Quick Start: Add a Board via CLI

```bash
# Add a board (updates ~/.zeroclaw/config.toml)
zeroclaw peripheral add nucleo-f401re /dev/ttyACM0
zeroclaw peripheral add arduino-uno /dev/cu.usbmodem12345
zeroclaw peripheral add rpi-gpio native   # for Raspberry Pi GPIO (Linux)

# Restart the daemon to apply
zeroclaw daemon --host 127.0.0.1 --port 42617
```

## Supported Boards

| Board | Transport | Path Example |
|-----------------|-----------|---------------------------------|
| nucleo-f401re | serial | /dev/ttyACM0, /dev/cu.usbmodem* |
| arduino-uno | serial | /dev/ttyACM0, /dev/cu.usbmodem* |
| arduino-uno-q | bridge | (Uno Q IP) |
| rpi-gpio | native | native |
| esp32 | serial | /dev/ttyUSB0 |

## Manual Config

Edit `~/.zeroclaw/config.toml`:

```toml
[peripherals]
enabled = true
datasheet_dir = "docs/datasheets" # optional: RAG for "turn on red led" → pin 13

[[peripherals.boards]]
board = "nucleo-f401re"
transport = "serial"
path = "/dev/ttyACM0"
baud = 115200

[[peripherals.boards]]
board = "arduino-uno"
transport = "serial"
path = "/dev/cu.usbmodem12345"
baud = 115200
```

## Adding a Datasheet (RAG)

Place `.md` or `.txt` files in `docs/datasheets/` (or your `datasheet_dir`). Name files by board: `nucleo-f401re.md`, `arduino-uno.md`.

### Pin Aliases (Recommended)

Add a `## Pin Aliases` section so the agent can map "red led" → pin 13:

```markdown
# My Board

## Pin Aliases

| alias | pin |
|-------------|-----|
| red_led | 13 |
| builtin_led | 13 |
| user_led | 5 |
```

Or use the key-value format:

```markdown
## Pin Aliases
red_led: 13
builtin_led: 13
```
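The key-value format above is simple enough to parse line by line. A minimal sketch (the helper name and signature are illustrative, not ZeroClaw's actual parser):

```rust
use std::collections::HashMap;

/// Parse the key-value "Pin Aliases" format into an alias → pin map.
/// Lines without a `alias: pin` shape (including the heading) are skipped.
fn parse_pin_aliases(section: &str) -> HashMap<String, u8> {
    section
        .lines()
        .filter_map(|line| {
            let (alias, pin) = line.split_once(':')?;
            Some((alias.trim().to_string(), pin.trim().parse().ok()?))
        })
        .collect()
}
```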
### PDF Datasheets

With the `rag-pdf` feature, ZeroClaw can index PDF files:

```bash
cargo build --features hardware,rag-pdf
```

Place PDFs in the datasheet directory. They are extracted and chunked for RAG.

## Adding a New Board Type

1. **Create a datasheet** — `docs/datasheets/my-board.md` with pin aliases and GPIO info.
2. **Add to config** — `zeroclaw peripheral add my-board /dev/ttyUSB0`
3. **Implement a peripheral** (optional) — For custom protocols, implement the `Peripheral` trait in `src/peripherals/` and register it in `create_peripheral_tools`.

See [`docs/hardware/hardware-peripherals-design.md`](../hardware/hardware-peripherals-design.md) for the full design.

## Adding a Custom Tool

1. Implement the `Tool` trait in `src/tools/`.
2. Register it in `create_peripheral_tools` (for hardware tools) or the agent tool registry.
3. Add a tool description to the agent's `tool_descs` in `src/agent/loop_.rs`.

## CLI Reference

| Command | Description |
|---------|-------------|
| `zeroclaw peripheral list` | List configured boards |
| `zeroclaw peripheral add <board> <path>` | Add a board (writes config) |
| `zeroclaw peripheral flash` | Flash Arduino firmware |
| `zeroclaw peripheral flash-nucleo` | Flash Nucleo firmware |
| `zeroclaw hardware discover` | List USB devices |
| `zeroclaw hardware info` | Chip info via probe-rs |

## Troubleshooting

- **Serial port not found** — On macOS use `/dev/cu.usbmodem*`; on Linux use `/dev/ttyACM0` or `/dev/ttyUSB0`.
- **Build with hardware** — `cargo build --features hardware`
- **Probe-rs for Nucleo** — `cargo build --features hardware,probe`
57 third_party/zeroclaw/docs/contributing/cargo-slicer-speedup.md vendored Normal file
@@ -0,0 +1,57 @@
# Faster Builds with cargo-slicer

[cargo-slicer](https://github.com/nickel-org/cargo-slicer) is a `RUSTC_WRAPPER` that stubs unreachable library functions at the MIR level, skipping LLVM codegen for code the final binary never calls.

## Benchmark Results

| Environment | Mode | Baseline | With cargo-slicer | Wall-time savings |
|---|---|---|---|---|
| 48-core server | syn pre-analysis | 3m 52s | 3m 31s | **-9.1%** |
| 48-core server | MIR-precise | 3m 52s | 2m 49s | **-27.2%** |
| Raspberry Pi 4 | syn pre-analysis | 25m 03s | 17m 54s | **-28.6%** |

All measurements are from clean `cargo +nightly build --release` runs. MIR-precise mode reads actual compiler MIR to build a more accurate call graph, stubbing 1,060 mono items vs 799 with syn-based analysis.
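The savings column follows directly from the wall-clock times. A quick arithmetic sketch (the helper is illustrative, not part of cargo-slicer):

```rust
/// Percentage wall-time savings given baseline and sliced build times in seconds.
fn savings_pct(baseline_s: u32, sliced_s: u32) -> f64 {
    100.0 * (baseline_s as f64 - sliced_s as f64) / baseline_s as f64
}

fn main() {
    // 48-core server, syn pre-analysis: 3m 52s = 232 s → 3m 31s = 211 s
    println!("{:.1}%", savings_pct(232, 211)); // 9.1%
    // 48-core server, MIR-precise: 232 s → 2m 49s = 169 s
    println!("{:.1}%", savings_pct(232, 169)); // 27.2%
}
```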
## CI Integration

The workflow `.github/workflows/ci-build-fast.yml` (not yet implemented) is intended to run an accelerated release build alongside the standard one. It triggers on Rust-code and workflow changes, does not gate merges, and runs in parallel as a non-blocking check.

CI uses a resilient two-path strategy:

- **Fast path**: install `cargo-slicer` plus the `rustc-driver` binaries and run the MIR-precise sliced build.
- **Fallback path**: if the `rustc-driver` install fails (for example due to nightly `rustc` API drift), run a plain `cargo +nightly build --release` instead of failing the check.

This keeps the check useful and green while preserving acceleration whenever the toolchain is compatible.

## Local Usage

```bash
# One-time install
cargo install cargo-slicer
rustup component add rust-src rustc-dev llvm-tools-preview --toolchain nightly
cargo +nightly install cargo-slicer --profile release-rustc \
  --bin cargo-slicer-rustc --bin cargo_slicer_dispatch \
  --features rustc-driver

# Build with syn pre-analysis (from the zeroclaw root)
cargo-slicer pre-analyze
CARGO_SLICER_VIRTUAL=1 CARGO_SLICER_CODEGEN_FILTER=1 \
RUSTC_WRAPPER=$(which cargo_slicer_dispatch) \
cargo +nightly build --release

# Build with MIR-precise analysis (more stubs, bigger savings)
# Step 1: generate .mir-cache (first build with MIR_PRECISE)
CARGO_SLICER_MIR_PRECISE=1 CARGO_SLICER_WORKSPACE_CRATES=zeroclaw,zeroclaw_robot_kit \
CARGO_SLICER_VIRTUAL=1 CARGO_SLICER_CODEGEN_FILTER=1 \
RUSTC_WRAPPER=$(which cargo_slicer_dispatch) \
cargo +nightly build --release
# Step 2: subsequent builds automatically use .mir-cache
```

## How It Works

1. **Pre-analysis** scans workspace sources via `syn` to build a cross-crate call graph (~2 s).
2. **Cross-crate BFS** from `main()` identifies which public library functions are actually reachable.
3. **MIR stubbing** replaces unreachable bodies with `Unreachable` terminators — the mono collector finds no callees and prunes entire codegen subtrees.
4. **MIR-precise mode** (optional) reads actual compiler MIR from the binary crate's perspective, building a ground-truth call graph that identifies even more unreachable functions.

No source files are modified. The output binary is functionally identical.
64 third_party/zeroclaw/docs/contributing/change-playbooks.md vendored Normal file
@@ -0,0 +1,64 @@
# Change Playbooks

Step-by-step guides for common extension and modification patterns in ZeroClaw.

For complete code examples of each extension trait, see [extension-examples.md](./extension-examples.md).

## Adding a Provider

- Implement `Provider` in `src/providers/`.
- Register it in the `src/providers/mod.rs` factory.
- Add focused tests for factory wiring and error paths.
- Avoid leaking provider-specific behavior into shared orchestration code.

## Adding a Channel

- Implement `Channel` in `src/channels/`.
- Keep `send`, `listen`, `health_check`, and typing semantics consistent.
- Cover auth/allowlist/health behavior with tests.

## Adding a Tool

- Implement `Tool` in `src/tools/` with a strict parameter schema.
- Validate and sanitize all inputs.
- Return a structured `ToolResult`; avoid panics in the runtime path.
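The bullets above can be sketched as follows. The trait shape here is assumed for illustration only; ZeroClaw's real `Tool` trait lives in `src/tools/` (see extension-examples.md for the actual signatures):

```rust
// Illustrative result type: structured outcome instead of panics.
#[derive(Debug, PartialEq)]
enum ToolResult {
    Ok(String),
    Err(String),
}

// Assumed minimal trait shape for this sketch.
trait Tool {
    fn name(&self) -> &str;
    /// Validate inputs and return a structured result; never panic.
    fn execute(&self, input: &str) -> ToolResult;
}

struct EchoTool;

impl Tool for EchoTool {
    fn name(&self) -> &str {
        "echo"
    }

    fn execute(&self, input: &str) -> ToolResult {
        // Validate and sanitize: reject empty input instead of panicking.
        let trimmed = input.trim();
        if trimmed.is_empty() {
            return ToolResult::Err("input must not be empty".into());
        }
        ToolResult::Ok(trimmed.to_string())
    }
}
```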
## Adding a Peripheral

- Implement `Peripheral` in `src/peripherals/`.
- Peripherals expose `tools()` — each tool delegates to the hardware (GPIO, sensors, etc.).
- Register the board type in the config schema if needed.
- See `docs/hardware/hardware-peripherals-design.md` for protocol and firmware notes.

## Security / Runtime / Gateway Changes

- Include threat/risk notes and a rollback strategy.
- Add or update tests or validation evidence for failure modes and boundaries.
- Keep observability useful but non-sensitive.
- For `.github/workflows/**` changes, include the Actions allowlist impact in PR notes and update `docs/contributing/actions-source-policy.md` when sources change.

## Docs System / README / IA Changes

- Treat docs navigation as product UX: preserve a clear path from README -> docs hub -> SUMMARY -> category index.
- Keep top-level nav concise; avoid duplicative links across adjacent nav blocks.
- When runtime surfaces change, update related references in `docs/reference/`.
- Keep multilingual entry-point parity for all supported locales (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`) when nav or key wording changes.
- When shared docs wording changes, sync the corresponding localized docs in the same PR (or explicitly document the deferral and follow-up PR).

## Tool Shared State

- Follow the `Arc<RwLock<T>>` handle pattern for any tool that owns long-lived shared state.
- Accept handles at construction; do not create global/static mutable state.
- Use `ClientId` (provided by the daemon) to namespace per-client state — never construct identity keys inside the tool.
- Isolate security-sensitive state (credentials, quotas) per client; broadcast/display state may be shared with optional namespace prefixing.
- Cached validation is invalidated on config change — tools must re-validate before the next execution when signaled.
- See [ADR-004: Tool Shared State Ownership](../architecture/adr-004-tool-shared-state-ownership.md) for the full contract.
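The handle pattern above can be sketched as follows. `ClientId` and `CounterTool` are illustrative names, not ZeroClaw's actual types; see ADR-004 for the real contract:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Daemon-provided client identity; the tool never invents its own keys.
type ClientId = String;

struct CounterTool {
    // Shared handle accepted at construction — no global/static mutable state.
    state: Arc<RwLock<HashMap<ClientId, u64>>>,
}

impl CounterTool {
    fn new(state: Arc<RwLock<HashMap<ClientId, u64>>>) -> Self {
        Self { state }
    }

    /// Increment this client's counter; state is namespaced per `ClientId`,
    /// so clients cannot observe each other's values.
    fn bump(&self, client: &ClientId) -> u64 {
        let mut map = self.state.write().expect("lock poisoned");
        let counter = map.entry(client.clone()).or_insert(0);
        *counter += 1;
        *counter
    }
}
```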
## Architecture Boundary Rules

- Extend capabilities by adding trait implementations + factory wiring first; avoid cross-module rewrites for isolated features.
- Keep the dependency direction inward to contracts: concrete integrations depend on trait/config/util layers, not on other concrete integrations.
- Avoid cross-subsystem coupling (e.g., provider code importing channel internals, tool code mutating gateway policy directly).
- Keep module responsibilities single-purpose: orchestration in `agent/`, transport in `channels/`, model I/O in `providers/`, policy in `security/`, execution in `tools/`.
- Introduce new shared abstractions only after repeated use (rule-of-three), with at least one real caller.
- For config/schema changes, treat keys as a public contract: document defaults, compatibility impact, and migration/rollback path.
136 third_party/zeroclaw/docs/contributing/ci-map.md vendored Normal file
@@ -0,0 +1,136 @@
# CI Workflow Map

This document explains what each GitHub workflow does, when it runs, and whether it should block merges.

For event-by-event delivery behavior across PR, merge, push, and release, see [`.github/workflows/master-branch-flow.md`](../../.github/workflows/master-branch-flow.md).

## Merge-Blocking vs Optional

Merge-blocking checks should stay small and deterministic. Optional checks are useful for automation and maintenance, but should not block normal development.

### Merge-Blocking

- `.github/workflows/ci-run.yml` (`CI`)
  - Purpose: Rust validation (`cargo fmt --all -- --check`, `cargo clippy --locked --all-targets -- -D clippy::correctness`, strict delta lint gate on changed Rust lines, `test`, release build smoke) + docs quality checks when docs change (`markdownlint` blocks only issues on changed lines; link check scans only links added on changed lines)
  - Additional behavior: for Rust-impacting PRs and pushes, `CI Required Gate` requires `lint` + `test` + `build` (no PR build-only bypass)
  - Additional behavior: PRs that change `.github/workflows/**` require at least one approving review from a login in `WORKFLOW_OWNER_LOGINS` (repository variable fallback: `theonlyhennygod,JordanTheJet,SimianAstronaut7`)
  - Additional behavior: lint gates run before `test`/`build`; when lint/docs gates fail on PRs, CI posts an actionable feedback comment with failing gate names and local fix commands
  - Merge gate: `CI Required Gate`
- `.github/workflows/workflow-sanity.yml` (`Workflow Sanity`)
  - Purpose: lint GitHub workflow files (`actionlint`, tab checks)
  - Recommended for workflow-changing PRs
- `.github/workflows/pr-intake-checks.yml` (`PR Intake Checks`)
  - Purpose: safe pre-CI PR checks (template completeness; added-line tabs, trailing whitespace, and conflict markers) with an immediate sticky feedback comment

### Non-Blocking but Important

- `.github/workflows/pub-docker-img.yml` (`Docker`)
  - Purpose: PR Docker smoke check on `master` PRs; publishes images on tag pushes (`v*`) only
- `.github/workflows/sec-audit.yml` (`Security Audit`)
  - Purpose: dependency advisories (`rustsec/audit-check`, pinned SHA) and policy/license checks (`cargo deny`)
- `.github/workflows/sec-codeql.yml` (`CodeQL Analysis`)
  - Purpose: scheduled/manual static analysis for security findings
- `.github/workflows/sec-vorpal-reviewdog.yml` (`Sec Vorpal Reviewdog`)
  - Purpose: manual secure-coding feedback scan for supported non-Rust files (`.py`, `.js`, `.jsx`, `.ts`, `.tsx`) using reviewdog annotations
  - Noise control: excludes common test/fixture paths and test file patterns by default (`include_tests=false`)
- `.github/workflows/pub-release.yml` (`Release`)
  - Purpose: build release artifacts in verification mode (manual/scheduled) and publish GitHub releases on tag push or manual publish mode
- `.github/workflows/pub-homebrew-core.yml` (`Pub Homebrew Core`)
  - Purpose: manual, bot-owned Homebrew core formula bump PR flow for tagged releases
  - Guardrail: the release tag must match the `Cargo.toml` version
- `.github/workflows/pub-scoop.yml` (`Pub Scoop Manifest`)
  - Purpose: Scoop bucket manifest update for Windows; auto-called by stable release, also manual dispatch
  - Guardrail: the release tag must be in `vX.Y.Z` format; the Windows binary hash is extracted from `SHA256SUMS`
- `.github/workflows/pub-aur.yml` (`Pub AUR Package`)
  - Purpose: AUR PKGBUILD push for Arch Linux; auto-called by stable release, also manual dispatch
  - Guardrail: the release tag must be in `vX.Y.Z` format; the source tarball SHA256 is computed at publish time
- `.github/workflows/pr-label-policy-check.yml` (`Label Policy Sanity`)
  - Purpose: validate the shared contributor-tier policy in `.github/label-policy.json` and ensure label workflows consume that policy
- `.github/workflows/test-rust-build.yml` (`Rust Reusable Job`)
  - Purpose: reusable Rust setup/cache + command runner for workflow-call consumers

### Optional Repository Automation

- `.github/workflows/pr-labeler.yml` (`PR Labeler`)
  - Purpose: scope/path labels + size/risk labels + fine-grained module labels (`<module>: <component>`)
  - Additional behavior: label descriptions are auto-managed as hover tooltips to explain each auto-judgment rule
  - Additional behavior: provider-related keywords in provider/config/onboard/integration changes are promoted to `provider:*` labels (for example `provider:kimi`, `provider:deepseek`)
  - Additional behavior: hierarchical de-duplication keeps only the most specific scope labels (for example `tool:composio` suppresses `tool:core` and `tool`)
  - Additional behavior: module namespaces are compacted — one specific module keeps `prefix:component`; multiple specifics collapse to just `prefix`
  - Additional behavior: applies contributor tiers on PRs by merged PR count (`trusted` >=5, `experienced` >=10, `principal` >=20, `distinguished` >=50)
  - Additional behavior: the final label set is priority-sorted (`risk:*` first, then `size:*`, then contributor tier, then module/path labels)
  - Additional behavior: managed label colors follow display order to produce a smooth left-to-right gradient when many labels are present
  - Manual governance: supports `workflow_dispatch` with `mode=audit|repair` to inspect/fix managed label metadata drift across the whole repository
  - Additional behavior: risk + size labels are auto-corrected on manual PR label edits (`labeled`/`unlabeled` events); apply `risk: manual` when maintainers intentionally override automated risk selection
  - High-risk heuristic paths: `src/security/**`, `src/runtime/**`, `src/gateway/**`, `src/tools/**`, `.github/workflows/**`
  - Guardrail: maintainers can apply `risk: manual` to freeze automated risk recalculation
- `.github/workflows/pr-auto-response.yml` (`PR Auto Responder`)
  - Purpose: first-time contributor onboarding + label-driven response routing (`r:support`, `r:needs-repro`, etc.)
  - Additional behavior: applies contributor tiers on issues by merged PR count (`trusted` >=5, `experienced` >=10, `principal` >=20, `distinguished` >=50), matching PR tier thresholds exactly
  - Additional behavior: contributor-tier labels are treated as automation-managed (manual add/remove on a PR/issue is auto-corrected)
  - Guardrail: label-based close routes are issue-only; PRs are never auto-closed by route labels
- `.github/workflows/pr-check-stale.yml` (`Stale`)
  - Purpose: stale issue/PR lifecycle automation
- `.github/dependabot.yml` (`Dependabot`)
  - Purpose: grouped, rate-limited dependency update PRs (Cargo + GitHub Actions)
- `.github/workflows/pr-check-status.yml` (`PR Hygiene`)
  - Purpose: nudge stale-but-active PRs to rebase/re-run required checks before queue starvation

## Trigger Map

- `CI`: push to `master`, PRs to `master`
- `Docker`: tag push (`v*`) for publish, matching PRs to `master` for smoke build, manual dispatch for smoke only
- `Release`: tag push (`v*`), weekly schedule (verification-only), manual dispatch (verification or publish)
- `Pub Homebrew Core`: manual dispatch only
- `Pub Scoop Manifest`: auto-called by stable release, also manual dispatch
- `Pub AUR Package`: auto-called by stable release, also manual dispatch
- `Security Audit`: push to `master`, PRs to `master`, weekly schedule
- `Sec Vorpal Reviewdog`: manual dispatch only
- `Workflow Sanity`: PR/push when `.github/workflows/**`, `.github/*.yml`, or `.github/*.yaml` change
- `Dependabot`: all update PRs target `master`
- `PR Intake Checks`: `pull_request_target` on opened/reopened/synchronize/edited/ready_for_review
- `Label Policy Sanity`: PR/push when `.github/label-policy.json`, `.github/workflows/pr-labeler.yml`, or `.github/workflows/pr-auto-response.yml` changes
- `PR Labeler`: `pull_request_target` lifecycle events
- `PR Auto Responder`: issue opened/labeled, `pull_request_target` opened/labeled
- `Stale`: daily schedule, manual dispatch
- `PR Hygiene`: every 12 hours on schedule, manual dispatch

## Fast Triage Guide

1. `CI Required Gate` failing: start with `.github/workflows/ci-run.yml`.
2. Docker failures on PRs: inspect the `.github/workflows/pub-docker-img.yml` `pr-smoke` job.
3. Release failures (tag/manual/scheduled): inspect `.github/workflows/pub-release.yml` and the `prepare` job outputs.
4. Homebrew formula publish failures: inspect the `.github/workflows/pub-homebrew-core.yml` summary output and bot token/fork variables.
5. Scoop manifest publish failures: inspect the `.github/workflows/pub-scoop.yml` summary output and `SCOOP_BUCKET_REPO`/`SCOOP_BUCKET_TOKEN` settings.
6. AUR package publish failures: inspect the `.github/workflows/pub-aur.yml` summary output and the `AUR_SSH_KEY` secret.
7. Security failures: inspect `.github/workflows/sec-audit.yml` and `deny.toml`.
8. Workflow syntax/lint failures: inspect `.github/workflows/workflow-sanity.yml`.
9. PR intake failures: inspect the `.github/workflows/pr-intake-checks.yml` sticky comment and run logs.
10. Label policy parity failures: inspect `.github/workflows/pr-label-policy-check.yml`.
11. Docs failures in CI: inspect the `docs-quality` job logs in `.github/workflows/ci-run.yml`.
12. Strict delta lint failures in CI: inspect the `lint-strict-delta` job logs and compare with the `BASE_SHA` diff scope.

## Maintenance Rules

- Keep merge-blocking checks deterministic and reproducible (`--locked` where applicable).
- Follow [`docs/contributing/release-process.md`](./release-process.md) for verify-before-publish release cadence and tag discipline.
- Keep the merge-blocking Rust quality policy aligned across `.github/workflows/ci-run.yml`, `dev/ci.sh`, and `.githooks/pre-push` (`./scripts/ci/rust_quality_gate.sh` + `./scripts/ci/rust_strict_delta_gate.sh`).
- Use `./scripts/ci/rust_strict_delta_gate.sh` (or `./dev/ci.sh lint-delta`) as the incremental strict merge gate for changed Rust lines.
- Run full strict lint audits regularly via `./scripts/ci/rust_quality_gate.sh --strict` (for example through `./dev/ci.sh lint-strict`) and track cleanup in focused PRs.
- Keep docs markdown gating incremental via `./scripts/ci/docs_quality_gate.sh` (block changed-line issues, report baseline issues separately).
- Keep docs link gating incremental via `./scripts/ci/collect_changed_links.py` + lychee (check only links added on changed lines).
- Prefer explicit workflow permissions (least privilege).
- Keep the Actions source policy restricted to approved allowlist patterns (see [`docs/contributing/actions-source-policy.md`](./actions-source-policy.md)).
- Use path filters for expensive workflows when practical.
- Keep docs quality checks low-noise (incremental markdown + incremental added-link checks).
- Keep dependency update volume controlled (grouping + PR limits).
- Avoid mixing onboarding/community automation with merge-gating logic.
- Test levels: `cargo test --test component`, `cargo test --test integration`, `cargo test --test system`.
- Live tests (manual only): `cargo test --test live -- --ignored`.
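A path filter of the kind recommended above keeps an expensive workflow from running on unrelated changes. This is generic GitHub Actions syntax, not an excerpt from this repository's workflows:

```yaml
on:
  pull_request:
    paths:
      - "src/**"
      - "Cargo.toml"
      - "Cargo.lock"
```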
## Automation Side-Effect Controls

- Prefer deterministic automation that can be manually overridden (`risk: manual`) when context is nuanced.
- Keep auto-response comments deduplicated to prevent triage noise.
- Keep auto-close behavior scoped to issues; maintainers own PR close/merge decisions.
- If automation is wrong, correct labels first, then continue review with explicit rationale.
- Use `superseded` / `stale-candidate` labels to prune duplicate or dormant PRs before deep review.
132 third_party/zeroclaw/docs/contributing/cla.md vendored Normal file
@@ -0,0 +1,132 @@
# ZeroClaw Contributor License Agreement (CLA)

**Version 1.0 — February 2026**
**ZeroClaw Labs**

---

## Purpose

This Contributor License Agreement ("CLA") clarifies the intellectual property rights granted by contributors to ZeroClaw Labs. This agreement protects both contributors and users of the ZeroClaw project.

By submitting a contribution (pull request, patch, issue with code, or any other form of code submission) to the ZeroClaw repository, you agree to the terms of this CLA.

---

## 1. Definitions

- **"Contribution"** means any original work of authorship, including any modifications or additions to existing work, submitted to ZeroClaw Labs for inclusion in the ZeroClaw project.

- **"You"** means the individual or legal entity submitting a Contribution.

- **"ZeroClaw Labs"** means the maintainers and organization responsible for the ZeroClaw project at https://github.com/zeroclaw-labs/zeroclaw.

---

## 2. Grant of Copyright License

You grant ZeroClaw Labs and recipients of software distributed by ZeroClaw Labs a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to:

- Reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute your Contributions and derivative works under **both the MIT License and the Apache License 2.0**.

---

## 3. Grant of Patent License

You grant ZeroClaw Labs and recipients of software distributed by ZeroClaw Labs a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer your Contributions.

This patent license applies only to patent claims licensable by you that are necessarily infringed by your Contribution alone or in combination with the ZeroClaw project.

**This protects you:** if a third party files a patent claim against ZeroClaw that covers your Contribution, your patent license to the project is not revoked.

---

## 4. You Retain Your Rights

This CLA does **not** transfer ownership of your Contribution to ZeroClaw Labs. You retain full copyright ownership of your Contribution. You are free to use your Contribution in any other project under any license.

---

## 5. Original Work

You represent that:

1. Each Contribution is your original creation, or you have sufficient rights to submit it under this CLA.
2. Your Contribution does not knowingly infringe any third-party patent, copyright, trademark, or other intellectual property right.
3. If your employer has rights to intellectual property you create, you have received permission to submit the Contribution, or your employer has signed a corporate CLA with ZeroClaw Labs.

---

## 6. No Trademark Rights

This CLA does not grant you any rights to use the ZeroClaw name, trademarks, service marks, or logos. See [trademark.md](../maintainers/trademark.md) for the trademark policy.

---

## 7. Attribution

ZeroClaw Labs will maintain attribution to contributors in the repository commit history and NOTICE file. Your contributions are permanently and publicly recorded.

---

## 8. Dual-License Commitment

All Contributions accepted into the ZeroClaw project are licensed under both:

- **MIT License** — permissive open-source use
- **Apache License 2.0** — patent protection and stronger IP guarantees

This dual-license model ensures maximum compatibility and protection for the entire contributor community.

---

## 9. How to Agree

By opening a pull request or submitting a patch to the ZeroClaw repository, you indicate your agreement to this CLA. No separate signature is required for individual contributors.

For **corporate contributors** (submitting on behalf of a company or organization), please open an issue titled "Corporate CLA — [Company Name]" and a maintainer will follow up.

---

## 10. Questions

If you have questions about this CLA, open an issue at:
https://github.com/zeroclaw-labs/zeroclaw/issues

---

*This CLA is based on the Apache Individual Contributor License Agreement v2.0, adapted for the ZeroClaw dual-license model.*
206
third_party/zeroclaw/docs/contributing/custom-providers.md
vendored
Normal file
@@ -0,0 +1,206 @@
# Custom Provider Configuration

ZeroClaw supports custom API endpoints for both OpenAI-compatible and Anthropic-compatible providers.

## Provider Types

### OpenAI-Compatible Endpoints (`custom:`)

For services that implement the OpenAI API format:

```toml
default_provider = "custom:https://your-api.com"
api_key = "your-api-key"
default_model = "your-model-name"
```

### Anthropic-Compatible Endpoints (`anthropic-custom:`)

For services that implement the Anthropic API format:

```toml
default_provider = "anthropic-custom:https://your-api.com"
api_key = "your-api-key"
default_model = "your-model-name"
```

## Configuration Methods

### Config File

Edit `~/.zeroclaw/config.toml`:

```toml
api_key = "your-api-key"
default_provider = "anthropic-custom:https://api.example.com"
default_model = "claude-sonnet-4-6"
```

### Environment Variables

For `custom:` and `anthropic-custom:` providers, use the generic key env vars:

```bash
export API_KEY="your-api-key"
# or: export ZEROCLAW_API_KEY="your-api-key"
zeroclaw agent
```

## llama.cpp Server (Recommended Local Setup)

ZeroClaw includes a first-class local provider for `llama-server`:

- Provider ID: `llamacpp` (alias: `llama.cpp`)
- Default endpoint: `http://localhost:8080/v1`
- API key is optional unless `llama-server` is started with `--api-key`

Start a local server (example):

```bash
llama-server -hf ggml-org/gpt-oss-20b-GGUF --jinja -c 133000 --host 127.0.0.1 --port 8033
```

Then configure ZeroClaw:

```toml
default_provider = "llamacpp"
api_url = "http://127.0.0.1:8033/v1"
default_model = "ggml-org/gpt-oss-20b-GGUF"
default_temperature = 0.7
```

Quick validation:

```bash
zeroclaw models refresh --provider llamacpp
zeroclaw agent -m "hello"
```

You do not need to export `ZEROCLAW_API_KEY=dummy` for this flow.

## SGLang Server

ZeroClaw includes a first-class local provider for [SGLang](https://github.com/sgl-project/sglang):

- Provider ID: `sglang`
- Default endpoint: `http://localhost:30000/v1`
- API key is optional unless the server requires authentication

Start a local server (example):

```bash
python -m sglang.launch_server --model meta-llama/Llama-3.1-8B-Instruct --port 30000
```

Then configure ZeroClaw:

```toml
default_provider = "sglang"
default_model = "meta-llama/Llama-3.1-8B-Instruct"
default_temperature = 0.7
```

Quick validation:

```bash
zeroclaw models refresh --provider sglang
zeroclaw agent -m "hello"
```

You do not need to export `ZEROCLAW_API_KEY=dummy` for this flow.

## vLLM Server

ZeroClaw includes a first-class local provider for [vLLM](https://docs.vllm.ai/):

- Provider ID: `vllm`
- Default endpoint: `http://localhost:8000/v1`
- API key is optional unless the server requires authentication

Start a local server (example):

```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct
```

Then configure ZeroClaw:

```toml
default_provider = "vllm"
default_model = "meta-llama/Llama-3.1-8B-Instruct"
default_temperature = 0.7
```

Quick validation:

```bash
zeroclaw models refresh --provider vllm
zeroclaw agent -m "hello"
```

You do not need to export `ZEROCLAW_API_KEY=dummy` for this flow.

## Testing Configuration

Verify your custom endpoint:

```bash
# Interactive mode
zeroclaw agent

# Single message test
zeroclaw agent -m "test message"
```

## Troubleshooting

### Authentication Errors

- Verify the API key is correct
- Check the endpoint URL format (must include `http://` or `https://`)
- Ensure the endpoint is accessible from your network

### Model Not Found

- Confirm the model name matches the provider's available models
- Check the provider documentation for exact model identifiers
- Ensure the endpoint and model family match. Some custom gateways only expose a subset of models.
- Verify available models from the same endpoint and key you configured:

```bash
curl -sS https://your-api.com/models \
  -H "Authorization: Bearer $API_KEY"
```

- If the gateway does not implement `/models`, send a minimal chat request and inspect the provider's returned model error text.
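For an OpenAI-compatible gateway, that minimal chat request can look like the sketch below. The endpoint, key, and model name are the same placeholders used elsewhere on this page, and `/v1/chat/completions` is assumed to be the usual OpenAI-style path; confirm the exact path against your gateway's docs. A "model not found" gateway will typically name the models it does expose in the error body.

```shell
# Hypothetical probe against an OpenAI-compatible gateway; substitute
# your real endpoint, API key, and model name before running.
curl -sS https://your-api.com/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "your-model-name", "messages": [{"role": "user", "content": "ping"}]}'
```

If this request succeeds while `zeroclaw agent` fails, the problem is likely the configured `default_model` or provider prefix rather than the endpoint itself.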
### Connection Issues

- Test endpoint accessibility: `curl -I https://your-api.com`
- Verify firewall/proxy settings
- Check the provider status page

## Examples

### Local LLM Server (Generic Custom Endpoint)

```toml
default_provider = "custom:http://localhost:8080/v1"
api_key = "your-api-key-if-required"
default_model = "local-model"
```

### Corporate Proxy

```toml
default_provider = "anthropic-custom:https://llm-proxy.corp.example.com"
api_key = "internal-token"
```

### Cloud Provider Gateway

```toml
default_provider = "custom:https://gateway.cloud-provider.com/v1"
api_key = "gateway-api-key"
default_model = "gpt-4"
```
63
third_party/zeroclaw/docs/contributing/doc-template.md
vendored
Normal file
@@ -0,0 +1,63 @@
# Documentation Template (Operational)

Use this template when adding a new operational or engineering document under `docs/`.

Keep sections that apply; remove non-applicable placeholders before merging.

---

## 1. Summary

- **Purpose:** <one sentence about why this document exists>
- **Audience:** <operators | reviewers | contributors | maintainers>
- **Scope:** <what this doc covers>
- **Non-goals:** <what this doc intentionally does not cover>

## 2. Prerequisites

- <required environment>
- <required permissions>
- <required tools/config>

## 3. Procedure

### 3.1 Baseline Check

1. <step>
2. <step>

### 3.2 Main Workflow

1. <step>
2. <step>
3. <step>

### 3.3 Verification

- <expected output or success signal>
- <validation command/log/checkpoint>

## 4. Safety, Risk, and Rollback

- **Risk surface:** <which components may be impacted>
- **Failure modes:** <what can go wrong>
- **Rollback plan:** <concrete rollback command/steps>

## 5. Troubleshooting

- **Symptom:** <error/signal>
- **Cause:** <likely cause>
- **Fix:** <action>

## 6. Related Docs

- [README.md](./README.md) — documentation taxonomy and navigation.
- <related-doc-1.md>
- <related-doc-2.md>

## 7. Maintenance Notes

- **Owner:** <team/persona/area>
- **Update trigger:** <what changes should force this doc update>
- **Last reviewed:** <YYYY-MM-DD>
34
third_party/zeroclaw/docs/contributing/docs-contract.md
vendored
Normal file
@@ -0,0 +1,34 @@
# Documentation System Contract

Treat documentation as a first-class product surface, not a post-merge artifact.

## Canonical Entry Points

- root READMEs: `README.md`, `README.zh-CN.md`, `README.ja.md`, `README.ru.md`, `README.fr.md`, `README.vi.md`
- docs hubs: `docs/README.md`, `docs/README.zh-CN.md`, `docs/README.ja.md`, `docs/README.ru.md`, `docs/README.fr.md`, `docs/README.vi.md`
- unified TOC: `docs/SUMMARY.md`

## Supported Locales

`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`

## Collection Indexes

- `docs/setup-guides/README.md`
- `docs/reference/README.md`
- `docs/ops/README.md`
- `docs/security/README.md`
- `docs/hardware/README.md`
- `docs/contributing/README.md`
- `docs/maintainers/README.md`

## Governance Rules

- Keep README/hub top navigation and quick routes intuitive and non-duplicative.
- Keep entry-point parity across all supported locales when changing navigation architecture.
- If a change touches docs IA, runtime-contract references, or user-facing wording in shared docs, perform i18n follow-through for supported locales in the same PR:
  - Update locale navigation links (`README*`, `docs/README*`, `docs/SUMMARY.md`).
  - Update localized runtime-contract docs where equivalents exist.
  - For Vietnamese, treat `docs/vi/**` as canonical.
- Keep proposal/roadmap docs explicitly labeled; avoid mixing proposal text into runtime-contract docs.
- Keep project snapshots date-stamped and immutable once superseded by a newer date.
407
third_party/zeroclaw/docs/contributing/extension-examples.md
vendored
Normal file
@@ -0,0 +1,407 @@
# Extension Examples

ZeroClaw's architecture is trait-driven and modular.
To add a new provider, channel, tool, or memory backend, implement the corresponding trait and register it in the factory module.

This page contains minimal, working examples for each core extension point.
For step-by-step integration checklists, see [change-playbooks.md](./change-playbooks.md).

> **Source of truth**: the trait definitions live in `src/*/traits.rs`.
> If an example here conflicts with the trait file, the trait file wins.

---
## Tool (`src/tools/traits.rs`)

Tools are the agent's hands — they let it interact with the world.

**Required methods**: `name()`, `description()`, `parameters_schema()`, `execute()`.
The `spec()` method has a default implementation that composes the others.

Register your tool in `src/tools/mod.rs` via `default_tools()`.

```rust
// In your crate: use zeroclaw::tools::traits::{Tool, ToolResult};

use anyhow::Result;
use async_trait::async_trait;
use serde_json::{json, Value};

/// A tool that fetches a URL and returns the status code.
pub struct HttpGetTool;

#[async_trait]
impl Tool for HttpGetTool {
    fn name(&self) -> &str {
        "http_get"
    }

    fn description(&self) -> &str {
        "Fetch a URL and return the HTTP status code and content length"
    }

    fn parameters_schema(&self) -> Value {
        json!({
            "type": "object",
            "properties": {
                "url": { "type": "string", "description": "URL to fetch" }
            },
            "required": ["url"]
        })
    }

    async fn execute(&self, args: Value) -> Result<ToolResult> {
        let url = args["url"]
            .as_str()
            .ok_or_else(|| anyhow::anyhow!("Missing 'url' parameter"))?;

        match reqwest::get(url).await {
            Ok(resp) => {
                let status = resp.status().as_u16();
                let len = resp.content_length().unwrap_or(0);
                Ok(ToolResult {
                    success: status < 400,
                    output: format!("HTTP {status} — {len} bytes"),
                    error: None,
                })
            }
            Err(e) => Ok(ToolResult {
                success: false,
                output: String::new(),
                error: Some(format!("Request failed: {e}")),
            }),
        }
    }
}
```

---
## Channel (`src/channels/traits.rs`)

Channels let ZeroClaw communicate through any messaging platform.

**Required methods**: `name()`, `send(&SendMessage)`, `listen()`.
Default implementations exist for `health_check()`, `start_typing()`, `stop_typing()`,
draft methods (`send_draft`, `update_draft`, `finalize_draft`, `cancel_draft`),
and reaction methods (`add_reaction`, `remove_reaction`).

Register your channel in `src/channels/mod.rs` and add config to `ChannelsConfig` in `src/config/schema.rs`.

```rust
// In your crate: use zeroclaw::channels::traits::{Channel, ChannelMessage, SendMessage};

use anyhow::Result;
use async_trait::async_trait;
use tokio::sync::mpsc;

/// Telegram channel via Bot API.
pub struct TelegramChannel {
    bot_token: String,
    allowed_users: Vec<String>,
    client: reqwest::Client,
}

impl TelegramChannel {
    pub fn new(bot_token: &str, allowed_users: Vec<String>) -> Self {
        Self {
            bot_token: bot_token.to_string(),
            allowed_users,
            client: reqwest::Client::new(),
        }
    }

    fn api_url(&self, method: &str) -> String {
        format!("https://api.telegram.org/bot{}/{method}", self.bot_token)
    }
}

#[async_trait]
impl Channel for TelegramChannel {
    fn name(&self) -> &str {
        "telegram"
    }

    async fn send(&self, message: &SendMessage) -> Result<()> {
        self.client
            .post(self.api_url("sendMessage"))
            .json(&serde_json::json!({
                "chat_id": message.recipient,
                "text": message.content,
                "parse_mode": "Markdown",
            }))
            .send()
            .await?;
        Ok(())
    }

    async fn listen(&self, tx: mpsc::Sender<ChannelMessage>) -> Result<()> {
        let mut offset: i64 = 0;

        loop {
            let resp = self
                .client
                .get(self.api_url("getUpdates"))
                .query(&[("offset", offset.to_string()), ("timeout", "30".into())])
                .send()
                .await?
                .json::<serde_json::Value>()
                .await?;

            if let Some(updates) = resp["result"].as_array() {
                for update in updates {
                    // Advance the offset first so filtered or malformed
                    // updates are acknowledged and not re-fetched forever.
                    offset = update["update_id"].as_i64().unwrap_or(offset) + 1;

                    if let Some(msg) = update.get("message") {
                        let sender = msg["from"]["username"]
                            .as_str()
                            .unwrap_or("unknown")
                            .to_string();

                        if !self.allowed_users.is_empty()
                            && !self.allowed_users.contains(&sender)
                        {
                            continue;
                        }

                        let chat_id = msg["chat"]["id"].to_string();

                        let channel_msg = ChannelMessage {
                            id: msg["message_id"].to_string(),
                            sender,
                            reply_target: chat_id,
                            content: msg["text"].as_str().unwrap_or("").to_string(),
                            channel: "telegram".into(),
                            timestamp: msg["date"].as_u64().unwrap_or(0),
                            thread_ts: None,
                        };

                        if tx.send(channel_msg).await.is_err() {
                            return Ok(());
                        }
                    }
                }
            }
        }
    }

    async fn health_check(&self) -> bool {
        self.client
            .get(self.api_url("getMe"))
            .send()
            .await
            .map(|r| r.status().is_success())
            .unwrap_or(false)
    }
}
```

---
## Provider (`src/providers/traits.rs`)

Providers are LLM backend adapters. Each provider connects ZeroClaw to a different model API.

**Required method**: `chat_with_system(system_prompt: Option<&str>, message: &str, model: &str, temperature: f64) -> Result<String>`.
Everything else has default implementations:
`simple_chat()` and `chat_with_history()` delegate to `chat_with_system()`;
`capabilities()` returns no native tool calling by default;
streaming methods return empty/error streams by default.

Register your provider in `src/providers/mod.rs`.

```rust
// In your crate: use zeroclaw::providers::traits::Provider;

use anyhow::Result;
use async_trait::async_trait;

/// Ollama local provider.
pub struct OllamaProvider {
    base_url: String,
    client: reqwest::Client,
}

impl OllamaProvider {
    pub fn new(base_url: Option<&str>) -> Self {
        Self {
            base_url: base_url.unwrap_or("http://localhost:11434").to_string(),
            client: reqwest::Client::new(),
        }
    }
}

#[async_trait]
impl Provider for OllamaProvider {
    async fn chat_with_system(
        &self,
        system_prompt: Option<&str>,
        message: &str,
        model: &str,
        temperature: f64,
    ) -> Result<String> {
        let url = format!("{}/api/generate", self.base_url);

        let mut body = serde_json::json!({
            "model": model,
            "prompt": message,
            "temperature": temperature,
            "stream": false,
        });

        if let Some(system) = system_prompt {
            body["system"] = serde_json::Value::String(system.to_string());
        }

        let resp = self
            .client
            .post(&url)
            .json(&body)
            .send()
            .await?
            .json::<serde_json::Value>()
            .await?;

        resp["response"]
            .as_str()
            .map(|s| s.to_string())
            .ok_or_else(|| anyhow::anyhow!("No response field in Ollama reply"))
    }
}
```

---
## Memory (`src/memory/traits.rs`)

Memory backends provide pluggable persistence for the agent's knowledge.

**Required methods**: `name()`, `store()`, `recall()`, `get()`, `list()`, `forget()`, `count()`, `health_check()`.
Both `store()` and `recall()` accept an optional `session_id` for scoping.

Register your backend in `src/memory/mod.rs`.

```rust
// In your crate: use zeroclaw::memory::traits::{Memory, MemoryEntry, MemoryCategory};

use async_trait::async_trait;
use std::collections::HashMap;
use std::sync::Mutex;

/// In-memory HashMap backend (useful for testing or ephemeral sessions).
pub struct InMemoryBackend {
    store: Mutex<HashMap<String, MemoryEntry>>,
}

impl InMemoryBackend {
    pub fn new() -> Self {
        Self {
            store: Mutex::new(HashMap::new()),
        }
    }
}

#[async_trait]
impl Memory for InMemoryBackend {
    fn name(&self) -> &str {
        "in-memory"
    }

    async fn store(
        &self,
        key: &str,
        content: &str,
        category: MemoryCategory,
        session_id: Option<&str>,
    ) -> anyhow::Result<()> {
        let entry = MemoryEntry {
            id: uuid::Uuid::new_v4().to_string(),
            key: key.to_string(),
            content: content.to_string(),
            category,
            timestamp: chrono::Local::now().to_rfc3339(),
            session_id: session_id.map(|s| s.to_string()),
            score: None,
        };
        self.store
            .lock()
            .map_err(|e| anyhow::anyhow!("{e}"))?
            .insert(key.to_string(), entry);
        Ok(())
    }

    async fn recall(
        &self,
        query: &str,
        limit: usize,
        session_id: Option<&str>,
    ) -> anyhow::Result<Vec<MemoryEntry>> {
        let store = self.store.lock().map_err(|e| anyhow::anyhow!("{e}"))?;
        let query_lower = query.to_lowercase();

        let mut results: Vec<MemoryEntry> = store
            .values()
            .filter(|e| e.content.to_lowercase().contains(&query_lower))
            .filter(|e| match session_id {
                Some(sid) => e.session_id.as_deref() == Some(sid),
                None => true,
            })
            .cloned()
            .collect();

        results.truncate(limit);
        Ok(results)
    }

    async fn get(&self, key: &str) -> anyhow::Result<Option<MemoryEntry>> {
        let store = self.store.lock().map_err(|e| anyhow::anyhow!("{e}"))?;
        Ok(store.get(key).cloned())
    }

    async fn list(
        &self,
        category: Option<&MemoryCategory>,
        session_id: Option<&str>,
    ) -> anyhow::Result<Vec<MemoryEntry>> {
        let store = self.store.lock().map_err(|e| anyhow::anyhow!("{e}"))?;
        Ok(store
            .values()
            .filter(|e| match category {
                Some(cat) => &e.category == cat,
                None => true,
            })
            .filter(|e| match session_id {
                Some(sid) => e.session_id.as_deref() == Some(sid),
                None => true,
            })
            .cloned()
            .collect())
    }

    async fn forget(&self, key: &str) -> anyhow::Result<bool> {
        let mut store = self.store.lock().map_err(|e| anyhow::anyhow!("{e}"))?;
        Ok(store.remove(key).is_some())
    }

    async fn count(&self) -> anyhow::Result<usize> {
        let store = self.store.lock().map_err(|e| anyhow::anyhow!("{e}"))?;
        Ok(store.len())
    }

    async fn health_check(&self) -> bool {
        true
    }
}
```

---
## Registration Pattern

All extension traits follow the same wiring pattern:

1. Create your implementation file in the relevant `src/*/` directory.
2. Register it in the module's factory function (e.g., `default_tools()`, provider match arm).
3. Add any needed config keys to `src/config/schema.rs`.
4. Write focused tests for factory wiring and error paths.

See [change-playbooks.md](./change-playbooks.md) for full checklists per extension type.
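The factory step (step 2) can be sketched in isolation. The `Tool` trait and tool types below are illustrative stand-ins, not ZeroClaw's actual definitions; the point is only the shape of the wiring, where adding a tool is one extra entry in the factory's list.

```rust
// Stand-alone sketch of the factory-registration pattern. `Tool`,
// `ShellTool`, and `HttpGetTool` here are simplified stand-ins for the
// real definitions in src/tools/traits.rs and src/tools/mod.rs.
trait Tool {
    fn name(&self) -> &'static str;
}

struct ShellTool;
impl Tool for ShellTool {
    fn name(&self) -> &'static str {
        "shell"
    }
}

struct HttpGetTool;
impl Tool for HttpGetTool {
    fn name(&self) -> &'static str {
        "http_get"
    }
}

/// Factory in the style of `default_tools()`: registering a new tool
/// means appending one `Box::new(...)` entry here.
fn default_tools() -> Vec<Box<dyn Tool>> {
    vec![Box::new(ShellTool), Box::new(HttpGetTool)]
}

fn main() {
    // The agent discovers tools purely through the factory list.
    let names: Vec<&str> = default_tools().iter().map(|t| t.name()).collect();
    assert!(names.contains(&"http_get"));
    println!("registered tools: {names:?}");
}
```

A factory-wiring test (step 4) then reduces to asserting that the expected names appear exactly once in the factory's output.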
213
third_party/zeroclaw/docs/contributing/label-registry.md
vendored
Normal file
@@ -0,0 +1,213 @@
# Label Registry

Single reference for every label used on PRs and issues. Labels are grouped by category. Each entry lists the label name, definition, and how it is applied.

Sources consolidated here:

- `.github/labeler.yml` (path-label config for `actions/labeler`)
- `.github/label-policy.json` (contributor tier thresholds)
- `docs/contributing/pr-workflow.md` (size, risk, and triage label definitions)
- `docs/contributing/ci-map.md` (automation behavior and high-risk path heuristics)

Note: The CI was simplified to 4 workflows (`ci.yml`, `release.yml`, `ci-full.yml`, `promote-release.yml`). Workflows that previously automated size, risk, contributor tier, and triage labels (`pr-labeler.yml`, `pr-auto-response.yml`, `pr-check-stale.yml`, and supporting scripts) were removed. Only path labels via `pr-path-labeler.yml` are currently automated.

---
## Path labels
|
||||
|
||||
Applied automatically by `pr-path-labeler.yml` using `actions/labeler`. Matches changed files against glob patterns in `.github/labeler.yml`.
|
||||
|
||||
### Base scope labels
|
||||
|
||||
| Label | Matches |
|
||||
|---|---|
|
||||
| `docs` | `docs/**`, `**/*.md`, `**/*.mdx`, `LICENSE`, `.markdownlint-cli2.yaml` |
|
||||
| `dependencies` | `Cargo.toml`, `Cargo.lock`, `deny.toml`, `.github/dependabot.yml` |
|
||||
| `ci` | `.github/**`, `.githooks/**` |
|
||||
| `core` | `src/*.rs` |
|
||||
| `agent` | `src/agent/**` |
|
||||
| `channel` | `src/channels/**` |
|
||||
| `gateway` | `src/gateway/**` |
|
||||
| `config` | `src/config/**` |
|
||||
| `cron` | `src/cron/**` |
|
||||
| `daemon` | `src/daemon/**` |
|
||||
| `doctor` | `src/doctor/**` |
|
||||
| `health` | `src/health/**` |
|
||||
| `heartbeat` | `src/heartbeat/**` |
|
||||
| `integration` | `src/integrations/**` |
|
||||
| `memory` | `src/memory/**` |
|
||||
| `security` | `src/security/**` |
|
||||
| `runtime` | `src/runtime/**` |
|
||||
| `onboard` | `src/onboard/**` |
|
||||
| `provider` | `src/providers/**` |
|
||||
| `service` | `src/service/**` |
|
||||
| `skillforge` | `src/skillforge/**` |
|
||||
| `skills` | `src/skills/**` |
|
||||
| `tool` | `src/tools/**` |
|
||||
| `tunnel` | `src/tunnel/**` |
|
||||
| `observability` | `src/observability/**` |
|
||||
| `tests` | `tests/**` |
|
||||
| `scripts` | `scripts/**` |
|
||||
| `dev` | `dev/**` |
|
||||
|
||||
### Per-component channel labels
|
||||
|
||||
Each channel gets a specific label in addition to the base `channel` label.
|
||||
|
||||
| Label | Matches |
|
||||
|---|---|
|
||||
| `channel:bluesky` | `bluesky.rs` |
|
||||
| `channel:clawdtalk` | `clawdtalk.rs` |
|
||||
| `channel:cli` | `cli.rs` |
|
||||
| `channel:dingtalk` | `dingtalk.rs` |
|
||||
| `channel:discord` | `discord.rs`, `discord_history.rs` |
|
||||
| `channel:email` | `email_channel.rs`, `gmail_push.rs` |
|
||||
| `channel:imessage` | `imessage.rs` |
|
||||
| `channel:irc` | `irc.rs` |
|
||||
| `channel:lark` | `lark.rs` |
|
||||
| `channel:linq` | `linq.rs` |
|
||||
| `channel:matrix` | `matrix.rs` |
|
||||
| `channel:mattermost` | `mattermost.rs` |
|
||||
| `channel:mochat` | `mochat.rs` |
|
||||
| `channel:mqtt` | `mqtt.rs` |
|
||||
| `channel:nextcloud-talk` | `nextcloud_talk.rs` |
|
||||
| `channel:nostr` | `nostr.rs` |
|
||||
| `channel:notion` | `notion.rs` |
|
||||
| `channel:qq` | `qq.rs` |
|
||||
| `channel:reddit` | `reddit.rs` |
|
||||
| `channel:signal` | `signal.rs` |
|
||||
| `channel:slack` | `slack.rs` |
|
||||
| `channel:telegram` | `telegram.rs` |
|
||||
| `channel:twitter` | `twitter.rs` |
|
||||
| `channel:wati` | `wati.rs` |
|
||||
| `channel:webhook` | `webhook.rs` |
|
||||
| `channel:wecom` | `wecom.rs` |
|
||||
| `channel:whatsapp` | `whatsapp.rs`, `whatsapp_storage.rs`, `whatsapp_web.rs` |
|
||||
|
||||
### Per-component provider labels
|
||||
|
||||
| Label | Matches |
|
||||
|---|---|
|
||||
| `provider:anthropic` | `anthropic.rs` |
|
||||
| `provider:azure-openai` | `azure_openai.rs` |
|
||||
| `provider:bedrock` | `bedrock.rs` |
|
||||
| `provider:claude-code` | `claude_code.rs` |
|
||||
| `provider:compatible` | `compatible.rs` |
|
||||
| `provider:copilot` | `copilot.rs` |
|
||||
| `provider:gemini` | `gemini.rs`, `gemini_cli.rs` |
|
||||
| `provider:glm` | `glm.rs` |
|
||||
| `provider:kilocli` | `kilocli.rs` |
|
||||
| `provider:ollama` | `ollama.rs` |
|
||||
| `provider:openai` | `openai.rs`, `openai_codex.rs` |
|
||||
| `provider:openrouter` | `openrouter.rs` |
|
||||
| `provider:telnyx` | `telnyx.rs` |
|
||||
|
||||
### Per-group tool labels
|
||||
|
||||
Tools are grouped by logical function rather than one label per file.
|
||||
|
||||
| Label | Matches |
|
||||
|---|---|
|
||||
| `tool:browser` | `browser.rs`, `browser_delegate.rs`, `browser_open.rs`, `text_browser.rs`, `screenshot.rs` |
|
||||
| `tool:cloud` | `cloud_ops.rs`, `cloud_patterns.rs` |
|
||||
| `tool:composio` | `composio.rs` |
|
||||
| `tool:cron` | `cron_add.rs`, `cron_list.rs`, `cron_remove.rs`, `cron_run.rs`, `cron_runs.rs`, `cron_update.rs` |
|
||||
| `tool:file` | `file_edit.rs`, `file_read.rs`, `file_write.rs`, `glob_search.rs`, `content_search.rs` |
|
||||
| `tool:google-workspace` | `google_workspace.rs` |
|
||||
| `tool:mcp` | `mcp_client.rs`, `mcp_deferred.rs`, `mcp_protocol.rs`, `mcp_tool.rs`, `mcp_transport.rs` |
|
||||
| `tool:memory` | `memory_forget.rs`, `memory_recall.rs`, `memory_store.rs` |
|
||||
| `tool:microsoft365` | `microsoft365/**` |
|
||||
| `tool:security` | `security_ops.rs`, `verifiable_intent.rs` |
|
||||
| `tool:shell` | `shell.rs`, `node_tool.rs`, `cli_discovery.rs` |
|
||||
| `tool:sop` | `sop_advance.rs`, `sop_approve.rs`, `sop_execute.rs`, `sop_list.rs`, `sop_status.rs` |
|
||||
| `tool:web` | `web_fetch.rs`, `web_search_tool.rs`, `web_search_provider_routing.rs`, `http_request.rs` |
|
||||
|
||||
---

## Size labels

Defined in `pr-workflow.md` §6.1. Based on effective changed line count, normalized for docs-only and lockfile-heavy PRs.

| Label | Threshold |
|---|---|
| `size: XS` | <= 80 lines |
| `size: S` | <= 250 lines |
| `size: M` | <= 500 lines |
| `size: L` | <= 1000 lines |
| `size: XL` | > 1000 lines |

**Applied by:** manual. The workflows that previously computed size labels (`pr-labeler.yml` and supporting scripts) were removed during CI simplification.
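For anyone applying size labels by hand, the thresholds above reduce to a small lookup. A minimal sketch; the function name and structure are illustrative and not taken from the removed `pr-labeler.yml`:

```python
# Map an effective changed-line count to a size label, per the thresholds above.
# Assumes the count is already normalized for docs-only/lockfile-heavy PRs.
SIZE_TIERS = [(80, "size: XS"), (250, "size: S"), (500, "size: M"), (1000, "size: L")]

def size_label(effective_lines: int) -> str:
    for limit, label in SIZE_TIERS:
        if effective_lines <= limit:
            return label
    return "size: XL"
```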
---

## Risk labels

Defined in `pr-workflow.md` §13.2 and `ci-map.md`. Based on a heuristic combining touched paths and change size.

| Label | Meaning |
|---|---|
| `risk: low` | No high-risk paths touched, small change |
| `risk: medium` | Behavioral `src/**` changes without boundary/security impact |
| `risk: high` | Touches high-risk paths (see below) or large security-adjacent change |
| `risk: manual` | Maintainer override that freezes automated risk recalculation |

High-risk paths: `src/security/**`, `src/runtime/**`, `src/gateway/**`, `src/tools/**`, `.github/workflows/**`.

The boundary between low and medium is not formally defined beyond "no high-risk paths."

**Applied by:** manual. Previously automated via `pr-labeler.yml`; removed during CI simplification.
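The heuristic above can be sketched as a pure function. This is an illustrative reading of the table, not the removed labeler's actual logic; in particular the size cutoff is an assumption, since the low/medium boundary is not formally defined:

```python
# High-risk path prefixes mirroring the glob list above.
HIGH_RISK_PREFIXES = ("src/security/", "src/runtime/", "src/gateway/",
                      "src/tools/", ".github/workflows/")

def risk_label(touched_paths: list[str], effective_lines: int) -> str:
    # The 500-line medium cutoff is illustrative only (not defined by the registry).
    if any(p.startswith(HIGH_RISK_PREFIXES) for p in touched_paths):
        return "risk: high"
    if any(p.startswith("src/") for p in touched_paths) or effective_lines > 500:
        return "risk: medium"
    return "risk: low"
```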
---

## Contributor tier labels

Defined in `.github/label-policy.json`. Based on the author's merged PR count queried from the GitHub API.

| Label | Minimum merged PRs |
|---|---|
| `trusted contributor` | 5 |
| `experienced contributor` | 10 |
| `principal contributor` | 20 |
| `distinguished contributor` | 50 |

**Applied by:** manual. Previously automated via `pr-labeler.yml` and `pr-auto-response.yml`; removed during CI simplification.
---

## Response and triage labels

Defined in `pr-workflow.md` §8. Applied manually.

| Label | Purpose | Applied by |
|---|---|---|
| `r:needs-repro` | Incomplete bug report; request deterministic repro | Manual |
| `r:support` | Usage/help item better handled outside bug backlog | Manual |
| `invalid` | Not a valid bug/feature request | Manual |
| `duplicate` | Duplicate of existing issue | Manual |
| `stale-candidate` | Dormant PR/issue; candidate for closing | Manual |
| `superseded` | Replaced by a newer PR | Manual |
| `no-stale` | Exempt from stale automation; accepted but blocked work | Manual |

**Automation:** none currently. The workflows that handled label-driven issue closing (`pr-auto-response.yml`) and stale detection (`pr-check-stale.yml`) were removed during CI simplification.
---

## Implementation status

| Category | Count | Automated | Workflow |
|---|---|---|---|
| Path (base scope) | 27 | Yes | `pr-path-labeler.yml` |
| Path (per-component) | 52 | Yes | `pr-path-labeler.yml` |
| Size | 5 | No | Manual |
| Risk | 4 | No | Manual |
| Contributor tier | 4 | No | Manual |
| Response/triage | 7 | No | Manual |
| **Total** | **99** | | |
---

## Maintenance

- **Owner:** maintainers responsible for label policy and PR triage automation.
- **Update trigger:** new channels, providers, or tools added to the source tree; label policy changes; triage workflow changes.
- **Source of truth:** this document consolidates definitions from the four source files listed at the top. When definitions conflict, update the source file first, then sync this registry.

239
third_party/zeroclaw/docs/contributing/langgraph-integration.md
vendored
Normal file
@@ -0,0 +1,239 @@

# LangGraph Integration Guide

This guide explains how to use the `zeroclaw-tools` Python package for consistent tool calling with any OpenAI-compatible LLM provider.

## Background

Some LLM providers, particularly Chinese models like GLM-5 (Zhipu AI), have inconsistent tool calling behavior when using text-based tool invocation. ZeroClaw's Rust core uses structured tool calling via the OpenAI API format, but some models respond better to a different approach.

LangGraph provides a stateful graph execution engine that guarantees consistent tool calling behavior regardless of the underlying model's native capabilities.

## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                      Your Application                       │
├─────────────────────────────────────────────────────────────┤
│                    zeroclaw-tools Agent                     │
│                                                             │
│   ┌─────────────────────────────────────────────────────┐   │
│   │               LangGraph StateGraph                  │   │
│   │                                                     │   │
│   │   ┌────────────┐            ┌────────────┐          │   │
│   │   │   Agent    │  ──────▶   │   Tools    │          │   │
│   │   │   Node     │  ◀──────   │   Node     │          │   │
│   │   └────────────┘            └────────────┘          │   │
│   │         │                         │                 │   │
│   │         ▼                         ▼                 │   │
│   │    [Continue?]             [Execute Tool]           │   │
│   │         │                         │                 │   │
│   │     Yes │ No                Result│                 │   │
│   │         ▼                         ▼                 │   │
│   │       [END]               [Back to Agent]           │   │
│   │                                                     │   │
│   └─────────────────────────────────────────────────────┘   │
│                                                             │
├─────────────────────────────────────────────────────────────┤
│              OpenAI-Compatible LLM Provider                 │
│      (Z.AI, OpenRouter, Groq, DeepSeek, Ollama, etc.)       │
└─────────────────────────────────────────────────────────────┘
```
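The loop in the diagram can be sketched in plain Python to make the control flow concrete. This is an illustrative simulation only, not the zeroclaw-tools or LangGraph implementation; all names here (`run_graph`, the `tool_call` message shape) are hypothetical:

```python
# Illustrative simulation of the Agent/Tools loop shown above.
# `model` stands in for an LLM call returning a message dict.
def run_graph(model, tools: dict, user_message: str, max_steps: int = 10):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = model(messages)                 # Agent node
        messages.append(reply)
        if "tool_call" not in reply:            # [Continue?] -> No -> [END]
            return messages
        name, args = reply["tool_call"]         # Tools node
        result = tools[name](*args)             # [Execute Tool]
        messages.append({"role": "tool", "content": result})  # back to Agent
    return messages
```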

## Quick Start

### Installation

```bash
pip install zeroclaw-tools
```

### Basic Usage

```python
import asyncio
from zeroclaw_tools import create_agent, shell, file_read, file_write
from langchain_core.messages import HumanMessage

async def main():
    agent = create_agent(
        tools=[shell, file_read, file_write],
        model="glm-5",
        api_key="your-api-key",
        base_url="https://api.z.ai/api/coding/paas/v4"
    )

    result = await agent.ainvoke({
        "messages": [HumanMessage(content="Read /etc/hostname and tell me the machine name")]
    })

    print(result["messages"][-1].content)

asyncio.run(main())
```

## Available Tools

### Core Tools

| Tool | Description |
|------|-------------|
| `shell` | Execute shell commands |
| `file_read` | Read file contents |
| `file_write` | Write content to files |

### Extended Tools

| Tool | Description |
|------|-------------|
| `web_search` | Search the web (requires `BRAVE_API_KEY`) |
| `http_request` | Make HTTP requests |
| `memory_store` | Store data in persistent memory |
| `memory_recall` | Recall stored data |

## Custom Tools

Create your own tools with the `@tool` decorator:

```python
from zeroclaw_tools import tool, create_agent

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # Your implementation
    return f"Weather in {city}: Sunny, 25°C"

@tool
def query_database(sql: str) -> str:
    """Execute a SQL query and return results."""
    # Your implementation
    return "Query returned 5 rows"

agent = create_agent(
    tools=[get_weather, query_database],
    model="glm-5",
    api_key="your-key"
)
```

## Provider Configuration

### Z.AI / GLM-5

```python
agent = create_agent(
    model="glm-5",
    api_key="your-zhipu-key",
    base_url="https://api.z.ai/api/coding/paas/v4"
)
```

### OpenRouter

```python
agent = create_agent(
    model="anthropic/claude-sonnet-4-6",
    api_key="your-openrouter-key",
    base_url="https://openrouter.ai/api/v1"
)
```

### Groq

```python
agent = create_agent(
    model="llama-3.3-70b-versatile",
    api_key="your-groq-key",
    base_url="https://api.groq.com/openai/v1"
)
```

### Ollama (Local)

```python
agent = create_agent(
    model="llama3.2",
    base_url="http://localhost:11434/v1"
)
```

## Discord Bot Integration

```python
import os
from zeroclaw_tools.integrations import DiscordBot

bot = DiscordBot(
    token=os.environ["DISCORD_TOKEN"],
    guild_id=123456789,  # Your Discord server ID
    allowed_users=["123456789"],  # User IDs that can use the bot
    api_key=os.environ["API_KEY"],
    model="glm-5"
)

bot.run()
```

## CLI Usage

```bash
# Set environment variables
export API_KEY="your-key"
export BRAVE_API_KEY="your-brave-key"  # Optional, for web search

# Single message
zeroclaw-tools "What is the current date?"

# Interactive mode
zeroclaw-tools -i
```

## Comparison with Rust ZeroClaw

| Aspect | Rust ZeroClaw | zeroclaw-tools |
|--------|---------------|----------------|
| **Performance** | Ultra-fast (~10ms startup) | Python startup (~500ms) |
| **Memory** | <5 MB | ~50 MB |
| **Binary size** | ~3.4 MB | pip package |
| **Tool consistency** | Model-dependent | LangGraph guarantees |
| **Extensibility** | Rust traits | Python decorators |
| **Ecosystem** | Rust crates | PyPI packages |

**When to use Rust ZeroClaw:**

- Production edge deployments
- Resource-constrained environments (Raspberry Pi, etc.)
- Maximum performance requirements

**When to use zeroclaw-tools:**

- Models with inconsistent native tool calling
- Python-centric development
- Rapid prototyping
- Integration with the Python ML ecosystem

## Troubleshooting

### "API key required" error

Set the `API_KEY` environment variable or pass `api_key` to `create_agent()`.

### Tool calls not executing

Ensure your model supports function calling. Some older models may not support tools.

### Rate limiting

Add delays between calls or implement your own rate limiting:

```python
import asyncio

# Run inside an async function so `await` is valid.
async def process(messages):
    for message in messages:
        result = await agent.ainvoke({"messages": [message]})
        await asyncio.sleep(1)  # Rate limit
```

## Related Projects

- [rs-graph-llm](https://github.com/a-agmon/rs-graph-llm) - Rust LangGraph alternative
- [langchain-rust](https://github.com/Abraxas-365/langchain-rust) - LangChain for Rust
- [llm-chain](https://github.com/sobelio/llm-chain) - LLM chains in Rust

86
third_party/zeroclaw/docs/contributing/pr-discipline.md
vendored
Normal file
@@ -0,0 +1,86 @@

# PR Discipline

Rules for pull request quality, attribution, privacy, and handoff in ZeroClaw.

## Privacy / Sensitive Data (Required)

Treat privacy and neutrality as merge gates, not best-effort guidelines.

- Never commit personal or sensitive data in code, docs, tests, fixtures, snapshots, logs, examples, or commit messages.
- Prohibited data includes (non-exhaustive): real names, personal emails, phone numbers, addresses, access tokens, API keys, credentials, IDs, and private URLs.
- Use neutral project-scoped placeholders (e.g., `user_a`, `test_user`, `project_bot`, `example.com`) instead of real identity data.
- Test names/messages/fixtures must be impersonal and system-focused; avoid first-person or identity-specific language.
- If identity-like context is unavoidable, use ZeroClaw-scoped roles/labels only (e.g., `ZeroClawAgent`, `ZeroClawOperator`, `zeroclaw_user`).
- Recommended identity-safe naming palette:
  - actor labels: `ZeroClawAgent`, `ZeroClawOperator`, `ZeroClawMaintainer`, `zeroclaw_user`
  - service/runtime labels: `zeroclaw_bot`, `zeroclaw_service`, `zeroclaw_runtime`, `zeroclaw_node`
  - environment labels: `zeroclaw_project`, `zeroclaw_workspace`, `zeroclaw_channel`
- If reproducing external incidents, redact and anonymize all payloads before committing.
- Before push, review `git diff --cached` specifically for accidental sensitive strings and identity leakage.
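The pre-push review of the staged diff can be partially automated. A minimal sketch of a scanner over `git diff --cached` output; the patterns are illustrative and non-exhaustive, and this does not replace manual review:

```python
import re

# Illustrative sensitive-string patterns; extend per project policy.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*\S+"),  # credentials
    re.compile(r"[\w.+-]+@(?!users\.noreply\.github\.com|example\.com)[\w-]+\.[\w.]+"),  # emails
]

def scan_staged_diff(diff_text: str) -> list[str]:
    """Return added lines (starting with '+') that match a sensitive pattern."""
    hits = []
    for line in diff_text.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            if any(p.search(line) for p in SENSITIVE_PATTERNS):
                hits.append(line)
    return hits
```

Feed it the output of `git diff --cached` and block the push when the result is non-empty.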
## Superseded-PR Attribution (Required)

When a PR supersedes another contributor's PR and carries forward substantive code or design decisions, preserve authorship explicitly.

- In the integrating commit message, add one `Co-authored-by: Name <email>` trailer per superseded contributor whose work is materially incorporated.
- Use a GitHub-recognized email (`<login@users.noreply.github.com>` or the contributor's verified commit email).
- Keep trailers on their own lines after a blank line at commit-message end; never encode them as escaped `\\n` text.
- In the PR body, list superseded PR links and briefly state what was incorporated from each.
- If no actual code/design was incorporated (only inspiration), do not use `Co-authored-by`; give credit in PR notes instead.

## Superseded-PR Templates

### PR Title/Body Template

- Recommended title format: `feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]`
- In the PR body, include:

```md
## Supersedes
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>

## Integrated Scope
- From #<pr_a>: <what was materially incorporated>
- From #<pr_b>: <what was materially incorporated>

## Attribution
- Co-authored-by trailers added for materially incorporated contributors: Yes/No
- If No, explain why

## Non-goals
- <explicitly list what was not carried over>

## Risk and Rollback
- Risk: <summary>
- Rollback: <revert commit/PR strategy>
```

### Commit Message Template

```text
feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]

<one-paragraph summary of integrated outcome>

Supersedes:
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>

Integrated scope:
- <subsystem_or_feature_a>: from #<pr_x>
- <subsystem_or_feature_b>: from #<pr_y>

Co-authored-by: <Name A> <login_a@users.noreply.github.com>
Co-authored-by: <Name B> <login_b@users.noreply.github.com>
```

## Handoff Template (Agent -> Agent / Maintainer)

When handing off work, include:

1. What changed
2. What did not change
3. Validation run and results
4. Remaining risks / unknowns
5. Next recommended action

366
third_party/zeroclaw/docs/contributing/pr-workflow.md
vendored
Normal file
@@ -0,0 +1,366 @@

# ZeroClaw PR Workflow (High-Volume Collaboration)

This document defines how ZeroClaw handles high PR volume while maintaining:

- High performance
- High efficiency
- High stability
- High extensibility
- High sustainability
- High security

Related references:

- [`docs/README.md`](../README.md) for documentation taxonomy and navigation.
- [`ci-map.md`](./ci-map.md) for per-workflow ownership, triggers, and triage flow.
- [`reviewer-playbook.md`](./reviewer-playbook.md) for day-to-day reviewer execution.

## 0. Summary

- **Purpose:** provide a deterministic, risk-based PR operating model for high-throughput collaboration.
- **Audience:** contributors, maintainers, and agent-assisted reviewers.
- **Scope:** repository settings, PR lifecycle, readiness contracts, risk routing, queue discipline, and recovery protocol.
- **Non-goals:** replacing branch protection configuration or CI workflow source files as implementation authority.
---

## 1. Fast Path by PR Situation

Use this section to route quickly before full deep review.

### 1.1 Intake is incomplete

1. Request template completion and missing evidence in one checklist comment.
2. Stop deep review until intake blockers are resolved.

Go to:

- [Section 5.1](#51-definition-of-ready-dor-before-requesting-review)

### 1.2 `CI Required Gate` failing

1. Route the failure through the CI map and fix deterministic gates first.
2. Re-evaluate risk only after CI returns a coherent signal.

Go to:

- [ci-map.md](./ci-map.md)
- [Section 4.2](#42-step-b-validation)

### 1.3 High-risk path touched

1. Escalate to the deep review lane.
2. Require explicit rollback, failure-mode evidence, and security boundary checks.

Go to:

- [Section 9](#9-security-and-stability-rules)
- [reviewer-playbook.md](./reviewer-playbook.md)

### 1.4 PR is superseded or duplicate

1. Require explicit supersede linkage and queue cleanup.
2. Close the superseded PR after maintainer confirmation.

Go to:

- [Section 8.2](#82-backlog-pressure-controls)
---

## 2. Governance Goals and Control Loop

### 2.1 Governance goals

1. Keep merge throughput predictable under heavy PR load.
2. Keep CI signal quality high (fast feedback, low false positives).
3. Keep security review explicit for risky surfaces.
4. Keep changes easy to reason about and easy to revert.
5. Keep repository artifacts free of personal/sensitive data leakage.

### 2.2 Governance design logic (control loop)

This workflow is intentionally layered to reduce reviewer load while keeping accountability clear:

1. **Intake classification:** path/size/risk/module labels route the PR to the right review depth.
2. **Deterministic validation:** the merge gate depends on reproducible checks, not subjective comments.
3. **Risk-based review depth:** high-risk paths trigger deep review; low-risk paths stay fast.
4. **Rollback-first merge contract:** every merge path includes concrete recovery steps.

Automation assists with triage and guardrails, but final merge accountability remains with human maintainers and PR authors.
---

## 3. Required Repository Settings

Maintain these branch protection rules on `master`:

- Require status checks before merge.
- Require the `CI Required Gate` check.
- Require pull request reviews before merge.
- Require CODEOWNERS review for protected paths.
- For `.github/workflows/**`, require owner approval via `CI Required Gate` (`WORKFLOW_OWNER_LOGINS`) and keep branch/ruleset bypass limited to org owners.
- The default workflow-owner allowlist is configured via the `WORKFLOW_OWNER_LOGINS` repository variable (see CODEOWNERS for current maintainers).
- Dismiss stale approvals when new commits are pushed.
- Restrict force-push on protected branches.
- All contributor PRs target `master` directly.
---

## 4. PR Lifecycle Runbook

### 4.1 Step A: Intake

- Contributor opens the PR with the full `.github/pull_request_template.md`.
- `PR Labeler` applies scope/path labels + size labels + risk labels + module labels (for example `channel:telegram`, `provider:kimi`, `tool:shell`) and contributor tiers by merged PR count (`trusted` >=5, `experienced` >=10, `principal` >=20, `distinguished` >=50), while de-duplicating less-specific scope labels when a more specific module label is present.
- For all module prefixes, module labels are compacted to reduce noise: one specific module keeps `prefix:component`, but multiple specifics collapse to the base scope label `prefix`.
- Label ordering is priority-first: `risk:*` -> `size:*` -> contributor tier -> module/path labels.
- Maintainers can run `PR Labeler` manually (`workflow_dispatch`) in `audit` mode for drift visibility or `repair` mode to normalize managed label metadata repository-wide.
- Hovering a label in GitHub shows its auto-managed description (rule/threshold summary).
- Managed label colors are arranged by display order to create a smooth gradient across long label rows.
- `PR Auto Responder` posts first-time guidance, handles label-driven routing for low-signal items, and auto-applies issue contributor tiers using the same thresholds as `PR Labeler` (`trusted` >=5, `experienced` >=10, `principal` >=20, `distinguished` >=50).
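The module-label compaction rule described above can be sketched as a small function. This is an illustrative reading of the rule, not the labeler's actual implementation; it assumes the input contains only module labels of the form `prefix:component`:

```python
from collections import defaultdict

def compact_module_labels(labels: list[str]) -> list[str]:
    """One specific module keeps `prefix:component`; multiple collapse to `prefix`."""
    by_prefix = defaultdict(list)
    for label in labels:
        prefix, _, _component = label.partition(":")
        by_prefix[prefix].append(label)
    out = []
    for prefix, group in by_prefix.items():
        out.extend(group if len(group) == 1 else [prefix])
    return out
```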

### 4.2 Step B: Validation

- `CI Required Gate` is the merge gate.
- Docs-only PRs use the fast path and skip heavy Rust jobs.
- Non-doc PRs must pass lint, tests, and the release build smoke check.
- Rust-impacting PRs use the same required gate set as `master` pushes (no PR build-only shortcut).

### 4.3 Step C: Review

- Reviewers prioritize by risk and size labels.
- Security-sensitive paths (`src/security`, `src/runtime`, `src/gateway`, and CI workflows) require maintainer attention.
- Large PRs (`size: L`/`size: XL`) should be split unless strongly justified.

### 4.4 Step D: Merge

- Prefer **squash merge** to keep history compact.
- PR titles should follow Conventional Commit style.
- Merge only when the rollback path is documented.
---

## 5. PR Readiness Contracts (DoR / DoD)

### 5.1 Definition of Ready (DoR) before requesting review

- PR template fully completed.
- Scope boundary is explicit (what changed / what did not).
- Validation evidence attached (not just "CI will check").
- Security and rollback fields completed for risky paths.
- Privacy/data-hygiene checks are completed and test language is neutral/project-scoped.
- If identity-like wording appears in tests/examples, it is normalized to ZeroClaw/project-native labels.

### 5.2 Definition of Done (DoD) merge-ready

- `CI Required Gate` is green.
- Required reviewers approved (including CODEOWNERS paths).
- Risk class labels match touched paths.
- Migration/compatibility impact is documented.
- Rollback path is concrete and fast.
---

## 6. PR Size and Batching Policy

### 6.1 Size tiers

- `size: XS` <= 80 changed lines
- `size: S` <= 250 changed lines
- `size: M` <= 500 changed lines
- `size: L` <= 1000 changed lines
- `size: XL` > 1000 changed lines

### 6.2 Policy

- Target `XS/S/M` by default.
- `L/XL` PRs need explicit justification and tighter test evidence.
- If a large feature is unavoidable, split it into stacked PRs.

### 6.3 Automation behavior

- `PR Labeler` applies `size:*` labels from effective changed lines.
- Docs-only/lockfile-heavy PRs are normalized to avoid size inflation.
---

## 7. AI/Agent Contribution Policy

AI-assisted PRs are welcome, and review can also be agent-assisted.

### 7.1 Required

1. Clear PR summary with scope boundary.
2. Explicit test/validation evidence.
3. Security impact and rollback notes for risky changes.

### 7.2 Recommended

1. Brief tool/workflow notes when automation materially influenced the change.
2. Optional prompt/plan snippets for reproducibility.

We do **not** require contributors to quantify AI-vs-human line ownership.

### 7.3 Review emphasis for AI-heavy PRs

- Contract compatibility.
- Security boundaries.
- Error handling and fallback behavior.
- Performance and memory regressions.
---

## 8. Review SLA and Queue Discipline

- First maintainer triage target: within 48 hours.
- If a PR is blocked, the maintainer leaves one actionable checklist.
- `stale` automation is used to keep the queue healthy; maintainers can apply `no-stale` when needed.
- `pr-hygiene` automation checks open PRs every 12 hours and posts a nudge when a PR has no new commits for 48+ hours and is either behind `master` or missing/failing `CI Required Gate` on the head commit.
### 8.1 Queue budget controls

- Use a review queue budget: limit concurrent deep-review PRs per maintainer and keep the rest in triage state.
- For stacked work, require explicit `Depends on #...` so review order is deterministic.

### 8.2 Backlog pressure controls

- If a new PR replaces an older open PR, require `Supersedes #...` and close the older one after maintainer confirmation.
- Mark dormant/redundant PRs with `stale-candidate` or `superseded` to reduce duplicate review effort.

### 8.3 Issue triage discipline

- `r:needs-repro` for incomplete bug reports (request a deterministic repro before deep triage).
- `r:support` for usage/help items better handled outside the bug backlog.
- `invalid` / `duplicate` labels trigger **issue-only** closing automation with guidance.

### 8.4 Automation side-effect guards

- `PR Auto Responder` deduplicates label-based comments to avoid spam.
- Automated close routes are limited to issues, not PRs.
- Maintainers can freeze automated risk recalculation with `risk: manual` when context demands a human override.
---

## 9. Security and Stability Rules

Changes in these areas require stricter review and stronger test evidence:

- `src/security/**`
- Runtime process management.
- Gateway ingress/authentication behavior (`src/gateway/**`).
- Filesystem access boundaries.
- Network/authentication behavior.
- GitHub workflows and the release pipeline.
- Tools with execution capability (`src/tools/**`).

### 9.1 Minimum for risky PRs

- Threat/risk statement.
- Mitigation notes.
- Rollback steps.

### 9.2 Recommended for high-risk PRs

- Include a focused test proving boundary behavior.
- Include one explicit failure-mode scenario and the expected degradation.

For agent-assisted contributions, reviewers should also verify that the author demonstrates understanding of runtime behavior and blast radius.
---

## 10. Failure Recovery Protocol

If a merged PR causes regressions:

1. Revert the PR immediately on `master`.
2. Open a follow-up issue with root-cause analysis.
3. Re-introduce the fix only with regression tests.

Prefer fast restoration of service quality over delayed perfect fixes.
---

## 11. Maintainer Merge Checklist

- Scope is focused and understandable.
- CI gate is green.
- Docs-quality checks are green when docs changed.
- Security impact fields are complete.
- Privacy/data-hygiene fields are complete and evidence is redacted/anonymized.
- Agent workflow notes are sufficient for reproducibility (if automation was used).
- Rollback plan is explicit.
- Commit title follows Conventional Commits.
---

## 12. Agent Review Operating Model

To keep review quality stable under high PR volume, use a two-lane review model.

### 12.1 Lane A: fast triage (agent-friendly)

- Confirm PR template completeness.
- Confirm the CI gate signal (`CI Required Gate`).
- Confirm the risk class via labels and touched paths.
- Confirm a rollback statement exists.
- Confirm the privacy/data-hygiene section and neutral wording requirements are satisfied.
- Confirm any required identity-like wording uses ZeroClaw/project-native terminology.

### 12.2 Lane B: deep review (risk-based)

Required for high-risk changes (security/runtime/gateway/CI):

- Validate threat model assumptions.
- Validate failure mode and degradation behavior.
- Validate backward compatibility and migration impact.
- Validate observability/logging impact.
---

## 13. Queue Priority and Label Discipline

### 13.1 Triage order recommendation

1. `size: XS`/`size: S` + bug/security fixes.
2. `size: M` focused changes.
3. `size: L`/`size: XL` split requests or staged review.

### 13.2 Label discipline

- Path labels identify subsystem ownership quickly.
- Size labels drive batching strategy.
- Risk labels drive review depth (`risk: low/medium/high`).
- Module labels (`<module>: <component>`) improve reviewer routing for integration-specific changes and newly added modules.
- `risk: manual` allows maintainers to preserve a human risk judgment when automation lacks context.
- `no-stale` is reserved for accepted-but-blocked work.
---

## 14. Agent Handoff Contract

When one agent hands off to another (or to a maintainer), include:

1. Scope boundary (what changed / what did not).
2. Validation evidence.
3. Open risks and unknowns.
4. Suggested next action.

This keeps context loss low and avoids repeated deep dives.
---

## 15. Related Docs

- [README.md](../README.md) — documentation taxonomy and navigation.
- [ci-map.md](./ci-map.md) — CI workflow ownership and triage map.
- [reviewer-playbook.md](./reviewer-playbook.md) — reviewer execution model.
- [actions-source-policy.md](./actions-source-policy.md) — action source allowlist policy.
---

## 16. Maintenance Notes

- **Owner:** maintainers responsible for collaboration governance and merge quality.
- **Update trigger:** branch protection changes, label/risk policy changes, queue governance updates, or agent review process changes.
- **Last reviewed:** 2026-02-18.

170
third_party/zeroclaw/docs/contributing/release-process.md
vendored
Normal file
@@ -0,0 +1,170 @@

# ZeroClaw Release Process

This runbook defines the maintainers' standard release flow.

Last verified: **February 21, 2026**.

## Release Goals

- Keep releases predictable and repeatable.
- Publish only from code already in `master`.
- Verify multi-target artifacts before publishing.
- Keep the release cadence regular even with high PR volume.

## Standard Cadence

- Patch/minor releases: weekly or bi-weekly.
- Emergency security fixes: out-of-band.
- Never wait for very large commit batches to accumulate.
## Workflow Contract
|
||||
|
||||
Release automation lives in:
|
||||
|
||||
- `.github/workflows/pub-release.yml`
|
||||
- `.github/workflows/pub-homebrew-core.yml` (manual Homebrew formula PR, bot-owned)
|
||||
- `.github/workflows/pub-scoop.yml` (manual Scoop bucket manifest update)
|
||||
- `.github/workflows/pub-aur.yml` (manual AUR PKGBUILD push)
|
||||
|
||||
Modes:
|
||||
|
||||
- Tag push `v*`: publish mode.
|
||||
- Manual dispatch: verification-only or publish mode.
|
||||
- Weekly schedule: verification-only mode.
|
||||
|
||||
Publish-mode guardrails:
|
||||
|
||||
- Tag must match semver-like format `vX.Y.Z[-suffix]`.
|
||||
- Tag must already exist on origin.
|
||||
- Tag commit must be reachable from `origin/master`.
|
||||
- Matching GHCR image tag (`ghcr.io/<owner>/<repo>:<tag>`) must be available before GitHub Release publish completes.
|
||||
- Artifacts are verified before publish.
|
||||
|
||||
## Maintainer Procedure
|
||||
|
||||
### 1) Preflight on `master`
|
||||
|
||||
1. Ensure required checks are green on latest `master`.
|
||||
2. Confirm no high-priority incidents or known regressions are open.
|
||||
3. Confirm installer and Docker workflows are healthy on recent `master` commits.
|
||||
|
||||
### 2) Run verification build (no publish)
|
||||
|
||||
Run `Pub Release` manually:
|
||||
|
||||
- `publish_release`: `false`
|
||||
- `release_ref`: `master`
|
||||
|
||||
Expected outcome:
|
||||
|
||||
- Full target matrix builds successfully.
|
||||
- `verify-artifacts` confirms all expected archives exist.
|
||||
- No GitHub Release is published.
|
||||
|
||||
### 3) Cut release tag
|
||||
|
||||
From a clean local checkout synced to `origin/master`:
|
||||
|
||||
```bash
|
||||
scripts/release/cut_release_tag.sh vX.Y.Z --push
|
||||
```
|
||||
|
||||
This script enforces:
|
||||
|
||||
- clean working tree
|
||||
- `HEAD == origin/master`
|
||||
- non-duplicate tag
|
||||
- semver-like tag format
|
||||
|
||||
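
The four checks above can be sketched as shell predicates. This is illustrative only; the authoritative logic lives in `scripts/release/cut_release_tag.sh` and may differ in detail:

```shell
# Sketch of the preflight predicates enforced before tagging
# (illustrative; see scripts/release/cut_release_tag.sh for the real checks).

# semver-like tag: vX.Y.Z with an optional -suffix, e.g. v1.4.0-rc.1
valid_release_tag() {
  [[ "$1" =~ ^v[0-9]+\.[0-9]+\.[0-9]+(-[A-Za-z0-9.]+)?$ ]]
}

# clean working tree: no staged or unstaged changes
working_tree_clean() {
  [[ -z "$(git status --porcelain)" ]]
}

# HEAD must be exactly origin/master
head_is_origin_master() {
  [[ "$(git rev-parse HEAD)" == "$(git rev-parse origin/master)" ]]
}

# the tag must not already exist locally
tag_is_new() {
  ! git rev-parse -q --verify "refs/tags/$1" >/dev/null
}
```

Each predicate returns nonzero on failure, so a wrapper can simply chain them with `&&` and abort the tag push on the first violation.
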
### 4) Monitor publish run

After tag push, monitor:

1. `Pub Release` publish mode
2. `Pub Docker Img` publish job

Expected publish outputs:

- release archives
- `SHA256SUMS`
- CycloneDX and SPDX SBOMs
- cosign signatures/certificates
- GitHub Release notes + assets
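
A consumer-side spot check of those outputs might look like the following. The asset file names and the signing identity here are illustrative assumptions, not the exact published names; the demo file only stands in for a real downloaded archive:

```shell
set -euo pipefail
cd "$(mktemp -d)"

# Stand-in for a downloaded release asset (real assets come from the
# GitHub Release page; the name is illustrative).
printf 'demo archive bytes' > zeroclaw-demo.tar.gz
sha256sum zeroclaw-demo.tar.gz > SHA256SUMS

# 1) Checksum verification -- exactly what you would run against the
#    published SHA256SUMS sitting next to the real archives.
#    --ignore-missing checks only the files you actually downloaded.
sha256sum -c SHA256SUMS --ignore-missing

# 2) Keyless cosign verification (needs cosign plus the real .sig/.pem
#    assets, so it is shown commented out; flags follow cosign 2.x):
# cosign verify-blob \
#   --signature zeroclaw-demo.tar.gz.sig \
#   --certificate zeroclaw-demo.tar.gz.pem \
#   --certificate-identity-regexp 'github\.com/zeroclaw-labs/zeroclaw' \
#   --certificate-oidc-issuer https://token.actions.githubusercontent.com \
#   zeroclaw-demo.tar.gz
```
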
### 5) Post-release validation

1. Verify GitHub Release assets are downloadable.
2. Verify GHCR tags for the released version (`vX.Y.Z`) and the release commit SHA tag (`sha-<12>`).
3. Verify install paths that rely on release assets (for example, the bootstrap binary download).
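
Step 2 can be done without pulling the images; the tag values below are placeholders for the real released version and commit SHA:

```shell
# Confirm both expected GHCR tags resolve to a manifest without pulling.
# Replace the version and SHA tags with the real released values.
for tag in v1.4.0 sha-0123456789ab; do
  docker manifest inspect "ghcr.io/zeroclaw-labs/zeroclaw:${tag}" > /dev/null \
    && echo "ok: ${tag}" \
    || echo "missing: ${tag}"
done
```
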
### 6) Publish Homebrew Core formula (bot-owned)

Run `Pub Homebrew Core` manually:

- `release_tag`: `vX.Y.Z`
- `dry_run`: `true` first, then `false`

Required repository settings for non-dry-run:

- secret: `HOMEBREW_CORE_BOT_TOKEN` (token from a dedicated bot account, not a personal maintainer account)
- variable: `HOMEBREW_CORE_BOT_FORK_REPO` (for example `zeroclaw-release-bot/homebrew-core`)
- optional variable: `HOMEBREW_CORE_BOT_EMAIL`

Workflow guardrails:

- release tag must match the `Cargo.toml` version
- formula source URL and SHA256 are updated from the tagged tarball
- formula license is normalized to `Apache-2.0 OR MIT`
- PR is opened from the bot fork into `Homebrew/homebrew-core:master`

### 7) Publish Scoop manifest (Windows)

Run `Pub Scoop Manifest` manually:

- `release_tag`: `vX.Y.Z`
- `dry_run`: `true` first, then `false`

Required repository settings for non-dry-run:

- secret: `SCOOP_BUCKET_TOKEN` (PAT with push access to the bucket repo)
- variable: `SCOOP_BUCKET_REPO` (for example `zeroclaw-labs/scoop-zeroclaw`)

Workflow guardrails:

- release tag must be in `vX.Y.Z` format
- Windows binary SHA256 is extracted from the `SHA256SUMS` release asset
- manifest is pushed to `bucket/zeroclaw.json` in the Scoop bucket repo

### 8) Publish AUR package (Arch Linux)

Run `Pub AUR Package` manually:

- `release_tag`: `vX.Y.Z`
- `dry_run`: `true` first, then `false`

Required repository settings for non-dry-run:

- secret: `AUR_SSH_KEY` (SSH private key registered with AUR)

Workflow guardrails:

- release tag must be in `vX.Y.Z` format
- source tarball SHA256 is computed from the tagged release
- PKGBUILD and `.SRCINFO` are pushed to the AUR `zeroclaw` package
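
Both the Scoop and AUR flows hinge on computing a SHA256 for a released file; the pattern is the same either way. The URL shape below is an illustrative GitHub source-tarball path, not the exact asset name:

```shell
# Compute the SHA256 of a release tarball straight from the download URL
# (URL shape is illustrative; adjust owner/repo/tag as needed).
url="https://github.com/zeroclaw-labs/zeroclaw/archive/refs/tags/v1.4.0.tar.gz"
curl -fsSL "$url" | sha256sum | awk '{print $1}'
```

The same one-liner works for the Windows binary archive when updating the Scoop manifest; in practice the workflows read the value from the published `SHA256SUMS` asset instead of recomputing it.
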
## Emergency / Recovery Path

If a tag-push release fails after artifacts are validated:

1. Fix the workflow or packaging issue on `master`.
2. Re-run manual `Pub Release` in publish mode with:
   - `publish_release=true`
   - `release_tag=<existing tag>`
   - `release_ref` is automatically pinned to `release_tag` in publish mode
3. Re-validate released assets.

## Operational Notes

- Keep release changes small and reversible.
- Prefer one release issue/checklist per version so handoff is clear.
- Avoid publishing from ad-hoc feature branches.
191
third_party/zeroclaw/docs/contributing/reviewer-playbook.md
vendored
Normal file
@@ -0,0 +1,191 @@
# Reviewer Playbook

This playbook is the operational companion to [`pr-workflow.md`](./pr-workflow.md).
For broader documentation navigation, use [`docs/README.md`](../README.md).

## 0. Summary

- **Purpose:** define a deterministic reviewer operating model that keeps review quality high under heavy PR volume.
- **Audience:** maintainers, reviewers, and agent-assisted reviewers.
- **Scope:** intake triage, risk-to-depth routing, deep-review checks, automation overrides, and handoff protocol.
- **Non-goals:** replacing PR policy authority in `CONTRIBUTING.md` or workflow authority in CI files.

---

## 1. Fast Path by Review Situation

Use this section to route quickly before reading the full detail.

### 1.1 Intake fails in the first 5 minutes

1. Leave one actionable checklist comment.
2. Stop deep review until intake blockers are fixed.

Go to:

- [Section 3.1](#31-five-minute-intake-triage)

### 1.2 Risk is high or unclear

1. Treat as `risk: high` by default.
2. Require deep review and explicit rollback evidence.

Go to:

- [Section 2](#2-review-depth-decision-matrix)
- [Section 3.3](#33-deep-review-checklist-high-risk)

### 1.3 Automation output is wrong or noisy

1. Apply the override protocol (`risk: manual`, dedupe comments/labels).
2. Continue review with explicit rationale.

Go to:

- [Section 5](#5-automation-override-protocol)

### 1.4 Need review handoff

1. Hand off with scope/risk/validation/blockers.
2. Assign a concrete next action.

Go to:

- [Section 6](#6-handoff-protocol)

---

## 2. Review Depth Decision Matrix

| Risk label | Typical touched paths | Minimum review depth | Required evidence |
|---|---|---|---|
| `risk: low` | docs/tests/chore, isolated non-runtime changes | 1 reviewer + CI gate | coherent local validation + no behavior ambiguity |
| `risk: medium` | `src/providers/**`, `src/channels/**`, `src/memory/**`, `src/config/**` | 1 subsystem-aware reviewer + behavior verification | focused scenario proof + explicit side effects |
| `risk: high` | `src/security/**`, `src/runtime/**`, `src/gateway/**`, `src/tools/**`, `.github/workflows/**` | fast triage + deep review + rollback readiness | security/failure-mode checks + rollback clarity |

When uncertain, treat as `risk: high`.

If automated risk labeling is contextually wrong, maintainers can apply `risk: manual` and set the final `risk:*` label explicitly.
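
The path-to-risk routing in the matrix can be sketched as a small shell function. The glob patterns come straight from the table; the actual labeler automation may implement this differently:

```shell
# Sketch of the path->risk routing from the decision matrix above
# (patterns mirror the table; the real labeler automation may differ).
risk_for_path() {
  case "$1" in
    src/security/*|src/runtime/*|src/gateway/*|src/tools/*|.github/workflows/*)
      echo "risk: high" ;;
    src/providers/*|src/channels/*|src/memory/*|src/config/*)
      echo "risk: medium" ;;
    docs/*|tests/*)
      echo "risk: low" ;;
    *)
      # when uncertain, treat as high
      echo "risk: high" ;;
  esac
}

risk_for_path "src/security/policy.rs"   # prints "risk: high"
```

A PR touching several paths would take the maximum risk across all touched files, which is why mixed mega-PRs are costly to review.
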
---

## 3. Standard Review Workflow

### 3.1 Five-minute intake triage

For every new PR:

1. Confirm template completeness (`summary`, `validation`, `security`, `rollback`).
2. Confirm labels are present and plausible:
   - `size:*`, `risk:*`
   - scope labels (for example `provider`, `channel`, `security`)
   - module-scoped labels (`channel:*`, `provider:*`, `tool:*`)
   - contributor tier labels when applicable
3. Confirm CI signal status (`CI Required Gate`).
4. Confirm the scope is one concern (reject mixed mega-PRs unless justified).
5. Confirm privacy/data-hygiene and neutral test wording requirements are satisfied.

If any intake requirement fails, leave one actionable checklist comment instead of a deep review.

### 3.2 Fast-lane checklist (all PRs)

- Scope boundary is explicit and believable.
- Validation commands are present and results are coherent.
- User-facing behavior changes are documented.
- Author demonstrates understanding of behavior and blast radius (especially for agent-assisted PRs).
- Rollback path is concrete (not just "revert").
- Compatibility/migration impacts are clear.
- No personal/sensitive data leakage in diff artifacts; examples/tests remain neutral and project-scoped.
- If identity-like wording exists, it uses ZeroClaw/project-native roles (not personal or real-world identities).
- Naming and architecture boundaries follow project contracts (`AGENTS.md`, `CONTRIBUTING.md`).

### 3.3 Deep review checklist (high risk)

For high-risk PRs, verify at least one concrete example in each category:

- **Security boundaries:** deny-by-default behavior preserved, no accidental scope broadening.
- **Failure modes:** error handling is explicit and degrades safely.
- **Contract stability:** CLI/config/API compatibility preserved or migration documented.
- **Observability:** failures are diagnosable without leaking secrets.
- **Rollback safety:** revert path and blast radius are clear.

### 3.4 Review comment outcome style

Prefer checklist-style comments with one explicit outcome:

- **Ready to merge** (say why).
- **Needs author action** (ordered blocker list).
- **Needs deeper security/runtime review** (state the exact risk and requested evidence).

Avoid vague comments that create avoidable back-and-forth latency.

---

## 4. Issue Triage and Backlog Governance

### 4.1 Issue triage label playbook

Use labels to keep the backlog actionable:

- `r:needs-repro` for incomplete bug reports.
- `r:support` for usage/support questions better routed outside the bug backlog.
- `duplicate` / `invalid` for non-actionable duplicates/noise.
- `no-stale` for accepted work waiting on external blockers.
- Request redaction when logs/payloads include personal identifiers or sensitive data.

### 4.2 PR backlog pruning protocol

When review demand exceeds capacity, apply this order:

1. Keep active bug/security PRs (`size: XS/S`) at the top of the queue.
2. Ask overlapping PRs to consolidate; close older ones as `superseded` after acknowledgement.
3. Mark dormant PRs as `stale-candidate` before the stale closure window starts.
4. Require rebase + fresh validation before reopening stale/superseded technical work.

---

## 5. Automation Override Protocol

Use this when automation output creates review side effects:

1. **Incorrect risk label:** add `risk: manual`, then set the intended `risk:*` label.
2. **Incorrect auto-close on issue triage:** reopen the issue, remove the route label, leave one clarifying comment.
3. **Label spam/noise:** keep one canonical maintainer comment and remove redundant route labels.
4. **Ambiguous PR scope:** request a split before deep review.

---

## 6. Handoff Protocol

If handing off review to another maintainer/agent, include:

1. Scope summary.
2. Current risk class and rationale.
3. What has been validated already.
4. Open blockers.
5. Suggested next action.

---

## 7. Weekly Queue Hygiene

- Review the stale queue and apply `no-stale` only to accepted-but-blocked work.
- Prioritize `size: XS/S` bug/security PRs first.
- Convert recurring support issues into docs updates and auto-response guidance.

---

## 8. Related Docs

- [README.md](../README.md) — documentation taxonomy and navigation.
- [pr-workflow.md](./pr-workflow.md) — governance workflow and merge contract.
- [ci-map.md](./ci-map.md) — CI ownership and triage map.
- [actions-source-policy.md](./actions-source-policy.md) — action source allowlist policy.

---

## 9. Maintenance Notes

- **Owner:** maintainers responsible for review quality and queue throughput.
- **Update trigger:** PR policy changes, risk-routing model changes, or automation override behavior changes.
- **Last reviewed:** 2026-02-18.
303
third_party/zeroclaw/docs/contributing/testing-telegram.md
vendored
Normal file
@@ -0,0 +1,303 @@
# 🧪 Test Execution Guide

## Quick Reference

```bash
# Full automated test suite (~2 min)
./tests/telegram/test_telegram_integration.sh

# Quick smoke test (~10 sec)
./tests/telegram/quick_test.sh

# Just compile and unit test (~30 sec)
cargo test telegram --lib
```

## 📝 What Was Created For You

### 1. **test_telegram_integration.sh** (Main Test Suite)
- **20+ automated tests** covering all fixes
- **6 test phases**: code quality, build, config, health, features, manual
- **Colored output** with pass/fail indicators
- **Detailed summary** at the end

```bash
./tests/telegram/test_telegram_integration.sh
```

### 2. **quick_test.sh** (Fast Validation)
- **4 essential tests** for quick feedback
- **<10 second** execution time
- Perfect for **pre-commit** checks

```bash
./tests/telegram/quick_test.sh
```

### 3. **generate_test_messages.py** (Test Helper)
- Generates test messages of various lengths
- Tests message-splitting functionality
- 8 different message types

```bash
# Generate a long message (>4096 chars)
python3 tests/telegram/generate_test_messages.py long

# Show all message types
python3 tests/telegram/generate_test_messages.py all
```

### 4. **TESTING_TELEGRAM.md** (Complete Guide)
- Comprehensive testing documentation
- Troubleshooting guide
- Performance benchmarks
- CI/CD integration examples

## 🚀 Step-by-Step: First Run

### Step 1: Run Automated Tests

```bash
# From the root of your zeroclaw checkout
cd /path/to/zeroclaw

# Make scripts executable (already done)
chmod +x tests/telegram/test_telegram_integration.sh tests/telegram/quick_test.sh

# Run the full test suite
./tests/telegram/test_telegram_integration.sh
```

**Expected output:**
```
⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡⚡

███████╗███████╗██████╗ ██████╗ ██████╗██╗ █████╗ ██╗ ██╗
...

🧪 TELEGRAM INTEGRATION TEST SUITE 🧪

Phase 1: Code Quality Tests
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Test 1: Compiling test suite
✓ PASS: Test suite compiles successfully

Test 2: Running Telegram unit tests
✓ PASS: All Telegram unit tests passed (24 tests)
...

Test Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Tests: 20
Passed: 20
Failed: 0
Warnings: 0

Pass Rate: 100%

✓ ALL AUTOMATED TESTS PASSED! 🎉
```

### Step 2: Configure Telegram (if not done)

```bash
# Guided setup
zeroclaw onboard

# Or channels-only setup
zeroclaw onboard --channels-only
```

When prompted:
1. Select the **Telegram** channel
2. Enter your **bot token** from @BotFather
3. Enter your **Telegram user ID** or username

### Step 3: Verify Health

```bash
zeroclaw channel doctor
```

**Expected output:**
```
🩺 ZeroClaw Channel Doctor

✅ Telegram healthy

Summary: 1 healthy, 0 unhealthy, 0 timed out
```

### Step 4: Manual Testing

#### Test 1: Basic Message

```bash
# Terminal 1: Start the channel
zeroclaw channel start
```

**In Telegram:**
- Find your bot
- Send: `Hello bot!`
- **Verify**: the bot responds within 3 seconds

#### Test 2: Long Message (Split Test)

```bash
# Generate a long message
python3 tests/telegram/generate_test_messages.py long
```

- **Copy the output**
- **Paste it into Telegram** to your bot
- **Verify**:
  - Message is split into 2+ chunks
  - First chunk ends with `(continues...)`
  - Middle chunks have `(continued)` and `(continues...)`
  - Last chunk starts with `(continued)`
  - All chunks arrive in order
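
A quick way to predict the minimum chunk count for any message is ceiling division by Telegram's 4096-character limit. The real splitter may produce more chunks, since it also respects word boundaries and adds continuation markers:

```shell
# Ceiling division: minimum number of Telegram chunks for a message of a
# given length, ignoring the overhead of continuation markers.
min_chunks() {
  local len=$1 limit=4096
  echo $(( (len + limit - 1) / limit ))
}

min_chunks 5000   # prints 2: a >4096-char message needs at least 2 chunks
```
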
#### Test 3: Word Boundary Splitting

```bash
python3 tests/telegram/generate_test_messages.py word
```

- Send it to the bot
- **Verify**: splits happen at word boundaries (not mid-word)

## 🎯 Test Results Checklist

After running all tests, verify:

### Automated Tests
- [ ] ✅ All 20 automated tests passed
- [ ] ✅ Build completed successfully
- [ ] ✅ Binary size <10MB
- [ ] ✅ Health check completes in <5s
- [ ] ✅ No clippy warnings

### Manual Tests
- [ ] ✅ Bot responds to basic messages
- [ ] ✅ Long messages split correctly
- [ ] ✅ Continuation markers appear
- [ ] ✅ Word boundaries respected
- [ ] ✅ Allowlist blocks unauthorized users
- [ ] ✅ No errors in logs

### Performance
- [ ] ✅ Response time <3 seconds
- [ ] ✅ Memory usage <10MB
- [ ] ✅ No message loss
- [ ] ✅ Rate limiting works (100ms delays)

## 🐛 Troubleshooting

### Issue: Tests fail to compile

```bash
# Clean build
cargo clean
cargo build --release

# Update dependencies
cargo update
```

### Issue: "Bot token not configured"

```bash
# Check config
grep -A 5 telegram ~/.zeroclaw/config.toml

# Reconfigure
zeroclaw onboard --channels-only
```

### Issue: Health check fails

```bash
# Test the bot token directly
curl "https://api.telegram.org/bot<YOUR_TOKEN>/getMe"

# Should return: {"ok":true,"result":{...}}
```

### Issue: Bot doesn't respond

```bash
# Enable debug logging
RUST_LOG=debug zeroclaw channel start

# Look for:
# - "Telegram channel listening for messages..."
# - "ignoring message from unauthorized user" (if allowlist issue)
# - Any error messages
```

## 📊 Performance Benchmarks

After all fixes, you should see:

| Metric | Target | Command |
|--------|--------|---------|
| Unit test pass | 24/24 | `cargo test telegram --lib` |
| Build time | <30s | `time cargo build --release` |
| Binary size | ~3-4MB | `ls -lh target/release/zeroclaw` |
| Health check | <5s | `time zeroclaw channel doctor` |
| First response | <3s | manual test in Telegram |
| Message split | <50ms | check debug logs |
| Memory usage | <10MB | `ps aux \| grep zeroclaw` |

## 🔄 CI/CD Integration

Add to your workflow:

```bash
# Pre-commit hook
#!/bin/bash
./tests/telegram/quick_test.sh

# CI pipeline
./tests/telegram/test_telegram_integration.sh
```

## 📚 Next Steps

1. **Run the tests:**
   ```bash
   ./tests/telegram/test_telegram_integration.sh
   ```

2. **Fix any failures** using the troubleshooting guide

3. **Complete manual tests** using the checklist

4. **Deploy to production** when all tests pass

5. **Monitor logs** for any issues:
   ```bash
   zeroclaw daemon
   # or
   RUST_LOG=info zeroclaw channel start
   ```

## 🎉 Success!

If all tests pass:
- ✅ Message splitting works (4096-char limit)
- ✅ Health check has a 5s timeout
- ✅ Empty chat_id is handled safely
- ✅ All 24 unit tests pass
- ✅ Code is production-ready

**Your Telegram integration is ready to go!** 🚀

---

## 📞 Support

- Issues: https://github.com/zeroclaw-labs/zeroclaw/issues
- Docs: [testing-telegram.md](../../tests/telegram/testing-telegram.md)
- Help: `zeroclaw --help`
149
third_party/zeroclaw/docs/contributing/testing.md
vendored
Normal file
@@ -0,0 +1,149 @@
# Testing Guide

ZeroClaw uses a five-level testing taxonomy with filesystem-based organization.

## Testing Taxonomy

| Level | What it tests | External boundaries | Directory |
|-------|--------------|---------------------|-----------|
| **Unit** | Single function/struct | Everything mocked | `#[cfg(test)]` blocks in `src/**/*.rs` or separate `src/**/tests.rs` files |
| **Component** | One subsystem within its own boundary | Subsystem real, everything else mocked | `tests/component/` |
| **Integration** | Multiple internal components wired together | Real internals, external APIs mocked | `tests/integration/` |
| **System** | Full request→response across ALL internal boundaries | Only external APIs mocked | `tests/system/` |
| **Live** | Full stack with real external services | Nothing mocked, `#[ignore]` | `tests/live/` |

## Directory Structure

| Directory | Level | Description | Run command |
|-----------|-------|-------------|-------------|
| `src/**/*.rs` | Unit | Co-located `#[cfg(test)]` blocks or separate `tests.rs` files alongside source | `cargo test --lib` |
| `tests/component/` | Component | One subsystem, real impl, mocked boundaries | `cargo test --test component` |
| `tests/integration/` | Integration | Multiple components wired together | `cargo test --test integration` |
| `tests/system/` | System | Full channel→agent→channel flow | `cargo test --test system` |
| `tests/live/` | Live | Real external services, `#[ignore]` | `cargo test --test live -- --ignored` |
| `tests/manual/` | — | Human-driven test scripts (shell, Python) | Run directly |
| `tests/support/` | — | Shared mock infrastructure (not a test binary) | — |
| `tests/fixtures/` | — | Test data files (JSON traces, media) | — |

## How to Run Tests

```bash
# Run all tests (unit + component + integration + system)
cargo test

# Run only unit tests
cargo test --lib

# Run component tests
cargo test --test component

# Run integration tests
cargo test --test integration

# Run system tests
cargo test --test system

# Run live tests (requires API credentials)
cargo test --test live -- --ignored

# Filter within a level
cargo test --test integration agent

# Full CI validation
./dev/ci.sh all

# Level-specific CI commands
./dev/ci.sh test-component
./dev/ci.sh test-integration
./dev/ci.sh test-system
```

## How to Add a New Test

1. **Testing one subsystem in isolation?** → `tests/component/`
2. **Testing multiple components together?** → `tests/integration/`
3. **Testing the full message flow?** → `tests/system/`
4. **Requires real API keys?** → `tests/live/` with `#[ignore]`

After creating a test file, add it to the appropriate `mod.rs` and use shared infrastructure from `tests/support/`.

## Shared Infrastructure (`tests/support/`)

All test binaries include `mod support;`, making shared mocks available via `crate::support::*`.

| Module | Contents |
|--------|----------|
| `mock_provider.rs` | `MockProvider` (FIFO scripted), `RecordingProvider` (captures requests), `TraceLlmProvider` (JSON fixture replay) |
| `mock_tools.rs` | `EchoTool`, `CountingTool`, `FailingTool`, `RecordingTool` |
| `mock_channel.rs` | `TestChannel` (captures sends, records typing events) |
| `helpers.rs` | `make_memory()`, `make_observer()`, `build_agent()`, `text_response()`, `tool_response()`, `StaticMemoryLoader` |
| `trace.rs` | `LlmTrace`, `TraceTurn`, `TraceStep` types + `LlmTrace::from_file()` |
| `assertions.rs` | `verify_expects()` for declarative trace assertions |

### Usage

```rust
use crate::support::{MockProvider, EchoTool, CountingTool};
use crate::support::helpers::{build_agent, text_response, tool_response};
```

## JSON Trace Fixtures

Trace fixtures are canned LLM response scripts stored as JSON files in `tests/fixtures/traces/`. They replace inline mock setup with declarative conversation scripts.

### How it works

1. `TraceLlmProvider` loads a fixture and implements the `Provider` trait
2. Each `provider.chat()` call returns the next step from the fixture in FIFO order
3. Real tools execute normally (e.g., `EchoTool` processes arguments)
4. After all turns, `verify_expects()` checks the declarative assertions
5. If the agent calls the provider more times than there are steps, the test fails

### Fixture format

```json
{
  "model_name": "test-name",
  "turns": [
    {
      "user_input": "User message",
      "steps": [
        {
          "response": {
            "type": "text",
            "content": "LLM response",
            "input_tokens": 20,
            "output_tokens": 10
          }
        }
      ]
    }
  ],
  "expects": {
    "response_contains": ["expected text"],
    "tools_used": ["echo"],
    "max_tool_calls": 1
  }
}
```

**Response types**: `"text"` (plain text) or `"tool_calls"` (the LLM requests tool execution).

**Expects fields**: `response_contains`, `response_not_contains`, `tools_used`, `tools_not_used`, `max_tool_calls`, `all_tools_succeeded`, `response_matches` (regex).

## Live Test Conventions

- All live tests must be `#[ignore]`
- Use `env::var("ZEROCLAW_TEST_*")` for credentials
- Run with `cargo test --test live -- --ignored --nocapture`

## Manual Tests (`tests/manual/`)

Scripts for human-driven testing that can't be automated via `cargo test`:

| Directory/File | What it does |
|---|---|
| `manual/telegram/` | Telegram integration test suite, smoke tests, message generator |
| `manual/test_dockerignore.sh` | Validates that `.dockerignore` excludes sensitive paths |

For Telegram-specific testing details, see [testing-telegram.md](./testing-telegram.md).