feat: refactor sgclaw around zeroclaw compat runtime
This commit is contained in:

169 third_party/zeroclaw/dev/README.md vendored Normal file
@@ -0,0 +1,169 @@
# ZeroClaw Development Environment

A fully containerized development sandbox for ZeroClaw agents. This environment allows you to develop, test, and debug the agent in isolation without modifying your host system.

## Directory Structure

- **`agent/`**: (Merged into root Dockerfile)
  - The development image is built from the root `Dockerfile` using the `dev` stage (`target: dev`).
  - Based on `debian:bookworm-slim` (unlike the production `distroless` image).
  - Includes `bash`, `curl`, and debug tools.
- **`sandbox/`**: Dockerfile for the simulated user environment.
  - Based on `ubuntu:22.04`.
  - Pre-loaded with `git`, `python3`, `nodejs`, `npm`, `gcc`, `make`.
  - Simulates a real developer machine.
- **`docker-compose.yml`**: Defines the services and the `dev-net` network.
- **`cli.sh`**: Helper script to manage the lifecycle.

## Usage

Run all commands from the repository root using the helper script.

### 1. Start Environment

```bash
./dev/cli.sh up
```

Builds the agent from source and starts both containers.

### 2. Enter Agent Container (`zeroclaw-dev`)

```bash
./dev/cli.sh agent
```

Use this to run `zeroclaw` CLI commands manually, debug the binary, or check logs from inside the container.

- **Path**: `/zeroclaw-data`
- **User**: `nobody` (65534)

### 3. Enter Sandbox (`sandbox`)

```bash
./dev/cli.sh shell
```

Use this to act as the "user" or "environment" the agent interacts with.

- **Path**: `/home/developer/workspace`
- **User**: `developer` (sudo-enabled)

### 4. Development Cycle

1. Make changes to the Rust code in `src/`.
2. Rebuild the agent:

   ```bash
   ./dev/cli.sh build
   ```

3. Test changes inside the container:

   ```bash
   ./dev/cli.sh agent
   # inside the container:
   zeroclaw --version
   ```

### 5. Persistence & Shared Workspace

The local `playground/` directory (in the repo root) is mounted as the shared workspace:

- **Agent**: `/zeroclaw-data/workspace`
- **Sandbox**: `/home/developer/workspace`

Files created by the agent are visible to the sandbox user, and vice versa.

The agent configuration lives in `target/.zeroclaw` (mounted to `/zeroclaw-data/.zeroclaw`), so settings persist across container rebuilds.
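
These mounts correspond to compose volume entries along the following lines (a sketch based on the paths above, not necessarily the exact `docker-compose.yml` contents):

```yaml
services:
  zeroclaw-dev:
    volumes:
      # Single-file mount keeps other files under .zeroclaw from being shadowed.
      - ../target/.zeroclaw/config.toml:/zeroclaw-data/.zeroclaw/config.toml
      - ../playground:/zeroclaw-data/workspace
  sandbox:
    volumes:
      - ../playground:/home/developer/workspace
```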

### 6. Cleanup

Stop the containers and remove volumes and generated config:

```bash
./dev/cli.sh clean
```

**Note:** This removes `target/.zeroclaw` (config/DB) but leaves the `playground/` directory intact. To fully wipe everything, delete `playground/` manually as well.
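
A minimal sketch of what survives a clean, simulated against a scratch directory (this does not invoke `cli.sh` or Docker; the paths only mirror the layout described above):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simulate the repo layout in a throwaway directory.
repo="$(mktemp -d)"
mkdir -p "$repo/target/.zeroclaw" "$repo/playground"
touch "$repo/target/.zeroclaw/config.toml" "$repo/playground/notes.txt"

# `clean` drops the generated config/DB directory only.
rm -rf "$repo/target/.zeroclaw"

# The shared workspace is untouched; wiping it is a separate manual step.
if [ ! -e "$repo/target/.zeroclaw" ] && [ -f "$repo/playground/notes.txt" ]; then
  status="playground intact"
else
  status="unexpected state"
fi
echo "$status"
rm -rf "$repo"
```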

## Local CI/CD (Docker-Only)

Use this when you want CI-style validation without relying on GitHub Actions and without running Rust toolchain commands on your host.

### 1. Build the local CI image

```bash
./dev/ci.sh build-image
```

### 2. Run full local CI pipeline

```bash
./dev/ci.sh all
```

This runs the following inside a container:

- `./scripts/ci/rust_quality_gate.sh`
- `cargo test --locked --verbose`
- `cargo build --release --locked --verbose`
- `cargo deny check licenses sources`
- `cargo audit`
- Docker smoke build (`docker build --target dev ...` plus a `--version` check)

To run an opt-in strict lint audit locally:

```bash
./dev/ci.sh lint-strict
```

To run the incremental strict gate (changed Rust lines only):

```bash
./dev/ci.sh lint-delta
```

### 3. Run targeted stages

```bash
./dev/ci.sh lint
./dev/ci.sh lint-delta
./dev/ci.sh test
./dev/ci.sh build
./dev/ci.sh deny
./dev/ci.sh audit
./dev/ci.sh security
./dev/ci.sh docker-smoke
# Optional host-side docs gate (changed-line markdown lint)
./scripts/ci/docs_quality_gate.sh
# Optional host-side docs links gate (changed-line added links)
./scripts/ci/docs_links_gate.sh
```

Note: the local `deny` stage focuses on license/source policy; advisory scanning is handled by `audit`.
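
Because `deny` runs only the `licenses` and `sources` checks, only those tables of `deny.toml` matter for the local gate. An illustrative policy fragment (these values are an example, not the project's actual configuration):

```toml
[licenses]
# Licenses acceptable for dependencies (illustrative values).
allow = ["MIT", "Apache-2.0", "BSD-3-Clause"]

[sources]
# Reject crates pulled from unknown registries or ad-hoc git URLs.
unknown-registry = "deny"
unknown-git = "deny"
```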

### 4. Enter CI container shell

```bash
./dev/ci.sh shell
```

### 5. Optional shortcut via existing dev CLI

```bash
./dev/cli.sh ci
./dev/cli.sh ci lint
```

### Isolation model

- Rust compilation, tests, and the audit/deny tools run in the `zeroclaw-local-ci` container.
- Your host filesystem is mounted at `/workspace`; no host Rust toolchain is required.
- Cargo build artifacts are written to the container volume `/ci-target` (not your host `target/`).
- The Docker smoke stage uses your Docker daemon to build image layers, but the build steps execute in containers.

### Build cache notes

- Both `Dockerfile` and `dev/ci/Dockerfile` use BuildKit cache mounts for Cargo registry/git data.
- The root `Dockerfile` also caches the Rust `target/` directory (`id=zeroclaw-target`) to speed up repeated local image builds.
- Local CI reuses named Docker volumes for Cargo registry/git data and target outputs.
- `./dev/ci.sh docker-smoke` and `./dev/ci.sh all` use a `docker buildx` local cache at `.cache/buildx-smoke` when available.
- The CI image keeps the Rust toolchain defaults from `rust:1.92-slim` and installs the pinned toolchain `1.92.0` (no custom `CARGO_HOME`/`RUSTUP_HOME` overrides), preventing repeated toolchain bootstrapping on each run.
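
The buildx cache warm-up behavior can be illustrated in isolation. This sketch only assembles the flag array the way `ci.sh` does (skip `--cache-from` until the cache has been seeded), without calling Docker:

```shell
#!/usr/bin/env bash
set -euo pipefail

cache_dir="$(mktemp -d)"
build_args=(--load --target dev -t zeroclaw-local-smoke:latest .)

# Cold cache: no index.json yet, so --cache-from is omitted
# (pointing it at an empty directory would fail the build).
cold_flags="${build_args[*]}"

# A prior build seeds the cache; afterwards --cache-from is prepended.
touch "$cache_dir/index.json"
if [ -f "$cache_dir/index.json" ]; then
  build_args=(--cache-from "type=local,src=$cache_dir" "${build_args[@]}")
fi
warm_flags="${build_args[*]}"

echo "cold: $cold_flags"
echo "warm: $warm_flags"
rm -rf "$cache_dir"
```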

159 third_party/zeroclaw/dev/ci.sh vendored Executable file
@@ -0,0 +1,159 @@
#!/usr/bin/env bash
set -euo pipefail

if [ -f "dev/docker-compose.ci.yml" ]; then
  COMPOSE_FILE="dev/docker-compose.ci.yml"
elif [ -f "docker-compose.ci.yml" ] && [ "$(basename "$(pwd)")" = "dev" ]; then
  COMPOSE_FILE="docker-compose.ci.yml"
else
  echo "❌ Run this script from repo root or dev/ directory."
  exit 1
fi

compose_cmd=(docker compose -f "$COMPOSE_FILE")
SMOKE_CACHE_DIR="${SMOKE_CACHE_DIR:-.cache/buildx-smoke}"

run_in_ci() {
  local cmd="$1"
  "${compose_cmd[@]}" run --rm local-ci bash -c "$cmd"
}

build_smoke_image() {
  if docker buildx version >/dev/null 2>&1; then
    mkdir -p "$SMOKE_CACHE_DIR"
    local build_args=(
      --load
      --target dev
      --cache-to "type=local,dest=$SMOKE_CACHE_DIR,mode=max"
      -t zeroclaw-local-smoke:latest
      .
    )
    if [ -f "$SMOKE_CACHE_DIR/index.json" ]; then
      build_args=(--cache-from "type=local,src=$SMOKE_CACHE_DIR" "${build_args[@]}")
    fi
    docker buildx build "${build_args[@]}"
  else
    DOCKER_BUILDKIT=1 docker build --target dev -t zeroclaw-local-smoke:latest .
  fi
}

print_help() {
  cat <<'EOF'
ZeroClaw Local CI in Docker

Usage: ./dev/ci.sh <command>

Commands:
  build-image       Build/update the local CI image
  shell             Open an interactive shell inside the CI container
  lint              Run rustfmt + clippy correctness gate (container only)
  lint-strict       Run rustfmt + full clippy warnings gate (container only)
  lint-delta        Run strict lint delta gate on changed Rust lines (container only)
  test              Run cargo test (container only)
  test-component    Run component tests only
  test-integration  Run integration tests only
  test-system       Run system tests only
  test-live         Run live tests (requires credentials)
  test-manual       Run manual test scripts (dockerignore, etc.)
  build             Run release build smoke check (container only)
  audit             Run cargo audit (container only)
  deny              Run cargo deny check (container only)
  security          Run cargo audit + cargo deny (container only)
  docker-smoke      Build and verify runtime image (host docker daemon)
  all               Run lint, test, build, security, docker-smoke
  clean             Remove local CI containers and volumes
EOF
}

if [ $# -lt 1 ]; then
  print_help
  exit 1
fi

case "$1" in
  build-image)
    "${compose_cmd[@]}" build local-ci
    ;;

  shell)
    "${compose_cmd[@]}" run --rm local-ci bash
    ;;

  lint)
    run_in_ci "./scripts/ci/rust_quality_gate.sh"
    ;;

  lint-strict)
    run_in_ci "./scripts/ci/rust_quality_gate.sh --strict"
    ;;

  lint-delta)
    run_in_ci "./scripts/ci/rust_strict_delta_gate.sh"
    ;;

  test)
    run_in_ci "cargo test --locked --verbose"
    ;;

  test-component)
    run_in_ci "cargo test --test component --locked --verbose"
    ;;

  test-integration)
    run_in_ci "cargo test --test integration --locked --verbose"
    ;;

  test-system)
    run_in_ci "cargo test --test system --locked --verbose"
    ;;

  test-live)
    run_in_ci "cargo test --test live -- --ignored --verbose"
    ;;

  test-manual)
    run_in_ci "bash tests/manual/test_dockerignore.sh"
    ;;

  build)
    run_in_ci "cargo build --release --locked --verbose"
    ;;

  audit)
    run_in_ci "cargo audit"
    ;;

  deny)
    run_in_ci "cargo deny check licenses sources"
    ;;

  security)
    run_in_ci "cargo deny check licenses sources"
    run_in_ci "cargo audit"
    ;;

  docker-smoke)
    build_smoke_image
    docker run --rm zeroclaw-local-smoke:latest --version
    ;;

  all)
    run_in_ci "./scripts/ci/rust_quality_gate.sh"
    run_in_ci "cargo test --locked --verbose"
    run_in_ci "bash tests/manual/test_dockerignore.sh"
    run_in_ci "cargo build --release --locked --verbose"
    run_in_ci "cargo deny check licenses sources"
    run_in_ci "cargo audit"
    build_smoke_image
    docker run --rm zeroclaw-local-smoke:latest --version
    ;;

  clean)
    "${compose_cmd[@]}" down -v --remove-orphans
    ;;

  *)
    print_help
    exit 1
    ;;
esac

22 third_party/zeroclaw/dev/ci/Dockerfile vendored Normal file
@@ -0,0 +1,22 @@
# syntax=docker/dockerfile:1.7

FROM rust:1.92-slim@sha256:bf3368a992915f128293ac76917ab6e561e4dda883273c8f5c9f6f8ea37a378e

RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    git \
    pkg-config \
    libssl-dev \
    curl \
    && rm -rf /var/lib/apt/lists/*

RUN rustup toolchain install 1.92.0 --profile minimal --component rustfmt --component clippy

RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/local/cargo/git \
    cargo install --locked cargo-audit --version 0.22.1 && \
    cargo install --locked cargo-deny --version 0.18.5

WORKDIR /workspace

CMD ["bash"]

140 third_party/zeroclaw/dev/cli.sh vendored Executable file
@@ -0,0 +1,140 @@
#!/bin/bash
set -e

# Detect execution context (root or dev/)
if [ -f "dev/docker-compose.yml" ]; then
  BASE_DIR="dev"
  HOST_TARGET_DIR="target"
elif [ -f "docker-compose.yml" ] && [ "$(basename "$(pwd)")" == "dev" ]; then
  BASE_DIR="."
  HOST_TARGET_DIR="../target"
else
  echo "❌ Error: Run this script from the project root or dev/ directory."
  exit 1
fi

COMPOSE_FILE="$BASE_DIR/docker-compose.yml"
if [ "$BASE_DIR" = "dev" ]; then
  ENV_FILE=".env"
else
  ENV_FILE="../.env"
fi

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color

function load_env {
  if [ -f "$ENV_FILE" ]; then
    # Auto-export variables from .env for docker compose passthrough.
    set -a
    source "$ENV_FILE"
    set +a
  fi
}

function ensure_config {
  CONFIG_DIR="$HOST_TARGET_DIR/.zeroclaw"
  CONFIG_FILE="$CONFIG_DIR/config.toml"
  WORKSPACE_DIR="$CONFIG_DIR/workspace"

  if [ ! -f "$CONFIG_FILE" ]; then
    echo -e "${YELLOW}⚙️ Config file missing in target/.zeroclaw. Creating default dev config from template...${NC}"
    mkdir -p "$WORKSPACE_DIR"

    # Copy template
    cat "$BASE_DIR/config.template.toml" > "$CONFIG_FILE"
  fi
}

function print_help {
  echo -e "${YELLOW}ZeroClaw Development Environment Manager${NC}"
  echo "Usage: ./dev/cli.sh [command]"
  echo ""
  echo "Commands:"
  echo -e "  ${GREEN}up${NC}      Start dev environment (Agent + Sandbox)"
  echo -e "  ${GREEN}down${NC}    Stop containers"
  echo -e "  ${GREEN}shell${NC}   Enter Sandbox (Ubuntu)"
  echo -e "  ${GREEN}agent${NC}   Enter Agent (ZeroClaw CLI)"
  echo -e "  ${GREEN}logs${NC}    View logs"
  echo -e "  ${GREEN}build${NC}   Rebuild images"
  echo -e "  ${GREEN}ci${NC}      Run local CI checks in Docker (see ./dev/ci.sh)"
  echo -e "  ${GREEN}clean${NC}   Stop and wipe workspace data"
}

if [ -z "$1" ]; then
  print_help
  exit 1
fi

load_env

case "$1" in
  up)
    ensure_config
    echo -e "${GREEN}🚀 Starting Dev Environment...${NC}"
    # Build context MUST be set correctly for docker compose
    docker compose -f "$COMPOSE_FILE" up -d
    echo -e "${GREEN}✅ Environment is running!${NC}"
    echo -e "   - Agent:   http://127.0.0.1:42617"
    echo -e "   - Sandbox: running (background)"
    echo -e "   - Config:  target/.zeroclaw/config.toml (Edit locally to apply changes)"
    ;;

  down)
    echo -e "${YELLOW}🛑 Stopping services...${NC}"
    docker compose -f "$COMPOSE_FILE" down
    echo -e "${GREEN}✅ Stopped.${NC}"
    ;;

  shell)
    echo -e "${GREEN}💻 Entering Sandbox (Ubuntu)... (Type 'exit' to leave)${NC}"
    docker exec -it zeroclaw-sandbox /bin/bash
    ;;

  agent)
    echo -e "${GREEN}🤖 Entering Agent Container (ZeroClaw)... (Type 'exit' to leave)${NC}"
    docker exec -it zeroclaw-dev /bin/bash
    ;;

  logs)
    docker compose -f "$COMPOSE_FILE" logs -f
    ;;

  build)
    echo -e "${YELLOW}🔨 Rebuilding images...${NC}"
    docker compose -f "$COMPOSE_FILE" build
    ensure_config
    docker compose -f "$COMPOSE_FILE" up -d
    echo -e "${GREEN}✅ Rebuild complete.${NC}"
    ;;

  ci)
    shift
    if [ "$BASE_DIR" = "." ]; then
      ./ci.sh "${@:-all}"
    else
      ./dev/ci.sh "${@:-all}"
    fi
    ;;

  clean)
    echo -e "${RED}⚠️ WARNING: This will delete 'target/.zeroclaw' data and Docker volumes.${NC}"
    read -p "Are you sure? (y/N) " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
      docker compose -f "$COMPOSE_FILE" down -v
      rm -rf "$HOST_TARGET_DIR/.zeroclaw"
      echo -e "${GREEN}🧹 Cleaned up (playground/ remains intact).${NC}"
    else
      echo "Cancelled."
    fi
    ;;

  *)
    print_help
    exit 1
    ;;
esac

32 third_party/zeroclaw/dev/config.template.toml vendored Normal file
@@ -0,0 +1,32 @@
workspace_dir = "/zeroclaw-data/workspace"
config_path = "/zeroclaw-data/.zeroclaw/config.toml"
# This is the Ollama base URL, not a secret key
api_key = "http://host.docker.internal:11434"
default_provider = "ollama"
default_model = "llama3.2"
default_temperature = 0.7

[gateway]
port = 42617
host = "[::]"
allow_public_bind = true
require_pairing = false

# Cost tracking and budget enforcement configuration
# Enable to track API usage costs and enforce spending limits
[cost]
enabled = false
daily_limit_usd = 10.0
monthly_limit_usd = 100.0
warn_at_percent = 80
allow_override = false

# Per-model pricing (USD per 1M tokens)
# Uncomment and customize to override default pricing
# [cost.prices."anthropic/claude-sonnet-4-20250514"]
# input = 3.0
# output = 15.0
#
# [cost.prices."openai/gpt-4o"]
# input = 5.0
# output = 15.0

23 third_party/zeroclaw/dev/docker-compose.ci.yml vendored Normal file
@@ -0,0 +1,23 @@
name: zeroclaw-local-ci

services:
  local-ci:
    build:
      context: ..
      dockerfile: dev/ci/Dockerfile
    container_name: zeroclaw-local-ci
    working_dir: /workspace
    environment:
      - CARGO_TERM_COLOR=always
      - PATH=/usr/local/cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      - CARGO_TARGET_DIR=/ci-target
    volumes:
      - ..:/workspace
      - cargo-registry:/usr/local/cargo/registry
      - cargo-git:/usr/local/cargo/git
      - ci-target:/ci-target

volumes:
  cargo-registry:
  cargo-git:
  ci-target:

72 third_party/zeroclaw/dev/docker-compose.yml vendored Normal file
@@ -0,0 +1,72 @@
# Development Environment for ZeroClaw Agentic Testing
#
# Use this for:
# - Running the agent in a sandboxed environment
# - Testing dangerous commands safely
# - Developing new skills/integrations
#
# Usage:
#   cd dev && ./cli.sh up
#   or from root: ./dev/cli.sh up
name: zeroclaw-dev
services:
  # ── The Agent (Development Image) ──
  # Builds from source using the 'dev' stage of the root Dockerfile
  zeroclaw-dev:
    build:
      context: ..
      dockerfile: Dockerfile
      target: dev
    container_name: zeroclaw-dev
    restart: unless-stopped
    environment:
      - ZEROCLAW_GATEWAY_PORT=42617
      - SANDBOX_HOST=zeroclaw-sandbox
    secrets:
      - source: zeroclaw_env
        target: zeroclaw_env
    entrypoint: ["/bin/bash", "-lc"]
    command:
      - |
        if [ -f /run/secrets/zeroclaw_env ]; then
          set -a
          . /run/secrets/zeroclaw_env
          set +a
        fi
        exec zeroclaw gateway --port "${ZEROCLAW_GATEWAY_PORT:-42617}" --host "[::]"
    volumes:
      # Mount a single config file (avoids shadowing other files in .zeroclaw)
      - ../target/.zeroclaw/config.toml:/zeroclaw-data/.zeroclaw/config.toml
      # Mount the shared workspace
      - ../playground:/zeroclaw-data/workspace
    ports:
      - "127.0.0.1:42617:42617"
    networks:
      - dev-net

  # ── The Sandbox (Ubuntu Environment) ──
  # A fully loaded Ubuntu environment for the agent to play in.
  sandbox:
    build:
      context: sandbox # Context relative to dev/
      dockerfile: Dockerfile
    container_name: zeroclaw-sandbox
    hostname: dev-box
    command: ["tail", "-f", "/dev/null"]
    working_dir: /home/developer/workspace
    user: developer
    environment:
      - TERM=xterm-256color
      - SHELL=/bin/bash
    volumes:
      - ../playground:/home/developer/workspace # Mount local playground
    networks:
      - dev-net

networks:
  dev-net:
    driver: bridge

secrets:
  zeroclaw_env:
    file: ../.env

324 third_party/zeroclaw/dev/recompute_contributor_tiers.sh vendored Executable file
@@ -0,0 +1,324 @@
#!/usr/bin/env bash

set -euo pipefail

SCRIPT_NAME="$(basename "$0")"

usage() {
  cat <<USAGE
Recompute contributor tier labels for historical PRs/issues.

Usage:
  ./$SCRIPT_NAME [options]

Options:
  --repo <owner/repo>        Target repository (default: current gh repo)
  --kind <both|prs|issues>   Target objects (default: both)
  --state <all|open|closed>  State filter for listing objects (default: all)
  --limit <N>                Limit processed objects after fetch (default: 0 = no limit)
  --apply                    Apply label updates (default is dry-run)
  --dry-run                  Preview only (default)
  -h, --help                 Show this help

Examples:
  ./$SCRIPT_NAME --repo zeroclaw-labs/zeroclaw --limit 50
  ./$SCRIPT_NAME --repo zeroclaw-labs/zeroclaw --kind prs --state open --apply
USAGE
}

die() {
  echo "[$SCRIPT_NAME] ERROR: $*" >&2
  exit 1
}

require_cmd() {
  if ! command -v "$1" >/dev/null 2>&1; then
    die "Required command not found: $1"
  fi
}

urlencode() {
  jq -nr --arg value "$1" '$value|@uri'
}

select_contributor_tier() {
  local merged_count="$1"
  if (( merged_count >= 50 )); then
    echo "distinguished contributor"
  elif (( merged_count >= 20 )); then
    echo "principal contributor"
  elif (( merged_count >= 10 )); then
    echo "experienced contributor"
  elif (( merged_count >= 5 )); then
    echo "trusted contributor"
  else
    echo ""
  fi
}

DRY_RUN=1
KIND="both"
STATE="all"
LIMIT=0
REPO=""

while (($# > 0)); do
  case "$1" in
    --repo)
      [[ $# -ge 2 ]] || die "Missing value for --repo"
      REPO="$2"
      shift 2
      ;;
    --kind)
      [[ $# -ge 2 ]] || die "Missing value for --kind"
      KIND="$2"
      shift 2
      ;;
    --state)
      [[ $# -ge 2 ]] || die "Missing value for --state"
      STATE="$2"
      shift 2
      ;;
    --limit)
      [[ $# -ge 2 ]] || die "Missing value for --limit"
      LIMIT="$2"
      shift 2
      ;;
    --apply)
      DRY_RUN=0
      shift
      ;;
    --dry-run)
      DRY_RUN=1
      shift
      ;;
    -h|--help)
      usage
      exit 0
      ;;
    *)
      die "Unknown option: $1"
      ;;
  esac
done

case "$KIND" in
  both|prs|issues) ;;
  *) die "--kind must be one of: both, prs, issues" ;;
esac

case "$STATE" in
  all|open|closed) ;;
  *) die "--state must be one of: all, open, closed" ;;
esac

if ! [[ "$LIMIT" =~ ^[0-9]+$ ]]; then
  die "--limit must be a non-negative integer"
fi

require_cmd gh
require_cmd jq

if ! gh auth status >/dev/null 2>&1; then
  die "gh CLI is not authenticated. Run: gh auth login"
fi

if [[ -z "$REPO" ]]; then
  REPO="$(gh repo view --json nameWithOwner --jq '.nameWithOwner' 2>/dev/null || true)"
  [[ -n "$REPO" ]] || die "Unable to infer repo. Pass --repo <owner/repo>."
fi

echo "[$SCRIPT_NAME] Repo: $REPO"
echo "[$SCRIPT_NAME] Mode: $([[ "$DRY_RUN" -eq 1 ]] && echo "dry-run" || echo "apply")"
echo "[$SCRIPT_NAME] Kind: $KIND | State: $STATE | Limit: $LIMIT"

TIERS_JSON='["trusted contributor","experienced contributor","principal contributor","distinguished contributor"]'

TMP_FILES=()
cleanup() {
  if ((${#TMP_FILES[@]} > 0)); then
    rm -f "${TMP_FILES[@]}"
  fi
}
trap cleanup EXIT

new_tmp_file() {
  local tmp
  tmp="$(mktemp)"
  TMP_FILES+=("$tmp")
  echo "$tmp"
}

targets_file="$(new_tmp_file)"

if [[ "$KIND" == "both" || "$KIND" == "prs" ]]; then
  gh api --paginate "repos/$REPO/pulls?state=$STATE&per_page=100" \
    --jq '.[] | {
      kind: "pr",
      number: .number,
      author: (.user.login // ""),
      author_type: (.user.type // ""),
      labels: [(.labels[]?.name // empty)]
    }' >> "$targets_file"
fi

if [[ "$KIND" == "both" || "$KIND" == "issues" ]]; then
  gh api --paginate "repos/$REPO/issues?state=$STATE&per_page=100" \
    --jq '.[] | select(.pull_request | not) | {
      kind: "issue",
      number: .number,
      author: (.user.login // ""),
      author_type: (.user.type // ""),
      labels: [(.labels[]?.name // empty)]
    }' >> "$targets_file"
fi

if [[ "$LIMIT" -gt 0 ]]; then
  limited_file="$(new_tmp_file)"
  head -n "$LIMIT" "$targets_file" > "$limited_file"
  mv "$limited_file" "$targets_file"
fi

target_count="$(wc -l < "$targets_file" | tr -d ' ')"
if [[ "$target_count" -eq 0 ]]; then
  echo "[$SCRIPT_NAME] No targets found."
  exit 0
fi

echo "[$SCRIPT_NAME] Targets fetched: $target_count"

# Ensure tier labels exist (trusted contributor might be new).
label_color=""
for probe_label in "experienced contributor" "principal contributor" "distinguished contributor" "trusted contributor"; do
  encoded_label="$(urlencode "$probe_label")"
  if color_candidate="$(gh api "repos/$REPO/labels/$encoded_label" --jq '.color' 2>/dev/null || true)"; then
    if [[ -n "$color_candidate" ]]; then
      label_color="$(echo "$color_candidate" | tr '[:lower:]' '[:upper:]')"
      break
    fi
  fi
done
[[ -n "$label_color" ]] || label_color="C5D7A2"

while IFS= read -r tier_label; do
  [[ -n "$tier_label" ]] || continue
  encoded_label="$(urlencode "$tier_label")"
  if gh api "repos/$REPO/labels/$encoded_label" >/dev/null 2>&1; then
    continue
  fi

  if [[ "$DRY_RUN" -eq 1 ]]; then
    echo "[dry-run] Would create missing label: $tier_label (color=$label_color)"
  else
    gh api -X POST "repos/$REPO/labels" \
      -f name="$tier_label" \
      -f color="$label_color" >/dev/null
    echo "[apply] Created missing label: $tier_label"
  fi
done < <(jq -r '.[]' <<<"$TIERS_JSON")

# Build merged PR count cache by unique human authors.
authors_file="$(new_tmp_file)"
jq -r 'select(.author != "" and .author_type != "Bot") | .author' "$targets_file" | sort -u > "$authors_file"
author_count="$(wc -l < "$authors_file" | tr -d ' ')"
echo "[$SCRIPT_NAME] Unique human authors: $author_count"

author_counts_file="$(new_tmp_file)"
while IFS= read -r author; do
  [[ -n "$author" ]] || continue
  query="repo:$REPO is:pr is:merged author:$author"
  merged_count="$(gh api search/issues -f q="$query" -F per_page=1 --jq '.total_count' 2>/dev/null || true)"
  if ! [[ "$merged_count" =~ ^[0-9]+$ ]]; then
    merged_count=0
  fi
  printf '%s\t%s\n' "$author" "$merged_count" >> "$author_counts_file"
done < "$authors_file"

updated=0
unchanged=0
skipped=0
failed=0

while IFS= read -r target_json; do
  [[ -n "$target_json" ]] || continue

  number="$(jq -r '.number' <<<"$target_json")"
  kind="$(jq -r '.kind' <<<"$target_json")"
  author="$(jq -r '.author' <<<"$target_json")"
  author_type="$(jq -r '.author_type' <<<"$target_json")"
  current_labels_json="$(jq -c '.labels // []' <<<"$target_json")"

  if [[ -z "$author" || "$author_type" == "Bot" ]]; then
    skipped=$((skipped + 1))
    continue
  fi

  merged_count="$(awk -F '\t' -v key="$author" '$1 == key { print $2; exit }' "$author_counts_file")"
  if ! [[ "$merged_count" =~ ^[0-9]+$ ]]; then
    merged_count=0
  fi
  desired_tier="$(select_contributor_tier "$merged_count")"

  if ! current_tier="$(jq -r --argjson tiers "$TIERS_JSON" '[.[] | select(. as $label | ($tiers | index($label)) != null)][0] // ""' <<<"$current_labels_json" 2>/dev/null)"; then
    echo "[warn] Skipping ${kind} #${number}: cannot parse current labels JSON" >&2
    failed=$((failed + 1))
    continue
  fi

  if ! next_labels_json="$(jq -c --arg desired "$desired_tier" --argjson tiers "$TIERS_JSON" '
    (. // [])
    | map(select(. as $label | ($tiers | index($label)) == null))
    | if $desired != "" then . + [$desired] else . end
    | unique
  ' <<<"$current_labels_json" 2>/dev/null)"; then
    echo "[warn] Skipping ${kind} #${number}: cannot compute next labels" >&2
    failed=$((failed + 1))
    continue
  fi

  if ! normalized_current="$(jq -c 'unique | sort' <<<"$current_labels_json" 2>/dev/null)"; then
    echo "[warn] Skipping ${kind} #${number}: cannot normalize current labels" >&2
    failed=$((failed + 1))
    continue
  fi

  if ! normalized_next="$(jq -c 'unique | sort' <<<"$next_labels_json" 2>/dev/null)"; then
    echo "[warn] Skipping ${kind} #${number}: cannot normalize next labels" >&2
    failed=$((failed + 1))
    continue
  fi

  if [[ "$normalized_current" == "$normalized_next" ]]; then
    unchanged=$((unchanged + 1))
    continue
  fi

  if [[ "$DRY_RUN" -eq 1 ]]; then
    echo "[dry-run] ${kind} #${number} @${author} merged=${merged_count} tier: '${current_tier:-none}' -> '${desired_tier:-none}'"
    updated=$((updated + 1))
    continue
  fi

  payload="$(jq -cn --argjson labels "$next_labels_json" '{labels: $labels}')"
  if gh api -X PUT "repos/$REPO/issues/$number/labels" --input - <<<"$payload" >/dev/null; then
    echo "[apply] Updated ${kind} #${number} @${author} tier: '${current_tier:-none}' -> '${desired_tier:-none}'"
    updated=$((updated + 1))
  else
    echo "[apply] FAILED ${kind} #${number}" >&2
    failed=$((failed + 1))
  fi
done < "$targets_file"

echo ""
echo "[$SCRIPT_NAME] Summary"
echo "  Targets:   $target_count"
echo "  Updated:   $updated"
echo "  Unchanged: $unchanged"
echo "  Skipped:   $skipped"
echo "  Failed:    $failed"

if [[ "$failed" -gt 0 ]]; then
  exit 1
fi

34 third_party/zeroclaw/dev/sandbox/Dockerfile vendored Normal file
@@ -0,0 +1,34 @@
FROM ubuntu:22.04@sha256:c7eb020043d8fc2ae0793fb35a37bff1cf33f156d4d4b12ccc7f3ef8706c38b1

# Prevent interactive prompts during package installation
ENV DEBIAN_FRONTEND=noninteractive

# Install common development tools and runtimes
# - Node.js: install v20 (LTS) from NodeSource
# - Core: curl, git, vim, build-essential (gcc, make)
# - Python: python3, pip
# - Network: ping, dnsutils
RUN apt-get update && apt-get install -y curl && \
    curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
    apt-get install -y \
    nodejs \
    wget git vim nano unzip zip \
    build-essential \
    python3 python3-pip \
    sudo \
    iputils-ping dnsutils net-tools \
    && rm -rf /var/lib/apt/lists/* \
    && node --version && npm --version

# Create a non-root user 'developer' with UID 1000
# Grant passwordless sudo to simulate a local dev environment (via a safe sudoers.d entry)
RUN useradd -m -s /bin/bash -u 1000 developer && \
    echo "developer ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/developer && \
    chmod 0440 /etc/sudoers.d/developer

# Set up the workspace
USER developer
WORKDIR /home/developer/workspace

# Default command
CMD ["/bin/bash"]
|
||||
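The compose setup driven by `./dev/cli.sh` is the supported entry point, but as a quick sketch (assuming a standard Docker CLI; the `zeroclaw-sandbox` tag is an illustrative name, not one the repo defines), the sandbox image above can also be built and entered directly:

```shell
# Build the sandbox image from this Dockerfile (tag name is illustrative)
docker build -t zeroclaw-sandbox third_party/zeroclaw/dev/sandbox

# Open an interactive shell as the 'developer' user in /home/developer/workspace
docker run --rm -it zeroclaw-sandbox
```

For normal use, prefer `./dev/cli.sh shell`, which starts the sandbox alongside the agent container on the `dev-net` network defined in `docker-compose.yml`.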
261
third_party/zeroclaw/dev/test-termux-release.sh
vendored
Executable file
@@ -0,0 +1,261 @@
#!/usr/bin/env bash
# Termux release validation script
# Validates the aarch64-linux-android release artifact for Termux compatibility.
#
# Usage:
#   ./dev/test-termux-release.sh [version]
#
# Examples:
#   ./dev/test-termux-release.sh 0.3.1
#   ./dev/test-termux-release.sh        # auto-detects from Cargo.toml
#
set -euo pipefail

BLUE='\033[0;34m'
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[0;33m'
BOLD='\033[1m'
DIM='\033[2m'
RESET='\033[0m'

pass() { echo -e "  ${GREEN}✓${RESET} $*"; }
fail() { echo -e "  ${RED}✗${RESET} $*"; FAILURES=$((FAILURES + 1)); }
info() { echo -e "${BLUE}→${RESET} ${BOLD}$*${RESET}"; }
warn() { echo -e "  ${YELLOW}!${RESET} $*"; }

FAILURES=0
TARGET="aarch64-linux-android"
VERSION="${1:-}"

if [[ -z "$VERSION" ]]; then
  if [[ -f Cargo.toml ]]; then
    VERSION=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
  fi
fi

if [[ -z "$VERSION" ]]; then
  echo "Usage: $0 <version>"
  echo "  e.g. $0 0.3.1"
  exit 1
fi

TAG="v${VERSION}"
ASSET_NAME="zeroclaw-${TARGET}.tar.gz"
ASSET_URL="https://github.com/zeroclaw-labs/zeroclaw/releases/download/${TAG}/${ASSET_NAME}"
TEMP_DIR="$(mktemp -d -t zeroclaw-termux-test-XXXXXX)"

cleanup() { rm -rf "$TEMP_DIR"; }
trap cleanup EXIT

echo
echo -e "${BOLD}Termux Release Validation — ${TAG}${RESET}"
echo -e "${DIM}Target: ${TARGET}${RESET}"
echo

# --- Test 1: Release tag exists ---
info "Checking release tag ${TAG}"
if gh release view "$TAG" >/dev/null 2>&1; then
  pass "Release ${TAG} exists"
else
  fail "Release ${TAG} not found"
  echo -e "${RED}Release has not been published yet. Wait for the release workflow to complete.${RESET}"
  exit 1
fi

# --- Test 2: Android asset is listed ---
info "Checking for ${ASSET_NAME} in release assets"
ASSETS=$(gh release view "$TAG" --json assets -q '.assets[].name')
if echo "$ASSETS" | grep -q "$ASSET_NAME"; then
  pass "Asset ${ASSET_NAME} found in release"
else
  fail "Asset ${ASSET_NAME} not found in release"
  echo "Available assets:"
  echo "$ASSETS" | sed 's/^/  /'
  exit 1
fi

# --- Test 3: Download the asset ---
info "Downloading ${ASSET_NAME}"
if curl -fsSL "$ASSET_URL" -o "$TEMP_DIR/$ASSET_NAME"; then
  FILESIZE=$(wc -c < "$TEMP_DIR/$ASSET_NAME" | tr -d ' ')
  pass "Downloaded successfully (${FILESIZE} bytes)"
else
  fail "Download failed from ${ASSET_URL}"
  exit 1
fi

# --- Test 4: Archive integrity ---
info "Verifying archive integrity"
if tar -tzf "$TEMP_DIR/$ASSET_NAME" >/dev/null 2>&1; then
  pass "Archive is a valid gzip tar"
else
  fail "Archive is corrupted or not a valid tar.gz"
  exit 1
fi

# --- Test 5: Contains zeroclaw binary ---
info "Checking archive contents"
CONTENTS=$(tar -tzf "$TEMP_DIR/$ASSET_NAME")
if echo "$CONTENTS" | grep -q "^zeroclaw$"; then
  pass "Archive contains 'zeroclaw' binary"
else
  fail "Archive does not contain 'zeroclaw' binary"
  echo "Contents:"
  echo "$CONTENTS" | sed 's/^/  /'
fi

# --- Test 6: Extract and inspect binary ---
info "Extracting and inspecting binary"
tar -xzf "$TEMP_DIR/$ASSET_NAME" -C "$TEMP_DIR"
BINARY="$TEMP_DIR/zeroclaw"

if [[ -f "$BINARY" ]]; then
  pass "Binary extracted"
else
  fail "Binary not found after extraction"
  exit 1
fi

# --- Test 7: ELF format and architecture ---
info "Checking binary format"
FILE_INFO=$(file "$BINARY")
if echo "$FILE_INFO" | grep -q "ELF"; then
  pass "Binary is ELF format"
else
  fail "Binary is not ELF format: $FILE_INFO"
fi

if echo "$FILE_INFO" | grep -qi "aarch64\|ARM aarch64"; then
  pass "Binary targets aarch64 architecture"
else
  fail "Binary does not target aarch64: $FILE_INFO"
fi

if echo "$FILE_INFO" | grep -qi "android\|bionic"; then
  pass "Binary is linked for Android/Bionic"
else
  # Android binaries may not always show "android" in file output,
  # check with readelf if available
  if command -v readelf >/dev/null 2>&1; then
    INTERP=$(readelf -l "$BINARY" 2>/dev/null | grep -o '/[^ ]*linker[^ ]*' || true)
    if echo "$INTERP" | grep -qi "android\|bionic"; then
      pass "Binary uses Android linker: $INTERP"
    else
      warn "Could not confirm Android linkage (interpreter: ${INTERP:-unknown})"
      warn "file output: $FILE_INFO"
    fi
  else
    warn "Could not confirm Android linkage (readelf not available)"
    warn "file output: $FILE_INFO"
  fi
fi

# --- Test 8: Binary is stripped ---
info "Checking binary optimization"
if echo "$FILE_INFO" | grep -q "stripped"; then
  pass "Binary is stripped (release optimized)"
else
  warn "Binary may not be stripped"
fi

# --- Test 9: Binary is not dynamically linked to glibc ---
info "Checking for glibc dependencies"
if command -v readelf >/dev/null 2>&1; then
  NEEDED=$(readelf -d "$BINARY" 2>/dev/null | grep NEEDED || true)
  if echo "$NEEDED" | grep -qi "libc\.so\.\|libpthread\|libdl"; then
    # Check if it's glibc or bionic
    if echo "$NEEDED" | grep -qi "libc\.so\.6"; then
      fail "Binary links against glibc (libc.so.6) — will not work on Termux"
    else
      pass "Binary links against libc (likely Bionic)"
    fi
  else
    pass "No glibc dependencies detected"
  fi
else
  warn "readelf not available — skipping dynamic library check"
fi

# --- Test 10: SHA256 checksum verification ---
info "Verifying SHA256 checksum"
CHECKSUMS_URL="https://github.com/zeroclaw-labs/zeroclaw/releases/download/${TAG}/SHA256SUMS"
if curl -fsSL "$CHECKSUMS_URL" -o "$TEMP_DIR/SHA256SUMS" 2>/dev/null; then
  EXPECTED=$(grep "$ASSET_NAME" "$TEMP_DIR/SHA256SUMS" | awk '{print $1}')
  if [[ -n "$EXPECTED" ]]; then
    if command -v sha256sum >/dev/null 2>&1; then
      ACTUAL=$(sha256sum "$TEMP_DIR/$ASSET_NAME" | awk '{print $1}')
    elif command -v shasum >/dev/null 2>&1; then
      ACTUAL=$(shasum -a 256 "$TEMP_DIR/$ASSET_NAME" | awk '{print $1}')
    else
      warn "No sha256sum or shasum available"
      ACTUAL=""
    fi

    if [[ -n "$ACTUAL" && "$ACTUAL" == "$EXPECTED" ]]; then
      pass "SHA256 checksum matches"
    elif [[ -n "$ACTUAL" ]]; then
      fail "SHA256 mismatch: expected=$EXPECTED actual=$ACTUAL"
    fi
  else
    warn "No checksum entry for ${ASSET_NAME} in SHA256SUMS"
  fi
else
  warn "Could not download SHA256SUMS"
fi

# --- Test 11: install.sh Termux detection ---
info "Validating install.sh Termux detection"
INSTALL_SH="install.sh"
if [[ ! -f "$INSTALL_SH" ]]; then
  INSTALL_SH="$(dirname "$0")/../install.sh"
fi

if [[ -f "$INSTALL_SH" ]]; then
  if grep -q 'TERMUX_VERSION' "$INSTALL_SH"; then
    pass "install.sh checks TERMUX_VERSION"
  else
    fail "install.sh does not check TERMUX_VERSION"
  fi

  if grep -q 'aarch64-linux-android' "$INSTALL_SH"; then
    pass "install.sh maps to aarch64-linux-android target"
  else
    fail "install.sh does not map to aarch64-linux-android"
  fi

  # Simulate Termux detection (mock uname as Linux since we may run on macOS)
  detect_result=$(
    bash -c '
      TERMUX_VERSION="0.118"
      os="Linux"
      arch="aarch64"
      case "$os:$arch" in
        Linux:aarch64|Linux:arm64)
          if [[ -n "${TERMUX_VERSION:-}" || -d "/data/data/com.termux" ]]; then
            echo "aarch64-linux-android"
          else
            echo "aarch64-unknown-linux-gnu"
          fi
          ;;
      esac
    '
  )
  if [[ "$detect_result" == "aarch64-linux-android" ]]; then
    pass "Termux detection returns correct target (simulated)"
  else
    fail "Termux detection returned: $detect_result (expected aarch64-linux-android)"
  fi
else
  warn "install.sh not found — skipping detection tests"
fi

# --- Summary ---
echo
if [[ "$FAILURES" -eq 0 ]]; then
  echo -e "${GREEN}${BOLD}All tests passed!${RESET}"
  echo -e "${DIM}The Termux release artifact for ${TAG} is valid.${RESET}"
else
  echo -e "${RED}${BOLD}${FAILURES} test(s) failed.${RESET}"
  exit 1
fi