Other Skills
Additional skills that support the Propel workflow: meta-skill routing, plan writing, paper processing, verification, worktrees, and project profiling.
Using Propel
The using-propel skill is the meta-skill — the entry point for the entire Propel research workflow. It activates before any action to check for applicable Propel skills, ensuring the correct skill is activated for every user request.
Mode-Aware Skill Routing
Before routing to any skill, using-propel checks the current mode. If the triggered skill is not active in the current mode, it informs the user and suggests the appropriate mode with /switch.
| Skill | Researcher | Engineer | Debugger | Trainer |
|---|---|---|---|---|
| investigation | Yes | Yes | Yes | — |
| deep-research | Yes | Yes | Yes | — |
| think-deeply | Yes | Yes | Yes | Yes |
| retrospective | Yes | Yes | Yes | Yes |
| context-hygiene | Yes | Yes | Yes | Yes |
| research-design | — | Yes | — | — |
| writing-plans | — | Yes | — | — |
| trainer-mode | — | — | — | Yes |
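The mode-aware check above can be sketched as a small lookup. This is an illustrative sketch, not the actual using-propel implementation: the skill and mode names come from the table, but the data structure and function are assumptions.

```python
# Mode/skill matrix from the table above; names are the documented ones,
# the dict-of-sets representation is an assumption.
SKILL_MODES = {
    "investigation":   {"researcher", "engineer", "debugger"},
    "deep-research":   {"researcher", "engineer", "debugger"},
    "think-deeply":    {"researcher", "engineer", "debugger", "trainer"},
    "retrospective":   {"researcher", "engineer", "debugger", "trainer"},
    "context-hygiene": {"researcher", "engineer", "debugger", "trainer"},
    "research-design": {"engineer"},
    "writing-plans":   {"engineer"},
    "trainer-mode":    {"trainer"},
}

def route(skill: str, current_mode: str) -> str:
    """Return a routing decision for a triggered skill in the current mode."""
    active_in = SKILL_MODES.get(skill)
    if active_in is None:
        return f"unknown skill: {skill}"
    if current_mode in active_in:
        return f"activate {skill}"
    # Skill is inactive here: inform the user and suggest /switch,
    # as the routing rule above describes.
    suggestion = sorted(active_in)[0]
    return f"{skill} is not active in {current_mode}; suggest /switch {suggestion}"
```

For example, triggering writing-plans while in Researcher mode would yield a /switch suggestion rather than an activation.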
Gate Enforcement
Propel enforces five human-in-the-loop gates plus two Questioner checkpoints. The using-propel skill ensures no gate is skipped. At each gate, Claude stops, presents structured findings, and asks questions that reveal design assumptions — never "shall I proceed?" but specific, disjunctive questions.
| Mode | Active Gates |
|---|---|
| Researcher | Gate 0, Gate 1 |
| Engineer | All gates (0, 1, 2, 3, 4) |
| Debugger | Gate 0, Gate 1, Gate 4 |
| Trainer | Gate 4 (runtime bugs only) |
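The per-mode gate sets above can be expressed as a simple table lookup. This is a hedged sketch using the gate numbers from the table; the function and data structure are illustrative assumptions, not part of the skill.

```python
# Active gates per mode, copied from the table above.
MODE_GATES = {
    "researcher": {0, 1},
    "engineer":   {0, 1, 2, 3, 4},
    "debugger":   {0, 1, 4},
    "trainer":    {4},  # runtime bugs only
}

def gates_to_pass(mode: str, reached: int) -> list:
    """Gates that must have been presented on or before gate `reached`.

    A gate in this list may never be skipped: at each one, Claude stops,
    presents structured findings, and asks specific questions.
    """
    return sorted(g for g in MODE_GATES[mode] if g <= reached)
```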
Writing Plans
The writing-plans skill converts an approved design proposal into a sequence of small, precise implementation tasks. It activates after research-design has produced an approved design (Gate 2 passed).
Micro-Task Breakdown
Each task is 2–5 minutes of work with clear verification criteria. For each task, the plan specifies:
- What: Exactly what to implement — specific functions, classes, or changes
- Where: Exact file paths and approximate line numbers
- Maps to: Paper equation/section or design component
- Dependencies: Which tasks must be done first
- Steps: Specific enough to implement without ambiguity
- Verification: Concrete checks (shape checks, paper alignment, gradient flow, regression)
- Auditors: Which auditors run after completion (this fires Gate 3)
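One task record carrying the fields above might look like the following sketch. The dataclass and the concrete example values are hypothetical (the file path reuses the example from Key Principles below); the real plan format is prose, not Python.

```python
from dataclasses import dataclass

@dataclass
class PlanTask:
    what: str                 # exactly what to implement
    where: str                # exact file path and approximate line numbers
    maps_to: str              # paper equation/section or design component
    dependencies: list        # task numbers that must be done first
    steps: list               # specific enough to implement without ambiguity
    verification: list        # concrete checks; at least one is mandatory
    auditors: list            # auditors that run after completion (fires Gate 3)

task = PlanTask(
    what="Add compute_residual_codes() helper",
    where="model/vq.py, after line 145",
    maps_to="Paper Eq. 4 (residual quantization)",  # hypothetical mapping
    dependencies=[2],
    steps=["Loop over depth, quantize residual, accumulate codes"],
    verification=["Shape check: codes have shape (B, depth)"],
    auditors=["shape-auditor"],  # hypothetical auditor name
)
```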
Key Principles
| Principle | Details |
|---|---|
| 2–5 minutes per task | If it takes longer to implement than to describe, break it down further. |
| Exact file paths | Never "update the model" — always "add compute_residual_codes() to model/vq.py after line 145". |
| Verification is mandatory | Every task must have at least one concrete verification step. |
| Pause points every 3 tasks | Natural stopping points for /clear and session archival. |
Paper Extraction
The paper-extraction skill batch-processes multiple papers to build a structured literature database. It activates when you have a set of papers to catalog, not when surveying a topic (use Deep Research) or reading a single paper (/read-paper).
Database Format
Papers are organized in scratch/paper-db/:
scratch/paper-db/
  README.md                      # Index of all processed papers, organized by topic
  entries/                       # One file per paper
    author2024-shortname.md
    ...
  tags.md                        # Tag index for cross-referencing
Each entry captures full metadata (authors, year, venue, BibTeX), a structured summary, method details (architecture, I/O, loss, training), quantitative results, tags for cross-referencing, and relevance notes for the current project.
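An entry skeleton carrying the fields just listed might look like the following sketch. Every concrete value is a placeholder, not a real paper; the actual entries are markdown files, so this Python dict is purely illustrative.

```python
# Hypothetical entry skeleton; field names mirror the list above,
# values are placeholders.
entry = {
    "id": "author2024-shortname",
    "metadata": {"authors": ["A. Author"], "year": 2024,
                 "venue": "ICLR", "bibtex": "@inproceedings{author2024, ...}"},
    "summary": "Structured summary of the method and findings.",
    "method": {"architecture": "...", "io": "...", "loss": "...", "training": "..."},
    "results": {"reported_metrics": "reproduced exactly, never rounded"},
    "tags": ["vq-vae", "codebook-collapse"],   # specific, not generic
    "relevance_notes": "Opinions and project relevance go here only.",
}
```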
Key Principles
- Preserve exact numbers: Reproduce reported metrics exactly. Don't round or approximate.
- Flag missing information: If a paper doesn't report code, hyperparameters, or ablations, note it.
- Tags enable discovery: Use specific tags ("vq-vae", "codebook-collapse") not generic ones ("machine learning").
- This is a database, not a review: Stay factual. Opinions go in "Relevance Notes" only.
- After processing a batch, add a cross-reference analysis noting connections, common patterns, and gaps.
Verification Before Completion
The verification-before-completion skill ensures no completion claims are made without fresh verification evidence. It activates when Claude is about to claim something is "done", "working", "fixed", or "ready".
Never claim something works without showing evidence from THIS session that it works. "I've implemented the feature" is NOT "the feature works." "I fixed the bug" is NOT "the bug is fixed."
Fresh Evidence Requirement
Before claiming completion, verify all of the following:
- Does it run? Execute the code or run the test — don't just read it. If execution is not possible, say so explicitly.
- Does it produce the right output? Show actual output, not expected output. For model changes: shapes, loss values, gradient norms from a real pass.
- Is the evidence fresh? Evidence from earlier in the session may be stale. Re-run after any code change, even "minor" ones.
- Is anything else broken? Run regression checks and existing tests.
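The freshness rule above can be made mechanical by tying each piece of evidence to a fingerprint of the code state at the time it was gathered. This is a minimal sketch of that idea, assuming file-content hashing; the class and API are illustrative, not part of the skill.

```python
import hashlib
from pathlib import Path

def code_fingerprint(paths):
    """Hash the current contents of the files the evidence depends on."""
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(Path(p).read_bytes())
    return h.hexdigest()[:12]

class Evidence:
    """A verification result bound to the code state it was produced under."""
    def __init__(self, description, output, fingerprint):
        self.description = description
        self.output = output              # actual output, not expected output
        self.fingerprint = fingerprint

    def is_fresh(self, current_fingerprint):
        # Any code change, even a "minor" one, invalidates the evidence
        # and requires a re-run.
        return self.fingerprint == current_fingerprint
```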
Completion Template
When claiming something is done, the completion report includes: what was changed, evidence it works (real output), what was checked for regressions, and — crucially — what was NOT verified (honest about gaps).
Using Git Worktrees
The using-git-worktrees skill guides experiment branch isolation using git worktrees. It activates when starting a new experiment that could break existing functionality, or when you need to work on multiple variants in parallel.
Setup Process
- Create the worktree with a descriptive branch name: git worktree add ../worktrees/{experiment-name} -b experiment/{experiment-name}. Naming convention: include the key variables being tested (e.g., rvq-depth2-rotation, fsq-8bit-no-commitment).
- Verify gitignore: Ensure worktrees/ is in .gitignore.
- Install dependencies in the worktree directory.
- Run baseline: Before making any changes, verify the baseline works and record baseline metrics.
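The setup steps above can be composed from the naming convention alone. The helper below is a hypothetical sketch assuming a Python project (pip install, pytest baseline); it builds command strings rather than running them, and is not the skill's real interface.

```python
def worktree_commands(experiment: str) -> list:
    """Compose the worktree setup commands for one experiment name.

    The experiment name should encode the key variables being tested,
    e.g. "rvq-depth2-rotation".
    """
    branch = f"experiment/{experiment}"
    path = f"../worktrees/{experiment}"
    return [
        f"git worktree add {path} -b {branch}",
        f"cd {path} && pip install -e .",   # install dependencies (assumed Python project)
        f"cd {path} && python -m pytest",   # run the baseline BEFORE any changes
    ]
```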
Key Principles
| Principle | Details |
|---|---|
| Verify baseline first | If the baseline fails in the worktree, experimental results are meaningless. |
| Keep main clean | Don't make experimental changes in the main checkout — it's your known-good reference. |
| One experiment per worktree | Don't mix unrelated changes — it's impossible to attribute results to specific changes. |
| Document what changed | The investigation README should note which worktree/branch contains the experimental code. |
Project Customization
The project-customization skill builds a persistent project profile in .propel/ by automatically analyzing code conventions, domain context, and development patterns. Claude references this profile silently on every session start.
Auto-Detection Process
The skill runs six analysis phases on first activation:
- Project Structure Scan: Language, framework, dependencies, directory layout, build tools, package manager
- Code Style Analysis: Naming conventions, import style, formatting, type hints, error handling, docstring style
- Domain Detection: Classifies the project domain from imports (JAX/ML, PyTorch/ML, Robotics/RL, Frontend, Backend, etc.)
- Git History Analysis: Commit style, branch naming, active areas, team size, commit frequency
- Existing Conventions: Reads CLAUDE.md, CONTRIBUTING.md, linter configs, CI configs, pre-commit hooks
- Profile Generation: Synthesizes all findings into config.json, profile.md, conventions.md, and optionally domain-context.md
Subsequent Runs
On subsequent activations, the skill computes a hash over key project files. If nothing changed, the profile is current. If files changed, it runs a delta analysis — only re-analyzing the categories that changed — and presents the diff to the user before updating.
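The change detection described above can be sketched as a per-category content hash, so a delta run re-analyzes only the categories whose files changed. The category names and file lists here are assumptions for illustration; the skill's actual hashing scheme is not specified in this document.

```python
import hashlib
from pathlib import Path

# Hypothetical mapping from analysis category to the files it depends on.
CATEGORY_FILES = {
    "structure":   ["pyproject.toml"],
    "conventions": ["CLAUDE.md", ".pre-commit-config.yaml"],
}

def category_hashes(root: str) -> dict:
    """Hash each category's key files; missing files simply contribute nothing."""
    out = {}
    for category, files in CATEGORY_FILES.items():
        h = hashlib.sha256()
        for name in files:
            p = Path(root) / name
            if p.exists():
                h.update(p.read_bytes())
        out[category] = h.hexdigest()
    return out

def changed_categories(old: dict, new: dict) -> list:
    """Categories to re-analyze in a delta run: those whose hash moved."""
    return sorted(c for c in new if old.get(c) != new[c])
```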
The session-start hook reads .propel/profile.md and injects it as project context. You can disable profiling by setting "enabled": false in .propel/config.json, or remove it entirely by deleting the .propel/ directory.