Other Skills

Additional skills that support the Propel workflow: meta-skill routing, plan writing, paper processing, verification, worktrees, and project profiling.

Using Propel

The using-propel skill is the meta-skill — the entry point for the entire Propel research workflow. It runs before any action to check for applicable Propel skills, ensuring the right skill is activated for every user request.

Mode-Aware Skill Routing

Before routing to any skill, using-propel checks the current mode. If the triggered skill is not active in the current mode, it informs the user and suggests the appropriate mode with /switch.

| Skill | Researcher | Engineer | Debugger | Trainer |
| --- | --- | --- | --- | --- |
| investigation | Yes | Yes | Yes | |
| deep-research | Yes | Yes | Yes | |
| think-deeply | Yes | Yes | Yes | Yes |
| retrospective | Yes | Yes | Yes | Yes |
| context-hygiene | Yes | Yes | Yes | Yes |
| research-design | | Yes | | |
| writing-plans | | Yes | | |
| trainer-mode | | | | Yes |
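
As a rough sketch, the routing check can be thought of as a lookup against the table above. The skill and mode names come from the table; the function and variable names below are illustrative, not Propel's actual implementation, and where the table is ambiguous the mode assignments are assumptions:

```python
# Hypothetical encoding of the skill/mode routing table.
SKILL_MODES = {
    "investigation":   {"researcher", "engineer", "debugger"},
    "deep-research":   {"researcher", "engineer", "debugger"},
    "think-deeply":    {"researcher", "engineer", "debugger", "trainer"},
    "retrospective":   {"researcher", "engineer", "debugger", "trainer"},
    "context-hygiene": {"researcher", "engineer", "debugger", "trainer"},
    "research-design": {"engineer"},
    "writing-plans":   {"engineer"},
    "trainer-mode":    {"trainer"},
}

def route(skill: str, current_mode: str) -> str:
    """Activate the skill if allowed, otherwise suggest a mode switch."""
    allowed = SKILL_MODES[skill]
    if current_mode in allowed:
        return f"activate {skill}"
    suggestion = sorted(allowed)[0]  # pick any mode where the skill is active
    return f"{skill} is not active in {current_mode} mode; try /switch {suggestion}"
```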

Gate Enforcement

Propel enforces five human-in-the-loop gates plus two Questioner checkpoints. The using-propel skill ensures no gate is skipped. At each gate, Claude stops, presents structured findings, and asks questions that reveal design assumptions — never "shall I proceed?" but specific, disjunctive questions.

| Mode | Active Gates |
| --- | --- |
| Researcher | Gate 0, Gate 1 |
| Engineer | All gates (0, 1, 2, 3, 4) |
| Debugger | Gate 0, Gate 1, Gate 4 |
| Trainer | Gate 4 (runtime bugs only) |
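
The gate table above can be sketched as a simple membership check — a minimal illustration, with names chosen here rather than taken from Propel's code:

```python
# Which gates force a stop in each mode, per the table above.
ACTIVE_GATES = {
    "researcher": {0, 1},
    "engineer":   {0, 1, 2, 3, 4},
    "debugger":   {0, 1, 4},
    "trainer":    {4},  # runtime bugs only
}

def must_stop_at(gate: int, mode: str) -> bool:
    """True if the workflow must pause at this gate in this mode."""
    return gate in ACTIVE_GATES[mode]
```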

Writing Plans

The writing-plans skill converts an approved design proposal into a sequence of small, precise implementation tasks. It activates after research-design has produced an approved design (Gate 2 passed).

Micro-Task Breakdown

Each task is 2–5 minutes of work with clear verification criteria. For each task, the plan specifies the exact file path, the precise change to make, and at least one concrete verification step.

Key Principles

| Principle | Details |
| --- | --- |
| 2–5 minutes per task | If it takes longer to implement than to describe, break it down further. |
| Exact file paths | Never "update the model" — always "add compute_residual_codes() to model/vq.py after line 145". |
| Verification is mandatory | Every task must have at least one concrete verification step. |
| Pause points every 3 tasks | Natural stopping points for /clear and session archival. |
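
To make the principles concrete, here is a hypothetical shape for a single plan task — the field names are illustrative, not Propel's actual plan format:

```python
from dataclasses import dataclass, field

@dataclass
class PlanTask:
    description: str   # e.g. "add compute_residual_codes() to model/vq.py after line 145"
    file_path: str     # exact path, never a vague target like "the model"
    verification: list[str] = field(default_factory=list)

    def is_valid(self) -> bool:
        # Verification is mandatory: every task needs at least one concrete step.
        return bool(self.file_path) and len(self.verification) >= 1
```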

Paper Extraction

The paper-extraction skill batch-processes multiple papers to build a structured literature database. It activates when you have a set of papers to catalog, not when surveying a topic (use Deep Research) or reading a single paper (/read-paper).

Database Format

Papers are organized in scratch/paper-db/:

scratch/paper-db/
  README.md           # Index of all processed papers, organized by topic
  entries/            # One file per paper
    author2024-shortname.md
    ...
  tags.md             # Tag index for cross-referencing

Each entry captures full metadata (authors, year, venue, BibTeX), a structured summary, method details (architecture, I/O, loss, training), quantitative results, tags for cross-referencing, and relevance notes for the current project.
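
A small sketch of the entry naming convention shown in the tree above — the helper function is hypothetical, but the output follows the author2024-shortname.md pattern:

```python
import re

def entry_filename(first_author: str, year: int, shortname: str) -> str:
    """Build a paper-db entry filename like author2024-shortname.md."""
    slug = re.sub(r"[^a-z0-9]", "", first_author.lower())  # lowercase, strip punctuation
    return f"{slug}{year}-{shortname}.md"
```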

Verification Before Completion

The verification-before-completion skill ensures no completion claims are made without fresh verification evidence. It activates when Claude is about to claim something is "done", "working", "fixed", or "ready".

Core Rule

Never claim something works without showing evidence from THIS session that it works. "I've implemented the feature" is NOT "the feature works." "I fixed the bug" is NOT "the bug is fixed."

Fresh Evidence Requirement

Before claiming completion, verify all of the following:

  1. Does it run? Execute the code or run the test — don't just read it. If execution is not possible, say so explicitly.
  2. Does it produce the right output? Show actual output, not expected output. For model changes: shapes, loss values, gradient norms from a real pass.
  3. Is the evidence fresh? Evidence from earlier in the session may be stale. Re-run after any code change, even "minor" ones.
  4. Is anything else broken? Run regression checks and existing tests.
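
The checklist above can be sketched as code: evidence is real captured output from this session, and it goes stale the moment the code changes. The function names and the evidence structure are illustrative assumptions, not Propel's implementation:

```python
import subprocess
import time

def gather_evidence(cmd: list[str]) -> dict:
    """Actually run the check and capture real output — never just read the code."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "cmd": cmd,
        "returncode": result.returncode,
        "output": result.stdout + result.stderr,  # actual output, not expected output
        "captured_at": time.time(),
    }

def is_fresh(evidence: dict, last_code_change: float) -> bool:
    # Evidence gathered before the most recent code change is stale; re-run it.
    return evidence["captured_at"] >= last_code_change
```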

Completion Template

When claiming something is done, the completion report includes: what was changed, evidence it works (real output), what was checked for regressions, and — crucially — what was NOT verified (honest about gaps).

Using Git Worktrees

The using-git-worktrees skill guides experiment branch isolation using git worktrees. It activates when starting a new experiment that could break existing functionality, or when you need to work on multiple variants in parallel.

Setup Process

  1. Create the worktree with a descriptive branch name:
    git worktree add ../worktrees/{experiment-name} -b experiment/{experiment-name}

    Naming convention: include the key variables being tested (e.g., rvq-depth2-rotation, fsq-8bit-no-commitment).

  2. Verify gitignore: Ensure worktrees/ is in .gitignore.
  3. Install dependencies in the worktree directory.
  4. Run baseline: Before making any changes, verify the baseline works and record baseline metrics.
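
Step 1 above can be wrapped in a tiny helper that builds the worktree command with the branch-naming convention — the function name is hypothetical:

```python
def worktree_cmd(experiment_name: str) -> list[str]:
    """Build the git worktree command for an experiment branch."""
    return [
        "git", "worktree", "add",
        f"../worktrees/{experiment_name}",
        "-b", f"experiment/{experiment_name}",
    ]
```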

Key Principles

| Principle | Details |
| --- | --- |
| Verify baseline first | If the baseline fails in the worktree, experimental results are meaningless. |
| Keep main clean | Don't make experimental changes in the main checkout — it's your known-good reference. |
| One experiment per worktree | Don't mix unrelated changes — it's impossible to attribute results to specific changes. |
| Document what changed | The investigation README should note which worktree/branch contains the experimental code. |

Project Customization

The project-customization skill builds a persistent project profile in .propel/ by automatically analyzing code conventions, domain context, and development patterns. Claude references this profile silently on every session start.

Auto-Detection Process

The skill runs six analysis phases on first activation:

  1. Project Structure Scan: Language, framework, dependencies, directory layout, build tools, package manager
  2. Code Style Analysis: Naming conventions, import style, formatting, type hints, error handling, docstring style
  3. Domain Detection: Classifies the project domain from imports (JAX/ML, PyTorch/ML, Robotics/RL, Frontend, Backend, etc.)
  4. Git History Analysis: Commit style, branch naming, active areas, team size, commit frequency
  5. Existing Conventions: Reads CLAUDE.md, CONTRIBUTING.md, linter configs, CI configs, pre-commit hooks
  6. Profile Generation: Synthesizes all findings into config.json, profile.md, conventions.md, and optionally domain-context.md

Subsequent Runs

On subsequent activations, the skill computes a hash over key project files. If nothing changed, the profile is current. If files changed, it runs a delta analysis — only re-analyzing the categories that changed — and presents the diff to the user before updating.
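
The change-detection step can be sketched as a content hash over key project files — an illustrative version only; the real skill's file list and hash scheme may differ:

```python
import hashlib
from pathlib import Path

def project_hash(paths: list[str]) -> str:
    """Hash the contents of key project files; a changed hash triggers delta analysis."""
    digest = hashlib.sha256()
    for p in sorted(paths):          # sort for a stable, order-independent digest
        path = Path(p)
        if path.exists():
            digest.update(p.encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()
```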

Integration

The session-start hook reads .propel/profile.md and injects it as project context. You can disable profiling by setting "enabled": false in .propel/config.json, or remove it entirely by deleting the .propel/ directory.
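
A minimal sketch of the disable check, assuming .propel/config.json holds the "enabled" boolean described above (the function name is illustrative):

```python
import json
from pathlib import Path

def profiling_enabled(config_path: str = ".propel/config.json") -> bool:
    """Profiling is on unless the config disables it or .propel/ was deleted."""
    path = Path(config_path)
    if not path.exists():
        return False  # no .propel/ directory means no profile to inject
    return json.loads(path.read_text()).get("enabled", True)
```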