# Investigation

Scaffolds a structured investigation in `scratch/` for empirical research and documentation. The foundation of every Propel workflow.
## Overview

The Investigation skill activates when the user wants to explore, trace, or document a system before making changes. It creates a dated folder in `scratch/` with a living README that accumulates findings across the session. Every non-trivial task in Propel starts with an investigation.
Before starting any investigation, Gate 0 (Intake) must have been completed. If the user has not been asked scoping questions and approved a scope statement, the investigation skill refuses to proceed.
| Property | Details |
|---|---|
| Trigger | "start an investigation", "trace X to Y", "what touches X", "follow the wiring", "archeology", "figure out why X happens" |
| Active Modes | Researcher, Engineer, Debugger |
| Output | `scratch/{YYYY-MM-DD}-{name}/README.md` |
| Prerequisite | Gate 0 (Intake) completed |
## Scratch Scaffolding
When the investigation skill activates, it performs these steps in order:

1. Check for past experiments. Before creating a new investigation, check `scratch/registry/` for past entries with matching keywords. If found, inject them as context.
2. Create a dated folder in `{REPO_ROOT}/scratch/` with the format `{YYYY-MM-DD}-{descriptive-name}`.
3. Create a `README.md` in this folder using the template below.
4. Create scripts and data files as needed for empirical work.
5. Split into sub-documents as patterns emerge for complex investigations.
The `scratch/` directory is in `.gitignore` and will NOT be committed. Never delete anything from `scratch/`; it doesn't need cleanup. When distilling findings into PRs, include all relevant info inline; PRs must be self-contained.
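The scaffolding steps above can be sketched as a small script. This is a hypothetical helper, not the skill's actual implementation: the function name, the minimal README seed, and the registry-matching heuristic (plain substring search) are all assumptions for illustration.

```python
from datetime import date
from pathlib import Path

# Minimal README seed; the real template in this document has more sections.
README_TEMPLATE = """# Investigation: {name}

## Scope (Human-Approved)

## Task Checklist

## Findings
"""

def scaffold_investigation(repo_root: Path, name: str, keywords: list[str]) -> Path:
    """Create scratch/{YYYY-MM-DD}-{name}/ with a seeded README; return the folder."""
    # Step 1: check scratch/registry/ for past entries with matching keywords.
    registry = repo_root / "scratch" / "registry"
    past_matches = []
    if registry.is_dir():
        for entry in registry.glob("*.md"):
            text = entry.read_text(errors="ignore").lower()
            if any(kw.lower() in text for kw in keywords):
                past_matches.append(entry.name)

    # Step 2: create the dated folder {YYYY-MM-DD}-{descriptive-name}.
    folder = repo_root / "scratch" / f"{date.today():%Y-%m-%d}-{name}"
    folder.mkdir(parents=True, exist_ok=True)

    # Step 3: seed the README from the template (never overwrite an existing one).
    readme = folder / "README.md"
    if not readme.exists():
        readme.write_text(README_TEMPLATE.format(name=name))

    # Inject matched registry entries as context for the new investigation.
    if past_matches:
        with readme.open("a") as f:
            f.write("\n## Previous Experiment Learnings\n")
            f.writelines(f"- {m}\n" for m in past_matches)
    return folder
```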
## README Template
Every investigation README follows this structure. The sections are filled progressively as findings accumulate.
```markdown
# Investigation: {descriptive-name}

## Scope (Human-Approved)
{One paragraph scope statement from Gate 0.
This is what the user confirmed they want.}

## Paper References
{Links to any extracted paper notes in scratch/paper-notes/
that are relevant to this investigation.}

- [paper-name](../paper-notes/paper-name.md) — relevance note

## Task Checklist
- [ ] {task 1}
- [ ] {task 2}

## Findings
{Updated as investigation progresses}

## Surprises / Risks
{Things that were unexpected or will complicate
the implementation}

## Design Decisions (Human-Approved)
{Filled in after Gate 1 — documents the user's answers
to design-direction questions}

## Previous Experiment Learnings
{Entries from scratch/registry/ that matched this
investigation's keywords, if any}
```
## Investigation Patterns
These are common patterns, not rigid categories. Most investigations blend multiple patterns.
### Tracing
Triggers: "trace from X to Y", "what touches X", "follow the wiring"
- Follow call stack or data flow from a focal component to its connections
- Can trace forward (X → where does it go?) or backward (what leads to X?)
- Useful for: assessing impact of changes, understanding coupling
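A starting point for the tracing pattern can be sketched as a reference finder: list every line that mentions a focal symbol, then follow the wiring by hand from those hits. The function name, the symbol, and the `.py` extension filter are illustrative assumptions, not part of the skill.

```python
from pathlib import Path

def find_references(root: Path, symbol: str, ext: str = ".py") -> list[tuple[Path, int, str]]:
    """Return (file, line number, line text) for every line mentioning `symbol`."""
    hits = []
    for path in sorted(root.rglob(f"*{ext}")):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if symbol in line:
                hits.append((path, lineno, line.strip()))
    return hits
```

Forward tracing starts from the hits in callers; backward tracing starts from the hits inside the symbol's own definition.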
### System Architecture Archeology
Triggers: "document how the system works", "archeology"
- Comprehensive documentation of an entire system or flow for reusable reference
- Start from entry points, trace through all layers, document relationships exhaustively
- For complex systems, consider numbered sub-documents (`01-cli.md`, `02-data.md`, etc.)
### Bug Investigation
Triggers: "figure out why X happens", "this is broken"
- Reproduce → trace root cause → propose fix
- For cross-repo bugs, consider per-repo task breakdowns
### Technical Exploration
Triggers: "can we do X?", "is this possible?", "figure out how to"
- Feasibility testing with proof-of-concept scripts
- Document what works AND what doesn't
### Design Research
Triggers: "explore the API", "gather context", "design alternatives"
- Understand systems and constraints before building
- Compare alternatives, document trade-offs
- Include visual artifacts (mockups, screenshots) when relevant
- For iterative decisions, use numbered "Design Questions" (DQ1, DQ2...) to structure review
## Gate 1: Post-Investigation Checkpoint
After the investigation is complete, findings must be presented to the user before any design work begins. This is Gate 1.
```markdown
## Investigation Summary

### What I Found
- [3-5 bullet summary of key findings:
  architecture, code paths, dependencies]

### Surprises / Risks
- [Things that were unexpected or will
  complicate the implementation]
- [Existing behavior that might conflict
  with the proposed change]

### Open Questions — I Need Your Input
1. [Trade-off question surfaced by investigation]
2. [Architecture choice that requires human judgment]
3. [Priority question about which findings to act on]
```
### Gate 1 Rules
- Present the investigation README to the user. They must read it.
- If the investigation revealed a fundamental problem with the user's idea, say so directly (the think-deeply skill activates here).
- Ask at most 3 questions. More than that means the investigation wasn't thorough enough.
- Document the user's answers in the README under "Design Decisions (Human-Approved)".
- User confirms design direction → proceed to design phase.
## Task Checklists
The Task Checklist section in the README is updated as the investigation progresses. Each item should be specific and verifiable. Check items off as they are completed. For complex investigations, split tasks by pattern:
- Tracing tasks: "Trace data flow from encoder output to loss computation"
- Archeology tasks: "Document all entry points in the CLI module"
- Bug tasks: "Reproduce NaN in commitment loss with RVQ config"
- Exploration tasks: "Test whether JAX vmap supports dynamic batch sizes"
## Living Documentation Across /clear

The investigation README is a living document: it persists across `/clear` boundaries. This is critical because research sessions often span multiple context windows. The README serves as the bridge between sessions:
- Before `/clear`: update the README with all current findings, mark completed tasks, note where you stopped
- After `/clear`: read the README to reconstruct context and resume where you left off
- The Findings section accumulates across sessions: never overwrite, only append
- The Surprises / Risks section captures unexpected discoveries that might be lost after clearing context
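The append-only rule can be sketched as a small helper that adds a bullet to the end of the Findings section without touching earlier entries. The helper name is hypothetical; the section headings follow the README template in this document.

```python
from pathlib import Path

def append_finding(readme: Path, finding: str) -> None:
    """Append a bullet at the end of the '## Findings' section, never overwriting."""
    text = readme.read_text()
    head, marker, tail = text.partition("## Findings")
    if not marker:
        raise ValueError("README has no '## Findings' section")
    # Insert just before the next '## ' heading, or at end of file.
    next_heading = tail.find("\n## ")
    if next_heading == -1:
        updated = text.rstrip("\n") + f"\n- {finding}\n"
    else:
        updated = (head + marker + tail[:next_heading].rstrip("\n")
                   + f"\n- {finding}\n" + tail[next_heading:])
    readme.write_text(updated)
```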
## Best Practices
- Use `uv` with inline dependencies for standalone scripts; for scripts importing local project code, use `python` directly
- Use subagents for parallel exploration to save context
- Write small scripts to explore APIs interactively
- Generate figures/diagrams and reference them inline in markdown
- For external package API review: clone to `scratch/repos/` for direct source access
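The `uv`-with-inline-dependencies pattern from the first bullet looks like this: a PEP 723 inline-metadata comment block at the top of a standalone script, which `uv run` reads to build an isolated environment. The script body here is a trivial placeholder, and the empty `dependencies` list is illustrative (real exploration scripts would list packages such as `"requests"`).

```python
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
"""Standalone scratch script: run with `uv run <name>.py`, no project env needed."""

def main() -> None:
    # Placeholder for exploratory work (API probing, plotting, etc.).
    print("running in an isolated uv environment")

if __name__ == "__main__":
    main()
```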