# Researcher Mode
Literature, investigation, deep research — understanding the problem space before building anything.
## Overview
Researcher Mode is for sessions where the goal is understanding, not implementation. You are surveying a domain, processing papers, tracing code paths, or building context for a future implementation session. The mode activates only the intake and investigation phases of the pipeline, keeping you focused on learning.
| Property | Details |
|---|---|
| Active Gates | G0, G1 |
| Active Skills | investigation, deep-research, paper-extraction, think-deeply, retrospective, context-hygiene, verification-before-completion, project-customization |
| Not Available | research-design, writing-plans, subagent-driven-research, research-validation, systematic-debugging, trainer-mode, using-git-worktrees |
| Switch Command | /switch researcher |
## Researcher Mode Protocol
When Researcher Mode is active, the workflow follows three primary tracks: investigation scaffolding, deep research, and paper extraction.
### Investigation Scaffolding
Every research session starts with the investigation skill. Gate 0 fires first — Claude asks scoping questions to understand what you are trying to learn. After scope is confirmed, Claude creates a scratch/ investigation directory with a living README:
- Checks scratch/registry/ for past experiment entries with matching keywords
- Creates scratch/{date}-{name}/README.md with scope, task checklist, and findings sections
- Dispatches subagents for parallel exploration to save context
- Updates the README as findings accumulate
At Gate 1, Claude presents a structured summary: 3–5 bullet findings, surprises and risks, and open questions requiring your input. Your answers are recorded as design decisions in the README.
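The scaffolding steps above can be sketched as a small helper. This is a hypothetical sketch, not the mode's actual implementation: the function name, template wording, and exact section set are assumptions; only the scratch/{date}-{name}/README.md layout and the scope/tasks/findings/decisions sections come from the description above.

```python
from datetime import date
from pathlib import Path

# Illustrative template; section names follow the living-README description.
README_TEMPLATE = """# Investigation: {name}

## Scope
{scope}

## Tasks
- [ ] (exploration tasks go here)

## Findings
(updated as findings accumulate)

## Design Decisions
(recorded at Gate 1 from the user's answers)
"""

def create_investigation(name: str, scope: str, root: Path = Path("scratch")) -> Path:
    """Create scratch/{date}-{name}/ with a living README and return the directory."""
    inv_dir = root / f"{date.today().isoformat()}-{name}"
    inv_dir.mkdir(parents=True, exist_ok=True)
    (inv_dir / "README.md").write_text(README_TEMPLATE.format(name=name, scope=scope))
    return inv_dir
```

The registry check and subagent dispatch are deliberately omitted here; the sketch only shows the directory-plus-README scaffold that the later gates update in place.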
### Deep Research Phases
The deep-research skill conducts structured literature surveys in four phases:
#### Phase 1: Scope Definition
Claude creates a research directory and proposes an outline of key subtopics. You approve, narrow, or expand the scope before any searching begins.
#### Phase 2: Systematic Search
For each subtopic, Claude searches and extracts structured notes: citation, key idea, method, results, relevance, and limitations. The README is updated as findings arrive.
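The note fields listed above can be pictured as a record type. The field names come from the description; the class itself and its rendering method are an illustrative sketch, not a format the skill actually defines:

```python
from dataclasses import dataclass

@dataclass
class PaperNote:
    """One structured note per source, as extracted in Phase 2 (hypothetical shape)."""
    citation: str
    key_idea: str
    method: str
    results: str
    relevance: str      # e.g. "high" / "medium" / "low" (assumed scale)
    limitations: str

    def to_markdown(self) -> str:
        # Render the note as a section suitable for appending to the README.
        return (
            f"### {self.citation}\n"
            f"- Key idea: {self.key_idea}\n"
            f"- Method: {self.method}\n"
            f"- Results: {self.results}\n"
            f"- Relevance: {self.relevance}\n"
            f"- Limitations: {self.limitations}\n"
        )
```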
#### Phase 3: Synthesis
Claude produces a comparison table, taxonomy of approaches, gap analysis, and a recommendation ranked by relevance to your specific use case.
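A minimal sketch of the "ranked by relevance" step, assuming a simple ordinal relevance scale (the scale and the dict-based note shape are assumptions, not something the skill specifies):

```python
def rank_by_relevance(notes, relevance_order=("high", "medium", "low")):
    """Sort Phase 2 notes so the most relevant approaches lead the recommendation.

    Notes with an unrecognized relevance label sort last; the sort is stable,
    so ties keep their original (search) order.
    """
    order = {level: i for i, level in enumerate(relevance_order)}
    return sorted(notes, key=lambda n: order.get(n["relevance"], len(order)))
```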
#### Phase 4: User Review
You review the synthesis. You can dig deeper into specific methods, challenge the recommendation, add methods Claude missed, or refine the research question.
### Paper Extraction Workflow
For processing specific papers, use /read-paper path/to/paper.pdf. This creates structured notes in scratch/paper-notes/{paper-name}.md with equations, architecture details, hyperparameters, and implementation notes. These notes become the ground truth that the paper-alignment-auditor references during future implementation sessions.
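The mapping from paper path to note path follows directly from the description; a sketch (the helper name is illustrative, and the mapping shown is only the path convention stated above):

```python
from pathlib import Path

def paper_note_path(paper_path: str, notes_root: str = "scratch/paper-notes") -> Path:
    """Map path/to/paper.pdf to scratch/paper-notes/paper.md."""
    stem = Path(paper_path).stem  # drop directories and the .pdf extension
    return Path(notes_root) / f"{stem}.md"
```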
## What Researcher Mode Does NOT Do
Researcher Mode deliberately excludes implementation-oriented skills and gates:
- No implementation — no code writing, no subagent-driven research, no validation
- No design plans — research-design and writing-plans skills are inactive
- No training — trainer-mode skill is inactive
- No debugging — systematic-debugging is inactive (investigation for understanding is different from investigation for fixing)
Gates 2, 3, and 4 do not fire. The pipeline stops after Gate 1 (post-investigation). If you discover during research that you are ready to build, switch to Engineer Mode.
## When to Use
- Before you know what to build — surveying approaches, comparing methods, understanding trade-offs
- Surveying a new domain — "What approaches exist for discrete representation learning in robotics?"
- Processing papers — extracting implementation details from one or more papers into structured notes
- Building context for a future session — investigating a codebase before modifying it
- Tracing code paths — understanding what touches what before proposing changes
## Out-of-Scope Redirects
| If You Ask For... | Redirect |
|---|---|
| Code implementation or changes | /switch engineer — full design-implement-validate workflow |
| Training launch or monitoring | /switch trainer — training execution and runtime debugging |
| Bug diagnosis and fixing | /switch debugger — root-cause analysis with evidence |
## Example Session
A typical Researcher Mode session conducting a literature review:
```
User: /switch researcher

Claude: Researcher Mode active. Gates 0 and 1.
        Active skills: investigation, deep-research, paper-extraction,
        think-deeply, retrospective, context-hygiene.

        What's the research question driving this session?

User: What approaches exist for codebook collapse prevention
      in VQ-VAE variants? I want to understand the landscape
      before choosing an approach for our robotics use case.

Claude: [Gate 0 — Intake]
        A few scoping questions:
        1. Are you focused on standard VQ-VAE, or also considering
           RVQ, FSQ, and LFQ variants?

User: All variants. I want a broad survey.

Claude: 2. Is the downstream task action prediction specifically, or
           discrete representations for robotics more broadly?

User: Action prediction with trajectory data.

Claude: Scope: Survey codebook collapse solutions across VQ-VAE,
        RVQ, FSQ, and LFQ, ranked by relevance to action
        prediction with trajectory data.

        [creates: scratch/research/2026-03-28-codebook-collapse/]

        Proposed research outline:
        1. Original VQ-VAE collapse problem — why it happens
        2. Codebook reset / EMA update strategies
        3. FSQ and LFQ — collapse-free alternatives
        4. RVQ — multi-level residual quantization
        5. Applications in robotics / trajectory prediction
        6. Comparison and recommendation

        Does this outline cover what you need?

User: Add a section on commitment loss variants too.

Claude: Updated. Searching...

        [Phase 2: systematic search across subtopics]
        [Phase 3: synthesis with comparison table]

        [Gate 1 — Post-Investigation]
        Findings:
        • FSQ eliminates collapse entirely by replacing learned
          codebooks with fixed scalar quantization
        • EMA reset strategies work but require tuning the
          reset threshold per domain
        • RVQ with per-level commitment scaling is most promising
          for trajectory data (3 papers confirm this)

        Research document: scratch/research/2026-03-28-codebook-collapse/

        Questions:
        1. Should I dig deeper into any of these approaches?
        2. Ready to switch to Engineer Mode and implement one?
```