Code Reviewer

General code quality review with research-awareness — catches what generic reviewers miss in AI and robotics codebases.

Overview

The Code Reviewer reviews code for general quality AND research-specific concerns that generic code reviewers miss. It is called as part of the subagent-driven-research pipeline after implementation, or explicitly when a code review is requested.

| Property | Details |
| --- | --- |
| Tools | Read, Grep, Glob (read-only) |
| Auto-Dispatch | Yes — after implementation in the Engineer pipeline, and before merging feature branches |
| Trigger | New implementations, feature branches, explicit review requests |

Correctness

Readability

Research-Specific Quality

These are the checks that generic code reviewers miss:

Research Context Matters

In research code, a quick prototype that works is often more valuable than a perfectly structured one that takes a week longer. The reviewer doesn't over-index on engineering perfection at the expense of research velocity.

JAX/ML Patterns
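One concrete issue this kind of check catches (a hypothetical sketch, not the agent's actual checklist): JAX arrays are immutable, so NumPy-style in-place writes, common in ported research code, fail at runtime.

```python
import jax.numpy as jnp

x = jnp.zeros(3)

# Flagged: a NumPy habit that breaks on JAX's immutable arrays.
# x[0] = 1.0  # raises TypeError under JAX

# Suggested fix: functional indexed update via the .at property,
# which returns a new array rather than mutating in place.
x = x.at[0].set(1.0)
```

Note that `x.at[0].set(1.0)` leaves the original array untouched and returns an updated copy, which is why the result must be rebound to `x`.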

Performance

Maintainability

Output Format

The reviewer produces a structured report with four sections, ordered from most to least urgent:

Must Fix (blocking)

Issues that must be resolved before merging. Each includes a file and line reference, an explanation of why it matters, and a specific fix suggestion.

Should Fix (non-blocking but important)

Issues that should be addressed but won't block progress. Each includes an impact assessment and a suggested fix.

Suggestions (quality improvements)

Optional improvements that would make the code better but aren't required.

Good Patterns Observed

What the code does well — reinforces good practices so they continue.
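Put together, a report might look like the following (an illustrative sketch; the file names, line numbers, and findings are invented):

```markdown
## Must Fix
- `train.py:142`: gradients are computed but never applied, so the
  optimizer state never changes. Fix: pass the gradients into the
  optimizer's update step.

## Should Fix
- `data/loader.py:57`: the dataset is shuffled with a fresh seed each
  epoch, so runs are not reproducible. Impact: results cannot be
  replayed. Suggested fix: derive the shuffle seed from the run config.

## Suggestions
- `eval.py`: metric names are repeated string literals; a shared
  constant would prevent silent typos.

## Good Patterns Observed
- Config is loaded once and passed explicitly; no hidden global state.
```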

Review Principles

- Be specific — "This function is too complex" is useless; "This function does 3 things (parsing, validation, transformation) — split into 3 functions" is actionable.
- Prioritize by impact — a correctness bug outranks a style issue.
- Acknowledge good work — positive feedback reinforces good practices.
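The "be specific" principle in practice: a hypothetical before-and-after sketch in Python, where a single `load_record` that parsed, validated, and transformed in one body is split into three single-purpose functions.

```python
def parse_record(raw: str) -> dict:
    """Parsing only: turn 'key=value;...' text into a dict."""
    return dict(pair.split("=", 1) for pair in raw.strip().split(";") if pair)

def validate_record(record: dict) -> dict:
    """Validation only: require the fields downstream code needs."""
    if "id" not in record:
        raise ValueError("record missing 'id'")
    return record

def transform_record(record: dict) -> dict:
    """Transformation only: normalize field types for storage."""
    return {**record, "id": int(record["id"])}

def load_record(raw: str) -> dict:
    """Each concern is now testable and reviewable on its own."""
    return transform_record(validate_record(parse_record(raw)))
```

For example, `load_record("id=7;name=arm")` returns `{"id": 7, "name": "arm"}`, and a malformed record now fails at one identifiable stage instead of somewhere inside a 60-line function.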