# Reviewer pipeline
The reviewer pipeline is how Synthex turns "many reviewers, many findings" into one ranked list of things to fix. It's the same shape across plan review and code review — fan out, consolidate, address, repeat.
## Stages
A single review cycle has five stages:
- Context bundle. The orchestrator assembles the artifact under review (diff, plan, PRD) plus its surrounding context (project conventions, specs, prior decisions) into a bundle each reviewer can consume. The Context Bundle Assembler is a utility agent that does only this; isolating it keeps the cost of bundle assembly out of the orchestrator's context window.
- Fan-out. All configured reviewers run in parallel. Each one is independent. None sees another's findings until the consolidator combines them, which prevents groupthink and surfaces disagreement honestly.
- Findings consolidation. The Findings Consolidator agent takes raw reviewer outputs and produces a single deduplicated, severity-ranked list. It does not add new findings, does not change severities, does not edit reviewer wording beyond what consolidation requires. It exists to save the orchestrator's context window — three to five overlapping reviewer reports become one tractable list.
- Address. The orchestrator (Tech Lead for code, Product Manager for plans) addresses every finding at or above the severity threshold, then commits the fix.
- Re-review. Reviewers re-run on the updated artifact. If unresolved findings remain at the threshold, the cycle continues, bounded by `max_cycles`.
If the loop exits with findings still open above the threshold, those findings are recorded in the completion summary. The user chooses whether to merge, iterate further, or split the remainder into a follow-up task.
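As a concrete picture, here is a hypothetical Open Findings excerpt from a completion summary; the field names are illustrative assumptions, not Synthex's documented schema:

```yaml
# Hypothetical completion-summary excerpt -- field names are assumptions
open_findings:
  - severity: medium
    reviewer: Security Reviewer
    file: src/api/upload.ts            # example path, not from the docs
    summary: "Unbounded request body size on multipart upload."
    cycles_survived: 2                 # still open when max_cycles was hit
```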
## Reviewers by command
Different commands run different reviewer rosters. The defaults:
| Command | Default reviewers |
|---|---|
| `/synthex:refine-requirements` | Product Manager, Tech Lead, Designer |
| `/synthex:write-implementation-plan` | Architect, Designer, Tech Lead |
| `/synthex:review-code` | Code Reviewer, Security Reviewer (mandatory) |
| `/synthex:design-system-audit` | Design System Agent |
| `/synthex:reliability-review` | SRE Agent |
| `/synthex:performance-audit` | Performance Engineer |
Each command has a `reviewers:` block in `.synthex/config.yaml` that lets you add specialists, disable defaults, or change the focus prompts. See Configuration for the exact keys.
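As a sketch of what such a block might look like (the key names below are assumptions for illustration; the Configuration page documents the real schema):

```yaml
# Hypothetical reviewers block -- key names are assumptions
code_review:
  reviewers:
    defaults: true                     # keep Code Reviewer + Security Reviewer
    add:
      - agent: Performance Engineer
        focus: "hot paths and allocation-heavy loops"
    disable:
      - Designer
```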
## Why parallel and independent
Two questions every project asks early:
**Couldn't one capable reviewer do the work of three?**
Yes, and the result will be coherent — but it will reflect that one reviewer's biases. Two reviewers who never see each other's drafts produce two genuine perspectives. When they agree, the finding has higher confidence. When they disagree, the disagreement itself is signal: the issue is more nuanced than either alone could capture.
**Couldn't reviewers run sequentially, with each one seeing the prior?**
That collapses into the single-reviewer case over time: the second reviewer anchors on the first one's narrative instead of forming its own view. Independence is what gives you two genuine angles, and it costs a little redundancy in exchange for catching different classes of error.
## Findings consolidator: the mechanical layer
The Findings Consolidator is intentionally narrow. It runs Haiku because the work is purely structural:
- Parse each reviewer's structured output.
- Group findings that describe the same underlying issue (same file, same line range, same vulnerability class).
- Preserve the original wording and severity from each reviewer.
- Sort by severity, then by file/line.
- Emit one consolidated list with reviewer attribution.
By making consolidation mechanical, the orchestrator gets a clean input it can act on without re-reading three reports. By keeping it narrow, the consolidator can't accidentally rewrite or soften a reviewer's verdict.
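A minimal before/after sketch of that behavior, using an illustrative shape rather than the real output schema: two reviewers flag the same lines, and the consolidator groups them into one entry with both attributions and both wordings preserved.

```yaml
# Raw reviewer findings (illustrative shape, not the real schema)
code_reviewer:
  - file: src/auth/session.ts
    lines: 42-48
    severity: high
    summary: "Session token compared with non-constant-time equality."
security_reviewer:
  - file: src/auth/session.ts
    lines: 42-48
    severity: high
    summary: "Timing-unsafe token comparison enables token guessing."

# Consolidated: one grouped entry, severities and wording untouched
consolidated:
  - file: src/auth/session.ts
    lines: 42-48
    severity: high
    reported_by: [Code Reviewer, Security Reviewer]
    findings:
      - "Session token compared with non-constant-time equality."
      - "Timing-unsafe token comparison enables token guessing."
```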
## The multi-model option
For high-stakes reviews, Synthex supports a multi-model review pipeline. The same diff is sent to the native reviewer (Claude) and to one or more external CLI adapters running other LLM families. The Multi-Model Review Orchestrator fans them all out in parallel, collects their verdicts, and feeds them through the same consolidator.
The shape is sometimes called proposer-aggregator: each model proposes findings independently; the aggregator (consolidator + orchestrator) merges and decides.
Why bother:
- Different model families fail differently. Claude's blind spots are not GPT's are not Gemini's. Catching errors any single family would miss is the entire point.
- Disagreement across families is high-signal. When one family flags a finding the others miss, it deserves attention even if its severity vote is borderline.
- It's optional and per-command. You can run multi-model on `/synthex:review-code` for production code paths and skip it for documentation changes.
The mechanism is configured via the `code_review` and `implementation_plan` blocks; the orchestrator decides whether to fire the multi-model branch using a documented decision order based on artifact size, change risk, and config.
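A hedged sketch of what such a block could look like; every key and adapter name below is an assumption for illustration, not documented config:

```yaml
# Hypothetical multi-model settings -- keys and adapter names are assumptions
code_review:
  multi_model:
    enabled: true
    adapters:
      - gpt-cli                        # external CLI adapter, illustrative name
      - gemini-cli
    trigger:
      min_changed_lines: 200           # only fire on larger diffs
      risk_paths: ["src/auth/**", "src/payments/**"]
```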
## How the loop bounds itself
The default is `max_cycles: 2` for most commands and 3 for plans, since plan defects ripple. The cycle counter is per-review, not per-task: sending a task back through review starts a fresh counter.
When the loop hits its cap with unresolved findings:
- The findings are recorded with reviewer attribution and severity.
- The completion summary includes them as Open Findings.
- The user sees them in the merge prompt and decides what to do.
You can raise or lower the cap per command in config:
```yaml
implementation_plan:
  review_loops:
    max_cycles: 3

code_review:
  review_loops:
    max_cycles: 2
```

There's no global way to disable the loop: that would mean shipping unreviewed changes, which the project is opinionated about not doing.
## What the pipeline doesn't do
- It doesn't merge. Reviewers and the consolidator are advisory. The orchestrator decides what to fix and the gate decides whether to merge.
- It doesn't run quietly. Every cycle's findings are surfaced to you. The point is transparency, not friction.
- It doesn't replace human judgment. [H] criteria still require explicit user approval. Reviewers can recommend; only you can sign off.
## Next
- Quality gates — the merge contract reviewers feed into
- The lifecycle — where reviews sit in the five-phase loop
- Configuration — tune reviewer rosters, severity, and cycle caps