# Performance Engineer

Specialist · Quantified, full-stack performance analysis with budgets.
## Role

The Performance Engineer sits in the specialist tier: domain expertise that runs in parallel with the other specialists.
## Lifecycle stages
- Ship
- Operate
## Related commands

- `/synthex:performance-audit` · Quantified analysis across frontend, backend, db, infra.
- `/synthex:reliability-review` · SLO coverage, observability gaps, runbook & deploy-risk review.
## Source

The agent's identity — system prompt, model, behavior — is defined in markdown at `bluminal/lumenai/blob/main/plugins/synthex/agents/performance-engineer.md`.
## What the Performance Engineer reviews
The Performance Engineer is the specialist behind `/synthex:performance-audit`. It can also be added to `/synthex:review-code`'s reviewer roster for projects where every diff carries performance risk (real-time systems, large-list interactions, high-traffic API surfaces).
The audit covers:
- Frontend render performance. Reflows, layout thrashing, large-list virtualization, unnecessary re-renders, hydration cost on cold load.
- Network behavior. Waterfall depth, request fan-out, response sizes, cache headers, redundant requests, opportunities for streaming or progressive rendering.
- Bundle size. Tree-shaking effectiveness, accidental large imports, asymmetric client/server boundaries that pull server code into the client bundle.
- Server-side latency. Hot-path query patterns, N+1s, missing indexes, blocking I/O on request paths, concurrency bottlenecks.
- Database access. Query plan inspection (when the audit has access to EXPLAIN output), index coverage, lock-contention risk on high-write paths.
- Infrastructure shape. Right-sized compute, missing CDN or cache tiers, hot-spotted regions, cold-start risk on serverless paths.
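The N+1 pattern called out under server-side latency can be sketched in a few lines. This is a hedged illustration with an in-memory stand-in for the database — the types, data, and `roundTrips` counter are assumptions for demonstration, not part of the audit's tooling:

```typescript
// Minimal sketch of the N+1 shape vs. a batched query, using an
// in-memory array as a stand-in for a database table.
type Invitation = { id: number; profileId: number };

const invitations: Invitation[] = [
  { id: 1, profileId: 7 },
  { id: 2, profileId: 7 },
  { id: 3, profileId: 8 },
];

let roundTrips = 0;

// N+1 shape: one lookup per id — each call would be a network round-trip.
function loadOneByOne(ids: number[]): Invitation[] {
  return ids.map((id) => {
    roundTrips += 1;
    return invitations.find((inv) => inv.id === id)!;
  });
}

// Batched shape: a single lookup (one round-trip) for the whole set.
function loadBatched(ids: number[]): Invitation[] {
  roundTrips += 1;
  const wanted = new Set(ids);
  return invitations.filter((inv) => wanted.has(inv.id));
}

roundTrips = 0;
loadOneByOne([1, 2, 3]);
const naive = roundTrips; // 3 round-trips

roundTrips = 0;
loadBatched([1, 2, 3]);
const batched = roundTrips; // 1 round-trip

console.log(naive, batched); // 3 1
```

The audit flags the first shape on request hot paths because the round-trip count scales with the data, not the code.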
## How the audit produces useful output
Performance findings are easy to over-emit and hard to act on. The Performance Engineer is calibrated to:
- Quantify everything. "This query is slow" is not a finding. "This query takes 380ms p95 on the documented dataset and runs once per request" is. If the audit can't put a number on it, the finding gets a `low` severity with a `needs-measurement` tag rather than blocking merge.
- Surface impact, not abstraction. A 400ms p95 reduction on a 50 RPS endpoint matters more than a 10% reduction on a 0.1 RPS endpoint. The audit prioritizes by user-impact estimate, not theoretical optimality.
- Distinguish "fix now" from "design constraint." Some findings are tactical patches; some expose design assumptions that warrant an ADR or RFC. The Performance Engineer marks the difference so the Tech Lead doesn't try to fix architectural drift in a single PR.
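The user-impact ordering described above amounts to weighting latency saved by request rate rather than by relative percentages. A minimal sketch, where the `Finding` shape, field names, and numbers are all illustrative assumptions:

```typescript
// Impact estimate: milliseconds saved per second of traffic.
type Finding = { name: string; p95SavedMs: number; rps: number };

const impact = (f: Finding): number => f.p95SavedMs * f.rps;

const findings: Finding[] = [
  // 400ms saved on a 50 RPS endpoint → 20,000 ms saved per second.
  { name: "hot endpoint, 400ms saved", p95SavedMs: 400, rps: 50 },
  // 10% of 2000ms on a 0.1 RPS endpoint → 20 ms saved per second.
  { name: "cold endpoint, 10% of 2000ms", p95SavedMs: 200, rps: 0.1 },
];

// Sort descending by estimated user impact.
const ordered = [...findings].sort((a, b) => impact(b) - impact(a));
console.log(ordered[0].name); // hot endpoint, 400ms saved
```

The percentages alone would favor the cold endpoint; the rate-weighted estimate does not.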
## Output shape
A typical finding includes the measurement, the root cause, the proposed change, and the expected impact:
```markdown
### Finding — high: N+1 query on profile.invitations relation

**Location:** `app/api/profile/route.ts:18`
**Category:** server-latency

The handler issues 1 + N queries — one for the profile, then one per invitation. Measured
on the documented test dataset (50 invitations): 1 + 50 sequential round-trips, p95 850ms.

**Suggested change:** issue a single query with a JOIN, or eager-load via the ORM's relation
loader.

**Expected impact:** p95 850ms → p95 70ms on the same dataset; eliminates the per-invitation
round-trip cost.
```

The **Expected impact** line is part of the convention — every "high" finding states what "better" looks like in numbers, not vibes.
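The "single query with a JOIN" change suggested in the finding above can be sketched by comparing query counts. The table and column names here are assumptions for illustration, not taken from a real schema; a real fix would go through the project's ORM:

```typescript
// Before: 1 query for the profile, then one per invitation (1 + N).
function queriesBefore(invitationIds: number[]): string[] {
  return [
    "SELECT * FROM profiles WHERE id = $1",
    ...invitationIds.map(() => "SELECT * FROM invitations WHERE id = $1"),
  ];
}

// After: one query that JOINs invitations onto the profile row.
function queriesAfter(): string[] {
  return [
    "SELECT p.*, i.* FROM profiles p " +
      "LEFT JOIN invitations i ON i.profile_id = p.id " +
      "WHERE p.id = $1",
  ];
}

console.log(queriesBefore([1, 2, 3]).length, queriesAfter().length); // 4 1
```

With the documented 50-invitation dataset, that is 51 round-trips collapsed into one, which is where the projected p95 drop comes from.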
## When to add the Performance Engineer to code review
For most projects, performance is checked at `/synthex:performance-audit` time (typically before each release) rather than per-PR. Exceptions:
- Real-time systems where a single PR can introduce a regression that pages immediately.
- High-traffic API endpoints where the marginal request cost compounds quickly across millions of calls.
- Large-list / virtualized UI where rendering performance is part of the product surface.
For these projects, attach the Performance Engineer to the code-review roster:
```yaml
code_review:
  reviewers:
    - code-reviewer
    - security-reviewer
    - performance-engineer
```

Otherwise, leaving it out keeps PR review fast and reserves performance attention for the audit cadence.
## What it explicitly does not do
- Run benchmarks. The Performance Engineer reasons about the diff and the surrounding code; it doesn't execute load. Empirical measurement is up to the project's own benchmarking pipeline.
- Deploy optimizations on its own. Like every reviewer, it's advisory.
- Override the Code or Security reviewers. A performance optimization that introduces a vulnerability or hurts maintainability is not an improvement — the other reviewers stay in scope.