Authoring commands

A command is one markdown file that describes what to do in clear English. Claude Code reads it as a system prompt for an orchestrating agent. There is no compiler, no runtime — the prose is the implementation.

When to write a new command

Reach for a new command when the workflow is:

  • Reusable — you'd run it more than a handful of times, or hand it to a teammate.
  • Multi-step — a single Claude Code prompt would be too long to remember.
  • Coordinating — it delegates to multiple agents and synthesizes their outputs.

If the workflow is one-shot or fits comfortably in an ad-hoc prompt, don't bother — write the prompt inline.

File structure

Commands live in commands/<verb>.md inside your plugin:

---
model: opus
---

# Spec Audit

Run a structured audit of the codebase against a target specification.
Produces a report at docs/runbooks/spec-audit-{timestamp}.md.

## Parameters

| Parameter      | Description                                      | Default                         | Required |
| -------------- | ------------------------------------------------ | ------------------------------- | -------- |
| `spec_path`    | Path to the spec file to audit against           | `docs/specs/design-system.md`   | No       |
| `scan_paths`   | Code directories to scan                         | `src/`                          | No       |

## Core responsibilities

You are an orchestrator. You do not perform the audit yourself; you delegate
to the Design System Agent (or the named specialist for the spec under audit)
and assemble the result.

## Workflow

1. Read the target specification at `{spec_path}`.
2. Identify the appropriate specialist agent for the spec category…
3. Spawn the specialist via the Task primitive with the spec contents and
   scan paths in scope…
4. Receive the specialist's findings. Run them through the Findings
   Consolidator…
5. Write a structured report to `docs/runbooks/spec-audit-{timestamp}.md`.
6. Return a one-paragraph summary plus the report path to the user.

## Critical requirements

- **Specialist selection MUST match the spec category.** A design-system spec
  routes to design-system-agent; a security spec routes to security-reviewer;
  an SLO spec routes to sre-agent.
- **Never modify the spec or the code during audit.** This command is
  read-only.
- **Always produce a report file.** The on-screen summary is provisional;
  the report is the durable artifact.

Frontmatter conventions

| Field   | Values                      | Use it for                                                            |
| ------- | --------------------------- | --------------------------------------------------------------------- |
| `model` | `opus` / `sonnet` / `haiku` | Same selection rule as agents: smallest model that does the job well  |

Commands that orchestrate broad work (planning, building, retrospectives) typically run Opus. Commands that wrap a single specialist with light coordination (write-adr, write-rfc) can use Sonnet. Pure-mechanical commands are rare; if you find yourself reaching for Haiku, the work probably belongs as a utility agent rather than a top-level command.
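
For example, a light wrapper like write-adr might declare Sonnet in its frontmatter. The body below is a sketch, not the shipped command — the specialist name and output path are illustrative:

---
model: sonnet
---

# Write ADR

Collect decision context from the user, spawn the ADR-writer specialist,
and save the result under docs/adr/.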

Parameters

Document parameters in a markdown table. Use these column headers exactly so the synced parameter index can parse them later when typed parameter frontmatter lands upstream:

## Parameters

| Parameter | Description | Default | Required |
| --------- | ----------- | ------- | -------- |
| `name`    | …           | `value` | Yes/No   |

If a parameter has a default in .synthex/config.yaml (e.g. next_priority.concurrent_tasks), state both the default value and the config path that overrides it.
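
For example, a parameter backed by the `next_priority.concurrent_tasks` key mentioned above could be documented like this (the description and the `3` default are illustrative):

| Parameter          | Description                  | Default                                                                       | Required |
| ------------------ | ---------------------------- | ----------------------------------------------------------------------------- | -------- |
| `concurrent_tasks` | Max tasks to run in parallel | `3`; override via `next_priority.concurrent_tasks` in `.synthex/config.yaml`  | No       |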

Workflow style

Synthex commands are structured as numbered, step-by-step workflows. The convention is:

  • One verb per step. "Read X." / "Identify Y." / "Spawn Z." Avoid compound steps.
  • Explicit delegation. When a step uses a sub-agent, name the agent and say what context it receives.
  • State machine for branches. "If condition A, do X. Otherwise, do Y." Don't leave the agent guessing.
  • Idempotent where possible. A command that's safe to re-run after partial failure is worth the extra prose.

The model interprets the workflow as a procedure to follow. Imprecise prose produces imprecise execution; tight prose produces tight execution.
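
For example, a branch written in this style might read (hypothetical steps, reusing the `{spec_path}` parameter from the example above):

3. Check whether `{spec_path}` exists.
4. If it exists, read it and continue to step 5. Otherwise, stop and report
   "spec not found at {spec_path}" to the user; do not guess an alternative.
5. Spawn the specialist agent with the spec contents in scope.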

Sub-agent delegation

When your command delegates to a sub-agent, follow the Task primitive convention. The standard shape:

4. Spawn the {agent-name} agent with the following context:
   - {what the agent needs to read}
   - {what scope to operate in}
   - {what output shape to return}

   Wait for the agent to complete. The expected return shape is:

   - findings: Array<{severity, location, summary, body}>
   - verdict: "PASS" | "WARN" | "FAIL"

The orchestrator and sub-agent communicate by returning structured text. There's no formal schema enforced by the runtime; the contract lives in the prose.
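
That means enforcement is also prose. A defensive follow-up step might read (illustrative wording):

5. If the return is missing `findings` or `verdict`, treat the run as failed
   and re-spawn the agent once, restating the expected return shape. If the
   verdict is "FAIL", copy every high-severity finding into the report
   verbatim.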

Critical requirements section

Every Synthex command ends with a Critical Requirements list — a short bulleted set of invariants the workflow must respect. Examples:

  • Validate ALL acceptance criteria before marking a task complete.
  • Every [T] criterion must have a linked, passing test — no exceptions.
  • Never cross phase boundaries in a single session.

Treat this section as the contract you'd want the model to re-read at the moment of decision. Repeat the most important constraints there even if they appear in the workflow body — a reviewer who skims should still encounter the rules.

Composing with existing commands

Many useful commands are thin compositions over Synthex's existing roster. Examples:

  • /myorg:weekly-retro — wraps /synthex:retrospective with org-specific prompts and saves to a different output path.
  • /myorg:critical-review — delegates to /synthex:review-code with the performance-engineer and a custom compliance reviewer added to the roster.
  • /myorg:bootstrap-feature — sequences /synthex:refine-requirements, /synthex:write-implementation-plan, and an org-specific "spike-plan" step.

These are entirely valid plugins — composing in markdown is the supported extension model.
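
A sketch of the first of these, assuming your org plugin keeps the same file conventions (the prompt text and output path are placeholders):

---
model: opus
---

# Weekly Retro

Run the /synthex:retrospective workflow over the last seven days of activity,
adding the prompt: "Flag anything that affected the on-call rotation."
Save the report to docs/retros/weekly-{date}.md instead of the default path.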

Testing

Like agents, commands aren't unit-tested per se. Two sanity checks:

  1. Run the command in a representative project. Does the workflow execute end-to-end without prompting for clarification?
  2. Run the command in an edge-case project (empty repo, very large repo, missing config sections). Does it fail clearly when it can't proceed, rather than misbehaving silently?

A command that asks the user too many clarifying questions is usually missing parameters or defaults. A command that fails silently is usually missing explicit pre-conditions in the workflow.
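
An explicit pre-condition turns a silent failure into a loud one. For example (hypothetical wording):

1. Verify that `.synthex/config.yaml` exists and contains the sections this
   command reads. If anything is missing, stop and report exactly which
   pre-condition failed rather than proceeding with guessed defaults.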
