How APEX Works
A comprehensive guide to the multi-agent orchestration system for Azure platform engineering.
Executive Summary
APEX is a multi-agent orchestration system in which specialised AI agents collaborate through a structured multi-step workflow to transform Azure project requirements into deployed, production-grade Infrastructure as Code. The system coordinates specialised agents and subagents through mandatory human approval gates, producing Bicep or Terraform templates that conform to Azure Well-Architected Framework principles, Azure Verified Modules standards, and organisational governance policies. The agents are supported by reusable skills, instruction files, Copilot hooks, and MCP server integrations.
The core thesis is that AI agents can reliably produce production-grade Azure infrastructure when properly orchestrated with guardrails. The system achieves this through a layered knowledge architecture (agents, skills, instructions, registries), mechanical enforcement of invariants via automated validation scripts, and a human-in-the-loop design that preserves operator control at every critical decision point. Cost governance (budget alerts, forecast notifications, anomaly detection) and template repeatability (zero hardcoded values) are enforced as first-class concerns across all generated infrastructure.
Recommended Reading Order
If you are new to APEX, read the docs in this order:
- System Architecture for the overall flow
- Agent Architecture for roles, handoffs, and subagents
- Skills & Instructions for the knowledge and rule layers
- Workflow Engine & Quality for gates, validation, and session state
- MCP Integration for external tool access and server capabilities
Intellectual Foundations
This project draws directly from two bodies of work that define how autonomous AI agents can operate reliably in professional software engineering contexts.
Harness Engineering (OpenAI)
In February 2026, OpenAI published “Harness Engineering: Leveraging Codex in an Agent-First World,” describing how a small team built and shipped an internal product with zero lines of manually written code. Every line — application logic, tests, CI configuration, documentation, and internal tooling — was generated by Codex agents. The key insights that shaped this project:
Repository as the system of record. Knowledge that lives in Google Docs, chat threads, or people’s heads is invisible to agents. Only versioned, in-repo artefacts — code, markdown, schemas, execution plans — exist from the agent’s perspective. This project applies the principle by storing all agent outputs in agent-output/{project}/, all conventions in skills and instructions, and all decisions in Architecture Decision Records.
Map, not manual. OpenAI initially tried a monolithic AGENTS.md approach and found it
failed: context is a scarce resource, and a giant instruction file crowds out the task.
Instead, they treat AGENTS.md as a table of contents that points to deeper sources.
This project adopts the same pattern: AGENTS.md is approximately 250 lines and points to
skills, instruction files, and multiple configuration registries.
Enforce invariants, not implementations. Rather than prescribing step-by-step procedures, the Harness Engineering approach encodes strict boundaries (architectural layering rules, naming conventions, security requirements) and lets agents choose their own path within those constraints. This project enforces invariants mechanically: validation scripts check naming conventions, template compliance, governance references, and architectural rules.
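A naming-convention validator of this kind can be sketched in a few lines. The pattern below is an illustrative assumption, not the project’s actual rule, and `check_names` is a hypothetical helper:

```python
import re

# Hypothetical invariant: resource names must be lowercase alphanumeric
# with hyphens, 3-24 characters (the pattern itself is illustrative).
NAME_PATTERN = re.compile(r"^[a-z0-9][a-z0-9-]{1,22}[a-z0-9]$")

def check_names(names):
    """Return the names that violate the naming invariant."""
    return [n for n in names if not NAME_PATTERN.match(n)]

# A gate script would fail the run when this list is non-empty,
# without prescribing how the agent arrived at the names.
violations = check_names(["app-prod-kv", "My_Vault"])
```

The point of the pattern is that the script checks the boundary (the name is valid or it is not) and stays silent about the steps the agent took to produce it.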
Human taste gets encoded. When a human reviewer catches a pattern issue, the fix is not to patch the output — it is to update the instruction or skill that should have prevented the issue. Over time, human judgment compounds in the system as linter rules, templates, and skill updates.
Garbage collection through continuous enforcement. Technical debt in an agent-generated system accumulates the same way it does in human-generated systems, but faster. The Harness Engineering approach runs recurring agents that scan for deviations and open targeted refactoring pull requests. This project implements a quarterly context audit checklist and weekly documentation freshness checks.
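A weekly freshness check can be as simple as flagging documentation files whose modification time falls outside a window. The 90-day threshold and the `stale_docs` helper below are illustrative, not the project’s actual policy:

```python
import time
from pathlib import Path

STALE_AFTER_DAYS = 90  # illustrative threshold, not the project's setting

def stale_docs(root, now=None):
    """Return markdown files not modified within the freshness window."""
    now = now or time.time()
    cutoff = now - STALE_AFTER_DAYS * 86400
    return [p for p in Path(root).rglob("*.md") if p.stat().st_mtime < cutoff]
```

A recurring agent would take this list as its work queue and open a targeted refactoring or refresh PR per stale file.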
Ralph (Snarktank)
Ralph is an autonomous AI agent loop (12k+ GitHub stars) based on Geoffrey Huntley’s Ralph pattern. It spawns fresh AI coding tool instances (Amp or Claude Code) in a bash loop, picking off PRD user stories one at a time until all items pass. Key concepts adopted from Ralph:
Fresh-context iteration model. Each Ralph iteration spawns a brand-new AI instance with zero carry-over context. The only memory between iterations is git history, a progress.txt append-only learning log, and a prd.json task list. This project adopts the same philosophy through its apex-recall CLI skill: each agent step is stateless, and all memory persists through versioned artefact files in agent-output/{project}/ and the machine-readable 00-session-state.json.
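As a rough sketch of how stateless steps can share memory through a versioned state file — the field names below are assumptions for illustration, not the actual 00-session-state.json schema:

```python
import json
from pathlib import Path

# Hypothetical location and shape; the real schema may differ.
STATE_FILE = Path("agent-output/demo-project/00-session-state.json")

def load_state():
    """Read the session state, or start fresh if no step has run yet."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"completed_steps": [], "current_step": 1}

def record_step(state, step_name):
    """Mark a step complete and persist: the file, not the process, is the memory."""
    state["completed_steps"].append(step_name)
    state["current_step"] = len(state["completed_steps"]) + 1
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return state

state = record_step(load_state(), "requirements")
```

Because every read goes through the file, a brand-new agent instance resuming the workflow sees exactly what the previous one recorded and nothing more.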
Right-sized task decomposition. Ralph insists that each PRD item must be small enough to complete within a single context window — “Add a database column” not “Build the entire dashboard.” This project enforces the same principle at a different scale: each of the 7 main workflow steps plus Step 3.5 Governance is scoped to a single well-defined output (one requirements doc, one architecture assessment, one implementation plan), and subagents are further decomposed to atomic validation or review tasks.
AGENTS.md as compounding knowledge. Ralph treats AGENTS.md updates as critical: after each iteration the AI appends discovered patterns, gotchas, and conventions so that future iterations (and human developers) benefit. This project elevates the same pattern to a first-class system: AGENTS.md is the table of contents, skills contain deep domain knowledge, and instructions encode discovered conventions as enforceable rules. Golden Principle 7 — “Human Taste Gets Encoded” — directly mirrors Ralph’s append-only learning loop.
Feedback loops as mandatory infrastructure. Ralph only works when typecheck catches errors, tests verify behaviour, and CI stays green — otherwise broken code compounds across iterations. This project’s 28 validation scripts, pre-commit/pre-push hooks, and circuit breaker pattern serve the identical function: mechanical feedback loops that prevent error propagation across agent steps.
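The circuit breaker mentioned here can be sketched as a small stateful check; the threshold of three consecutive failures is an illustrative default, not the project’s configured value:

```python
class CircuitBreaker:
    """Halt the workflow after repeated failures instead of letting them compound."""

    def __init__(self, max_consecutive_failures=3):
        # Illustrative threshold, not the project's actual setting.
        self.max_failures = max_consecutive_failures
        self.consecutive_failures = 0

    def record(self, passed):
        """Track a validation result; any pass resets the failure streak."""
        self.consecutive_failures = 0 if passed else self.consecutive_failures + 1

    @property
    def open(self):
        """True once the breaker has tripped and agent steps must stop."""
        return self.consecutive_failures >= self.max_failures

breaker = CircuitBreaker()
for result in [True, False, False, False]:
    breaker.record(result)
```

Once `open` is true, the orchestration layer refuses to dispatch further steps, which is what stops broken output from propagating into later iterations.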
Deterministic stop conditions. Ralph exits when all user stories have passes: true. This project’s workflow engine defines explicit gate conditions: each step transition requires either human approval or an automated validation pass, and the Orchestrator agent tracks completion state in the session state file.
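The Ralph-style exit condition translates directly into code. The record shape below is an illustrative sketch of the passes: true convention, not Ralph’s or this project’s exact file format:

```python
import json

# Illustrative prd.json contents: one record per user story.
prd = json.loads("""
[
  {"story": "Add budget alert module", "passes": true},
  {"story": "Wire anomaly detection", "passes": false}
]
""")

def all_stories_pass(stories):
    """Deterministic stop condition: exit the loop only when every story passes."""
    return all(item.get("passes") is True for item in stories)
```

Because the condition is computed from a checked-in file rather than from an agent’s self-assessment, the loop’s termination is auditable and reproducible.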
How This Project Synthesises Both
Harness Engineering provides the philosophy: treat the repository as the single source of truth, encode human taste into mechanical rules, enforce invariants rather than implementations, and manage context as a scarce resource.
Ralph provides the execution model: stateless iteration loops, right-sized task decomposition, append-only learning, mandatory feedback loops, and deterministic stop conditions.
This project weaves both into a system purpose-built for Azure infrastructure:
| Concern | Harness Engineering Principle | Ralph Pattern | This Project |
|---|---|---|---|
| Knowledge management | Repo is system of record | AGENTS.md + progress.txt | Skills + instructions + agent-output/ |
| Context management | Map, not manual | Fresh context per iteration | Progressive skill loading + 3-tier compression |
| Quality enforcement | Mechanical enforcement of invariants | Mandatory CI feedback loops | Validators + pre-commit/push hooks + Copilot hooks |
| Workflow orchestration | Structured step progression | Bash loop + prd.json task list | workflow-graph.json + Orchestrator agent |
| Concurrency safety | — | Single-instance sequential loop | Serial execution (v3.0 — lock/claim removed) |
| Task decomposition | — | One context window per story | One artefact per workflow step |
| Cost optimisation | — | — | Model tier selection via Orchestrator |
| Failure resilience | — | CI-gated iteration | Failure taxonomy + stopping rules |
| Learning persistence | Human taste gets encoded | Append-only progress.txt | Skills + instructions evolve over time |
| Human control | Human taste gets encoded | Max iterations cap | 5 approval gates + challenger reviews |
| Cost governance | Enforce invariants, not implementations | — | iac-best-practices.instructions.md + adversarial checklists |
| Context efficiency | Context is scarce | Fresh context per iteration | Session Break Protocol at Gates 2 & 3 + conditional pass 3 |
Golden Principles
The system operates under 10 principles adapted from the Harness Engineering philosophy:
- Repository Is the System of Record — All context lives in-repo
- Map, Not Manual — Instructions point to deeper sources; no monolithic docs
- Enforce Invariants, Not Implementations — Set boundaries, allow autonomy within them
- Parse at Boundaries — Validate inputs and outputs at module edges
- AVM-First, Security Baseline Always — Azure Verified Modules and security defaults
- Golden Path Pattern — Shared utilities over hand-rolled helpers
- Human Taste Gets Encoded — Review feedback becomes rules, not one-off fixes
- Context Is Scarce — Every token must earn its keep
- Progressive Disclosure — Start small, drill deeper when needed
- Mechanical Enforcement Over Documentation — Linters and validators over prose
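Principle 4, Parse at Boundaries, is the most code-shaped of the ten, so a minimal sketch may help. The required fields and environments below are made-up examples, not the project’s actual schema:

```python
def parse_deployment_request(raw):
    """Validate input where it crosses a module edge, so downstream code
    can assume well-formed data. The schema here is illustrative only."""
    required = {"project", "environment", "region"}
    missing = required - raw.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if raw["environment"] not in {"dev", "test", "prod"}:
        raise ValueError(f"unknown environment: {raw['environment']}")
    # Return only the validated fields; unknown keys do not leak through.
    return {k: raw[k] for k in required}
```

Rejecting malformed input at the edge means interior modules never need defensive re-checks, which keeps the invariant enforced in exactly one place.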
References
- Harness Engineering — OpenAI’s account of building a product with zero manually written code. Read on openai.com
- Ralph — Autonomous AI agent loop based on Geoffrey Huntley’s Ralph pattern. View on GitHub
- Azure Well-Architected Framework — learn.microsoft.com
- Azure Verified Modules — aka.ms/AVM
- Azure Cloud Adoption Framework — learn.microsoft.com