
How APEX Works


A comprehensive guide to the multi-agent orchestration system for Azure platform engineering.

APEX is a multi-agent orchestration system in which specialised AI agents collaborate through a structured, multi-step workflow to transform Azure project requirements into deployed, production-grade Infrastructure as Code. The system coordinates agents and subagents through mandatory human approval gates, producing Bicep or Terraform templates that conform to Azure Well-Architected Framework principles, Azure Verified Modules standards, and organisational governance policies. The agents are supported by reusable skills, instruction files, Copilot hooks, and MCP server integrations.

The core thesis is that AI agents can reliably produce production-grade Azure infrastructure when properly orchestrated with guardrails. The system achieves this through a layered knowledge architecture (agents, skills, instructions, registries), mechanical enforcement of invariants via automated validation scripts, and a human-in-the-loop design that preserves operator control at every critical decision point. Cost governance (budget alerts, forecast notifications, anomaly detection) and template repeatability (zero hardcoded values) are enforced as first-class concerns across all generated infrastructure.

If you are new to APEX, read the docs in this order:

  1. System Architecture for the overall flow
  2. Agent Architecture for roles, handoffs, and subagents
  3. Skills & Instructions for the knowledge and rule layers
  4. Workflow Engine & Quality for gates, validation, and session state
  5. MCP Integration for external tool access and server capabilities

This project draws directly from two bodies of work that define how autonomous AI agents can operate reliably in professional software engineering contexts.


In February 2026, OpenAI published “Harness Engineering: Leveraging Codex in an Agent-First World,” describing how a small team built and shipped an internal product with zero lines of manually written code. Every line — application logic, tests, CI configuration, documentation, and internal tooling — was generated by Codex agents. The key insights that shaped this project:

Repository as the system of record. Knowledge that lives in Google Docs, chat threads, or people’s heads is invisible to agents. Only versioned, in-repo artifacts — code, markdown, schemas, execution plans — exist from the agent’s perspective. This project implements this principle by storing all agent outputs in agent-output/{project}/, all conventions in skills and instructions, and all decisions in Architecture Decision Records.

Map, not manual. OpenAI initially tried a monolithic AGENTS.md approach and found it failed: context is a scarce resource, and a giant instruction file crowds out the task. Instead, they treat AGENTS.md as a table of contents that points to deeper sources. This project adopts the same pattern: AGENTS.md is approximately 250 lines and points to skills, instruction files, and multiple configuration registries.

Enforce invariants, not implementations. Rather than prescribing step-by-step procedures, the Harness Engineering approach encodes strict boundaries (architectural layering rules, naming conventions, security requirements) and lets agents choose their own path within those constraints. This project enforces invariants mechanically: validation scripts check naming conventions, template compliance, governance references, and architectural rules.
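As a sketch of what mechanical enforcement can look like, the validator below rejects Bicep name parameters whose defaults break a naming convention. The regexes, parameter pattern, and file layout are illustrative assumptions, not APEX's actual rules:

```python
import re
from pathlib import Path

# Hypothetical invariant: string name parameters follow <prefix>-<workload>-<env>.
# The regexes and file layout here are illustrative, not APEX's real conventions.
NAME_PATTERN = re.compile(r"^[a-z]{2,5}-[a-z0-9]+-(dev|test|prod)$")
PARAM_RE = re.compile(r"param\s+\w*[Nn]ame\s+string\s*=\s*'([^']*)'")

def check_file(path: Path) -> list[str]:
    """Return one violation message per non-conforming default name."""
    violations = []
    for line_no, line in enumerate(path.read_text().splitlines(), start=1):
        m = PARAM_RE.search(line)
        if m and not NAME_PATTERN.match(m.group(1)):
            violations.append(f"{path.name}:{line_no}: bad name '{m.group(1)}'")
    return violations

def check_repo(root: Path) -> int:
    """Scan every Bicep file; a non-zero return code fails the hook or pipeline."""
    problems = [v for f in sorted(root.rglob("*.bicep")) for v in check_file(f)]
    for problem in problems:
        print(problem)
    return 1 if problems else 0
```

Because the check encodes only the boundary (the naming pattern), an agent remains free to choose any conforming name — the invariant is enforced, not the implementation.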

Human taste gets encoded. When a human reviewer catches a pattern issue, the fix is not to patch the output — it is to update the instruction or skill that should have prevented the issue. Over time, human judgment compounds in the system as linter rules, templates, and skill updates.

Garbage collection through continuous enforcement. Technical debt in an agent-generated system accumulates the same way it does in human-generated systems, but faster. The Harness Engineering approach runs recurring agents that scan for deviations and open targeted refactoring pull requests. This project implements a quarterly context audit checklist and weekly documentation freshness checks.

Ralph is an autonomous AI agent loop (12k+ GitHub stars) based on Geoffrey Huntley’s Ralph pattern. It spawns fresh AI coding tool instances (Amp or Claude Code) in a bash loop, picking off PRD user stories one at a time until all items pass. Key concepts adopted from Ralph:

Fresh-context iteration model. Each Ralph iteration spawns a brand-new AI instance with zero carry-over context. The only memory between iterations is git history, a progress.txt append-only learning log, and a prd.json task list. This project adopts the same philosophy through its apex-recall CLI skill: each agent step is stateless, and all memory persists through versioned artefact files in agent-output/{project}/ and the machine-readable 00-session-state.json.
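A minimal sketch of this stateless-step pattern, assuming a simplified schema for the session state file — the path, project name, and field names below are illustrative, not APEX's actual 00-session-state.json format:

```python
import json
from pathlib import Path

# Illustrative only: the project name and field names are assumptions,
# not the actual schema of APEX's 00-session-state.json.
STATE_FILE = Path("agent-output/sample-project/00-session-state.json")

def load_state() -> dict:
    """All the memory a fresh agent step receives: the versioned state file."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"project": "sample-project", "current_step": 1, "completed_steps": []}

def complete_step(state: dict, step: int, artefact: str) -> dict:
    """Record a finished step so the next zero-context agent can resume."""
    state["completed_steps"].append({"step": step, "artefact": artefact})
    state["current_step"] = step + 1
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return state
```

Each step reads the file, does its work, and writes the file back; nothing else survives between steps, which is exactly what makes a crashed or restarted agent safe to resume.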

Right-sized task decomposition. Ralph insists that each PRD item must be small enough to complete within a single context window — “Add a database column” not “Build the entire dashboard.” This project enforces the same principle at a different scale: each of the 7 main workflow steps plus Step 3.5 Governance is scoped to a single well-defined output (one requirements doc, one architecture assessment, one implementation plan), and subagents are further decomposed to atomic validation or review tasks.

AGENTS.md as compounding knowledge. Ralph treats AGENTS.md updates as critical: after each iteration the AI appends discovered patterns, gotchas, and conventions so that future iterations (and human developers) benefit. This project elevates the same pattern to a first-class system: AGENTS.md is the table of contents, skills contain deep domain knowledge, and instructions encode discovered conventions as enforceable rules. Golden Principle 7 — “Human Taste Gets Encoded” — directly mirrors Ralph’s append-only learning loop.

Feedback loops as mandatory infrastructure. Ralph only works when typecheck catches errors, tests verify behaviour, and CI stays green — otherwise broken code compounds across iterations. This project’s 28 validation scripts, pre-commit/pre-push hooks, and circuit breaker pattern serve the identical function: mechanical feedback loops that prevent error propagation across agent steps.
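The circuit breaker idea can be sketched in a few lines: after a run of consecutive validation failures, the workflow halts and escalates rather than retrying forever. The threshold and class API below are assumptions for illustration, not APEX's actual implementation:

```python
# Sketch of a circuit breaker between agent steps: after a fixed number of
# consecutive validation failures the workflow halts and escalates to a
# human instead of retrying forever. Threshold and names are assumptions.
class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False  # open circuit = workflow halted for review

    def record(self, validation_passed: bool) -> None:
        if validation_passed:
            self.failures = 0           # any green run resets the counter
        elif not self.open:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True        # stop retrying; escalate to operator

breaker = CircuitBreaker()
for passed in [True, False, False, False]:  # e.g. validation script results
    breaker.record(passed)
print(breaker.open)  # prints True: three consecutive failures tripped it
```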

Deterministic stop conditions. Ralph exits when all user stories have passes: true. This project’s workflow engine defines explicit gate conditions: each step transition requires either human approval or automated validation pass, and the Orchestrator agent tracks completion state in the session state file.
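A Ralph-style deterministic stop condition might be sketched as follows, assuming a simplified prd.json shape with a list of stories; the schema, function names, and safety cap are illustrative:

```python
import json
from pathlib import Path

# Sketch of a Ralph-style stop condition: loop until every PRD item has
# passes == True, with a hard iteration cap. The prd.json shape is assumed.
def all_stories_pass(prd_path: Path) -> bool:
    stories = json.loads(prd_path.read_text())["stories"]
    return all(story.get("passes") for story in stories)

def run_loop(prd_path: Path, run_iteration, max_iterations: int = 50) -> int:
    """Spawn fresh iterations until everything is green or the cap is hit."""
    for i in range(1, max_iterations + 1):
        if all_stories_pass(prd_path):
            return i - 1          # deterministic exit: all stories pass
        run_iteration()           # fresh-context agent works one story
    return max_iterations
```

The exit test is purely mechanical — a predicate over a versioned file — so the loop cannot be talked out of stopping, and the iteration cap preserves human control if progress stalls.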

Harness Engineering provides the philosophy: treat the repository as the single source of truth, encode human taste into mechanical rules, enforce invariants rather than implementations, and manage context as a scarce resource.

Ralph provides the execution model: stateless iteration loops, right-sized task decomposition, append-only learning, mandatory feedback loops, and deterministic stop conditions.

This project weaves both into a system purpose-built for Azure infrastructure:

| Concern | Harness Engineering Principle | Ralph Pattern | This Project |
| --- | --- | --- | --- |
| Knowledge management | Repo is system of record | AGENTS.md + progress.txt | Skills + instructions + agent-output/ |
| Context management | Map, not manual | Fresh context per iteration | Progressive skill loading + 3-tier compression |
| Quality enforcement | Mechanical enforcement of invariants | Mandatory CI feedback loops | Validators + pre-commit/push hooks + Copilot hooks |
| Workflow orchestration | Structured step progression | Bash loop + prd.json task list | workflow-graph.json + Orchestrator agent |
| Concurrency safety | | Single-instance sequential loop | Serial execution (v3.0, lock/claim removed) |
| Task decomposition | | One context window per story | One artefact per workflow step |
| Cost optimisation | | | Model tier selection via Orchestrator |
| Failure resilience | | CI-gated iteration | Failure taxonomy + stopping rules |
| Learning persistence | Human taste gets encoded | Append-only progress.txt | Skills + instructions evolve over time |
| Human control | Human taste gets encoded | Max iterations cap | 5 approval gates + challenger reviews |
| Cost governance | Enforce invariants, not implementations | | iac-best-practices.instructions.md + adversarial checklists |
| Context efficiency | Context is scarce | Fresh context per iteration | Session Break Protocol at Gates 2 & 3 + conditional pass 3 |

The system operates under 10 principles adapted from the Harness Engineering philosophy:

  1. Repository Is the System of Record — All context lives in-repo
  2. Map, Not Manual — Instructions point to deeper sources; no monolithic docs
  3. Enforce Invariants, Not Implementations — Set boundaries, allow autonomy within them
  4. Parse at Boundaries — Validate inputs and outputs at module edges
  5. AVM-First, Security Baseline Always — Azure Verified Modules and security defaults
  6. Golden Path Pattern — Shared utilities over hand-rolled helpers
  7. Human Taste Gets Encoded — Review feedback becomes rules, not one-off fixes
  8. Context Is Scarce — Every token must earn its keep
  9. Progressive Disclosure — Start small, drill deeper when needed
  10. Mechanical Enforcement Over Documentation — Linters and validators over prose
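As an illustration of Principle 4 (Parse at Boundaries), a boundary check might validate a handoff artefact before the next agent consumes it. The field names and handoff shape below are hypothetical, not APEX's actual schema:

```python
import json

# Illustrative "parse at boundaries" sketch: validate a handoff artefact at
# the step edge before the next agent consumes it. Field names are assumed.
REQUIRED_FIELDS = {"project", "step", "approved_by"}

def parse_handoff(raw: str) -> dict:
    """Reject malformed handoffs at the boundary, not deep inside a step."""
    data = json.loads(raw)  # fail fast on invalid JSON
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"handoff missing fields: {sorted(missing)}")
    if not isinstance(data["step"], int):
        raise TypeError("step must be an integer")
    return data
```

Parsing at the edge means a malformed handoff fails loudly at the step transition, where the gate and session state make the failure easy to attribute, instead of surfacing as a confusing error several steps later.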