16  Writing a ~68,000-Word Book with Agent Fleets

Scope: 15 chapters, ~68,000 words — full technical book on AI-native software development
Duration: Single extended Copilot CLI session
Theme: Whether code orchestration patterns transfer to editorial composition

Writing a ~68,000-Word Book with the Agentic SDLC — using the methodology the book itself teaches.

Tip: Key Takeaways — The TL;DR
  • The same primitives that orchestrate code changes — personas, skills, file-scoped rules — orchestrate prose at book scale.
  • Wave ordering, checkpoint discipline, and context budgeting matter more than model capability.
  • Dynamic persona creation mid-project beats freezing the team at inception — three of eleven personas were created in response to gaps the process surfaced.
  • When agents unanimously agree, a single human override can still be the correct call (the apm compile veto).
  • Composition is load-bearing: 15 chapters required 50+ dispatches, 4 waves, and 2 integration passes. No single agent could hold the full manuscript.

16.1 The Meta-Narrative

This is a case study about a book that was written using the process it describes. The Agentic SDLC Handbook is a 15-chapter, ~68,000-word technical book on AI-native software development. It was drafted and composed through agentic orchestration in a single Copilot CLI session. The human author — Daniel Meppiel, Global Black Belt at Microsoft and creator of APM — provided domain expertise, intellectual property context, and editorial direction. Copilot CLI managed the agent team.

The agent team was packaged and distributed using APM itself: 8 persona definitions as .agent.md files, an orchestration workflow as SKILL.md, declared as a dependency in apm.yml, and deployed via apm install. APM was used to build the book that teaches about APM.

This case study maps the execution to the handbook’s own vocabulary (Chapter 14) and tests its structural properties (Chapter 15) under real conditions.


16.2 Team Topology: 11 Personas, Four Pods

The orchestrator designed an 8-persona expert panel at project inception, then dynamically created 3 more as needs emerged during execution. The personas organized into four functional pods:

```mermaid
graph TB
    subgraph Editorial["Editorial Pod"]
        CE[Chief Editor<br/>Coherence]
        TLA[Thought Leadership<br/>Positioning]
    end

    subgraph Domain["Domain Expert Pod"]
        CS[C-Suite Strategist<br/>Executive lens]
        PA[Practitioner Authority<br/>Eng credibility]
        MA[Market Analyst<br/>Landscape]
        PS[Platform Strategist<br/>Architecture]
    end

    subgraph Review["Review Pod"]
        CTO[CTO Proxy<br/>Buy for my team?]
        DL[Dev Lead Proxy<br/>Engineers use this?]
    end

    subgraph Audit["Audit Pod — Dynamic"]
        IL[Illustrator<br/>Visual strategy]
        FC[Fact-Ref-Checker<br/>Claims audit]
        PB[Publishing Advisor<br/>Distribution]
    end

    CE --> CTO
    CE --> DL
    CE --> IL
    CE --> FC
```
Figure 16.1: Agent team topology: 11 personas organized in four pods

Each persona was a primitive — a composable instruction unit defined in a single .agent.md file. The orchestrator never prompted agents as generic LLMs; every dispatch carried persona context, scope boundaries, and output format requirements.
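As a rough sketch of that dispatch discipline (the `Dispatch` type and its fields are hypothetical illustrations, not Copilot CLI's actual interface), every dispatch bundles persona, scope, and output contract rather than a bare prompt:

```python
from dataclasses import dataclass

@dataclass
class Dispatch:
    """A single agent dispatch: persona + scope + output contract.

    Hypothetical structure for illustration; not a real Copilot CLI API.
    """
    persona: str        # contents of the .agent.md file
    scope: str          # boundaries: what the agent may and may not touch
    output_format: str  # what a valid response must look like

    def to_prompt(self) -> str:
        # All three parts travel with every dispatch; there are no
        # "generic LLM" prompts in the pipeline described above.
        return (
            f"{self.persona}\n\n"
            f"SCOPE: {self.scope}\n"
            f"OUTPUT FORMAT: {self.output_format}"
        )

d = Dispatch(
    persona="You are the CTO Proxy. You ask: would I buy this for my team?",
    scope="Review Chapter 3 only; do not propose rewrites outside it.",
    output_format="Verdict (APPROVE/REVISE) plus a numbered fix list.",
)
prompt = d.to_prompt()
```

The point of the structure is that a dispatch missing any of the three fields fails to construct, which is exactly the discipline the orchestrator enforced.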


16.3 Execution Timeline: Four Waves Plus Integration

The manuscript was produced in a phased pipeline: corpus audit → architecture → four writing waves → integration review → polish. Each wave followed checkpoint discipline: draft, review, revise, commit.

```mermaid
gantt
    title Manuscript Production Pipeline
    dateFormat X
    axisFormat %s

    section Phase 0
    Corpus Audit (4 agents)                  :done, p0, 0, 1
    Editor Synthesis                          :done, p0s, 1, 2

    section Phase 1
    Arch Attempt 1 (FAIL)                    :crit, p1f, 2, 3
    Arch Split (2 agents)                    :done, p1, 3, 4

    section Wave 0
    Ch1 Draft                                :done, w0d, 4, 5
    3 Reviews                                :done, w0r, 5, 6
    Revision                                 :done, w0v, 6, 7

    section Wave 1
    4 Drafts                                 :done, w1d, 7, 8
    4 Reviews                                :done, w1r, 8, 9
    4 Revisions                              :done, w1v, 9, 10

    section Wave 2
    5 Drafts                                 :done, w2d, 10, 11
    2 Reviews                                :done, w2r, 11, 12
    5 Revisions                              :done, w2v, 12, 13

    section Wave 3
    5 Drafts                                 :done, w3d, 13, 14
    2 Reviews                                :done, w3r, 14, 15
    2 Revisions                              :done, w3v, 15, 16

    section Integration
    Block 1+2 Reviews                        :done, ir, 16, 17
    2 Fix Agents                             :done, fix, 17, 18
    Polish + README                          :done, pol, 18, 19
```
Figure 16.2: Manuscript production pipeline: four writing waves plus integration

Wave ordering was deliberate. Wave 0 tested the pipeline with a single chapter. Wave 1 targeted chapters with the most source material (lowest risk). Wave 2 took on the hardest chapters requiring fresh writing. Wave 3 handled integration chapters that needed cross-references to earlier work. This is context budgeting — right-sizing each wave’s scope to what the agents could handle with available context.
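The wave-assignment heuristic described above can be sketched as a simple scheduling rule. The chapter attributes below are illustrative, not the book's actual planning data:

```python
# Sketch of the wave-ordering heuristic: pilot first, rich-source
# chapters next, fresh writing after that, integration chapters last.
# Chapter numbers and attributes are invented for illustration.
chapters = {
    1:  {"source_material": "rich", "needs_crossrefs": False},
    2:  {"source_material": "rich", "needs_crossrefs": False},
    7:  {"source_material": "thin", "needs_crossrefs": False},
    14: {"source_material": "thin", "needs_crossrefs": True},
}

def wave_for(num: int, attrs: dict) -> int:
    if num == 1:
        return 0  # Wave 0: single pilot chapter to test the pipeline
    if attrs["needs_crossrefs"]:
        return 3  # Wave 3: integration chapters reference earlier work
    if attrs["source_material"] == "rich":
        return 1  # Wave 1: most source material, lowest risk
    return 2      # Wave 2: hardest chapters, fresh writing

waves = {n: wave_for(n, a) for n, a in chapters.items()}
```

Encoding the heuristic this way makes the context-budgeting claim concrete: each wave's scope is a function of how much supporting material an agent can draw on, not of model capability.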


16.4 The Draft→Review→Revise Cycle

Every chapter passed through an identical three-stage pipeline. This is the core quality loop the book describes in Chapter 13.

```mermaid
sequenceDiagram
    participant O as Orchestrator
    participant D as Draft Agent (Domain Specialist)
    participant R1 as CTO Proxy
    participant R2 as Dev Lead Proxy
    participant R3 as Chief Editor
    participant V as Revision Agent

    O->>D: Dispatch: Write chapter N<br/>(architecture spec + source material)
    D-->>O: Chapter draft (3,000-5,000 words)

    par Parallel Review
        O->>R1: Review for executive audience
        O->>R2: Review for practitioner audience
        O->>R3: Review for coherence + voice
    end

    R1-->>O: REVISE verdict + fixes
    R2-->>O: REVISE verdict + fixes
    R3-->>O: REVISE verdict + fixes

    O->>O: Synthesize consensus fixes
    O->>V: Dispatch: Apply N fixes to chapter
    V-->>O: Revised chapter
    O->>O: Checkpoint: commit draft + revision
```
Figure 16.3: The draft, review, revise cycle for each chapter

In later waves, the orchestrator batched reviews by persona rather than by chapter — sending one reviewer all 5 chapters at once rather than dispatching 15 separate reviews. This reduced dispatch overhead without sacrificing coverage, a practical application of reduced scope at the orchestration level.
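The saving is easy to quantify: with five chapters and three reviewers, per-chapter dispatch costs 15 review calls, while per-persona batching costs 3. A sketch, with hypothetical chapter and reviewer names:

```python
# Illustrative names; the actual wave contained 5 chapters and a
# varying reviewer roster.
chapters = ["ch11", "ch12", "ch13", "ch14", "ch15"]
reviewers = ["cto_proxy", "dev_lead_proxy", "chief_editor"]

# Early waves: one dispatch per (reviewer, chapter) pair.
per_chapter = [(r, c) for r in reviewers for c in chapters]

# Later waves: one dispatch per reviewer, carrying all chapters at once.
per_persona = [(r, chapters) for r in reviewers]

# 15 dispatches versus 3, with identical review coverage.
```

Coverage is unchanged because every reviewer still sees every chapter; only the dispatch granularity moves.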


16.5 Dynamic Persona Creation

Three of the eleven personas did not exist at project inception. They were created in response to gaps the process itself surfaced.

```mermaid
flowchart TD
    A[Integration Review<br/>Editor reads all 15 chapters] --> B{Visual anchors<br/>missing?}
    B -->|Yes| C[Create Illustrator Agent<br/>Visual Strategist persona]
    C --> D[Fleet of 2: Block 1 + Block 2<br/>40 visual opportunities<br/>25 Mermaid diagrams embedded]

    A --> E{Inconsistent statistics<br/>across chapters?}
    E -->|Yes| F[Create Fact-Ref-Checker<br/>Claims Auditor persona]
    F --> G[Fleet of 2: 75 flags found<br/>5 critical incl. PR-394<br/>statistic contradiction]

    H[Question: How to publish?] --> I{Publishing expertise<br/>needed?}
    I -->|Yes| J[Create Publishing Advisor<br/>5 paths evaluated]
    J --> K[Recommended Open Core +<br/>Premium model]
```

Figure 16.4: Dynamic persona creation in response to process gaps

This validates the handbook’s claim that primitives should be created and iterated throughout a project, not frozen at inception. Anti-pattern #10 (Not Fixing the Primitives) warns against correcting symptoms manually instead of updating the instruction set. Each dynamic persona was the fix — a new primitive that addressed a structural gap rather than a one-off patch.

Tip: Try This: Dynamic Persona Creation

When the process surfaces a gap, create a new persona rather than patching with ad-hoc prompts. Here is the pattern used for the Fact-Ref-Checker:

```markdown
# .github/agents/fact-ref-checker.agent.md
---
name: Fact-Ref-Checker
role: Claims Auditor
---
You audit manuscripts for factual accuracy. For every claim:
1. Is it sourced? Flag unsourced statistics.
2. Is it consistent? Cross-reference numbers across chapters.
3. Is it falsifiable? Flag unfalsifiable superlatives.

Output: severity-ranked findings (CRITICAL / HIGH / MEDIUM).
```

The key: define the persona’s scope and output format explicitly. Generic “review this” dispatches produce generic output.
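One reason the output contract matters: severity-ranked findings let the orchestrator triage mechanically instead of re-reading every flag. A sketch of that downstream triage (the findings below are invented for illustration):

```python
# Hypothetical findings in the Fact-Ref-Checker's declared output
# format; the severity ranking is what makes the contract actionable.
SEVERITY_ORDER = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2}

findings = [
    {"severity": "MEDIUM",   "claim": "Unsourced adoption statistic"},
    {"severity": "CRITICAL", "claim": "Chapters 4 and 9 cite different figures"},
    {"severity": "HIGH",     "claim": "Unfalsifiable superlative"},
]

# Sort so critical items surface first, then slice off the ones that
# must be fixed before the next checkpoint.
triaged = sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])
critical = [f for f in triaged if f["severity"] == "CRITICAL"]
```

With a free-text "review this" contract, none of this filtering is possible without another round of interpretation.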


16.6 Escalation Events and Anti-Pattern Mapping

Four escalation events interrupted normal flow. Each maps to a specific anti-pattern from Chapter 14.

16.6.1 Architecture Agent Timeout

What happened: A single agent was dispatched to produce the full 15-chapter architecture. It failed after 26 minutes — a connection error caused by context window exhaustion.

Anti-pattern: #11 (Context Window Exhaustion) compounded by #6 (The Solo Hero). The orchestrator assigned monolithic scope to one agent, violating the “one file, one agent per wave” isolation principle.

Resolution: Split into two parallel agents — Part 1 (Chapters 1–8), Part 2 (Chapters 9–15 plus cross-cutting concerns). Both completed successfully, producing 1,020 lines of chapter specifications.

What held true: Context will remain finite and fragile. Regardless of model capability, there is always a limit to how much an agent can consider effectively.

16.6.2 Chief Editor Synthesis Scope

What happened: After four corpus audit specialists completed their parallel scans, their outputs needed cross-cutting synthesis that no individual specialist could produce — each had domain-specific findings but none had the full picture.

Anti-pattern: #2 (Context Dumping) — the temptation was to dump all four audit outputs into one agent’s context window. Instead, the orchestrator escalated to the Chief Editor persona, whose explicit role was cross-chapter coherence.

Resolution: Chief Editor ran as synthesizer, producing 7 consensus themes, 6 resolved tensions, 10 cuts, and 9 identified gaps. The persona’s scope matched the task.

What held true: Composition will remain necessary. Complex analysis required coordination across specialists and structured integration of results.

16.6.3 User Vetoed apm compile

What happened: During the APM strategic insertion phase, the orchestrator prepared six surgical insertions. The human author intervened: “Do not mention apm compile — niche feature.” All six insertions were adapted to respect the constraint.

This was not persona drift or agent failure — every agent was following its persona correctly. The user exercised editorial authority, overriding correct-but-misaligned agent output. This is the Architect role in practice: human judgment as the final arbiter of what the book should and should not say.

What held true: Human judgment remains both the bottleneck and the differentiator. The scarce resource was not token generation but the editorial authority to draw those boundaries.

16.6.4 Panel Disagreement on APM Prominence

What happened: After the “zero APM mentions in ~68,000 words” discovery, the Market Analyst wanted more visibility; the Thought Leadership Advisor wanted less. The panel genuinely disagreed.

This demonstrated the value of explicit governing principles as tie-breakers. The Chief Editor resolved the disagreement with a single principle: “The book is 100% useful without APM. APM appears as proof, not prerequisite.”

Resolution: 6 surgical insertions, 5 name-mentions, approximately 475 words across 5 chapters. The lightest possible touch.

What held true: Explicit knowledge is more valuable than implicit knowledge. The governing principle, once articulated, made every subsequent decision mechanical.


A book about AI-native software development, written by APM’s creator, using APM’s orchestration infrastructure, contained zero mentions of APM across ~68,000 words. This was not a bug — it was the methodology working correctly. Each agent was dispatched with chapter-specific scope; no agent had “promote the author’s project” in its persona. The absence proved the review process was honest and prompted the four-wave strategic assessment that produced the surgical insertions above.


16.7 The Authenticity Question

A separate three-expert panel (HN Skeptic, Thought Leadership Strategist, Meta-Authenticity Analyst) evaluated the risk of openly documenting the AI-assisted pipeline. The consensus: net positive. The key test:

“Could this person have written a credible book without AI? If yes, AI reads as methodology demonstration.”

The framing rule: “built using the same methodology it teaches,” never “AI-written.” Expected risk: some initial skepticism from readers who dismiss anything AI-assisted, but engaged readers find the transparency more impressive than the alternative.

The README was rewritten to showcase the pipeline: an 11-agent team table, a 5-stage pipeline diagram, and links to review artifacts. Explicit knowledge over implicit knowledge — a property that held true here too. Hiding the process would contradict the book’s thesis.


16.8 What Held True

The five structural properties held under editorial conditions (see the APM Overhaul case study for the full treatment). The novel finding: composition-level orchestration — coordinating agents that write prose, not code — scaled without modification to the wave model.


Note: Metrics

~68,000 words · 15 chapters · 11 personas (8 + 3 dynamic) · 50+ dispatches · 4 writing waves + integration + polish · 25 Mermaid diagrams · 75 fact-check flags (5 critical) · 40 visual opportunities · 5 APM mentions (~475 words) · 4 escalation events · 5/5 structural properties tested


CC BY-NC-ND 4.0 © 2025-2026 Daniel Meppiel

Free to read and share with attribution.