18 Building a Growth Engine
Scope: Checkpoints 014-019 – 6 phases – 18 implementation tasks
Duration: ~8 hours wall-clock across multiple sessions
Theme: What happens when PROSE methodology meets non-engineering domains – and the pivots, platform limitations, and policy constraints that emerge

```mermaid
gantt
    title Growth Engine Build Phases
    dateFormat X
    axisFormat %s
    section Strategy
    Expert panels + user override :done, s1, 0, 3
    section Kit Automation
    Playwright scripts :crit, k1, 3, 5
    Playwright MCP server :crit, k2, 5, 6
    Kit API :crit, k3, 6, 7
    Human escalation :done, k4, 7, 8
    section Implementation
    Landing page + email infra :done, l1, 8, 11
    Bio correction (user override) :crit, l2, 11, 12
    section Security & Launch
    PII audit (4 parallel agents) :done, a1, 12, 14
    History scrub + remediation :done, a2, 14, 15
```
- When three automated approaches each hit a genuine platform limitation, escalating to a human with a precise checklist is the correct move – not a fourth attempt.
- A single factual error (wrong job title) cascades into every downstream deliverable – catch persona drift early.
- Organizational policy awareness lives nowhere in training data; human judgment remains the irreplaceable input.
- Pivots are normal: the plan changed repeatedly, and the orchestration protocol absorbed each one.
18.1 What Was Built
A growth engine for a free technical book – email capture, landing page, DNS and email infrastructure, PII audit, and launch preparation. The interesting story is not what got built, but how many times the plan changed and where the methodology hit its limits.
18.2 Panel Strategy
Expert panels drove the strategic decisions. Across six phases, the orchestrator convened panels with fifteen domain-specific personas (publishing strategy, growth, branding, LinkedIn, security) that produced synthesized recommendations through moderator agents. Early panels over-indexed on career coaching, prompting a user override: “Focus on the growth engine – brand, followers, industry gravitas, tactical distribution.” The orchestrator reframed the brief and produced an 18-task implementation plan. This is the Architect role from Chapter 13 – scope correction when agent output drifts from the actual objective.
18.3 Timeline
The gantt chart at the top of this chapter maps the build on a single axis – strategy panels, the three Kit automation attempts, implementation, and the security work that preceded launch.
18.4 The Kit Automation Escalation
The most instructive failure sequence was the Kit email platform automation – three distinct approaches, each hitting a different wall.
```mermaid
sequenceDiagram
    participant O as Orchestrator
    participant PW as Playwright Scripts
    participant MCP as Playwright MCP
    participant API as Kit V3 API
    participant H as Human
    O->>PW: Automate form creation + tagging
    PW-->>O: Login, tags, form creation succeeded
    PW-->>O: FAIL: React textarea -- .fill(), Meta+a, JS setter all fail
    PW-->>O: FAIL: React combobox -- not a native select
    O->>MCP: Try MCP browser control
    MCP-->>O: navigate, click, snapshot work
    MCP-->>O: FAIL: browser_run_code -- SyntaxError on all inputs
    MCP-->>O: FAIL: Still cannot solve React combobox
    O->>API: Try Kit REST API directly
    API-->>O: List forms and tags succeeded
    API-->>O: FAIL: /v3/automations returns 404 (not exposed)
    O->>H: Escalate with 3-step manual checklist
    H-->>O: Completed in 2 minutes
```
What happened: Playwright scripts handled login, tag creation, form creation, and even form publishing. But Kit’s form builder uses React-controlled components – the textarea and the combobox dropdowns are not native HTML elements. Standard automation moves (Playwright’s locator.fill(), keyboard shortcuts, JavaScript value setters) all failed because a controlled component renders from React state: a direct DOM write never updates that state, so React’s next reconciliation pass discards the mutation.
The MCP server added browser_snapshot and browser_take_screenshot capabilities but couldn’t solve the fundamental React internals problem. The Kit V3 API could read forms and tags but returned 404 on automation rule endpoints – they simply aren’t exposed.
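Probing an API surface like this reduces to recording which endpoints answer and which return 404. A minimal sketch – the dictionary of probe results is hypothetical, not Kit’s actual responses:

```python
def probe_report(results: dict[str, int]) -> tuple[list[str], list[str]]:
    """Split probed endpoints into exposed vs. unexposed.

    `results` maps an endpoint path to the HTTP status it returned.
    Anything under 400 counts as exposed; a 404 means the platform
    simply does not offer that capability over its API.
    """
    exposed = sorted(e for e, s in results.items() if s < 400)
    unexposed = sorted(e for e, s in results.items() if s == 404)
    return exposed, unexposed


# Hypothetical probe of the Kit V3 surface described above:
exposed, unexposed = probe_report(
    {"/v3/forms": 200, "/v3/tags": 200, "/v3/automations": 404}
)
```

The split makes the escalation case explicit: everything readable was read; the one capability the task needed was the one the platform withheld.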
The automation failed. Three approaches, three walls. The orchestrator produced a 3-step manual checklist and the human completed it in two minutes. The methodology’s value was not in automating this task – it was in recognizing when to stop trying. Each approach was trusted on the strength of partial success before the fundamental constraint – React state rejects external DOM mutations – had been validated. The discipline was stopping after three genuine platform limitations instead of attempting a fourth DOM hack.
When multiple automated approaches hit platform limitations, produce a precise manual checklist instead of attempting a fourth approach:
Kit Form Automation – Human Escalation Checklist:
- Open Kit form editor > select the confirmation message textarea
- Paste: “Check your email to confirm your subscription”
- In the automation dropdown, select “Add subscriber to sequence: Welcome”
Three automated approaches failed on React internals. The human completed this in 2 minutes. The discipline: recognize when the last 10% is a platform limitation, not an application logic problem.
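The stop-after-three discipline can be written down as a policy. A sketch under stated assumptions – the attempt records, names, and threshold below are illustrative, not the orchestrator’s actual data model:

```python
from dataclasses import dataclass


@dataclass
class Attempt:
    approach: str               # e.g. "playwright", "mcp", "api"
    platform_limitation: bool   # True when the wall is the platform itself


def next_action(attempts: list[Attempt], max_walls: int = 3) -> str:
    """Escalate once enough *distinct* approaches hit genuine platform
    limitations; until then, keep trying different automated routes."""
    walls = {a.approach for a in attempts if a.platform_limitation}
    if len(walls) >= max_walls:
        return "escalate: hand the human a precise manual checklist"
    return "automate: try a different approach"


history = [
    Attempt("playwright", True),   # React textarea/combobox
    Attempt("mcp", True),          # browser_run_code SyntaxError
    Attempt("api", True),          # /v3/automations 404
]
```

With all three walls recorded, `next_action(history)` returns the escalation branch – the fourth DOM hack never happens.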
18.5 The Persona Drift Correction
During landing page work, the orchestrator’s authority profile described the author as a “software engineer.” The user corrected immediately: “I am NOT a Software Engineer. I’m a Global Black Belt with 14+ years enterprise strategy.”
This is Anti-Pattern #17 – Persona Drift. The agents had latched onto the most common tech-industry persona in their training data rather than the actual role described in the source documents. The correction propagated across all files – landing page, preface, chapter bios, panel briefs.
It also surfaced Anti-Pattern #7 – The Trust Fall. The authority profile had been generated in an earlier wave and accepted without human verification. A single factual error – job title – cascaded into every downstream deliverable that referenced the author’s credentials. The fix was cheap (find-and-replace across files), but only because it was caught early. Had the book shipped with “software engineer” in the bio, the credibility damage would have been real.
18.6 The PII Audit Pipeline
Four parallel agents scanned the repository for sensitive data. Three completed normally, surfacing findings across 25+ files. One agent refused to scan: the career directory contained personal documents, and the agent’s safety guardrails declined to process content it classified as sensitive personal information. The guardrail was correct in principle (the files were sensitive), but the task was finding sensitive content in order to remove it – an edge case where the safety boundary and the task objective were aligned, yet the agent could not distinguish audit-to-remove from audit-to-exploit. The orchestrator adapted with manual grep analysis.
```mermaid
flowchart LR
    subgraph "Dispatch -- 4 Parallel Agents"
        A1[Agent 1: career/]
        A2[Agent 2: reviews/]
        A3[Agent 3: audits + agents/]
        A4[Agent 4: config files]
    end
    A1 -->|"REFUSED -- safety guardrails"| F1[Manual grep analysis]
    A2 & A3 & A4 -->|findings| R[Remediation Plan]
    F1 --> R
    R --> G[git-filter-repo history scrub]
    G --> V[Clean history, originals preserved locally]
```
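The manual grep fallback boils down to a handful of patterns run over every file. A minimal sketch – the two patterns below are illustrative and fall far short of a real PII taxonomy:

```python
import re

# Illustrative categories only; a real audit covers far more.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}


def scan_text(text: str) -> dict[str, list[str]]:
    """Return every match, grouped by PII category, found in `text`."""
    return {
        label: hits
        for label, pat in PII_PATTERNS.items()
        if (hits := pat.findall(text))
    }
```

Unlike an agent, a regex has no judgment to exercise – which is exactly why it makes a safe fallback when a guardrail blocks the audit path.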
18.7 Constraint Discovery
Two constraints emerged that no panel anticipated:
Email infrastructure: The domain registrar had silently discontinued free email forwarding. The orchestrator discovered this during DNS setup, pivoted to ImprovMX (free tier), configured MX records, added SPF and DKIM entries, and verified delivery.
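The resulting records look roughly like the zone fragment below. The MX and SPF values follow ImprovMX’s published setup instructions; the DKIM line is a placeholder shape only, since the selector and key are issued by the sending provider:

```
; ImprovMX forwarding -- illustrative zone fragment
@    MX    10   mx1.improvmx.com.
@    MX    20   mx2.improvmx.com.
@    TXT   "v=spf1 include:spf.improvmx.com ~all"
; DKIM: selector and public key are provider-issued -- placeholder only
<selector>._domainkey    TXT   "v=DKIM1; k=rsa; p=<provider-issued-key>"
```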
CELA risk – the novel finding. A PR adding a handbook link to the APM README (a Microsoft org repository) was identified by the user as a potential compliance risk: personal lead generation via a corporate open-source asset. The PR was closed and a new constraint was recorded – no promotional links from microsoft/ org repos to personal content. This is the Architect role exercising judgment no agent could have. No panel suggested the constraint; no agent flagged it. The entire deliberation architecture – fifteen personas across seven panels – missed it, because organizational policy lives nowhere in training data and cannot be inferred from public documentation.
Agents also needed specific artifacts in context to be useful. The first two landing page agents failed – one stalled for eight minutes, the other produced generic copy – because they lacked the actual HTML. The third attempt embedded the full page source and immediately produced field-by-field rewrites. The lesson: progressive disclosure means providing the right context, not the least context.
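The right-context rule is mechanical to apply: embed the artifact the agent must edit, verbatim, in its brief. A minimal sketch – the brief format and function name are hypothetical, not the orchestrator’s actual prompt template:

```python
def build_brief(task, page_source=None):
    """Assemble an agent brief. Progressive disclosure means shipping the
    artifact to be edited, not a summary of it -- and not nothing."""
    parts = [f"Task: {task}"]
    if page_source is None:
        parts.append("Context: none (agent must guess the page)")
    else:
        parts.append("Current page source (rewrite field by field):")
        parts.append(page_source)
    return "\n\n".join(parts)
```

The third landing-page attempt succeeded for exactly this reason: the full HTML travelled inside the brief, so the agent rewrote real fields instead of inventing generic ones.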
18.8 What Held True
The structural properties held where applicable. The novel and most important finding: organizational policy awareness lives nowhere in training data. This is a permanent boundary of agentic methodology that no model improvement will address.
The growth engine shipped. The system worked not because every agent succeeded – the Kit automation plainly failed – but because the orchestrator knew when to pivot and when to stop.
6 implementation phases · 7+ expert panels · 15 agent personas · 3 Kit automation attempts (all blocked by platform limitations) · 1 manual workaround · 1 CELA risk discovery · 3 anti-patterns observed (#17 Persona Drift, #7 The Trust Fall, #5 Scope Creep) · Methodology limitation discovered: organizational policy awareness