Agent Builder

The meta-system for generating workflow packages. Deep research is the default — not an optional upgrade. Forge builds Forge. Every package follows the same 7-file structure, quality-weighted scoring, and example-first generation process.

What This System Produces

A Complete 7-File Workflow Package

  • 00-START-HERE.html — Landing page with setup instructions
  • 01-process-agent.html — Machine-readable execution instructions
  • 01-process-consultant.html — Human-readable consultant guide
  • 02-context.html — Input manifest with all required data
  • 03-example.html — Golden example showing completed deliverable
  • 04-quality.html — QC checklist with weighted scoring
  • quick-start-prompt.txt — Copy/paste prompt to bootstrap workflow

Estimated Time

  • Setup: 5m
  • Input Collection: 10m
  • Generation: 15m

Cloud Mode

Use this mode with Claude Projects or ChatGPT.

Claude Project (Recommended)

  1. Create a new Claude Project
  2. Upload all 7 files from the Agent-Builder folder to the Project Knowledge
  3. Upload the Blue Ocean package folder as reference (the-blue-ocean-positioning-map/)
  4. Copy the contents of quick-start-prompt.txt and paste as your first message
  5. Provide the package name and description when prompted

Local Mode

Use this mode with Claude Code CLI.

Claude Code

  1. Open terminal in the workspace folder
  2. Run: claude
  3. Say: "Read the Agent-Builder folder in # Production System and the Blue Ocean package as reference"
  4. Provide the package name and description
  5. Agent will generate all 7 files to the specified output folder

Files in This Package

File | Type | Purpose
00-START-HERE.html | Entry | Orientation and setup
01-process-agent.html | Agent | Machine-readable instructions for AI execution
01-process-consultant.html | Guide | Human-readable process for consultants
02-context.html | Context | All inputs needed to generate a package
03-example.html | Example | Reference to Blue Ocean as golden example
04-quality.html | Quality | QC checklist for validating generated packages
quick-start-prompt.txt | Prompt | Bootstrap prompt for quick execution

Quick Start Prompt

Copy and paste this prompt into a Claude Project or Claude Code session to bootstrap a new workflow package generation.

You have been loaded with the Agent Builder workflow.

Read the 01-process-agent file and begin execution. You will be generating a complete 7-file workflow package.

IMPORTANT: Deep research is the default, not optional. Reference blueocean2.html to see the quality bar: specific data, buyer quotes, competitive intel, risk treatment. Speed-run outputs without research are not acceptable.

Start by asking me for the required inputs:
1. Package name (e.g., "The Competitor Analysis")
2. Package description (1-2 sentences)
3. Firm name (The Growth Firm, The Revenue Firm, or The Talent Firm)
4. Output folder path

After gathering inputs:
1. Reference the Blue Ocean package (the-blue-ocean-positioning-map) for structure
2. Reference blueocean2.html for the quality bar on deliverable outputs
3. Generate all 7 files with research-grade content
4. Run QC validation against the weighted checklist (90%+ required to pass)
5. Report the final score before delivering

Context Reset

Re-Read Before Every Package

Before starting ANY new package, re-read these instructions from the beginning. Context drift accumulates across packages — frameworks from Package A bleed into Package B. Fresh read = fresh start.

Mandatory Pre-Flight Checklist

  • Re-read this entire process — Do not rely on memory from previous packages
  • Re-read the PRD — Refresh the system architecture and golden examples
  • Clear mental slate — The new package has its own domain, frameworks, and requirements
  • Web search for THIS deliverable — Do not reuse framework research from other packages

This prevents the #1 failure mode: copying patterns from the last package instead of building what's correct for this one.

Role & Objective

Identity

  • You are a Package Generator Agent
  • You create workflow packages that enable consultants and AI agents to produce client-ready deliverables
  • You follow the 7-file package structure exactly

Goal

  • Generate a complete 7-file workflow package for the specified deliverable
  • All files follow design system and quality standards
  • Package passes QC checklist at 90%+ score

Output Format

  • 6 HTML files + 1 TXT file per package (7 files total)
  • Saved to: /[output-path]/[firm-name]/the-[package-name]/
  • File naming: numbered prefixes (00-, 01-, 02-, etc.)
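
The save-path convention above can be sketched as a small helper. This is an illustrative sketch, not part of the package spec; the function names (slugify, packagePath) are assumptions.

```javascript
// Illustrative sketch: derive the output folder from the gathered inputs.
// slugify and packagePath are assumed helper names, not part of the package spec.
function slugify(name) {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // runs of non-alphanumerics become hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}

function packagePath(outputPath, firmName, packageName) {
  return `${outputPath}/${slugify(firmName)}/${slugify(packageName)}/`;
}

// packagePath("/Outputs", "The Growth Firm", "The Competitor Analysis")
// → "/Outputs/the-growth-firm/the-competitor-analysis/"
```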

Execution Context

Reference Files

Before generating, read these files for structure and quality reference:

File | Purpose
02-context.html | Input manifest — what data you need to collect
03-example.html | Golden reference — Blue Ocean package structure
04-quality.html | QC criteria — validation checklist

Output File Structure

the-[package-name]/
├── 00-START-HERE.html
├── 01-process-agent.html
├── 01-process-consultant.html
├── 02-context.html
├── 03-example.html
├── 04-quality.html
└── quick-start-prompt.txt

Gather Inputs

Collect the following inputs before generating. Ask user one section at a time.

Package Definition

  • Package Name (Required) — Format: "The [Name]" (e.g., "The Competitor Analysis")
  • Package Description (Required) — 1-2 sentences describing deliverable purpose
  • Firm Name (Required) — "The [Name] Firm" (e.g., "The Growth Firm")

Output Configuration

  • Output Folder Path (Required) — Base path for saving (e.g., "/Outputs")
  • Design System Override (Optional) — Custom colors/fonts if not using default

Content Guidance

  • Example Company Type (Recommended) — Industry/type for fictional example (e.g., "B2B SaaS")
  • Key Sections (Recommended) — Main sections the deliverable should include
  • Similar Deliverables (Optional) — Reference packages to study for structure

Example-First Generation

Critical: Build the example FIRST, then derive all other files from it. This prevents framework drift.

Why Example-First

Without this sequence, you'll copy structure from unrelated packages and apply wrong frameworks. ERRC belongs in Blue Ocean, not Lean Six Sigma. DMAIC belongs in process improvement, not market analysis. Build the example with correct domain frameworks, then extract everything else from it.

Step 1: Web Search for Domain Frameworks

Before Creating Any Files

  • Search: "[deliverable type] framework methodology best practices"
  • Identify 3-5 core frameworks practitioners actually use
  • Note key terminology, standard structures, expected outputs
  • Flag industry-specific variations
  • DO NOT borrow frameworks from other packages — find what's correct for THIS deliverable

Step 2: Create 03-example.html FIRST

Build the Golden Example Before Anything Else

  • Apply the correct domain frameworks you identified in Step 1
  • Use a fictional but realistic client scenario
  • Include specific data, numbers, quotes to show research-backed quality
  • Make it complete enough to serve as a template
  • Validate: would a practitioner recognize this as correct methodology?

Step 3: Derive Process Files from Example

Reverse-Engineer from What You Built

  • Create 01-process-agent.html: What steps produced the example?
  • Create 01-process-consultant.html: Human-readable walkthrough of same process
  • Document the frameworks actually used (from Step 1, visible in Step 2)
  • Identify the logical sequence of analysis

Step 4: Derive Context from Example

What Inputs Made the Example Possible?

  • Create 02-context.html by examining what data the example needed
  • Required: What was essential to produce the example?
  • Recommended: What would have made it better?
  • Optional: What adds depth when available?

Step 5: Define Quality from Example

What Makes the Example Good?

  • Create 04-quality.html by analyzing success criteria
  • Completeness (40%): Which sections are non-negotiable?
  • Specificity (30%): What specific data does the example contain?
  • Validation (20%): What evidence supports the recommendations?
  • Presentation (10%): What formatting standards apply?

Step 6: Write Entry Files Last

Now You Know What the Package Does

  • Create 00-START-HERE.html: Explain when to use, what it produces, how to begin
  • Create quick-start-prompt.txt: Bootstrap prompt pointing to process file
  • Test: fresh LLM session with quick-start should work without clarification

File Specifications

Detailed requirements for each file type.

File 1: 00-START-HERE.html

Required Sections

  • What This Workflow Produces — bullet list of outputs
  • Estimated Time — 3-column grid (Setup, Input Collection, Generation)
  • Cloud Mode — step-by-step for Claude/ChatGPT Projects
  • Local Mode — step-by-step for Claude Code CLI
  • Files in This Package — table with links and badges
  • Footer navigation linking all package files

File 2: 01-process-agent.html

Required Sections

  • Section 01: Role & Objective — identity, goal, output format
  • Section 02: Execution Context — reference files, output paths
  • Section 03: Gather Inputs — full manifest with priority tags
  • Section 04: Generate Deliverable — section specs, format rules
  • Section 05: Quality Validation — QC process, pass threshold
  • Section 06: Output Delivery — save location, delivery summary

File 3: 01-process-consultant.html

Required Sections

  • Time Estimates — banner with total time + phase breakdown
  • 3-Step Process — Gather Context → Build Output → Quality Control
  • Interview vs Research Modes — when to use each
  • Sample Prompts — copy/paste prompts for each phase
  • What If Context Is Incomplete — handling gaps, minimum viable
  • Package Files — linked list with badges

File 4: 02-context.html

Required Sections

  • 5-6 logical input sections (varies by deliverable type)
  • Each input has: name, priority tag, description, format hint, example
  • Priority tags: Required (red), Recommended (orange), Optional (gray)
  • 25-35 total inputs typical

File 5: 03-example.html

Required Characteristics

  • Complete deliverable — all sections fully populated, NO placeholders
  • Fictional but realistic company — appropriate to deliverable type
  • Matches design system — same styling agents will produce
  • Shows quality bar — level of specificity and depth expected

Critical

The example file IS the template. Agents replicate its structure exactly. Every section, every data point must be present and realistic.

File 6: 04-quality.html

Required Sections

  • Scoring dashboard — sticky header with section scores + total
  • 4 weighted dimensions: Completeness (40%), Specificity (30%), Validation (20%), Presentation (10%)
  • 25-30 checkable criteria with Pass/Fail buttons
  • Interactive JavaScript for score calculation
  • Verdict thresholds: Approved (90%+), Minor Revisions (75-89%), Rejected (<75%)
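
The interactive scoring described above can be sketched as follows. The weights and verdict thresholds come from this spec; function and variable names are illustrative, not a required implementation.

```javascript
// Illustrative sketch of the 04-quality.html score calculation.
// Each dimension records how many of its criteria passed.
const WEIGHTS = { completeness: 0.40, specificity: 0.30, validation: 0.20, presentation: 0.10 };

function dimensionScore(passed, total) {
  return total === 0 ? 0 : passed / total;
}

function overallScore(results) {
  // results: { completeness: { passed, total }, specificity: { ... }, ... }
  let score = 0;
  for (const [dim, weight] of Object.entries(WEIGHTS)) {
    const { passed, total } = results[dim];
    score += weight * dimensionScore(passed, total);
  }
  return Math.round(score * 100); // percentage
}

function verdict(pct) {
  if (pct >= 90) return "Approved";
  if (pct >= 75) return "Minor Revisions";
  return "Rejected";
}
```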

File 7: quick-start-prompt.txt

Template

You have been loaded with the [Package Name] workflow.

Read the 01-process-agent file and begin execution. Start by asking me for the required context inputs, one section at a time. After gathering all inputs, generate the complete [deliverable name] following the 03-example structure. Run QC validation against the 04-quality criteria and report the final score before delivering.

My client is: [CLIENT NAME]

Quality Validation

After generating all files, validate against the QC checklist in 04-quality.html.

Pass Threshold

  • Approved (90%+): Proceed to delivery
  • Minor Revisions (75-89%): Fix failed criteria, re-run QC. Max 2 cycles.
  • Rejected (<75%): Regenerate entire package. If rejected twice, escalate.

Hard Rule

Do not deliver a package that scores below 90%. Iterate until Approved or escalate to human review.
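
One way to picture the revision loop implied by these thresholds, as a hedged sketch: runQc and fixFailures are assumed hooks standing in for the real QC and revision steps, not actual APIs.

```javascript
// Illustrative sketch of the QC revision loop.
// runQc() returns { score, failedCriteria }; fixFailures applies targeted fixes.
function qcLoop(runQc, fixFailures, maxRevisionCycles = 2) {
  let result = runQc();
  let cycles = 0;
  while (result.score < 90) {
    if (result.score < 75) return { ...result, action: "Regenerate entire package" };
    if (cycles >= maxRevisionCycles) return { ...result, action: "Escalate to human review" };
    fixFailures(result.failedCriteria); // targeted fixes on failed criteria only
    result = runQc();
    cycles++;
  }
  return { ...result, action: "Deliver" };
}
```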

Output Delivery

Save Location

/[output-path]/[firm-name]/the-[package-name]/

Delivery Summary Format

Report Template

## Package Generated: [Package Name]

**Location:** /[path]/the-[package-name]/
**Files Created:** 7
**QC Score:** [XX]%
**Verdict:** Approved

### Files
- 00-START-HERE.html
- 01-process-agent.html
- 01-process-consultant.html
- 02-context.html
- 03-example.html
- 04-quality.html
- quick-start-prompt.txt

3-Step Process

Human-readable process for creating workflow packages. Use this guide when running the Agent Builder manually or supervising AI execution. Total time: approximately 30 minutes.

Step 1: Gather Context

Collect all inputs needed to generate the package. Reference the Context Loading section below for the full input manifest.

  • Package name and description
  • Target firm (Growth, Revenue, Talent, etc.)
  • Output folder path
  • Example company type (B2B SaaS, professional services, etc.)
  • Key sections the deliverable should include

Step 2: Generate Package

Use AI to generate all 7 files. Reference the Blue Ocean package for structure.

  • Load the Agent Builder context into Claude/ChatGPT
  • Provide the gathered inputs
  • Review each file as it's generated
  • Request revisions for any issues before moving to next file

Step 3: Quality Control

Validate the generated package against the QC checklist.

  • Run through all criteria (Completeness, Specificity, Validation, Presentation)
  • Score must reach 90%+ to approve
  • Fix any failed criteria and re-validate
  • Document the final score in delivery summary

Interview vs. Research Mode

Interview Mode

When you have access to a subject matter expert who can answer questions.

  • Faster — get answers directly
  • More accurate — SME knows the domain
  • Best for: new deliverable types, complex domains

Research Mode

When you need to research best practices independently.

  • Self-directed — no dependencies
  • Scalable — can batch multiple packages
  • Best for: common deliverables, well-documented domains

Sample Prompts

Initial Setup

Copy This Prompt
Read the Agent-Builder folder in # Production System. Then read the Blue Ocean package (the-blue-ocean-positioning-map) as your structural reference. You'll be generating a new workflow package.

Package Generation

Copy This Prompt
Generate a workflow package called "The [Package Name]" for The [Firm] Firm.

Description: [1-2 sentences describing the deliverable]

Save to: /Outputs/[firm-name]/the-[package-name]/

Use a fictional [industry type] company for the example. Follow the Blue Ocean structure exactly.

QC Validation

Copy This Prompt
Run the QC checklist from 04-quality.html against the package you just generated. Report the score for each dimension and the overall verdict. If any criteria fail, explain what needs to be fixed.

What If Context Is Incomplete?

Sometimes you won't have all the inputs. Here's how to handle gaps:

Missing Input | Workaround
Package description unclear | Research similar deliverables, propose structure, get approval
No example company type specified | Default to B2B SaaS for Growth/Revenue Firm, professional services for Talent Firm
Key sections unknown | Research best practices for that deliverable type, propose sections
Output path not specified | Use default: /Outputs/[firm-name]/the-[package-name]/

Minimum Viable Package

At minimum, you need:

  • Package name
  • 1-sentence description
  • Target firm

Everything else can be researched or defaulted.

Context Manifest

All inputs needed to generate a workflow package. Gather these before running the Agent Builder.

Section 1: Package Definition

Core information about what package to create.

Package Name (Required)

The name of the deliverable package to create. Should follow "The [Deliverable Name]" convention.

Format

The [Name]

Example

The Blue Ocean Positioning Map

Package Description (Required)

A 1-2 sentence description of what this deliverable is and what it produces.

Format

1-2 sentences describing purpose and outputs

Example

A competitive positioning analysis that maps your offering against competitors across key value dimensions, identifying blue ocean opportunities.

Firm Name (Required)

Which firm this package belongs to. Determines folder structure and header branding.

Format

The [Name] Firm

Options

The Growth Firm, The Revenue Firm, The Talent Firm

Section 2: Output Configuration

Where and how to save the generated files.

Output Folder Path (Required)

The base path where package folders should be created. Packages save to /[path]/[firm-name]/the-[package-name]/.

Format

Absolute or relative file path

Example

/5. Outputs/Firms

Section 3: Content Guidance

Direction for the deliverable's content and structure.

Example Company Type (Recommended)

The type of fictional company to use in the golden example. Should be appropriate for the deliverable type.

Format

Industry/business model description

Example

B2B SaaS company selling to mid-market

Key Sections (Recommended)

Main sections the deliverable should include. If not provided, agent will research best practices.

Format

List of section names

Example

Value Dimensions, Competitor Profiles, Positioning Map, Strategic Recommendations

Reference Packages (Optional)

Existing packages to study for structure or style inspiration.

Format

List of package names or paths

Example

the-blue-ocean-positioning-map, the-ideal-customer-profile

Section 4: Design System

Visual identity specifications. Defaults apply if not overridden.

Color Palette Override (Optional)

Custom colors if not using the default palette.

Format

Named colors with hex values

Default

Noir #0a0a0a, Ivory #f5f4f0, White #ffffff, Charcoal #1a1a1a, Stone #8a8680

Typography Override (Optional)

Custom fonts if not using the default stack.

Format

Font names for headings, body, code

Default

Headings: DM Serif Display, Body: DM Sans (300), Code: JetBrains Mono

Section 5: Execution Options

How the Agent Builder should run.

Execution Mode (Optional)

Whether to run in Claude Code (local) or Claude/ChatGPT Project (cloud).

Options

local or cloud

Default

local (Claude Code)

Generate Reports (Optional)

Whether to generate Quality Report and System Improvement Report after package creation.

Format

true or false

Default

true

Golden Example — The Blue Ocean Positioning Map

The Blue Ocean Positioning Map package serves as the gold standard reference. The blueocean2.html output demonstrates research-backed quality — specific data, buyer quotes, competitive intel, risk treatment.

This package demonstrates the complete 7-file structure with all elements properly implemented. Use it as your structural template when generating new packages.

Why This Package?

Blue Ocean demonstrates the upgraded 7-file structure with numbered prefixes, dual process docs (agent + consultant), and quick-start prompt. The blueocean2.html output shows what deep research produces vs. a speed-run: specific stats ($166B market, 3% vs 90% completion), direct buyer quotes, competitive intel with sources, and risk scenarios with mitigations.

Structure Overview

Every package follows this exact file structure:

the-[package-name]/
├── 00-START-HERE.html          # Entry point, setup guide
├── 01-process-agent.html       # Machine-readable instructions
├── 01-process-consultant.html  # Human-readable guide
├── 02-context.html             # Input manifest
├── 03-example.html             # Completed deliverable
├── 04-quality.html             # QC checklist
└── quick-start-prompt.txt      # Bootstrap prompt

  • 7 Files
  • 6 Sections per Process
  • 25+ Context Inputs
  • 25+ QC Criteria

What to Replicate

When generating a new package, ensure these elements match the Blue Ocean reference:

00-START-HERE.html

  • Hero with package title and description
  • "What This Workflow Produces" — bulleted output list
  • Estimated Time — 3-column grid
  • Cloud Mode setup steps
  • Local Mode setup steps
  • Files in This Package — linked table with badges
  • Footer with all file links

01-process-agent.html

  • Section 01: Role & Objective — identity, goal, output format
  • Section 02: Execution Context — reference files, paths
  • Section 03: Gather Inputs — full manifest with priority tags
  • Section 04: Generate Deliverable — section specs
  • Section 05: Quality Validation — QC process
  • Section 06: Output Delivery — save location, summary

01-process-consultant.html

  • Time banner with total + phase breakdown
  • 3-step process cards (Gather → Build → QC)
  • Interview vs Research mode comparison
  • Sample prompts for each phase
  • What If Context Is Incomplete section
  • Package files list with badges

02-context.html

  • 5-6 logical input sections
  • Each input: name, priority tag, description, format, example
  • Priority tags: Required (red), Recommended (orange), Optional (gray)
  • 25-35 total inputs

03-example.html (or direct deliverable output)

  • Complete deliverable — NO placeholders
  • Research-backed content: specific data, quotes, competitive intel
  • Buyer voice: direct quotes from real sources
  • Risk treatment: what could go wrong + mitigations
  • All sections fully populated with evidence
  • Matches design system exactly

04-quality.html

  • Sticky score dashboard
  • 4 dimensions: Completeness (40%), Specificity (30%), Validation (20%), Presentation (10%)
  • 25-30 checkable criteria
  • Interactive Pass/Fail buttons
  • JavaScript score calculation
  • Verdict thresholds (90%+ Approved)

quick-start-prompt.txt

  • References 01-process-agent file
  • Specifies execution sequence
  • Includes [CLIENT NAME] placeholder
  • Under 100 words
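
The three checkable rules above lend themselves to a quick mechanical check. validateQuickStart is an illustrative helper, not part of the package.

```javascript
// Illustrative check of a quick-start-prompt.txt against the rules above.
function validateQuickStart(text) {
  const wordCount = text.trim().split(/\s+/).length;
  return {
    referencesProcessFile: text.includes("01-process-agent"),
    hasClientPlaceholder: text.includes("[CLIENT NAME]"),
    underWordLimit: wordCount <= 100,
  };
}
```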

Design System Reference

All packages must follow this visual identity:

Colors

  • Noir: #0a0a0a — headers, badges, code blocks
  • Ivory: #f5f4f0 — page background
  • White: #ffffff — cards, content areas
  • Charcoal: #1a1a1a — body text
  • Stone: #8a8680 — secondary text, labels
  • Border: #d8d4cc — dividers

Typography

  • Headings: DM Serif Display, serif
  • Body: DM Sans, font-weight: 300
  • Code: JetBrains Mono, monospace

Badge Colors by File Type

  • Start Here: Orange (#d97706)
  • Process (Agent/Consultant): Purple (#8b5cf6)
  • Context: Blue (#1d4ed8)
  • Example: Charcoal (#1a1a1a)
  • Quality: Green (#16a34a)

Quality Scoring

Quality criteria for validating a generated 7-file workflow package. The gate check must pass before weighted scoring begins.

Gate Check — Framework Fit

This gate must pass before weighted scoring begins. If any item fails, STOP and rebuild the 03-example.html using the correct frameworks for this deliverable type.

Gate Check Items

All three must pass to unlock weighted scoring:

  • 1. Web search was performed to identify correct domain frameworks
  • 2. 03-example uses only frameworks appropriate to this deliverable domain
  • 3. No cross-domain framework contamination (e.g., ERRC in Six Sigma, DMAIC in Strategy)

Gate Failed — Required Action

STOP all further work. Return to Step 1 of the example-first workflow. Perform a web search to identify correct frameworks for this deliverable type, then rebuild 03-example.html from scratch using the correct methodology.

Failure Response Protocol

  • 1st failure (Fix & Revalidate): Address specific failures, make targeted fixes, run QC again on failed sections only
  • 2nd failure (Rebuild Section): Same section fails twice → rebuild that file from scratch using the 03-example as reference
  • 3rd failure (Human Review): Same section fails 3x → escalate to human review; likely a systemic issue with the package design

Dimension 1: Completeness (40% weight)

  • All 7 files were created (00-START-HERE, 01-process-agent, 01-process-consultant, 02-context, 03-example, 04-quality, quick-start-prompt)
  • Files saved to correct folder: /[firm-name]/the-[package-name]/
  • File naming uses numbered prefixes (00-, 01-, 02-, etc.)
  • 00-START-HERE has: output preview, time estimates, cloud mode, local mode, file table
  • 01-process-agent has all 6 sections (Role, Context, Inputs, Generate, QC, Delivery)
  • 01-process-consultant has: time banner, 3-step process, modes, prompts, gap handling
  • 02-context has 25+ inputs with priority tags across 5+ sections
  • 03-example is a complete deliverable with NO placeholders
  • 04-quality has 25+ criteria across 4 weighted dimensions
  • quick-start-prompt.txt is present and under 100 words

Dimension 2: Specificity (20% weight)

  • 03-example uses a realistic fictional company appropriate to the deliverable type
  • 03-example data is plausible (realistic numbers, metrics, company details)
  • 02-context inputs are specific to this deliverable (not generic)
  • 02-context includes realistic example values for each input
  • 04-quality criteria are relevant to this specific deliverable type
  • 01-process-agent instructions are actionable (not vague)
  • 01-process-consultant sample prompts are copy/paste ready

Dimension 3: Fit for Purpose (20% weight)

  • 03-example uses frameworks appropriate to this deliverable type (not borrowed from unrelated packages)
  • A domain practitioner would recognize the methodology as correct for this deliverable
  • Framework terminology matches industry standards (e.g., DMAIC for Six Sigma, not ERRC)
  • 01-process-agent instructions align with how practitioners actually execute this deliverable
  • 02-context inputs request data that's actually needed for this methodology

Dimension 4: Validation (10% weight)

  • All internal links between files work correctly
  • Footer navigation present on all HTML files
  • 04-quality JavaScript calculates scores correctly
  • Files render correctly in browser (no broken styles)

Dimension 5: Presentation (10% weight)

  • Header shows correct firm name
  • Doc badges use correct colors (Start: orange, Process: purple, Context: blue, Example: charcoal, Quality: green)
  • Color palette matches design system (noir, ivory, white, charcoal, stone)
  • Typography correct (DM Serif Display headings, DM Sans 300 body, JetBrains Mono code)
  • Voice is professional, confident, and direct (no hedging)

Verdict Thresholds

Score | Verdict | Action
90%+ | Approved | Proceed to delivery
75-89% | Minor Revisions | Fix failed criteria, re-run QC. Max 2 cycles.
<75% | Rejected | Regenerate entire package. If rejected twice, escalate to human review.
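
Putting the gate check and the five weighted dimensions together, a sketch of the package-level verdict. The weights and thresholds are the ones listed in this section; packageVerdict and its input shape are illustrative assumptions.

```javascript
// Illustrative sketch: Agent Builder package QC. The gate check must pass
// before weighted scoring begins (weights from the five dimensions above).
const WEIGHTS = { completeness: 0.40, specificity: 0.20, fitForPurpose: 0.20, validation: 0.10, presentation: 0.10 };

function packageVerdict(gatePassed, dimensionScores) {
  // dimensionScores: fraction of criteria passed per dimension, e.g. { completeness: 1.0, ... }
  if (!gatePassed) return { score: 0, verdict: "Gate Failed: rebuild 03-example" };
  let total = 0;
  for (const [dim, weight] of Object.entries(WEIGHTS)) {
    total += weight * (dimensionScores[dim] ?? 0);
  }
  const pct = Math.round(total * 100);
  const verdict = pct >= 90 ? "Approved" : pct >= 75 ? "Minor Revisions" : "Rejected";
  return { score: pct, verdict };
}
```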

Forge builds Forge. The meta-system for building every other agent.
