Overview
What This System Produces
A Complete 7-File Workflow Package
- 00-START-HERE.html — Landing page with setup instructions
- 01-process-agent.html — Machine-readable execution instructions
- 01-process-consultant.html — Human-readable consultant guide
- 02-context.html — Input manifest with all required data
- 03-example.html — Golden example showing completed deliverable
- 04-quality.html — QC checklist with weighted scoring
- quick-start-prompt.txt — Copy/paste prompt to bootstrap workflow
Estimated Time
- Setup: 5m
- Input Collection: 10m
- Generation: 15m
Cloud Mode
Use this mode with Claude Projects or ChatGPT.
Claude Project Recommended
- Create a new Claude Project
- Upload all 7 files from the Agent-Builder folder to the Project Knowledge
- Upload the Blue Ocean package folder (the-blue-ocean-positioning-map/) as reference
- Copy the contents of quick-start-prompt.txt and paste as your first message
- Provide the package name and description when prompted
Local Mode
Use this mode with Claude Code CLI.
Claude Code
- Open terminal in the workspace folder
- Run: claude
- Say: "Read the Agent-Builder folder in # Production System and the Blue Ocean package as reference"
- Provide the package name and description
- The agent will generate all 7 files to the specified output folder
Files in This Package
| File | Type | Purpose |
|---|---|---|
| 00-START-HERE.html | Entry | Orientation and setup |
| 01-process-agent.html | Agent | Machine-readable instructions for AI execution |
| 01-process-consultant.html | Guide | Human-readable process for consultants |
| 02-context.html | Context | All inputs needed to generate a package |
| 03-example.html | Example | Reference to Blue Ocean as golden example |
| 04-quality.html | Quality | QC checklist for validating generated packages |
| quick-start-prompt.txt | Prompt | Bootstrap prompt for quick execution |
Quick Start
Quick Start Prompt
Copy and paste this prompt into a Claude Project or Claude Code session to bootstrap a new workflow package generation.
You have been loaded with the Agent Builder workflow.
Read the 01-process-agent file and begin execution. You will be generating a complete 7-file workflow package.
IMPORTANT: Deep research is the default, not optional. Reference blueocean2.html to see the quality bar -- specific data, buyer quotes, competitive intel, risk treatment. Speed-run outputs without research are not acceptable.
Start by asking me for the required inputs:
1. Package name (e.g., "The Competitor Analysis")
2. Package description (1-2 sentences)
3. Firm name (The Growth Firm, The Revenue Firm, or The Talent Firm)
4. Output folder path
After gathering inputs:
1. Reference the Blue Ocean package (the-blue-ocean-positioning-map) for structure
2. Reference blueocean2.html for the quality bar on deliverable outputs
3. Generate all 7 files with research-grade content
4. Run QC validation (specificity weighted at 40%)
5. Report the final score before delivering
Agent Process — Section 00
Context Reset
Re-Read Before Every Package
Before starting ANY new package, re-read these instructions from the beginning. Context drift accumulates across packages — frameworks from Package A bleed into Package B. Fresh read = fresh start.
Mandatory Pre-Flight Checklist
- Re-read this entire process — Do not rely on memory from previous packages
- Re-read the PRD — Refresh the system architecture and golden examples
- Clear mental slate — The new package has its own domain, frameworks, and requirements
- Web search for THIS deliverable — Do not reuse framework research from other packages
This prevents the #1 failure mode: copying patterns from the last package instead of building what's correct for this one.
Agent Process — Section 01
Role & Objective
Identity
- You are a Package Generator Agent
- You create workflow packages that enable consultants and AI agents to produce client-ready deliverables
- You follow the 7-file package structure exactly
Goal
- Generate a complete 7-file workflow package for the specified deliverable
- All files follow design system and quality standards
- Package passes QC checklist at 90%+ score
Output Format
- 7 HTML files + 1 TXT file per package
- Saved to: /[output-path]/[firm-name]/the-[package-name]/
- File naming: numbered prefixes (00-, 01-, 02-, etc.)
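The save-location convention above can be sketched as a small helper. This is an illustrative sketch, not part of the package — the function name `packagePath` and the lowercase-hyphenation of the folder names are assumptions drawn from the path examples in this document.

```javascript
// Build the output folder path from an output path, firm name, and
// package name. Hypothetical helper: the workflow specifies only the
// resulting path convention, not this code.
function packagePath(outputPath, firmName, packageName) {
  // "The Competitor Analysis" -> "the-competitor-analysis"
  const slug = packageName.trim().toLowerCase().replace(/\s+/g, "-");
  const firm = firmName.trim().toLowerCase().replace(/\s+/g, "-");
  return `${outputPath}/${firm}/${slug}/`;
}
```

For example, `packagePath("/Outputs", "The Growth Firm", "The Competitor Analysis")` yields `/Outputs/the-growth-firm/the-competitor-analysis/`.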
Agent Process — Section 02
Execution Context
Reference Files
Before generating, read these files for structure and quality reference:
| File | Purpose |
|---|---|
| 02-context.html | Input manifest — what data you need to collect |
| 03-example.html | Golden reference — Blue Ocean package structure |
| 04-quality.html | QC criteria — validation checklist |
Output File Structure
All files save to /[output-path]/[firm-name]/the-[package-name]/ with numbered prefixes:
- 00-START-HERE.html
- 01-process-agent.html
- 01-process-consultant.html
- 02-context.html
- 03-example.html
- 04-quality.html
- quick-start-prompt.txt
Agent Process — Section 03
Gather Inputs
Collect the following inputs before generating. Ask the user for one section at a time.
Package Definition
- Package Name Required — Format: "The [Name]" (e.g., "The Competitor Analysis")
- Package Description Required — 1-2 sentences describing deliverable purpose
- Firm Name Required — "The [Name] Firm" (e.g., "The Growth Firm")
Output Configuration
- Output Folder Path Required — Base path for saving (e.g., "/Outputs")
- Design System Override Optional — Custom colors/fonts if not using default
Content Guidance
- Example Company Type Recommended — Industry/type for fictional example (e.g., "B2B SaaS")
- Key Sections Recommended — Main sections the deliverable should include
- Similar Deliverables Optional — Reference packages to study for structure
Agent Process — Section 04
Example-First Generation
Critical: Build the example FIRST, then derive all other files from it. This prevents framework drift.
Why Example-First
Without this sequence, you'll copy structure from unrelated packages and apply wrong frameworks. ERRC belongs in Blue Ocean, not Lean Six Sigma. DMAIC belongs in process improvement, not market analysis. Build the example with correct domain frameworks, then extract everything else from it.
Step 1: Web Search for Domain Frameworks
Before Creating Any Files
- Search: "[deliverable type] framework methodology best practices"
- Identify 3-5 core frameworks practitioners actually use
- Note key terminology, standard structures, expected outputs
- Flag industry-specific variations
- DO NOT borrow frameworks from other packages — find what's correct for THIS deliverable
Step 2: Create 03-example.html FIRST
Build the Golden Example Before Anything Else
- Apply the correct domain frameworks you identified in Step 1
- Use a fictional but realistic client scenario
- Include specific data, numbers, quotes to show research-backed quality
- Make it complete enough to serve as a template
- Validate: would a practitioner recognize this as correct methodology?
Step 3: Derive Process Files from Example
Reverse-Engineer from What You Built
- Create 01-process-agent.html: What steps produced the example?
- Create 01-process-consultant.html: Human-readable walkthrough of same process
- Document the frameworks actually used (from Step 1, visible in Step 2)
- Identify the logical sequence of analysis
Step 4: Derive Context from Example
What Inputs Made the Example Possible?
- Create 02-context.html by examining what data the example needed
- Required: What was essential to produce the example?
- Recommended: What would have made it better?
- Optional: What adds depth when available?
Step 5: Define Quality from Example
What Makes the Example Good?
- Create 04-quality.html by analyzing success criteria
- Specificity (40%): What specific data does the example contain?
- Completeness (25%): Which sections are non-negotiable?
- Validation (20%): What evidence supports the recommendations?
- Presentation (15%): What formatting standards apply?
Step 6: Write Entry Files Last
Now You Know What the Package Does
- Create 00-START-HERE.html: Explain when to use, what it produces, how to begin
- Create quick-start-prompt.txt: Bootstrap prompt pointing to process file
- Test: fresh LLM session with quick-start should work without clarification
Agent Process — Section 05
File Specifications
Detailed requirements for each file type.
File 1: 00-START-HERE.html
Required Sections
- What This Workflow Produces — bullet list of outputs
- Estimated Time — 3-column grid (Setup, Input Collection, Generation)
- Cloud Mode — step-by-step for Claude/ChatGPT Projects
- Local Mode — step-by-step for Claude Code CLI
- Files in This Package — table with links and badges
- Footer navigation linking all package files
File 2: 01-process-agent.html
Required Sections
- Section 01: Role & Objective — identity, goal, output format
- Section 02: Execution Context — reference files, output paths
- Section 03: Gather Inputs — full manifest with priority tags
- Section 04: Generate Deliverable — section specs, format rules
- Section 05: Quality Validation — QC process, pass threshold
- Section 06: Output Delivery — save location, delivery summary
File 3: 01-process-consultant.html
Required Sections
- Time Estimates — banner with total time + phase breakdown
- 3-Step Process — Gather Context → Build Output → Quality Control
- Interview vs Research Modes — when to use each
- Sample Prompts — copy/paste prompts for each phase
- What If Context Is Incomplete — handling gaps, minimum viable
- Package Files — linked list with badges
File 4: 02-context.html
Required Sections
- 5-6 logical input sections (varies by deliverable type)
- Each input has: name, priority tag, description, format hint, example
- Priority tags: Required (red), Recommended (orange), Optional (gray)
- 25-35 total inputs typical
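The per-input spec above (name, priority tag, description, format hint, example) can be modeled as a simple record. This is a sketch only — the real 02-context.html renders these as styled HTML, and the JSON-like field names here are hypothetical.

```javascript
// One entry from the 02-context input manifest. Field names are
// illustrative; the file itself is HTML, not JSON.
const input = {
  name: "Package Name",
  priority: "Required", // Required | Recommended | Optional
  description: "The name of the deliverable package to create.",
  format: 'String, following the "The [Name]" convention',
  example: "The Competitor Analysis",
};

// Priority tags map to the badge colors named in this section.
const priorityColors = {
  Required: "red",
  Recommended: "orange",
  Optional: "gray",
};
```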
File 5: 03-example.html
Required Characteristics
- Complete deliverable — all sections fully populated, NO placeholders
- Fictional but realistic company — appropriate to deliverable type
- Matches design system — same styling agents will produce
- Shows quality bar — level of specificity and depth expected
Critical
The example file IS the template. Agents replicate its structure exactly. Every section, every data point must be present and realistic.
File 6: 04-quality.html
Required Sections
- Scoring dashboard — sticky header with section scores + total
- 4 weighted dimensions: Completeness (40%), Specificity (30%), Validation (20%), Presentation (10%)
- 25-30 checkable criteria with Pass/Fail buttons
- Interactive JavaScript for score calculation
- Verdict thresholds: Approved (90%+), Minor Revisions (75-89%), Rejected (<75%)
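The weighted calculation that the page's interactive JavaScript must perform can be sketched as follows, using the File 6 weights listed above. The function shape and result format are assumptions, not the package's actual code.

```javascript
// Dimension weights from the File 6 spec above.
const WEIGHTS = { completeness: 0.4, specificity: 0.3, validation: 0.2, presentation: 0.1 };

// results: arrays of Pass (true) / Fail (false) marks per dimension.
// A dimension's score is its pass rate times its weight; the total is
// reported as a percentage for the verdict thresholds.
function qcScore(results) {
  let total = 0;
  for (const [dimension, weight] of Object.entries(WEIGHTS)) {
    const checks = results[dimension] || [];
    const passRate = checks.length ? checks.filter(Boolean).length / checks.length : 0;
    total += passRate * weight;
  }
  return Math.round(total * 100);
}
```

For instance, passing every completeness and validation check but only half of specificity and none of presentation scores 40 + 15 + 20 + 0 = 75%, which lands in the Minor Revisions band.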
File 7: quick-start-prompt.txt
Template
You have been loaded with the [Package Name] workflow.
Read the 01-process-agent file and begin execution. Start by asking me for the required context inputs, one section at a time. After gathering all inputs, generate the complete [deliverable name] following the 03-example structure. Run QC validation against the 04-quality criteria and report the final score before delivering.
My client is: [CLIENT NAME]
Agent Process — Section 06
Quality Validation
After generating all files, validate against the QC checklist in 04-quality.html.
Pass Threshold
- Approved (90%+): Proceed to delivery
- Minor Revisions (75-89%): Fix failed criteria, re-run QC. Max 2 cycles.
- Rejected (<75%): Regenerate entire package. If rejected twice, escalate.
Hard Rule
Do not deliver a package that scores below 90%. Iterate until Approved or escalate to human review.
Agent Process — Section 07
Output Delivery
Save Location
/[output-path]/[firm-name]/the-[package-name]/
Delivery Summary Format
Report Template
## Package Generated: [Package Name]
**Location:** /[path]/the-[package-name]/
**Files Created:** 7
**QC Score:** [XX]%
**Verdict:** Approved
### Files
- 00-START-HERE.html
- 01-process-agent.html
- 01-process-consultant.html
- 02-context.html
- 03-example.html
- 04-quality.html
- quick-start-prompt.txt
Consultant Guide
3-Step Process
Human-readable process for creating workflow packages. Use this guide when running the Agent Builder manually or supervising AI execution. Total time: approximately 30 minutes.
Gather Context
Collect all inputs needed to generate the package. Reference the Context Loading section below for the full input manifest.
- Package name and description
- Target firm (Growth, Revenue, Talent, etc.)
- Output folder path
- Example company type (B2B SaaS, professional services, etc.)
- Key sections the deliverable should include
Generate Package
Use AI to generate all 7 files. Reference the Blue Ocean package for structure.
- Load the Agent Builder context into Claude/ChatGPT
- Provide the gathered inputs
- Review each file as it's generated
- Request revisions for any issues before moving to next file
Quality Control
Validate the generated package against the QC checklist.
- Run through all criteria (Completeness, Specificity, Validation, Presentation)
- Score must reach 90%+ to approve
- Fix any failed criteria and re-validate
- Document the final score in delivery summary
Consultant Guide
Interview vs. Research Mode
Interview Mode
When you have access to a subject matter expert who can answer questions.
- Faster — get answers directly
- More accurate — SME knows the domain
- Best for: new deliverable types, complex domains
Research Mode
When you need to research best practices independently.
- Self-directed — no dependencies
- Scalable — can batch multiple packages
- Best for: common deliverables, well-documented domains
Consultant Guide
Sample Prompts
Initial Setup
Package Generation
Description: [1-2 sentences describing the deliverable]
Save to: /Outputs/[firm-name]/the-[package-name]/
Use a fictional [industry type] company for the example. Follow the Blue Ocean structure exactly.
QC Validation
Consultant Guide
What If Context Is Incomplete?
Sometimes you won't have all the inputs. Here's how to handle gaps:
| Missing Input | Workaround |
|---|---|
| Package description unclear | Research similar deliverables, propose structure, get approval |
| No example company type specified | Default to B2B SaaS for Growth/Revenue Firm, professional services for Talent Firm |
| Key sections unknown | Research best practices for that deliverable type, propose sections |
| Output path not specified | Use default: /Outputs/[firm-name]/the-[package-name]/ |
Minimum Viable Package
At minimum, you need:
- Package name
- 1-sentence description
- Target firm
Everything else can be researched or defaulted.
Context Loading
Context Manifest
All inputs needed to generate a workflow package. Gather these before running the Agent Builder.
Section 1: Package Definition
Core information about what package to create.
- Package Name (Required) — The name of the deliverable package to create. Should follow the "The [Deliverable Name]" convention.
- Package Description (Required) — A 1-2 sentence description of what this deliverable is and what it produces.
- Firm Name (Required) — Which firm this package belongs to. Determines folder structure and header branding.
Section 2: Output Configuration
Where and how to save the generated files.
- Output Folder Path (Required) — The base path where package folders should be created. Packages save to /[path]/[firm-name]/the-[package-name]/.
Section 3: Content Guidance
Direction for the deliverable's content and structure.
- Example Company Type (Recommended) — The type of fictional company to use in the golden example. Should be appropriate for the deliverable type.
- Key Sections (Recommended) — Main sections the deliverable should include. If not provided, the agent will research best practices.
- Similar Deliverables (Optional) — Existing packages to study for structure or style inspiration.
Section 4: Design System
Visual identity specifications. Defaults apply if not overridden.
- Custom colors, if not using the default palette (Optional)
- Custom fonts, if not using the default stack (Optional)
Section 5: Execution Options
How the Agent Builder should run.
- Whether to run in Claude Code (local) or a Claude/ChatGPT Project (cloud)
- Whether to generate a Quality Report and System Improvement Report after package creation
Example Output
Golden Example — The Blue Ocean Positioning Map
The Blue Ocean Positioning Map package serves as the gold standard reference. The blueocean2.html output demonstrates research-backed quality — specific data, buyer quotes, competitive intel, risk treatment.
This package demonstrates the complete 7-file structure with all elements properly implemented. Use it as your structural template when generating new packages.
Why This Package?
Blue Ocean demonstrates the upgraded 7-file structure with numbered prefixes, dual process docs (agent + consultant), and quick-start prompt. The blueocean2.html output shows what deep research produces vs. a speed-run: specific stats ($166B market, 3% vs 90% completion), direct buyer quotes, competitive intel with sources, and risk scenarios with mitigations.
Structure Overview
Every package follows this exact file structure:
- 7 files
- 6 sections per process
- 25+ context inputs
- 25+ QC criteria
What to Replicate
When generating a new package, ensure these elements match the Blue Ocean reference:
00-START-HERE.html
- Hero with package title and description
- "What This Workflow Produces" — bulleted output list
- Estimated Time — 3-column grid
- Cloud Mode setup steps
- Local Mode setup steps
- Files in This Package — linked table with badges
- Footer with all file links
01-process-agent.html
- Section 01: Role & Objective — identity, goal, output format
- Section 02: Execution Context — reference files, paths
- Section 03: Gather Inputs — full manifest with priority tags
- Section 04: Generate Deliverable — section specs
- Section 05: Quality Validation — QC process
- Section 06: Output Delivery — save location, summary
01-process-consultant.html
- Time banner with total + phase breakdown
- 3-step process cards (Gather → Build → QC)
- Interview vs Research mode comparison
- Sample prompts for each phase
- What If Context Is Incomplete section
- Package files list with badges
02-context.html
- 5-6 logical input sections
- Each input: name, priority tag, description, format, example
- Priority tags: Required (red), Recommended (orange), Optional (gray)
- 25-35 total inputs
03-example.html (or direct deliverable output)
- Complete deliverable — NO placeholders
- Research-backed content: specific data, quotes, competitive intel
- Buyer voice: direct quotes from real sources
- Risk treatment: what could go wrong + mitigations
- All sections fully populated with evidence
- Matches design system exactly
04-quality.html
- Sticky score dashboard
- 4 dimensions: Completeness (40%), Specificity (30%), Validation (20%), Presentation (10%)
- 25-30 checkable criteria
- Interactive Pass/Fail buttons
- JavaScript score calculation
- Verdict thresholds (90%+ Approved)
quick-start-prompt.txt
- References 01-process-agent file
- Specifies execution sequence
- Includes [CLIENT NAME] placeholder
- Under 100 words
Design System Reference
All packages must follow this visual identity:
Colors
- Noir: #0a0a0a — headers, badges, code blocks
- Ivory: #f5f4f0 — page background
- White: #ffffff — cards, content areas
- Charcoal: #1a1a1a — body text
- Stone: #8a8680 — secondary text, labels
- Border: #d8d4cc — dividers
Typography
- Headings: DM Serif Display, serif
- Body: DM Sans, font-weight: 300
- Code: JetBrains Mono, monospace
Badge Colors by File Type
- Start Here: Orange (#d97706)
- Process (Agent/Consultant): Purple (#8b5cf6)
- Context: Blue (#1d4ed8)
- Example: Charcoal (#1a1a1a)
- Quality: Green (#16a34a)
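The design system above could be collected into a single token object that a generator interpolates into its HTML templates. This structure is hypothetical — real packages inline these values directly in each file's CSS.

```javascript
// Design-system tokens from the reference above, gathered into one
// object. Illustrative shape only; the values are the spec.
const designSystem = {
  colors: {
    noir: "#0a0a0a",     // headers, badges, code blocks
    ivory: "#f5f4f0",    // page background
    white: "#ffffff",    // cards, content areas
    charcoal: "#1a1a1a", // body text
    stone: "#8a8680",    // secondary text, labels
    border: "#d8d4cc",   // dividers
  },
  fonts: {
    heading: '"DM Serif Display", serif',
    body: '"DM Sans", sans-serif', // font-weight: 300
    code: '"JetBrains Mono", monospace',
  },
  badges: {
    start: "#d97706",   // orange
    process: "#8b5cf6", // purple
    context: "#1d4ed8", // blue
    example: "#1a1a1a", // charcoal
    quality: "#16a34a", // green
  },
};
```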
Quality
Quality Scoring
Quality criteria for validating a generated 7-file workflow package. The gate check must pass before weighted scoring begins.
Gate Check — Framework Fit
This gate must pass before weighted scoring begins. If any item fails, STOP and rebuild the 03-example.html using the correct frameworks for this deliverable type.
Gate Check Items
All three must pass to unlock weighted scoring:
- 1. Web search was performed to identify correct domain frameworks
- 2. 03-example uses only frameworks appropriate to this deliverable domain
- 3. No cross-domain framework contamination (e.g., ERRC in Six Sigma, DMAIC in Strategy)
Gate Failed — Required Action
STOP all further work. Return to Step 1 of the example-first workflow. Perform a web search to identify correct frameworks for this deliverable type, then rebuild 03-example.html from scratch using the correct methodology.
Failure Response Protocol
Dimension 1: Completeness (40% weight)
Dimension 2: Specificity (20% weight)
Dimension 3: Fit for Purpose (20% weight)
Dimension 4: Validation (10% weight)
Dimension 5: Presentation (10% weight)
Verdict Thresholds
| Score | Verdict | Action |
|---|---|---|
| 90%+ | Approved | Proceed to delivery |
| 75-89% | Minor Revisions | Fix failed criteria, re-run QC. Max 2 cycles. |
| <75% | Rejected | Regenerate entire package. If rejected twice, escalate to human review. |
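The threshold table above maps directly to a small helper. A sketch with the hypothetical name `verdict` — the thresholds themselves are the spec.

```javascript
// Map a QC score (0-100) to its verdict per the thresholds above.
function verdict(score) {
  if (score >= 90) return "Approved";
  if (score >= 75) return "Minor Revisions";
  return "Rejected";
}
```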
Forge builds Forge. The meta-system for building every other agent.