Orchestrator Design for Multi-Agent AI4PKM System
Document Status: This is the original design document from October 2025. The orchestrator has been successfully implemented and is now in production use.
Important Changes: The implementation uses a nodes-based configuration in orchestrator.yaml instead of individual agent files in _Settings_/Agents/. Agent prompts are now stored in _Settings_/Prompts/.
Current Documentation: For usage instructions and the current configuration format, see:
- User Guide: docs/orchestrator.md
- Architecture Overview: Blog Post
Executive Summary
This document provides the original design for the orchestrator that coordinates multiple AI agents in the AI4PKM system. The orchestrator manages any number of agents through configuration rather than code.
Implementation Status: Complete (as of October 2025)
Key Innovations:
- Agent definitions as Markdown files with flat YAML frontmatter (Obsidian compatible)
- File system as the state database (no separate DB needed)
- Automatic agent triggering based on file system events
- Agent chaining through input/output path matching
- Skills as reusable, composable functions
Multi-Agent Ecosystem
Current Agents (7 total)
Note: These are example agents for illustration. Any agent can be defined using templates.
Ingestion Agents (5): EIC, PLL, PPP, GES, GDR
Publishing Agents (1): CTP
Research Agents (1): ARP
Data Flow Patterns
Pattern 1: Sequential Processing Chain
External Source → Ingestion Agent → Processing Agent → Publication Agent
Example: Web Article → EIC → GDR → CTP
Pattern 2: Aggregation Pattern
Multiple Sources → Single Aggregation Agent
Example: EIC + PLL + GES → GDR (daily roundup)
Pattern 3: Parallel Processing
Single Source → Multiple Agents (different purposes)
Example: Limitless → [PLL (lifelog) + GES (events)]
Pattern 4: Ad-hoc Research
User Query + Vault Content → Research Agent → Output
Example: Question → ARP → Research Report
Ecosystem Diagram
Example for illustration. Actual agent ecosystem will vary by user.
MULTI-AGENT ECOSYSTEM (Input/Output Connection Map)

EXTERNAL SOURCES → INGESTION LAYER → PROCESSING LAYER → OUTPUT LAYER

- Web Clippings → saved to Ingest/Clipping/{title}.md → (new_file event) → EIC Agent → AI/Clipping/{title} - EIC.md (processed)
- Limitless Voice Capture → daily capture → Ingest/Limitless/YYYY-MM-DD.md
  - (daily_file) → PLL Agent → AI/Lifelog/YYYY-MM-DD Lifelog*.md
  - (updated_file) → GES Agent (uses MCP#gcal) → AI/Events/YYYY-MM-DD Summary*.md
- Photos → Ingest/Photolog/Processed/{images + metadata} → (daily_files) → PPP Agent → Ingest/Photolog/YYYYMMDD Photolog.md → integrates into the lifelog
- GDR Agent aggregates AI/Clipping/, AI/Lifelog/, and AI/Events/ outputs → AI/Roundup/YYYY-MM-DD - GDR.md
- CTP Agent publishes AI/Roundup/ content → AI/Sharable/Thread*.md

RESEARCH FLOW (Ad-hoc)

- User Question → ARP Agent, with Full Vault Access (Topics, Roundup, Journal, etc.)
- ARP Agent → (synthesize) → AI/Research/{topic} - ARP.md → optionally CTP Agent
Agent Definition Schema (ORIGINAL DESIGN)
This section describes the ORIGINAL design. The actual implementation uses a different approach:
- Configuration is in orchestrator.yaml nodes (not individual agent files)
- Prompt files in _Settings_/Prompts/ have minimal frontmatter (title, abbreviation, category only)
- See docs/orchestrator.md for current implementation details.
Frontmatter Format (Original Design - Not Implemented)
Each agent is defined in a Markdown file in _Settings_/Agents/ with YAML frontmatter:
---
# Basic Identity
title: Enrich Ingested Content
abbreviation: EIC
category: ingestion
# Trigger Configuration (when to run)
trigger_pattern: "Ingest/Clipping/*.md"
trigger_event: created
trigger_exclude_pattern: "*EIC*"
trigger_schedule: ""
trigger_wait_for: []
# Input/Output Paths
input_path: "Ingest/Clipping/"
input_type: new_file
output_path: "AI/Clipping/"
output_type: new_file
output_naming: "{title} - {agent}.md"
# Dependencies
skills:
- obsidian_links
- topic_matching
mcp_servers: []
# Execution
executor: claude_code
max_parallel: 1
timeout_minutes: 30
---
# Agent Instructions
[Prompt body with detailed instructions]
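A minimal sketch of how such a flat frontmatter block could be parsed. This helper is illustrative, not the orchestrator's actual parser: it handles only scalar fields, and a real implementation would use a YAML library to cover list-valued fields such as skills.

```python
import re

def parse_flat_frontmatter(text):
    """Extract flat key/value pairs from a Markdown file's frontmatter.

    Illustrative sketch: scalar fields only; comments and list items
    (skills, trigger_wait_for) are skipped rather than parsed.
    """
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}
    fields = {}
    for line in match.group(1).splitlines():
        line = line.strip()
        # Skip YAML comments, list items, and anything without a key
        if not line or line.startswith(("#", "-")) or ":" not in line:
            continue
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip().strip('"')
    return fields

sample = """---
# Basic Identity
title: Enrich Ingested Content
abbreviation: EIC
trigger_pattern: "Ingest/Clipping/*.md"
trigger_event: created
---
# Agent Instructions
"""
meta = parse_flat_frontmatter(sample)
```

Because the format is flat, even this naive line-by-line parse recovers every scalar field, which is part of why the flat structure was chosen.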
Field Definitions
Trigger Configuration
- trigger_pattern: Glob pattern for files that trigger this agent
- trigger_event: One of created, modified, deleted, scheduled, manual
- trigger_exclude_pattern: Optional glob pattern to exclude files
- trigger_schedule: Cron-like schedule (e.g., daily@07:00)
- trigger_wait_for: List of agent abbreviations to wait for completion
Input/Output
- input_path: Primary input directory (can be a list)
- input_type: One of new_file, updated_file, daily_files, date_range, vault_wide
- output_path: Output directory
- output_type: One of new_file, new_files, daily_file
- output_naming: Template for the output filename
Dependencies
- skills: List of skill names to load
- mcp_servers: List of MCP servers required (e.g., gcal, web_search)
Execution
- executor: One of claude_code, gemini_cli, codex_cli, cursor_agent, continue_cli
- max_parallel: Maximum concurrent executions
- timeout_minutes: Execution timeout
Task Schema
Implementation Note: Task files are actually created in _Settings_/Tasks/, not _Tasks_/ at the root level.
When the orchestrator triggers an agent, it creates a task file in _Settings_/Tasks/ based on the task template. The task file tracks execution state and is updated by the agent during execution:
---
title: "EIC: Process Web Clipping"
agent: EIC
status: in_progress
trigger_file: "Ingest/Clipping/Article Title.md"
created: 2025-10-25T10:30:00
started: 2025-10-25T10:30:05
completed: ""
error: ""
output_files: []
---
# Task Execution Log
[Agent updates this section during execution]
Task Lifecycle:
- Orchestrator creates the task file with status: pending
- Execution Manager updates it to status: in_progress when the agent starts
- Agent updates the task file during execution (logs, progress)
- Agent sets status: completed or status: failed on finish
- Orchestrator can query task files to monitor system health
Orchestrator Architecture
High-Level Components
+--------------------------------------------------------------------+
|                           ORCHESTRATOR                             |
|                                                                    |
|  Orchestrator Core                                                 |
|    1. Load agent definitions from _Settings_/Agents/               |
|    2. Monitor file system for events                               |
|    3. Match events to agent triggers                               |
|    4. Spawn agents with proper context                             |
|    5. Track execution state                                        |
|    6. Manage concurrency                                           |
|                                                                    |
|  +----------------+   +----------------+   +-------------------+   |
|  |  File System   |   |     Agent      |   |     Execution     |   |
|  |    Monitor     |   |    Registry    |   |      Manager      |   |
|  +-------+--------+   +-------+--------+   +---------+---------+   |
+----------|--------------------|----------------------|------------+
           v                    v                      v
    +-------------+      +-------------+        +-------------+
    |  Watchdog   |      |    Agent    |        |    Agent    |
    |   Events    |      | Definitions |        |   Threads   |
    +-------------+      +-------------+        +-------------+
Note: Skills are managed by individual agents, not centrally by the orchestrator. Each agent specifies its required skills in frontmatter, and skills are loaded/invoked within the agent's execution context.
Component 1: File System Monitor
Responsibilities:
- Watch vault for file system events (create, modify, delete)
- Filter events based on patterns
- Debounce rapid-fire events
- Parse frontmatter to extract state
- Queue events for processing
Key Methods:
class FileSystemMonitor:
    def __init__(self, vault_path, agent_registry):
        self.vault_path = vault_path
        self.agent_registry = agent_registry
        self.observer = Observer()  # watchdog.observers.Observer
        self.event_queue = Queue()

    def start(self):
        """Start monitoring the file system."""

    def _on_file_event(self, event):
        """Handle a file system event."""
        # Parse frontmatter, queue event

    def _parse_frontmatter(self, file_path) -> Dict:
        """Extract YAML frontmatter from a Markdown file."""
Component 2: Agent Registry
Responsibilities:
- Load all agent definitions from _Settings_/Agents/
- Parse frontmatter and prompt body
- Validate agent definitions (using JSON schema)
- Provide agent lookup by trigger conditions
- Hot-reload when agent definitions change
- Export current configurations to JSON for debugging
Agent Definition JSON Schema:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"required": ["title", "abbreviation", "category", "trigger_pattern", "trigger_event"],
"properties": {
"title": {"type": "string"},
"abbreviation": {"type": "string", "pattern": "^[A-Z]{3}$"},
"category": {"enum": ["ingestion", "publish", "research"]},
"trigger_pattern": {"type": "string"},
"trigger_event": {"enum": ["created", "modified", "deleted", "scheduled", "manual"]},
"executor": {"enum": ["claude_code", "gemini_cli", "codex_cli", "cursor_agent", "continue_cli"]},
"max_parallel": {"type": "integer", "minimum": 1},
"timeout_minutes": {"type": "integer", "minimum": 1}
}
}
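To illustrate validation against this schema, here is a tiny subset validator covering only required, enum, and pattern; the real registry would hand the full schema to the jsonschema package instead of this hand-rolled sketch:

```python
import re

# Trimmed copy of the schema above, for demonstration
AGENT_SCHEMA = {
    "required": ["title", "abbreviation", "category",
                 "trigger_pattern", "trigger_event"],
    "properties": {
        "abbreviation": {"type": "string", "pattern": "^[A-Z]{3}$"},
        "category": {"enum": ["ingestion", "publish", "research"]},
        "trigger_event": {"enum": ["created", "modified", "deleted",
                                   "scheduled", "manual"]},
    },
}

def validate_agent(frontmatter, schema=AGENT_SCHEMA):
    """Return a list of error strings; empty means the definition is valid.

    Subset validator (required / enum / pattern only), for illustration.
    """
    errors = []
    for field in schema["required"]:
        if field not in frontmatter:
            errors.append(f"missing required field: {field}")
    for field, spec in schema["properties"].items():
        if field not in frontmatter:
            continue
        value = frontmatter[field]
        if "enum" in spec and value not in spec["enum"]:
            errors.append(f"{field}: {value!r} not allowed")
        if "pattern" in spec and not re.match(spec["pattern"], str(value)):
            errors.append(f"{field}: {value!r} does not match {spec['pattern']}")
    return errors
```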
Key Methods:
class AgentRegistry:
    def __init__(self, agents_dir, vault_path):
        self.agents_dir = agents_dir
        self.agents: Dict[str, AgentDefinition] = {}
        self.load_all_agents()

    def load_all_agents(self):
        """Load all agent definitions from the directory."""

    def find_matching_agents(self, event_data) -> List[AgentDefinition]:
        """Find agents whose triggers match the event."""

    def _matches_trigger(self, agent, event_path, event_type, frontmatter) -> bool:
        """Check if the event matches the agent's trigger conditions."""

    def export_config_snapshot(self, output_path):
        """Export current agent configurations to JSON for debugging."""
        # Creates _Tasks_/Logs/orchestrator/agent_config_snapshot.json
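The trigger-matching step could work roughly like this, using fnmatch against the frontmatter fields defined earlier. A sketch, not the registry's actual code:

```python
from fnmatch import fnmatch

def matches_trigger(agent, event_path, event_type):
    """Check a file event against one agent's trigger fields.

    Sketch: `agent` is a dict of the flat frontmatter fields
    (trigger_event, trigger_pattern, trigger_exclude_pattern).
    """
    if agent.get("trigger_event") != event_type:
        return False
    if not fnmatch(event_path, agent["trigger_pattern"]):
        return False
    exclude = agent.get("trigger_exclude_pattern")
    if exclude and fnmatch(event_path, exclude):
        return False
    return True
```

With the EIC trigger from the schema example, a fresh clipping matches while EIC's own output is excluded, which prevents the agent from re-triggering on itself.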
Component 3: Execution Manager
Responsibilities:
- Spawn agent execution threads/processes (non-blocking)
- Manage concurrency (limit parallel agents)
- Track execution state and metrics
- Handle agent failures and retries
- Collect logs and results
Key Methods:
class ExecutionManager:
    def __init__(self, max_concurrent=3):
        self.max_concurrent = max_concurrent
        self.semaphore = Semaphore(max_concurrent)
        self.active_executions: Dict[str, ExecutionContext] = {}
        self.metrics = MetricsCollector()

    def execute_agent(self, agent_def, trigger_data):
        """Execute an agent in a separate thread (non-blocking)."""
        # Uses threading to avoid blocking the orchestrator event loop

    def _run_agent(self, context):
        """Run the agent in the current thread."""

    def _execute_via_executor(self, executor, prompt, context, log_file):
        """Execute the agent via the specified executor (claude_code, etc.)."""
Orchestrator Main Loop
Note: Event loop uses queue with timeout to avoid blocking. Agent execution happens in separate threads.
class Orchestrator:
    def __init__(self, vault_path):
        self.vault_path = vault_path
        self.agent_registry = AgentRegistry(vault_path / '_Settings_' / 'Agents')
        self.file_monitor = FileSystemMonitor(vault_path, self.agent_registry)
        self.execution_manager = ExecutionManager(max_concurrent=3)

    def start(self):
        """Start the orchestrator."""
        self.file_monitor.start()
        self._event_loop()

    def _event_loop(self):
        """Main event processing loop (non-blocking with timeout)."""
        while True:
            try:
                event = self.file_monitor.event_queue.get(timeout=1.0)
            except Empty:  # queue.Empty: no event this tick, poll again
                continue
            matching_agents = self.agent_registry.find_matching_agents(event)
            for agent in matching_agents:
                # Non-blocking: spawns a thread and continues
                self.execution_manager.execute_agent(
                    agent_def=agent,
                    trigger_data=event,
                )
System Architecture
Important Notes:
- AI/ folders are NOT created automatically; the user must create them
- Task files moved to _Tasks_/ at the root level (not AI/Tasks/)
AI4PKM/
├── ai4pkm_cli/
│   └── orchestrator/
│       ├── __init__.py
│       ├── core.py              # Orchestrator main class
│       ├── file_monitor.py      # FileSystemMonitor
│       ├── agent_registry.py    # AgentRegistry
│       ├── execution_manager.py # ExecutionManager
│       ├── metrics.py           # MetricsCollector
│       ├── models.py            # Data classes
│       └── utils.py             # Orchestrator utilities
│                                #  - YAML frontmatter parsing
│                                #  - Multi-executor spawning (Claude/Gemini/OpenAI)
│                                #  - Task file creation/updates
├── _Settings_/
│   ├── Agents/
│   │   ├── EIC Agent.md         # Agent definitions with skills in frontmatter
│   │   ├── PLL Agent.md
│   │   ├── PPP Agent.md
│   │   ├── GES Agent.md
│   │   ├── GDR Agent.md
│   │   ├── CTP Agent.md
│   │   └── ARP Agent.md
│   └── Skills/                  # Skill definitions (Python + Markdown)
│       ├── content_collection/
│       ├── publishing/
│       ├── knowledge_organization/
│       └── obsidian_rules/
├── _Tasks_/                     # Root-level task tracking
│   ├── YYYY-MM-DD Task Name.md  # Individual task files
│   └── Logs/
│       └── orchestrator/
│           ├── orchestrator.log
│           ├── metrics.json
│           └── agent_config_snapshot.json
└── AI/                          # User-created content folders
    ├── Clipping/
    ├── Lifelog/
    └── Roundup/
Note: Skills are stored in _Settings_/Skills/ and loaded by individual agents. The orchestrator uses utilities in utils.py for common operations (YAML parsing, spawning executors, task management).
Key Design Decisions
1. File System as State Database
Decision: Use file frontmatter to store execution state instead of separate database.
Rationale:
- Keeps state close to content (Obsidian-friendly)
- No separate DB to maintain
- Easy to inspect and debug
- Supports Obsidian's property editor
Trade-offs:
- File I/O for state checks (slower than DB)
- Limited query capabilities
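The query trade-off looks like this in practice: checking execution state means scanning task files rather than issuing an indexed query. A hypothetical helper, assuming the flat status: field shown earlier:

```python
from pathlib import Path
import re

def find_tasks_by_status(tasks_dir, status):
    """Query the file-system 'database' by scanning task frontmatter.

    Illustrates the trade-off above: trivial to implement and inspect,
    but every query costs O(number of task files) in file I/O.
    """
    matches = []
    for task_file in sorted(Path(tasks_dir).glob("*.md")):
        text = task_file.read_text(encoding="utf-8")
        if re.search(rf"(?m)^status: {re.escape(status)}$", text):
            matches.append(task_file)
    return matches
```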
2. Flat Frontmatter Structure
Decision: Use flat key-value pairs instead of nested YAML.
Rationale:
- Obsidian property editor requires flat structure
- Easier to parse and validate
- Reduces YAML syntax errors
3. Thread-Based Execution
Decision: Use Python threads instead of processes for agent execution.
Rationale:
- Simpler concurrency model
- Shared memory (easier to access vault state)
- Lower resource overhead
Trade-off: agents share one process, so a hung or crashing agent can affect the orchestrator. Mitigation: add per-agent timeouts and error handling.
4. Agent = Prompt Body
Decision: Agent is entirely defined by its prompt body (no hardcoded types). Agents can be run as prompts at will by the user.
Rationale:
- Maximum flexibility (easy to modify behavior)
- No code changes needed to update agents
- Supports user-created custom agents
- Users can invoke any agent manually via CLI with custom context
- Example: ai4pkm run EIC --input "custom-file.md" or ai4pkm run ARP --question "your research question"
5. Skills as Python Modules and Markdown Instructions
Decision: Skills can be both Python modules and Markdown instructions, managed by individual agents (not centrally by orchestrator).
Rationale:
- Follows Anthropic's skills architecture (https://www.anthropic.com/news/skills)
- Python skills: Reusable code, leverage full Python ecosystem
- Markdown skills: Simple instructions, easy to create and modify
- Agent-level management: Each agent loads only the skills it needs
- Easy to test independently
FAQ
How is Agent different from Prompt?
An agent is simply a runtime for a prompt: a given task is run with the specified input and output. Users can run a prompt manually, or let the Orchestrator run it automatically.
How do I make complex workflows?
You can chain prompts to build a complex workflow: write prompts so that the output of one prompt becomes the input of another. Use a file property if you'd like to chain one agent's (= prompt's) output to multiple downstream agents.
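Chaining by path matching can also be checked mechanically: an upstream agent feeds a downstream one when its output lands inside the downstream trigger pattern. An illustrative helper (not part of the design) using the frontmatter fields defined above:

```python
from fnmatch import fnmatch

def chains_to(upstream, downstream):
    """Does upstream's output land where downstream's trigger watches?

    Sketch: renders one representative filename from output_naming
    ("Example" as the title stands in for any real title).
    """
    sample = upstream["output_path"] + upstream["output_naming"].format(
        title="Example", agent=upstream["abbreviation"]
    )
    return fnmatch(sample, downstream["trigger_pattern"])

eic = {"output_path": "AI/Clipping/",
       "output_naming": "{title} - {agent}.md",
       "abbreviation": "EIC"}
gdr = {"trigger_pattern": "AI/Clipping/*.md"}
```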
How do I debug agent execution?
- Check task files in _Tasks_/ for execution logs and status
- Review _Tasks_/Logs/orchestrator/orchestrator.log for system events
- Export agent configurations using agent_config_snapshot.json for debugging
- Use ai4pkm run <AGENT> --debug to run agents manually with verbose logging
Can I create custom agents?
Yes! Simply create a new Markdown file in _Settings_/Agents/ with proper frontmatter. The orchestrator will automatically load and execute it based on trigger conditions.
How do I handle agent dependencies?
Use the trigger_wait_for field in frontmatter to specify which agents must complete before this agent runs. Example: GDR waits for EIC, PLL, GES to finish.
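That dependency check can be sketched in a few lines (a hypothetical helper; in practice the set of completed agents would be derived from the day's task files):

```python
def dependencies_met(agent, completed_agents):
    """True when every agent listed in trigger_wait_for has completed.

    Sketch only: `agent` is the flat frontmatter dict, `completed_agents`
    a set of abbreviations already marked status: completed today.
    """
    return set(agent.get("trigger_wait_for", [])) <= set(completed_agents)
```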
Migration Path from KTM
Current KTM Architecture Analysis
The existing Knowledge Task Management (KTM) system has several components we can reuse:
Reusable Components
- File Watchdog Infrastructure (ai4pkm_cli/watchdog/file_watchdog.py)
  - Watchdog Observer setup
  - Pattern matching logic
  - BaseFileHandler abstract class
  - Reuse: ~80%
- Task Semaphore (ai4pkm_cli/watchdog/task_semaphore.py)
  - Concurrency control logic
  - Shared semaphore management
  - Reuse: 100% (rename to ConcurrencyManager)
- Thread-Specific Logging (ai4pkm_cli/logger.py)
  - Log file creation per task
  - Log path management
  - Reuse: 100% (integrate into utils.py)
- Existing Handlers (all in ai4pkm_cli/watchdog/handlers/)
  - Pattern definitions
  - File detection logic
  - Reuse: Extract patterns to agent definitions
Components to Replace
- KTG (Knowledge Task Generator) → Agent Registry
  - Task request JSON creation → Agent trigger matching
  - Pattern: Reuse handler patterns as trigger_pattern in agent frontmatter
- KTP (Knowledge Task Processor) → Execution Manager
  - 3-phase processing → Thread-based agent execution
  - Task status management → Task file frontmatter updates
- Handler Classes → Agent Definitions
  - Each handler becomes agent configuration
  - Code logic moves to agent prompts and skills
Migration Strategy
Phase 1: Parallel Implementation (Week 1-2)
Goal: Build orchestrator alongside KTM without disruption
Tasks:
- Create ai4pkm_cli/orchestrator/ directory structure
- Implement core components:
  - Copy file_watchdog.py → file_monitor.py (modify for orchestrator)
  - Copy task_semaphore.py → integrate into execution_manager.py
  - Copy logging logic → utils.py
- Create minimal agent definitions for testing
- DO NOT modify existing KTM code
Validation:
- KTM continues to work normally
- Orchestrator components pass unit tests
- No file conflicts between systems
Phase 2: Single Agent Migration (Week 3)
Goal: Migrate EIC agent as proof of concept
Tasks:
- Create _Settings_/Agents/EIC Agent.md with full frontmatter
- Extract EIC prompt from existing code
- Test EIC via orchestrator (parallel to KTM)
- Compare outputs: KTM-EIC vs Orchestrator-EIC
- Keep both systems running
Validation:
- Both systems produce identical outputs for same input
- Orchestrator EIC completes without errors
- Task files tracked correctly in _Tasks_/
Phase 3: Disable KTM Handlers One-by-One (Week 4-5)
Goal: Gradually switch traffic to orchestrator
Migration Order (low-risk first):
- PPP (Photo Processing) - Lowest risk, simple workflow
- EIC (Enrich Clippings) - Already tested in Phase 2
- GES (Event Summary) - Medium risk, MCP dependency
- PLL (Process Lifelogs) - Medium risk, complex integration
- GDR (Daily Roundup) - Higher risk, aggregates multiple sources
- CTP (Thread Posts) - Manual trigger, low risk
- ARP (Research) - Manual trigger, low risk
For Each Agent:
- Create agent definition in _Settings_/Agents/
- Add orchestrator flag: orchestrator_enabled: true
- Run both systems in parallel for 24 hours
- Compare outputs side-by-side
- If identical → disable KTM handler
- If different → debug, fix, retry
Rollback: Set orchestrator_enabled: false to revert to KTM
Phase 4: Complete Cutover (Week 6)
Goal: Remove KTM code entirely
Tasks:
- All agents running via orchestrator for 1 week with no issues
- Archive KTM code:
  mkdir -p archive/ktm-legacy
  mv ai4pkm_cli/watchdog/handlers/* archive/ktm-legacy/
  mv ai4pkm_cli/commands/ktg_runner.py archive/ktm-legacy/
  mv ai4pkm_cli/commands/ktp_runner.py archive/ktm-legacy/
- Update CLI entry points to use orchestrator
- Update documentation
- Create git tag: ktm-to-orchestrator-migration-complete
Validation:
- All 7 agents running successfully
- No KTM code in active codebase
- All tests passing
Phase 5: Enhancements (Week 7+)
Goal: Add orchestrator-specific features
New Capabilities:
- Agent dependencies (trigger_wait_for)
- Scheduled execution (cron triggers)
- Agent configuration snapshots
- System improvement loop
- User installation wizard
Detailed Handler β Agent Mapping
1. ClippingFileHandler → EIC Agent
Current (clipping_file_handler.py):
    def get_patterns(self) -> List[str]:
        return ['Ingest/Clipping/*.md']

    def should_process(self, file_path: Path) -> bool:
        return 'EIC' not in file_path.name
New (Agent frontmatter):
trigger_pattern: "Ingest/Clipping/*.md"
trigger_event: created
trigger_exclude_pattern: "*EIC*"
Reuse: Pattern logic, exclusion logic. Change: Execution method (KTG → direct agent spawn)
2. LimitlessFileHandler → PLL + GES Agents
Current (limitless_file_handler.py):
- Single handler triggers multiple tasks
- Conditional logic for meeting detection
New:
- Split into two agent definitions
- PLL: Daily schedule trigger
- GES: Modified file trigger
Reuse: File patterns, meeting detection logic (moved to a skill). Change: Split the single handler into two agents
3. Task Request Handler → Agent Registry
Current (task_request_file_handler.py):
- Watches AI/Tasks/Requests/*/*.json
- Parses JSON to create task
New:
- Agent Registry directly creates tasks
- No intermediate JSON files
Reuse: Task file creation logic. Change: Remove JSON layer, direct task creation
4. Task Processor → Execution Manager
Current (task_processor.py):
- 3-phase processing (TBD → PROCESSING → PROCESSED)
- Status transitions via frontmatter updates
New:
- Thread-based execution
- Same status transitions
- Cleaner status management
Reuse: Status state machine, frontmatter updates. Change: Threading model, simpler execution flow
Conclusion
This design provides a complete architecture for the orchestrator that will power the AI4PKM multi-agent system. Key features:
- Multi-Agent Ecosystem: Clear visualization of all agent connections and data flows
- Flexible Agent Definitions: Markdown files with Obsidian-compatible frontmatter
- Modular Architecture: Three core components (File System Monitor, Agent Registry, Execution Manager)
- Skills Framework: Both Python and Markdown skills, managed by individual agents
- File-Based State: No separate database needed
- Safe Migration Path: Gradual migration from KTM with parallel running and rollback capability
- Comprehensive Testing: Unit, integration, and E2E tests ensure quality and compatibility
Next steps:
- Start with Phase 1: See [[2025-10-25 Phase 1 - Parallel Implementation]] for detailed implementation plan
- Migrate EIC agent as proof of concept (Week 3)
- Follow migration order for remaining agents (Week 4-5)
- Complete cutover and remove KTM code (Week 6)
- Add orchestrator-specific enhancements (Week 7+)
Document Version: 2.0
Last Updated: 2025-10-25 (Design), 2025-11-01 (Status Update)
Status: HISTORICAL REFERENCE - Implementation Complete with Modifications
For Current Implementation: See docs/orchestrator.md