conductor-implement
Execute tasks from a track's implementation plan following TDD workflow
Documentation
Implement Track
Execute tasks from a track's implementation plan, following the workflow rules defined in conductor/workflow.md.
Use this skill when
- Working on tasks or workflows for implementing a track
- Needing guidance, best practices, or checklists for track implementation
Do not use this skill when
- The task is unrelated to track implementation
- You need a different domain or tool outside this scope
Instructions
- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open resources/implementation-playbook.md.
Pre-flight Checks
- Verify Conductor is initialized:
  - Check conductor/product.md exists
  - Check conductor/workflow.md exists
  - Check conductor/tracks.md exists
  - If missing: display an error and suggest running /conductor:setup first
- Load workflow configuration:
  - Read conductor/workflow.md
  - Parse TDD strictness level
  - Parse commit strategy
  - Parse verification checkpoint rules
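As an illustration, the pre-flight checks above could be sketched in Python. The three required file paths come from this document; the flat `key: value` format assumed for workflow.md settings is a hypothetical, since the doc does not specify the file's layout:

```python
from pathlib import Path

# File names taken from the pre-flight checklist above.
REQUIRED = ["conductor/product.md", "conductor/workflow.md", "conductor/tracks.md"]

def preflight(root: Path) -> list[str]:
    """Return the required Conductor files missing under root (empty = OK)."""
    return [f for f in REQUIRED if not (root / f).exists()]

def parse_workflow(text: str) -> dict:
    """Pull simple 'key: value' settings out of workflow.md.

    The format is an assumption; a real workflow.md may use markdown
    sections instead of flat key/value lines.
    """
    settings = {}
    for line in text.splitlines():
        if ":" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition(":")
            settings[key.strip().lower()] = value.strip()
    return settings
```

If `preflight` returns a non-empty list, the skill would display the error and suggest `/conductor:setup`, per the checklist.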
Track Selection
If argument provided:
- Validate the track exists: conductor/tracks/{argument}/plan.md
- If not found: search for partial matches and suggest corrections
If no argument:
- Read conductor/tracks.md
- Parse for incomplete tracks (status [ ] or [~])
- Display selection menu:

Select a track to implement:

In Progress:
1. [~] auth_20250115 - User Authentication (Phase 2, Task 3)

Pending:
2. [ ] nav-fix_20250114 - Navigation Bug Fix
3. [ ] dashboard_20250113 - Dashboard Feature

Enter number or track ID:
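A minimal sketch of the track-selection parse, assuming tracks.md lists tracks as `- [status] track_id - Title` lines (the exact line format is not specified in this doc, so the regex is an assumption):

```python
import re

# Assumed tracks.md line shape: "- [ ] nav-fix_20250114 - Navigation Bug Fix"
TRACK_LINE = re.compile(r"^- \[( |~|x)\] (\S+) - (.+)$")

def incomplete_tracks(tracks_md: str) -> list[tuple[str, str, str]]:
    """Return (status, track_id, title) for tracks still marked [ ] or [~]."""
    found = []
    for line in tracks_md.splitlines():
        m = TRACK_LINE.match(line.strip())
        if m and m.group(1) in (" ", "~"):
            found.append((m.group(1), m.group(2), m.group(3)))
    # In-progress ([~]) tracks first, matching the menu's ordering above.
    return sorted(found, key=lambda t: t[0] != "~")
```

The returned list maps directly onto the numbered menu: in-progress entries first, then pending ones.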
Context Loading
Load all relevant context for implementation:
- Track documents:
  - conductor/tracks/{trackId}/spec.md - Requirements
  - conductor/tracks/{trackId}/plan.md - Task list
  - conductor/tracks/{trackId}/metadata.json - Progress state
- Project context:
  - conductor/product.md - Product understanding
  - conductor/tech-stack.md - Technical constraints
  - conductor/workflow.md - Process rules
- Code style (if exists):
  - conductor/code_styleguides/{language}.md
Track Status Update
Update track to in-progress:
- In conductor/tracks.md:
  - Change [ ] to [~] for this track
- In conductor/tracks/{trackId}/metadata.json:
  - Set status: "in_progress"
  - Update the updated timestamp
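The two status updates above can be sketched as follows. The checkbox flip and the metadata fields (status, updated) come from this doc; the assumption that metadata.json is a flat JSON object is mine:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def mark_in_progress(tracks_md: str, track_id: str) -> str:
    """Flip this track's checkbox from [ ] to [~] in the tracks.md text."""
    out = []
    for line in tracks_md.splitlines():
        if track_id in line and "[ ]" in line:
            line = line.replace("[ ]", "[~]", 1)
        out.append(line)
    return "\n".join(out)

def update_metadata(path: Path) -> None:
    """Set status to in_progress and refresh the updated timestamp."""
    meta = json.loads(path.read_text())
    meta["status"] = "in_progress"
    meta["updated"] = datetime.now(timezone.utc).isoformat()
    path.write_text(json.dumps(meta, indent=2))
```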
Task Execution Loop
For each incomplete task in plan.md (marked with [ ]):
1. Task Identification
Parse plan.md to find next incomplete task:
- Look for lines matching - [ ] Task X.Y: {description}
- Track the current phase from the heading structure
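A sketch of that lookup, assuming plan.md uses markdown headings like `## Phase N` above the task checkboxes (the heading style is an assumption; the `- [ ] Task X.Y: {description}` task format is quoted from above):

```python
import re

TASK = re.compile(r"^- \[ \] Task (\d+)\.(\d+): (.+)$")
PHASE = re.compile(r"^#+\s*Phase (\d+)")

def next_task(plan_md: str):
    """Return (phase, 'X.Y', description) of the first task still marked [ ]."""
    phase = None
    for raw in plan_md.splitlines():
        line = raw.strip()
        p = PHASE.match(line)
        if p:
            phase = int(p.group(1))  # remember which phase we are inside
        m = TASK.match(line)
        if m:
            return phase, f"{m.group(1)}.{m.group(2)}", m.group(3)
    return None  # no incomplete tasks remain
```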
2. Task Start
Mark task as in-progress:
- Update plan.md: change [ ] to [~] for the current task
- Announce: "Starting Task X.Y: {description}"
3. TDD Workflow (if TDD enabled in workflow.md)
Red Phase - Write Failing Test:
Following TDD workflow for Task X.Y...
Step 1: Writing failing test
- Create test file if needed
- Write test(s) for the task functionality
- Run tests to confirm they fail
- If tests pass unexpectedly: HALT, investigate
Green Phase - Implement:
Step 2: Implementing minimal code to pass test
- Write minimum code to make test pass
- Run tests to confirm they pass
- If tests fail: Debug and fix
Refactor Phase:
Step 3: Refactoring while keeping tests green
- Clean up code
- Run tests to ensure still passing
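The red/green gating above can be sketched as a pair of guards around the test command. The test command itself (`cmd`) is whatever the project uses; the halt-on-unexpected-pass and debug-on-failure behaviors are taken from the steps above:

```python
import subprocess

def run_tests(cmd: list[str]) -> bool:
    """Run the test suite; True if it passed (exit code 0)."""
    return subprocess.run(cmd).returncode == 0

def red_phase(cmd: list[str]) -> None:
    """Red: the new test must fail before implementation; halt otherwise."""
    if run_tests(cmd):
        raise RuntimeError("Tests passed unexpectedly - HALT and investigate")

def green_phase(cmd: list[str]) -> None:
    """Green: after implementing, the suite must pass; debug and fix otherwise."""
    if not run_tests(cmd):
        raise RuntimeError("Tests still failing - debug and fix")
```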
4. Non-TDD Workflow (if TDD not strict)
- Implement the task directly
- Run any existing tests
- Manual verification as needed
5. Task Completion
Commit changes (following commit strategy from workflow.md):
git add -A
git commit -m "{commit_prefix}: {task description} ({trackId})"
Update plan.md:
- Change [~] to [x] for the completed task
- Commit the plan update:
git add conductor/tracks/{trackId}/plan.md
git commit -m "chore: mark task X.Y complete ({trackId})"
Update metadata.json:
- Increment tasks.completed
- Update the updated timestamp
6. Phase Completion Check
After each task, check if phase is complete:
- Parse plan.md for phase structure
- If all tasks in the current phase are [x]:
Run phase verification:
Phase {N} complete. Running verification...
- Execute verification tasks listed for the phase
- Run the full test suite (npm test, pytest, etc.)
Report and wait for approval:
Phase {N} Verification Results:
- All phase tasks: Complete
- Tests: {passing/failing}
- Verification: {pass/fail}
Approve to continue to Phase {N+1}?
1. Yes, continue
2. No, there are issues to fix
3. Pause implementation
CRITICAL: Wait for explicit user approval before proceeding to next phase.
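The "all tasks in this phase are [x]" check that gates the approval prompt could look like this, under the same assumed plan.md layout as above (markdown `## Phase N` headings over task checkboxes):

```python
import re

def phase_complete(plan_md: str, phase: int) -> bool:
    """True when every task under the given phase heading is marked [x]."""
    in_phase = False
    for raw in plan_md.splitlines():
        line = raw.strip()
        m = re.match(r"^#+\s*Phase (\d+)", line)
        if m:
            in_phase = int(m.group(1)) == phase
            continue
        if in_phase and re.match(r"^- \[( |~)\] Task", line):
            return False  # an unfinished task remains in this phase
    return True
```

Only when this returns True would the skill run phase verification and then stop for explicit user approval.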
Error Handling During Implementation
On Tool Failure
ERROR: {tool} failed with: {error message}
Options:
1. Retry the operation
2. Skip this task and continue
3. Pause implementation
4. Revert current task changes
- HALT and present these options; wait for the user's choice before continuing