| name | description |
|---|---|
| spec-kit-implementation-loop | Implement an existing TenantPilot/TenantAtlas Spec Kit feature, run tests, browser smoke checks where applicable, post-implementation analysis, fix all confirmed in-scope findings when safe and bounded, and repeat until no in-scope findings remain or a stop condition is reached. |
# Skill: Spec Kit Implementation Loop

## Purpose
Use this skill to implement an already prepared TenantPilot/TenantAtlas Spec Kit feature and verify it with a bounded implementation loop.
This skill assumes spec.md, plan.md, and tasks.md already exist and have passed preparation readiness or have been explicitly accepted by the user.
The intended workflow is:
active or explicitly named spec
→ inspect repo truth, constitution, spec, plan, tasks, and relevant code/tests
→ evaluate implementation gates
→ implement strictly task-by-task
→ run relevant tests/checks
→ run browser smoke test when UI/user-facing flows are affected
→ run strict post-implementation analysis
→ fix confirmed in-scope findings
→ repeat test + browser smoke + analysis + fix loop until clean or bounded stop condition is reached
→ final implementation report
## When to Use
Use this skill when the user asks to:
- implement an active or explicitly named Spec Kit feature
- run Spec Kit implement
- analyze after implementation
- fix implementation findings
- repeat implementation verification until no confirmed in-scope findings remain
- run tests and browser smoke checks after implementation
Typical user prompts:
Implement the active spec and afterwards analyze whether everything is correct.
Implement specs/243-product-usage-adoption-telemetry strictly according to tasks.md.
Run Spec Kit implement and then the analysis. Fix all deviations and repeat until clean.
Implement the prepared spec. Then run tests, a browser smoke test if UI is affected, analysis, and the fix loop until no in-scope findings remain open.
## Hard Rules
- Work strictly repo-based.
- Implement only the active or explicitly named Spec Kit feature.
- Do not choose a new candidate.
- Do not create a new spec.
- Do not expand scope beyond `spec.md`, `plan.md`, and `tasks.md`.
- Do not silently add roadmap features, adjacent UX rewrites, speculative architecture, or unrelated refactors.
- Follow the repository constitution and existing Spec Kit conventions.
- Preserve TenantPilot/TenantAtlas terminology.
- Prefer small, reviewable patches over broad rewrites.
- Treat repository truth as authoritative over assumptions.
- If repository truth conflicts with implementation scope, stop and report the conflict unless there is an obvious minimal correction inside active spec scope.
- Fix only confirmed findings from tests, static checks, browser smoke checks, or post-implementation analysis.
- Fix all confirmed in-scope findings, regardless of severity, when they are safe and bounded.
- Do not leave Medium/Low findings open silently. If they are not fixed, document exactly why.
- Never hide failing tests, weaken assertions, delete meaningful coverage, or mark tasks complete without implementation evidence.
- Do not run destructive commands.
- Do not force checkout, reset, stash, rebase, merge, or delete branches.
- Do not perform database-destructive actions unless the repository test workflow explicitly requires isolated test database resets.
- Do not continue analysis/fix loops indefinitely.
- Do not move from implementation to final status unless the Test Gate, Browser Smoke Test Gate where applicable, and Post-Implementation Analysis Gate have been evaluated.
- Do not claim merge-readiness unless the Merge Readiness Gate passes.
## Required Inputs
The user should provide at least one of:
- explicit spec directory such as `specs/<number>-<slug>/`
- instruction to use the current active Spec Kit feature
- instruction to implement the prepared/current spec
If the active spec cannot be determined safely, inspect the repository Spec Kit context first. If it is still ambiguous, stop and ask for the specific spec directory.
## Required Repository Checks
Always check:
- active Spec Kit context / current branch
- git status
- `.specify/memory/constitution.md`
- the active spec directory: `spec.md`, `plan.md`, `tasks.md`
- relevant templates or conventions under `.specify/templates/`
- nearby existing specs with related terminology or scope
- application code surfaces referenced by the active spec
- existing tests related to the changed behavior
## Git and Branch Safety
Before making implementation changes:
- Check the current branch.
- Check whether the working tree is clean.
- If there are unrelated uncommitted changes, stop and report them. Do not continue.
- If the working tree only contains user-intended changes for this operation, continue cautiously.
- Do not force checkout, reset, stash, rebase, merge, or delete branches.
- Do not overwrite unrelated work.
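These safety checks can be expressed as a small helper. The following is an illustrative Python sketch, not part of any repository tooling; the function names and the split into intended vs. unrelated paths are assumptions:

```python
from dataclasses import dataclass


@dataclass
class TreeCheck:
    branch: str
    dirty_paths: list[str]  # paths reported by `git status --porcelain`


def parse_porcelain(status_output: str) -> list[str]:
    """Extract changed paths from `git status --porcelain` output (XY<space>path)."""
    return [line[3:] for line in status_output.splitlines() if line.strip()]


def is_safe_to_implement(check: TreeCheck, intended_paths: set[str]) -> tuple[bool, str]:
    """Apply the branch-safety rules: stop on unrelated uncommitted changes."""
    unrelated = sorted(p for p in check.dirty_paths if p not in intended_paths)
    if unrelated:
        # Unrelated uncommitted changes: stop and report, do not continue.
        return False, f"unrelated uncommitted changes: {unrelated}"
    # Clean tree, or only user-intended changes: continue cautiously.
    return True, f"safe to proceed on branch {check.branch}"
```

The point of the sketch is the decision rule, not the plumbing: anything dirty that the user did not intend for this operation blocks implementation outright.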
## Quality Gates

### Gate 1: Spec Readiness Gate
Required before implementation starts.
Pass criteria:
- `spec.md`, `plan.md`, and `tasks.md` exist.
- The spec has a clear problem statement, user value, functional requirements, out-of-scope boundaries, acceptance criteria, assumptions, and risks.
- The plan identifies likely affected repo surfaces and does not contradict repository architecture.
- The tasks are small, ordered, verifiable, and include test/validation tasks.
- RBAC, workspace/tenant isolation, auditability, OperationRun semantics, evidence/result-truth, and UX requirements are addressed where relevant.
- No open question blocks safe implementation.
- The scope is small enough for a bounded implementation loop.
Fail behavior:
- Stop before implementation.
- Report readiness gaps.
- Do not compensate for an unclear spec by inventing implementation scope.
### Gate 2: Implementation Scope Gate
Required before changing application code.
Pass criteria:
- The active spec directory is known.
- The implementation target is traceable to specific tasks in `tasks.md`.
- The affected files/surfaces are consistent with `plan.md` or clearly justified by repository truth.
- No required change would introduce unrelated product behavior.
- No required change conflicts with constitution, existing architecture, RBAC/isolation boundaries, or source-of-truth semantics.
Fail behavior:
- Stop before code changes and report the conflict or ambiguity.
- Suggest a minimal spec/plan/tasks correction if the issue is in the artifacts rather than the codebase.
### Gate 3: Test Gate
Required after implementation and after each fix iteration.
Pass criteria:
- Targeted tests for changed behavior pass.
- Relevant existing tests pass or failures are proven unrelated and documented.
- Static analysis, linting, formatting, or type checks used by the repository pass when applicable.
- Security/governance-relevant changes have backend, policy, or domain coverage; UI-only verification is not enough.
- Regression coverage exists for each fixed Blocker or High finding where practical.
Fail behavior:
- Fix in-scope failures before post-implementation analysis.
- If failures are unrelated or pre-existing, document evidence and continue only if they do not invalidate the active spec.
- Do not weaken tests to pass the gate.
### Gate 4: Browser Smoke Test Gate
Required before claiming implementation is ready for manual review/merge when the change affects Filament UI, Livewire interactions, navigation, forms, tables, actions, modals, dashboards, operation drilldowns, tenant/workspace context, or any user-facing flow.
Not required for backend-only, domain-only, enum-only, contract-only, or test-only changes unless those changes alter a user-facing flow.
Pass criteria:
- The relevant page or flow loads in a real browser or the repository's browser-testing harness.
- The primary action introduced or changed by the spec can be executed successfully.
- Expected UI states, labels, badges, actions, empty states, tables, forms, modals, and navigation are visible where relevant.
- Workspace/tenant context is preserved across the tested flow where relevant.
- RBAC/capability-dependent visibility behaves as expected where practical to verify.
- Livewire interactions complete without visible runtime errors.
- No relevant browser console errors occur.
- No failed network requests occur for the tested flow, except known unrelated development noise that is explicitly documented.
- OperationRun, audit, evidence, result, or support-diagnostic drilldowns work where relevant.
- The smoke-tested path is documented in the final response.
Fail behavior:
- Fix in-scope browser, UX, Livewire, navigation, or runtime failures before claiming merge-readiness.
- If a browser issue is unrelated existing debt, document evidence and residual risk.
- Do not treat a passing browser smoke test as a substitute for backend, policy, domain, security, feature, or integration tests.
- Do not expand the smoke test into a full E2E suite unless the user explicitly asks for that.
### Gate 5: Post-Implementation Analysis Gate
Required after implementation and after each fix iteration.
Pass criteria:
- The implementation has been checked against `spec.md`, `plan.md`, `tasks.md`, and the constitution.
- All completed tasks have implementation evidence.
- No confirmed in-scope findings remain.
- Medium/Low findings are fixed when they are inside active spec scope, clearly bounded, and safe.
- Medium/Low findings that remain open are explicitly documented with one of these reasons:
- out of scope
- requires separate spec
- risky refactor
- existing unrelated debt
- not reproducible
- blocked by unclear product/architecture decision
- No scope expansion was introduced during fixes.
Fail behavior:
- Fix confirmed in-scope findings, regardless of severity, when the fix is safe and bounded.
- Stop instead of fixing when remediation would expand scope, contradict repo architecture, introduce risky refactors, or repeat the same failed fix twice.
### Gate 6: Merge Readiness Gate
Required before claiming the implementation is ready for manual review/merge.
Pass criteria:
- Spec Readiness Gate passed.
- Implementation Scope Gate passed.
- Test Gate passed.
- Browser Smoke Test Gate passed when applicable, or was explicitly marked not applicable with a reason.
- Post-Implementation Analysis Gate passed.
- `tasks.md` reflects actual completion status.
- No confirmed in-scope findings remain.
- All remaining findings are documented as out-of-scope, follow-up candidates, unrelated existing debt, or explicit residual risks.
- Final response includes changed files, tests/checks run, browser smoke result, iterations performed, residual risks, and follow-up candidates.
Fail behavior:
- Do not claim merge-readiness.
- Report the failed gate, remaining risks, and the smallest recommended next action.
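Assuming each gate result is tracked as a simple string, the aggregation rule above could be sketched like this (the gate names and `n/a:` convention are illustrative assumptions):

```python
def merge_readiness(gates: dict[str, str]) -> tuple[bool, list[str]]:
    """gates maps gate name to 'passed', 'failed', or 'n/a: <reason>'.

    Only the browser smoke gate may be not applicable, and only with a
    stated reason; every other gate must have passed.
    Returns (ready, blocking_gate_names).
    """
    blocking = []
    for name, state in gates.items():
        if state == "passed":
            continue
        if name == "browser_smoke" and state.startswith("n/a:"):
            continue  # explicitly marked not applicable with a reason
        blocking.append(name)
    return (not blocking, blocking)
```

Anything other than a pass (or a reasoned not-applicable browser smoke result) blocks the merge-readiness claim.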
## Implementation Loop
Execute the loop in bounded phases:
- Evaluate the Spec Readiness Gate.
- Evaluate the Implementation Scope Gate before changing application code.
- Implement the active Spec Kit feature scope task-by-task.
- Run targeted tests and relevant static/dynamic checks.
- Evaluate the Test Gate.
- Run a Browser Smoke Test when the change affects UI/user-facing flows.
- Evaluate the Browser Smoke Test Gate as passed, failed, or not applicable with a reason.
- Run strict post-implementation analysis against spec, plan, tasks, constitution, changed code, changed tests, browser smoke results where applicable, and relevant existing patterns.
- Evaluate the Post-Implementation Analysis Gate.
- Identify confirmed findings by severity: Blocker, High, Medium, Low.
- Fix all confirmed in-scope findings regardless of severity when safe and bounded.
- Do not fix findings that require scope expansion, risky unrelated refactors, or architectural/product decisions outside the active spec; document them as follow-up/residual risks with reasons.
- Re-run relevant tests and browser smoke checks where applicable after fixes.
- Repeat test + browser smoke + analysis + fix loop until no confirmed in-scope findings remain or a stop condition is reached.
- Evaluate the Merge Readiness Gate.
- Report final implementation status, changed files, tests, browser smoke result, residual risks, failed/passed gates, and manual review prompt.
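The phases above form a bounded loop. As an illustrative Python sketch, with `run_checks` and `fix_findings` standing in for the test/analysis and fix phases (names are assumptions):

```python
MAX_ITERATIONS = 3  # stop condition: at most three analysis/fix iterations


def run_verification_loop(run_checks, fix_findings, max_iterations=MAX_ITERATIONS):
    """Bounded test + analysis + fix loop.

    run_checks() returns the list of confirmed in-scope findings;
    fix_findings(findings) attempts safe, bounded fixes.
    Returns (status, iterations_used).
    """
    seen = set()
    for iteration in range(1, max_iterations + 1):
        findings = run_checks()
        if not findings:
            return "clean", iteration
        if any(f in seen for f in findings):
            # Same finding reappeared after an attempted fix: stop.
            return "stopped: repeated finding", iteration
        seen.update(findings)
        fix_findings(findings)
    return "stopped: iteration budget exhausted", max_iterations
```

The sketch encodes only the mechanical stop conditions (repeated finding, iteration budget); the judgment-based ones, such as scope conflicts, still require stopping manually.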
## Stop Conditions
Stop the implementation loop when any of the following is true:
- No confirmed in-scope findings remain.
- The same finding appears twice after attempted fixes.
- A required fix conflicts with the spec, plan, constitution, or repository architecture.
- A required fix would expand scope beyond the active spec.
- A required fix would require a risky unrelated refactor.
- A required fix depends on an unresolved product or architecture decision.
- Tests reveal an unrelated pre-existing failure that cannot be safely fixed inside the active spec.
- Browser smoke testing reveals an unrelated pre-existing UI/runtime failure that cannot be safely fixed inside the active spec.
- Three analysis/fix iterations have already been completed.
- The repository state is ambiguous enough that continuing would risk damaging architecture or data semantics.
When stopping before full cleanliness, report exactly why the loop stopped and what remains.
## Post-Implementation Analysis Prompt
Use this prompt internally after implementation and after each fix iteration:
You are a Senior Staff Software Engineer, Software Architect, and Enterprise SaaS Reviewer.
Analyze the implementation of the active spec strictly repo-based.
Goal:
Check whether the implementation is complete, consistent, tested, and constitution-compliant.
Check against:
- spec.md
- plan.md
- tasks.md
- .specify/memory/constitution.md
- changed application code
- changed tests
- browser smoke test result, if UI/user-facing flows are affected
- existing repository patterns
Important:
- No speculation without repo evidence.
- No scope expansion.
- No new product ideas as mandatory fixes.
- Group findings by Blocker, High, Medium, Low.
- Cite concrete file/code evidence for every finding.
- Name a minimal remediation for every finding.
- List separately which findings must be fixed within the active spec.
- Also mark Medium/Low findings within the active spec for fixing when they are safe and bounded.
- For UI/Filament/Livewire changes, verify that a browser smoke test was performed and that the tested operator flow actually works.
- List findings that are not to be fixed only as follow-up/residual risk when they are out of scope, a risky refactor, unrelated existing debt, not reproducible, or blocked by an open product/architecture decision.
- If no confirmed in-scope findings remain, give a clear implementation sign-off.
## Task Completion Rules

- Keep `tasks.md` aligned with actual implementation status.
- Check off tasks only after implementation and test evidence exists.
- If a task is obsolete because repository truth proves a different path, update the task note with the reason instead of silently deleting it.
- If a task cannot be completed inside scope, leave it unchecked and report why.
## Testing Rules
- Add or update tests for all changed business behavior.
- Include RBAC and workspace/tenant isolation tests where relevant.
- Include OperationRun, audit, evidence, or result-truth tests where relevant.
- Prefer regression tests for every fixed Blocker or High finding.
- Add regression tests for Medium/Low findings when the behavior is important and testable without excessive churn.
- Do not weaken tests to pass the suite.
- Do not treat a green UI path as sufficient without backend or policy coverage when the behavior is security- or governance-relevant.
## Browser Smoke Test Rules
Apply these rules when the active spec changes Filament UI, Livewire interactions, navigation, forms, tables, actions, modals, dashboards, operation drilldowns, tenant/workspace context, or any user-facing flow.
The browser smoke test should be narrow and focused. It is not a full E2E suite unless explicitly requested.
Minimum smoke path:
- Open the relevant page or entry point.
- Confirm the expected workspace/tenant context where relevant.
- Confirm the changed or newly introduced UI element is visible.
- Execute the primary action or interaction changed by the spec.
- Confirm the expected result state, notification, redirect, table update, modal state, operation link, or drilldown.
- Check for relevant console errors.
- Check for failed network requests related to the tested flow.
- Document the tested path in the final response.
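One way to make the pass/fail decision explicit is to record the smoke run as structured data. A hedged sketch in Python; the field names are assumptions, not an existing harness API:

```python
from dataclasses import dataclass, field


@dataclass
class SmokeResult:
    page_loaded: bool
    primary_action_ok: bool
    console_errors: list[str] = field(default_factory=list)
    failed_requests: list[str] = field(default_factory=list)
    documented_noise: set[str] = field(default_factory=set)  # known unrelated dev noise


def smoke_passes(result: SmokeResult) -> bool:
    """Pass only if the page loads, the primary action succeeds, there are no
    console errors, and no failed requests beyond documented development noise."""
    unexplained = [r for r in result.failed_requests
                   if r not in result.documented_noise]
    return (result.page_loaded and result.primary_action_ok
            and not result.console_errors and not unexplained)
```

Recording the result this way also gives the final report a concrete artifact to cite for the tested path.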
For TenantPilot/TenantAtlas, pay special attention to:
- Filament actions and header actions
- Livewire polling, modals, validation, and actions
- workspace/tenant context preservation
- RBAC/capability-dependent action visibility
- OperationRun links and drilldown continuity
- audit/evidence/result/support-diagnostic drilldowns where relevant
- empty states, badges, labels, and decision guidance where relevant
Browser smoke testing is required for UI/user-facing changes and optional for backend-only changes.
Do not treat browser smoke success as proof that backend security, policies, domain logic, auditability, or workspace/tenant isolation are correct. Those still require automated tests or repo-based verification.
## Failure Handling
If an implementation step, test phase, browser smoke phase, or post-implementation analysis fails:
- Stop at the relevant gate or stop condition.
- Report the failing command or phase.
- Summarize the error.
- Do not attempt unrelated implementation as a workaround.
- Suggest the smallest safe next action.
If the branch or working tree state is unsafe:
- Stop before implementation changes.
- Report the current branch and relevant uncommitted files.
- Ask the user to commit, stash, or move to a clean worktree.
## Final Response Requirements
Respond with:
- Active spec directory
- Summary of implemented changes
- Tests/checks run and their results
- Browser smoke test result, tested path, or not-applicable reason
- Quality gates passed/failed and number of analysis/fix iterations performed
- Remaining in-scope findings, if any
- Residual risks and follow-up candidates, if relevant
- Files changed
- Explicit statement whether the Merge Readiness Gate passed and whether the implementation is ready for manual review/merge
Keep the final response concise, but include enough detail for the user to continue immediately.
## Manual Review Prompt

Provide a ready-to-copy prompt like this, adapted to the active spec number and slug:
You are a Senior Staff Software Architect and Enterprise SaaS Reviewer.
Perform a final manual review of the implemented spec `<spec-number>-<slug>`, strictly repo-based.
Goal:
Check whether the implementation is truly merge-ready after the agent loop.
Important:
- No implementation.
- No code changes.
- No scope expansion.
- Check against spec.md, plan.md, tasks.md, and constitution.md.
- Check the changed files, tests, browser smoke test result, RBAC, workspace/tenant isolation, auditability, UX, and OperationRun semantics where relevant.
- Name only concrete findings with repo evidence.
- At the end, give a clear decision: merge-ready, merge-ready with notes, or not merge-ready.
## Example Invocation

User:
Use the skill spec-kit-implementation-loop.
Implement the active spec.
Then run tests, a browser smoke test if UI/user-facing flows are affected, perform the post-implementation analysis, and fix all confirmed in-scope findings regardless of severity when safe and bounded.
Repeat test + browser smoke + analysis + fix until no in-scope findings remain open or a stop condition applies.
Expected behavior:
- Inspect active Spec Kit context, constitution, spec, plan, tasks, relevant code, and relevant tests.
- Evaluate the Spec Readiness Gate and Implementation Scope Gate.
- Implement only the active spec scope.
- Run targeted tests and relevant checks.
- Evaluate the Test Gate.
- Run and evaluate Browser Smoke Test when UI/user-facing flows are affected.
- Run post-implementation analysis.
- Fix all confirmed in-scope findings regardless of severity when safe and bounded.
- Repeat test + browser smoke + analysis + fix loop up to the stop conditions.
- Evaluate the Merge Readiness Gate.
- Report final status, changed files, tests, browser smoke result, residual risks, gates, and manual review prompt.