## Summary

- add the shared operator explanation layer with explanation families, trustworthiness semantics, count descriptors, and centralized badge mappings
- adopt explanation-first rendering across baseline compare, governance operation run detail, baseline snapshot presentation, tenant review detail, and review register rows
- extend reason translation, artifact-truth presentation, fallback ops UX messaging, and focused regression coverage for operator explanation semantics

## Testing

- `vendor/bin/sail bin pint --dirty --format agent`
- `vendor/bin/sail artisan test --compact tests/Feature/Monitoring/OperationsTenantScopeTest.php tests/Feature/Operations/OperationRunBlockedExecutionPresentationTest.php`
- `vendor/bin/sail artisan test --compact`

## Notes

- Livewire v4 compatible
- panel provider registration remains in bootstrap/providers.php
- no destructive Filament actions were added or changed in this PR
- no new global-search behavior was introduced in this slice

Co-authored-by: Ahmed Darrazi <ahmed.darrazi@live.de>
Reviewed-on: #191
# Research: Operator Explanation Layer
## Decision 1: Reuse the existing reason-translation and artifact-truth stack as the substrate
- Decision: Build the explanation layer on top of `ReasonPresenter`, `ReasonTranslator`, `ReasonResolutionEnvelope`, `OperatorOutcomeTaxonomy`, `BadgeCatalog`, and `ArtifactTruthPresenter` instead of introducing a parallel explanation subsystem.
- Rationale: The repo already contains the core pieces needed for domain-safe wording, centralized badge semantics, and multi-dimensional artifact truth. The missing layer is composition and reading order, not a lack of semantic primitives.
- Alternatives considered:
- New standalone explanation subsystem disconnected from artifact truth. Rejected because it would duplicate semantics and drift from the existing taxonomy.
- Page-local explanation logic only. Rejected because the spec explicitly targets a shared cross-domain pattern.
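As a sketch of what "composition over a new subsystem" could look like in code (the composer class, its method, and every accessor it calls are invented for illustration; only the injected collaborator types are classes this document actually names):

```php
// Hypothetical composer: orchestrates the existing primitives named in
// Decision 1 rather than re-deriving any semantics of its own.
final readonly class OperatorExplanationComposer
{
    public function __construct(
        private ReasonTranslator $reasons,     // existing: reason codes -> domain-safe wording
        private ArtifactTruthPresenter $truth, // existing: multi-dimensional artifact truth
        private BadgeCatalog $badges,          // existing: centralized badge semantics
    ) {}

    /** Compose, never re-derive: all semantics come from the substrate. */
    public function compose(ReasonResolutionEnvelope $envelope): OperatorExplanation
    {
        return new OperatorExplanation(
            headline: $this->reasons->operatorLabelFor($envelope), // assumed accessor
            truth: $this->truth->present($envelope),               // assumed accessor
            badge: $this->badges->forEnvelope($envelope),          // assumed accessor
        );
    }
}
```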
## Decision 2: Model explanation as a reusable view-model contract, not a persistence model
- Decision: Introduce a shared operator explanation pattern as a composed read model that separates execution outcome, evaluation result, reliability, coverage, and next action.
- Rationale: The feature changes interpretation of already-produced outcomes. No new persistence model is required because the necessary data already exists in `OperationRun`, artifact-truth envelopes, reason translation, and compare stats.
- Alternatives considered:
- Add new database columns to store explanation states. Rejected because the problem is presentation and composition, not missing canonical storage.
- Encode explanation purely in Blade or Filament page code. Rejected because the same pattern must be reused across multiple domains.
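A minimal sketch of the composed read model, assuming PHP 8.2 readonly classes; every type and case name below is an assumption for illustration, not a repo type. The point it shows is Decision 2's separation of the five dimensions into one view model with no persistence of its own:

```php
// Hypothetical dimensions; cases are illustrative placeholders.
enum ExecutionOutcome { case Completed; case Blocked; case Failed; }          // did the run finish?
enum EvaluationResult { case Findings; case NoFindings; case NotEvaluated; }  // what did it conclude?
enum Reliability { case Trustworthy; case Degraded; case Unreliable; }        // can the conclusion be trusted?
enum Coverage { case Full; case Partial; case None; }                         // how much evidence backed it?
enum NextAction { case None; case Investigate; case Rerun; }                  // what should the operator do?

// One composed, immutable read model; nothing here is stored.
final readonly class OperatorExplanation
{
    public function __construct(
        public ExecutionOutcome $execution,
        public EvaluationResult $evaluation,
        public Reliability $reliability,
        public Coverage $coverage,
        public NextAction $nextAction,
    ) {}
}
```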
## Decision 3: Formalize count-role semantics so empty-looking results cannot imply health
- Decision: Define three count roles for reference surfaces: execution counts, evaluation-output counts, and coverage or reliability counts.
- Rationale: The motivating failure case is not that counts are absent, but that counts with different meanings are shown side by side without explanation. Explicit count roles prevent `0 findings` from being interpreted as complete evaluation when evidence or coverage was limited.
- Alternatives considered:
- Hide counts in degraded cases. Rejected because operators still need the numbers, just with the right explanation.
- Keep current counts and add only warning badges. Rejected because this preserves the same ambiguity under a different visual wrapper.
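The three count roles could be sketched as follows; the enum, the descriptor class, and the caption wording are hypothetical names, not repo types. Tagging every displayed number with its role is what stops a bare `0 findings` from silently reading as "fully evaluated and healthy":

```php
// Hypothetical role tag for every count a reference surface displays.
enum CountRole
{
    case Execution;        // e.g. runs started, runs completed
    case EvaluationOutput; // e.g. findings produced by an evaluation
    case Coverage;         // e.g. evidence available vs. expected (reliability)
}

// A count never travels alone: value + role + operator-facing caption.
final readonly class CountDescriptor
{
    public function __construct(
        public int $value,
        public CountRole $role,
        public string $operatorCaption, // e.g. "0 findings (coverage was limited)"
    ) {}
}
```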
## Decision 4: Make Baseline Compare the golden-path reference implementation
- Decision: Use Baseline Compare as the first implementation surface for the shared explanation layer, then align Monitoring run detail and one additional governance artifact family.
- Rationale: The spec and existing candidate text both identify Baseline Compare as the clearest motivating case. The current "why no findings" path, evidence-gap counts, and coverage status already expose the problem vividly and provide a bounded proving ground.
- Alternatives considered:
- Start with a generic governance artifact detail page only. Rejected because the main trust problem is easiest to verify on baseline compare.
- Start platform-wide on every governance surface. Rejected because the spec is intentionally a reference-surface rollout, not a monolithic redesign.
## Decision 5: Keep diagnostics available but always secondary
- Decision: Preserve raw JSON, raw reason codes, low-level counters, and support metadata, but move them behind primary explanation blocks on the affected surfaces.
- Rationale: The constitution and existing product direction still require rich diagnostics for support, audit, and advanced troubleshooting. The operator problem is not that diagnostics exist; it is that diagnostics currently dominate the default reading path.
- Alternatives considered:
- Remove raw reason codes from the UI entirely. Rejected because support and audit workflows still need them.
- Leave diagnostics in place and only add a short summary above them. Rejected because that often leaves the surface visually dominated by technical details.
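One way to picture the intended reading order is a Blade sketch; the component names (`x-operator-explanation`, `x-collapsible`, `x-raw-reason-codes`, `x-json-viewer`) and the `$run` accessors are invented for illustration and are not the repo's actual components:

```blade
{{-- Hypothetical surface layout: explanation first, diagnostics preserved
     but collapsed, so technical detail no longer dominates the default
     reading path. --}}
<x-operator-explanation :explanation="$explanation" />

<x-collapsible label="Technical details" :open="false">
    <x-raw-reason-codes :codes="$run->reasonCodes" />  {{-- assumed accessor --}}
    <x-json-viewer :payload="$run->rawPayload" />      {{-- assumed accessor --}}
</x-collapsible>
```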
## Decision 6: Extend reason translation with trustworthiness and next-action semantics instead of relying on message strings
- Decision: Treat domain reason codes as inputs to a richer explanation contract that includes operator label, operator explanation, trustworthiness impact, and next-action category.
- Rationale: Baseline compare currently exposes reason-code messages that are diagnostically useful but still too implementation-first. The explanation layer needs semantically structured outputs rather than a single message string.
- Alternatives considered:
- Keep using enum `.message()` methods as the primary explanation. Rejected because this is the current limitation.
- Hardcode next actions in each page. Rejected because the same cause class must read consistently across surfaces.
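A hypothetical shape for the richer contract, covering the four fields Decision 6 names; none of these type names or enum cases exist in the repo, and the enum `.message()` method stays available for diagnostics alongside it:

```php
// Assumed supporting enums; cases are illustrative placeholders.
enum TrustImpact { case Full; case Reduced; case Unreliable; }
enum NextActionCategory { case None; case Investigate; case Rerun; }

// Hypothetical structured output of reason translation: a reason code maps
// to four semantically distinct fields, not one message string.
final readonly class ReasonExplanation
{
    public function __construct(
        public string $operatorLabel,          // short, domain-safe name for the cause
        public string $operatorExplanation,    // what it means for this result
        public TrustImpact $trustImpact,       // how much the outcome can be trusted
        public NextActionCategory $nextAction, // what the operator should do next
    ) {}
}
```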
## Decision 7: Route governance run detail through the same explanation reading order as artifact surfaces
- Decision: Apply the shared explanation pattern to governance-oriented Monitoring run detail so run pages and artifact pages answer the same operator questions in the same order.
- Rationale: The spec is explicitly cross-surface. If run detail keeps one reading model and baseline compare or artifact detail another, the same truth divergence problem reappears during drilldown.
- Alternatives considered:
- Limit the feature to baseline compare only. Rejected because the spec requires run-detail adoption as part of the first slice.
- Let run detail depend only on status and outcome badges. Rejected because that is insufficient for trust and absent-output interpretation.
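The cross-surface guarantee could be expressed as a shared contract; the interface, method name, and the `OperatorExplanation` return type are assumptions for illustration, not repo code:

```php
// Hypothetical contract: run-detail pages and artifact pages both expose the
// same composed read model, so drilldown from an artifact to its run never
// switches the operator to a different reading order.
interface ProvidesOperatorExplanation
{
    /** Same explanation model, whichever surface the operator lands on. */
    public function operatorExplanation(): OperatorExplanation;
}
```

Both surface families would then render the result through one shared explanation block, which is what keeps the truth-divergence problem from reappearing during drilldown.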