# Research — Spec 117 Baseline Drift Engine (v1.5)

This document resolves planning unknowns for implementing `specs/117-baseline-drift-engine/spec.md` in the existing Laravel + Filament codebase.

## Decision 1 — Provider chain for evidence

**Decision**: Implement a batch-capable resolver (service) that selects evidence per subject via a provider chain:

1. **Content evidence**: `PolicyVersion.snapshot` (normalized), captured **since** the baseline snapshot's `captured_at` time.
2. **Meta evidence**: inventory meta contract hash (existing `BaselineSnapshotIdentity::hashItemContent(...)`).
3. **No evidence**: return `null` and record an evidence gap (no finding emitted).

**Rationale**:
- Matches the spec's "since" rule: the baseline snapshot's `captured_at` is the temporal reference.
- Satisfies the v1.5 requirement that compare is read-only and must not fetch upstream.
- Batch resolution avoids N+1 DB queries.

**Alternatives considered**:
- Per-subject resolution inside the compare job (rejected: N+1 queries, harder to test).
- Always meta-only (rejected: violates the "deep settings drift" requirement).

## Decision 2 — Fidelity calculation

**Decision**: Compute finding fidelity as the weaker of the two sides:
- `content` is stronger than `meta`.
- If either side is `meta`, overall fidelity is `meta`.

**Rationale**:
- Matches the clarified spec rule.
- Easy to implement and consistent with UX badge/filter semantics.

**Alternatives considered**:
- "Best-of" fidelity (rejected: misleading; would claim content-level confidence when one side is meta-only).

## Decision 3 — Provenance storage (both sides)

**Decision**: Store provenance for **both baseline and current evidence** on each finding:
- `baseline`: `{ fidelity, source, observed_at, observed_operation_run_id? }`
- `current`: `{ fidelity, source, observed_at, observed_operation_run_id? }`

**Rationale**:
- Required by the accepted clarification (Q4).
- Enables the UI to show the "why" behind confidence.
**Alternatives considered**:
- Store a single combined provenance blob (rejected: loses per-side data).

## Decision 4 — Fidelity filter implementation (JSONB vs column)

**Decision**: Add a dedicated `findings.evidence_fidelity` column (enum-like string: `content|meta`) and keep the full provenance in `evidence_jsonb`.

**Rationale**:
- Filtering on a simple indexed column is clean, fast, and predictable.
- Avoids complex JSONB query conditions and reduces coupling to the evidence JSON structure.

**Alternatives considered**:
- JSONB filter over `evidence_jsonb->>'fidelity'` (rejected: harder to index, more brittle).

## Decision 5 — Coverage and evidence-gap reporting location

**Decision**: Put the coverage breakdown and evidence-gap counts in `operation_runs.context` under `baseline_compare.coverage` and `baseline_compare.evidence_gaps`.

**Rationale**:
- `OperationRun.summary_counts` is restricted to numeric keys from `OperationSummaryKeys`.
- Coverage details are operational diagnostics, not a general-purpose KPI.

**Alternatives considered**:
- Adding new summary keys (rejected: violates the key whitelist / contract).

## Decision 6 — No new HTTP APIs

**Decision**: No new controllers/endpoints. Changes are limited to:
- queued job behavior + persistence (findings + run context)
- Filament UI (Finding list filters + columns/details)

**Rationale**:
- The app is Filament-first; current functionality is already represented by panel routes.

**Alternatives considered**:
- Adding REST endpoints for compare (rejected: not needed for v1.5).

## Notes on current codebase (facts observed)

- Baseline capture stores the meta contract hash in `baseline_snapshot_items.baseline_hash` and provenance-like fields in `meta_jsonb`.
- Baseline compare recomputes the current meta hash and currently hardcodes run-context fidelity to `meta`.
- The Findings UI lacks fidelity filtering today.

## Open Questions

None blocking Phase 1 design. Any remaining unknowns are implementation details that will be validated with focused tests.
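As a closing illustration, the weaker-of rule from Decision 2 is small enough to pin down in code. This is a minimal sketch, not the actual implementation; the class and method names are hypothetical:

```php
<?php

/**
 * Hypothetical helper illustrating Decision 2: a finding's overall
 * fidelity is the weaker of its two sides. With only two levels in
 * v1.5, the rule reduces to "meta wins if present on either side".
 */
final class EvidenceFidelity
{
    public const CONTENT = 'content';
    public const META = 'meta';

    public static function combine(string $baseline, string $current): string
    {
        return ($baseline === self::META || $current === self::META)
            ? self::META
            : self::CONTENT;
    }
}

// EvidenceFidelity::combine('content', 'meta')     => 'meta'
// EvidenceFidelity::combine('content', 'content')  => 'content'
```

The returned value is what Decision 4 would persist into the `findings.evidence_fidelity` column, keeping the filterable column and the per-side provenance in `evidence_jsonb` consistent with each other.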