ahmido cd811cff4f Spec 120: harden secret redaction integrity (#146)
## Summary
- replace broad substring-based masking with a shared exact/path-based secret classifier and workspace-scoped fingerprint hashing
- persist protected snapshot metadata on `policy_versions` and keep secret-only changes visible in compare, drift, restore, review, verification, and ops surfaces
- add Spec 120 artifacts, audit documentation, and focused Pest regression coverage for snapshot, audit, verification, review-pack, and notification behavior

## Validation
- `vendor/bin/sail artisan test --compact tests/Feature/Intune/PolicySnapshotRedactionTest.php tests/Feature/Intune/PolicySnapshotFingerprintIsolationTest.php tests/Feature/ReviewPack/ReviewPackRedactionIntegrityTest.php tests/Feature/OpsUx/OperationRunNotificationRedactionTest.php tests/Feature/Verification/VerificationReportViewerDbOnlyTest.php`
- `vendor/bin/sail bin pint --dirty --format agent`

## Spec / checklist status
| Checklist | Total | Completed | Incomplete | Status |
|-----------|-------|-----------|------------|--------|
| requirements.md | 16 | 16 | 0 | ✓ PASS |

- `tasks.md`: T001-T032 complete
- `tasks.md`: T033 manual quickstart validation is still open and noted for follow-up

## Filament / platform notes
- Livewire v4 compliance is unchanged
- no panel provider changes; `bootstrap/providers.php` remains the registration location
- no new globally searchable resources were introduced, so global search requirements are unchanged
- no new destructive Filament actions were added
- no new Filament assets were added; no `filament:assets` deployment change is required

## Testing coverage touched
- snapshot persistence and fingerprint isolation
- compare/drift protected-change evidence
- audit, verification, review-pack, ops-failure, and notification sanitization
- viewer/read-only Filament presentation updates

Co-authored-by: Ahmed Darrazi <ahmed.darrazi@live.de>
Reviewed-on: #146
2026-03-07 16:43:01 +00:00


# Research — Secret Redaction Hardening & Snapshot Data Integrity (Spec 120)
This document records the design choices for the reduced Spec 120 scope after removing the pre-go-live legacy-data remediation workflow.
## Decisions
### 1) Central classification authority
- Decision: Introduce one shared secret-classification service that evaluates protected fields by exact field name plus canonical path, and reuse it across snapshot protection, audit sanitization, verification sanitization, and ops failure sanitization.
- Rationale: The codebase previously relied on multiple substring-based sanitizers. Spec 120 requires a single authority so that safe configuration fields like `passwordMinimumLength` remain visible while true secrets stay protected.
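The exact/path-based classification described above can be sketched as follows. This is an illustrative sketch, not the app's PHP service; the rule names and paths are assumptions for demonstration:

```python
# Hypothetical rule sets -- the real classifier's rules live in the shared service.
PROTECTED_FIELD_NAMES = {"password", "sharedSecret", "certificatePassword"}
PROTECTED_PATHS = {"/wifi/preSharedKey"}

def is_protected(field_name: str, json_pointer: str) -> bool:
    """Match by exact field name or canonical path -- never by substring."""
    return field_name in PROTECTED_FIELD_NAMES or json_pointer in PROTECTED_PATHS

# A safe configuration field that merely *contains* "password" stays visible:
assert not is_protected("passwordMinimumLength", "/passwordMinimumLength")
assert is_protected("password", "/password")
```

The key property is that a substring sanitizer would wrongly mask `passwordMinimumLength`, while exact matching leaves it visible.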
### 2) Canonical protected-path format
- Decision: Represent protected locations as source-bucketed RFC 6901 JSON Pointers, stored under `secret_fingerprints` buckets: `snapshot`, `assignments`, and `scope_tags`.
- Rationale: JSON Pointer is deterministic, array-safe, and avoids ambiguity between object keys and numeric list indexes.
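A minimal sketch of building RFC 6901 pointers while walking a payload, showing why the format is array-safe (list indexes become numeric segments) and unambiguous (`~` and `/` in keys are escaped per the spec). The helper names are illustrative:

```python
def to_pointer(segments) -> str:
    """Build an RFC 6901 JSON Pointer, escaping ~ as ~0 and / as ~1."""
    return "".join("/" + str(s).replace("~", "~0").replace("/", "~1") for s in segments)

def leaf_pointers(node, segments=()):
    """Yield a pointer for every leaf value in a nested payload (sketch only)."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from leaf_pointers(value, segments + (key,))
    elif isinstance(node, list):
        for index, value in enumerate(node):
            yield from leaf_pointers(value, segments + (index,))
    else:
        yield to_pointer(segments)

payload = {"wifi": {"preSharedKey": "s3cret"}, "certs": [{"password": "x"}]}
assert set(leaf_pointers(payload)) == {"/wifi/preSharedKey", "/certs/0/password"}
```

Note how `/certs/0/password` distinguishes a list index from an object key named `"0"`, which a dotted-path format cannot do reliably.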
### 3) Single ownership of persisted snapshot protection
- Decision: Make `VersionService::captureVersion()` the sole write-time owner of protected snapshot generation.
- Rationale: `VersionService` is the final `PolicyVersion` persistence boundary. Removing duplicate masking from `PolicyCaptureOrchestrator` eliminates double-redaction and ensures dedupe/version creation decisions use the same protected result.
### 4) Protected snapshot persistence contract
- Decision: Persist protected values as `[REDACTED]`, store the ruleset marker in `policy_versions.redaction_version`, and store path-keyed HMAC digests in `policy_versions.secret_fingerprints`.
- Rationale: The placeholder preserves JSON shape for downstream consumers, while dedicated columns keep the change signal and contract version out of generic metadata.
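The persistence contract can be illustrated with a small sketch: protected values are replaced in place with the `[REDACTED]` placeholder, while a path-keyed digest map is collected for `secret_fingerprints`. The digest here is a plain SHA-256 stand-in (the real contract uses workspace-keyed HMACs), and pointer unescaping is omitted for brevity:

```python
import copy
import hashlib

REDACTED = "[REDACTED]"

def protect_snapshot(snapshot: dict, protected_pointers: set):
    """Return (protected payload, path-keyed fingerprint map) -- sketch only."""
    protected = copy.deepcopy(snapshot)
    fingerprints = {}
    for pointer in protected_pointers:
        node, *rest = None,  # placeholder unpacking avoided; walk below
        node = protected
        segments = pointer.lstrip("/").split("/")
        for segment in segments[:-1]:
            node = node[int(segment)] if isinstance(node, list) else node[segment]
        leaf = int(segments[-1]) if isinstance(node, list) else segments[-1]
        fingerprints[pointer] = hashlib.sha256(str(node[leaf]).encode()).hexdigest()
        node[leaf] = REDACTED  # placeholder preserves the JSON shape
    return protected, fingerprints

protected, prints = protect_snapshot(
    {"displayName": "Baseline", "password": "hunter2"}, {"/password"}
)
assert protected == {"displayName": "Baseline", "password": "[REDACTED]"}
assert set(prints) == {"/password"}
```

Because the placeholder keeps the key present, downstream consumers see the same JSON shape whether or not a field was protected.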
### 5) Fingerprint derivation strategy
- Decision: Use HMAC-SHA256 with a signing key derived from the app key and the stable `workspace_id`, then hash the tuple `(source_bucket, json_pointer, normalized_secret_value)`.
- Rationale: This satisfies the workspace-isolation requirement while keeping fingerprints deterministic inside one workspace and non-correlatable across workspaces.
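The derivation above can be sketched in a few lines. The key-derivation and value-normalization details here are illustrative assumptions, not the application's code:

```python
import hashlib
import hmac

def workspace_key(app_key: bytes, workspace_id: str) -> bytes:
    """Derive a per-workspace signing key from the app key and stable workspace id."""
    return hmac.new(app_key, workspace_id.encode(), hashlib.sha256).digest()

def fingerprint(app_key: bytes, workspace_id: str,
                bucket: str, pointer: str, value: str) -> str:
    """HMAC-SHA256 over the (source_bucket, json_pointer, value) tuple."""
    message = "\x1f".join([bucket, pointer, value]).encode()
    return hmac.new(workspace_key(app_key, workspace_id), message,
                    hashlib.sha256).hexdigest()

key = b"app-key"
a = fingerprint(key, "ws-a", "snapshot", "/password", "hunter2")
b = fingerprint(key, "ws-b", "snapshot", "/password", "hunter2")
assert a != b  # same secret is non-correlatable across workspaces
assert a == fingerprint(key, "ws-a", "snapshot", "/password", "hunter2")  # deterministic
```

The unit-separator join is one possible unambiguous encoding of the tuple; any injective encoding would do, as long as it is fixed for the lifetime of the ruleset version.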
### 6) Fingerprinting scope and version identity
- Decision: Apply the protected contract consistently to all persisted protected payload buckets: `snapshot`, `assignments`, and `scope_tags`. Version identity must incorporate both the visible protected payload and the fingerprint map so secret-only changes create a new `PolicyVersion`.
- Rationale: If dedupe ignores `secret_fingerprints`, secret-only changes still collapse into one version and FR-120-007 fails.
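A sketch of the identity rule, assuming a hypothetical canonical-JSON hash; the point is only that the fingerprint map participates in dedupe, so a secret rotation with an unchanged visible payload still produces a new version:

```python
import hashlib
import json

def version_identity(protected_payload: dict, secret_fingerprints: dict) -> str:
    """Hash both the visible protected payload and the fingerprint map."""
    material = json.dumps(
        {"payload": protected_payload, "fingerprints": secret_fingerprints},
        sort_keys=True,
    )
    return hashlib.sha256(material.encode()).hexdigest()

payload = {"displayName": "Baseline", "password": "[REDACTED]"}
v1 = version_identity(payload, {"/password": "digest-of-old-secret"})
v2 = version_identity(payload, {"/password": "digest-of-new-secret"})
assert v1 != v2  # dedupe must not collapse a secret-only rotation
```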
### 7) Output readability and integrity messaging
- Decision: Protected-value messaging remains text-first on existing viewers and export surfaces. The product explains that protected values were intentionally hidden, but it does not ship a dedicated historical-data remediation workflow.
- Rationale: Production starts with fresh compliant data, so the feature only needs to explain current protected behavior, not historical repair.
### 8) Regression strategy
- Decision: Replace substring-match regression expectations with a corpus-based test matrix covering safe fields, true secrets, secret-only version changes, audit/verification readability, and notification/export behavior.
- Rationale: The existing suite encoded the old, broken behavior. Phase 1 needs tests that lock in exact/path-based classification and block new broad substring redactors.
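A corpus-based matrix can be as simple as a table of (field, pointer, expected classification) rows driven through the classifier; the actual matrix lives in the Pest suites listed under Validation, and the rows and rule set here are illustrative:

```python
# Each row: (field name, canonical pointer, expected "is protected" result).
CORPUS = [
    ("passwordMinimumLength", "/passwordMinimumLength", False),  # safe setting
    ("passwordRequired",      "/passwordRequired",      False),  # safe setting
    ("password",              "/password",              True),   # true secret
    ("preSharedKey",          "/wifi/preSharedKey",     True),   # true secret
]

PROTECTED_FIELDS = {"password", "preSharedKey"}  # assumed rule set for the sketch

def classify(field: str, pointer: str) -> bool:
    """Exact-match classification stand-in for the shared service."""
    return field in PROTECTED_FIELDS

for field, pointer, expected in CORPUS:
    assert classify(field, pointer) is expected, field
```

A substring redactor would fail the first two rows, which is exactly the regression this matrix is meant to block.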
## Repo Facts Used
- `PolicySnapshotRedactor` previously used broad regex patterns and was invoked both in `PolicyCaptureOrchestrator` and `VersionService`.
- `AuditContextSanitizer`, `VerificationReportSanitizer`, and `RunFailureSanitizer` all contained substring-based protection logic.
- `policy_versions` already stores immutable snapshot evidence consumed by drift, compare, and restore flows.
- Pre-go-live data is disposable for this product rollout, so no supported legacy-data remediation workflow is required in this feature.