# Research: Feature 005 - Bulk Operations

**Feature**: Bulk Operations for Resource Management
**Date**: 2025-12-22
**Research Phase**: Technology Decisions & Best Practices

## Research Questions & Findings
### Q1: How to implement type-to-confirm in Filament bulk actions?

**Research Goal**: Find a Laravel/Filament-idiomatic way to require explicit confirmation for destructive bulk operations (≥20 items).

**Findings**:

Filament bulk actions support conditional forms via the `->form()` method:
```php
Tables\Actions\DeleteBulkAction::make()
    ->requiresConfirmation()
    ->modalHeading(fn (Collection $records) => $records->count() >= 20
        ? "⚠️ Delete {$records->count()} policies?"
        : "Delete {$records->count()} policies?"
    )
    ->form(fn (Collection $records) => $records->count() >= 20
        ? [
            Forms\Components\TextInput::make('confirm_delete')
                ->label('Type DELETE to confirm')
                ->rule('in:DELETE')
                ->required()
                ->helperText('This action cannot be undone.'),
        ]
        : []
    )
    ->action(function (Collection $records, array $data) {
        // Validation ensures $data['confirm_delete'] === 'DELETE'
        // Proceed with bulk delete
    });
```
**Key Insight**: Filament's form validation automatically prevents submission if `confirm_delete` doesn't match `DELETE` (case-sensitive).
**Alternatives Considered**:
- Custom modal component (more code, less reusable)
- JavaScript validation (client-side only, less secure)
- Laravel form request (breaks Filament UX flow)

**Decision**: Use Filament `->form()` with a validation rule.
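For context, a hedged sketch of how the confirmed action could create the tracking record from Q2 and dispatch the Q4 job. Tenant resolution via `auth()->user()->tenant_id` is an assumption about this app's tenancy setup:

```php
// Sketch only: wire the confirmed bulk action to the queued flow.
->action(function (Collection $records, array $data) {
    $run = BulkOperationRun::create([
        'status' => 'pending',
        'total_items' => $records->count(),
        'item_ids' => $records->pluck('id')->all(),
        // ... tenant_id, user_id, resource, action
    ]);

    BulkPolicyDeleteJob::dispatch(
        policyIds: $records->pluck('id')->all(),
        tenantId: auth()->user()->tenant_id, // assumption: tenant lives on the user
        actorId: auth()->id(),
        bulkOperationRunId: $run->id,
    );
});
```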
### Q2: How to track progress for queued bulk jobs?

**Research Goal**: Enable real-time progress tracking for async bulk operations (≥20 items) without blocking the UI.

**Findings**:

Filament notifications are not reactive by default, so custom progress tracking is required:
1. Create a `BulkOperationRun` model to persist state:

   ```php
   Schema::create('bulk_operation_runs', function (Blueprint $table) {
       $table->id();
       $table->string('status'); // 'pending', 'running', 'completed', 'failed', 'aborted'
       $table->integer('total_items');
       $table->integer('processed_items')->default(0);
       $table->integer('succeeded')->default(0);
       $table->integer('failed')->default(0);
       $table->json('item_ids');
       $table->json('failures')->nullable();
       // ... tenant_id, user_id, resource, action
   });
   ```

2. The job updates the model after each chunk:

   ```php
   collect($this->policyIds)->chunk(10)->each(function ($chunk) use ($run) {
       foreach ($chunk as $id) {
           // Process item
       }

       $run->update([
           'processed_items' => $run->processed_items + $chunk->count(),
           // ... succeeded, failed counts
       ]);
   });
   ```

3. The UI polls for updates via Livewire:

   ```blade
   <div wire:poll.5s="refreshProgress">
       Processing... {{ $run->processed_items }}/{{ $run->total_items }}
   </div>
   ```
**Alternatives Considered**:

- **`Bus::batch()`**: Laravel's batch system tracks job progress, but adds complexity:
  - Requires the `job_batches` table (already exists in Laravel)
  - Each item becomes a separate job (overhead for small batches)
  - Good for parallelization, overkill for sequential processing
  - Decision: Not needed - our jobs process items sequentially with chunking
- **Laravel Pulse**: Real-time application monitoring tool
  - Too heavy for single-feature progress tracking
  - Requires a separate service
  - Decision: Rejected - use the custom `BulkOperationRun` model
- **Pusher/WebSockets**: Real-time push notifications
  - Infrastructure overhead (Pusher subscription or custom WS server)
  - Not needed at a 5-10s polling interval
  - Decision: Rejected - Livewire polling is sufficient
**Decision**: `BulkOperationRun` model + Livewire polling (5s interval).
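The Q7 views below call a few helpers on `BulkOperationRun` that this document doesn't define; a minimal sketch of plausible implementations (the names match their usage in Q7, the bodies are assumptions):

```php
// Assumed helpers on the BulkOperationRun model; bodies are illustrative.
public function isComplete(): bool
{
    return in_array($this->status, ['completed', 'failed', 'aborted'], true);
}

public function progressPercentage(): int
{
    return $this->total_items > 0
        ? (int) round(($this->processed_items / $this->total_items) * 100)
        : 0;
}

public function summaryText(): string
{
    return "{$this->succeeded} succeeded, {$this->failed} failed ({$this->total_items} total)";
}
```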
### Q3: How to handle chunked processing in queue jobs?

**Research Goal**: Process large batches (up to 500 items) without memory exhaustion or timeouts.

**Findings**:

Laravel Collections provide a `->chunk()` method for memory-efficient iteration:
```php
collect($this->policyIds)->chunk(10)->each(function ($chunk) use (&$results, $run) {
    foreach ($chunk as $id) {
        try {
            // Process item
            $results['succeeded']++;
        } catch (\Exception $e) {
            $results['failed']++;
            $results['failures'][] = ['id' => $id, 'reason' => $e->getMessage()];
        }
    }

    // Update progress after each chunk (not per-item)
    $run->update([
        'processed_items' => $results['succeeded'] + $results['failed'],
        'succeeded' => $results['succeeded'],
        'failed' => $results['failed'],
        'failures' => $results['failures'],
    ]);

    // Circuit breaker: abort if >50% failed
    if ($results['failed'] > count($this->policyIds) * 0.5) {
        $run->update(['status' => 'aborted']);
        throw new \Exception('Bulk operation aborted: >50% failure rate');
    }
});
```
**Key Insights**:
- Chunk size: 10-20 items (balances DB update frequency against progress granularity)
- Update `BulkOperationRun` after each chunk, not per item (reduces DB load)
- Circuit breaker: abort if a >50% failure rate is detected mid-process
- Fail-soft: continue processing remaining items on individual failures
**Alternatives Considered**:

- **Cursor-based chunking** (`Model::chunk(100, $callback)`):
  - Good for processing entire tables
  - Not needed - we have an explicit ID list
- **`Bus::batch()`**: Parallel job processing
  - Good for independent tasks (e.g., sending emails)
  - Our tasks are sequential (delete one, then the next)
  - Adds complexity without benefit
- **Database transactions per chunk**:
  - Risk: partial failure leaves incomplete state
  - Decision: No transactions - each item is atomic, and fail-soft behavior is intentional
**Decision**: `collect()->chunk(10)` with after-chunk progress updates.
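Since the research goal also covers timeouts, it's worth noting Laravel's per-job `$timeout` and `$tries` properties; the values below are assumptions to benchmark, not findings from this research:

```php
class BulkPolicyDeleteJob implements ShouldQueue
{
    public int $timeout = 600; // assumption: generous ceiling for 500 sequential items
    public int $tries = 1;     // failures are handled per item; don't replay the whole batch

    // ...
}
```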
### Q4: How to enforce tenant isolation in bulk jobs?

**Research Goal**: Ensure bulk operations cannot cross tenant boundaries (a critical security requirement).

**Findings**:

Laravel queue jobs serialize model instances poorly (especially Collections). Best practice:
```php
class BulkPolicyDeleteJob implements ShouldQueue
{
    public function __construct(
        public array $policyIds,        // array, NOT Collection
        public int $tenantId,           // explicit tenant ID
        public int $actorId,            // user ID for audit
        public int $bulkOperationRunId  // FK to tracking model
    ) {}

    public function handle(PolicyRepository $policies): void
    {
        // Verify all policies belong to the tenant (defensive check)
        $count = Policy::whereIn('id', $this->policyIds)
            ->where('tenant_id', $this->tenantId)
            ->count();

        if ($count !== count($this->policyIds)) {
            throw new \Exception('Tenant isolation violation detected');
        }

        // Proceed with bulk operation...
    }
}
```
**Key Insights**:
- Serialize IDs as an `array`, not a `Collection` (Collections don't serialize well)
- Pass an explicit `tenantId` parameter (don't rely on global scopes)
- Defensive check in the job: verify all IDs belong to the tenant before processing
- Audit log records `tenantId` and `actorId` for compliance
**Alternatives Considered**:

- **Global tenant scope**: Rely on Laravel's global scope filtering
  - Risk: the scope could be disabled or bypassed in the job context
  - Less explicit, harder to debug
  - Decision: Rejected - explicit is safer
- **Pass the User model** (`public User $user`):
  - Serializes the entire user object (inefficient)
  - The user could be deleted before the job runs
  - Decision: Rejected - use an `actorId` integer
**Decision**: Explicit `tenantId` + defensive validation in the job.
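The audit entry itself isn't specified in this document; purely as a hypothetical sketch, the job's completion step could link the run to an audit record, which would explain the `audit_log_id` reference in Q7 (the `AuditLog` model and its fields are assumptions):

```php
// Hypothetical completion step; the AuditLog model and fields are assumptions.
$audit = AuditLog::create([
    'tenant_id' => $this->tenantId,
    'user_id'   => $this->actorId,
    'action'    => 'bulk_policy_delete',
    'metadata'  => ['failures' => $results['failures'] ?? []],
]);

$run->update(['status' => 'completed', 'audit_log_id' => $audit->id]);
```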
### Q5: How to prevent sync from re-adding "deleted" policies?

**Research Goal**: A user bulk-deletes 50 policies locally but doesn't want them deleted in Intune. How do we prevent `SyncPoliciesJob` from re-importing them?

**Findings**:

Add an `ignored_at` timestamp column to the `policies` table:
```php
// Migration
Schema::table('policies', function (Blueprint $table) {
    $table->timestamp('ignored_at')->nullable()->after('deleted_at');
    $table->index('ignored_at'); // query optimization
});
```

```php
// Policy model
public function scopeNotIgnored($query)
{
    return $query->whereNull('ignored_at');
}

public function markIgnored(): void
{
    $this->update(['ignored_at' => now()]);
}
```
Modify `SyncPoliciesJob`:

```php
// Before: fetched all policies from Graph and upserted to the DB
// After: skip policies where ignored_at IS NOT NULL
public function handle(PolicySyncService $service): void
{
    $graphPolicies = $service->fetchFromGraph($this->types);

    foreach ($graphPolicies as $graphPolicy) {
        $existing = Policy::where('graph_id', $graphPolicy['id'])
            ->where('tenant_id', $this->tenantId)
            ->first();

        // Skip if locally ignored
        if ($existing && $existing->ignored_at !== null) {
            continue;
        }

        // Upsert policy...
    }
}
```
**Key Insight**: `ignored_at` decouples local tracking from Intune state. The user can:
- Keep the policy in Intune (not deleted remotely)
- Hide the policy in TenantPilot (`ignored_at` set)
- Restore the policy later (clear `ignored_at`; see the sketch below)
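The restore path isn't shown above; a one-line counterpart to `markIgnored()` (the method name is illustrative):

```php
// Illustrative counterpart to markIgnored(); the name is an assumption.
public function markRestored(): void
{
    $this->update(['ignored_at' => null]);
}
```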
**Alternatives Considered**:

- **Soft delete only** (`deleted_at`):
  - Problem: sync can't tell whether the user deleted locally or Intune deleted remotely
  - Would need a separate "deletion source" column
  - Decision: Rejected - `ignored_at` expresses the intent more clearly
- **Separate `sync_ignore` column**:
  - Same outcome as `ignored_at`, but less semantic
  - Decision: Rejected - `ignored_at` is more descriptive
**Decision**: Add the `ignored_at` timestamp and filter on it in `SyncPoliciesJob`.
### Q6: How to determine eligibility for Policy Version pruning?

**Research Goal**: Implement a safe "bulk delete old policy versions" operation that won't break backups or restores.

**Findings**:

Eligibility criteria (all must be true):
- `is_current = false` (not the latest version)
- `created_at < NOW() - 90 days` (configurable retention period)
- NOT referenced in `backup_items.policy_version_id` (foreign key check)
- NOT referenced in `restore_runs.metadata->policy_version_id` (JSONB check)
Implementation via an Eloquent scope:

```php
// app/Models/PolicyVersion.php
public function scopePruneEligible($query, int $retentionDays = 90)
{
    return $query
        ->where('is_current', false)
        ->where('created_at', '<', now()->subDays($retentionDays))
        ->whereDoesntHave('backupItems') // FK relationship
        ->whereNotIn('id', function ($subquery) {
            $subquery->select(DB::raw("CAST(metadata->>'policy_version_id' AS INTEGER)"))
                ->from('restore_runs')
                ->whereNotNull(DB::raw("metadata->>'policy_version_id'"));
        });
}
```
The bulk prune job:

```php
public function handle(): void
{
    foreach ($this->versionIds as $id) {
        $version = PolicyVersion::find($id);

        if (!$version) {
            $this->failures[] = ['id' => $id, 'reason' => 'Not found'];
            continue;
        }

        // Check eligibility
        $eligible = PolicyVersion::pruneEligible()
            ->where('id', $id)
            ->exists();

        if (!$eligible) {
            $this->skipped++;
            $this->failures[] = ['id' => $id, 'reason' => 'Referenced or too recent'];
            continue;
        }

        $version->delete(); // hard delete
        $this->succeeded++;
    }
}
```
**Key Insight**: The conservative eligibility check prevents accidental data loss. The user sees which versions were skipped and why.
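The same scope could also pre-filter a user's selection before dispatching the prune job; a usage sketch (`$selectedIds` is illustrative):

```php
// Sketch: narrow the selected IDs to prune-eligible ones before queueing.
$eligibleIds = PolicyVersion::pruneEligible(90)
    ->whereIn('id', $selectedIds)
    ->pluck('id')
    ->all();
```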
**Alternatives Considered**:
- **Soft delete first, hard delete later**: Adds complexity, no clear benefit
- **Skip the JSONB check**: Risks breaking restore runs that reference the version
- **Admin override** (allow force-delete even if referenced):
  - Too dangerous; conflicts with the immutability principle
  - Decision: Rejected
**Decision**: Eloquent scope `pruneEligible()` with strict checks.
### Q7: How to display progress notifications in Filament?

**Research Goal**: Show real-time progress for bulk operations without blocking the UI.

**Findings**:

Filament notifications are sent once and don't auto-update. Two options for progress tracking:

**Option 1: Custom Livewire component**
```blade
{{-- resources/views/livewire/bulk-operation-progress.blade.php --}}
<div wire:poll.5s="refresh">
    @if($run && !$run->isComplete())
        <div class="bg-blue-50 p-4 rounded">
            <h3>{{ $run->action }} in progress...</h3>
            <div class="w-full bg-gray-200 rounded">
                <div class="bg-blue-600 h-2 rounded" style="width: {{ $run->progressPercentage() }}%"></div>
            </div>
            <p>{{ $run->processed_items }}/{{ $run->total_items }} items processed</p>
        </div>
    @elseif($run && $run->isComplete())
        <div class="bg-green-50 p-4 rounded">
            <h3>✅ {{ $run->summaryText() }}</h3>
            @if($run->failed > 0)
                <a href="{{ route('filament.admin.resources.audit-logs.view', $run->audit_log_id) }}">View details</a>
            @endif
        </div>
    @endif
</div>
```
```php
// app/Livewire/BulkOperationProgress.php
use App\Models\BulkOperationRun;
use Illuminate\Contracts\View\View;
use Livewire\Component;

class BulkOperationProgress extends Component
{
    public int $bulkOperationRunId;
    public ?BulkOperationRun $run = null;

    public function mount(int $bulkOperationRunId): void
    {
        $this->bulkOperationRunId = $bulkOperationRunId;
        $this->refresh();
    }

    public function refresh(): void
    {
        $this->run = BulkOperationRun::find($this->bulkOperationRunId);

        // Stop polling if complete
        if ($this->run && $this->run->isComplete()) {
            $this->dispatch('bulkOperationComplete', runId: $this->run->id);
        }
    }

    public function render(): View
    {
        return view('livewire.bulk-operation-progress');
    }
}
```
**Option 2: Filament infolist (simpler, more integrated)**

```php
// Display on the BulkOperationRun resource's ViewRecord page
public static function infolist(Infolist $infolist): Infolist
{
    return $infolist
        ->schema([
            Infolists\Components\Section::make('Progress')
                ->schema([
                    Infolists\Components\TextEntry::make('summaryText')
                        ->label('Status'),
                    Infolists\Components\ViewEntry::make('progress')
                        ->view('filament.components.progress-bar')
                        ->state(fn ($record) => [
                            'percentage' => $record->progressPercentage(),
                            'processed' => $record->processed_items,
                            'total' => $record->total_items,
                        ]),
                ])
                ->poll('5s') // Filament's built-in polling
                ->hidden(fn ($record) => $record->isComplete()),
        ]);
}
```
**Decision**: Use Option 1 (custom Livewire component) for flexibility; embedding is a one-liner (example below). Embed it in:
- the Filament notification body (custom view)
- the resource page sidebar
- a dashboard widget (if the user wants to monitor all bulk operations)
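Embedding the component from Blade (Livewire v3 kebab-case tag syntax):

```blade
<livewire:bulk-operation-progress :bulk-operation-run-id="$run->id" />
```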
**Alternatives Considered**:
- Pusher/WebSockets: Too complex for 5s polling
- JavaScript polling: Less idiomatic for Laravel, harder to test
- Laravel Pulse: Overkill for a single feature
## Technology Stack Summary

| Component | Technology | Justification |
|---|---|---|
| Admin Panel | Filament v4 | Built-in bulk actions, forms, notifications |
| Reactive UI | Livewire v3 | Polling, state management, no JS framework needed |
| Queue System | Laravel Queue | Async job processing, retry, failure handling |
| Progress Tracking | BulkOperationRun model + Livewire polling | Persistent state, survives refresh, queryable |
| Type-to-Confirm | Filament form validation | Built-in UI, secure, reusable |
| Tenant Isolation | Explicit tenantId param | Fail-safe, auditable, no implicit scopes |
| Job Chunking | Collection::chunk(10) | Memory-efficient, simple, testable |
| Eligibility Checks | Eloquent scopes | Reusable, composable, database-level filtering |
| Database | PostgreSQL + JSONB | Native JSON support for item_ids, failures |
## Best Practices Applied

### Laravel Conventions
- ✅ Queue jobs implement the `ShouldQueue` interface
- ✅ Use Eloquent relationships, not raw queries
- ✅ Form validation via Filament rules
- ✅ PSR-12 code formatting (Laravel Pint)
### Safety & Security
- ✅ Tenant isolation enforced at the job level
- ✅ Type-to-confirm for ≥20 destructive items
- ✅ Fail-soft: continue on individual failures
- ✅ Circuit breaker: abort if >50% fail
- ✅ Audit logging for compliance
### Performance
- ✅ Chunked processing (10-20 items)
- ✅ Indexed queries (`tenant_id`, `ignored_at`)
- ✅ Polling interval: 5s (not 1s spam)
- ✅ JSONB for flexible metadata storage
### Testing
- ✅ Unit tests for jobs, scopes, and eligibility
- ✅ Feature tests for E2E flows
- ✅ Pest assertions for progress tracking (sketch below)
- ✅ Manual QA checklist for UI flows
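A minimal Pest sketch of the progress-tracking assertion, assuming the Q4 job and Q2 schema (factories, tenant values, and omitted run fields are illustrative):

```php
it('records progress after a bulk delete run', function () {
    $policies = Policy::factory()->count(3)->create(['tenant_id' => 1]);

    $run = BulkOperationRun::create([
        'status' => 'pending',
        'total_items' => 3,
        'item_ids' => $policies->pluck('id')->all(),
        // ... tenant_id, user_id, resource, action
    ]);

    // Run the job synchronously, resolving its dependency from the container
    (new BulkPolicyDeleteJob(
        policyIds: $policies->pluck('id')->all(),
        tenantId: 1,
        actorId: 1,
        bulkOperationRunId: $run->id,
    ))->handle(app(PolicyRepository::class));

    expect($run->refresh()->processed_items)->toBe(3);
});
```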
## Rejected Alternatives

| Alternative | Why Rejected |
|---|---|
| Bus::batch() | Adds complexity, not needed for sequential processing |
| Laravel Pulse | Overkill for single-feature progress tracking |
| Pusher/WebSockets | Infrastructure overhead, 5s polling sufficient |
| Global tenant scopes | Less explicit, harder to debug, security risk |
| Custom modal component | More code, less reusable than Filament form |
| Hard delete without checks | Too risky, violates immutability principle |
## Open Questions for Implementation

- **Chunk size**: Start with 10; benchmark if needed
- **Polling interval**: 5s default; make configurable?
- **Retention period**: 90 days for versions; make configurable?
- **Max bulk items**: Hard limit at 500? 1000?
- **Retry failed items**: Future enhancement or MVP?
**Status**: Research Complete
**Next Step**: Generate data-model.md