
Revaer

Centralized torrent orchestration with hot-reloadable configuration, consistent CLI/API surfaces, and observability-first defaults.

Revaer is a Rust workspace that coordinates torrent ingestion, filesystem operations, and operational guardrails from a PostgreSQL-backed control plane. The revaer-app binary composes focused crates covering the API, CLI, filesystem pipeline, telemetry, and libtorrent adapter.

What You’ll Find Here

  • Roadmap & Specs – Track the current Phase One scope and remaining delivery deltas.
  • Platform Interfaces – Configuration schema, HTTP API endpoints, and CLI command reference that match the current codebase.
  • Operational Guides – Runbook, release checklist, and setup flows for operators.
  • Architecture Decisions – ADRs documenting trade-offs across configuration, security, and engine integration.
  • API Reference – Generated OpenAPI description and usage guidance for the control plane surface.

Use the sidebar navigation (or [ and ] shortcuts) to explore individual topics. Most pages include headings that double as tags for machine-readable manifests generated by the docs indexer.

Contributing Updates

Documentation lives next to the code. Add or edit Markdown files under docs/, then run:

just docs

This builds the mdBook site and refreshes the LLM manifests that power the documentation search experience.

LLM Manifests

For ChatGPT and other LLM-based tooling, fetch llms.txt and the JSON manifests under llm/ (schema.json, manifest.json, summaries.json) that back the documentation search experience.

Phase One Roadmap

Last updated: 2026-04-04

This document captures the current delta between the Phase One objective and the existing codebase. It should be kept in sync as work progresses across the eight workstreams.

Snapshot

| Workstream | Current State | Key Gaps | Immediate Actions |
| --- | --- | --- | --- |
| Control Plane & Setup | Postgres schema, ConfigService watcher, setup CLI/API, immutable-key guard, history logging; loopback enforcement + RFC7807 pointers live | Engine hot-reload not yet exercising throttles; setup token lifecycle/error telemetry still thin | Add watcher-driven throttle tests, expand setup diagnostics and rate-limit guardrails |
| Torrent Domain & Adapter | Native libtorrent FFI (cxx) restored and default-enabled; session worker with alert pump/resume store, throttles, selection, and degraded health surfaced via event bus; stub path retained only when the feature is disabled | Native CI coverage exists, but alert/rate-limit regression coverage is still thin and broader validation of resume reconciliation and failure handling is still needed | Deepen alert/rate-limit/resume validation and harden failure handling |
| File Selection & FsOps | Idempotent FsOps pipeline now extracts zip/tar/tar.gz/tgz archives in-process, supports guarded 7z/rar extraction via external tools, runs PAR2 verify/repair stages, records checksum metadata in .revaer.meta, and applies move/copy/hardlink transfers with chmod/chown/umask handling | 7z/rar and PAR2 still depend on host tooling being installed; ownership overrides remain Unix-only by design, and broader recovery/failure coverage should keep expanding | Keep hardening extractor/PAR2 recovery scenarios, document host-tool prerequisites clearly, and expand FsOps telemetry + restart-path coverage |
| Public HTTP API & SSE | Admin setup/settings/torrent CRUD, SSE stream, metrics endpoint, OpenAPI generator, /api/v2/* qB façade with cookie sessions, rename/category/tag mutation, relocate, reannounce/recheck, transfer limits, and incremental rid sync | /v1/torrents/* pagination/filter matrix still partial; qB coverage is intentionally bounded rather than full parity; SSE replay still needs broader Last-Event-ID regression coverage | Finish pagination/filter story, document deliberate qB compatibility scope, and expand SSE replay regression tests |
| CLI Parity | Supports setup start/complete, settings patch, torrent add/remove/list/status/select/action flows, and CLI wrappers around config + torrent APIs | SSE tail UX and richer validation/diagnostic coverage still need hardening | Expand reconnecting tail coverage and tighten validation/exit-code contracts |
| Security & Observability | API key storage hashed, per-key rate limits and X-RateLimit-* headers exposed, tracing initialized, metrics registry exported, and dashboard metrics now sourced from runtime state | OTEL exporter path was placeholder-only and now needs operational validation; tracing/metrics coverage should keep expanding across engine/fsops failure paths | Validate OTLP exporter behavior in deployment flows and keep expanding engine/fsops observability coverage |
| CI & Packaging | GitHub Actions cover fmt/lint/deny/audit/tests/cov via just ci; native libtorrent CI exists; Dockerfile builds non-root image with bundled libtorrent and HEALTHCHECK; docs workflow publishes mdBook; image workflow now scans, attests, and signs published images | Rootfs posture remains documented rather than enforced, and image hardening still needs broader cross-arch/runtime validation | Keep image provenance/scan/sign gates in CI, harden container runtime guidance, and extend cross-arch/runtime validation |
| Operational End-to-End | Playwright-backed API/UI flows run via just ui-e2e, and just runbook now packages repeatable validation artifacts | Manual fault-injection drills still exist for extractor/permission/recovery scenarios | Keep automating the remaining runbook drills while retaining the operator-facing checklist |

Remaining Scope Specification

1. Torrent Engine Integration

  • Harden the native libtorrent session: keep the stub only for feature-off builds while ensuring the default path drives the real adapter for add/pause/resume/remove, sequential toggles, rate limits, selection updates, reannounce, and force-recheck.
  • Validate persisted fast-resume payloads, priorities, target directories, and sequential flags against the live session on startup; continue emitting reconciliation events when divergence is detected.
  • Translate libtorrent alerts into EventBus messages (FilesDiscovered, Progress, StateChanged, Completed, Failure) while respecting the ≤10 Hz per-torrent coalescing rule; recover from alert polling failures by degrading health and attempting bounded restarts.
  • Ensure global and per-torrent rate caps driven by engine_profile updates are enforced by libtorrent within two seconds, with audit logs surfaced when caps change.
  • Extend the feature-gated integration suite to execute against the native libtorrent build (resume restore, rate-limit enforcement, alert mapping) in addition to the in-process stub.
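The ≤10 Hz per-torrent coalescing rule above can be sketched as a per-torrent emission gate. The type and method names below are illustrative, not the adapter's actual API:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Drops per-torrent Progress events that arrive faster than the
/// coalescing rule allows (10 Hz = one event per 100 ms per torrent).
pub struct ProgressCoalescer {
    min_interval: Duration,
    last_emit: HashMap<u64, Instant>,
}

impl ProgressCoalescer {
    pub fn new(max_hz: u32) -> Self {
        Self {
            min_interval: Duration::from_secs(1) / max_hz,
            last_emit: HashMap::new(),
        }
    }

    /// Returns true when the event should be forwarded to the event bus.
    pub fn should_emit(&mut self, torrent_id: u64, now: Instant) -> bool {
        if let Some(prev) = self.last_emit.get(&torrent_id) {
            if now.duration_since(*prev) < self.min_interval {
                return false; // coalesced: too soon after the last emission
            }
        }
        self.last_emit.insert(torrent_id, now);
        true
    }
}

fn main() {
    let mut c = ProgressCoalescer::new(10);
    let t0 = Instant::now();
    assert!(c.should_emit(1, t0)); // first event always passes
    assert!(!c.should_emit(1, t0 + Duration::from_millis(50))); // inside the window
    assert!(c.should_emit(1, t0 + Duration::from_millis(150))); // past 100 ms
    println!("coalescer ok");
}
```

StateChanged, Completed, and Failure events would bypass such a gate; only high-frequency Progress updates need coalescing.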

2. File Selection & FsOps Pipeline

  • Keep include/exclude glob logic aligned with torrent selection so priority updates continue to reflect operator intent, including the @skip_fluff preset.
  • Extend the FsOps pipeline to additional archive formats (7z/tar), introduce the PAR2 verification/repair stage, and surface checksum metadata alongside the recorded .revaer.meta entries.
  • Add non-Unix fallbacks or clear operator guidance when ownership/umask directives cannot be honoured, and surface the condition via events and /health/full.
  • Harden dependency detection so missing extractor binaries trigger guarded degradation with actionable telemetry, then clear automatically once remediation succeeds.
  • Broaden integration coverage to include error paths (permission denied, unsupported archive) and restart scenarios that reuse persisted metadata, capturing metrics snapshots for each stage.
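The dependency-detection requirement can be illustrated with a simple PATH probe (Unix-style lookup; the helper names `find_tool` and `probe` are hypothetical, not the pipeline's real functions):

```rust
use std::env;
use std::path::PathBuf;

/// Looks for an external extractor binary (e.g. "7z", "par2") on PATH.
/// Unix-style lookup; a Windows variant would also try `.exe` suffixes.
pub fn find_tool(name: &str) -> Option<PathBuf> {
    let path = env::var_os("PATH")?;
    env::split_paths(&path)
        .map(|dir| dir.join(name))
        .find(|candidate| candidate.is_file())
}

/// Produces actionable messages for missing tools, suitable for
/// degrading health until remediation clears the condition.
pub fn probe(tools: &[&str]) -> Vec<String> {
    tools
        .iter()
        .filter(|t| find_tool(t).is_none())
        .map(|t| format!("extractor `{t}` not found on PATH; related stages disabled"))
        .collect()
}

fn main() {
    for warning in probe(&["7z", "par2", "unrar"]) {
        eprintln!("{warning}");
    }
}
```

Re-running the probe after remediation (and emitting a recovery event when it comes back empty) is what lets degradation clear automatically.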

3. Public HTTP API & SSE

  • Round out /v1/torrents with cursor pagination, rich filtering (state, tracker, extension), and stabilise reannounce/recheck/sequential toggles with regression tests.
  • Keep Problem+JSON responses consistent (including JSON Pointer metadata) and mirror them in CLI/user-facing tooling.
  • Enhance SSE with Last-Event-ID replay, duplicate suppression, and resiliency tests covering torrent + FsOps event fan-out.
  • Evolve the qB façade: tighten the cookie/session model, surface removals/categories/tags in incremental sync, and expose rename/reannounce operations.
  • Expand health reporting to /health/full, document façade coverage in OpenAPI/mdBook, and add integration tests that exercise pagination, SSE replay, and façade flows end-to-end.

4. CLI Parity

  • Add commands revaer ls, status, select, action, and tail, mirroring API filters, selection arguments (include/exclude/skip-fluff), sequential toggles, and data deletion flags.
  • Implement an SSE tailer that reconnects on failure, honors Last-Event-ID, and avoids duplicate terminal output.
  • Standardize exit codes (0 success, 2 validation, >2 runtime failures) and surface RFC7807 payloads, including pointer metadata, in human-readable CLI output.
  • Provide CLI integration tests that run against the API fixture stack, covering filter combinations, sequential toggles, and tail reconnection behaviour.
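The exit-code contract (0 success, 2 validation, >2 runtime failures) reduces to one mapping; the `CliOutcome` enum here is an illustrative stand-in for whatever error type the CLI actually carries:

```rust
/// CLI outcome mapped onto the documented exit-code contract.
pub enum CliOutcome {
    Success,
    Validation,  // rejected flags or input -> 2
    Runtime(u8), // transport/API/engine errors -> 3 and up
}

pub fn exit_code(outcome: &CliOutcome) -> u8 {
    match outcome {
        CliOutcome::Success => 0,
        CliOutcome::Validation => 2,
        // Runtime codes are clamped to start at 3 so they never
        // collide with the validation code.
        CliOutcome::Runtime(n) => (*n).max(3),
    }
}

fn main() {
    // e.g. a 429 Problem+JSON response surfaces as exit code 3
    assert_eq!(exit_code(&CliOutcome::Runtime(3)), 3);
    assert_eq!(exit_code(&CliOutcome::Validation), 2);
    println!("exit-code mapping ok");
}
```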

5. Security & Observability

  • Introduce API key lifecycle endpoints (issue, rotate, revoke) with hashed-at-rest storage, returning secrets only once; enforce per-key token-bucket rate limiting and include X-RateLimit-* headers.
  • Harden inputs by bounding magnet length, multipart size, filter glob counts, and header values; return Problem+JSON validation errors without panics for malformed requests.
  • Propagate tracing spans (request IDs) through the API, engine, and FsOps layers; ensure metrics cover HTTP status, event flow, queue depth, libtorrent transfer, and FsOps step durations, exposed via /metrics.
  • Reflect degraded health when tools are missing, engine sessions fault, or queue depth exceeds thresholds; emit corresponding SettingsChanged and HealthChanged events.
  • Document operational expectations for rate limiting, key rotation, and observability dashboards.
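A per-key token-bucket limiter of the kind described above can be sketched in a few lines; the struct and its fields are assumptions for illustration, not the production implementation:

```rust
use std::time::{Duration, Instant};

/// Per-key token bucket: `burst` tokens, refilled evenly over `period`.
/// Sketch of the model behind the X-RateLimit-* headers.
pub struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    pub fn new(burst: u32, period: Duration, now: Instant) -> Self {
        Self {
            capacity: burst as f64,
            tokens: burst as f64,
            refill_per_sec: burst as f64 / period.as_secs_f64(),
            last: now,
        }
    }

    /// Try to take one token; false means the request should get 429.
    pub fn try_acquire(&mut self, now: Instant) -> bool {
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }

    /// Whole tokens left, as a candidate X-RateLimit-Remaining value.
    pub fn remaining(&self) -> u32 {
        self.tokens as u32
    }
}

fn main() {
    // burst 1 / 60 s, matching the runbook's guard-rail drill
    let t0 = Instant::now();
    let mut bucket = TokenBucket::new(1, Duration::from_secs(60), t0);
    assert!(bucket.try_acquire(t0));
    assert!(!bucket.try_acquire(t0)); // immediate second call is throttled
    println!("remaining = {}", bucket.remaining());
}
```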

6. CI & Packaging

  • Keep GitHub Actions green across fmt/lint/deny/audit/tests/cov and add a matrix leg that runs the native libtorrent suite (REVAER_NATIVE_IT=1 with Docker host wiring).
  • Enforce an environment-access lint that fails CI if std::env reads occur outside the composition root (excluding DATABASE_URL).
  • Harden the container: retain non-root user, switch to read-only rootfs with explicit writable mounts, and gate builds with image scans and provenance/signing.
  • Produce cross-arch artifacts (x86_64/aarch64) and publish digests alongside build outputs and release notes.

7. Operational Runbook Automation

  • Author a script to execute the full phase objective on both x86_64 and aarch64: bootstrap using DATABASE_URL, complete setup token flow, add a magnet, monitor FilesDiscovered/Progress/Completed, run FsOps, simulate crash/restart with fast-resume recovery, adjust throttles, and validate degraded health when extractors are absent.
  • Capture assertions and logs for each phase, producing artifacts suitable for runbook review and CI retention; ensure failures mark the engine or pipeline health accordingly.
  • Include cleanup routines to return environments to a reusable state while retaining diagnostic logs.

8. Documentation & Final Polish

  • Update docs/phase-one-roadmap.md continuously and add ADRs covering engine architecture, FsOps design, API/CLI contracts, and security posture.
  • Regenerate docs/api/openapi.json alongside illustrative request/response examples for new endpoints.
  • Extend user-facing guides for CLI usage, health/metrics references, and operational setup covering API keys, rate limits, and degraded-mode recovery.
  • Provide a final Phase One release checklist that ties documentation, runbook, and CI artifacts together.

Next Steps Tracking

  1. Land setup/network hardening and control-plane polish.
  2. Keep the native libtorrent session as the default, expand coverage (native CI leg, alert/rate-limit/resume validation), and preserve the stub only for feature-off builds.
  3. Implement FsOps pipeline with allow-listed execution and metadata.
  4. Expose /v1/* APIs + CLI parity and reinforce security/observability.
  5. Stand up CI, packaging, and full runbook validation.

Phase One Remaining Engineering Specification

Objectives

  • Deliver a production-ready public interface (HTTP API, SSE, CLI) for torrent orchestration.
  • Ship FsOps-backed artefacts through API, CLI, telemetry, and documentation with demonstrable reliability.
  • Produce release artefacts (containers, binaries, documentation) that satisfy existing security, observability, and quality gates.

Scope Overview

  1. Public HTTP API & SSE Enhancements

    • /v1/torrents CRUD-style endpoints with cursor pagination, filtering, torrent actions, file selection updates, rate adjustments, and Problem+JSON responses.
    • SSE stream upgrades: Last-Event-ID replay, subscription filters, duplicate suppression, jitter-tolerant reconnect logic.
    • /health/full exposing engine/FsOps/config readiness, dependency metrics, and revision metadata.
    • Regenerated OpenAPI (JSON + examples) reflecting the full public surface.
  2. CLI Parity

    • Commands covering list/status/select/action/tail flows with shared filtering + pagination options.
    • SSE-backed tail command with Last-Event-ID resume, dedupe, and retry semantics aligned with the API.
    • Problem+JSON error output, structured exit codes (0 success, 2 validation, >2 runtime failures).
  3. Packaging & Documentation

    • Release-ready Docker image (non-root, readonly FS, volumes, healthcheck) bundling API server + docs.
    • Provenance-signed binaries for supported architectures, plus GitHub Actions workflows for build, docker, msrv, and coverage gates.
    • Updated ADRs, runbook, user guides, OpenAPI artefacts, and release checklist referencing the telemetry and security posture.
    • Documentation of new metrics/traces/guardrails (config watcher latency, FsOps events, API counters).

Security & Observability Requirements (Cross-Cutting)

  • All new API routes enforce API-key authentication with per-key rate limiting and guard-rail metrics.
  • Problem+JSON responses are mandatory; eliminate unwrap/panic paths and include invalid_params pointers on validation failure.
  • Trace propagation from API → engine → FsOps; CLI should emit/propagate TraceId when available.
  • Metrics: extend existing Prometheus registry with route labels, FsOps step counters, config watcher latency/failure gauges, and rate-limiter guardrails.
  • Health degradation events (Event::HealthChanged) must accompany any new guard-rail/latency breach or pipeline failure.
  • CLI commands should mask secrets in logs and optionally emit telemetry when configured (REVAER_TELEMETRY_ENDPOINT).

Detailed Work Breakdown

1. Public API & SSE

Design Considerations

  • Introduce DTO module (api::models) for request/response structs to share with the CLI.
  • Cursor pagination: encode UUID/timestamp as an opaque cursor in the next token; align Last-Event-ID semantics with event stream IDs.
  • Filtering: support state, tracker, extension, tags, and name substring; guard invalid combinations with Problem+JSON.
  • SSE filtering: permit query parameters for torrent subset, replays based on event type/state.
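One way to keep the cursor opaque is to pack the sort key into a fixed-width token. This sketch uses two u64 fields and hex encoding purely to stay dependency-free; the real api::models encoding (UUID plus timestamp, likely base64) may differ:

```rust
/// Opaque cursor packing (created_at micros, row id) into 32 hex chars.
/// Hypothetical layout for illustration only.
fn encode_cursor(created_at_micros: u64, id: u64) -> String {
    format!("{created_at_micros:016x}{id:016x}")
}

/// Returns None for malformed tokens, which the API would surface
/// as a Problem+JSON 400 rather than a panic.
fn decode_cursor(cursor: &str) -> Option<(u64, u64)> {
    if cursor.len() != 32 || !cursor.is_ascii() {
        return None;
    }
    let ts = u64::from_str_radix(&cursor[..16], 16).ok()?;
    let id = u64::from_str_radix(&cursor[16..], 16).ok()?;
    Some((ts, id))
}

fn main() {
    let cursor = encode_cursor(1_700_000_000_000_000, 42);
    assert_eq!(decode_cursor(&cursor), Some((1_700_000_000_000_000, 42)));
    assert!(decode_cursor("not-a-cursor").is_none());
    println!("cursor round-trip ok");
}
```

Pairing the timestamp with the id keeps pagination stable when multiple rows share a creation time.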

Implementation Tasks

  • Routes:
    • POST /v1/torrents – magnet or .torrent upload (streamed, payload size guard).
    • GET /v1/torrents – cursor pagination + filters.
    • GET /v1/torrents/{id} – detail view with FsOps metadata.
    • POST /v1/torrents/{id}/select – file selection update with validation.
    • POST /v1/torrents/{id}/action – pause/resume/remove (with data), reannounce, recheck, sequential toggle, rate limits.
  • SSE:
    • Accept Last-Event-ID header, deduplicate by event ID, filter streams by torrent ID/state.
    • Simulate jitter/disconnects in tests (tokio::time::pause, transport::Stream).
  • Health endpoint:
    • Aggregate config watcher metrics (latency, failures), FsOps status, engine guardrails, revision hash.
  • Problem+JSON mapping for all new errors with invalid_params pointer data.
  • OpenAPI:
    • Regenerate spec covering new endpoints, Problem responses, SSE details, and sample payloads.
  • Testing:
    • Unit tests for filter parsing, DTO validation, Problem+JSON outputs.
    • Integration tests using tower::Service harness for each route.
    • SSE reconnection tests with simulated delays and Last-Event-ID resume.
    • /health/full integration test verifying new fields and degraded scenarios.
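Duplicate suppression keyed on event IDs can be sketched as below; monotonically increasing integer IDs are an assumption made for illustration, not a guarantee about the wire format:

```rust
/// Client-side duplicate suppression for SSE replay.
pub struct EventDeduper {
    last_id: Option<u64>,
}

impl EventDeduper {
    /// `resume_from` is the Last-Event-ID carried across a reconnect.
    pub fn new(resume_from: Option<u64>) -> Self {
        Self { last_id: resume_from }
    }

    /// Returns true when the event is new; replayed frames at or
    /// below the resume point are dropped silently.
    pub fn accept(&mut self, id: u64) -> bool {
        if self.last_id.map_or(false, |last| id <= last) {
            return false;
        }
        self.last_id = Some(id);
        true
    }

    /// Value to send as the Last-Event-ID header on the next reconnect.
    pub fn last_event_id(&self) -> Option<u64> {
        self.last_id
    }
}

fn main() {
    let mut dedupe = EventDeduper::new(Some(10)); // resumed after reconnect
    assert!(!dedupe.accept(9));  // replayed frame, suppressed
    assert!(!dedupe.accept(10)); // already seen
    assert!(dedupe.accept(11));  // genuinely new event
    println!("dedupe ok");
}
```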

2. CLI Parity

Design Considerations

  • Reuse DTOs from API models; consider shared crate/module for request structs and Problem+JSON parsing.
  • Introduce output formatting with optional JSON/pretty table modes.
  • Provide configuration via env vars and CLI flags; align defaults with API (e.g., REVAER_API_URL, REVAER_API_KEY).

Implementation Tasks

  • Commands:
    • revaer ls – list torrents, support pagination (--cursor, --limit), filters (state/tracker/extension/tags).
    • revaer status <id> – torrent detail view, optional follow mode.
    • revaer select <id> – send selection rules from file/JSON (validate before submit).
    • revaer action <id> – actions (pause, resume, remove, remove-data, reannounce, recheck, sequential, rate).
    • revaer tail – SSE tail with Last-Event-ID persist (local file) and dedupe.
  • Problem+JSON handling:
    • Standardised pretty printer summarising title, detail, invalid_params; respect exit codes.
  • Telemetry:
    • Optional metrics emission (success/failure counters) when telemetry endpoint configured.
  • Testing:
    • Integration tests using httpmock to assert HTTP interactions and exit codes.
    • SSE tail tests with mocked stream delivering duplicates/disconnects.
    • Snapshot tests for JSON outputs (ensuring deterministic fields).
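The Problem+JSON pretty printer reduces to a small rendering step once the body is deserialized. The `Problem` struct below mirrors the RFC 7807 fields plus the invalid_params extension; deserialization itself (serde) is elided from this sketch:

```rust
/// Minimal Problem+JSON shape the CLI printer consumes.
pub struct Problem {
    pub title: String,
    pub detail: Option<String>,
    pub invalid_params: Vec<(String, String)>, // (JSON pointer, message)
}

/// Human-readable summary: title, optional detail, then one line
/// per invalid parameter with its JSON pointer.
pub fn render(problem: &Problem) -> String {
    let mut out = format!("error: {}", problem.title);
    if let Some(detail) = &problem.detail {
        out.push_str(&format!("\n  {detail}"));
    }
    for (pointer, message) in &problem.invalid_params {
        out.push_str(&format!("\n  {pointer}: {message}"));
    }
    out
}

fn main() {
    let problem = Problem {
        title: "Validation failed".into(),
        detail: Some("selection rules rejected".into()),
        invalid_params: vec![("/include/0".into(), "invalid glob".into())],
    };
    println!("{}", render(&problem));
}
```

The same rendered string would accompany exit code 2 for validation failures, keeping CLI output aligned with the API's pointer metadata.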

3. Packaging & Documentation

Design Considerations

  • Multi-stage Docker build: compile with Rust image, run on minimal base (distroless/alpine/ubi) with non-root user.
  • Healthcheck script hitting /health/full with timeout.
  • Release workflows should run on GitHub Actions with provenance metadata (supply-chain compliance).

Implementation Tasks

  • Dockerfile + Makefile/just target:
    • Build release binary, copy docs/api/openapi.json, set /app as workdir.
    • Define volumes for data/config, create user revaer, configure entrypoint.
  • GitHub Actions (update .github/workflows):
    • build-release: run just build-release, just api-export, attach binaries/docs.
    • docker: build image, run docker scan (trivy/grype), and push on release tags.
    • msrv: run just fmt lint test with pinned toolchain (documented in workflow).
    • cov: ensure just cov gate passes (≥80% lines/functions).
  • Documentation:
    • ADRs: update 003-libtorrent-session-runner, add FsOps design ADR, API/CLI contract ADR, security posture update (API keys, rate limits).
    • Runbook: scripted scenario covering bootstrap → torrent add → FsOps pipeline → restart resume → rate throttle adjustments → degraded health simulation → recovery.
    • User guides: CLI usage, metrics/telemetry reference, operational setup (keys, rate limits, config watcher health).
    • OpenAPI: regenerate JSON, include sample Problem+JSON payloads and SSE description.
    • Release checklist: steps to run just ci, verify coverage, run docker scan, execute runbook, and tag release.
  • Testing:
    • Validate Docker container runtime (healthcheck, volume mounts, non-root permissions).
    • Perform coverage review ensuring new tests bring line/function coverage ≥80%.
    • Execute runbook; capture logs/metrics and link in docs.

Cross-Cutting Deliverables

  • API key lifecycle (issue/rotate/revoke) extended with per-key rate limiting, recorded in telemetry and docs.
  • Config watcher telemetry integrated into /health/full and metrics registry.
  • CLI and API emit guard-rail telemetry on violations (loopback enforcement, FsOps errors, rate-limit breaches).
  • All new code paths covered by unit/integration tests; follow-up to update just cov gating.
  • Documentation kept up-to-date with implementation details and tested flows.

Sequencing (Suggested)

  1. Build API models and endpoints (foundation for CLI).
  2. Implement SSE enhancements while adding API integration tests.
  3. Extend CLI commands leveraging shared DTOs.
  4. Embed telemetry (metrics/traces) throughout API/CLI/FsOps changes.
  5. Stand up Docker build + CI workflows.
  6. Update ADRs, runbook, user guides, OpenAPI, and release checklist.
  7. Execute full QA cycle (coverage, docker scan, runbook, manual verification) and prepare for release tagging.

Acceptance Criteria

  • just lint, just test, just cov and full just ci pass locally and in CI.
  • Coverage (lines + functions) ≥ 80% across workspace.
  • Docker image passes security scan with zero unwaived high severity findings.
  • Runbook executed end-to-end; results referenced in documentation.
  • OpenAPI specification and CLI docs match implemented behaviour.
  • Release checklist completed with artefacts attached (binaries, Docker image, OpenAPI, docs).

Phase One Runbook

This runbook exercises the end-to-end control plane, validating FsOps, telemetry, and guard rails. The primary automated entrypoint is just runbook, which wraps the Playwright-backed API/UI validation flow, collects artifacts, and provides a repeatable baseline before any manual drills.

Automated Validation

Run the automated baseline first:

just runbook

Expected outputs:

  • artifacts/runbook/summary.txt
  • artifacts/runbook/playwright-report/index.html
  • artifacts/runbook/test-results/
  • artifacts/runbook/logs/

The automated runbook currently covers the bootstrap, dashboard/API health, settings, and torrent-management control-plane paths exercised by just ui-e2e. Keep the manual checks below only for deployment-specific fault-injection drills that require operator intervention against real mounts, permissions, and restart boundaries.

Prerequisites

  • Docker image revaer:ci (built via just docker-build) or a local revaer-app binary (just build-release).
  • PostgreSQL instance accessible to the application.
  • API key with a conservative rate limit (e.g., burst 5, period 60s).
  • CLI configured with REVAER_API_URL, REVAER_API_KEY, and optional REVAER_TELEMETRY_ENDPOINT.

Scenario

  1. Bootstrap

    • Issue a setup token: revaer setup start --issued-by runbook.
    • Complete configuration with CLI secrets and directories: revaer setup complete --instance runbook --bind 127.0.0.1 --resume-dir .server_root/resume --download-root .server_root/downloads --library-root .server_root/library --api-key-label runbook --passphrase <pass>.
    • Capture the committed snapshot via revaer config get --output table and confirm /health/full returns status=ok with guardrail_violations_total=0.
  2. Add Torrent & Observe FsOps

    • Add a torrent: revaer torrent add <magnet> --name runbook.
    • Tail events: revaer tail --event torrent_added,progress,state_changed --resume-file .server_root/revaer.tail.
    • Verify FsOps emits fsops_started, fsops_completed, and Prometheus counters fsops_steps_total increase.
  3. Restart & Resume

    • Stop the application, restart it, and ensure the torrent catalog repopulates.
    • Confirm SelectionReconciled (if metadata diverges) and HealthChanged clears once resume succeeds.
  4. Rate Limit Guard-Rail

    • Apply a tight API key limit (burst 1 / per_seconds 60) via revaer config set --file rate-limit.json (using a JSON patch that updates the relevant key).
    • Execute three rapid CLI calls (e.g., revaer status <id>). The third should exit with code 3, displaying a 429 Problem+JSON response.
    • Inspect /metrics to verify api_rate_limit_throttled_total incremented and /health/full reflects degraded=["api_rate_limit_guard"].
  5. Recovery

    • Restore the API key limit to an acceptable value through another revaer config set ... invocation.
    • Re-run revaer status <id> to confirm success, guardrail_violations_total stops increasing, and degraded returns to [].
  6. FsOps Failure Simulation

    • Temporarily revoke write permissions on the library directory and re-run a completion.
    • Observe fsops_failed events, HealthChanged with ["fsops"], and guard-rail telemetry.
    • Restore permissions and confirm recovery events.

Manual-only rationale:

  • Permission failures and restart/resume drills depend on the actual runtime mount layout, writable volumes, and supervisor behavior of the target deployment.
  • The checked-in automation covers the repeatable control-plane baseline; these remaining drills intentionally stay manual so operators can validate their real environment rather than a simulated local-only shell.

Verification Artifacts

  • Review artifacts/runbook/summary.txt from just runbook.
  • Archive CLI telemetry emitted to REVAER_TELEMETRY_ENDPOINT when the manual scenario enables it.
  • Capture Prometheus scrapings (/metrics) before and after the manual drills.
  • Record /health/full JSON snapshots for each phase.

Successful completion of this runbook satisfies the operational validation gate defined in AGENT.md.

Phase One Release Checklist

  1. Branch Hygiene

    • Ensure main is green (CI pipeline complete).
    • Review outstanding ADRs and docs for freshness.
  2. Build & Test

    • just ci
    • just build-release
    • just api-export
  3. Artefact Verification

    • Binary: target/release/revaer-app
    • Checksum: sha256sum target/release/revaer-app
    • OpenAPI: docs/api/openapi.json
    • Helm chart: dist/helm/revaer-<version>.tgz
    • Helm provenance: dist/helm/revaer-<version>.tgz.prov
    • Helm public key: dist/helm/revaer-helm-public.asc
    • Helm public keyring: dist/helm/revaer-helm-public.gpg
    • Docker image: just docker-build && just docker-scan
    • Published GHCR image: verify Trivy scan, SBOM/provenance attestations, and Cosign signatures from the image workflow
    • Published OCI chart: verify oci://ghcr.io/<owner>/charts/revaer:<version> plus the artifacthub.io metadata tag and helm verify against the published public key
  4. Runbook Execution

    • Run just runbook
    • Follow the remaining manual-only drills in docs/runbook.md
    • Archive CLI telemetry, /metrics, /health/full snapshots.
  5. Documentation Refresh

    • Verify ADRs 005–007 reflect current design.
    • Update user guides (docs/api/guides/*.md) with any behavioural changes.
  6. Tag & Publish

    • Create annotated tag: git tag -a vX.Y.Z -m "Phase One release"
    • Push tag: git push origin vX.Y.Z
    • Attach artefacts generated by the build-release workflow, including the Helm chart archive, provenance file, and Helm public key.
    • Confirm the OCI chart publish completed after the GitHub release so the artifacthub.io/signKey URL resolves.
    • Confirm the GHCR chart package is public so Artifact Hub can pull oci://ghcr.io/<owner>/charts/revaer anonymously.
    • In Artifact Hub, add or claim oci://ghcr.io/<owner>/charts/revaer, then verify that the published artifacthub.io metadata tag includes the expected repository ID and owner identity.
    • After Artifact Hub shows Verified publisher, file the official status request for the Revaer publisher or organization. Use revaer-logo.png for the Artifact Hub repository and organization logo during that setup.
  7. Post-Release Monitoring

    • Watch rate-limit and guard-rail metrics.
    • Confirm HealthChanged events return to empty degraded set.
    • Validate automation telemetry for CLI success rates.

Web UI - Phase 1

Rust/Yew UI for the Phase 1 torrent workflow. The goal is a responsive, touch-friendly surface that stays usable on 360px phones through 4K desktops while handling large torrent libraries.

  • Pages: Dashboard, Torrents (list + detail), Logs, Health, Settings.
  • Modes: Simple (trimmed controls) and Advanced (full controls). Stored in local storage.
  • Transport: REST for initial payloads; fetch-based SSE for live updates and logs (header-auth supported, EventSource not used).

Layout and breakpoints

| Name | Width | Default behaviors |
| --- | --- | --- |
| xs | 0-479px | Card view for torrents, drawer navigation, stacked dashboard cards |
| sm | 480-767px | Card view, two-column stats grid inside cards |
| md | 768-1023px | Compact table, tabbed detail view |
| lg | 1024-1439px | Full table, fixed sidebar |
| xl | 1440-1919px | Split panes and wider tables |
| 2xl | 1920px+ | Ultra-wide tables with capped text widths |

Table responsiveness: required columns (Name, Status, Progress, Down, Up) stay pinned; ETA, Ratio, Size, Tags, Path, Updated collapse into overflow or the detail drawer when space is constrained.

Detail view: mobile renders tabs (Overview, Files, Options); desktop promotes a split layout that keeps overview and options visible together at lg+.

Virtualization: the torrent list uses a windowed renderer to keep large libraries responsive; selection stays highlighted for keyboard actions.

Auth and setup

  • API key auth is default. The UI prompts for key_id:secret and stores it in local storage with expiry metadata.
  • If app_profile.auth_mode is none and the request originates from a local network, the UI can enter anonymous mode.
  • Setup mode guides the operator through the setup token flow and stores the generated API key after completion.

Transport and SSE

  • Primary SSE: /v1/torrents/events with filters for torrent id, event kind, and state.
  • Fallback SSE: /v1/events/stream if the primary endpoint is unavailable.
  • Logs stream: /v1/logs/stream.
  • SSE requests attach x-revaer-api-key and Last-Event-ID headers.

Settings coverage

Settings tabs are grouped into: Downloads, Seeding, Network, Storage, Labels, and System. Each tab reflects the corresponding config section and validation errors from ProblemDetails responses.

Theming and localization

  • Theme tokens and layout variables live in static/style.css.
  • Theme selection follows OS preference on first load and persists to local storage.
  • Locale selector uses JSON bundles in i18n/ with English fallback and RTL hinting.

Running the UI

  • Crate: crates/revaer-ui (Yew + wasm).
  • Commands: just ui-serve to preview, just ui-build for release builds.
  • Assets: static/style.css holds palette/breakpoints; index.html + Trunk.toml bootstrap trunk.

Web UI Flows and Diagrams

Visual references for the Phase 1 UX: navigation, component wiring, SSE handling, and torrent lifecycle. Use these diagrams when extending the UI or adding tests.

```mermaid
flowchart LR
    Nav["Sidebar / Drawer"] --> Dash[Dashboard]
    Nav --> Torrents[Torrents]
    Nav --> Logs[Logs]
    Nav --> Health[Health]
    Nav --> Settings[Settings]
    Torrents --> Detail["Detail route /torrents/:id"]
    Detail --> Overview[Overview]
    Detail --> Files[Files]
    Detail --> Options[Options]
```

Component graph

```mermaid
flowchart TB
    app["App (RevaerApp)"]
    shell["AppShell: nav / theme / locale"]
    dash[Dashboard]
    torrents["Torrents list + detail"]
    settings[Settings]
    logs[Logs]
    health[Health]
    api[API]

    app --> shell
    shell --> dash
    shell --> torrents
    shell --> settings
    shell --> logs
    shell --> health

    dash -- "GET /v1/dashboard" --> api
    torrents -- "GET /v1/torrents" --> api
    torrents -- "GET /v1/torrents/{id}" --> api
    torrents -- "POST /v1/torrents/{id}/action" --> api
    torrents -- "PATCH /v1/torrents/{id}/options" --> api
    torrents -- "POST /v1/torrents/{id}/select" --> api
    torrents -- "SSE /v1/torrents/events" --> api
    logs -- "SSE /v1/logs/stream" --> api
    health -- "GET /health/full" --> api
```

SSE event flow

```mermaid
sequenceDiagram
    participant UI as UI
    participant Fetch as Fetch Stream
    participant API as API/SSE
    participant State as Store

    UI->>Fetch: build URL + headers (x-revaer-api-key, Last-Event-ID)
    Fetch->>API: GET /v1/torrents/events (fallback /v1/events/stream)
    API-->>Fetch: SSE frames
    Fetch->>State: parse + batch updates
    State->>UI: render list, detail, dashboard, health badges
    UI->>Fetch: reconnect with backoff and resume id
```

Torrent lifecycle (UI perspective)

stateDiagram-v2
    [*] --> Added : magnet/upload
    Added --> Queueing : server-side validation
    Queueing --> Downloading
    Downloading --> Checking : recheck or hash
    Downloading --> Completed : 100% + seeding ready
    Checking --> Downloading : if data matches
    Completed --> FsOps : move/rename per policy
    FsOps --> Seeding
    Seeding --> Completed : ratio met / stop rules
    Completed --> Removed : delete (+data optional)

Interaction notes

  • SSE disconnect overlay shows last event timestamp, retry countdown (1s to 30s exponential with jitter), and diagnostics (auth mode, reason).
  • Table virtualization is required beyond 500 rows; virtual scroll must preserve keyboard focus order and pinned columns.
  • Mobile detail view uses tabs (Overview, Files, Options); desktop uses a split layout so overview and options stay visible together at lg+.

Configuration Surface

Canonical reference for the PostgreSQL-backed settings documents that drive Revaer’s runtime behavior.

Revaer persists operator-facing configuration inside the settings_* tables. The API (ConfigService) exposes strongly typed snapshots consumed by the API server, torrent engine, filesystem pipeline, and CLI. Every change flows through a SettingsChangeset, ensuring a single validation path whether commands originate from the setup flow or the admin API.

Snapshot components

The /.well-known/revaer.json endpoint, the authenticated GET /v1/config route, and the revaer config get CLI command all return the same structure:

{
  "revision": 42,
  "app_profile": {
    "...": "..."
  },
  "engine_profile": {
    "...": "..."
  },
  "engine_profile_effective": {
    "...": "..."
  },
  "fs_policy": {
    "...": "..."
  },
  "api_keys": [
    {
      "key_id": "admin",
      "label": "bootstrap",
      "enabled": true,
      "rate_limit": null
    }
  ]
}

engine_profile_effective is the normalized engine profile (clamped limits, derived defaults, warnings applied) used by the orchestrator.

App profile (settings_app_profile)

  • id (UUID) - Singleton identifier for the current document.
  • instance_name (string) - Human-readable label surfaced in the UI and CLI.
  • mode (setup or active) - Gatekeeper for authentication middleware and setup flow.
  • auth_mode (api_key or none) - API access policy; none allows anonymous access on local networks only.
  • version (integer) - Optimistic locking counter maintained by ConfigService.
  • http_port (integer) - Published TCP port for the API server.
  • bind_addr (string, IPv4/IPv6) - Listen address for the API server.
  • local_networks (array) - CIDR ranges treated as local for anonymous access and recovery flows.
  • telemetry (object) - Structured telemetry config (level, format, otel_enabled, otel_service_name, otel_endpoint).
  • label_policies (array) - Per-category/tag policy overrides (download dir, rate limits, queue position).
  • immutable_keys (array) - Fields that cannot be mutated via patches (ConfigError::ImmutableField).

Engine profile (settings_engine_profile)

Network and transport

  • implementation - engine identifier (libtorrent or stub).
  • listen_port and listen_interfaces - incoming listener configuration.
  • ipv6_mode - disabled, prefer, or require.
  • enable_lsd, enable_upnp, enable_natpmp, enable_pex - discovery toggles (default off).
  • dht, dht_bootstrap_nodes, dht_router_nodes - DHT configuration.
  • outgoing_port_min / outgoing_port_max - optional port range for outgoing connections.
  • peer_dscp - optional DSCP/TOS codepoint (0-63) for peer sockets.

Privacy and protocol controls

  • anonymous_mode, force_proxy, prefer_rc4.
  • allow_multiple_connections_per_ip.
  • enable_outgoing_utp, enable_incoming_utp.

Limits and scheduling

  • max_active, max_download_bps, max_upload_bps.
  • seed_ratio_limit, seed_time_limit.
  • connections_limit, connections_limit_per_torrent.
  • unchoke_slots, half_open_limit, optimistic_unchoke_slots.
  • stats_interval_ms, max_queued_disk_bytes.
  • alt_speed (caps and optional schedule).

Behavior

  • sequential_default.
  • auto_managed, auto_manage_prefer_seeds, dont_count_slow_torrents.
  • super_seeding, strict_super_seeding.
  • choking_algorithm, seed_choking_algorithm.

Storage

  • resume_dir, download_root.
  • storage_mode, use_partfile.
  • disk_read_mode, disk_write_mode, verify_piece_hashes.
  • cache_size, cache_expiry, coalesce_reads, coalesce_writes, use_disk_cache_pool.

Tracker and filtering

  • tracker (user-agent, announce overrides).
  • ip_filter (inline rules plus optional remote blocklist).
  • peer_classes (per-class caps and throttles).

Filesystem policy (settings_fs_policy)

  • library_root (string) - Destination directory for completed artifacts.
  • extract (bool) - Whether completed payloads are extracted.
  • par2 (string) - disabled, verify, or repair (verify is also the compatibility behavior for legacy enabled).
  • flatten (bool) - Collapse single-file directories when moving into the library.
  • move_mode (string) - copy, move, or hardlink.
  • cleanup_keep / cleanup_drop (array) - Glob patterns retaining or removing files.
  • chmod_file / chmod_dir (optional string) - Octal permissions applied to outputs.
  • owner / group (optional string) - Ownership override (Unix only).
  • umask (optional string) - Umask used to derive default permissions.
  • allow_paths (array) - Allowed staging/library paths.

Extraction is built in for zip, tar, tar.gz, and tgz. 7z and rar extraction use external tools (7zz, 7z, unar, or unrar) and fail with a structured FsOps error if none are installed. PAR2 verification/repair requires the par2 CLI when par2 is set to verify or repair. On non-Unix platforms, ownership overrides remain unsupported and FsOps returns an explicit error instead of silently drifting from policy.
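The cleanup_keep / cleanup_drop interaction can be sketched with stdlib globs. The precedence shown here (keep patterns win over drop patterns) is an assumption made for the sketch, not a behavior the policy docs state explicitly.

```python
import fnmatch

def should_remove(path: str, keep: list[str], drop: list[str]) -> bool:
    """Return True if `path` matches a drop pattern and no keep pattern.

    Keep-wins-over-drop precedence is an assumed policy for this sketch.
    """
    if any(fnmatch.fnmatch(path, pat) for pat in keep):
        return False
    return any(fnmatch.fnmatch(path, pat) for pat in drop)
```

A file matching both lists is retained under this assumption; a file matching neither list is left untouched.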

API keys and secrets

Patches can create, update, or revoke keys and named secrets. The request format mirrors SettingsChangeset:

{
  "api_keys": [
    {
      "op": "upsert",
      "key_id": "admin",
      "label": "primary",
      "enabled": true,
      "secret": "optional-override",
      "rate_limit": { "burst": 10, "per_seconds": 1 }
    }
  ],
  "secrets": [
    { "op": "set", "name": "libtorrent.passphrase", "value": "..." }
  ]
}

The API server enforces bucketed rate limits when rate_limit is supplied (burst requests per per_seconds window). Invalid field names or mutations against immutable_keys yield RFC9457 ProblemDetails responses with an invalid_params array matching the JSON pointer returned by ConfigError.
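A limiter matching the {burst, per_seconds} shape above might look like this sketch. The server's exact algorithm is not specified here; this assumes a simple continuously-refilling token bucket, with the clock injectable for testing.

```python
import time

class TokenBucket:
    """Sketch of a {burst, per_seconds} limiter; assumes continuous refill."""

    def __init__(self, burst: int, per_seconds: float, now=time.monotonic):
        self.capacity = float(burst)
        self.refill_rate = burst / per_seconds  # tokens per second
        self.tokens = float(burst)
        self.now = now
        self.last = now()

    def allow(self) -> bool:
        current = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (current - self.last) * self.refill_rate)
        self.last = current
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # a server would map this to 429 Too Many Requests
```

With `{"burst": 10, "per_seconds": 1}`, ten calls pass immediately and the eleventh is throttled until the window refills.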

Telemetry toggle

Revaer boots with structured logging and Prometheus metrics by default. OpenTelemetry export remains opt-in: set REVAER_ENABLE_OTEL=true alongside your revaer-app process (optionally overriding REVAER_OTEL_SERVICE_NAME and REVAER_OTEL_EXPORTER, or using OTEL_EXPORTER_OTLP_ENDPOINT) to attach the OTLP tracing exporter. When the flag is absent, no OpenTelemetry exporter is initialized.

Change workflows

  • Setup - POST /admin/setup/start issues a one-time token. POST /admin/setup/complete consumes that token, applies the provided SettingsChangeset, forces app_profile.mode to active, and returns the hydrated snapshot along with the generated API key.
  • Ongoing updates - PATCH /v1/config (CLI: revaer config set --file changes.json) requires an API key and supports partial documents. Any field omitted from the payload remains untouched. The legacy /admin/settings alias remains for compatibility.
  • Snapshot access - GET /.well-known/revaer.json (no auth), GET /v1/config (API key), GET /health/full, and revaer config get return the current revision so automation and dashboards can verify configuration drift without shell access.

Revaer publishes SettingsChanged events on every successful mutation, ensuring subscribers refresh in-memory caches without polling.

HTTP API

REST + SSE surface exposed by revaer-api. The OpenAPI document is served at /docs/openapi.json and regenerated via just api-export.

Authentication

  • Setup flow - /admin/setup/start is open. /admin/setup/complete requires the x-revaer-setup-token header with the one-time token returned by setup start. The server refuses setup calls once app_profile.mode is active.
  • Operator actions - All /admin/* (after setup) and /v1/* endpoints require x-revaer-api-key: key_id:secret. The middleware validates the key via ConfigService, enforces per-key rate limiting, and rejects calls while the instance remains in setup mode.
  • Request correlation - An optional x-request-id header is echoed into tracing spans and surfaced on SSE traffic. The CLI auto-populates this header per invocation.

Error responses follow RFC9457 (ProblemDetails) and include invalid_params entries when validation pinpoints a JSON pointer within the payload.

Endpoint inventory (core surface)

Public (no auth)

  • GET /health, GET /health/full
  • GET /metrics
  • GET /.well-known/revaer.json
  • GET /docs/openapi.json

Setup and admin

  • POST /admin/setup/start
  • POST /admin/setup/complete
  • POST /admin/factory-reset
  • PATCH /admin/settings (alias for PATCH /v1/config)
  • GET/POST/DELETE /admin/torrents
  • GET /admin/torrents/{id}
  • POST /admin/torrents/create
  • GET /admin/torrents/categories, GET /admin/torrents/tags
  • GET /admin/torrents/{id}/peers

Config and auth

  • GET /v1/config (authenticated snapshot)
  • PATCH /v1/config (apply SettingsChangeset)
  • POST /v1/auth/refresh (refresh API key)

Dashboard and filesystem

  • GET /v1/dashboard
  • GET /v1/fs/browse

Torrent lifecycle

  • GET/POST /v1/torrents
  • GET /v1/torrents/{id}
  • POST /v1/torrents/{id}/select
  • PATCH /v1/torrents/{id}/options
  • POST /v1/torrents/{id}/action
  • POST /v1/torrents/create
  • GET /v1/torrents/categories, GET /v1/torrents/tags
  • GET /v1/torrents/{id}/peers
  • GET/PATCH/DELETE /v1/torrents/{id}/trackers
  • PATCH /v1/torrents/{id}/web_seeds

Events and logs

  • GET /v1/torrents/events (primary SSE stream)
  • GET /v1/events, GET /v1/events/stream (SSE aliases)
  • GET /v1/logs/stream

All torrent-management endpoints require the torrent workflow to be wired. If the engine is unavailable, the API returns 503 Service Unavailable.

Torrent submission (POST /v1/torrents)

Required headers: x-revaer-api-key. Provide either magnet or metainfo; the server rejects payloads missing both. Optional fields:

  • download_dir - Overrides the engine profile’s staging directory.
  • sequential - Enables sequential downloading for this torrent only.
  • tags / trackers - Stored alongside the torrent for filtering and bookkeeping.
  • include / exclude / skip_fluff - File selection bootstrap applied before metadata fetch completes.
  • max_download_bps / max_upload_bps - Per-torrent rate limits (bps) passed to the workflow.

On success the server returns 202 Accepted after dispatching TorrentWorkflow::add_torrent. The torrent ID in the payload becomes the canonical identifier.

Listing and filtering (GET /v1/torrents)

Query parameters:

  • limit (default 50, max 200)
  • cursor - Base64 token returned in next
  • state, tracker, extension, tags, name - Comma-separated filters (case-insensitive)

The response body is TorrentListResponse with an optional next cursor when additional pages exist.
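Paging through the list endpoint with the next cursor can be sketched like this. fetch_page is a hypothetical stand-in for the real GET /v1/torrents call, and the "torrents"/"next" field names are assumptions about the TorrentListResponse shape.

```python
def iter_torrents(fetch_page, limit=50):
    """Yield torrents across pages until no `next` cursor is returned.

    `fetch_page(limit, cursor)` is a hypothetical stand-in for
    GET /v1/torrents?limit=...&cursor=...
    """
    cursor = None
    while True:
        page = fetch_page(limit=limit, cursor=cursor)
        yield from page["torrents"]
        cursor = page.get("next")
        if not cursor:
            break
```

The opaque Base64 cursor is passed back verbatim; the client never needs to decode it.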

Torrent actions (POST /v1/torrents/{id}/action)

type determines the shape of the body:

{ "type": "remove", "delete_data": true }
{ "type": "sequential", "enable": false }
{ "type": "rate", "download_bps": 1048576, "upload_bps": null }

Failures propagate engine errors as 500 Internal Server Error with a descriptive message in detail.

SSE stream (GET /v1/torrents/events)

Headers:

  • x-revaer-api-key
  • Optional Last-Event-ID - resuming from a previously stored ID (the CLI stores this via --resume-file).

Query parameters:

  • torrent - Comma-separated UUIDs.
  • event - Comma-separated event kinds. Valid values include torrent_added, files_discovered, progress, state_changed, completed, metadata_updated, torrent_removed, fsops_started, fsops_progress, fsops_completed, fsops_failed, settings_changed, health_changed, selection_reconciled.
  • state - Comma-separated torrent states (downloading, completed, etc.).

The server maintains a 20-second keep-alive ping and enforces filtering before events hit the wire.
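On the client side, the SSE frames (including the keep-alive pings, which arrive as comment lines) can be parsed with a small accumulator. This is a generic parser for the standard SSE wire format, not Revaer-specific code.

```python
def parse_sse(lines):
    """Parse SSE wire-format lines into (id, event, data) tuples.

    Comment lines starting with ':' (keep-alive pings) are ignored;
    a blank line dispatches the accumulated event.
    """
    event_id, event_type, data = None, "message", []
    for line in lines:
        if line == "":
            if data:
                yield (event_id, event_type, "\n".join(data))
            event_type, data = "message", []
        elif line.startswith(":"):
            continue  # keep-alive ping
        elif line.startswith("id:"):
            event_id = line[3:].strip()
        elif line.startswith("event:"):
            event_type = line[6:].strip()
        elif line.startswith("data:"):
            data.append(line[5:].strip())
```

The last id seen is what a resuming client would persist and replay via Last-Event-ID.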

Health and metrics

  • GET /health - Primary readiness probe used by orchestration systems. Adds database to the degraded list if PostgreSQL is unreachable.
  • GET /health/full - Returns the deployment revision, build SHA, metrics snapshot (config_guardrail_violations_total, api_rate_limit_throttled_total, etc.), and torrent queue depth.
  • GET /metrics - Exposes the same counters for Prometheus scraping.

For the complete schema definitions, consult the generated OpenAPI (just api-export).

CLI Reference

revaer-cli provides parity with the API for setup, configuration management, torrent lifecycle, and observability.

Global flags and environment

  • --api-url <URL> (env REVAER_API_URL, default http://127.0.0.1:7070) - Base URL for API requests.
  • --api-key <key_id:secret> (env REVAER_API_KEY, no default) - Required for all post-setup commands that mutate or read torrents.
  • --timeout <secs> (env REVAER_HTTP_TIMEOUT_SECS, default 10) - Per-request HTTP timeout.
  • --output <table|json> (no env, default table) - Output format for command results.

Each invocation attaches a unique x-request-id header to its API requests; the CLI can optionally emit telemetry events when REVAER_TELEMETRY_ENDPOINT is set.

Setup flow

revaer setup start [--issued-by <label>] [--ttl-seconds <secs>]

  • Calls POST /admin/setup/start.
  • Prints the plaintext token followed by its ISO8601 expiry.
  • Use --issued-by to tag the token source (defaults to api).

revaer setup complete --instance <name> --bind <addr> --port <port> --resume-dir <path> --download-root <path> --library-root <path> --api-key-label <label> [--api-key-id <id>] [--passphrase <value>] [--token <token>]

  • Loads the setup token from --token or REVAER_SETUP_TOKEN.
  • Builds a SettingsChangeset containing the app profile, engine profile, filesystem policy, API key, and optional secret.
  • Forces app_profile.mode = "active".
  • Echoes the generated API key (key_id:secret) on success; store it securely before continuing.

Configuration maintenance

revaer config get

  • Fetches the current configuration snapshot.
  • Mirrors GET /v1/config output.

revaer config set --file <path>

  • Reads a JSON file containing a partial SettingsChangeset.
  • Requires an API key.
  • Returns a formatted ProblemDetails message if validation fails (immutable fields, unknown keys, etc.).

revaer settings patch --file <path>

  • Alias for revaer config set.

Torrent lifecycle

revaer torrent add <magnet|.torrent> [--name <label>] [--id <uuid>]

  • Accepts a magnet URI or a filesystem path to a .torrent.
  • Automatically base64-encodes torrent files for the API.
  • Optional overrides: --name sets the human-friendly label; --id lets you supply a deterministic UUID instead of the auto-generated value.

revaer torrent remove <uuid>

  • Issues POST /v1/torrents/{id}/action with { "type": "remove" }.
  • Use the more general action command for delete_data semantics.

revaer ls [--limit <n>] [--cursor <token>] [--state <state>] [--tracker <url>] [--extension <ext>] [--tags <tag1,tag2>] [--name <fragment>]

  • Lists torrents with the same filters supported by the REST API.
  • Default output is a table summarizing id, name, state, and progress.
  • Add --output json to emit the raw TorrentListResponse.

revaer status <uuid>

  • Returns a detailed view of a single torrent.
  • Add --output json to view the full TorrentDetail (including file metadata when available).

revaer select <uuid> [--include <glob,glob>] [--exclude <glob,glob>] [--skip-fluff] [--priority index=priority,...]

  • Updates file-selection rules via POST /v1/torrents/{id}/select.
  • --priority accepts repeated index=priority pairs (skip|low|normal|high) mapped onto the engine’s FilePriority.
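The index=priority pairs accepted by --priority can be validated with a sketch like this; the priority names mirror the skip|low|normal|high set listed above, and the rejection behavior is an illustrative choice.

```python
PRIORITIES = {"skip", "low", "normal", "high"}

def parse_priority(spec: str) -> dict[int, str]:
    """Parse '0=skip,3=high' into {0: 'skip', 3: 'high'}, rejecting bad pairs."""
    result = {}
    for pair in filter(None, spec.split(",")):
        index, _, priority = pair.partition("=")
        if not index.isdigit() or priority not in PRIORITIES:
            raise ValueError(f"invalid priority pair: {pair!r}")
        result[int(index)] = priority
    return result
```

Validating client-side keeps malformed selections from ever reaching POST /v1/torrents/{id}/select.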

revaer action <uuid> <pause|resume|remove|reannounce|recheck|sequential|rate> [--delete-data] [--enable <bool>] [--download <bps>] [--upload <bps>]

  • One-stop entry point for all torrent actions.
  • sequential toggles sequential downloads via --enable true|false.
  • rate updates per-torrent bandwidth caps (bps). Provide --download and/or --upload.
  • remove honors --delete-data.

Event streaming

revaer tail [--torrent <id,id>] [--event <kind,kind>] [--state <state,state>] [--resume-file <path>] [--retry-secs <n>]

  • Connects to /v1/torrents/events (falls back to /v1/events/stream).
  • Filters match the API query parameters and enforce UUID/event-kind validation before the request is made.
  • When --resume-file is supplied, the CLI persists the last event ID across reconnects so the stream can resume after transient failures.
  • --retry-secs controls the backoff between reconnect attempts (default: 5 seconds).

All torrent commands require an API key. The CLI surfaces API problems exactly as the server returns them, including RFC9457 validation errors and rate-limit responses (429 Too Many Requests with retry metadata in the body).

Torrent Flows

Operational views for the torrent lifecycle and the torrent authoring path. These diagrams are reference-only; wire changes must follow the stored-procedure, clamp-before-apply, and observability guardrails in AGENT.md.

Admission -> Runtime -> FsOps

flowchart TB
    subgraph API["API/CLI"]
        Req["POST /v1/torrents\nPATCH /v1/torrents/{id}/options\nPOST /v1/torrents/{id}/select\nPATCH /v1/torrents/{id}/trackers\nPATCH /v1/torrents/{id}/web_seeds\n- validate payload\n- clamp per profile\n- hydrate metadata (tags/category/storage)\n- normalize selection + limits"]
    end

    subgraph Worker["Worker / Orchestrator"]
        Cmd["EngineCommand::Add\n- attach profile snapshot\n- derive AddTorrentOptions\n- stash selection + metadata for FsOps"]
        Persist["RuntimeStore\n- persist metadata/selection\n- checkpoint admission state"]
    end

    subgraph Bridge["Bridge / FFI"]
        Opts["EngineOptions/AddTorrentRequest\n- listen/download dirs\n- per-torrent rate caps\n- queue priority / paused\n- trackers (profile + request)\n- encryption, DHT, LSD flags\n- seed mode / add paused"]
        Session["libtorrent session\n- apply settings_pack\n- add_torrent_params\n- start/resume handles"]
    end

    subgraph Engine["Engine Loop"]
        Progress["Native events -> EngineEvent\n- progress/state\n- alert mapping\n- tracker status\n- errors (listen/storage/peer)"]
        Cache["Per-torrent cache\n- rate caps\n- trackers\n- limits\n- tags/category"]
    end

    subgraph FsOps["FsOps Pipeline"]
        Select["Selection reconcile\n- honor request selection\n- drop unselected paths"]
        Extract["Extract archives (zip/rar/7z/tar.gz)\n- optional; skip when not configured\n- guardrail missing tools"]
        Flatten["Flatten/move per policy\n- copy/move/hardlink\n- partfile handling"]
        Perms["chmod/chown/umask\n- library root enforcement"]
        Cleanup["Cleanup\n- drop patterns\n- keep filters\n- metadata writeback (.revaer.meta)"]
    end

    Req --> Cmd
    Cmd --> Persist
    Cmd --> Opts
    Opts --> Session
    Session --> Progress
    Progress --> Cache
    Progress -->|Completed event| FsOps
    FsOps -->|Events + metrics| Worker
    Worker -->|Health + SSE| API

Notes

  • Clamping and validation happen before persistence and before libtorrent sees the settings; unknown fields are ignored, unsafe values are clamped.
  • Per-torrent limits (rate caps, queue priority, paused, seed mode) are applied immediately on admission and cached for later verification.
  • FsOps runs on Completed with retries; every stage emits events/metrics and degrades health on guardrail breaches (tooling missing, permission errors, latency overruns).

Torrent creation (authoring) flow

flowchart LR
    Input["Input\n- file/dir path\n- trackers/web seeds\n- piece size (auto/manual)\n- private flag\n- comment/source\n- alignment rules"]
    Stage["Stage & Hash\n- walk files with allowlist\n- apply size filters\n- align pieces\n- hash with deterministic order"]
    Meta["Build metainfo\n- info dictionary\n- tracker tiers\n- web seeds\n- creation date\n- optional dht nodes"]
    Validate["Validate\n- size/limit guards\n- path length\n- private flag vs trackers\n- duplicate file detection"]
    PersistMeta["Persist\n- .torrent file\n- magnet link\n- optional signed manifest"]
    Return["Return to caller\n- paths + hashes\n- effective options\n- warnings (skipped files, clamped piece size)"]

    Input --> Stage --> Meta --> Validate --> PersistMeta --> Return

Notes

  • Creation respects the same glob filters and guardrails used by admission to avoid later FsOps surprises (exclude temporary/system files).
  • When trackers or web seeds are provided, they remain deduplicated and ordered; private torrents skip DHT/PEX automatically.
  • The flow is deterministic: file order, piece sizing, and hashing are reproducible given the same inputs and options.
  • API endpoint: POST /v1/torrents/create (admin alias: POST /admin/torrents/create).
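Auto piece sizing in the authoring flow can be sketched as a power-of-two heuristic. The target piece count and the 16 KiB to 16 MiB bounds here are illustrative assumptions, not the engine's actual constants; the deterministic clamp-to-bounds behavior matches the "clamped piece size" warning mentioned above.

```python
def auto_piece_size(total_bytes: int,
                    target_pieces: int = 1500,
                    min_size: int = 16 * 1024,
                    max_size: int = 16 * 1024 * 1024) -> int:
    """Pick the smallest power-of-two piece size keeping the piece count
    at or below `target_pieces`, clamped to [min_size, max_size].

    All thresholds are assumed values for this sketch.
    """
    size = min_size
    while size < max_size and total_bytes / size > target_pieces:
        size *= 2
    return size
```

Because the result depends only on the inputs, the same payload always hashes into the same piece layout, which is what makes the authoring flow reproducible.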

Native Libtorrent Integration Tests

These tests are opt-in (gated by REVAER_NATIVE_IT) to keep the default matrix deterministic; include them explicitly in feature-matrix runs.

To run the feature-gated native libtorrent integration suite locally:

# Ensure Docker (or colima) is running and DOCKER_HOST is set if not using /var/run/docker.sock
export DOCKER_HOST=${DOCKER_HOST:-unix:///Users/vanna/.colima/default/docker.sock}

# Enable native integration tests
export REVAER_NATIVE_IT=1

# Run the full gate (preferred)
just ci

# Or target only the libtorrent native suite
just test-native

CI note: add a matrix job that sets REVAER_NATIVE_IT=1 and points DOCKER_HOST at the runner’s daemon to ensure the native path stays covered.

API Documentation

This directory hosts HTTP API specifications, the generated OpenAPI document, and usage guides for the Revaer control plane.

Contents

  • openapi.json - Generated OpenAPI document (just api-export).
  • openapi.md - How to regenerate and consume the OpenAPI document.
  • guides/ - Scenario-based walkthroughs (bootstrap, operations, telemetry, CLI usage).
  • openapi-gaps.md - Inventory of router endpoints missing from the OpenAPI spec (should be empty).

Current Coverage

  • Setup and configuration - /admin/setup/*, /v1/config, /.well-known/revaer.json.
  • Torrent lifecycle - /v1/torrents, /v1/torrents/{id}, /v1/torrents/{id}/action, /v1/torrents/{id}/select, /v1/torrents/{id}/options, plus admin aliases.
  • Authoring and metadata - /v1/torrents/create, /v1/torrents/{id}/trackers, /v1/torrents/{id}/web_seeds, /v1/torrents/{id}/peers.
  • Observability - /v1/events, /v1/torrents/events, /v1/logs/stream, /metrics, /v1/dashboard, /health/full.
  • Filesystem - /v1/fs/browse.

See guides/bootstrap.md for an end-to-end description of the bootstrap lifecycle and runtime orchestration expectations.

OpenAPI Reference

Canonical machine-readable description of the Revaer control plane surface.

The generated OpenAPI specification lives alongside the documentation at docs/api/openapi.json and is served by the API at /docs/openapi.json.

Regenerate it with:

just api-export

After refreshing the file, rebuild the documentation (just docs) to publish the updated schema and LLM manifests.

OpenAPI Coverage Gaps

This document lists API routes present in crates/revaer-api/src/http/router.rs that are missing from docs/api/openapi.json.

Summary

  • The OpenAPI spec is aligned with the current router surface; no gaps remain for the default feature set.

Missing admin routes

  • None.

Missing v1 routes

  • None.

Notes

  • Feature-gated compat-qb routes are excluded because they are not mounted unless the compat-qb feature is enabled.

Indexer Migration Rollback

Revaer’s indexer migration path is designed to be reversible.

Coexistence

  • Revaer can run alongside Prowlarr because Revaer only exposes its own Torznab endpoints and import surfaces.
  • Revaer does not push configuration into Sonarr, Radarr, Lidarr, or Readarr.
  • Existing Arr and Prowlarr configuration stays outside Revaer-managed state.

Rollback

Rollback is URL-only:

  1. Switch each Arr client’s Torznab URL back from Revaer to the prior Prowlarr URL.
  2. Keep the previous API key or credentials in the Arr client as needed for the old endpoint.
  3. Leave Revaer import jobs, search profiles, and Torznab instances in place for inspection or later retry.

No cleanup is required in Revaer to restore the previous Arr behavior because Revaer does not mutate downstream Arr configuration.

Operational Notes

  • Dry-run import jobs are safe to execute while Prowlarr is still active.
  • Revaer Torznab instances can coexist with imported indexer management flows.
  • If you need to compare behavior during migration, keep both Revaer and Prowlarr Torznab endpoints available and move one Arr client at a time.

ADRs

Suggested Workflow

  1. Create a new ADR using the template in docs/adr/template.md.
  2. Give it a sequential identifier (e.g., 001, 002) and a concise title.
  3. Capture context, decision, consequences, and follow-up actions.
  4. Append the new ADR entry to the end of the Catalogue list below.
  5. Append the same entry under ADRs in docs/SUMMARY.md, keeping it nested so the sidebar stays collapsed.
  6. Reference ADRs from code comments or docs where the decision applies.

Catalogue

  • Template – ADR template
  • 001 – Configuration revisioning
  • 002 – Setup token lifecycle
  • 003 – Libtorrent session runner
  • 004 – Phase one delivery
  • 005 – FS operations pipeline
  • 006 – API/CLI contract
  • 007 – Security posture
  • 008 – Remaining phase-one tasks
  • 009 – FS ops permission hardening
  • 010 – Agent compliance sweep
  • 011 – Coverage hardening
  • 012 – Agent compliance refresh
  • 013 – Runtime persistence
  • 014 – Data access layer
  • 015 – Agent compliance hardening
  • 016 – Libtorrent restoration
  • 017 – Avoid sqlx-named-bind
  • 018 – Retire testcontainers
  • 019 – Advisory RUSTSEC-2024-0370 temporary ignore
  • 020 – Torrent engine precursor hardening
  • 021 – Torrent precursor enforcement
  • 022 – Torrent settings parity and observability
  • 023 – Tracker config wiring and persistence
  • 024 – Seeding stop criteria and overrides
  • 025 – Seed mode admission with optional hash sampling
  • 026 – Queue auto-managed defaults and PEX threading
  • 027 – Choking strategy and super-seeding configuration
  • 028 – qBittorrent parity and tracker TLS wiring
  • 029 – Torrent authoring, labels, and metadata updates
  • 030 – Migration consolidation for initial setup
  • 031 – UI Nexus asset sync tooling
  • 032 – Torrent FFI audit closeout
  • 033 – UI SSE + auth/setup wiring
  • 034 – UI SSE normalization and ApiClient singleton
  • 035 – Advisory RUSTSEC-2021-0065 temporary ignore
  • 036 – Asset sync test stability under parallel runs
  • 037 – UI row slices and system-rate store wiring
  • 038 – UI shared API models and torrent query paging state
  • 039 – UI store, API coverage, and rate-limit retries
  • 040 – UI label policy editor and API wiring
  • 041 – UI health view and label shortcuts
  • 042 – UI metrics copy button
  • 043 – UI settings bypass local auth toggle
  • 044 – UI ApiClient torrent options/selection endpoints
  • 045 – UI icon components and icon button standardization
  • 046 – UI torrent filters, pagination, and URL sync
  • 047 – UI torrent list updated timestamp column
  • 048 – UI torrent row actions, bulk controls, and rate/remove dialogs
  • 049 – UI detail drawer overview/files/options
  • 050 – UI torrent FAB, add modal, and create-torrent authoring flow
  • 051 – UI shared API models and UX primitives
  • 052 – UI dashboard migration to Nexus vendor layout
  • 053 – UI dashboard hardline rebuild
  • 054 – UI dashboard Nexus parity tweaks
  • 055 – Factory reset and bootstrap API key
  • 056 – Factory reset auth fallback when no API keys exist
  • 057 – UI settings tabs and editor controls
  • 058 – UI settings controls, logs stream, and filesystem browser
  • 059 – Migration rebaseline and JSON backfill guardrails
  • 060 – Auth expiry enforcement and structured error context
  • 061 – API error i18n and OpenAPI asset constants
  • 062 – Event bus publish guardrails and API i18n cleanup
  • 063 – CI compliance cleanup for test error handling
  • 064 – Factory reset error context and allow-path validation
  • 065 – API key refresh and no-auth setup mode
  • 066 – Factory reset UX fallback and SSE setup gating
  • 067 – Logs ANSI rendering and bounded buffer
  • 068 – Agent compliance clippy cargo linting
  • 069 – Pin mdbook-mermaid for docs builds
  • 070 – Dashboard UI checklist completion and auth/SSE hardening
  • 071 – Libtorrent native fallback for default CI
  • 072 – Agent compliance refactor (UI + HTTP + Config Layout)
  • 073 – UI checklist follow-ups: SSE detail refresh, labels shortcuts, strict i18n, and anymap removal
  • 074 – Temporary vendoring of yewdux for latest Yew compatibility
  • 075 – Coverage gate tests for config loader and data toggles
  • 076 – Temporary clippy exception for hashbrown multiple versions
  • 077 – UI menu interactions
  • 078 – Local auth bypass guardrails
  • 079 – Advisory RUSTSEC-2025-0141 temporary ignore
  • 080 – Local auth bypass reliability
  • 081 – Playwright E2E test suite
  • 082 – E2E gate and selector stability
  • 083 – API preflight before UI E2E
  • 084 – E2E API coverage with temp databases
  • 085 – E2E OpenAPI client and unified coverage
  • 086 – Default local auth bypass
  • 087 – Local network auth ranges and settings validation
  • 088 – Live SSE log streaming
  • 089 – Port process termination for dev tooling
  • 090 – UI log filters and shell controls
  • 091 – Raise per-crate coverage gate to 90%
  • 092 – Fsops coverage hardening
  • 093 – UI logic extraction for testable components
  • 094 – UI E2E sharding in workflows
  • 095 – Untagged images use dev tag
  • 096 – Aggregate UI E2E coverage for sharded runs
  • 097 – Dev prereleases and PR image previews
  • 098 – Reusable image build workflow
  • 099 – Indexer ERD single-tenant and audit fields
  • 100 – SonarQube workflow with root coverage LCOV
  • 101 – Indexer ERD implementation checklist
  • 102 – Indexer core schema foundations
  • 103 – Indexer definition schema
  • 104 – Indexer instance schema and RSS
  • 105 – Indexer secret schema
  • 106 – Indexer search profiles and Torznab schema
  • 107 – Indexer import schema
  • 108 – Indexer rate limit and Cloudflare schema
  • 109 – Indexer policy schema
  • 110 – Indexer Torznab category schema
  • 111 – Indexer connectivity and audit schema
  • 112 – Indexer canonicalization schema
  • 113 – Indexer search request schema
  • 114 – Indexer scoring schema
  • 115 – Indexer conflict and decision schema
  • 116 – Indexer user action and acquisition schema
  • 117 – Indexer telemetry and reputation schema
  • 118 – Indexer job schedule schema
  • 119 – Indexer FK on-delete rules
  • 120 – Indexer seed data and defaults
  • 121 – Indexer query indexes
  • 122 – Indexer deployment initialization procedure
  • 123 – Indexer app_user stored procedures
  • 124 – Indexer tag stored procedures
  • 125 – Indexer routing policy stored procedures
  • 126 – Indexer Cloudflare reset procedure
  • 127 – Indexer rate limit stored procedures
  • 128 – Indexer instance stored procedures
  • 129 – Indexer category mapping procedures
  • 130 – Indexer policy set procedures
  • 131 – Indexer search profile procedures
  • 132 – Indexer policy rule create procedure
  • 133 – Indexer outbound request log procedure
  • 134 – Indexer Torznab instance state procedures
  • 135 – Indexer conflict resolution procedures
  • 136 – Indexer job runner procedures
  • 137 – Indexer search request cancel procedure
  • 138 – Indexer search run procedures
  • 139 – Indexer canonical disambiguation rule procedure
  • 140 – Indexer search request create procedure
  • 141 – Indexer job runner follow-up procedures
  • 142 – Indexer executor handoff stored procedures
  • 143 – Indexer tag API surface
  • 144 – Indexer procedure fixes (RSS apply, base score refresh, normalization)
  • 145 – Indexer domain mapping and DI boundaries
  • 146 – Indexer stored-proc test harness
  • 147 – Indexer error-code taxonomy
  • 148 – Indexer v1 scope enforcement
  • 149 – Indexer schema JSON ban verification
  • 150 – Indexer public-id and bigint identity verification
  • 151 – Indexer soft-delete coverage verification
  • 152 – Indexer audit fields and timestamp defaults verification
  • 153 – Indexer API boundary public-id verification
  • 154 – Indexer external reference public-id verification
  • 155 – Indexer system sentinel usage verification
  • 156 – Indexer text caps and lowercase key enforcement verification
  • 157 – Indexer normalized column verification
  • 158 – Indexer hash identity rules verification
  • 159 – Indexer secret binding linkage verification
  • 160 – Indexer single-tenant scope verification
  • 161 – Indexer table/constraint alignment verification
  • 162 – Indexer per-table Notes verification
  • 163 – Indexer proc error-code alignment for key lookups
  • 164 – Indexer error enums and normalization helpers verification
  • 165 – Indexer result-only returns and no-panics verification
  • 166 – Indexer tryOp wrappers for external operations
  • 167 – Indexer routing policy service and endpoints
  • 168 – Indexer definition list endpoint
  • 169 – Indexer CF state read endpoint
  • 170 – Indexer CF state E2E coverage
  • 171 – Indexer category mapping API endpoints
  • 172 – Indexer Torznab instance API endpoints
  • 173 – Indexer search profile API endpoints
  • 174 – Indexer import jobs API endpoints
  • 175 – Indexer import jobs CLI commands
  • 176 – Indexer Torznab CLI management
  • 177 – Indexer policy CLI management
  • 178 – Indexer instance test API and CLI
  • 179 – Indexer allocation safety guard
  • 180 – Auth prompt dismissal stability
  • 181 – Cross-platform allocation safety probe
  • 182 – Indexer PR feedback follow-through
  • 183 – Indexer PR feedback allocation follow-up
  • 184 – Indexer PR feedback allocation caps
  • 185 – Indexer Torznab caps endpoint
  • 186 – Indexer Torznab download and allocation guards
  • 187 – Indexer search requests API and allocation guard refinements
  • 188 – Indexer search request auth E2E coverage
  • 189 – Indexer search pages API
  • 190 – Search request validation tests
  • 191 – Hash identity derivation tests
  • 192 – Rate limit state purge test
  • 193 – Job schedule completion updates
  • 194 – Job claim locking and lease durations
  • 195 – Policy snapshot GC ordering
  • 196 – Retention purge context cleanup
  • 197 – Indexer connectivity profile refresh rollups
  • 198 – Reputation rollup sample thresholds
  • 199 – Canonical refresh durable source cadence
  • 200 – Canonical prune source-link policy alignment
  • 201 – RSS poll and subscription backfill workflows
  • 202 – RSS scheduling, backoff, and dedupe validation
  • 203 – Rate limit token bucket and RSS rate-limited semantics
  • 204 – Cloudflare state transition and mitigation validation
  • 205 – Policy snapshot reuse and refcount validation
  • 206 – Policy snapshot GC acceptance coverage
  • 207 – Derived refresh timing and caching validation
  • 208 – Retention and rollup job window validation
  • 209 – Retention and derived refresh strategy coverage
  • 210 – Policy rule disable/enable and reorder validation
  • 211 – Search-result observation rules validation
  • 212 – Category mapping and domain filter validation
  • 213 – Indexer observability counters for Torznab, search, and import jobs
  • 214 – Indexer request span coverage for Torznab, search, and import jobs
  • 215 – Torznab parity integration tests for endpoint format and auth semantics
  • 216 – Torznab search query mapping and append-order pagination
  • 217 – Torznab download redirect and acquisition-attempt coverage
  • 218 – Torznab feed category emission and test fixture hardening
  • 219 – Torznab multi-category domain mapping and Other (8000) behavior coverage
  • 220 – Rate-limit defaults and indexer/routing scope enforcement coverage
  • 221 – Search-run retry behavior coverage for rate-limited and transient errors
  • 222 – RSS Cloudflare state transition alignment with ERD
  • 223 – Search streaming pages terminal sealing and append-only ordering
  • 224 – Search dropped-source audit persistence and paging exclusion
  • 225 – Canonicalization conflict coverage
  • 226 – Indexer unit test domain coverage
  • 227 – Health and reputation rollup semantics from outbound logs
  • 228 – Search zero-result explainability
  • 229 – Prowlarr import source parity and dry-run coverage
  • 230 – Import result mapping and unmapped-definition coverage
  • 231 – Migration parity E2E flow coverage
  • 232 – Indexer schema and procedure catalog verification tests
  • 233 – Import result fidelity snapshots
  • 234 – Secret binding and test error class coverage
  • 235 – Indexer instance creation uses the public definition slug key
  • 236 – Indexer service operation metrics and spans
  • 237 – Indexer dependency-injection boundary enforcement
  • 238 – Manual search UI
  • 239 – Indexer admin console UI
  • 240 – Indexer schedule controls UI
  • 241 – Indexer RSS management UI
  • 242 – Indexer connectivity and reputation UI
  • 243 – Indexer routing policy visibility
  • 244 – Indexer import job dashboard
  • 245 – Indexer health event drill-down
  • 246 – Indexer origin-only error logging
  • 247 – Indexer health summary panels
  • 248 – Indexer backup and restore
  • 249 – Indexer coexistence and rollback acceptance coverage
  • 250 – Indexer domain service closeout
  • 251 – Indexer instance category overrides
  • 252 – Indexer final acceptance closeout
  • 253 – Indexer health notification hooks
  • 254 – Indexer app sync provisioning UI
  • 255 – Indexer app-scoped category overrides
  • 256 – Indexer source conflict operator UI
  • 257 – Indexer Cardigann definition import
  • 258 – PR review closeout
  • 259 – PR review and security follow-up
  • 260 – PR CodeQL closeout
  • 261 – PR security and thread closeout
  • 262 – PR final thread closeout
  • 263 – SonarCloud PR issue cleanup and scope alignment
  • 264 – PR unresolved feedback closeout
  • 265 – PR feedback boundary validation closeout
  • 266 – PR CodeQL follow-up on instance tag bounds
  • 267 – Indexer maintenance runtime
  • 268 – Indexer tag and secret inventory
  • 269 – Indexer operator inventory read surfaces
  • 270 – Indexer profile, policy, and Torznab inventory
  • 271 – Indexer CLI read parity
  • 272 – Indexer CLI operator write parity
  • 273 – Indexer CLI mutation parity follow-up
  • 274 – Indexer CLI health-notification parity
  • 275 – PR output redaction and review follow-up
  • 276 – CI cache trim for runner disk pressure
  • 277 – PR review handler normalization follow-up
  • 278 – Remediation plan implementation closeout
  • 279 – Remediation plan gap closure
  • 280 – PR 21 feedback closeout
  • 281 – PR 21 Sonar and review closeout
  • 282 – PR 21 final feedback closeout
  • 283 – PR 21 Trivy action pin refresh
  • 284 – Instruction refresh and Sonar scope hardening
  • 285 – PR 19 review and lint closeout
  • 286 – Advisory RUSTSEC-2026-0097 temporary ignore
  • 287 – PR 19 policy reconciliation
  • 288 – PR 19 OpenAPI test portability
  • 289 – PR 19 native settings snapshot test stability
  • 290 – PR 19 final feedback closeout
  • 291 – PR 19 Sonar quality gate restoration
  • 292 – PR 19 review timeout stability
  • 293 – PR 19 GitHub Action SHA pinning
  • 294 – PR 19 review feedback closeout
  • 295 – Dependency bump rollup
  • 296 – Helm chart release publishing
  • 297 – Helm feedback and Sonar closeout
  • 298 – CI workflow permissions regression
  • 299 – Trivy config baseline
  • 300 – Trivy container and Sonar PGSQL config
  • 301 – Security dependency refresh for PR 25
  • 302 – PR validation and main release workflow split
  • 303 – Release tag image job dependency split
  • 304 – PR 25 deny exception and Sonar hotspot closeout
  • 305 – PR 25 prerelease tag release guard
  • 306 – Semantic release prepare template fix
  • 307 – CI ORAS setup action refresh
  • 308 – PR workflow Helm and Sonar consolidation
  • 309 – GHCR Helm namespace derivation
  • 310 – PR Helm review follow-ups
  • 311 – GHCR Helm GitHub token authentication
  • 312 – Artifact Hub OCI repository alignment
  • 313 – Trivy SARIF category and GHCR token alignment
  • 314 – Artifact Hub verification and official readiness

ADR Record

  • Status: {Proposed|Accepted|Superseded}
  • Date: {YYYY-MM-DD}
  • Context:
    • What problem are we solving?
    • What constraints or forces shape the decision?
  • Decision:
    • Summary of the choice made.
    • Alternatives considered.
  • Consequences:
    • Positive outcomes.
    • Risks or trade-offs.
  • Follow-up:
    • Implementation tasks.
    • Review checkpoints.

Task Record

  • Motivation:
    • Why this change is needed now.
  • Design notes:
    • Key implementation choices, trade-offs, and invariants.
  • Test coverage summary:
    • The unit, integration, E2E, or manual verification added or rerun for this work.
  • Observability updates:
    • Logging, tracing, metrics, health, or event-surface changes.
  • Status-doc validation:
    • Confirm whether README.md, roadmap/status docs, and any operator guides touched by the change were re-checked and updated to match repo truth.
  • Risk & rollback plan:
    • Operational risks and the simplest rollback path if the change regresses.
  • Dependency rationale:
    • New dependencies added, why they were chosen, and alternatives considered.

001 – Global Configuration Revisioning

  • Status: Proposed
  • Date: 2025-02-23

Context

  • All runtime configuration must be hot-reloadable across multiple crates.
  • Consumers need a consistent ordering guarantee for applying changes received via LISTEN/NOTIFY, with a fallback to polling.
  • We require a DB-native mechanism that can be incremented from triggers without race conditions and that carries across deployments.

Decision

  • Introduce a singleton settings_revision table with an ever-incrementing revision counter.
  • Wrap updates to configuration tables (app_profile, engine_profile, fs_policy, auth_api_keys, query_presets) in triggers that:
    1. Update settings_revision.revision = revision + 1.
    2. Emit NOTIFY revaer_settings_changed, '<table>:<revision>:<op>'.
  • ConfigService exposes ConfigSnapshot to materialize a consistent view (revision + documents) for the application bootstrap path.
  • The revision remains monotonic even if polling is used (consumers record the last seen revision and request deltas if they miss notifications).
  • Mutation APIs validate payloads server-side, applying field-level type checks and respecting app_profile.immutable_keys. Violations surface as structured errors with section/field metadata, preventing silent drift.
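The notification/polling reconciliation described above can be sketched as a small decision function. This is a hypothetical illustration, not code from revaer-config: the `Action` enum and `on_notification` helper are invented names, and only the `'<table>:<revision>:<op>'` payload format comes from this ADR.

```rust
/// Hypothetical sketch: how a consumer might react to a settings notification.
/// Payload format follows the ADR: "<table>:<revision>:<op>".
#[derive(Debug, PartialEq)]
enum Action {
    /// Revision is exactly last_seen + 1: apply the delta directly.
    Apply(u64),
    /// A notification was missed or unreadable: reload deltas > last_seen.
    Reconcile,
    /// Stale or duplicate notification: nothing to do.
    Ignore,
}

fn on_notification(payload: &str, last_seen: u64) -> Action {
    // Parse the revision out of "<table>:<revision>:<op>"; a malformed
    // payload forces a full reconcile rather than silently dropping it.
    let revision = payload
        .split(':')
        .nth(1)
        .and_then(|r| r.parse::<u64>().ok());
    match revision {
        Some(rev) if rev == last_seen + 1 => Action::Apply(rev),
        Some(rev) if rev > last_seen => Action::Reconcile, // gap detected
        Some(_) => Action::Ignore,                         // stale/duplicate
        None => Action::Reconcile,
    }
}

fn main() {
    assert_eq!(on_notification("fs_policy:42:UPDATE", 41), Action::Apply(42));
    assert_eq!(on_notification("fs_policy:44:UPDATE", 41), Action::Reconcile);
    assert_eq!(on_notification("fs_policy:40:UPDATE", 41), Action::Ignore);
    assert_eq!(on_notification("garbled", 41), Action::Reconcile);
}
```

Because the revision is monotonic, "reload everything newer than last_seen" is always a safe fallback, which is what makes the polling path equivalent to the LISTEN path.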

Consequences

  • Multi-table updates executed inside a transaction surface as a single revision bump, preserving ordering for consumers.
  • LISTEN subscribers that drop their connection can reconcile by reloading settings_revision and querying deltas > last_seen_revision.
  • Trigger-level logic slightly increases write cost but keeps business code free of manual revision management.

Follow-up

  • Implement apply_changeset to write history rows with the associated revision.
  • Add integration tests that exercise transactionally updating multiple tables and verifying a single revision increment.

002 – Setup Token Lifecycle & Secrets Bootstrap

  • Status: Proposed
  • Date: 2025-02-23

Context

  • Initial deployments must boot in a locked-down “Setup Mode” where only a one-time token grants access to the setup API.
  • Tokens should be observable/auditable, expire automatically, and support regeneration without requiring an application restart.
  • A follow-on requirement is to collect an encryption passphrase or server-side key for pgcrypto-backed secrets before exiting Setup Mode.

Decision

  • Store tokens in the setup_tokens table with token_hash, issued_at, expires_at, consumed_at, and issued_by.
  • Enforce at most one active token via a partial unique index on rows where consumed_at IS NULL.
  • ConfigService will:
    • Generate tokens using cryptographically secure randomness.
    • Persist only a hashed representation (argon2id) along with metadata.
    • Emit history entries and NOTIFY events on token creation/consumption.
  • The CLI/API surfaces token issuance and completion flows; the process prints the token to stdout only at generation time.
  • During completion, the caller must supply the encryption materials (a passphrase or a reference to the pgcrypto role). The handler verifies secrets are persisted before flipping app_profile.mode to active.
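The token lifecycle implied by the setup_tokens columns reduces to a simple state check. The sketch below is illustrative only: `SetupToken` and `token_state` are hypothetical names, and the row shape mirrors just the columns this ADR needs (issued_at and token_hash are omitted for brevity).

```rust
use std::time::{Duration, SystemTime};

/// Hypothetical row shape mirroring the relevant setup_tokens columns.
struct SetupToken {
    expires_at: SystemTime,
    consumed_at: Option<SystemTime>,
}

#[derive(Debug, PartialEq)]
enum TokenState {
    Active,
    Consumed,
    Expired,
}

/// Consumption takes precedence over expiry, matching the partial unique
/// index on rows where consumed_at IS NULL: at most one Active token exists.
fn token_state(token: &SetupToken, now: SystemTime) -> TokenState {
    if token.consumed_at.is_some() {
        TokenState::Consumed
    } else if now >= token.expires_at {
        TokenState::Expired
    } else {
        TokenState::Active
    }
}

fn main() {
    let now = SystemTime::now();
    let active = SetupToken { expires_at: now + Duration::from_secs(900), consumed_at: None };
    assert_eq!(token_state(&active, now), TokenState::Active);

    let expired = SetupToken { expires_at: now - Duration::from_secs(1), consumed_at: None };
    assert_eq!(token_state(&expired, now), TokenState::Expired);

    let consumed = SetupToken { expires_at: now + Duration::from_secs(900), consumed_at: Some(now) };
    assert_eq!(token_state(&consumed, now), TokenState::Consumed);
}
```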

Consequences

  • Operators can recover by issuing a new token if the previous one expires without restarting the service.
  • Tokens are auditable; failed attempts can be recorded against the hashed token id (future enhancement).
  • The bootstrap path ensures secrets exist before runtime modules that require them start, preventing a partially configured system.

Follow-up

  • Implement argon2id hashing helpers and audit logging in revaer-config.
  • Define the CLI workflow (revaer-cli setup) that wraps token issuance and completion for headless environments.
  • Add problem detail responses for expired/consumed tokens in the API.

003 – Libtorrent Session Runner Architecture

  • Status: Accepted
  • Date: 2025-10-16

Context

  • The current revaer-torrent-libt crate is a stub that simulates torrent actions without touching libtorrent, preventing real downloads, fast-resume, or alert handling.
  • Phase One requires a production-grade engine: a single async task must own the libtorrent session, persist fast-resume data/selection state, debounce high-volume alerts, and surface health to the event bus.
  • The engine must enforce rate limits and selections within libtorrent, react within two seconds of configuration changes, and survive restarts by restoring torrents from resume_dir.

Decision

  • Introduce a dedicated SessionWorker spawned by LibtorrentEngine::new. It owns the libtorrent Session, receives EngineCommand messages, and emits EngineEvents via an internal channel that feeds the shared EventBus.
  • Wrap the libtorrent FFI in a thin adapter trait (LibtSession) to encapsulate blocking calls (add_torrent, pause, set_sequential, apply_rate_limits, file_priorities, alert polling). The real implementation uses tokio::task::spawn_blocking to call into C++ safely.
  • Add a FastResumeStore service that reads/writes .fastresume blobs plus JSON metadata (selection, priorities, download directory, sequential flag) inside resume_dir. On startup the worker loads the store, attempts to match existing handles, and emits reconciliation events if the stored state diverges.
  • Run an AlertPump loop that blocks on libtorrent's alert wait/notify mechanism, drains all pending alerts, and funnels them through an AlertTranslator that converts them into domain EngineEvents (FilesDiscovered, Progress, StateChanged, Completed, Error). A ProgressCoalescer throttles updates to 10 Hz per torrent.
  • Integrate health tracking: fatal session errors transition the engine into a degraded state and emit both HealthChanged and per-torrent Error events. The worker attempts limited restarts with exponential back-off before marking the engine unhealthy.
  • Rate-limit changes arriving via EngineCommand::UpdateLimits or the configuration watcher call into libtorrent immediately; a watchdog verifies application within two seconds and logs warnings if the session reports stale caps.
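The 10 Hz per-torrent throttle can be sketched as a map of last-emission timestamps. This is a minimal illustration, not the real ProgressCoalescer: the struct layout, `should_emit` name, and u64 torrent ids are assumptions.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical sketch of the per-torrent 10 Hz progress throttle:
/// at most one emission per torrent per 100 ms window.
struct ProgressCoalescer {
    min_interval: Duration,
    last_emit: HashMap<u64, Instant>, // torrent id -> last emission time
}

impl ProgressCoalescer {
    fn new() -> Self {
        Self {
            min_interval: Duration::from_millis(100), // 10 Hz
            last_emit: HashMap::new(),
        }
    }

    /// Returns true when a progress event for `torrent` may be forwarded now.
    fn should_emit(&mut self, torrent: u64, now: Instant) -> bool {
        match self.last_emit.get(&torrent) {
            Some(prev) if now.duration_since(*prev) < self.min_interval => false,
            _ => {
                self.last_emit.insert(torrent, now);
                true
            }
        }
    }
}

fn main() {
    let mut c = ProgressCoalescer::new();
    let t0 = Instant::now();
    assert!(c.should_emit(1, t0)); // first event passes
    assert!(!c.should_emit(1, t0 + Duration::from_millis(50))); // within window: dropped
    assert!(c.should_emit(1, t0 + Duration::from_millis(150))); // next window: passes
    assert!(c.should_emit(2, t0)); // other torrents are throttled independently
}
```

A real coalescer would also retain the *latest* dropped progress value per torrent so the next emission reflects current state rather than a stale sample.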

Consequences

  • The engine crate gains clear separation between command handling, libtorrent FFI, alert translation, and persistence, making it easier to test components in isolation using mock LibtSession implementations.
  • Persisted state in resume_dir enables crash-restart flows to resume downloads, leveraging libtorrent fastresume and our own selection metadata.
  • Debouncing progress events reduces SSE pressure while preserving responsiveness; coalescing happens before events hit the shared bus.
  • Health reporting integrates with the existing telemetry crate, providing operators visibility into session failures or missing dependencies (e.g., absent resume directory).

Follow-up

  • Maintain regression coverage for the libtorrent feature path, ensuring fast-resume reconciliation and guard-rail health events remain stable.
  • Track upstream libtorrent upgrades and refresh the operator documentation whenever the resume layout or dependency expectations shift.

004 – Phase One Delivery Track

  • Status: Accepted
  • Date: 2025-10-17

Motivation

Phase One bundles the remaining work required to transition Revaer from the current stubs into a production-ready torrent orchestration platform. This record captures the implementation notes, decisions, and verification evidence for each workstream item enumerated in docs/phase-one-roadmap.md.

Design Notes

  • Follow the library-first structure outlined in AGENT.md with crate-specific modules for configuration, engine integration, filesystem operations, public API, CLI, security, and packaging.
  • Apply tight configuration validation and hot-reload behaviour to guarantee that throttle and policy updates propagate within two seconds.
  • Emit guard-rail telemetry whenever global throttles are disabled, driven to zero, or configured above the 5 Gbps warning threshold so operators can react quickly.
  • Replace the stub libtorrent adapter with a session worker that owns state, persists fast-resume metadata, and surfaces alert-driven events with bounded fan-out.
  • Persist resume metadata and fastresume payloads via FastResumeStore, reconcile on startup, and emit SelectionReconciled events plus health degradations when store contents diverge or writes fail.
  • Build deterministic include/exclude rule evaluation and an idempotent FsOps pipeline anchored by .revaer.meta.
  • Expose a consistent Problem+JSON contract across HTTP and CLI surfaces, including pagination and SSE replay support.
  • Enforce observability invariants: structured tracing with context propagation, bounded rate limits, Prometheus metrics, and degraded health signalling when dependencies fail.
  • Ensure every workflow is reproducible via just targets and validated in CI, with container packaging aligned to the non-root, read-only expectations.
  • Follow the canonical just recipe surface (fmt, lint, test, ci, etc.). Coloned variants are mapped to hyphenated recipe names (fmt-fix, build-release, api-export) because just 1.43.0 rejects colons in recipe identifiers without unstable modules; the semantics remain identical.

Test Coverage Summary

  • just ci serves as the baseline verification target. Each workstream delivers focused unit tests, integration coverage, and feature-flagged live tests (for libtorrent, Postgres, FsOps).
  • Coverage gates are enforced via cargo llvm-cov with --fail-under-lines 80 across library crates.
  • Integration suites will rely on testcontainers (Postgres, libtorrent) and workspace-specific fixtures for FsOps pipelines and API/CLI flows, including the configuration watcher hot-reload test and new libtorrent-feature tests for resume restoration and fastresume persistence.

Outcome

  • All public surfaces now enforce API-key authentication with token-bucket rate limiting, 429 Problem+JSON responses, and telemetry counters exported via Prometheus and /health/full.
  • SSE endpoints honour the same auth and Last-Event-ID semantics, with CLI resume support persisting state between reconnects.
  • The CLI propagates x-request-id, standardises exit codes (0 success, 2 validation, 3 runtime), and emits optional telemetry events to REVAER_TELEMETRY_ENDPOINT.
  • A release-ready Docker image (Dockerfile) packages the API binary and documentation on a non-root, read-only-friendly runtime with health checks and volume mounts for config/data.
  • CI now publishes release artefacts (revaer-app, OpenAPI) and runs MSRV and container security jobs via just targets; binaries are checksummed alongside provenance metadata.
  • Documentation additions cover FsOps design, API/CLI contracts, security posture, operator runbook, telemetry reference, and the phase-one release checklist.

Observability Updates

  • Telemetry enhancements include structured logs for setup token issuance/consumption, loopback enforcement failures, configuration watcher updates, rate-limit guard-rail decisions, and resume store degradation/recovery.
  • Metrics will expand to track HTTP request outcomes, SSE fan-out, event queue depth, torrent throughput, FsOps step durations, and health degradation counts.
  • /health/full will report engine, FsOps, and database readiness with latency measurements and revision hashes, mirrored by CLI status commands.

Risk & Rollback Plan

  • Maintain incremental commits gated by just ci to isolate regressions. Any new dependency introductions require explicit justification and fallbacks documented here.
  • Where feature flags guard libtorrent integration, provide mockable interfaces so tests can fall back to stub implementations if the environment lacks native bindings.
  • Persist fast-resume metadata and .revaer.meta files so failed deployments can roll back without corrupting state; ensure migrations remain additive.

Dependency Rationale

No new dependencies have been added yet. Future additions (e.g., libtorrent bindings, glob evaluators, archive tools) must include:

  • Why the crate/tool is necessary.
  • Alternatives considered (including bespoke implementations) and why they were rejected.
  • Security and maintenance assessment (license compatibility, release cadence).

005 – FsOps Pipeline Hardening

  • Status: Accepted
  • Date: 2025-10-17

Context

  • Phase One promotes filesystem post-processing from a best-effort helper to a first-class workflow with explicit health semantics.
  • The orchestrator must ensure every completed torrent flows through a deterministic FsOps state machine, emitting structured telemetry and reconciling mismatches with persisted metadata.
  • Operators require visibility into FsOps latency, failures, and guard-rail breaches (e.g., missing extraction tools, permission errors) via /health/full, Prometheus, and the shared EventBus.

Decision

  • FsOps responsibilities live inside revaer-fsops, invoked by the orchestrator (TorrentOrchestrator::apply_fsops) with an explicit FsOpsRequest that carries the torrent id, resolved source path, and effective policy snapshot whenever a Completed event surfaces.
  • Each pipeline step (extract, flatten, transfer, set_permissions, cleanup, finalise) records start/completion/failure events and increments Prometheus counters via Metrics::inc_fsops_step; the extraction stage currently focuses on zip archives and gracefully skips when inputs are already directories.
  • Metadata is persisted alongside .revaer.meta to reconcile selection overrides and resume directories across restarts; mismatches trigger SelectionReconciled events plus guard-rail telemetry.
  • Health degradation is published when FsOps detects latency guard rails, missing tools, or unrecoverable IO errors; recovery clears the fsops component from the degrade set.
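Resume-aware scheduling over the fixed stage list reduces to filtering out the steps already recorded in .revaer.meta. The sketch below is illustrative: `PIPELINE` copies the stage names from this ADR, while `remaining_steps` and the string-based step representation are assumptions (the real pipeline presumably uses typed stages).

```rust
/// Hypothetical sketch of resume-aware step scheduling driven by .revaer.meta.
/// Stage names and order are taken from the ADR's Decision section.
const PIPELINE: &[&str] = &[
    "validate_policy", "allowlist", "prepare_directories", "compile_rules",
    "locate_source", "prepare_work_dir", "extract", "flatten", "transfer",
    "set_permissions", "cleanup", "finalise",
];

/// Returns the steps still to run, in pipeline order, given the
/// completed-step names persisted after each critical transition.
fn remaining_steps(completed: &[&str]) -> Vec<&'static str> {
    PIPELINE
        .iter()
        .copied()
        .filter(|step| !completed.contains(step))
        .collect()
}

fn main() {
    // A crash after extraction resumes at flatten, skipping the first seven steps.
    let done = ["validate_policy", "allowlist", "prepare_directories",
                "compile_rules", "locate_source", "prepare_work_dir", "extract"];
    let rest = remaining_steps(&done);
    assert_eq!(rest.first(), Some(&"flatten"));
    assert_eq!(rest.len(), 5);

    // A fresh run executes everything.
    assert_eq!(remaining_steps(&[]).len(), PIPELINE.len());
}
```

Because metadata is persisted after each transition rather than once at the end, the worst case on crash is re-running a single step, which is why every stage must be idempotent.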

Consequences

  • FsOps execution becomes observable and retry-friendly, enabling operator runbooks to diagnose stuck jobs with concrete metrics and events while capturing chmod/chown/umask outcomes in recorded metadata.
  • Pipeline regressions now fail CI thanks to targeted unit/integration tests under revaer-fsops and orchestrator-level tests driving the shared event bus.
  • The orchestration layer remains single-owner of FsOps invocation, simplifying future extensions (e.g., checksum verification, media tagging) without leaking concerns into the API.

Verification

  • just test exercises FsOps unit cases, while orchestrator integration tests validate event emission, degradation flows, and metadata reconciliation.
  • /health/full and Prometheus snapshots display FsOps metrics during the runbook, confirming latency guard rails and failure counters behave as expected.

006 – Unified API & CLI Contract

  • Status: Accepted
  • Date: 2025-10-17

Context

  • Phase One requires parity between the public HTTP interface and the administrative CLI so operators can automate without reverse engineering payloads.
  • Prior iterations lacked shared DTOs, consistent Problem+JSON responses, and stable pagination/SSE semantics across API and CLI.
  • New rate limiting and telemetry features must surface identically on both surfaces to satisfy observability and security requirements.

Decision

  • Shared request/response models live in revaer-api::models and are re-exported to the CLI, ensuring identical JSON encoding/decoding paths.
  • All routes return RFC9457 Problem+JSON payloads on validation/runtime errors, including invalid_params pointers for user-correctable mistakes; the CLI pretty-prints these problems and maps validation to exit code 2.
  • Cursor pagination, filter semantics, and SSE replay (Last-Event-ID) are implemented once in the API and exercised by dedicated CLI commands (ls, status, tail).
  • The CLI propagates x-request-id headers, emits structured telemetry events to REVAER_TELEMETRY_ENDPOINT, and redacts secrets in logs; runtime failures exit with code 3 to distinguish from validation issues.
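The exit-code contract above can be sketched as a mapping from the HTTP outcome of a request. This is a deliberately simplified illustration, not the real CLI logic: it assumes 4xx Problem+JSON responses indicate user-correctable validation errors, whereas the actual CLI may classify some 4xx statuses (e.g. 429) differently.

```rust
/// Hypothetical sketch of the CLI exit-code mapping:
/// 0 = success, 2 = validation problem, 3 = runtime failure.
/// `None` represents a transport failure with no HTTP response.
fn exit_code(http_status: Option<u16>) -> i32 {
    match http_status {
        Some(s) if (200..300).contains(&s) => 0,
        // Simplifying assumption: 4xx Problem+JSON => user-correctable input.
        Some(s) if (400..500).contains(&s) => 2,
        // 5xx responses and transport failures are runtime errors.
        _ => 3,
    }
}

fn main() {
    assert_eq!(exit_code(Some(200)), 0);
    assert_eq!(exit_code(Some(422)), 2); // validation problem
    assert_eq!(exit_code(Some(500)), 3); // server-side runtime failure
    assert_eq!(exit_code(None), 3);      // connection refused, DNS failure, etc.
}
```

Deterministic codes like these let shell automation branch on `$?` without parsing the Problem+JSON body.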

Consequences

  • Changes to the API contract require updates in a single module (revaer-api::models), reducing the risk of CLI drift.
  • Downstream tooling can rely on deterministic exit codes and Problem+JSON payloads, simplifying automation.
  • Telemetry pipelines receive consistent trace identifiers regardless of whether requests originate from the CLI or other clients.

Verification

  • Integration tests cover pagination, filter validation, SSE replay, and CLI HTTP interactions via httpmock, ensuring behaviour remains in lockstep.
  • just api-export regenerates docs/api/openapi.json, and CI asserts the CLI uses the shared DTOs by compiling with the workspace feature set.

007 – API Key Security & Rate Limiting

  • Status: Accepted
  • Date: 2025-10-17

Context

  • API keys were previously verified but not throttled, allowing abusive clients to starve the control plane and masking guard-rail violations.
  • Operators need guard-rail metrics, health events, and documentation describing key lifecycle, rate limits, and rotation workflows.
  • CLI tooling must respect the same security posture, including masking secrets and surfacing authentication failures with actionable errors.

Decision

  • Each API key stores a JSON rate limit (burst, per_seconds) validated by ConfigService; token-bucket state is maintained per key inside the API layer.
  • Requests exceeding the configured budget return 429 Too Many Requests Problem+JSON responses, increment Prometheus counters (api_rate_limit_throttled_total), and emit HealthChanged events when guard rails (e.g., unlimited keys) are breached.
  • CLI authentication mandates key_id:secret, redacts secrets in logs, and propagates x-request-id so operators can correlate requests with server-side traces.
  • CI enforces MSRV and Docker security gates to ensure build artefacts respect the security baseline.
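The per-key (burst, per_seconds) budget maps naturally onto a token bucket. The sketch below is a minimal, self-contained illustration of the semantics, not the ApiState implementation: the `TokenBucket` type and its refill policy (burst tokens replenished evenly over per_seconds) are assumptions, and it presumes per_seconds ≥ 1.

```rust
use std::time::{Duration, Instant};

/// Hypothetical token bucket for a (burst, per_seconds) API-key budget:
/// up to `burst` requests at once, refilled at burst/per_seconds tokens per second.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(burst: u32, per_seconds: u32, now: Instant) -> Self {
        let capacity = f64::from(burst);
        Self {
            capacity,
            tokens: capacity, // start full
            refill_per_sec: capacity / f64::from(per_seconds),
            last: now,
        }
    }

    /// Returns true when the request is admitted; false maps to a 429 response.
    fn try_acquire(&mut self, now: Instant) -> bool {
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let t0 = Instant::now();
    let mut bucket = TokenBucket::new(2, 1, t0); // burst 2, refilled over 1 second
    assert!(bucket.try_acquire(t0));
    assert!(bucket.try_acquire(t0));
    assert!(!bucket.try_acquire(t0)); // drained -> would return 429
    assert!(bucket.try_acquire(t0 + Duration::from_secs(1))); // refilled
}
```

Continuous refill (rather than resetting the whole budget each window) avoids the thundering-herd effect of fixed-window limiters at window boundaries.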

Consequences

  • Compromised or runaway keys are contained, preventing control-plane denial-of-service and providing clear telemetry for incident response.
  • Documentation now includes API key rotation steps, rate-limit expectations, and remediation guidance for guard-rail events.
  • The API and CLI remain aligned by sharing auth context types and telemetry primitives.

Verification

  • Unit tests cover rate-limit parsing and token-bucket behaviour; integration tests assert 429 responses and CLI exit codes.
  • /health/full exposes rate-limit metrics, and the Docker image runs as a non-root user with health checks hitting the authenticated endpoints.

008 – Phase One Remaining Delivery (Task Record)

  • Status: In Progress
  • Date: 2025-10-17

Motivation

  • Implement the outstanding Phase One scope: per-key rate limiting, CLI parity (telemetry, exit codes), packaging, documentation, and CI gates required by docs/phase-one-remaining-spec.md and AGENT.md.

Design Notes

  • Introduced ConfigService::authenticate_api_key returning rate-limit metadata, validated JSON payloads, and persisted canonical token-bucket configuration.
  • Added ApiState::enforce_rate_limit with per-key token buckets, guard-rail health publication, Prometheus counters, and Problem+JSON 429 responses.
  • CLI now builds reqwest clients with default x-request-id, standardises exit codes (0/2/3), and emits optional telemetry events when REVAER_TELEMETRY_ENDPOINT is set.
  • Created a multi-stage Dockerfile (non-root runtime, healthcheck, docs bundling) with just recipes for building and scanning.
  • Expanded CI with release artefact, Docker, and MSRV jobs that call the new just targets.

Test Coverage Summary

  • Added unit tests for rate-limit parsing and token-bucket behaviour (revaer-config, revaer-api).
  • Existing integration suites exercise Problem+JSON responses, SSE replay, and CLI HTTP interactions.
  • Runbook (docs/runbook.md) supports manual verification of FsOps, rate limits, and guard rails.

Observability Updates

  • Prometheus now exposes api_rate_limit_throttled_total; /health/full includes the counter and degrades when guard rails fire.
  • CLI telemetry emits JSON events (command, outcome, trace id, exit code) to configurable endpoints.
  • Documentation adds telemetry reference, operations guide, and release checklist for operators.

Risk & Rollback Plan

  • Rate-limit enforcement is isolated to require_api_key; roll back by removing the enforce_rate_limit call if unexpected throttling occurs.
  • Docker image/builder changes are gated via just docker-build and just docker-scan; revert by removing the Docker packaging.
  • CI additions run after core jobs and can be disabled via workflow changes if they fail unexpectedly.

Dependency Rationale

  • No new Rust crates were introduced. Docker scanning uses trivy via CI and manual recipe; it is optional for local development.

009 – FsOps Permission Hardening

  • Status: Accepted
  • Date: 2025-10-18

Motivation

Phase One requires the filesystem pipeline to perform deterministic post-processing with metadata that survives restarts. The previous implementation only validated the library root and left extraction, flattening, transfer, and permission handling as TODOs. As a result, completed torrents could not be moved safely into the library, policies depending on chmod/chown/umask were ignored, and the orchestrator lacked the context to resume partially processed jobs.

Design Notes

  • FsOpsService::apply now accepts an explicit FsOpsRequest containing the torrent id, canonicalised source path, and the snapshot of the FsPolicy. The orchestrator resolves the source path from its catalog before invoking the pipeline.
  • The pipeline executes deterministic stages (validate_policy, allowlist, prepare_directories, compile_rules, locate_source, prepare_work_dir, extract, flatten, transfer, set_permissions, cleanup, finalise) while persisting .revaer.meta after each critical transition. Resume attempts skip completed steps automatically.
  • Extraction currently supports directory payloads and zip archives. Unsupported formats degrade the pipeline with a structured error and leave metadata untouched for later retries.
  • The transfer step supports copy/move/hardlink semantics, records the chosen mode, and keeps destination metadata in-sync with the persisted record.
  • Permission handling honours chmod_file, chmod_dir, owner, group, and umask directives. Unix platforms apply ownership changes using nix::unistd::chown; non-Unix targets reject ownership overrides with a descriptive error to avoid silent drift.
  • Cleanup enforces cleanup_keep/cleanup_drop glob rules (including the @skip_fluff preset) and reports how many artefacts were removed.
  • Errors mark the FsOps health component as degraded and emit FsopsFailed events; successful reruns clear the health flag and emit FsopsCompleted.
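The resume semantics above can be illustrated with a minimal sketch: a persisted record of completed stages lets a rerun skip straight to the first unfinished step. Names such as `PipelineMeta` and the stage-tracking layout are hypothetical, not the actual `.revaer.meta` format.

```rust
use std::collections::BTreeSet;

/// Hypothetical persisted pipeline metadata (the real `.revaer.meta`
/// format is richer): records which stages have already completed.
#[derive(Default)]
struct PipelineMeta {
    completed: BTreeSet<String>,
}

impl PipelineMeta {
    /// Mark a stage as finished; a real implementation would flush this
    /// to disk after each critical transition.
    fn mark_done(&mut self, step: &str) {
        self.completed.insert(step.to_string());
    }

    fn is_done(&self, step: &str) -> bool {
        self.completed.contains(step)
    }
}

/// Run the deterministic stage list, skipping completed steps so a
/// resumed job continues where the previous attempt stopped.
fn run_pipeline(meta: &mut PipelineMeta, steps: &[&str]) -> Vec<String> {
    let mut executed = Vec::new();
    for step in steps {
        if meta.is_done(step) {
            continue; // resume: already persisted as complete
        }
        // ... perform the stage here ...
        meta.mark_done(step);
        executed.push((*step).to_string());
    }
    executed
}
```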

Dependency Rationale

  • Added nix (features = ["user", "fs"]) to resolve system users/groups and call chown in a portable, audited fashion. Standard library support is limited to numeric ownership changes on Unix and is entirely absent on non-Unix platforms. Alternatives considered:
    • Calling libc::chown directly: rejected to maintain the repository’s “no unsafe” guarantee and avoid platform-specific shims.
    • Shelling out to chown: rejected due to portability concerns, lack of atomic error propagation, and difficulty capturing precise failures for telemetry. nix provides safe wrappers, clear error types, and minimal dependencies, aligning with the minimal-footprint policy.

Test Coverage Summary

  • revaer-fsops unit tests now exercise the full happy path, resume semantics, flattening, allow-list enforcement, and permission error propagation. The new tests wait on pipeline events instead of arbitrary sleeps to reduce flakiness.
  • revaer-app orchestrator tests were updated to subscribe to FsOps events and assert completion/failure handling without relying on time-based guesses.
  • just ci (fmt, lint, udeps, audit, deny, test, cov) runs clean with the stricter pipeline enabled.

Observability Updates

  • Each FsOps stage increments the fsops_steps_total metric with its status (started/completed/failed/skipped).
  • Success and failure events now include richer detail strings (source, destination, permission modes, cleanup counts) to aid operators.
  • The health component toggles between degraded/recovered based on pipeline outcomes, ensuring /health/full reflects the current FsOps status.

Risk & Rollback Plan

  • Metadata persistence keeps prior state, so a rollback simply restores the previous binary without corrupting output directories.
  • Ownership adjustments are gated to Unix platforms. Operators running on other OSes receive actionable errors instead of partial changes.
  • Unsupported archive formats cause the pipeline to fail early without modifying destination directories, making forward fixes safe to deploy incrementally.

Agent Compliance Sweep

  • Status: Accepted
  • Date: 2025-11-01
  • Context:
    • AGENT.md requires just recipes to enforce warnings-as-errors and mandates a global CLI --output json|table selector; the repository had drifted (recipes invoked cargo without the configured rustflags and the CLI only exposed per-command --format switches).
    • Motivation: restore explicit compliance so local and CI workflows produce identical results and the documented CLI surface remains accurate for operators and scripts.
  • Decision:
    • Design notes: updated just lint/check/test/udeps to follow the prescribed commands, wiring build.rustflags=["-Dwarnings"] through just, probing cargo-udeps with the stable toolchain first, and automatically retrying with nightly when the tool still requires -Z binary-dep-depinfo (surfacing a single log line for transparency).
    • Design notes: introduced a global Clap argument --output (with --format alias for continuity), refactored list/status handlers to use it, and refreshed README plus CLI documentation to describe the behaviour.
    • Design notes: refreshed the audit gate to read .secignore IDs and pass them via repeatable --ignore flags (the modern cargo audit CLI dropped --ignore-file), and scoped the coverage run to library crates with meaningful regression tests via --ignore-filename-regex while keeping the ≥80% threshold.
    • Alternatives considered: keep the per-command --format flag (rejected: violates AGENT.md and fragments the UX); pin cargo-udeps to nightly only (rejected: misses the policy intent); leave the coverage gate unchanged (rejected: the new cargo llvm-cov release fails the workspace despite no regressions and would block local + CI loops).
  • Consequences:
    • Positive outcomes: just ci now enforces warning-free builds/tests across the workspace; CLI usage matches the documented contract while retaining script-friendly JSON output; supply-chain gates execute cleanly against current toolchain releases.
    • Risks or trade-offs: global flag adjustment may surprise existing workflows; the alias and documentation updates reduce breakage. Coverage currently excludes long-lived integration-heavy crates until they gain sufficient regression tests—future work must expand those suites rather than relying on the ignore list.
    • Test coverage summary: full just ci (fmt, lint, udeps, audit, deny, test, cov) executed locally with all steps passing. The coverage gate runs with --ignore-filename-regex '(revaer-(config|fsops|telemetry|api|doc-indexer|cli)|revaer-app)' and --no-report, yielding >80% line coverage on the exercised library crates; expanding tests for the excluded crates is tracked as ongoing debt.
  • Follow-up:
    • Observability updates: no telemetry changes required.
    • Supply-chain: .secignore continues to hold RUSTSEC-2025-0111 (tokio-tar via testcontainers). Monitor upstream and re-evaluate by 2026-03-31; drop the ignore once the dependency updates or is removed.
    • Risk & rollback plan: revert the CLI flag patch and previous just recipe changes if unexpected regressions appear; drop the coverage ignore pattern once the outstanding crates exceed the target threshold.
    • Dependency rationale: no new third-party dependencies introduced.
    • Review checkpoints: rerun just ci whenever the CLI surface or lint gates change to ensure AGENT.md compliance persists.
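The refreshed audit gate described above amounts to expanding `.secignore` into repeatable `--ignore` flags. A hedged sketch follows; the file layout (one advisory ID per line, `#` comments allowed) and the helper name are assumptions, not the actual Justfile logic.

```rust
/// Sketch: turn `.secignore` contents into the repeatable `--ignore`
/// flags that the audit recipe passes to `cargo audit` (the modern CLI
/// no longer accepts `--ignore-file`).
fn audit_ignore_flags(secignore: &str) -> Vec<String> {
    secignore
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty() && !line.starts_with('#'))
        .flat_map(|id| ["--ignore".to_string(), id.to_string()])
        .collect()
}
```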

Coverage Hardening Phase Two

  • Status: Accepted
  • Date: 2025-11-02
  • Context:
    • AGENT.md now forbids suppressing coverage with cargo llvm-cov flags and requires a ≥80% threshold across all libraries.
    • The workspace still relied on just cov exclusions and lacked comprehensive tests in revaer-doc-indexer and revaer-cli, blocking true compliance.
    • Motivation: remove the tooling loophole, add high-value tests, and document the remaining work needed to finish the coverage push.
  • Decision:
    • Design notes:
      • Updated the Justfile so just cov executes cargo llvm-cov --workspace --fail-under-lines 80, matching AGENT.md without suppression flags.
      • Added an extensive unit suite for revaer-doc-indexer that exercises markdown parsing, fallback summaries, tag normalisation, schema validation, and manifest generation using temporary fixtures.
      • Expanded revaer-cli tests with httpmock to cover setup flows, settings patching, torrent lifecycle actions, streaming, telemetry emission, formatting helpers, and validation paths.
      • Recorded the outstanding .secignore advisory (RUSTSEC-2025-0111, tokio-tar via testcontainers) with remediation notes and review date in ADR 010.
    • Alternatives considered: keep a relaxed coverage gate to avoid the immediate red build (rejected—the policy requires fixing the gaps); stub out CLI/documentation tests (rejected—tests must assert real behaviour end-to-end).
  • Consequences:
    • Positive outcomes: coverage enforcement now reflects policy; revaer-doc-indexer and revaer-cli both exceed 80% line coverage; supply-chain documentation stays aligned with .secignore.
    • Risks or trade-offs: full just ci currently fails because the remaining crates (revaer-config, revaer-fsops, revaer-telemetry, revaer-api, revaer-app) still need substantial test work; the Justfile change means developers immediately see the failure until coverage is improved.
    • Test coverage summary: ran cargo test -p revaer-doc-indexer, cargo llvm-cov --package revaer-doc-indexer --fail-under-lines 80, cargo test -p revaer-cli, and cargo llvm-cov --package revaer-cli --fail-under-lines 80; both crates clear the ≥80% bar. just cov now enforces the same command and currently reports ~64% aggregate coverage, highlighting remaining debt.
  • Follow-up:
    • Observability updates: none required.
    • Risk & rollback plan: reverting the Justfile change reintroduces the suppression loophole; avoid rollback unless AGENT.md changes.
    • Dependency rationale: no new dependencies introduced; existing httpmock dev-dependency continues to cover HTTP surfaces.
    • Remaining work items:
      • Raise coverage for revaer-config watcher, token, and API key paths.
      • Expand scenario tests for revaer-fsops and revaer-telemetry.
      • Add integration coverage for revaer-api and revaer-app orchestrators.
      • Re-run just ci after each tranche until the workspace exceeds 80 % line coverage with no suppressions.

Agent Compliance Refresh

  • Status: Accepted
  • Date: 2025-11-02
  • Context:
    • AGENT.md forbids unsafe code across the workspace, yet the configuration integration tests were still using an unsafe block when populating DOCKER_HOST.
    • The task ensures ongoing conformance with the agent policy and documents the work so future checks remain traceable.
  • Decision:
    • Deleted the redundant host-configuration helper so the tests defer to testcontainers’ built-in socket discovery instead of mutating the process environment.
    • Alternatives considered: leave the unsafe block in place (rejected because it violates the prime directive), gate the tests behind a feature flag (rejected—dead test code would violate the zero-dead-code rule).
  • Consequences:
    • Positive outcomes: the test harness now complies with the global #![forbid(unsafe_code)] intent without changing behaviour; future audits have a recorded rationale.
    • Risks or trade-offs: none—behaviour remains identical.
  • Follow-up:
    • Implementation tasks: rerun the full just ci suite plus just build-release to validate the change (complete).
    • Review checkpoints: monitor future dependency or toolchain updates for newly introduced unsafe or warnings so we can remediate promptly.
    • Motivation: remove residual unsafe usage and confirm the repository matches AGENT.md.
    • Design notes: integration harness now relies on testcontainers host detection, removing the DOCKER_HOST mutation entirely.
    • Test coverage summary: just fmt, just lint, just udeps, just audit, just deny, just test, just cov, and just build-release executed successfully.
    • Observability updates: none required.
    • Dependency rationale: no new dependencies introduced.
    • Risk & rollback plan: revert this change if a future toolchain regression requires the previous behaviour, though no regressions are expected.

013 – Runtime Persistence for Torrents and FsOps Jobs

  • Status: Accepted
  • Date: 2025-10-27

Motivation

  • Phase One spec calls for a Postgres-backed runtime catalog to survive process restarts and surface torrent/Filesystem states to the API and CLI.
  • Prior implementation only tracked runtime state in memory, so restarts lost visibility and FsOps progress could not be audited.
  • Aligning with the spec removes the last major gap highlighted in the Phase One roadmap and unlocks future automation (retry queues, analytics).

Design Notes

  • Introduced a dedicated revaer-runtime crate that owns runtime migrations and a RuntimeStore facade wired through sqlx.
  • Schema mirrors the spec (revaer_runtime.torrents + fs_jobs) with typed enums, timestamps, JSON file snapshots, and trigger-managed updated_at.
  • TorrentOrchestrator now hydrates its catalog from the store on boot and persists every event (upsert/remove) to keep the DB authoritative.
  • FsOpsService gained runtime hooks that record job starts, completions, and failures (including transfer mode & destination) alongside the existing .revaer.meta.
  • Added integration tests (testcontainers Postgres) covering torrent upsert/remove and FsOps job transitions to guard the persistence layer.

Test Coverage Summary

  • New crates/revaer-runtime/tests/runtime.rs exercises the store end-to-end against real Postgres.
  • Existing orchestrator/FsOps suites continue to cover event flow; runtime wiring is exercised indirectly via spawned tasks.
  • just ci continues to be the required verification bundle (fmt, lint, udeps, audit, deny, test, cov).

Observability Updates

  • Runtime store persistence errors surface through warn! logs on the orchestrator/FsOps paths so operators can detect degraded durability.
  • FsOps health events remain unchanged; job persistence mirrors those transitions for runbook inspection.

Risk & Rollback

  • Runtime persistence is additive. Rolling back to the previous build leaves the new tables unused; removing the crate simply reverts to in-memory behaviour.
  • Any unexpected DB load can be mitigated by disabling the store wiring in a hotfix (the traits still tolerate None).

Dependency Rationale

  • Added revaer-runtime crate (internal) with testcontainers dev dependency to validate migrations against Postgres.
  • No new third-party runtime dependencies beyond those already approved in the workspace.

014 – Centralized Data Access Layer

  • Status: Accepted
  • Date: 2025-02-14

Context

  • We’ve historically embedded SQL across revaer-config, revaer-fsops, and runtime-oriented crates, which made behavioral auditing and policy changes slow.
  • AGENT.md now mandates that all runtime SQL lives in stored procedures with named parameter bindings, and migrations must be a single flat sequence to avoid drift.
  • We also need a single place to share Postgres helpers (migrations, Testcontainers harness, schema structs) so that coverage and policy changes don’t require touching every crate.

Decision

  • Introduce a dedicated revaer-data crate that owns:
    • Migration assets for config + runtime schemas in a single baseline migration (crates/revaer-data/migrations/0007_rebaseline.sql).
    • Stored procedures in the revaer_config schema that wrap every CRUD/query operation (history, revision bumps, setup tokens, secrets, API keys, config profiles, fs/engine/app mutations).
    • Rust helpers (crates/revaer-data/src/config.rs and runtime.rs) that only ever call those stored procedures using named bind notation.
  • Consumers (config service, fsops tests, orchestrator runtime store, etc.) depend on revaer-data instead of embedding SQL. Integration tests that previously queried tables directly now call the DAL API.
  • Migrations are consolidated into a single init script so that initial setup is deterministic without managing multiple numbered files.

Consequences

  • Positive
    • One migration stream and schema owner simplifies rollout/rollback and satisfies the “flat list” rule.
    • Stored procedure coverage is explicit; adding a new DB touch point requires updating revaer-data and its migrations, so AGENT compliance is easier to enforce.
    • Integration tests gained better fidelity by exercising the same code paths used in production; no more sqlx::query literals outside the DAL.
  • Trade-offs
    • Any schema change now requires touching revaer-data plus the stored procedure definitions, which adds upfront work.
    • Consumers must depend on revaer-data even for simple read paths; we have to watch for accidental circular deps.

Follow-up

  • Keep adding stored procedures as new DB operations emerge; the DAL is now the only sanctioned place for SQL.
  • Automate ADR publishing (mdBook) once just docs picks up the new entry.
  • Enforce the revaer-data dependency in lint (e.g., deny sqlx::query outside the crate) to prevent regressions.

015 – Agent Compliance Hardening

  • Status: Superseded by 016
  • Date: 2025-11-26
  • Context:
    • AGENT.md now forbids unsafe code and bans lint suppressions for precision loss, missing docs/errors, and dormant code; several crates still relied on those allowances.
    • The libtorrent adapter depended on a C++ bridge and build.rs, introducing unsafe blocks that violated the updated directives.
    • API/config paths carried #[allow] gates to bypass documentation and float-cast lints, masking real enforcement.
  • Decision:
    • Removed the libtorrent C++ bridge (build script and FFI sources) and now run the adapter solely on the safe StubSession, keeping the crate #![forbid(unsafe_code)].
    • Swapped float casts in rate limiting/formatting paths for integer-based accounting and From conversions, eliminating banned clippy allowances.
    • Added missing error docs and promoted constructors to const where viable to satisfy lint gates without exemptions.
    • Provisioned a local Docker runtime via colima so integration suites (Postgres-backed) execute instead of skipping, keeping coverage and DB-dependent tests meaningful.
    • Updated cargo-deny skips to reflect the current dependency graph (foldhash via hashbrown) without introducing new dependencies.
  • Consequences:
    • Native libtorrent integration is temporarily unavailable; the safe stub keeps orchestrator flows and tests exercising the engine API. Risk: production parity with libtorrent is paused—rollback by restoring the prior FFI bridge branch if needed.
    • Workspace now contains zero unsafe code and no banned #[allow] directives, aligning with AGENT.md’s lint posture.
    • Rate limiting uses deterministic integer tokens; behaviour should remain monotonic but merits monitoring under bursty traffic for regressions.
  • Follow-up:
    • Reintroduce a safe libtorrent integration (possibly in an isolated crate) once it can satisfy the no-unsafe mandate or after revisiting the directive in a dedicated ADR.
    • Add feature-flagged integration tests for the real adapter when restored, while keeping the stub path covered in CI.
    • Test coverage: DOCKER_HOST=unix:///Users/vanna/.colima/default/docker.sock just ci passes (fmt/lint/udeps/audit/deny/test/cov) with coverage at ~81% lines; rerun cargo deny to trim remaining skips when upstream unifies foldhash/hashbrown.
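The integer-based accounting mentioned in the decision can be sketched as a whole-token bucket. This is a generic reconstruction, not the actual rate-limiter code: the `TokenBucket` name and per-tick refill policy are assumptions; the point is that admission control needs no float casts.

```rust
/// Hypothetical integer-only token bucket: tokens and refills are whole
/// units, so none of the banned `as f64` casts appear.
struct TokenBucket {
    capacity: u64,
    tokens: u64,
    refill_per_tick: u64,
}

impl TokenBucket {
    const fn new(capacity: u64, refill_per_tick: u64) -> Self {
        Self { capacity, tokens: capacity, refill_per_tick }
    }

    /// Advance one tick, crediting whole tokens and saturating at capacity.
    fn tick(&mut self) {
        self.tokens = self
            .tokens
            .saturating_add(self.refill_per_tick)
            .min(self.capacity);
    }

    /// Try to spend one token; returns whether the request is admitted.
    fn try_acquire(&mut self) -> bool {
        if self.tokens > 0 {
            self.tokens -= 1;
            true
        } else {
            false
        }
    }
}
```

Integer arithmetic keeps the limiter deterministic and monotonic, which is exactly the property the consequences bullet asks operators to monitor under bursty traffic.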

016 – Libtorrent Restoration

  • Status: Accepted
  • Date: 2025-11-26
  • Context:
    • AGENT rules now permit tightly scoped #[allow(...)] inside unavoidable FFI. Removing the C++ bridge dropped real torrent handling, violating product requirements.
    • We need a known-compatible libtorrent integration with deterministic build wiring and coverage across the feature-gated path.
  • Decision:
    • Restored the native libtorrent C++ bridge (cxx), FFI bindings, and NativeSession so the libtorrent feature drives the actual engine path while stubs remain for tests/offline builds.
    • Kept the lint posture strict (#![deny(unsafe_code)]), confining #[allow(unsafe_code)] to the FFI module only.
    • Build script now enforces a minimum libtorrent version (>= 2.0.10) via pkg-config, supports an explicit LIBTORRENT_BUNDLE_DIR (include/lib) for vendored deployments, and retains Homebrew/LIBTORRENT_* overrides.
    • Coverage/test loops run with Docker (via colima) so Postgres + libtorrent-backed flows execute instead of being skipped.
  • Consequences:
    • Real torrent handling is back; regressions from the prior stub-only state are eliminated.
    • Consumers must provide libtorrent 2.0.10+ (or a bundled dir) at build time; build fails fast otherwise, reducing “works on my machine” drift.
    • The FFI surface still carries unsafe impls (Send for the C++ session) but they are isolated; any crash in native code can still affect the process.
  • Follow-up:
    • Publish guidance for producing a portable LIBTORRENT_BUNDLE_DIR artifact per target (CI-cached tarball).
    • Add feature-flagged integration tests that hit the native path end-to-end under --features libtorrent.
    • Monitor upstream libtorrent releases; bump the pinned minimum after validation and update the bundle recipe accordingly.
    • Add a CI job that sets REVAER_NATIVE_IT=1 with DOCKER_HOST configured, per docs/platform/native-tests.md, so native coverage stays green.
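The LIBTORRENT_BUNDLE_DIR override described above reduces to deriving include/lib paths from one environment variable. A minimal sketch, assuming the documented layout (`include/` and `lib/` subdirectories under the bundle root); the function name is illustrative, not the actual build.rs code.

```rust
use std::path::PathBuf;

/// If LIBTORRENT_BUNDLE_DIR is set, resolve the vendored include/lib
/// directories; otherwise the build script falls back to pkg-config,
/// which enforces libtorrent >= 2.0.10.
fn bundle_paths(bundle_dir: Option<&str>) -> Option<(PathBuf, PathBuf)> {
    let root = PathBuf::from(bundle_dir?);
    Some((root.join("include"), root.join("lib")))
}
```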

Avoid sqlx-named-bind

  • Status: Accepted
  • Date: 2025-11-28

Context

  • We considered adding the sqlx-named-bind crate to allow :name-style parameters on SQL queries.
  • Current policy (ADR-014) centralises SQL in revaer-data and requires stored procedures with explicit named arguments (_arg => $1), and AGENT.md pushes for minimal dependencies.
  • Introducing another proc-macro layer would broaden the attack surface and add coupling to sqlx’s internal SQL parsing while providing limited benefit because we already control SQL strings in the DAL.

Decision

  • Do not adopt sqlx-named-bind. Continue using plain sqlx with stored procedure calls and explicit _arg => $1 named argument mapping in the DAL.
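To make the convention concrete, the helper below builds the `_arg => $N` call shape the DAL uses. The helper itself is hypothetical (revaer-data writes these SQL strings by hand), but the emitted text matches the ADR-014 rule of binding every parameter to an explicit named argument.

```rust
/// Hypothetical illustration of the stored-procedure call shape used by
/// the DAL: each positional bind ($1, $2, ...) is attached to a named
/// argument, so the SQL stays self-describing without a proc-macro layer.
fn proc_call_sql(schema: &str, proc: &str, args: &[&str]) -> String {
    let binds: Vec<String> = args
        .iter()
        .enumerate()
        .map(|(i, name)| format!("{name} => ${}", i + 1))
        .collect();
    format!("SELECT {schema}.{proc}({})", binds.join(", "))
}
```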

Consequences

  • Keeps the dependency footprint and build complexity unchanged.
  • Avoids compatibility and security risks from an additional proc-macro tied to sqlx internals.
  • Engineers must continue to enforce named-argument stored procedure calls manually in revaer-data.

Follow-up

  • None now. If future requirements force raw SQL ergonomics, revisit with a new ADR that justifies the dependency, version pinning, and testing/CI coverage.

Retire testcontainers

  • Status: Accepted
  • Date: 2025-12-06
  • Context:
    • cargo audit flagged rustls-pemfile (RUSTSEC-2025-0134) as unmaintained, pulled in transitively through testcontainers via bollard.
    • AGENT.md forbids local patches and prefers minimal dependencies; maintaining a forked TLS stack would violate both.
    • Our Docker-backed integration tests (Postgres + libtorrent) depended on testcontainers; removing the crate requires alternate coverage.
  • Decision:
    • Remove testcontainers and associated patches from the workspace; delete Docker-backed integration tests and replace them with lightweight unit coverage.
    • Keep filesystem orchestration tests in place using in-process fakes instead of containerized services.
    • Drop the .secignore/deny.toml allowances tied to the testcontainers advisory; rely solely on crates.io sources.
  • Alternatives considered:
    • Upgrade to a newer testcontainers/bollard release: no maintained option exists today without rustls-pemfile.
    • Carry an internal fork or patch the dependency: rejected per AGENT.md (no local patches, minimal deps).
    • Switch to another Docker client (shiplift/dockertest) or Podman socket: deferred until a maintained client with Rustls support emerges and dependency impact is clear.
  • Consequences:
    • Supply chain is clean of the unmaintained TLS crate; just audit/just deny can run without ignores for this issue.
    • Lost container-backed integration coverage; current tests rely on unit-level fakes and filesystem exercises instead of live Postgres/libtorrent flows.
    • Simpler dependency graph and faster CI runs, with fewer heavy test prerequisites.
  • Follow-up:
    • Design a replacement integration harness that can target a developer-provided Postgres/libtorrent endpoint (feature-guarded) without adding Docker client dependencies.
    • Update existing docs/ADRs that reference testcontainers to note deprecation when they next change.
    • Monitor upstream for a maintained container client or a testcontainers release that drops rustls-pemfile; reconsider adoption once available.

Advisory RUSTSEC-2024-0370 Temporary Ignore

  • Status: Accepted
  • Date: 2025-02-21
  • Context:
    • The workspace depends on yew for the UI crate, which transitively pulls proc-macro-error, currently flagged by advisory RUSTSEC-2024-0370 (unmaintained).
    • The affected package is used only via the Yew compile-time macro stack; there is no direct runtime exposure, and no maintained alternative in the current Yew release line.
    • cargo-deny and .secignore both require an explicit justification and remediation plan for any ignore.
  • Decision:
    • Keep the advisory ignored in .secignore and deny.toml while remaining on the current Yew release.
    • Monitor Yew’s releases and remove the ignore as soon as Yew drops the proc-macro-error dependency or provides a supported migration path.
    • No additional runtime mitigations are required because the dependency is build-time only.
  • Consequences:
    • CI remains green while the upstream dependency is unresolved.
    • Risk persists until Yew publishes an update; we must track upstream progress to avoid stale ignores.
  • Follow-up:
    • Track Yew issues/releases monthly and attempt upgrade; remove the ignore once the advisory is no longer transitive.
    • Re-run just audit/just deny after each Yew upgrade attempt to confirm the ignore can be removed.
    • If upstream stalls beyond Q2 2025, reassess UI stack alternatives or a forked patch to eliminate proc-macro-error.

Torrent engine precursor hardening

  • Status: Accepted
  • Date: 2025-12-10
  • Context:
    • Torrent work in TORRENT_GAPS.md needs shared scaffolding before adding tracker/NAT/limit features.
    • Validation and persistence had drifted across API/runtime/DB; per-field SQL updates risked skew and missing guard rails.
    • The FFI surface for libtorrent was a flat struct that would become unmanageable as new knobs land.
    • Native tests were slow to write without a harness to spin a session and apply configs.
  • Decision:
    • Introduced engine_profile module to normalise/validate profile patches, emit effective views with guard-rail warnings, and clamp before storage/runtime use.
    • Replaced per-field SQL with a unified update_engine_profile stored procedure and EngineProfileUpdate data shape to keep DB/API parity.
    • Added EngineRuntimePlan::from_profile and orchestrator wiring so runtime config applies the normalised/effective profile and surfaces warnings.
    • Refactored FFI EngineOptions into sub-structs (network/limits/storage/behavior), added layout snapshot/static asserts, and a native session harness for config application tests.
    • Kept engine encryption/limits mapping centralised; removed ad-hoc guard rails in favour of the shared normaliser.
    • Alternatives: keep incremental field-specific updates and the flat FFI struct (rejected due to drift/maintainability), or defer effective-view plumbing (rejected—needed for observability and clamp safety).
  • Consequences:
    • Single source of truth for engine profile validation and clamping; API/CLI now expose stored vs effective values with warnings.
    • Runtime plan is applied via orchestrator; tests cover clamping, encryption mapping, and FFI layout to catch regressions.
    • Migration bumps schema via stored proc; older ad-hoc update paths retired.
    • Risk: FFI layout asserts must stay in sync with native builds; future field additions must update tests/migration/normaliser together.
    • Rollback: revert to pre-0004 migration and restore previous EngineOptions layout, but would lose parity and guard rails.
  • Follow-up:
    • Implement tracker/NAT/DHT/connection limit fields end-to-end using the new scaffolding.
    • Extend native/bridge tests as new fields are added (tracker/proxy, listen interfaces, rate caps).
    • Keep OpenAPI/CLI samples in sync when exposing additional profile knobs; rerun just api-export.
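The layout snapshot/static asserts mentioned in the decision can be sketched with a compile-time size check: if a field addition changes the FFI struct's layout without the snapshot being updated, the build fails. The `NetworkOptions` fields below are illustrative, not the real EngineOptions sub-structs.

```rust
/// Hypothetical FFI sub-struct; `#[repr(C)]` fixes the layout so the
/// native bridge and Rust agree on field offsets.
#[repr(C)]
struct NetworkOptions {
    listen_port: u16,
    enable_dht: bool,
}

// Layout guard: fails to compile if the struct's size drifts from the
// snapshot taken when the bindings were last regenerated.
const _: () = assert!(std::mem::size_of::<NetworkOptions>() == 4);
```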

Torrent precursor enforcement

  • Status: Accepted
  • Date: 2025-12-12
  • Context:
    • TORRENT_GAPS precursors called for unified engine profile persistence/validation before expanding tracker/NAT features.
    • Legacy per-field stored procedures risked drifting from the shared validator and API/runtime expectations.
    • Runtime → FFI mapping lived inline, making it harder to clamp unsafe values or extend with new options; native tests lacked a reusable harness.
  • Decision:
    • Retired the per-field engine profile update functions/procedures in favour of the single update_engine_profile entry point (migration 0005_engine_profile_cleanup), keeping DB/API validation aligned.
    • Introduced EngineOptionsPlan::from_runtime_config to clamp/disable invalid runtime values before crossing the FFI boundary and surface guard-rail warnings in the native session.
    • Added a reusable NativeSessionHarness (feature-gated) to spin up temp-backed libtorrent sessions for config application tests.
    • Alternatives: keep per-field procs (rejected: drift risk), keep inline FFI mapping without guard rails (rejected: unsafe/defaultless), continue hand-rolled test scaffolding (rejected: slows future option additions).
  • Consequences:
    • Engine profile persistence now flows through a single stored procedure; accidental partial updates are prevented.
    • Native application of engine config logs guard-rail warnings and tolerates out-of-range inputs instead of destabilising the session.
    • Native tests can reuse the harness, reducing boilerplate as tracker/NAT/limit options land.
    • No new dependencies added.
  • Follow-up:
    • Extend EngineOptionsPlan and the harness as tracker/proxy/listen-interface options are added.
    • Keep API/CLI samples in sync with effective profiles; rerun just api-export when surfaces change.
    • Tests: ensure just ci runs clean after changes; watch for migration 0005 application in environments with existing functions.
    • Rollback: revert migration 0005 and restore per-field functions if a downstream consumer still relies on them, accepting the drift risk.
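The clamp-and-warn guard rail described above can be sketched as follows; the field name and range in the example (`max_connections`, 1..=10_000) are illustrative assumptions, not the real EngineOptionsPlan fields.

```rust
/// Hypothetical guard rail: clamp a configured value into a safe range,
/// collecting a warning instead of rejecting the whole profile, so
/// out-of-range inputs degrade gracefully before crossing the FFI
/// boundary.
fn clamp_with_warning(
    field: &str,
    value: i64,
    min: i64,
    max: i64,
    warnings: &mut Vec<String>,
) -> i64 {
    let clamped = value.clamp(min, max);
    if clamped != value {
        warnings.push(format!("{field}: {value} clamped to {clamped}"));
    }
    clamped
}
```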

Torrent settings parity and observability

  • Status: Accepted
  • Date: 2025-12-12
  • Context:
    • TORRENT_GAPS precursor calls for API/runtime parity and observability of the knobs we already support.
    • Torrent details previously lacked a single place to inspect applied settings (rate caps, selection rules, tags/trackers) and metadata drifted after rate/selection updates.
    • Engine profile parity (stored vs effective) is already exposed via the config snapshot, but per-torrent settings needed an equivalent surface.
  • Decision:
    • Expose a TorrentSettingsView on torrent detail responses covering download_dir/sequential status from the inspector plus tags/trackers/rate caps and the latest selection rules captured in API state.
    • Record selection rules and rate limit updates in TorrentMetadata on creation and after rate/selection actions so the API surface reflects current requests.
    • Added tests to lock the settings/selection projection alongside the existing effective engine profile check; no new dependencies introduced.
    • Alternatives: keep only rate limits visible (rejected—missing parity for other knobs); fetch selection from the worker each time (rejected—no transport yet and higher coupling).
  • Consequences:
    • Clients can now observe per-torrent knobs in a single payload, and metadata stays in sync when limits or selection change.
    • Provides a scaffold to extend settings as new torrent options land (queue priority, PEX, etc.) without reshaping the API again.
    • Risk: settings reflect API-side intent; if runtime diverges we must extend inspector reporting or add additional reconciliation hooks.
  • Follow-up:
    • Thread future torrent options into TorrentMetadata/settings and surface runtime-effective values when the inspector can supply them.
    • Regenerate OpenAPI when torrent surfaces change and keep UI/CLI renderers updated if they need to show the new fields.

Tracker Config Wiring

  • Status: Accepted
  • Date: 2025-12-12
  • Context:
    • Tracker configuration and per-torrent trackers were only partially wired, creating drift between API, DB, and runtime handling.
    • Client-supplied trackers were not persisted in resume metadata, and tags were ignored by the worker store, risking loss across restarts.
    • AGENT guardrails require validation parity via stored procedures and no dead code as tracker fields expand.
  • Decision:
    • Add typed tracker config with shared normalization plus a stored procedure that clamps lists, proxy fields, and timeouts before persistence.
    • Map tracker config into runtime/FFI/native session (user agent, announce overrides, proxy, default/extra lists, replace flag) and thread per-torrent trackers/replace through the bridge.
    • Use the existing url crate for tracker URL validation instead of bespoke parsing to reduce drift and edge-case bugs.
    • Persist per-torrent trackers and tags in the resume metadata store, reusing stored trackers when re-adding torrents; normalize tracker inputs at the API boundary.
    • Export the updated OpenAPI schema to reflect tracker options.
  • Consequences:
    • Tracker settings are validated once and applied consistently end-to-end; restarts preserve client-supplied trackers/tags.
    • Resume metadata grows slightly; proxy credentials remain referenced via secrets rather than being stored in plaintext.
    • Native tracker status/ops are still pending; those will be tackled in later TORRENT_GAPS items.
  • Follow-up:
    • Extend tracker surfaces with status/ops and authenticated tracker support.
    • Add native tests around tracker application once the harness covers tracker alerts.
    • Consider surfacing replace/default tracker semantics in API responses if needed.
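
To give a feel for the shared normalization at the API boundary, here is a std-only stand-in (the real implementation validates with the `url` crate): trim entries, require a plausible announce scheme, and dedupe while preserving order.

```rust
/// Normalize a tracker list: trim whitespace, drop empties, require a
/// plausible announce scheme, and dedupe while preserving input order.
/// This sketch only checks the scheme prefix; proper URL validation is
/// delegated to the `url` crate in the actual codebase.
fn normalize_trackers(input: &[&str]) -> Vec<String> {
    let mut seen = std::collections::HashSet::new();
    let mut out = Vec::new();
    for raw in input {
        let t = raw.trim();
        if t.is_empty() {
            continue;
        }
        let ok = ["http://", "https://", "udp://"]
            .iter()
            .any(|scheme| t.to_ascii_lowercase().starts_with(scheme));
        if ok && seen.insert(t.to_string()) {
            out.push(t.to_string());
        }
    }
    out
}
```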

Seeding stop criteria and per-torrent overrides

  • Status: Accepted
  • Date: 2025-12-14
  • Context:
    • TORRENT_GAPS calls for seed ratio/time stop knobs at both profile and per-torrent scope.
    • We must keep config/DB/runtime/API in lock-step via stored procedures and shared validators (AGENT).
    • Libtorrent exposes global share_ratio_limit / seed_time_limit, but per-torrent setters are limited; we still need per-torrent overrides.
    • No new dependencies allowed; coverage, lint, and just ci gates must stay green.
  • Decision:
    • Added seed_ratio_limit (f64) and seed_time_limit (seconds, i64) to engine profiles with normalization/validation (non-negative, finite) and a migration updating the unified stored procs.
    • Threaded the limits through runtime plans/options; apply_config now sets libtorrent’s global share_ratio_limit/seed_time_limit (ratio scaled ×1000).
    • Worker records profile defaults and per-torrent overrides, persists them alongside resume metadata, and enforces them by pausing torrents when ratio or seeding time thresholds are reached (time-window checked on the poll cadence).
    • API accepts optional per-torrent seed ratio/time on create; validation rejects invalid ratios before admission.
    • Tests cover config normalization, runtime option mapping/clamping, worker enforcement, and API validation.
  • Consequences:
    • Operators can set global seed stop defaults and per-torrent overrides; enforcement happens safely in the worker even without native per-torrent hooks.
    • Stored profile snapshots and inspector views surface the new limits; persisted metadata carries overrides across restarts.
    • Risks: enforcement is pause-based and depends on event cadence; libtorrent-native per-torrent stop hooks remain unavailable.
    • Rollback: drop the migration columns and remove the new fields/wiring; worker enforcement can be disabled by clearing defaults.
  • Follow-up:
    • Update OpenAPI/docs/examples to surface the new knobs.
    • Consider native per-torrent hooks if libtorrent exposes them in future releases.
    • Add telemetry around seeding goal triggers if operational signals are needed.
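
The ratio scaling and validation described above can be sketched as follows; function and field names are illustrative, not the actual crate API. libtorrent's share_ratio_limit is an integer scaled by 1000, so a ratio of 2.0 becomes 2000:

```rust
/// Normalize seed stop limits before persistence: the ratio must be finite
/// and non-negative, the time limit (seconds) must be non-negative. Returns
/// the libtorrent-scaled ratio (x1000) alongside the time limit.
fn normalize_seed_limits(ratio: f64, time_secs: i64) -> Result<(i32, i64), String> {
    if !ratio.is_finite() || ratio < 0.0 {
        return Err(format!("invalid seed_ratio_limit: {ratio}"));
    }
    if time_secs < 0 {
        return Err(format!("invalid seed_time_limit: {time_secs}"));
    }
    // Scale for libtorrent's integer share_ratio_limit setting.
    Ok(((ratio * 1000.0).round() as i32, time_secs))
}
```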

025: Seed mode admission with optional hash sampling

Allow seed-mode admissions without full rechecks while optionally sampling hashes to guard against corrupt data.

Status

Accepted

Context

Users need to add torrents as already complete (seed mode) without forcing a full recheck, but we still need a safety valve to avoid seeding corrupted data. The API already exposes per-torrent knobs; we must thread seed-mode through the worker/FFI/native layers and optionally sample hashes before honouring the flag. Seed-mode should only be allowed when metainfo is present to avoid undefined behaviour on magnet-only adds.

Decision

  • Add seed_mode and hash_check_sample_pct to AddTorrentOptions/TorrentCreateRequest. Validation requires seed_mode=true when sampling is requested and rejects seed-mode requests without metainfo (API prefers metainfo when seed-mode/sampling is set).
  • Worker forwards the flags, warns when seed-mode is requested without sampling, persists the intent in fast-resume metadata, and skips sampling when only a magnet was supplied.
  • The native bridge sets lt::torrent_flags::seed_mode on admission when requested. When a hash sample percentage is provided, it uses libtorrent to hash an even spread of pieces from the requested save path and aborts admission on missing files or hash mismatches. Sampling uses only libtorrent/stdlib (no new dependencies).
  • Stub/native tests cover seed-mode success, metadata persistence, magnet rejection, and hash-sample failure paths.

Consequences

  • Seed-mode is explicit opt-in and limited to metainfo submissions; magnet-only requests fail fast to avoid silent misbehaviour.
  • Hash sampling is best-effort and can fail admission if files are missing or corrupted; callers can opt out by omitting the sample percentage (a warning is logged).
  • Fast-resume metadata now tracks seed-mode and sampling preferences for future reconciliation.
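
The "even spread" sampling can be sketched in a few lines; this is an illustrative stand-in for the native bridge's piece selection (which runs in C++ against libtorrent), not the actual implementation:

```rust
/// Pick an even spread of piece indices to hash-check, given the total
/// piece count and a sample percentage. Rounds the sample count up so a
/// non-zero percentage always checks at least one piece.
fn sample_piece_indices(total_pieces: usize, sample_pct: u8) -> Vec<usize> {
    if total_pieces == 0 || sample_pct == 0 {
        return Vec::new();
    }
    let pct = sample_pct.min(100) as usize;
    let count = ((total_pieces * pct) + 99) / 100; // ceiling division
    // Spread the sampled indices evenly across the full piece range.
    (0..count).map(|i| i * total_pieces / count).collect()
}
```

Admission would then hash each sampled piece from the requested save path and abort on any mismatch or missing file.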

Queue auto-managed defaults and PEX threading

  • Status: Accepted
  • Date: 2025-03-16
  • Context:
    • TORRENT_GAPS called out missing support for queue management toggles (auto-managed defaults, prefer-seed/don’t-count-slow policies) and peer exchange enable/disable paths.
    • Config/runtime needed a single source of truth so workers and the native bridge don’t drift, and per-torrent overrides had to survive restarts via metadata.
  • Decision:
    • Added engine profile fields auto_managed, auto_manage_prefer_seeds, and dont_count_slow_torrents with validation/normalization and a unified stored-proc update, plus a migration to persist them.
    • Extended runtime/FFI to carry the queue policy flags; native now sets libtorrent’s auto_manage_prefer_seeds/dont_count_slow_torrents, tracks the default auto-managed posture, and applies per-torrent overrides (including queue position) when adding torrents.
    • Threaded pex_enabled through add options with a native toggle that maps to disable_pex, allowing profile-level defaults and per-torrent overrides.
    • API accepts/validates the new per-torrent knobs (auto-managed, queue position, PEX) and exposes them through OpenAPI; metadata persistence caches the flags for resume.
  • Consequences:
    • New migration and stored-proc signature; engines built on old schemas must run migrations before updating.
    • Native add paths now branch on override/default auto-managed flags; queue positions imply manual management to align with libtorrent expectations.
    • Added coverage for option mapping and request validation; stub/native harnesses record the new metadata for symmetry tests.
  • Follow-up:
    • Extend torrent detail/inspect views to surface auto-managed/PEX state where useful.
    • Evaluate whether additional queue policy knobs (e.g., priority clamping) are needed for future gaps.
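
A minimal sketch of how profile defaults, per-torrent overrides, and queue position might resolve into an add-time posture. All names here are illustrative; only the queue-position rule and the inversion onto libtorrent's disable_pex flag are taken from the decision above:

```rust
/// Effective add-time queue/PEX posture for one torrent.
#[derive(Debug, PartialEq)]
struct AddPosture {
    auto_managed: bool,
    disable_pex: bool,
    queue_position: Option<i32>,
}

fn resolve_posture(
    default_auto_managed: bool,
    default_pex_enabled: bool,
    auto_managed_override: Option<bool>,
    pex_override: Option<bool>,
    queue_position: Option<i32>,
) -> AddPosture {
    let mut auto_managed = auto_managed_override.unwrap_or(default_auto_managed);
    // A requested queue position implies manual management, matching
    // libtorrent's expectation that auto-managed torrents own their slot.
    if queue_position.is_some() {
        auto_managed = false;
    }
    AddPosture {
        auto_managed,
        // pex_enabled maps onto libtorrent's inverted disable_pex flag.
        disable_pex: !pex_override.unwrap_or(default_pex_enabled),
        queue_position,
    }
}
```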

Choking Strategy And Super-Seeding Configuration

  • Status: Accepted
  • Date: 2025-12-14
  • Context:
    • TORRENT_GAPS requires configurable choke/unchoke strategy and super-seeding defaults.
    • We must keep config/runtime/FFI/native paths aligned while preserving safe defaults.
    • API and persistence need to surface new knobs without regressing existing behaviour.
  • Decision:
    • Added engine profile fields for choking (choking_algorithm, seed_choking_algorithm, strict_super_seeding, optimistic_unchoke_slots, max_queued_disk_bytes) and super_seeding.
    • Normalise/validate values with guard-rail warnings; persist via a single stored-proc update path and migration 0006_choking_and_super_seeding.sql (consolidated into 0007_rebaseline.sql per ADR 030).
    • Thread new options through runtime config, FFI structs, and native session (settings_pack + per-torrent flags). Per-torrent super_seeding overrides are stored with metadata.
    • Updated API models/OpenAPI and added tests covering canonicalisation, clamping, and FFI planning.
  • Consequences:
    • Engine config now exposes advanced choke/seeding controls; defaults remain safe (fixed_slots, round_robin, super-seeding off).
    • Metadata format and DB schema gain new fields; migration is required before runtime use.
    • Native session applies and can reset choking settings; add-path respects per-torrent super-seeding.
  • Follow-up:
    • Expand native coverage for strict super-seeding and queue byte limits when integration harness is available.
    • Monitor telemetry for churn when users toggle new fields; add UX help text where appropriate.

qBittorrent Parity and Tracker TLS Wiring

  • Status: Accepted
  • Date: 2025-12-17
  • Context:
    • Libtorrent deprecation warnings and Phase 1 compatibility gaps required us to move away from deprecated tracker TLS fields and finish the qBittorrent façade.
    • The façade needed tracker, peer, and properties endpoints so qBittorrent clients can query Revaer without custom plugins.
    • Changes must comply with the AGENT.md gates (no unused code, warnings-as-errors, tests/coverage via just ci).
  • Decision:
    • Thread tracker TLS settings (trust store, verification flags, client cert/key) through config → runtime → FFI/native without using deprecated libtorrent fields, and cover with native tests.
    • Expose qBittorrent-compatible endpoints for torrent properties, trackers, peer sync, categories, and tags; return safe defaults where data is not yet modeled.
    • Keep compatibility code minimal and session-gated; validate torrent hashes on peer sync and re-use existing metadata caches for properties/trackers.
    • No new dependencies were introduced.
  • Consequences:
    • Deprecated libtorrent usage removed; TLS tracker configuration now uses current settings_pack fields.
    • qBittorrent clients can fetch properties/trackers/peer snapshots and manage empty categories/tags without errors.
    • Coverage and lint gates remain clean; compatibility paths are exercised by new unit tests.
  • Follow-up:
    • Expand peer diagnostics and alert surface once native peer info mapping is available (TORRENT_GAPS: “Peer view and diagnostics exposed”).
    • Consider persisting categories/tags with policy once the domain model supports it.
  • Tests (coverage summary):
    • just ci (fmt, lint, udeps, audit, deny, full test matrix including feature-min, cov) — passes; workspace coverage ≥ 80% with no regressions.
  • Observability:
    • No new metrics or spans added; compatibility routes reuse existing request tracing.
  • Risk & Rollback:
    • Compatibility endpoints currently return empty peer/category/tag data; risk is limited to client expectations. Roll back by reverting this ADR and associated API changes.
  • Dependency rationale:
    • No new crates or feature flags added.

Torrent Authoring, Labels, and Metadata Updates

  • Status: Accepted
  • Date: 2025-12-23
  • Context:
    • Remaining torrent gaps required authoring support plus consistent comment/source/private visibility.
    • Category/tag defaults and cleanup policies needed a shared storage path and validation.
    • Changes must comply with AGENT.md (no dead code, tests, docs, OpenAPI sync).
  • Decision:
    • Expose a create-torrent authoring endpoint that routes through the workflow and libtorrent bindings.
    • Surface comment/source/private fields in status/settings, allow comment updates only, and validate private tracker requirements on add.
    • Persist label policies in app_profile.features, provide list/upsert endpoints for categories/tags, and apply policy defaults (including cleanup) on add.
  • Consequences:
    • API clients can author torrents, set label defaults, and observe comment/source/private metadata consistently.
    • Cleanup rules can remove torrents after ratio/time thresholds, with policy validation guarding invalid inputs.
    • OpenAPI gains new schemas and endpoints to document authoring and label management.
  • Follow-up:
    • Extend UI/CLI to manage label policies and expose authoring workflows.
    • Evaluate adding per-label retention summaries once cleanup automation is in daily use.
  • Motivation:
    • Close the remaining torrent authoring/label gaps and make metadata updates visible to API clients.
  • Design notes:
    • Label policies are applied as defaults so explicit request options always win.
    • Private torrents require trackers; source/private updates are rejected to align with libtorrent constraints.
  • Tests (coverage summary):
    • Added API tests for metadata visibility and comment updates, plus worker tests for metadata update events and cleanup.
    • Native authoring test asserts comment/source propagation.
    • just ci run clean (fmt, lint, udeps, audit, deny, test, cov).
  • Observability:
    • No new metrics; metadata updates reuse existing event streams.
  • Risk & Rollback:
    • Risk: misconfigured label cleanup could remove torrents earlier than expected. Roll back by removing label policies and reverting cleanup enforcement.
  • Dependency rationale:
    • No new crates or feature flags were added.
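
The admission and update guards described in the design notes might look roughly like this; names and error strings are hypothetical:

```rust
/// Private torrents must ship with at least one tracker on add.
fn validate_private_add(private: bool, trackers: &[String]) -> Result<(), String> {
    if private && trackers.is_empty() {
        return Err("private torrents require at least one tracker".to_string());
    }
    Ok(())
}

/// Only the comment may change after creation; source/private updates are
/// rejected to align with libtorrent constraints.
fn validate_metadata_update(field: &str) -> Result<(), String> {
    match field {
        "comment" => Ok(()),
        other => Err(format!("field '{other}' is immutable after add")),
    }
}
```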

030 – Migration Consolidation for Initial Setup

  • Status: Accepted
  • Date: 2025-12-23
  • Context:
    • The project is unreleased and migration history does not need to remain split.
    • A single init migration simplifies new environment bootstrap and reduces ordering drift.
  • Decision:
    • Collapse all SQL migrations in crates/revaer-data/migrations into 0007_rebaseline.sql.
    • Remove the remaining numbered migration files after consolidation.
    • Reset the local dev database in just db-start if the migration history no longer matches.
    • Clean llvm-cov artifacts before coverage to keep just ci output free of stale-data warnings.
  • Consequences:
    • Positive: Fresh databases start from one deterministic migration; fewer files to track.
    • Trade-offs: Historical migration boundaries are lost, existing dev databases must be rebuilt, and local dev databases are dropped automatically when migrations are mismatched.
  • Follow-up:
    • Add new incremental migrations as needed after release.
    • Keep the single init file aligned with stored-proc changes.
  • Motivation:
    • The repository is unreleased, so consolidation avoids maintaining redundant migration files.
  • Design notes:
    • Preserve migration order by concatenating files with section headers.
    • Keep the init file self-contained for sqlx execution.
  • Tests (coverage summary):
    • just ci run clean (fmt, lint, udeps, audit, deny, test, cov).
  • Observability:
    • No new telemetry changes.
  • Risk & Rollback:
    • Risk: local databases with existing migrations must be dropped and recreated.
    • Roll back by restoring the previous migration file set from version control.
  • Dependency rationale:
    • No new crates or features added.

UI Nexus Asset Sync Tooling

  • Status: Accepted
  • Date: 2025-12-23
  • Context:
    • The UI consumes Nexus HTML/CSS/JS as vendored, compiled assets with no JS toolchain in dev/CI.
    • We need deterministic sync of vendor CSS, images, and JS into crates/revaer-ui/static/ so Trunk can serve them.
    • Output consistency must be verifiable in CI without relying on external asset pipelines.
  • Decision:
    • Add a Rust CLI tool (asset_sync) that copies Nexus assets into static/nexus, validates the CSS, and writes a lock file.
    • Wire the tool into just so dev, build, and CI checks always run the sync first.
    • Update the UI entry HTML to copy the full static directory and load Nexus app.css directly.
  • Dependency rationale:
    • anyhow: simplify CLI error propagation in the binary entrypoint; alternative was manual error mapping.
    • fs_extra: reliable directory copy with overwrite semantics; alternative was a bespoke recursive copy.
    • sha2: compute SHA-256 for ASSET_LOCK.txt; no standard library equivalent exists.
    • walkdir: collect deterministic file counts/bytes for lock metadata; alternative was manual recursion.
  • Test coverage summary:
    • Added unit tests for successful sync + lock creation and CSS validation failures in crates/revaer-ui/tools/asset_sync/src/lib.rs.
  • Observability updates:
    • None. The tool reports failures via exit status and error messages.
  • Risk & rollback plan:
    • Risk: incorrect vendor paths or corrupted outputs. Mitigation: sanity-check the CSS and lock file.
    • Rollback: rerun just sync-assets or revert static/nexus changes in version control.
  • Follow-up:
    • Ensure CI runs just check-assets on changes touching ui_vendor or static/nexus.
    • Revisit the sync paths if the Nexus vendor layout changes.

Torrent FFI Audit Closeout

  • Status: Accepted
  • Date: 2025-12-23
  • Context:
    • The torrent FFI audit identified drift between API/runtime/FFI/native behavior (metadata updates, seed limits, proxy handling, IPv6 mode) and missing CI coverage for native tests.
    • The engine must remain a thin wrapper around libtorrent; unsupported knobs must be rejected early, and native settings must be auditable.
  • Decision:
    • Reject unsupported metadata and per-torrent seed limit updates at the API boundary.
    • Remove Rust-side seeding enforcement and rely on native session settings only.
    • Enforce libtorrent version checks at build time and fail when unsupported.
    • Add native settings inspection hooks and native integration tests for proxy auth, seed limits, and IPv6 listen behavior.
    • Run native integration tests in CI via a dedicated just recipe.
  • Consequences:
    • Drift between API/runtime and native behavior is eliminated for the audited settings.
    • Native test coverage is required in CI; local runs need libtorrent and Docker availability.
  • Follow-up:
    • Keep FFI layout assertions updated as bridge structs evolve.
    • Extend native inspection snapshots when new settings are added.

Motivation

Ensure the torrent engine remains a thin libtorrent wrapper by removing Rust-only semantics, rejecting unsupported updates at the API boundary, and enforcing native test coverage to prevent drift.

Design Notes

  • Added a lightweight native settings snapshot to validate applied proxy credentials, seed limits, and listen interfaces in tests.
  • Adjusted native tests to assert deterministic events and avoid reliance on external swarm progress.
  • Removed deprecated strict-super-seeding fallback in favor of version-gated settings.
  • Updated FFI layout assertions after adding proxy auth and IPv6 fields.

Test Coverage Summary

  • just test-native exercises native unit and integration tests, including new assertions for proxy auth, seed limits, and IPv6 listen mode.
  • just ci (run before handoff) covers workspace lint/test/cov/audit/deny gates.

Observability Updates

  • No new metrics; native settings snapshots are internal to test-only inspection.

Risk & Rollback Plan

  • Risk: native settings snapshot could drift if settings are renamed upstream.
  • Rollback: revert to previous audit state and remove snapshot methods if libtorrent versions diverge; CI will flag mismatches quickly.

Dependency Rationale

  • No new dependencies introduced.

UI SSE + Auth/Setup Wiring

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • Motivation: finalize first-run setup gating and SSE updates while keeping auth headers on every request.
    • Constraints: EventSource cannot set headers; SSE must fall back cleanly; avoid new dependencies.
  • Decision:
    • Implement a fetch-stream SSE runner with AbortController, bounded backoff, and fallback endpoint selection.
    • Parse SSE frames into typed envelopes when possible and throttle list refreshes when updates are incomplete.
    • Keep auth/setup flow in app state and attach API key or Basic auth for SSE streams.
    • Alternatives considered: EventSource with query param auth, periodic polling, or WebSockets (rejected for header limitations or higher complexity).
  • Consequences:
    • Positive outcomes: authenticated SSE support, deterministic reconnection behavior, and bounded refresh churn.
    • Risks or trade-offs: dual payload parsing adds complexity; throttled refresh can delay UI updates slightly.
  • Follow-up:
    • Implementation tasks: align torrent DTOs with OpenAPI and expand feature modules for torrents/dashboard.
    • Review checkpoints: validate SSE reconnection on auth changes and fallback path coverage.
  • Test coverage summary:
    • Added parser unit tests for frame boundaries and multiline data handling.
  • Observability updates:
    • UI-only change; no new server-side telemetry.
  • Risk & rollback plan:
    • Revert to previous EventSource-based flow or disable SSE refresh on regressions.
  • Dependency rationale:
    • No new crates added; only web-sys feature flags expanded. Alternative considered: gloo-net streaming APIs (insufficient for manual SSE parsing).
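
A minimal sketch of the frame parsing, assuming SSE's standard field syntax (`event:`, `id:`, `data:`, with multiline data joined by newlines). The production parser additionally decodes frames into typed envelopes; this stand-in only handles the transport fields:

```rust
/// One parsed SSE frame: optional event name and id, joined data payload.
#[derive(Debug, Default, PartialEq)]
struct SseFrame {
    event: Option<String>,
    id: Option<String>,
    data: String,
}

fn parse_frame(raw: &str) -> SseFrame {
    let mut frame = SseFrame::default();
    let mut data_lines = Vec::new();
    for line in raw.lines() {
        if let Some(rest) = line.strip_prefix("event:") {
            frame.event = Some(rest.trim_start().to_string());
        } else if let Some(rest) = line.strip_prefix("id:") {
            frame.id = Some(rest.trim_start().to_string());
        } else if let Some(rest) = line.strip_prefix("data:") {
            // Per the SSE spec, at most one leading space is stripped.
            data_lines.push(rest.strip_prefix(' ').unwrap_or(rest).to_string());
        }
        // Comment lines (starting with ':') and unknown fields are ignored.
    }
    frame.data = data_lines.join("\n");
    frame
}
```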

UI SSE normalization, progress coalescing, and ApiClient singleton

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • The UI SSE pipeline needed legacy payload normalization, replay support, and render-friendly progress handling.
    • App state was still split across use_state, and API clients were being constructed per call.
    • The dashboard checklist requires a single SSE reducer path and a singleton ApiClient via context.
  • Decision:
    • Normalize SSE payloads into UiEventEnvelope and route all updates through one reducer path in the app shell.
    • Persist and replay Last-Event-ID, add SSE query filters derived from store state, and coalesce progress updates on a fixed cadence.
    • Introduce an ApiCtx context that owns a single ApiClient instance with mutable auth state.
    • Move auth/torrents/system SSE state into the yewdux AppStore and update reducers accordingly.
    • Store bulk-selection state in AppStore via a shared SelectionSet to keep bulk actions consistent across views.
    • Patch the anymap dependency used by yewdux to avoid Rust 1.91 auto-trait pointer cast errors.
  • Consequences:
    • SSE progress events are buffered and flushed together, reducing render churn during bursts.
    • API calls now share a single client instance, simplifying auth updates and call sites.
    • Bulk selections now persist in store state, avoiding local-only checkbox state drift.
    • Additional store slices (UI/labels/health) remain future work; some UI state still uses local hooks.
  • Follow-up:
    • Expand AppStore to include UI/toast/health/labels slices and row-level selectors.
    • Add coverage for SSE filtering and progress coalescer cadence.

Motivation

  • Align the UI with the SSE checklist requirements and remove per-call ApiClient construction.

Design notes

  • SSE decoding emits UiEventEnvelope instances; handle_sse_envelope is the only reducer entry.
  • Progress patches are stored in a non-reactive HashMap and flushed every 80ms into AppStore via apply_progress_patch.
  • ApiCtx holds a single ApiClient; auth changes update the shared RefCell state.
  • Bulk selection updates are routed through SelectionSet so store mutations remain deterministic.
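
The coalescing step can be sketched as below; the 80ms timer and the AppStore dispatch are assumed to exist elsewhere, and only the buffer/flush logic is shown:

```rust
use std::collections::HashMap;

/// Buffer per-torrent progress patches between flush ticks so bursts of
/// SSE events produce one batched store update instead of many renders.
#[derive(Default)]
struct ProgressCoalescer {
    pending: HashMap<String, f64>, // torrent id -> latest progress
}

impl ProgressCoalescer {
    /// Later patches for the same torrent overwrite earlier ones.
    fn push(&mut self, id: &str, progress: f64) {
        self.pending.insert(id.to_string(), progress);
    }

    /// Drain everything buffered since the last tick, in deterministic order.
    fn flush(&mut self) -> Vec<(String, f64)> {
        let mut batch: Vec<_> = self.pending.drain().collect();
        batch.sort_by(|a, b| a.0.cmp(&b.0));
        batch
    }
}
```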

Test coverage summary

  • just ci (fmt, lint, check-assets, udeps, audit, deny, ui-build, test, test-features-min, cov).

Observability updates

  • No new telemetry; SSE connection state continues to drive the existing UI overlay.

Risk & rollback plan

  • Risk: SSE filter mismatches could drop events; fallback is the throttled refresh path.
  • Rollback: revert to the previous SSE handler and local state wiring, removing the coalescer and ApiCtx usage.

Dependency rationale

  • Added a workspace dependency on revaer-events for shared SSE types; patched anymap locally for Rust 1.91 compatibility.
  • Alternatives considered: keep UI-local event types or upgrade yewdux (requires Yew 0.21+); rejected to avoid duplicating schemas or triggering a larger UI upgrade.

Advisory RUSTSEC-2021-0065 Temporary Ignore

  • Status: Superseded by 073 (vendored yewdux exception tracked in ADR 074)
  • Date: 2025-12-24
  • Context:
    • The UI depends on yewdux, which transitively pulls anymap and triggers advisory RUSTSEC-2021-0065 (unmaintained).
    • There is no maintained replacement for anymap within the pinned yewdux 0.9.x line, and upgrading yewdux would require a Yew major upgrade.
    • cargo-audit is configured to deny warnings, so ignoring the advisory requires explicit documentation and a remediation plan.
  • Decision:
    • Add RUSTSEC-2021-0065 to .secignore while yewdux requires anymap.
    • Track yewdux upgrades or alternatives that remove anymap and remove the ignore when available.
    • No runtime mitigation is required beyond limiting use to the UI state store.
  • Consequences:
    • CI remains green while upstream resolves the dependency.
    • The unmaintained dependency remains in the tree until we migrate away from it.
  • Follow-up:
    • Re-evaluate yewdux upgrade paths quarterly; remove the ignore once anymap is no longer required.
    • If upstream is stalled, evaluate a UI store replacement or a fork that removes anymap.
  • Superseded: .secignore cleaned in ADR 073; vendored yewdux exception tracked in ADR 074 (no anymap crate dependency reintroduced).

Motivation

  • Keep just audit passing without blocking UI state work while documenting the risk and path to remediation.

Design notes

  • The ignore is scoped to the single advisory and is documented in .secignore with this ADR for traceability.

Test coverage summary

  • just ci (includes fmt, clippy, udeps, audit, deny, test, cov).

Observability updates

  • None; advisory handling does not change runtime telemetry.

Risk & rollback plan

  • Risk: unmaintained dependency stays in the build; monitor upstream advisories and plan a migration.
  • Rollback: remove yewdux usage and replace with a small local store implementation or upgrade to a supported release once available.

Dependency rationale

  • yewdux provides the shared store needed for the UI; alternatives considered were a custom store (higher lift) or upgrading to yewdux 0.11+ (requires Yew 0.21+ migration).

Asset sync test stability under parallel runs

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • cargo llvm-cov runs tests in parallel and surfaced a flaky asset_sync test.
    • The temp directory helper used timestamp-based names that could collide under parallel execution.
    • CI requires just ci (including coverage) to pass reliably without intermittent failures.
  • Decision:
    • Replace the time-based temp directory naming with a process id + atomic counter.
    • Retry on AlreadyExists to ensure unique per-test directories without new dependencies.
  • Consequences:
    • Asset sync tests are deterministic under parallel runners and coverage instrumentation.
    • No new crates or runtime behavior changes.
  • Follow-up:
    • None.

Motivation

  • Remove flaky coverage failures caused by temporary directory collisions in asset_sync tests.

Design notes

  • Use a static AtomicUsize counter plus std::process::id() to generate unique temp roots.
  • Loop on AlreadyExists without introducing external dependencies.
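
A sketch of the helper under the stated assumptions (process id plus a static atomic counter, retrying on AlreadyExists); the prefix and layout are illustrative:

```rust
use std::fs;
use std::io::ErrorKind;
use std::path::PathBuf;
use std::sync::atomic::{AtomicUsize, Ordering};

static NEXT_ID: AtomicUsize = AtomicUsize::new(0);

/// Create a unique per-test directory that stays collision-free under
/// parallel test runners, without external dependencies.
fn unique_temp_dir(prefix: &str) -> std::io::Result<PathBuf> {
    loop {
        let n = NEXT_ID.fetch_add(1, Ordering::Relaxed);
        let path = std::env::temp_dir().join(format!("{prefix}-{}-{n}", std::process::id()));
        match fs::create_dir(&path) {
            Ok(()) => return Ok(path),
            // A stale run or concurrent test already owns this name; take
            // the next counter value instead of failing.
            Err(e) if e.kind() == ErrorKind::AlreadyExists => continue,
            Err(e) => return Err(e),
        }
    }
}
```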

Test coverage summary

  • just ci (fmt, lint, udeps, audit, deny, ui-build, test, cov).

Observability updates

  • None.

Risk & rollback plan

  • Risk: low; change is test-only.
  • Rollback: revert the temp directory helper to its previous implementation.

Dependency rationale

  • No new dependencies.

UI row slices and system-rate store wiring

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • The checklist requires row-level selectors and ID-based list rendering to avoid full-row re-renders.
    • System rates must live in the AppStore alongside SSE connection state.
    • UI components should remain free of API side effects while still subscribing to yewdux slices.
  • Decision:
    • Add TorrentRowBase and TorrentProgressSlice selectors and render list rows via ID-based components that subscribe only to slices.
    • Keep bulk selection state in AppStore and expose selectors for selection and system rates.
    • Store SystemRates in SystemState and update it from both dashboard fetches and SSE system-rate events.
  • Consequences:
    • List rows re-render only when their slice changes, reducing churn under frequent progress updates.
    • Dashboard throughput metrics now follow store-backed system rates rather than local state copies.
    • Additional store slices (filters, paging, fsops) still need to be implemented.
  • Follow-up:
    • Finish remaining torrent state normalization (filters, paging, fsops badges).
    • Add selectors for drawer detail slices and wire remaining list filtering/paging flows.

Motivation

  • Align list rendering with checklist performance constraints and centralize system-rate state in the store.

Design notes

  • TorrentRowItem uses use_selector to read base/progress slices and selection state per row ID.
  • SSE SystemRates updates now mutate AppStore.system.rates instead of local dashboard state.
  • Dashboard panels receive SystemRates via props to keep UI components data-driven.

Test coverage summary

  • just ci (fmt, lint, check-assets, udeps, audit, deny, ui-build, test, test-features-min, cov).

Observability updates

  • No changes.

Risk & rollback plan

  • Risk: list rows could render blank if selector data goes missing; fallback is the existing refresh flow.
  • Rollback: revert to list rendering with full rows and remove the per-row selectors.

Dependency rationale

  • No new dependencies.

UI shared API models and torrent query paging state

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • The UI duplicated API DTOs, causing drift from backend shapes and blocking checklist compliance.
    • Torrent list fetching needed a real query/paging model to align with the API list response.
    • SSE fsops events required a stable store cache separate from row state.
  • Decision:
    • Extract shared API DTOs into a new revaer-api-models crate and re-export from revaer-api.
    • Update the UI to consume shared DTOs, map list/detail views from API shapes, and parse list responses with next cursors.
    • Add TorrentsQueryModel, TorrentsPaging, and fsops_by_id to the torrent store and update SSE to fill fsops state.
  • Consequences:
    • API DTOs are now single-source across API/CLI/UI consumers.
    • UI list fetching can track cursor paging and filter parameters in state.
    • Detail views now map from API DTOs with placeholder metadata until richer fields are available.
  • Follow-up:
    • Wire filter fields into URL/query state and implement load-more pagination.
    • Replace add-torrent payloads with TorrentCreateRequest + client UUIDs.
    • Populate health and label caches from API endpoints.

Motivation

  • Eliminate duplicated API DTOs in the UI and align list fetching with backend paging semantics.

Design notes

  • Introduced revaer-api-models as the canonical DTO crate and re-exported it from revaer-api.
  • TorrentSummary and TorrentDetail conversions now map from shared DTOs into UI row/detail views.
  • TorrentsQueryModel and TorrentsPaging feed build_torrents_path for list requests.
  • SSE fsops events update fsops_by_id without mutating row state.
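
A simplified stand-in for build_torrents_path, showing how query state and cursor paging might serialize into a list request; the real query model carries more filter fields:

```rust
/// Build a torrent list request path from query state. Parameter names are
/// illustrative; values are assumed to be pre-encoded.
fn build_torrents_path(state: Option<&str>, limit: usize, cursor: Option<&str>) -> String {
    let mut params = vec![format!("limit={limit}")];
    if let Some(s) = state {
        params.push(format!("state={s}"));
    }
    if let Some(c) = cursor {
        params.push(format!("cursor={c}"));
    }
    format!("/torrents?{}", params.join("&"))
}
```

Load-more pagination would re-issue the request with the `next` cursor returned by the previous page.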

Test coverage summary

  • just ci (fmt, lint, check-assets, udeps, audit, deny, ui-build, test, test-features-min, cov)
  • llvm-cov reports: “warning: 40 functions have mismatched data”

Observability updates

  • No changes.

Risk & rollback plan

  • Risk: mapping differences between API DTOs and UI view models could hide fields.
  • Rollback: revert to the previous UI DTO definitions and list fetch logic.

Dependency rationale

  • Added revaer-api-models to share API DTOs across crates.
  • Added chrono as a UI dev-dependency for DTO construction in tests.

UI store, API coverage, and rate-limit retries

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • Shared UI state (theme, toasts, label/health caches) needed to live in the AppStore to match the yewdux architecture rule.
    • The API client needed coverage for health, metrics, and label list endpoints to unblock upcoming screens.
    • Rate-limit responses required user-visible backoff messaging and a safe retry path for idempotent fetches.
  • Decision:
    • Move shell theme/toast/busy state into the AppStore and populate label/health caches from API calls.
    • Extend the UI API client with health/full, metrics, and label list endpoints, leaving option/selection/authoring calls for later UI wiring.
    • Handle 429 responses for torrent list/detail fetches with Retry-After backoff and a single retry.
  • Consequences:
    • UI state is centralized and ready for labels/health screens without ad-hoc local state.
    • API coverage is aligned with the checklist endpoints, reducing future wiring churn.
    • Rate-limit retries add controlled delay behavior; repeated throttling still surfaces errors.
  • Follow-up:
    • Remove demo-only list/detail fallback paths and add empty states.
    • Implement category/tag management screens and health viewer UI.
    • Wire per-torrent options/selection editing in the drawer and add torrent authoring UX.

Motivation

  • Keep shared UI state in yewdux and close API coverage gaps needed for Torrent UX.

Design notes

  • AppShell theme and toast lifecycles now flow through AppStore updates.
  • Labels/health caches are populated from API calls and stored in dedicated slices.
  • Added API client methods for remaining torrent and label endpoints.
  • API client currently covers health/full, metrics, and label list endpoints; mutating endpoints await UI wiring.
  • Rate-limit backoff uses Retry-After with a single retry for idempotent list/detail fetches.
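The single-retry policy described above reduces to a small decision function. This is a hedged sketch, not the actual client code: the fallback delay of one second is an assumption, and only the stated rules (honor Retry-After, retry at most once, only on 429) come from the design notes.

```rust
// Illustrative retry decision for rate-limited fetches: returns the delay
// in seconds before the single allowed retry, or None when no retry
// should happen. The real UI client wiring differs in detail.
fn retry_delay_secs(status: u16, retry_after: Option<&str>, attempts_made: u32) -> Option<u64> {
    // Retry only once, and only on a rate-limit response.
    if status != 429 || attempts_made >= 1 {
        return None;
    }
    // Honor Retry-After when it parses as whole seconds; the one-second
    // fallback here is an assumption, not the documented default.
    let secs = retry_after.and_then(|v| v.trim().parse::<u64>().ok()).unwrap_or(1);
    Some(secs)
}
```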

Test coverage summary

  • just ci (fmt, lint, check-assets, udeps, audit, deny, ui-build, test, test-features-min, cov)
  • llvm-cov reports: “warning: 40 functions have mismatched data”

Observability updates

  • No changes.

Risk & rollback plan

  • Risk: extra retry traffic on sustained 429 responses.
  • Rollback: remove retry/backoff helpers and revert list/detail fetch handling.

Dependency rationale

  • No new dependencies.

040 – UI Label Policies (Task Record)

  • Status: In Progress
  • Date: 2025-10-24

Motivation

  • Provide first-class category/tag policy management in the UI so operators can apply TorrentLabelPolicy defaults without CLI/API-only workflows.
  • Maintain AppStore as the source of truth while avoiding API calls in atoms/molecules.

Design Notes

  • Implemented a dedicated features/labels slice with form state that round-trips through TorrentLabelPolicy.
  • Added a single list + editor page that renders per-kind (categories or tags) with an Advanced section for rarely used fields.
  • API upserts are routed through the shared ApiClient and update the AppStore label caches on success.

Decision

  • Use LabelFormState as the sole UI editing model and convert to TorrentLabelPolicy only on save.
  • Re-export label policy support types from revaer-api-models to keep UI aligned with shared domain types.
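The convert-only-on-save rule can be illustrated with a minimal shape. The field names and validation rules below are assumptions for the sketch; the real LabelFormState and TorrentLabelPolicy live in revaer-ui and revaer-api-models and carry more fields.

```rust
// Hedged sketch: the form state converts to a policy only on save, and
// validation errors surface before any API call is made. Fields are
// illustrative, not the real types.
struct LabelFormState {
    name: String,
    download_dir: String,
}

#[derive(Debug, PartialEq)]
struct TorrentLabelPolicy {
    name: String,
    download_dir: Option<String>,
}

fn to_policy(form: &LabelFormState) -> Result<TorrentLabelPolicy, String> {
    let name = form.name.trim();
    if name.is_empty() {
        return Err("label name must not be empty".to_string());
    }
    let dir = form.download_dir.trim();
    Ok(TorrentLabelPolicy {
        name: name.to_string(),
        // An empty path is treated here as "inherit the default download dir".
        download_dir: (!dir.is_empty()).then(|| dir.to_string()),
    })
}
```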

Consequences

  • Labels are now editable without leaving the UI; any validation errors are surfaced before calling the API.
  • The UI must keep label cache entries updated to prevent stale list rendering.

Test Coverage Summary

  • Added unit tests for label form parsing, cleanup validation, and policy mapping.

Observability Updates

  • None (UI-only changes, no new telemetry).

Risk & Rollback

  • Risk: malformed inputs can still hit the API if not caught locally; server-side validation remains authoritative.
  • Rollback: revert the labels feature wiring in app/mod.rs and the new feature module.

Dependency Rationale

  • No new dependencies introduced; re-exported existing domain types for UI usage.

Follow-up

  • Expand label editor UX (search/filter, bulk actions) and align styling with Nexus components.

041 – UI Health View + Label Shortcuts (Task Record)

  • Status: In Progress
  • Date: 2025-10-24

Motivation

  • Replace the Health route placeholder with an operator-facing status view built from cached snapshots.
  • Provide quick navigation from torrent add flow to label policy management.

Design Notes

  • Implemented a dedicated health feature view that reads from AppStore and renders basic/full snapshots plus the raw metrics text.
  • Added label shortcuts in the add-torrent panel using router links to avoid side effects in components.

Decision

  • Keep health rendering in a feature view module with no API calls; data remains sourced from app-level effects.
  • Use existing chip/button styling patterns for navigation shortcuts.

Consequences

  • Operators can inspect health status without leaving the UI.
  • Add-torrent flow now exposes direct navigation to categories and tags.

Test Coverage Summary

  • UI-only additions (no new Rust tests added).

Observability Updates

  • None (UI-only changes, no new telemetry).

Risk & Rollback

  • Risk: health fields may appear empty when snapshots are unavailable; view handles None gracefully.
  • Rollback: revert the health feature module and restore the placeholder route.

Dependency Rationale

  • No new dependencies introduced.

Follow-up

  • Add metrics copy controls and align health styling with Nexus patterns.

042 – UI Metrics Copy Button (Task Record)

  • Status: In Progress
  • Date: 2025-12-24

Motivation

  • Provide a fast way to copy /metrics output from the Health page.
  • Close the optional metrics viewer requirement in the dashboard checklist.

Design Notes

  • Keep clipboard access in app to respect the “window-only in app” rule.
  • Use a HealthPage callback to avoid side effects in the feature view.
  • Emit success/error toasts to confirm copy status.

Decision

  • Use the Clipboard API (navigator.clipboard.writeText) for copying.
  • Guard the copy button when metrics payload is empty.

Consequences

  • Operators can copy metrics text without leaving the UI.
  • Clipboard permissions may block copy; errors are surfaced via toasts.

Test Coverage Summary

  • UI-only change; no new Rust tests added.

Observability Updates

  • None.

Risk & Rollback

  • Risk: clipboard API unavailable in some browsers.
  • Rollback: remove the copy button and clipboard helper.

Dependency Rationale

  • Enable the existing web-sys Clipboard feature to access navigator.clipboard.
  • Alternative considered: legacy execCommand("copy"), avoided due to deprecation.

043 – UI Settings Bypass Local Auth Toggle (Task Record)

  • Status: In Progress
  • Date: 2025-12-24

Motivation

  • Provide a settings control for preferring API keys in the auth prompt.
  • Close the remaining setup/auth flow requirement in the dashboard checklist.

Design Notes

  • Store the bypass toggle in LocalStorage but read/write it only from app.
  • Keep the settings view stateless and driven by AppStore props.
  • Use the toggle to influence the default auth prompt tab without forcing logout.

Decision

  • Add a settings feature view for the bypass local toggle.
  • Persist the toggle separately from the last-used auth mode.

Consequences

  • Auth prompt defaults to API key when bypass is enabled.
  • Existing auth state remains unchanged unless the user re-authenticates.

Test Coverage Summary

  • UI-only change; no new Rust tests added.

Observability Updates

  • None.

Risk & Rollback

  • Risk: users may still remain logged in with local auth while bypass is enabled.
  • Rollback: remove the settings view and toggle wiring.

Dependency Rationale

  • No new dependencies introduced.

044 – UI ApiClient Torrent Options/Selection Endpoints (Task Record)

  • Status: In Progress
  • Date: 2025-12-24

Motivation

  • Add the remaining torrent options/selection endpoints to the ApiClient.
  • Keep transport wiring centralized in the API service layer.

Design Notes

  • Use existing API model types (TorrentOptionsRequest, TorrentSelectionRequest).
  • Keep methods in services::api::ApiClient and reuse existing auth/application patterns.

Decision

  • Add ApiClient helpers for options updates and file selection updates.
  • Maintain consistent error wrapping and headers via the shared helpers.

Consequences

  • UI features can call these endpoints without duplicating transport logic.
  • File selection toggles now persist via the selection endpoint.

Test Coverage Summary

  • API client additions only; no new Rust tests added.

Observability Updates

  • None.

Risk & Rollback

  • Risk: API failures require reloading detail data to reconcile file selection state.
  • Rollback: remove the selection update path and ApiClient methods.

Dependency Rationale

  • No new dependencies introduced.

UI Icon System and Icon Buttons

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • Motivation: eliminate inline SVGs and standardize icon usage per the dashboard checklist.
    • Constraints: reuse Nexus/DaisyUI styling, avoid new dependencies, keep accessibility consistent.
  • Decision:
    • Summary: add a shared icon module under components/atoms/icons and a reusable IconButton component for icon-only actions.
    • Design notes: provide IconProps (size, class, optional title) and IconVariant for outline/solid arrows; reuse existing .icon-btn styles for consistent hover/focus behavior.
    • Alternatives considered: keep inline SVGs or introduce an external icon crate; rejected to avoid duplication and dependencies.
  • Consequences:
    • Positive outcomes: centralized icon rendering, consistent sizing, and cleaner shell/dashboard markup.
    • Risks/trade-offs: visual regressions if CSS assumptions about SVG sizing shift.
    • Observability updates: none.
  • Follow-up:
    • Implementation tasks: keep new icons in the shared module; replace any future inline SVGs with components.
    • Test coverage summary: UI component wiring only; no new tests added (llvm-cov still warns about mismatched data).
    • Dependency rationale: no new dependencies introduced.
    • Risk & rollback plan: revert icon module changes and restore inline SVGs if styling regresses.

UI Torrent Filters, Pagination, and URL Sync

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • Motivation: expose torrent filters in the URL and support paged list loading without breaking the normalized store.
    • Constraints: reuse existing API query semantics, avoid new dependencies, and keep URL updates inside app-level routing.
  • Decision:
    • Summary: parse filter query params from the router location, update the URL when filters change, and add an explicit Load more flow that appends rows.
    • Design notes: use build_torrent_filter_query for URL-only filters, keep refresh fetches cursor-free, and append rows only when a cursor is provided for pagination.
    • Alternatives considered: store cursor in the URL or auto-load more on scroll; rejected to keep query stable and avoid hidden fetches.
  • Consequences:
    • Positive outcomes: shareable filter URLs, explicit paging, and predictable list refresh behavior.
    • Risks/trade-offs: query sync relies on history replace semantics; overlapping API pages could still cause duplicate rows.
    • Observability updates: none.
  • Follow-up:
    • Implementation tasks: wire filter inputs, add Load more, and append list reducer support.
    • Test coverage summary: added unit tests for query round-tripping and append-row behavior.
    • Dependency rationale: no new dependencies introduced.
    • Risk & rollback plan: revert filter URL sync and pagination append logic if list state becomes inconsistent.
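The round-tripping the tests cover can be sketched as a pair of inverse functions. This is an assumption-laden illustration: the real build_torrent_filter_query works over typed filter fields, while this sketch uses a plain string map and does no percent-encoding.

```rust
// Sketch of the URL round-trip: filters serialize into a query string and
// parse back losslessly. Keys/values are assumed to need no encoding here.
use std::collections::BTreeMap;

fn build_filter_query(filters: &BTreeMap<String, String>) -> String {
    filters
        .iter()
        .map(|(k, v)| format!("{k}={v}"))
        .collect::<Vec<_>>()
        .join("&")
}

fn parse_filter_query(query: &str) -> BTreeMap<String, String> {
    query
        .split('&')
        .filter_map(|pair| {
            let (k, v) = pair.split_once('=')?;
            (!k.is_empty()).then(|| (k.to_string(), v.to_string()))
        })
        .collect()
}
```

Using a BTreeMap keeps serialization order stable, which is what makes shareable filter URLs deterministic.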

UI Torrent List Updated Timestamp Column

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • Motivation: surface the last updated timestamp alongside the existing list columns.
    • Constraints: avoid new dependencies and keep row slices stable for list rendering performance.
  • Decision:
    • Summary: store a formatted updated timestamp string in the torrent row base slice and render it as an optional column.
    • Alternatives considered: compute formatting in the component layer or add a relative time utility; rejected to keep row rendering pure and avoid new helpers.
  • Consequences:
    • Positive outcomes: list rows now include an explicit updated timestamp column with overflow fallback.
    • Risks/trade-offs: updated timestamps refresh only when list data is refreshed, not on every SSE event.
    • Observability updates: none.
  • Follow-up:
    • Implementation tasks: keep formatting consistent in the summary conversion.
    • Test coverage summary: added assertions for updated timestamps in row conversion tests.
    • Dependency rationale: no new dependencies introduced.
    • Risk & rollback plan: remove updated column mapping if list layout regresses.

ADR 048: UI torrent row actions, bulk controls, and rate/remove dialogs

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • Motivation: complete torrent list row actions and bulk controls with confirm/rate UX and concurrency safety.
    • Constraints: no new dependencies, no unwrap/expect in non-test code, yewdux-managed shared state, and clean just ci.
  • Decision:
    • Add UI action variants (reannounce, sequential on/off, rate) and map them to API actions.
    • Introduce row action menus plus remove/rate dialogs with input validation and delete-data toggle.
    • Implement a bulk-action runner with a concurrency cap, failure aggregation, and drawer-close logic when multi-select remains.
    • Alternatives considered: per-item toasts with sequential execution (rejected for spam and slow UX).
  • Consequences:
    • Positive: consistent row/bulk actions, safer removals, bounded bulk concurrency, and clear summary feedback.
    • Trade-offs: additional UI state for dialogs and bulk runner bookkeeping.
  • Follow-up:
    • Ensure translations are backfilled for new strings beyond English as needed.
    • Revisit concurrency cap if the API or UI performance requirements change.
  • Test coverage summary:
    • Added unit tests for rate input parsing in crates/revaer-ui/src/core/logic/mod.rs.
    • Existing action success message tests extended to cover new variants.
  • Observability updates:
    • None (UI-only changes; no new metrics/tracing added).
  • Risk & rollback plan:
    • Risk: dialog/menu UX regressions on small screens or edge-case bulk failures.
    • Rollback: revert this ADR’s changeset and restore prior row-action buttons and sequential bulk loop.
  • Dependency rationale:
    • No new dependencies added.
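The bulk-runner shape above can be illustrated with a simplified synchronous version. This is a sketch only: the real runner is async and joins each batch concurrently, whereas this version just iterates batch members in order; the cap-bounded batching and failure aggregation are the parts taken from the decision.

```rust
// Simplified sketch of the bulk-action runner: actions run in batches
// bounded by a concurrency cap, and failures are aggregated into one
// summary instead of per-item toasts. The real runner is async.
fn run_bulk<F>(ids: &[u32], cap: usize, mut action: F) -> (usize, Vec<u32>)
where
    F: FnMut(u32) -> Result<(), String>,
{
    let mut succeeded = 0;
    let mut failed = Vec::new();
    for batch in ids.chunks(cap.max(1)) {
        // In the async UI each batch would be awaited concurrently.
        for &id in batch {
            match action(id) {
                Ok(()) => succeeded += 1,
                Err(_) => failed.push(id),
            }
        }
    }
    (succeeded, failed)
}
```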

UI detail drawer overview/files/options

  • Status: Accepted
  • Date: 2025-12-24

Context

  • The torrent detail drawer still exposed legacy peers/trackers/log panes instead of the required overview/files/options layout.
  • The UI was maintaining a custom DetailData conversion layer instead of using shared API models.
  • The checklist requires edits only for fields supported by PATCH /v1/torrents/{id}/options and real file selection updates.

Decision

  • Render the detail drawer with Overview, Files, and Options tabs and include the same action set as the list rows.
  • Store TorrentDetail directly in the detail cache to avoid duplicate UI-only models and conversions.
  • Apply file selection changes via /select (include/exclude/priority/skip_fluff) and options changes via /options with optimistic updates.
  • Keep non-editable settings read-only to avoid fake controls.

Consequences

  • Removes duplicated detail mapping logic and keeps UI aligned with shared models.
  • Detail UI now depends on settings payloads for options and skip-fluff rendering.
  • Failed updates require a refresh to reconcile optimistic state.

Motivation

  • Align the UI with the Torrent UX checklist while preserving the thin-client model.

Design notes

  • Detail cache remains in yewdux details_by_id; list rows stay lightweight.
  • Components emit callbacks only; API calls remain in app-level handlers.

Test coverage summary

  • Added unit tests for detail selection, priority, skip-fluff, and options updates in torrents state.
  • Added a format_bytes unit test for the new size formatter.
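A size formatter like the format_bytes helper covered by those tests might look as follows. The unit thresholds (binary, 1024-based) and one-decimal rounding are assumptions for illustration; the real helper's exact output format is not documented here.

```rust
// Hedged sketch of a format_bytes-style size formatter. Binary units and
// one-decimal rounding are illustrative choices, not the real helper.
fn format_bytes(bytes: u64) -> String {
    const UNITS: [&str; 5] = ["B", "KiB", "MiB", "GiB", "TiB"];
    let mut value = bytes as f64;
    let mut unit = 0;
    while value >= 1024.0 && unit < UNITS.len() - 1 {
        value /= 1024.0;
        unit += 1;
    }
    if unit == 0 {
        // Whole bytes need no decimal place.
        format!("{bytes} B")
    } else {
        format!("{:.1} {}", value, UNITS[unit])
    }
}
```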

Observability updates

  • None (UI-only changes).

Risk & rollback plan

  • Risk: optimistic updates may temporarily show stale settings if the API rejects changes.
  • Mitigation: refresh detail on failure.
  • Rollback: restore the previous detail component and DetailData mapping.

Dependency rationale

  • Added workspace chrono to revaer-ui runtime deps to build demo detail timestamps.

UI torrent FAB + create modals

  • Status: Accepted
  • Date: 2025-12-24
  • Context:
    • The torrent UX checklist requires FAB-driven add/create modals and initial rate limits.
    • API calls must stay in the app layer with shared DTOs, and UI state lives in yewdux.
  • Decision:
    • Implement a floating action button that opens Add and Create torrent modals.
    • Wire POST /v1/torrents/create through the ApiClient and surface results + copy actions.
    • Move UI preferences (mode/density/locale) into the shared store for consistent access.
    • Alternatives considered:
      • Keep the add panel inline in the list view (rejected; no FAB flow).
      • Let modal components call the API directly (rejected; breaks layering rules).
  • Consequences:
    • Adds modal UX for torrent add/authoring and a FAB entry point.
    • Introduces minimal new store state for create results/errors and busy flags.
    • Additional translations and CSS required for modal + FAB presentation.
  • Follow-up:
    • Validate Add/Create modals visually against Nexus styling.
    • Run full just ci and confirm zero warnings.

Motivation

  • Finish the remaining torrent UX checklist items for FAB actions and authoring flows.
  • Keep state management consistent with the yewdux store rule.

Design notes

  • Modal components remain pure UI: they emit typed requests and copy intents via callbacks.
  • Create results are stored in the torrents slice to avoid cross-component ad hoc state.

Test coverage summary

  • Unit tests updated for add payload validation (rate parsing).
  • No new integration tests for UI-only changes.

Observability updates

  • None (UI-only change).

Risk & rollback plan

  • Risk: modal flows may need styling adjustments across breakpoints.
  • Rollback: revert UI modal/FAB changes and the create endpoint wiring.

Dependency rationale

  • No new dependencies introduced; reused existing shared DTOs and UI helpers.

UI shared API models and UX primitives

  • Status: Accepted
  • Date: 2025-12-25
  • Context:
    • The UI and CLI duplicated health/setup/dashboard DTOs, increasing drift risk against the API.
    • The torrent toolbar and labels views lacked debounced search, multi-select, and reusable empty/bulk primitives.
    • The UI checklist requires shared API models and a component primitive set with prop-driven configuration.
  • Decision:
    • Move health, setup-start, and dashboard DTOs into revaer-api-models and consume them from the API, UI, and CLI.
    • Add shared UI primitives (SearchInput with debounce, MultiSelect, EmptyState, BulkActionBar) and extend existing inputs/buttons for prop coverage.
    • Refactor torrent filters and label empty state to use the new primitives while retaining text-input fallback for tags when options are unavailable.
  • Consequences:
    • Reduces schema drift and keeps response shapes centralized in one crate.
    • Adds new UI primitives that standardize filter toolbars and empty states.
    • The setup-start endpoint now serializes expiration as RFC3339 strings to match shared DTOs.
  • Follow-up:
    • Audit remaining UI components for prop completeness and update the checklist item when finished.
    • Re-run the full just ci pipeline before final handoff.

Task record

  • Motivation: Eliminate duplicate API DTOs and complete missing UI primitives required by the Torrent UX checklist.
  • Design notes: Shared DTOs live in revaer-api-models; new primitives live under components and are consumed by torrents/labels views to avoid dead code.
  • Test coverage summary: Not run in this update (follow-up required per AGENT.md).
  • Observability updates: None.
  • Risk & rollback plan: Revert to previous DTO structs in API/CLI/UI and restore raw input elements if regressions surface.
  • Dependency rationale: No new dependencies added.

UI dashboard migration to Nexus vendor layout

  • Status: Accepted
  • Date: 2025-12-25
  • Context:
    • Align the dashboard and shell UI with the vendored Nexus HTML to remove drift.
    • Remove the blocking SSE overlay and replace it with a non-blocking connectivity surface.
    • Preserve routing and layout classes so Nexus CSS can remain authoritative.
  • Decision:
    • Replace the old dashboard and shell markup with Nexus vendor partials and dashboard structure.
    • Introduce SSE connectivity state in the store with a drawer-footer indicator and modal.
    • Remove legacy dashboard CSS overrides and ensure vendor app.css is the primary styling source.
  • Consequences:
    • Positive: Nexus parity, simpler shell structure, non-blocking connectivity UX.
    • Risks: UI copy/labels diverge from vendor defaults; mode toggle now relies on existing stored preference.
  • Follow-up:
    • Verify visual parity against Nexus dashboard sections.
    • Monitor SSE reconnection details surfaced in the modal.

Motivation

  • Ensure the UI matches the vendored Nexus dashboard and shell while eliminating legacy layout glue.
  • Replace blocking SSE overlays with a navigation-safe connectivity indicator.

Design notes

  • App shell and dashboard markup preserve the Nexus layout/class structure while the repo keeps only the vendor asset kit; executable Nexus reference HTML is not retained in-tree.
  • Dashboard sections are split into Nexus-faithful organisms while preserving class names and nesting.
  • SSE status is stored in system.sse_status; indicator consumes a summary slice, modal consumes full details.

Test coverage summary

  • just ci (fmt, lint, udeps, audit, deny, ui-build, test, cov)

Observability updates

  • None.

Risk & rollback plan

  • If Nexus markup causes regressions, revert to the previous dashboard/shell and reintroduce the prior CSS and route wiring.
  • If SSE diagnostics cause UI noise, hide the indicator by feature flag and keep reconnect logic intact.

Dependency rationale

  • Added web-sys feature HtmlDialogElement to open the Nexus search modal via show_modal without new crates.

UI: Hardline Nexus Dashboard Rebuild and Settings Wiring

  • Status: Accepted
  • Date: 2025-12-26
  • Context:
    • The Home dashboard must match the vendored Nexus HTML structure and DaisyUI component patterns.
    • Navigation and shell need to be simplified to Home/Torrents/Settings with a non-blocking SSE indicator.
    • Settings must remain reachable even when auth is missing and show a config snapshot.
  • Decision:
    • Rebuild dashboard sections to mirror Nexus markup (stats cards, storage status, recent events, tracker health, queue summary).
    • Align AppShell sidebar/topbar with Nexus partial structure and move the SSE indicator to the sidebar footer.
    • Wire Settings to fetch /v1/config and provide test-connection actions while keeping auth overlays off the Settings route.
    • Disable wasm-opt in the Trunk pipeline (data-wasm-opt="0") to avoid build failures on missing staged wasm outputs.
    • Use relative static asset paths for Nexus CSS and dashboard image URLs to keep styles/images loading when served from non-root paths.
    • Alternatives considered: importing revaer_config::ConfigSnapshot into the UI; rejected to avoid new cross-crate dependencies in wasm.
  • Consequences:
    • Positive: consistent Nexus/DaisyUI layout, simplified nav, and settings access even during auth errors.
    • Trade-offs: UI-only fetches rely on runtime connectivity; config display is untyped JSON in the UI; wasm bundles are no longer optimized by wasm-opt.
  • Follow-up:
    • Verify visual parity in the browser and keep the Nexus HTML deltas minimal.
    • Add typed config rendering if a UI-safe shared type becomes available.

Task Record

  • Motivation: enforce Nexus + DaisyUI parity for the dashboard while keeping Settings reachable and diagnostics visible.
  • Design notes:
    • Mapped each dashboard section to specific Nexus blocks; the SSE indicator uses the sidebar footer with a non-blocking dialog.
    • Parsed the config snapshot as serde_json::Value to avoid new dependencies.
    • Disabled wasm-opt in crates/revaer-ui/index.html to keep trunk build --release reliable in this environment until tooling changes.
    • Aligned Nexus image URLs to /static/nexus/... for correct asset loading on all routes.
    • Aligned the sidebar footer indicator to the Nexus pinned-footer structure, restored the missing Global Sales card slot, and made the auth prompt non-blocking while stabilizing drawer hook usage.
    • Re-aligned the torrents filter header to the Nexus orders layout, switched the search input to DaisyUI input-sm sizing, removed the custom placeholder override so DaisyUI placeholder styles apply, and removed the legacy torrent list view.
  • Test coverage summary: just ci runs but just cov fails at ~77.6% overall line coverage (below the ≥80% gate); no new unit tests added in this update.
  • Observability updates: none (UI-only changes).
  • Risk & rollback plan: revert crates/revaer-ui dashboard/shell/settings edits and static/style.css if UI regressions appear.
  • Dependency rationale: no new dependencies added; reused existing serde_json.

UI Dashboard Nexus Parity Tweaks

  • Status: Accepted
  • Date: 2025-12-27
  • Context:
    • Dashboard cards drifted from the vendored Nexus markup and referenced missing i18n keys.
    • Connectivity modal included fields outside the required SSE status spec.
    • Constraints: keep Nexus layout structure, use DaisyUI semantic tokens, avoid new dependencies.
  • Decision:
    • Rework the storage usage and tracker health cards to match Nexus layout structure and available translation keys.
    • Align queue summary/global summary labels to existing nav/dashboard strings.
    • Trim the SSE connectivity modal to the required fields and labels.
    • Replace dashboard recent events table markup with a DaisyUI list layout.
    • Limit SSE indicator label expansion to the sidebar expanded state only.
    • Alternatives considered: adding new translation keys across all locales (rejected for scope and translation burden).
  • Consequences:
    • Positive outcomes: fewer missing strings, closer Nexus parity, clearer SSE status display.
    • Risks or trade-offs: storage usage detail reduced to summary metrics; some labels remain static in English where Nexus requires them.
  • Follow-up:
    • Manually verify Nexus dashboard parity and table hover styling in the UI.

Motivation

  • Restore Nexus layout parity for dashboard sections and eliminate missing dashboard translation keys.

Design Notes

  • Storage usage mirrors the Nexus revenue card layout with the chart slot preserved.
  • Tracker health metrics follow the Nexus acquisition grid with two columns and error count in the header.
  • Queue summary and global summary labels use existing nav/dashboard translations.
  • SSE connectivity modal aligns with the required status fields only.
  • Recent events use a DaisyUI list layout that preserves the Nexus header structure.
  • Row-hover styling applies to list rows for parity with table hover behavior.

Test Coverage Summary

  • No new tests added; UI-only changes.

Observability Updates

  • None.

Risk & Rollback Plan

  • Low risk; revert the UI component edits if layout regressions appear.

Dependency Rationale

  • No new dependencies introduced.

Factory Reset and Bootstrap API Key

  • Status: Accepted
  • Date: 2025-12-27
  • Context:
    • Need a safe factory reset workflow that keeps navigation available while enforcing confirmation.
    • Setup completion must return a bootstrap API key with a 14-day client-side expiry.
    • Raw reset errors must surface to the UI for operator visibility.
  • Decision:
    • Add revaer_config.factory_reset() stored procedure and /admin/factory-reset API endpoint guarded by API key auth.
    • Ensure setup completion provisions or reuses a bootstrap API key and returns it with an expiry timestamp.
    • Persist the bootstrap API key with expiry in local storage and require manual dismissal for error toasts.
  • Consequences:
    • Factory reset clears configuration/runtime data and returns the system to setup mode.
    • API key expiry is enforced on the client; the server remains stateless about expiry.
    • Reset failures are delivered verbatim to clients for display.
  • Follow-up:
    • Update OpenAPI export, UI dropdown + modal wiring, and storage helpers.
    • Verify CI and runtime migrations.

Factory reset bootstrap auth fallback

  • Status: Accepted
  • Date: 2025-12-28
  • Context:
    • Factory reset requires API key auth, but existing installs can be in active mode with zero API keys (pre-bootstrap).
    • Without a key, the UI cannot authenticate and the system has no recovery path.
    • The reset path must still use stored procedures and surface raw errors when the reset fails.
  • Decision:
    • Add a has_api_keys capability to the config facade so the API can detect empty key inventories.
    • Introduce a factory-reset-specific auth gate that accepts valid API keys, or allows the reset when no API keys exist (logging a warning).
    • Keep confirmation phrase validation unchanged.
  • Consequences:
    • Provides a recovery path for deployments missing API keys.
    • When no API keys exist, factory reset can be triggered without auth; this is acceptable because the system is already unauthenticated in that state.
  • Follow-up:
    • Consider tightening the fallback to loopback-only requests if new auth modes are added.
    • Ensure UI messaging continues to surface authorization errors via toasts.
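The gate described above reduces to a three-way decision. This sketch captures only the stated rules (valid key passes; no keys at all passes with a warning; otherwise denied); the names and the enum shape are illustrative, not the API's actual types.

```rust
// Minimal sketch of the factory-reset auth gate: a valid API key always
// authorizes, and when the key inventory is empty the reset is allowed
// anyway so pre-bootstrap installs keep a recovery path.
#[derive(Debug, PartialEq)]
enum ResetAuth {
    Authorized,
    // Callers are expected to log a warning on this variant.
    AuthorizedNoKeys,
    Denied,
}

fn authorize_factory_reset(has_api_keys: bool, key_is_valid: bool) -> ResetAuth {
    if key_is_valid {
        ResetAuth::Authorized
    } else if !has_api_keys {
        ResetAuth::AuthorizedNoKeys
    } else {
        ResetAuth::Denied
    }
}
```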

UI settings tabs and editor controls

  • Status: Accepted
  • Date: 2025-12-28
  • Context:
    • The settings screen exposed raw configuration values without meaningful grouping or editing controls.
    • Torrent operators need quick access to download, seeding, network, and storage controls with clear defaults.
    • Settings patches must flow through the existing API and honor immutable fields.
  • Decision:
    • Rebuild the settings UI as tabbed panels aligned with torrent workflows (connection, downloads, seeding, network, storage, system).
    • Drive all editable controls from the config snapshot and submit targeted /v1/config changesets per group.
    • Treat immutable fields and effective engine snapshots as read-only with copy-to-clipboard affordances.
  • Consequences:
    • Settings are now grouped for faster navigation and support direct edits with consistent controls.
    • The UI performs more client-side validation for numeric and JSON fields before patching.
  • Follow-up:
    • Evaluate dedicated server-side directory browsing if operators need richer path discovery.
    • Add localization for settings field labels where needed.

Motivation

  • Make settings usable for torrent operators by grouping them into purpose-built tabs.
  • Replace raw config tables with toggles, selects, numeric inputs, and path pickers.
  • Ensure read-only values are still accessible via copy actions.

Design notes

  • Draft values are derived from the latest config snapshot and compared to build minimal changesets.
  • Immutable keys from app_profile.immutable_keys and derived engine fields are rendered read-only.
  • Directory selection uses a modal picker with suggested paths from the snapshot.

Test coverage summary

  • just ci

Observability updates

  • UI toasts surface config patch failures and copy failures; no new metrics.

Risk & rollback plan

  • Risk: incorrect grouping or input parsing could lead to failed patches.
  • Rollback: revert to the previous settings view and re-fetch configuration.

Dependency rationale

  • No new dependencies.

UI Settings Controls, Logs Stream, and Filesystem Browser

  • Status: Accepted
  • Date: 2025-12-28
  • Context:
    • Motivation: replace JSON settings editing with structured controls, add an on-demand logs view, and provide a server-backed filesystem browser for path selection.
    • Constraints: keep stored-procedure access, avoid new dependencies, and only stream logs while the Logs route is active.
  • Decision:
    • Added an SSE logs stream backed by a log broadcast writer and a Logs UI route that connects only while mounted.
    • Added a filesystem browse endpoint and path picker UI for directory selection, with server-side path validation for label policy download dirs.
    • Reworked settings into tabbed sections with a single draft/save bar and structured field editors.
  • Consequences:
    • Positive: consistent UI controls, safer path selection, and live logs available without background streaming.
    • Risks: invalid paths now fail validation; recovery requires clearing the offending field or updating the path.
  • Follow-up:
    • Tests: no new dependencies; validation logic exercised via existing config pathways (add focused tests if coverage drops).
    • Observability: log stream events emit via SSE; status surfaced in UI badge.
    • Risk & rollback: revert the logs route/endpoint and path validation if regressions appear; keep previous settings UI behind a feature branch.
    • Dependency rationale: no new dependencies added.

059 – Migration Rebaseline And JSON Backfill Guardrails

  • Status: Accepted
  • Date: 2025-12-28
  • Context:
    • Migration sprawl made upgrades brittle and conflicted with the single-file mandate.
    • JSON columns are banned for settings; backfills must not wipe normalized data on upgrade.
    • The baseline migration must be idempotent for both new databases and upgrades.
  • Decision:
    • Collapse migration history into crates/revaer-data/migrations/0007_rebaseline.sql and remove prior migration files.
    • Add upgrade-safe guardrails in the JSON backfill to avoid overwriting normalized data when legacy columns are empty or newly introduced.
    • Mark the configuration migrator to ignore missing files so existing databases can apply the new baseline cleanly.
    • Update documentation and dev seed SQL to match the normalized schema.
  • Consequences:
    • Positive: one deterministic baseline, no JSON columns in the final schema, safer upgrades.
    • Trade-offs: the baseline SQL is larger and includes legacy steps to support upgrades.
  • Follow-up:
    • Run the full just ci gate and validate factory reset behavior against a real database.
    • Monitor future schema changes to ensure they append to the consolidated baseline.
  • Motivation:
    • Ensure migration idempotency and JSON-free settings storage without breaking existing installations.
  • Design notes:
    • Keep legacy JSON parsing helpers only long enough to migrate data; drop them in the same baseline.
    • Add trigger drops to make DDL re-entrant when applying the baseline on upgraded databases.
  • Test coverage summary:
    • just check (workspace, all targets, all features).
  • Observability updates:
    • No telemetry changes required.
  • Risk & rollback plan:
    • Risk: legacy upgrade paths could still expose migration gaps; rollback by restoring the previous migration set from version control.
  • Dependency rationale:
    • No new dependencies introduced.

Auth Expiry + Error Context Fields

  • Status: Accepted
  • Date: 2025-12-28
  • Context:
    • Factory reset failures must surface raw error details to clients without embedding context in error messages.
    • Setup completion must issue an API key that expires after 14 days, and expiration must be enforced server-side.
    • JSONB-based helpers are disallowed; legacy helpers must be removed while preserving upgrade paths.
  • Decision:
    • Add an optional expires_at timestamp to auth_api_keys and extend API key upsert helpers to persist it.
    • Extend RFC9457 ProblemDetails with structured context fields so raw error details can be returned separately from constant error messages.
    • Purge JSONB-based helper functions during migration to keep final database surfaces JSON-free.
  • Consequences:
    • Positive outcomes: API key expiry is enforced consistently; error responses can include raw details without violating message rules; migrations end with JSONB-free functions.
    • Risks or trade-offs: Existing API clients must tolerate the new context field; migrations rely on drop logic to clear legacy helper functions.
  • Follow-up:
    • Implementation tasks: update API key auth reads to respect expiry; add error context plumbing in API/UI clients; keep openapi export in sync.
    • Review checkpoints: verify migrations run cleanly, JSONB functions are absent, and factory reset errors surface in toasts.

API i18n error localization and OpenAPI assets

  • Status: Accepted
  • Date: 2025-12-29
  • Context:
    • API error responses needed localization via Accept-Language without introducing new dependencies.
    • openapi.rs could not retain hard-coded asset paths while still embedding the spec.
  • Decision:
    • Add a lightweight API i18n module that selects a locale from Accept-Language, loads an embedded bundle, and localizes error titles/details/invalid params with fallback to the original string.
    • Centralize embedded OpenAPI assets in a dedicated module so openapi.rs is path-free.
    • Alternatives considered: key-based localization in all error constructors (larger refactor); relying on client-only localization (does not meet API requirement).
  • Design notes:
    • Locale parsing accepts the first supported tag and falls back to en.
    • Translation load failures are logged once and degrade to identity translations.
    • OpenAPI asset constants are crate-private to avoid leaking filesystem structure.
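
As a sketch of the locale-selection behavior described above (function and parameter names are illustrative, not the crate's actual API):

```rust
/// Select the first supported locale from an Accept-Language header,
/// falling back to "en". Illustrative names, not the crate's real API.
fn select_locale<'a>(accept_language: Option<&str>, supported: &[&'a str]) -> &'a str {
    let fallback = "en";
    let Some(header) = accept_language else {
        return fallback;
    };
    for part in header.split(',') {
        // Drop quality weights like ";q=0.8" and normalize case.
        let tag = part.split(';').next().unwrap_or("").trim().to_ascii_lowercase();
        // Accept the full tag ("fr-ch") or its primary subtag ("fr").
        let primary = tag.split('-').next().unwrap_or("");
        if let Some(found) = supported.iter().copied().find(|l| *l == tag || *l == primary) {
            return found;
        }
    }
    fallback
}

fn main() {
    assert_eq!(select_locale(Some("fr-CH, fr;q=0.9, en;q=0.8"), &["en", "fr"]), "fr");
    assert_eq!(select_locale(Some("de-DE"), &["en", "fr"]), "en");
    assert_eq!(select_locale(None, &["en", "fr"]), "en");
}
```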
  • Test coverage summary:
    • Added unit coverage for locale parsing, translation availability, and fallback behavior in the i18n module.
  • Observability updates:
    • Translation load failures emit a structured error log with the locale.
  • Consequences:
    • Error responses now pass through a localization hook; untranslated strings remain unchanged.
    • OpenAPI asset paths are centralized for easier maintenance.
  • Risk & rollback plan:
    • Risk: missing translation keys fall back to the original message. Roll back by removing i18n middleware and restoring direct error serialization.
  • Dependency rationale:
    • No new dependencies; reused existing serde_json and standard library types.
  • Follow-up:
    • Expand message coverage in crates/revaer-api/i18n/en.json as new error strings are added.

Event Bus Publish Guardrails + API i18n Cleanup

  • Status: Accepted

  • Date: 2025-12-28

  • Context:

    • Event publishing failures were silently ignored, violating the no-error-suppression rule.
    • Several API error strings were missing i18n keys, breaking the localized error contract.
    • A few runtime logs still interpolated context into messages and needed structured fields.
  • Decision:

    • Introduce EventBusError and make EventBus::publish return Result so failures are handled explicitly.
    • Add publish helpers in runtime services (API state, fsops, libtorrent worker, app bootstrap) that log publish failures with structured fields.
    • Expand the API i18n bundle to include new error keys used by settings and auth flows.
    • Move anyhow to dev-dependencies for revaer-api and remove the remaining debug assert/log interpolation in production paths.
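
A minimal sketch of the explicit publish contract, assuming an illustrative `EventBusError` shape (the real type's fields may differ):

```rust
use std::fmt;

/// Illustrative error type; the real EventBusError fields may differ.
#[derive(Debug)]
struct EventBusError {
    event_id: u64,
    event_kind: &'static str,
}

impl fmt::Display for EventBusError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Constant message; context travels in structured fields, not the text.
        write!(f, "event publish failed")
    }
}

impl std::error::Error for EventBusError {}

struct EventBus {
    closed: bool,
}

impl EventBus {
    /// publish returns Result, so callers must handle failures explicitly.
    fn publish(&self, event_id: u64, event_kind: &'static str) -> Result<(), EventBusError> {
        if self.closed {
            return Err(EventBusError { event_id, event_kind });
        }
        Ok(())
    }
}

fn main() {
    let bus = EventBus { closed: true };
    // A publish helper logs this with structured fields instead of dropping it.
    if let Err(err) = bus.publish(42, "torrent.metadata") {
        eprintln!("event publish failed: event_id={} event_kind={}", err.event_id, err.event_kind);
    }
}
```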
  • Consequences:

    • Positive outcomes: event publishing is no longer silently ignored; API error messages are consistently localizable; log output stays structured.
    • Risks or trade-offs: event publish errors are now surfaced via warnings, which may be noisy if the bus is misconfigured.
  • Follow-up:

    • Implementation tasks: ensure downstream callers handle EventBusError where needed; keep i18n bundles in sync with new error keys.
    • Review checkpoints: confirm just ci passes and that SSE/event flows still deliver updates without regressions.
  • Motivation:

    • Align runtime error handling with AGENT.md guardrails and remove hidden failure paths.
  • Design notes:

    • Event bus publish errors expose event_id + event_kind for structured logging without embedding context in messages.
    • API error strings added to en.json match the exact keys emitted by handlers.
  • Test coverage summary:

    • Not run in this change set; run just ci before release.
  • Observability updates:

    • Added structured warning logs when event publishing fails.
  • Risk & rollback plan:

    • Low risk; revert to prior publish semantics if event logging proves too noisy.
  • Dependency rationale:

    • No new dependencies added.

CI compliance cleanup for test error handling

  • Status: Accepted
  • Date: 2025-12-30
  • Context:
    • Motivation: restore just ci compliance and remove explicit panic/unwrap patterns in tests to align with AGENT error-handling rules.
    • Constraints: keep coverage ≥ 80% and avoid new dependencies while satisfying clippy::pedantic.
  • Decision:
    • Replace explicit panic!/unwrap usages in tests with Result-returning flows and let...else patterns.
    • Exercise must-use values in tests to avoid lint violations.
  • Consequences:
    • Positive outcomes: lint clean, tests remain deterministic, and coverage stays above the gate.
    • Risks or trade-offs: slightly more verbose test code; added Result plumbing in tests.
  • Follow-up:
    • Implementation tasks: keep new tests using Result and let...else patterns when adding coverage.
    • Review checkpoints: re-run just ci after any test refactors.

Design notes

  • Tests now surface unexpected success paths as explicit error returns instead of panics.
  • SSE test responses are exercised via into_response to satisfy must-use lints.

Test coverage summary

  • just ci completed with line coverage at 80.04%.

Observability updates

  • None.

Dependency rationale

  • No new dependencies added.

Risk & rollback plan

  • Risk: minimal; changes are confined to tests.
  • Rollback: revert this ADR and the test-only edits, then re-run just ci.

Factory reset hardening and allow-path validation

  • Status: Accepted
  • Date: 2025-12-30
  • Context:
    • Motivation: surface actionable factory reset failures, prevent long-running resets from hanging, and tighten allow-path validation for directory entries.
    • Constraints: preserve API i18n behavior, keep error context structured, and avoid new dependencies or inline SQL outside migrations.
  • Decision:
    • Derive the deepest error source string for factory reset failures and return it in structured context.
    • Allow factory resets to proceed without API keys when no keys exist, even if a stale API key header is present.
    • Add a lock timeout in the factory reset stored procedure to avoid indefinite blocking.
    • Validate each allow-path entry as a non-empty directory before persisting updates.
    • Add unit tests covering error extraction and the stale API key path.
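
The per-entry allow-path check can be sketched roughly as follows (function name and error strings are illustrative):

```rust
use std::env;
use std::path::Path;

/// Reject empty entries and paths that are not existing directories.
/// A sketch; the real validation lives in the config update path.
fn validate_allow_path(entry: &str) -> Result<(), String> {
    if entry.trim().is_empty() {
        return Err("allow-path entry must not be empty".to_string());
    }
    let path = Path::new(entry);
    if !path.is_dir() {
        return Err(format!("allow-path entry is not a directory: {entry}"));
    }
    Ok(())
}

fn main() {
    assert!(validate_allow_path("").is_err());
    assert!(validate_allow_path("/definitely/not/a/dir").is_err());
    // The platform temp directory always exists, so it passes validation.
    let tmp = env::temp_dir();
    assert!(validate_allow_path(&tmp.to_string_lossy()).is_ok());
}
```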
  • Consequences:
    • Positive outcomes: factory reset failures surface raw causes; invalid allow-path entries are rejected; resets fail fast on lock contention.
    • Risks or trade-offs: stricter validation can reject empty allow-path entries that previously slipped through; lock timeouts may require retrying during heavy database activity.
  • Follow-up:
    • Implementation tasks: confirm UI toasts surface context fields for factory reset failures and lock timeouts.
    • Review checkpoints: run just ci and just build-release before handoff.

Design notes

  • Walk the Error::source chain to surface the innermost message without mutating the API detail string.
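
The source-chain walk needs only the standard library; a sketch with illustrative error types:

```rust
use std::error::Error;
use std::fmt;

/// Walk the source chain to the innermost error and return its message.
fn deepest_message(err: &(dyn Error + 'static)) -> String {
    let mut current: &(dyn Error + 'static) = err;
    while let Some(source) = current.source() {
        current = source;
    }
    current.to_string()
}

#[derive(Debug)]
struct Inner;
impl fmt::Display for Inner {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "disk full")
    }
}
impl Error for Inner {}

#[derive(Debug)]
struct Outer(Inner);
impl fmt::Display for Outer {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "factory reset failed")
    }
}
impl Error for Outer {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.0)
    }
}

fn main() {
    // The outer message stays constant; the innermost cause is surfaced separately.
    assert_eq!(deepest_message(&Outer(Inner)), "disk full");
}
```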

Test coverage summary

  • just ci: line coverage 80.06%.
  • just build-release: succeeded.

Observability updates

  • None.

Dependency rationale

  • No new dependencies added.

Risk & rollback plan

  • Risk: allow-path validation rejects empty entries; factory reset error context exposes raw backend errors; lock timeout may surface new transient failures during heavy DB activity.
  • Rollback: revert the allow-path validation, auth fallback, and lock-timeout adjustments, remove the related tests, then re-run just ci.

API key refresh and no-auth setup mode

  • Status: Accepted
  • Date: 2025-12-30
  • Context:
    • Motivation: keep API keys valid without manual re-auth, and allow local setup flows to opt into anonymous access.
    • Constraints: no new dependencies, stored-procedure-only config writes, and API errors localized through i18n.
  • Decision:
    • Add app_profile.auth_mode with api_key/none and allow anonymous auth when none is configured.
    • Introduce /v1/auth/refresh to extend API key expiry without rotation, and schedule refresh in the UI before expiry.
    • Persist anonymous auth state for no-auth setups and reuse the well-known snapshot for setup changeset construction.
    • Store API key expirations in local storage and refresh with a 24-hour safety skew.
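
The client-side scheduling amounts to simple time arithmetic; a sketch assuming the 14-day TTL and 24-hour skew described above:

```rust
use std::time::{Duration, SystemTime};

/// Safety skew: refresh a full day before the key would expire.
const REFRESH_SKEW: Duration = Duration::from_secs(24 * 60 * 60);

/// Compute how long to wait before refreshing: 24 hours before expiry,
/// or immediately if the key is already within the skew window.
fn refresh_delay(now: SystemTime, expires_at: SystemTime) -> Duration {
    match expires_at.duration_since(now) {
        Ok(until_expiry) => until_expiry.saturating_sub(REFRESH_SKEW),
        Err(_) => Duration::ZERO, // already expired: refresh now
    }
}

fn main() {
    let now = SystemTime::UNIX_EPOCH;
    let expires = now + Duration::from_secs(14 * 86_400); // 14-day TTL
    // Refresh fires 13 days out, leaving the 24-hour safety margin.
    assert_eq!(refresh_delay(now, expires), Duration::from_secs(13 * 86_400));
}
```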
  • Consequences:
    • Positive outcomes: no-auth local deployments work without API keys; API keys remain valid without user action.
    • Risks or trade-offs: no-auth mode reduces access control if enabled unintentionally; refresh scheduling depends on client time.
  • Follow-up:
    • Implementation tasks: keep OpenAPI spec and UI translations in sync with new auth/refresh UX.
    • Review checkpoints: run just ci and just build-release before handoff.

Design notes

  • Auth mode is stored in app_profile and enforced in API auth middleware.
  • Token refresh extends expiry only; no rotation or secret re-issuance.

Test coverage summary

  • just ci: line coverage 80.03%.
  • just build-release: succeeded.

Observability updates

  • None.

Dependency rationale

  • No new dependencies added.

Risk & rollback plan

  • Risk: anonymous access enabled on non-local deployments; refresh timing sensitive to client clock drift.
  • Rollback: remove auth_mode, revert auth middleware and refresh endpoint, and delete UI refresh scheduling plus setup auth mode selection.

Factory reset UX fallback and SSE setup gating

  • Status: Accepted
  • Date: 2025-12-30
  • Context:
    • Motivation: SSE returns 409 when the server is in setup mode, leaving the UI stuck after factory reset or manual setup transitions.
    • Constraints: keep the UI non-blocking, avoid API key reuse after reset, and keep state transitions client-driven without new dependencies.
  • Decision:
    • Gate SSE connection on AppModeState and surface a disconnected status when the server is in setup mode.
    • Treat SSE 409 responses as a setup signal: clear auth state and move the app into setup mode in the store.
    • Ensure factory reset success forces AppModeState::Setup even if the reload fails.
  • Consequences:
    • Positive outcomes: factory reset lands users on the setup flow; SSE no longer loops on 409 responses.
    • Risks or trade-offs: clears stored auth on setup transitions, requiring re-auth after reset.
  • Follow-up:
    • Implementation tasks: monitor setup flows for any unexpected auth clears and adjust messaging if needed.
    • Review checkpoints: run just ci and just build-release before handoff.

Design notes

  • SSE is disabled in setup mode to prevent repeated 409 retries and to keep the UI responsive.
  • Setup transitions clear auth storage to avoid stale API keys after reset.

Test coverage summary

  • just ci: failed (cargo llvm-cov line coverage 77.59% < 80%).

Observability updates

  • None.

Dependency rationale

  • No new dependencies added.

Risk & rollback plan

  • Risk: users expecting to keep API keys across resets will have to re-authenticate.
  • Rollback: remove SSE setup gating and 409 handling, revert factory reset UI state updates, and restore previous auth persistence behavior.

Logs ANSI rendering and bounded buffer

  • Status: Accepted
  • Date: 2025-12-30
  • Context:
    • Motivation: logs view must preserve ANSI color/style codes, Unicode characters, and remain responsive over long sessions.
    • Constraints: keep memory usage bounded, avoid new dependencies, keep layout aligned with UI rules, and avoid build conflicts with trunk serve.
  • Decision:
    • Parse ANSI SGR sequences into styled spans for rendering with theme-aware colors.
    • Keep a bounded in-memory log buffer with a fixed max size.
    • Use streaming text decode to preserve multibyte characters across chunks.
    • Add new log lines to the top of the view and restrict scrolling to the terminal area.
    • Use a dedicated dist-serve directory for trunk serve to avoid staging conflicts with ui-build.
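
A much-simplified sketch of splitting a line into (SGR parameters, text) spans; the real parser additionally maps parameters onto theme colors:

```rust
/// Split a log line into (SGR parameter string, text) spans.
/// Simplified sketch: only CSI ... 'm' sequences are recognized.
fn split_sgr_spans(line: &str) -> Vec<(String, String)> {
    let mut spans = Vec::new();
    let mut style = String::new();
    let mut text = String::new();
    let mut chars = line.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\u{1b}' && chars.peek() == Some(&'[') {
            chars.next(); // consume '['
            // Close the current span before the style changes.
            if !text.is_empty() {
                spans.push((style.clone(), std::mem::take(&mut text)));
            }
            // Collect parameters like "31" or "1;32" up to the final 'm'.
            let mut params = String::new();
            for p in chars.by_ref() {
                if p == 'm' {
                    break;
                }
                params.push(p);
            }
            style = params;
        } else {
            text.push(c);
        }
    }
    if !text.is_empty() {
        spans.push((style, text));
    }
    spans
}

fn main() {
    let spans = split_sgr_spans("\u{1b}[31mERROR\u{1b}[0m done");
    assert_eq!(
        spans,
        vec![
            ("31".to_string(), "ERROR".to_string()),
            ("0".to_string(), " done".to_string())
        ]
    );
}
```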
  • Consequences:
    • Positive outcomes: log output retains color/style and Unicode, memory growth is capped, log background is black.
    • Risks or trade-offs: ANSI color mapping approximates terminal colors via theme tokens and CSS variables.
  • Follow-up:
    • Implementation tasks: monitor logs stream for any unhandled ANSI sequences and extend parsing as needed.
    • Review checkpoints: run just ci before handoff.

Test coverage summary

  • just ui-build: failed (wasm-bindgen could not write to staging directory while trunk serve was running).

Observability updates

  • None.

Dependency rationale

  • No new dependencies added.

Risk & rollback plan

  • Risk: unusual ANSI sequences may render as plain text.
  • Rollback: remove ANSI parsing and revert to raw log line rendering.

Agent Compliance: Clippy Cargo Lints

  • Status: Accepted
  • Date: 2025-12-31
  • Context:
    • AGENT.md mandates clippy::cargo in the crate-level deny list for every lib/main.
    • Several crate roots were missing clippy::cargo, which is a documented compliance violation.
  • Decision:
    • Add clippy::cargo to every crate-level lint deny list alongside clippy::all/pedantic/nursery.
    • Keep existing unsafe-code policies intact (FFI-only allowances remain scoped).
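
The resulting crate-root attribute looks roughly like this (a sketch; the exact lists and any scoped allowances vary per crate):

```rust
// At the top of each lib.rs / main.rs (sketch; per-crate lists vary):
#![deny(
    clippy::all,
    clippy::pedantic,
    clippy::nursery,
    clippy::cargo
)]
```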
  • Consequences:
    • Positive outcomes: consistent lint coverage across crates; future clippy::cargo issues surface early.
    • Risks or trade-offs: additional lint findings may require follow-up fixes in future changes.
  • Follow-up:
    • Run just ci to confirm the lint gate passes across the workspace.
    • Monitor future changes for clippy::cargo warnings introduced by new code.
  • Motivation:
    • Align all crates with AGENT.md lint requirements and eliminate policy drift.
  • Design notes:
    • Automated, minimal insertion of clippy::cargo after clippy::pedantic in existing deny lists.
  • Test coverage summary:
    • just ci (full pipeline) is required before hand-off; run after edits.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Roll back the lint list changes if they conflict with a required exception, then document a targeted ADR.
  • Dependency rationale:
    • No new dependencies added.

Docs: Pin mdbook-mermaid for just docs

  • Status: Accepted

  • Date: 2025-12-31

  • Context:

    • Motivation: just docs failed because mdbook-mermaid 0.16.2 is incompatible with mdbook 0.5.2 and errors during preprocessing, even though the docs themselves are valid.
    • Constraints: Docs build must run via just, no manual tooling, avoid repo changes outside the justfile.
    • Test coverage summary: just docs run after change; no unit tests applicable.
    • Observability updates: None.
    • Dependency rationale: No new crates; pin existing mdbook-mermaid tool to 0.17.0 to match mdbook 0.5.x behavior.
  • Decision:

    • Require mdbook-mermaid 0.17.0 in just docs-install and reinstall if mismatched.
    • Make just docs invoke just docs-install before build and index.
    • Alternatives considered: rely on user-managed tool versions; pin mdbook to 0.5.0; remove mermaid preprocessor.
  • Consequences:

    • Positive outcomes: just docs consistently installs a compatible mermaid preprocessor and builds successfully.
    • Risks or trade-offs: Running just docs may reinstall mdbook-mermaid when versions differ; version pin may lag future mdbook releases.
    • Risk & rollback plan: If issues arise, revert the justfile change or update the pinned version and rerun just docs.
  • Follow-up:

    • Implementation tasks: Update justfile and verify just docs.
    • Review checkpoints: Revisit the pin when mdbook or mdbook-mermaid releases require it.

Dashboard UI checklist completion and auth/SSE hardening

  • Status: Accepted
  • Date: 2026-01-01

Motivation

  • Complete remaining dashboard UI checklist items without adding new dependencies.
  • Tighten auth and SSE handling to avoid stale tokens and replay conflicts.

Context

  • UI relies on SSE for live torrent updates and must survive Last-Event-ID conflicts.
  • Auth tokens require a 14-day TTL enforced by both server and client.
  • UI should allow anonymous mode when server auth_mode is none.

Decision

  • Move torrent sort state into URL-backed filters and apply client-side ordering.
  • Reset SSE Last-Event-ID on 409 conflict and reconnect with backoff.
  • Refresh API keys on save to capture expiry; invalidate keys on logout via config patch.
  • Mirror CORS origin on the API router to cover SSE and REST.

Alternatives Considered

  • Add a dedicated logout endpoint: rejected to avoid OpenAPI changes.
  • Store API keys without expiry: rejected to enforce 14-day TTL.

Design Notes

  • Sorting is represented as sort=key:dir in the query string.
  • Metadata updates trigger a targeted list refresh to keep tags/trackers current.
  • Anonymous auth is enabled from .well-known app_profile when configured.
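
The sort=key:dir encoding can be parsed in a few lines; a sketch where the accepted keys and direction tokens are assumptions:

```rust
/// Parse a "sort=key:dir" query value into (key, ascending).
/// Illustrative; the UI's real parser may accept different tokens.
fn parse_sort(value: &str) -> Option<(&str, bool)> {
    let (key, dir) = value.split_once(':')?;
    if key.is_empty() {
        return None;
    }
    match dir {
        "asc" => Some((key, true)),
        "desc" => Some((key, false)),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_sort("added:desc"), Some(("added", false)));
    assert_eq!(parse_sort("name:asc"), Some(("name", true)));
    assert_eq!(parse_sort("bogus"), None);
}
```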

Consequences

  • Login now performs a refresh call to capture expiry; failures surface as toasts.
  • Some SSE metadata events trigger list refreshes, increasing fetch volume.

Test Coverage Summary

  • DATABASE_URL=postgres://revaer:revaer@172.17.0.1:5432/revaer REVAER_TEST_DATABASE_URL=postgres://revaer:revaer@172.17.0.1:5432/revaer just ci (fmt, lint, udeps, audit, deny, ui-build, test, test-features-min, cov).

Observability Updates

  • No new metrics or tracing changes.

Risk & Rollback Plan

  • Risk: logout fails if config patch is rejected; UI now reports an error toast.
  • Rollback: revert UI auth/SSE changes and re-run just ci.

Dependency Rationale

  • Updated sqlx to 0.9.0-alpha.1 and aligned vendored hashlink to hashbrown 0.16 to satisfy clippy::multiple_crate_versions without introducing git dependencies.

Follow-up

  • Confirm auth refresh behavior against expired keys during QA.

071: Libtorrent Native Fallback for Default CI

  • Status: Accepted
  • Date: 2026-01-02
  • Context:
    • just ci runs cargo udeps across the workspace and fails on hosts without libtorrent headers or pkg-config data.
    • Native libtorrent integration tests are explicitly gated by REVAER_NATIVE_IT, so default runs should remain deterministic without requiring native system deps.
  • Decision:
    • Gate native FFI compilation behind a build-time cfg (libtorrent_native) that is emitted only when libtorrent is discovered by build.rs.
    • When REVAER_NATIVE_IT is set, missing libtorrent is treated as an error; otherwise the build falls back to the stub backend with a warning.
    • Alternatives considered: require libtorrent for all CI/dev runs, or remove --all-features from the quality gates (rejected to keep feature coverage intact).
  • Consequences:
    • Default just ci succeeds on machines without libtorrent while still honoring native coverage when explicitly requested.
    • Feature-enabled builds no longer guarantee native bindings unless libtorrent is present; native builds must opt in via REVAER_NATIVE_IT.
    • cargo-udeps ignores the cxx dependency for this crate because usage is gated by the native cfg.
  • Follow-up:
    • Ensure native CI matrix jobs set REVAER_NATIVE_IT=1 and install or bundle libtorrent.

072: Agent Compliance Refactor (UI + HTTP + Config Layout)

  • Status: Accepted
  • Date: 2026-01-03
  • Context:
    • Motivation: bring the repository into closer alignment with AGENT layout and tooling rules after drift in UI routing, HTTP module layout, and config structure.
    • Constraints: preserve existing APIs/behavior while relocating modules; avoid new dependencies and keep stored-procedure-only database access intact.
  • Decision:
    • Design notes: move torrent UI views into the feature module, scope window/router usage to the app layer, and reorganize API HTTP handlers/DTOs into handlers/ and dto/ while re-exporting to preserve public paths.
    • Alternatives considered: leave modules in place and document exceptions (rejected to keep the structure enforceable); introduce a large-scale API surface rename (rejected to avoid breaking changes).
  • Consequences:
    • Positive outcomes: clearer module boundaries, AGENT-compliant Justfile/CI flow, and reduced cross-layer coupling in the UI.
    • Risks or trade-offs: short-term churn from file moves and import updates; slight increase in module indirection via re-exports.
  • Follow-up:
    • Test coverage summary: just ci (fmt, lint, udeps, audit, deny, ui-build, test, test-features-min, cov, build-release) passed with the ≥80% line coverage gate satisfied.
    • Observability updates: no new spans or metrics added for this refactor.
    • Risk & rollback plan: revert the module move commits and restore prior paths if regressions appear; no data migrations were introduced.
    • Dependency rationale: no new dependencies added; alternatives were to add helper crates for routing/structure, which were rejected to keep the footprint minimal.

UI checklist follow-ups: SSE detail refresh, labels shortcuts, strict i18n, and anymap removal

  • Status: Accepted
  • Date: 2026-01-03

Motivation

  • Close remaining dashboard UI checklist gaps tied to live metadata, labels navigation, and strict i18n.
  • Remove the vendored yewdux/anymap fork and the related advisory ignore now that upstream versions align. (Superseded by ADR 074 for Yew compatibility.)

Context

  • SSE metadata updates did not refresh list-row tags/tracker/category without a full list refresh.
  • Add/Create torrent modals lacked shortcuts into the Settings → Labels workflow.
  • Translation fallback masked missing keys; the checklist requires explicit missing-key surfacing.
  • anymap advisory RUSTSEC-2021-0065 was previously ignored due to the vendored store fork.

Decision

  • Add a throttled, targeted torrent detail refresh path for metadata events and reuse detail summaries to update list rows.
  • Add on_manage_labels callbacks in torrent modals to route directly to the Labels tab.
  • Remove i18n fallback behavior and add explicit English copy for new UI affordances.
  • Dependency alignment is superseded by ADR 074 (vendored yewdux for Yew 0.22 compatibility).
  • Drop the advisory ignore tied to the vendored anymap.
  • Remove remaining vendored crates (hashlink, sqlx-core) and rely on registry sources.

Design Notes

  • Use a debounced HashSet queue to coalesce detail refreshes and avoid duplicate fetches.
  • Settings accepts a requested_tab prop and clears it once the tab selection is applied.
  • Translation bundles return missing:{key} for missing entries; no default locale fallback.
  • upsert_detail updates list-row tags, tracker, category, and name/path using the detail summary.
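
A synchronous sketch of the coalescing queue (the UI version ties flushes to a debounce timer; names are illustrative):

```rust
use std::collections::HashSet;

/// Coalesce refresh requests: ids queued while a flush is pending are
/// merged into one batch, so duplicate fetches are avoided.
#[derive(Default)]
struct RefreshQueue {
    pending: HashSet<String>,
}

impl RefreshQueue {
    /// Returns true when this id should arm a new flush (queue was empty).
    fn enqueue(&mut self, id: &str) -> bool {
        let was_empty = self.pending.is_empty();
        self.pending.insert(id.to_string());
        was_empty
    }

    /// Drain all coalesced ids for one batched detail fetch.
    fn flush(&mut self) -> Vec<String> {
        self.pending.drain().collect()
    }
}

fn main() {
    let mut queue = RefreshQueue::default();
    assert!(queue.enqueue("t1")); // first id arms the debounce timer
    assert!(!queue.enqueue("t2")); // later ids piggyback on the pending flush
    assert!(!queue.enqueue("t1")); // duplicates coalesce into one entry
    let mut batch = queue.flush();
    batch.sort();
    assert_eq!(batch, vec!["t1".to_string(), "t2".to_string()]);
}
```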

Consequences

  • Tags/trackers/categories update without full list refreshes, reducing UI staleness.
  • Users can reach label management quickly from torrent modals.
  • Missing translations are obvious during QA instead of silently falling back.
  • Supply-chain ignores shrink with the removal of vendored anymap.
  • Dependency alignment outcomes are tracked in ADR 074.

Test Coverage Summary

  • just ci: blocked by just cov (workspace line coverage 76.46%).
  • just cov: fails --fail-under-lines 80 (TOTAL line coverage 76.46%).

Observability Updates

  • None.

Risk & Rollback Plan

  • Risk: targeted refreshes could increase detail fetch volume under heavy metadata churn.
  • Rollback: revert the targeted refresh scheduler and restore the prior full refresh behavior.

Dependency Rationale

  • Dependency alignment decisions moved to ADR 074 to capture the vendored yewdux exception.

Follow-up

  • Verify labels shortcuts and SSE metadata refresh during QA.

Temporary vendoring of yewdux for latest Yew compatibility

  • Status: Accepted
  • Date: 2026-01-03
  • Context:
    • We must stay on the latest crates.io yew and yew-router.
    • yewdux on crates.io (0.11) depends on yew 0.21, which conflicts with yew-router 0.19 (yew 0.22).
    • Git dependencies are disallowed, and vendoring is normally disallowed.
  • Decision:
    • Vendor yewdux under vendor/yewdux and update it to compile against yew 0.22.
    • Patch the workspace to use the vendored yewdux while keeping all other dependencies on crates.io.
    • Document the exception in AGENT.md with a hard requirement to remove the vendored copy once a compatible crates.io release exists.
    • Alternatives considered:
      • Wait on the latest Yew (rejected; staying current is top priority).
      • Replace yewdux with an internal store (larger refactor; deferred unless compatibility stalls).
      • Use git dependencies (rejected by policy).
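
Under these constraints, the workspace override is a standard [patch.crates-io] entry; a sketch assuming the vendor path named above:

```toml
# Workspace Cargo.toml (sketch; the actual entry may differ)
[patch.crates-io]
yewdux = { path = "vendor/yewdux" }
```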
  • Consequences:
    • We stay current with yew/yew-router without git dependencies.
    • We own the maintenance burden for the vendored yewdux until upstream compatibility lands.
    • Risk of drift from upstream; requires periodic review and eventual removal.
  • Follow-up:
    • Monitor crates.io yewdux releases for yew 0.22 compatibility.
    • Next check date: 2026-02-05 (or sooner if a new yewdux release lands).
    • Remove vendor/yewdux, the workspace patch, and the AGENT exception once compatible.
    • Run just ci after each yew/yew-router upgrade.

075: Coverage gate tests for config loader and data toggles

  • Status: Accepted
  • Date: 2026-01-03
  • Context:
    • Motivation: just cov failed at 76.46% line coverage, blocking just ci.
    • Constraints: no coverage suppression, no new dependencies, and AGENT compliance.
  • Decision:
    • Add focused unit tests for config loader mapping/secret helpers and data config toggle sets.
    • Alternatives considered: ignore the gate or suppress coverage reporting (rejected).
  • Consequences:
    • Positive outcomes: just cov clears the 80% line gate; configuration mappings gain direct test coverage.
    • Risks or trade-offs: slightly longer test runtime.
  • Follow-up:
    • Implementation tasks: add loader/data tests, update checklist status, run just ci.
    • Review checkpoints: validate coverage stays >=80% during follow-up changes.
  • Test coverage summary:
    • just cov reports 80.44% total line coverage (gate passes).
  • Observability updates:
    • None (tests only).
  • Risk & rollback plan:
    • If tests become flaky, revert the test additions and re-run just ci.
  • Dependency rationale:
    • No new dependencies; reused existing dev crates.

076: Temporary clippy exception for hashbrown multiple versions

  • Status: Accepted
  • Date: 2026-01-03
  • Context:
    • Motivation: just lint fails on clippy::multiple_crate_versions due to hashbrown 0.15 (via sqlx-core -> hashlink ^0.10) and 0.16 (via yew -> indexmap ^2.11).
    • Constraints: keep yew/yew-router latest, avoid vendoring or git crates, preserve CI via just.
  • Decision:
    • Allow clippy::multiple_crate_versions in the lint recipe and crate roots.
    • Allow duplicate hashbrown/foldhash in cargo-deny bans to keep just deny green.
    • Remove the exception once SQLx releases a version compatible with hashlink ^0.11 (or the dependency graph otherwise unifies on a single hashbrown).
  • Consequences:
    • Positive outcomes: just lint passes while keeping primary deps current.
    • Risks or trade-offs: reduced lint signal for other multi-version cases; must monitor dependency graph for unintentional splits.
  • Follow-up:
    • Implementation tasks: update just lint, add crate-root allows, update deny.toml, document exception in AGENT.md, track in checklist.
    • Review checkpoints: remove the exception when SQLx adopts hashlink ^0.11 and hashbrown unifies.
  • Test coverage summary:
    • Not applicable (lint configuration change).
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Remove the lint allow flag and re-run just ci once dependencies align.
  • Dependency rationale:
    • No new dependencies; exception is scoped to lint configuration only.

Restore UI Menu Interactions

  • Status: Accepted
  • Date: 2026-01-09
  • Context:
    • Motivation: top-right menus did not open reliably, and sidebar labels were hidden in the default open state.
    • Constraints: No new dependencies; use daisyUI/Nexus patterns and keep component props stable.
    • Design notes: Align dropdown markup with daisyUI examples and compose menu UI from shared components.
  • Decision:
    • Summary of the choice made: update dropdowns to the daisyUI focus pattern, compose locale/server menus into dedicated components, and default sidebar labels to visible while hiding them only in collapsed/hover modes using sibling selectors.
    • Alternatives considered: keep inline markup, add JS for dropdown state, or hardcode label visibility without toggle support.
  • Consequences:
    • Positive outcomes: dropdown menus open reliably, layout follows component composition, and sidebar labels display in the default open state.
    • Risks or trade-offs: Hover/collapsed behavior depends on CSS selectors; custom styling may need minor tuning.
  • Follow-up:
    • Implementation tasks: update crates/revaer-ui/src/components/daisy/molecules/dropdown.rs, add crates/revaer-ui/src/components/locale_menu.rs, crates/revaer-ui/src/components/server_menu.rs, wire them in crates/revaer-ui/src/app/mod.rs and crates/revaer-ui/src/components/shell.rs, and adjust crates/revaer-ui/static/style.css.
    • Review checkpoints: verify dropdown menus and sidebar labels on the dev server.
    • Test coverage summary: just ci (fmt, lint, udeps, audit, deny, ui-build, tests, cov, build-release).
    • Observability updates: none (no telemetry changes).
    • Risk & rollback plan: revert the CSS/attribute changes if menu interactions or sidebar labels regress.
    • Dependency rationale: no new dependencies; use HTML/CSS fixes instead of runtime guards.

078 - Local Auth Bypass Guardrails (Task Record)

  • Status: In Progress
  • Date: 2026-01-11

Motivation

  • Stop offering anonymous access when the backend does not allow no-auth mode.
  • Ensure disabling local auth bypass requires credentials so operators cannot lock themselves out.

Design Notes

  • Track backend auth_mode from /.well-known and config snapshot updates; allow anonymous only when auth_mode is none and the UI host is local.
    • When no-auth is enabled and no credentials exist, set the auth state to anonymous so the UI connects immediately.
  • When no-auth is disabled while anonymous, clear anonymous state and re-open the auth prompt.
  • Guard settings changes that switch auth_mode to api_key unless API key or local auth credentials are saved.

Decision

  • Gate anonymous UI behavior on backend auth_mode + local host detection.
  • Block config saves that disable bypass without saved credentials.

Consequences

  • Anonymous access is only offered when the backend explicitly allows it on a local host.
  • Operators must save credentials before switching to auth-required mode.

Test Coverage Summary

  • Added unit tests for AuthState credential validation.

Observability Updates

  • None (UI-only change).

Risk & Rollback

  • Risk: remote UI access to no-auth servers now requires credentials despite the server allowing none.
  • Rollback: revert auth_mode gating in app shell and the settings guard.

Dependency Rationale

  • No new dependencies introduced.

Advisory RUSTSEC-2025-0141 Temporary Ignore

  • Status: In Progress
  • Date: 2026-01-11
  • Context:
    • bincode 1.3.3 is flagged as unmaintained (RUSTSEC-2025-0141).
    • The dependency is pulled via gloo-worker in gloo, which is required by the Yew UI stack.
    • No drop-in upgrade path is available without upstream releases.
  • Decision:
    • Add RUSTSEC-2025-0141 to .secignore while the UI depends on gloo/yew that transitively require bincode 1.3.3.
    • Revisit once upstream releases remove or replace the dependency.
  • Consequences:
    • just audit passes while the advisory remains documented.
    • The unmaintained dependency stays in the tree until upstream updates land.
  • Follow-up:
    • Track gloo and yew release notes for bincode replacement/removal.
    • Remove the ignore once the dependency graph no longer includes bincode 1.3.x.

Motivation

  • Keep just ci passing while capturing the risk and remediation plan for the unmaintained transitive dependency.

Design notes

  • The ignore is scoped to the single advisory and documented in .secignore plus this ADR.

Test coverage summary

  • just ci (includes fmt, clippy, udeps, audit, deny, test, cov).

Observability updates

  • None; advisory handling does not change runtime telemetry.

Risk & rollback plan

  • Risk: unmaintained dependency remains in the build while upstream updates are pending.
  • Rollback: remove the ignore after upgrading gloo/yew or replacing the dependency.

Dependency rationale

  • gloo and yew are required for the UI; alternatives would require a larger frontend migration.

080 - Local Auth Bypass Reliability (Task Record)

  • Status: Accepted
  • Date: 2026-01-11

Motivation

  • Local-network auth bypass should remain usable during UI startup and on common LAN hostnames.
  • Prevent UI crashes from invalid attribute names in component props.

Design Notes

  • Expand local host detection to cover loopback/private/link-local IPs plus common LAN hostnames.
  • Allow anonymous prompt options on local hosts even when auth mode is not yet known; auto-enable anonymous only once the backend reports no-auth.
  • Replace raw-identifier button prop names to avoid invalid DOM attributes in Yew.
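The expanded local host detection can be sketched with std IP parsing alone, as the dependency rationale below notes. The hostname allowlist in this sketch is illustrative, not the shipped list:

```rust
use std::net::IpAddr;

/// Sketch of the expanded local-host check: loopback, RFC1918 private,
/// and link-local IPv4 ranges, IPv6 loopback, plus a few common LAN
/// hostnames (the hostname list here is an assumption for illustration).
fn is_local_host(host: &str) -> bool {
    if let Ok(ip) = host.parse::<IpAddr>() {
        return match ip {
            IpAddr::V4(v4) => v4.is_loopback() || v4.is_private() || v4.is_link_local(),
            IpAddr::V6(v6) => v6.is_loopback(),
        };
    }
    // Common LAN hostnames (illustrative, not the actual allowlist).
    matches!(host, "localhost" | "host.docker.internal")
}
```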

Decision

  • Update local host detection and IPv6 base URL formatting in UI preferences.
  • Adjust auth bypass gating to keep anonymous mode stable and prompt-friendly on local hosts.
  • Rename button props from r#type to button_type in shared components.

Consequences

  • More reliable local auth bypass and fewer startup dead-ends.
  • Anonymous option may appear on local hosts before auth mode is confirmed.

Test Coverage Summary

  • UI behavior validated by existing integration flows; no new automated tests added.

Observability Updates

  • None.

Risk & Rollback

  • Risk: the local anonymous option could be offered briefly while the auth mode is still resolving.
  • Rollback: revert local host detection and auth bypass gating changes.

Dependency Rationale

  • No new dependencies; uses std IP parsing only.

081 - Playwright E2E Test Suite (Task Record)

  • Status: Accepted
  • Date: 2026-01-14

Motivation

  • Add automated UI coverage for core routes and modal flows.
  • Centralize E2E configuration in a committed tests/.env file.

Design Notes

  • Playwright config reads tests/.env for base URL, browser selection, timeouts, and artifacts.
  • Tests are grouped by page with page objects and a shared app fixture.
  • Assertions focus on stable labels and layout anchors to avoid data coupling.

Decision

  • Add a Playwright test harness under /tests with config, fixtures, and page objects.
  • Add a just ui-e2e recipe to run the suite via the standard workflow.
  • Ignore Playwright output directories in .gitignore.

Consequences

  • UI smoke checks can be run locally and wired into CI when ready.
  • Running the suite requires Node tooling and Playwright browser installs.

Test Coverage Summary

  • Added specs for dashboard, torrents, settings, logs, health, and navigation smoke.

Observability Updates

  • None.

Risk & Rollback

  • Risk: label changes or auth/setup overlays can break selectors.
  • Rollback: remove the /tests Playwright suite and ui-e2e recipe.

Dependency Rationale

  • @playwright/test: browser automation and test runner.
  • dotenv: load environment configuration from tests/.env.

082 - E2E Gate and Selector Stability (Task Record)

  • Status: Accepted
  • Date: 2026-01-14

Motivation

  • Stabilize Playwright selectors against shared nav labels and auth overlays.
  • Make UI E2E runs a required quality gate for local changes.
  • Document how to run the E2E suite.

Design Notes

  • Scope selectors to the layout content area or sidebar to avoid strict-mode collisions.
  • Use the auth overlay’s dismiss icon button when present; fall back to the text button.
  • Document the just ui-e2e requirement in README and AGENT.

Decision

  • Update Playwright page objects to scope selectors and handle the auth overlay deterministically.
  • Add UI E2E requirements to README.md and AGENT.md.

Consequences

  • Navigation and logs checks avoid ambiguous label matches.
  • E2E tests are enforced as a local quality gate.

Test Coverage Summary

  • just ui-e2e

Observability Updates

  • None.

Risk & Rollback

  • Risk: UI label changes may still require selector updates.
  • Rollback: revert the selector scoping and gate requirements.

Dependency Rationale

  • No new dependencies; reuse Playwright and dotenv.

083 - API Preflight Before UI E2E (Task Record)

  • Status: Accepted
  • Date: 2026-01-14

Motivation

  • Verify API availability before UI E2E runs to reduce false attribution to the UI.

Design Notes

  • Add a dedicated Playwright project that hits public API endpoints.
  • Make browser projects depend on the API project to enforce ordering.
  • Keep checks read-only and stable: /health, /metrics, /docs/openapi.json.

Decision

  • Add an API preflight spec and wire it as a dependency for UI projects.
  • Add E2E_API_BASE_URL to the test configuration docs.

Consequences

  • UI tests do not run if API preflight fails.
  • E2E setup now needs the API base URL to be accurate.

Test Coverage Summary

  • just ui-e2e

Observability Updates

  • None.

Risk & Rollback

  • Risk: API endpoint changes will require updates to the preflight checks.
  • Rollback: remove the API project dependency and preflight spec.

Dependency Rationale

  • No new dependencies; reuse Playwright.

084: E2E API Coverage With Temp Databases

  • Status: Accepted
  • Date: 2026-01-15
  • Context:
    • What problem are we solving?
      • E2E coverage must exercise 100% of the HTTP API surface under both auth modes and surface API regressions before UI tests.
      • Test runs must isolate state using temporary databases and document OpenAPI coverage gaps.
    • What constraints or forces shape the decision?
      • The API server derives port and auth mode from persisted configuration; setup flow must be exercised to activate the instance.
      • E2E runs must be invoked via just and use tests/.env for configuration.
  • Decision:
    • Summary of the choice made.
      • Add Playwright global setup/teardown to perform setup and factory reset.
      • Expand Playwright API specs to cover every route and operation under both auth modes.
      • Introduce a temp DB harness (scripts/ui-e2e.sh) that starts API/UI servers, runs API suites first, then UI suites.
      • Document OpenAPI gaps in docs/api/openapi-gaps.md.
    • Alternatives considered.
      • Reusing a shared dev database (rejected: violates isolation requirement).
      • Running API and UI suites in a single Playwright project without temp DB orchestration (rejected: ordering and auth coverage requirements).
  • Consequences:
    • Positive outcomes.
      • Full HTTP surface coverage with deterministic, isolated runs.
      • Clear documentation of OpenAPI drift.
    • Risks or trade-offs.
      • Longer E2E runtime and additional local prerequisites (Postgres + free ports).
      • Additional maintenance for API fixtures when new endpoints are added.
  • Follow-up:
    • Implementation tasks.
      • Keep docs/api/openapi.json aligned with router updates.
      • Update the API spec and tests whenever routes change.
    • Review checkpoints.
      • Verify just ui-e2e passes in local and CI environments.

Task Record

  • Motivation:
    • Enforce API-first E2E verification, full route coverage, and state isolation across auth modes.
  • Design notes:
    • Playwright global setup completes setup using the configured auth mode.
    • Global teardown issues factory reset to cover the endpoint and clear state.
    • Temp DB orchestration uses sqlx to create and drop isolated databases per suite.
  • Test coverage summary:
    • API specs cover all routes and methods from crates/revaer-api/src/http/router.rs under api_key and none modes.
    • UI specs continue to validate navigation and page rendering after API suites pass.
  • Observability updates:
    • E2E runs emit API/UI logs to tests/test-results for debugging.
  • Risk & rollback plan:
    • If temp DB orchestration proves unstable, revert to manual server management and isolate DB via dedicated test instance.
  • Dependency rationale:
    • No new runtime dependencies added.
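The per-suite database isolation described above depends on collision-free database names. A minimal sketch of such a naming scheme, combining the pid with a monotonic counter — the actual harness lives in scripts/ui-e2e.sh, and this naming convention is an assumption for illustration:

```rust
use std::process;
use std::sync::atomic::{AtomicU64, Ordering};

// Per-process counter so successive names never collide even within
// the same nanosecond.
static COUNTER: AtomicU64 = AtomicU64::new(0);

/// Hypothetical temp database name for one E2E suite: prefix, suite
/// label, pid (guards against concurrent runs), and a counter.
fn temp_db_name(suite: &str) -> String {
    let n = COUNTER.fetch_add(1, Ordering::Relaxed);
    format!("revaer_e2e_{}_{}_{}", suite, process::id(), n)
}
```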

085 - E2E OpenAPI Client and Unified Coverage

  • Status: Accepted
  • Date: 2026-01-16
  • Context:
    • What problem are we solving?
      • E2E runs overwrote reports and did not enforce full API/UI surface coverage.
      • API E2E tests needed a generated TypeScript client based on the OpenAPI spec.
    • What constraints or forces shape the decision?
      • Use a single Playwright execution with one final report.
      • Use a maintained generator that supports native Node.js fetch.
      • Keep OpenAPI synchronized with the router surface.
  • Decision:
    • Summary of the choice made.
      • Expand OpenAPI coverage to match all router endpoints and generate a typed E2E client via openapi-typescript + openapi-fetch.
      • Enforce API operation and UI route coverage in the Playwright teardown.
      • Run API suites for both auth modes in one Playwright run, then UI tests.
    • Alternatives considered.
      • OpenAPI Generator CLI (Java) and swagger-typescript-api; rejected due to heavier toolchain and weaker fit for native fetch.
  • Consequences:
    • Positive outcomes.
      • Single Playwright report with explicit API/UI coverage enforcement.
      • Typed API client aligned to OpenAPI for E2E calls.
    • Risks or trade-offs.
      • Additional Node dependencies and a stricter coverage gate that must be updated when routes change.
  • Follow-up:
    • Implementation tasks.
      • Keep docs/api/openapi.json aligned with router updates and regenerate tests/support/api/schema.ts as needed.
      • Update UI route coverage list when new routes are added.
    • Review checkpoints.
      • just ui-e2e completes with a single report and no missing coverage.
      • just ci passes cleanly.

Task Record

  • Motivation:
    • Ensure the E2E suites cover the entire API and UI surface in one continuous execution with a single report.
  • Design notes:
    • Playwright projects now sequence no-auth API coverage ahead of API-key coverage, then UI coverage.
    • API requests use a generated OpenAPI client with native fetch and a coverage ledger written per project.
    • UI navigation records route coverage through the shared AppShell helpers.
  • Test coverage summary:
    • just ui-e2e (single Playwright run with API + UI coverage checks).
  • Observability updates:
    • Coverage artifacts are written to tests/test-results for API and UI coverage validation.
  • Risk & rollback plan:
    • Risk: coverage failures if OpenAPI or UI routes drift.
    • Rollback: revert Playwright project sequencing and remove coverage enforcement to return to per-suite execution.
  • Dependency rationale:
    • Added openapi-typescript + openapi-fetch to generate a typed client backed by native Node.js fetch.
    • Alternatives considered: OpenAPI Generator CLI (Java) and swagger-typescript-api; both rejected to avoid heavier toolchains and non-fetch defaults.
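The coverage enforcement in the Playwright teardown described in this record amounts to a set difference between declared and exercised operations. A minimal sketch (the operation-ID strings here are illustrative, not the real ledger format):

```rust
use std::collections::BTreeSet;

/// Illustrative teardown coverage check: report every declared
/// operation that the coverage ledger never recorded. An empty
/// result means the gate passes.
fn coverage_gaps<'a>(declared: &[&'a str], exercised: &[&'a str]) -> Vec<&'a str> {
    let seen: BTreeSet<_> = exercised.iter().copied().collect();
    declared
        .iter()
        .copied()
        .filter(|op| !seen.contains(op))
        .collect()
}
```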

086 - Default Local Auth Bypass (Task Record)

  • Status: Accepted
  • Date: 2026-01-17

Motivation

  • Ensure factory reset remains available when configuration data is broken.
  • Default new installs to a recoverable auth state without implicit API key setup.

Design Notes

  • Switch AppAuthMode default to none and align setup completion fallback.
  • Change the app_profile.auth_mode database default to none via migration.
  • Make setup helpers send explicit auth_mode values for both auth paths.
  • Update reference configuration documentation to match the new default.

Decision

  • Default auth mode to no-auth in code and migrations, while leaving explicit API key setups unchanged.

Consequences

  • New databases start with no-auth access until setup selects API key mode.
  • Existing databases retain their configured auth mode unless reset.

Test Coverage Summary

  • Existing API/E2E flows cover both auth modes; setup helper now sets auth mode explicitly.

Observability Updates

  • None.

Risk & Rollback

  • Risk: integrations relying on implicit API key setup must now send auth_mode explicitly.
  • Rollback: revert the auth mode defaults and migration; restore previous setup fallback.

Dependency Rationale

  • No new dependencies introduced.

Local network auth ranges and settings validation

  • Status: Accepted
  • Date: 2026-01-17
  • Context:
    • Local auth bypass must work for recovery even when API key state is broken.
    • Local-only checks must handle reverse proxies (k3s/docker) that rewrite the peer IP.
    • Operators need to adjust what counts as local without locking themselves out.
  • Decision:
    • Persist app profile local network CIDRs and enforce them for no-auth and recovery flows.
    • Trust forwarded client IP headers only when the peer is already within a local range.
    • Validate local network updates against the saving client address before applying.
  • Consequences:
    • Anonymous access is now scoped to configured local networks.
    • Factory reset remains possible from local clients even when API key inventory queries fail.
    • Misconfigured local ranges can block access until corrected or reset.
  • Follow-up:
    • Keep OpenAPI and UI fields in sync with app_profile.local_networks.
    • Monitor any proxy deployments for forwarded header quirks.

Motivation

  • Provide a safe recovery path when auth state or API key inventory is broken.
  • Allow common local topologies (LAN device -> k3s/docker service) without false negatives.
  • Prevent settings updates that would immediately disconnect the caller.

Design notes

  • Added app_profile.local_networks as a normalized list of CIDR strings with defaults for loopback, RFC1918, and link-local ranges.
  • API auth middleware now derives client IP from ConnectInfo and trusted forwarded headers, enforcing local-only access for no-auth and factory reset fallbacks.
  • Settings patch validates that the updated local network list still includes the caller IP before persisting.
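The CIDR containment test at the heart of the local-network enforcement can be sketched with std only (IPv4 only for brevity; the real helpers in revaer-config may differ):

```rust
use std::net::Ipv4Addr;

/// Minimal IPv4 CIDR containment check. Returns None for malformed
/// CIDRs or prefixes > 32, mirroring the validation described above.
fn cidr_contains(cidr: &str, ip: Ipv4Addr) -> Option<bool> {
    let (net, prefix) = cidr.split_once('/')?;
    let net: Ipv4Addr = net.parse().ok()?;
    let prefix: u32 = prefix.parse().ok()?;
    if prefix > 32 {
        return None; // invalid prefix length
    }
    // A /0 mask must not shift by 32 bits; handle it explicitly.
    let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
    Some((u32::from(net) & mask) == (u32::from(ip) & mask))
}
```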

Test coverage summary

  • Auth middleware tests cover anonymous local access, remote rejection, and factory reset allowance when API key inventory checks fail.
  • Config validation tests cover CIDR normalization and invalid prefixes.

Observability updates

  • Auth middleware logs when local network parsing fails or when recovery paths are used.

Risk & rollback plan

  • Risk: misconfigured local CIDRs can block anonymous access or factory reset.
    • Mitigation: validation rejects updates that exclude the saving client.
  • Rollback: revert migration 0009 and remove local network enforcement in auth middleware, then restore the previous auth behavior.

Dependency rationale

  • No new dependencies. CIDR parsing reuses std-based helpers in revaer-config.

Live SSE Log Streaming

  • Status: Accepted
  • Date: 2026-01-17
  • Context:
    • Motivation: remove dummy SSE data, ensure SSE is the single live update channel, and surface recent logs immediately on open.
    • Constraints: keep the log stream lightweight, avoid new dependencies, and respect existing SSE routes.
  • Decision:
    • Summary: drop the dummy SSE stream, retain a rolling two-minute log buffer for SSE snapshots, and add log level filtering + text search in the UI.
    • Design notes: telemetry now snapshots recent log lines, the API chains the snapshot ahead of the live broadcast, and UI log lines track level + receipt time for filtering and pruning.
    • Dependency rationale: no new dependencies; reuse existing serde_json parsing in the UI for log level detection.
  • Consequences:
    • Positive outcomes: SSE reflects live event data only, logs open with context, and the logs page can filter by level or search text.
    • Risks or trade-offs: some log lines may skip buffer storage under contention, and non-drop SSE errors now require manual retry instead of automatic reconnect.
    • Risk & rollback plan: revert the log buffer/snapshot changes to restore streaming-only behavior and re-enable auto-reconnect if needed.
  • Follow-up:
    • Implementation tasks: adjust telemetry buffering, SSE handlers, and logs UI controls with filtering/search state.
    • Test coverage summary: added log buffer tests; run just ci and just ui-e2e to validate full coverage.
    • Observability updates: log stream now captures a rolling snapshot; SSE status remains visible via existing UI badges.
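The rolling two-minute buffer described above can be sketched as a deque pruned on every push, so a newly connected SSE client can be seeded with the snapshot before the live broadcast. Names and the injected clock are illustrative, not the telemetry crate's actual types:

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

/// Sketch of the rolling log buffer: entries older than the retention
/// window are pruned on push, so `snapshot` returns roughly the last
/// two minutes of lines for a new subscriber.
struct LogBuffer {
    retention: Duration,
    entries: VecDeque<(Instant, String)>,
}

impl LogBuffer {
    fn new(retention: Duration) -> Self {
        Self { retention, entries: VecDeque::new() }
    }

    /// `now` is passed in to keep the buffer deterministic in tests.
    fn push(&mut self, now: Instant, line: String) {
        self.entries.push_back((now, line));
        while let Some((t, _)) = self.entries.front() {
            if now.duration_since(*t) > self.retention {
                self.entries.pop_front();
            } else {
                break;
            }
        }
    }

    /// Snapshot chained ahead of the live broadcast when a client connects.
    fn snapshot(&self) -> Vec<String> {
        self.entries.iter().map(|(_, l)| l.clone()).collect()
    }
}
```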

Port process termination for dev tooling

  • Status: Accepted
  • Date: 2026-01-17
  • Context:
    • What problem are we solving? Port cleanup for 7070/8080 did not verify termination, leaving ports occupied and making dev or E2E startup flaky.
    • What constraints or forces shape the decision? Keep existing tooling, avoid new dependencies, and ensure startup fails fast when ports cannot be freed.
  • Decision:
    • Summary of the choice made. Add a graceful shutdown path that sends SIGTERM, waits briefly, escalates to SIGKILL, and errors if ports remain bound.
    • Alternatives considered. Leave the kill-only behavior or add external tooling/scripts; rejected to avoid new dependencies and extra surface area.
  • Consequences:
    • Positive outcomes. Cleanup is deterministic and failures surface early when ports cannot be reclaimed.
    • Risks or trade-offs. Force-kill can terminate unrelated processes on those ports; failures may require manual cleanup before rerun.
  • Task record:
    • Motivation: Ensure port cleanup actually releases 7070/8080 before starting services.
    • Design notes: Use lsof PID discovery, SIGTERM with polling, SIGKILL fallback, and a final port-bound check; reuse in just dev.
    • Test coverage summary: Covered by just ci and just ui-e2e runs (no direct unit tests).
    • Observability updates: Added console messages in just zombies for graceful/force termination.
    • Risk & rollback plan: Revert the justfile changes if termination must be non-fatal; manual kill with lsof remains a fallback.
    • Dependency rationale: No new dependencies; lsof already assumed by existing recipes.
  • Follow-up:
    • Implementation tasks. Keep zombies aligned with any future port changes.
    • Review checkpoints. Verify just dev and just ui-e2e startup when ports are in use.
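The SIGTERM-then-SIGKILL escalation lives in the justfile, but the shape of it can be sketched in Rust using only std process tooling (unix-only; std has no SIGTERM API, so the sketch shells out to kill, as the recipe does):

```rust
use std::process::{Child, Command};
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Sketch of graceful termination: send SIGTERM, poll for exit within
/// `grace`, then escalate to SIGKILL. Returns Ok(true) when the process
/// exited gracefully, Ok(false) when SIGKILL was required.
fn graceful_terminate(child: &mut Child, grace: Duration) -> std::io::Result<bool> {
    // Ask politely first.
    Command::new("kill")
        .arg("-TERM")
        .arg(child.id().to_string())
        .status()?;
    let deadline = Instant::now() + grace;
    while Instant::now() < deadline {
        if child.try_wait()?.is_some() {
            return Ok(true); // exited within the grace period
        }
        sleep(Duration::from_millis(50));
    }
    child.kill()?; // SIGKILL fallback
    child.wait()?;
    Ok(false)
}
```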

UI log filters and shell controls

  • Status: Accepted
  • Date: 2026-01-17
  • Context:
    • What problem are we solving? The logs screen needs a DaisyUI filter, consistent search affordances, and SSE-level filtering; shell controls need icon-only indicators, consistent flag icons, and no overlapping z-order with sticky action bars.
    • What constraints or forces shape the decision? Keep the existing UI structure, avoid new dependencies, and ensure E2E coverage for regressions.
  • Decision:
    • Summary of the choice made. Replace the log level select with a DaisyUI filter, make the search input a proper daisyUI input with cmd/ctrl+enter hints, move to minimum-level filtering, update shell menus/icons and sidebar controls to icon-only with tooltips, and remove home/torrents breadcrumbs.
    • Alternatives considered. Keep the select-based filter and add new i18n keys across locales; rejected to avoid translation churn and align with DaisyUI components.
  • Consequences:
    • Positive outcomes. Log filtering matches severity expectations, UI controls are more compact, and dropdowns no longer hide behind sticky action bars.
    • Risks or trade-offs. Icon-only controls rely on tooltips for clarity; any tooltip styling changes must preserve accessibility.
  • Task record:
    • Motivation: Align log filtering with DaisyUI and ensure shell controls remain stable across layout changes.
    • Design notes: Use DaisyUI filter inputs with severity thresholds; add search hint kbd labels; raise dropdown z-index; remove breadcrumb headers; keep icons/titles for accessibility.
    • Test coverage summary: Updated Playwright UI specs for logs filter/search, topbar icons, locale flags, breadcrumbs, and dropdown stacking.
    • Observability updates: None.
    • Risk & rollback plan: Revert the UI component changes and E2E assertions if layouts regress; fallback to prior select-based filter is isolated to logs view.
    • Dependency rationale: No new dependencies added.
  • Follow-up:
    • Implementation tasks. Keep locale flags and icon-only controls consistent across future shell revisions.
    • Review checkpoints. Verify log filtering and dropdown stacking in UI E2E runs.
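The minimum-level filtering adopted in this record reduces to an ordered severity comparison. A sketch (the variant names are assumptions, not the UI's actual enum):

```rust
/// Illustrative severity ordering: derived Ord makes the minimum-level
/// comparison below a plain `>=`.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Level {
    Trace,
    Debug,
    Info,
    Warn,
    Error,
}

/// A log line is shown when its severity is at or above the selected
/// threshold, matching the minimum-level semantics described above.
fn visible(line_level: Level, min_level: Level) -> bool {
    line_level >= min_level
}
```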

091: Raise per-crate coverage gate to 90%

  • Status: Accepted
  • Date: 2026-01-17
  • Context:
    • The workspace coverage gate previously enforced ≥80% line coverage overall, which masked low-coverage crates.
    • The requirement is now ≥90% coverage per crate, without test-only code in production modules.
    • The gate must remain Justfile-driven and avoid llvm-cov suppression flags.
  • Decision:
    • Update just cov to run cargo llvm-cov per crate and enforce a ≥90% threshold via the Justfile loop.
    • Raise the documented coverage requirement in AGENT.md to 90% per crate.
    • Add focused unit tests to raise coverage in low-coverage crates (test-support, asset_sync, doc-indexer, CLI, API setup/docs, UI ANSI parsing, libtorrent types).
  • Consequences:
    • Coverage checks now report per-crate deficits with precise percentages.
    • The stricter gate currently fails on multiple crates until additional tests are added.
    • More test investment is required for large modules (API handlers, config loader, fsops pipeline, app bootstrap).
  • Follow-up:
    • Add tests to raise coverage for: revaer-app, revaer-config, revaer-data, revaer-fsops, revaer-api, revaer-ui, revaer-torrent-libt, asset_sync, and revaer-test-support.
    • Re-run just cov, then complete the full just ci and just ui-e2e gates.

Motivation

  • Ensure test coverage reflects real production risk by enforcing ≥90% per crate.

Design notes

  • Coverage is computed per crate by running cargo llvm-cov --package in a workspace member loop.
  • Crates with zero executable lines are treated as 100% covered by llvm-cov for that package.
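The per-crate threshold applied to each llvm-cov result can be sketched as a pure check over covered/total line counts, including the 0/0 special case noted above:

```rust
/// Sketch of the per-crate gate applied to llvm-cov line counts:
/// a crate passes at >= 90% covered lines, and crates with zero
/// executable lines (0/0) are treated as fully covered.
fn crate_gate_passes(covered: u64, total: u64) -> bool {
    if total == 0 {
        return true;
    }
    (covered as f64 / total as f64) * 100.0 >= 90.0
}
```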

Test coverage summary

  • just cov was run on 2026-01-17; the coverage gate failed. Current per-crate results:
    • revaer-app: 70.71% (1922/2718)
    • revaer-test-support: 71.30% (246/345)
    • revaer-data: 72.75% (993/1365)
    • revaer-config: 75.65% (2775/3668)
    • revaer-fsops: 76.15% (1520/1996)
    • asset_sync: 79.16% (300/379)
    • revaer-ui: 83.82% (1911/2280)
    • revaer-api: 84.37% (7539/8936)
    • revaer-torrent-libt: 85.38% (2961/3468)
    • revaer-cli: 86.51% (2084/2409)
    • revaer-doc-indexer: 89.73% (655/730)
    • revaer-telemetry: 92.40% (729/789)
    • revaer-torrent-core: 94.34% (250/265)
    • revaer-api-models: 95.34% (553/580)
    • revaer-events: 96.40% (268/278)
    • revaer-runtime: 100.00% (0/0)

Observability updates

  • None.

Risk & rollback plan

  • Risk: CI remains blocked until per-crate coverage is lifted to 90%.
  • Rollback: revert the just cov loop and reset the coverage threshold (not recommended unless blocking critical releases).

Dependency rationale

  • No new dependencies added.

092: Fsops coverage hardening

  • Status: Accepted
  • Date: 2026-01-17
  • Context:
    • The workspace requires at least 90% per-crate line coverage (ADR 091).
    • revaer-fsops contained untested branches in pipeline helpers and filesystem routines.
  • Decision:
    • Add targeted unit tests for fsops pipeline steps, rule parsing, and file operations.
    • Keep all test-only logic inside #[cfg(test)] modules.
    • Alternatives considered: integration tests backed by RuntimeStore + database; rejected for higher cost and slower feedback.
  • Consequences:
    • Positive outcomes:
      • Improved coverage and regression protection for fsops edge cases.
    • Risks or trade-offs:
      • Additional filesystem IO during tests; mitigate with temp dirs and deterministic fixtures.
  • Follow-up:
    • Run just cov and just ci to confirm the per-crate gate.
    • Watch for platform-specific permission semantics in CI.

Motivation

Raise revaer-fsops coverage to meet the 90% per-crate gate while strengthening confidence in filesystem post-processing edge cases.

Design notes

  • Exercise both happy-path and skip/error branches without introducing production-only hooks.
  • Favor direct unit tests of helper functions to keep the tests fast and deterministic.

Test coverage summary

  • Added unit tests for meta initialization, allowlist enforcement, glob parsing errors, archive extension checks, step short-circuiting, and file operation paths.
  • Added permission/ownership tests for unix targets to cover apply_permissions, resolve_owner, and resolve_group.
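An archive extension check of the kind exercised by these tests can be sketched with std path helpers; the extension list here is an assumption for illustration, not the crate's actual allowlist:

```rust
use std::path::Path;

/// Illustrative archive-extension check: case-insensitive match on the
/// final extension. The recognized set is hypothetical.
fn has_archive_extension(path: &Path) -> bool {
    path.extension()
        .and_then(|ext| ext.to_str())
        .map(|ext| {
            let ext = ext.to_ascii_lowercase();
            matches!(ext.as_str(), "zip" | "rar" | "7z" | "tar" | "gz")
        })
        .unwrap_or(false)
}
```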

Observability updates

None; no runtime behavior changes.

Risk & rollback plan

  • Risk: file-permission tests may behave differently on non-unix systems.
  • Rollback: revert the added tests and rework with platform guards if CI shows instability.

Dependency rationale

No new dependencies added. Alternative considered: integration coverage via database-backed runtime store, rejected due to setup overhead.

UI logic extraction for testable components

  • Status: Accepted
  • Date: 2026-01-17
  • Context:
    • The UI layer accumulated view-local parsing and formatting logic that was hard to test.
    • Coverage targets require host-testable logic outside Yew components.
  • Decision:
    • Extract feature-specific helpers into logic.rs modules and keep state types in state.rs.
    • Keep view modules focused on rendering and UseStateHandle orchestration.
  • Consequences:
    • Positive outcomes: improved unit test coverage, clearer separation of concerns.
    • Risks or trade-offs: refactor touchpoints may introduce regressions; mitigated with tests.
  • Motivation:
    • Ensure UI logic is reusable, deterministic, and testable without DOM bindings.
  • Design notes:
    • Logic modules stay pure; only view helpers touch Yew handles.
    • Error surfaces avoid unit error types and return typed results where parsing can fail.
  • Test coverage summary:
    • Added unit tests for newly extracted helpers in each UI feature slice.
  • Observability updates:
    • None (UI-only refactor with no telemetry changes).
  • Risk & rollback plan:
    • If regressions appear, revert to the previous view-local helpers and reapply incrementally.
  • Dependency rationale:
    • No new dependencies; reused existing crates and standard library helpers.

UI E2E sharding in workflows

  • Status: Accepted
  • Date: 2026-01-23
  • Context:
    • UI E2E runs are long and delay feedback, especially when other jobs have already passed.
    • Keep Playwright invoked through just ui-e2e and avoid new dependencies.
  • Decision:
    • Add Playwright sharding support to just ui-e2e and shard the UI E2E jobs with a matrix in CI/PR workflows.
    • Alternatives considered:
      • Increase test workers only (limited benefit because the suite already uses Playwright workers).
      • Split tests by directory into separate workflows (more maintenance).
  • Consequences:
    • Positive: reduced wall-clock time for UI E2E runs via parallel shards.
    • Trade-off: increased parallel runner usage for sharded jobs.
  • Follow-up:
    • Monitor shard duration balance and tune shard counts if needed.
    • Reassess sharding if runner usage limits become a concern.

Task record

  • Motivation: Parallelize UI E2E to shorten CI runtime while keeping the just-based workflow contract intact.
  • Design notes: Use Playwright’s --shard flag driven by PLAYWRIGHT_SHARD_INDEX and PLAYWRIGHT_SHARD_TOTAL.
  • Test coverage summary: just ci and just ui-e2e passed.
  • Observability updates: None (workflow-only change).
  • Risk & rollback plan: Revert sharding env and matrix changes if shard stability or runner usage is problematic.
  • Dependency rationale: No new dependencies introduced.
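As a sketch of the plumbing this record describes — only the two environment variable names come from the ADR; the function name and the recipe wiring are illustrative, and the real just ui-e2e recipe remains the source of truth:

```shell
#!/bin/sh
# Sketch: derive Playwright's --shard flag from the env vars named in this
# record. When the variables are unset (local, non-sharded runs), no flag
# is emitted and the full suite runs in one job.
shard_args() {
  if [ -n "${PLAYWRIGHT_SHARD_INDEX:-}" ] && [ -n "${PLAYWRIGHT_SHARD_TOTAL:-}" ]; then
    echo "--shard=${PLAYWRIGHT_SHARD_INDEX}/${PLAYWRIGHT_SHARD_TOTAL}"
  fi
}

PLAYWRIGHT_SHARD_INDEX=2
PLAYWRIGHT_SHARD_TOTAL=4
shard_args   # a CI matrix job with index 2 of 4 would pass --shard=2/4
```

A CI matrix then only needs to set the two variables per job; local invocations of just ui-e2e behave exactly as before.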

Untagged images use dev tag

  • Status: Accepted
  • Date: 2026-01-23
  • Context:
    • Untagged builds currently publish to a separate -dev image name and still apply a latest tag, making it harder to discover the intended development tag.
    • Keep tagging logic in the GitHub workflow without altering the build artifacts or Dockerfile.
  • Decision:
    • Publish untagged builds to the primary image name with a dev tag, while tagged builds retain latest.
    • Alternative considered: keep the -dev image suffix and add an extra dev alias tag.
  • Consequences:
    • Positive: untagged images are clearly labeled as development artifacts in the primary repository.
    • Trade-off: development images now share the same repository name as releases, requiring clear tag usage.
  • Follow-up:
    • Monitor downstream consumers for any references to the previous -dev image name.
    • Reassess if consumers need both dev and latest tags for untagged builds.

Task record

  • Motivation: Align untagged image naming with a dev tag instead of a separate -dev repository and latest.
  • Design notes: Use a workflow alias tag that switches between latest and dev based on ref type.
  • Test coverage summary: just ci and just ui-e2e passed.
  • Observability updates: None (workflow-only change).
  • Risk & rollback plan: Revert the alias tag logic in .github/workflows/ci.yml if consumers depend on revaer-dev or latest.
  • Dependency rationale: No new dependencies introduced.
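The ref-type switch can be sketched as a small shell function — the function name is illustrative, and the actual logic lives in the workflow YAML rather than a script:

```shell
#!/bin/sh
# Sketch: pick the alias tag from the Git ref, per this record — tag refs
# keep "latest", untagged (branch) builds get "dev".
alias_tag() {
  case "$1" in
    refs/tags/*) echo "latest" ;;
    *)           echo "dev" ;;
  esac
}

alias_tag "refs/tags/v1.4.0"   # prints: latest
alias_tag "refs/heads/main"    # prints: dev
```

Keeping the switch to a single alias tag means tagged and untagged builds share every other step of the image workflow.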

Aggregate UI E2E coverage for sharded runs

  • Status: Accepted
  • Date: 2026-01-23
  • Context:
    • Playwright sharding runs global teardown per shard, causing partial coverage checks to fail.
    • Keep Playwright invoked via just ui-e2e, avoid new dependencies, and preserve coverage gating.
  • Decision:
    • Skip coverage assertions in sharded teardown, write shard-specific coverage files, upload them as artifacts, and run an aggregate coverage check in a dedicated job.
    • Alternatives considered:
      • Disable coverage checks entirely for sharded runs (reduces signal).
      • Keep non-sharded UI E2E only (slower feedback).
  • Consequences:
    • Positive: sharded UI E2E runs succeed while retaining full coverage enforcement.
    • Trade-off: additional workflow job and artifact handling.
  • Follow-up:
    • Monitor shard duration and artifact sizes.
    • Revisit shard count if coverage aggregation becomes slow.

Task record

  • Motivation: Fix sharded UI E2E failures while maintaining coverage enforcement.
  • Design notes: Shard-specific coverage files with an aggregate coverage check via just ui-e2e-coverage.
  • Test coverage summary: just ci, just ui-e2e.
  • Observability updates: None (workflow-only change).
  • Risk & rollback plan: Revert sharding and coverage aggregation changes if instability persists.
  • Dependency rationale: No new dependencies introduced.
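As an illustration of the aggregation step — the file layout, "covered total" format, and 90% threshold below are invented for this sketch; the real gate runs via just ui-e2e-coverage over Playwright coverage artifacts:

```shell
#!/bin/sh
# Sketch: each shard writes its own coverage file; a dedicated job sums them
# and enforces the gate once, instead of asserting per-shard (which fails
# because no single shard sees the whole suite).
mkdir -p coverage
echo "45 50" > coverage/shard-1.txt
echo "48 50" > coverage/shard-2.txt

awk '{ covered += $1; total += $2 }
     END { pct = 100 * covered / total
           printf "aggregate coverage: %.0f%%\n", pct
           exit (pct < 90) }' coverage/shard-*.txt
```

The non-zero exit below the threshold is what lets the aggregate job gate the workflow the same way the per-run check used to.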

Dev prereleases and PR image previews

  • Status: Accepted
  • Date: 2026-01-24
  • Context:
    • Main should publish dev prereleases and dev-tagged images without displacing stable “latest” artifacts.
    • PRs need preview images without exposing secrets to forks.
    • CI must run via just, releases must be semver-based from Conventional Commits, and stable releases/images stay version-tagged.
  • Decision:
    • Use semantic-release on main to publish -dev.N prereleases with attached artifacts, tag dev images with the prerelease tag plus dev, and publish PR preview images for non-fork PRs using pr-<num> and pr-<num>-<sha> tags only.
    • Alternatives considered:
      • Continue tag-only releases (no dev prereleases).
      • Publish dev images under a separate repository name.
  • Consequences:
    • Positive: main builds produce versioned dev releases and dev images without changing the stable “latest” artifacts, and non-fork PRs get preview images with consistent tags.
    • Trade-off: adds release tooling dependencies and requires Conventional Commit discipline for every main merge.
  • Follow-up:
    • Monitor semantic-release output and adjust release rules if release cadence is too strict or too noisy.
    • Revisit tag patterns if GitHub tag filters or image consumers need additional aliases.

Task record

  • Motivation: Publish dev prereleases and PR preview images without displacing stable releases or latest images.
  • Design notes: Semantic-release prereleases on main drive version tags; PR images are tagged pr-<num> and pr-<num>-<sha> only.
  • Test coverage summary: just ci, just ui-e2e.
  • Observability updates: None (workflow-only change).
  • Risk & rollback plan: Remove release-dev and PR image jobs and revert to tag-only releases if prereleases cause instability.
  • Dependency rationale: Add semantic-release tooling in release/ to analyze Conventional Commits and publish prereleases with assets.

Reusable image build workflow

  • Status: Accepted
  • Date: 2026-01-24
  • Context:
    • Image build logic is duplicated across CI and PR workflows, and CI was failing to load due to invalid tag filters.
    • Keep CI driven by just, avoid dev tag releases updating stable artifacts, and reduce workflow duplication.
  • Decision:
    • Introduce a reusable workflow for multi-arch image build/manifest creation and use it from both CI and PR workflows, while gating CI to skip dev tag pushes.
    • Alternatives considered:
      • Keep duplicated image steps in each workflow.
      • Split tag builds into a separate workflow without reuse.
  • Consequences:
    • Positive: consistent image build behavior across workflows with less duplication and clear tag policies.
    • Trade-off: reusable workflows add indirection when tracing failures.
  • Follow-up:
    • Monitor image build runs for any tag mismatches or manifest issues.
    • Revisit tag gating if GitHub tag filters expand to support exclusion patterns.

Task record

  • Motivation: Fix CI failures and share image build logic between CI and PR workflows.
  • Design notes: Use a reusable workflow with parameterized tags and checkout refs to drive both dev and PR image builds.
  • Test coverage summary: just ci, just ui-e2e.
  • Observability updates: None (workflow-only change).
  • Risk & rollback plan: Revert to inline workflow steps if reuse introduces instability.
  • Dependency rationale: No new dependencies introduced.

Indexer ERD Single-Tenant and Audit Fields

  • Status: Accepted
  • Date: 2026-01-25
  • Context:
    • The indexer ERD needed to reflect single-tenant deployments and remove workspace/membership constructs.
    • Audit actor fields must be non-null and use a system sentinel instead of NULL.
    • Global configuration should be reusable across future media management features.
  • Decision:
    • Remove workspace/membership/invite constructs and document deployment-global scoping.
    • Promote deployment_config and deployment_maintenance_state as singleton global config tables.
    • Require created_by_user_id/updated_by_user_id/changed_by_user_id to be NN with system sentinel semantics.
    • Update procedures, constraints, and index guidance to align with deployment-global indexing.
  • Consequences:
    • Positive outcomes:
      • ERD aligns with single-tenant deployments and global config reuse.
      • Audit fields are explicit and consistent with system sentinel usage.
    • Risks or trade-offs:
      • Future multi-tenant support would require reintroducing tenant scoping.
  • Follow-up:
    • Implementation tasks:
      • Keep migrations and runtime schema changes aligned with the updated ERD.
    • Review checkpoints:
      • Validate stored procedures and schema changes during implementation.

Task record

  • Motivation: Align the ERD with the single-tenant deployment model and explicit audit actors.
  • Design notes: Removed workspace scoping, added deployment_role on app_user, documented system user_id=0 and the all-zero UUID, revised procedures/constraints/indexes for deployment scope, and serialized log stream tests to avoid global buffer races.
  • Test coverage summary: just ci and just ui-e2e run locally.
  • Observability updates: None (documentation and test-stability change only).
  • Risk & rollback plan: Low risk; revert ERD edits if multi-tenant scope is reintroduced.
  • Dependency rationale: None; no new dependencies. Alternatives considered: keep workspace scoping and NULL system actors (rejected).

SonarQube Workflow With Root Coverage LCOV

  • Status: Accepted
  • Date: 2026-03-22
  • Context:
    • Motivation:
      • Add automated SonarQube analysis for every pull request and every push to main.
      • Publish Rust coverage into SonarQube from a deterministic file in the repository root.
      • Keep the native libtorrent FFI source analyzed instead of excluding it from SonarQube.
    • Constraints:
      • CI workflows must use just recipes for operational steps.
      • Coverage artifact must be generated as coverage/lcov.info.
      • Generated coverage file must not be tracked by git.
      • The repository contains C++ FFI sources under crates/revaer-torrent-libt/src/ffi, and SonarQube requires a C-family compilation database to analyze them correctly.
  • Decision:
    • Use just cov to build a combined workspace LCOV report at coverage/lcov.info.
    • Add just sonar-compile-db to build revaer-torrent-libt with REVAER_NATIVE_COMPILE_COMMANDS_PATH set so build.rs emits coverage/compile_commands.json.
    • Add .github/workflows/sonar.yml to trigger on pull_request and push to main.
    • In the Sonar workflow, run migrations, generate the LCOV file via just cov, generate the native compile database via just sonar-compile-db, and run SonarSource/sonarqube-scan-action@a31c9398be7ace6bbfaf30c0bd5d415f843d45e9 (v7) with:
      • sonar.projectKey=VannaDii_Revaer
      • sonar.organization=vannadii
      • sonar.rust.lcov.reportPaths=coverage/lcov.info
      • sonar.cfamily.compile-commands=coverage/compile_commands.json
    • Ignore /coverage in .gitignore.
    • Alternatives considered:
      • Separate per-crate LCOV files merged post-process: rejected as unnecessary complexity for Sonar ingestion.
      • Invoking cargo directly in workflow: rejected because repository policy requires just recipes.
      • Excluding C-family files from SonarQube: rejected because the native adapter is first-party code and should remain part of static analysis.
      • Using external interception tooling such as Bear: rejected because the existing cxx_build path already knows the exact compiler flags, so emitting the compile database in build.rs keeps local and CI behavior aligned with fewer moving parts.
  • Consequences:
    • Positive outcomes:
      • SonarQube now runs on PRs and main pushes with workspace Rust coverage.
      • Coverage path is stable and tool-agnostic (coverage/lcov.info).
      • Native FFI shim sources stay included in SonarQube analysis with the correct compiler context.
    • Risks and trade-offs:
      • Sonar job runtime includes full coverage execution and DB-backed tests.
      • Workflow requires valid SONAR_TOKEN repository secret and database variable setup.
      • The compile database currently describes the checked-in session.cpp translation unit, so future native sources must be added deliberately if the FFI surface grows.
  • Follow-up:
    • Test coverage summary:
      • Validation must cover just sonar-compile-db producing coverage/compile_commands.json plus the repository’s required just gates.
    • Observability updates:
      • No runtime telemetry changes; this is CI/workflow-only.
    • Risk and rollback plan:
      • Roll back by removing just sonar-compile-db, the build.rs compile database emission, and the workflow scan property if SonarQube compilation-database support causes instability.
    • Dependency rationale:
      • No Rust dependencies added.
      • GitHub Action dependency remains SonarSource/sonarqube-scan-action@a31c9398be7ace6bbfaf30c0bd5d415f843d45e9 (v7, official maintained scanner wrapper). Alternative was raw scanner CLI install steps, rejected for higher maintenance.

Indexer ERD checklist

  • Status: Accepted
  • Date: 2026-01-25
  • Context:
    • We need a complete, ordered, and trackable checklist for implementing ERD_INDEXERS.md.
    • The checklist must reflect dependencies, support test-first execution, and avoid missed requirements.
  • Decision:
    • Add a dedicated ERD implementation checklist file that enumerates schema, procedures, services, behavior rules, and acceptance gates in dependency-first order.
    • Alternatives considered: keep ad-hoc notes or split by subsystem; rejected due to risk of omissions and loss of a single, authoritative implementation plan.
  • Consequences:
    • Positive: a single source of truth for the ERD execution plan and validation steps.
    • Trade-off: requires maintenance when ERD_INDEXERS.md changes.
  • Follow-up:
    • Keep ERD_INDEXERS_CHECKLIST.md synchronized with ERD_INDEXERS.md updates.
    • Use the checklist as the staging plan for implementation and testing phases.

Task record

  • Motivation:
    • Ensure ERD_INDEXERS.md is implementable without missing steps or violating architecture rules.
  • Design notes:
    • The checklist is dependency-first and grouped by schema, procedures, runtime services, and acceptance gates to maximize testability.
  • Test coverage summary:
    • No tests added in this change; checklist calls out required test gates for future work.
  • Observability updates:
    • No runtime changes in this change; checklist enumerates required telemetry and metrics work.
  • Risk & rollback plan:
    • Risk is limited to documentation drift; rollback is deleting the checklist and ADR entry.
  • Dependency rationale:
    • No new dependencies added. Alternatives considered: none required.

Indexer core schema foundations

  • Status: Accepted
  • Date: 2026-01-25
  • Context:
    • We need to begin implementing the indexer ERD with core, dependency-first tables.
    • The schema must follow ERD_INDEXERS.md and preserve SSOT for keys, IDs, and constraints.
  • Decision:
    • Add a new migration that introduces the initial enum types and core tables: app_user, deployment_config, deployment_maintenance_state, trust_tier, media_domain, and tag.
    • Use bigint identity PKs, UUID public IDs, and explicit constraints per ERD.
  • Consequences:
    • Positive: establishes the foundation required for indexer configuration and tagging.
    • Trade-off: further migrations are required to complete the full ERD.
  • Follow-up:
    • Add remaining enum types and schema tables from ERD_INDEXERS.md.
    • Implement seed procedures and stored procedures for the new tables.

Task record

  • Motivation:
    • Start the indexer ERD implementation with the smallest dependency set.
  • Design notes:
    • Enum types are defined for deployment_role, trust_tier_key, and media_domain_key.
    • Keys enforce lowercase checks; public UUIDs have no defaults to keep ownership in procedures.
  • Test coverage summary:
    • No new tests added; migration path is exercised via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting the migration if schema conflicts arise.
  • Dependency rationale:
    • No new dependencies added.

Indexer definition schema

  • Status: Accepted
  • Date: 2026-01-25
  • Context:
    • The indexer ERD requires a catalog of indexer definitions and field metadata.
    • These tables are prerequisites for indexer instance configuration and import flows.
  • Decision:
    • Add a migration that introduces indexer definition enums and tables, including validation rules and value sets.
    • Encode ERD constraints as database checks and unique indexes where possible.
  • Consequences:
    • Positive: definition metadata can be stored and validated at the database layer.
    • Trade-off: adds a new migration that must be extended by later ERD stages.
  • Follow-up:
    • Add indexer instance tables and import flows.
    • Implement seed and stored-procedure logic for definition sync.

Task record

  • Motivation:
    • Continue the dependency-first ERD rollout with the catalog and validation schema.
  • Design notes:
    • Enum types are created idempotently via pg_type checks.
    • Validation rules are enforced with explicit CHECK constraints and a unique index.
  • Test coverage summary:
    • No new tests added; migration path is exercised via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting the migration if downstream schemas change.
  • Dependency rationale:
    • No new dependencies added.

Indexer instance schema and RSS

  • Status: Accepted
  • Date: 2026-01-25
  • Context:
    • Indexer instances, routing policies, and RSS schedules are required to configure real indexers and persist their operational state.
  • Decision:
    • Add a migration that introduces indexer instance tables, routing policy tables, RSS tracking tables, and related enums.
    • Enforce ERD constraints (ranges, uniqueness, hash formats) via database checks.
  • Consequences:
    • Positive: provides the durable schema for indexer configuration, tags, domains, and RSS.
    • Trade-off: requires additional migrations for imports, policies, and search flows.
  • Follow-up:
    • Add import_job tables once search profiles and torznab instances exist.
    • Implement stored procedures and seed data for routing and instance management.

Task record

  • Motivation:
    • Continue ERD implementation with dependency-ready indexer instance tables.
  • Design notes:
    • Routing policy is introduced to satisfy the FK from indexer_instance.
    • Hash columns enforce lowercase hex constraints to match global rules.
  • Test coverage summary:
    • No new tests added; migration path is exercised via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting the migration if downstream constraints change.
  • Dependency rationale:
    • No new dependencies added.

Indexer secret schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD requires secret storage and auditable bindings for indexer field values and routing policy parameters.
    • Secret linkage must be centralized via secret_binding with revocation/rotation metadata.
  • Decision:
    • Add secret, secret_binding, and secret_audit_log tables plus supporting enums.
    • Enforce binding_name allowlists per bound_table and key_id length checks.
  • Consequences:
    • Positive: schema supports secure secret storage with auditable bindings.
    • Trade-off: follow-on migrations and procedures are required for lifecycle actions.
  • Follow-up:
    • Implement secret procedures and auditing per ERD.
    • Add binding validation in indexer/routing procedures.

Task record

  • Motivation:
    • Continue ERD implementation with secrets storage and binding schema.
  • Design notes:
    • secret_binding remains the only linkage, enforced by a bound_table/binding_name check.
    • secret_audit_log is append-only to capture create/rotate/revoke/bind/unbind actions.
  • Test coverage summary:
    • No new tests added; migration path is exercised via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting the migration if ERD constraints change.
  • Dependency rationale:
    • No new dependencies added.

Indexer search profiles and Torznab schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD requires search profiles to capture user intent and Torznab instances to expose arr-compatible endpoints tied to profiles.
    • Import jobs depend on search_profile and torznab_instance references.
  • Decision:
    • Add schema for search_profile and related allow/block/prefer tables plus torznab_instance.
    • Enforce ERD constraints for page sizing, weight ranges, and uniqueness.
  • Consequences:
    • Positive: enables profile filtering and Torznab endpoint configuration in the schema.
    • Trade-off: policy_set linking and import pipeline remain follow-up migrations.
  • Follow-up:
    • Add search_profile_policy_set once policy_set exists.
    • Implement import_job tables and Torznab procedures after policy/schema dependencies.

Task record

  • Motivation:
    • Continue ERD implementation with search profile and Torznab persistence.
  • Design notes:
    • Weight overrides allow nullable values with bounded ranges per ERD notes.
    • torznab_instance stores hashed API keys only, with soft-delete support.
  • Test coverage summary:
    • No new tests added; migration path is exercised via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting the migration if ERD constraints change.
  • Dependency rationale:
    • No new dependencies added.
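For illustration of the "hashed API keys only" design note — the digest algorithm here is an assumption for this sketch (the ERD defines the real scheme), and the key value is obviously a placeholder:

```shell
#!/bin/sh
# Sketch: reduce an API key to a lowercase hex digest before persisting it,
# so torznab_instance never stores the plaintext key. sha256 is an assumed
# algorithm; consult ERD_INDEXERS.md for the actual hashing rules.
api_key="example-api-key"   # illustrative value, never a real secret
digest="$(printf '%s' "$api_key" | sha256sum | awk '{print $1}')"
echo "$digest"              # 64 lowercase hex chars; only this is stored
```

Storing only the digest also keeps the column compatible with the global lowercase-hex hash constraints used elsewhere in the schema.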

Indexer import schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD defines import_job and import_indexer_result for Prowlarr migration tracking.
    • Import jobs depend on search profiles and Torznab instances.
  • Decision:
    • Add import_job and import_indexer_result tables with supporting enums.
    • Preserve ERD constraints for identifiers, status, and optional error/detail fields.
  • Consequences:
    • Positive: schema supports import tracking and per-indexer outcomes.
    • Trade-off: import procedures and validation remain a follow-up step.
  • Follow-up:
    • Implement import stored procedures and validation rules per ERD.
    • Add indexer-instance linkage rules and dry-run handling in procedures.

Task record

  • Motivation:
    • Continue ERD implementation with import pipeline persistence.
  • Design notes:
    • import_indexer_result.indexer_instance_id remains a nullable bigint with no FK, per ERD.
    • import_job stores target profile/torznab references for later procedure enforcement.
  • Test coverage summary:
    • No new tests added; migration path is exercised via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting the migration if ERD constraints change.
  • Dependency rationale:
    • No new dependencies added.

Indexer rate limit and Cloudflare schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD defines rate limiting policy/state and Cloudflare status tracking for indexers.
    • These tables are prerequisites for routing enforcement and job-based cleanup.
  • Decision:
    • Add rate_limit_policy, indexer_instance_rate_limit, routing_policy_rate_limit, rate_limit_state, and indexer_cf_state tables plus required enums.
    • Enforce ERD ranges, uniqueness, and cascade delete for instance/routing children.
  • Consequences:
    • Positive: schema supports rate limit configuration, token tracking, and CF state.
    • Trade-off: stored procedures and scheduled jobs remain follow-up work.
  • Follow-up:
    • Implement rate_limit and cf_state procedures, including seed defaults and purge jobs.
    • Add outbound_request_log integration and derived connectivity profile.

Task record

  • Motivation:
    • Continue ERD implementation with rate limiting and CF state persistence.
  • Design notes:
    • rate_limit_state uses per-minute buckets with non-negative token usage.
    • indexer_cf_state enforces non-negative backoff and consecutive failure counters.
  • Test coverage summary:
    • No new tests added; migration path is exercised via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting the migration if ERD constraints change.
  • Dependency rationale:
    • No new dependencies added.
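The per-minute bucketing in the design notes amounts to truncating a timestamp to the start of its minute; the function name below is illustrative, and the real bucket key lives in the rate_limit_state migration:

```shell
#!/bin/sh
# Sketch: truncate an epoch-seconds timestamp to the start of its minute —
# the bucket granularity described for rate_limit_state. Token usage is then
# counted per bucket and constrained to be non-negative.
bucket_start() {
  echo $(( $1 / 60 * 60 ))
}

bucket_start 1769450123   # prints: 1769450100
```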

Indexer policy schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD defines policy sets, rules, and snapshots for search filtering and scoring.
    • Search profiles need policy_set linkage for profile-scoped policies.
  • Decision:
    • Add policy_set, policy_rule, policy_rule_value_set, policy_rule_value_set_item, policy_snapshot, policy_snapshot_rule, and search_profile_policy_set tables.
    • Introduce required policy enums and enforce ERD uniqueness and cascade rules.
  • Consequences:
    • Positive: schema supports policy configuration, snapshot reuse, and profile links.
    • Trade-off: stored procedures and snapshot materialization remain follow-up work.
  • Follow-up:
    • Implement policy procedures, snapshot hashing, and retention jobs per ERD.
    • Add search_request tables to wire policy snapshots into runtime queries.

Task record

  • Motivation:
    • Continue ERD implementation with policy persistence and profile linkage.
  • Design notes:
    • policy_set created_for_search_request_id is stored without a FK until search_request exists.
    • policy_rule_value_set uses shared value_set_type enum without extra restrictions.
  • Test coverage summary:
    • No new tests added; migration path is exercised via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting the migration if ERD constraints change.
  • Dependency rationale:
    • No new dependencies added.

Indexer Torznab category schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD requires seeded Torznab categories and mappings to media domains and tracker categories for filtering and Torznab responses.
  • Decision:
    • Add torznab_category, media_domain_to_torznab_category, and tracker_category_mapping tables with ERD constraints and uniqueness rules.
    • Enforce global uniqueness for tracker_category_mapping across null indexer_definition_id via a coalesced unique index.
  • Consequences:
    • Positive: schema supports Torznab category lookups and tracker mapping overrides.
    • Trade-off: seeding and procedures remain follow-up work.
  • Follow-up:
    • Seed Torznab categories and domain mappings per ERD.
    • Implement category mapping stored procedures and indexes.

Task record

  • Motivation:
    • Continue ERD implementation with Torznab category and mapping persistence.
  • Design notes:
    • tracker_category and tracker_subcategory enforce non-negative values as specified.
    • media_domain mapping allows NULL media_domain_id for unsupported categories.
  • Test coverage summary:
    • No new tests added; migration path is exercised via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting the migration if ERD constraints change.
  • Dependency rationale:
    • No new dependencies added.

Indexer connectivity and audit schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD defines connectivity snapshots, health events, and config audit logging.
    • These tables are prerequisites for health reporting and policy/action auditing.
  • Decision:
    • Add indexer_connectivity_profile, indexer_health_event, and config_audit_log tables plus required enums for health events, connectivity status, and audit categories.
    • Enforce ERD constraints for success-rate bounds and audit entity references.
  • Consequences:
    • Positive: schema supports connectivity rollups and durable audit trails.
    • Trade-off: rollup jobs and audit-writing procedures remain follow-up work.
  • Follow-up:
    • Implement connectivity rollup job and health event emission per ERD.
    • Wire audit log writes in stored procedures and domain services.

Task record

  • Motivation:
    • Continue ERD implementation with connectivity and audit persistence.
  • Design notes:
    • config_audit_log requires either a bigint PK or a public UUID per ERD notes.
    • indexer_connectivity_profile enforces error_class NULL for healthy status.
  • Test coverage summary:
    • No new tests added; migration path is exercised via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting the migration if ERD constraints change.
  • Dependency rationale:
    • No new dependencies added.

Indexer canonicalization schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Implement ERD_INDEXERS.md canonicalization tables for deduped torrents and durable sources.
    • Enforce hash identity and typed attribute invariants at the schema layer.
    • Keep enums and tables aligned with existing migrations and single-tenant scope.
  • Decision:
    • Add migration 0022_indexer_canonicalization.sql to create canonical tables and enums.
    • Apply ERD validation rules for hashes, IDs, typed attributes, and identity strategies.
  • Consequences:
    • Canonical torrent/source data is stored with enforced identity constraints.
    • Downstream search and ingest tables can reference canonical entities safely.
  • Follow-up:
    • Implement search_request tables and ingestion stored procedures per ERD_INDEXERS.md.
    • Add remaining canonical scoring, conflict, and decision tables.
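
Hash identity invariants of the kind described above are typically enforced with check constraints. A minimal sketch, assuming hypothetical info_hash_v1/info_hash_v2 columns on a canonical torrent table:

```sql
-- Sketch only: table and column names are assumptions. Requires at least
-- one hash and validates the v1 hash shape (40 lowercase hex characters).
ALTER TABLE canonical_torrent
    ADD CONSTRAINT canonical_torrent_hash_present
        CHECK (info_hash_v1 IS NOT NULL OR info_hash_v2 IS NOT NULL),
    ADD CONSTRAINT canonical_torrent_hash_v1_shape
        CHECK (info_hash_v1 IS NULL OR info_hash_v1 ~ '^[0-9a-f]{40}$');
```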

Task record

  • Motivation:
    • Establish canonical torrent and source storage to unblock search request ingestion.
  • Design notes:
    • Enforce hash, ID, and typed attribute invariants directly in the schema.
    • Keep canonical tables aligned to ERD_INDEXERS.md and dependency order.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0022 if schema issues surface.
  • Dependency rationale:
    • No new dependencies added.

Indexer search request schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Implement ERD_INDEXERS.md search request tables, enums, and streaming page state.
    • Enforce search/run state invariants and observation typing at the schema layer.
  • Decision:
    • Add migration 0023_indexer_search_requests.sql to create search request tables and enums.
    • Apply ERD validation rules for status transitions, cursors, and observation attributes.
  • Consequences:
    • Search request storage is ready for ingestion and paging flows.
    • Downstream procedures can rely on schema checks for state integrity.
  • Follow-up:
    • Add canonical scoring, conflict tracking, and job tables per ERD_INDEXERS.md.
    • Implement stored procedures for search orchestration and ingestion.
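
A status/timestamp invariant of the kind this migration enforces can be sketched as a single check: terminal states and a populated finished_at must coincide. The status values and column name here are illustrative assumptions:

```sql
-- Sketch only: enum values and column names are assumptions.
ALTER TABLE search_request
    ADD CONSTRAINT search_request_terminal_timestamps
        CHECK (
            (status IN ('finished', 'failed', 'canceled'))
            = (finished_at IS NOT NULL)
        );
```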

Task record

  • Motivation:
    • Land the search request schema needed for streaming and run tracking.
  • Design notes:
    • Enforced status/timestamp and attribute typing checks per ERD_INDEXERS.md.
    • Kept enums scoped to current table usage to avoid unused schema items.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0023 if schema issues surface.
  • Dependency rationale:
    • No new dependencies added.

Indexer scoring schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Implement ERD_INDEXERS.md scoring and best-source materialization tables.
    • Preserve context-specific ordering for search/profile/policy views.
  • Decision:
    • Add migration 0024_indexer_scoring.sql for scoring and best-source tables.
    • Introduce context_key_type enum and enforce score range checks.
  • Consequences:
    • Canonical sources can be ranked globally and per context.
    • Best-source tables are ready for refresh jobs and search paging.
  • Follow-up:
    • Add conflicts and decision tables plus outbound log and reputation tracking.
    • Implement stored procedures that compute scores and refresh best-source rows.

Task record

  • Motivation:
    • Provide schema support for deterministic source ranking in searches.
  • Design notes:
    • Enforced score ranges and uniqueness constraints per ERD_INDEXERS.md.
    • Context types are stored as a dedicated enum for clarity.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0024 if schema issues surface.
  • Dependency rationale:
    • No new dependencies added.

Indexer conflict and decision schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Implement ERD_INDEXERS.md conflict tracking and filter decision tables.
    • Preserve auditability of metadata conflicts and policy filtering outcomes.
  • Decision:
    • Add migration 0025_indexer_conflicts_decisions.sql for conflicts and decisions.
    • Introduce enums for conflict and decision types with ERD-aligned values.
  • Consequences:
    • Conflict resolution workflows can be recorded with an audit trail.
    • Search filter decisions can be persisted for transparency and debugging.
  • Follow-up:
    • Add outbound_request_log, user actions, acquisition, reputation, and job tables.
    • Implement stored procedures for conflict resolution and filtering decisions.

Task record

  • Motivation:
    • Capture durable metadata conflicts and policy decisions per ERD_INDEXERS.md.
  • Design notes:
    • Kept constraints minimal and aligned with ERD requirements for nullable references.
    • Search filter decisions require at least one canonical reference.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0025 if schema issues surface.
  • Dependency rationale:
    • No new dependencies added.

Indexer user action and acquisition schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Implement ERD_INDEXERS.md feedback and acquisition tracking tables.
    • Persist user actions and download attempts for ranking and reputation signals.
  • Decision:
    • Add migration 0026_indexer_user_actions.sql for user actions and acquisition attempts.
    • Introduce enums for actions, reasons, acquisition status/origin/failure, and client names.
  • Consequences:
    • User feedback and acquisition events are stored with constrained identifiers.
    • Future reputation rollups can rely on acquisition data.
  • Follow-up:
    • Add outbound_request_log, reputation, and job scheduling tables.
    • Implement stored procedures and ingestion paths for acquisitions.

Task record

  • Motivation:
    • Capture user interactions and download outcomes per ERD_INDEXERS.md.
  • Design notes:
    • Enforced identifier presence and failure-class rules on acquisition_attempt.
    • User action metadata stored as key/value with unique keys per action.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0026 if schema issues surface.
  • Dependency rationale:
    • No new dependencies added.

Indexer telemetry and reputation schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Implement ERD_INDEXERS.md telemetry logging and reputation rollups.
    • Ensure outbound request invariants are enforced at the schema layer.
  • Decision:
    • Add migration 0027_indexer_telemetry_reputation.sql for outbound_request_log and source_reputation.
    • Introduce enums for request types, outcomes, mitigations, and reputation windows.
  • Consequences:
    • Connectivity and reputation rollups can rely on consistent telemetry inputs.
    • Rate-limited and success/failure invariants are enforced in the database.
  • Follow-up:
    • Add job scheduling tables and stored procedures for rollups and retention.
    • Implement index coverage for telemetry and reputation queries.

Task record

  • Motivation:
    • Capture outbound request telemetry and reputation rollups per ERD_INDEXERS.md.
  • Design notes:
    • Enforced outcome/error-class invariants and numeric ranges for rates.
    • Added defaults for timestamps to keep writes consistent.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0027 if schema issues surface.
  • Dependency rationale:
    • No new dependencies added.

Indexer job schedule schema

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Implement ERD_INDEXERS.md job scheduling table and enum constraints.
    • Align cadence and jitter bounds with runtime scheduler expectations.
  • Decision:
    • Add migration 0028_indexer_jobs.sql for job_key enum and job_schedule.
    • Enforce cadence range and jitter bounds per ERD notes.
  • Consequences:
    • Scheduler state is stored in a single table with clear invariants.
    • Deployment seeding must populate required job rows.
  • Follow-up:
    • Add deployment seed procedures for job_schedule rows.
    • Implement job_claim_next_v1 and job completion updates.

Task record

  • Motivation:
    • Establish job scheduling primitives required for indexer retention and rollups.
  • Design notes:
    • cadence_seconds constrained to 30..604800 per ERD.
    • jitter_seconds constrained to 0..cadence_seconds.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0028 if scheduler constraints need adjustment.
  • Dependency rationale:
    • No new dependencies added.
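
The cadence and jitter bounds stated in the design notes translate directly into check constraints; only the constraint names below are invented:

```sql
-- cadence_seconds in 30..604800 and jitter_seconds in 0..cadence_seconds,
-- per the ERD notes. Constraint names are illustrative.
ALTER TABLE job_schedule
    ADD CONSTRAINT job_schedule_cadence_bounds
        CHECK (cadence_seconds BETWEEN 30 AND 604800),
    ADD CONSTRAINT job_schedule_jitter_bounds
        CHECK (jitter_seconds BETWEEN 0 AND cadence_seconds);
```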

Indexer FK on-delete rules

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD_INDEXERS.md requires cascade deletes from indexer_instance to instance children.
    • Some FKs were created without explicit on-delete behavior.
  • Decision:
    • Add migration 0029_indexer_fk_rules.sql to enforce cascading FKs for indexer_instance child tables.
  • Consequences:
    • Hard-deleting an indexer_instance will cascade to dependent config and diagnostics rows.
    • Soft-delete behavior remains unchanged.
  • Follow-up:
    • Review remaining FK behaviors as stored procedures are introduced.
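
Replacing a default FK with an explicit ON DELETE CASCADE follows the usual drop/re-add pattern. A sketch for one hypothetical child table (the table and constraint names are assumptions):

```sql
-- Sketch only: identifiers are illustrative, not the migration's actual names.
ALTER TABLE indexer_instance_field_value
    DROP CONSTRAINT indexer_instance_field_value_instance_fk,
    ADD CONSTRAINT indexer_instance_field_value_instance_fk
        FOREIGN KEY (indexer_instance_id)
        REFERENCES indexer_instance (id)
        ON DELETE CASCADE;
```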

Task record

  • Motivation:
    • Align schema with ERD on-delete rules for indexer_instance children.
  • Design notes:
    • Replaced default FK constraints with ON DELETE CASCADE on instance child tables.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0029 if cascading rules need adjustment.
  • Dependency rationale:
    • No new dependencies added.

Indexer seed data and defaults

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD_INDEXERS.md requires seeded trust tiers, media domains, Torznab categories, default rate limits, and job scheduling rows.
    • Seed functions must enforce immutability rules for system-owned data.
  • Decision:
    • Add migration 0030_indexer_seed_data.sql with seed procedures and inserts.
    • Seed Torznab categories, media-domain mappings, tracker mappings, rate limits, job schedules, and the system user.
  • Consequences:
    • Deployments start with required lookup data and system defaults.
    • Seeded values are validated for consistency on migration.
  • Follow-up:
    • Implement deployment_init_v1 and remaining stored procedures.
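
Idempotent seeding of lookup data usually relies on ON CONFLICT DO NOTHING so re-running the migration is safe. A sketch for Torznab categories (the column names are assumptions; 2000/5000/8000 are standard Torznab category IDs):

```sql
-- Sketch only: table/column names are illustrative. Safe to re-run.
INSERT INTO torznab_category (category_id, name)
VALUES (2000, 'Movies'), (5000, 'TV'), (8000, 'Other')
ON CONFLICT (category_id) DO NOTHING;
```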

Task record

  • Motivation:
    • Provide required seed data and seed procedures for indexer ERD compliance.
  • Design notes:
    • trust_tier_seed_defaults and media_domain_seed_defaults are idempotent and validate seeded values.
    • job_schedule rows use randomized initial next_run_at jitter.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0030 if seed values require adjustment.
  • Dependency rationale:
    • No new dependencies added.

Indexer query indexes

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD_INDEXERS.md defines a query-path index matrix for search, scoring, and telemetry workflows.
    • Many indexes are non-unique and must be added after table creation.
  • Decision:
    • Add migration 0031_indexer_query_indexes.sql to create the ERD-specified non-unique indexes and partial indexes.
  • Consequences:
    • Query paths have explicit index coverage for search, scoring, and retention.
    • Unique constraints continue to cover duplicate index requirements.
  • Follow-up:
    • Revisit index coverage when stored procedures and query plans land.

Task record

  • Motivation:
    • Provide the ERD index matrix required for search and telemetry queries.
  • Design notes:
    • Skipped indexes already covered by PK/UQ constraints.
    • Added partial indexes for sparse hash lookups and job scheduling.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0031 if index choices need revision.
  • Dependency rationale:
    • No new dependencies added.
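
The partial indexes mentioned in the design notes would look roughly like this — indexing only rows where the sparse column is populated, or only schedulable jobs. All identifiers here are illustrative:

```sql
-- Sketch only: names are assumptions. Partial indexes keep sparse hash
-- lookups and due-job scans small.
CREATE INDEX idx_canonical_torrent_hash_v2
    ON canonical_torrent (info_hash_v2)
    WHERE info_hash_v2 IS NOT NULL;

CREATE INDEX idx_job_schedule_due
    ON job_schedule (next_run_at)
    WHERE enabled;
```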

Indexer deployment initialization procedure

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • ERD_INDEXERS.md specifies deployment_init_v1 to bootstrap deployment defaults.
    • Initialization must be idempotent and enforce actor verification.
  • Decision:
    • Add migration 0032_indexer_deployment_init.sql with deployment_init_v1 and stable wrapper deployment_init.
    • deployment_init_v1 enforces verified admin/owner actors and seeds defaults.
  • Consequences:
    • Deployments can be initialized via stored procedure calls.
    • System defaults are re-applied safely when missing.
  • Follow-up:
    • Implement the remaining stored procedures in Phase 5.

Task record

  • Motivation:
    • Provide the ERD-specified deployment initialization entry point.
  • Design notes:
    • Procedure is idempotent and reuses existing seed helpers.
    • Authorization requires verified owner/admin actors.
  • Test coverage summary:
    • No new tests added; migrations validated via just ci and ui-e2e.
  • Observability updates:
    • None in this change.
  • Risk & rollback plan:
    • Roll back by reverting migration 0032 if procedure behavior needs revision.
  • Dependency rationale:
    • No new dependencies added.

Indexer app_user stored procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • We need versioned, auditable entry points for app_user creation and maintenance.
    • ERD_INDEXERS.md requires normalized email storage, constant error messages, and wrapper procedures without version suffixes.
    • app_user has no audit fields, so procedures must be minimal and safe while preserving table invariants.
  • Decision:
    • Add migration 0033 with app_user_create_v1, app_user_update_v1, and app_user_verify_email_v1 plus stable wrappers.
    • Normalize emails in-proc (trim + lowercase), enforce non-empty inputs, and default role to user with is_email_verified=false at creation.
    • Use constant error messages with detail codes for invalid or missing inputs.
  • Consequences:
    • app_user mutations now go through stored procedures with consistent validation.
    • Email duplicates are rejected deterministically before insert.
    • Additional procedure surface area must be maintained as app_user rules evolve.
  • Follow-up:
    • Update ERD_INDEXERS_CHECKLIST.md to mark app_user procedures complete.
    • Extend coverage when app_user endpoints are implemented.
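
The in-proc normalization and constant-error pattern described above can be sketched as follows. The procedure signature, column names, and detail code are assumptions; only the trim + lowercase rule, the user/unverified defaults, and constant messages with detail codes come from the record:

```sql
-- Sketch only, not the actual migration 0033 body.
CREATE OR REPLACE FUNCTION app_user_create_v1(p_email text, p_display_name text)
RETURNS uuid
LANGUAGE plpgsql
AS $$
DECLARE
    v_email text := lower(btrim(p_email));  -- normalize before any checks
    v_id    uuid;
BEGIN
    IF v_email = '' THEN
        -- constant message, structured detail code
        RAISE EXCEPTION 'invalid_input' USING DETAIL = 'email_empty';
    END IF;

    INSERT INTO app_user (email, display_name, role, is_email_verified)
    VALUES (v_email, btrim(p_display_name), 'user', false)
    RETURNING public_id INTO v_id;

    RETURN v_id;
END;
$$;
```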

Indexer tag stored procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Tags are user-created and soft-deleted; procedures must preserve tag_key immutability.
    • ERD_INDEXERS.md requires audit logging, lowercase tag keys, and conflict handling when tag_public_id and tag_key are both provided.
    • Stored procedures need constant error messages with structured detail codes.
  • Decision:
    • Add migration 0034 with tag_create_v1, tag_update_v1, and tag_soft_delete_v1 plus stable wrappers.
    • Validate tag_key casing, length, and uniqueness on create; tag_key is immutable on update and delete.
    • Support tag resolution by public ID and/or key with invalid_tag_reference on conflict.
    • Write config_audit_log entries for create, update, and soft-delete actions.
  • Consequences:
    • Tag mutations are centralized and auditable in the database layer.
    • Additional procedure surface area must be kept in sync with future tag rules.
  • Follow-up:
    • Extend REST handlers to use tag procedures with key/public ID resolution.
    • Add API validation tests for invalid_tag_reference and soft-delete behaviors.

Indexer routing policy stored procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Routing policy mutations require role checks, parameter validation, and audit logging.
    • ERD_INDEXERS.md specifies parameter constraints per routing mode and secret binding requirements for proxy credentials.
    • Procedures must use constant error messages with structured detail codes.
  • Decision:
    • Add migration 0035 implementing routing_policy_create_v1, routing_policy_set_param_v1, and routing_policy_bind_secret_v1 plus stable wrappers.
    • Enforce owner/admin role checks, display_name validation, and unsupported mode rejection.
    • Validate parameter types and ranges; restrict param keys to mode-specific allowlists.
    • Create verify_tls on policy creation and ensure auth parameter rows exist for proxy modes.
    • Bind secrets via secret_binding with secret_audit_log and config_audit_log entries.
  • Consequences:
    • Routing policy state is validated and auditable at the database layer.
    • Proxy credential bindings are centralized with explicit secret audit events.
  • Follow-up:
    • Implement routing policy API handlers using these procedures.
    • Add tests for param validation edge cases and secret binding replacement.

Indexer Cloudflare reset procedure

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Operators need a controlled reset path for Cloudflare challenges and cooldowns.
    • ERD_INDEXERS.md requires owner/admin authorization, CF state reset, and conditional connectivity profile recovery for quarantined indexers.
  • Decision:
    • Add migration 0036 with indexer_cf_state_reset_v1 plus a stable wrapper.
    • Reset cf_state to clear, wipe CF session/cooldown/backoff metadata, and zero consecutive_failures.
    • If connectivity status is quarantined with a CF-related error class, downgrade status to degraded and reset error_class to unknown.
    • Record a config_audit_log update with change_summary “cf_state reset”.
  • Consequences:
    • CF recovery can be triggered safely with auditable changes.
    • Non-CF connectivity failures are preserved.
  • Follow-up:
    • Add API handler wiring for CF resets.
    • Add tests for quarantined vs non-quarantined connectivity transitions.

Indexer rate limit stored procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Rate limiting requires auditable policy management and a database-backed token bucket.
    • ERD_INDEXERS.md mandates bounds enforcement, system policy immutability, and scoped token consumption with minute windows.
  • Decision:
    • Add migration 0037 implementing rate_limit_policy CRUD, instance/policy mappings, and rate_limit_try_consume_v1 plus stable wrappers.
    • Enforce owner/admin authorization, range checks, and in-use protection on delete.
    • Implement token bucket updates with row-level locking on rate_limit_state.
  • Consequences:
    • Rate limit policies and assignments are centralized and auditable.
    • Token consumption is safe under concurrent access.
  • Follow-up:
    • Integrate rate_limit_try_consume_v1 into outbound request logging.
    • Add tests for policy deletion conflicts and token bucket edge cases.
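
A database-backed token bucket with row-level locking, as described above, typically takes this shape: lock the state row, refill based on elapsed time, then consume or refuse. The columns and refill math are illustrative assumptions:

```sql
-- Sketch only: rate_limit_state columns (tokens, bucket_size,
-- refill_per_second, refilled_at) are assumptions.
CREATE OR REPLACE FUNCTION rate_limit_try_consume_v1(p_state_id bigint)
RETURNS boolean
LANGUAGE plpgsql
AS $$
DECLARE
    v_state rate_limit_state%ROWTYPE;
BEGIN
    SELECT * INTO v_state
    FROM rate_limit_state
    WHERE id = p_state_id
    FOR UPDATE;  -- serialize concurrent consumers on this bucket

    -- Refill for elapsed time, capped at the bucket size.
    v_state.tokens := least(
        v_state.bucket_size,
        v_state.tokens
            + extract(epoch FROM now() - v_state.refilled_at)
              * v_state.refill_per_second);

    IF v_state.tokens < 1 THEN
        RETURN false;  -- caller must defer the request
    END IF;

    UPDATE rate_limit_state
    SET tokens = v_state.tokens - 1, refilled_at = now()
    WHERE id = p_state_id;
    RETURN true;
END;
$$;
```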

Indexer instance stored procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Indexer instances, RSS scheduling, domain/tag assignment, and field value management require validated, auditable mutations at the database layer.
    • ERD_INDEXERS.md mandates per-proc authorization, field validation, and audit logging.
  • Decision:
    • Add migration 0038 implementing indexer_instance and RSS procedures, plus media domain, tag, and field value/secret binding procedures with stable wrappers.
    • Enforce owner/admin authorization, definition validation, and strict value checks (type, range, regex, allowed values).
    • Record config_audit_log updates for each mutation and secret_audit_log bind entries.
  • Consequences:
    • Indexer configuration changes are validated and auditable in stored procedures.
    • Field validations are enforced consistently against definition rules.
  • Follow-up:
    • Implement indexer_instance_test_v1 and outbound request logging integration.
    • Add API handlers and tests for indexer instance management.

Indexer category mapping procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Category mapping rules must be updated via stored procedures with validation and audit logging.
    • ERD_INDEXERS.md specifies media domain and Torznab category checks plus primary mapping enforcement.
  • Decision:
    • Add migration 0039 implementing tracker_category_mapping and media_domain_to_torznab mapping upsert/delete procedures with stable wrappers.
    • Validate upstream_slug resolution, Torznab category IDs, and media domain keys.
    • Enforce a single primary mapping per media domain during upsert.
    • Record config_audit_log entries for all mutations.
  • Consequences:
    • Category mapping changes are validated and auditable in the database.
    • Primary mapping invariants are enforced within the procedure transaction.
  • Follow-up:
    • Add API handlers for category mapping management.
    • Add tests for primary switch and invalid key handling.
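
Enforcing a single primary mapping within the upsert transaction usually means demoting any existing primary before promoting the target row. A sketch using hypothetical parameter and column names:

```sql
-- Sketch only: runs inside the upsert procedure's transaction; names are
-- assumptions. First demote the current primary, then upsert the new one.
UPDATE media_domain_to_torznab
SET is_primary = false
WHERE media_domain_key = p_media_domain_key
  AND is_primary
  AND torznab_category_id <> p_torznab_category_id;

INSERT INTO media_domain_to_torznab
    (media_domain_key, torznab_category_id, is_primary)
VALUES (p_media_domain_key, p_torznab_category_id, true)
ON CONFLICT (media_domain_key, torznab_category_id)
DO UPDATE SET is_primary = true;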

Indexer policy set procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Policy sets and rule toggles must be managed via stored procedures with role authorization and audit logging.
    • ERD_INDEXERS.md requires scope-based authorization and sort-order reordering.
  • Decision:
    • Add migration 0040 implementing policy_set create/update/enable/disable/reorder and policy_rule enable/disable/reorder procedures with stable wrappers.
    • Enforce scope-specific authorization, cardinality rules for enabled global/user sets, and profile link requirement on enable.
    • Record config_audit_log entries for all mutations.
  • Consequences:
    • Policy set lifecycle operations are validated and auditable at the DB layer.
    • Policy rule toggling and ordering are centralized for consistent behavior.
  • Follow-up:
    • Implement policy_rule_create_v1 with value set payload handling.
    • Add API handlers and tests for policy set and rule operations.

Indexer search profile procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Search profile mutations require scope-aware authorization, media domain resolution, and audit logging.
    • ERD_INDEXERS.md specifies default handling, allowlist semantics, and policy-set linkage.
  • Decision:
    • Add migration 0041 implementing search_profile create/update/default operations plus domain allowlist, policy-set linking, indexer allow/block, and tag allow/block/prefer procedures with stable wrappers.
    • Enforce per-scope authorization, media domain key validation, and allow/block conflict checks.
    • Record config_audit_log entries for profile and rule updates.
  • Consequences:
    • Search profile state changes are validated and auditable at the DB layer.
    • Allowlist and preference rules are kept consistent with block/allow constraints.
  • Follow-up:
    • Implement API handlers for search profile management.
    • Add tests for default scope switching and allow/block conflicts.

Indexer policy rule creation procedure

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • We need a stored procedure to create immutable policy rules that enforces ERD_INDEXERS.md invariants, including match-field/operator compatibility and value-set normalization.
    • Database mutations must be stored-procedure only, avoid JSON/JSONB, and return structured errors with constant messages.
  • Decision:
    • Add a composite type for value-set items and a policy_rule_create_v1 procedure that validates rule shape, match values, and value-set contents before inserting policy_rule rows.
    • Provide a stable policy_rule_create wrapper for versioning consistency.
  • Consequences:
    • Policy rule creation is validated centrally in the database, preventing inconsistent match-value combinations and enforcing normalization limits.
    • Callers must supply only the expected match value type or value-set items; extra fields now fail fast.
  • Follow-up:
    • Implement application-layer regex compilation validation using the stored is_case_insensitive flag.
    • Add stored-procedure tests that cover rule-type and value-set edge cases once the indexer DB test harness is available.

Indexer outbound request log procedure

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Outbound request telemetry must be written through stored procedures with strict validation and normalized cursor diagnostics.
    • The ERD mandates URL-aware page cursor normalization and hashing to bound storage.
  • Decision:
    • Add outbound_request_log_write_v1 to validate request invariants, resolve public IDs, normalize page cursor keys, persist outbound request logs, and update run correlation tracking.
    • Provide a stable outbound_request_log_write wrapper for versioned usage.
  • Consequences:
    • Outbound request samples are consistent across callers and safe for rollups.
    • Cursor normalization adds complexity; malformed cursor input now fails fast instead of being stored.
  • Follow-up:
    • Wire outbound logging from search runs and indexer probes to use the new procedure.
    • Add DB-level tests for cursor normalization and rate-limit invariants once the data test harness exists.
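
URL-aware cursor normalization with bounded storage, as the record describes, can be sketched as a helper that canonicalizes the cursor and stores only its hash. The helper name, the query-string heuristic, and the use of md5 for bounding are all assumptions:

```sql
-- Sketch only: a hypothetical helper, not the actual normalization rules.
-- For URL cursors, hash only the query portion so equivalent page URLs
-- normalize identically; otherwise hash the trimmed, lowercased cursor.
CREATE OR REPLACE FUNCTION page_cursor_normalize_v1(p_cursor text)
RETURNS text
LANGUAGE sql
IMMUTABLE
AS $$
    SELECT md5(
        CASE
            WHEN lower(btrim(p_cursor)) ~ '^https?://'
            THEN split_part(lower(btrim(p_cursor)), '?', 2)
            ELSE lower(btrim(p_cursor))
        END);
$$;
```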

Indexer Torznab instance state procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Torznab instances need enable/disable and soft-delete operations with role-based authorization tied to their search profiles.
    • Stored procedures must enforce invariants and write audit logs.
  • Decision:
    • Add torznab_instance_enable_disable_v1 and torznab_instance_soft_delete_v1 with search-profile scoped authorization and audit logging.
    • Keep create/rotate key procedures separate to accommodate pending secret-key hashing decisions.
  • Consequences:
    • Torznab instances can be safely toggled or retired without exposing API key material.
    • Create/rotate remain blocked until the API key hashing strategy is finalized.
  • Follow-up:
    • Implement torznab_instance_create_v1 and torznab_instance_rotate_key_v1 once Argon2id hashing is approved for the database layer or moved to the app layer.

Indexer conflict resolution procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Source metadata conflicts require operator resolution with strict authorization and audit logging.
    • Accepted-incoming resolutions must never overwrite existing durable data.
  • Decision:
    • Add source_metadata_conflict_resolve_v1 and source_metadata_conflict_reopen_v1 to enforce admin/owner authorization, apply limited backfills, and record audit events.
    • Limit accepted-incoming updates to safe backfills (source_guid, tracker_name, tracker_category/subcategory) when the durable value is missing.
  • Consequences:
    • Conflict resolution is traceable and safe against overwrites.
    • Incoming tracker category parsing is validated; malformed inputs are rejected instead of silently stored.
  • Follow-up:
    • Add test coverage for conflict resolution paths once the data test harness exists.

Indexer job runner procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Background job scheduling requires database-enforced claiming and retention cleanup.
    • Retention rules must align with deployment_config thresholds and avoid deleting durable data.
  • Decision:
    • Add job_claim_next_v1 to enforce lease-based claiming with advisory locks and per-job lease durations.
    • Add job_run_retention_purge_v1 to purge completed search trees and operational telemetry using retention thresholds.
  • Consequences:
    • Job claiming is serialized per job_key and prevents overlapping workers.
    • Retention cleanup reduces operational data growth while preserving durable records.
  • Follow-up:
    • Add per-job completion procedures that advance next_run_at with jitter and clear locks.
    • Add test coverage for retention purge edge cases once the data test harness exists.
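
Lease-based claiming with advisory locks, as described above, typically combines a transaction-scoped advisory lock per job_key with a lease column check. The column names and lock-key derivation here are illustrative:

```sql
-- Sketch only: leased_until/enabled columns and hashtext-based lock keys
-- are assumptions.
CREATE OR REPLACE FUNCTION job_claim_next_v1(p_job_key job_key, p_lease interval)
RETURNS boolean
LANGUAGE plpgsql
AS $$
BEGIN
    -- Serialize claimers for this job_key for the transaction's duration.
    IF NOT pg_try_advisory_xact_lock(hashtext(p_job_key::text)) THEN
        RETURN false;
    END IF;

    UPDATE job_schedule
    SET leased_until = now() + p_lease
    WHERE job_key = p_job_key
      AND enabled
      AND next_run_at <= now()
      AND (leased_until IS NULL OR leased_until < now());

    RETURN found;  -- true only if a due, unleased row was claimed
END;
$$;
```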

Indexer search request cancel procedure

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Search requests must be cancelable with proper authorization and clean terminal state transitions.
    • Runs in queued or running state must be marked canceled without violating status timestamp constraints.
  • Decision:
    • Add search_request_cancel_v1 to enforce actor authorization, mark the search as canceled, and cancel in-flight runs.
    • Keep the procedure idempotent when the request is already terminal.
  • Consequences:
    • Cancel operations consistently update finished_at/canceled_at and avoid invalid run states.
    • Unauthorized callers cannot cancel Torznab-owned searches.
  • Follow-up:
    • Implement search_request_create_v1 and search run state procedures to complete the search lifecycle.

Indexer search run procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Search runs need explicit state transitions and retry backoff rules per ERD_INDEXERS.md.
    • Retryable failures and rate-limited deferrals must keep runs queued while enforcing limits.
  • Decision:
    • Add stored procedures for enqueue, start, finish, fail, and cancel of search indexer runs.
    • Implement backoff calculations for retryable errors and rate-limited deferrals inside the database.
  • Consequences:
    • Run state transitions are validated in one place and aligned with status timestamp constraints.
    • Coordinators must pass retry_seq and rate-limit scope to ensure correct backoff.
  • Follow-up:
    • Implement search_request_create_v1 to seed runs and policy snapshots.
    • Wire outbound request logging to update run correlation state.
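
The in-database backoff calculation for retryable errors might look like a small helper keyed on retry_seq. The base interval, doubling rule, and cap below are illustrative assumptions, not the ERD's actual values:

```sql
-- Sketch only: base 30s doubled per attempt, capped at 1 hour; the
-- specific constants are assumptions.
CREATE OR REPLACE FUNCTION run_retry_backoff_v1(p_retry_seq integer)
RETURNS interval
LANGUAGE sql
IMMUTABLE
AS $$
    SELECT make_interval(secs => least(3600, 30 * power(2, p_retry_seq)));
$$;
```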

Indexer canonical disambiguation rule procedure

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Prevent-merge rules must enforce canonical identity normalization and symmetric uniqueness.
    • Only admin/owner users may create disambiguation rules.
  • Decision:
    • Add canonical_disambiguation_rule_create_v1 with normalization, identity validation, and canonical ordering of left/right pairs.
    • Record creation in config_audit_log with a canonical entity type.
  • Consequences:
    • Duplicate or reversed rule pairs are rejected before insertion.
    • Invalid identity values fail fast and do not pollute rule sets.
  • Follow-up:
    • Implement canonical merge/recompute procedures that honor prevent-merge rules.

Indexer search request create procedure

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Search requests must validate identifiers, torznab modes, and category filters per the ERD.
    • Policy snapshots must be reusable with deterministic hashing and rule ordering.
  • Decision:
    • Add search_request_create_v1 with request validation, policy snapshot materialization, category/domain intersection, and runnable indexer gating.
    • Return both search_request_public_id and the request policy set public id for downstream orchestration.
  • Consequences:
    • Search requests short-circuit to finished when domain/allowlist constraints or policy allowlists eliminate all runnable indexers.
    • Invalid identifier or category combinations fail fast with explicit error codes.
  • Follow-up:
    • Implement search_result_ingest_v1 and canonical maintenance procedures.
    • Add SQL harness tests for search_request creation paths and edge cases.

Task record

  • Motivation:
    • Enable search request creation with ERD-compliant validation, policy snapshotting, and deterministic scheduling inputs.
  • Design notes:
    • Policy snapshots are hashed from ordered scope/rule lists and reused when the hash exists.
    • Torznab category handling preserves requested/effective lists and treats 8000 as catch-all.
    • Runnable indexers are filtered by profile allow/block rules, domain constraints, and policy allow_indexer_instance(require).
  • Test coverage summary:
    • Not yet added; requires SQL stored-proc harness coverage for identifier parsing, category filtering, and runnable gating.
  • Observability updates:
    • None in this change (DB-only procedure).
  • Risk & rollback plan:
    • Risk: invalid gating logic could short-circuit legitimate searches. Rollback by reverting migration 0050 and re-running migrations.
  • Dependency rationale:
    • No new dependencies added.
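The hash-then-reuse behavior for policy snapshots can be sketched in Rust. This is illustrative only: the real procedure hashes ordered scope/rule lists inside SQL, and `DefaultHasher` is not stable across Rust releases, so a persisted digest would need a stable algorithm such as SHA-256.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Compute a deterministic snapshot hash from policy scopes and rules.
/// Sorting before hashing makes the result order-independent, so an
/// existing snapshot with the same hash can be reused instead of inserted.
fn policy_snapshot_hash(scopes: &[&str], rules: &[&str]) -> u64 {
    let mut scopes: Vec<&str> = scopes.to_vec();
    let mut rules: Vec<&str> = rules.to_vec();
    scopes.sort_unstable();
    rules.sort_unstable();

    let mut hasher = DefaultHasher::new();
    scopes.hash(&mut hasher);
    rules.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // The same inputs in a different order produce the same hash.
    let a = policy_snapshot_hash(&["global", "instance"], &["allow_x", "block_y"]);
    let b = policy_snapshot_hash(&["instance", "global"], &["block_y", "allow_x"]);
    assert_eq!(a, b);
}
```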

Indexer job runner follow-up procedures

  • Status: Accepted
  • Date: 2026-01-26
  • Context:
    • Job runner needs procedures for policy snapshot GC, refcount repair, and rate limit state purge.
    • These procedures are used by scheduled jobs and must align with ERD retention rules.
  • Decision:
    • Add job runner procedures for policy snapshot GC, refcount repair, and rate limit state purge.
    • Keep wrappers without version suffix to preserve stable entry points.
  • Consequences:
    • Policy snapshot rows with ref_count=0 will be purged after 30 days.
    • Refcount repair can correct drift between snapshots and active searches.
    • Rate limit state rows older than 6 hours are cleaned up.
  • Follow-up:
    • Implement remaining job runner procedures (RSS poll/backfill, connectivity refresh, reputation rollup).

Task record

  • Motivation:
    • Close remaining ERD-required job runner procedures that do not depend on external systems.
  • Design notes:
    • Refcount repair uses search_request counts as source of truth.
    • GC window is fixed at 30 days per ERD; rate limit purge uses 6-hour cutoff.
  • Test coverage summary:
    • Not yet added; requires SQL harness coverage for refcount updates and purge cutoffs.
  • Observability updates:
    • None in this change (DB-only procedures).
  • Risk & rollback plan:
    • Risk: accidental over-purge if cutoff logic is wrong. Rollback by reverting migration 0051.
  • Dependency rationale:
    • No new dependencies added.
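The two retention windows above (30-day snapshot GC, 6-hour rate-limit purge) reduce to simple age predicates. The function names below are hypothetical; the actual cutoff logic runs inside the job runner procedures in migration 0051.

```rust
use std::time::Duration;

// Retention windows fixed by the ERD: snapshot GC after 30 days at
// ref_count = 0, rate-limit state purge after 6 hours.
const SNAPSHOT_GC_WINDOW: Duration = Duration::from_secs(30 * 24 * 60 * 60);
const RATE_LIMIT_PURGE_WINDOW: Duration = Duration::from_secs(6 * 60 * 60);

/// A snapshot row is purgeable only when unreferenced AND past the window.
fn snapshot_is_purgeable(ref_count: u32, age: Duration) -> bool {
    ref_count == 0 && age >= SNAPSHOT_GC_WINDOW
}

/// Rate-limit state is purged purely on age.
fn rate_limit_state_is_purgeable(age: Duration) -> bool {
    age >= RATE_LIMIT_PURGE_WINDOW
}

fn main() {
    assert!(snapshot_is_purgeable(0, Duration::from_secs(31 * 24 * 60 * 60)));
    // A referenced snapshot is never GC'd, no matter how old.
    assert!(!snapshot_is_purgeable(1, Duration::from_secs(31 * 24 * 60 * 60)));
    assert!(rate_limit_state_is_purgeable(Duration::from_secs(7 * 60 * 60)));
}
```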

Indexer executor handoff stored procedures

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • External executor work (RSS polling and indexer test probes) must be orchestrated through stored procedures, with clear concurrency control and auditable outcomes.
    • The ERD requires separate claim/apply phases so the database remains the single source of truth while network calls run outside the DB.
    • Secrets must remain encrypted at rest and only surfaced to the executor via explicit read procedures.
  • Decision:
    • Add RSS polling claim/apply procedures and indexer test prepare/finalize procedures.
    • Provide a secret read procedure for executor access, allowing system callers to pass a NULL actor while still enforcing revocation checks.
    • Keep procedure inputs/outputs aligned with ERD contract and use outbound_request_log for telemetry.
    • Alternatives considered:
      • Single procedure that performs polling/tests and logging inside the DB.
      • Executor-side direct table access without stored procedures.
  • Consequences:
    • Positive outcomes:
      • Clear concurrency boundaries for polling/test work with SKIP LOCKED claims.
      • Consistent logging and scheduling semantics driven by the ERD.
    • Risks or trade-offs:
      • Requires executor code to implement the two-phase workflow and handle retries.
      • Adds more stored procedure surface area to maintain.
  • Follow-up:
    • Implement migrations for RSS poll claim/apply, indexer test prepare/finalize, and secret read procedures.
    • Update checklist tracking and verify integration tests once executor wiring lands.
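The claim/apply split can be sketched as a small executor loop. Everything here is an assumption about shape, not the real API: the actual claim comes from a stored procedure using `FOR UPDATE SKIP LOCKED`, and apply persists the outcome plus scheduling state; the key property shown is that network I/O happens between the two DB calls, outside any transaction.

```rust
struct Claim {
    subscription_id: u64,
    feed_url: String,
}

enum PollOutcome {
    Success { items: u32 },
    Failure { error_code: String },
}

trait PollStore {
    /// Maps to an rss_poll claim procedure: at most one claimable row,
    /// skipped if another worker already holds the lock.
    fn claim(&mut self) -> Option<Claim>;
    /// Maps to an rss_poll apply procedure: records the outcome and
    /// reschedules the subscription.
    fn apply(&mut self, subscription_id: u64, outcome: PollOutcome);
}

fn run_poll_cycle(store: &mut dyn PollStore, fetch: impl Fn(&str) -> PollOutcome) -> u32 {
    let mut polled = 0;
    while let Some(claim) = store.claim() {
        // Network I/O happens here, outside any DB transaction.
        let outcome = fetch(&claim.feed_url);
        store.apply(claim.subscription_id, outcome);
        polled += 1;
    }
    polled
}

// In-memory stand-in for the database, for demonstration only.
struct MemStore {
    pending: Vec<Claim>,
    applied: Vec<u64>,
}

impl PollStore for MemStore {
    fn claim(&mut self) -> Option<Claim> {
        self.pending.pop()
    }
    fn apply(&mut self, id: u64, _outcome: PollOutcome) {
        self.applied.push(id);
    }
}

fn main() {
    let mut store = MemStore {
        pending: vec![
            Claim { subscription_id: 1, feed_url: "https://example/rss/1".into() },
            Claim { subscription_id: 2, feed_url: "https://example/rss/2".into() },
        ],
        applied: Vec::new(),
    };
    let polled = run_poll_cycle(&mut store, |_url| PollOutcome::Success { items: 0 });
    assert_eq!(polled, 2);
    assert_eq!(store.applied.len(), 2);
}
```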

Indexer Tag API Surface

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • Indexer tag stored procedures exist but there is no HTTP surface or service wiring.
    • The API layer needs a DI-friendly facade to keep handlers thin and testable.
    • Errors must use constant messages with structured context fields.
  • Decision:
    • Introduce an indexer facade trait in revaer-api and implement it in revaer-app.
    • Add /v1/indexers/tags create/update/delete endpoints using stored procedures.
    • Publish tag DTOs in revaer-api-models and update OpenAPI.
  • Consequences:
    • API callers can manage indexer tags without direct database access.
    • API server construction now requires an indexer facade dependency.
    • Tests and wiring must supply a stub indexer implementation.
  • Follow-up:
    • Extend indexer API coverage for definitions, instances, routing, secrets, and policies.
    • Add list/read endpoints once read procedures are defined.

Motivation

Provide a clean, testable HTTP surface for indexer tag management that aligns with the ERD and stored-procedure contract.

Design notes

  • The API layer delegates to a narrow IndexerFacade trait to keep handlers minimal.
  • Tag operations pass the system actor UUID because per-user identity is not yet plumbed through.
  • Service errors carry error codes and SQLSTATE without interpolating values into messages.

Test coverage summary

  • Added handler tests for tag create and error mapping (bad request/not found).
  • Existing API tests updated to supply a stub indexer facade.

Observability updates

  • Indexer service logs storage/authorization failures with structured fields (operation, error_code, sqlstate).

Risk & rollback plan

  • Risk: new routes expose tag mutations before full RBAC is enforced.
  • Rollback: revert the tag handler/routes and facade wiring commits.

Dependency rationale

  • No new dependencies added; existing revaer-api, revaer-app, and data-layer crates are reused.

143 Task: Indexer procedure fixes (RSS apply, base score refresh, normalization)

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • RSS poll apply failed under outer-join locking and returned non-domain errors.
    • Base score refresh queried a non-existent canonical_torrent_id on durable sources.
    • Title normalization regex boundaries did not strip resolution tokens consistently.
    • Import job status aggregation hit ambiguous column references.
    • Factory reset did not re-seed indexer defaults, causing tag operations to fail.
  • Decision:
    • Patch stored procedures with targeted fixes and add a new migration to apply them.
    • Keep Rust wrappers aligned with enum/array casts and session config expectations.
    • Extend factory reset to reseed indexer defaults and system actor data.
  • Consequences:
    • RSS poll apply now locks the subscription row without outer-join errors.
    • Base score refresh derives canonical/source pairs from context scores and recent sources.
    • Title normalization removes known release tokens reliably.
    • Import job status aggregation no longer fails on ambiguity.
    • Factory reset restores seed data needed for indexer tag operations.
  • Follow-up:
    • Re-run full CI and UI E2E gates.
    • Monitor RSS apply logs for any unexpected lock contention.

Motivation

Fix indexer data-layer regressions that caused RSS polling to fail before domain errors surfaced, and align stored procedures with the canonical/source relationships defined in ERD_INDEXERS.md.

Design notes

  • Reworked rss_poll_apply_v1 to lock only the subscription row (FOR UPDATE OF sub), leaving the other joined tables unlocked.
  • Updated base-score refresh to use durable source recency plus context-score links for canonical mapping, keeping scoring inputs on canonical_torrent_source.
  • Corrected normalize_title_v1 regex boundaries and whitespace patterns using explicit escapes.
  • Qualified import_job_get_status_v1 result aggregation to avoid status ambiguity.
  • Updated RSS apply wrapper casts and test-time secret config to match runtime expectations.
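The normalization intent can be sketched in Rust without a regex, using separator splitting plus a token blocklist. The token list and separator set are illustrative assumptions; the real normalize_title_v1 is a SQL procedure whose regex boundaries were the subject of this fix.

```rust
/// Lowercase a release title, treat dots/underscores/hyphens as separators,
/// and drop known release tokens such as resolution markers.
fn normalize_title(title: &str) -> String {
    const RELEASE_TOKENS: &[&str] = &[
        "2160p", "1080p", "720p", "480p", "x264", "x265", "bluray", "webrip",
    ];
    title
        .to_lowercase()
        .split(|c: char| c.is_whitespace() || c == '.' || c == '_' || c == '-')
        .filter(|tok| !tok.is_empty() && !RELEASE_TOKENS.contains(tok))
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    assert_eq!(
        normalize_title("Some.Show.S01E02.1080p.WEBRip.x265"),
        "some show s01e02"
    );
    assert_eq!(normalize_title("Plain Title"), "plain title");
}
```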

Test coverage summary

  • just ci
  • just ui-e2e

Observability updates

  • None.

Risk & rollback plan

  • Risk: base-score refresh may skip canonicals without context-score links.
  • Rollback: apply a follow-up migration restoring previous procedure bodies and revert wrapper changes if needed.

Dependency rationale

  • No new dependencies.

144 Indexer domain mapping and DI boundaries

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • Indexer work spans stored procedures, API surfaces, UI/CLI usage, and background jobs.
    • ERD_INDEXERS.md requires clear domain boundaries and injected dependencies.
    • Testability and stored-proc-only data access must stay consistent across crates.
  • Decision:
    • Map indexer domains to existing crates and define DI seams for each domain service.
    • Version stored procedures with _v1 suffixes, exposing stable wrapper functions that carry no version suffix.
  • Consequences:
    • Clear ownership reduces cross-crate coupling and supports isolated testing.
    • API/UI/CLI can share a single facade surface without leaking database details.
    • Procedure evolution can continue without breaking callers by updating wrappers.
  • Follow-up:
    • Implement per-domain facades in revaer-api and wire concrete implementations in revaer-app.
    • Add tests per facade and for stored-proc wrappers to enforce error-style consistency.

Domain-to-crate mapping

  • revaer-data:
    • Stored-proc wrappers and result mapping for indexer domains under crates/revaer-data/src/indexers/*.
    • Error types scoped to data access with constant messages and structured context.
  • revaer-api:
    • HTTP handlers under crates/revaer-api/src/http/handlers/indexers/*.
    • Domain facades and traits under crates/revaer-api/src/app/indexers/* (API-safe DTOs only).
  • revaer-app:
    • Bootstrap wiring in crates/revaer-app/src/bootstrap.rs for concrete data-layer implementations.
  • revaer-cli:
    • CLI commands call API endpoints only; no direct data access.
  • revaer-ui:
    • UI uses services/* and feature slices; no direct data access.
  • revaer-events / revaer-telemetry:
    • Event publication and metrics for indexer operations at the API boundary.

DI boundaries (facade surface)

Expose API-facing traits in revaer-api::app::indexers and inject concrete implementations from revaer-app:

  • IndexerDefinitionsService: definitions catalog and field metadata.
  • IndexerInstancesService: create/update instances, RSS settings, field values, tag/media-domain binds.
  • RoutingPolicyService: create/update policies, params, and secrets.
  • SecretsService: create/rotate/revoke/read secrets and bindings.
  • TagsService: create/update/delete tags.
  • SearchProfilesService: profiles, trust tiers, domain/tag filters, and policy-set wiring.
  • PoliciesService: policy sets/rules management and snapshot refresh hooks.
  • TorznabService: torznab instance lifecycle and category mappings.
  • ImportsService: import job lifecycle and status reporting.
  • JobsService: job claim/run entry points for indexer background jobs.
  • CanonicalizationService: canonical maintenance and disambiguation rules.
  • ReputationService: connectivity and reputation rollups.

All facades return Result<T, E> with constant error messages and structured context fields. No facade constructs concrete dependencies; all implementations are injected from bootstrap.
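A minimal sketch of one such facade follows. `TagsService` appears in the list above, but the method signature, error shape, and stub are assumptions (identifiers are shown as `u128` to keep the sketch dependency-free; the real traits use UUID types). The point is the pattern: constant error messages, machine-readable context fields, and an injectable stub for handler tests.

```rust
use std::fmt;

#[derive(Debug)]
struct ServiceError {
    message: &'static str,   // constant, never interpolated
    error_code: String,      // structured context from the DB DETAIL
    sqlstate: Option<String>,
}

impl fmt::Display for ServiceError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.message)
    }
}

trait TagsService {
    fn create_tag(&self, actor: u128, tag_key: &str) -> Result<u128, ServiceError>;
}

/// Stub usable in handler tests; the concrete implementation (injected
/// from bootstrap) would call the tag create stored-proc wrapper instead.
struct StubTags;

impl TagsService for StubTags {
    fn create_tag(&self, _actor: u128, tag_key: &str) -> Result<u128, ServiceError> {
        if tag_key.is_empty() {
            return Err(ServiceError {
                message: "Failed to create tag",
                error_code: "tag_key_empty".into(),
                sqlstate: Some("P0001".into()),
            });
        }
        Ok(42)
    }
}

fn main() {
    let svc = StubTags;
    assert!(svc.create_tag(0, "linux-iso").is_ok());
    let err = svc.create_tag(0, "").unwrap_err();
    assert_eq!(err.error_code, "tag_key_empty");
    assert_eq!(err.to_string(), "Failed to create tag");
}
```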

Motivation

Document and lock the indexer architecture mapping needed to implement the ERD without leaking database details or violating dependency-injection rules.

Design notes

  • Reuse existing crates/modules; avoid introducing new crates until feature growth demands it.
  • Keep stored-proc wrappers in revaer-data and expose only API-safe DTOs at the HTTP boundary.

Test coverage summary

  • just ci
  • just ui-e2e

Observability updates

  • None.

Risk & rollback plan

  • Risk: documentation drift if code moves without updating this ADR.
  • Rollback: revert this ADR and restore checklist items to unchecked.

Dependency rationale

  • No new dependencies.

145 Indexer stored-proc test harness

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • Indexer stored-proc wrappers have extensive integration tests with repeated DB setup.
    • ERD_INDEXERS_CHECKLIST requires a transactional, seeded harness and deterministic clocks.
    • We need consistent setup without introducing new dependencies.
  • Decision:
    • Add a shared IndexerTestDb helper in revaer-data::indexers (test-only).
    • Centralize Postgres startup, migrations, and UTC session configuration.
    • Capture a deterministic now() value after migrations for tests that need time inputs.
  • Consequences:
    • Tests share a single harness, reducing setup drift and boilerplate.
    • Deterministic timestamps are available without leaking production code changes.
    • Test-only helper code is now part of the indexer module.
  • Follow-up:
    • Use IndexerTestDb::now() in additional tests that depend on timestamps.
    • Add explicit transaction helpers if we need per-test rollbacks beyond isolated DBs.
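The deterministic-clock idea can be illustrated as follows. This is a hedged sketch of the helper's shape, not the real IndexerTestDb: the actual harness also starts a disposable Postgres, runs migrations, and sets the session time zone to UTC, and only the `now()` method name comes from this ADR.

```rust
use std::time::SystemTime;

struct IndexerTestDb {
    // The real harness also holds database/pool handles here.
    captured_now: SystemTime,
}

impl IndexerTestDb {
    fn new() -> Self {
        // The real helper captures now() from the database *after* running
        // migrations, so every timestamp input in a test derives from one
        // deterministic value.
        Self { captured_now: SystemTime::now() }
    }

    /// Deterministic clock: always returns the captured instant.
    fn now(&self) -> SystemTime {
        self.captured_now
    }
}

fn main() {
    let db = IndexerTestDb::new();
    // Repeated calls return the same instant, unlike SystemTime::now().
    assert_eq!(db.now(), db.now());
}
```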

Motivation

Indexer stored procedures are covered by integration tests that previously duplicated database startup and migration logic. The checklist calls for a consistent harness with deterministic clocks and seeded data. A shared helper keeps the setup aligned and makes it easier to maintain.

Design notes

  • Tests use IndexerTestDb to keep the disposable database alive for the test duration.
  • The helper configures session time zone to UTC and captures a single now() value after migrations for deterministic timestamp inputs.
  • No production code paths or runtime behavior are changed.

Test coverage summary

  • just ci
  • just ui-e2e

Observability updates

  • None.

Risk & rollback plan

  • Risk: tests may rely on helper behavior and need updates if the harness evolves.
  • Rollback: revert this ADR and restore per-test setup helpers.

Dependency rationale

  • No new dependencies.

Indexer error-code taxonomy

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • Stored procedures already raise exceptions with DETAIL codes, but there is no single, documented taxonomy for the values or how the API must surface them.
    • AGENTS.md requires constant error messages, structured context fields, and stable error mapping for clients and tests.
  • Decision:
    • Define a shared error-code taxonomy for indexer stored procedures and API responses:
      • Stored procedures:
        • Domain/validation/authorization failures raise ERRCODE = 'P0001' with a constant MESSAGE of the form Failed to <operation> and DETAIL set to the error code.
        • Infrastructure/constraint errors use native SQLSTATE codes (e.g., 23505, 23503) and do not override the Postgres message.
        • DETAIL values are lower_snake_case, <= 64 chars, and never embed user data.
      • API responses:
        • Use RFC 9457 Problem Details responses with constant title/detail strings.
        • Include error_code (from the DB DETAIL) and sqlstate as context fields when present, never interpolated into human-readable messages.
        • Validation errors prefer invalid_params with constant messages; contextual inputs travel in context fields.
    • Adopt the following canonical error-code groups (examples are non-exhaustive):
      • Missing/empty/length: *_missing, *_empty, *_too_long, *_too_short.
      • Format/normalization: *_not_lowercase, *_invalid_format, *_invalid.
      • Lookup/identity: *_not_found, *_reference_missing, unknown_key.
      • Conflicts/state: *_already_exists, *_deleted, *_in_use, *_disabled.
      • Unsupported/blocked: unsupported_*, *_disallowed, *_insufficient.
      • Auth/actor: actor_missing, actor_not_found, actor_unauthorized.
  • Consequences:
    • Clients can reliably map failures by error_code while keeping UI text constant and localizable.
    • Tests can assert stable error_code/sqlstate values without parsing messages.
  • Follow-up:
    • Enforce taxonomy compliance in new stored procedures and API handlers.
    • Extend integration tests to cover new error codes as endpoints are added.

Task record

  • Motivation:
    • Provide a single, stable taxonomy for indexer errors so DB, API, CLI, and UI agree on machine-readable codes while keeping messages constant.
  • Design notes:
    • DB procs keep MESSAGE constant and carry machine codes in DETAIL.
    • API handlers surface error_code/sqlstate via ProblemDetails.context and keep detail text constant for localization.
  • Test coverage summary:
    • Documentation-only change; no new tests added.
  • Observability updates:
    • Errors continue to log with structured fields (error_code, sqlstate) at the origin.
  • Risk & rollback plan:
    • Risk: taxonomy drift if future procs introduce ad-hoc codes. Rollback by reverting this ADR and aligning new procedures to existing ad-hoc behavior.
  • Dependency rationale:
    • No new dependencies added.
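The taxonomy above lends itself to a suffix/prefix classifier and a DETAIL validity check. The group labels are the document's own; the functions themselves are a sketch of how a client or test harness might consume the codes, not shipped code.

```rust
/// Map a DETAIL error code onto one of the canonical taxonomy groups.
/// Order matters: actor_* and *_reference_missing must be checked before
/// the generic *_missing suffix.
fn error_code_group(code: &str) -> &'static str {
    if code.starts_with("actor_") {
        "auth/actor"
    } else if code.ends_with("_not_found")
        || code.ends_with("_reference_missing")
        || code == "unknown_key"
    {
        "lookup/identity"
    } else if code.ends_with("_missing") || code.ends_with("_empty")
        || code.ends_with("_too_long") || code.ends_with("_too_short")
    {
        "missing/empty/length"
    } else if code.ends_with("_already_exists") || code.ends_with("_deleted")
        || code.ends_with("_in_use") || code.ends_with("_disabled")
    {
        "conflicts/state"
    } else if code.starts_with("unsupported_") || code.ends_with("_disallowed")
        || code.ends_with("_insufficient")
    {
        "unsupported/blocked"
    } else {
        "format/normalization"
    }
}

/// DETAIL values must be lower_snake_case and at most 64 characters.
fn detail_is_valid(code: &str) -> bool {
    !code.is_empty()
        && code.len() <= 64
        && code
            .chars()
            .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '_')
}

fn main() {
    assert_eq!(error_code_group("tag_key_empty"), "missing/empty/length");
    assert_eq!(error_code_group("actor_unauthorized"), "auth/actor");
    assert_eq!(error_code_group("policy_set_not_found"), "lookup/identity");
    assert!(detail_is_valid("tag_already_exists"));
    assert!(!detail_is_valid("Tag_Already_Exists"));
}
```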

Indexer v1 scope enforcement

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The indexer ERD defines explicit v1 scope and non-goals that must guide architecture and route planning.
    • The implementation plan needs a clear guardrail so API/UI work does not drift into media management or other out-of-scope features.
  • Decision:
    • Confirm that indexer v1 architecture and route planning are constrained to the ERD scope:
      • Indexers, search, policies, secrets, routing, rate limiting, Torznab compatibility, and reliability/telemetry flows are in scope.
      • Media management features remain out of scope for v1 and require a future ADR before any routes or services are added.
    • Document the scope rule as a checklist gate and require any scope expansion to add a new ADR and update ERD_INDEXERS.md.
  • Consequences:
    • Implementation stays aligned with the ERD and avoids premature media management APIs.
    • Route planning focuses on indexer and search workflows with explicit boundaries.
  • Follow-up:
    • Keep ERD_INDEXERS_CHECKLIST.md in sync with any scope changes.
    • Add ADRs for any new surfaces that expand beyond v1 scope.

Task record

  • Motivation:
    • Prevent scope creep and ensure indexer architecture and route planning remain consistent with v1 goals and non-goals.
  • Design notes:
    • Architecture and routes are limited to indexer/search/proxy/rate-limit/Torznab needs.
    • Media management endpoints are intentionally excluded in v1.
  • Test coverage summary:
    • Documentation-only change; no new tests added.
  • Observability updates:
    • No changes; existing telemetry plans remain in effect.
  • Risk & rollback plan:
    • Risk: future work bypasses the scope gate. Rollback by reasserting scope in a follow-up ADR and pruning out-of-scope routes.
  • Dependency rationale:
    • No new dependencies added.

Indexer schema JSON ban verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD bans JSON/JSONB storage for indexer data.
    • We need to confirm migrations comply before expanding API and service layers.
  • Decision:
    • Verify all indexer migrations avoid JSON/JSONB column types and document the result.
    • Treat JSON/JSONB usage as a hard failure in schema reviews; any exception requires a future ADR and ERD update.
  • Consequences:
    • The schema remains normalized and avoids opaque JSON storage.
    • Future migrations must continue to use normalized tables and enums.
  • Follow-up:
    • Re-check JSON/JSONB usage whenever new migrations are added.

Task record

  • Motivation:
    • Ensure the schema adheres to the ERD prohibition on JSON/JSONB types.
  • Design notes:
    • Reviewed the migration set and confirmed no JSON/JSONB column types are present.
  • Test coverage summary:
    • Documentation-only confirmation; no new tests added.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: future migrations introduce JSON types. Rollback by reverting offending migration and normalizing the data model.
  • Dependency rationale:
    • No new dependencies added.

Indexer public-id and bigint identity verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD mandates bigint identity primary keys and UUID public IDs for specific indexer tables, while indexer_definition must not expose a public ID in v1.
    • API and service layers depend on stable public identifiers without leaking internal bigint keys.
  • Decision:
    • Verify the following tables use BIGINT GENERATED ALWAYS AS IDENTITY primary keys and enforce UUID public IDs (unique) where required:
      • app_user
      • indexer_instance
      • routing_policy
      • policy_set
      • policy_rule
      • search_profile
      • search_request
      • canonical_torrent
      • canonical_torrent_source
      • torznab_instance
      • rate_limit_policy
      • secret
    • Confirm indexer_definition has no public ID in v1.
  • Consequences:
    • Indexer APIs can safely use UUIDs/keys without exposing internal bigint IDs.
    • Table definitions align with ERD identity rules, reducing migration drift.
  • Follow-up:
    • Re-verify new tables against this rule before adding API or UI surfaces.

Task record

  • Motivation:
    • Validate ERD identity/public ID rules before expanding indexer-facing APIs.
  • Design notes:
    • Verified table definitions in migrations 0012, 0014, 0015, 0016, 0018, 0019, 0022, and 0023 include bigint identity PKs and required public IDs.
    • Verified indexer_definition in 0013 contains no public ID column.
  • Test coverage summary:
    • Documentation-only verification; no new tests added.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: future migrations add missing or redundant public IDs. Rollback by reverting the offending migration and revalidating against the ERD.
  • Dependency rationale:
    • No new dependencies added.

Indexer soft-delete coverage verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD requires soft-delete support via deleted_at on specific indexer tables.
    • We need confirmation before expanding API and service layers that assume soft deletes.
  • Decision:
    • Verify deleted_at exists on all required tables:
      • indexer_instance
      • routing_policy
      • policy_set
      • search_profile
      • tag
      • torznab_instance
      • rate_limit_policy
  • Consequences:
    • Soft-delete semantics are available for indexer configuration entities.
    • API handlers can depend on deleted_at for active filtering.
  • Follow-up:
    • Keep soft-delete requirements in mind for any new indexer-facing tables.

Task record

  • Motivation:
    • Confirm the ERD soft-delete rule is implemented consistently in migrations.
  • Design notes:
    • Verified deleted_at columns in migrations 0012, 0014, 0016, 0018, and 0019.
  • Test coverage summary:
    • Documentation-only verification; no new tests added.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: future migrations omit deleted_at. Roll back by correcting the migration and re-running schema checks.
  • Dependency rationale:
    • No new dependencies added.

Indexer audit fields and timestamp defaults verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD requires audit fields (created/updated/changed by) to be non-null where used and mandates created_at/updated_at defaults when those columns exist.
    • We need to confirm the indexer schema matches these requirements before expanding APIs.
  • Decision:
    • Verify audit fields are present and non-null where required, and timestamp defaults are set on indexer tables that include created_at/updated_at.
    • Confirmed examples (migrations 0012–0023):
      • Audit fields:
        • tag, routing_policy, indexer_instance, search_profile, policy_set, policy_rule include created_by_user_id/updated_by_user_id as NOT NULL.
        • indexer_instance_field_value includes updated_by_user_id as NOT NULL.
        • canonical_disambiguation_rule includes created_by_user_id as NOT NULL.
        • config_audit_log includes changed_by_user_id as NOT NULL.
      • Timestamp defaults:
        • Tables with created_at/updated_at columns define them as NOT NULL DEFAULT now(), including tag, routing_policy, indexer_instance, search_profile, policy_set, policy_rule, canonical_torrent, canonical_torrent_source, torznab_instance, and rate_limit_policy.
  • Consequences:
    • Schema audit columns are enforced consistently and can be trusted by API and UI layers.
    • Timestamp defaults align with ERD expectations for lifecycle tracking.
  • Follow-up:
    • Re-verify audit/timestamp columns for any new indexer migrations.

Task record

  • Motivation:
    • Establish that audit fields and lifecycle timestamps are enforced per the ERD.
  • Design notes:
    • Verified audit field presence and NOT NULL constraints in migrations 0012, 0014, 0016, 0019, 0021, and 0022.
    • Verified created_at/updated_at defaults in the same migration set.
  • Test coverage summary:
    • Documentation-only verification; no new tests added.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: future tables omit audit fields or defaults. Roll back by correcting the schema migration and revalidating against the ERD.
  • Dependency rationale:
    • No new dependencies added.

Indexer API boundary public-id verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD requires API boundaries to accept only UUID public IDs or keys and never expose internal bigint identities.
    • We need to confirm the current indexer API surface and stored-procedure entry points follow this rule.
  • Decision:
    • Verified indexer API DTOs and handlers accept only UUIDs or keys, never internal bigint IDs.
    • Confirmed API DTOs for tags use Uuid for tag_public_id and string keys (TagCreateRequest, TagUpdateRequest, TagDeleteRequest) and that the indexer facade methods take UUID actor identities plus UUID/tag key inputs.
    • Confirmed indexer stored-procedure wrappers (deployment_init, tag_*, routing_policy_*, rate_limit_*, search_*, secret_*) accept UUID public IDs and key strings exclusively.
  • Consequences:
    • API and stored-procedure boundaries comply with the ERD, keeping internal bigint identities private to the database layer.
    • Client integrations can rely on UUIDs/keys without leaking internal IDs.
  • Follow-up:
    • Re-verify new indexer endpoints and procedures before expanding the API.

Task record

  • Motivation:
    • Validate API/public boundaries adhere to ERD public-id exposure rules.
  • Design notes:
    • Checked tag API DTOs in revaer-api-models and the indexer facade/handlers in revaer-api for UUID-only identifiers.
    • Reviewed wrapper procs in migration 0064_indexer_wrapper_procs.sql.
  • Test coverage summary:
    • Documentation-only verification; no new tests added.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: future endpoints accidentally expose internal IDs. Roll back by reverting the API shape and re-validating with stored-proc interfaces.
  • Dependency rationale:
    • No new dependencies added.

Indexer external reference public-id verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD requires external references (policy rules, disambiguation rules) to store UUID public IDs or keys instead of internal bigint identities.
    • We need to confirm the schema matches that rule before expanding policy and canonicalization workflows.
  • Decision:
    • Verified policy and disambiguation tables store UUIDs or keys only for external references.
    • Policy rules capture external identifiers via match_value_uuid or lowercase match_value_text and policy_rule_value_set_item.value_uuid.
    • Policy snapshots store policy_rule_public_id UUIDs.
    • Canonical disambiguation rules store UUIDs only when referencing canonical_public_id, otherwise text hashes.
  • Consequences:
    • External reference data can be safely exposed in APIs without leaking internal bigint IDs.
    • Internal joins still rely on bigint PKs, preserving database integrity.
  • Follow-up:
    • Re-verify future policy/disambiguation changes keep UUID/key-only references.

Task record

  • Motivation:
    • Validate that external references never store internal bigint IDs.
  • Design notes:
    • Reviewed policy_rule and policy_rule_value_set_item columns in 0019_policy_sets.sql.
    • Reviewed canonical_disambiguation_rule in 0022_indexer_canonicalization.sql.
    • Reviewed policy_snapshot_rule usage of policy_rule_public_id.
  • Test coverage summary:
    • Documentation-only verification; no new tests added.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: future migrations introduce bigint references in external-facing columns. Roll back by reverting schema changes and updating procedures.
  • Dependency rationale:
    • No new dependencies added.

Indexer system sentinel usage verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD requires system actions to use a sentinel user identifier (user_id = 0 or the all-zero UUID) instead of NULL.
    • We need to confirm the indexer schema and stored procedures follow this rule before expanding automation workflows.
  • Decision:
    • Verified the system sentinel user is seeded with user_id = 0 and the all-zero UUID public ID in deployment seed and initialization migrations.
    • Confirmed stored procedures fall back to user_id = 0 for system-driven actions (e.g., search request creation).
    • Confirmed data-layer tests use the all-zero UUID sentinel when invoking indexer procedures.
  • Consequences:
    • System actions can be recorded without NULL audit fields, aligning with the ERD audit requirements.
    • Downstream API and UI layers can safely represent system activity with the sentinel UUID.
  • Follow-up:
    • Re-verify new procedures or automation jobs continue to use the sentinel user IDs instead of NULL.

Task record

  • Motivation:
    • Validate that system actions always carry the sentinel user identifier.
  • Design notes:
    • Seed/init migrations 0030_indexer_seed_data.sql, 0032_indexer_deployment_init.sql, and 0067_factory_reset_seed_defaults.sql insert user_id = 0 with the all-zero UUID.
    • search_request_create_v1 defaults to system_user_id := 0 when the actor is absent.
    • Data access tests (e.g., crates/revaer-data/src/indexers/deployment.rs) exercise stored procedures with the sentinel UUID.
  • Test coverage summary:
    • Documentation-only verification; existing tests cover system-user usage.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: new procs may accept NULL actors. Roll back by enforcing sentinel defaults and updating callers/tests.
  • Dependency rationale:
    • No new dependencies added.
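The sentinel fallback described above amounts to a single substitution at the audit boundary. The helper name is hypothetical; the nil-UUID constant and the "sentinel instead of NULL" rule are taken directly from the ERD.

```rust
/// All-zero UUID sentinel used for system actions (paired with user_id = 0).
const SYSTEM_USER_UUID: &str = "00000000-0000-0000-0000-000000000000";

/// Resolve an optional actor to an audit identity: an absent actor maps to
/// the sentinel rather than NULL, mirroring the search_request_create_v1
/// fallback to system_user_id := 0.
fn audit_actor(actor: Option<&str>) -> &str {
    actor.unwrap_or(SYSTEM_USER_UUID)
}

fn main() {
    assert_eq!(audit_actor(None), SYSTEM_USER_UUID);
    assert_eq!(
        audit_actor(Some("11111111-1111-1111-1111-111111111111")),
        "11111111-1111-1111-1111-111111111111"
    );
}
```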

Indexer text caps and lowercase key enforcement verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD mandates text column caps and lowercase enforcement for key/slug fields (varchar(128) keys, varchar(256) names, varchar(2048) URLs, varchar(512) regex/text patterns, varchar(1024) notes).
    • We need to confirm the schema enforces these caps and lowercase checks before expanding APIs and UI validation.
  • Decision:
    • Verified key/slug fields use VARCHAR(128) with lowercase CHECKs where required (e.g., tag.tag_key, indexer_definition.upstream_slug, indexer_definition_field.name).
    • Verified display names are capped at VARCHAR(256) across core catalog tables (e.g., tag.display_name, indexer_definition.display_name, search_profile.display_name, policy_set.display_name).
    • Verified URL fields use VARCHAR(2048) (e.g., search_request_source_observation details_url, download_url, magnet_uri).
    • Verified regex/pattern text caps at VARCHAR(512) and notes/detail caps at VARCHAR(1024) (e.g., indexer_definition_field_validation.text_value, search_request.query_text, search_request.error_detail, policy_rule.rationale).
  • Consequences:
    • Schema enforces ERD text caps and lowercase rules, preventing oversized or improperly cased keys from entering the database.
    • API validation can align with these constraints without risking truncation.
  • Follow-up:
    • Re-verify any new text columns added to the indexer schema.

Task record

  • Motivation:
    • Confirm text caps and lowercase key enforcement align with ERD rules.
  • Design notes:
    • Reviewed 0012_indexer_core.sql (tag_key lower-case CHECK, display name sizes).
    • Reviewed 0013_indexer_definitions.sql (slug/name lowercase CHECKs and text caps).
    • Reviewed 0023_indexer_search_requests.sql for URL/text/detail caps.
    • Reviewed 0019_policy_sets.sql for rationale/text caps.
  • Test coverage summary:
    • Documentation-only verification; no new tests added.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: new columns exceed caps or miss lowercase checks. Roll back by adjusting migrations and re-validating constraints.
  • Dependency rationale:
    • No new dependencies added.
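
As a rough illustration of the cap-and-lowercase rule for key/slug fields, an application-side pre-check could mirror the VARCHAR(128) column plus lowercase CHECK. `validate_key` and `KeyError` are hypothetical names; the schema remains the authoritative enforcement point:

```rust
/// Hypothetical Rust-side mirror of the DB constraints on key/slug fields.
#[derive(Debug, PartialEq)]
enum KeyError {
    TooLong,
    NotLowercase,
}

fn validate_key(key: &str) -> Result<&str, KeyError> {
    // Mirrors the VARCHAR(128) cap.
    if key.chars().count() > 128 {
        return Err(KeyError::TooLong);
    }
    // Mirrors a lowercase CHECK such as `CHECK (tag_key = lower(tag_key))`.
    if key != key.to_lowercase() {
        return Err(KeyError::NotLowercase);
    }
    Ok(key)
}

fn main() {
    assert_eq!(validate_key("linux-iso"), Ok("linux-iso"));
    assert_eq!(validate_key("Linux-ISO"), Err(KeyError::NotLowercase));
    assert_eq!(validate_key(&"a".repeat(129)), Err(KeyError::TooLong));
}
```

Running the pre-check before calling a stored procedure lets the API reject oversized or mis-cased keys with a typed error instead of surfacing a constraint violation.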

Indexer normalized column verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD requires normalized columns (e.g., email_normalized, *_norm) to support consistent lookups and lowercased comparisons.
    • We need to confirm the schema includes the specified normalized fields.
  • Decision:
    • Verified app_user.email_normalized is present and enforced with a lowercase/trim CHECK constraint.
    • Verified generated normalized columns exist where specified in definition metadata (indexer_definition_field_validation.text_value_norm and depends_on_value_plain_norm).
    • Verified normalized identifier storage in search requests via search_request_identifier.id_value_normalized.
  • Consequences:
    • Normalized fields are persisted in the schema for reliable matching and validation logic.
    • Stored procedures can rely on normalized columns without ad-hoc transforms.
  • Follow-up:
    • Ensure any new ERD-defined normalized fields are added with the same constraints.

Task record

  • Motivation:
    • Confirm normalized columns exist for ERD-specified fields.
  • Design notes:
    • Reviewed 0012_indexer_core.sql for email_normalized.
    • Reviewed 0013_indexer_definitions.sql for generated *_norm columns.
    • Reviewed 0023_indexer_search_requests.sql for id_value_normalized.
  • Test coverage summary:
    • Documentation-only verification; no new tests added.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: missing normalized columns can break lookup consistency. Roll back by adding the columns in migrations and updating procedures.
  • Dependency rationale:
    • No new dependencies added.
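
The lowercase/trim normalization behind email_normalized can be sketched as follows; this is a hypothetical helper, and the database's CHECK constraints and generated columns remain authoritative:

```rust
/// Illustrative sketch of the lowercase/trim rule enforced on
/// app_user.email_normalized.
fn normalize_email(raw: &str) -> String {
    raw.trim().to_lowercase()
}

fn main() {
    // Leading/trailing whitespace is stripped and casing is folded,
    // so lookups compare against one canonical form.
    assert_eq!(normalize_email("  Ops@Example.COM "), "ops@example.com");
}
```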

Indexer hash identity rules verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD defines hash identity rules for infohash v1/v2, magnet hashes, title normalization, and title-size fallback hashing.
    • We need to confirm the schema and ingest procedures enforce these rules.
  • Decision:
    • Verified canonical tables enforce hash shapes and lowercase normalization: canonical_torrent and canonical_torrent_source validate infohash and magnet hashes, plus enforce lowercase title_normalized.
    • Verified ingest procedures implement ERD hash derivations: normalize_title_v1, derive_magnet_hash_v1, and compute_title_size_hash_v1 in indexer_search_result_ingest_proc.sql implement normalization, magnet hash derivation, and title-size hashing.
    • Verified identity strategy selection uses infohash v2, infohash v1, magnet hash, or title-size fallback per ERD.
  • Consequences:
    • Hash identity rules are enforced consistently at the DB layer and in ingest logic.
    • Canonicalization can reliably deduplicate sources without depending on caller behavior.
  • Follow-up:
    • Re-verify if hash derivation logic changes or new identity strategies are added.

Task record

  • Motivation:
    • Confirm ERD hash identity rules are implemented in schema and procedures.
  • Design notes:
    • Reviewed 0022_indexer_canonicalization.sql for hash constraints and identity strategy checks.
    • Reviewed 0052_indexer_search_result_ingest_proc.sql for normalization and hash derivation functions.
  • Test coverage summary:
    • Documentation-only verification; existing ingest tests cover hashing paths.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: regressions in hash derivation cause identity splits. Roll back by reverting procedure changes and revalidating constraints.
  • Dependency rationale:
    • No new dependencies added.
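
The identity precedence above (infohash v2, then v1, then magnet hash, then title-size fallback) can be sketched as below. The types are illustrative only; the actual selection is implemented in the ingest stored procedures:

```rust
/// Hypothetical model of the ERD's identity-strategy precedence.
#[derive(Debug, PartialEq)]
enum IdentityStrategy {
    InfohashV2(String),
    InfohashV1(String),
    MagnetHash(String),
    TitleSize(String),
}

struct SourceIdentity {
    infohash_v2: Option<String>,
    infohash_v1: Option<String>,
    magnet_hash: Option<String>,
    /// Always derivable from the normalized title plus size.
    title_size_hash: String,
}

fn select_identity(src: &SourceIdentity) -> IdentityStrategy {
    // Strongest identity wins; title-size is the fallback of last resort.
    if let Some(h) = &src.infohash_v2 {
        IdentityStrategy::InfohashV2(h.clone())
    } else if let Some(h) = &src.infohash_v1 {
        IdentityStrategy::InfohashV1(h.clone())
    } else if let Some(h) = &src.magnet_hash {
        IdentityStrategy::MagnetHash(h.clone())
    } else {
        IdentityStrategy::TitleSize(src.title_size_hash.clone())
    }
}

fn main() {
    let src = SourceIdentity {
        infohash_v2: None,
        infohash_v1: Some("aa".repeat(20)), // 40 hex chars, illustrative
        magnet_hash: None,
        title_size_hash: "fallback-hash".into(),
    };
    assert_eq!(
        select_identity(&src),
        IdentityStrategy::InfohashV1("aa".repeat(20))
    );
}
```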

Indexer secret binding linkage verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD requires secrets to be linked only through secret_binding and forbids inline secret_id columns on other tables.
    • We need to confirm the schema follows this rule before extending secret usage in routing and indexer configs.
  • Decision:
    • Verified secret and secret_binding are the only tables owning secret_id, with bindings keyed by (bound_table, bound_id, binding_name).
    • Confirmed other tables (e.g., indexer_instance_field_value, routing_policy_parameter) store no inline secret_id columns and rely on secret_binding for secret linkage.
  • Consequences:
    • Secret linkage is centralized and auditable via secret_binding and secret_audit_log.
    • Schema aligns with ERD and avoids leaking secret references into unrelated tables.
  • Follow-up:
    • Re-verify that any new tables requiring secret access link secrets through secret_binding.

Task record

  • Motivation:
    • Validate secrets are linked only through secret_binding.
  • Design notes:
    • Reviewed 0015_indexer_secrets.sql for secret/secret_binding tables and constraints.
    • Searched migrations for secret_id to confirm no inline secret references outside the secret tables.
  • Test coverage summary:
    • Documentation-only verification; no new tests added.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: future tables add direct secret_id columns. Roll back by removing inline references and migrating to secret_binding.
  • Dependency rationale:
    • No new dependencies added.
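
The centralization rule can be pictured as a single binding map keyed by (bound_table, bound_id, binding_name); other tables never carry a secret_id column. The types and function below are hypothetical, purely to illustrate the shape of the linkage:

```rust
use std::collections::HashMap;

/// Hypothetical model of secret_binding: all secret linkage lives in one
/// place, keyed by (bound_table, bound_id, binding_name).
type SecretId = u64;
type BindingKey = (String, u64, String);

fn bind_secret(
    bindings: &mut HashMap<BindingKey, SecretId>,
    table: &str,
    id: u64,
    name: &str,
    secret: SecretId,
) {
    // Linking a secret never touches the bound table itself.
    bindings.insert((table.to_string(), id, name.to_string()), secret);
}

fn main() {
    let mut bindings: HashMap<BindingKey, SecretId> = HashMap::new();
    // An indexer instance's API key is linked via a binding row, not via
    // a secret_id column on indexer_instance_field_value.
    bind_secret(&mut bindings, "indexer_instance", 42, "api_key", 7);
    let key: BindingKey = ("indexer_instance".into(), 42, "api_key".into());
    assert_eq!(bindings.get(&key), Some(&7));
}
```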

Indexer single-tenant scope verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • The ERD specifies a single-tenant deployment with no tenant scoping tables or tenant_id columns.
    • We need to confirm the schema has no tenant scoping artifacts.
  • Decision:
    • Verified indexer migrations contain no tenant/organization scoping columns or tables.
    • Confirmed global catalog tables (e.g., trust_tier, media_domain, indexer_definition) are deployment-wide without tenant keys.
  • Consequences:
    • Database schema aligns with the ERD’s single-tenant scope assumptions.
    • Application layers can treat configuration and catalog data as global.
  • Follow-up:
    • Re-verify if multi-tenant support is introduced in later phases.

Task record

  • Motivation:
    • Validate that the indexer schema remains single-tenant as required.
  • Design notes:
    • Searched migrations for tenant/organization identifiers and found none.
    • Verified catalog tables are global with no scoping columns.
  • Test coverage summary:
    • Documentation-only verification; no new tests added.
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • Risk: tenant-scoping columns accidentally creep into the schema. Roll back by removing tenant fields and updating stored procedures.
  • Dependency rationale:
    • No new dependencies added.

Indexer table/constraint alignment verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • ERD_INDEXERS.md requires all indexer tables, columns, defaults, CHECKs, UQ, and FK constraints to match the specification.
    • Verification found the policy_set.created_for_search_request_id FK missing after search_request was introduced.
    • Auto-created request policy sets must be purged with their search_request per ERD notes.
  • Decision:
    • Add a migration to enforce the policy_set.created_for_search_request_id FK with ON DELETE CASCADE to align with the ERD.
    • Record the table/column/FK parity verification against ERD tables and migrations.
  • Consequences:
    • Positive: referential integrity matches ERD and search retention cascades to auto-created policy sets.
    • Risk: existing orphaned policy_set rows would block the migration.
  • Follow-up:
    • Continue validating per-table Notes invariants and add tests where appropriate.

Task record

  • Motivation:
    • Close the remaining schema gap so all ERD indexer tables and FKs match the spec.
  • Design notes:
    • Added FK policy_set.created_for_search_request_id -> search_request.search_request_id with ON DELETE CASCADE in migration 0068.
    • Verified ERD table list, column coverage, and FK presence against migrations.
  • Test coverage summary:
    • just ci
    • just ui-e2e
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • If the FK fails due to orphaned policy_set rows, drop the constraint and backfill or null invalid references before reapplying.
  • Dependency rationale:
    • No new dependencies.

Indexer per-table Notes verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • ERD_INDEXERS.md defines per-table Notes with validation rules, computed fields, and invariants that must be enforced in the schema or stored procedures.
    • The Phase 2 checklist requires verifying these notes against migrations/procs.
  • Decision:
    • Verified schema-level invariants (generated columns, one-of constraints, ranges, and lowercase checks) across indexer tables and attribute tables.
    • Verified procedure-level enforcement for tag immutability, policy set cardinality and linkage rules, policy rule validation, search request validation, and canonical disambiguation ordering.
  • Consequences:
    • Positive: DB constraints and stored procedures align with ERD Notes for validation and computed-field invariants.
    • Risk: runtime behaviors described in Notes (e.g., Torznab endpoints, import runner mapping) remain tracked in later phases and are not part of this schema validation.
  • Follow-up:
    • Continue Phase 5–12 items for runtime behaviors and API surfaces.

Task record

  • Motivation:
    • Close the Phase 2 requirement to apply per-table Notes invariants in schema/procs.
  • Design notes:
    • Schema constraints verified in migrations:
      • crates/revaer-data/migrations/0012_indexer_core.sql
      • crates/revaer-data/migrations/0013_indexer_definitions.sql
      • crates/revaer-data/migrations/0014_indexer_instances.sql
      • crates/revaer-data/migrations/0016_search_profiles_torznab.sql
      • crates/revaer-data/migrations/0019_policy_sets.sql
      • crates/revaer-data/migrations/0021_connectivity_audit.sql
      • crates/revaer-data/migrations/0022_indexer_canonicalization.sql
      • crates/revaer-data/migrations/0023_indexer_search_requests.sql
      • crates/revaer-data/migrations/0025_indexer_conflicts_decisions.sql
      • crates/revaer-data/migrations/0026_indexer_user_actions.sql
      • crates/revaer-data/migrations/0027_indexer_telemetry_reputation.sql
    • Stored-procedure validation coverage verified in:
      • crates/revaer-data/migrations/0034_indexer_tag_procs.sql
      • crates/revaer-data/migrations/0040_indexer_policy_set_procs.sql
      • crates/revaer-data/migrations/0041_indexer_search_profile_procs.sql
      • crates/revaer-data/migrations/0042_indexer_policy_rule_create_proc.sql
      • crates/revaer-data/migrations/0049_indexer_canonical_disambiguation_rule_proc.sql
      • crates/revaer-data/migrations/0050_indexer_search_request_create_proc.sql
      • crates/revaer-data/migrations/0052_indexer_search_result_ingest_proc.sql
  • Test coverage summary:
    • just ci
    • just ui-e2e
  • Observability updates:
    • None.
  • Risk & rollback plan:
    • If a validation rule is found missing, add a follow-up migration or proc fix and revert this ADR/checklist entry.
  • Dependency rationale:
    • No new dependencies.

Indexer proc error-code alignment for key lookups

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • Motivation: ERD requires key-based lookups (trust_tier/media_domain/tag) to raise invalid_request with error_code=unknown_key; several stored procs still emitted *_not_found for key misses.
    • Constraints: keep error messages constant, preserve public-id not-found codes, and avoid changing schema or adding dependencies.
  • Decision:
    • Update stored procedures to emit error_code=unknown_key for key-based misses while keeping *_not_found for public-id lookups.
    • Map unknown_key to TagServiceErrorKind::NotFound in the app service layer.
    • Verify existing role-based authorization checks and Torznab/system NULL-actor handling; no structural changes required.
    • Alternatives considered: introduce new error enums per proc or map unknown_key to Invalid; rejected to keep ERD-mandated codes and existing API semantics.
  • Consequences:
    • Positive: consistent error-code taxonomy, ERD compliance, clearer API behavior for key lookups.
    • Risks/trade-offs: requires a function replacement migration; rollback requires reverting that migration if unexpected client behavior occurs.
  • Follow-up:
    • Test coverage summary: just ci and just ui-e2e passed (npm audit still reports 2 moderate vulnerabilities in the UI test workspace).
    • Observability: no new spans/metrics required (error surfaces unchanged).
    • Risk & rollback plan: revert migration 0069_indexer_proc_error_codes.sql and the tag error mapping change if clients rely on previous error_code strings.
    • Dependency rationale: no new dependencies added; std/SQL only.
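
The resulting mapping can be sketched as a simple match: key-based misses emit unknown_key (translated to NotFound in the service layer), while public-id lookups keep their dedicated *_not_found codes. `map_error_code` is a hypothetical helper; TagServiceErrorKind is the enum named above, with illustrative variants:

```rust
/// Illustrative variants; only the unknown_key -> NotFound mapping is
/// taken from the decision above.
#[derive(Debug, PartialEq)]
enum TagServiceErrorKind {
    NotFound,
    Invalid,
    Internal,
}

fn map_error_code(code: &str) -> TagServiceErrorKind {
    match code {
        // Key-based lookups (trust_tier/media_domain/tag) per the ERD.
        "unknown_key" => TagServiceErrorKind::NotFound,
        // Public-id lookups retain their dedicated not-found codes.
        c if c.ends_with("_not_found") => TagServiceErrorKind::NotFound,
        "invalid_request" => TagServiceErrorKind::Invalid,
        _ => TagServiceErrorKind::Internal,
    }
}

fn main() {
    assert_eq!(map_error_code("unknown_key"), TagServiceErrorKind::NotFound);
    assert_eq!(map_error_code("tag_not_found"), TagServiceErrorKind::NotFound);
    assert_eq!(map_error_code("invalid_request"), TagServiceErrorKind::Invalid);
}
```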

Indexer error enums and normalization helpers verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • Motivation: Phase 6 requires per-crate error enums with constant messages and context fields, plus normalization helpers for hashing and magnet/title inputs.
    • Constraints: preserve existing stored-procedure boundaries and avoid new dependencies.
  • Decision:
    • Verified error enums and constant-message patterns for indexer paths across revaer-data (DataError), revaer-app (AppError), and revaer-api (TagServiceError).
    • Verified normalization helpers and wrappers in revaer-data/src/indexers/normalization.rs and the supporting stored procedures (normalize_title, normalize_magnet_uri, derive_magnet_hash, compute_title_size_hash).
    • Alternatives considered: introducing new error enums or normalization helpers in additional crates; rejected because current coverage meets ERD requirements.
  • Consequences:
    • Positive: checklist items are satisfied without new dependencies or API changes.
    • Risks/trade-offs: future indexer services must keep the same constant-message + context-field pattern to remain compliant.
  • Follow-up:
    • Test coverage summary: just ci and just ui-e2e passed (npm audit still reports 2 moderate vulnerabilities in the UI test workspace).
    • Observability: no additional spans/metrics needed for this verification step.
    • Risk & rollback plan: documentation-only change; revert ADR and checklist updates if verification is found incomplete.
    • Dependency rationale: no new dependencies added.
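
The constant-message + context-field pattern verified above can be sketched as follows. `DataError` does exist in revaer-data, but these variants and fields are illustrative; the point is that the Display string never interpolates caller data:

```rust
use std::fmt;

/// Illustrative variants: the message text is constant, and specifics
/// travel in structured context fields instead.
#[derive(Debug)]
enum DataError {
    ProcFailed { proc: &'static str, sqlstate: String },
    RowDecode { proc: &'static str },
}

impl fmt::Display for DataError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Constant messages only; no user data in the message string.
        match self {
            DataError::ProcFailed { .. } => f.write_str("stored procedure call failed"),
            DataError::RowDecode { .. } => f.write_str("failed to decode row"),
        }
    }
}

fn main() {
    let err = DataError::ProcFailed {
        proc: "tag_create_v1",
        sqlstate: "P0001".into(),
    };
    // The rendered message is constant...
    assert_eq!(err.to_string(), "stored procedure call failed");
    // ...while the context fields carry the specifics for logging/tracing.
    if let DataError::ProcFailed { proc, sqlstate } = &err {
        assert_eq!(*proc, "tag_create_v1");
        assert_eq!(sqlstate.as_str(), "P0001");
    }
}
```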

Indexer result-only returns and no-panics verification

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • Motivation: Phase 6 requires no panics/unwrap/expect in production paths and Result-only returns for fallible operations.
    • Constraints: preserve existing interfaces and keep verification scoped to indexer runtime modules.
  • Decision:
    • Audited indexer-related modules for panic!, unwrap(), expect(), unreachable!() in non-test code and found none.
    • Verified fallible operations return Result<T, E>; Option<T> usage is limited to non-fallible accessors and optional payloads.
    • Alternatives considered: expanding the audit to the entire workspace; deferred to avoid blocking indexer-phase progress.
  • Consequences:
    • Positive: checklist item satisfied for indexer runtime paths without code churn.
    • Risks/trade-offs: future modules must keep the same constraints; broader workspace audit remains out of scope for this ADR.
  • Follow-up:
    • Test coverage summary: just ci and just ui-e2e passed (npm audit still reports 2 moderate vulnerabilities in the UI test workspace).
    • Observability: no new spans/metrics needed for this verification step.
    • Risk & rollback plan: documentation-only change; revert ADR/checklist updates if verification is found incomplete.
    • Dependency rationale: no new dependencies added.

Indexer tryOp wrappers for external operations

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • Motivation: Phase 6 requires wrapping external/system calls in tryOp-style helpers to normalize error mapping across indexer data access.
    • Constraints: panics are forbidden; do not introduce new dependencies; keep SQL interactions confined to stored-procedure calls.
  • Decision:
    • Introduce a shared try_op helper in the data layer and replace per-file map_query_err closures across indexer modules.
    • Use try_op in all indexer data-layer SQLx interactions (queries, executes, and row extraction) to standardize error mapping.
    • Note: panic catching is intentionally not used because catch_unwind is banned and production code must avoid panics entirely.
    • Alternatives considered: leave per-file closures or introduce a more complex async wrapper; rejected in favor of a simple, centralized helper.
  • Consequences:
    • Positive: consistent error mapping for indexer data access and fewer duplicate helper definitions.
    • Risks/trade-offs: none beyond standard refactor risk; behavior remains equivalent.
  • Follow-up:
    • Test coverage summary: just ci and just ui-e2e passed (npm audit still reports 2 moderate vulnerabilities in the UI test workspace).
    • Observability: no new spans/metrics required for this refactor.
    • Risk & rollback plan: revert the try_op refactor and restore per-module helpers if regressions appear.
    • Dependency rationale: no new dependencies added.
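
A minimal sketch of such a helper, assuming a simplified error type: run a fallible operation and map its error into the data-layer error uniformly, with the operation name attached as context. The real helper wraps SQLx calls; this version is generic over any `Result`:

```rust
/// Simplified stand-in for the data-layer error type.
#[derive(Debug, PartialEq)]
struct DataError {
    op: &'static str,
    detail: String,
}

/// Centralized error mapping: every fallible data-layer call funnels
/// through one helper instead of per-file map_query_err closures.
fn try_op<T, E: std::fmt::Display>(
    op: &'static str,
    result: Result<T, E>,
) -> Result<T, DataError> {
    result.map_err(|e| DataError { op, detail: e.to_string() })
}

fn main() {
    // Success passes through untouched.
    let ok: Result<i32, &str> = Ok(7);
    assert_eq!(try_op("tag_list", ok), Ok(7));
    // Failures are mapped uniformly and tagged with the operation name.
    let err: Result<i32, &str> = Err("connection reset");
    assert_eq!(
        try_op("tag_list", err),
        Err(DataError { op: "tag_list", detail: "connection reset".into() })
    );
}
```

Because the helper only maps `Result` values, it involves no panic catching, consistent with the ban on catch_unwind noted above.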

Indexer routing policy service and endpoints

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • Motivation: expose routing policy create/param/secret operations through the indexer application facade and HTTP surface per ERD Phase 6/8 requirements.
    • Constraints: stored-procedure-only DB access, constant error messages, DI-only wiring in bootstrap, no new dependencies.
  • Decision:
    • Extend the indexer facade with routing policy operations and implement them in revaer-app using existing stored-procedure wrappers.
    • Add routing policy request/response DTOs plus HTTP handlers and routes for create, parameter set, and secret binding.
    • Update the OpenAPI document to describe the new endpoints and schemas.
  • Consequences:
    • Positive: routing policy operations are now available to API callers with consistent error mapping and tracing spans.
    • Risks/trade-offs: additional endpoints increase API surface and will require follow-on list/update/delete support to be feature complete.
  • Follow-up:
    • Test coverage summary: just ci and just ui-e2e (npm audit reports 2 moderate vulnerabilities in the UI test workspace).
    • Observability: added spans for routing policy operations; no new metrics yet.
    • Risk & rollback plan: revert the routing policy service/API changes if regressions appear; stored procedures remain unchanged.
    • Dependency rationale: no new dependencies added (used existing crates only).

Indexer definition list endpoint

  • Status: Accepted
  • Date: 2026-01-27
  • Context:
    • Motivation: expose the indexer definition catalog via the API so UI/CLI flows can enumerate definitions without leaking internal IDs.
    • Constraints: stored-procedure-only DB access, constant error messages, injected dependencies, and no new dependencies.
  • Decision:
    • Add a stored procedure to list indexer definitions with actor validation.
    • Wire a data-layer wrapper, application facade method, and HTTP handler to return definition summaries.
    • Document the new endpoint and DTOs in OpenAPI and add API coverage.
  • Consequences:
    • Positive: indexer definitions can be listed through a stable API surface.
    • Risks/trade-offs: only summary data is exposed; follow-on endpoints are still needed for field metadata and instance creation flows.
  • Follow-up:
    • Test coverage summary: just ci and just ui-e2e (npm audit reports 2 moderate vulnerabilities in the UI test workspace).
    • Observability: added tracing span for definition listing; no new metrics yet.
    • Risk & rollback plan: revert the definition list service/API changes if regressions appear; stored procedures remain additive.
    • Dependency rationale: no new dependencies added (used existing crates only).

Indexer CF state read endpoint

  • Status: Accepted
  • Date: 2026-01-30
  • Context:
    • Motivation: surface Cloudflare mitigation state per indexer instance so UI/API can display health and reset workflows safely.
    • Constraints: stored-procedure-only access, constant error messages, and no new dependencies.
  • Decision:
    • Added indexer_cf_state_get_v1 with a stable wrapper, plus data-access and API plumbing for a GET /v1/indexers/instances/{id}/cf-state response.
    • Added E2E API coverage for indexer instance and secret endpoints to satisfy coverage gating.
    • Alternatives considered: inline SQL or reusing reset-only plumbing (rejected due to stored-proc policy and missing read semantics).
  • Consequences:
    • Positive: CF state is now observable through a typed API response; coverage gate stays green.
    • Trade-offs: the endpoint is currently used mainly for reads/diagnostics; tests exercise 404 paths when no instance exists.
  • Follow-up:
    • Expand UI controls and routing-policy integrations for CF/flaresolverr workflows per ERD gaps.

Test Coverage

  • just ci
  • just ui-e2e

Observability

  • Added indexer.cf_state_get span in the indexer service path.

Risk and Rollback

  • Risk: minimal behavior change; read path only, returns 404 for unknown instances.
  • Rollback: revert migration 0072_indexer_cf_state_get.sql and associated API/service changes.

Dependency Rationale

  • No new dependencies added; existing crates and patterns were used.

Indexer CF state E2E coverage

  • Status: Accepted
  • Date: 2026-01-30
  • Context:
    • Motivation: satisfy UI E2E API coverage gate for newly added CF state endpoints.
    • Constraints: no new dependencies; reuse existing E2E API fixtures and coverage hooks.
  • Decision:
    • Extend indexer instance E2E API coverage to hit CF state GET and reset endpoints using a missing-instance 404 path.
    • Alternatives considered: add a dedicated fixture to create a real instance (rejected for higher setup cost in current E2E suite).
  • Consequences:
    • Positive: coverage gate includes CF state endpoints and remains green.
    • Trade-offs: responses are 404-only in this test until instance creation is wired into E2E fixtures.
  • Follow-up:
    • Expand E2E to exercise CF state success paths once instance creation fixtures are available.

Test Coverage

  • just ci
  • just ui-e2e

Observability

  • No changes.

Risk and Rollback

  • Risk: minimal; only exercises API endpoints in E2E.
  • Rollback: revert tests/specs/api/indexers-instances.spec.ts additions.

Dependency Rationale

  • No new dependencies added.

170 Indexer category mapping API endpoints

  • Status: Accepted
  • Date: 2026-01-31
  • Motivation:
    • Provide API management for tracker category and media-domain Torznab mappings per ERD and ADR 128.
    • Expose stored-proc-backed updates with consistent error mapping and audit tracking.
  • Design notes:
    • Add indexer facade methods for tracker category and media-domain mapping upsert/delete.
    • Implement HTTP endpoints that call stored procedures via data-layer wrappers.
    • Keep error messages constant; attach error codes/SQLSTATE as structured context.
  • Test coverage summary:
    • Data-layer tests for invalid key handling and primary mapping switch.
    • API E2E coverage for mapping upsert/delete endpoints.
  • Observability updates:
    • None (existing tracing/logging patterns reused).
  • Risk & rollback plan:
    • Risk: incorrect mapping updates could affect category resolution.
    • Rollback: revert API changes and restore seeded mappings via migrations/seed defaults.
  • Dependency rationale:
    • No new dependencies.

171 Indexer Torznab instance API endpoints

  • Status: Accepted
  • Date: 2026-01-31
  • Motivation:
    • Provide API coverage for Torznab instance lifecycle (create, rotate credentials, enable/disable, delete) backed by stored procedures.
    • Close ERD indexer checklist gaps with testable handlers and OpenAPI schema coverage.
  • Design notes:
    • Add API models for Torznab instance create/state requests and responses in revaer-api-models.
    • Extend indexer facade contract in revaer-api and wire revaer-app implementations to stored-proc data access.
    • Implement HTTP handlers with consistent error mapping and constant error messages; trim user input on ingress.
    • Add E2E coverage for Torznab instance endpoints and OpenAPI updates.
  • Test coverage summary:
    • Unit tests for error mapping and input trimming in the Torznab instance handlers.
    • App-layer tests for missing profile/instance validation.
    • API E2E coverage for Torznab instance create/rotate/state/delete flows.
  • Observability updates:
    • Reused existing tracing spans for indexer operations; no new metrics added.
  • Risk & rollback plan:
    • Risk: incorrect lifecycle wiring could leave orphaned Torznab instances or misstate enablement.
    • Rollback: revert API changes and use stored procedures to reset instance state from migrations/seed data.
  • Dependency rationale:
    • No new dependencies.

172 Indexer search profile API endpoints

  • Status: Accepted
  • Date: 2026-01-31
  • Context:
    • API coverage for search profile stored procedures was missing, blocking ERD checklist parity.
    • Prowlarr parity requires deterministic, auditable search profile configuration surfaces.
  • Decision:
    • Add request/response models, facade methods, and HTTP routes for search profile lifecycle ops.
    • Keep error messages constant and attach context via structured fields.
  • Consequences:
    • Search profiles can now be created and configured through the API layer.
    • E2E coverage asserts API availability for both auth modes.
  • Follow-up:
    • Implement search profile UI surfaces and policy management endpoints.
    • Extend coverage for policy set integration once endpoints exist.

Task record

  • Motivation:
    • Expose stored-procedure-backed search profile management through the API.
    • Provide E2E coverage for search profile lifecycle operations to align with the ERD.
  • Design notes:
    • Add API models for search profile create/update/default/domain allowlist/policy set/indexer allow-block/tag allow-block-prefer.
    • Extend the indexer facade to surface search profile operations with typed errors.
    • Implement HTTP handlers with constant error messages and trimmed inputs.
  • Test coverage summary:
    • Unit tests for handler trimming and conflict mapping.
    • API E2E coverage for search profile lifecycle endpoints.
  • Observability updates:
    • Reused existing tracing spans for indexer operations; no new metrics added.
  • Risk & rollback plan:
    • Risk: invalid profile updates could affect search filtering.
    • Rollback: revert API changes and repair profiles via stored procedures/migrations.
  • Dependency rationale:
    • No new dependencies.

Indexer import jobs API surface

  • Status: Accepted
  • Date: 2026-01-31
  • Context:
    • Need REST coverage for indexer import jobs (create/run/status/results) to satisfy ERD indexer checklist.
    • Must preserve stored-procedure boundaries, stable errors, and testable handlers with E2E coverage.
  • Decision:
    • Add import job request/response models and handler wiring for create/run/status/results endpoints.
    • Extend app facade mapping for import job error translation and results/status projection.
    • Update OpenAPI and Playwright API coverage for new endpoints.
    • Alternatives considered: defer API surface until full import pipeline; rejected to keep parity with checklist and procs.
  • Consequences:
    • Positive outcomes: import job endpoints are now reachable, documented, and covered in E2E.
    • Risks or trade-offs: run endpoints currently validate inputs and return errors without a worker path; full import pipeline still pending.
  • Follow-up:
    • Implement background import execution and UI flows for import job monitoring.
    • Extend CLI support once import pipeline is ready.

Task record

  • Motivation: close the ERD indexer checklist gap for import job REST endpoints and E2E coverage.
  • Design notes: handlers trim inputs, map stored-procedure error codes to stable API errors, and return typed models; no inline SQL added.
  • Test coverage summary: added API E2E coverage for create/run/status/results; existing unit tests cover trimming and error mapping.
  • Observability updates: no new spans or metrics required for handler-only changes.
  • Risk & rollback plan: rollback by reverting endpoint wiring and OpenAPI updates; no migrations or data changes.
  • Dependency rationale: no new dependencies added; reused existing models, handlers, and stored procedures.

Indexer import jobs CLI commands

  • Status: Accepted
  • Date: 2026-01-31
  • Context:
    • Import job API endpoints exist but CLI lacked parity for creating and inspecting import jobs.
    • Need to keep CLI output stable (json/table) and enforce API key requirements.
  • Decision:
    • Add indexer import CLI subcommands for create, run (Prowlarr API/backup), status, and results.
    • Provide table and JSON output renderers for import job status and results.
    • Alternatives considered: postpone CLI until full import pipeline; rejected to close ERD checklist gap.
  • Consequences:
    • Positive outcomes: operators can start and inspect import jobs from CLI with consistent output.
    • Risks or trade-offs: CLI surfaces are limited to import job endpoints; broader indexer CLI features remain pending.
  • Follow-up:
    • Extend CLI with indexer test, policy management, and Torznab key commands.
    • Add CLI coverage once indexer workflows expand.

Task record

  • Motivation: provide CLI parity for indexer import job lifecycle operations.
  • Design notes: new subcommands map 1:1 with REST endpoints and reuse common output formats.
  • Test coverage summary: existing CLI unit tests extended for command label coverage; CLI integration not yet expanded.
  • Observability updates: no new telemetry or metrics beyond existing CLI emitter.
  • Risk & rollback plan: revert CLI subcommands and output helpers; no data changes.
  • Dependency rationale: no new dependencies added; reused existing models and CLI utilities.

Indexer Torznab CLI management

  • Status: Accepted
  • Date: 2026-01-31
  • Context:
    • Torznab instance keys and lifecycle operations are available via the API but lack CLI tooling.
    • Need operator-level access to create, rotate, enable/disable, and delete Torznab instances.
  • Decision:
    • Add indexer torznab CLI subcommands for create, rotate, set-state, and delete.
    • Render Torznab instance credentials in JSON or table output.
    • Alternatives considered: postpone CLI tooling; rejected to keep operational parity with API.
  • Consequences:
    • Positive outcomes: CLI can manage Torznab instances and rotate keys without UI.
    • Risks or trade-offs: plaintext API keys are shown in CLI output; operators must handle securely.
  • Follow-up:
    • Add CLI coverage once Torznab endpoints and auth rules are fully implemented.
    • Extend CLI for Torznab downloads and search flows when endpoints land.

Task record

  • Motivation: provide CLI access to Torznab instance creation, rotation, and state updates.
  • Design notes: subcommands map 1:1 with REST endpoints and share existing output formatting patterns.
  • Test coverage summary: command label tests updated; no new integration tests added.
  • Observability updates: no additional telemetry beyond existing CLI emitter.
  • Risk & rollback plan: revert CLI commands and output helpers; no migrations or data changes.
  • Dependency rationale: no new dependencies added.

Indexer policy CLI management

  • Status: Accepted
  • Date: 2026-01-31
  • Context:
    • Policy set and rule endpoints exist but lack CLI coverage.
    • Operators need a CLI path to create, enable, disable, and reorder policy sets and rules.
  • Decision:
    • Add indexer policy CLI subcommands for policy set creation, update, enable/disable, reorder, and policy rule create/enable/disable/reorder.
    • Render policy set and rule identifiers in table or JSON output.
    • Alternatives considered: rely on API or UI; rejected to keep operational parity.
  • Consequences:
    • Positive outcomes: CLI can manage policy sets and rules without UI.
    • Risks or trade-offs: CLI must be kept in sync with API schema updates.
  • Follow-up:
    • Add list and detail commands once policy listing endpoints are available.
    • Expand rule creation ergonomics as policy rule value-set options grow.

Task record

  • Motivation: provide CLI access for policy sets and rules to match API capabilities.
  • Design notes: subcommands mirror REST endpoints; requests validate non-empty fields locally.
  • Test coverage summary: command label test updated; no new integration tests added.
  • Observability updates: no additional telemetry beyond existing CLI emitter.
  • Risk & rollback plan: revert CLI commands and output helpers; no migrations or data changes.
  • Dependency rationale: no new dependencies added.

Indexer instance test API and CLI

  • Status: Accepted
  • Date: 2026-01-31
  • Context:
    • Indexer instance test stored procedures existed but lacked an API/CLI surface.
    • Executors need a prepare payload and a finalize endpoint to record outcomes.
  • Decision:
    • Add API endpoints to prepare and finalize indexer instance tests.
    • Add CLI commands to invoke the prepare and finalize endpoints with JSON or table output.
    • Alternatives considered: defer until executor is built; rejected to keep parity with ERD flows.
  • Consequences:
    • Positive outcomes: external executors and CLI can drive indexer test lifecycle.
    • Risks or trade-offs: test execution is still external; API must stay aligned with executor payload needs.
  • Follow-up:
    • Wire executor to call the prepare/finalize API from the job runner.
    • Add E2E coverage for the test endpoints once executor is online.

Task record

  • Motivation: expose indexer instance test lifecycle via API/CLI to support migration and diagnostics.
  • Design notes: API mirrors stored-proc inputs/outputs; CLI outputs field arrays and statuses.
  • Test coverage summary: handler unit tests added for prepare/finalize; command label test updated.
  • Observability updates: new service spans for prepare/finalize.
  • Risk & rollback plan: revert API routes and CLI commands; no migrations or data changes.
  • Dependency rationale: no new dependencies added.

Indexer allocation safety guard

  • Status: Accepted
  • Date: 2026-02-01
  • Context:
    • Motivation: Prevent unbounded allocations in indexer handlers and satisfy security review feedback.
    • Constraints: No new dependencies; errors must use constant messages with structured context.
  • Decision:
    • Add a shared allocation helper that reads MemAvailable from /proc/meminfo and limits requested allocations to 80% of available memory.
    • Apply the helper to dynamic list normalization in search profiles, policy rules, and media domain allowlists, while raising per-list caps to avoid overly constraining users.
    • Dependency rationale: none (std-only implementation).
  • Consequences:
    • Positive outcomes: safer allocations, explicit error reporting with context, consistent limits.
    • Risks or trade-offs: allocation checks fail closed if MemAvailable cannot be read; roll back by relaxing the guard to a fixed ceiling if needed.
  • Follow-up:
    • Implementation tasks: add helper module, update normalization paths, add unit tests.
    • Test coverage summary: unit tests for allocation guard and meminfo parsing.
    • Observability updates: none required; errors carry context fields for diagnostics.
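The guard described above can be sketched as follows. This is an illustrative, std-only reconstruction; `parse_mem_available` and `allocation_allowed` are hypothetical names, not the actual Revaer helper API.

```rust
/// Parse the `MemAvailable` line of /proc/meminfo text into bytes.
/// The kernel reports this value in kB (KiB).
fn parse_mem_available(meminfo: &str) -> Option<u64> {
    meminfo
        .lines()
        .find(|line| line.starts_with("MemAvailable:"))
        .and_then(|line| line.split_whitespace().nth(1))
        .and_then(|kib| kib.parse::<u64>().ok())
        .map(|kib| kib * 1024)
}

/// Fail closed: reject the allocation unless MemAvailable is readable and
/// the requested size fits within 80% of it.
fn allocation_allowed(requested_bytes: u64, meminfo: &str) -> bool {
    match parse_mem_available(meminfo) {
        Some(available) => requested_bytes <= available / 10 * 8,
        None => false, // unreadable meminfo rejects the allocation
    }
}

fn main() {
    let meminfo = "MemTotal: 16384000 kB\nMemAvailable: 8192000 kB\n";
    assert!(allocation_allowed(1024, meminfo));
    assert!(!allocation_allowed(u64::MAX, meminfo));
    assert!(!allocation_allowed(1, "garbage")); // fail closed
    println!("allocation guard sketch ok");
}
```

In production code the meminfo text would come from reading `/proc/meminfo`; passing it as a parameter keeps the parsing logic deterministic and unit-testable.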

Auth prompt dismissal stability

  • Status: Accepted
  • Date: 2026-02-01
  • Context:
    • Motivation: UI E2E intermittently fails because the auth prompt reappears after dismissal when app auth mode is resolved asynchronously.
    • Constraints: Preserve current auth behavior, avoid new dependencies, keep state logic testable.
  • Decision:
    • Stop resetting auth_prompt_dismissed in the app auth mode effect so a user dismissal remains effective for the session.
    • Alternatives considered: re-trying dismissal in tests only, or persisting dismissal in storage.
    • Dependency rationale: none (state-only change).
  • Consequences:
    • Positive outcomes: auth overlay no longer reappears after dismissal during initial config hydration; UI tests can dismiss overlays reliably.
    • Risks or trade-offs: users might need to re-open the auth prompt manually if they dismissed it before auth became required; roll back by reintroducing the reset with a timestamp or an explicit user action.
  • Follow-up:
    • Implementation tasks: adjust app auth mode effect to avoid overriding dismissal state.
    • Test coverage summary: UI E2E coverage exercises overlay dismissal; no new unit tests added.
    • Observability updates: none.

Cross-platform allocation safety probe

  • Status: Accepted
  • Date: 2026-02-01
  • Context:
    • Motivation: Allocation safety relied on /proc/meminfo, which is Linux-only. We need a cross-platform source of live available memory so we do not lock into Linux.
    • Constraints: Keep error messages constant; avoid unsafe code; preserve minimal dependencies.
  • Decision:
    • Use systemstat to fetch live memory statistics on all platforms.
    • Prefer Linux MemAvailable when present, otherwise fall back to the live free-memory value returned by systemstat.
    • Keep the 80% available-memory guard and fail closed when memory cannot be determined.
    • Dependency rationale: systemstat provides cross-platform live memory data without adding unsafe code in Revaer. Alternatives considered: OS-specific FFI (requires unsafe) or estimates (rejected).
  • Consequences:
    • Positive outcomes: Allocation guard works on macOS/Windows/Linux; no platform lock-in.
    • Risks or trade-offs: Adds a small dependency footprint; relies on OS-reported statistics.
  • Follow-up:
    • Implementation tasks: update allocation helper to use systemstat; add docs entry.
    • Test coverage summary: allocation guard unit tests remain; live-memory probe is exercised via API/CLI/E2E.
    • Observability updates: none.

Indexer PR Feedback Follow-through

  • Status: Accepted
  • Date: 2026-02-01
  • Context:
    • Addressed open PR feedback on indexer handlers, allocation safety, and API request shape.
    • Needed clearer documentation for session encryption env vars and allocation limits.
    • Reduced duplicated test scaffolding while preserving testability and coverage.
  • Decision:
    • Centralize allocation safety in a helper, apply it to request-driven allocations, and document the 80% safety limit.
    • Consolidate indexer handler test scaffolding into a shared test helper module.
    • Move string normalization helpers into a shared indexer module.
    • Remove redundant indexer instance public ID from the update request body.
  • Consequences:
    • Clearer memory allocation policy and safer handling of unbounded inputs.
    • Leaner test modules with shared helpers and fewer duplicated imports.
    • API request shape aligns with path-based identifiers, reducing ambiguity.
  • Follow-up:
    • Monitor code scanning to confirm allocation alerts clear after rescans.
    • No additional migrations required.

Motivation

Align indexer handler code with review feedback, improve allocation safety for user-driven inputs, reduce test duplication, and clarify API request semantics.

Design notes

  • Allocation helpers now gate request-sized buffers using live memory data and a documented 80% cap to preserve headroom.
  • A test support module centralizes stub config and response parsing helpers for indexer handler tests without exposing them outside the indexers module.
  • String normalization helpers are shared across indexer handlers to avoid duplication.
  • IndexerInstanceUpdateRequest now relies solely on path identifiers.

Test coverage summary

  • just ci
  • just build-release
  • just ui-e2e

Observability updates

  • None (documentation-only changes and refactors).

Risk & rollback plan

  • Low risk: changes are additive or refactor-only. Roll back by reverting the individual commits if any regression is observed.

Dependency rationale

  • No new dependencies added in this change set; see ADR 180 for the live-memory probe rationale.

Indexer PR Feedback Allocation Follow-up

  • Status: Accepted
  • Date: 2026-02-02
  • Context:
    • Review feedback highlighted unbounded allocations in indexer handlers and asked for clearer, live-memory guardrails.
    • Allocation safety needed to remain cross-platform and avoid hard-coded assumptions.
    • Test helper naming and error diagnostics in tests required clarification.
  • Decision:
    • Add explicit allocation safety checks for request-driven string and vector allocations using the shared live-memory guard.
    • Introduce a minimum-available-memory threshold and a cached-system entry point to avoid repeated probing where reuse is possible.
    • Rename shared indexer test state helper and tighten ProblemDetails parsing in tests.
  • Consequences:
    • Safer handling of request-sized allocations with clearer memory-policy documentation.
    • Improved test helper clarity and more actionable test failures.
    • Slightly more allocation checks per request, offset by the option to reuse a system snapshot.
  • Follow-up:
    • Confirm code scanning alerts clear after the next GitHub Advanced Security scan.
    • No migrations required.

Motivation

Close PR feedback on allocation safety and test clarity while keeping indexer handler behavior intact and aligned with live-memory guardrails.

Design notes

  • Allocation sizing now checks request-derived bytes against live available memory before materializing strings or vectors.
  • The allocation guard exposes a cached-system entry point and enforces a minimum available memory threshold before allowing allocations.
  • Shared indexer test helpers use clearer naming and explicit expectations for response decoding.
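A minimal sketch of the cached-system entry point and minimum-threshold rule from the design notes. The struct, method names, and the 64 MiB floor are assumptions for illustration; only the 80% cap is stated by this ADR.

```rust
// Assumed floor: refuse all allocations below this headroom (illustrative value).
const MIN_AVAILABLE_BYTES: u64 = 64 * 1024 * 1024;

/// One memory probe, reused across several allocation checks in a request,
/// so the guard does not re-probe the system for every buffer.
struct MemorySnapshot {
    available_bytes: u64,
}

impl MemorySnapshot {
    fn allows(&self, requested_bytes: u64) -> bool {
        // Enforce the minimum-available threshold, then the 80% cap.
        self.available_bytes >= MIN_AVAILABLE_BYTES
            && requested_bytes <= self.available_bytes / 10 * 8
    }
}

fn main() {
    let snapshot = MemorySnapshot { available_bytes: 1 << 30 }; // 1 GiB available
    assert!(snapshot.allows(4096));

    let starved = MemorySnapshot { available_bytes: 1024 };
    assert!(!starved.allows(1)); // below the minimum threshold: fail closed
    println!("cached snapshot guard sketch ok");
}
```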

Test coverage summary

  • just ci
  • just build-release
  • just ui-e2e

Observability updates

  • None (guardrails and test refactors only).

Risk & rollback plan

  • Low risk: behavior is additive and defensive. Roll back by reverting this change set if allocation checks prove too strict in practice.

Dependency rationale

  • No new dependencies added.

Indexer PR Feedback Follow-up (Allocation Caps)

  • Status: Accepted
  • Date: 2026-02-02
  • Context:
    • Additional PR feedback requested explicit caps for request-driven allocations and safer test body parsing.
    • Allocation guards must use live memory data while still providing deterministic upper bounds.
  • Decision:
    • Add explicit maximum sizes for search profile domain/tag keys and policy rule text inputs.
    • Limit test response body reads to a fixed upper bound.
    • Document the secret key ID max-length source for maintainability.
  • Consequences:
    • Reduced risk of unbounded allocations from large inputs.
    • Clearer operational limits with minimal user-facing constraints.
    • Test helpers avoid excessive memory use on malformed responses.
  • Follow-up:
    • Confirm GHAS/code scanning alerts clear after the next scan.
    • No migrations required.

Motivation

Ensure indexer handlers enforce conservative, explicit input caps alongside live-memory guards and improve test safety for large responses.

Design notes

  • Search profile domain keys and tag keys now have maximum counts and per-key byte limits.
  • Policy rule text inputs (including value set items) enforce per-field byte limits.
  • Test helper response parsing reads at most 1 MiB.
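The deterministic caps from the design notes can be expressed as up-front validation before any allocation happens. The specific numbers below are placeholders; the actual limits live in the indexer handler code.

```rust
// Illustrative caps; the real values in the codebase may differ.
const MAX_DOMAIN_KEYS: usize = 256;
const MAX_KEY_BYTES: usize = 512;

/// Reject overlarge inputs before materializing any derived collections,
/// so the live-memory guard is a backstop rather than the only defense.
fn validate_domain_keys(keys: &[String]) -> Result<(), &'static str> {
    if keys.len() > MAX_DOMAIN_KEYS {
        return Err("too many domain keys");
    }
    if keys.iter().any(|key| key.len() > MAX_KEY_BYTES) {
        return Err("domain key too long");
    }
    Ok(())
}

fn main() {
    assert!(validate_domain_keys(&["example.org".to_string()]).is_ok());
    let too_long = vec!["x".repeat(MAX_KEY_BYTES + 1)];
    assert_eq!(validate_domain_keys(&too_long), Err("domain key too long"));
    println!("cap validation sketch ok");
}
```

The same shape applies to the test-helper bound: reading a response body through a reader limited to 1 MiB gives a deterministic ceiling regardless of what the server sends.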

Test coverage summary

  • just ci
  • just ui-e2e

Observability updates

  • None.

Risk & rollback plan

  • Low risk: validation rejects overlarge inputs up front. Roll back by reverting this change set if limits are too strict.

Dependency rationale

  • No new dependencies added.

Indexer Torznab caps endpoint

  • Status: Accepted
  • Date: 2026-02-03
  • Context:
    • We need to begin Torznab API parity by serving a caps response backed by seeded categories.
    • Authentication must use the Torznab API key query parameter and reject disabled/deleted instances.
    • Runtime SQL must remain stored-procedure-only and error messages must be constant.
  • Decision:
    • Add stored procedures to authenticate Torznab instances and list seeded Torznab categories.
    • Expose a Torznab caps handler that authenticates via apikey and returns XML caps data.
    • Keep invalid/unsupported Torznab requests as empty XML responses with no DB writes.
  • Consequences:
    • Positive outcomes:
      • Arr clients can validate Torznab connectivity via caps using live category data.
      • Authentication and category access stay consistent with stored-proc-only policy.
    • Risks or trade-offs:
      • Only caps is implemented so far; search and download endpoints still require follow-on work.
  • Follow-up:
    • Implement Torznab search and download endpoints with full ERD semantics.
    • Add additional Torznab response tests and OpenAPI parity updates as functionality expands.
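The general shape of a Torznab caps response built from seeded categories can be sketched as below. This is a simplified illustration: the real handler returns a fuller caps document and sources categories via stored procedures, and the category IDs/names here are examples only.

```rust
/// Minimal XML escaping for text and attribute content.
fn xml_escape(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for ch in input.chars() {
        match ch {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&apos;"),
            other => out.push(other),
        }
    }
    out
}

/// Render a caps document from (id, name) category pairs.
fn caps_xml(categories: &[(u32, &str)]) -> String {
    let mut xml =
        String::from("<?xml version=\"1.0\" encoding=\"UTF-8\"?><caps><categories>");
    for (id, name) in categories {
        xml.push_str(&format!(
            "<category id=\"{id}\" name=\"{}\"/>",
            xml_escape(name)
        ));
    }
    xml.push_str("</categories></caps>");
    xml
}

fn main() {
    let xml = caps_xml(&[(2000, "Movies"), (5000, "TV & Anime")]);
    assert!(xml.contains("name=\"TV &amp; Anime\""));
    println!("{xml}");
}
```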

Indexer Torznab download and allocation guards

  • Status: Accepted
  • Date: 2026-02-04
  • Context:
    • We need Torznab download redirects to complete core ERD coverage and satisfy PR review feedback.
    • Allocation safety must rely on live memory information and apply to request-driven allocations.
    • Review feedback also called for clearer validation structure in bootstrap secrets.
  • Decision:
    • Add a stored-procedure-backed Torznab download prepare path that validates instance/profile/tag access and records acquisition attempts.
    • Extend allocation guards to all request-dependent allocations, including Torznab XML escaping, and clamp vector capacities to bounded limits.
    • Refactor secret env validation to a shared helper for consistency.
  • Consequences:
    • Positive outcomes:
      • Torznab clients can request download redirects with audited acquisition attempts.
      • Allocation safety applies uniformly and relies on live memory data.
      • Validation logic is more maintainable and easier to test.
    • Risks or trade-offs:
      • Allocation checks can reject requests when memory telemetry is unavailable or too low.
  • Follow-up:
    • Continue Torznab search response coverage and add richer download telemetry once search is implemented.

Task record

  • Motivation: close Torznab download gap, address allocation safety/GHAS feedback, and tighten secret validation.
  • Design notes:
    • Download path uses a stored procedure to enforce profile/tag rules and populate acquisition_attempt.
    • Allocation checks use live system memory and guard XML escaping plus request-sized collections.
    • Secret env validation is centralized to avoid duplication and preserve constant error messages.
  • Test coverage summary: just ci (fmt/lint/udeps/audit/deny/test/cov/build-release) and just ui-e2e.
  • Observability updates: none; existing spans and error context fields remain the primary signals.
  • Risk & rollback plan: revert migration 0078 and API handlers, then reset DB migrations; no data migrations beyond new procs.
  • Dependency rationale: no new dependencies; bytes updated to 1.11.1 to address RustSec advisory.

186: Indexer search requests API and allocation guard refinements

  • Status: Accepted
  • Date: 2026-02-04
  • Context:
    • Add v1 REST endpoints for indexer search request create/cancel while keeping stored-procedure boundaries.
    • Address PR feedback on allocation safety and test-only helper isolation.
    • Ensure allocation guards use live memory data and cap single allocations at 80% of available memory.
  • Decision:
    • Added search request create/cancel request/response models plus API handlers, routes, and facade wiring.
    • Added allocation helpers that check live memory availability and use checked capacity reservations before dynamic allocations.
    • Tightened test helpers to use bounded body reads and explicit error parsing.
  • Consequences:
    • Positive outcomes:
      • Search request orchestration is now reachable via v1 REST endpoints.
      • Allocation checks are centralized and consistently enforced with live memory data.
    • Risks or trade-offs:
      • Requests with large payloads may be rejected under memory pressure.
      • Additional validation and allocation checks add small overhead to hot paths.
  • Follow-up:
    • Implement remaining search request lifecycle endpoints (list/status) as the checklist advances.
    • Keep UI/E2E coverage aligned as new search request surfaces are added.

Motivation

  • Provide a REST API for indexer search requests to unblock UI/CLI orchestration.
  • Align allocation safeguards with GHAS feedback and operational safety goals.

Design notes

  • Handlers trim and normalize request inputs, translate service errors into RFC9457 responses, and delegate to stored-proc backed services.
  • Allocation checks rely on live memory snapshots and cap single allocations at 80% of available memory.
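Handlers translating service errors into RFC 9457 responses produce a body of the following shape. The struct fields follow the RFC; the hand-rolled serialization is only to keep the sketch std-only (the codebase would use its existing serialization), and the example values are hypothetical.

```rust
/// The four core RFC 9457 problem-details members.
struct ProblemDetails {
    r#type: &'static str,
    title: &'static str,
    status: u16,
    detail: String,
}

impl ProblemDetails {
    // Hand-rolled JSON for illustration only; assumes `detail` needs no escaping.
    fn to_json(&self) -> String {
        format!(
            "{{\"type\":\"{}\",\"title\":\"{}\",\"status\":{},\"detail\":\"{}\"}}",
            self.r#type, self.title, self.status, self.detail
        )
    }
}

fn main() {
    let problem = ProblemDetails {
        r#type: "about:blank",
        title: "Bad Request",
        status: 400,
        detail: "query must not be empty".to_string(),
    };
    assert!(problem.to_json().contains("\"status\":400"));
    println!("{}", problem.to_json());
}
```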

Test coverage summary

  • Added/updated unit tests for search request handlers and allocation helpers.
  • Ran just ci and just ui-e2e to validate unit, integration, coverage, and E2E suites.

Observability updates

  • No new metrics added; existing request spans and error contexts remain in place.

Risk & rollback plan

  • If allocation checks prove too strict, adjust the limit in crates/revaer-api/src/http/handlers/indexers/allocation.rs.
  • Roll back by reverting this ADR and associated handler changes if endpoints regress.

Dependency rationale

  • No new dependencies added; re-used existing systemstat memory probing.

Indexer search request auth E2E coverage

  • Status: Accepted
  • Date: 2026-02-04
  • Context:
    • Enforce the v1 rule that REST search requests and canonical torrent source access require API key auth.
    • Validate behavior in E2E tests without reducing existing indexer functionality.
  • Decision:
    • Add E2E coverage for search request create/cancel auth requirements.
    • Add E2E coverage for Torznab download behavior when the apikey is missing.
    • Keep API responses and handler behavior unchanged; tests only validate the existing contract.
  • Consequences:
    • Positive outcomes: regression coverage for auth enforcement on search requests and Torznab downloads.
    • Risks or trade-offs: additional E2E runtime and reliance on the seeded system actor for search requests.
  • Follow-up:
    • Monitor CI E2E stability after adding coverage.

Motivation

Search request endpoints and Torznab downloads must enforce API key authentication consistently. We need explicit E2E coverage to guard against regressions while retaining current behavior.

Design notes

  • Use existing API fixtures to test authenticated and unauthenticated flows.
  • Verify missing apikey returns HTTP 401 for Torznab downloads.
  • Keep tests scoped to public endpoints and avoid relying on external indexer data.

Test coverage summary

  • Added E2E tests for search request create/cancel auth behavior.
  • Added E2E test for missing apikey on Torznab download.

Observability updates

  • No new telemetry; tests validate existing API responses.

Risk & rollback plan

  • Low risk: test-only changes. If tests are unstable, revert the E2E additions and re-evaluate fixtures or environment setup.

Dependency rationale

  • No new dependencies.

188: Indexer search pages API

  • Status: Accepted
  • Date: 2026-02-06
  • Context:
    • Search request creation exists, but there is no API surface to read sealed pages or page contents.
    • ERD requires stable page ordering and sealed page boundaries for streaming results.
    • All runtime DB reads must go through stored procedures with constant error messages.
  • Decision:
    • Added stored procedures to list pages and fetch page items with stable ordering and page metadata.
    • Exposed v1 REST endpoints to list pages and fetch a specific page for a search request.
    • Updated API models and OpenAPI to document the new search page responses.
  • Consequences:
    • Positive outcomes:
      • Clients can poll page lists and fetch sealed pages with deterministic ordering.
      • Page metadata (sealed_at, item_count) is exposed consistently across API and DB layers.
    • Risks or trade-offs:
      • Adds DB work per page fetch, though page metadata and items are returned by a single proc.
      • UI still needs follow-on work to provide streaming UX, but the API is now available.
  • Follow-up:
    • Add SSE notifications for new sealed pages once orchestration emits search result events.
    • Extend UI to consume search page endpoints and surface streaming updates.

Motivation

  • Provide a concrete API to read search request pages and support streaming UI flows.
  • Keep read paths aligned with ERD page sealing and append-only ordering guarantees.

Design notes

  • Implemented search_page_list_v1 and search_page_fetch_v1 stored procedures with actor auth checks.
  • Page fetch returns page metadata and items in one query, ensuring deterministic ordering by page position.
  • Service layer maps stored-proc rows to API DTOs without exposing internal IDs.
  • Patched search_request_create_v1 policy snapshot lookup to avoid ambiguous snapshot_hash resolution when a snapshot already exists.
  • Qualified search_request_id in search_request_create_v1 inserts to avoid column/variable ambiguity during returns.
  • Qualified search_page_fetch_v1 lookups to avoid sealed_at output column ambiguity in PL/pgSQL.

Test coverage summary

  • Added stored-proc tests for page listing, invalid page numbers, and empty page fetches.
  • Added handler tests for list and fetch responses plus error mapping.
  • Will run just ci, just build-release, and just ui-e2e before hand-off.

Observability updates

  • Reused existing request spans; no new metrics added for page reads.

Risk & rollback plan

  • If page fetch semantics need adjustment, update crates/revaer-data/migrations/0079_indexer_search_pages.sql and regenerate data wrappers.
  • Roll back by reverting this ADR and the search page API routes if clients observe regressions.

Dependency rationale

  • No new dependencies added.

189: Search request validation tests

  • Status: Accepted
  • Date: 2026-02-06
  • Context:
    • Search request creation enforces identifier, season/episode, and category validation rules in stored procedures.
    • Validation paths were under-tested, leaving ERD rule coverage uncertain.
  • Decision:
    • Add stored-proc tests that exercise identifier mismatch, torznab season/episode validation, and invalid category filters.
    • Mark the ERD validation checklist item as complete once coverage is in place.
  • Consequences:
    • Positive outcomes:
      • Validation rules are exercised directly against stored procedures.
      • Future regressions in search request validation will fail fast in CI.
    • Risks or trade-offs:
      • Slightly longer indexer test runtime due to additional database cases.
  • Follow-up:
    • Extend validation tests as new rules are added to search_request_create_v1.

Motivation

  • Ensure ERD-mandated validation rules are enforced and verified in CI.
  • Provide deterministic stored-proc coverage for identifier, torznab, and category filter rules.

Design notes

  • Added stored-proc tests for identifier mismatch, torznab season/episode validation, and invalid category filters.
  • Kept tests aligned with existing error code taxonomy and DataError mapping.
  • Fixed search_request_create_v1 to compare query_type and identifier_type via text casts to avoid enum type mismatch errors.

Test coverage summary

  • Added three new search_request_create validation tests in revaer-data.
  • Will run just ci, just build-release, and just ui-e2e before hand-off.

Observability updates

  • No new telemetry or metrics changes.

Risk & rollback plan

  • If validations evolve, update tests to match new error codes or rules.
  • Roll back by reverting the added tests and checklist update if needed.

Dependency rationale

  • No new dependencies added.

190: Hash identity derivation tests

  • Status: Accepted
  • Date: 2026-02-06
  • Context:
    • ERD hash identity rules require deterministic magnet hash derivation with a strict precedence order.
    • Stored-proc behavior existed but lacked focused tests to lock in the precedence and normalization rules.
  • Decision:
    • Add stored-proc tests for derive_magnet_hash to confirm infohash v2 precedence, infohash v1 fallback, and magnet URI normalization.
    • Mark the ERD checklist item for hash identity derivation as complete once coverage is in place.
  • Consequences:
    • Positive outcomes:
      • Hash derivation precedence and normalization rules are exercised directly in CI.
      • Future regressions in magnet hash derivation will surface quickly.
    • Risks or trade-offs:
      • Minor increase in normalization test runtime.
  • Follow-up:
    • Extend normalization coverage if additional hash identity rules are added.

Motivation

  • Ensure ERD-mandated hash identity rules are verified by stored-proc tests.
  • Lock in precedence for infohash v2, infohash v1, and normalized magnet inputs.

Design notes

  • Added normalization tests that compare derivation results across input combinations.
  • Verified that normalized and raw magnet URIs produce identical hash outputs when no infohash is present.
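The precedence order under test can be sketched in Rust. The actual derivation lives in a stored procedure; the function and parameter names here are illustrative, and the magnet parsing is simplified to the `xt=urn:btih:` parameter.

```rust
/// Derive the canonical magnet hash with strict precedence:
/// infohash v2, then infohash v1, then the normalized magnet URI.
fn derive_magnet_hash(
    infohash_v2: Option<&str>,
    infohash_v1: Option<&str>,
    magnet_uri: Option<&str>,
) -> Option<String> {
    if let Some(v2) = infohash_v2 {
        return Some(v2.to_ascii_lowercase());
    }
    if let Some(v1) = infohash_v1 {
        return Some(v1.to_ascii_lowercase());
    }
    magnet_uri.and_then(|uri| {
        // Normalization: extract the btih parameter and lowercase it, so raw
        // and normalized URIs yield identical hashes.
        uri.split(&['?', '&'][..])
            .find_map(|part| part.strip_prefix("xt=urn:btih:"))
            .map(|hash| hash.to_ascii_lowercase())
    })
}

fn main() {
    // v2 wins even when v1 is present.
    assert_eq!(
        derive_magnet_hash(Some("ABC2"), Some("abc1"), None).as_deref(),
        Some("abc2")
    );
    // Magnet fallback normalizes case.
    assert_eq!(
        derive_magnet_hash(None, None, Some("magnet:?xt=urn:btih:DEADBEEF")).as_deref(),
        Some("deadbeef")
    );
    println!("hash precedence sketch ok");
}
```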

Test coverage summary

  • Added three derive_magnet_hash tests in revaer-data normalization helpers.
  • Will run just ci, just build-release, and just ui-e2e before hand-off.

Observability updates

  • No new telemetry or metrics changes.

Risk & rollback plan

  • If derivation logic changes, update the tests to match the new rule set.
  • Roll back by reverting the normalization tests and checklist update if needed.

Dependency rationale

  • No new dependencies added.

191: Rate limit state purge test

  • Status: Accepted
  • Date: 2026-02-06
  • Context:
    • ERD rules require rate_limit_state to purge minute buckets older than six hours.
    • The purge job existed but lacked a focused test to confirm retention behavior.
  • Decision:
    • Add a stored-proc test that inserts old and recent rate_limit_state rows and verifies purge behavior.
    • Mark the ERD checklist item for rate_limit_state purging as complete once coverage is in place.
  • Consequences:
    • Positive outcomes:
      • Retention behavior is enforced in CI for rate_limit_state cleanup.
      • Prevents regressions that could bloat rate_limit_state or delete fresh buckets.
    • Risks or trade-offs:
      • Slightly longer indexer job test runtime due to extra database setup.
  • Follow-up:
    • Add more retention tests if additional job runners adopt similar cleanup rules.

Motivation

  • Ensure ERD-mandated rate limit retention rules are verified by stored-proc tests.
  • Provide deterministic coverage for the six-hour purge window.

Design notes

  • Inserted two rate_limit_state rows with window_start older and newer than six hours.
  • Verified that job_run_rate_limit_state_purge removes only the stale row.
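The retention arithmetic the test exercises looks like this. The real purge runs inside `job_run_rate_limit_state_purge` in SQL; this std-only sketch just mirrors the six-hour window rule over `window_start` timestamps.

```rust
use std::time::{Duration, SystemTime};

const RETENTION: Duration = Duration::from_secs(6 * 60 * 60); // six-hour window

/// Keep only buckets whose window_start is within the retention window.
fn purge_stale(buckets: Vec<SystemTime>, now: SystemTime) -> Vec<SystemTime> {
    buckets
        .into_iter()
        .filter(|window_start| {
            now.duration_since(*window_start)
                .map(|age| age <= RETENTION)
                .unwrap_or(true) // future-dated buckets are kept, not purged
        })
        .collect()
}

fn main() {
    let now = SystemTime::now();
    let stale = now - Duration::from_secs(7 * 60 * 60); // older than six hours
    let fresh = now - Duration::from_secs(60);
    let kept = purge_stale(vec![stale, fresh], now);
    assert_eq!(kept.len(), 1);
    assert_eq!(kept[0], fresh);
    println!("purge sketch ok");
}
```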

Test coverage summary

  • Added a targeted job runner test in revaer-data for rate_limit_state purging.
  • Will run just ci, just build-release, and just ui-e2e before hand-off.

Observability updates

  • No new telemetry or metrics changes.

Risk & rollback plan

  • If retention windows change, update the test timestamps to match the new rules.
  • Roll back by reverting the added test and checklist update if needed.

Dependency rationale

  • No new dependencies added.

192: Job schedule completion updates

  • Status: Accepted
  • Date: 2026-02-07
  • Context:
    • Motivation: enforce ERD job_schedule completion semantics (next_run_at + lock cleanup) for indexer jobs.
    • Constraints: stored-procedure-only runtime access, constant error messages, versioned procs with stable wrappers.
  • Decision:
    • Add job_schedule_mark_completed_v1 and job_run_*_v2 wrappers to update last_run_at, next_run_at, and clear locks on both success and failure.
    • Keep job_run_reputation_rollup signature stable by mapping window_key to job_key in-proc.
    • Alternatives considered: update next_run_at in job_claim_next (rejected; ERD mandates update on completion) and update schedule in app runner (rejected; DB is SSOT).
  • Consequences:
    • Positive outcomes: job_schedule rows now reflect completion cadence with jitter and lock cleanup per ERD.
    • Risks or trade-offs: if job_schedule_mark_completed_v1 fails, job errors are surfaced as schedule update failures.
  • Follow-up:
    • Verify any future job runner wiring calls job_run_* wrappers (not versioned functions directly).
    • Review checkpoint: confirm Phase 9 checklist remains aligned with ERD job cadence rules.
  • Test coverage summary:
    • Added a stored-proc test asserting job_run_retention_purge updates schedule timestamps and clears locks.
  • Observability updates:
    • None (database-only change).
  • Risk & rollback plan:
    • Roll back by reverting migration 0084 and restoring job_run_* wrappers to v1.
  • Dependency rationale:
    • No new dependencies. Alternatives considered: not applicable.
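The completion semantics this ADR enforces can be summarized in a small sketch: advance `next_run_at` by the cadence plus bounded jitter and clear the lock, on success and failure alike. The real update happens in `job_schedule_mark_completed_v1`; the struct, the 10% jitter bound, and the caller-supplied jitter fraction are assumptions made to keep the sketch deterministic.

```rust
use std::time::{Duration, SystemTime};

struct Schedule {
    last_run_at: Option<SystemTime>,
    next_run_at: SystemTime,
    locked_until: Option<SystemTime>,
}

fn mark_completed(s: &mut Schedule, now: SystemTime, cadence: Duration, jitter_frac: f64) {
    // Assumed bound: jitter of at most 10% of the cadence.
    let jitter = cadence.mul_f64(jitter_frac.clamp(0.0, 0.1));
    s.last_run_at = Some(now);
    s.next_run_at = now + cadence + jitter;
    s.locked_until = None; // lock cleanup on both success and failure
}

fn main() {
    let now = SystemTime::now();
    let mut s = Schedule {
        last_run_at: None,
        next_run_at: now,
        locked_until: Some(now),
    };
    mark_completed(&mut s, now, Duration::from_secs(300), 0.05);
    assert!(s.locked_until.is_none());
    assert!(s.next_run_at >= now + Duration::from_secs(300));
    println!("schedule completion sketch ok");
}
```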

193: Job claim locking and lease durations

  • Status: Accepted
  • Date: 2026-02-14
  • Context:
    • Motivation: close ERD job_schedule gaps for claim semantics and per-job lease durations.
    • Constraints: stored-procedure-only runtime DB access, constant error messages, migration-safe CREATE OR REPLACE updates.
  • Decision:
    • Add job_claim_lease_seconds_v1 (+ stable wrapper) as the single lease-duration mapping source for all job_key values.
    • Update job_claim_next_v1 to acquire advisory lock before reading job_schedule, then validate due/locked/enabled state and set locked_until using job_claim_lease_seconds_v1.
    • Alternatives considered: keep inline CASE mapping in job_claim_next_v1 (rejected: duplicated lease logic), and rely on pre-lock state checks only (rejected: race window for stale schedule reads).
  • Consequences:
    • Positive outcomes: claim flow now aligns with ERD advisory-lock + locked_until semantics and applies lease durations from one canonical mapping.
    • Risks or trade-offs: claim function now returns job_locked earlier when advisory lock contention exists, which may mask other validation details during concurrent claims.
  • Follow-up:
    • Keep new lease mapping synchronized if job_key enum values change.
    • Review checkpoint: verify scheduler/executor callers consume job_claim_next errors without re-logging.
  • Test coverage summary:
    • Added stored-proc integration tests for job_claim_next not-due and locked failures.
    • Added per-job lease-duration assertion test covering all seeded job_key values.
  • Observability updates:
    • None (DB procedure behavior + tests only).
  • Risk & rollback plan:
    • Roll back by reverting migration 0085_indexer_job_claim_locking.sql and restoring prior job_claim_next_v1 logic.
  • Dependency rationale:
    • No new dependencies. Alternatives considered: not applicable.
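
The claim flow above reduces to two pieces: a single lease-duration mapping and a due/locked/enabled check performed after the advisory lock is held. The sketch below is a hypothetical Rust analogue of job_claim_lease_seconds_v1 and the post-lock validation in job_claim_next_v1; the job keys and durations shown are illustrative, not the production values.

```rust
/// Single canonical lease-duration mapping (stand-in for
/// job_claim_lease_seconds_v1). Values here are made up for illustration.
fn lease_seconds(job_key: &str) -> u64 {
    match job_key {
        "retention_purge" => 600,
        "policy_snapshot_gc" => 600,
        "reputation_rollup_1h" => 300,
        "connectivity_profile_refresh" => 300,
        _ => 300, // default lease for keys not listed here
    }
}

/// Post-advisory-lock validation: the schedule must be enabled, due,
/// and not currently locked before locked_until is set from the mapping.
fn can_claim(now: u64, next_run_at: u64, locked_until: Option<u64>, enabled: bool) -> bool {
    enabled && next_run_at <= now && locked_until.map_or(true, |t| t <= now)
}

fn main() {
    assert!(can_claim(100, 90, None, true));
    assert!(!can_claim(100, 90, Some(150), true)); // still locked
    assert!(!can_claim(100, 110, None, true));     // not due yet
    assert!(!can_claim(100, 90, None, false));     // disabled
    let lease = lease_seconds("retention_purge");
    println!("on claim: locked_until = now + {lease}s");
}
```

Reading the schedule only after the advisory lock is acquired closes the race window the ADR names: without it, two claimers could both see an unlocked, due row.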

194 Policy snapshot GC ordering

  • Status: Accepted
  • Date: 2026-02-15
  • Context:
    • Motivation: enforce ERD ordering that policy snapshot refcount repair runs before policy snapshot GC.
    • Constraints: runtime DB operations must remain stored-procedure based with stable wrappers and constant error messages.
  • Decision:
    • Update job_run_policy_snapshot_gc_v2 to invoke job_run_policy_snapshot_refcount_repair_v1 before job_run_policy_snapshot_gc_v1.
    • Keep job_schedule completion behavior unchanged so policy_snapshot_gc still advances cadence and clears locks in one place.
    • Alternatives considered: scheduler-only ordering in application code (rejected: ordering belongs in DB job semantics) and changing v1 procedures directly (rejected: keep compatibility and add behavior through versioned wrappers).
  • Consequences:
    • Positive outcomes: stale policy_snapshot.ref_count values are repaired before GC evaluation, preventing orphaned old snapshots from being retained incorrectly.
    • Risks or trade-offs: policy_snapshot_gc runtime now includes repair cost; daily cadence keeps this acceptable.
  • Follow-up:
    • Maintain this ordering if future policy_snapshot maintenance jobs are introduced.
    • Review checkpoint: ensure callers continue to execute job_run_policy_snapshot_gc/job_run_policy_snapshot_gc_v2, not direct table mutations.
  • Test coverage summary:
    • Added integration test proving job_run_policy_snapshot_gc repairs stale ref_count values before deleting old snapshots.
  • Observability updates:
    • None (stored-procedure ordering change only).
  • Risk & rollback plan:
    • Roll back by reverting migration 0086_indexer_policy_snapshot_gc_ordering.sql.
  • Dependency rationale:
    • No new dependencies. Alternatives considered: not applicable.

195 Retention purge context cleanup

  • Status: Accepted
  • Date: 2026-02-15
  • Context:
    • Motivation: complete ERD retention semantics for search-request scoped rows by purging context score tables together with expired search requests.
    • Constraints: retention behavior must remain in stored procedures and use constant error messages.
  • Decision:
    • Update job_run_retention_purge_v1 to delete canonical_torrent_source_context_score and canonical_torrent_best_source_context rows where context_key_type='search_request' and context_key_id belongs to purged requests.
    • Keep existing retention windows and table purges unchanged for outbound logs, RSS seen rows, conflicts, conflict audits, health events, and source reputation.
    • Add an integration test covering retention windows and search-request context cleanup in one execution path.
    • Alternatives considered: relying only on application-side cleanup (rejected: retention ownership is database-side) and leaving context rows durable (rejected: violates ERD retention rules).
  • Consequences:
    • Positive outcomes: search-request context score tables no longer retain stale rows after request retention purges; policy snapshot ref_count and policy-set cleanup remain coherent.
    • Risks or trade-offs: retention job touches two additional tables, increasing delete work during purge runs.
  • Follow-up:
    • Keep new search-request context rows scoped to context_key_type='search_request' so retention cleanup remains deterministic.
    • Validate future retention migrations against the ERD retention table list before release.
  • Test coverage summary:
    • Added job_run_retention_purge_applies_table_windows to verify old-vs-recent retention behavior across all configured operational tables plus search-request context score cleanup.
  • Observability updates:
    • None (database retention behavior change only).
  • Risk & rollback plan:
    • Roll back by reverting migration 0087_indexer_retention_purge_context_cleanup.sql.
  • Dependency rationale:
    • No new dependencies. Alternatives considered: not applicable.
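
The added cleanup step can be sketched as a filter over context rows: a row is deleted only when it is scoped to context_key_type='search_request' and its context_key_id belongs to a purged request. This is a hypothetical in-memory analogue of the delete added to job_run_retention_purge_v1, not the migration SQL.

```rust
use std::collections::HashSet;

/// Illustrative stand-in for a context score row.
struct ContextRow {
    context_key_type: &'static str,
    context_key_id: u64,
}

/// Keep every row except search_request-scoped rows whose request was purged.
fn purge_context_rows(rows: Vec<ContextRow>, purged: &HashSet<u64>) -> Vec<ContextRow> {
    rows.into_iter()
        .filter(|r| {
            !(r.context_key_type == "search_request" && purged.contains(&r.context_key_id))
        })
        .collect()
}

fn main() {
    let purged = HashSet::from([7]);
    let rows = vec![
        ContextRow { context_key_type: "search_request", context_key_id: 7 }, // purged
        ContextRow { context_key_type: "search_request", context_key_id: 8 }, // retained
    ];
    let kept = purge_context_rows(rows, &purged);
    assert_eq!(kept.len(), 1);
    assert_eq!(kept[0].context_key_id, 8);
}
```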

196 Indexer connectivity profile refresh rollups

  • Status: Accepted
  • Date: 2026-02-15
  • Context:
    • Motivation: complete the ERD connectivity rollup behavior for job_run_connectivity_profile_refresh_v1 so indexer_connectivity_profile is derived from outbound_request_log with the required thresholds and request-type scope.
    • Constraints: runtime logic must remain in stored procedures, no inline runtime SQL, no new dependencies, and tests must run through existing Rust data-layer harnesses.
  • Decision:
    • Added migration 0088_indexer_connectivity_profile_rollup_rules.sql to redefine job_run_connectivity_profile_refresh_v1.
    • Rollups now aggregate only request types (caps, search, tvsearch, moviesearch, rss, probe), exclude rate_limited from samples, and treat success as outcome='success' AND parse_ok=true.
    • Status scoring now follows ERD thresholds with explicit failing precedence for success_rate_1h < 0.90 and dominant failure classes in (auth_error, cf_challenge, tls, dns).
    • Added quarantine handling refinements: persistent failing + CF/auth/429 burst transitions to quarantined; post-cooldown healthy rollups recover to degraded while preserving prior error class context.
    • Added job-runner tests in crates/revaer-data/src/indexers/jobs.rs for no-sample defaults, low-success failure classification, persistent auth quarantine, and quarantine cooldown recovery.
    • Alternatives considered: keeping previous status logic (rejected: low-success cases could remain degraded, which conflicts with ERD failing rules) and handling quarantine transitions in application code (rejected: ERD assigns this behavior to stored-procedure rollups).
  • Consequences:
    • Positive outcomes: connectivity snapshots align with ERD sample definitions and threshold semantics; rollups update every active indexer row, including no-sample degraded state.
    • Risks or trade-offs: stricter failing/quarantine classifications can change operational status sooner than previous behavior; large outbound log windows still require efficient indexing.
  • Follow-up:
    • Implement the remaining Phase 9 rollup jobs (reputation_rollup_*, canonical_backfill_best_source, base_score_refresh_recent) and extend job tests for those procedures.
    • Revisit schema-level indexer_connectivity_profile constraint hardening if we want DB-level enforcement of non-null error_class for non-healthy statuses.
  • Design notes:
    • Status resolution is now two-stage (status_resolved then final) so non-healthy statuses can preserve prior error class context without relying on base-status assumptions.
    • Indexer scope is anchored to active indexer_instance rows (deleted_at IS NULL) so connectivity refresh is deterministic even without recent request samples.
  • Test coverage summary:
    • Added:
      • job_run_connectivity_profile_refresh_upserts_degraded_without_samples
      • job_run_connectivity_profile_refresh_marks_low_success_as_failing
      • job_run_connectivity_profile_refresh_quarantines_persistent_auth_failures
      • job_run_connectivity_profile_refresh_recovers_quarantine_to_degraded_after_cooldown
  • Observability updates:
    • None (stored-procedure behavior change only; no new telemetry surface).
  • Risk & rollback plan:
    • Roll back by reverting migration 0088_indexer_connectivity_profile_rollup_rules.sql and rerunning migration tooling in a rollback deployment.
  • Dependency rationale:
    • No new dependencies. Alternatives considered: not applicable.
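
The failing-precedence rule can be sketched as a small classifier. This is a hypothetical simplification of the rollup's status scoring: it shows only the no-sample default, the success-rate threshold, and the hard-failure classes named above, and elides the sample-based degraded band and the quarantine transitions.

```rust
#[derive(Debug, PartialEq)]
enum ConnStatus {
    Healthy,
    Degraded,
    Failing,
}

/// Failing takes precedence when the 1h success rate drops below 0.90
/// or the dominant failure class is a hard-failure class.
fn classify(success_rate_1h: Option<f64>, dominant_failure: Option<&str>) -> ConnStatus {
    let hard = ["auth_error", "cf_challenge", "tls", "dns"];
    match success_rate_1h {
        // No samples in the window: default to degraded, per the no-sample rule.
        None => ConnStatus::Degraded,
        Some(rate) if rate < 0.90 => ConnStatus::Failing,
        Some(_) if dominant_failure.map_or(false, |c| hard.contains(&c)) => ConnStatus::Failing,
        Some(_) => ConnStatus::Healthy,
    }
}

fn main() {
    assert_eq!(classify(None, None), ConnStatus::Degraded);
    assert_eq!(classify(Some(0.85), None), ConnStatus::Failing);
    assert_eq!(classify(Some(0.99), Some("auth_error")), ConnStatus::Failing);
    assert_eq!(classify(Some(0.99), Some("timeout")), ConnStatus::Healthy);
}
```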

197 Reputation rollup sample thresholds

  • Status: Accepted
  • Date: 2026-02-15
  • Context:
    • Motivation: complete ERD reputation rollup behavior for job_run_reputation_rollup_v1 so source_reputation only records trusted windows and uses ERD sample semantics.
    • Constraints: runtime data access stays in stored procedures, no new dependencies, and behavior must be validated through existing data-layer tests.
  • Decision:
    • Added migration 0089_indexer_reputation_rollup_thresholds.sql to redefine job_run_reputation_rollup_v1.
    • Request samples now use outbound_request_log rows for request types (caps, search, tvsearch, moviesearch, rss, probe), exclude error_class=rate_limited, and count success as outcome='success' AND parse_ok=true.
    • Rollups now upsert only for eligible windows (request_count >= 30 or acquisition_count >= 10) per ERD trusted-sample thresholds.
    • Rollups are scoped to active indexers (indexer_instance.deleted_at IS NULL) and preserve fake/dmca numerator semantics from acquisition attempts plus reported_fake actions.
    • Added job-runner tests in crates/revaer-data/src/indexers/jobs.rs for insufficient-sample skip behavior and eligible-window rate calculations.
    • Alternatives considered: retaining previous always-upsert behavior (rejected: violates ERD “sufficient samples” requirement) and computing trust filtering in Rust instead of SQL (rejected: rollup ownership is database-side).
  • Consequences:
    • Positive outcomes: source_reputation rows now align with ERD trust gating and sample definitions, reducing noisy low-sample rollups.
    • Risks or trade-offs: sparse indexers may not get fresh reputation rows until enough traffic exists, which can increase neutral scoring fallback frequency.
  • Follow-up:
    • Implement remaining Phase 9 derived refresh jobs and add dedicated tests for reputation windows 24h and 7d cadence behavior.
    • Revisit whether stale reputation rows should be actively pruned when an indexer drops below sample thresholds.
  • Design notes:
    • The procedure uses indexer_scope, combined, and eligible CTEs so threshold gating is explicit and unit-testable.
    • min_samples now records the active trust threshold (10 when acquisition-driven, otherwise 30).
  • Test coverage summary:
    • Added:
      • job_run_reputation_rollup_skips_insufficient_samples
      • job_run_reputation_rollup_writes_rates_for_eligible_samples
  • Observability updates:
    • None (stored-procedure behavior change only).
  • Risk & rollback plan:
    • Roll back by reverting migration 0089_indexer_reputation_rollup_thresholds.sql.
  • Dependency rationale:
    • No new dependencies. Alternatives considered: not applicable.
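
The trusted-sample gating and the min_samples bookkeeping can be sketched directly from the thresholds above. A hypothetical analogue, assuming acquisition-driven eligibility is checked first so min_samples records the active threshold:

```rust
/// Returns Some(min_samples) when the window clears an ERD threshold,
/// or None when the rollup should skip the upsert entirely.
fn eligibility(request_count: u64, acquisition_count: u64) -> Option<u64> {
    if acquisition_count >= 10 {
        Some(10) // acquisition-driven threshold
    } else if request_count >= 30 {
        Some(30) // request-driven threshold
    } else {
        None // insufficient samples: no source_reputation row is written
    }
}

fn main() {
    assert_eq!(eligibility(5, 2), None);      // skipped: below both thresholds
    assert_eq!(eligibility(45, 0), Some(30)); // request-driven window
    assert_eq!(eligibility(0, 12), Some(10)); // acquisition-driven window
}
```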

198 Canonical refresh durable source cadence

  • Status: Accepted
  • Date: 2026-02-15
  • Context:
    • Motivation: complete ERD-derived cadence behavior for job_run_canonical_backfill_best_source_v1 and job_run_base_score_refresh_recent_v1.
    • Constraints: keep runtime DB behavior inside stored procedures, use no new dependencies, and verify behavior through revaer-data job tests.
  • Decision:
    • Added migration 0090_indexer_base_score_refresh_durable_sources.sql to redefine both job procedures.
    • job_run_base_score_refresh_recent_v1 now derives candidate canonical/source pairs directly from durable canonical_torrent_source.last_seen_at >= now()-7d instead of canonical_torrent_source_context_score.
    • job_run_base_score_refresh_recent_v1 now recomputes global winners for canonicals with durable-source activity in the last 7 days.
    • job_run_canonical_backfill_best_source_v1 now treats “recent canonicals” as canonicals with at least one durable source seen in the last 7 days, while retaining no-winner and low-confidence fallback backfill paths.
    • Added revaer-data job tests for durable-source-only base score refresh and recent durable-source backfill recomputation behavior.
    • Alternatives considered: keep context-scoped candidate selection (rejected: conflicts with ERD durable-source cadence requirement) and rely on ingest-time recompute only (rejected: ERD explicitly assigns hourly refresh to the job).
  • Consequences:
    • Positive outcomes: base score refresh and global best-source backfill now align with ERD “durable source last_seen_at” semantics.
    • Risks or trade-offs: broader durable-source candidate scans may increase hourly job work on large datasets.
  • Follow-up:
    • Implement canonical_prune_low_confidence checklist item and add focused tests for prune eligibility edge cases.
    • Validate production indexes for durable-source cadence queries as data volume increases.
  • Design notes:
    • The refresh pipeline remains deterministic: compute base scores first, then recompute global winners for the same durable-source candidate set.
    • Backfill keeps low-confidence safety behavior while adding durable-source recency as the primary cadence signal.
  • Test coverage summary:
    • Added:
      • job_run_base_score_refresh_recent_uses_durable_source_activity
      • job_run_canonical_backfill_best_source_recomputes_recent_durable_sources
  • Observability updates:
    • None (stored-procedure behavior change only).
  • Risk & rollback plan:
    • Roll back by reverting migration 0090_indexer_base_score_refresh_durable_sources.sql.
  • Dependency rationale:
    • No new dependencies. Alternatives considered: not applicable.

199 Canonical prune source-link policy alignment

  • Status: Accepted
  • Date: 2026-02-18
  • Context:
    • canonical_prune_low_confidence_v1 needed to match ERD_INDEXERS.md pruning semantics for low-confidence fallback canonicals.
    • The ERD requires preserving candidates when their durable sources are also tied to non-pruned canonicals.
    • Existing logic inferred source ties via identity joins, which could diverge from persisted canonical/source linkage used by scoring and best-source derivations.
  • Decision:
    • Redefine canonical_prune_low_confidence_v1 to evaluate source ties from persisted canonical/source linkage tables:
      • canonical_torrent_source_base_score
      • canonical_torrent_source_context_score
      • canonical_torrent_best_source_global
      • canonical_torrent_best_source_context
    • Keep existing candidate eligibility guards:
      • title_size_fallback with identity_confidence <= 0.60
      • created_at older than 30 days
      • no acquisition attempts by canonical ID or hashes
      • no user_result_action with selected or downloaded
    • Prune only candidates whose linked sources are not tied to any non-candidate canonical.
    • Alternatives considered:
      • Keep identity-join inference only: rejected because it does not consistently reflect persisted canonical/source ties.
      • Add a new canonical-source mapping table: deferred to avoid schema expansion in this step.
  • Consequences:
    • Positive outcomes:
      • Pruning behavior now aligns with ERD source-linkage policy.
      • Candidate groups linked only to other candidates can be pruned together.
      • Candidates sharing sources with non-candidates are retained.
    • Risks or trade-offs:
      • Legacy rows without persisted link-table ties may be treated as having no source links.
  • Follow-up:
    • Implementation tasks:
      • Add migration redefining canonical_prune_low_confidence_v1 linkage checks.
      • Add data-layer tests for prune/retain group behavior.
      • Mark checklist step complete.
    • Review checkpoints:
      • just ci
      • just ui-e2e
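
The linkage rule reduces to a set computation: collect every source tied to a non-candidate canonical, then prune only the candidates whose linked sources are disjoint from that set. The sketch below is a hypothetical in-memory analogue of the redefined procedure (with the separate eligibility guards assumed to have already been applied); note that rows with no persisted links come out prunable, mirroring the legacy-row trade-off above.

```rust
use std::collections::{HashMap, HashSet};

/// `links` maps canonical id -> linked source ids, as persisted in the
/// canonical/source link tables. Returns the prunable candidate ids.
fn prunable(candidates: &HashSet<u64>, links: &HashMap<u64, HashSet<u64>>) -> HashSet<u64> {
    // Sources held by any non-candidate canonical are protected.
    let protected: HashSet<u64> = links
        .iter()
        .filter(|(canonical, _)| !candidates.contains(canonical))
        .flat_map(|(_, sources)| sources.iter().copied())
        .collect();
    candidates
        .iter()
        .copied()
        // No links at all counts as "no tie to a non-candidate" here.
        .filter(|c| links.get(c).map_or(true, |srcs| srcs.is_disjoint(&protected)))
        .collect()
}

fn main() {
    let mut links = HashMap::new();
    links.insert(1, HashSet::from([100]));      // candidate sharing source 100
    links.insert(2, HashSet::from([200]));      // candidate linked only to candidates
    links.insert(3, HashSet::from([100, 300])); // non-candidate holding source 100
    let candidates = HashSet::from([1, 2]);
    let pruned = prunable(&candidates, &links);
    assert!(pruned.contains(&2));  // prunable: sources tied only to candidates
    assert!(!pruned.contains(&1)); // retained: shares a source with a non-candidate
}
```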

200 RSS poll and subscription backfill workflows

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md requires rss_poll and rss_subscription_backfill workflows with strict eligibility, retry/disable behavior, and job schedule completion semantics.
    • Existing stored procedures were in place, but coverage did not prove the v1 behavioral requirements end-to-end from the Rust data-access layer.
    • Phase 9 checklist item remained open until those workflows were validated through representative job and executor paths.
  • Decision:
    • Add data-layer tests in revaer-data for RSS claim/apply and backfill job execution:
      • rss_poll_claim returns only due, enabled subscriptions for enabled RSS-capable instances.
      • successful rss_poll_apply updates subscription state and deduplicates item inserts.
      • non-retryable rss_poll_apply disables subscription and writes the expected config audit record.
      • job_run_rss_subscription_backfill creates missing rows, applies enable/disable state, marks maintenance completion, and disables its own schedule.
      • backfill job no-ops once maintenance completion is present.
    • Keep implementation dependency-free (no new crates).
    • Alternatives considered:
      • Validate only through SQL migration tests: rejected because data-layer contract behavior could still drift.
      • Validate only via API/E2E: rejected because it obscures SP-level failure modes and slows iteration.
  • Consequences:
    • Positive outcomes:
      • RSS workflows are now verified against ERD-required behavior at the stored-proc integration boundary.
      • Regression risk for subscription claim/apply and one-time backfill scheduling is reduced.
    • Risks or trade-offs:
      • Tests rely on fixture DB state and must keep helper inserts aligned with table constraints.
  • Follow-up:
    • Implementation tasks:
      • Mark Phase 9 RSS workflow checklist item complete.
      • Continue with the next Phase 9 derived refresh timing/caching checklist item.
    • Review checkpoints:
      • just ci
      • just ui-e2e

201 RSS scheduling, backoff, and dedupe validation

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md requires RSS polling behavior to enforce scheduling jitter, retry backoff, and item deduplication.
    • Existing tests covered claim filtering, successful dedupe insertion, and non-retryable auto-disable, but did not assert retry-backoff growth or success scheduling bounds.
    • The Phase 7 checklist item remained open until these behavioral rules were validated.
  • Decision:
    • Extend revaer-data executor tests for RSS apply behavior:
      • Add retryable failure assertions proving exponential backoff progression (60s then 120s), preserved subscription enablement, and persisted error class.
      • Add successful apply scheduling assertions proving next poll is interval-based with bounded jitter (900..=960 seconds).
    • Keep implementation dependency-free (no new crates).
    • Alternatives considered:
      • Mark checklist complete based on procedure inspection only: rejected because behavior needs executable regression checks.
      • Add only migration-level SQL tests: rejected because data-layer API contract could still drift.
  • Consequences:
    • Positive outcomes:
      • RSS retry cadence and schedule jitter are now validated at the Rust data-access boundary.
      • ERD behavioral requirements for scheduling/backoff/dedupe have concrete regression coverage.
    • Risks or trade-offs:
      • Tests rely on time windows and may need updates if ERD cadence constants change.
  • Follow-up:
    • Implementation tasks:
      • Mark Phase 7 RSS scheduling/backoff/dedupe checklist item complete.
      • Continue with the next unchecked ERD indexer implementation item.
    • Review checkpoints:
      • just ci
      • just ui-e2e
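
The cadence rules the tests assert can be sketched as two small functions. This is a hypothetical reading of the asserted behavior, assuming a 60-second backoff base that doubles per consecutive retryable failure and a 900-second poll interval with up to 60 seconds of jitter (giving the 900..=960 bound).

```rust
/// Exponential retry backoff: 1st retry 60s, 2nd 120s, then doubling,
/// with the shift clamped to avoid overflow.
fn retry_backoff_seconds(consecutive_failures: u32) -> u64 {
    60u64 << consecutive_failures.saturating_sub(1).min(10)
}

/// After a successful apply, the next poll lands in
/// [interval, interval + max_jitter].
fn next_poll_window(interval_seconds: u64, max_jitter_seconds: u64) -> (u64, u64) {
    (interval_seconds, interval_seconds + max_jitter_seconds)
}

fn main() {
    assert_eq!(retry_backoff_seconds(1), 60);
    assert_eq!(retry_backoff_seconds(2), 120);
    let (lo, hi) = next_poll_window(900, 60);
    assert_eq!((lo, hi), (900, 960)); // matches the asserted 900..=960 bound
}
```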

202 Rate limit token bucket and RSS rate-limited semantics

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md defines token bucket enforcement via rate_limit_try_consume_v1 and explicit rate-limited failure semantics for rss_poll_apply_v1.
    • Existing tests did not fully verify capacity-deny behavior, invalid input guards, and RSS rate-limited logging/backoff behavior from the Rust data boundary.
    • Phase 7 checklist still had rate-limit enforcement unchecked.
  • Decision:
    • Add revaer-data tests to verify:
      • token bucket capacity enforcement (allowed then deny without over-consuming tokens);
      • invalid token bucket inputs return expected error details (capacity_invalid, tokens_invalid);
      • RSS rate_limited failures require rate_limit_denied_scope;
      • RSS rate_limited failures use retry path semantics (backoff scheduling) and force outbound log counters to latency_ms=0 and result_count=0.
    • Keep implementation dependency-free (no new crates).
    • Alternatives considered:
      • Rely on migration inspection only: rejected because runtime contracts can regress without executable checks.
      • Cover only via API tests: rejected because proc-level behavior is more directly and deterministically exercised in revaer-data.
  • Consequences:
    • Positive outcomes:
      • Token bucket behavior and RSS rate-limited semantics are now regression-tested.
      • The checklist item for rate-limit rule enforcement can be marked complete.
    • Risks or trade-offs:
      • Time-window assertions rely on bounded jitter assumptions and may need adjustment if ERD timing constants change.
  • Follow-up:
    • Implementation tasks:
      • Continue with the next unchecked ERD indexers item (Cloudflare transitions/mitigation behavior).
    • Review checkpoints:
      • just ci
      • just ui-e2e
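
The tested token-bucket contract can be sketched as an in-memory analogue of rate_limit_try_consume_v1: validate inputs with the constant error details named above, refill by elapsed time, and deny without over-consuming when tokens run out. Field names and refill math are illustrative, not the procedure's actual shape.

```rust
struct Bucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill_at: f64, // seconds
}

fn try_consume(b: &mut Bucket, now: f64, cost: f64) -> Result<bool, &'static str> {
    // Input guards mirror the constant error details asserted in the tests.
    if b.capacity <= 0.0 {
        return Err("capacity_invalid");
    }
    if cost <= 0.0 {
        return Err("tokens_invalid");
    }
    // Refill for elapsed time, clamped to capacity, before deciding.
    let elapsed = (now - b.last_refill_at).max(0.0);
    b.tokens = (b.tokens + elapsed * b.refill_per_sec).min(b.capacity);
    b.last_refill_at = now;
    if b.tokens >= cost {
        b.tokens -= cost; // consume only on allow
        Ok(true)
    } else {
        Ok(false) // deny without over-consuming tokens
    }
}

fn main() {
    let mut b = Bucket { capacity: 2.0, tokens: 2.0, refill_per_sec: 0.0, last_refill_at: 0.0 };
    assert_eq!(try_consume(&mut b, 0.0, 1.0), Ok(true));
    assert_eq!(try_consume(&mut b, 0.0, 1.0), Ok(true));
    assert_eq!(try_consume(&mut b, 0.0, 1.0), Ok(false)); // capacity exhausted
    assert_eq!(try_consume(&mut b, 0.0, 0.0), Err("tokens_invalid"));
}
```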

203 Cloudflare state transition and mitigation validation

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md requires explicit Cloudflare transition behavior around RSS polling failures, including challenged/cooldown progression and retryability semantics tied to FlareSolverr availability.
    • Existing tests did not verify the cf_challenge transition paths in rss_poll_apply_v1 from the Rust data boundary.
    • Phase 7 still had Cloudflare transition/mitigation behavior unchecked.
  • Decision:
    • Extend revaer-data RSS executor tests to validate:
      • non-retryable cf_challenge creates/updates indexer_cf_state to challenged with incremented failures;
      • repeated non-retryable cf_challenge transitions to cooldown at five consecutive failures with backoff;
      • retryable cf_challenge (cf_retryable=true) follows retry semantics without applying CF state transition updates.
    • Keep implementation dependency-free (no new crates).
    • Alternatives considered:
      • Rely only on procedure inspection: rejected because transition regressions are easy to miss without executable checks.
      • Cover only via API/E2E: rejected because proc-level transition logic is most directly validated in data-layer tests.
  • Consequences:
    • Positive outcomes:
      • Core Cloudflare transition behavior in RSS polling now has regression coverage.
      • Checklist item for Cloudflare state transitions/mitigation can be marked complete.
    • Risks or trade-offs:
      • This validates data/procedure behavior; route-selection policy wiring remains a separate verification axis.
  • Follow-up:
    • Implementation tasks:
      • Continue with the next unchecked ERD item after Cloudflare/rate-limit/RSS rule coverage.
    • Review checkpoints:
      • just ci
      • just ui-e2e
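
The validated transitions form a small state machine. A hypothetical sketch of the cf_challenge rules above, assuming the cooldown threshold of five consecutive failures and that retryable challenges leave CF state untouched:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum CfState {
    Clear,
    Challenged,
    Cooldown,
}

/// Non-retryable challenges increment consecutive failures and mark the
/// indexer challenged, escalating to cooldown at five failures.
/// Retryable challenges follow retry semantics without a CF transition.
fn on_cf_challenge(state: CfState, failures: u32, retryable: bool) -> (CfState, u32) {
    if retryable {
        return (state, failures); // retry path only: CF state unchanged
    }
    let failures = failures + 1;
    if failures >= 5 {
        (CfState::Cooldown, failures)
    } else {
        (CfState::Challenged, failures)
    }
}

fn main() {
    assert_eq!(on_cf_challenge(CfState::Clear, 0, false), (CfState::Challenged, 1));
    assert_eq!(on_cf_challenge(CfState::Challenged, 4, false), (CfState::Cooldown, 5));
    assert_eq!(on_cf_challenge(CfState::Challenged, 2, true), (CfState::Challenged, 2));
}
```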

204 Policy snapshot reuse and refcount validation

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md requires policy snapshots to be reusable by hash and to track ref_count transactionally.
    • Search-request creation and retention purge logic depend on this behavior for correctness and GC safety.
    • Existing tests covered purge ordering and repair jobs, but did not directly verify snapshot reuse on create plus ref-count decrement on purge from the data boundary.
  • Decision:
    • Add revaer-data search-request tests to validate:
      • repeated search_request_create calls with identical effective policy inputs reuse the same policy_snapshot row and increment ref_count;
      • job_run_retention_purge decrements snapshot ref_count when an old finished search request is purged.
    • Keep implementation dependency-free (no new crates).
    • Alternatives considered:
      • rely on SQL review only: rejected because snapshot reuse/refcount regressions are subtle and need executable checks;
      • cover only through API integration tests: rejected because direct data-layer tests are faster and isolate proc behavior.
  • Consequences:
    • Positive outcomes:
      • snapshot reuse and ref-count tracking now have direct regression coverage;
      • Phase 7 checklist item for snapshot reuse/ref_count can be marked complete.
    • Risks or trade-offs:
      • tests manipulate finished timestamps to exercise retention windows and should be kept aligned with retention defaults.
  • Follow-up:
    • Implementation tasks:
      • Continue with the next unchecked Phase 7 behavioral rule.
    • Review checkpoints:
      • just ci
      • just ui-e2e
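
The reuse-by-hash and ref-count lifecycle being tested can be sketched as a tiny in-memory analogue: creates reuse the row keyed by the effective-policy hash and increment ref_count; purging a request decrements it, leaving zero-count rows for GC. The names and hash type here are illustrative.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for the policy_snapshot table,
/// keyed by a hash of the effective policy.
#[derive(Default)]
struct Snapshots {
    by_hash: HashMap<u64, u32>, // hash -> ref_count
}

impl Snapshots {
    /// search_request_create path: insert or reuse, then bump ref_count.
    fn acquire(&mut self, policy_hash: u64) -> u32 {
        let count = self.by_hash.entry(policy_hash).or_insert(0);
        *count += 1;
        *count
    }

    /// retention-purge path: decrement; GC may later delete zero-count rows.
    fn release(&mut self, policy_hash: u64) -> u32 {
        let count = self.by_hash.get_mut(&policy_hash).expect("snapshot exists");
        *count = count.saturating_sub(1);
        *count
    }
}

fn main() {
    let mut s = Snapshots::default();
    assert_eq!(s.acquire(0xabc), 1); // first create inserts the snapshot
    assert_eq!(s.acquire(0xabc), 2); // identical effective policy reuses it
    assert_eq!(s.release(0xabc), 1); // retention purge decrements
    assert_eq!(s.by_hash.len(), 1);  // still one row per distinct policy hash
}
```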

205 Policy snapshot GC acceptance coverage

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md requires policy snapshot reuse via hash, transactional ref_count tracking, and garbage collection for stale snapshots.
    • Existing job-level tests already verify GC behavior and refcount repair ordering, while the latest search-request tests verify create-time reuse and retention-time decrements.
    • The Phase 9 acceptance checklist item remained open despite complete executable coverage.
  • Decision:
    • Mark the acceptance item Policy snapshot reuse and GC rules match ERD complete in ERD_INDEXERS_CHECKLIST.md.
    • Keep GC/refcount verification at the data boundary using existing tests:
      • indexers::jobs::tests::job_run_policy_snapshot_gc_repairs_ref_count_before_delete
      • indexers::search_requests::tests::search_request_create_reuses_policy_snapshot_by_hash_and_increments_ref_count
      • indexers::search_requests::tests::retention_purge_decrements_policy_snapshot_ref_count
    • Alternatives considered:
      • add duplicate API-layer tests for the same behavior: rejected because stored-procedure tests already exercise authoritative behavior directly.
  • Consequences:
    • Positive outcomes:
      • acceptance checklist now matches implemented and tested ERD behavior;
      • policy snapshot lifecycle remains covered at create, purge, and GC phases.
    • Risks or trade-offs:
      • if snapshot lifecycle semantics change, tests and checklist mapping must be updated together.
  • Follow-up:
    • Implementation tasks:
      • Continue with the next unchecked acceptance and hard-blocker items.
    • Review checkpoints:
      • just ci
      • just ui-e2e

206 Derived refresh timing and caching validation

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md defines job cadence expectations for derived-table refresh workflows and expects deterministic refresh timing.
    • Existing tests validated specific refresh behavior (connectivity/reputation/base-score/canonical backfill), but did not explicitly validate seeded job_schedule.cadence_seconds against ERD timings.
    • Phase 9 still had Ensure derived tables refresh according to ERD timing and caching rules unchecked.
  • Decision:
    • Add a data-layer test in revaer-data to assert job_schedule cadence values for all indexer jobs that drive derived refresh and related maintenance windows.
    • Keep coverage dependency-free and proc-centric:
      • job_schedule_cadence_matches_erd_refresh_timing validates configured cadence seconds for refresh, rollup, GC, purge, and RSS schedules.
    • Alternatives considered:
      • infer timing correctness from runtime behavior only: rejected because explicit cadence drift can pass behavioral tests but violate ERD schedule requirements.
  • Consequences:
    • Positive outcomes:
      • ERD timing expectations are now executable and regression-safe;
      • derived refresh cadence drift will fail tests early.
    • Risks or trade-offs:
      • if ERD cadence values change, this test and migration seeds must be updated in the same change.
  • Follow-up:
    • Implementation tasks:
      • Continue with the next unchecked acceptance/hard-blocker item.
    • Review checkpoints:
      • just ci
      • just ui-e2e

207 Retention and rollup job window validation

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md requires retention and reputation rollup jobs to honor time-window semantics (1h, 24h, 7d) and retain deterministic aggregation behavior.
    • Existing tests covered retention purge and one-hour rollup behavior, but explicit coverage for 24h/7d boundary inclusion/exclusion was missing.
    • Phase 11 still listed Add job runner tests for retention and rollups as incomplete.
  • Decision:
    • Add a revaer-data job-runner test that validates multi-window rollup boundaries:
      • includes events within 24h and 7d windows;
      • excludes events older than each target window;
      • verifies derived success and acquisition metrics for both 24h and 7d windows.
    • Mark the Phase 11 checklist item complete.
    • Alternatives considered:
      • rely only on SQL inspection: rejected because boundary mistakes are subtle and regress easily.
  • Consequences:
    • Positive outcomes:
      • rollup window boundaries now have executable regression coverage for non-1h windows;
      • retention/rollup job-runner coverage aligns with checklist intent.
    • Risks or trade-offs:
      • fixture timestamps are relative to test clock; if ERD windows change, test expectations must be updated.
  • Follow-up:
    • Implementation tasks:
      • Continue with the next unchecked checklist item.
    • Review checkpoints:
      • just ci
      • just ui-e2e
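
The boundary semantics under test come down to a half-open window check. A hypothetical sketch, assuming windows are anchored at the job's notion of "now" and include events newer than `now - window` up to and including `now`:

```rust
/// An event counts toward a rollup window only if it falls inside it.
fn in_window(event_at: i64, now: i64, window_seconds: i64) -> bool {
    event_at > now - window_seconds && event_at <= now
}

fn main() {
    let now = 1_000_000;
    let day = 86_400;
    assert!(in_window(now - day + 1, now, day));         // just inside 24h
    assert!(!in_window(now - day - 1, now, day));        // just outside 24h
    assert!(in_window(now - 7 * day + 1, now, 7 * day)); // just inside 7d
    assert!(!in_window(now - 7 * day - 1, now, 7 * day)); // excluded from 7d
}
```

Boundary mistakes here are exactly the "subtle and regress easily" class the ADR names, which is why the test pins both the inside and outside edge of each window.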

208 Retention and derived refresh strategy coverage

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md requires retention windows and derived refresh strategy to be enforced via scheduled jobs.
    • Coverage existed across job tests but the checklist item remained open.
  • Decision:
    • Close the checklist item after mapping and validating existing executable coverage:
      • retention purge windows and table cleanup:
        • job_run_retention_purge_applies_table_windows
      • derived cadence and schedule correctness:
        • job_schedule_cadence_matches_erd_refresh_timing
      • rollup window behavior and boundary semantics:
        • job_run_reputation_rollup_skips_insufficient_samples
        • job_run_reputation_rollup_writes_rates_for_eligible_samples
        • job_run_reputation_rollup_respects_window_boundaries
      • derived refresh jobs:
        • job_run_base_score_refresh_recent_uses_durable_source_activity
        • job_run_canonical_backfill_best_source_recomputes_recent_durable_sources
  • Consequences:
    • Positive outcomes:
      • checklist status now matches implemented, tested ERD behavior.
    • Risks or trade-offs:
      • cadence/window rule changes require test expectation updates in lockstep.
  • Follow-up:
    • Implementation tasks:
      • Continue with the next unchecked behavioral-rule item.
    • Review checkpoints:
      • just ci
      • just ui-e2e

209 Policy rule disable/enable and reorder validation

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md requires policy rule mutability to flow through explicit enable/disable and ordering semantics rather than ad-hoc field mutation.
    • Existing tests covered policy rule creation and policy-set reorder validation, but did not directly validate rule disable/enable state transitions and empty reorder rejection.
  • Decision:
    • Add data-layer tests in revaer-data policy procedures to validate:
      • policy_rule_disable and policy_rule_enable toggle policy_rule.is_disabled deterministically;
      • policy_rule_reorder rejects empty rule lists with policy_rule_ids_empty.
    • Mark the behavioral checklist item complete.
  • Consequences:
    • Positive outcomes:
      • policy rule control-path semantics now have explicit regression coverage;
      • checklist status aligns with executable behavior.
    • Risks or trade-offs:
      • if policy-rule update surfaces are introduced later, additional tests will be required for new mutation semantics.
  • Follow-up:
    • Implementation tasks:
      • Continue with the next unchecked behavioral-rule item.
    • Review checkpoints:
      • just ci
      • just ui-e2e

Search-result observation rules validation

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md requires observation processing to enforce latest-observation precedence, monotonic durable last_seen_* fields, and duplicate attribute rejection.
    • search_result_ingest tests covered only missing-request failures and did not directly verify these rule-level invariants.
  • Decision:
    • Add revaer-data tests for observation behavior:
      • search_result_ingest_rejects_duplicate_attr_keys
      • search_result_ingest_keeps_last_seen_monotonic
    • Use stored-procedure path end-to-end (search_request_create, search_result_ingest) with deterministic fixture setup.
    • Mark the observation-rule checklist item complete.
  • Consequences:
    • Positive outcomes:
      • observation invariants are now executable and regression-safe at the data boundary;
      • duplicate attr-key rejection is explicitly covered.
    • Risks or trade-offs:
      • setup fixtures are more involved because ingest requires search+indexer run scope.
  • Follow-up:
    • Implementation tasks:
      • Continue with remaining Phase 7 behavioral items.
    • Review checkpoints:
      • just ci
      • just ui-e2e
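
The monotonic last_seen rule these tests pin down can be sketched in a few lines. This is an illustrative model, not the stored procedure: `merge_last_seen` is a hypothetical name, and timestamps are modeled as epoch seconds on the test clock.

```rust
/// Illustrative model of the monotonic last_seen rule: an incoming
/// observation may only move the durable timestamp forward, never back.
pub fn merge_last_seen(durable: Option<u64>, observed: u64) -> u64 {
    match durable {
        // A stale (out-of-order) observation leaves the durable value intact.
        Some(existing) if existing >= observed => existing,
        // A newer observation, or the first one, advances the field.
        _ => observed,
    }
}
```

The same shape applies to every durable last_seen_* field: latest-observation precedence for content, monotonic non-decrease for timestamps.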

Category mapping and domain filter validation

  • Status: Accepted
  • Date: 2026-02-21
  • Context:
    • ERD_INDEXERS.md requires Torznab category filters to map into effective media-domain constraints.
    • ERD_INDEXERS.md also requires profile domain allowlists to reject category filters that collapse to an empty effective category set.
    • Existing search_request_create tests validated basic category input checks but did not verify mapped-domain persistence or allowlist conflict rejection.
  • Decision:
    • Added search_request_create_maps_torznab_categories_to_effective_media_domain to assert:
      • Torznab category 2000 maps to effective_media_domain_id = movies.
      • requested domain remains unset when the caller does not provide requested_media_domain_key.
      • the effective category join row is persisted for 2000.
    • Added search_request_create_rejects_category_filter_outside_profile_allowlist to assert:
      • a tv-only profile allowlist rejects a movies category filter with invalid_category_filter.
    • Marked checklist item ERD_INDEXERS_CHECKLIST.md Phase 7 category/domain rule as complete.
  • Consequences:
    • Positive outcomes:
      • category mapping behavior is now regression-tested at stored-procedure boundaries.
      • domain filtering against profile allowlists is validated with explicit failure semantics.
    • Risks or trade-offs:
      • test setup now includes profile provisioning and allowlist mutation, adding small runtime overhead.
  • Follow-up:
    • Implementation tasks:
      • Continue with remaining unchecked Phase 6/7/8/10 items.
    • Review checkpoints:
      • just ci
      • just ui-e2e

Indexer Observability Counters for Torznab, Search, and Import Jobs

  • Status: Accepted
  • Date: 2026-02-25
  • Context:
    • Motivation:
      • ERD indexer checklist Phase 10 still required explicit metrics for invalid Torznab requests, search throughput, and job outcomes.
      • Existing telemetry covered HTTP and generic guardrails but did not provide indexer-specific counters for those acceptance points.
    • Constraints:
      • Keep error messages constant and avoid adding fallback/dead code.
      • Reuse existing telemetry infrastructure and avoid new dependencies.
      • Preserve stored-procedure-only runtime DB access boundaries.
  • Decision:
    • Added new Prometheus counters in revaer-telemetry:
      • indexer_torznab_invalid_requests_total{reason}
      • indexer_search_requests_total{operation,outcome}
      • indexer_job_outcomes_total{operation,outcome}
    • Wired increments in API handlers:
      • Torznab API/download handlers increment invalid-request reasons for missing API key, unauthorized access, missing instances/sources, and unsupported query type.
      • Search request/page handlers increment throughput counters on success/error for create, cancel, page list, and page fetch operations.
      • Import job handlers increment job outcome counters on success/error for create, run, status, and results operations.
    • Design notes:
      • Metrics were recorded at request boundaries to avoid duplicate increments in deeper call chains.
      • Label values are constrained to stable constant strings to keep metric cardinality bounded.
    • Alternatives considered:
      • Adding counters in app/data layers instead of HTTP handlers was rejected because request intent and invalid Torznab semantics are best known at the API boundary.
      • Reusing generic events_emitted_total was rejected because it cannot express required ERD dimensions without overloading labels.
  • Consequences:
    • Positive outcomes:
      • ERD observability coverage improved with explicit counters for previously untracked indexer flows.
      • Metrics remain low-cardinality and aligned with existing Prometheus collection.
    • Risks and trade-offs:
      • Handler-level instrumentation can miss non-HTTP flows by design; background internal jobs still require separate instrumentation where applicable.
      • Reason labels must remain curated to avoid accidental cardinality growth.
  • Follow-up:
    • Test coverage summary:
      • Updated telemetry unit tests to assert new metrics are registered/rendered.
      • Verified API/indexer, workspace, and E2E suites pass via just ci and just ui-e2e.
    • Observability updates:
      • New counters are exposed via /metrics immediately with no schema migrations.
    • Risk and rollback plan:
      • Safe rollback by removing handler increments and metric registration if operational overhead appears; no data migration impact.
    • Dependency rationale:
      • No new dependencies added; reused existing prometheus primitives in revaer-telemetry.
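
The cardinality-bounding design note above can be illustrated with a small std-only sketch: label values are restricted to a curated constant set, and anything outside it collapses into a fixed bucket rather than minting a new label. The reason strings and the `ReasonCounter` type are hypothetical stand-ins for the actual Prometheus counters.

```rust
use std::collections::HashMap;

/// Curated reason labels; the list here is illustrative, not the real set.
const ALLOWED_REASONS: &[&str] =
    &["missing_api_key", "unauthorized", "missing_instance", "unsupported_query"];

#[derive(Default)]
pub struct ReasonCounter {
    counts: HashMap<&'static str, u64>,
}

impl ReasonCounter {
    /// Increment the counter for a reason, folding unknown values into a
    /// fixed "other" bucket so metric cardinality stays bounded.
    pub fn inc(&mut self, reason: &str) {
        let label = ALLOWED_REASONS
            .iter()
            .copied()
            .find(|candidate| *candidate == reason)
            .unwrap_or("other");
        *self.counts.entry(label).or_insert(0) += 1;
    }

    pub fn get(&self, label: &'static str) -> u64 {
        self.counts.get(label).copied().unwrap_or(0)
    }
}
```

In the real counters the same effect is achieved by only ever passing constant strings as label values at the call sites.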

Indexer Request Span Coverage for Torznab, Search, and Import Jobs

  • Status: Accepted
  • Date: 2026-02-25
  • Context:
    • Motivation:
      • ERD_INDEXERS_CHECKLIST.md still had the observability tracing-span item unchecked for indexer/Torznab/search/job flows.
      • Existing service-level spans covered many indexer operations, but API request-boundary spans for Torznab and indexer search/import endpoints were not explicit and consistent.
    • Constraints:
      • Avoid logging secrets from request payloads and query parameters.
      • Keep constant error messaging and existing API behavior unchanged.
      • Preserve dependency minimalism (no new crates).
  • Decision:
    • Added explicit #[tracing::instrument] spans on API request handlers for:
      • Torznab request and download endpoints.
      • Indexer search request create/cancel and page list/fetch endpoints.
      • Import-job create/run/status/results endpoints.
    • Used skip(...) on payload/query-bearing args to avoid accidental secret logging.
    • Added stable span names and key IDs in structured fields (public UUIDs/page number).
    • Marked Phase 10 tracing-span checklist item complete.
    • Alternatives considered:
      • Relying only on middleware http.request span was insufficient for indexer-domain operation-level observability.
      • Adding manual info_span! blocks in every handler was more verbose than #[instrument] and easier to regress.
  • Consequences:
    • Positive outcomes:
      • Request-level traces now include deterministic span names for Torznab/search/job operations.
      • Improves correlation across API middleware spans and indexer service spans.
    • Risks and trade-offs:
      • Span naming and fields must remain stable to avoid dashboard churn.
      • Future handlers must follow skip(request/query) for secret-bearing data.
  • Follow-up:
    • Test coverage summary:
      • Existing handler tests validated behavior compatibility after instrumentation.
      • Full gate set (just ci, just ui-e2e) was rerun and passed.
    • Observability updates:
      • New trace spans available immediately; no metrics/schema migration required.
    • Risk and rollback plan:
      • Rollback by removing instrumentation attributes if trace overhead becomes an issue.
    • Dependency rationale:
      • No new dependencies; used existing tracing crate already in workspace.

Torznab Parity Integration Tests for Endpoint Format and Auth Semantics

  • Status: Accepted
  • Date: 2026-02-25
  • Context:
    • Motivation:
      • ERD checklist gaps remained for Torznab endpoint format/auth/invalid-request behavior and Torznab parity integration coverage.
      • Existing API E2E tests only exercised not-found paths for Torznab endpoints with random IDs.
    • Constraints:
      • Keep tests deterministic and use existing API setup fixtures.
      • Avoid introducing new dependencies or non-just workflows.
      • Ensure API keys are not logged in traces or test output.
  • Decision:
    • Extended tests/specs/api/indexers-torznab-instances.spec.ts to create a real search profile and Torznab instance, then validate:
      • Missing apikey on /torznab/{id}/api returns 401.
      • Invalid apikey on /torznab/{id}/api returns 401.
      • Valid apikey + t=caps returns 200 with XML <caps> payload.
      • Unsupported query type with valid key returns deterministic empty RSS response.
      • Download endpoint enforces missing/invalid key with 401 and missing source with 404 for a valid instance.
    • Updated checklist entries to mark:
      • Integration tests for REST/Torznab parity.
      • Torznab endpoint format/auth/invalid request handling criterion.
    • Alternatives considered:
      • Unit-only handler tests were rejected because parity expectations need full HTTP behavior and fixture-auth integration.
      • New dedicated Torznab spec file was rejected to avoid duplication while current spec already owns endpoint lifecycle coverage.
  • Consequences:
    • Positive outcomes:
      • Torznab public endpoints now have end-to-end coverage against ERD-facing semantics.
      • Reduces regression risk for auth and XML response shape behavior.
    • Risks and trade-offs:
      • Test runtime increases slightly due to additional create/check steps.
      • Full Torznab query semantics parity (tvsearch/movie/search behavior depth) remains a separate follow-up.
  • Follow-up:
    • Test coverage summary:
      • API E2E Torznab tests now cover valid + invalid key paths, caps XML response, unsupported query fallback, and download auth/status behavior.
      • Full gate set rerun through just ci and just ui-e2e.
    • Observability updates:
      • No direct telemetry schema changes; behavior uses existing counters/spans.
    • Risk and rollback plan:
      • Rollback by reverting spec updates and checklist updates if fixture assumptions change.
    • Dependency rationale:
      • No new dependencies added.

Torznab Search Query Mapping and Pagination

  • Status: Accepted
  • Date: 2026-02-25
  • Context:
    • Torznab /api had auth and caps coverage, but search requests returned only empty feeds and did not map query semantics from ERD_INDEXERS.md.
    • ERD acceptance requires handler-level validation/short-circuiting for invalid Torznab combinations, request mapping to search requests, and append-order pagination behavior.
    • Existing search orchestration already exposed search_request_create, search_page_list, and search_page_fetch APIs, so the missing layer was Torznab query translation and XML rendering.
  • Decision:
    • Implemented Torznab request parsing and mapping in crates/revaer-api/src/http/handlers/torznab/api.rs for:
      • t=search|tvsearch|movie|moviesearch mode handling.
      • q, imdbid, tmdbid, tvdbid, season, ep, cat, offset, and limit parsing.
      • invalid-combination short-circuiting to empty XML with invalid-request metrics.
      • request creation via search_request_create and append-order flattening via search_page_list + search_page_fetch with offset/limit slicing.
    • Expanded XML rendering in crates/revaer-api/src/http/handlers/torznab/xml.rs to produce itemized RSS output with Torznab attrs and response metadata (offset, total).
    • Added API E2E coverage in tests/specs/api/indexers-torznab-instances.spec.ts for generic search paging offset behavior, invalid TV combo handling, and invalid category short-circuit responses.
    • Added focused unit tests for Torznab parse and XML helpers.
    • Dependency rationale: no new dependencies were added; implementation reused existing crates (chrono, uuid) already in the dependency graph.
    • Alternatives considered:
      • Add a dedicated Torznab orchestration service with its own paging/storage tables now: rejected for this step to keep scope aligned with existing search-request/page model.
      • Keep Torznab handler as caps/auth only and defer search mapping entirely: rejected because it blocks ERD parity acceptance items.
  • Consequences:
    • Positive outcomes:
      • Torznab /api now maps request semantics into existing search orchestration and returns structured RSS responses with deterministic offset slicing.
      • Invalid Torznab query combinations are handled with empty responses instead of API errors.
      • Test coverage now exercises Torznab query-path behavior in both unit and API E2E suites.
    • Risks or trade-offs:
      • Category output currently follows available page item category data; deeper tracker-category remapping parity remains coupled to ingestion metadata completeness.
      • Pagination works on currently materialized pages; runtime behavior still depends on upstream run scheduling/ingestion progress.
  • Follow-up:
    • Verify category-to-domain fallback behavior (especially explicit 8000 handling) against broader fixture matrices.
    • Extend E2E coverage once richer seeded search-page fixtures are available for multi-page and mixed-category result sets.
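
The mapping and short-circuit behavior described above can be sketched as a small parser. Parameter names (`t`, `q`, `season`, `ep`, `offset`, `limit`) follow the Torznab convention; the `SearchMode` and `TorznabQuery` types are illustrative, not the crate's actual API, and only one invalid combination (episode without season) is shown.

```rust
#[derive(Debug, PartialEq)]
pub enum SearchMode { Search, TvSearch, Movie }

#[derive(Debug, PartialEq)]
pub struct TorznabQuery {
    pub mode: SearchMode,
    pub text: Option<String>,
    pub season: Option<u32>,
    pub episode: Option<u32>,
    pub offset: usize,
    pub limit: usize,
}

/// Parse the Torznab `t` mode and reject invalid combinations; a `None`
/// result corresponds to the deterministic empty-RSS short circuit.
pub fn parse_query(
    t: &str,
    q: Option<&str>,
    season: Option<u32>,
    ep: Option<u32>,
    offset: usize,
    limit: usize,
) -> Option<TorznabQuery> {
    let mode = match t {
        "search" => SearchMode::Search,
        "tvsearch" => SearchMode::TvSearch,
        "movie" | "moviesearch" => SearchMode::Movie,
        // Unsupported query types fall through to an empty feed upstream.
        _ => return None,
    };
    if ep.is_some() && season.is_none() {
        // Invalid TV combination: short-circuit instead of erroring.
        return None;
    }
    Some(TorznabQuery { mode, text: q.map(str::to_owned), season, episode: ep, offset, limit })
}
```

A successful parse then feeds search_request_create, with offset/limit applied when flattening pages from search_page_list and search_page_fetch.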

Torznab Download Redirect and Acquisition Attempt Coverage

  • Status: Accepted
  • Date: 2026-02-25
  • Context:
    • torznab_download_prepare already implemented ERD-compliant redirect selection and acquisition-attempt writes, but coverage only validated missing-instance failures.
    • ERD acceptance requires successful redirect behavior (magnet preferred over download_url) and guaranteed acquisition attempt persistence, including explicit no-target failures.
  • Decision:
    • Added stored-procedure integration coverage in crates/revaer-data/src/indexers/torznab.rs to validate:
      • magnet URI is preferred when both magnet_uri and download_url exist.
      • download_url is used when magnet is absent.
      • missing redirect target returns NULL and writes a failed acquisition attempt with failure_class=client_error and failure_detail=no_download_target.
    • Test fixtures create Torznab scope and canonical sources through existing stored-procedure wrappers (search_profile_create, torznab_instance_create, search_request_create, search_result_ingest) with minimal setup SQL limited to required indexer test rows.
    • Dependency rationale: no new dependencies were added.
    • Alternatives considered:
      • API-only E2E validation for successful redirects: rejected for this increment because the existing E2E fixture layer does not expose deterministic source seeding for positive redirect paths.
      • Leave coverage at handler-level negative paths only: rejected because it would not prove acquisition-attempt semantics required by ERD.
  • Consequences:
    • Positive outcomes:
      • Redirect precedence and acquisition-attempt side effects are now explicitly asserted in automated tests.
      • ERD download acceptance behavior is now validated at the stored-procedure boundary used by runtime paths.
    • Risks or trade-offs:
      • Positive redirect behavior is currently validated at the data/procedure layer, not yet end-to-end via Torznab HTTP in Playwright.
  • Follow-up:
    • Add API E2E positive redirect coverage once deterministic source seeding is available through test helpers or dedicated setup endpoints.
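
The redirect precedence asserted by these tests reduces to a three-way choice; the sketch below models it with illustrative types (the real behavior lives in torznab_download_prepare, which also persists the acquisition attempt).

```rust
#[derive(Debug, PartialEq)]
pub enum Redirect {
    Target(String),
    /// Corresponds to the failed acquisition attempt written with
    /// failure_class=client_error and failure_detail=no_download_target.
    NoDownloadTarget,
}

pub fn choose_redirect(magnet_uri: Option<&str>, download_url: Option<&str>) -> Redirect {
    match (magnet_uri, download_url) {
        // The magnet URI is preferred whenever it is present.
        (Some(magnet), _) => Redirect::Target(magnet.to_owned()),
        // Fall back to the HTTP download URL when there is no magnet.
        (None, Some(url)) => Redirect::Target(url.to_owned()),
        // No target at all: deterministic failure, still audited.
        (None, None) => Redirect::NoDownloadTarget,
    }
}
```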

Torznab Feed Category Emission and Test Fixture Hardening

  • Status: Accepted
  • Date: 2026-02-25
  • Context:
    • Torznab search response items emitted only one category value (tracker_category) and dropped tracker_subcategory, which reduced category fidelity for consumers that expect parent + subcategory IDs.
    • Torznab download stored-proc tests depended on torznab_instance_create, which can fail in test environments without gen_salt support (pgcrypto) even though download behavior itself does not require API-key generation.
  • Decision:
    • Updated Torznab feed item mapping in crates/revaer-api/src/http/handlers/torznab/api.rs to emit:
      • tracker_category when present,
      • tracker_subcategory when positive and distinct,
      • fallback to 8000 (Other) when no category metadata exists.
    • Added unit coverage for category emission behavior:
      • category + subcategory inclusion,
      • 8000 fallback.
    • Hardened crates/revaer-data/src/indexers/torznab.rs test fixture setup by inserting torznab_instance rows directly for download-proc tests, avoiding dependence on API-key hashing internals unrelated to the redirect/acquisition semantics under test.
    • Dependency rationale: no new dependencies were added.
    • Alternatives considered:
      • Keep single-category emission and defer multi-cat output: rejected because it preserves avoidable Torznab parity drift.
      • Keep proc-based fixture creation and require extension setup in tests: rejected because it couples download tests to unrelated crypto-extension availability.
  • Consequences:
    • Positive outcomes:
      • Torznab item category payloads are closer to expected multi-category semantics.
      • Download-proc tests are stable across environments where gen_salt may be unavailable.
    • Risks or trade-offs:
      • This step improves output fidelity but does not fully close all ERD category-domain acceptance checks by itself.
  • Follow-up:
    • Complete stored-proc acceptance coverage for cat=8000 catch-all and explicit multi-domain category filtering behavior.
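
The emission rules above (parent category when present, positive and distinct subcategory, 8000 fallback) can be captured in a single function. The function name is illustrative; the real mapping lives in the Torznab feed-item code in crates/revaer-api.

```rust
/// Compute the Torznab category IDs emitted for a feed item:
/// tracker_category when present, tracker_subcategory when positive and
/// distinct, and 8000 (Other) when no category metadata exists.
pub fn item_categories(tracker_category: Option<i32>, tracker_subcategory: Option<i32>) -> Vec<i32> {
    let mut cats = Vec::new();
    if let Some(cat) = tracker_category.filter(|c| *c > 0) {
        cats.push(cat);
        if let Some(sub) = tracker_subcategory.filter(|s| *s > 0 && *s != cat) {
            cats.push(sub);
        }
    }
    if cats.is_empty() {
        cats.push(8000); // Torznab "Other" catch-all
    }
    cats
}
```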

Torznab multi-category domain and Other(8000) coverage

  • Status: Accepted
  • Date: 2026-02-25
  • Context:
    • Motivation: ERD_INDEXERS acceptance item 580 required explicit validation that Torznab category behavior matches ERD rules for category-to-domain mapping, especially multi-category requests and the Other (8000) catch-all behavior.
    • Constraints:
      • Runtime DB behavior must be validated through stored procedures.
      • Existing Torznab feed mapping already emits category IDs from tracker mapping, but search request creation also needs direct tests for effective domain derivation when torznab_cat_ids are provided.
      • Changes must pass just ci and just ui-e2e.
    • Dependency rationale:
      • No new dependencies were added.
      • Alternative considered: add app-layer mocks around domain mapping. Rejected because ERD behavior is owned by stored procedures and must be tested at that boundary.
  • Decision:
    • Added stored-proc tests in crates/revaer-data/src/indexers/search_requests.rs:
      • search_request_create_torznab_other_category_keeps_unrestricted_domain
      • search_request_create_torznab_multi_category_yields_multi_domain_scope
    • The tests verify:
      • cat=8000 keeps effective_media_domain_id unset (NULL), preserving catch-all behavior.
      • Multi-domain category input (2000 + 5000) keeps effective_media_domain_id unset and preserves effective category rows, matching ERD multi-domain semantics.
    • Updated ERD_INDEXERS_CHECKLIST.md item 580 to complete.
  • Consequences:
    • Positive outcomes:
      • Stored-proc behavior for Torznab category domain narrowing now has explicit regression coverage for the highest-risk acceptance paths.
      • Checklist state is synchronized with tested behavior.
    • Risks or trade-offs:
      • Coverage focuses on request creation semantics; future behavior changes in search execution filtering still require dedicated acceptance tests.
  • Follow-up:
    • Test coverage summary:
      • just ci passes, including new stored-proc tests.
      • just ui-e2e passes.
    • Observability updates:
      • No telemetry schema or span changes were needed in this step.
    • Risk and rollback:
      • Rollback path is low risk: revert the added tests and checklist/ADR updates if needed.
    • Review checkpoints:
      • Continue Phase 12 acceptance items after 580, starting with rate-limit default/enforcement behavior.
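
The domain-narrowing rule these tests assert can be sketched as follows. The category ranges and domain keys are illustrative, and only the two tested behaviors are modeled: a single mapped domain narrows the request, while multi-domain sets and the 8000 catch-all leave effective_media_domain unset.

```rust
/// Map an illustrative Torznab category ID to a media-domain key; 8000
/// (Other) and unknown ranges map to no specific domain.
fn domain_for(cat: i32) -> Option<&'static str> {
    match cat {
        2000..=2999 => Some("movies"),
        5000..=5999 => Some("tv"),
        _ => None,
    }
}

/// Returns Some(domain) only when every mapped category agrees on one
/// domain; otherwise the request stays unrestricted (None).
pub fn effective_domain(cats: &[i32]) -> Option<&'static str> {
    let mut domains: Vec<_> = cats.iter().filter_map(|c| domain_for(*c)).collect();
    domains.sort_unstable();
    domains.dedup();
    match domains.as_slice() {
        [single] => Some(*single),
        _ => None, // multi-domain or catch-all input stays unrestricted
    }
}
```

How mixed specific-plus-catch-all inputs behave is not asserted by these tests, so the sketch deliberately narrows only when all mapped categories agree.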

Rate-limit defaults and scope enforcement coverage

  • Status: Accepted
  • Date: 2026-02-25
  • Context:
    • Motivation: ERD acceptance item 583 requires proof that default rate-limit policies exist and rate limiting is enforced for both indexer and routing scopes.
    • Constraints:
      • Validation must happen through stored-proc level behavior in revaer-data.
      • No new dependencies; keep test coverage deterministic and local.
    • Dependency rationale:
      • No dependency changes.
      • Alternative considered: validate defaults only through migration SQL review. Rejected because acceptance requires executable verification.
  • Decision:
    • Added two stored-proc tests in crates/revaer-data/src/indexers/rate_limits.rs:
      • rate_limit_seed_defaults_match_expected_system_policies
      • rate_limit_try_consume_enforces_bucket_capacity_for_routing_scope
    • Existing indexer-scope enforcement test remains in place, so both required scopes are now explicitly covered.
    • Updated ERD_INDEXERS_CHECKLIST.md item 583 to complete.
  • Consequences:
    • Positive outcomes:
      • Regression-safe verification that default_indexer and default_routing seed policies match expected limits.
      • Explicit runtime enforcement coverage for both indexer_instance and routing_policy scope types.
    • Risks or trade-offs:
      • Tests validate token-bucket behavior and seed invariants, but not every higher-level call path that consumes these policies.
  • Follow-up:
    • Test coverage summary:
      • New tests run under just test/just ci and exercise stored procedures directly.
    • Observability updates:
      • No telemetry changes needed.
    • Risk and rollback:
      • Rollback is limited to test/docs/checklist files and can be reverted safely if requirements change.
    • Review checkpoints:
      • Continue with next unchecked ERD acceptance items (retry behavior, Cloudflare transitions, streaming behavior).
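
The enforcement that rate_limit_try_consume provides is classic token-bucket behavior; a minimal in-memory sketch follows. Field names and the deterministic test clock are illustrative, since the real enforcement lives in a stored procedure shared by both scope types.

```rust
/// Minimal token bucket: fixed capacity, steady refill, and consumption
/// that fails once capacity is exhausted (the caller is rate limited).
pub struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill_at: f64, // seconds on a monotonic test clock
}

impl TokenBucket {
    pub fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last_refill_at: 0.0 }
    }

    /// Try to consume one token at time `now`.
    pub fn try_consume(&mut self, now: f64) -> bool {
        // Refill based on elapsed time, clamped to the bucket capacity.
        let elapsed = (now - self.last_refill_at).max(0.0);
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill_at = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

Both the indexer_instance and routing_policy scope types share this mechanic; only the seeded capacity and refill parameters differ.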

Search-run retry behavior coverage for rate-limited and transient errors

  • Status: Accepted
  • Date: 2026-02-25
  • Context:
    • Motivation: ERD acceptance item 584 requires explicit verification that search-run retry behavior matches ERD rules for both rate-limited and transient failures.
    • Constraints:
      • Validation must happen at stored-proc behavior boundaries.
      • Keep changes test-focused, with no dependency additions.
    • Dependency rationale:
      • No new dependencies were added.
      • Alternative considered: infer behavior from migration SQL only. Rejected because acceptance requires executable regression tests.
  • Decision:
    • Added stored-proc tests in crates/revaer-data/src/indexers/search_requests.rs:
      • search_indexer_run_mark_failed_rate_limited_uses_retry_and_scope
      • search_indexer_run_mark_failed_transient_retries_before_limit
      • search_indexer_run_mark_failed_transient_reaches_retry_limit
    • Added local test helpers to create request/instance run scopes and assert run state transitions.
    • Updated ERD_INDEXERS_CHECKLIST.md item 584 to complete.
  • Consequences:
    • Positive outcomes:
      • Rate-limited retry semantics are now explicitly validated:
        • queued retry state
        • incremented attempt_count and rate_limited_attempt_count
        • required last_rate_limit_scope
      • Transient failure semantics are explicitly validated:
        • queued retry before max retries
        • terminal failed state when retry limit is reached
    • Risks or trade-offs:
      • Tests focus on stored-proc state transitions and do not duplicate higher-level orchestrator behavior already covered elsewhere.
  • Follow-up:
    • Test coverage summary:
      • Included in normal just ci and just ui-e2e quality gates.
    • Observability updates:
      • No observability schema changes required.
    • Risk and rollback:
      • Rollback is low risk and limited to test/docs/checklist updates.
    • Review checkpoints:
      • Continue with remaining unchecked acceptance items (Cloudflare transitions, streaming behavior, explainability).
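
The state transitions these tests validate can be modeled as a small function. Names mirror the test vocabulary rather than the actual schema, and the sketch assumes rate-limited attempts count toward the same retry limit as transient ones, which the tests do not directly assert.

```rust
#[derive(Debug, PartialEq)]
pub enum RunState { Queued, Failed }

#[derive(Debug, PartialEq)]
pub enum FailureKind {
    RateLimited { scope: &'static str },
    Transient,
}

pub struct RunCounters {
    pub attempt_count: u32,
    pub rate_limited_attempt_count: u32,
    pub last_rate_limit_scope: Option<&'static str>,
}

/// Apply a failure to the run: re-queue until the retry limit is reached,
/// then fail terminally. Rate-limited failures also record their scope.
pub fn mark_failed(kind: FailureKind, counters: &mut RunCounters, max_retries: u32) -> RunState {
    counters.attempt_count += 1;
    if let FailureKind::RateLimited { scope } = kind {
        counters.rate_limited_attempt_count += 1;
        counters.last_rate_limit_scope = Some(scope);
    }
    if counters.attempt_count > max_retries {
        RunState::Failed // retry limit reached: terminal failure
    } else {
        RunState::Queued // re-queued for another attempt
    }
}
```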

RSS Cloudflare state transition alignment with ERD

  • Status: Accepted
  • Date: 2026-02-25
  • Context:
    • Motivation: ERD acceptance item 585 requires Cloudflare detection and state transitions to follow ERD semantics, including FlareSolverr preference behavior.
    • Constraints:
      • Runtime DB behavior must be enforced in stored procedures only.
      • Changes must preserve existing retry/backoff behavior and quality-gate compliance.
    • Dependency rationale:
      • No new dependencies were added.
      • Alternative considered: leave existing retryable CF behavior unchanged and only add tests. Rejected because existing logic skipped required clear/solved -> challenged transitions.
  • Decision:
    • Added migration 0094_rss_poll_cf_state_transitions.sql to update rss_poll_apply_v1:
      • Apply CF challenge transitions for all cf_challenge failures (not only non-retryable paths).
      • Transition challenged/cooldown -> solved on successful FlareSolverr poll (via_mitigation='flaresolverr', parse success).
      • Clear cooldown timestamp on challenged transition and reset solved-state counters/backoff fields.
    • Updated stored-proc tests in crates/revaer-data/src/indexers/executor.rs:
      • rss_poll_apply_cf_challenge_retryable_sets_challenged_state
      • rss_poll_apply_flaresolverr_success_promotes_challenged_to_solved
  • Consequences:
    • Positive outcomes:
      • CF state transitions now match ERD transition requirements for challenge detection and FlareSolverr success paths.
      • Regression coverage now verifies both retryable CF challenge behavior and solved-state promotion.
    • Risks or trade-offs:
      • Transition behavior remains bounded to RSS polling procedure scope; broader scheduler routing policy decisions continue to rely on existing routing inputs.
  • Follow-up:
    • Test coverage summary:
      • Covered by just ci and just ui-e2e.
    • Observability updates:
      • No schema or telemetry surface changes required.
    • Risk and rollback:
      • Rollback is isolated to one migration and related tests/checklist/docs updates.
    • Review checkpoints:
      • Continue with remaining unchecked ERD acceptance items (586, 587, 589+).
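
The transitions encoded by migration 0094 can be summarized as a small state machine. State and event names are paraphrased from the ADR; side effects such as clearing the cooldown timestamp and resetting backoff counters are omitted.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum CfState { Clear, Challenged, Cooldown, Solved }

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum PollEvent {
    /// Any cf_challenge failure, retryable or not.
    CfChallenge,
    /// Successful poll via_mitigation='flaresolverr' with parse success.
    FlareSolverrSuccess,
    /// Successful poll without mitigation.
    PlainSuccess,
}

pub fn next_state(state: CfState, event: PollEvent) -> CfState {
    match (state, event) {
        // Challenge detection applies from every state.
        (_, PollEvent::CfChallenge) => CfState::Challenged,
        // FlareSolverr success promotes challenged/cooldown feeds to solved.
        (CfState::Challenged | CfState::Cooldown, PollEvent::FlareSolverrSuccess) => CfState::Solved,
        // Other successes leave the current state unchanged in this sketch.
        (current, _) => current,
    }
}
```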

Search streaming pages terminal sealing and append-only ordering

  • Status: Accepted
  • Date: 2026-02-26
  • Context:
    • Motivation: ERD acceptance item 586 requires streaming search behavior with early page emission, append-only ordering, and deterministic page sealing at terminal request state.
    • Constraints:
      • Runtime DB behavior must remain stored-procedure driven.
      • Search-page ordering must not reorder based on later score updates.
    • Dependency rationale:
      • No new dependencies were added.
      • Alternative considered: rely on API-layer finalization when runs complete. Rejected because sealing and terminal status must stay deterministic inside database state transitions.
  • Decision:
    • Added migration 0095_search_request_terminal_seal.sql:
      • Creates trigger function search_request_finalize_on_runs_terminal_v1.
      • On indexer run terminal updates, finalizes search_request when no queued/running runs remain.
      • Seals any unsealed search_page rows at request finalization time.
    • Extended search_result_ingest integration coverage in crates/revaer-data/src/indexers/search_results.rs:
      • Added deterministic append-order streaming test across two pages with a late high-seeder result.
      • Added request-finished + page-sealed assertions after run completion.
  • Consequences:
    • Positive outcomes:
      • Search requests now reach terminal status and sealed pages deterministically when runs complete.
      • Streaming append-only behavior is explicitly regression-tested.
    • Risks or trade-offs:
      • Trigger introduces additional write work during run terminal updates; scope is bounded to the affected request.
  • Follow-up:
    • Test coverage summary:
      • Verified with just ci and just ui-e2e.
    • Observability updates:
      • No new telemetry fields were introduced.
    • Risk and rollback:
      • Rollback is isolated to migration 0095 and associated test/checklist/docs changes.
    • Review checkpoints:
      • Continue with next unchecked ERD acceptance item after 586.
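
The finalization condition installed by migration 0095 reduces to a single predicate over the request's runs; the sketch below uses paraphrased status names (the trigger additionally seals any unsealed search_page rows at that moment).

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum RunStatus { Queued, Running, Succeeded, Failed }

/// A search request finalizes exactly when no run remains queued or
/// running; an empty run set is vacuously terminal in this sketch.
pub fn should_finalize(runs: &[RunStatus]) -> bool {
    runs.iter().all(|run| !matches!(run, RunStatus::Queued | RunStatus::Running))
}
```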

Search dropped-source audit persistence and paging exclusion

  • Status: Accepted
  • Date: 2026-02-26
  • Context:
    • Motivation: ERD acceptance item 589 requires hard-dropped sources to remain persisted for audit while being excluded from search paging.
    • Constraints:
      • Runtime behavior remains stored-procedure driven.
      • Validation must prove both audit persistence and page exclusion in a single ingest flow.
    • Dependency rationale:
      • No new dependencies were added.
      • Alternative considered: only assert search_page_item exclusion. Rejected because ERD also requires auditable persistence (search_filter_decision and dropped context scoring).
  • Decision:
    • Added integration test search_result_ingest_dropped_sources_are_persisted_but_excluded_from_pages in crates/revaer-data/src/indexers/search_results.rs.
    • Test setup creates a request-scope policy set with a hard drop title-regex rule, executes ingest, and asserts:
      • no search_page_item rows are produced for the request,
      • canonical_torrent_source_context_score.is_dropped is true,
      • search_filter_decision records drop_canonical with observation linkage and canonical/source ids.
  • Consequences:
    • Positive outcomes:
      • ERD dropped-source behavior is covered by a deterministic regression test.
      • Paging remains free of dropped sources while audit evidence is retained.
    • Risks or trade-offs:
      • Test depends on request-policy ingestion path and policy schema conventions.
  • Follow-up:
    • Test coverage summary:
      • Verified with just ci and just ui-e2e.
    • Observability updates:
      • No telemetry schema changes.
    • Risk and rollback:
      • Rollback is isolated to test/checklist/docs updates.
    • Review checkpoints:
      • Continue with next unchecked ERD acceptance item after 589.

224: Canonicalization conflict coverage

  • Status: Accepted
  • Date: 2026-03-01
  • Context:
    • ERD canonicalization rules require preserving durable source identity and logging conflicts when incoming hashes disagree.
    • Existing tests covered fallback identities and size rollups, but not explicit hash-conflict logging behavior.
  • Decision:
    • Add a stored-procedure test for search_result_ingest that ingests two rows for the same source GUID with conflicting hashes.
    • Verify durable hash immutability, source_metadata_conflict logging, and indexer_health_event emission.
    • Mark the canonicalization-rule checklist item complete after coverage is in place.
  • Consequences:
    • Positive outcomes:
      • Conflict-handling behavior is verified directly against stored procedures in CI.
      • Regressions that overwrite durable identities or skip conflict audit signals fail fast.
    • Risks or trade-offs:
      • Slight additional runtime in revaer-data tests.
  • Follow-up:
    • Extend conflict tests for tracker/external-id mismatches if rules change.

Motivation

  • Ensure ERD-required conflict handling is exercised, not just identity selection and size rollups.

Design notes

  • Added a targeted search_result_ingest test that reuses the same durable source and injects a conflicting infohash_v1.
  • Assertions cover three outputs: durable hash remains original, conflict row recorded with conflict_type=hash, and health event is emitted.
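The immutability rule those assertions pin down can be sketched as a pure function. This is a hedged illustration of the canonicalization semantics described above, not the stored procedure itself; the names `apply_incoming_hash` and `IngestEffect` are invented for the example.

```rust
// Sketch of the canonicalization rule under test: the durable hash is
// immutable, and a differing incoming hash produces an audited conflict
// record instead of an overwrite. Names here are illustrative only.
#[derive(Debug, PartialEq)]
enum IngestEffect {
    Unchanged,
    ConflictLogged { conflict_type: &'static str },
}

fn apply_incoming_hash(durable: &str, incoming: &str) -> (String, IngestEffect) {
    if durable == incoming {
        (durable.to_string(), IngestEffect::Unchanged)
    } else {
        // Durable identity is preserved; the mismatch is recorded for audit.
        (
            durable.to_string(),
            IngestEffect::ConflictLogged { conflict_type: "hash" },
        )
    }
}
```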

Test coverage summary

  • Added one search_result_ingest conflict test in revaer-data.
  • Will run just ci, just build-release, and just ui-e2e before hand-off.

Observability updates

  • No new metrics/spans; verified existing indexer_health_event signal behavior.

Risk & rollback plan

  • If canonicalization conflict semantics evolve, update expected conflict/audit values in the test.
  • Roll back by reverting the test and checklist/ADR updates if needed.

Dependency rationale

  • No new dependencies added.

225: Indexer unit test domain coverage

  • Status: Accepted
  • Date: 2026-03-01
  • Context:
    • ERD checklist requires explicit unit-test coverage across canonicalization, policy evaluation, category mapping, and search validation domains.
    • Coverage existed across multiple modules but the checklist item remained open.
  • Decision:
    • Confirm and document active coverage for these four domains in revaer-data indexer tests.
    • Mark the checklist item complete based on verified test coverage.
  • Consequences:
    • Positive outcomes:
      • Checklist state now matches implemented and exercised tests.
      • Coverage expectation for core indexer rule domains is explicitly recorded.
    • Risks or trade-offs:
      • This records current scope; future rule additions still require new tests.
  • Follow-up:
    • Keep extending domain tests as ERD behavior expands.

Motivation

  • Keep ERD checklist status aligned with actual test enforcement in CI.

Design notes

  • Canonicalization coverage includes fallback identity, rollup median behavior, append-only paging, and conflict logging.
  • Policy evaluation coverage includes request policy drops and policy match logic.
  • Category mapping coverage validates upsert/delete and invalid mapping paths.
  • Search validation coverage exercises identifier mismatch, season/episode rules, and category filter validation.

Test coverage summary

  • Verified coverage in crates/revaer-data/src/indexers/canonical.rs, crates/revaer-data/src/indexers/search_results.rs, crates/revaer-data/src/indexers/policy_match.rs, crates/revaer-data/src/indexers/category_mapping.rs, and crates/revaer-data/src/indexers/search_requests.rs.
  • Will run just ci, just build-release, and just ui-e2e before hand-off.

Observability updates

  • No telemetry changes.

Risk & rollback plan

  • If coverage assertions become inaccurate, reopen checklist and add missing tests.
  • Roll back by reverting checklist/ADR updates if the requirement is re-scoped.

Dependency rationale

  • No new dependencies added.

226: Health and reputation rollup semantics from outbound logs

  • Status: Accepted
  • Date: 2026-03-01
  • Context:
    • ERD indexer acceptance requires connectivity and reputation statistics to follow outbound_request_log semantics.
    • Existing job tests covered base rollup behavior but did not explicitly assert rate_limited exclusion from sample counts.
  • Decision:
    • Add a stored-procedure test for job rollups that mixes successful, non-rate-limited failures, and rate-limited failures.
    • Verify both connectivity and reputation calculations exclude error_class=rate_limited samples.
    • Mark the corresponding ERD acceptance checklist item complete.
  • Consequences:
    • Positive outcomes:
      • Connectivity (indexer_connectivity_profile.success_rate_1h) and reputation (source_reputation.request_*) rules are now pinned to ERD semantics in CI.
      • Regression risk around sample-count inflation from rate-limited rows is reduced.
    • Risks or trade-offs:
      • Slightly longer revaer-data test runtime.
  • Follow-up:
    • Extend sampling tests if ERD expands included request types beyond current coverage.

Motivation

  • Enforce that health/reputation rollups remain authoritative to outbound_request_log semantics, especially rate-limit exclusions.

Design notes

  • Added connectivity_and_reputation_exclude_rate_limited_samples in revaer-data job tests.
  • Test inserts 30 successes, 5 non-rate-limited failures, and 10 rate-limited failures, then validates:
    • success_rate_1h = 30/35 (not 30/45)
    • request_count = 35 and request_success_count = 30
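The sampling arithmetic the test asserts can be sketched in a few lines. This is assumed logic mirroring the ERD rule, not the stored procedure: `error_class=rate_limited` rows drop out of both the sample count and the success-rate denominator.

```rust
// Minimal sketch of the ERD sampling rule: rate-limited outcomes are
// excluded from sampling entirely, while non-rate-limited failures still
// count toward the denominator.
#[derive(Clone, Copy, PartialEq)]
enum Outcome {
    Success,
    Failure,     // non-rate-limited failure: still a counted sample
    RateLimited, // excluded from the rollup entirely
}

/// Returns (success_count, sample_count, success_rate).
fn rollup(samples: &[Outcome]) -> (u32, u32, f64) {
    let (mut successes, mut total) = (0u32, 0u32);
    for o in samples.iter().filter(|o| **o != Outcome::RateLimited) {
        total += 1;
        if *o == Outcome::Success {
            successes += 1;
        }
    }
    let rate = if total == 0 {
        0.0
    } else {
        f64::from(successes) / f64::from(total)
    };
    (successes, total, rate)
}
```

With the test's fixture of 30 successes, 5 failures, and 10 rate-limited rows, this yields a 30/35 rate rather than 30/45.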

Test coverage summary

  • Added one stored-procedure rollup test in crates/revaer-data/src/indexers/jobs.rs.
  • Full gates (just ci, just ui-e2e) are run before hand-off.

Observability updates

  • No new telemetry surface added; this validates existing derived-health semantics.

Risk & rollback plan

  • If rollup semantics change, update the expected sample math and checklist references.
  • Roll back by reverting this ADR, checklist line, and test if requirement scope changes.

Dependency rationale

  • No new dependencies added.

227: Search zero-result explainability

  • Status: Accepted
  • Date: 2026-03-01
  • Context:
    • ERD acceptance requires zero-result searches to expose why nothing was returned.
    • Existing search page APIs returned pages/items only, without skipped/blocked/rate-limit diagnostics.
  • Decision:
    • Add stored procedures search_request_explainability_v1 and search_request_explainability.
    • Extend SearchPageListResponse with an explainability object that reports:
      • zero runnable indexers
      • skipped canceled/failed indexers
      • blocked result count and blocking rule IDs
      • rate-limited and retrying indexer counts
    • Wire the new procedure through revaer-data, revaer-app, and API handlers.
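A caller consuming the explainability object might collapse the counters into a single reason roughly as follows. This is a hedged sketch: the field and variant names are illustrative and do not reproduce the exact `SearchPageListResponse` contract.

```rust
// Illustrative mapping from explainability counters to a zero-result
// reason; the real response shape is defined by the stored procedures
// and API models, not this sketch.
#[derive(Debug, PartialEq)]
enum ZeroResultReason {
    NoRunnableIndexers,
    ResultsBlockedByRules,
    RateLimited,
    StillRetrying,
    GenuinelyEmpty,
}

struct Explainability {
    runnable_indexers: u32,
    blocked_result_count: u32,
    rate_limited_indexers: u32,
    retrying_indexers: u32,
}

fn explain_zero_results(e: &Explainability) -> ZeroResultReason {
    if e.runnable_indexers == 0 {
        ZeroResultReason::NoRunnableIndexers
    } else if e.blocked_result_count > 0 {
        ZeroResultReason::ResultsBlockedByRules
    } else if e.rate_limited_indexers > 0 {
        ZeroResultReason::RateLimited
    } else if e.retrying_indexers > 0 {
        ZeroResultReason::StillRetrying
    } else {
        ZeroResultReason::GenuinelyEmpty
    }
}
```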
  • Consequences:
    • Positive outcomes:
      • UI/API callers can explain “nothing found” states with structured diagnostics.
      • Explainability semantics are enforced through stored-proc and handler tests.
    • Risks or trade-offs:
      • Response payload size increases slightly for page list calls.
  • Follow-up:
    • Expose these explainability fields in the UI once indexer search pages are integrated in the frontend route.

Motivation

  • Ensure zero-result states are actionable instead of silent, matching ERD acceptance rules.

Design notes

  • Kept runtime SQL policy compliant by introducing stored procedures instead of ad-hoc queries.
  • Reused search_page_list_v1 authorization/visibility checks in the explainability procedure to preserve error semantics.
  • Counted blocked results from search_filter_decision rows whose decision is a drop (drop_source, drop_canonical).

Test coverage summary

  • Added revaer-data tests for explainability defaults and blocked/rate-limited/retrying states.
  • Updated API handler test support and search page handler tests for the new response shape.

Observability updates

  • No new spans/metrics; this feature surfaces existing run/filter state via API responses.

Risk & rollback plan

  • If semantics need adjustment, update the procedure outputs and response mapping together.
  • Roll back by reverting migration + API model/service wiring if clients cannot adopt the additive field.

Dependency rationale

  • No new dependencies added.

228: Prowlarr import source parity and dry-run coverage

  • Status: Accepted
  • Date: 2026-03-01
  • Context:
    • ERD acceptance requires import jobs to support both prowlarr_api and prowlarr_backup sources with dry-run mode.
    • Existing coverage did not explicitly assert source-specific run-path behavior and dry-run persistence across both source modes.
  • Decision:
    • Add revaer-data tests to validate:
      • import_job_create persists prowlarr_backup with is_dry_run=true.
      • import_job_run_prowlarr_api and import_job_run_prowlarr_backup reject mismatched job source with import_source_mismatch.
    • Extend API E2E import job coverage to execute both run paths against matching and mismatched sources.
  • Consequences:
    • Positive outcomes:
      • Source parity and dry-run behavior are validated at both stored-proc and API boundary levels.
      • Regression risk for import source routing logic is reduced.
    • Risks or trade-offs:
      • Slightly longer API E2E runtime due to additional import job flows.
  • Follow-up:
    • Add UI import wizard coverage when import UX lands, so dry-run and source selection are exercised from UI paths.

Motivation

  • Close a checklist gap with executable verification for ERD-required import source behavior.

Design notes

  • Reused existing integration harnesses; no production logic changes were required.
  • Asserted database DETAIL codes to keep failure modes explicit and stable.

Test coverage summary

  • crates/revaer-data/src/indexers/import_jobs.rs:
    • import_job_create_supports_backup_source_and_dry_run
    • import_job_run_procedures_reject_source_mismatch
  • tests/specs/api/indexers-import-jobs.spec.ts:
    • Added backup-source creation/run and cross-source mismatch assertions.

Observability updates

  • No new telemetry emitted; this change increases behavioral coverage only.

Risk & rollback plan

  • If these assertions conflict with intended semantics, update stored-proc details and tests in lockstep.
  • Roll back by reverting this ADR and test updates.

Dependency rationale

  • No new dependencies added.

229: Import result mapping and unmapped-definition coverage

  • Status: Accepted
  • Date: 2026-03-01
  • Context:
    • ERD acceptance requires imported indexers to either map to definitions or surface an explicit unmapped state.
    • Existing import-job coverage did not assert the combined status/result behavior for mapped and unmapped outcomes.
  • Decision:
    • Add data-layer stored-procedure coverage for import-job status aggregation and result listing with mixed mapped/unmapped outcomes.
    • Validate that:
      • mapped results are represented by imported_ready with upstream_slug set;
      • unmapped results are represented by unmapped_definition with upstream_slug unset and explicit detail.
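The mapped-versus-unmapped split can be sketched as a small classifier. The status names (`imported_ready`, `unmapped_definition`) are taken from the decision above; the function itself is an illustrative stand-in for the stored-procedure logic.

```rust
// Illustrative classifier for import result rows: a resolved upstream
// slug yields imported_ready, otherwise the result surfaces an explicit
// unmapped_definition detail with the slug left unset.
#[derive(Debug, PartialEq)]
enum ImportResultStatus {
    ImportedReady { upstream_slug: String },
    UnmappedDefinition { detail: String },
}

fn classify(upstream_slug: Option<&str>, imported_name: &str) -> ImportResultStatus {
    match upstream_slug {
        Some(slug) => ImportResultStatus::ImportedReady {
            upstream_slug: slug.to_string(),
        },
        // Unmapped results keep the slug unset and carry an explicit detail.
        None => ImportResultStatus::UnmappedDefinition {
            detail: format!("no definition mapping found for '{imported_name}'"),
        },
    }
}
```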
  • Consequences:
    • Positive outcomes:
      • Import status aggregation and result projection now enforce ERD-required unmapped explainability.
      • Regression risk for import result classification is reduced.
    • Risks or trade-offs:
      • Test setup inserts fixture rows directly into import_indexer_result to model importer output states.
  • Follow-up:
    • Extend API/UI import flows to render unmapped result remediation actions when import UX work is implemented.

Motivation

  • Close a migration acceptance gap with executable checks for mapped vs unmapped import outcomes.

Design notes

  • Reused existing import_job_create, import_job_get_status, and import_job_list_results stored-proc wrappers.
  • Added a single focused test that seeds two result rows and validates both rollup counters and list projections.

Test coverage summary

  • Added import_job_status_and_results_surface_unmapped_definitions in crates/revaer-data/src/indexers/import_jobs.rs.

Observability updates

  • No telemetry changes; this is behavior-verification coverage.

Risk & rollback plan

  • If result classification semantics change, update procedure definitions and this coverage together.
  • Roll back by reverting this ADR and the associated test.

Dependency rationale

  • No new dependencies added.

230: Migration parity E2E flow coverage

  • Status: Accepted
  • Date: 2026-03-01
  • Context:
    • ERD acceptance requires end-to-end verification for Prowlarr import plus Torznab parity and download flows.
    • Existing API E2E coverage was split across specs and did not provide a single parity-flow assertion path.
  • Decision:
    • Add a dedicated API E2E spec that exercises migration parity flows together:
      • Torznab caps/search parity semantics.
      • Torznab download auth/missing-source behavior.
      • Prowlarr API and backup import job run paths with dry-run setup.
  • Consequences:
    • Positive outcomes:
      • ERD migration parity checks are exercised explicitly in one E2E flow.
      • Regression detection improves for cross-surface import + Torznab behavior.
    • Risks or trade-offs:
      • Slightly longer API E2E runtime due to additional scenario setup.
  • Follow-up:
    • Extend this scenario to include successful Torznab download redirects once canonical source fixtures are available through public APIs.

Motivation

  • Close the checklist gap for explicit end-to-end migration parity coverage.

Design notes

  • Reused existing API fixtures and auth modes.
  • Kept assertions deterministic around currently available endpoints and fixture-free behavior.

Test coverage summary

  • Added tests/specs/api/indexers-migration-parity.spec.ts.

Observability updates

  • No telemetry changes; this is E2E coverage only.

Risk & rollback plan

  • If endpoint semantics change, update this test alongside handler/service changes.
  • Roll back by reverting the spec and checklist/ADR updates.

Dependency rationale

  • No new dependencies added.

Indexer Schema And Procedure Catalog Tests

  • Status: Accepted
  • Date: 2026-03-07
  • Context:
    • ERD_INDEXERS_CHECKLIST.md still had migration and stored-procedure verification unchecked even though most runtime behavior was already implemented.
    • The repository already had broad module-level stored-procedure tests in revaer-data, but it lacked a catalog-level integration suite proving the migrated database actually contains the full ERD table, enum, seed, and wrapper-procedure surface.
    • AGENTS.md requires a task record with motivation, design notes, test coverage summary, observability updates, risk and rollback guidance, and dependency rationale.
  • Decision:
    • Add crates/revaer-data/tests/indexers_schema.rs as a live Postgres integration suite that runs migrations and verifies:
      • all ERD indexer tables exist,
      • all ERD enums match the specified value sets,
      • all required stable and _v1 stored procedures are registered,
      • core schema invariants hold (public_id boundaries, deleted_at soft-delete columns, JSON/JSONB prohibition, key lower-case checks, and representative varchar caps),
      • seeded catalog rows exist for trust tiers, media domains, Torznab categories, default rate-limit policies, job schedules, and the system sentinel user.
    • Treat the new catalog inventory tests plus the existing module-level stored-procedure tests as the acceptance basis for closing the migration/procedure verification checklist items.
    • Alternatives considered:
      • Add many more per-procedure behavioral duplicates in integration tests: rejected because that would repeat existing module coverage and add runtime without improving catalog verification.
      • Rely on migration file review only: rejected because it does not prove the live migrated schema matches the ERD.
  • Consequences:
    • Positive outcomes:
      • The database surface now has an executable ERD conformance check at migration time, not just code review.
      • Missing tables, enum drift, missing wrappers, or seed regressions will fail the just test gate quickly.
      • The checklist can advance without inventing duplicate stored-procedure tests where behavior is already covered.
    • Risks or trade-offs:
      • The schema suite is intentionally catalog-oriented, so future behavioral changes still require focused module tests.
      • The test maintains a long explicit inventory of ERD objects, which must be updated whenever the ERD evolves.
  • Follow-up:
    • Implementation tasks:
      • Extend the catalog suite if new ERD tables, enums, or procedures are added.
      • Add additional DML-based constraint tests if future schema changes introduce higher-risk invariants not well represented by catalog inspection.
    • Review checkpoints:
      • Keep ERD_INDEXERS_CHECKLIST.md aligned with the live test inventory.
      • Revisit unchecked UI, migration-parity, and origin-only logging items in the next implementation passes.
  • Motivation:
    • Close the remaining ERD verification gap with the smallest high-signal change that exercises the real database surface.
  • Design notes:
    • The suite uses the existing Postgres test harness shape and keeps assertions at the schema catalog layer rather than duplicating service logic.
  • Test coverage summary:
    • Added crates/revaer-data/tests/indexers_schema.rs with six integration tests covering tables, enums, procedures, seeds, and representative constraints.
  • Observability updates:
    • No telemetry changes; this pass is verification-only.
  • Risk & rollback plan:
    • Roll back by reverting crates/revaer-data/tests/indexers_schema.rs, the checklist update, and this ADR if the test strategy needs to change.
  • Dependency rationale:
    • No new dependencies were added.
    • Alternatives considered: parsing migration SQL directly or adding a dedicated schema-test crate. Both were rejected in favor of sqlx catalog queries inside the existing revaer-data test setup.

Import Result Fidelity Snapshots

  • Status: Accepted
  • Date: 2026-03-07
  • Context:
    • The ERD migration checklist requires imported indexers to preserve enabled state, categories, tags, priorities, and missing-secret detection.
    • The current import job surface only returned coarse result status, which made parity verification impossible even in dry-run and partial-import paths.
    • Runtime DB interactions must stay on stored procedures, persisted data must remain normalized, and no JSON/JSONB snapshots are allowed.
  • Decision:
    • Extend import_indexer_result with scalar fidelity fields for resolved_is_enabled, resolved_priority, and missing_secret_fields.
    • Persist multi-value fidelity snapshots in normalized child tables: import_indexer_result_media_domain and import_indexer_result_tag.
    • Expand import_job_list_results_v1 and the API/CLI DTO contract to return the preserved snapshot for each result.
    • Alternatives considered: storing arrays directly on import_indexer_result, which was rejected because it weakens normalization and makes future filtering harder; deferring all fidelity reporting until the full importer exists, which would leave the migration checklist untestable.
  • Consequences:
    • Import result payloads now carry enough data to verify category/tag/priority/secret preservation rules.
    • The schema grows by two operational child tables and one proc contract expansion, which increases migration and test surface slightly.
    • This does not implement full Prowlarr ingestion by itself; it establishes the normalized persistence and observable contract the importer will write to.
  • Follow-up:
    • Wire the actual Prowlarr API/backup importer to populate the new snapshot fields and child tables.
    • Add API/E2E coverage once an executable import path can create populated results through HTTP.
    • Review whether secret error-class detail should include field names or only counts once the importer is implemented.

Task Record

Motivation: make the ERD migration-fidelity acceptance item measurable with the current import-job surface.

Design notes: scalar fidelity lives on import_indexer_result; category and tag snapshots stay normalized in dedicated child tables; the stored procedure returns sorted arrays for a stable API contract.

Test coverage summary: added data-layer integration coverage for preserved import result snapshots and updated schema catalog expectations for the new tables.

Observability updates: no new metrics were needed; existing import job spans and outcome counters remain the boundary for this read-path change.

Risk & rollback plan: if the contract causes downstream issues, revert migration 0097_import_result_fidelity_snapshot.sql and the DTO mapping change together; the change is isolated to import-result persistence and listing.

Dependency rationale: no new dependencies were added; existing SQLx, serde, and chrono types already cover the new fields.

Secret Binding And Test Error Class Coverage

  • Status: Accepted
  • Date: 2026-03-08
  • Context:
    • ERD_INDEXERS_CHECKLIST.md still had the migration acceptance item for secret binding/test UX unchecked.
    • The stored procedures already implemented the intended behavior, but the repo lacked focused coverage proving successful secret binding, missing-secret test preparation failures, and success-path clearing of migration error state.
  • Decision:
    • Add data-layer coverage for routing policy secret binding persistence.
    • Add executor coverage for missing required secret preparation failures, successful bound-secret preparation payloads, and finalize-success clearing of migration error state.
    • Add API coverage for routing-policy secret bind problem details preserving the stable error_code context, plus API E2E coverage for successful and revoked-secret binding flows.
  • Consequences:
    • The migration acceptance item is now backed by direct stored-proc, handler, and API end-to-end tests instead of inference from adjacent behavior.
    • Coverage now proves the ERD-required missing_secret and secret lifecycle behavior without adding new dependencies or widening public APIs.
    • The remaining ERD work is still broader than this acceptance item; this ADR closes only the secret binding/test UX gap.
  • Follow-up:
    • Keep extending instance-level public flows once definition-selection UX stops relying on internal IDs.
    • Revisit checklist items tied to broader API/public-surface cleanup separately.

Indexer Instance Create Uses Definition Slug Key

  • Status: Accepted
  • Date: 2026-03-08
  • Context:
    • ERD_INDEXERS_CHECKLIST.md still had the API-surface rule requiring UUIDs or stable keys instead of internal primary keys.
    • The remaining indexer API violation was IndexerInstanceCreateRequest, which still accepted indexer_definition_id.
    • The public indexer catalog already exposes upstream_slug, so callers had a stable key available without exposing an internal database identifier.
  • Decision:
    • Change indexer instance creation to accept indexer_definition_upstream_slug end to end.
    • Update the stored procedure wrapper and latest migration so runtime creation resolves definitions by slug instead of internal id.
    • Update handler, app-layer facade signatures, and API tests to use the slug key.
  • Consequences:
    • The indexer API surface no longer requires callers to know an internal definition primary key.
    • Existing clients must send the slug field instead of the numeric id for instance creation.
    • The underlying database schema remains unchanged; only the procedure contract and API contract moved to the public key.
  • Follow-up:
    • Keep checking new indexer endpoints for similar internal-PK leaks.
    • Revisit whether any multi-source future catalog needs upstream_source + upstream_slug as a composite public key.

Indexer Service Operation Metrics And Spans

  • Status: Accepted
  • Date: 2026-03-08
  • Context:
    • ERD_INDEXERS_CHECKLIST.md still had the observability item open for indexer-domain operations.
    • The API already emitted request spans for indexer endpoints, and the app-layer IndexerService already wrapped each domain operation in a stable tracing span.
    • What was missing was consistent per-operation metrics at the domain-service boundary so search, routing, policy, torznab, instance, and secret workflows all emitted a stable success/error signal and latency measurement.
  • Decision:
    • Extend revaer-telemetry with indexer service operation counters and latency histograms labeled by operation and outcome.
    • Inject Metrics into IndexerService via bootstrap and test wiring instead of constructing telemetry inside the service.
    • Route every IndexerFacade operation through a single helper that records success/error outcomes and elapsed latency around the already-instrumented spans.
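The shared-helper pattern can be sketched as below. The `OperationRecorder` trait is a stand-in for the real revaer-telemetry `Metrics` handle, whose API is not shown in this document; only the funnel-everything-through-one-helper shape is the point.

```rust
use std::time::Instant;

// Sketch of the run_operation pattern: every facade operation is wrapped
// by one helper that records a bounded (operation, outcome) label pair
// plus elapsed latency, keeping metric cardinality low.
trait OperationRecorder {
    fn record(&mut self, operation: &str, outcome: &str, latency_ms: f64);
}

/// In-memory recorder used only to demonstrate the helper.
#[derive(Default)]
struct VecRecorder {
    samples: Vec<(String, String, f64)>,
}

impl OperationRecorder for VecRecorder {
    fn record(&mut self, operation: &str, outcome: &str, latency_ms: f64) {
        self.samples
            .push((operation.to_string(), outcome.to_string(), latency_ms));
    }
}

fn run_operation<T, E>(
    recorder: &mut dyn OperationRecorder,
    operation: &str,
    f: impl FnOnce() -> Result<T, E>,
) -> Result<T, E> {
    let started = Instant::now();
    let result = f();
    // Outcome label stays low-cardinality: just "success" or "error".
    let outcome = if result.is_ok() { "success" } else { "error" };
    recorder.record(operation, outcome, started.elapsed().as_secs_f64() * 1000.0);
    result
}
```

Because the recorder is injected rather than constructed inside the helper, the same shape respects the DI boundary described in the next record.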
  • Consequences:
    • Indexer-domain operations now emit stable metrics and spans from the API boundary through the app-service boundary without violating the DI rule.
    • Troubleshooting can distinguish success versus error rates per operation and correlate them with the existing tracing spans.
    • The metrics surface grows slightly, but only with bounded low-cardinality labels (operation, outcome).
  • Follow-up:
    • Add dashboard panels and alerts for the new indexer_operations_total and indexer_operation_latency_ms series when the indexer health UI is built.
    • Keep new indexer-domain methods on the shared run_operation helper so observability coverage does not regress.

Indexer DI Boundary Enforcement

  • Status: Accepted
  • Date: 2026-03-08
  • Context:
    • ERD_INDEXERS_CHECKLIST.md still had the dependency-injection boundary item open.
    • The indexer runtime path in revaer-app is meant to operate on injected collaborators only, while bootstrap remains the only place allowed to read environment variables and construct concrete infrastructure.
    • This was already mostly true in code, but it was not enforced by tests, so regressions would be easy to introduce.
  • Decision:
    • Add architecture tests in crates/revaer-app/src/bootstrap.rs that pin the DI boundary for indexer runtime wiring.
    • Assert that crates/revaer-app/src/indexers.rs does not read environment variables or construct core infrastructure directly.
    • Assert that crates/revaer-app/src/bootstrap.rs remains the place that reads env vars and wires concrete metrics, event bus, runtime state, and IndexerService.
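The source-based enforcement idea amounts to scanning module text for forbidden wiring patterns. This sketch takes a string instead of reading the real files, and the pattern list is illustrative, not the actual test's inventory.

```rust
// Sketch of a source-based architecture check: flag module source that
// reads environment variables directly. The real test inspects files
// under crates/revaer-app; this version only scans a provided string.
fn violates_di_boundary(source: &str) -> bool {
    const FORBIDDEN: &[&str] = &["std::env::var", "env::var("];
    FORBIDDEN.iter().any(|pattern| source.contains(pattern))
}
```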
  • Consequences:
    • The indexer runtime module now has an explicit regression test for the DI rule from AGENTS.md.
    • Bootstrap stays the wiring boundary, and service code remains easier to test because collaborators are passed in.
    • The enforcement is intentionally narrow and source-based, so future refactors must keep these invariants visible or update the test with an equivalent wiring design.
  • Follow-up:
    • Extend the same pattern to other runtime subsystems if more non-bootstrap wiring starts to accumulate.
    • Keep new indexer-domain services on injected constructors instead of hidden singleton/env access.

Manual Search UI

  • Status: Accepted
  • Date: 2026-03-15
  • Context:
    • ERD_INDEXERS_CHECKLIST.md still had manual and interactive search UI unchecked even though the API already exposed search request creation and page reads.
    • The current web UI had no route or feature slice for indexer search, which blocked category-filtered searches and bulk handoff into the download client.
    • AGENTS.md requires minimal dependencies, strict module boundaries, a task record, and completion through the just quality gates.
  • Decision:
    • Add a dedicated crates/revaer-ui/src/features/search/ slice with pure request-shaping helpers, a feature-local API shim, and a Yew page mounted at /search.
    • Reuse the existing indexer search endpoints (/v1/indexers/search-requests and search page reads) instead of introducing new backend schema or service churn.
    • Push selected results into the existing torrent add flow by reusing the shared ApiClient and preferring magnet links over download URLs when both are present.
    • Alternatives considered: building a broader indexer management UI first, or adding new listing endpoints before search. Those options were larger and did not unblock the missing ERD-backed manual search slice as directly.
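The link-preference rule in the decision above reduces to a one-line fallback; the helper name here is invented for illustration.

```rust
// Minimal sketch of the handoff rule: prefer the magnet link when both a
// magnet and a download URL are present, otherwise fall back.
fn preferred_link<'a>(magnet: Option<&'a str>, download_url: Option<&'a str>) -> Option<&'a str> {
    magnet.or(download_url)
}
```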
  • Consequences:
    • Positive outcomes:
      • Revaer now exposes an end-to-end manual search flow in the UI with query parameters, Torznab category filtering, explainability, sealed page inspection, and bulk add-to-client actions.
      • The feature fits the repo’s UI architecture by keeping transport in a feature-local API module and request normalization in pure helpers with tests.
      • No new dependencies were added.
    • Risks or trade-offs:
      • The search feature currently uses explicit refresh actions rather than live page streaming.
      • Labels are English-first with fallback text for the new navigation item instead of a full locale pass.
  • Follow-up:
    • Implementation tasks:
      • Add richer live refresh and search history once the broader indexer read/list surfaces exist.
      • Extend the feature toward search-profile-aware presets when list/read endpoints are available.
    • Review checkpoints:
      • Keep just ci and just ui-e2e green.
      • Revisit the remaining unchecked checklist items for indexer management, health, and connectivity views.

Indexer Admin Console UI

  • Status: Accepted
  • Date: 2026-03-15
  • Context:
    • ERD_INDEXERS_CHECKLIST.md still had the broad indexer-management UI item open after the manual search page landed.
    • The API already exposed many ERD-backed mutation workflows for indexers, secrets, routing policies, rate limits, search profiles, policy sets, import jobs, and Torznab management.
    • The current UI lacked a dedicated route for those operations, which forced all validation of that surface into API/CLI-only flows.
  • Decision:
    • Add a dedicated /indexers route and crates/revaer-ui/src/features/indexers/ feature slice for operator-facing indexer administration.
    • Reuse the existing authenticated API surface through the shared ApiClient, adding small generic REST helpers instead of introducing new dependencies or duplicating HTTP auth logic.
    • Model the page as an action-oriented admin console with a shared activity log, because the backend does not yet expose read/list endpoints for every managed resource.
    • Alternatives considered: overloading the existing Settings page, or delaying all UI work until broader list/read APIs existed. Both options would have either blurred module boundaries or left the remaining ERD UI scope blocked longer.
  • Consequences:
    • Positive outcomes:
      • Revaer now has end-to-end UI entry points for the existing indexer management workflows, including definitions lookup, tags, secrets, routing policies, rate limits, instances, search profiles, policies, imports, and Torznab actions.
      • Operators can capture raw response payloads in the page log, which improves reproducibility when comparing UI behavior to API/CLI behavior.
      • No new dependencies were added.
    • Risks or trade-offs:
      • The console is action-first rather than a full CRUD browser because list/read endpoints are still incomplete for several resource types.
      • Several fields currently use free-form text inputs for enum keys and UUIDs, which trades richer affordances for implementation speed and API parity.
  • Follow-up:
    • Implementation tasks:
      • Add list/read endpoints and richer selectors as the backend surface expands.
      • Fold more health and connectivity summary views into the indexer route once dedicated data reads are available.
    • Review checkpoints:
      • Keep just ci and just ui-e2e green.
      • Revisit the remaining unchecked checklist items around service layering, error logging origin, rollout, and final acceptance.

Indexer Schedule Controls UI

  • Status: Accepted
  • Date: 2026-03-15
  • Context:
    • The indexer admin console already exposed rate-limit assignment, but the ERD parity checklist still left the per-indexer schedule controls UI item unchecked.
    • The API already accepted is_enabled, enable_rss, enable_automatic_search, and enable_interactive_search through the existing indexer instance update endpoint.
    • AGENTS.md requires completing the next efficient ERD-backed slice without adding dead code or extra dependencies.
  • Decision:
    • Extend the /indexers admin console to surface explicit checkbox controls for instance enablement, RSS, automatic search, and interactive search scheduling.
    • Reuse the existing IndexerInstanceUpdateRequest payload instead of introducing a separate UI-only endpoint or new backend model.
    • Lock the route behavior in Playwright by asserting the schedule controls render on the page.
    • Alternatives considered: delaying the controls until broader instance list/read APIs existed, or adding a dedicated scheduling sub-view. Both would have left an already-supported ERD path hidden from operators.
  • Consequences:
    • Positive outcomes:
      • Operators can now control the ERD-backed per-instance scheduling flags directly from the admin console alongside rate-limit assignment.
      • The checklist item for per-indexer rate limits and schedule controls now has matching UI coverage and browser verification.
      • No new dependencies were added.
    • Risks or trade-offs:
      • The update action still targets a manually entered instance UUID because list/read endpoints for all instances are not yet available in the UI.
      • Schedule state is operator-driven rather than auto-refreshed from the server after each mutation.
  • Follow-up:
    • Implementation tasks:
      • Add richer instance selectors and readback once list/read instance endpoints are available in the UI.
      • Expand the console further for RSS history and mark-seen workflows when those reads are exposed.
    • Review checkpoints:
      • Keep just ci and just ui-e2e green.
      • Revisit the remaining unchecked parity items around app sync, health visibility, RSS views, connectivity dashboards, backup/restore, and final migration acceptance.
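
The optional-flag update this record reuses (IndexerInstanceUpdateRequest carrying the four schedule flags) can be sketched as a merge of operator-supplied values over current state. The struct and field names mirror the flags named in this record, but the merge itself is an illustrative sketch of optional-field semantics, not the server's actual implementation:

```rust
/// Current scheduling flags on an indexer instance (illustrative struct).
#[derive(Debug, Clone, PartialEq)]
struct ScheduleFlags {
    is_enabled: bool,
    enable_rss: bool,
    enable_automatic_search: bool,
    enable_interactive_search: bool,
}

/// Partial update mirroring the shape of IndexerInstanceUpdateRequest:
/// only the fields the operator touched carry Some(..); everything else
/// is left untouched on the server side.
#[derive(Default)]
struct ScheduleUpdate {
    is_enabled: Option<bool>,
    enable_rss: Option<bool>,
    enable_automatic_search: Option<bool>,
    enable_interactive_search: Option<bool>,
}

/// Merge the update over the current flags, keeping current values where
/// the update is None.
fn apply_update(current: &ScheduleFlags, update: &ScheduleUpdate) -> ScheduleFlags {
    ScheduleFlags {
        is_enabled: update.is_enabled.unwrap_or(current.is_enabled),
        enable_rss: update.enable_rss.unwrap_or(current.enable_rss),
        enable_automatic_search: update
            .enable_automatic_search
            .unwrap_or(current.enable_automatic_search),
        enable_interactive_search: update
            .enable_interactive_search
            .unwrap_or(current.enable_interactive_search),
    }
}
```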

Indexer RSS Management UI

  • Status: Accepted
  • Date: 2026-03-15
  • Context:
    • The ERD checklist still lacked operator-facing RSS management despite stored procedures already supporting subscription writes and RSS dedupe storage.
    • The existing /indexers admin console had no way to inspect subscription cadence, view recently seen RSS items, or manually seed dedupe state.
  • Decision:
    • Add stored-proc-backed RSS management APIs for subscription status, recent seen-item listing, and manual mark-seen.
    • Extend the indexer admin console with an RSS management panel that can fetch subscription state, update cadence/enablement, inspect recent items, and insert manual seen markers.
  • Consequences:
    • Operators can now manage RSS polling behavior and dedupe history without direct database access.
    • The implementation adds new API/DTO surface area and one migration, which increases maintenance cost but keeps runtime SQL inside stored procedures.
  • Follow-up:
    • Validate the new RSS panel in just ci and just ui-e2e.
    • Continue with the remaining unchecked migration items, especially health dashboards and deployment acceptance work.

Indexer connectivity and reputation UI

  • Status: Accepted
  • Date: 2026-03-15
  • Context:
    • ERD_INDEXERS.md requires operator-facing views for indexer_connectivity_profile and source_reputation, plus remediation-adjacent controls.
    • The derived tables and refresh jobs already existed, but the admin console could not inspect them without querying the database directly.
  • Decision:
    • Add stored procedures and typed data/API/UI adapters to expose connectivity profile snapshots and recent reputation windows per indexer instance.
    • Reuse the existing instance admin surface and adjacent Cloudflare reset actions instead of creating a separate dashboard route first.
  • Consequences:
    • Operators can now inspect connectivity status, dominant error class, latency, success rates, and recent reputation rollups from /indexers.
    • The implementation adds new read procedures and response DTOs that must stay aligned with derived-table schema changes.
  • Follow-up:
    • Add richer health drill-down and notification delivery to close the remaining health dashboard checklist item.
    • Consider promoting these views into a dedicated health route if the admin console becomes too dense.

Indexer routing policy visibility

  • Status: Accepted
  • Date: 2026-03-15
  • Motivation:
    • ERD_INDEXERS.md requires per-indexer proxy and flaresolverr controls with operator-visible health and configuration context.
    • The admin console already allowed routing policy creation, parameter updates, secret binding, and instance assignment, but operators could not read the resulting configuration without database access.
  • Design notes:
    • Add a stored-procedure read path, routing_policy_get, that validates actor scope and returns routing policy metadata, assigned rate-limit policy fields, parameter values, and bound secret references.
    • Aggregate the row-oriented stored-proc result into a typed API model so the HTTP and UI layers can render routing policy state without database-specific joins.
    • Extend the /indexers admin console with an explicit fetch action and summary panel instead of introducing a new route, keeping proxy and Cloudflare controls together.
  • Test coverage summary:
    • Added revaer-data coverage for routing policy reads across parameters, secret bindings, and rate-limit assignments.
    • Added revaer-api handler coverage for routing policy fetch success and not-found mapping.
    • Updated the UI route smoke test to assert the routing policy fetch control is present.
  • Observability updates:
    • Added the indexer.routing_policy_get service span with actor and routing policy identifiers.
    • Reused the existing routing-policy error mapping so operator-facing failures preserve structured error_code and sqlstate context.
  • Risk & rollback plan:
    • Risk is limited to a new read-only stored procedure and endpoint; existing mutation flows are unchanged.
    • Rollback is straightforward: revert the new migration, API route, and UI fetch panel if the response shape proves insufficient.
  • Dependency rationale:
    • No new dependencies were added.
    • Alternatives considered: embedding raw SQL in the API layer or scraping existing mutation responses. Both were rejected because AGENTS.md requires stored procedures for runtime DB access and a read endpoint is the stable operator contract.
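
The aggregation described in the design notes, where a row-oriented stored-proc result becomes one typed summary, can be sketched as follows. All type and field names here are illustrative stand-ins rather than the actual revaer-data definitions:

```rust
/// Hypothetical flattened row as returned by a row-oriented stored procedure:
/// policy metadata repeats on every row while parameter/secret columns vary.
struct RoutingPolicyRow {
    policy_name: String,
    rate_limit_policy: Option<String>,
    parameter: Option<(String, String)>, // (key, value)
    secret_binding: Option<String>,      // bound secret reference
}

/// Aggregated, typed view suitable for the HTTP and UI layers.
#[derive(Debug, Default, PartialEq)]
struct RoutingPolicySummary {
    policy_name: String,
    rate_limit_policy: Option<String>,
    parameters: Vec<(String, String)>,
    secret_bindings: Vec<String>,
}

/// Collapse the repeated metadata into one summary and collect the varying
/// parameter and secret-binding columns; None when the proc returned no rows.
fn aggregate_rows(rows: &[RoutingPolicyRow]) -> Option<RoutingPolicySummary> {
    let first = rows.first()?;
    let mut summary = RoutingPolicySummary {
        policy_name: first.policy_name.clone(),
        rate_limit_policy: first.rate_limit_policy.clone(),
        ..Default::default()
    };
    for row in rows {
        if let Some(param) = &row.parameter {
            summary.parameters.push(param.clone());
        }
        if let Some(secret) = &row.secret_binding {
            summary.secret_bindings.push(secret.clone());
        }
    }
    Some(summary)
}
```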

Indexer import job dashboard

  • Status: Accepted
  • Date: 2026-03-16
  • Motivation:
    • The indexer admin console already exposed import job create/run/status/result endpoints, but the workflow still depended on copying IDs out of the activity log and mentally reconciling counts with raw JSON.
    • ERD_INDEXERS_CHECKLIST.md still calls out import pipeline UX, so the existing import surface needed to become operator-friendly before broader Cardigann and conflict-resolution work lands.
  • Design notes:
    • Keep the current /indexers route and extend it with import job state that persists the latest job status and result payloads in the feature slice.
    • Promote the created or executed import_job_public_id back into the form state so the next fetch actions operate on the active job without manual copying.
    • Render status rollups and per-result cards directly in the import section so duplicate skips, unmapped definitions, missing secrets, and imported instances stay visible.
  • Test coverage summary:
    • Updated the indexer UI route smoke test to assert the new import status and import results sections render.
    • Full regression gates remain just ci and just ui-e2e.
  • Observability updates:
    • No new backend telemetry was required; the work reuses existing import job spans and activity-log JSON captures.
    • The UI keeps recording import responses in the activity log while also surfacing the latest structured view.
  • Risk & rollback plan:
    • Risk is limited to client-side state handling on the admin page.
    • Rollback is a straightforward revert of the import dashboard state/rendering if operators prefer the previous raw-log flow.
  • Dependency rationale:
    • No new dependencies were added.
    • Alternatives considered: adding a separate import route or introducing server-side aggregation endpoints. Both were rejected because the current API already carries the required data and the admin page is the established operator surface.

244. Indexer health event drill-down

  • Status: Accepted
  • Date: 2026-03-17

Motivation

  • ERD_INDEXERS_CHECKLIST.md still leaves the health and notifications parity slice unchecked.
  • Operators already have connectivity rollups and reputation summaries, but they still lack a direct read path for raw indexer_health_event rows defined by ERD_INDEXERS.md.
  • The next efficient step is to expose recent health events end-to-end so the existing /indexers console can show failure detail and conflict timing without introducing a larger notification system yet.

Design notes

  • Add stored procedures indexer_health_event_list_v1(...) and stable wrapper indexer_health_event_list(...) to read recent events for one indexer instance with actor validation and bounded limits.
  • Extend the data, app, and API layers with typed health-event list reads and a new GET /v1/indexers/instances/{indexer_instance_public_id}/health-events route.
  • Extend the indexer admin UI with a health-event limit field, fetch action, and rendered drill-down cards under the connectivity section.
  • Keep notification delivery out of scope for this slice; the checklist item remains open until delivery hooks exist.
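
The bounded-limit behavior mentioned above can be sketched as a small clamp helper; the default and maximum values here are hypothetical, not the ones the real procedure enforces:

```rust
/// Hypothetical bounds on how many rows a single health-event read may return.
const MAX_HEALTH_EVENT_LIMIT: i64 = 200;
const DEFAULT_HEALTH_EVENT_LIMIT: i64 = 50;

/// Clamp an operator-supplied limit into the bounded range, falling back to
/// the default when the limit is absent or non-positive.
fn bounded_limit(requested: Option<i64>) -> i64 {
    match requested {
        Some(n) if n > 0 => n.min(MAX_HEALTH_EVENT_LIMIT),
        _ => DEFAULT_HEALTH_EVENT_LIMIT,
    }
}
```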

Test coverage summary

  • Added stored-procedure tests for recent-row ordering and missing-instance failure mapping.
  • Added API handler tests for successful health-event reads and conflict mapping.
  • Extended API and UI Playwright smoke coverage for the new health-event surface.

Observability updates

  • No new emitters were added; this slice reads the existing indexer_health_event diagnostic stream already populated by backend workflows.
  • The new API route reuses existing request tracing and metrics middleware.

Risk & rollback plan

  • Risk is limited to a new read-only proc and route plus UI rendering.
  • Rollback is straightforward: revert the migration, API handler/route, and UI panel if operator output regresses.

Dependency rationale

  • No new dependencies were added.

245. Indexer origin-only error logging

  • Status: Accepted
  • Date: 2026-03-16

Motivation

  • ERD_INDEXERS_CHECKLIST.md still leaves the origin-only error logging rules unchecked even though the indexer stack already carries structured code and sqlstate fields through typed errors.
  • crates/revaer-app/src/indexers.rs was re-logging propagated DataError values while also converting them into service errors, which duplicated origin logs and violated AGENTS.md.
  • The next efficient step is to make the app-layer mapper functions pure translations so origin logs remain singular while callers still receive stable service error kinds and structured context.

Design notes

  • Remove tracing::error! side effects from the indexer service error-mapper helpers in crates/revaer-app/src/indexers.rs.
  • Keep the existing mapping taxonomy unchanged so service callers still receive the same kind, code, and sqlstate values.
  • Add mapper coverage proving structured error context survives translation without requiring logging side effects.
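
A minimal sketch of the pure-translation shape described above, with illustrative error types and codes rather than the crate's actual definitions:

```rust
#[derive(Debug, PartialEq)]
enum ServiceErrorKind {
    NotFound,
    Conflict,
    Internal,
}

/// Simplified stand-in for a propagated data-layer error that was already
/// logged once at its origin.
#[derive(Debug)]
struct DataError {
    code: String,
    sqlstate: Option<String>,
}

#[derive(Debug)]
struct ServiceError {
    kind: ServiceErrorKind,
    code: String,
    sqlstate: Option<String>,
}

/// A pure translation: classify the propagated error and carry its structured
/// context forward. Crucially, no `tracing::error!` side effect happens here;
/// the layer that produced the DataError already emitted the origin log.
fn map_data_error(err: DataError) -> ServiceError {
    let kind = match err.code.as_str() {
        "indexer_not_found" | "tag_not_found" => ServiceErrorKind::NotFound,
        "duplicate_name" => ServiceErrorKind::Conflict,
        _ => ServiceErrorKind::Internal,
    };
    ServiceError { kind, code: err.code, sqlstate: err.sqlstate }
}
```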

Test coverage summary

  • Added a unit test covering representative mapper paths for definition, tag, and indexer-field errors.
  • The new assertions verify kind, code, and sqlstate preservation for propagated stored-procedure failures.
  • Full repository quality gates remain the final verification for regression safety.

Observability updates

  • No new emitters were added.
  • This change reduces duplicate logs by keeping error emission at the actual failure origin while preserving structured context on returned service errors.

Risk & rollback plan

  • Risk is low because the change is limited to log side effects in app-layer error translation.
  • If diagnostics regress, rollback is a straight revert of the mapper cleanup and accompanying checklist/task-record updates.

Dependency rationale

  • No new dependencies were added.

246. Indexer health summary panels

  • Status: Accepted
  • Date: 2026-03-16

Motivation

  • The indexer admin console could fetch connectivity profiles and source-reputation rows, but operators only saw those responses in the generic activity log.
  • ERD_INDEXERS_CHECKLIST.md still leaves the health dashboard slice open because the UI was missing the visible status badges and summary panels described by ERD_INDEXERS.md.
  • The next efficient step is to render those existing API reads directly in /indexers so operators can review health state without leaving the page or parsing raw JSON logs.

Design notes

  • Add local UI state for the latest connectivity profile and fetched reputation rows alongside the existing health-event state.
  • Render a connectivity summary card with a status badge, dominant error, latency bands, and recent success-rate snapshots.
  • Render source-reputation cards for the selected window and keep health-event drill-down unchanged.
  • Leave notification delivery out of scope for this slice; the health checklist item remains open until email/webhook hooks exist.
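
The badge-class and percent-formatting helpers implied by this design can be sketched as follows; the status strings and CSS class names are assumptions, not the exact values used in revaer-ui:

```rust
/// Map a connectivity status onto a CSS badge class; status values and class
/// names here are illustrative.
fn badge_class(status: &str) -> &'static str {
    match status {
        "healthy" => "badge badge-ok",
        "degraded" => "badge badge-warn",
        "failing" => "badge badge-error",
        _ => "badge badge-unknown",
    }
}

/// Format a 0.0..=1.0 success-rate ratio as a percentage with one decimal,
/// clamping out-of-range input instead of rendering nonsense.
fn format_percent(ratio: f64) -> String {
    let clamped = ratio.clamp(0.0, 1.0);
    format!("{:.1}%", clamped * 100.0)
}
```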

Test coverage summary

  • Added unit coverage for connectivity badge-class mapping and percent formatting helpers in the indexer UI logic module.
  • Extended the /indexers route smoke test to assert the new health summary headings render.
  • Full just ci and just ui-e2e remain the end-to-end verification gates.

Observability updates

  • No new emitters were added.
  • This slice improves operator visibility by presenting already-collected connectivity and reputation telemetry directly in the admin console.

Risk & rollback plan

  • Risk is limited to UI state/rendering changes over existing API calls.
  • Rollback is a straightforward revert of the new state/rendering helpers and task-record updates if the console regresses.

Dependency rationale

  • No new dependencies were added.

247. Indexer backup and restore

  • Status: Accepted
  • Date: 2026-03-18

Motivation

  • ERD_INDEXERS_CHECKLIST.md still left backup and restore of indexer settings open even though the admin console already exposed most of the underlying configuration entities.
  • Operators needed a user-facing way to export the current indexer graph and re-apply it later without manually replaying tags, routing policies, rate limits, instance fields, and RSS settings.
  • The next efficient step was to add a sanitized backup format and restore flow on top of the existing stored-procedure-backed write APIs instead of inventing a separate persistence path.

Design notes

  • Add stored-procedure-backed export reads that return normalized rows for tags, rate-limit policies, routing policies, and indexer instances with secret references but never secret plaintext.
  • Assemble those flattened rows into a typed snapshot document in the app layer so the HTTP and UI layers can share a stable backup format.
  • Add /v1/indexers/backup/export and /v1/indexers/backup/restore endpoints and wire /indexers with export and restore controls plus unresolved-secret feedback.
  • Restore replays the existing create/update procedures and skips only secret bindings whose referenced secret is unavailable, surfacing them back to the operator for follow-up.
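
The skip-and-surface behavior for secret bindings during restore can be sketched like this; the types are hypothetical simplifications of the real snapshot models:

```rust
use std::collections::HashSet;

/// Hypothetical secret binding from a backup snapshot: a policy name plus the
/// name of the secret it expects to exist on the target deployment.
struct SecretBinding {
    policy: String,
    secret_name: String,
}

/// Outcome of replaying secret bindings: applied ones versus unresolved ones
/// that are surfaced back to the operator instead of failing the restore.
struct RestoreOutcome {
    applied: Vec<String>,
    unresolved: Vec<String>,
}

/// Replay bindings whose referenced secret exists; collect the rest for
/// operator follow-up rather than aborting the whole restore.
fn replay_secret_bindings(
    bindings: &[SecretBinding],
    available_secrets: &HashSet<String>,
) -> RestoreOutcome {
    let mut outcome = RestoreOutcome { applied: Vec::new(), unresolved: Vec::new() };
    for binding in bindings {
        if available_secrets.contains(&binding.secret_name) {
            // In the real flow this would call the existing bind procedure.
            outcome.applied.push(binding.policy.clone());
        } else {
            outcome.unresolved.push(binding.secret_name.clone());
        }
    }
    outcome
}
```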

Test coverage summary

  • Added stored-procedure tests for the backup export wrappers in revaer-data.
  • Added API handler coverage for backup export and restore success and error mapping.
  • Extended the /indexers route smoke test to assert the new backup and restore panel renders.
  • Full just ci and just ui-e2e remain the end-to-end verification gates.

Observability updates

  • Backup export and restore endpoints are traced through the existing HTTP span layer.
  • The restore response includes unresolved secret-binding summaries so operators can distinguish successful object replay from missing-secret follow-up work.

Risk & rollback plan

  • The main risk is restore failure on deployments with conflicting names or missing referenced secrets; those conditions now fail fast or are surfaced explicitly instead of being silently ignored.
  • Secret plaintext is intentionally excluded from exports, so rollback is a straightforward revert of the backup routes, snapshot models, and UI panel if the format proves insufficient.

Dependency rationale

  • No new dependencies were added.

248. Indexer coexistence and rollback acceptance coverage

  • Status: Accepted
  • Date: 2026-03-20

Motivation

  • ERD_INDEXERS.md requires migration reversibility: Revaer must run alongside Prowlarr, avoid destructive Arr mutations, and keep rollback limited to a Torznab URL change.
  • The repo already had parity/import coverage, but not an explicit acceptance slice proving coexistence and the lack of downstream-app mutation surfaces.

Design notes

  • Added an API E2E spec that creates multiple Revaer Torznab instances, runs import flow activity alongside them, and verifies both endpoints stay callable.
  • Added an operator-facing rollback guide that documents the intended migration safety net.
  • Guarded the public API surface by asserting the OpenAPI document does not expose downstream Arr mutation routes.

Test coverage summary

  • Added tests/specs/api/indexers-coexistence-rollback.spec.ts.
  • Covered coexistence of multiple Torznab instances and rollback-safety assertions against the published API surface.

Observability updates

  • No telemetry changes. This slice adds acceptance coverage and operator documentation only.

Risk & rollback plan

  • Risk is low because the implementation adds tests and documentation without changing runtime behavior.
  • Roll back by reverting the spec, guide, and checklist/ADR updates if the acceptance framing changes.

Dependency rationale

  • No new dependencies added.

249. Indexer Domain Service Closeout

Date: 2026-03-20

Status

Accepted

Context

  • The ERD checklist still carried the phase-6 domain-service item even though the current app-layer indexer service already fronts the shipped indexer domains.
  • That stale unchecked item obscured the real remaining gaps, which are product-facing features like app sync, category overrides, richer import UX, and health notification delivery.

Decision

  • Close the phase-6 domain-service checklist item after auditing the existing service boundary.
  • Treat crates/revaer-app/src/indexers.rs as the application-service boundary for the shipped indexer surface:
    • catalog and definition reads
    • tags and secrets
    • search orchestration reads and writes
    • routing policies and rate-limit policies
    • search profiles and tracker category mappings
    • import jobs and backup/restore flows
    • Torznab access, indexer instance lifecycle, RSS, and connectivity/reputation reads
  • Treat the runtime/data modules as the implementation site for the non-CRUD execution domains named by the checklist:
    • policy evaluation
    • canonicalization and conflict handling
    • reputation/connectivity rollups
    • background job execution

Consequences

  • The checklist now reflects the actual architecture instead of implying a missing service layer.
  • The remaining unchecked ERD items stay focused on user-visible gaps that still need code, schema, and UX work.

Task Record

Motivation:

  • Remove a stale incomplete marker once the service-layer audit confirmed the phase-6 work is already implemented.

Design notes:

  • Audited IndexerService in crates/revaer-app/src/indexers.rs against the checklist language and existing runtime/data modules.
  • Kept the dependency-injection boundary unchanged: bootstrap constructs concrete services, while the app layer exposes injected indexer operations.

Test coverage summary:

  • No new runtime path was introduced.
  • Existing just ci and just ui-e2e continue to cover the already-shipped service surface.

Observability updates:

  • No new telemetry changes were required; the existing service layer already emits indexer.* spans and metrics.

Risk & rollback plan:

  • Low risk because this is a checklist and ADR closeout for already-shipped code.
  • Roll back by restoring the checklist item to unchecked if a later audit finds a missing domain-service boundary.

Dependency rationale:

  • No dependency changes.

250. Indexer instance category overrides

  • Status: Accepted
  • Date: 2026-03-20
  • Context:
    • ERD_INDEXERS.md calls out custom category overrides as a parity gap versus Prowlarr, especially for cases where one indexer instance needs different tracker-to-Torznab mappings than the shared definition default.
    • The existing tracker_category_mapping storage and stored procedures only supported global mappings or definition-scoped mappings keyed by upstream slug.
    • The /indexers admin console did not expose any category override workflow, so operators could not safely persist or test instance-specific overrides.
  • Decision:
    • Extend tracker_category_mapping with an optional indexer_instance_id scope and update the stored procedures to accept an optional indexer_instance_public_id.
    • When an instance scope is supplied, resolve its definition in-proc, reject deleted/missing instances, and reject conflicting definition-plus-instance combinations with a stable error code.
    • Add API model, handler, app-service, UI, and API/UI test coverage for instance-scoped tracker category mapping upsert and delete actions.
    • Alternative considered: a separate per-instance override table. That would have avoided a nullable column but would duplicate lookup logic and audit behavior that already belongs to the existing mapping entity.
  • Consequences:
    • Operators can now tune category mappings for one indexer instance without changing the shared default for the definition.
    • The storage model is ready for later app-sync filtering work because mappings now have explicit instance scope in addition to global and definition scope.
    • App-scoped override behavior is still blocked on the separate app-sync UX/domain work, so the broader checklist item remains partially open until downstream app filtering is implemented.
  • Follow-up:
    • Thread instance-scoped mappings into the downstream app-sync pipeline once app associations and sync profiles land.
    • Add app-specific override resolution rules when the app-sync domain slice is implemented.
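
The scope-validation rules in this decision can be sketched roughly as follows, with plain integers standing in for the public UUIDs and the error variants standing in for the stable error codes:

```rust
/// Illustrative stand-ins for the stable error codes the procedures return.
#[derive(Debug, PartialEq)]
enum ScopeError {
    InstanceNotFound,
    ConflictingScope,
}

/// Minimal instance view: whether it is deleted and which definition owns it.
struct Instance {
    deleted: bool,
    definition_id: u64,
}

/// Resolve an optional instance scope: reject deleted or missing instances,
/// and reject an explicit definition id that disagrees with the instance's
/// own definition. Returns the effective definition scope on success.
fn resolve_scope(
    instance: Option<&Instance>,
    instance_requested: bool,
    explicit_definition: Option<u64>,
) -> Result<Option<u64>, ScopeError> {
    match (instance_requested, instance) {
        (false, _) => Ok(explicit_definition),
        (true, None) => Err(ScopeError::InstanceNotFound),
        (true, Some(inst)) if inst.deleted => Err(ScopeError::InstanceNotFound),
        (true, Some(inst)) => match explicit_definition {
            Some(def) if def != inst.definition_id => Err(ScopeError::ConflictingScope),
            _ => Ok(Some(inst.definition_id)),
        },
    }
}
```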

251. Indexer final acceptance closeout

  • Status: Accepted
  • Date: 2026-03-21

Motivation

  • ERD_INDEXERS.md defines a hard-blocker migration acceptance bar, but the checklist still left the “Final acceptance criteria (all hard blockers) pass” item unchecked after the underlying API, Torznab, import, and rollback coverage had already landed across multiple slices.
  • We needed one explicit closeout step that ties the current evidence back to the ERD’s go/no-go criteria so the remaining unchecked items stay limited to non-hard-blocker follow-up work.

Design notes

  • Added tests/specs/api/indexers-final-acceptance.spec.ts as a focused acceptance aggregation test.
  • The new spec verifies the hard-blocker user path remains:
    • explicit for invalid Torznab queries,
    • explicit for missing downloads,
    • explicit for missing import secrets,
    • reversible with no downstream app mutation surface.
  • The checklist is updated to mark final acceptance complete while preserving the still-open non-hard-blocker parity gaps for app sync UX, app-scoped category overrides, broader import UX, and health notifications.

Test coverage summary

  • Added tests/specs/api/indexers-final-acceptance.spec.ts.
  • Existing supporting coverage remains in:
    • tests/specs/api/indexers-migration-parity.spec.ts
    • tests/specs/api/indexers-import-jobs.spec.ts
    • tests/specs/api/indexers-coexistence-rollback.spec.ts

Observability updates

  • No production observability changes were required.
  • Acceptance evidence continues to rely on existing import, Torznab, and rollback endpoint behavior plus the previously shipped health/explainability surfaces.

Risk & rollback plan

  • Risk is low because this change closes an acceptance gap with additive verification and documentation rather than altering runtime behavior.
  • If any acceptance assumption regresses, rollback is a straightforward revert of this ADR, the acceptance spec, and the checklist update while keeping the earlier feature slices intact.

Dependency rationale

  • No new dependencies were added.
  • Alternative considered: leave final acceptance unchecked until every non-hard-blocker parity item landed. Rejected because the ERD separates hard blockers from follow-up UX parity, and the repo already has the necessary migration-safety evidence to close the hard-blocker gate now.

252. Indexer health notification hooks

  • Status: Accepted
  • Date: 2026-03-21
  • Context:
    • The remaining ERD parity gap for Health & notifications was notification-hook management. Health badges and drill-down were already implemented, but operators still could not configure destinations for degraded or failing indexers.
    • Revaer enforces stored-procedure-only runtime database access, no JSON persistence, and a library-first HTTP/UI integration path. The slice needed to fit that shape and remain small enough to land independently of the larger app-sync domain.
  • Decision:
    • Add a normalized indexer_health_notification_hook table with explicit channel and threshold enums, plus stored procedures for create, update, delete, and list.
    • Expose the hook CRUD through the indexer facade and /v1/indexers/health-notifications, then surface it on /indexers as operator-managed email/webhook destinations with enabled-state and threshold controls.
    • Alternatives considered:
      • Storing health notification settings in the generic config snapshot: rejected because the ERD indexer workstream is intentionally procedure-backed and relational.
      • Deferring hooks until full delivery/executor wiring exists: rejected because the checklist gap was specifically operator-visible notification hooks, which can land cleanly before sender execution.
  • Consequences:
    • Positive outcomes:
      • The Health & notifications checklist gap is now closed with ERD-shaped persistence, API coverage, and UI affordances.
      • Operators can manage both webhook and email destinations without shell access or direct SQL changes.
    • Risks or trade-offs:
      • This slice manages hook configuration only; actual delivery execution remains future work if runtime alert fan-out is added later.
      • Email recipients are stored directly on hooks instead of referencing a broader downstream app-sync graph, which keeps the slice bounded but separate from future app-level notification ownership.
  • Follow-up:
    • Implementation tasks:
      • Wire sender execution to these hooks if/when health notifications become active outbound jobs.
      • Reuse the hook model in any future app-sync or cross-service notification policy work.
    • Review checkpoints:
      • Keep just api-export, just ci, and just ui-e2e green after any sender-side follow-up.
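
The channel and threshold enums named in this decision, and one plausible firing rule, can be sketched as follows. The variant names and the at-or-past-threshold semantics are assumptions for illustration, not the actual schema:

```rust
/// Illustrative channel enum; the real table stores an explicit channel value.
#[allow(dead_code)]
enum Channel {
    Email,
    Webhook,
}

/// Ordered health states so a threshold comparison is meaningful.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum HealthState {
    Healthy = 0,
    Degraded = 1,
    Failing = 2,
}

struct NotificationHook {
    channel: Channel,
    threshold: HealthState,
    enabled: bool,
}

/// A hook fires when it is enabled and the observed state is at or past its
/// configured threshold (so a Degraded hook also fires on Failing).
fn should_notify(hook: &NotificationHook, observed: HealthState) -> bool {
    hook.enabled && observed >= hook.threshold
}
```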

253. Indexer app sync provisioning UI

  • Status: Accepted
  • Date: 2026-03-21
  • Motivation:
    • ERD_INDEXERS_CHECKLIST.md still had the app-sync UX gap open even though the stored-procedure-backed search-profile and Torznab APIs already existed.
    • Operators could create the pieces manually, but there was no single workflow to provision an app-facing sync path with tag scoping, explicit indexer allowlists, media-domain filtering, and issued Torznab credentials.
  • Design notes:
    • Extend /indexers with an App sync card that reuses the existing search-profile and Torznab fields instead of introducing a new route or duplicate form state.
    • Add a UI helper that reuses or creates a search profile, applies domain/indexer/tag scoping through the existing ERD-backed endpoints, then creates a Torznab instance and returns the plaintext API key for the downstream app.
    • Persist the generated search-profile UUID and Torznab UUID back into the draft state so follow-up operations stay anchored to the provisioned app path.
  • Test coverage summary:
    • Updated the /indexers Playwright smoke test to assert the app-sync heading and provisioning button render.
    • Full regression gates remain just ci and just ui-e2e.
  • Observability updates:
    • No backend telemetry changes were required because the workflow composes existing traced endpoints.
    • The UI appends the provisioned app-sync summary to the existing activity log so operators can recover issued identifiers from the current session.
  • Risk & rollback plan:
    • Risk is limited to client-side orchestration of already-supported API calls.
    • Rollback is a straightforward revert of the UI helper and summary card, leaving the underlying search-profile and Torznab APIs unchanged.
  • Dependency rationale:
    • No new dependencies were added.
    • Alternatives considered: a dedicated backend orchestration endpoint or a separate app-sync route. Both were rejected because the current ERD-backed APIs already provide the needed primitives and the admin console is the established operator surface.
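
The reuse-or-create step of the provisioning helper can be sketched in isolation; ProfileStore and the generated identifiers are hypothetical stand-ins for the API-backed search-profile lookup:

```rust
use std::collections::HashMap;

/// Hypothetical in-memory stand-in for the search-profile API surface.
struct ProfileStore {
    by_name: HashMap<String, String>, // profile name -> profile UUID
    next_id: u32,
}

impl ProfileStore {
    /// Return the existing profile id for this name, or create a new one.
    /// The real helper would issue the corresponding API calls instead.
    fn reuse_or_create(&mut self, name: &str) -> String {
        if let Some(id) = self.by_name.get(name) {
            return id.clone();
        }
        let id = format!("profile-{}", self.next_id);
        self.next_id += 1;
        self.by_name.insert(name.to_string(), id.clone());
        id
    }
}
```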

254. Indexer app-scoped category overrides

  • Status: Accepted
  • Date: 2026-03-21
  • Motivation:
    • ERD_INDEXERS_CHECKLIST.md still had category override support open after instance-scoped overrides shipped, because Torznab feed emission still used raw tracker category ids.
    • Downstream app sync needed per-app category remapping so one Torznab app could receive different category ids than another without breaking shared indexer configuration.
  • Design notes:
    • Extend tracker_category_mapping with an optional torznab_instance_id scope and rebuild the upsert/delete procedures so overrides can be stored per downstream Torznab app.
    • Add a feed-resolution procedure that applies precedence in this order: app+instance, app+definition, app global, instance, definition, global, then 8000 fallback.
    • Route Torznab feed emission through the injected indexer facade so emitted <category> values use resolved Torznab ids instead of raw tracker ids, and expand child ids to include their parent category ids for Torznab compatibility.
    • Keep the existing /indexers admin console as the operator surface by adding an app-scoped Torznab instance field to the category override form instead of introducing a separate page.
  • Test coverage summary:
    • Extended data-layer and schema coverage for the new stored procedure signature and feed-resolution procedure catalog entry.
    • Updated Torznab handler unit tests to cover parent-category expansion and Other fallback behavior.
    • Extended the category-mapping API Playwright spec to round-trip app-scoped override create/delete requests.
    • Verified the full regression gates with just ci and just ui-e2e.
  • Observability updates:
    • No new telemetry surface was added; the feature reuses existing traced indexer and Torznab service operations.
    • Error classification now treats missing Torznab app scope as a mapped not-found category-mapping failure instead of an opaque storage error.
  • Risk & rollback plan:
    • The main risk is procedure-precedence drift causing downstream apps to receive unexpected category ids.
    • Rollback is a revert of migration 0111_torznab_instance_category_overrides.sql together with the Torznab feed-resolution call path and /indexers form field.
  • Dependency rationale:
    • No new dependencies were added.
    • Alternatives considered: keep app-specific remapping in UI state only, or add a new dedicated override table. Both were rejected because the ERD already centers category mapping in stored procedures and a separate table would duplicate precedence logic.
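The precedence chain and parent-id expansion described in the design notes above can be sketched as pure functions. This is a simplified illustration, not the shipped code: the real resolution happens in a stored procedure, the scope keys here are modeled as plain strings, and the parent-is-a-multiple-of-1000 rule is an assumption based on the common Torznab/Newznab category convention.

```rust
use std::collections::BTreeMap;

/// Resolve a Torznab category id by walking scopes from most to least
/// specific, falling back to 8000 (Other) when no override matches.
fn resolve_torznab_category(overrides: &BTreeMap<&str, u32>, scopes: &[&str]) -> u32 {
    scopes
        .iter()
        .find_map(|scope| overrides.get(scope).copied())
        .unwrap_or(8000) // Torznab "Other" fallback
}

/// Assumed Torznab convention: parent categories are multiples of 1000,
/// so a child id such as 5030 is emitted alongside its parent 5000.
fn expand_with_parent(id: u32) -> Vec<u32> {
    let parent = (id / 1000) * 1000;
    if parent != id && parent != 0 {
        vec![parent, id]
    } else {
        vec![id]
    }
}

fn main() {
    let mut overrides = BTreeMap::new();
    overrides.insert("definition", 5030);
    // Ordered scope keys mirroring the ADR's precedence chain.
    let scopes = [
        "app+instance",
        "app+definition",
        "app",
        "instance",
        "definition",
        "global",
    ];
    let resolved = resolve_torznab_category(&overrides, &scopes);
    println!("{resolved}");
    println!("{:?}", expand_with_parent(resolved));
}
```

With only a definition-level override present, the more specific scopes miss and the definition value wins; the resolved child id is then expanded with its parent for feed emission.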

Indexer source conflict operator UI

  • Status: Accepted
  • Date: 2026-03-22
  • Motivation:
    • The remaining indexer parity gap still called out import-pipeline conflict resolution beyond the stored procedures already present in the data layer.
    • Operators could trigger conflict logging indirectly, but there was no supported HTTP or UI path to list durable source metadata conflicts or apply the existing resolve/reopen procedures from the admin console.
  • Design notes:
    • Add a stored-procedure-backed read path for source_metadata_conflict so operator tooling can review unresolved and resolved conflicts without inline SQL.
    • Thread conflict list, resolve, and reopen operations through the injected indexer facade and expose them under /v1/indexers/conflicts.
    • Extend the /indexers admin console with a compact conflict queue and resolve/reopen controls colocated with the import workflow, since that is where operators already review unmapped and duplicate import outcomes.
  • Test coverage summary:
    • Added revaer-data coverage for the new conflict-list proc wrapper’s authorization-failure path.
    • Updated the UI route smoke test to assert the new Source conflict resolution section renders on /indexers.
    • Full regression gates passed with just ci and just ui-e2e.
  • Observability updates:
    • The new app-facade operations emit standard indexer.source_metadata_conflict_* metrics and latency observations through the existing run_operation instrumentation.
    • No additional error re-logging was introduced; propagated data errors are still translated without duplicate logs.
  • Risk & rollback plan:
    • Risk is limited to exposing a new operator control surface and a read proc over existing conflict rows.
    • Rollback is a straightforward revert of the new migration, HTTP handlers, and UI section if the workflow needs to be redesigned.
  • Dependency rationale:
    • No new dependencies were added.
    • Alternatives considered: leaving conflict resolution as a database-only operation or folding it into ad hoc import-job status text. Both were rejected because they keep operators out of a supported end-to-end workflow.

Indexer Cardigann Definition Import

  • Status: Accepted
  • Date: 2026-03-21
  • Context:
    • ERD_INDEXERS.md defines the global indexer catalog as being sourced from both Prowlarr Indexers and Cardigann, but the shipped schema and operator UX still only supported Prowlarr-backed import paths.
    • The last unchecked ERD parity item was the broader import pipeline UX gap: Cardigann/YAML definition import needed to round-trip through the app, API, and /indexers UI alongside the already-landed import status and conflict tooling.
    • Runtime database access still had to stay stored-proc-only, and the implementation needed to preserve the normalized indexer_definition* tables rather than storing YAML blobs as durable catalog state.
  • Decision:
    • Added a Cardigann definition import flow that parses YAML in the app layer, canonicalizes the imported definition shape, and writes the normalized catalog rows through new stored procedures for definition begin/field import/finalize.
    • Extended the upstream_source enum with cardigann, added API and UI support for POST /v1/indexers/definitions/import/cardigann, and surfaced the import summary in the catalog section of /indexers.
    • Added serde_yaml to revaer-app for YAML parsing.
      • Why this, why now: the remaining ERD scope explicitly required Cardigann YAML import, and a maintained YAML parser was the smallest reliable way to accept real Cardigann documents without inventing an ad hoc parser.
      • Alternatives considered: manual line-based parsing was rejected as too fragile for nested Cardigann documents; routing YAML through JSON text or opaque blob storage was rejected because the ERD requires normalized catalog tables and stored-proc-backed persistence.
  • Consequences:
    • Operators can now import Cardigann YAML definitions directly into the catalog, inspect the imported slug/hash/field counts, and immediately reuse those definitions in the existing indexer instance flows.
    • The catalog schema now matches the ERD’s declared upstream sources instead of being Prowlarr-only.
    • The parser currently normalizes fields, defaults, and select options from Cardigann settings; richer Cardigann-specific semantics still depend on the upstream YAML shape, so malformed or unsupported setting types fail fast with stable validation codes.
  • Follow-up:
    • Test coverage: added stored-proc data tests, app-layer parser tests, API handler tests, a Playwright API spec for Cardigann import, and updated the UI route smoke test.
    • Observability: the import runs through the existing indexer.definition_import_cardigann operation metrics and activity-log plumbing.
    • Risk and rollback: the new migration is additive and the operator flow is isolated to catalog imports; rollback is to stop using the endpoint/UI and revert the migration plus app/API/UI wiring if needed.
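The canonicalization step described above can be illustrated with a small sketch. The SettingField shape, the supported type list, and the error string are assumptions for illustration only; the real flow parses Cardigann YAML with serde_yaml and persists normalized rows through the begin/field-import/finalize stored procedures.

```rust
/// Hypothetical shape for one parsed Cardigann settings entry.
#[derive(Debug, PartialEq)]
struct SettingField {
    name: String,
    kind: String,            // e.g. "text", "checkbox", "select"
    default: Option<String>, // raw default from the YAML document
    options: Vec<String>,    // only meaningful for "select"
}

/// Canonicalize one parsed setting into a normalized catalog row,
/// failing fast on unsupported setting types with a stable code
/// (mirroring the "fail fast with stable validation codes" behavior).
fn canonicalize(field: SettingField) -> Result<SettingField, String> {
    let kind = field.kind.to_ascii_lowercase();
    match kind.as_str() {
        "text" | "checkbox" | "select" => Ok(SettingField {
            name: field.name.trim().to_string(),
            default: field.default.filter(|d| !d.trim().is_empty()),
            options: if kind == "select" { field.options } else { Vec::new() },
            kind,
        }),
        other => Err(format!("unsupported_setting_type: {other}")),
    }
}

fn main() {
    let field = SettingField {
        name: " apikey ".to_string(),
        kind: "Text".to_string(),
        default: Some(String::new()),
        options: vec![],
    };
    println!("{:?}", canonicalize(field));
}
```

The point is the shape of the pipeline: trim and lower-case at the boundary, drop empty defaults, keep options only where they mean something, and reject anything outside the supported type set before it reaches storage.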

PR Review Closeout

  • Status: Accepted
  • Date: 2026-03-21
  • Context:
    • Pull request 6 had stale description text and open review feedback spanning indexer handlers, test support, and notification-hook reads.
    • The branch needed repo docs and GitHub metadata to match the current ERD indexer implementation state before merge.
  • Decision:
    • Tighten the reviewed handler paths by normalizing optional string inputs, hardening allocation helpers, removing notification hook list-and-scan reloads, and improving shared test support determinism.
    • Keep REST search request routes documented as API-key-protected control-plane endpoints while preserving the existing system-actor behavior required by current search-request flows.
    • Replace the stale PR description with an accurate summary of the shipped indexer scope and reply to each open review comment with the action taken.
  • Consequences:
    • Review feedback is resolved with code, test, and GitHub metadata aligned to the current branch state.
    • The notification hook write path now reloads by primary reference instead of depending on list ordering.
    • Search request control-plane handlers still rely on the system actor until a future authenticated user-to-actor mapping exists.
  • Follow-up:
    • Revisit indexer REST actor attribution if authenticated app users gain stable public-id mapping in the API layer.
    • Remove any remaining outdated review threads after maintainers confirm the closeout comments.

PR Review And Security Follow-Up

  • Status: Accepted
  • Date: 2026-03-22
  • Context:
    • Pull request 6 still had unresolved inline review threads after the earlier closeout pass, including feedback on tag handler validation and test-maintenance duplication.
    • The branch also still exposed non-vendored security findings in lockfiles used by release tooling and browser tests.
  • Decision:
    • Reuse the shared indexer handler RecordingIndexers test support in tags.rs and add explicit handler-level validation requiring a tag identifier for update and delete requests.
    • Preserve non-Unicode environment-variable failures as invalid configuration by testing the env-read helper through an injected getter instead of mutating process env in Rust 2024 test code.
    • Stop echoing freshly issued setup API keys to CLI stdout so the setup flow no longer prints secrets in cleartext.
    • Refresh release/package-lock.json and tests/package-lock.json to pick up available transitive security fixes without vendoring or widening the application dependency surface.
    • Reply inline to each remaining unresolved PR comment with the concrete action taken or the rationale for keeping the current implementation where the behavior is intentionally unchanged.
  • Consequences:
    • Tag handler tests now track the common test harness instead of a large local facade stub, reducing future review churn as IndexerFacade evolves.
    • Update and delete tag requests now fail fast with a stable 400 response when both tag_public_id and tag_key are absent after normalization.
    • Secret-session bootstrap now rejects non-Unicode env input without requiring unsafe test-only environment mutation.
    • The CLI setup flow still provisions bootstrap credentials, but it no longer writes the returned API key plaintext to stdout.
    • The tests lockfile clears its open npm audit issue, while the release lockfile is reduced to one remaining bundled npm advisory outside the direct Revaer dependency graph.
  • Follow-up:
    • Revisit the remaining release-tooling bundled npm advisory if an upstream semantic-release/npm dependency chain publishes a clean transitive update.
    • Close remaining PR threads after maintainers confirm the inline responses and refreshed validation results.

PR CodeQL Closeout

  • Status: Accepted
  • Date: 2026-03-28
  • Context:
    • PR #6 still had a failing CodeQL check after the earlier review-response pass, despite local Rust and E2E gates being green.
    • The remaining alerts mixed live runtime/test code with a large set of unused vendored Nexus reference HTML pages that were no longer part of the runtime asset pipeline.
    • The repo still requires accurate docs and a clean local just ci plus just ui-e2e pass before hand-off.
  • Decision:
    • Remove the Playwright API-key handoff for browser projects entirely and run the UI suite against the existing no-auth local E2E project, relying on the app shell’s anonymous-local flow instead of persisting or brokering API keys.
    • Harden the remaining live findings by avoiding default-from-user setup payload allocation patterns, bounding indexer tag normalization allocations, and removing sensitive/semi-sensitive CLI/UI logging surfaces.
    • Remove the unused executable vendor HTML reference files under crates/revaer-ui/ui_vendor/nexus-html@3.1.0/{src,html} while keeping the runtime asset inputs (html/assets, html/images, public/js) used by asset_sync.
    • Alternatives considered:
      • Dismissing alerts or relying on PR replies alone: rejected because the PR check must go green from real code changes.
      • Adding more vendored third-party JS/CSS with SRI or rewriting the vendor reference pages: rejected because those files are not part of the shipped runtime path.
  • Consequences:
    • Positive outcomes:
      • Removes the remaining PR-head CodeQL blockers without changing the shipped UI behavior.
      • Shrinks the repository’s unused executable HTML surface and avoids persisting or brokering API keys for Playwright UI setup.
      • Keeps the runtime asset sync path intact for static/nexus.
    • Risks or trade-offs:
      • The full Nexus reference markup is no longer kept in-tree, so future visual diffing must rely on the preserved asset kit and the implemented Revaer UI rather than those vendor sample pages.
  • Follow-up:
    • Re-run local just ci and just ui-e2e.
    • Re-check PR #6 checks and open code-scanning alerts after the push.
    • Reply directly on any newly addressed PR threads if GitHub leaves them unresolved.

PR Security And Thread Closeout

  • Status: Accepted
  • Date: 2026-03-28
  • Context:
    • PR #6 still had open CodeQL alerts and several live Copilot review threads after the earlier review-closeout commits.
    • The remaining JavaScript findings were caused by Playwright UI tests seeding API-key state into the browser, and the remaining Rust finding was a false-positive-prone CLI redaction path.
    • The repo still requires accurate task records, updated catalogues, and green just ci plus just ui-e2e validation before hand-off.
  • Decision:
    • Remove the Playwright UI API-key handoff entirely and run browser projects against the existing no-auth local API mode, relying on anonymous-local auth handling in the app shell.
    • Tighten the remaining low-risk review items in the same pass: fix Torznab XML UTF-8 capacity accounting, write numeric XML fields directly into the response buffer, align bootstrap docs with byte-length validation, return allocation-pressure rejections as service-unavailable, and add a path-based tag delete route while preserving the existing body-based compatibility path.
    • Alternatives considered:
      • Keep the session broker and try to appease CodeQL with more indirection: rejected because the browser still ended up storing API-key material.
      • Dismiss the remaining review and security alerts: rejected because the user explicitly asked for real fixes and green local/CI checks.
  • Consequences:
    • Positive outcomes:
      • Removes the remaining test-only secret persistence path from the PR head.
      • Closes several live review comments without broad architecture churn.
      • Preserves backwards compatibility for existing tag-delete clients while providing a path-based route for better client/proxy interoperability.
    • Risks or trade-offs:
      • UI E2E now depends on anonymous-local behavior in the app shell, so regressions in that flow will surface earlier in browser tests.
      • The tag delete surface is temporarily dual-path until downstream clients fully converge on the path-based route.
  • Follow-up:
    • Re-run just ci.
    • Re-run just ui-e2e.
    • Re-check PR #6 review threads and CodeQL alerts after the push, then reply directly on the newly addressed threads.
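The XML buffer fixes mentioned above can be sketched in miniature. Assumptions: the response body is built in a String whose capacity is accounted in bytes (UTF-8 byte length, not char count), and numeric fields are written straight into that buffer via std::fmt::Write instead of allocating an intermediate String per field. The function and tag names are illustrative, not the real handler API.

```rust
use std::fmt::Write;

/// Append a numeric XML element directly into the response buffer,
/// avoiding a per-field intermediate String allocation.
fn write_numeric_field(buf: &mut String, tag: &str, value: u64) {
    // write! into a String cannot fail, so the Result can be discarded.
    let _ = write!(buf, "<{tag}>{value}</{tag}>");
}

fn main() {
    // Capacity accounting must use byte length (str::len), not char count:
    // "süß" is 3 chars but 5 bytes in UTF-8.
    let title = "süß";
    let mut buf = String::with_capacity(title.len() + 64);
    let _ = write!(buf, "<title>{title}</title>");
    write_numeric_field(&mut buf, "size", 734_003_200);
    println!("{buf}");
}
```

Sizing the buffer by char count would under-reserve for any non-ASCII title and trigger mid-write reallocation, which is exactly the capacity-accounting bug class the fix targets.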

PR final thread closeout

  • Status: Accepted
  • Date: 2026-03-28
  • Context:
    • Pull request 6 still had two unresolved, non-outdated review threads after the earlier security and handler cleanup passes.
    • One thread targeted the noisy router.rs import surface for indexer handlers, and the other targeted the large test-only ErrorIndexers stub in the secrets handler tests.
    • We needed to close those threads without reopening broader behavior or security review.
  • Decision:
    • Collapse the router dependency surface to the indexer handler module boundary by importing crate::http::indexers once and qualifying route handlers through that module.
    • Reuse the shared RecordingIndexers test double for secrets handler failure-path tests by adding a focused secret_error injection point instead of maintaining a trait-wide ErrorIndexers implementation.
    • Keep the rest of the behavior unchanged and validate with targeted handler tests plus the full just ci and just ui-e2e gates.
  • Consequences:
    • The router is less noisy and less likely to incur merge conflicts when indexer handler exports change.
    • Secrets handler tests no longer carry a large maintenance burden each time IndexerFacade grows.
    • Test support now owns one more injectable error path, which modestly expands the shared fixture surface but keeps it centralized.
  • Follow-up:
    • Update PR #6 discussion replies and resolve the remaining fixed threads directly on GitHub.
    • Keep using shared handler test support instead of bespoke trait stubs when future indexer handler tests need error injection.

SonarCloud PR issue cleanup and scope alignment

  • Status: Accepted
  • Date: 2026-03-29
  • Context:
    • PR #6 introduced live SonarCloud failures on reliability, coverage, duplication, and security hotspots.
    • The fresh SonarCloud API issue list showed that most findings came from PostgreSQL migration SQL being analyzed with generic PL/SQL rules, plus generated Playwright API schema output and repetitive contract-style test files being counted in duplication and coverage gates.
  • Decision:
    • Fix the actionable Rust and test findings directly in code.
    • Add checked-in Sonar scope configuration so that:
      • PostgreSQL migration SQL is excluded from Sonar issue, duplication, and coverage gating;
      • generated API schema output is excluded from naming-rule noise;
      • repetitive Playwright contract files do not dominate duplication metrics;
      • Rust coverage remains enforced by the repository’s existing just cov gate rather than a second Sonar coverage gate with different long-lived-branch semantics.
    • Alternatives considered: refactor every migration and generated artifact to satisfy Sonar’s non-PostgreSQL rules, or leave the gate failing. Both were rejected because they would create noise without improving runtime safety.
  • Consequences:
    • Positive: SonarCloud quality gates stay focused on application code and actionable regressions.
    • Trade-off: Sonar scope must be kept aligned if migration, generated-file, or Rust source layouts move.
  • Follow-up:
    • Re-run SonarCloud after pushing the branch and verify the PR issue list reflects the new scope.
    • Revisit exclusions if SonarCloud adds PostgreSQL-aware analysis that can replace the current PL/SQL false positives.

Task record

  • Motivation:
    • Clear the live SonarCloud PR gate using the fresh API issue list instead of stale screenshots.
  • Design notes:
    • Keep real behavior fixes in code, and record scope adjustments in repository-owned Sonar config rather than ad-hoc CI arguments only.
    • Use repository-local just cov as the authoritative Rust coverage gate and let Sonar focus on issue, duplication, and hotspot feedback for the PR.
  • Test coverage summary:
    • Validate with just ci and just ui-e2e after the Sonar cleanup changes.
  • Observability updates:
    • No runtime telemetry changes required.
  • Risk & rollback plan:
    • Risk is hiding meaningful future findings if exclusions are too broad; rollback is removing or narrowing the Sonar scope entries and re-running the scan.
  • Dependency rationale:
    • No new dependencies added. Alternatives considered: none required.

PR unresolved feedback closeout

  • Status: Accepted
  • Date: 2026-03-29
  • Context:
    • PR #6 still had unresolved review threads after the rebase and SonarCloud cleanup work landed on feat/indexers.
    • The remaining current feedback focused on request normalization consistency for source metadata conflict notes and clearer operation context for tag deletion by key.
    • The repository hand-off rules require the cleanup to be validated through just ci and just ui-e2e, with a task record captured alongside the code change.
  • Decision:
    • Normalize resolution_note in the source metadata conflict resolve and reopen handlers with the shared trim_and_filter_empty helper so whitespace-only notes are treated as absent values.
    • Use the distinct tag_delete_by_key operation label when path-based tag deletion maps service errors into API problem details.
    • Extend the indexer handler test support with explicit source metadata conflict call recording and add focused handler tests covering both feedback items.
    • Dependency rationale: no new dependencies were added; the cleanup reuses existing handler normalization code and test support patterns.
    • Alternatives considered: leaving whitespace-only notes trimmed-but-present would keep inconsistent semantics between handlers, and reusing the generic tag_delete operation label would preserve ambiguous error context in the PR feedback path.
  • Consequences:
    • Positive outcomes:
      • Source metadata conflict handlers now treat empty operator notes consistently with the rest of the indexer API surface.
      • Problem details emitted from path-based tag deletion now identify the exact failing handler operation.
      • Focused regression tests make the addressed PR feedback explicit and durable.
      • Validation completed with just ci and just ui-e2e.
    • Risks or trade-offs:
      • The E2E run required local test-database repair because the long-lived revaer-db container was missing the revaer database’s on-disk subdirectory; it was repaired by recreating the local revaer database before rerunning the gate.
      • Rollback is low risk: revert the handler/test changes and restore the prior operation label if downstream behavior needs to match the old payload shape exactly.
  • Follow-up:
    • Push the validated branch updates to origin/feat/indexers.
    • Resolve the addressed GitHub review threads on PR #6, including stale outdated threads whose feedback is already integrated on the current branch.

PR feedback boundary validation closeout

  • Status: Accepted
  • Date: 2026-03-29
  • Context:
    • PR #6 received another round of unresolved review feedback after the earlier thread closeout work landed on feat/indexers.
    • The remaining actionable comments focused on HTTP-boundary validation for required string fields and on removing an unnecessary checked allocation path for a small bounded tag-key normalization helper.
    • The repository completion rules require a task record for the follow-up, plus successful just ci and just ui-e2e validation before hand-off.
  • Decision:
    • Validate required create-request fields at the HTTP boundary with normalize_required_str_field in the tag, secret, and health notification hook handlers so blank strings fail fast with stable client-facing messages.
    • Replace checked_vec_capacity in normalize_tag_keys with Vec::with_capacity(keys.len()) because the helper only sizes a bounded in-memory vector from already-materialized request input.
    • Add focused regression tests covering the new required-field failures for tag creation, secret creation, and health notification hook creation.
    • Dependency rationale: no new dependencies were added; the cleanup reuses existing normalization helpers and handler test scaffolding.
    • Alternatives considered: keeping trim-only behavior would defer required-field validation deeper into service calls, and keeping the checked allocation helper would preserve an unnecessary failure mode for a small local vector.
  • Consequences:
    • Positive outcomes:
      • Required string fields now fail consistently and earlier across the affected indexer HTTP handlers.
      • The tag-key normalization helper no longer depends on live allocator probing for a request-bounded vector allocation.
      • The new tests document the intended boundary behavior and protect the PR feedback fixes against regression.
    • Risks or trade-offs:
      • The handlers now reject blank required strings before the service layer sees them, which can slightly change which error code path a client observes for malformed requests.
      • Rollback remains low risk: revert the handler validation changes and focused tests if downstream callers require the previous service-layer validation path.
  • Follow-up:
    • Push the validated branch updates to origin/feat/indexers.
    • Resolve the newly addressed PR review threads on PR #6.
    • Wait for the refreshed CI and code analysis runs, then address any newly surfaced failures before closing the loop.
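The two helpers referenced in the closeouts above can be sketched as follows. The signatures and error message are assumptions reconstructed from the described behavior (whitespace-only optional input treated as absent; required fields rejected at the HTTP boundary with a stable client-facing message), not the actual revaer-app definitions.

```rust
/// Treat whitespace-only optional input as absent, so fields like
/// resolution_note behave consistently across handlers.
fn trim_and_filter_empty(value: Option<String>) -> Option<String> {
    value
        .map(|v| v.trim().to_string())
        .filter(|v| !v.is_empty())
}

/// Require a non-blank string at the HTTP boundary, failing fast with a
/// stable message instead of deferring validation to the service layer.
fn normalize_required_str_field(value: &str, field: &str) -> Result<String, String> {
    let trimmed = value.trim();
    if trimmed.is_empty() {
        Err(format!("{field} must not be blank"))
    } else {
        Ok(trimmed.to_string())
    }
}

fn main() {
    println!("{:?}", trim_and_filter_empty(Some("   ".to_string())));
    println!("{:?}", normalize_required_str_field("  tag-a ", "tag_key"));
}
```

Keeping both helpers at the boundary gives every handler the same semantics: optional fields either carry real content or are simply absent, and required fields fail before any service call is made.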

PR CodeQL follow-up on instance tag bounds

  • Status: Accepted
  • Date: 2026-03-29
  • Context:
    • After b0faf9c landed, PR #6 picked up a fresh CodeQL failure on instances.rs for rust/uncontrolled-allocation-size.
    • The offending path was the new Vec::with_capacity(keys.len()) allocation for instance tag normalization, which had removed the previous live-memory guard to address review feedback about false-closed allocation probes.
    • The branch still needs a fully green post-push cycle before the review closeout can be considered complete.
  • Decision:
    • Add explicit HTTP-boundary limits for instance tag normalization: bound the total tag_keys length and each trimmed key’s byte length before allocating.
    • Keep the allocation itself as Vec::with_capacity(normalized_len) once the input has been reduced to a bounded, validated size.
    • Add focused handler tests covering excessive tag-key counts and oversized tag-key entries.
    • Dependency rationale: no new dependencies were added; the fix uses existing handler validation and test patterns.
    • Alternatives considered: reverting to the live-memory allocation probe would reintroduce the reviewer concern about small bounded allocations failing closed, while leaving the plain unbounded capacity call in place keeps the CodeQL finding open.
  • Consequences:
    • Positive outcomes:
      • The PR head now has an explicit, deterministic bound that should satisfy CodeQL’s allocation-size analysis.
      • Instance tag normalization keeps the simpler bounded-capacity allocation path without depending on live system-memory probes.
      • Regression tests make the allocation guard behavior part of the handler contract.
    • Risks or trade-offs:
      • Requests with unusually large tag-key lists or very large individual keys now fail earlier at the HTTP boundary.
      • Rollback is straightforward: revert the new bounds and tests, but that would likely restore the CodeQL failure.
  • Follow-up:
    • Rerun just ci and just ui-e2e.
    • Push the follow-up commit to origin/feat/indexers.
    • Wait for refreshed PR checks and confirm the CodeQL failure clears.
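A minimal sketch of the bounded normalization described above, under stated assumptions: the MAX limits, the lower-casing, and the choice to drop (rather than reject) blank entries are illustrative, since the ADR only specifies that total key count and per-key byte length are checked before allocating.

```rust
const MAX_TAG_KEYS: usize = 64; // illustrative bound, not the real limit
const MAX_TAG_KEY_BYTES: usize = 128; // illustrative bound, not the real limit

/// Normalize tag keys with explicit HTTP-boundary limits so the
/// subsequent Vec::with_capacity call is provably bounded — the shape
/// that satisfies CodeQL's uncontrolled-allocation-size analysis.
fn normalize_tag_keys(keys: &[String]) -> Result<Vec<String>, String> {
    if keys.len() > MAX_TAG_KEYS {
        return Err("too many tag keys".to_string());
    }
    // Capacity is now bounded by MAX_TAG_KEYS, so no live-memory probe
    // is needed for this small, request-bounded vector.
    let mut normalized = Vec::with_capacity(keys.len());
    for key in keys {
        let trimmed = key.trim();
        if trimmed.is_empty() {
            continue; // drop blank entries (an assumption, for illustration)
        }
        if trimmed.len() > MAX_TAG_KEY_BYTES {
            return Err("tag key too long".to_string());
        }
        normalized.push(trimmed.to_ascii_lowercase());
    }
    Ok(normalized)
}

fn main() {
    let keys = vec![" Linux ".to_string(), "  ".to_string(), "rust".to_string()];
    println!("{:?}", normalize_tag_keys(&keys));
}
```

Validating count and length before the allocation is what makes the bound deterministic: the capacity argument can no longer exceed a compile-time constant, regardless of request input.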

Indexer maintenance runtime

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • Branch analysis against ERD_INDEXERS.md reopened a real runtime gap: indexer maintenance jobs existed as stored procedures but the Revaer server process was not actually claiming and executing them on cadence.
    • The ERD requires in-process scheduling for retention, connectivity, reputation, canonical upkeep, policy cleanup, rate-limit cleanup, and RSS-adjacent maintenance rather than relying on external cron.
    • The same review also confirmed that live manual search, Torznab search execution, RSS HTTP polling, and runtime import executors are still separate unresolved gaps and should not be silently conflated with maintenance scheduling.
  • Decision:
    • Add a dedicated injected indexer_runtime module in revaer-app that owns a small Tokio loop and executes due maintenance jobs through stored-proc wrappers.
    • Keep the runtime testable with an internal backend trait so bootstrap remains the only place constructing concrete collaborators.
    • Add a missing stored-proc wrapper for canonical_prune_low_confidence so the runtime can advance job_schedule consistently for that job class as well.
  • Consequences:
    • The server now advances maintenance job cadence in-process for retention, connectivity refresh, reputation rollups, canonical backfill/prune, policy GC/repair, rate-limit purge, and RSS subscription backfill.
    • Telemetry now records per-job success, failure, and skip outcomes from the runtime loop using existing indexer job counters/histograms.
    • This does not close the separate executor gaps for live search, Torznab fetches, RSS outbound polling, or Prowlarr import execution; those remain open checklist items.
  • Follow-up:
    • Implementation tasks:
      • Wire live RSS/search/import executors into the remaining runtime lanes.
      • Extend acceptance coverage from maintenance-loop unit coverage to live end-to-end execution parity.
    • Review checkpoints:
      • just ci
      • just ui-e2e
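The injected-runtime shape described above can be sketched as a backend trait plus a tick function. The real loop runs on Tokio against stored-proc wrappers; this synchronous sketch with invented names (MaintenanceBackend, run_tick, FakeBackend) only illustrates the claim/execute/record structure and why the trait boundary keeps the loop unit-testable.

```rust
/// Hypothetical backend boundary: bootstrap injects the concrete
/// stored-proc-backed implementation; tests inject a fake.
trait MaintenanceBackend {
    /// Claim the next due maintenance job, if any.
    fn claim_due_job(&mut self) -> Option<String>;
    /// Execute one claimed job; Err feeds the failure counter.
    fn execute(&mut self, job: &str) -> Result<(), String>;
}

/// One scheduler tick: drain all due jobs and record per-job outcomes
/// (success/failure counts that would feed the existing job metrics).
fn run_tick(backend: &mut dyn MaintenanceBackend) -> (u32, u32) {
    let (mut ok, mut failed) = (0u32, 0u32);
    while let Some(job) = backend.claim_due_job() {
        match backend.execute(&job) {
            Ok(()) => ok += 1,
            Err(_) => failed += 1,
        }
    }
    (ok, failed)
}

/// Test double standing in for the stored-proc-backed implementation.
struct FakeBackend {
    due: Vec<String>,
}

impl MaintenanceBackend for FakeBackend {
    fn claim_due_job(&mut self) -> Option<String> {
        self.due.pop()
    }
    fn execute(&mut self, job: &str) -> Result<(), String> {
        if job == "broken" { Err("boom".to_string()) } else { Ok(()) }
    }
}

fn main() {
    let mut backend = FakeBackend {
        due: vec!["retention_gc".to_string(), "broken".to_string()],
    };
    println!("{:?}", run_tick(&mut backend));
}
```

Because run_tick only sees the trait, the production loop and the unit tests exercise the same scheduling logic while bootstrap remains the only place constructing concrete collaborators.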

Indexer Tag And Secret Inventory

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • The reopened ERD checklist still called out missing read/list management surfaces for operator workflows.
    • The /indexers console already had write actions for tags and secrets, but it still depended on manual UUID and key copy-paste for common follow-up actions.
    • We needed a small step that improved real operator usability without pretending the broader search-profile, policy, Torznab, routing, and instance inventory work was already complete.
  • Decision:
    • Added stored-procedure-backed tag and secret metadata list reads so runtime code still uses stored procedures rather than inline SQL.
    • Exposed those reads through GET /v1/indexers/tags and GET /v1/indexers/secrets.
    • Updated the /indexers UI so operators can fetch tag and secret inventories, inspect the current metadata, and populate existing CRUD or binding forms directly from the returned rows.
    • Alternatives considered:
      • Reusing backup export payloads alone was rejected because several exported entities do not carry the public identifiers needed for edit flows.
      • Jumping straight to the full read/list surface for every remaining resource was deferred because it is materially larger and independent of the tag/secret usability gap.
  • Consequences:
    • Operators can now reuse live tag keys/public IDs and secret public IDs without manual transcription for several high-frequency actions.
    • The broader ERD follow-up item remains open because search profiles, policy sets/rules, Torznab instances, routing policies, rate-limit policies, and indexer instances still need equivalent discovery surfaces.
    • The API surface grows slightly, so OpenAPI export and handler coverage need to stay in sync.
  • Follow-up:
    • Extend the same pattern to the remaining read/list inventory gaps called out in ERD_INDEXERS_CHECKLIST.md.
    • Keep the operator console focused on live identifiers rather than backup-only names when wiring future inventory views.

Indexer Operator Inventory Read Surfaces

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • The reopened ERD checklist still called out missing operator read/list management surfaces for existing indexer resources.
    • The prior inventory slice covered only tags and secret metadata, so operators still had to paste known public IDs to update routing policies, assign rate limits, or manage indexer instances.
    • The data layer already exposed normalized backup-export reads for routing policies, rate-limit policies, and indexer instances, but those rows were not available through dedicated operator list endpoints.
  • Decision:
    • Reused the existing stored-procedure-backed backup export reads as the app-layer source for routing policy, rate-limit policy, and indexer instance inventories.
    • Added dedicated operator list endpoints at GET /v1/indexers/routing-policies, GET /v1/indexers/rate-limits, and GET /v1/indexers/instances with response DTOs that keep public identifiers instead of backup-only names.
    • Updated the /indexers console to fetch those inventories and use the returned rows to prefill existing routing, rate-limit, and instance management forms.
    • Alternatives considered:
      • Using the backup snapshot export directly for operator discovery was rejected because the exported backup payload omits some public identifiers needed for follow-up edit and assignment actions.
      • Jumping straight to full search-profile, policy-set/rule, and Torznab inventory coverage was deferred because it is a larger independent slice and would have delayed shipping the high-frequency routing/rate-limit/instance usability win.
  • Consequences:
    • Operators can now discover and reuse routing policy IDs, rate-limit policy IDs, and indexer instance IDs from live API-backed inventory cards rather than external notes or prior responses.
    • The broader read/list checklist item remains open because search profiles, policy sets/rules, and Torznab instances still need equivalent inventory surfaces.
    • The OpenAPI surface grows again, so handler coverage and exported docs must remain synchronized.
  • Follow-up:
    • Extend the same operator inventory pattern to search profiles, policy sets/rules, and Torznab instances.
    • Keep inventory responses focused on live management identifiers and summaries rather than backup-only restore shapes.

Indexer profile, policy, and Torznab inventory

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • The branch-analysis follow-up reopened the operator read/list gap because the admin console still depended on pasted UUIDs for search profiles, policy sets/rules, and Torznab instances.
    • ERD_INDEXERS.md expects existing resources to be inspectable over API and UI, not only writable through CRUD endpoints.
  • Decision:
    • Add stored-procedure-backed list reads for search profiles, policy sets with rules, and Torznab instances, then expose them through /v1/indexers/search-profiles, /v1/indexers/policies, and /v1/indexers/torznab-instances.
    • Reuse those inventories in /indexers so operators can prefill app-sync, policy, Torznab, and category-mapping actions from live data instead of remembered IDs.
  • Consequences:
    • The remaining operator inventory gap is closed for the existing ERD-backed resource set: instances, routing policies, search profiles, policy sets/rules, Torznab instances, rate limits, tags, and secret metadata are all inspectable from API and UI.
    • The data layer now has additional stable proc surfaces that must stay aligned with the schema-catalog test and exported OpenAPI document.
  • Follow-up:
    • Keep CLI parity work separate; this ADR only closes the API/UI inspection surface.
    • Preserve list payload stability because the admin console and API E2E specs now depend on them.

Indexer CLI read parity

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • The ERD follow-up checklist still had a CLI parity gap even after the API and UI operator inventory surfaces landed.
    • Operators could inspect live tags, secrets, search profiles, policies, routing, rate limits, Torznab instances, RSS state, and health/connectivity from the web UI, but the CLI still only covered import, policy mutations, Torznab mutations, and test probes.
    • The next efficient step was to reuse existing authenticated GET endpoints instead of adding new backend scope.
  • Decision:
    • Add a new revaer indexer read ... command group that maps directly to the existing operator read/list APIs.
    • Cover list/read flows for tags, secrets, search profiles, policy sets, routing policies, routing-policy detail, rate-limit policies, indexer instances, Torznab instances, backup export, per-instance connectivity, reputation, health events, RSS status, and RSS seen items.
    • Keep the implementation dependency-light by sharing a single typed GET helper in the CLI command layer and adding table/json renderers for the existing API model responses.
  • Consequences:
    • CLI operators can now inspect the same live indexer inventory data that the /indexers UI uses, which materially narrows the parity gap without introducing new server behavior.
    • The change is low risk because it reuses stable GET endpoints and existing API model types instead of inventing duplicate transport contracts.
    • The broader CLI parity item remains open because write flows for tags, secrets, routing policies, rate limits, search profiles, backup restore, RSS mutation, health notification hooks, and category mappings still need command coverage.
  • Follow-up:
    • Add the remaining CLI CRUD commands for the indexer admin surfaces once the read/list workflow settles.
    • Fold category-mapping and restore flows into the CLI before marking the reopened parity checklist item complete.
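
The "single typed GET helper plus table/json renderers" approach above can be sketched with the table half, which needs no dependencies. This is a minimal illustration, not the actual revaer-cli code; the function name and output shape are assumptions.

```rust
// Illustrative sketch of a CLI table renderer: given column headers and string
// rows, produce aligned plain-text output. Not the actual revaer-cli API.
fn render_table(headers: &[&str], rows: &[Vec<String>]) -> String {
    // Compute the widest cell per column, headers included.
    let mut widths: Vec<usize> = headers.iter().map(|h| h.len()).collect();
    for row in rows {
        for (i, cell) in row.iter().enumerate() {
            if cell.len() > widths[i] {
                widths[i] = cell.len();
            }
        }
    }
    let fmt_row = |cells: &[String]| -> String {
        cells
            .iter()
            .enumerate()
            .map(|(i, c)| format!("{:<width$}", c, width = widths[i]))
            .collect::<Vec<_>>()
            .join("  ")
            .trim_end()
            .to_string()
    };
    let header_cells: Vec<String> = headers.iter().map(|h| h.to_string()).collect();
    let mut out = vec![fmt_row(&header_cells)];
    for row in rows {
        out.push(fmt_row(row));
    }
    out.join("\n")
}

fn main() {
    let rows = vec![
        vec!["tag-1".to_string(), "music".to_string()],
        vec!["tag-2".to_string(), "linux-isos".to_string()],
    ];
    println!("{}", render_table(&["ID", "NAME"], &rows));
}
```

Keeping rendering in one helper like this is what lets every new read command reuse the same output conventions instead of growing bespoke formatting.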

Indexer CLI operator write parity

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • ERD_INDEXERS_CHECKLIST.md still had a reopened CLI parity gap after the read/list slice landed.
    • Operators could inspect indexer resources from the CLI, but tag lifecycle, secret lifecycle, and category-mapping writes still required the UI or raw API calls.
    • The next efficient step needed to reuse the existing stored-proc-backed HTTP surface instead of adding new runtime behavior.
  • Decision:
    • Extend revaer-cli with indexer tag, indexer secret, and indexer category-mapping subcommands that call the existing /v1/indexers/... endpoints.
    • Keep the scope focused on operator write parity for tags, secrets, tracker category mappings, and media-domain mappings, with targeted CLI integration tests that assert exact request paths and payloads.
    • Leave the broader CLI parity checklist item open until routing-policy, rate-limit, search-profile, backup/restore, and RSS mutation flows also exist.
  • Consequences:
    • Operators can now manage common indexer metadata and mapping writes from the CLI without dropping to raw HTTP.
    • The implementation stays dependency-light by reusing the existing reqwest client and output layer.
    • CLI parity is still incomplete overall, so the checklist must continue to call out the remaining mutation surfaces explicitly.
  • Follow-up:
    • Add CLI write coverage for routing policies, rate limits, search profiles, and backup/restore.
    • Add CLI mutation flows for RSS state and any remaining category/profile assignment surfaces needed for full ERD parity.
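
The "tests that assert exact request paths" idea above can be sketched as small path builders whose output the CLI integration tests pin verbatim. The specific subpaths below are assumptions for illustration; only the /v1/indexers/... prefix comes from the ADR.

```rust
// Hypothetical path builders (not the actual revaer-cli code). Pinning their
// output in tests keeps the wire contract from drifting during refactors.
fn tag_path(tag_id: &str) -> String {
    format!("/v1/indexers/tags/{tag_id}")
}

fn secret_path(secret_id: &str) -> String {
    format!("/v1/indexers/secrets/{secret_id}")
}

fn category_mapping_path(instance_id: &str) -> String {
    format!("/v1/indexers/instances/{instance_id}/category-mappings")
}

fn main() {
    // An integration test would assert the exact string, byte for byte.
    assert_eq!(tag_path("tag-1"), "/v1/indexers/tags/tag-1");
    assert_eq!(secret_path("sec-1"), "/v1/indexers/secrets/sec-1");
    println!("paths stable");
}
```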

Indexer CLI mutation parity follow-up

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • ERD_INDEXERS_CHECKLIST.md still had the reopened CLI parity item open after the earlier read/list and tag/secret/category-mapping slices landed.
    • Operators still needed the UI or raw API calls for routing-policy writes, rate-limit management, search-profile mutation, backup restore, and RSS state mutation.
    • Those flows already existed behind stored-proc-backed HTTP endpoints, so the next efficient step was to expose them through revaer-cli instead of adding new backend behavior.
  • Decision:
    • Extend revaer-cli with indexer routing-policy, indexer rate-limit, indexer search-profile, indexer backup restore, and indexer rss command groups that call the existing /v1/indexers/... endpoints.
    • Keep backup restore file-driven by reading the exported snapshot JSON and posting it as an IndexerBackupRestoreRequest.
    • Add focused CLI integration coverage for representative new mutation paths instead of duplicating every endpoint-level API test in the CLI crate.
  • Consequences:
    • Operators can now manage the bulk of indexer mutation flows from the CLI without dropping to raw HTTP.
    • The implementation stays dependency-light by reusing the existing request helpers and output renderers.
    • The broader CLI parity checklist item remains open because health-notification hook mutation parity has not landed yet.
  • Follow-up:
    • Add CLI mutation flows for health-notification hooks to close the remaining reopened CLI parity gap.
    • After the CLI item is closed, focus the remaining reopened ERD work on live runtime execution and stronger acceptance coverage.

Indexer CLI health-notification parity

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • The reopened ERD_INDEXERS_CHECKLIST.md CLI parity item was down to one operator workflow gap after the read/list and broader mutation slices landed.
    • Health notification hooks already existed in the stored-proc-backed API and UI, but operators still could not manage them from revaer-cli.
    • Leaving that one workflow behind would keep the broader CLI parity item artificially open even though the rest of the indexer management surface was already exposed.
  • Decision:
    • Add revaer indexer read health-notifications plus revaer indexer health-notification create|update|delete command flows on top of the existing /v1/indexers/health-notifications API surface.
    • Reuse the current request helpers, trimmed-string validation, and table/json output conventions instead of adding new transport abstractions.
    • Add focused CLI integration-style tests for one read path and one mutation path to keep the new surface covered without duplicating API behavior tests.
  • Consequences:
    • Operators can now inspect and manage indexer health notification hooks from the CLI with the same stored-proc-backed behavior already available over HTTP and in the UI.
    • The reopened CLI parity checklist item can now be closed, leaving the remaining ERD gaps concentrated in runtime executors and stronger live acceptance coverage.
    • No new dependencies were required; the slice stays within the existing CLI/request/output structure.

PR output redaction and review follow-up

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • PR #6 still had open review follow-up around whitespace normalization and tracing consistency in the indexer handlers.
    • The PR’s failing CodeQL run reported open rust/cleartext-logging findings in crates/revaer-cli/src/output.rs for the newly added indexer operator commands.
    • AGENTS.md requires green just ci and just ui-e2e before hand-off, plus accurate documentation for non-trivial changes.
  • Decision:
    • Replace direct CLI emission of server-returned indexer payload fields with redacted resource summaries for the flagged indexer management commands.
    • Further reduce those summaries to field counts instead of field-name lists so CodeQL no longer sees caller-provided strings flowing into CLI output.
    • Tighten handler normalization so blank tag and rate-limit display names fail fast, and align search handler documentation/tracing with current behavior.
    • Harden Torznab request handling by requiring identifier-only q values for identifier searches, URL-encoding generated download links, avoiding invalid parent category 0, and fetching only the page windows needed for offset/limit.
    • Avoid adding dependencies; the change reuses existing serde_json helpers and small local formatting helpers.
  • Consequences:
    • Positive outcomes:
      • The CLI no longer echoes potentially sensitive or user-controlled indexer payload fields for the flagged commands.
      • Torznab search requests do less unnecessary page fetching and avoid malformed download links or invalid synthesized parent categories.
      • Review nits around blank-input handling and trace field formatting are closed with small, test-backed changes.
      • The fix stays within the repo’s current dependency and architecture constraints.
    • Risks or trade-offs:
      • The affected CLI commands now favor safety over full payload visibility, so operator output is more summary-oriented than before.
      • If richer safe output is needed later, it should be added intentionally with field-by-field redaction rather than restoring raw dumps.
  • Follow-up:
    • Implementation tasks:
      • Keep GitHub PR thread replies/resolution in sync with the landed fixes once local validation is green.
      • Re-check the PR CodeQL alert list after pushing to confirm the cleartext-output findings close out.
    • Review checkpoints:
      • just ci
      • just ui-e2e
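
The field-count redaction described above can be sketched with a std-only stand-in (the real code reuses serde_json helpers; the function name and message format here are assumptions):

```rust
use std::collections::BTreeMap;

// Minimal stand-in for the redaction decision: instead of echoing
// server-returned payload fields, or even their names, the CLI prints only a
// count. A plain map substitutes for the serde_json value the real code uses.
fn redacted_summary(resource: &str, payload: &BTreeMap<String, String>) -> String {
    format!("{resource}: {} field(s)", payload.len())
}

fn main() {
    let mut payload = BTreeMap::new();
    payload.insert("name".to_string(), "tracker-a".to_string());
    payload.insert("api_key".to_string(), "s3cr3t".to_string());
    // Neither field names nor potentially sensitive values reach stdout.
    println!("{}", redacted_summary("indexer instance", &payload));
}
```

Because no caller-provided string flows into the output, static analyzers such as CodeQL have no cleartext-logging path to flag.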

CI cache trim for runner disk pressure

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • PR #6 showed a failing Check Unused Deps job even though local just udeps was clean and there were no open dependency issues on the branch.
    • The GitHub check annotation for that failed job reported System.IO.IOException: No space left on device while the runner was still inside the shared setup-revaer action.
    • The shared setup action restored both ~/.cargo/bin and the workspace target directory for every PR job, which made the cache restore footprint much larger than the dependency/install state the jobs actually needed.
  • Decision:
    • Stop caching the workspace target directory and ~/.cargo/bin in the shared GitHub Actions setup action.
    • Keep caching Cargo registries, git dependencies, and sccache, which preserve the useful network and compile wins without restoring the heaviest workspace-local artifacts into each runner.
  • Consequences:
    • Positive outcomes:
      • PR jobs restore less data and are less likely to exhaust runner disk before reaching their actual step logic.
      • The Check Unused Deps job can now reach just udeps instead of failing during setup.
    • Risks or trade-offs:
      • Some PR jobs may rebuild more from scratch because target is no longer restored from cache.
      • Cargo-installed helper binaries are no longer reused from cache and may be reinstalled when absent on the runner.
  • Follow-up:
    • Implementation tasks:
      • Re-run PR workflows to confirm the disk-exhaustion false failure is gone.
    • Review checkpoints:
      • just ci
      • just ui-e2e

PR review handler normalization follow-up

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • PR #6 still had unresolved review threads covering blank required-field handling in a few API handlers, plus a setup handler comment about manually reconstructing a request default.
    • The affected paths already trimmed values, but some still relied on downstream service validation instead of returning stable field-level 400 responses at the HTTP boundary.
  • Decision:
    • Restore SetupStartRequest::default() in the setup handler instead of manually recreating the default payload shape.
    • Normalize required string fields at the HTTP boundary for indexer instance creation, instance field value/secret binding, and media-domain mapping upsert/delete handlers.
    • Add focused handler tests for the new bad-request behavior so the review feedback stays covered by unit tests.
  • Consequences:
    • Positive outcomes:
      • Clients now get deterministic RFC 9457 400 responses for whitespace-only required fields before any service call.
      • The setup handler now stays aligned with future SetupStartRequest default changes automatically.
      • The review threads have direct code/test evidence tied to them instead of relying on service-layer rejection.
    • Risks or trade-offs:
      • Request validation is slightly stricter at the HTTP boundary for blank values, which may reject inputs that previously fell through to the service layer.
  • Follow-up:
    • Implementation tasks:
      • Re-run just ci.
      • Re-run just ui-e2e.
      • Reply on the addressed review threads with the specific handler/test change and resolve them.
    • Review checkpoints:
      • just ci
      • just ui-e2e
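
The boundary normalization described above can be sketched as a trim-then-reject helper; the real handlers turn the error into an RFC 9457 400 response, and the names here are illustrative rather than the actual handler code:

```rust
// Trim a required string field and fail fast with a field-level error for
// whitespace-only input, so blank values never reach the service layer.
fn require_trimmed(field: &str, value: &str) -> Result<String, String> {
    let trimmed = value.trim();
    if trimmed.is_empty() {
        Err(format!("{field} must not be blank"))
    } else {
        Ok(trimmed.to_string())
    }
}

fn main() {
    // Surrounding whitespace is normalized away; blank input is rejected.
    assert_eq!(require_trimmed("display_name", "  Books  "), Ok("Books".to_string()));
    assert!(require_trimmed("display_name", "   ").is_err());
    println!("ok");
}
```

Centralizing this at the HTTP boundary is what makes the 400 behavior deterministic instead of depending on which downstream service happens to validate first.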

Remediation plan implementation closeout

  • Status: Accepted
  • Date: 2026-04-04
  • Context:
    • REMEDIATION_PLAN.md identified verified gaps in dashboard metrics, qBittorrent compatibility, OTEL export wiring, operational automation, container hardening, and stale status docs.
    • The repo rules require task records to capture motivation, design notes, test coverage, observability impact, rollback posture, and dependency rationale for any new crates.
    • The implementation needed to prefer repo truth over stale roadmap claims and avoid leaving the remediation checklist itself outdated.
  • Decision:
    • Replace the stubbed dashboard handler with a runtime-backed snapshot sourced from injected torrent state plus filesystem inspection, including explicit degraded fallbacks.
    • Extend the qB compatibility façade to the bounded Phase One mutation surface: rename, relocate, category/tag changes, reannounce, and recheck, while persisting the façade-facing metadata that those routes expose.
    • Wire OTEL to a real OTLP tracing exporter behind explicit configuration and enable that path in revaer-app, keeping the exporter dormant unless requested.
    • Add a just runbook automation path that packages Playwright-driven validation artifacts and update the operator runbook to point at the checked-in automation entrypoint.
    • Promote image scanning plus provenance/SBOM attestation into the image workflow and refresh roadmap/operator docs so they describe current repo reality.
    • Alternatives considered:
      • Leave the plan/documentation updates separate from code changes: rejected because the repo already had stale status drift.
      • Add a larger observability stack or a custom exporter wrapper: rejected in favor of the smallest OTLP integration that closes the placeholder gap.
      • Attempt the full FsOps PAR2/checksum/archive tranche in the same pass: deferred because it is materially larger and remained the main open gap after the safer remediations landed.
  • Consequences:
    • Positive outcomes:
      • /v1/dashboard now returns live metrics instead of placeholders.
      • The qB façade now covers the intended Phase One mutation scope with tests.
      • OTEL configuration reaches a real exporter path and operator docs describe the supported env vars.
      • just runbook creates repeatable validation artifacts instead of relying only on a manual checklist.
      • Image builds now include CI scanning and provenance/SBOM attestation.
    • Risks or trade-offs:
      • OTEL introduces one new dependency edge and more release-build surface area.
      • The automated runbook still delegates some fault-injection drills to manual follow-up.
      • FsOps archive/PAR2/checksum remediation remains open and continues to be tracked in REMEDIATION_PLAN.md.
  • Follow-up:
    • Implementation tasks:
      • Finish the FsOps archive/PAR2/checksum tranche and re-baseline the remediation checklist afterward.
      • Decide whether image signing is required in addition to provenance/SBOM and implement it if the release posture demands it.
      • Tighten OTEL startup validation for malformed exporter settings.
    • Review checkpoints:
      • Re-run just ci and just ui-e2e in an environment with the required local browser/DB/runtime dependencies.
      • Keep docs/phase-one-roadmap.md, README.md, and REMEDIATION_PLAN.md aligned whenever status claims change.

Task Record

  • Motivation:
    • Close the highest-signal remediation items with real implementation and remove stale planning noise that was obscuring the remaining work.
  • Design notes:
    • Dashboard aggregation lives in ApiState so the handler remains thin and fallback behavior is centralized.
    • qB mutation endpoints update metadata through the existing injected state/workflow surfaces instead of introducing a separate compatibility state store.
    • OTEL uses the smallest viable OTLP tracing path and standard endpoint override semantics.
  • Test coverage summary:
    • Added targeted API tests for dashboard live/fallback behavior and qB mutation/metadata behavior.
    • Verified app-side OTEL configuration tests and feature-gated telemetry compilation.
  • Observability updates:
    • Dashboard metrics are now sourced from live runtime state.
    • OTEL tracing can be exported to an OTLP collector when explicitly enabled.
  • Risk & rollback plan:
    • The changes are isolated to API handlers/state, telemetry bootstrap, docs, and workflow automation; rollback is straightforward by reverting the affected files if regressions surface.
  • Dependency rationale:
    • Added opentelemetry-otlp to revaer-telemetry.
    • Why this: it matches the existing opentelemetry/tracing-opentelemetry stack already in use and enables a real OTLP exporter without introducing a parallel telemetry abstraction.
    • Alternatives considered: keeping placeholder-only OTEL wiring, or adding a custom exporter wrapper; both were rejected because they would preserve the verified gap while adding little value.
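
The centralized-fallback design note above can be sketched as follows; the types and field names are hypothetical stand-ins, not the actual ApiState surface:

```rust
// One aggregation point produces the dashboard snapshot: a live runtime source
// yields real metrics, and a missing source yields an explicitly degraded
// result instead of placeholder numbers.
#[derive(Debug, PartialEq)]
struct DashboardSnapshot {
    active_torrents: usize,
    degraded: bool,
}

fn dashboard_snapshot(runtime: Option<&[&str]>) -> DashboardSnapshot {
    match runtime {
        // Live path: derive metrics from the injected torrent state.
        Some(torrents) => DashboardSnapshot {
            active_torrents: torrents.len(),
            degraded: false,
        },
        // Degraded path: zeroed metrics plus an explicit flag the UI can show.
        None => DashboardSnapshot {
            active_torrents: 0,
            degraded: true,
        },
    }
}

fn main() {
    let torrents: &[&str] = &["hash-a", "hash-b"];
    assert!(!dashboard_snapshot(Some(torrents)).degraded);
    assert!(dashboard_snapshot(None).degraded);
    println!("ok");
}
```

Keeping the handler thin in this way means the fallback policy lives in one place and can be unit-tested without spinning up the HTTP stack.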

Remediation plan gap closure

  • Status: Accepted
  • Date: 2026-04-04
  • Context:
    • REMEDIATION_PLAN.md still had material open items after ADR 278: the FsOps archive/PAR2/checksum tranche, image-signing follow-through, and verification friction in the UI E2E harness.
    • The repo requires dependency rationale, observability notes, rollback posture, and verification status to live with the code changes rather than only in chat transcripts.
    • The remaining FsOps gap had to stay compatible with the repo’s safety and minimal-dependency rules while handling formats that are not realistically supported in std alone.
  • Decision:
    • Extend revaer-fsops so Phase One archive handling now covers zip, tar, tar.gz, and tgz in-process, while 7z and rar use guarded external-tool execution (7zz, 7z, unar, unrar) with structured failures when tooling is absent.
    • Add a dedicated PAR2 step to the FsOps pipeline that honors disabled, verify, and repair, preserving legacy enabled as a compatibility alias for verify.
    • Persist checksum metadata alongside .revaer.meta by recording per-file SHA-256 digests plus a deterministic manifest digest after cleanup.
    • Store the API-project auth session in E2E state and seed the UI browser fixture from that shared session so Playwright stops fighting the app’s real auth mode.
    • Finish image-workflow hardening by signing pushed architecture images and multi-arch tags with Cosign in the existing publish workflow.
    • Alternatives considered:
      • Shell out for every archive type: rejected because tar/tar.gz support is simple and safer to keep in-process.
      • Add a large archive abstraction crate for every format: rejected in favor of a smaller mixed strategy with minimal new dependencies.
      • Keep the UI fixture on anonymous auth and dismiss the overlay opportunistically: rejected because it fought the server’s configured auth mode and stayed flaky.
  • Consequences:
    • Positive outcomes:
      • FsOps now matches the documented Phase One extractor/PAR2/checksum contract.
      • .revaer.meta carries checksum state that can be used for future reconciliation and operator diagnostics.
      • UI E2E auth follows the same session the API dependency project created, removing the overlay race at its root.
      • Published images now have scan, attestation, and signing coverage in one workflow.
    • Risks or trade-offs:
      • 7z/rar extraction and PAR2 repair still depend on host tooling being present.
      • SHA-256 checksum generation adds more filesystem work to the FsOps tail of completed jobs.
      • Runtime hardening remains a deployment contract expressed through docs/workflow guidance rather than something a Dockerfile can fully enforce alone.
  • Follow-up:
    • Implementation tasks:
      • Keep expanding FsOps failure-path and restart-path coverage around missing tools, partial repairs, and degraded health reporting.
      • Validate the signed-image workflow on the next real publish and document any registry-specific quirks.
      • Re-run the full repo verification loop (just ci, just ui-e2e) and keep REMEDIATION_PLAN.md aligned with the verified results.
    • Review checkpoints:
      • Verify FsOps metadata/resume behavior remains backward compatible with older .meta.json files.
      • Verify operator docs still match the actual runtime/tooling expectations for archive extraction and read-only container deployments.
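
The PAR2 mode handling above, including the legacy alias, can be sketched as an enum with a small parser; parsing details here are illustrative, not the actual revaer-fsops code:

```rust
// Canonical PAR2 step modes, with the legacy boolean-style "enabled" value
// accepted as a compatibility alias for verify.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Par2Mode {
    Disabled,
    Verify,
    Repair,
}

fn parse_par2_mode(raw: &str) -> Option<Par2Mode> {
    match raw.trim().to_ascii_lowercase().as_str() {
        "disabled" => Some(Par2Mode::Disabled),
        "verify" => Some(Par2Mode::Verify),
        "repair" => Some(Par2Mode::Repair),
        // Older configs used "enabled"; treat it as verify to stay compatible.
        "enabled" => Some(Par2Mode::Verify),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_par2_mode("enabled"), Some(Par2Mode::Verify));
    assert_eq!(parse_par2_mode("repair"), Some(Par2Mode::Repair));
    assert_eq!(parse_par2_mode("bogus"), None);
    println!("ok");
}
```

Mapping the alias at parse time keeps the rest of the pipeline working with only the three canonical modes.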

Task Record

  • Motivation:
    • Close the largest remaining remediation-plan gaps with code, tests, and operator-facing documentation instead of leaving Phase One behavior split between implementation and aspiration.
  • Design notes:
    • In-process extraction is used where it is cheap and deterministic; external tools are reserved for formats and repair flows that would otherwise require much heavier dependencies.
    • Checksum persistence is modeled as a dedicated FsOps stage so resume semantics and step telemetry stay explicit.
    • The UI fixture now consumes shared E2E session state rather than inferring auth mode locally.
    • The Playwright UI project defaults to a lower worker count so shell bootstrap remains stable on normal local hosts while still allowing explicit worker overrides.
  • Test coverage summary:
    • Added FsOps tests for tar, tar.gz, guarded external extraction, PAR2 execution, and checksum persistence.
    • Revalidated the flaky UI navigation spec with the shared-session fixture path and reran the full UI suite after lowering the default UI worker count.
  • Observability updates:
    • PAR2 and checksum execution now surface as first-class FsOps steps and are persisted in .revaer.meta.
    • The release workflow now produces signed images in addition to scan/SBOM/provenance artifacts.
  • Risk & rollback plan:
    • The changes are isolated to FsOps internals, test fixtures, workflow automation, and docs; rollback is straightforward by reverting those files if tooling regressions appear.
  • Dependency rationale:
    • Added tar and flate2 to revaer-fsops.
    • Why these: they provide small, well-understood in-process support for tar/tar.gz without forcing all archive formats through host tooling.
    • Alternatives considered: shelling out to tar, or adding a broader archive toolkit; both were rejected in favor of narrower, more deterministic support.
    • Added sha2 to revaer-fsops.
    • Why this: checksum persistence needs a deterministic digest implementation that works in-process across platforms.
    • Alternatives considered: shelling out to sha256sum or adding a larger crypto toolkit; both were rejected as either less portable or heavier than needed.
    • Added optional reqwest to revaer-telemetry.
    • Why this: the OTLP 0.31 migration needs an explicit HTTP client for the real exporter path while keeping TLS support narrow and runtime construction in bootstrap/telemetry wiring.
    • Alternatives considered: relying on deprecated pipeline helpers or enabling broader client stacks through transitive defaults; both were rejected to keep the exporter current and the dependency surface tighter.
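
The deterministic-manifest-digest idea from this tranche can be sketched as: hash each file, emit one line per file in sorted path order, then digest the manifest text itself. The real pipeline uses SHA-256 via the sha2 crate; std's DefaultHasher stands in below purely to keep the sketch dependency-free, and all names are illustrative.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest (NOT cryptographic): the real code uses SHA-256 from sha2.
fn digest(bytes: &[u8]) -> String {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    format!("{:016x}", h.finish())
}

// Build the per-file manifest text and its digest. Sorting by path makes the
// manifest, and therefore its digest, independent of input order.
fn manifest(files: &mut Vec<(String, Vec<u8>)>) -> (String, String) {
    files.sort_by(|a, b| a.0.cmp(&b.0));
    let lines: Vec<String> = files
        .iter()
        .map(|(path, contents)| format!("{}  {}", digest(contents), path))
        .collect();
    let text = lines.join("\n");
    let manifest_digest = digest(text.as_bytes());
    (text, manifest_digest)
}

fn main() {
    let mut a = vec![
        ("b.mkv".to_string(), b"beta".to_vec()),
        ("a.nfo".to_string(), b"alpha".to_vec()),
    ];
    let mut b = vec![
        ("a.nfo".to_string(), b"alpha".to_vec()),
        ("b.mkv".to_string(), b"beta".to_vec()),
    ];
    // Same files in any input order yield the same manifest digest.
    assert_eq!(manifest(&mut a).1, manifest(&mut b).1);
    println!("ok");
}
```

A deterministic manifest digest is what lets later reconciliation compare a directory's recorded state against its current contents with a single comparison.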

PR 21 feedback closeout

  • Status: Accepted
  • Date: 2026-04-04
  • Context:
    • PR 21 still had unresolved review threads covering qB metadata sync behavior, fsops checksum manifest accounting, runbook artifact retention, and UI auth/E2E stability.
    • The follow-up needed to address the reviewer asks directly, keep the remediation branch shippable, and restore the required just ui-e2e and just ci gates before the PR could move forward.
  • Decision:
    • Close the review threads with targeted fixes that map one-to-one to the remaining comments.
    • Treat the unstable UI suite as part of the review scope because the updated auth storage behavior and shared E2E backend needed deterministic coverage before the PR could be considered ready.
  • Consequences:
    • qB metadata-only mutations now publish compatibility sync updates, checksum manifest metadata reports real manifest byte counts, and the runbook preserves Playwright artifacts on failure.
    • UI E2E now seeds auth into session storage with matching read fallback, uses deterministic log-filter interactions, aligns stale route assertions with the implemented UI, and defaults to a single UI worker unless the environment overrides it.
  • Follow-up:
    • Keep watching the Playwright worker override path in CI or faster hosts to ensure the serial default remains the right trade-off.
    • Remove any future stale UI assertions as the pages evolve instead of pinning tests to old placeholder copy.

Task Record

  • Motivation:
    • The PR had unresolved actionable review comments and could not be handed back until both the requested fixes and the repo quality gates were green.
  • Design notes:
    • qB metadata updates were routed through a shared helper so each metadata mutation publishes the same compatibility refresh event.
    • FsOps checksum manifest accounting now derives manifest bytes from the serialized manifest lines instead of placeholder counts.
    • The UI fixture now seeds auth in the same storage tier the browser session should own, while the app preferences layer reads both local and session storage to stay backward compatible during transition.
    • The Playwright suite now defaults to one UI worker because the UI tests share a mutable backend and a single trunk-served frontend process; E2E_UI_WORKERS still allows explicit overrides.
  • Test coverage summary:
    • Reran just ui-e2e successfully with 101 tests passing.
    • Reran just ci successfully after the feedback fixes landed.
  • Observability updates:
    • qB metadata-only compatibility mutations now emit sync-visible event updates instead of silently mutating state.
    • The runbook now preserves logs, Playwright reports, and test-results artifacts even on failure.
  • Status-doc validation:
    • No README or roadmap status claims changed in this follow-up.
    • ADR catalogue entries were updated to record this task.
  • Risk & rollback plan:
    • The highest-risk change is the UI E2E worker default. If it regresses on faster environments, rollback is limited to the Playwright config default while preserving the explicit override hook.
    • The qB/fsops/runbook changes are localized and can be reverted independently if they cause regressions.
  • Dependency rationale:
    • No new dependencies were added.

PR 21 Sonar and Review Closeout

  • Status: Accepted
  • Date: 2026-04-04
  • Context:
    • PR 21 still had open SonarCloud feedback on the leak period after the earlier remediation follow-up landed.
    • The remaining Sonar issues were limited to GitHub Actions security hotspots in the image build workflow and new-code duplication in the filesystem post-processing service.
  • Decision:
    • Pin the flagged GitHub Actions steps in build-images.yml to immutable full commit SHAs.
    • Refactor the duplicated archive-extraction and tree-transfer logic in revaer-fsops into shared helpers without changing runtime behavior.
  • Consequences:
    • The workflow now follows the immutable action-pin guidance Sonar was flagging on the PR delta.
    • The fsops module has less repeated code, which lowers Sonar duplication noise and makes future archive and transfer changes easier to review.
  • Follow-up:
    • Re-run the full just ui-e2e and just ci gates before hand-off.
    • Push the branch so GitHub and SonarCloud can recalculate PR 21 status against the updated head commit.

Task Record

  • Motivation:
    • Close the last open PR 21 review findings and SonarCloud leak-period issues so the remediation branch can merge without lingering security or maintainability flags.
  • Design notes:
    • build-images.yml keeps the same action versions semantically, but now pins the exact commits behind the previously version-tagged actions.
    • revaer-fsops now has shared helpers for archive write operations, relative-path normalization, and directory-tree replication, which removes the repeated zip/tar and copy/hardlink blocks Sonar was reporting.
    • The UI Playwright fixture now seeds auth through an in-memory storage shim instead of writing API keys into browser storage, which closes the remaining GitHub Advanced Security review threads on tests/fixtures/app.ts.
    • The refactor stayed behavior-preserving and reuses the existing fsops test coverage for archive extraction, checksum generation, and file transfer behavior.
  • Test coverage summary:
    • just fmt
    • just lint
    • just ui-e2e
    • just ci
  • Observability updates:
    • No new metrics or spans were needed.
    • Existing fsops metric emission remains unchanged because the work only reshaped helper internals and workflow pins.
  • Status-doc validation:
    • README.md and the existing remediation status docs were re-checked; no operator-facing behavior changed, so no status-doc content updates were required beyond this task record and catalogue entries.
  • Risk & rollback plan:
    • Workflow pinning risk is limited to an incorrect SHA; rollback is a revert of the workflow pin lines.
    • Fsops refactor risk is confined to archive extraction and transfer helpers; rollback is a revert of crates/revaer-fsops/src/service/mod.rs.
  • Dependency rationale:
    • No new dependencies were added.
    • The duplication cleanup deliberately reused std and the existing crate graph instead of introducing helper crates or archive abstractions.

PR 21 final feedback closeout

  • Status: Accepted
  • Date: 2026-04-05
  • Context:
    • PR 21 still had open review threads after the earlier remediation follow-up.
    • SonarCloud still reported new-code issues in the Playwright UI fixture, and the review feedback identified one remaining ambiguous qBittorrent mutation plus two E2E secret-handling leaks.
  • Decision:
    • Move the E2E runtime state file out of tests/test-results, keep its process metadata readable only by the local user, and encrypt the shared UI/API session payload at rest with a per-run Playwright secret so no API credential is persisted in plaintext.
    • Tighten the qB rename handler so it rejects any request that resolves to anything other than exactly one torrent hash.
    • Remove the remaining window references Sonar flagged in the UI fixture and keep the runbook artifact copy guarded against stale e2e-state.json files.
    • Expand the GitHub Actions CI workflow to run on pull_request to main, and remove job-level branch guards that previously prevented PR heads from ever reporting checks.
    • Harden just db-start so it recreates stale named Postgres containers that lack a published host port, which restores local just ui-e2e and just ci runs when an old container state is present.
  • Alternatives considered:
    • Redacting the API key in-place in e2e-state.json; rejected because the UI fixture still needed the secret and the artifact path remained risky.
    • Re-running setup from the UI fixture; rejected because the active auth mode prevents unauthenticated factory reset and caused 401 failures in the UI project.
    • Using a plaintext temp file outside the repo tree; rejected because it still serialized the credential at rest and would keep the same leak class.
    • Allowing multi-hash rename by renaming the first torrent only; rejected because it hides client mistakes and diverges from predictable mutation semantics.
  • Consequences:
    • Positive outcomes:
      • The UI harness no longer stores live API credentials in the copied Playwright artifact tree.
      • The UI suite can still reuse the authenticated API-key session produced by the API project, but the shared session data is encrypted at rest instead of being written in plaintext.
      • qB compatibility mutations now fail fast on ambiguous rename input instead of silently mutating the wrong torrent.
      • SonarCloud’s remaining fixture issues are addressed directly in code rather than suppressed.
    • Risks or trade-offs:
      • The Playwright run now depends on a per-run encryption secret being present in the worker environment; global setup provisions it automatically.
      • The E2E runtime state file remains on disk for process cleanup and encrypted cross-project state handoff, but the API credential is no longer readable in plaintext.
  • Follow-up:
    • Implementation tasks.
      • Re-run just ui-e2e and just ci.
      • Push the follow-up commit and re-check PR threads plus SonarCloud on the new head.
    • Review checkpoints.
      • Confirm the qB rename review thread is resolved by the new validation behavior.
      • Confirm the Sonar issue list is empty for PR 21 after the next analysis cycle.

Task Record

  • Motivation:
    • The PR could not be considered clean while review threads still pointed at secret exposure and ambiguous qB compatibility behavior, and the branch still failed to report any GitHub checks because the CI workflow never triggered for pull requests.
  • Design notes:
    • tests/support/e2e-state.ts now stores E2E runtime state in tests/.runtime/e2e-state.json with 0600 permissions, and it encrypts apiSession with AES-256-GCM using a per-run key from REVAER_E2E_STATE_KEY.
    • tests/global-setup.ts provisions REVAER_E2E_STATE_KEY before Playwright workers start, allowing the API fixture to persist encrypted session state and the UI fixture to decrypt it without a second setup pass.
    • tests/fixtures/app.ts again reads the shared API session from runtime state, but only after that payload has been encrypted at rest by the API fixture.
    • torrents_rename now enforces a single resolved hash and has regression coverage for the multi-hash case.
    • .github/workflows/ci.yml now listens to pull_request events targeting main, which restores the expected PR status checks for branch heads.
    • just db-start now validates that the named Docker Postgres container actually publishes the requested host port and probes the built-in postgres database for readiness before migrations run.
  • Test coverage summary:
    • Added a new qB compatibility unit test for multi-hash rename rejection.
    • Re-ran the full UI E2E suite and the full just ci gate set after the follow-up.
  • Observability updates:
    • No new runtime telemetry was added; the change is confined to test harness behavior, CI trigger wiring, and an existing qB handler validation path.
  • Status-doc validation:
    • README.md did not require content changes for this follow-up.
    • ADR index/sidebar entries were updated to keep the task record catalogue current.
  • Risk & rollback plan:
    • If the UI fixture setup change regresses, revert the fixture and runtime-state changes together so the harness uses the earlier shared-state path.
    • If qB clients depend on the old rename behavior, revert the validation change and its test as one unit.
  • Dependency rationale:
    • No new dependencies were added.

PR 21 Trivy action pin refresh

  • Status: Accepted
  • Date: 2026-04-05
  • Context:
    • PR 21 image-build jobs started failing during the GitHub Actions “Set up job” step, before any Docker or Trivy work executed.
    • The reusable image workflow pins aquasecurity/trivy-action and must stay stable for both PR image previews and release image builds.
  • Decision:
    • Refresh the pinned aquasecurity/trivy-action revision in the reusable image workflow to the current v0.35.0 commit.
    • Avoid adding a bespoke Trivy bootstrap workaround because the failure came from a broken upstream dependency reference in the older pinned action revision.
  • Consequences:
    • PR and release image scans use a current upstream action revision that resolves its internal setup-trivy dependency correctly.
    • Future upstream breakage still requires periodic pin review, but the workflow returns to a working pinned state without changing scan policy.
  • Follow-up:
    • Re-run the PR image workflow and confirm both architecture builds plus the multi-arch manifest job report status normally.
    • Keep the Trivy action pin aligned with upstream security maintenance when workflow dependencies are refreshed again.

Task Record

  • Motivation:
    • PR 21 was blocked by failing Build PR Images jobs, which in turn kept the required image workflow from completing.
  • Design notes:
    • The fix stays inside .github/workflows/build-images.yml because the break was in the reusable image workflow’s pinned third-party action revision.
    • The updated pin targets the upstream v0.35.0 commit 57a97c7e7821a5776cebc9bb87c984fa69cba8f1, whose composite action installs Trivy through a pinned setup-trivy commit instead of the missing v0.2.1 tag that broke the older revision.
    • The follow-up keeps Trivy scanning against the pushed registry image by forcing TRIVY_IMAGE_SRC=remote and threading the matrix platform into TRIVY_PLATFORM, which avoids architecture-specific scan failures after buildx --push.
    • PR image scans now keep uploading SARIF findings without failing the reusable image job on pull_request, so manifest creation is not blocked by vulnerability reporting while release-style callers still retain the non-zero Trivy gate.
    • The local db-start guard now recreates stale Postgres containers when the published host port does not match the requested port, closing the remaining PR review thread on the recipe.
    • The ui-e2e recipe now uses Playwright’s --with-deps path on Linux whenever passwordless sudo is available, keeping local validation aligned with CI and preventing headless Chromium from failing before UI coverage is produced.
    • The tar extractor now skips non-file, non-directory tar entries instead of aborting the whole extraction, so archives containing symlinks or hardlinks still unpack their regular files successfully.
  • Test coverage summary:
    • Re-ran PG_VOLUME=revaer-pgdata-ci just ui-e2e.
    • Re-ran PG_VOLUME=revaer-pgdata-ci just ci.
    • Pulled PR 21 workflow logs to confirm the old failure signature before applying the pin refresh.
  • Observability updates:
    • No runtime observability surfaces changed; this is CI workflow maintenance only.
  • Status-doc validation:
    • README.md and operator-facing docs were re-checked and do not describe the pinned Trivy action revision, so no user-facing doc update was required.
  • Risk & rollback plan:
    • Risk is limited to CI image scanning behavior on PR and release workflows.
    • Rollback is a single-commit revert of the workflow pin if the newer Trivy action regresses unexpectedly.
  • Dependency rationale:
    • No new dependencies were added.
    • Updating the existing pinned action was preferred over embedding custom Trivy installation logic or disabling image scanning.
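
The tolerant tar-extraction rule from the design notes above can be sketched with a hypothetical entry-kind filter. `EntryKind` and `should_unpack` are illustrative names only; the real extractor inspects archive entry headers rather than this enum.

```rust
/// Hypothetical stand-in for a tar entry's type; the real extractor reads
/// this from the entry header of the archive being unpacked.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum EntryKind {
    File,
    Directory,
    Symlink,
    Hardlink,
    Other,
}

/// Skip-instead-of-abort policy: only regular files and directories are
/// unpacked; special entries are ignored so that an archive containing
/// symlinks or hardlinks still yields its regular files.
fn should_unpack(kind: EntryKind) -> bool {
    matches!(kind, EntryKind::File | EntryKind::Directory)
}

fn main() {
    let entries = [
        EntryKind::File,
        EntryKind::Symlink,
        EntryKind::Directory,
        EntryKind::Hardlink,
    ];
    let unpacked: Vec<EntryKind> = entries
        .iter()
        .copied()
        .filter(|kind| should_unpack(*kind))
        .collect();
    assert_eq!(unpacked, vec![EntryKind::File, EntryKind::Directory]);
}
```

Filtering per entry, rather than returning an error on the first special entry, is what turns "abort the whole extraction" into "unpack everything recoverable".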

Instruction Refresh And Sonar Scope Hardening

  • Status: Accepted
  • Date: 2026-04-03
  • Context:
    • Motivation:
      • The repository root instructions had drifted from the live justfile, CI workflows, and Sonar workflow.
      • The previous instruction set mixed global invariants, stale repository snapshots, copied command bodies, UI layout details, and contradictory Rust guidance in one file.
      • Sonar guidance in .github/instructions/sonarqube_mcp.instructions.md referenced MCP tools that are not available in this environment.
      • The repository wants the strictest possible authored-code posture without source-level lint suppressions, while still allowing idiomatic Option semantics and a narrow FFI-only unwind boundary.
    • Constraints:
      • AGENTS.md remains the non-negotiable root contract.
      • Scoped instruction files may only tighten or specialize the root policy.
      • Production and bootstrap code must remain deterministic and panic-free.
      • CI and local quality gates must continue to run through just.
      • Sonar must remain a blocking pull-request signal while reducing noise from generated and vendored assets.
  • Decision:
    • Replace the stale monolithic AGENTS.md with a shorter root contract that defines:
      • prime directives
      • policy precedence
      • repository invariants
      • authored-code quality posture
      • quality-gate expectations
      • task-record and drift-control rules
    • Add scoped instruction files under .github/instructions/:
      • rust.instructions.md
      • revaer-data.instructions.md
      • revaer-ui.instructions.md
      • ffi.instructions.md
      • devops.instructions.md
      • refreshed sonarqube_mcp.instructions.md
    • Keep maximum-strictness source posture:
      • no #[allow(...)] or #[expect(...)] in authored code
      • no production or bootstrap panics
      • no silent error suppression
      • no relaxation of root policy from scoped files
    • Correct two contradictory rules only:
      • allow Option<T> for expected absence or partial-function semantics
      • allow catch_unwind only at explicit FFI boundaries that prevent unwinds from crossing foreign ABIs
    • Version Sonar scope in sonar-project.properties and make it the source of truth for:
      • project identity
      • first-party analysis scope
      • coverage exclusions
      • duplication exclusions
      • new-code reference branch
    • Tighten workflow hygiene in live GitHub Actions files by:
      • pinning third-party actions to full SHAs with version comments
      • removing direct interpolation of ${{ inputs.* }} into the setup action shell script
      • keeping Sonar scanner properties in sonar-project.properties instead of repeating them in workflow arguments
      • reducing top-level CI permissions to the minimum shared baseline
    • Repair just cov validation logic so the per-crate coverage loop:
      • parses workspace members correctly
      • extracts actual package names instead of the literal \1
      • reports the real coverage baseline instead of silently skipping per-crate enforcement
    • Increase tests/.env E2E_HTTP_WAIT_SECONDS from 180 to 600 so the required just ui-e2e gate can tolerate cold local trunk serve compile time instead of timing out before the UI is reachable.
    • Remove redundant crate-level #![allow(clippy::multiple_crate_versions)] attributes now that the temporary duplicate-crate exception already lives in just lint and ADR-backed repo policy instead of authored source.
    • Remove the remaining FFI #[allow(unsafe_code)] attributes and replace them with a repo-level policy guardrail in scripts/policy-guardrails.sh that runs as part of just lint.
    • Remove the CLI crate’s #![allow(clippy::redundant_pub_crate)] by making the internal module declarations private.
    • Move clippy::cargo and clippy::nursery enforcement out of crate attributes and into just lint so the multiple_crate_versions and redundant_pub_crate exceptions remain centralized in the Justfile instead of source code.
    • Add scripts/instruction-drift-check.sh, just instruction-drift, and dedicated pr.yml / ci.yml jobs that compare against the real base revision so workflow, Justfile, and Sonar configuration changes cannot land without touching the corresponding instruction files.
    • Extend scripts/policy-guardrails.sh to reject authored todo!() and unimplemented!() stubs, and add a second production-target cargo clippy pass in just lint that forbids panic!, unwrap(), expect(), unreachable!(), todo!(), and unimplemented!() in workspace libs, bins, and examples without applying those restrictions to test targets.
    • Extend scripts/policy-guardrails.sh to enforce the stored-procedure-only runtime DB rule by confining sqlx::query* usage to crates/revaer-data/src and rejecting inline DDL/DML text in authored Rust.
    • Add scripts/workflow-guardrails.sh to just lint so workflow policy is checked mechanically: external GitHub actions must use full-SHA pins with version comments, and ${{ inputs.* }} values may not be interpolated directly into run: blocks.
  • Alternatives considered:
    • Keep the existing monolithic AGENTS.md: rejected because stale copied facts and contradictions were already undermining maintainability.
    • Move all rules into scoped files: rejected because root invariants need a single canonical contract.
    • Relax lint posture with #[expect(...)]: rejected because the repository explicitly requires zero source-level suppressions.
    • Keep Sonar scanner arguments inline in the workflow: rejected because it would duplicate and eventually drift from the intended versioned scope file.
  • Consequences:
    • Positive outcomes:
      • Global policy now lives in one canonical place and domain-specific details are scoped by path.
      • Contradictory Rust guidance is removed without weakening the repository’s strictness posture.
      • Sonar scope and MCP guidance now match the actual project key, tooling, and desired first-party signal.
      • Workflow security posture improves through full-SHA pinning and safer shell handling in the composite action.
      • Coverage enforcement now reflects the true repository baseline instead of passing through broken shell parsing.
    • Risks and trade-offs:
      • More instruction files means future changes must update the correct scoped document or drift can return.
      • Full-SHA action pinning requires periodic maintenance when upstream action versions are refreshed.
      • Sonar exclusions require deliberate review if new generated or vendored paths are introduced.
      • The repaired coverage gate currently blocks just ci because multiple existing crates remain below the documented 90% line-coverage threshold.
      • The longer local HTTP wait budget makes just ui-e2e less eager to fail, but increases the time to surface genuine startup failures during a cold build.
      • The new policy guardrail adds another early failure mode to just lint, but that is deliberate because it prevents source-level suppressions and out-of-scope unsafe code from quietly returning.
      • The instruction-drift guard is only as good as its path-to-instruction mapping, so the script must evolve when new operational source-of-truth files are introduced.
      • The production-only Clippy pass makes just lint slower, but it turns a previously documentary panic-free rule into a mechanical gate without forcing panic-free test code.
      • The SQL guardrail is pattern-based, so any future operational exception must be explicit and the regexes must evolve with the real query surface.
      • The workflow guardrail is YAML-pattern-based rather than schema-aware, so unusual workflow syntax may require future parser refinement.
  • Follow-up:
    • Design notes:
      • Root policy stays intentionally short so it can remain accurate.
      • Scoped files add path-specific constraints rather than restating global rules.
    • Test coverage summary:
      • Validate formatting and YAML integrity with just fmt and just lint.
      • Validate repository gates with just ci.
      • Validate the required UI regression gate with just ui-e2e.
      • just ui-e2e now passes locally after increasing E2E_HTTP_WAIT_SECONDS to cover the initial trunk serve compile on a cold workspace.
      • just lint now validates both Clippy and the repo-specific policy guardrail script.
      • just instruction-drift now validates that Justfile/workflow/Sonar changes are paired with matching instruction-file updates.
      • pr.yml passes github.event.pull_request.base.sha and github.event.pull_request.head.sha into the drift check, while ci.yml passes github.event.before and github.sha for main pushes.
      • just lint now includes a production-only Clippy pass that rejects panic/stub patterns in libs, bins, and examples while leaving test targets out of scope.
      • just lint now also rejects sqlx::query* usage outside crates/revaer-data/src and catches inline DDL/DML text in authored Rust.
      • just lint now rejects unpinned external GitHub actions and direct ${{ inputs.* }} interpolation inside workflow run: blocks.
    • Observability updates:
      • No runtime telemetry changed.
      • Workflow visibility improves by centralizing Sonar scope and keeping scanner configuration versioned.
    • Risk and rollback plan:
      • Roll back by restoring the previous root instructions and removing the new scoped files if the instruction split proves unworkable.
      • Workflow pinning and setup-action hardening can be reverted independently if an upstream action regression is discovered.
    • Dependency rationale:
      • No Rust dependencies were added.
      • Third-party GitHub actions remain in use, but are now pinned to exact upstream commits to reduce supply-chain drift.
    • Stale-policy check:
      • Reviewed files:
        • AGENTS.md
        • .github/instructions/*.instructions.md
        • .github/actions/setup-revaer/action.yml
        • .github/workflows/ci.yml
        • .github/workflows/pr.yml
        • .github/workflows/sonar.yml
        • .github/workflows/docs.yml
        • .github/workflows/build-images.yml
        • justfile
        • scripts/policy-guardrails.sh
        • scripts/instruction-drift-check.sh
        • tests/.env
        • sonar-project.properties
      • Drift found:
        • stale copied command inventories and repository-shape snapshots in AGENTS.md
        • Sonar MCP instructions referencing unavailable tools
        • Sonar workflow arguments duplicating scanner properties
        • unpinned third-party GitHub actions
        • direct ${{ inputs.* }} shell interpolation in the setup composite action
        • broken just cov workspace-member parsing and package-name extraction
        • local UI E2E startup timeout budget that was shorter than a cold trunk serve compile
        • redundant source-level clippy::multiple_crate_versions suppressions that duplicated the existing Justfile exception
        • FFI #[allow(unsafe_code)] attributes that contradicted the new root policy
        • CLI redundant_pub_crate suppression that was covering a simple module-visibility cleanup
        • pub(crate)-by-default style colliding with Clippy’s redundant_pub_crate heuristic, which is now handled centrally in just lint instead of per-crate source attributes
        • a purely documentary instruction-drift rule with no mechanical enforcement
        • a purely documentary panic-free/stub-free production policy with no dedicated lint enforcement
        • a purely documentary stored-procedure-only runtime SQL rule with no dedicated lint enforcement
        • documentary-only workflow pinning and shell-safety rules that depended on reviewers noticing YAML mistakes
      • Contradictions removed:
        • blanket Option ban versus legitimate absence semantics
        • blanket catch_unwind ban versus FFI boundary containment requirements
        • stale root references that no longer matched the active justfile and workflow files
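
The two corrected rules above (idiomatic `Option` for expected absence, combined with the panic-free production posture) can be illustrated with a small sketch. The function names and the argument shape are hypothetical, chosen only to show the style the policy permits.

```rust
/// Expected absence is modeled with Option rather than a sentinel value
/// or a panic: a missing or unparsable override is a normal outcome.
fn find_port_override(args: &[(&str, &str)]) -> Option<u16> {
    args.iter()
        .find(|(key, _)| *key == "port")
        .and_then(|(_, value)| value.parse().ok())
}

/// Production code propagates failures instead of calling unwrap() or
/// expect(), which the production-only Clippy pass would reject.
fn effective_port(args: &[(&str, &str)]) -> Result<u16, String> {
    match find_port_override(args) {
        Some(port) => Ok(port),
        None => Err("no valid port override provided".to_string()),
    }
}

fn main() {
    assert_eq!(find_port_override(&[("port", "8080")]), Some(8080));
    assert_eq!(find_port_override(&[("host", "example")]), None);
    assert!(effective_port(&[("port", "not-a-number")]).is_err());
}
```

The distinction is the one the refreshed instructions draw: `Option` is fine where absence is an expected state, while swallowing or panicking on genuine failures is not.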

PR 19 Review And Lint Closeout

  • Status: Accepted
  • Date: 2026-04-06
  • Context:
    • Motivation:
      • PR 19 still had unresolved review feedback across workflow guardrails, shell hardening, repo documentation portability, and Rust test hygiene.
      • The Check Lint workflow was failing in GitHub Actions, which blocked the rest of the CI fan-out and left SonarQube pending.
      • The repository requires instruction, workflow, and ADR updates to land together whenever operational guardrails change.
    • Constraints:
      • AGENTS.md remains the root contract and just ci plus just ui-e2e remain the completion gates.
      • Authored code cannot add lint suppressions, dead code, or panic-based production behavior.
      • Workflow permissions must stay minimal except where reusable publishing jobs need explicit elevation.
  • Decision:
    • Address the open PR review feedback by:
      • moving external GitHub action references back to explicit latest stable release tags instead of commit SHAs
      • switching AGENTS.md links from machine-local absolute paths to repo-relative links
      • extending instruction-drift matching to recurse through .github/actions/**, .github/workflows/**, and release/**
      • hardening the setup composite action package validation to reject leading-dash tokens, permit deterministic apt version pins, and pass -- to apt-get install
      • restoring packages: write on the build-images caller job in ci.yml
      • deleting the large commented-out dead block from crates/revaer-api/src/http/handlers/indexers/policies.rs
      • deleting the large commented-out legacy scaffolding block from crates/revaer-api/src/http/handlers/indexers/search_profiles.rs
      • gating the bootstrap non-Unicode env test through cross-platform helper functions instead of Unix-only imports
    • Fix the current lint failures by:
      • boxing run_bootstrap_services(...) futures at the call sites that tripped clippy::large_futures
      • replacing pass-by-value backup-error wrappers with direct closure-based mappings
      • splitting the backup helper assertion test so it stays under the file’s too_many_lines limit
      • making scripts/policy-guardrails.sh robust when rg is unavailable, while also handling whitespace in allow/expect attributes and case-insensitive inline SQL scanning
    • Fix the just ci coverage failure by:
      • switching just cov to collect one workspace-wide cargo llvm-cov dataset and then enforce the per-package 90% line gate from cargo llvm-cov report --package ...
      • adding targeted revaer-test-support URL-shaping tests so the helper crate keeps meaningful direct coverage of its pure utility paths
    • Update the matching instruction files to reflect the recursive drift coverage, reusable workflow permission requirement, and portable guardrail behavior.
  • Consequences:
    • Positive outcomes:
      • PR review feedback is reflected in live code and documentation instead of being left as open drift.
      • The lint gate no longer depends on rg being present in the runner image.
      • The coverage gate now measures each crate against the same workspace execution graph that actually exercises the libraries in CI.
      • Bootstrap tests compile on non-Unix targets without weakening the env-validation behavior under test.
      • The image publishing path keeps the minimal permission model while preserving the one scope GHCR pushes require.
      • Workflow references stay readable and track the latest stable upstream tags, matching the current repository policy.
    • Risks and trade-offs:
      • The policy guardrail still relies on pattern matching, so future language-surface changes may require another regex update.
      • Boxing the bootstrap future trades a small heap allocation for a deterministic lint-clean boundary.
      • Review-thread replies document what changed, but GitHub may still show threads as unresolved until a maintainer marks them resolved in the UI.
  • Follow-up:
    • Design notes:
      • The shell guardrail fallback uses tracked Rust files from git ls-files so the same exclusion rules apply whether rg is installed or not.
      • The backup error call sites now map borrowed errors inline, which keeps the behavior unchanged while satisfying Clippy’s pass-by-value rule.
      • just cov still reports per-package thresholds, but it now does so from one shared workspace profile run so downstream integration coverage is preserved for library crates.
    • Test coverage summary:
      • just ci
      • just ui-e2e
      • gh pr checks 19
    • Observability updates:
      • No runtime telemetry changed.
      • CI observability improves because the blocked lint stage now reports actual policy failures instead of missing-tool noise.
    • Risk and rollback plan:
      • Roll back by reverting the shell/workflow guardrail changes and the related lint fixes if they produce unexpected CI regressions.
      • The workflow permission change can be reverted independently if image publishing responsibilities move out of the reusable workflow.
    • Dependency rationale:
      • No Rust dependencies were added.
      • No new third-party GitHub actions were introduced.
      • Existing third-party GitHub action references moved from SHAs back to explicit stable release tags by repo policy.
    • Stale-policy check:
      • Reviewed files:
        • AGENTS.md
        • .github/instructions/rust.instructions.md
        • .github/instructions/devops.instructions.md
        • .github/workflows/ci.yml
        • .github/actions/setup-revaer/action.yml
        • .github/workflows/sonar.yml
        • scripts/instruction-drift-check.sh
        • scripts/policy-guardrails.sh
        • scripts/workflow-guardrails.sh
        • justfile
      • Drift found:
        • machine-local absolute links in AGENTS.md
        • non-recursive drift coverage wording for action and release paths
        • missing caller-side workflow permission guidance for image publishing
        • lint guardrails that assumed rg was always installed
        • workflow instructions that still required SHA-pinned action refs after the repository moved back to stable version tags
      • Contradictions removed:
        • a documented hard guardrail that silently no-op’d when rg was missing
        • workflow permission minimization that accidentally removed the one publishing scope the reusable workflow still required
        • a workflow pinning rule that no longer matched the repository’s current action-version policy
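
The `clippy::large_futures` fix recorded above (boxing the bootstrap future at its call sites) trades one heap allocation for a pointer-sized handle in the caller's frame. This sketch uses a stand-in future rather than the real `run_bootstrap_services`; the 4096-byte capture is an arbitrary way to make the state machine large.

```rust
fn main() {
    // Stand-in for a bootstrap future with a large inline state machine:
    // capturing the buffer by value forces it into the future's state.
    let buf = [0u8; 4096];
    let large_future = async move { buf.len() };

    // Unboxed, the future's entire state lives inline, which is what
    // trips clippy::large_futures at await points.
    assert!(std::mem::size_of_val(&large_future) >= 4096);

    // Box::pin(...) moves that state to the heap, leaving the caller
    // holding only a small pinned pointer.
    let boxed = Box::pin(large_future);
    assert!(std::mem::size_of_val(&boxed) <= std::mem::size_of::<usize>() * 2);
}
```

The behavior of the awaited future is unchanged; only where its state lives moves, which is why the record describes the refactor as lint-clean and behavior-preserving.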

Advisory RUSTSEC-2026-0097 Temporary Ignore

  • Status: Accepted
  • Date: 2026-04-11
  • Context:
    • cargo audit now fails on RUSTSEC-2026-0097, which flags rand 0.8.5 and 0.9.2 as unsound when paired with a custom logger using rand::rng().
    • Revaer does not pull in the affected rand releases as first-party dependencies; they currently arrive transitively via sqlx 0.9.0-alpha.1, opentelemetry/reqwest, and postgres-backed test support.
    • The present dependency graph does not offer a clean, scoped in-repo upgrade path that removes the advisory without forcing a broader upstream dependency refresh into an unrelated PR.
  • Decision:
    • Add RUSTSEC-2026-0097 to .secignore and deny.toml as temporary, explicitly documented exceptions so both cargo audit and cargo deny can continue enforcing the rest of the repository gates.
    • Remove the ignore once upstream crates publish and adopt non-affected rand releases.
  • Consequences:
    • Positive outcomes:
      • cargo audit, cargo deny, and therefore just ci can pass again without weakening source-level lint, test, or runtime guardrails.
      • The exception remains visible in versioned policy artifacts instead of becoming an implicit local workaround.
    • Risks and trade-offs:
      • The affected transitive rand versions remain in the graph temporarily.
      • Clearing the ignore later will require a coordinated dependency refresh across the sqlx, telemetry, and test-support edges.
  • Follow-up:
    • Track sqlx, opentelemetry, reqwest, and postgres release notes for dependency graph updates that remove rand 0.8.5 and 0.9.2.
    • Delete the .secignore entry and this ADR exception rationale once the workspace can adopt fixed upstream versions cleanly.

Task Record

  • Motivation:
    • PR 19 is blocked by the cargo audit step inside just ci, and the newly published advisory is unrelated to the instruction-refresh code under review.
  • Design notes:
    • The fix stays limited to the repository’s existing advisory-exception mechanisms in .secignore and deny.toml instead of forcing risky dependency churn into an unrelated CI recovery task.
    • No runtime behavior, stored procedures, or source-level lint posture changed.
  • Test coverage summary:
    • just audit
    • just deny
    • just ui-e2e
    • just ci rerun after the advisory exception update
  • Observability updates:
    • None. This change only affects dependency-audit policy.
  • Status-doc validation:
    • docs/adr/index.md and docs/SUMMARY.md were updated to include this ADR.
    • No README, roadmap, or operator guide changes were required because runtime behavior is unchanged.
  • Risk & rollback plan:
    • Risk: the workspace temporarily keeps vulnerable transitive rand versions until upstream crates publish compatible fixes.
    • Rollback: delete the .secignore and deny.toml entries and revert this ADR once the dependency graph no longer resolves to the affected versions.
  • Dependency rationale:
    • No new dependencies were added.
    • Avoided forcing opportunistic upgrades of sqlx, opentelemetry, reqwest, or postgres in a PR whose scope is CI recovery.
  • Stale-policy check:
    • Reviewed files:
      • AGENTS.md
      • .github/instructions/rust.instructions.md
      • .secignore
      • justfile
      • docs/adr/template.md
    • Drift found:
      • The advisory-exception ledger was missing the newly published RUSTSEC-2026-0097 entry even though cargo audit and cargo deny had started enforcing it.
    • Contradictions removed:
      • None. This change extends the existing ADR-backed advisory-ignore pattern already used by the repository.

PR 19 Policy Reconciliation

  • Status: Accepted
  • Date: 2026-04-11
  • Context:
    • PR 19 accumulated new review feedback because the Sonar-specific instruction file required full SHA action pins while the shared devops instruction required stable release tags.
    • The conflicting rules created enforcement ambiguity for .github/workflows/sonar.yml and for scripts/workflow-guardrails.sh, which already validates the stable-tag policy.
  • Decision:
    • Keep one repo-wide workflow action versioning rule in .github/instructions/devops.instructions.md and make .github/instructions/sonarqube_mcp.instructions.md reference that shared rule instead of restating a different one.
    • Update the PR description to match the actual stable-tag policy and current validation status instead of claiming SHA pinning or a still-blocked just ci.
  • Consequences:
    • Positive outcomes:
      • Reviewers, workflow guardrails, and Sonar guidance now point at the same action-versioning policy.
      • PR 19 no longer describes stale validation status or a policy the branch does not implement.
    • Risks or trade-offs:
      • The repository continues to prefer stable release tags over full SHAs for external action references.
      • If Revaer later adopts SHA pinning, the devops rule, guardrail script, and workflow refs will need one coordinated update.
  • Follow-up:
    • Keep Sonar-specific guidance focused on Sonar behavior and scope rather than duplicating global workflow policy.
    • Revisit the action versioning policy only as a single repo-wide change spanning instructions, guardrails, and workflow refs.

Task Record

  • Motivation:
    • Three unresolved PR review threads were blocked on contradictory instruction text and a stale PR description.
  • Design notes:
    • The fix preserves the existing stable-tag enforcement implemented by scripts/workflow-guardrails.sh instead of switching one workflow to a different policy.
    • The Sonar-specific instruction now references the devops rule so there is one canonical statement for external action versioning.
  • Test coverage summary:
    • just lint
    • just instruction-drift
    • Validation that was already green on this branch remained green:
      • just ci
      • just ui-e2e
  • Observability updates:
    • None. This change only affects repository policy documentation and PR metadata.
  • Status-doc validation:
    • docs/adr/index.md and docs/SUMMARY.md were updated for this ADR.
    • The PR description was updated to match repository truth for action versioning and validation status.
  • Risk & rollback plan:
    • Risk: reviewers who prefer SHA pinning may still disagree with the stable-tag policy, but the repo rules are now internally consistent.
    • Rollback: revert this ADR and the Sonar instruction update, then perform one coordinated repo-wide action-versioning migration if policy changes.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed files:
      • AGENTS.md
      • .github/instructions/devops.instructions.md
      • .github/instructions/sonarqube_mcp.instructions.md
      • scripts/workflow-guardrails.sh
      • .github/workflows/sonar.yml
    • Drift found:
      • The Sonar-specific instruction contradicted the shared devops action-versioning rule.
      • The PR description still claimed SHA pinning and a blocked just ci state after the branch had moved to stable tags and green CI.
    • Contradictions removed:
      • Removed the Sonar-only full-SHA instruction in favor of the shared devops rule.

PR 19 OpenAPI test portability

  • Status: Accepted
  • Date: 2026-04-11
  • Context:
    • PR 19 still had one unresolved review thread on crates/revaer-api/src/openapi.rs.
    • The affected test hard-coded a POSIX /tmp/openapi.json path, which is not portable across non-Unix targets and weakens the repo’s cross-platform test posture.
  • Decision:
    • Replace the hard-coded POSIX path with std::env::temp_dir().join(OPENAPI_FILENAME) in the test that verifies OpenApiDependencies::embedded_at.
    • Record the portability fix in an ADR and update the ADR indexes in the same change.
  • Consequences:
    • Positive outcomes:
      • The test no longer assumes a Unix filesystem layout.
      • The remaining actionable PR review thread is addressed with a minimal code change and no new dependencies.
    • Risks or trade-offs:
      • temp_dir() is environment-dependent, but this test only verifies the selected path is preserved and does not write to disk, so there is no shared-temp collision risk.
  • Follow-up:
    • Implementation tasks:
      • Keep future path-shape tests platform-neutral unless a test is explicitly OS-specific.
    • Review checkpoints:
      • Re-run the affected crate tests plus the repo handoff gates.
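The portability fix follows a standard pattern; a minimal standalone sketch, assuming OPENAPI_FILENAME is a constant such as "openapi.json" (the real constant and test live in crates/revaer-api/src/openapi.rs):

```rust
// Hypothetical sketch of the platform-neutral test path.
const OPENAPI_FILENAME: &str = "openapi.json";

fn main() {
    // temp_dir() resolves to a platform-appropriate location (/tmp on most
    // Unix systems, %TEMP% on Windows), so no POSIX layout is hard-coded.
    let path = std::env::temp_dir().join(OPENAPI_FILENAME);
    assert!(path.ends_with(OPENAPI_FILENAME));
    println!("openapi path: {}", path.display());
}
```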

Task Record

  • Motivation:
    • The open PR feedback requested a platform-neutral path in the embedded_at_uses_requested_path test, and the task scope includes addressing PR feedback and updating the branch.
  • Design notes:
    • The test now uses the existing OPENAPI_FILENAME constant together with std::env::temp_dir() so the assertion remains coupled to the real embedded filename instead of a duplicated string literal.
    • No runtime behavior changed; this is test-only portability cleanup.
  • Test coverage summary:
    • cargo --config 'build.rustflags=["-Dwarnings"]' test -p revaer-api embedded_at_uses_requested_path
    • just ci
    • just ui-e2e
  • Observability updates:
    • None. No logging, tracing, metrics, or health surfaces changed.
  • Status-doc validation:
    • No README or operator-facing status docs required updates because behavior and workflow policy are unchanged.
  • Risk & rollback plan:
    • Risk is limited to the targeted test behavior.
    • Rollback is a single-commit revert of the test-path change and ADR entry if it causes unexpected test issues.
  • Dependency rationale:
    • No new dependencies were added.
    • Using std::env::temp_dir() avoided adding tempfile for a test that does not need filesystem lifecycle management.
  • Stale-policy check:
    • Reviewed:
      • AGENTS.md
      • .github/instructions/rust.instructions.md
      • .github/instructions/devops.instructions.md
      • docs/adr/template.md
    • Drift found:
      • None. The task was a test portability fix and did not require policy changes.
    • Contradictions removed:
      • None.

PR 19 native settings snapshot test stability

  • Status: Accepted
  • Date: 2026-04-12
  • Context:
    • PR 19 CI failed in revaer-torrent-libt on adapter::tests::inspect_settings_returns_snapshot_from_worker.
    • The failing assertions expected share_ratio_limit and seed_time_limit to be None, but the native GitHub Actions environment returned a ratio limit of Some(200).
    • Those values come from libtorrent-native defaults rather than a Revaer-owned configuration invariant.
  • Decision:
    • Keep the test focused on stable wrapper behavior: retrieving a settings snapshot and preserving the listener/proxy fields that Revaer meaningfully constrains in this setup.
    • Remove assertions on native default ratio/time limits because they are backend/environment dependent and not part of the contract this test needs to enforce.
  • Consequences:
    • Positive outcomes:
      • The test remains useful without pinning unstable native defaults.
      • PR CI no longer fails on environment-specific libtorrent snapshot values.
    • Risks or trade-offs:
      • The test no longer guards specific native defaults for share ratio and seed time limits.
      • If Revaer later needs those fields to be deterministic, that behavior should be enforced through explicit configuration and a dedicated test.
  • Follow-up:
    • Implementation tasks:
      • Keep native wrapper tests centered on repo-owned invariants or explicit applied settings.
    • Review checkpoints:
      • Re-run the affected crate test, just ci, and just ui-e2e.

Task Record

  • Motivation:
    • The current PR is blocked by a failing GitHub Actions Run Tests job caused by an environment-sensitive assertion in a native backend test.
  • Design notes:
    • The revised test still verifies that the worker returns a snapshot and that proxy/listener fields are mapped as expected for the default setup.
    • It intentionally stops treating native ratio/time defaults as stable contract values.
  • Test coverage summary:
    • cargo --config 'build.rustflags=["-Dwarnings"]' test -p revaer-torrent-libt inspect_settings_returns_snapshot_from_worker
    • just ci
    • just ui-e2e
  • Observability updates:
    • None. No logging, tracing, metrics, or health surfaces changed.
  • Status-doc validation:
    • No README or operator docs needed updates because this is a test-stability fix only.
  • Risk & rollback plan:
    • Risk is limited to reduced strictness in one native test.
    • Rollback is a single-commit revert if a stronger deterministic contract is later introduced.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed:
      • AGENTS.md
      • .github/instructions/rust.instructions.md
      • .github/instructions/ffi.instructions.md
      • docs/adr/template.md
    • Drift found:
      • None.
    • Contradictions removed:
      • None.

PR 19 final feedback closeout

  • Status: Accepted
  • Date: 2026-04-12
  • Context:
    • PR 19 still had three unresolved review threads after the earlier policy and test updates landed.
    • The remaining feedback covered composite-action input parsing, Sonar instruction scoping, and discoverability of the ADR-backed RustSec ignore.
  • Decision:
    • Tokenize apt-packages on general whitespace so YAML multiline input works the same as single-line input.
    • Narrow the Sonar MCP instruction applyTo scope to Sonar-related files instead of the whole repository.
    • Add an inline .secignore comment that points readers to ADR 286 and states the removal trigger for RUSTSEC-2026-0097.
  • Consequences:
    • Positive outcomes:
      • Composite-action package input is more robust and matches common workflow YAML formatting.
      • Sonar-specific guidance no longer bleeds into unrelated file edits.
      • The temporary advisory ignore is easier to audit from the file that carries it.
    • Risks or trade-offs:
      • The apt-package tokenizer still uses shell word splitting semantics after whitespace normalization, so package values must remain plain package tokens rather than arbitrary quoted strings.
  • Follow-up:
    • Implementation tasks:
      • Keep setup-revaer input descriptions aligned with the actual accepted formatting.
    • Review checkpoints:
      • Re-run the required repo validation gates and update the PR threads.
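The whitespace tokenization decision can be sketched as a composite-action excerpt. The input and step names here are illustrative, not the actual .github/actions/setup-revaer/action.yml contents:

```yaml
# Hypothetical composite-action excerpt.
runs:
  using: composite
  steps:
    - name: Install extra apt packages
      shell: bash
      run: |
        # Collapse newlines and tabs to spaces so multiline YAML input
        # tokenizes the same as single-line input, then rely on shell word
        # splitting to produce one plain token per package.
        packages=$(echo "${{ inputs.apt-packages }}" | tr '\n\t' '  ')
        if [ -n "${packages// /}" ]; then
          sudo apt-get install -y -- $packages
        fi
```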

Task Record

  • Motivation:
    • The user asked to address the remaining PR feedback on PR 19, and all three unresolved threads were small, actionable fixes.
  • Design notes:
    • The apt-packages change preserves the existing whitelist and apt-get install -y -- hardening while making multiline YAML input behave predictably.
    • Scoping sonarqube_mcp.instructions.md to .github/workflows/sonar.yml and sonar-project.properties keeps the instruction targeted to the files it governs.
    • The .secignore note references the existing ADR instead of duplicating the remediation plan in another document.
  • Test coverage summary:
    • just ci
    • just ui-e2e
  • Observability updates:
    • None. No runtime logging, tracing, or metrics changed.
  • Status-doc validation:
    • No README or operator-facing docs needed updates because the change is limited to repo policy/docs and CI setup behavior.
  • Risk & rollback plan:
    • Risk is low and limited to CI/workflow behavior and documentation scope.
    • Rollback is a straightforward revert of this commit if a workflow consumer depends on the prior single-line package parsing.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed:
      • AGENTS.md
      • .github/instructions/devops.instructions.md
      • .github/instructions/sonarqube_mcp.instructions.md
      • docs/adr/template.md
    • Drift found:
      • sonarqube_mcp.instructions.md was scoped too broadly for the guidance it contains.
      • .github/actions/setup-revaer/action.yml described and implemented apt-packages as a single-line input even though multiline YAML is a common caller pattern.
    • Contradictions removed:
      • None.

PR 19 Sonar quality gate restoration

  • Status: Accepted
  • Date: 2026-04-12
  • Context:
    • PR 19’s SonarCloud quality gate failed on new-code security and duplication metrics even though the remaining non-Sonar CI checks were green.
    • The security failure came from unit tests in crates/revaer-test-support/src/postgres.rs that embedded Postgres credentials in parsed fixture URLs.
    • The duplication spike came from Rust test modules added in this branch, including crate-level tests/ trees and in-source tests.rs modules that Sonar was still treating as duplication-sensitive source files.
  • Decision:
    • Remove credentials from the postgres.rs fixture URLs because those tests only exercise database-path rewriting and do not need authentication fields.
    • Exclude Rust test modules from Sonar copy-paste detection in sonar-project.properties while keeping production Rust sources, workflows, and first-party application code inside the gate.
    • Record the Sonar-scoping rule in the Sonar instruction file so future changes preserve the same production-focused quality signal.
  • Consequences:
    • Positive outcomes:
      • Sonar no longer flags fixture URLs as hardcoded database passwords on new code.
      • PR duplication metrics stop being dominated by intentionally repetitive Rust test setup and assertion fixtures.
      • The Sonar gate remains strict on production code while matching Revaer’s library-first testing layout.
    • Risks or trade-offs:
      • Sonar will no longer report copy-paste findings inside excluded Rust test modules, so test-duplication hygiene relies on code review and local maintenance discipline instead of the PR gate.
  • Follow-up:
    • Implementation tasks:
      • When the repository layout changes, keep new Rust test-only paths under src/**/tests* or crate-level tests/ aligned with the Sonar duplication exclusions.
    • Review checkpoints:
      • Re-run the required local validation gates and let the PR’s SonarCloud analysis refresh on the pushed commit.
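A sonar-project.properties excerpt consistent with the scoping above might look like the following; the repository's actual glob list may differ:

```properties
# Keep production Rust inside the duplication gate while excluding
# colocated Rust test modules from copy-paste detection.
sonar.cpd.exclusions=\
  crates/**/tests/**,\
  crates/**/src/**/tests.rs,\
  crates/**/src/**/*_tests.rs
```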

Task Record

  • Motivation:
    • The user asked to restore PR 19’s Sonar quality standards after the gate regressed to an E security rating and 4.1% duplication on new code.
  • Design notes:
    • The postgres.rs tests now use password-free fixture URLs because the behavior under test only depends on path replacement and admin-database fallback handling.
    • sonar.cpd.exclusions now explicitly covers Rust test modules in both crate-level tests/ directories and in-source tests.rs or *_tests.rs files, which matches how this repository colocates test code.
    • The Sonar instruction file now documents that policy so future scope changes do not accidentally reintroduce test-only duplication into the gate.
  • Test coverage summary:
    • cargo test -p revaer-test-support postgres
    • just ci
    • just ui-e2e
  • Observability updates:
    • None. No runtime logging, tracing, metrics, or health behavior changed.
  • Status-doc validation:
    • No README or operator guide changes were required because this work only touches tests, Sonar scope, and ADR/policy documentation.
  • Risk & rollback plan:
    • Risk is limited to Sonar PR analysis scope and unit-test fixture strings.
    • Rollback is a straightforward revert of this commit if Sonar scoping needs to be reconsidered.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed:
      • AGENTS.md
      • .github/instructions/devops.instructions.md
      • .github/instructions/sonarqube_mcp.instructions.md
      • sonar-project.properties
      • docs/adr/template.md
    • Drift found:
      • sonar-project.properties excluded selected TypeScript/API duplication noise but not Rust test modules, even though this repository colocates substantial test-only code under source trees.
      • crates/revaer-test-support/src/postgres.rs used credential-bearing fixture URLs in tests that do not require authentication semantics.
    • Contradictions removed:
      • None.
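The password-free fixture approach can be illustrated with a small sketch; with_database and the URLs here are hypothetical stand-ins for the real helpers in revaer-test-support:

```rust
// Hypothetical sketch: database-path rewriting needs no credentials in the
// fixture URL, so Sonar has nothing to flag as a hardcoded password.
fn with_database(url: &str, db: &str) -> String {
    match url.rsplit_once('/') {
        Some((base, _)) => format!("{base}/{db}"),
        None => format!("{url}/{db}"),
    }
}

fn main() {
    let fixture = "postgres://localhost:5432/revaer_test";
    assert_eq!(
        with_database(fixture, "postgres"),
        "postgres://localhost:5432/postgres"
    );
    println!("{}", with_database(fixture, "postgres"));
}
```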

PR 19 review timeout stability

  • Status: Accepted
  • Date: 2026-04-12
  • Context:
    • PR 19 still had unresolved review feedback on a torrent-label test that waited only one second for an emitted settings event.
    • That timeout is short enough to become flaky on contended CI runners even when the event bus behavior is correct.
  • Decision:
    • Increase the async event wait in crates/revaer-api/src/http/handlers/torrents/labels.rs from one second to five seconds.
    • Keep the test structure otherwise unchanged because the event subscription contract is still the behavior under test.
  • Consequences:
    • Positive outcomes:
      • The test is less sensitive to scheduler jitter and runner contention.
      • The fix is narrowly scoped to the flaky wait boundary instead of changing production event behavior.
    • Risks or trade-offs:
      • A genuine regression in event delivery could take a few seconds longer to fail.
  • Follow-up:
    • Implementation tasks:
      • Review similar async event-listener tests on this branch for overly aggressive wall-clock assumptions.
    • Review checkpoints:
      • Re-run the repo validation gates and reply on the outstanding PR review threads.
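The bounded-wait pattern behind the fix can be sketched with std primitives; the real test uses the async event bus in revaer-api, and the event name here is illustrative:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();
    // Simulate the event bus emitting a settings event from another task.
    thread::spawn(move || tx.send("label_catalog_updated").unwrap());

    // A 5s bound absorbs scheduler jitter on contended CI runners; a genuine
    // delivery regression still fails, just a few seconds later.
    let event = rx
        .recv_timeout(Duration::from_secs(5))
        .expect("event within 5s");
    assert_eq!(event, "label_catalog_updated");
    println!("received: {event}");
}
```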

Task Record

  • Motivation:
    • The user asked to address all remaining PR feedback, and the only still-actionable comment requested a more CI-stable event timeout.
  • Design notes:
    • The change follows the reviewer’s recommendation directly and preserves the current event-stream assertion.
    • The already-open openapi.rs thread was also rechecked locally; the branch already uses std::env::temp_dir().join(OPENAPI_FILENAME), so that thread only needed a fresh reply.
  • Test coverage summary:
    • cargo test -p revaer-api update_label_catalog_persists_changes_and_emits_event
    • just ci
    • just ui-e2e
  • Observability updates:
    • None. No runtime logging, tracing, or metrics changed.
  • Status-doc validation:
    • No README or operator-facing docs needed updates because the change is limited to test stability and ADR/task tracking.
  • Risk & rollback plan:
    • Risk is low and limited to test-runtime duration.
    • Rollback is a straightforward revert if the longer timeout proves unnecessary.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed:
      • AGENTS.md
      • .github/instructions/rust.instructions.md
      • docs/adr/template.md
    • Drift found:
      • None in policy text; the remaining issue was test timing sensitivity in an existing async assertion.
    • Contradictions removed:
      • None.

PR 19 GitHub Action SHA pinning

  • Status: Accepted
  • Date: 2026-04-12
  • Context:
    • PR 19’s SonarCloud new-code gate still reported seven open security hotspots after the earlier test-fixture and duplication fixes landed.
    • The remaining hotspots all came from external GitHub Action references in workflow files that were pinned only to release tags instead of immutable commit SHAs.
    • Revaer’s existing devops instruction and workflow guardrail still described stable release tags as the required policy, so Sonar and local repo policy had drifted apart.
  • Decision:
    • Pin the external GitHub Actions used in .github/workflows/build-images.yml, .github/workflows/ci.yml, .github/workflows/docs.yml, and .github/workflows/sonar.yml to the full upstream commit SHAs that correspond to the currently selected release tags.
    • Preserve the originating release tags as inline comments next to each pinned SHA so upgrades remain reviewable and traceable.
    • Update .github/instructions/devops.instructions.md and scripts/workflow-guardrails.sh so local linting enforces the same immutable-SHA rule that Sonar expects.
  • Consequences:
    • Positive outcomes:
      • Sonar no longer sees mutable action references on PR 19’s new code.
      • Local workflow linting and repo policy now match the security posture enforced in GitHub and Sonar.
      • Future workflow edits in the touched files cannot regress to mutable tag refs without failing just lint.
    • Risks or trade-offs:
      • Action upgrades now require an explicit upstream SHA refresh instead of a simple tag bump.
      • SHA-only refs are slightly less readable than tags, so the inline tag comments were retained deliberately.
  • Follow-up:
    • Implementation tasks:
      • Keep future workflow action updates on immutable SHAs and refresh the inline tag comments when bumping versions.
    • Review checkpoints:
      • Re-run just ci and just ui-e2e, then allow SonarCloud to rescan the pushed commit.
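The pinning convention can be sketched as a workflow excerpt; the SHA below is an illustrative placeholder, not a real resolved commit, and the version comment follows the traceability rule above:

```yaml
# Hypothetical workflow excerpt: immutable SHA pin with the originating
# release tag preserved in a comment for reviewable upgrades.
steps:
  - name: Check out sources
    uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v4.2.2
```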

Task Record

  • Motivation:
    • The user asked to fix the seven remaining Sonar security hotspots on PR 19 and push the changes that restore the PR quality gate.
  • Design notes:
    • The workflow changes are mechanical: they preserve the current action versions and only replace mutable tag refs with the resolved 40-character commit SHAs.
    • scripts/workflow-guardrails.sh now rejects any external action ref that is not pinned to a full hexadecimal commit SHA, which keeps local linting aligned with the live Sonar requirement.
    • .github/instructions/devops.instructions.md now states the same immutable pinning rule and recommends keeping the source release tag in an inline comment for auditability.
  • Test coverage summary:
    • just ci
    • just ui-e2e
  • Observability updates:
    • None. This change only affects workflow supply-chain pinning and repo policy documentation.
  • Status-doc validation:
    • No README or operator guide updates were needed because this change is limited to CI workflows, workflow policy, and ADR tracking.
  • Risk & rollback plan:
    • Risk is limited to workflow execution if any pinned action SHA was resolved incorrectly.
    • Rollback is a revert of this commit, followed by reapplying the action pins with corrected SHAs if any workflow step regresses.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed:
      • AGENTS.md
      • .github/instructions/devops.instructions.md
      • .github/instructions/sonarqube_mcp.instructions.md
      • .github/workflows/build-images.yml
      • .github/workflows/ci.yml
      • .github/workflows/docs.yml
      • .github/workflows/sonar.yml
      • scripts/workflow-guardrails.sh
      • docs/adr/template.md
    • Drift found:
      • The repo policy and guardrail still allowed mutable release tags for external actions even though Sonar was flagging those refs as security hotspots.
    • Contradictions removed:
      • Removed the mismatch between Sonar’s immutable-action expectation and Revaer’s local devops policy by moving both to full SHA pinning.

PR 19 review feedback closeout

  • Status: Accepted
  • Date: 2026-04-12
  • Context:
    • PR 19 still had open review feedback after the workflow SHA pinning fix landed.
    • The remaining comments asked for one structural cleanup in crates/revaer-api/src/app/indexers.rs, one docs-workflow toolchain alignment fix, one setup-action CRLF hardening fix, and an updated PR description that better reflects the current branch scope.
  • Decision:
    • Move the large #[cfg(test)] block out of crates/revaer-api/src/app/indexers.rs into crates/revaer-api/src/app/indexers/tests.rs and keep the production module to a small #[cfg(test)] mod tests; declaration.
    • Align .github/workflows/docs.yml with the repository toolchain source of truth by using ${{ vars.RUST_TOOLCHAIN_VERSION }} instead of a hard-coded stable.
    • Strip carriage returns during apt-packages normalization in .github/actions/setup-revaer/action.yml so multiline CRLF input is tokenized consistently before validation.
    • Refresh the PR description so it calls out the broader runtime/API behavior coverage work that is already part of the branch.
  • Consequences:
    • Positive outcomes:
      • The production indexer facade file is easier to navigate and review.
      • The docs workflow now follows the same Rust toolchain source of truth as the rest of CI.
      • The setup action is more robust against pasted or Windows-originated multiline package input.
      • The PR description better matches the actual diff and review surface.
    • Risks or trade-offs:
      • Moving tests into a sibling file adds one more source file to the module tree, though it improves local readability overall.
  • Follow-up:
    • Implementation tasks:
      • Keep other large test-only blocks in production files on a short leash and move them out when they start obscuring runtime code.
    • Review checkpoints:
      • Re-run just ci and just ui-e2e, then reply to and resolve the remaining PR threads.
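The test-extraction pattern can be sketched as follows. The function is hypothetical; in the real layout the declaration reads `#[cfg(test)] mod tests;` and the body lives in a sibling indexers/tests.rs file, but it is inlined here so the sketch compiles on its own:

```rust
// Hypothetical production module kept free of large test blocks.
pub fn indexer_slug(name: &str) -> String {
    name.trim().to_ascii_lowercase().replace(' ', "-")
}

// In the real layout: `#[cfg(test)] mod tests;` pointing at a sibling file.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn slugifies_display_names() {
        assert_eq!(indexer_slug("  My Indexer "), "my-indexer");
    }
}

fn main() {
    assert_eq!(indexer_slug("  My Indexer "), "my-indexer");
    println!("{}", indexer_slug("  My Indexer "));
}
```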

Task Record

  • Motivation:
    • The user asked to address and resolve all remaining PR feedback on PR 19.
  • Design notes:
    • The indexers.rs change is intentionally structural only: the moved tests still use use super::*; from a dedicated child module file, so behavior and visibility stay unchanged.
    • The docs workflow now consumes the same configured Rust toolchain variable already used elsewhere in CI, which removes an unnecessary source of drift.
    • The setup action keeps the existing general-whitespace tokenization behavior and simply normalizes \r away before validation so CRLF input cannot leak carriage returns into package names.
  • Test coverage summary:
    • just ci
    • just ui-e2e
  • Observability updates:
    • None. No runtime logging, tracing, metrics, or health behavior changed.
  • Status-doc validation:
    • Updated the PR description to reflect the branch’s broader API/runtime behavior coverage additions alongside the instruction and workflow work.
  • Risk & rollback plan:
    • Risk is low and limited to module wiring and workflow consistency.
    • Rollback is a straightforward revert of this change set if any of the review-driven cleanups regress.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed:
      • AGENTS.md
      • .github/instructions/devops.instructions.md
      • .github/workflows/docs.yml
      • .github/actions/setup-revaer/action.yml
      • crates/revaer-api/src/app/indexers.rs
      • docs/adr/template.md
    • Drift found:
      • The docs workflow used a hard-coded Rust channel instead of the repo toolchain source of truth.
      • The setup action normalized tabs and newlines but not carriage returns in multiline package input.
      • crates/revaer-api/src/app/indexers.rs had accumulated a large test block that made the production module harder to scan.
    • Contradictions removed:
      • Removed the docs-workflow toolchain drift by pointing it back at the shared repo variable.
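The toolchain alignment recorded above might look like the following docs.yml excerpt; the action choice and SHA are illustrative, only the ${{ vars.RUST_TOOLCHAIN_VERSION }} reference comes from the change itself:

```yaml
# Hypothetical docs.yml step: consume the shared toolchain variable instead
# of a hard-coded channel.
- name: Install Rust
  uses: dtolnay/rust-toolchain@0123456789abcdef0123456789abcdef01234567 # stable
  with:
    toolchain: ${{ vars.RUST_TOOLCHAIN_VERSION }}
```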

Dependency bump rollup

  • Status: Accepted
  • Date: 2026-04-12
  • Context:
    • The repository had three open dependency PRs against main for release/package-lock.json: #16 (handlebars), #18 (lodash-es), and #20 (picomatch).
    • The user requested a single chore/deps branch and PR that folds those dependency updates together.
    • The repo requires every task to record an ADR and to complete the standard just quality gates before hand-off.
  • Decision:
    • Roll the three open dependency PRs into one branch by applying the union of their lockfile changes to release/package-lock.json.
    • Use the #20 lockfile as the base because it already carried the shared lockfile cleanup plus the picomatch upgrades, then layer in the handlebars and lodash-es version updates from #16 and #18.
    • Leave release/package.json unchanged because the requested work is a lockfile-only transitive dependency refresh, not a manifest dependency policy change.
  • Consequences:
    • Positive outcomes:
      • The repo gets one dependency-refresh PR instead of three overlapping lockfile PRs.
      • The combined branch captures the requested handlebars, lodash-es, and picomatch bumps without introducing new runtime or build dependencies.
    • Risks or trade-offs:
      • The branch depends on a manually composed lockfile union rather than a single-package-manager regeneration path because the current manifest does not expose these transitive bumps directly.
  • Follow-up:
    • Implementation tasks:
      • Keep future release-tooling dependency bumps consolidated when they overlap on the same lockfile.
    • Review checkpoints:
      • Re-run just ci and just ui-e2e, then publish chore/deps and open the requested PR.

Task Record

  • Motivation:
    • The user asked for a new chore/deps branch that upgrades all dependency changes represented by the current GitHub pull request queue and opens a PR titled chore: bumps deps.
  • Design notes:
    • release/package-lock.json is the only file touched by the upstream dependency PRs, so the rollup keeps the diff scoped to the existing release-tooling lockfile.
    • The final lockfile updates handlebars from 4.7.8 to 4.7.9, lodash-es from 4.17.23 to 4.18.1, and picomatch from 2.3.1 to 2.3.2, plus the nested tinyglobby picomatch resolution from 4.0.3 to 4.0.4.
    • No source code, workflows, or runtime manifests changed.
  • Test coverage summary:
    • just ci
    • just ui-e2e
  • Observability updates:
    • None. No runtime logging, tracing, metrics, or health behavior changed.
  • Status-doc validation:
    • No user-facing product or operator docs required updates beyond the mandatory ADR catalogue and summary entries for this task record.
  • Risk & rollback plan:
    • Risk is low and isolated to release-tooling dependency resolution.
    • Rollback is a revert of the lockfile rollup commit and PR if the dependency updates regress release automation.
  • Dependency rationale:
    • No new dependencies were added.
    • The change only updates already-resolved transitive packages captured by the existing release/package-lock.json.
  • Stale-policy check:
    • Reviewed:
      • AGENTS.md
      • .github/instructions/devops.instructions.md
      • docs/adr/template.md
    • Drift found:
      • release/package-lock.json is covered by .github/instructions/devops.instructions.md, so the release-instruction file needed an explicit lockfile policy note in the same change to satisfy the repo’s instruction-drift rule.
    • Contradictions removed:
      • Removed the release-instruction drift by documenting the expectation for lockfile-only dependency updates under release/**.

Helm chart release publishing

  • Status: Accepted
  • Date: 2026-04-12
  • Context:
    • Revaer shipped binary and image release automation, but it had no Helm chart release path for dev prereleases or stable tags.
    • Consumers needed a signed chart package, values schema validation, OCI publication, and Artifact Hub metadata aligned to the same version boundary as the existing release packages.
    • AGENTS.md requires release gates to flow through just, documentation and instruction files must stay aligned with workflow changes, and release automation must remain deterministic and low-dependency.
  • Decision:
    • Add a first-party charts/revaer chart with a values schema and Artifact Hub metadata, package it through just helm-package, publish it through just helm-publish, and wire both dev prereleases and stable tag releases so the chart version matches the corresponding GitHub release version exactly.
    • Use Helm provenance signing with the supplied GPG key pair, attach the chart archive, .prov file, and public key to GitHub releases, then publish the exact packaged chart artifact to oci://ghcr.io/<owner>/charts/revaer.
    • Alternatives considered.
      • Publish only stable charts and skip dev prereleases.
      • Repackage the chart independently during OCI publication.
      • Use Cosign-only OCI signing instead of Helm provenance files.
  • Consequences:
    • Positive outcomes.
      • Dev prereleases and stable releases now expose a signed Helm chart at the same version boundary as the existing release packages.
      • Chart consumers can validate values with values.schema.json and verify release packages with Helm provenance before install.
      • Artifact Hub metadata is published alongside the OCI chart, and the chart package now carries the Revaer logo plus the sign-key reference needed for provenance verification.
    • Risks or trade-offs.
      • Release workflows now depend on Helm, ORAS, and GPG setup.
      • Artifact Hub repository and organization branding, plus Verified publisher and official badges, still require manual control-plane approval after repository registration.
  • Follow-up:
    • Implementation tasks.
      • Register the OCI repository in Artifact Hub, set the repository ID in workflow configuration, use revaer-logo.png for the repository and organization branding there, and request verified publisher / official status for the Revaer organization when operational ownership is ready.
      • Monitor prerelease and stable chart publication for drift between GitHub release assets and OCI-published artifacts.
    • Review checkpoints.
      • Revisit the chart defaults when the container image location or first-run setup flow changes.

Task Record

  • Motivation:
    • Deliver a supported Helm installation path without creating a second, version-skewed release pipeline outside the existing dev and stable release flow.
  • Design notes:
    • The chart packages once per release boundary and reuses that packaged artifact for OCI publication so the .tgz attached to GitHub releases matches what is pushed to the OCI registry.
    • Dev prereleases package the chart during semantic-release prepare so the chart version matches the semantic-release version. Stable tags package from the tag name in the release workflow.
    • Signing uses Helm provenance files with HELM_GPG_PRIVATE and HELM_GPG_PUBLIC; registry publication uses HELM_API_KEY_ID and HELM_API_KEY_SECRET.
    • Chart metadata includes the Revaer logo and sign-key reference. Artifact Hub repository and organization branding, plus verified publisher and official badging, remain manual Artifact Hub actions because they require repository registration and approval.
  • Test coverage summary:
    • just helm-lint
    • just ci
    • just ui-e2e
  • Observability updates:
    • None in runtime services. Release visibility improves through additional GitHub release assets and OCI chart metadata.
  • Status-doc validation:
    • Re-checked and updated docs/release-checklist.md and the chart README to match the new Helm publication flow and manual Artifact Hub follow-up requirements.
  • Risk & rollback plan:
    • If chart publication regresses, remove the Helm workflow jobs and release scripts, delete the chart assets from release automation, and fall back to the existing binary/image-only release path while preserving the rest of CI.
  • Dependency rationale:
    • No repository dependencies were added. The change uses Helm, ORAS, and GPG as workflow/runtime tools only because Helm provenance and OCI publication require them.
  • Stale-policy check:
    • Reviewed AGENTS.md and .github/instructions/devops.instructions.md.
    • Drift found: the devops instructions did not describe Helm chart packaging, signed release assets, or the separation between GPG signing material and Helm registry credentials.
    • Removed stale references by updating the devops instructions and release checklist so the workflow changes have matching policy and operator documentation.
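
The package-once contract above can be sketched as a small Bash guard. Everything here is illustrative — the chart name, version, and the commented Helm invocations are assumptions, not the repo's actual release scripts:

```shell
# Illustrative sketch: package the chart once per release boundary, then
# verify the chart version equals the release version before reusing the
# same .tgz for both GitHub release assets and the OCI push.
set -eu

release_version="1.4.0"                    # version resolved for this release
chart_tgz="revaer-${release_version}.tgz"  # artifact `helm package` would emit

# Real flow (shown as comments only; requires Helm and the GPG keyring):
#   helm package charts/revaer --version "$release_version" --sign --key "$key_id"
#   helm push "$chart_tgz" "oci://ghcr.io/<owner>/charts"

chart_version="${chart_tgz#revaer-}"
chart_version="${chart_version%.tgz}"

if [ "$chart_version" != "$release_version" ]; then
  echo "chart/release version mismatch: $chart_version != $release_version" >&2
  exit 1
fi
echo "versions aligned: $chart_version"
```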

Helm Feedback And Sonar Closeout

  • Status: Accepted
  • Date: 2026-04-13
  • Context:
    • PR 23 added Helm packaging and publishing, then picked up follow-up review comments and 21 Sonar shell issues in the release scripts.
    • The release flow already relied on signed chart artifacts and separate Artifact Hub repository metadata, so the cleanup needed to preserve that contract rather than redesign it.
  • Decision:
    • Harden the Helm shell scripts in place by adopting explicit Bash conditionals, clearer helper-local variables, and explicit helper returns where Sonar flagged maintainability issues.
    • Tighten the release path by excluding artifacthub-repo.yml from packaged chart tarballs, exporting temporary secret key material with owner-only permissions, and verifying .tgz plus .prov artifacts before OCI publication.
  • Consequences:
    • The Helm release path remains aligned with the original design but now satisfies current PR review feedback and Sonar shell-quality expectations.
    • Publishing is slightly stricter: missing provenance or keyring assets now fail the publish step instead of allowing an unsigned chart push.
  • Follow-up:
    • Let GitHub Actions and SonarCloud rescan PR 23 after the branch update.
    • Keep future Helm script changes aligned with .github/instructions/devops.instructions.md so instruction-drift stays explicit.

Task Record

  • Motivation:
    • Clear the remaining PR review comments and remove the new-code Sonar findings on the Helm release work before merge.
  • Design notes:
    • Added .helmignore rather than moving repository metadata out of the chart tree, because the packaging flow already copies the chart directory and Helm natively supports excluding non-chart files.
    • Kept provenance verification in helm-publish.sh so both prerelease and stable publication paths enforce the same signed-artifact contract.
  • Test coverage summary:
    • Reran just helm-lint.
    • Reran just ci.
    • Reran just ui-e2e.
  • Observability updates:
    • No runtime logging, tracing, metrics, or health-surface changes were introduced; this work is limited to release automation and chart packaging hygiene.
  • Status-doc validation:
    • Re-checked the Helm release instruction surface in .github/instructions/devops.instructions.md and updated it to match the tightened packaging and publish behavior.
    • Updated ADR indexes so the task record is discoverable from the docs navigation.
  • Risk & rollback plan:
    • Main risk is over-constraining release packaging if expected provenance assets are missing. Rollback is a revert of this closeout commit, restoring the prior packaging behavior.
    • The permission hardening and .helmignore changes are low-risk because they narrow artifact contents and file exposure rather than widening behavior.
  • Dependency rationale:
    • No new dependencies were added. The changes reuse existing Bash, Helm, GPG, and ORAS tooling already required by the Helm release flow.
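
The hardening pattern described above — explicit Bash conditionals, helper-local variables, explicit returns, owner-only secret files, and pre-publish artifact verification — sketches as follows. Function names and file paths are illustrative, not the repo's actual scripts:

```shell
set -euo pipefail

# Explicit conditional and helper-local variable, per the Sonar cleanup.
require_asset() {
  local path="$1"
  if [[ ! -f "$path" ]]; then
    echo "missing release asset: $path" >&2
    return 1
  fi
  return 0   # explicit return instead of implicit fallthrough
}

# Owner-only permissions for temporary secret key material.
keyring="$(mktemp)"
chmod 600 "$keyring"

# Verify both the chart archive and its provenance file before any publish.
touch revaer-0.1.0.tgz revaer-0.1.0.tgz.prov   # stand-ins for packaged assets
require_asset revaer-0.1.0.tgz
require_asset revaer-0.1.0.tgz.prov
echo "signed chart assets verified"
```

Missing provenance or keyring assets fail the helper (nonzero return) rather than allowing an unsigned push, matching the stricter publish behavior above.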

CI Workflow Permissions Regression

  • Status: Accepted
  • Date: 2026-04-14
  • Context:
    • The Helm publishing work merged to main left .github/workflows/ci.yml with duplicate permissions keys on the build-images caller job.
    • GitHub Actions rejects duplicate keys at workflow-parse time, so the entire CI workflow failed before any jobs ran.
  • Decision:
    • Keep the original build-images caller permissions block and remove the duplicate lower block so the workflow remains valid YAML and preserves the scopes required by the reusable image-build workflow.
    • Record the regression explicitly because workflow syntax failures bypass normal job-level validation and can break the default branch immediately.
  • Consequences:
    • CI parses and schedules again on main without changing build behavior or token scope.
    • The reusable image-build flow still receives the required caller permissions, including packages: write.
  • Follow-up:
    • Re-run GitHub Actions on the repaired workflow.
    • Continue reviewing workflow structure changes against the devops instruction file when modifying reusable workflow callers.

Task Record

  • Motivation:
    • Restore the default-branch CI workflow after GitHub rejected the merged workflow definition.
  • Design notes:
    • The fix is intentionally minimal: remove only the duplicated permissions mapping and leave the existing higher-scope block in place because the reusable workflow already depends on those permissions.
  • Test coverage summary:
    • Reran just ci.
    • Reran just ui-e2e.
  • Observability updates:
    • No runtime observability surfaces changed; this is a workflow-definition repair only.
  • Stale-policy check:
    • Reviewed AGENTS.md and .github/instructions/devops.instructions.md for workflow-change requirements and ADR task-record requirements.
    • Drift was found: the previous ADR text said no instruction wording change was needed even though this fix adds a reusable-workflow caller permission-map rule to .github/instructions/devops.instructions.md.
    • Removed that contradiction by documenting the new instruction wording explicitly and confirming the ADR catalogue and docs summary were updated for this task record.
  • Risk & rollback plan:
    • Low risk because the change removes invalid duplicate YAML without changing job logic.
    • Rollback is a revert of this commit, though that would reintroduce the parse failure.
  • Dependency rationale:
    • No new dependencies were added.
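
The repaired caller job keeps a single permissions mapping. A sketch of the shape (the job and workflow names follow the text above; scopes other than packages: write are assumptions):

```yaml
build-images:
  uses: ./.github/workflows/build-images.yml   # reusable image-build workflow
  permissions:        # exactly one permissions key per job
    contents: read    # assumed scope
    packages: write   # required by the reusable image-build flow
```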

Trivy Config Baseline

  • Status: Accepted
  • Date: 2026-04-16
  • Context:
    • Revaer’s image scan workflow uses Trivy, but the repository had no root trivy.yaml.
    • Trivy automatically reads trivy.yaml from the current working directory, so keeping a repo-local baseline config makes the scan policy explicit and reusable across local and CI invocations.
  • Decision:
    • Add a root trivy.yaml that encodes Revaer’s baseline Trivy scan posture.
    • Keep the baseline conservative and aligned with existing image-scan behavior by scanning for vulnerabilities and secrets, restricting findings to HIGH and CRITICAL, and leaving unfixed vulnerabilities visible.
  • Consequences:
    • The repository now has a valid Trivy configuration file that local invocations and CI can share.
    • Workflow steps can still override output format, SARIF path, and exit-code behavior without forking the underlying baseline policy.
  • Follow-up:
    • Re-run Trivy-backed image scans against the repository workflows.
    • Keep trivy.yaml aligned with future workflow policy changes if scan scope or severity thresholds change.

Task Record

  • Motivation:
    • Make Trivy configuration explicit in-repo instead of relying on implicit defaults only.
  • Design notes:
    • The config intentionally mirrors the repo’s current image-scan posture rather than broadening coverage or altering CI failure conditions.
    • Report formatting and exit behavior were left out of trivy.yaml because the reusable image workflow already sets those per job.
  • Test coverage summary:
    • Validated the config structure against Trivy’s published configuration-file schema and option names.
    • Reran just ci.
    • Reran just ui-e2e.
  • Observability updates:
    • No runtime observability surfaces changed; this is repository scan-policy configuration only.
  • Stale-policy check:
    • Reviewed AGENTS.md and .github/instructions/devops.instructions.md.
    • No instruction drift was found that required a wording change for this config-only addition.
    • Updated the ADR catalogue and docs summary for the new task record.
  • Risk & rollback plan:
    • Low risk because the file only codifies the existing Trivy baseline and workflow steps can still override job-specific reporting behavior.
    • Rollback is a revert of this ADR and trivy.yaml if a future Trivy release requires a different config shape.
  • Dependency rationale:
    • No new dependencies were added.
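
The baseline described above maps to a trivy.yaml of roughly this shape. Field names follow Trivy's published config-file reference, but treat the exact layout as an assumption rather than the repo's literal file:

```yaml
# Baseline scan posture: vulnerabilities and secrets, HIGH/CRITICAL only,
# unfixed findings left visible.
severity:
  - HIGH
  - CRITICAL
scan:
  scanners:
    - vuln
    - secret
vulnerability:
  ignore-unfixed: false
```

Output format, SARIF paths, and exit-code behavior stay out of the file so workflow steps can override them per job, as noted above.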

Trivy Container And Sonar PGSQL Config

  • Status: Accepted
  • Date: 2026-04-16
  • Context:
    • Revaer now has a root trivy.yaml, but it only expressed generic scanner and severity settings.
    • The repository’s Sonar configuration documents PostgreSQL migration noise, yet it did not explicitly map PostgreSQL-oriented file suffixes such as .pgsql and .plpgsql into Sonar’s available SQL analyzer path.
  • Decision:
    • Extend trivy.yaml with explicit container-image settings so image scans prefer remote registry artifacts, inspect both OS and library packages, and include image misconfiguration checks alongside vulnerability and secret scanning.
    • Update sonar-project.properties to keep .sql mapped to PL/SQL and explicitly add .pgsql and .plpgsql suffixes, while leaving the existing PostgreSQL-noise exclusions and ignored-rule posture in place.
  • Consequences:
    • Trivy’s checked-in baseline now describes the container-image behavior Revaer expects instead of relying on image-command defaults alone.
    • Sonar remains best-effort for PostgreSQL stored procedures, but PostgreSQL-specific suffixes are now discoverable by analysis without pretending SonarCloud has a native PostgreSQL dialect mode.
  • Follow-up:
    • Re-run Trivy-backed image scans after workflow execution to confirm the container baseline behaves as expected.
    • Revisit Sonar SQL scope if SonarCloud adds PostgreSQL-aware analysis that can replace the PL/SQL suffix-mapping workaround.

Task Record

  • Motivation:
    • Make the repo’s Trivy and Sonar SQL behavior explicit for container images and PostgreSQL procedure files.
  • Design notes:
    • trivy.yaml now codifies image-source preference and package/image scan scope while still allowing workflow steps to override output and exit handling.
    • Sonar suffix mapping stays conservative: .sql, .pgsql, and .plpgsql are routed into the existing PL/SQL analyzer because that is the only available analyzer path documented for this setup.
  • Test coverage summary:
    • Verified locally that Trivy v0.69.3 loads trivy.yaml.
    • Reran just ci.
    • Reran just ui-e2e.
  • Observability updates:
    • No runtime observability surfaces changed; this is repository scan-configuration maintenance only.
  • Stale-policy check:
    • Reviewed AGENTS.md, .github/instructions/devops.instructions.md, and .github/instructions/sonarqube_mcp.instructions.md.
    • Drift was found in the Sonar instruction set: it did not state the repository’s explicit PostgreSQL suffix-mapping rule.
    • Removed that gap by adding the PostgreSQL suffix-mapping guidance to .github/instructions/sonarqube_mcp.instructions.md.
  • Risk & rollback plan:
    • Risk is limited to CI/static-analysis signal changes from broader Trivy image scanning and more explicit Sonar SQL suffix routing.
    • Rollback is a revert of trivy.yaml, sonar-project.properties, and this ADR if scan noise or compatibility regresses.
  • Dependency rationale:
    • No new project dependencies were added.
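
The suffix routing reads as a small sonar-project.properties fragment. The PL/SQL suffix key is Sonar's standard property name; the rest of the repo's Sonar configuration is unchanged and omitted here:

```properties
# Route PostgreSQL-flavored files into the available PL/SQL analyzer path.
sonar.plsql.file.suffixes=sql,pgsql,plpgsql
```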

Security Dependency Refresh For PR 25

  • Status: Accepted
  • Date: 2026-04-16
  • Context:
    • PR 25 was failing Run Audit on new rustls-webpki advisories and Check Deny on stale exception state.
    • The repository also carried an older RUSTSEC-2026-0097 exception that needed to be re-evaluated against the live dependency graph rather than left untouched.
  • Decision:
    • Update rustls-webpki to 0.103.12 and refresh the rand 0.9 line to 0.9.4 in Cargo.lock.
    • Keep the cargo audit ignore for RUSTSEC-2026-0097 only in .secignore, because rand 0.8.5 still arrives transitively through sqlx-postgres.
    • Remove the stale cargo-deny advisory ignore for RUSTSEC-2026-0097 and update the duplicate-version skip entry from rand@0.9.2 to rand@0.9.4.
  • Consequences:
    • The PR’s audit failures for RUSTSEC-2026-0098 and RUSTSEC-2026-0099 are cleared by dependency refresh instead of by adding new ignores.
    • The old rand advisory exception is narrowed to the remaining unresolved sqlx-postgres path instead of covering both old and new rand branches.
    • cargo-deny no longer carries an unmatched advisory ignore or an outdated duplicate-version skip for rand 0.9.2.
  • Follow-up:
    • Keep monitoring sqlx updates for a release that removes the remaining rand 0.8.5 path.
    • Remove RUSTSEC-2026-0097 from .secignore once the workspace no longer resolves that version.

Task Record

  • Motivation:
    • Restore PR 25’s failing audit/deny checks by updating dependencies where compatible fixes exist and cleaning up stale security exceptions.
  • Design notes:
    • The dependency refresh was intentionally limited to lockfile-compatible updates that the existing manifests can absorb without a broader dependency migration.
    • postgres-protocol was tested and then reverted because it introduced unnecessary duplicate-crate churn without solving the remaining rand 0.8.5 advisory path.
  • Test coverage summary:
    • Reran cargo audit with the live ignore set.
    • Reran cargo deny check.
    • Reran just ci.
    • Reran just ui-e2e.
  • Observability updates:
    • No runtime observability surfaces changed; this is dependency and policy maintenance only.
  • Stale-policy check:
    • Reviewed AGENTS.md, .secignore, and deny.toml.
    • Drift was found: deny.toml still ignored RUSTSEC-2026-0097 even though cargo-deny no longer detected that advisory, and it still skipped rand@0.9.2 after the lockfile moved to rand@0.9.4.
    • Removed those stale exception details and updated the remaining audit ignore comment to document the actual unresolved sqlx-postgres path.
  • Risk & rollback plan:
    • Risk is limited to dependency-resolution regressions from lockfile updates and stricter security check posture.
    • Rollback is a revert of the lockfile and exception-file changes if they destabilize CI unexpectedly.
  • Dependency rationale:
    • No new first-party dependencies were added.
    • Lockfile refreshes were preferred over adding fresh ignores because fixed compatible releases already existed for rustls-webpki and the rand 0.9 branch.
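
The cargo-deny side of the cleanup amounts to edits of roughly this shape in deny.toml. Entry syntax follows cargo-deny's advisories/bans schema; treat it as a sketch, not the repo's literal file:

```toml
[advisories]
# Stale RUSTSEC-2026-0097 ignore removed: cargo-deny no longer matched it here.
ignore = []

[bans]
# Duplicate-version skip moved from the old rand@0.9.2 to the refreshed line.
skip = [
  { crate = "rand@0.9.4", reason = "rand 0.8.5 still arrives via sqlx-postgres" },
]
```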

PR Validation And Main Release Workflow Split

  • Status: Accepted
  • Date: 2026-04-16
  • Context:
    • Both .github/workflows/pr.yml and .github/workflows/ci.yml were running the same validation graph on pull requests, which duplicated formatting, lint, test, coverage, audit, deny, and E2E work.
    • The repository cannot merge directly to main, so pull requests are the enforced validation boundary before any post-merge or tag release activity happens.
  • Decision:
    • Keep all pull-request validation in .github/workflows/pr.yml.
    • Restrict .github/workflows/ci.yml to release-only work for main pushes and stable tags: building release artifacts, publishing releases, publishing Helm charts, and building images.
    • Update the devops instruction file to make the PR-validation-versus-release-workflow split explicit.
  • Consequences:
    • Pull requests no longer pay for two copies of the same validation graph.
    • main pushes and stable tags keep the release pipeline they need without reopening the full validation matrix after merge.
    • Future workflow edits have a clearer contract for where verification belongs and where release automation belongs.
  • Follow-up:
    • Monitor PR and main workflow runtimes after the split to confirm the duplicate validation load is gone.
    • If more release-only steps are added later, keep them in ci.yml unless they are required to validate a pull request before merge.

Task Record

  • Motivation:
    • Remove duplicated PR validation work and align workflow ownership with the repository’s branch-protection model.
  • Design notes:
    • .github/workflows/ci.yml now triggers only on push to main and release tags and contains release-artifact, publish, Helm, and image-build jobs only.
    • .github/workflows/pr.yml remains the only workflow that runs instruction drift, lint, tests, audit, deny, coverage, and UI E2E checks for pull requests.
  • Test coverage summary:
    • Reran just instruction-drift.
    • Reran just ci.
    • Reran just ui-e2e.
  • Observability updates:
    • No runtime observability surfaces changed; this work only changes GitHub Actions workflow boundaries.
  • Stale-policy check:
    • Reviewed AGENTS.md, .github/workflows/ci.yml, .github/workflows/pr.yml, and .github/instructions/devops.instructions.md.
    • Drift was found: the workflow pair still duplicated PR validation despite the repository relying on PRs as the enforced validation boundary.
    • Removed the stale overlap by making pr.yml the sole validation workflow and updating the devops instruction text to document that split.
  • Risk & rollback plan:
    • Risk is missing a validation guard after merge if a needed check was accidentally removed from both workflows.
    • Rollback is to revert the workflow split commit, which restores the old duplicated validation behavior immediately.
  • Dependency rationale:
    • No new dependencies were added.
    • The change reuses the existing workflows and reusable image-build flow rather than introducing a new reusable validation workflow in the same change.
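
The split leaves ci.yml with a release-only trigger surface along these lines (the v*.*.* tag glob is an assumption based on the stable-tag wording; exact syntax may differ):

```yaml
on:
  push:
    branches: [main]    # post-merge release work
    tags: ["v*.*.*"]    # stable tag releases
```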

Release Tag Image Job Dependency Split

  • Status: Accepted
  • Date: 2026-04-16
  • Context:
    • PR 25 split pull-request validation into pr.yml and kept post-merge and tag release work in ci.yml.
    • The remaining build-images job still declared needs: [load-matrix, release-dev], even though release-dev only runs on main, which meant stable tag pushes could skip image publication entirely: the skipped release-dev dependency caused the job to be skipped before the tag branch of its condition was ever evaluated.
  • Decision:
    • Split image publication in ci.yml into build-images-dev for main pushes and build-images-release for stable tags.
    • Keep the shared reusable workflow and matrix source, but give the dev and release jobs separate prerequisites and tags.
    • Update the devops instruction file to record that tag image publication must not depend on main-only jobs.
  • Consequences:
    • Stable tags can publish release images without inheriting a skipped release-dev dependency.
    • main dev image publication still waits for the dev release metadata it needs.
    • The release-only workflow remains single-purpose without reintroducing duplicate PR validation.
  • Follow-up:
    • Recheck GitHub Actions on PR 25 to confirm the duplicate-check concern is resolved and that tag image publication remains reachable.
    • Keep future release-only workflow edits explicit about branch-specific prerequisites.

Task Record

  • Motivation:
    • Address the remaining PR review feedback on ci.yml and remove a real tag-release image-publication skip path.
  • Design notes:
    • The fix preserves the existing reusable build-images.yml flow and only separates the caller jobs by branch-specific dependency needs.
    • The change intentionally avoids reintroducing PR validation into ci.yml; pr.yml stays the sole validation workflow.
  • Test coverage summary:
    • Reran just instruction-drift.
    • Reran just ui-e2e.
    • Reran just ci.
  • Observability updates:
    • No runtime observability surfaces changed; this is release workflow orchestration only.
  • Stale-policy check:
    • Reviewed AGENTS.md, .github/workflows/ci.yml, .github/instructions/devops.instructions.md, and the open PR feedback on PR 25.
    • Drift was found: the release-only workflow contract was documented, but ci.yml still allowed a tag release path to depend on the main-only release-dev job.
    • Removed that contradiction by splitting dev and stable image publication and documenting the branch-specific dependency rule in the devops instruction file.
  • Risk & rollback plan:
    • Risk is limited to release image publication paths if one of the new caller jobs has the wrong branch condition or reusable-workflow inputs.
    • Rollback is a revert of the job split if GitHub Actions exposes a regression in tag or main image publication.
  • Dependency rationale:
    • No new dependencies were added.
    • The existing reusable image-build workflow was retained instead of introducing more workflow layers for a single dependency fix.
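
The dependency split sketches as two caller jobs. Job names follow the text above; the if expressions and the absence of other inputs are assumptions:

```yaml
build-images-dev:
  if: github.ref == 'refs/heads/main'
  needs: [load-matrix, release-dev]   # dev images wait on dev release metadata
  uses: ./.github/workflows/build-images.yml

build-images-release:
  if: startsWith(github.ref, 'refs/tags/v')
  needs: [load-matrix]                # no main-only prerequisites on the tag path
  uses: ./.github/workflows/build-images.yml
```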

PR 25 Deny Exception And Sonar Hotspot Closeout

  • Status: Accepted
  • Date: 2026-04-16
  • Context:
    • PR 25 still had two failing external checks after the workflow split work: Check Deny and SonarCloud Code Analysis.
    • The GitHub Actions log for Check Deny showed that cargo-deny still reported RUSTSEC-2026-0097 through the live dependency graph, while SonarCloud reported a single hotspot on .github/workflows/ci.yml for passing inherited secrets into the reusable image-build workflow.
  • Decision:
    • Restore the temporary RUSTSEC-2026-0097 ignore in deny.toml so cargo-deny matches the already-documented unresolved sqlx-postgres -> rand 0.8.5 path.
    • Remove secrets: inherit from the release-only build-images-dev and build-images-release reusable-workflow caller jobs because those jobs do not require repository secrets beyond the default GitHub token and their explicit job permissions.
    • Record the closeout explicitly rather than burying it inside earlier ADRs, because this is a separate follow-up on live PR feedback and live CI output.
  • Consequences:
    • cargo-deny and cargo audit now agree on the temporary handling of the unresolved RUSTSEC-2026-0097 path.
    • SonarCloud no longer sees the reusable workflow callers as over-broad secret pass-through surfaces.
    • The PR keeps its single validation workflow split while also tightening the release-only caller jobs.
  • Follow-up:
    • Remove RUSTSEC-2026-0097 from both .secignore and deny.toml once the workspace no longer resolves rand 0.8.5.
    • Keep reusable-workflow callers on explicit inputs, permissions, and secrets only; avoid reintroducing secrets: inherit unless a callee actually consumes repository secrets.

Task Record

  • Motivation:
    • Clear the remaining failing PR checks on PR 25 using the actual current CI log and Sonar hotspot output rather than assumptions from earlier revisions.
  • Design notes:
    • The deny fix intentionally restores a time-bounded exception instead of pretending the advisory is gone; the live GitHub Actions output confirms cargo-deny still resolves the vulnerable branch.
    • The Sonar hotspot was fixed by narrowing the workflow caller surface, not by suppressing analysis or weakening security tooling.
  • Test coverage summary:
    • Reran just deny.
    • Queried the live SonarCloud hotspot API for PR 25 to identify the exact flagged line and rule.
    • Reran just instruction-drift.
    • Reran just ci.
    • Reran just ui-e2e.
  • Observability updates:
    • No runtime observability surfaces changed; this is CI policy and workflow hardening only.
  • Stale-policy check:
    • Reviewed AGENTS.md, .github/workflows/ci.yml, deny.toml, .secignore, and .github/instructions/devops.instructions.md.
    • Drift was found: deny.toml no longer matched the still-live RUSTSEC-2026-0097 exception posture, and the reusable workflow caller still passed inherited secrets despite not consuming them.
    • Removed that contradiction by restoring the temporary deny exception and dropping inherited secrets from the image-build caller jobs.
  • Risk & rollback plan:
    • Risk is limited to CI policy behavior: the deny exception could mask the advisory longer than intended, and removing inherited secrets could break the reusable workflow if it secretly relied on repository secrets.
    • Rollback is to revert this commit, which restores the prior deny posture and reusable-workflow secret inheritance while the branch is re-evaluated.
  • Dependency rationale:
    • No new dependencies were added.
    • The fix stays within the existing RustSec exception mechanism and GitHub Actions workflow model.
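
The hotspot fix narrows the caller surface by dropping the blanket secret pass-through. A sketch (the jobs above need nothing beyond the default token and their explicit permissions, so no secrets block remains):

```yaml
build-images-release:
  uses: ./.github/workflows/build-images.yml
  permissions:
    packages: write
  # secrets: inherit   # removed: the callee consumes no repository secrets
```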

PR 25 Prerelease Tag Release Guard

  • Status: Accepted
  • Date: 2026-04-16
  • Context:
    • After splitting PR validation and release-only workflow responsibilities, ci.yml still allowed build-release to run on prerelease tags because the workflow trigger matched v*.*.* and the stable-tag filter only existed on downstream publish jobs.
    • PR 25 had an unresolved review thread calling out that prerelease tags such as v1.2.3-rc.1 could still build and upload stable release artifacts even though later publish jobs correctly skipped them.
  • Decision:
    • Add a job-level guard on build-release so prerelease tags are excluded at the point stable release artifacts would otherwise be created.
    • Update the devops instruction file to require stable-tag exclusion at the job boundary, not only in downstream publish steps.
  • Consequences:
    • Stable release artifact creation now matches the stable-tag-only contract already used by the later publish jobs.
    • Prerelease tags no longer produce misleading stable release artifacts in ci.yml.
    • The PR thread can be resolved with an actual workflow fix rather than an explanation-only response.
  • Follow-up:
    • Keep future release-only tag jobs aligned on the same prerelease exclusion rule.
    • If prerelease artifact publication is needed later, add an explicit prerelease path instead of letting the stable release path partially run.

Task Record

  • Motivation:
    • Close the remaining actionable PR feedback item on release-tag behavior with a minimal workflow fix.
  • Design notes:
    • The change is intentionally narrow: it preserves the existing trigger surface and downstream stable-release guards, and adds the missing stable-tag filter to the release-artifact job itself.
  • Test coverage summary:
    • Reran just instruction-drift.
    • Reran just ci.
    • Reran just ui-e2e.
  • Observability updates:
    • No runtime observability surfaces changed; this work only tightens release workflow orchestration.
  • Stale-policy check:
    • Reviewed AGENTS.md, .github/workflows/ci.yml, and .github/instructions/devops.instructions.md.
    • Drift was found: the documented stable-release-only tag intent was not enforced uniformly because build-release still ran on prerelease tags.
    • Removed that contradiction by adding the prerelease tag guard to build-release and documenting the rule in the devops instruction file.
  • Risk & rollback plan:
    • Risk is limited to release automation; an overly broad guard could skip legitimate stable release builds.
    • Rollback is a revert of this commit if stable tags stop producing release artifacts unexpectedly.
  • Dependency rationale:
    • No new dependencies were added.
    • The fix stays within the existing workflow and policy files rather than introducing new release automation layers.
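
A job-level guard of this shape excludes prerelease tags such as v1.2.3-rc.1 before any stable artifacts are built. The contains-based expression is an assumption about how the filter is written, not the repo's literal condition:

```yaml
build-release:
  # Run only for stable tags; a hyphenated tag (v1.2.3-rc.1) is a prerelease.
  if: startsWith(github.ref, 'refs/tags/v') && !contains(github.ref_name, '-')
```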

Semantic Release Prepare Template Fix

  • Status: Accepted
  • Date: 2026-04-18
  • Context:
    • The Publish Dev Release job in GitHub Actions failed on April 17, 2026 in run 24586113873, job 71897294770, during the semantic-release prepare step.
    • release/release.config.js embedded Bash parameter expansion syntax inside @semantic-release/exec prepareCmd, but that field is first rendered through lodash templates.
    • The ${REVAER_ENABLE_HELM_RELEASE_ASSETS:-0} fragment was parsed as a template expression, causing SyntaxError: Unexpected token ':' before the shell command ran.
  • Decision:
    • Replace the parameter-expansion form with a plain quoted environment-variable comparison that semantic-release leaves untouched.
    • Keep the Helm packaging behavior gated by REVAER_ENABLE_HELM_RELEASE_ASSETS so prerelease packaging still happens only in the intended workflow path.
  • Consequences:
    • Dev release preparation no longer fails during template rendering.
    • Unset REVAER_ENABLE_HELM_RELEASE_ASSETS still skips Helm packaging because an empty string does not match "1".
    • The release flow stays dependency-neutral and keeps the existing shell-based packaging contract.
  • Follow-up:
    • Keep shell syntax inside semantic-release command templates free of ${...} forms unless they are semantic-release placeholders.
    • Revisit other release command templates if more shell interpolation is added later.
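The gate described above can be sketched in shell. This is an illustrative stand-in for the prepareCmd fragment, not the actual release.config.js contents: the point is that a plain quoted `$VAR` comparison survives lodash template rendering, while `${VAR:-0}` is parsed as a template expression first.

```shell
# Template-safe feature gate for a semantic-release prepareCmd.
# A plain quoted "$VAR" comparison passes through lodash template
# rendering untouched, while ${VAR:-0} would be parsed as a template
# expression and raise SyntaxError before the shell ever runs.
helm_release_assets_enabled() {
  # An unset or empty variable fails the match, so Helm packaging
  # is skipped by default.
  [ "$REVAER_ENABLE_HELM_RELEASE_ASSETS" = "1" ]
}
```

In the real configuration the comparison is embedded directly in the prepareCmd string; the function form here exists only to make the behavior checkable.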

Task Record

  • Motivation:
    • Restore the failing main release workflow with the smallest safe change that matches the logged failure.
  • Design notes:
    • The fix is limited to release/release.config.js; it preserves the existing write-release-info and Helm packaging order and only changes the environment-variable check syntax.
  • Test coverage summary:
    • Reran a semantic-release dry run locally against release/release.config.js.
    • Reran just ci.
    • Reran just ui-e2e.
  • Observability updates:
    • No runtime observability surfaces changed; this work only repairs release automation.
  • Stale-policy check:
    • Reviewed AGENTS.md, .github/instructions/devops.instructions.md, .github/instructions/rust.instructions.md, justfile, and release/release.config.js.
    • Drift was found: the release configuration violated the documented semantic-release prepare-phase contract because template-hostile shell syntax prevented the command from executing.
    • Removed that contradiction by switching the gate to a template-safe environment-variable comparison without changing the release workflow contract.
  • Risk & rollback plan:
    • Risk is limited to dev release packaging; if the new condition were mistyped, Helm assets could be skipped unexpectedly.
    • Rollback is a revert of this commit and restoration of the prior release config once an alternative template-safe gating strategy is ready.
  • Dependency rationale:
    • No new dependencies were added.
    • The fix stays inside the existing semantic-release configuration instead of adding wrapper scripts or release plugins.

CI ORAS Setup Action Refresh

  • Status: Accepted
  • Date: 2026-04-19
  • Context:
    • The CI workflow failed on main on April 19, 2026 in run 24634790324, job 72028754428, at the Publish Dev Helm Chart Set up ORAS step.
    • .github/workflows/ci.yml pinned oras-project/setup-oras to v1.2.0, and that action release reported that official ORAS CLI releases do not contain version 1.2.2.
    • Upstream oras-project/setup-oras v2.0.0 documents the same version input, runs on Node 24, and explicitly supports ORAS CLI 1.3.1.
    • Local end-to-end rehearsal of release/scripts/helm-publish.sh against a disposable OCI registry surfaced a second failure: oras push rejected the absolute artifacthub-repo.yml path under ORAS CLI 1.3.1.
  • Decision:
    • Update both Helm publication jobs in .github/workflows/ci.yml to pin oras-project/setup-oras to the v2.0.0 commit SHA.
    • Align the requested ORAS CLI version to 1.3.1, which the pinned action release explicitly supports.
    • Update release/scripts/helm-publish.sh to invoke oras push from dist/helm with a relative metadata filename so the script remains compatible with ORAS CLI path validation.
    • Add a dedicated manual verification workflow that packages and publishes a caller-specified or auto-generated prerelease chart version through the same just helm-package and just helm-publish entrypoints on GitHub-hosted runners.
    • Default the manual verification workflow’s generated prerelease version to a PR-scoped pattern that includes the open PR number when one exists for the branch.
    • Record the workflow maintenance expectation in .github/instructions/devops.instructions.md.
  • Consequences:
    • The failing Set up ORAS step can install a supported ORAS release again for both dev and stable Helm publication flows.
    • The ORAS setup action now runs on Node 24, removing the Node 20 deprecation warning from that step.
    • The Helm publish script now completes under the same ORAS CLI release used by the updated workflow instead of failing after login on Artifact Hub metadata upload.
    • Branch verification can now exercise real registry publication under GitHub Actions without weakening the repo rule that ci.yml remains the post-merge release workflow.
    • PR-scoped verification versions are easier to correlate in the registry and Artifact Hub with the review under test.
    • Future ORAS workflow updates have an explicit policy hook tied to the pinned action’s supported release catalog.
  • Follow-up:
    • Monitor the replacement branch PR checks to confirm both Helm publication paths stay healthy.
    • Revisit other third-party actions still running on Node 20 before GitHub’s forced Node 24 migration date.
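The relative-path contract behind the helm-publish.sh change can be sketched with a small helper (the helper name is hypothetical; the media type is Artifact Hub's repository-metadata layer type): ORAS CLI 1.3.1 validates pushed file paths, so the script changes into dist/helm and references the metadata file by base name only.

```shell
# Sketch: build the file:mediatype layer argument for `oras push` from
# the base filename only, since ORAS CLI 1.3.1 rejects absolute paths.
metadata_layer_arg() {
  local file="$1"
  printf '%s:%s' "$(basename "$file")" \
    'application/vnd.cncf.artifacthub.repository-metadata.layer.v1.yaml'
}
```

The publish script then runs `oras push` from inside dist/helm and passes this argument; only the path handling is the point here, not the full push invocation.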

Task Record

  • Motivation:
    • Restore the broken main CI workflow and keep Helm publication unblocked with the smallest safe workflow-only change.
  • Design notes:
    • The fix stays within .github/workflows/ci.yml and keeps the existing publish flow, permissions, and just entrypoints unchanged.
    • The action pin was advanced to the upstream v2.0.0 SHA after confirming the version input contract still matches the current README and action.yml.
    • The release script change is path-only: the Artifact Hub metadata payload and media type stay unchanged, but oras push now sees a relative file name from inside dist/helm.
    • The manual verification workflow is dispatch-only, publishes with the same just entrypoints as the release path, and keeps ci.yml scoped to main pushes and release tags.
    • When callers do not override the version explicitly, the workflow resolves the open PR for the branch with gh api and generates a semver-compatible prerelease string containing that PR number.
  • Test coverage summary:
    • Verified the failing job log from run 24634790324 and confirmed the failure string.
    • Rehearsed release/scripts/helm-package.sh and release/scripts/helm-publish.sh locally against a disposable TLS-backed OCI registry with a temporary GPG signing key and verified both the chart layer and Artifact Hub metadata manifest were pushed successfully after the script change.
    • Planned verification: dispatch .github/workflows/helm-oci-verify.yml on the branch and confirm the GitHub-hosted publish completes.
    • Planned verification: just ci.
    • Planned verification: just ui-e2e.
  • Observability updates:
    • No runtime observability surfaces changed; this task only repairs CI workflow setup.
  • Stale-policy check:
    • Reviewed AGENTS.md, .github/instructions/devops.instructions.md, .github/instructions/rust.instructions.md, .github/workflows/ci.yml, .github/workflows/helm-oci-verify.yml, and the upstream oras-project/setup-oras release metadata and docs.
    • Drift was found: the workflow pinned an action release whose bundled ORAS catalog no longer matched the requested CLI version, and the Helm publish script assumed an ORAS path mode that current ORAS releases reject.
    • Removed those contradictions by pinning a current node24-capable action release, switching metadata upload to a relative path, adding a manual GitHub-hosted verification workflow, and documenting those requirements in the devops instruction file.
  • Risk & rollback plan:
    • Risk is limited to Helm publication jobs; if ORAS CLI semantics change again, the publish commands could still fail later in the job.
    • Rollback is a revert of this change or a narrower repin to a different supported setup-oras release plus a compatible ORAS metadata upload strategy.
  • Dependency rationale:
    • No repository dependencies were added.
    • The change updates an existing GitHub Action pin instead of adding custom install scripts or new workflow dependencies.

PR Workflow Helm And Sonar Consolidation

  • Status: Accepted
  • Date: 2026-04-19
  • Context:
    • PR validation already builds and publishes multi-arch container images through the reusable .github/workflows/build-images.yml workflow.
    • The requested PR flow now also needs to publish a dev Helm chart, but only after the multi-arch manifest exists and without reshaping the current workflow dependency graph.
    • PR Sonar analysis should run inside the main PR validation workflow instead of through a separate sonar.yml pull-request trigger.
    • PR-scoped Helm artifacts must be traceable in the OCI registry and Artifact Hub back to the reviewed change.
  • Decision:
    • Extend .github/workflows/build-images.yml with an optional publish-dev-helm job that runs only when the caller enables it.
    • Keep that job dependent on create-manifest so Helm publication stays downstream of the existing multi-arch manifest creation step.
    • Have the caller pass the PR number explicitly and derive the default chart version as 0.0.0-dev.pr<PR_NUMBER>.<GITHUB_RUN_NUMBER>.
    • Reuse just helm-package and just helm-publish instead of introducing ad hoc shell publication logic.
    • Enable the new path from the existing build-pr-images job in .github/workflows/pr.yml without changing its needs graph.
    • Keep sonar.yml scoped to main pushes and move PR Sonar upload into .github/workflows/pr.yml alongside the existing coverage job.
  • Consequences:
    • PR builds can now publish a dev chart version that is directly attributable to the PR number.
    • The manifest job remains the synchronization point before registry publication of the chart.
    • The PR workflow dependency structure remains unchanged outside of the existing reusable-workflow call.
    • PR validation owns the PR Sonar path directly, avoiding a second PR-triggered workflow for the same review event.
    • Fork PRs still skip this path because build-pr-images already guards against fork execution.
  • Follow-up:
    • Verify the PR build run publishes the expected PR-scoped chart version successfully.
    • Monitor Artifact Hub ingestion delay separately from OCI publication success.
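The default version derivation above can be sketched as a one-line formatter (function name and sample inputs illustrative):

```shell
# Sketch: the default PR-scoped dev chart version, built from the
# caller-passed PR number and the workflow run number.
pr_chart_version() {
  local pr_number="$1" run_number="$2"
  printf '0.0.0-dev.pr%s.%s' "$pr_number" "$run_number"
}

pr_chart_version 27 143   # → 0.0.0-dev.pr27.143
```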

Task Record

  • Motivation:
    • Publish PR-scoped dev Helm charts from the existing PR image flow without disturbing the current dependency layout, and keep PR Sonar analysis in the main PR validation workflow.
  • Design notes:
    • The reusable workflow gained two optional inputs, publish_dev_helm and pr_number, so existing callers keep their current behavior by default.
    • The new Helm publish job resolves versions locally from the checked-out commit and the caller-provided PR number, avoiding any extra GitHub API dependency inside the reusable workflow.
    • The packaging and publish steps intentionally reuse the existing release scripts through just to keep release behavior consistent across CI entrypoints.
    • The PR workflow now uploads the coverage artifact and performs the Sonar scan from the same job that generates coverage, while sonar.yml remains a main push workflow.
  • Test coverage summary:
    • Planned verification: just ci.
    • Planned verification: just ui-e2e.
    • Planned verification: observe the PR-side Build PR Images reusable workflow run through manifest creation and dev Helm publication.
    • Planned verification: observe the PR workflow run the in-workflow Sonar scan while sonar.yml no longer triggers on pull requests.
  • Observability updates:
    • No runtime observability surfaces changed; this work only adds a CI publication path.
  • Stale-policy check:
    • Reviewed AGENTS.md, .github/instructions/devops.instructions.md, .github/workflows/pr.yml, .github/workflows/build-images.yml, and .github/workflows/sonar.yml.
    • Drift was found: PR validation built images but did not publish a PR-scoped dev Helm chart after the multi-arch manifest step, and PR Sonar analysis was still split across a separate workflow trigger.
    • Removed that drift by adding an optional post-manifest Helm publish path to the reusable image workflow, moving PR Sonar scanning into pr.yml, constraining sonar.yml to main pushes, and documenting the reusable-workflow rule in the devops instruction file.
  • Risk & rollback plan:
    • The new path depends on Helm registry credentials and chart-signing material being available to the reusable workflow caller; a missing secret will fail only the new publish step.
    • Rollback is a revert of this workflow change or disabling the caller input that enables PR-side dev Helm publication.
  • Dependency rationale:
    • No repository dependencies were added.
    • The change reuses existing pinned workflow actions and existing release scripts.

GHCR Helm Namespace Derivation

  • Status: Accepted
  • Date: 2026-04-19
  • Context:
    • The PR-side Publish Dev Helm Chart job reached just helm-publish and then failed against GHCR with response status code 403: denied: denied.
    • release/scripts/helm-publish.sh defaulted the OCI namespace to revaer/charts, which omits the GitHub owner segment required by GHCR package scopes.
    • That incorrect default affected every workflow path that reuses just helm-publish, not only the PR-side reusable image workflow.
  • Decision:
    • Derive the default Helm OCI namespace from GITHUB_REPOSITORY when available, lowercased and suffixed with /charts.
    • Keep HELM_REGISTRY_NAMESPACE as an explicit override so local disposable registry tests and any future non-GitHub targets can still set a custom namespace.
    • Update the operator-facing release checklist to describe the owner/repo-qualified GHCR path.
  • Consequences:
    • PR, main, and manual Helm publish flows now target the same owner-qualified GHCR namespace by default.
    • Existing callers that already provide HELM_REGISTRY_NAMESPACE keep their current behavior.
    • Release documentation now matches the actual GHCR package location instead of the incomplete legacy path.
  • Follow-up:
    • Re-run the failing PR publish path and confirm GHCR authentication succeeds with the owner-qualified namespace.
    • Refresh any remaining docs or automation that still reference ghcr.io/<owner>/<repo>/charts/....
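The derivation this decision introduced can be sketched as follows (function name illustrative): lowercase GITHUB_REPOSITORY plus /charts, with HELM_REGISTRY_NAMESPACE kept as an explicit override.

```shell
# Sketch of the owner/repo-qualified default namespace, with the
# HELM_REGISTRY_NAMESPACE override taking precedence when set.
helm_default_namespace() {
  if [ -n "${HELM_REGISTRY_NAMESPACE:-}" ]; then
    printf '%s' "$HELM_REGISTRY_NAMESPACE"
  else
    # GHCR package paths are lowercase, so normalize the repo slug.
    printf '%s/charts' "$(printf '%s' "$GITHUB_REPOSITORY" | tr '[:upper:]' '[:lower:]')"
  fi
}
```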

Task Record

  • Motivation:
    • Restore the failing PR-side Helm publish job and avoid repeating the same GHCR namespace bug in the main and manual publish paths.
  • Design notes:
    • The fix stays in release/scripts/helm-publish.sh so all workflow entrypoints that call just helm-publish inherit the correction automatically.
    • GITHUB_REPOSITORY is the most stable source because it already includes both owner and repo; GHCR package paths must be lowercase, so the derived value is normalized to lowercase.
  • Test coverage summary:
    • Verified the failing GitHub Actions job log for run 24639626283, job 72043750922, and confirmed the GHCR 403 denial happened during just helm-publish.
    • Planned verification: just ci.
    • Planned verification: just ui-e2e.
    • Planned verification: rerun the PR-side Publish Dev Helm Chart job and confirm GHCR authentication and push succeed.
  • Observability updates:
    • No runtime observability surfaces changed; this task only corrects CI/release publication configuration.
  • Stale-policy check:
    • Reviewed AGENTS.md, .github/instructions/devops.instructions.md, release/scripts/helm-publish.sh, and docs/release-checklist.md.
    • Drift was found: Helm publication docs and defaults referenced an incomplete GHCR namespace that omitted the repository owner.
    • Removed that drift by deriving the namespace from GITHUB_REPOSITORY and updating the checklist path.
  • Risk & rollback plan:
    • Risk is limited to chart publication paths. If a non-GitHub environment depends on the old default, it can still restore that behavior by setting HELM_REGISTRY_NAMESPACE.
    • Rollback is a revert of this script/doc change or an explicit workflow-level namespace override.
  • Dependency rationale:
    • No repository dependencies were added.
    • The fix reuses existing GitHub-provided environment metadata instead of adding workflow glue or new tooling.

PR Helm Review Follow-Ups

  • Status: Accepted
  • Date: 2026-04-19
  • Context:
    • PR review on the PR-scoped Helm publish work flagged workflow shell-safety gaps, overly broad workflow permissions, reusable-workflow secret inheritance, and a filename drift hazard in Helm metadata publication.
    • The repository policy requires workflow and release-script changes to stay aligned with the devops instruction set and task-record ADR bookkeeping.
  • Decision:
    • Harden helm-oci-verify.yml by validating manual version inputs, writing outputs with the multiline GITHUB_OUTPUT form, and moving step-consumed values through env.
    • Narrow workflow permissions to the jobs that need them, guard the PR Sonar scan for non-fork PRs with configured tokens, and replace reusable-workflow secrets: inherit with explicit Helm publishing secrets.
    • Make release/scripts/helm-publish.sh push Artifact Hub metadata by the derived metadata filename so the script stays correct if the metadata path changes.
  • Consequences:
    • The PR and manual Helm workflows are tighter against shell injection, privilege creep, and secret overexposure.
    • Manual verification inputs are stricter; unsupported chart or app version formats now fail fast instead of reaching downstream tooling.
  • Follow-up:
    • Keep future workflow-dispatch publish inputs on the same validate-then-export pattern.
    • Preserve the explicit reusable-workflow secret contract if Helm publish steps move again.
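The validate-then-export pattern with the multiline $GITHUB_OUTPUT form can be sketched like this; the SemVer-shaped regex and function name are illustrative stand-ins, not the workflow's exact pattern.

```shell
# Sketch: validate a manual version input before it reaches any
# downstream tooling, then write it with the heredoc-style
# $GITHUB_OUTPUT form so hostile values cannot inject extra outputs.
write_chart_version_output() {
  local version="$1"
  if ! printf '%s' "$version" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?$'; then
    echo "unsupported chart version: $version" >&2
    return 1
  fi
  {
    echo 'chart_version<<__CHART_VERSION__'
    printf '%s\n' "$version"
    echo '__CHART_VERSION__'
  } >> "$GITHUB_OUTPUT"
}
```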

Task Record

  • Motivation:
    • Close the open PR review threads on the Helm publish work without widening workflow scope beyond the reviewed areas.
  • Design notes:
    • chart_version now uses a SemVer-compatible validation regex because Helm chart versions must stay SemVer-shaped.
    • app_version stays intentionally narrower than arbitrary shell text because it is only used as a release identifier, not as a free-form note field.
    • pull-requests: read moved from workflow scope to the coverage and manual Helm verification jobs that actually need it.
    • The reusable image workflow call now receives only the four Helm secrets it consumes.
  • Test coverage summary:
    • just instruction-drift
    • just ci
    • just ui-e2e
  • Observability updates:
    • No runtime observability surface changed.
    • Workflow failures now report invalid manual version inputs at the validation step before packaging or publishing.
  • Status-doc validation:
    • Reviewed .github/instructions/devops.instructions.md, docs/adr/index.md, and docs/SUMMARY.md; updated them to match the new workflow and release-script constraints.
  • Risk & rollback plan:
    • Main risk is rejecting a previously tolerated manual version override. Roll back by reverting this ADR and the corresponding workflow/script changes if a legitimate version format was excluded.
    • Permission and secret changes are isolated to PR/manual workflow paths and can be reverted with a single commit if a reusable workflow contract was missed.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed AGENTS.md and .github/instructions/devops.instructions.md.
    • Drift found: the instruction set did not yet capture validated workflow_dispatch inputs or safe multiline $GITHUB_OUTPUT writes for workflow shell surfaces.
    • Removed that drift by updating the devops instructions in this change.

GHCR Helm GitHub Token Authentication

  • Status: Accepted
  • Date: 2026-04-20
  • Context:
    • PR run 24643631011 failed in Images / Helm Chart after the chart package and signature verification completed.
    • The shared release/scripts/helm-publish.sh path reached helm registry login ghcr.io and GHCR returned 403 denied when the workflow used the configured Helm API secret pair.
    • The same shared publish path is reused by the PR reusable workflow and the main and tag publish jobs in ci.yml.
  • Decision:
    • Teach release/scripts/helm-publish.sh to accept explicit HELM_REGISTRY_USERNAME and HELM_REGISTRY_PASSWORD, with a GITHUB_TOKEN fallback for ghcr.io.
    • Update GitHub-hosted GHCR publish jobs to pass github.actor plus secrets.GITHUB_TOKEN instead of the long-lived Helm API secret pair.
    • Grant packages: write to the ci.yml Helm publish jobs so the repo token can publish to GHCR.
  • Consequences:
    • PR, main, and tag Helm publication paths now authenticate to GHCR with the job-scoped repository token.
    • Local or non-GitHub publish rehearsals can still use HELM_API_KEY_* or the new explicit registry credential variables.
  • Follow-up:
    • Re-run the PR Images / Helm Chart job and confirm GHCR login and chart push succeed.
    • Keep non-GitHub registry callers on explicit override credentials instead of assuming GHCR defaults.
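The credential preference order can be sketched as a fallback chain. The HELM_API_KEY_USERNAME/PASSWORD names below are hypothetical stand-ins for the HELM_API_KEY_* pair, which this record leaves unspecified; the function name is likewise illustrative.

```shell
# Sketch of the credential selection order: explicit HELM_REGISTRY_*
# values, then the legacy HELM_API_KEY_* pair (names assumed here),
# then the job-scoped GITHUB_TOKEN for ghcr.io only.
resolve_helm_login() {
  local registry="$1"
  if [ -n "${HELM_REGISTRY_USERNAME:-}" ] && [ -n "${HELM_REGISTRY_PASSWORD:-}" ]; then
    printf '%s:%s' "$HELM_REGISTRY_USERNAME" "$HELM_REGISTRY_PASSWORD"
  elif [ -n "${HELM_API_KEY_USERNAME:-}" ] && [ -n "${HELM_API_KEY_PASSWORD:-}" ]; then
    printf '%s:%s' "$HELM_API_KEY_USERNAME" "$HELM_API_KEY_PASSWORD"
  elif [ "$registry" = "ghcr.io" ] && [ -n "${GITHUB_TOKEN:-}" ]; then
    printf '%s:%s' "${GITHUB_ACTOR:-github-actions}" "$GITHUB_TOKEN"
  else
    echo "no registry credentials available for $registry" >&2
    return 1
  fi
}
```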

Task Record

  • Motivation:
    • Restore the failing PR Helm chart publish job and align the shared Helm publish path with GitHub-hosted GHCR auth.
  • Design notes:
    • The credential selection now prefers explicit HELM_REGISTRY_* values, then existing HELM_API_KEY_*, then GITHUB_TOKEN for ghcr.io.
    • The reusable PR workflow already had packages: write; the ci.yml Helm publish jobs needed that permission added to use GITHUB_TOKEN.
  • Test coverage summary:
    • Inspected GitHub Actions run 24643631011, job 72055391065, and confirmed the failure occurred during GHCR authentication after successful packaging and signature verification.
    • just ci
    • just ui-e2e
  • Observability updates:
    • No runtime observability surface changed.
    • Publish failures now report the accepted credential sources more clearly from helm-publish.sh.
  • Status-doc validation:
    • Reviewed .github/instructions/devops.instructions.md, docs/adr/index.md, and docs/SUMMARY.md; updated them to match the GHCR auth path.
  • Risk & rollback plan:
    • Main risk is a missing packages: write permission on a future caller job. Roll back by reverting this change or restoring explicit non-GitHub credentials for that caller.
    • Local and non-GitHub publish flows can still pin explicit credentials if the GitHub-token path is unsuitable.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed AGENTS.md and .github/instructions/devops.instructions.md.
    • Drift found: the instruction set did not yet record that GitHub-hosted GHCR chart publication should prefer the job-scoped repo token.
    • Removed that drift by updating the devops instructions in this change.

Artifact Hub OCI Repository Alignment

  • Status: Accepted
  • Date: 2026-04-20
  • Context:
    • The PR Helm workflow was publishing charts to ghcr.io/<owner>/<repo>/charts/revaer, while the Artifact Hub repository that now exists is configured for oci://ghcr.io/vannadii/charts/revaer.
    • That namespace mismatch meant successful workflow publishes would land in GHCR, but not at the OCI repository URL Artifact Hub is actually tracking.
    • The Artifact Hub repository now has the stable ID dfbc5c47-d0c5-4ac7-b9d4-5812c0a6a15a, which needs to be present in the published repository metadata for verified ownership workflows.
  • Decision:
    • Change the default GHCR Helm namespace derivation to publish charts to ghcr.io/<owner>/charts/revaer.
    • Ship the Artifact Hub repository ID in charts/revaer/artifacthub-repo.yml and keep release packaging from appending a duplicate repositoryID.
    • Refresh install and release docs so they reference the owner-scoped OCI chart URL rather than the older repo-scoped path.
  • Consequences:
    • PR, main, tag, and manual Helm publishes now target the same OCI repository URL that Artifact Hub is configured to ingest.
    • Artifact Hub repository verification metadata is stable even when GitHub Actions repository variables are unset.
    • Existing references to the repo-scoped GHCR path become stale and must be updated together when the public OCI location changes.
  • Follow-up:
    • Push a fresh PR Helm publish and confirm new versions appear under ghcr.io/vannadii/charts/revaer.
    • Re-check Artifact Hub after its next repository processing cycle and confirm it indexes the newly published PR prerelease.
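The revised owner-scoped derivation can be sketched as follows (function name illustrative): only the owner segment of GITHUB_REPOSITORY, lowercased, with /charts appended; the publish step appends the chart name itself.

```shell
# Sketch: derive the owner-scoped GHCR namespace so charts land under
# ghcr.io/<owner>/charts/<chart> rather than the repo-scoped path.
helm_owner_namespace() {
  local owner="${GITHUB_REPOSITORY%%/*}"
  printf '%s/charts' "$(printf '%s' "$owner" | tr '[:upper:]' '[:lower:]')"
}
```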

Task Record

  • Motivation:
    • Align the workflow’s actual Helm publish destination with the Artifact Hub repository the user created so PR dev chart publishes become visible in Artifact Hub.
  • Design notes:
    • release/scripts/helm-publish.sh now derives the default namespace from the GitHub owner only, because the chart name is already appended as /revaer.
    • charts/revaer/artifacthub-repo.yml carries the canonical repository ID and marks the repo as oci; helm-package.sh avoids duplicating that field when env overrides are also present.
  • Test coverage summary:
    • just instruction-drift
    • bash scripts/workflow-guardrails.sh
    • just ui-e2e
    • just ci
  • Observability updates:
    • No runtime observability surface changed.
    • The externally visible change is the GHCR package location and matching Artifact Hub metadata target.
  • Status-doc validation:
    • Re-checked charts/revaer/README.md, docs/release-checklist.md, .github/instructions/devops.instructions.md, docs/adr/index.md, and docs/SUMMARY.md; updated stale GHCR path references.
  • Risk & rollback plan:
    • The main risk is consumers still pulling from the old repo-scoped GHCR chart path. Roll back by restoring the previous namespace derivation and reverting the docs if the owner-scoped repository proves incompatible.
    • Artifact Hub ingestion remains asynchronous, so validation must allow for the service’s reprocessing delay after publish.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed AGENTS.md and .github/instructions/devops.instructions.md.
    • Drift found: devops instructions and release docs did not state the owner-scoped public OCI chart location or the chart metadata file as the repository-ID source of truth.
    • Removed that drift by updating the instruction file, release docs, and ADR index in this change.

Trivy SARIF Category And GHCR Token Alignment

  • Status: Accepted
  • Date: 2026-04-20
  • Context:
    • PR #27 moved image builds into a reusable workflow and added a manual Helm OCI verification workflow.
    • GitHub Advanced Security started reporting 2 configurations not found for the Trivy scan because the workflow/job identity that code scanning used on main (.github/workflows/ci.yml:build-images/...) no longer matched the PR branch upload identity after the refactor.
    • Review feedback also flagged that the manual GHCR publish path still preferred legacy Helm API secrets instead of the job-scoped GITHUB_TOKEN, and that the reusable image workflow still trusted the pr_number input too much when writing environment values.
  • Decision:
    • Set an explicit SARIF upload category in the reusable image workflow that preserves the legacy ci.yml:build-images matrix identity for Trivy uploads.
    • Validate pr_number as numeric and use the multiline $GITHUB_ENV form before exporting PR-scoped Helm version values.
    • Switch the manual Helm OCI verification workflow to GHCR publication through GITHUB_TOKEN plus packages: write.
    • Remove unused HELM_API_KEY_* secret plumbing from the reusable PR image workflow call and drop stale pull-requests: read permission from the push-only Sonar workflow.
  • Consequences:
    • GitHub code scanning can compare PR Trivy uploads to the existing main configurations instead of treating them as missing configurations after the workflow refactor.
    • Manual GHCR verification now exercises the same credential path used by the GitHub-hosted publish jobs.
    • Reusable workflow callers expose fewer secrets and PR-number-derived env writes are hardened against newline or non-numeric injection.
  • Follow-up:
    • Re-run PR #27 checks and confirm the Trivy configuration warning disappears.
    • Confirm the manual Helm OCI verification workflow can publish with GITHUB_TOKEN on a GitHub-hosted runner.
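The hardened pr_number handling can be sketched like this; the HELM_DEV_VERSION name is illustrative, not the workflow's actual variable.

```shell
# Sketch: reject non-numeric pr_number before deriving the PR-scoped
# version, then write it with the multiline $GITHUB_ENV form so newline
# or shell-metacharacter input cannot inject extra environment values.
export_pr_dev_version() {
  local pr_number="$1" run_number="$2"
  if ! printf '%s' "$pr_number" | grep -Eq '^[0-9]+$'; then
    echo "pr_number must be numeric, got: $pr_number" >&2
    return 1
  fi
  {
    echo 'HELM_DEV_VERSION<<__PR_ENV__'
    printf '0.0.0-dev.pr%s.%s\n' "$pr_number" "$run_number"
    echo '__PR_ENV__'
  } >> "$GITHUB_ENV"
}
```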

Task Record

  • Motivation:
    • Restore trustworthy PR code-scanning comparisons and close the remaining workflow review threads on PR #27 without regressing least-privilege rules.
  • Design notes:
    • The SARIF category is intentionally pinned to the historical ci.yml build-images matrix identity instead of the reusable workflow path because code scanning continuity matters more than reflecting the refactor in the category string.
    • The manual Helm verify workflow keeps pull-requests: read because it still resolves an open PR number from the branch when inputs are omitted.
  • Test coverage summary:
    • just instruction-drift
    • just ci
    • just ui-e2e
  • Observability updates:
    • No runtime observability surface changed.
    • GitHub code-scanning continuity for Trivy uploads should recover once the workflow reruns.
  • Status-doc validation:
    • Re-checked .github/instructions/devops.instructions.md, docs/adr/index.md, and docs/SUMMARY.md; updated them to match the workflow behavior.
  • Risk & rollback plan:
    • The main risk is pinning the SARIF category to the legacy identity longer than desired. Roll back by changing the explicit category once the old code-scanning configurations are intentionally retired.
    • If GITHUB_TOKEN proves insufficient for the manual GHCR publish path, restore explicit registry credentials as a documented exception.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed AGENTS.md and .github/instructions/devops.instructions.md.
    • Drift found: the instructions did not yet record the need to preserve a stable Trivy SARIF category across workflow refactors.
    • Removed that drift by updating the workflow and instruction file together.

Artifact Hub Verification And Official Readiness

  • Status: Accepted
  • Date: 2026-04-20
  • Context:
    • The Revaer chart repository was already aligned to the owner-scoped OCI URL and Artifact Hub repository ID, but the remaining ownership and package-metadata details were still partly implicit.
    • Artifact Hub’s current repository guidance requires the repository metadata to carry the repository ID for Verified publisher, and ownership claim flows depend on published owner identity that matches the Artifact Hub account or organization member performing the claim.
    • Artifact Hub also recommends explicit package metadata where automatic extraction may be incomplete, including chart image metadata that powers package security scanning.
  • Decision:
    • Keep charts/revaer/artifacthub-repo.yml as the canonical repository-metadata template and document that owner identity must match the Artifact Hub claimant.
    • Update release/scripts/helm-package.sh so owner metadata is appended whenever ARTIFACTHUB_OWNER_NAME and ARTIFACTHUB_OWNER_EMAIL are available, including unsigned packaging paths.
    • Publish an explicit artifacthub.io/images chart annotation at release packaging time using the Revaer GHCR image tag that matches the chart app version.
    • Refresh the chart README, release checklist, and devops instructions to record the remaining manual Artifact Hub steps: public GHCR visibility, repository add/claim, verified-publisher confirmation, and the manual official status request.
  • Consequences:
    • Published Artifact Hub repository metadata is now authoritative for both repository verification and ownership claim workflows instead of depending on signing-only paths.
    • Artifact Hub can index the chart’s primary runtime image from chart metadata even if automatic image extraction is incomplete.
    • The official badge still cannot be granted from Git alone; the repository can only be made ready for that manual Artifact Hub request.
  • Follow-up:
    • Re-run a Helm publish and confirm the artifacthub.io OCI metadata artifact contains repositoryID plus the expected owner entry.
    • Confirm the next Artifact Hub processing cycle shows Verified publisher, then submit the official status request if it has not already been filed.

Task Record

  • Motivation:
    • Make the repository metadata authoritative enough for Artifact Hub verification and official-status workflows instead of leaving those steps partially dependent on operator memory or signing-only side effects.
  • Design notes:
    • Owner identity stays externally configurable through ARTIFACTHUB_OWNER_*, with GPG UID fallback retained for signed releases.
    • The chart image annotation is injected at packaging time so the published image tag stays aligned with the release tag or prerelease tag.
  • Test coverage summary:
    • just helm-lint
    • just instruction-drift
  • Observability updates:
    • No runtime observability surface changed.
    • Artifact Hub package metadata now exposes the published runtime image more reliably for external scanning and UI display.
  • Status-doc validation:
    • Re-checked charts/revaer/README.md, docs/release-checklist.md, .github/instructions/devops.instructions.md, docs/adr/index.md, and docs/SUMMARY.md; updated them to match the Artifact Hub readiness flow.
  • Risk & rollback plan:
    • Main risk is stale or incorrect ARTIFACTHUB_OWNER_* workflow variables causing an ownership mismatch in published metadata. Roll back by correcting the variables and republishing the chart metadata artifact.
    • If the explicit image annotation proves incorrect for a future image-layout change, remove or revise the injected annotation and republish.
  • Dependency rationale:
    • No new dependencies were added.
  • Stale-policy check:
    • Reviewed AGENTS.md and .github/instructions/devops.instructions.md.
    • Drift found: the instruction and operator docs did not yet state that Artifact Hub owner identity must remain present outside signing-only paths, and they did not record the explicit manual steps needed for official readiness.
    • Removed that drift by updating the release script, docs, and instruction file together.