Revaer
Centralized torrent orchestration with hot-reloadable configuration, consistent CLI/API surfaces, and observability-first defaults.
Revaer is a Rust workspace that coordinates torrent ingestion, filesystem operations, and operational guardrails from a PostgreSQL-backed control plane. The revaer-app binary composes focused crates covering the API, CLI, filesystem pipeline, telemetry, and libtorrent adapter.
What You’ll Find Here
- Roadmap & Specs – Track the current Phase One scope and remaining delivery deltas.
- Platform Interfaces – Configuration schema, HTTP API endpoints, and CLI command reference that match the current codebase.
- Operational Guides – Runbook, release checklist, and setup flows for operators.
- Architecture Decisions – ADRs documenting trade-offs across configuration, security, and engine integration.
- API Reference – Generated OpenAPI description and usage guidance for the control plane surface.
Use the sidebar navigation (or [ and ] shortcuts) to explore individual topics. Most pages include headings that double as tags for machine-readable manifests generated by the docs indexer.
Contributing Updates
Documentation lives next to the code. Add or edit Markdown files under docs/, then run:
just docs
This builds the mdBook site and refreshes the LLM manifests that power the documentation search experience.
LLM Manifests
For ChatGPT and other LLM-based tooling, fetch llms.txt and the JSON manifests under llm/ (schema.json, manifest.json, summaries.json) used by the documentation search experience.
Phase One Roadmap
Last updated: 2026-04-04
This document captures the current delta between the Phase One objective and the existing codebase. It should be kept in sync as work progresses across the eight workstreams.
Snapshot
| Workstream | Current State | Key Gaps | Immediate Actions |
|---|---|---|---|
| Control Plane & Setup | Postgres schema, ConfigService watcher, setup CLI/API, immutable-key guard, history logging; loopback enforcement + RFC7807 pointers live | Engine hot-reload not yet exercising throttles; setup token lifecycle/error telemetry still thin | Add watcher-driven throttle tests, expand setup diagnostics and rate-limit guardrails |
| Torrent Domain & Adapter | Native libtorrent FFI (cxx) restored and default-enabled; session worker with alert pump/resume store, throttles, selection, and degraded health surfaced via event bus; stub path retained only when the feature is disabled | Native CI coverage exists, but alert/rate-limit regression coverage is still thin and broader validation of resume reconciliation and failure handling is still needed | Deepen alert/rate-limit/resume validation and harden failure handling |
| File Selection & FsOps | Idempotent FsOps pipeline now extracts zip/tar/tar.gz/tgz archives in-process, supports guarded 7z/rar extraction via external tools, runs PAR2 verify/repair stages, records checksum metadata in .revaer.meta, and applies move/copy/hardlink transfers with chmod/chown/umask handling | 7z/rar and PAR2 still depend on host tooling being installed; ownership overrides remain Unix-only by design, and broader recovery/failure coverage should keep expanding | Keep hardening extractor/PAR2 recovery scenarios, document host-tool prerequisites clearly, and expand FsOps telemetry + restart-path coverage |
| Public HTTP API & SSE | Admin setup/settings/torrent CRUD, SSE stream, metrics endpoint, OpenAPI generator, /api/v2/* qB façade with cookie sessions, rename/category/tag mutation, relocate, reannounce/recheck, transfer limits, and incremental rid sync | /v1/torrents/* pagination/filter matrix still partial; qB coverage is intentionally bounded rather than full parity; SSE replay still needs broader Last-Event-ID regression coverage | Finish pagination/filter story, document deliberate qB compatibility scope, and expand SSE replay regression tests |
| CLI Parity | Supports setup start/complete, settings patch, torrent add/remove/list/status/select/action flows, and CLI wrappers around config + torrent APIs | SSE tail UX and richer validation/diagnostic coverage still need hardening | Expand reconnecting tail coverage and tighten validation/exit-code contracts |
| Security & Observability | API key storage hashed, per-key rate limits and X-RateLimit-* headers exposed, tracing initialized, metrics registry exported, and dashboard metrics now sourced from runtime state | OTEL exporter path was placeholder-only and now needs operational validation; tracing/metrics coverage should keep expanding across engine/fsops failure paths | Validate OTLP exporter behavior in deployment flows and keep expanding engine/fsops observability coverage |
| CI & Packaging | GitHub Actions cover fmt/lint/deny/audit/tests/cov via just ci; native libtorrent CI exists; Dockerfile builds non-root image with bundled libtorrent and HEALTHCHECK; docs workflow publishes mdBook; image workflow now scans, attests, and signs published images | Rootfs posture remains documented rather than enforced, and image hardening still needs broader cross-arch/runtime validation | Keep image provenance/scan/sign gates in CI, harden container runtime guidance, and extend cross-arch/runtime validation |
| Operational End-to-End | Playwright-backed API/UI flows run via just ui-e2e, and just runbook now packages repeatable validation artifacts | Manual fault-injection drills still exist for extractor/permission/recovery scenarios | Keep automating the remaining runbook drills while retaining the operator-facing checklist |
Remaining Scope Specification
1. Torrent Engine Integration
- Harden the native libtorrent session: keep the stub only for feature-off builds while ensuring the default path drives the real adapter for add/pause/resume/remove, sequential toggles, rate limits, selection updates, reannounce, and force-recheck.
- Validate persisted fast-resume payloads, priorities, target directories, and sequential flags against the live session on startup; continue emitting reconciliation events when divergence is detected.
- Translate libtorrent alerts into EventBus messages (
FilesDiscovered,Progress,StateChanged,Completed,Failure) while respecting the ≤10 Hz per-torrent coalescing rule; recover from alert polling failures by degrading health and attempting bounded restarts. - Ensure global and per-torrent rate caps driven by
engine_profileupdates are enforced by libtorrent within two seconds, with audit logs surfaced when caps change. - Extend the feature-gated integration suite to execute against the native libtorrent build (resume restore, rate-limit enforcement, alert mapping) in addition to the in-process stub.
2. File Selection & FsOps Pipeline
- Keep include/exclude glob logic aligned with torrent selection so priority updates continue to reflect operator intent, including the
@skip_fluffpreset. - Extend the FsOps pipeline to additional archive formats (7z/tar), introduce the PAR2 verification/repair stage, and surface checksum metadata alongside the recorded
.revaer.metaentries. - Add non-Unix fallbacks or clear operator guidance when ownership/umask directives cannot be honoured, and surface the condition via events and
/health/full. - Harden dependency detection so missing extractor binaries trigger guarded degradation with actionable telemetry, then clear automatically once remediation succeeds.
- Broaden integration coverage to include error paths (permission denied, unsupported archive) and restart scenarios that reuse persisted metadata, capturing metrics snapshots for each stage.
3. Public HTTP API & SSE
- Round out
/v1/torrentswith cursor pagination, rich filtering (state, tracker, extension), and stabilise reannounce/recheck/sequential toggles with regression tests. - Keep Problem+JSON responses consistent (including JSON Pointer metadata) and mirror them in CLI/user-facing tooling.
- Enhance SSE with Last-Event-ID replay, duplicate suppression, and resiliency tests covering torrent + FsOps event fan-out.
- Evolve the qB façade: tighten the cookie/session model, surface removals/categories/tags in incremental sync, and expose rename/reannounce operations.
- Expand health reporting to
/health/full, document façade coverage in OpenAPI/mdBook, and add integration tests that exercise pagination, SSE replay, and façade flows end-to-end.
4. CLI Parity
- Add commands
revaer ls,status,select,action, andtail, mirroring API filters, selection arguments (include/exclude/skip-fluff), sequential toggles, and data deletion flags. - Implement an SSE tailer that reconnects on failure, honors Last-Event-ID, and avoids duplicate terminal output.
- Standardize exit codes (0 success, 2 validation, >2 runtime failures) and surface RFC7807 payloads, including pointer metadata, in human-readable CLI output.
- Provide CLI integration tests that run against the API fixture stack, covering filter combinations, sequential toggles, and tail reconnection behaviour.
5. Security & Observability
- Introduce API key lifecycle endpoints (issue, rotate, revoke) with hashed-at-rest storage, returning secrets only once; enforce per-key token-bucket rate limiting and include
X-RateLimit-*headers. - Harden inputs by bounding magnet length, multipart size, filter glob counts, and header values; return Problem+JSON validation errors without panics for malformed requests.
- Propagate tracing spans (request IDs) through the API, engine, and FsOps layers; ensure metrics cover HTTP status, event flow, queue depth, libtorrent transfer, and FsOps step durations, exposed via
/metrics. - Reflect degraded health when tools are missing, engine sessions fault, or queue depth exceeds thresholds; emit corresponding
SettingsChangedandHealthChangedevents. - Document operational expectations for rate limiting, key rotation, and observability dashboards.
6. CI & Packaging
- Keep GitHub Actions green across fmt/lint/deny/audit/tests/cov and add a matrix leg that runs the native libtorrent suite (REVAER_NATIVE_IT=1 with Docker host wiring).
- Enforce an environment-access lint that fails CI if
std::envreads occur outside the composition root (excludingDATABASE_URL). - Harden the container: retain non-root user, switch to read-only rootfs with explicit writable mounts, and gate builds with image scans and provenance/signing.
- Produce cross-arch artifacts (x86_64/aarch64) and publish digests alongside build outputs and release notes.
7. Operational Runbook Automation
- Author a script to execute the full phase objective on both x86_64 and aarch64: bootstrap using
DATABASE_URL, complete setup token flow, add a magnet, monitorFilesDiscovered/Progress/Completed, run FsOps, simulate crash/restart with fast-resume recovery, adjust throttles, and validate degraded health when extractors are absent. - Capture assertions and logs for each phase, producing artifacts suitable for runbook review and CI retention; ensure failures mark the engine or pipeline health accordingly.
- Include cleanup routines to return environments to a reusable state while retaining diagnostic logs.
8. Documentation & Final Polish
- Update
docs/phase-one-roadmap.mdcontinuously and add ADRs covering engine architecture, FsOps design, API/CLI contracts, and security posture. - Regenerate
docs/api/openapi.jsonalongside illustrative request/response examples for new endpoints. - Extend user-facing guides for CLI usage, health/metrics references, and operational setup covering API keys, rate limits, and degraded-mode recovery.
- Provide a final Phase One release checklist that ties documentation, runbook, and CI artifacts together.
Next Steps Tracking
- Land setup/network hardening and control-plane polish.
- Keep the native libtorrent session as the default, expand coverage (native CI leg, alert/rate-limit/resume validation), and preserve the stub only for feature-off builds.
- Implement FsOps pipeline with allow-listed execution and metadata.
- Expose
/v1/*APIs + CLI parity and reinforce security/observability. - Stand up CI, packaging, and full runbook validation.
Phase One Remaining Engineering Specification
Objectives
- Deliver a production-ready public interface (HTTP API, SSE, CLI) for torrent orchestration.
- Ship FsOps-backed artefacts through API, CLI, telemetry, and documentation with demonstrable reliability.
- Produce release artefacts (containers, binaries, documentation) that satisfy existing security, observability, and quality gates.
Scope Overview
-
Public HTTP API & SSE Enhancements
/v1/torrentsCRUD-style endpoints with cursor pagination, filtering, torrent actions, file selection updates, rate adjustments, and Problem+JSON responses.- SSE stream upgrades: Last-Event-ID replay, subscription filters, duplicate suppression, jitter-tolerant reconnect logic.
/health/fullexposing engine/FsOps/config readiness, dependency metrics, and revision metadata.- Regenerated OpenAPI (JSON + examples) reflecting the full public surface.
-
CLI Parity
- Commands covering list/status/select/action/tail flows with shared filtering + pagination options.
- SSE-backed
tailcommand with Last-Event-ID resume, dedupe, and retry semantics aligned with the API. - Problem+JSON error output, structured exit codes (
0success,2validation,>2runtime failures).
-
Packaging & Documentation
- Release-ready Docker image (non-root, readonly FS, volumes, healthcheck) bundling API server + docs.
- Provenance-signed binaries for supported architectures, plus GitHub Actions workflows for build, docker, msrv, and coverage gates.
- Updated ADRs, runbook, user guides, OpenAPI artefacts, and release checklist referencing the telemetry and security posture.
- Documentation of new metrics/traces/guardrails (config watcher latency, FsOps events, API counters).
Security & Observability Requirements (Cross-Cutting)
- All new API routes enforce API-key authentication with per-key rate limiting and guard-rail metrics.
- Problem+JSON responses are mandatory; eliminate
unwrap/panic paths and includeinvalid_paramspointers on validation failure. - Trace propagation from API → engine → FsOps; CLI should emit/propagate TraceId when available.
- Metrics: extend existing Prometheus registry with route labels, FsOps step counters, config watcher latency/failure gauges, and rate-limiter guardrails.
- Health degradation events (
Event::HealthChanged) must accompany any new guard-rail/latency breach or pipeline failure. - CLI commands should mask secrets in logs and optionally emit telemetry when configured (
REVAER_TELEMETRY_ENDPOINT).
Detailed Work Breakdown
1. Public API & SSE
Design Considerations
- Introduce DTO module (
api::models) for request/response structs to share with the CLI. - Cursor pagination: encode UUID/timestamp as opaque cursor in
nexttoken; align Last-Event-ID semantics with event stream IDs. - Filtering: support state, tracker, extension, tags, and name substring; guard invalid combinations with Problem+JSON.
- SSE filtering: permit query parameters for torrent subset, replays based on event type/state.
Implementation Tasks
- Routes:
POST /v1/torrents– magnet or .torrent upload (streamed, payload size guard).GET /v1/torrents– cursor pagination + filters.GET /v1/torrents/{id}– detail view with FsOps metadata.POST /v1/torrents/{id}/select– file selection update with validation.POST /v1/torrents/{id}/action– pause/resume/remove (with data), reannounce, recheck, sequential toggle, rate limits.
- SSE:
- Accept
Last-Event-IDheader, deduplicate by event ID, filter streams by torrent ID/state. - Simulate jitter/disconnects in tests (
tokio::time::pause,transport::Stream).
- Accept
- Health endpoint:
- Aggregate config watcher metrics (latency, failures), FsOps status, engine guardrails, revision hash.
- Problem+JSON mapping for all new errors with
invalid_paramspointer data. - OpenAPI:
- Regenerate spec covering new endpoints, Problem responses, SSE details, and sample payloads.
- Testing:
- Unit tests for filter parsing, DTO validation, Problem+JSON outputs.
- Integration tests using
tower::Serviceharness for each route. - SSE reconnection tests with simulated delays and Last-Event-ID resume.
/health/fullintegration test verifying new fields and degraded scenarios.
2. CLI Parity
Design Considerations
- Reuse DTOs from API models; consider shared crate/module for request structs and Problem+JSON parsing.
- Introduce output formatting with optional JSON/pretty table modes.
- Provide configuration via env vars and CLI flags; align defaults with API (e.g.,
REVAER_API_URL,REVAER_API_KEY).
Implementation Tasks
- Commands:
revaer ls– list torrents, support pagination (--cursor,--limit), filters (state/tracker/extension/tags).revaer status <id>– torrent detail view, optional follow mode.revaer select <id>– send selection rules from file/JSON (validate before submit).revaer action <id>– actions (pause,resume,remove,remove-data,reannounce,recheck,sequential,rate).revaer tail– SSE tail with Last-Event-ID persist (local file) and dedupe.
- Problem+JSON handling:
- Standardised pretty printer summarising
title,detail,invalid_params; respect exit codes.
- Standardised pretty printer summarising
- Telemetry:
- Optional metrics emission (success/failure counters) when telemetry endpoint configured.
- Testing:
- Integration tests using
httpmockto assert HTTP interactions and exit codes. - SSE tail tests with mocked stream delivering duplicates/disconnects.
- Snapshot tests for JSON outputs (ensuring deterministic fields).
- Integration tests using
3. Packaging & Documentation
Design Considerations
- Multi-stage Docker build: compile with Rust image, run on minimal base (distroless/alpine/ubi) with non-root user.
- Healthcheck script hitting
/health/fullwith timeout. - Release workflows should run on GitHub Actions with provenance metadata (supply-chain compliance).
Implementation Tasks
- Dockerfile +
Makefile/justtarget:- Build release binary, copy
docs/api/openapi.json, set/appas workdir. - Define volumes for data/config, create user
revaer, configure entrypoint.
- Build release binary, copy
- GitHub Actions (update
.github/workflows):build-release: runjust build-release,just api-export, attach binaries/docs.docker: build image, rundocker scan(trivy/grype), and push on release tags.msrv: runjust fmt lint testwith pinned toolchain (documented in workflow).cov: ensurejust covgate passes (≥80% lines/functions).
- Documentation:
- ADRs: update
003-libtorrent-session-runner, add FsOps design ADR, API/CLI contract ADR, security posture update (API keys, rate limits). - Runbook: scripted scenario covering bootstrap → torrent add → FsOps pipeline → restart resume → rate throttle adjustments → degraded health simulation → recovery.
- User guides: CLI usage, metrics/telemetry reference, operational setup (keys, rate limits, config watcher health).
- OpenAPI: regenerate JSON, include sample Problem+JSON payloads and SSE description.
- Release checklist: steps to run
just ci, verify coverage, run docker scan, execute runbook, and tag release.
- ADRs: update
- Testing:
- Validate Docker container runtime (healthcheck, volume mounts, non-root permissions).
- Perform coverage review ensuring new tests bring line/function coverage ≥80%.
- Execute runbook; capture logs/metrics and link in docs.
Cross-Cutting Deliverables
- API key lifecycle (issue/rotate/revoke) extended with per-key rate limiting, recorded in telemetry and docs.
- Config watcher telemetry integrated into
/health/fulland metrics registry. - CLI and API emit guard-rail telemetry on violations (loopback enforcement, FsOps errors, rate-limit breaches).
- All new code paths covered by unit/integration tests; follow-up to update
just covgating. - Documentation kept up-to-date with implementation details and tested flows.
Sequencing (Suggested)
- Build API models and endpoints (foundation for CLI).
- Implement SSE enhancements while adding API integration tests.
- Extend CLI commands leveraging shared DTOs.
- Embed telemetry (metrics/traces) throughout API/CLI/FsOps changes.
- Stand up Docker build + CI workflows.
- Update ADRs, runbook, user guides, OpenAPI, and release checklist.
- Execute full QA cycle (coverage, docker scan, runbook, manual verification) and prepare for release tagging.
Acceptance Criteria
just lint,just test,just covand fulljust cipass locally and in CI.- Coverage (lines + functions) ≥ 80% across workspace.
- Docker image passes security scan with zero unwaived high severity findings.
- Runbook executed end-to-end; results referenced in documentation.
- OpenAPI specification and CLI docs match implemented behaviour.
- Release checklist completed with artefacts attached (binaries, Docker image, OpenAPI, docs).
Phase One Runbook
This runbook exercises the end-to-end control plane, validating FsOps, telemetry, and guard rails.
The primary automated entrypoint is just runbook, which wraps the Playwright-backed API/UI validation flow, collects artifacts, and provides a repeatable baseline before any manual drills.
Automated Validation
Run the automated baseline first:
just runbook
Expected outputs:
artifacts/runbook/summary.txtartifacts/runbook/playwright-report/index.htmlartifacts/runbook/test-results/artifacts/runbook/logs/
The automated runbook currently covers the bootstrap, dashboard/API health, settings, and torrent-management control-plane paths exercised by just ui-e2e. Keep the manual checks below only for deployment-specific fault-injection drills that require operator intervention against real mounts, permissions, and restart boundaries.
Prerequisites
- Docker image
revaer:ci(built viajust docker-build) or a localrevaer-appbinary (just build-release). - PostgreSQL instance accessible to the application.
- API key with a conservative rate limit (e.g., burst
5, period60s). - CLI configured with
REVAER_API_URL,REVAER_API_KEY, and optionalREVAER_TELEMETRY_ENDPOINT.
Scenario
-
Bootstrap
- Issue a setup token:
revaer setup start --issued-by runbook. - Complete configuration with CLI secrets and directories:
revaer setup complete --instance runbook --bind 127.0.0.1 --resume-dir .server_root/resume --download-root .server_root/downloads --library-root .server_root/library --api-key-label runbook --passphrase <pass>. - Capture the committed snapshot via
revaer config get --output tableand confirm/health/fullreturnsstatus=okwithguardrail_violations_total=0.
- Issue a setup token:
-
Add Torrent & Observe FsOps
- Add a torrent:
revaer torrent add <magnet> --name runbook. - Tail events:
revaer tail --event torrent_added,progress,state_changed --resume-file .server_root/revaer.tail. - Verify FsOps emits
fsops_started,fsops_completed, and Prometheus countersfsops_steps_totalincrease.
- Add a torrent:
-
Restart & Resume
- Stop the application, restart it, and ensure the torrent catalog repopulates.
- Confirm
SelectionReconciled(if metadata diverges) andHealthChangedclears once resume succeeds.
-
Rate Limit Guard-Rail
- Apply a tight API key limit (burst
1/per_seconds 60) viarevaer config set --file rate-limit.json(using a JSON patch that updates the relevant key). - Execute three rapid CLI calls (e.g.,
revaer status <id>). The third should exit with code3, displaying a429Problem+JSON response. - Inspect
/metricsto verifyapi_rate_limit_throttled_totalincremented and/health/fullreflectsdegraded=["api_rate_limit_guard"].
- Apply a tight API key limit (burst
-
Recovery
- Restore the API key limit to an acceptable value through another
revaer config set ...invocation. - Re-run
revaer status <id>to confirm success,guardrail_violations_totalstops increasing, anddegradedreturns to[].
- Restore the API key limit to an acceptable value through another
-
FsOps Failure Simulation
- Temporarily revoke write permissions on the library directory and re-run a completion.
- Observe
fsops_failedevents,HealthChangedwith["fsops"], and guard-rail telemetry. - Restore permissions and confirm recovery events.
Manual-only rationale:
- Permission failures and restart/resume drills depend on the actual runtime mount layout, writable volumes, and supervisor behavior of the target deployment.
- The checked-in automation covers the repeatable control-plane baseline; these remaining drills intentionally stay manual so operators can validate their real environment rather than a simulated local-only shell.
Verification Artifacts
- Review
artifacts/runbook/summary.txtfromjust runbook. - Archive CLI telemetry emitted to
REVAER_TELEMETRY_ENDPOINTwhen the manual scenario enables it. - Capture Prometheus scrapings (
/metrics) before and after the manual drills. - Record
/health/fullJSON snapshots for each phase.
Successful completion of this runbook satisfies the operational validation gate defined in AGENT.md.
Phase One Release Checklist
-
Branch Hygiene
- Ensure
mainis green (CI pipeline complete). - Review outstanding ADRs and docs for freshness.
- Ensure
-
Build & Test
just cijust build-releasejust api-export
-
Artefact Verification
- Binary:
target/release/revaer-app - Checksum:
sha256sum target/release/revaer-app - OpenAPI:
docs/api/openapi.json - Helm chart:
dist/helm/revaer-<version>.tgz - Helm provenance:
dist/helm/revaer-<version>.tgz.prov - Helm public key:
dist/helm/revaer-helm-public.asc - Helm public keyring:
dist/helm/revaer-helm-public.gpg - Docker image:
just docker-build && just docker-scan - Published GHCR image: verify Trivy scan, SBOM/provenance attestations, and Cosign signatures from the image workflow
- Published OCI chart: verify
oci://ghcr.io/<owner>/charts/revaer:<version>plus theartifacthub.iometadata tag andhelm verifyagainst the published public key
- Binary:
-
Runbook Execution
- Run
just runbook - Follow the remaining manual-only drills in
docs/runbook.md - Archive CLI telemetry,
/metrics,/health/fullsnapshots.
- Run
-
Documentation Refresh
- Verify ADRs 005–007 reflect current design.
- Update user guides (
docs/api/guides/*.md) with any behavioural changes.
-
Tag & Publish
- Create annotated tag:
git tag -a vX.Y.Z -m "Phase One release" - Push tag:
git push origin vX.Y.Z - Attach artefacts generated by the
build-releaseworkflow, including the Helm chart archive, provenance file, and Helm public key. - Confirm the OCI chart publish completed after the GitHub release so the
artifacthub.io/signKeyURL resolves. - Confirm the GHCR chart package is public so Artifact Hub can pull
oci://ghcr.io/<owner>/charts/revaeranonymously. - In Artifact Hub, add or claim
oci://ghcr.io/<owner>/charts/revaer, then verify that the publishedartifacthub.iometadata tag includes the expected repository ID and owner identity. - After Artifact Hub shows
Verified publisher, file theofficialstatus request for the Revaer publisher or organization. Userevaer-logo.pngfor the Artifact Hub repository and organization logo during that setup.
- Create annotated tag:
-
Post-Release Monitoring
- Watch rate-limit and guard-rail metrics.
- Confirm
HealthChangedevents return to empty degraded set. - Validate automation telemetry for CLI success rates.
Web UI - Phase 1
Rust/Yew UI for the Phase 1 torrent workflow. The goal is a responsive, touch-friendly surface that stays usable on 360px phones through 4K desktops while handling large torrent libraries.
- Pages: Dashboard, Torrents (list + detail), Logs, Health, Settings.
- Modes: Simple (trimmed controls) and Advanced (full controls). Stored in local storage.
- Transport: REST for initial payloads; fetch-based SSE for live updates and logs (header-auth supported, EventSource not used).
Layout and breakpoints
| Name | Width | Default behaviors |
|---|---|---|
| xs | 0-479px | Card view for torrents, drawer navigation, stacked dashboard cards |
| sm | 480-767px | Card view, two-column stats grid inside cards |
| md | 768-1023px | Compact table, tabbed detail view |
| lg | 1024-1439px | Full table, fixed sidebar |
| xl | 1440-1919px | Split panes and wider tables |
| 2xl | 1920px+ | Ultra-wide tables with capped text widths |
Table responsiveness: required columns (Name, Status, Progress, Down, Up) stay pinned; ETA, Ratio, Size, Tags, Path, Updated collapse into overflow or the detail drawer when space is constrained.
Detail view: mobile renders tabs (Overview, Files, Options); desktop promotes a split layout that keeps overview and options visible together at lg+.
Virtualization: the torrent list uses a windowed renderer to keep large libraries responsive; selection stays highlighted for keyboard actions.
Auth and setup
- API key auth is default. The UI prompts for
key_id:secretand stores it in local storage with expiry metadata. - If
app_profile.auth_modeisnoneand the request originates from a local network, the UI can enter anonymous mode. - Setup mode guides the operator through the setup token flow and stores the generated API key after completion.
Transport and SSE
- Primary SSE:
/v1/torrents/eventswith filters for torrent id, event kind, and state. - Fallback SSE:
/v1/events/streamif the primary endpoint is unavailable. - Logs stream:
/v1/logs/stream. - SSE requests attach
x-revaer-api-keyandLast-Event-IDheaders.
Settings coverage
Settings tabs are grouped into: Downloads, Seeding, Network, Storage, Labels, and System. Each tab reflects the corresponding config section and validation errors from ProblemDetails responses.
Theming and localization
- Theme tokens and layout variables live in
static/style.css. - Theme selection follows OS preference on first load and persists to local storage.
- Locale selector uses JSON bundles in
i18n/with English fallback and RTL hinting.
Running the UI
- Crate:
crates/revaer-ui(Yew + wasm). - Commands:
just ui-serveto preview,just ui-buildfor release builds. - Assets:
static/style.cssholds palette/breakpoints;index.html+Trunk.tomlbootstrap trunk.
Web UI Flows and Diagrams
Visual references for the Phase 1 UX: navigation, component wiring, SSE handling, and torrent lifecycle. Use these diagrams when extending the UI or adding tests.
Navigation flow
flowchart LR
Nav["Sidebar / Drawer"] --> Dash[Dashboard]
Nav --> Torrents[Torrents]
Nav --> Logs[Logs]
Nav --> Health[Health]
Nav --> Settings[Settings]
Torrents --> Detail["Detail route /torrents/:id"]
Detail --> Overview[Overview]
Detail --> Files[Files]
Detail --> Options[Options]
Component graph
flowchart TB
app["App (RevaerApp)"]
shell["AppShell: nav / theme / locale"]
dash[Dashboard]
torrents["Torrents list + detail"]
settings[Settings]
logs[Logs]
health[Health]
api[API]
app --> shell
shell --> dash
shell --> torrents
shell --> settings
shell --> logs
shell --> health
dash -- "GET /v1/dashboard" --> api
torrents -- "GET /v1/torrents" --> api
torrents -- "GET /v1/torrents/{id}" --> api
torrents -- "POST /v1/torrents/{id}/action" --> api
torrents -- "PATCH /v1/torrents/{id}/options" --> api
torrents -- "POST /v1/torrents/{id}/select" --> api
torrents -- "SSE /v1/torrents/events" --> api
logs -- "SSE /v1/logs/stream" --> api
health -- "GET /health/full" --> api
SSE event flow
sequenceDiagram
participant UI as UI
participant Fetch as Fetch Stream
participant API as API/SSE
participant State as Store
UI->>Fetch: build URL + headers (x-revaer-api-key, Last-Event-ID)
Fetch->>API: GET /v1/torrents/events (fallback /v1/events/stream)
API-->>Fetch: SSE frames
Fetch->>State: parse + batch updates
State->>UI: render list, detail, dashboard, health badges
UI->>Fetch: reconnect with backoff and resume id
Torrent lifecycle (UI perspective)
stateDiagram-v2
[*] --> Added : magnet/upload
Added --> Queueing : server-side validation
Queueing --> Downloading
Downloading --> Checking : recheck or hash
Downloading --> Completed : 100% + seeding ready
Checking --> Downloading : if data matches
Completed --> FsOps : move/rename per policy
FsOps --> Seeding
Seeding --> Completed : ratio met / stop rules
Completed --> Removed : delete (+data optional)
Interaction notes
- SSE disconnect overlay shows last event timestamp, retry countdown (1s to 30s exponential with jitter), and diagnostics (auth mode, reason).
- Table virtualization is required beyond 500 rows; virtual scroll must preserve keyboard focus order and pinned columns.
- Mobile detail view uses tabs (Overview, Files, Options); desktop uses a split layout so overview and options stay visible together at lg+.
Configuration Surface
Canonical reference for the PostgreSQL-backed settings documents that drive Revaer’s runtime behavior.
Revaer persists operator-facing configuration inside the settings_* tables. The API (ConfigService) exposes strongly typed snapshots consumed by the API server, torrent engine, filesystem pipeline, and CLI. Every change flows through a SettingsChangeset, ensuring a single validation path whether commands originate from the setup flow or the admin API.
Snapshot components
The /.well-known/revaer.json endpoint, the authenticated GET /v1/config route, and the revaer config get CLI command all return the same structure:
{
"revision": 42,
"app_profile": {
"...": "..."
},
"engine_profile": {
"...": "..."
},
"engine_profile_effective": {
"...": "..."
},
"fs_policy": {
"...": "..."
},
"api_keys": [
{
"key_id": "admin",
"label": "bootstrap",
"enabled": true,
"rate_limit": null
}
]
}
engine_profile_effective is the normalized engine profile (clamped limits, derived defaults, warnings applied) used by the orchestrator.
App profile (settings_app_profile)
| Field | Type | Description |
|---|---|---|
id | UUID | Singleton identifier for the current document. |
instance_name | string | Human-readable label surfaced in the UI and CLI. |
mode | setup or active | Gatekeeper for authentication middleware and setup flow. |
auth_mode | api_key or none | API access policy; none allows anonymous access on local networks only. |
version | integer | Optimistic locking counter maintained by ConfigService. |
http_port | integer | Published TCP port for the API server. |
bind_addr | string (IPv4/IPv6) | Listen address for the API server. |
local_networks | array | CIDR ranges treated as local for anonymous access and recovery flows. |
telemetry | object | Structured telemetry config (level, format, otel_enabled, otel_service_name, otel_endpoint). |
label_policies | array | Per-category/tag policy overrides (download dir, rate limits, queue position). |
immutable_keys | array | Fields that cannot be mutated via patches (ConfigError::ImmutableField). |
Engine profile (settings_engine_profile)
Network and transport
implementation- engine identifier (libtorrentorstub).listen_portandlisten_interfaces- incoming listener configuration.ipv6_mode-disabled,prefer, orrequire.enable_lsd,enable_upnp,enable_natpmp,enable_pex- discovery toggles (default off).dht,dht_bootstrap_nodes,dht_router_nodes- DHT configuration.outgoing_port_min/outgoing_port_max- optional port range for outgoing connections.peer_dscp- optional DSCP/TOS codepoint (0-63) for peer sockets.
Privacy and protocol controls
anonymous_mode,force_proxy,prefer_rc4.allow_multiple_connections_per_ip.enable_outgoing_utp,enable_incoming_utp.
Limits and scheduling
max_active,max_download_bps,max_upload_bps.seed_ratio_limit,seed_time_limit.connections_limit,connections_limit_per_torrent.unchoke_slots,half_open_limit,optimistic_unchoke_slots.stats_interval_ms,max_queued_disk_bytes.alt_speed(caps and optional schedule).
Behavior
sequential_default.auto_managed,auto_manage_prefer_seeds,dont_count_slow_torrents.super_seeding,strict_super_seeding.choking_algorithm,seed_choking_algorithm.
Storage
resume_dir,download_root.storage_mode,use_partfile.disk_read_mode,disk_write_mode,verify_piece_hashes.cache_size,cache_expiry,coalesce_reads,coalesce_writes,use_disk_cache_pool.
Tracker and filtering
tracker(user-agent, announce overrides).ip_filter(inline rules plus optional remote blocklist).peer_classes(per-class caps and throttles).
Filesystem policy (settings_fs_policy)
| Field | Type | Description |
|---|---|---|
library_root | string | Destination directory for completed artifacts. |
extract | bool | Whether completed payloads are extracted. |
par2 | string | disabled, verify, or repair (verify is also the compatibility behavior for legacy enabled). |
flatten | bool | Collapse single-file directories when moving into the library. |
move_mode | string | copy, move, or hardlink. |
cleanup_keep / cleanup_drop | array | Glob patterns retaining or removing files. |
chmod_file / chmod_dir | string? | Optional octal permissions applied to outputs. |
owner / group | string? | Optional ownership override (Unix only). |
umask | string? | Umask used to derive default permissions. |
allow_paths | array | Allowed staging/library paths. |
Extraction is built in for zip, tar, tar.gz, and tgz. 7z and rar extraction use external tools (7zz, 7z, unar, or unrar) and fail with a structured FsOps error if none are installed. PAR2 verification/repair requires the par2 CLI when par2 is set to verify or repair. On non-Unix platforms, ownership overrides remain unsupported and FsOps returns an explicit error instead of silently drifting from policy.
API keys and secrets
Patches can create, update, or revoke keys and named secrets. The request format mirrors SettingsChangeset:
{
"api_keys": [
{
"op": "upsert",
"key_id": "admin",
"label": "primary",
"enabled": true,
"secret": "optional-override",
"rate_limit": { "burst": 10, "per_seconds": 1 }
}
],
"secrets": [
{ "op": "set", "name": "libtorrent.passphrase", "value": "..." }
]
}
The API server enforces bucketed rate limits if rate_limit is supplied (burst per per_seconds). Invalid field names or mutations against immutable_keys yield RFC9457 ProblemDetails responses with an invalid_params array matching the JSON pointer returned by ConfigError.
Telemetry toggle
Revaer boots with structured logging and Prometheus metrics by default. OpenTelemetry export remains opt-in: set REVAER_ENABLE_OTEL=true alongside your revaer-app process (optionally overriding REVAER_OTEL_SERVICE_NAME and REVAER_OTEL_EXPORTER, or using OTEL_EXPORTER_OTLP_ENDPOINT) to attach the OTLP tracing exporter. When the flag is absent, no OpenTelemetry exporter is initialized.
Change workflows
- Setup -
POST /admin/setup/startissues a one-time token.POST /admin/setup/completeconsumes that token, applies the providedSettingsChangeset, forcesapp_profile.modetoactive, and returns the hydrated snapshot along with the generated API key. - Ongoing updates -
PATCH /v1/config(CLI:revaer config set --file changes.json) requires an API key and supports partial documents. Any field omitted from the payload remains untouched. The legacy/admin/settingsalias remains for compatibility. - Snapshot access -
GET /.well-known/revaer.json(no auth),GET /v1/config(API key),GET /health/full, andrevaer config getreturn the current revision so automation and dashboards can verify configuration drift without shell access.
Revaer publishes SettingsChanged events on every successful mutation, ensuring subscribers refresh in-memory caches without polling.
HTTP API
REST + SSE surface exposed by
revaer-api. The OpenAPI document is served at/docs/openapi.jsonand regenerated viajust api-export.
Authentication
- Setup flow -
/admin/setup/startis open./admin/setup/completerequires thex-revaer-setup-tokenheader with the one-time token returned by setup start. The server refuses setup calls onceapp_profile.modeisactive. - Operator actions - All
/admin/*(after setup) and/v1/*endpoints requirex-revaer-api-key: key_id:secret. The middleware validates the key viaConfigService, enforces per-key rate limiting, and rejects calls while the instance remains in setup mode. - Request correlation - An optional
x-request-idheader is echoed into tracing spans and surfaced on SSE traffic. The CLI auto-populates this header per invocation.
Error responses follow RFC9457 (ProblemDetails) and include invalid_params entries when validation pinpoints a JSON pointer within the payload.
Endpoint inventory (core surface)
Public (no auth)
GET /health,GET /health/fullGET /metricsGET /.well-known/revaer.jsonGET /docs/openapi.json
Setup and admin
POST /admin/setup/startPOST /admin/setup/completePOST /admin/factory-resetPATCH /admin/settings(alias forPATCH /v1/config)GET/POST/DELETE /admin/torrentsGET /admin/torrents/{id}POST /admin/torrents/createGET /admin/torrents/categories,GET /admin/torrents/tagsGET /admin/torrents/{id}/peers
Config and auth
GET /v1/config(authenticated snapshot)PATCH /v1/config(applySettingsChangeset)POST /v1/auth/refresh(refresh API key)
Dashboard and filesystem
GET /v1/dashboardGET /v1/fs/browse
Torrent lifecycle
GET/POST /v1/torrentsGET /v1/torrents/{id}POST /v1/torrents/{id}/selectPATCH /v1/torrents/{id}/optionsPOST /v1/torrents/{id}/actionPOST /v1/torrents/createGET /v1/torrents/categories,GET /v1/torrents/tagsGET /v1/torrents/{id}/peersGET/PATCH/DELETE /v1/torrents/{id}/trackersPATCH /v1/torrents/{id}/web_seeds
Events and logs
GET /v1/torrents/events(primary SSE stream)GET /v1/events,GET /v1/events/stream(SSE aliases)GET /v1/logs/stream
All torrent-managing endpoints ensure the torrent workflow is wired. If the engine is unavailable, the API returns 503 Service Unavailable.
Torrent submission (POST /v1/torrents)
Required headers: x-revaer-api-key. Provide either magnet or metainfo; the server rejects payloads missing both. Optional fields:
download_dir- Overrides the engine profile’s staging directory.sequential- Enables sequential downloading for this torrent only.tags/trackers- Stored alongside the torrent for filtering and bookkeeping.include/exclude/skip_fluff- File selection bootstrap applied before metadata fetch completes.max_download_bps/max_upload_bps- Per-torrent rate limits (bps) passed to the workflow.
On success the server returns 202 Accepted after dispatching TorrentWorkflow::add_torrent. The torrent ID in the payload becomes the canonical identifier.
Listing and filtering (GET /v1/torrents)
Query parameters:
limit(default 50, max 200)cursor- Base64 token returned innextstate,tracker,extension,tags,name- Comma-separated filters (case-insensitive)
The response body is TorrentListResponse with an optional next cursor when additional pages exist.
Torrent actions (POST /v1/torrents/{id}/action)
type determines the shape of the body:
{ "type": "remove", "delete_data": true }
{ "type": "sequential", "enable": false }
{ "type": "rate", "download_bps": 1048576, "upload_bps": null }
Failures propagate engine errors as 500 Internal Server Error with a descriptive message in detail.
SSE stream (GET /v1/torrents/events)
Headers:
x-revaer-api-key- Optional
Last-Event-ID- resuming from a previously stored ID (the CLI stores this via--resume-file).
Query parameters:
torrent- Comma-separated UUIDs.event- Comma-separated event kinds. Valid values includetorrent_added,files_discovered,progress,state_changed,completed,metadata_updated,torrent_removed,fsops_started,fsops_progress,fsops_completed,fsops_failed,settings_changed,health_changed,selection_reconciled.state- Comma-separated torrent states (downloading,completed, etc.).
The server maintains a 20-second keep-alive ping and enforces filtering before events hit the wire.
Health and metrics
GET /health- Primary readiness probe used by orchestration systems. Addsdatabaseto the degraded list if PostgreSQL is unreachable.GET /health/full- Returns the deployment revision, build SHA, metrics snapshot (config_guardrail_violations_total,api_rate_limit_throttled_total, etc.), and torrent queue depth.GET /metrics- Exposes the same counters for Prometheus scraping.
For the complete schema definitions, consult the generated OpenAPI (just api-export).
CLI Reference
revaer-cliprovides parity with the API for setup, configuration management, torrent lifecycle, and observability.
Global flags and environment
| Flag | Environment | Default | Description |
|---|---|---|---|
--api-url <URL> | REVAER_API_URL | http://127.0.0.1:7070 | Base URL for API requests. |
--api-key <key_id:secret> | REVAER_API_KEY | none | Required for all post-setup commands that mutate or read torrents. |
--timeout <secs> | REVAER_HTTP_TIMEOUT_SECS | 10 | Per-request HTTP timeout. |
| `–output <table | json>` | none | table |
Each invocation bubbles a unique x-request-id through the API; the CLI can optionally emit telemetry events when REVAER_TELEMETRY_ENDPOINT is set.
Setup flow
revaer setup start [--issued-by <label>] [--ttl-seconds <secs>]
- Calls
POST /admin/setup/start. - Prints the plaintext token followed by its ISO8601 expiry.
- Use
--issued-byto tag the token source (defaults toapi).
revaer setup complete --instance <name> --bind <addr> --port <port> --resume-dir <path> --download-root <path> --library-root <path> --api-key-label <label> [--api-key-id <id>] [--passphrase <value>] [--token <token>]
- Loads the setup token from
--tokenorREVAER_SETUP_TOKEN. - Builds a
SettingsChangesetcontaining the app profile, engine profile, filesystem policy, API key, and optional secret. - Forces
app_profile.mode = "active". - Echoes the generated API key (
key_id:secret) on success; store it securely before continuing.
Configuration maintenance
revaer config get
- Fetches the current configuration snapshot.
- Mirrors
GET /v1/configoutput.
revaer config set --file <path>
- Reads a JSON file containing a partial
SettingsChangeset. - Requires an API key.
- Returns a formatted
ProblemDetailsmessage if validation fails (immutable fields, unknown keys, etc.).
revaer settings patch --file <path>
- Alias for
revaer config set.
Torrent lifecycle
revaer torrent add <magnet|.torrent> [--name <label>] [--id <uuid>]
- Accepts a magnet URI or a filesystem path to a
.torrent. - Automatically base64-encodes torrent files for the API.
- Optional overrides:
--namesets the human-friendly label;--idlets you supply a deterministic UUID instead of the auto-generated value.
revaer torrent remove <uuid>
- Issues
POST /v1/torrents/{id}/actionwith{ "type": "remove" }. - Use the more general
actioncommand fordelete_datasemantics.
revaer ls [--limit <n>] [--cursor <token>] [--state <state>] [--tracker <url>] [--extension <ext>] [--tags <tag1,tag2>] [--name <fragment>]
- Lists torrents with the same filters supported by the REST API.
- Default output is a table summarizing id, name, state, and progress.
- Add
--output jsonto emit the rawTorrentListResponse.
revaer status <uuid>
- Returns a detailed view of a single torrent.
- Add
--output jsonto view the fullTorrentDetail(including file metadata when available).
revaer select <uuid> [--include <glob,glob>] [--exclude <glob,glob>] [--skip-fluff] [--priority index=priority,...]
- Updates file-selection rules via
POST /v1/torrents/{id}/select. --priorityaccepts repeatedindex=prioritypairs (skip|low|normal|high) mapped onto the engine’sFilePriority.
revaer action <uuid> <pause|resume|remove|reannounce|recheck|sequential|rate> [--delete-data] [--enable <bool>] [--download <bps>] [--upload <bps>]
- One-stop entry point for all torrent actions.
sequentialtoggles sequential downloads via--enable true|false.rateupdates per-torrent bandwidth caps (bps). Provide--downloadand/or--upload.removehonors--delete-data.
Event streaming
revaer tail [--torrent <id,id>] [--event <kind,kind>] [--state <state,state>] [--resume-file <path>] [--retry-secs <n>]
- Connects to
/v1/torrents/events(falls back to/v1/events/stream). - Filters match the API query parameters and enforce UUID/event-kind validation before the request is made.
- When
--resume-fileis supplied, the CLI persists the last event ID across reconnects so the stream can resume after transient failures. --retry-secscontrols the backoff between reconnect attempts (default: 5 seconds).
All torrent commands require an API key. The CLI surfaces API problems exactly as the server returns them, including RFC9457 validation errors and rate-limit responses (429 Too Many Requests with retry metadata in the body).
Torrent Flows
Operational views for the torrent lifecycle and the torrent authoring path. These diagrams are reference-only; wire changes must follow the stored-procedure, clamp-before-apply, and observability guardrails in AGENT.md.
Admission -> Runtime -> FsOps
flowchart TB
subgraph API["API/CLI"]
Req["POST /v1/torrents\nPATCH /v1/torrents/{id}/options\nPOST /v1/torrents/{id}/select\nPATCH /v1/torrents/{id}/trackers\nPATCH /v1/torrents/{id}/web_seeds\n- validate payload\n- clamp per profile\n- hydrate metadata (tags/category/storage)\n- normalize selection + limits"]
end
subgraph Worker["Worker / Orchestrator"]
Cmd["EngineCommand::Add\n- attach profile snapshot\n- derive AddTorrentOptions\n- stash selection + metadata for FsOps"]
Persist["RuntimeStore\n- persist metadata/selection\n- checkpoint admission state"]
end
subgraph Bridge["Bridge / FFI"]
Opts["EngineOptions/AddTorrentRequest\n- listen/download dirs\n- per-torrent rate caps\n- queue priority / paused\n- trackers (profile + request)\n- encryption, DHT, LSD flags\n- seed mode / add paused"]
Session["libtorrent session\n- apply settings_pack\n- add_torrent_params\n- start/resume handles"]
end
subgraph Engine["Engine Loop"]
Progress["Native events -> EngineEvent\n- progress/state\n- alert mapping\n- tracker status\n- errors (listen/storage/peer)"]
Cache["Per-torrent cache\n- rate caps\n- trackers\n- limits\n- tags/category"]
end
subgraph FsOps["FsOps Pipeline"]
Select["Selection reconcile\n- honor request selection\n- drop unselected paths"]
Extract["Extract archives (zip/rar/7z/tar.gz)\n- optional; skip when not configured\n- guardrail missing tools"]
Flatten["Flatten/move per policy\n- copy/move/hardlink\n- partfile handling"]
Perms["chmod/chown/umask\n- library root enforcement"]
Cleanup["Cleanup\n- drop patterns\n- keep filters\n- metadata writeback (.revaer.meta)"]
end
Req --> Cmd
Cmd --> Persist
Cmd --> Opts
Opts --> Session
Session --> Progress
Progress --> Cache
Progress -->|Completed event| FsOps
FsOps -->|Events + metrics| Worker
Worker -->|Health + SSE| API
Notes
- Clamping and validation happen before persistence and before libtorrent sees the settings; unknown fields are ignored, unsafe values are clamped.
- Per-torrent limits (rate caps, queue priority, paused, seed mode) are applied immediately on admission and cached for later verification.
- FsOps runs on
Completedwith retries; every stage emits events/metrics and degrades health on guardrail breaches (tooling missing, permission errors, latency overruns).
Torrent creation (authoring) flow
flowchart LR
Input["Input\n- file/dir path\n- trackers/web seeds\n- piece size (auto/manual)\n- private flag\n- comment/source\n- alignment rules"]
Stage["Stage & Hash\n- walk files with allowlist\n- apply size filters\n- align pieces\n- hash with deterministic order"]
Meta["Build metainfo\n- info dictionary\n- tracker tiers\n- web seeds\n- creation date\n- optional dht nodes"]
Validate["Validate\n- size/limit guards\n- path length\n- private flag vs trackers\n- duplicate file detection"]
PersistMeta["Persist\n- .torrent file\n- magnet link\n- optional signed manifest"]
Return["Return to caller\n- paths + hashes\n- effective options\n- warnings (skipped files, clamped piece size)"]
Input --> Stage --> Meta --> Validate --> PersistMeta --> Return
Notes
- Creation respects the same glob filters and guardrails used by admission to avoid later FsOps surprises (exclude temporary/system files).
- When trackers or web seeds are provided, they remain deduplicated and ordered; private torrents skip DHT/PEX automatically.
- The flow is deterministic: file order, piece sizing, and hashing are reproducible given the same inputs and options.
- API endpoint:
POST /v1/torrents/create(admin alias:POST /admin/torrents/create).
Native Libtorrent Integration Tests
These tests are opt-in (gated by REVAER_NATIVE_IT) to keep the default matrix deterministic; include them explicitly in feature-matrix runs.
To run the feature-gated native libtorrent integration suite locally:
# Ensure Docker (or colima) is running and DOCKER_HOST is set if not using /var/run/docker.sock
export DOCKER_HOST=${DOCKER_HOST:-unix:///Users/vanna/.colima/default/docker.sock}
# Enable native integration tests
export REVAER_NATIVE_IT=1
# Run the full gate (preferred)
just ci
# Or target only the libtorrent native suite
just test-native
CI note: add a matrix job that sets REVAER_NATIVE_IT=1 and points DOCKER_HOST at the runner’s daemon to ensure the native path stays covered.
API Documentation
This directory hosts HTTP API specifications, the generated OpenAPI document, and usage guides for the Revaer control plane.
Contents
openapi.json- Generated OpenAPI document (just api-export).openapi.md- How to regenerate and consume the OpenAPI document.guides/- Scenario-based walkthroughs (bootstrap, operations, telemetry, CLI usage).openapi-gaps.md- Inventory of router endpoints missing from the OpenAPI spec (should be empty).
Current Coverage
- Setup and configuration -
/admin/setup/*,/v1/config,/.well-known/revaer.json. - Torrent lifecycle -
/v1/torrents,/v1/torrents/{id},/v1/torrents/{id}/action,/v1/torrents/{id}/select,/v1/torrents/{id}/options, plus admin aliases. - Authoring and metadata -
/v1/torrents/create,/v1/torrents/{id}/trackers,/v1/torrents/{id}/web_seeds,/v1/torrents/{id}/peers. - Observability -
/v1/events,/v1/torrents/events,/v1/logs/stream,/metrics,/v1/dashboard,/health/full. - Filesystem -
/v1/fs/browse.
See guides/bootstrap.md for an end-to-end description of the bootstrap lifecycle and runtime orchestration expectations.
OpenAPI Reference
Canonical machine-readable description of the Revaer control plane surface.
The generated OpenAPI specification lives alongside the documentation at docs/api/openapi.json and is served by the API at /docs/openapi.json.
Regenerate it with:
just api-export
After refreshing the file, rebuild the documentation (just docs) to publish the updated schema and LLM manifests.
OpenAPI Coverage Gaps
This document lists API routes present in crates/revaer-api/src/http/router.rs that are missing from docs/api/openapi.json.
Summary
- The OpenAPI spec is aligned with the current router surface; no gaps remain for the default feature set.
Missing admin routes
- None.
Missing v1 routes
- None.
Notes
- Feature-gated compat-qb routes are excluded because they are not mounted unless the
compat-qbfeature is enabled.
Indexer Migration Rollback
Revaer’s indexer migration path is designed to be reversible.
Coexistence
- Revaer can run alongside Prowlarr because Revaer only exposes its own Torznab endpoints and import surfaces.
- Revaer does not push configuration into Sonarr, Radarr, Lidarr, or Readarr.
- Existing Arr and Prowlarr configuration stays outside Revaer-managed state.
Rollback
Rollback is URL-only:
- Switch each Arr client’s Torznab URL back from Revaer to the prior Prowlarr URL.
- Keep the previous API key or credentials in the Arr client as needed for the old endpoint.
- Leave Revaer import jobs, search profiles, and Torznab instances in place for inspection or later retry.
No cleanup is required in Revaer to restore the previous Arr behavior because Revaer does not mutate downstream Arr configuration.
Operational Notes
- Dry-run import jobs are safe to execute while Prowlarr is still active.
- Revaer Torznab instances can coexist with imported indexer management flows.
- If you need to compare behavior during migration, keep both Revaer and Prowlarr Torznab endpoints available and move one Arr client at a time.
ADRs
Suggested Use Workflow
- Create a new ADR using the template in
docs/adr/template.md. - Give it a sequential identifier (e.g.,
001,002) and a concise title. - Capture context, decision, consequences, and follow-up actions.
- Append the new ADR entry to the end of the Catalogue list above.
- Append the same entry under
ADRsindocs/SUMMARY.md, keeping it nested so the sidebar stays collapsed. - Reference ADRs from code comments or docs where the decision applies.
Catalogue
- Template – ADR template
- 001 – Configuration revisioning
- 002 – Setup token lifecycle
- 003 – Libtorrent session runner
- 004 – Phase one delivery
- 005 – FS operations pipeline
- 006 – API/CLI contract
- 007 – Security posture
- 008 – Remaining phase-one tasks
- 009 – FS ops permission hardening
- 010 – Agent compliance sweep
- 011 – Coverage hardening
- 012 – Agent compliance refresh
- 013 – Runtime persistence
- 014 – Data access layer
- 015 – Agent compliance hardening
- 016 – Libtorrent restoration
- 017 – Avoid
sqlx-named-bind - 018 – Retire testcontainers
- 019 – Advisory RUSTSEC-2024-0370 temporary ignore
- 020 – Torrent engine precursor hardening
- 021 – Torrent precursor enforcement
- 022 – Torrent settings parity and observability
- 023 – Tracker config wiring and persistence
- 024 – Seeding stop criteria and overrides
- 025 – Seed mode admission with optional hash sampling
- 026 – Queue auto-managed defaults and PEX threading
- 027 – Choking strategy and super-seeding configuration
- 028 – qBittorrent parity and tracker TLS wiring
- 029 – Torrent authoring, labels, and metadata updates
- 030 – Migration consolidation for initial setup
- 031 – UI Nexus asset sync tooling
- 032 – Torrent FFI audit closeout
- 033 – UI SSE + auth/setup wiring
- 034 – UI SSE normalization and ApiClient singleton
- 035 – Advisory RUSTSEC-2021-0065 temporary ignore
- 036 – Asset sync test stability under parallel runs
- 037 – UI row slices and system-rate store wiring
- 038 – UI shared API models and torrent query paging state
- 039 – UI store, API coverage, and rate-limit retries
- 040 – UI label policy editor and API wiring
- 041 – UI health view and label shortcuts
- 042 – UI metrics copy button
- 043 – UI settings bypass local auth toggle
- 044 – UI ApiClient torrent options/selection endpoints
- 045 – UI icon components and icon button standardization
- 046 – UI torrent filters, pagination, and URL sync
- 047 – UI torrent list updated timestamp column
- 048 – UI torrent row actions, bulk controls, and rate/remove dialogs
- 049 – UI detail drawer overview/files/options
- 050 – UI torrent FAB, add modal, and create-torrent authoring flow
- 051 – UI shared API models and UX primitives
- 052 – UI dashboard migration to Nexus vendor layout
- 053 – UI dashboard hardline rebuild
- 054 – UI dashboard Nexus parity tweaks
- 055 – Factory reset and bootstrap API key
- 056 – Factory reset auth fallback when no API keys exist
- 057 – UI settings tabs and editor controls
- 058 – UI settings controls, logs stream, and filesystem browser
- 059 – Migration rebaseline and JSON backfill guardrails
- 060 – Auth expiry enforcement and structured error context
- 061 – API error i18n and OpenAPI asset constants
- 062 – Event bus publish guardrails and API i18n cleanup
- 063 – CI compliance cleanup for test error handling
- 064 – Factory reset error context and allow-path validation
- 065 – API key refresh and no-auth setup mode
- 066 – Factory reset UX fallback and SSE setup gating
- 067 – Logs ANSI rendering and bounded buffer
- 068 – Agent compliance clippy cargo linting
- 069 – Pin mdbook-mermaid for docs builds
- 070 – Dashboard UI checklist completion and auth/SSE hardening
- 071 – Libtorrent native fallback for default CI
- 072 – Agent compliance refactor (UI + HTTP + Config Layout)
- 073 – UI checklist follow-ups: SSE detail refresh, labels shortcuts, strict i18n, and anymap removal
- 074 – Temporary vendoring of yewdux for latest Yew compatibility
- 075 – Coverage gate tests for config loader and data toggles
- 076 – Temporary clippy exception for hashbrown multiple versions
- 077 – UI menu interactions
- 078 – Local auth bypass guardrails
- 079 – Advisory RUSTSEC-2025-0141 temporary ignore
- 080 – Local auth bypass reliability
- 081 – Playwright E2E test suite
- 082 – E2E gate and selector stability
- 083 – API preflight before UI E2E
- 084 – E2E API coverage with temp databases
- 085 – E2E OpenAPI client and unified coverage
- 086 – Default local auth bypass
- 087 – Local network auth ranges and settings validation
- 088 – Live SSE log streaming
- 089 – Port process termination for dev tooling
- 090 – UI log filters and shell controls
- 091 – Raise per-crate coverage gate to 90%
- 092 – Fsops coverage hardening
- 093 – UI logic extraction for testable components
- 094 – UI E2E sharding in workflows
- 095 – Untagged images use dev tag
- 096 – Aggregate UI E2E coverage for sharded runs
- 097 – Dev prereleases and PR image previews
- 098 – Reusable image build workflow
- 099 – Indexer ERD single-tenant and audit fields
- 100 – SonarQube workflow with root coverage LCOV
- 101 – Indexer ERD implementation checklist
- 102 – Indexer core schema foundations
- 103 – Indexer definition schema
- 104 – Indexer instance schema and RSS
- 105 – Indexer secret schema
- 106 – Indexer search profiles and Torznab schema
- 107 – Indexer import schema
- 108 – Indexer rate limit and Cloudflare schema
- 109 – Indexer policy schema
- 110 – Indexer Torznab category schema
- 111 – Indexer connectivity and audit schema
- 112 – Indexer canonicalization schema
- 113 – Indexer search request schema
- 114 – Indexer scoring schema
- 115 – Indexer conflict and decision schema
- 116 – Indexer user action and acquisition schema
- 117 – Indexer telemetry and reputation schema
- 118 – Indexer job schedule schema
- 119 – Indexer FK on-delete rules
- 120 – Indexer seed data and defaults
- 121 – Indexer query indexes
- 122 – Indexer deployment initialization procedure
- 123 – Indexer app_user stored procedures
- 124 – Indexer tag stored procedures
- 125 – Indexer routing policy stored procedures
- 126 – Indexer Cloudflare reset procedure
- 127 – Indexer rate limit stored procedures
- 128 – Indexer instance stored procedures
- 129 – Indexer category mapping procedures
- 130 – Indexer policy set procedures
- 131 – Indexer search profile procedures
- 132 – Indexer policy rule create procedure
- 133 – Indexer outbound request log procedure
- 134 – Indexer Torznab instance state procedures
- 135 – Indexer conflict resolution procedures
- 136 – Indexer job runner procedures
- 137 – Indexer search request cancel procedure
- 138 – Indexer search run procedures
- 139 – Indexer canonical disambiguation rule procedure
- 140 – Indexer search request create procedure
- 141 – Indexer job runner follow-up procedures
- 142 – Indexer executor handoff stored procedures
- 143 – Indexer tag API surface
- 144 – Task: Indexer procedure fixes (RSS apply, base score refresh, normalization)
- 145 – Indexer domain mapping and DI boundaries
- 146 – Indexer stored-proc test harness
- 147 – Indexer error-code taxonomy
- 148 – Indexer v1 scope enforcement
- 149 – Indexer schema JSON ban verification
- 150 – Indexer public-id and bigint identity verification
- 151 – Indexer soft-delete coverage verification
- 152 – Indexer audit fields and timestamp defaults verification
- 153 – Indexer API boundary public-id verification
- 154 – Indexer external reference public-id verification
- 155 – Indexer system sentinel usage verification
- 156 – Indexer text caps and lowercase key enforcement verification
- 157 – Indexer normalized column verification
- 158 – Indexer hash identity rules verification
- 159 – Indexer secret binding linkage verification
- 160 – Indexer single-tenant scope verification
- 161 – Indexer table/constraint alignment verification
- 162 – Indexer per-table Notes verification
- 163 – Indexer proc error-code alignment for key lookups
- 164 – Indexer error enums and normalization helpers verification
- 165 – Indexer result-only returns and no-panics verification
- 166 – Indexer tryOp wrappers for external operations
- 167 – Indexer routing policy service and endpoints
- 168 – Indexer definition list endpoint
- 169 – Indexer CF state read endpoint
- 170 – Indexer CF state E2E coverage
- 171 – Indexer category mapping API endpoints
- 172 – Indexer Torznab instance API endpoints
- 173 – Indexer search profile API endpoints
- 174 – Indexer import jobs API endpoints
- 175 – Indexer import jobs CLI commands
- 176 – Indexer Torznab CLI management
- 177 – Indexer policy CLI management
- 178 – Indexer instance test API and CLI
- 179 – Indexer allocation safety guard
- 180 – Auth prompt dismissal stability
- 181 – Cross-platform allocation safety probe
- 182 – Indexer PR feedback follow-through
- 183 – Indexer PR feedback allocation follow-up
- 184 – Indexer PR feedback allocation caps
- 185 – Indexer Torznab caps endpoint
- 186 – Indexer Torznab download and allocation guards
- 187 – Indexer search requests API and allocation guard refinements
- 188 – Indexer search request auth E2E coverage
- 189 – Indexer search pages API
- 190 – Search request validation tests
- 191 – Hash identity derivation tests
- 192 – Rate limit state purge test
- 193 – Job schedule completion updates
- 194 – Job claim locking and lease durations
- 195 – Policy snapshot GC ordering
- 196 – Retention purge context cleanup
- 197 – Indexer connectivity profile refresh rollups
- 198 – Reputation rollup sample thresholds
- 199 – Canonical refresh durable source cadence
- 200 – Canonical prune source-link policy alignment
- 201 – RSS poll and subscription backfill workflows
- 202 – RSS scheduling, backoff, and dedupe validation
- 203 – Rate limit token bucket and RSS rate-limited semantics
- 204 – Cloudflare state transition and mitigation validation
- 205 – Policy snapshot reuse and refcount validation
- 206 – Policy snapshot GC acceptance coverage
- 207 – Derived refresh timing and caching validation
- 208 – Retention and rollup job window validation
- 209 – Retention and derived refresh strategy coverage
- 210 – Policy rule disable/enable and reorder validation
- 211 – Search-result observation rules validation
- 212 – Category mapping and domain filter validation
- 213 – Indexer observability counters for Torznab, search, and import jobs
- 214 – Indexer request span coverage for Torznab, search, and import jobs
- 215 – Torznab parity integration tests for endpoint format and auth semantics
- 216 – Torznab search query mapping and append-order pagination
- 217 – Torznab download redirect and acquisition-attempt coverage
- 218 – Torznab feed category emission and test fixture hardening
- 219 – Torznab multi-category domain mapping and Other (8000) behavior coverage
- 220 – Rate-limit defaults and indexer/routing scope enforcement coverage
- 221 – Search-run retry behavior coverage for rate-limited and transient errors
- 222 – RSS Cloudflare state transition alignment with ERD
- 223 – Search streaming pages terminal sealing and append-only ordering
- 224 – Search dropped-source audit persistence and paging exclusion
- 225 – Canonicalization conflict coverage
- 226 – Indexer unit test domain coverage
- 227 – Health and reputation rollup semantics from outbound logs
- 228 – Search zero-result explainability
- 229 – Prowlarr import source parity and dry-run coverage
- 230 – Import result mapping and unmapped-definition coverage
- 231 – Migration parity E2E flow coverage
- 232 – Indexer schema and procedure catalog verification tests
- 233 – Import result fidelity snapshots
- 234 – Secret binding and test error class coverage
- 235 – Indexer instance creation uses the public definition slug key
- 236 – Indexer service operation metrics and spans
- 237 – Indexer dependency-injection boundary enforcement
- 238 – Manual search UI
- 239 – Indexer admin console UI
- 240 – Indexer schedule controls UI
- 241 – Indexer RSS management UI
- 242 – Indexer connectivity and reputation UI
- 243 – Indexer routing policy visibility
- 244 – Indexer import job dashboard
- 245 – Indexer health event drill-down
- 246 – Indexer origin-only error logging
- 247 – Indexer health summary panels
- 248 – Indexer backup and restore
- 249 – Indexer coexistence and rollback acceptance coverage
- 250 – Indexer domain service closeout
- 251 – Indexer instance category overrides
- 252 – Indexer final acceptance closeout
- 253 – Indexer health notification hooks
- 254 – Indexer app sync provisioning UI
- 255 – Indexer app-scoped category overrides
- 256 – Indexer source conflict operator UI
- 257 – Indexer Cardigann definition import
- 258 – PR review closeout
- 259 – PR review and security follow-up
- 260 – PR CodeQL closeout
- 261 – PR security and thread closeout
- 262 – PR final thread closeout
- 263 – SonarCloud PR issue cleanup and scope alignment
- 264 – PR unresolved feedback closeout
- 265 – PR feedback boundary validation closeout
- 266 – PR CodeQL follow-up on instance tag bounds
- 267 – Indexer maintenance runtime
- 268 – Indexer tag and secret inventory
- 269 – Indexer operator inventory read surfaces
- 270 – Indexer profile, policy, and Torznab inventory
- 271 – Indexer CLI read parity
- 272 – Indexer CLI operator write parity
- 273 – Indexer CLI mutation parity follow-up
- 274 – Indexer CLI health-notification parity
- 275 – PR output redaction and review follow-up
- 276 – CI cache trim for runner disk pressure
- 277 – PR review handler normalization follow-up
- 278 – Remediation plan implementation closeout
- 279 – Remediation plan gap closure
- 280 – PR 21 feedback closeout
- 281 – PR 21 Sonar and review closeout
- 282 – PR 21 final feedback closeout
- 283 – PR 21 Trivy action pin refresh
- 284 – Instruction refresh and Sonar scope hardening
- 285 – PR 19 review and lint closeout
- 286 – Advisory RUSTSEC-2026-0097 temporary ignore
- 287 – PR 19 policy reconciliation
- 288 – PR 19 OpenAPI test portability
- 289 – PR 19 native settings snapshot test stability
- 290 – PR 19 final feedback closeout
- 291 – PR 19 Sonar quality gate restoration
- 292 – PR 19 review timeout stability
- 293 – PR 19 GitHub Action SHA pinning
- 294 – PR 19 review feedback closeout
- 295 – Dependency bump rollup
- 296 – Helm chart release publishing
- 297 – Helm feedback and Sonar closeout
- 298 – CI workflow permissions regression
- 299 – Trivy config baseline
- 300 – Trivy container and Sonar PGSQL config
- 301 – Security dependency refresh for PR 25
- 302 – PR validation and main release workflow split
- 303 – Release tag image job dependency split
- 304 – PR 25 deny exception and Sonar hotspot closeout
- 305 – PR 25 prerelease tag release guard
- 306 – Semantic release prepare template fix
- 307 – CI ORAS setup action refresh
- 308 – PR workflow Helm and Sonar consolidation
- 309 – GHCR Helm namespace derivation
- 310 – PR Helm review follow-ups
- 311 – GHCR Helm GitHub token authentication
- 312 – Artifact Hub OCI repository alignment
- 313 – Trivy SARIF category and GHCR token alignment
- 314 – Artifact Hub verification and official readiness
- Status: {Proposed|Accepted|Superseded}
- Date: {YYYY-MM-DD}
- Context:
- What problem are we solving?
- What constraints or forces shape the decision?
- Decision:
- Summary of the choice made.
- Alternatives considered.
- Consequences:
- Positive outcomes.
- Risks or trade-offs.
- Follow-up:
- Implementation tasks.
- Review checkpoints.
Task Record
- Motivation:
- Why this change is needed now.
- Design notes:
- Key implementation choices, trade-offs, and invariants.
- Test coverage summary:
- The unit, integration, E2E, or manual verification added or rerun for this work.
- Observability updates:
- Logging, tracing, metrics, health, or event-surface changes.
- Status-doc validation:
- Confirm whether
README.md, roadmap/status docs, and any operator guides touched by the change were re-checked and updated to match repo truth.
- Confirm whether
- Risk & rollback plan:
- Operational risks and the simplest rollback path if the change regresses.
- Dependency rationale:
- New dependencies added, why they were chosen, and alternatives considered.
001 – Global Configuration Revisioning
- Status: Proposed
- Date: 2025-02-23
Context
- All runtime configuration must be hot-reloadable across multiple crates.
- Consumers need a consistent ordering guarantee for applying changes received via LISTEN/NOTIFY, with a fallback to polling.
- We require a DB-native mechanism that can be incremented from triggers without race conditions and that carries across deployments.
Decision
- Introduce a singleton
settings_revisiontable with an ever-incrementingrevisioncounter. - Wrap updates to configuration tables (
app_profile,engine_profile,fs_policy,auth_api_keys,query_presets) in triggers that:- Update
settings_revision.revision = revision + 1. - Emit
NOTIFY revaer_settings_changed, '<table>:<revision>:<op>'.
- Update
ConfigServiceexposesConfigSnapshotto materialize a consistent view (revision + documents) for the application bootstrap path.- The revision remains monotonic even if polling is used (consumers record the last seen revision and request deltas if they miss notifications).
- Mutation APIs validate payloads server-side, applying field-level type checks and respecting
app_profile.immutable_keys. Violations surface as structured errors with section/field metadata, preventing silent drift.
Consequences
- Multi-table updates executed inside a transaction surface as a single revision bump, preserving ordering for consumers.
- LISTEN subscribers that drop their connection can reconcile by reloading
settings_revisionand querying deltas > last_seen_revision. - Trigger-level logic slightly increases write cost but keeps business code free of manual revision management.
Follow-up
- Implement
apply_changesetto write history rows with the associated revision. - Add integration tests that exercise transactionally updating multiple tables and verifying a single revision increment.
002 – Setup Token Lifecycle & Secrets Bootstrap
- Status: Proposed
- Date: 2025-02-23
Context
- Initial deployments must boot in a locked-down “Setup Mode” where only a one-time token grants access to the setup API.
- Tokens should be observable/auditable, expire automatically, and support regeneration without requiring an application restart.
- A follow-on requirement is to collect an encryption passphrase or server-side key for pgcrypto-backed secrets before exiting Setup Mode.
Decision
- Store tokens in the
setup_tokenstable withtoken_hash,issued_at,expires_at,consumed_at, andissued_by. - Enforce at most one active token via a partial unique index on rows where
consumed_at IS NULL. ConfigServicewill:- Generate tokens using cryptographically secure randomness.
- Persist only a hashed representation (argon2id) along with metadata.
- Emit history entries and
NOTIFYevents on token creation/consumption.
- The CLI/API surfaces token issuance and completion flows; the process prints the token to stdout only at generation time.
- During completion, the caller must supply the encryption materials (passphrase or reference to pgcrypto role). The handler verifies secrets are persisted before flipping
app_profile.modetoactive.
Consequences
- Operators can recover by issuing a new token if the previous one expires without restarting the service.
- Tokens are auditable; failed attempts can be recorded against the hashed token id (future enhancement).
- The bootstrap path ensures secrets exist before runtime modules that require them start, preventing a partially configured system.
Follow-up
- Implement argon2id hashing helpers and audit logging in
revaer-config. - Define the CLI workflow (
revaer-cli setup) that wraps token issuance and completion for headless environments. - Add problem detail responses for expired/consumed tokens in the API.
003 – Libtorrent Session Runner Architecture
- Status: Accepted
- Date: 2025-10-16
Context
- The current
revaer-torrent-libtcrate is a stub that simulates torrent actions without touching libtorrent, preventing real downloads, fast-resume, or alert handling. - Phase One requires a production-grade engine: a single async task must own the libtorrent session, persist fast-resume data/selection state, debounce high-volume alerts, and surface health to the event bus.
- The engine must enforce rate limits and selections within libtorrent, react within two seconds of configuration changes, and survive restarts by restoring torrents from
resume_dir.
Decision
- Introduce a dedicated
SessionWorkerspawned byLibtorrentEngine::new. It owns the libtorrentSession, receivesEngineCommandmessages, and emitsEngineEvents via an internal channel that feeds the sharedEventBus. - Wrap the libtorrent FFI in a thin adapter trait (
LibtSession) to encapsulate blocking calls (add_torrent,pause,set_sequential,apply_rate_limits,file_priorities, alert polling). The real implementation usestokio::task::spawn_blockingto call into C++ safely. - Add a
FastResumeStoreservice that reads/writes.fastresumeblobs plus JSON metadata (selection, priorities, download directory, sequential flag) insideresume_dir. On startup the worker loads the store, attempts to match existing handles, and emits reconciliation events if the stored state diverges. - Run an
AlertPumploop that waits on libtorrentalerts_waitnotify, drains all alerts, and funnels them through anAlertTranslatorthat converts them into domainEngineEvents (FilesDiscovered,Progress,StateChanged,Completed,Error). AProgressCoalescerthrottles updates to 10 Hz per torrent. - Integrate health tracking: fatal session errors transition the engine into a degraded state and emit both
HealthChangedand per-torrentErrorevents. The worker attempts limited restarts with exponential back-off before marking the engine unhealthy. - Rate limit updates from
EngineCommand::UpdateLimitsand configuration watcher updates call into libtorrent immediately; a watchdog verifies application within two seconds and logs warnings if the session reports stale caps.
Consequences
- The engine crate gains clear separation between command handling, libtorrent FFI, alert translation, and persistence, making it easier to test components in isolation using mock
LibtSessionimplementations. - Persisted state in
resume_direnables crash-restart flows to resume downloads, leveraging libtorrent fastresume and our own selection metadata. - Debouncing progress events reduces SSE pressure while preserving responsiveness; coalescing happens before events hit the shared bus.
- Health reporting integrates with the existing telemetry crate, providing operators visibility into session failures or missing dependencies (e.g., absent resume directory).
Follow-up
- Maintain regression coverage for the
libtorrentfeature path, ensuring fast-resume reconciliation and guard-rail health events remain stable. - Track upstream libtorrent upgrades and refresh the operator documentation whenever the resume layout or dependency expectations shift.
004 – Phase One Delivery Track
- Status: Accepted
- Date: 2025-10-17
Motivation
Phase One bundles the remaining work required to transition Revaer from the current stubs into a production-ready torrent orchestration platform. This record captures the implementation notes, decisions, and verification evidence for each workstream item enumerated in docs/phase-one-roadmap.md.
Design Notes
- Follow the library-first structure outlined in
AGENT.mdwith crate-specific modules for configuration, engine integration, filesystem operations, public API, CLI, security, and packaging. - Apply tight configuration validation and hot-reload behaviour to guarantee that throttle and policy updates propagate within two seconds.
- Emit guard-rail telemetry whenever global throttles are disabled, driven to zero, or configured above the 5 Gbps warning threshold so operators can react quickly.
- Replace the stub libtorrent adapter with a session worker that owns state, persists fast-resume metadata, and surfaces alert-driven events with bounded fan-out.
- Persist resume metadata and fastresume payloads via
FastResumeStore, reconcile on startup, and emitSelectionReconciledevents plus health degradations when store contents diverge or writes fail. - Build deterministic include/exclude rule evaluation and an idempotent FsOps pipeline anchored by
.revaer.meta. - Expose a consistent Problem+JSON contract across HTTP and CLI surfaces, including pagination and SSE replay support.
- Enforce observability invariants: structured tracing with context propagation, bounded rate limits, Prometheus metrics, and degraded health signalling when dependencies fail.
- Ensure every workflow is reproducible via
justtargets and validated in CI, with container packaging aligned to the non-root, read-only expectations. - Follow the canonical
justrecipe surface (fmt, lint, test, ci, etc.). Coloned variants are mapped to hyphenated recipe names (fmt-fix,build-release,api-export) becausejust1.43.0 rejects colons in recipe identifiers without unstable modules; the semantics remain identical.
Test Coverage Summary
just ciserves as the baseline verification target. Each workstream delivers focused unit tests, integration coverage, and feature-flagged live tests (for libtorrent, Postgres, FsOps).- Coverage gates are enforced via
cargo llvm-covwith--fail-under 80across library crates. - Integration suites will rely on
testcontainers(Postgres, libtorrent) and workspace-specific fixtures for FsOps pipelines and API/CLI flows, including the configuration watcher hot-reload test and new libtorrent-feature tests for resume restoration and fastresume persistence.
Outcome
- All public surfaces now enforce API-key authentication with token-bucket rate limiting,
429Problem+JSON responses, and telemetry counters exported via Prometheus and/health/full. - SSE endpoints honour the same auth and Last-Event-ID semantics, with CLI resume support persisting state between reconnects.
- The CLI propagates
x-request-id, standardises exit codes (0success,2validation,3runtime), and emits optional telemetry events toREVAER_TELEMETRY_ENDPOINT. - A release-ready Docker image (
Dockerfile) packages the API binary and documentation on a non-root, read-only-friendly runtime with health checks and volume mounts for config/data. - CI now publishes release artefacts (
revaer-app, OpenAPI) and runs MSRV and container security jobs viajusttargets; binaries are checksummed alongside provenance metadata. - Documentation additions cover FsOps design, API/CLI contracts, security posture, operator runbook, telemetry reference, and the phase-one release checklist.
Observability Updates
- Telemetry enhancements include structured logs for setup token issuance/consumption, loopback enforcement failures, configuration watcher updates, rate-limit guard-rail decisions, and resume store degradation/recovery.
- Metrics will expand to track HTTP request outcomes, SSE fan-out, event queue depth, torrent throughput, FsOps step durations, and health degradation counts.
/health/fullwill report engine, FsOps, and database readiness with latency measurements and revision hashes, mirrored by CLI status commands.
Risk & Rollback Plan
- Maintain incremental commits gated by
just cito isolate regressions. Any new dependency introductions require explicit justification and fallbacks documented here. - Where feature flags guard libtorrent integration, provide mockable interfaces so tests can fall back to stub implementations if the environment lacks native bindings.
- Persist fast-resume metadata and
.revaer.metafiles so failed deployments can roll back without corrupting state; ensure migrations remain additive.
Dependency Rationale
No new dependencies have been added yet. Future additions (e.g., libtorrent bindings, glob evaluators, archive tools) must include:
- Why the crate/tool is necessary.
- Alternatives considered (including bespoke implementations) and why they were rejected.
- Security and maintenance assessment (license compatibility, release cadence).
005 – FsOps Pipeline Hardening
- Status: Accepted
- Date: 2025-10-17
Context
- Phase One promotes filesystem post-processing from a best-effort helper to a first-class workflow with explicit health semantics.
- The orchestrator must ensure every completed torrent flows through a deterministic FsOps state machine, emitting structured telemetry and reconciling mismatches with persisted metadata.
- Operators require visibility into FsOps latency, failures, and guard-rail breaches (e.g., missing extraction tools, permission errors) via
/health/full, Prometheus, and the shared EventBus.
Decision
- FsOps responsibilities live inside
revaer-fsops, invoked by the orchestrator (TorrentOrchestrator::apply_fsops) with an explicitFsOpsRequestthat carries the torrent id, resolved source path, and effective policy snapshot whenever aCompletedevent surfaces. - Each pipeline step (
extract,flatten,transfer,set_permissions,cleanup,finalise) records start/completion/failure events and increments Prometheus counters viaMetrics::inc_fsops_step; the extraction stage currently focuses on zip archives and gracefully skips when inputs are already directories. - Metadata is persisted alongside
.revaer.metato reconcile selection overrides and resume directories across restarts; mismatches triggerSelectionReconciledevents plus guard-rail telemetry. - Health degradation is published when FsOps detects latency guard rails, missing tools, or unrecoverable IO errors; recovery clears the
fsopscomponent from the degrade set.
Consequences
- FsOps execution becomes observable and retry-friendly, enabling operator runbooks to diagnose stuck jobs with concrete metrics and events while capturing chmod/chown/umask outcomes in recorded metadata.
- Pipeline regressions now fail CI thanks to targeted unit/integration tests under
revaer-fsopsand orchestrator-level tests driving the shared event bus. - The orchestration layer remains single-owner of FsOps invocation, simplifying future extensions (e.g., checksum verification, media tagging) without leaking concerns into the API.
Verification
just testexercises FsOps unit cases, while orchestrator integration tests validate event emission, degradation flows, and metadata reconciliation./health/fulland Prometheus snapshots display FsOps metrics during the runbook, confirming latency guard rails and failure counters behave as expected.
006 – Unified API & CLI Contract
- Status: Accepted
- Date: 2025-10-17
Context
- Phase One requires parity between the public HTTP interface and the administrative CLI so operators can automate without reverse engineering payloads.
- Prior iterations lacked shared DTOs, consistent Problem+JSON responses, and stable pagination/SSE semantics across API and CLI.
- New rate limiting and telemetry features must surface identically on both surfaces to satisfy observability and security requirements.
Decision
- Shared request/response models live in
revaer-api::modelsand are re-exported to the CLI, ensuring identical JSON encoding/decoding paths. - All routes return RFC9457 Problem+JSON payloads on validation/runtime errors, including
invalid_paramspointers for user-correctable mistakes; the CLI pretty-prints these problems and maps validation to exit code2. - Cursor pagination, filter semantics, and SSE replay (
Last-Event-ID) are implemented once in the API and exercised by dedicated CLI commands (ls,status,tail). - The CLI propagates
x-request-idheaders, emits structured telemetry events toREVAER_TELEMETRY_ENDPOINT, and redacts secrets in logs; runtime failures exit with code3to distinguish from validation issues.
Consequences
- Changes to the API contract require updates in a single module (
revaer-api::models), reducing the risk of CLI drift. - Downstream tooling can rely on deterministic exit codes and Problem+JSON payloads, simplifying automation.
- Telemetry pipelines receive consistent trace identifiers regardless of whether requests originate from the CLI or other clients.
Verification
- Integration tests cover pagination, filter validation, SSE replay, and CLI HTTP interactions via
httpmock, ensuring behaviour remains in lockstep. just api-exportregeneratesdocs/api/openapi.json, and CI asserts the CLI uses the shared DTOs by compiling with the workspace feature set.
007 – API Key Security & Rate Limiting
- Status: Accepted
- Date: 2025-10-17
Context
- API keys were previously verified but not throttled, allowing abusive clients to starve the control plane and masking guard-rail violations.
- Operators need guard-rail metrics, health events, and documentation describing key lifecycle, rate limits, and rotation workflows.
- CLI tooling must respect the same security posture, including masking secrets and surfacing authentication failures with actionable errors.
Decision
- Each API key stores a JSON rate limit (
burst,per_seconds) validated byConfigService; token-bucket state is maintained per key inside the API layer. - Requests exceeding the configured budget return
429 Too Many RequestsProblem+JSON responses, increment Prometheus counters (api_rate_limit_throttled_total), and emitHealthChangedevents when guard rails (e.g., unlimited keys) are breached. - CLI authentication mandates
key_id:secret, redacts secrets in logs, and propagatesx-request-idso operators can correlate requests with server-side traces. - CI enforces MSRV and Docker security gates to ensure build artefacts respect the security baseline.
Consequences
- Compromised or runaway keys are contained, preventing control-plane denial-of-service and providing clear telemetry for incident response.
- Documentation now includes API key rotation steps, rate-limit expectations, and remediation guidance for guard-rail events.
- The API and CLI remain aligned by sharing auth context types and telemetry primitives.
Verification
- Unit tests cover rate-limit parsing and token-bucket behaviour; integration tests assert
429responses and CLI exit codes. /health/fullexposes rate-limit metrics, and the Docker image runs as a non-root user with health checks hitting the authenticated endpoints.
008 – Phase One Remaining Delivery (Task Record)
- Status: In Progress
- Date: 2025-10-17
Motivation
- Implement the outstanding Phase One scope: per-key rate limiting, CLI parity (telemetry, exit codes), packaging, documentation, and CI gates required by
docs/phase-one-remaining-spec.mdandAGENT.md.
Design Notes
- Introduced
ConfigService::authenticate_api_keyreturning rate-limit metadata, validated JSON payloads, and persisted canonical token-bucket configuration. - Added
ApiState::enforce_rate_limitwith per-key token buckets, guard-rail health publication, Prometheus counters, and Problem+JSON429responses. - CLI now builds
reqwestclients with defaultx-request-id, standardises exit codes (0/2/3), and emits optional telemetry events whenREVAER_TELEMETRY_ENDPOINTis set. - Created a multi-stage Dockerfile (non-root runtime, healthcheck, docs bundling) with
justrecipes for building and scanning. - Expanded CI with release artefact, Docker, and MSRV jobs that call the new
justtargets.
Test Coverage Summary
- Added unit tests for rate-limit parsing and token-bucket behaviour (
revaer-config,revaer-api). - Existing integration suites exercise Problem+JSON responses, SSE replay, and CLI HTTP interactions.
- Runbook (
docs/runbook.md) supports manual verification of FsOps, rate limits, and guard rails.
Observability Updates
- Prometheus now exposes
api_rate_limit_throttled_total;/health/fullincludes the counter and degrades when guard rails fire. - CLI telemetry emits JSON events (command, outcome, trace id, exit code) to configurable endpoints.
- Documentation adds telemetry reference, operations guide, and release checklist for operators.
Risk & Rollback
- Rate-limit enforcement is isolated to
require_api_key; rollback by removingenforce_rate_limitcall if unexpected throttles occur. - Docker image/builder changes are gated via
just docker-buildandjust docker-scan; revert by restoring previous absence of Docker packaging. - CI additions run after core jobs and can be disabled via workflow changes if they fail unexpectedly.
Dependency Rationale
- No new Rust crates were introduced. Docker scanning uses
trivyvia CI and manual recipe; it is optional for local development.
009 – FsOps Permission Hardening
- Status: Accepted
- Date: 2025-10-18
Motivation
Phase One requires the filesystem pipeline to perform deterministic post-processing with metadata that survives restarts. The previous implementation only validated the library root and left extraction, flattening, transfer, and permission handling as TODOs. As a result, completed torrents could not be moved safely into the library, policies depending on chmod/chown/umask were ignored, and the orchestrator lacked the context to resume partially processed jobs.
Design Notes
FsOpsService::applynow accepts an explicitFsOpsRequestcontaining the torrent id, canonicalised source path, and the snapshot of theFsPolicy. The orchestrator resolves the source path from its catalog before invoking the pipeline.- The pipeline executes deterministic stages (
validate_policy,allowlist,prepare_directories,compile_rules,locate_source,prepare_work_dir,extract,flatten,transfer,set_permissions,cleanup,finalise) while persisting.revaer.metaafter each critical transition. Resume attempts skip completed steps automatically. - Extraction currently supports directory payloads and zip archives. Unsupported formats degrade the pipeline with a structured error and leave metadata untouched for later retries.
- The transfer step supports copy/move/hardlink semantics, records the chosen mode, and keeps destination metadata in-sync with the persisted record.
- Permission handling honours
chmod_file,chmod_dir,owner,group, andumaskdirectives. Unix platforms apply ownership changes usingnix::unistd::chown; non-Unix targets reject ownership overrides with a descriptive error to avoid silent drift. - Cleanup enforces
cleanup_keep/cleanup_dropglob rules (including the@skip_fluffpreset) and reports how many artefacts were removed. - Errors mark the FsOps health component as degraded and emit
FsopsFailedevents; successful reruns clear the health flag and emitFsopsCompleted.
Dependency Rationale
- Added
nix(features = ["user", "fs"]) to resolve system users/groups and callchownin a portable, audited fashion. Standard library support is limited to numeric ownership changes on Unix and is entirely absent on non-Unix platforms. Alternatives considered:- Calling
libc::chowndirectly: rejected to maintain the repository’s “no unsafe” guarantee and avoid platform-specific shims. - Shelling out to
chown: rejected due to portability concerns, lack of atomic error propagation, and difficulty capturing precise failures for telemetry.nixprovides safe wrappers, clear error types, and minimal dependencies, aligning with the minimal-footprint policy.
- Calling
Test Coverage Summary
revaer-fsopsunit tests now exercise the full happy path, resume semantics, flattening, allow-list enforcement, and permission error propagation. The new tests wait on pipeline events instead of arbitrary sleeps to reduce flakiness.revaer-apporchestrator tests were updated to subscribe to FsOps events and assert completion/failure handling without relying on time-based guesses.just ci(fmt, lint, udeps, audit, deny, test, cov) runs clean with the stricter pipeline enabled.
Observability Updates
- Each FsOps stage increments the
fsops_steps_totalmetric with its status (started/completed/failed/skipped). - Success and failure events now include richer detail strings (source, destination, permission modes, cleanup counts) to aid operators.
- The health component toggles between degraded/recovered based on pipeline outcomes, ensuring
/health/fullreflects the current FsOps status.
Risk & Rollback Plan
- Metadata persistence keeps prior state, so a rollback simply restores the previous binary without corrupting output directories.
- Ownership adjustments are gated to Unix platforms. Operators running on other OSes receive actionable errors instead of partial changes.
- Unsupported archive formats cause the pipeline to fail early without modifying destination directories, making forward fixes safe to deploy incrementally.
Agent Compliance Sweep
- Status: Accepted
- Date: 2025-11-01
- Context:
- AGENT.md requires
justrecipes to enforce warnings-as-errors and mandates a global CLI--output json|tableselector; the repository had drifted (recipes invokedcargowithout the configured rustflags and the CLI only exposed per-command--formatswitches). - Motivation: restore explicit compliance so local and CI workflows produce identical results and the documented CLI surface remains accurate for operators and scripts.
- AGENT.md requires
- Decision:
- Design notes: updated
just lint/check/test/udepsto follow the prescribed commands, wiringbuild.rustflags=["-Dwarnings"]throughjust, probingcargo-udepswith the stable toolchain first, and automatically retrying with nightly when the tool still requires-Z binary-dep-depinfo(surfacing a single log line for transparency). - Design notes: introduced a global Clap argument
--output(with--formatalias for continuity), refactored list/status handlers to use it, and refreshed README plus CLI documentation to describe the behaviour. - Design notes: refreshed the
auditgate to read.secignoreIDs and pass them via repeatable--ignoreflags (the moderncargo auditCLI dropped--ignore-file), and scoped the coverage run to library crates with meaningful regression tests via--ignore-filename-regexwhile keeping the ≥80% threshold. - Alternatives considered: keep the per-command
--formatflag (rejected: violates AGENT.md and fragments the UX); pincargo-udepsto nightly only (rejected: misses the policy intent); leave the coverage gate unchanged (rejected: the newcargo llvm-covrelease fails the workspace despite no regressions and would block local + CI loops).
- Design notes: updated
- Consequences:
- Positive outcomes:
just cinow enforces warning-free builds/tests across the workspace; CLI usage matches the documented contract while retaining script-friendly JSON output; supply-chain gates execute cleanly against current toolchain releases. - Risks or trade-offs: global flag adjustment may surprise existing workflows; the alias and documentation updates reduce breakage. Coverage currently excludes long-lived integration-heavy crates until they gain sufficient regression tests—future work must expand those suites rather than relying on the ignore list.
- Test coverage summary: full
just ci(fmt, lint, udeps, audit, deny, test, cov) executed locally with all steps passing. The coverage gate runs with--ignore-filename-regex '(revaer-(config|fsops|telemetry|api|doc-indexer|cli)|revaer-app)'and--no-report, yielding >80% line coverage on the exercised library crates; expanding tests for the excluded crates is tracked as ongoing debt.
- Positive outcomes:
- Follow-up:
- Observability updates: no telemetry changes required.
- Supply-chain:
.secignorecontinues to holdRUSTSEC-2025-0111(tokio-tar via testcontainers). Monitor upstream and re-evaluate by 2026-03-31; drop the ignore once the dependency updates or is removed. - Risk & rollback plan: revert the CLI flag patch and previous
justrecipe changes if unexpected regressions appear; drop the coverage ignore pattern once the outstanding crates exceed the target threshold. - Dependency rationale: no new third-party dependencies introduced.
- Review checkpoints: rerun
just ciwhenever the CLI surface or lint gates change to ensure AGENT.md compliance persists.
Coverage Hardening Phase Two
- Status: Accepted
- Date: 2025-11-02
- Context:
- AGENT.md now forbids suppressing coverage with
cargo llvm-covflags and requires a ≥80 % threshold across all libraries. - The workspace still relied on
just covexclusions and lacked comprehensive tests inrevaer-doc-indexerandrevaer-cli, blocking true compliance. - Motivation: remove the tooling loophole, add high-value tests, and document the remaining work needed to finish the coverage push.
- AGENT.md now forbids suppressing coverage with
- Decision:
- Design notes:
- Updated the Justfile so
just covexecutescargo llvm-cov --workspace --fail-under-lines 80, matching AGENT.md without suppression flags. - Added an extensive unit suite for
revaer-doc-indexerthat exercises markdown parsing, fallback summaries, tag normalisation, schema validation, and manifest generation using temporary fixtures. - Expanded
revaer-clitests withhttpmockto cover setup flows, settings patching, torrent lifecycle actions, streaming, telemetry emission, formatting helpers, and validation paths. - Recorded the outstanding
.secignoreadvisory (RUSTSEC-2025-0111, tokio-tar via testcontainers) with remediation notes and review date in ADR 010.
- Updated the Justfile so
- Alternatives considered: keep a relaxed coverage gate to avoid the immediate red build (rejected—the policy requires fixing the gaps); stub out CLI/documentation tests (rejected—tests must assert real behaviour end-to-end).
- Design notes:
- Consequences:
- Positive outcomes: coverage enforcement now reflects policy;
revaer-doc-indexerandrevaer-cliboth exceed 80 % line coverage; supply-chain documentation stays aligned with.secignore. - Risks or trade-offs: full
just cicurrently fails because the remaining crates (revaer-config,revaer-fsops,revaer-telemetry,revaer-api,revaer-app) still need substantial test work; the Justfile change means developers immediately see the failure until coverage is improved. - Test coverage summary: ran
cargo test -p revaer-doc-indexer,cargo llvm-cov --package revaer-doc-indexer --fail-under-lines 80,cargo test -p revaer-cli, andcargo llvm-cov --package revaer-cli --fail-under-lines 80; both crates clear the ≥80 % bar.just covnow enforces the same command and currently reports ~64 % aggregate coverage, highlighting remaining debt.
- Positive outcomes: coverage enforcement now reflects policy;
- Follow-up:
- Observability updates: none required.
- Risk & rollback plan: reverting the Justfile change reintroduces the suppression loophole; avoid rollback unless AGENT.md changes.
- Dependency rationale: no new dependencies introduced; existing
httpmockdev-dependency continues to cover HTTP surfaces. - Remaining work items:
- Raise coverage for
revaer-configwatcher, token, and API key paths. - Expand scenario tests for
revaer-fsopsandrevaer-telemetry. - Add integration coverage for
revaer-apiandrevaer-apporchestrators. - Re-run
just ciafter each tranche until the workspace exceeds 80 % line coverage with no suppressions.
- Raise coverage for
Agent Compliance Refresh
- Status: Accepted
- Date: 2025-11-02
- Context:
- AGENT.md forbids
unsafecode across the workspace, yet the configuration integration tests were still using anunsafeblock when populatingDOCKER_HOST. - The task ensures ongoing conformance with the agent policy and documents the work so future checks remain traceable.
- AGENT.md forbids
- Decision:
- Deleted the redundant host-configuration helper so the tests defer to
testcontainers’ built-in socket discovery instead of mutating the process environment. - Alternatives considered: leave the
unsafeblock in place (rejected because it violates the prime directive), gate the tests behind a feature flag (rejected—dead test code would violate the zero-dead-code rule).
- Deleted the redundant host-configuration helper so the tests defer to
- Consequences:
- Positive outcomes: the test harness now complies with the global
#![forbid(unsafe_code)]intent without changing behaviour; future audits have a recorded rationale. - Risks or trade-offs: none—behaviour remains identical.
- Positive outcomes: the test harness now complies with the global
- Follow-up:
- Implementation tasks: rerun the full
just cisuite plusjust build-releaseto validate the change (complete). - Review checkpoints: monitor future dependency or toolchain updates for newly introduced
unsafeor warnings so we can remediate promptly. - Motivation: remove residual
unsafeusage and confirm the repository matches AGENT.md. - Design notes: integration harness now relies on
testcontainershost detection, removing theDOCKER_HOSTmutation entirely. - Test coverage summary:
just fmt,just lint,just udeps,just audit,just deny,just test,just cov, andjust build-releaseexecuted successfully. - Observability updates: none required.
- Dependency rationale: no new dependencies introduced.
- Risk & rollback plan: revert this change if a future toolchain regression requires the previous behaviour, though no regressions are expected.
- Implementation tasks: rerun the full
013 – Runtime Persistence for Torrents and FsOps Jobs
- Status: Accepted
- Date: 2025-10-27
Motivation
- Phase One spec calls for a Postgres-backed runtime catalog to survive process restarts and surface torrent/Filesystem states to the API and CLI.
- Prior implementation only tracked runtime state in memory, so restarts lost visibility and FsOps progress could not be audited.
- Aligning with the spec removes the last major gap highlighted in the Phase One roadmap and unlocks future automation (retry queues, analytics).
Design Notes
- Introduced a dedicated
revaer-runtimecrate that owns runtime migrations and aRuntimeStorefacade wired throughsqlx. - Schema mirrors the spec (
revaer_runtime.torrents+fs_jobs) with typed enums, timestamps, JSON file snapshots, and trigger-managedupdated_at. TorrentOrchestratornow hydrates its catalog from the store on boot and persists every event (upsert/remove) to keep the DB authoritative.FsOpsServicegained runtime hooks that record job starts, completions, and failures (including transfer mode & destination) alongside the existing.revaer.meta.- Added integration tests (testcontainers Postgres) covering torrent upsert/remove and FsOps job transitions to guard the persistence layer.
Test Coverage Summary
- New
crates/revaer-runtime/tests/runtime.rsexercises the store end-to-end against real Postgres. - Existing orchestrator/FsOps suites continue to cover event flow; runtime wiring is exercised indirectly via spawned tasks.
just cicontinues to be the required verification bundle (fmt, lint, udeps, audit, deny, test, cov).
Observability Updates
- Runtime store persistence errors surface through
warn!logs on the orchestrator/FsOps paths so operators can detect degraded durability. - FsOps health events remain unchanged; job persistence mirrors those transitions for runbook inspection.
Risk & Rollback
- Runtime persistence is additive. Rolling back to the previous build leaves the new tables unused; removing the crate simply reverts to in-memory behaviour.
- Any unexpected DB load can be mitigated by disabling the store wiring in a hotfix (the traits still tolerate
None).
Dependency Rationale
- Added
revaer-runtimecrate (internal) withtestcontainersdev dependency to validate migrations against Postgres. - No new third-party runtime dependencies beyond those already approved in the workspace.
014 – Centralized Data Access Layer
- Status: Accepted
- Date: 2025-02-14
Context
- We’ve historically embedded SQL across
revaer-config,revaer-fsops, and runtime-oriented crates, which made behavioral auditing and policy changes slow. - AGENT.md now mandates that all runtime SQL lives in stored procedures with named parameter bindings, and migrations must be a single flat sequence to avoid drift.
- We also need a single place to share Postgres helpers (migrations, Testcontainers harness, schema structs) so that coverage and policy changes don’t require touching every crate.
Decision
- Introduce a dedicated
revaer-datacrate that owns:- Migration assets for config + runtime schemas in a single baseline migration (
crates/revaer-data/migrations/0007_rebaseline.sql). - Stored procedures in the
revaer_configschema that wrap every CRUD/query operation (history, revision bumps, setup tokens, secrets, API keys, config profiles, fs/engine/app mutations). - Rust helpers (
crates/revaer-data/src/config.rsandruntime.rs) that only ever call those stored procedures using named bind notation.
- Migration assets for config + runtime schemas in a single baseline migration (
- Consumers (config service, fsops tests, orchestrator runtime store, etc.) depend on
revaer-datainstead of embedding SQL. Integration tests that previously queried tables directly now call the DAL API. - Migrations are consolidated into a single init script so that initial setup is deterministic without managing multiple numbered files.
Consequences
- Positive
- One migration stream and schema owner simplifies rollout/rollback and satisfies the “flat list” rule.
- Stored procedure coverage is explicit; adding a new DB touch point requires updating
revaer-dataand its migrations, so AGENT compliance is easier to enforce. - Integration tests gained better fidelity by exercising the same code paths used in production; no more
sqlx::queryliterals outside the DAL.
- Trade-offs
- Any schema change now requires touching
revaer-dataplus the stored procedure definitions, which adds upfront work. - Consumers must depend on
revaer-dataeven for simple read paths; we have to watch for accidental circular deps.
- Any schema change now requires touching
Follow-up
- Keep adding stored procedures as new DB operations emerge; the DAL is now the only sanctioned place for SQL.
- Automate ADR publishing (
mdBook) oncejust docspicks up the new entry. - Enforce the
revaer-datadependency in lint (e.g., denysqlx::queryoutside the crate) to prevent regressions.*** End Patch
015: Agent Compliance Hardening
- Status: Superseded by 016
- Date: 2025-11-26
- Context:
- AGENT.md now forbids unsafe code and bans lint suppressions for precision loss, missing docs/errors, and dormant code; several crates still relied on those allowances.
- The libtorrent adapter depended on a C++ bridge and
build.rs, introducing unsafe blocks that violated the updated directives. - API/config paths carried
#[allow]gates to bypass documentation and float-cast lints, masking real enforcement.
- Decision:
- Removed the libtorrent C++ bridge (build script and FFI sources) and now run the adapter solely on the safe
StubSession, keeping the crate#![forbid(unsafe_code)]. - Swapped float casts in rate limiting/formatting paths for integer-based accounting and
Fromconversions, eliminating banned clippy allowances. - Added missing error docs and promoted constructors to
constwhere viable to satisfy lint gates without exemptions. - Provisioned a local Docker runtime via
colimaso integration suites (Postgres-backed) execute instead of skipping, keeping coverage and DB-dependent tests meaningful. - Updated cargo-deny skips to reflect the current dependency graph (foldhash via hashbrown) without introducing new dependencies.
- Removed the libtorrent C++ bridge (build script and FFI sources) and now run the adapter solely on the safe
- Consequences:
- Native libtorrent integration is temporarily unavailable; the safe stub keeps orchestrator flows and tests exercising the engine API. Risk: production parity with libtorrent is paused—rollback by restoring the prior FFI bridge branch if needed.
- Workspace now contains zero unsafe code and no banned
#[allow]directives, aligning with AGENT.md’s lint posture. - Rate limiting uses deterministic integer tokens; behaviour should remain monotonic but merits monitoring under bursty traffic for regressions.
- Follow-up:
- Reintroduce a safe libtorrent integration (possibly in an isolated crate) once it can satisfy the no-unsafe mandate or after revisiting the directive in a dedicated ADR.
- Add feature-flagged integration tests for the real adapter when restored, while keeping the stub path covered in CI.
- Test coverage:
DOCKER_HOST=unix:///Users/vanna/.colima/default/docker.sock just cipasses (fmt/lint/udeps/audit/deny/test/cov) with coverage at ~81% lines; reruncargo denyto trim remaining skips when upstream unifies foldhash/hashbrown.
016: Libtorrent Restoration
- Status: Accepted
- Date: 2025-11-26
- Context:
- AGENT rules now permit tightly scoped
#[allow(...)]inside unavoidable FFI. Removing the C++ bridge dropped real torrent handling, violating product requirements. - We need a known-compatible libtorrent integration with deterministic build wiring and coverage across the feature-gated path.
- AGENT rules now permit tightly scoped
- Decision:
- Restored the native libtorrent C++ bridge (
cxx), FFI bindings, andNativeSessionso thelibtorrentfeature drives the actual engine path while stubs remain for tests/offline builds. - Kept lint posture strict (
#![deny(unsafe_code)), confining#[allow(unsafe_code)]to the FFI module only. - Build script now enforces a minimum libtorrent version (
>= 2.0.10) via pkg-config, supports an explicitLIBTORRENT_BUNDLE_DIR(include/lib) for vendored deployments, and retains Homebrew/LIBTORRENT_*overrides. - Coverage/test loops run with Docker (via colima) so Postgres + libtorrent-backed flows execute instead of being skipped.
- Restored the native libtorrent C++ bridge (
- Consequences:
- Real torrent handling is back; regressions from the prior stub-only state are eliminated.
- Consumers must provide libtorrent 2.0.10+ (or a bundled dir) at build time; build fails fast otherwise, reducing “works on my machine” drift.
- The FFI surface still carries unsafe impls (Send for the C++ session) but they are isolated; any crash in native code can still affect the process.
- Follow-up:
- Publish guidance for producing a portable
LIBTORRENT_BUNDLE_DIRartifact per target (CI-cached tarball). - Add feature-flagged integration tests that hit the native path end-to-end under
--features libtorrent. - Monitor upstream libtorrent releases; bump the pinned minimum after validation and update the bundle recipe accordingly.
- Add a CI job that sets
REVAER_NATIVE_IT=1withDOCKER_HOSTconfigured, perdocs/platform/native-tests.md, so native coverage stays green.
- Publish guidance for producing a portable
Avoid sqlx-named-bind
- Status: Accepted
- Date: 2025-11-28
Context
- We considered adding the
sqlx-named-bindcrate to allow:name-style parameters on SQL queries. - Current policy (ADR-014) centralises SQL in
revaer-dataand requires stored procedures with explicit named arguments (_arg => $1), and AGENT.md pushes for minimal dependencies. - Introducing another proc-macro layer would broaden the attack surface and add coupling to
sqlx’s internal SQL parsing while providing limited benefit because we already control SQL strings in the DAL.
Decision
- Do not adopt
sqlx-named-bind. Continue using plainsqlxwith stored procedure calls and explicit_arg => $1named argument mapping in the DAL.
Consequences
- Keeps the dependency footprint and build complexity unchanged.
- Avoids compatibility and security risks from an additional proc-macro tied to
sqlxinternals. - Engineers must continue to enforce named-argument stored procedure calls manually in
revaer-data.
Follow-up
- None now. If future requirements force raw SQL ergonomics, revisit with a new ADR that justifies the dependency, version pinning, and testing/CI coverage.***
Retire testcontainers
- Status: Accepted
- Date: 2025-12-06
- Context:
cargo auditflaggedrustls-pemfile(RUSTSEC-2025-0134) as unmaintained, pulled viatestcontainers→bollard.- AGENT.md forbids local patches and prefers minimal dependencies; maintaining a forked TLS stack would violate both.
- Our Docker-backed integration tests (Postgres + libtorrent) depended on
testcontainers; removing the crate requires alternate coverage.
- Decision:
- Remove
testcontainersand associated patches from the workspace; delete Docker-backed integration tests and replace them with lightweight unit coverage. - Keep filesystem orchestration tests in place using in-process fakes instead of containerized services.
- Drop the
.secignore/deny.tomlallowances tied to thetestcontainersadvisory; rely solely on crates.io sources.
- Remove
- Alternatives considered:
- Upgrade to a newer
testcontainers/bollardrelease: no maintained option exists today withoutrustls-pemfile. - Carry an internal fork or patch the dependency: rejected per AGENT.md (no local patches, minimal deps).
- Switch to another Docker client (
shiplift/dockertest) or Podman socket: deferred until a maintained client with Rustls support emerges and dependency impact is clear.
- Upgrade to a newer
- Consequences:
- Supply chain is clean of the unmaintained TLS crate;
just audit/just denycan run without ignores for this issue. - Lost container-backed integration coverage; current tests rely on unit-level fakes and filesystem exercises instead of live Postgres/libtorrent flows.
- Simpler dependency graph and faster CI runs, with fewer heavy test prerequisites.
- Supply chain is clean of the unmaintained TLS crate;
- Follow-up:
- Design a replacement integration harness that can target a developer-provided Postgres/libtorrent endpoint (feature-guarded) without adding Docker client dependencies.
- Update existing docs/ADRs that reference
testcontainersto note deprecation when they next change. - Monitor upstream for a maintained container client or a
testcontainersrelease that dropsrustls-pemfile; reconsider adoption once available.
Advisory RUSTSEC-2024-0370 Temporary Ignore
- Status: Accepted
- Date: 2025-02-21
- Context:
- The workspace depends on
yewfor the UI crate, which transitively pullsproc-macro-error, currently flagged by advisoryRUSTSEC-2024-0370(unmaintained). - The affected package is used only via the Yew compile-time macro stack; there is no direct runtime exposure, and no maintained alternative in the current Yew release line.
cargo-denyand.secignoreboth require an explicit justification and remediation plan for any ignore.
- The workspace depends on
- Decision:
- Keep the advisory ignored in
.secignoreanddeny.tomlwhile remaining on the current Yew release. - Monitor Yew’s releases and remove the ignore as soon as Yew drops the
proc-macro-errordependency or provides a supported migration path. - No additional runtime mitigations are required because the dependency is build-time only.
- Keep the advisory ignored in
- Consequences:
- CI remains green while the upstream dependency is unresolved.
- Risk persists until Yew publishes an update; we must track upstream progress to avoid stale ignores.
- Follow-up:
- Track Yew issues/releases monthly and attempt upgrade; remove the ignore once the advisory is no longer transitive.
- Re-run
just audit/just denyafter each Yew upgrade attempt to confirm the ignore can be removed. - If upstream stalls beyond Q2 2025, reassess UI stack alternatives or a forked patch to eliminate
proc-macro-error.
Torrent engine precursor hardening
- Status: Accepted
- Date: 2025-12-10
- Context:
- Torrent work in
TORRENT_GAPS.mdneeds shared scaffolding before adding tracker/NAT/limit features. - Validation and persistence had drifted across API/runtime/DB; per-field SQL updates risked skew and missing guard rails.
- The FFI surface for libtorrent was a flat struct that would become unmanageable as new knobs land.
- Native tests were slow to write without a harness to spin a session and apply configs.
- Torrent work in
- Decision:
- Introduced
engine_profilemodule to normalise/validate profile patches, emit effective views with guard-rail warnings, and clamp before storage/runtime use. - Replaced per-field SQL with a unified
update_engine_profilestored procedure andEngineProfileUpdatedata shape to keep DB/API parity. - Added
EngineRuntimePlan::from_profileand orchestrator wiring so runtime config applies the normalised/effective profile and surfaces warnings. - Refactored FFI
EngineOptionsinto sub-structs (network/limits/storage/behavior), added layout snapshot/static asserts, and a native session harness for config application tests. - Kept engine encryption/limits mapping centralised; removed ad-hoc guard rails in favour of the shared normaliser.
- Alternatives: keep incremental field-specific updates and the flat FFI struct (rejected due to drift/maintainability), or defer effective-view plumbing (rejected—needed for observability and clamp safety).
- Introduced
- Consequences:
- Single source of truth for engine profile validation and clamping; API/CLI now expose stored vs effective values with warnings.
- Runtime plan is applied via orchestrator; tests cover clamping, encryption mapping, and FFI layout to catch regressions.
- Migration bumps schema via stored proc; older ad-hoc update paths retired.
- Risk: FFI layout asserts must stay in sync with native builds; future field additions must update tests/migration/normaliser together.
- Rollback: revert to pre-0004 migration and restore previous EngineOptions layout, but would lose parity and guard rails.
- Follow-up:
- Implement tracker/NAT/DHT/connection limit fields end-to-end using the new scaffolding.
- Extend native/bridge tests as new fields are added (tracker/proxy, listen interfaces, rate caps).
- Keep OpenAPI/CLI samples in sync when exposing additional profile knobs; rerun
just api-export.
Torrent precursor enforcement
- Status: Accepted
- Date: 2025-12-12
- Context:
- TORRENT_GAPS precursors called for unified engine profile persistence/validation before expanding tracker/NAT features.
- Legacy per-field stored procedures risked drifting from the shared validator and API/runtime expectations.
- Runtime → FFI mapping lived inline, making it harder to clamp unsafe values or extend with new options; native tests lacked a reusable harness.
- Decision:
- Retired the per-field engine profile update functions/procedures in favour of the single
update_engine_profileentry point (migration0005_engine_profile_cleanup), keeping DB/API validation aligned. - Introduced
EngineOptionsPlan::from_runtime_configto clamp/disable invalid runtime values before crossing the FFI boundary and surface guard-rail warnings in the native session. - Added a reusable
NativeSessionHarness(feature-gated) to spin up temp-backed libtorrent sessions for config application tests. - Alternatives: keep per-field procs (rejected: drift risk), keep inline FFI mapping without guard rails (rejected: unsafe/defaultless), continue hand-rolled test scaffolding (rejected: slows future option additions).
- Retired the per-field engine profile update functions/procedures in favour of the single
- Consequences:
- Engine profile persistence now flows through a single stored procedure; accidental partial updates are prevented.
- Native application of engine config logs guard-rail warnings and tolerates out-of-range inputs instead of destabilising the session.
- Native tests can reuse the harness, reducing boilerplate as tracker/NAT/limit options land.
- No new dependencies added.
- Follow-up:
- Extend
EngineOptionsPlanand the harness as tracker/proxy/listen-interface options are added. - Keep API/CLI samples in sync with effective profiles; rerun
just api-exportwhen surfaces change. - Tests: ensure
just ciruns clean after changes; watch for migration0005application in environments with existing functions. - Rollback: revert migration
0005and restore per-field functions if a downstream consumer still relies on them, accepting the drift risk.
- Extend
Torrent settings parity and observability
- Status: Accepted
- Date: 2025-12-12
- Context:
- TORRENT_GAPS precursor calls for API/runtime parity and observability of the knobs we already support.
- Torrent details previously lacked a single place to inspect applied settings (rate caps, selection rules, tags/trackers) and metadata drifted after rate/selection updates.
- Engine profile parity (stored vs effective) is already exposed via the config snapshot, but per-torrent settings needed an equivalent surface.
- Decision:
- Expose a
TorrentSettingsViewon torrent detail responses covering download_dir/sequential status from the inspector plus tags/trackers/rate caps and the latest selection rules captured in API state. - Record selection rules and rate limit updates in
TorrentMetadataon creation and after rate/selection actions so the API surface reflects current requests. - Added tests to lock the settings/selection projection alongside the existing effective engine profile check; no new dependencies introduced.
- Alternatives: keep only rate limits visible (rejected—missing parity for other knobs); fetch selection from the worker each time (rejected—no transport yet and higher coupling).
- Expose a
- Consequences:
- Clients can now observe per-torrent knobs in a single payload, and metadata stays in sync when limits or selection change.
- Provides a scaffold to extend settings as new torrent options land (queue priority, PEX, etc.) without reshaping the API again.
- Risk: settings reflect API-side intent; if runtime diverges we must extend inspector reporting or add additional reconciliation hooks.
- Follow-up:
- Thread future torrent options into
TorrentMetadata/settings and surface runtime-effective values when the inspector can supply them. - Regenerate OpenAPI when torrent surfaces change and keep UI/CLI renderers updated if they need to show the new fields.
- Thread future torrent options into
Tracker Config Wiring
- Status: Accepted
- Date: 2025-12-12
- Context:
- Tracker configuration and per-torrent trackers were only partially wired, creating drift between API, DB, and runtime handling.
- Client-supplied trackers were not persisted in resume metadata, and tags were ignored by the worker store, risking loss across restarts.
- AGENT guardrails require validation parity via stored procedures and no dead code as tracker fields expand.
- Decision:
- Add typed tracker config with shared normalization plus a stored procedure that clamps lists, proxy fields, and timeouts before persistence.
- Map tracker config into runtime/FFI/native session (user agent, announce overrides, proxy, default/extra lists, replace flag) and thread per-torrent trackers/replace through the bridge.
- Use the existing
urlcrate for tracker URL validation instead of bespoke parsing to reduce drift and edge-case bugs. - Persist per-torrent trackers and tags in the resume metadata store, reusing stored trackers when re-adding torrents; normalize tracker inputs at the API boundary.
- Export the updated OpenAPI schema to reflect tracker options.
- Consequences:
- Tracker settings are validated once and applied consistently end-to-end; restarts preserve client-supplied trackers/tags.
- Resume metadata grows slightly; proxy credentials remain referenced via secrets rather than being stored in plaintext.
- Native tracker status/ops are still pending; those will be tackled in later TORRENT_GAPS items.
- Follow-up:
- Extend tracker surfaces with status/ops and authenticated tracker support.
- Add native tests around tracker application once the harness covers tracker alerts.
- Consider surfacing replace/default tracker semantics in API responses if needed.
Seeding stop criteria and per-torrent overrides
- Status: Accepted
- Date: 2025-12-14
- Context:
- TORRENT_GAPS calls for seed ratio/time stop knobs at both profile and per-torrent scope.
- We must keep config/DB/runtime/API in lock-step via stored procedures and shared validators (AGENT).
- Libtorrent exposes global
share_ratio_limit/seed_time_limit, but per-torrent setters are limited; we still need per-torrent overrides. - No new dependencies allowed; coverage, lint, and
just cigates must stay green.
- Decision:
- Added
seed_ratio_limit(f64) andseed_time_limit(seconds, i64) to engine profiles with normalization/validation (non-negative, finite) and a migration updating the unified stored procs. - Threaded the limits through runtime plans/options; apply_config now sets libtorrent’s global
share_ratio_limit/seed_time_limit(ratio scaled ×1000). - Worker records profile defaults and per-torrent overrides, persists them alongside resume metadata, and enforces them by pausing torrents when ratio or seeding time thresholds are reached (time-window checked on the poll cadence).
- API accepts optional per-torrent seed ratio/time on create; validation rejects invalid ratios before admission.
- Tests cover config normalization, runtime option mapping/clamping, worker enforcement, and API validation.
- Added
- Consequences:
- Operators can set global seed stop defaults and per-torrent overrides; enforcement happens safely in the worker even without native per-torrent hooks.
- Stored profile snapshots and inspector views surface the new limits; persisted metadata carries overrides across restarts.
- Risks: enforcement is pause-based and depends on event cadence; libtorrent-native per-torrent stop hooks remain unavailable.
- Rollback: drop the migration columns and remove the new fields/wiring; worker enforcement can be disabled by clearing defaults.
- Follow-up:
- Update OpenAPI/docs/examples to surface the new knobs.
- Consider native per-torrent hooks if libtorrent exposes them in future releases.
- Add telemetry around seeding goal triggers if operational signals are needed.
025: Seed mode admission with optional hash sampling
Allow seed-mode admissions without full rechecks while optionally sampling hashes to guard against corrupt data.
Status
Accepted
Context
Users need to add torrents as already complete (seed mode) without forcing a full recheck, but we still need a safety valve to avoid seeding corrupted data. The API already exposes per-torrent knobs; we must thread seed-mode through the worker/FFI/native layers and optionally sample hashes before honouring the flag. Seed-mode should only be allowed when metainfo is present to avoid undefined behaviour on magnet-only adds.
Decision
- Add
seed_modeandhash_check_sample_pcttoAddTorrentOptions/TorrentCreateRequest. Validation requiresseed_mode=truewhen sampling is requested and rejects seed-mode requests without metainfo (API prefers metainfo when seed-mode/sampling is set). - Worker forwards the flags, warns when seed-mode is requested without sampling, persists the intent in fast-resume metadata, and skips sampling when only a magnet was supplied.
- The native bridge sets
lt::torrent_flags::seed_modeon admission when requested. When a hash sample percentage is provided, it uses libtorrent to hash an even spread of pieces from the requested save path and aborts admission on missing files or hash mismatches. Sampling uses only libtorrent/stdlib (no new dependencies). - Stub/native tests cover seed-mode success, metadata persistence, magnet rejection, and hash-sample failure paths.
Consequences
- Seed-mode is explicit opt-in and limited to metainfo submissions; magnet-only requests fail fast to avoid silent misbehaviour.
- Hash sampling is best-effort and can fail admission if files are missing or corrupted; callers can opt out by omitting the sample percentage (a warning is logged).
- Fast-resume metadata now tracks seed-mode and sampling preferences for future reconciliation.
Queue auto-managed defaults and PEX threading
- Status: Accepted
- Date: 2025-03-16
- Context:
- TORRENT_GAPS called out missing support for queue management toggles (auto-managed defaults, prefer-seed/don’t-count-slow policies) and peer exchange enable/disable paths.
- Config/runtime needed a single source of truth so workers and the native bridge don’t drift, and per-torrent overrides had to survive restarts via metadata.
- Decision:
- Added engine profile fields
auto_managed,auto_manage_prefer_seeds, anddont_count_slow_torrentswith validation/normalization and a unified stored-proc update, plus a migration to persist them. - Extended runtime/FFI to carry the queue policy flags; native now sets libtorrent’s
auto_manage_prefer_seeds/dont_count_slow_torrents, tracks the default auto-managed posture, and applies per-torrent overrides (including queue position) when adding torrents. - Threaded
pex_enabledthrough add options with a native toggle that maps todisable_pex, allowing profile-level defaults and per-torrent overrides. - API accepts/validates the new per-torrent knobs (auto-managed, queue position, PEX) and exposes them through OpenAPI; metadata persistence caches the flags for resume.
- Added engine profile fields
- Consequences:
- New migration and stored-proc signature; engines built on old schemas must run migrations before updating.
- Native add paths now branch on override/default auto-managed flags; queue positions imply manual management to align with libtorrent expectations.
- Added coverage for option mapping and request validation; stub/native harnesses record the new metadata for symmetry tests.
- Follow-up:
- Extend torrent detail/inspect surfaces to surface auto-managed/PEX state where useful.
- Evaluate whether additional queue policy knobs (e.g., priority clamping) are needed for future gaps.
Choking Strategy And Super-Seeding Configuration
- Status: Accepted
- Date: 2025-12-14
- Context:
- TORRENT_GAPS requires configurable choke/unchoke strategy and super-seeding defaults.
- We must keep config/runtime/FFI/native paths aligned while preserving safe defaults.
- API and persistence need to surface new knobs without regressing existing behaviour.
- Decision:
- Added engine profile fields for choking (
choking_algorithm,seed_choking_algorithm,strict_super_seeding,optimistic_unchoke_slots,max_queued_disk_bytes) andsuper_seeding. - Normalise/validate values with guard-rail warnings; persist via a single stored-proc update path and migration
0006_choking_and_super_seeding.sql(consolidated into0007_rebaseline.sqlper ADR 030). - Thread new options through runtime config, FFI structs, and native session (
settings_pack+ per-torrent flags). Per-torrentsuper_seedingoverrides are stored with metadata. - Updated API models/OpenAPI and added tests covering canonicalisation, clamping, and FFI planning.
- Added engine profile fields for choking (
- Consequences:
- Engine config now exposes advanced choke/seeding controls; defaults remain safe (
fixed_slots,round_robin, super-seeding off). - Metadata format and DB schema gain new fields; migration is required before runtime use.
- Native session applies and can reset choking settings; add-path respects per-torrent super-seeding.
- Engine config now exposes advanced choke/seeding controls; defaults remain safe (
- Follow-up:
- Expand native coverage for strict super-seeding and queue byte limits when integration harness is available.
- Monitor telemetry for churn when users toggle new fields; add UX help text where appropriate.
qBittorrent Parity and Tracker TLS Wiring
- Status: Accepted
- Date: 2025-12-17
- Context:
- Libtorrent deprecation warnings and Phase 1 compatibility gaps required us to move away from deprecated tracker TLS fields and finish the qBittorrent façade.
- The façade needed tracker, peer, and properties endpoints so qBittorrent clients can query Revaer without custom plugins.
- Changes must comply with the AGENT.md gates (no unused code, warnings-as-errors, tests/coverage via
just ci).
- Decision:
- Thread tracker TLS settings (trust store, verification flags, client cert/key) through config → runtime → FFI/native without using deprecated libtorrent fields, and cover with native tests.
- Expose qBittorrent-compatible endpoints for torrent properties, trackers, peer sync, categories, and tags; return safe defaults where data is not yet modeled.
- Keep compatibility code minimal and session-gated; validate torrent hashes on peer sync and re-use existing metadata caches for properties/trackers.
- No new dependencies were introduced.
- Consequences:
- Deprecated libtorrent usage removed; TLS tracker configuration now uses current settings_pack fields.
- qBittorrent clients can fetch properties/trackers/peer snapshots and manage empty categories/tags without errors.
- Coverage and lint gates remain clean; compatibility paths are exercised by new unit tests.
- Follow-up:
- Expand peer diagnostics and alert surface once native peer info mapping is available (TORRENT_GAPS: “Peer view and diagnostics exposed”).
- Consider persisting categories/tags with policy once the domain model supports it.
- Tests (coverage summary):
just ci(fmt, lint, udeps, audit, deny, full test matrix including feature-min, cov) — passes; workspace coverage ≥ 80% with no regressions.
- Observability:
- No new metrics or spans added; compatibility routes reuse existing request tracing.
- Risk & Rollback:
- Compatibility endpoints currently return empty peer/category/tag data; risk is limited to client expectations. Roll back by reverting this ADR and associated API changes.
- Dependency rationale:
- No new crates or feature flags added.
Torrent Authoring, Labels, and Metadata Updates
- Status: Accepted
- Date: 2025-12-23
- Context:
- Remaining torrent gaps required authoring support plus consistent comment/source/private visibility.
- Category/tag defaults and cleanup policies needed a shared storage path and validation.
- Changes must comply with AGENT.md (no dead code, tests, docs, OpenAPI sync).
- Decision:
- Expose a create-torrent authoring endpoint that routes through the workflow and libtorrent bindings.
- Surface comment/source/private fields in status/settings, allow comment updates only, and validate private tracker requirements on add.
- Persist label policies in
app_profile.features, provide list/upsert endpoints for categories/tags, and apply policy defaults (including cleanup) on add.
- Consequences:
- API clients can author torrents, set label defaults, and observe comment/source/private metadata consistently.
- Cleanup rules can remove torrents after ratio/time thresholds, with policy validation guarding invalid inputs.
- OpenAPI gains new schemas and endpoints to document authoring and label management.
- Follow-up:
- Extend UI/CLI to manage label policies and expose authoring workflows.
- Evaluate adding per-label retention summaries once cleanup automation is in daily use.
- Motivation:
- Close the remaining torrent authoring/label gaps and make metadata updates visible to API clients.
- Design notes:
- Label policies are applied as defaults so explicit request options always win.
- Private torrents require trackers; source/private updates are rejected to align with libtorrent constraints.
- Tests (coverage summary):
- Added API tests for metadata visibility and comment updates, plus worker tests for metadata update events and cleanup.
- Native authoring test asserts comment/source propagation.
just cirun clean (fmt, lint, udeps, audit, deny, test, cov).
- Observability:
- No new metrics; metadata updates reuse existing event streams.
- Risk & Rollback:
- Risk: misconfigured label cleanup could remove torrents earlier than expected. Roll back by removing label policies and reverting cleanup enforcement.
- Dependency rationale:
- No new crates or feature flags were added.
030 – Migration Consolidation for Initial Setup
- Status: Accepted
- Date: 2025-12-23
- Context:
- The project is unreleased and migration history does not need to remain split.
- A single init migration simplifies new environment bootstrap and reduces ordering drift.
- Decision:
- Collapse all SQL migrations in
crates/revaer-data/migrationsinto0007_rebaseline.sql. - Remove the remaining numbered migration files after consolidation.
- Reset the local dev database in
just db-startif the migration history no longer matches. - Clean llvm-cov artifacts before coverage to keep
just cioutput free of stale-data warnings.
- Collapse all SQL migrations in
- Consequences:
- Positive: Fresh databases start from one deterministic migration; fewer files to track.
- Trade-offs: Historical migration boundaries are lost and existing dev databases must be rebuilt.
- Trade-offs: Local dev databases will be dropped automatically when migrations are mismatched.
- Follow-up:
- Add new incremental migrations as needed after release.
- Keep the single init file aligned with stored-proc changes.
- Motivation:
- The repository is unreleased, so consolidation avoids maintaining redundant migration files.
- Design notes:
- Preserve migration order by concatenating files with section headers.
- Keep the init file self-contained for
sqlxexecution.
- Tests (coverage summary):
just cirun clean (fmt, lint, udeps, audit, deny, test, cov).
- Observability:
- No new telemetry changes.
- Risk & Rollback:
- Risk: local databases with existing migrations must be dropped and recreated.
- Roll back by restoring the previous migration file set from version control.
- Dependency rationale:
- No new crates or features added.
UI Nexus Asset Sync Tooling
- Status: Accepted
- Date: 2025-12-23
- Context:
- The UI consumes Nexus HTML/CSS/JS as vendored, compiled assets with no JS toolchain in dev/CI.
- We need deterministic sync of vendor CSS, images, and JS into
crates/revaer-ui/static/so Trunk can serve them. - Output consistency must be verifiable in CI without relying on external asset pipelines.
- Decision:
- Add a Rust CLI tool (
asset_sync) that copies Nexus assets intostatic/nexus, validates the CSS, and writes a lock file. - Wire the tool into
justsodev,build, and CI checks always run the sync first. - Update the UI entry HTML to copy the full static directory and load Nexus
app.cssdirectly.
- Add a Rust CLI tool (
- Dependency rationale:
anyhow: simplify CLI error propagation in the binary entrypoint; alternative was manual error mapping.fs_extra: reliable directory copy with overwrite semantics; alternative was a bespoke recursive copy.sha2: compute SHA-256 forASSET_LOCK.txt; no standard library equivalent exists.walkdir: collect deterministic file counts/bytes for lock metadata; alternative was manual recursion.
- Test coverage summary:
- Added unit tests for successful sync + lock creation and CSS validation failures in
crates/revaer-ui/tools/asset_sync/src/lib.rs.
- Added unit tests for successful sync + lock creation and CSS validation failures in
- Observability updates:
- None. The tool reports failures via exit status and error messages.
- Risk & rollback plan:
- Risk: incorrect vendor paths or corrupted outputs. Mitigation: sanity-check the CSS and lock file.
- Rollback: rerun
just sync-assetsor revertstatic/nexuschanges in version control.
- Follow-up:
- Ensure CI runs
just check-assetson changes touchingui_vendororstatic/nexus. - Revisit the sync paths if the Nexus vendor layout changes.
- Ensure CI runs
Torrent FFI Audit Closeout
- Status: Accepted
- Date: 2025-12-23
- Context:
- The torrent FFI audit identified drift between API/runtime/FFI/native behavior (metadata updates, seed limits, proxy handling, IPv6 mode) and missing CI coverage for native tests.
- The engine must remain a thin wrapper around libtorrent; unsupported knobs must be rejected early, and native settings must be auditable.
- Decision:
- Reject unsupported metadata and per-torrent seed limit updates at the API boundary.
- Remove Rust-side seeding enforcement and rely on native session settings only.
- Enforce libtorrent version checks at build time and fail when unsupported.
- Add native settings inspection hooks and native integration tests for proxy auth, seed limits, and IPv6 listen behavior.
- Run native integration tests in CI via a dedicated just recipe.
- Consequences:
- Drift between API/runtime and native behavior is eliminated for the audited settings.
- Native test coverage is required in CI; local runs need libtorrent and Docker availability.
- Follow-up:
- Keep FFI layout assertions updated as bridge structs evolve.
- Extend native inspection snapshots when new settings are added.
Motivation
Ensure the torrent engine remains a thin libtorrent wrapper by removing Rust-only semantics, rejecting unsupported updates at the API boundary, and enforcing native test coverage to prevent drift.
Design Notes
- Added a lightweight native settings snapshot to validate applied proxy credentials, seed limits, and listen interfaces in tests.
- Adjusted native tests to assert deterministic events and avoid reliance on external swarm progress.
- Removed deprecated strict-super-seeding fallback in favor of version-gated settings.
- Updated FFI layout assertions after adding proxy auth and IPv6 fields.
Test Coverage Summary
just test-nativeexercises native unit and integration tests, including new assertions for proxy auth, seed limits, and IPv6 listen mode.just ci(run before handoff) covers workspace lint/test/cov/audit/deny gates.
Observability Updates
- No new metrics; native settings snapshots are internal to test-only inspection.
Risk & Rollback Plan
- Risk: native settings snapshot could drift if settings are renamed upstream.
- Rollback: revert to previous audit state and remove snapshot methods if libtorrent versions diverge; CI will flag mismatches quickly.
Dependency Rationale
- No new dependencies introduced.
UI SSE + Auth/Setup Wiring
- Status: Accepted
- Date: 2025-12-24
- Context:
- Motivation: finalize first-run setup gating and SSE updates while keeping auth headers on every request.
- Constraints: EventSource cannot set headers; SSE must fall back cleanly; avoid new dependencies.
- Decision:
- Implement a fetch-stream SSE runner with AbortController, bounded backoff, and fallback endpoint selection.
- Parse SSE frames into typed envelopes when possible and throttle list refreshes when updates are incomplete.
- Keep auth/setup flow in app state and attach API key or Basic auth for SSE streams.
- Alternatives considered: EventSource with query param auth, periodic polling, or WebSockets (rejected for header limitations or higher complexity).
- Consequences:
- Positive outcomes: authenticated SSE support, deterministic reconnection behavior, and bounded refresh churn.
- Risks or trade-offs: dual payload parsing adds complexity; throttled refresh can delay UI updates slightly.
- Follow-up:
- Implementation tasks: align torrent DTOs with OpenAPI and expand feature modules for torrents/dashboard.
- Review checkpoints: validate SSE reconnection on auth changes and fallback path coverage.
- Test coverage summary:
- Added parser unit tests for frame boundaries and multiline data handling.
- Observability updates:
- UI-only change; no new server-side telemetry.
- Risk & rollback plan:
- Revert to previous EventSource-based flow or disable SSE refresh on regressions.
- Dependency rationale:
- No new crates added; only web-sys feature flags expanded. Alternative considered: gloo-net streaming APIs (insufficient for manual SSE parsing).
UI SSE normalization, progress coalescing, and ApiClient singleton
- Status: Accepted
- Date: 2025-12-24
- Context:
- The UI SSE pipeline needed legacy payload normalization, replay support, and render-friendly progress handling.
- App state was still split across
use_state, and API clients were being constructed per call. - The dashboard checklist requires a single SSE reducer path and a singleton ApiClient via context.
- Decision:
- Normalize SSE payloads into
UiEventEnvelopeand route all updates through one reducer path in the app shell. - Persist and replay
Last-Event-ID, add SSE query filters derived from store state, and coalesce progress updates on a fixed cadence. - Introduce an
ApiCtxcontext that owns a singleApiClientinstance with mutable auth state. - Move auth/torrents/system SSE state into the yewdux
AppStoreand update reducers accordingly. - Store bulk-selection state in
AppStorevia a sharedSelectionSetto keep bulk actions consistent across views. - Patch the
anymapdependency used byyewduxto avoid Rust 1.91 auto-trait pointer cast errors.
- Normalize SSE payloads into
- Consequences:
- SSE progress events are buffered and flushed together, reducing render churn during bursts.
- API calls now share a single client instance, simplifying auth updates and call sites.
- Bulk selections now persist in store state, avoiding local-only checkbox state drift.
- Additional store slices (UI/labels/health) remain future work; some UI state still uses local hooks.
- Follow-up:
- Expand
AppStoreto include UI/toast/health/labels slices and row-level selectors. - Add coverage for SSE filtering and progress coalescer cadence.
- Expand
Motivation
- Align the UI with the SSE checklist requirements and remove per-call ApiClient construction.
Design notes
- SSE decoding emits
UiEventEnvelopeinstances;handle_sse_envelopeis the only reducer entry. - Progress patches are stored in a non-reactive
HashMapand flushed every 80ms intoAppStoreviaapply_progress_patch. ApiCtxholds a singleApiClient; auth changes update the sharedRefCellstate.- Bulk selection updates are routed through
SelectionSetso store mutations remain deterministic.
Test coverage summary
just ci(fmt, lint, check-assets, udeps, audit, deny, ui-build, test, test-features-min, cov).
Observability updates
- No new telemetry; SSE connection state continues to drive the existing UI overlay.
Risk & rollback plan
- Risk: SSE filter mismatches could drop events; fallback is the throttled refresh path.
- Rollback: revert to the previous SSE handler and local state wiring, removing the coalescer and ApiCtx usage.
Dependency rationale
- Added a workspace dependency on
revaer-eventsfor shared SSE types; patchedanymaplocally for Rust 1.91 compatibility. - Alternatives considered: keep UI-local event types or upgrade
yewdux(requires Yew 0.21+); rejected to avoid duplicating schemas or triggering a larger UI upgrade.
Advisory RUSTSEC-2021-0065 Temporary Ignore
- Status: Superseded by 073 (vendored yewdux exception tracked in ADR 074)
- Date: 2025-12-24
- Context:
- The UI depends on
yewdux, which transitively pullsanymapand triggers advisoryRUSTSEC-2021-0065(unmaintained). - There is no maintained replacement for
anymapwithin the pinnedyewdux0.9.x line, and upgrading yewdux would require a Yew major upgrade. cargo-auditis configured to deny warnings, so ignoring the advisory requires explicit documentation and a remediation plan.
- The UI depends on
- Decision:
- Add
RUSTSEC-2021-0065to.secignorewhileyewduxrequiresanymap. - Track
yewduxupgrades or alternatives that removeanymapand remove the ignore when available. - No runtime mitigation is required beyond limiting use to the UI state store.
- Add
- Consequences:
- CI remains green while upstream resolves the dependency.
- The unmaintained dependency remains in the tree until we migrate away from it.
- Follow-up:
- Re-evaluate
yewduxupgrade paths quarterly; remove the ignore onceanymapis no longer required. - If upstream is stalled, evaluate a UI store replacement or a fork that removes
anymap.
- Re-evaluate
- Superseded:
.secignorecleaned in ADR 073; vendored yewdux exception tracked in ADR 074 (noanymapcrate dependency reintroduced).
Motivation
- Keep
just auditpassing without blocking UI state work while documenting the risk and path to remediation.
Design notes
- The ignore is scoped to the single advisory and is documented in
.secignorewith this ADR for traceability.
Test coverage summary
just ci(includes fmt, clippy, udeps, audit, deny, test, cov).
Observability updates
- None; advisory handling does not change runtime telemetry.
Risk & rollback plan
- Risk: unmaintained dependency stays in the build; monitor upstream advisories and plan a migration.
- Rollback: remove
yewduxusage and replace with a small local store implementation or upgrade to a supported release once available.
Dependency rationale
yewduxprovides the shared store needed for the UI; alternatives considered were a custom store (higher lift) or upgrading toyewdux0.11+ (requires Yew 0.21+ migration).
Asset sync test stability under parallel runs
- Status: Accepted
- Date: 2025-12-24
- Context:
cargo llvm-covruns tests in parallel and surfaced a flakyasset_synctest.- The temp directory helper used timestamp-based names that could collide under parallel execution.
- CI requires
just ci(including coverage) to pass reliably without intermittent failures.
- Decision:
- Replace the time-based temp directory naming with a process id + atomic counter.
- Retry on
AlreadyExiststo ensure unique per-test directories without new dependencies.
- Consequences:
- Asset sync tests are deterministic under parallel runners and coverage instrumentation.
- No new crates or runtime behavior changes.
- Follow-up:
- None.
Motivation
- Remove flaky coverage failures caused by temporary directory collisions in
asset_synctests.
Design notes
- Use a static
AtomicUsizecounter plusstd::process::id()to generate unique temp roots. - Loop on
AlreadyExistswithout introducing external dependencies.
Test coverage summary
just ci(fmt, lint, udeps, audit, deny, ui-build, test, cov).
Observability updates
- None.
Risk & rollback plan
- Risk: low; change is test-only.
- Rollback: revert the temp directory helper to its previous implementation.
Dependency rationale
- No new dependencies.
UI row slices and system-rate store wiring
- Status: Accepted
- Date: 2025-12-24
- Context:
- The checklist requires row-level selectors and ID-based list rendering to avoid full-row re-renders.
- System rates must live in the AppStore alongside SSE connection state.
- UI components should remain free of API side effects while still subscribing to yewdux slices.
- Decision:
- Add
TorrentRowBaseandTorrentProgressSliceselectors and render list rows via ID-based components that subscribe only to slices. - Keep bulk selection state in
AppStoreand expose selectors for selection and system rates. - Store
SystemRatesinSystemStateand update it from both dashboard fetches and SSE system-rate events.
- Add
- Consequences:
- List rows re-render only when their slice changes, reducing churn under frequent progress updates.
- Dashboard throughput metrics now follow store-backed system rates rather than local state copies.
- Additional store slices (filters, paging, fsops) still need to be implemented.
- Follow-up:
- Finish remaining torrent state normalization (filters, paging, fsops badges).
- Add selectors for drawer detail slices and wire remaining list filtering/paging flows.
Motivation
- Align list rendering with checklist performance constraints and centralize system-rate state in the store.
Design notes
TorrentRowItemusesuse_selectorto read base/progress slices and selection state per row ID.- SSE
SystemRatesupdates now mutateAppStore.system.ratesinstead of local dashboard state. - Dashboard panels receive
SystemRatesvia props to keep UI components data-driven.
Test coverage summary
just ci(fmt, lint, check-assets, udeps, audit, deny, ui-build, test, test-features-min, cov).
Observability updates
- No changes.
Risk & rollback plan
- Risk: list rows could render blank if selector data goes missing; fallback is the existing refresh flow.
- Rollback: revert to list rendering with full rows and remove the per-row selectors.
Dependency rationale
- No new dependencies.
UI shared API models and torrent query paging state
- Status: Accepted
- Date: 2025-12-24
- Context:
- The UI duplicated API DTOs, causing drift from backend shapes and blocking checklist compliance.
- Torrent list fetching needed a real query/paging model to align with the API list response.
- SSE fsops events required a stable store cache separate from row state.
- Decision:
- Extract shared API DTOs into a new
revaer-api-modelscrate and re-export fromrevaer-api. - Update the UI to consume shared DTOs, map list/detail views from API shapes, and parse list responses with
nextcursors. - Add
TorrentsQueryModel,TorrentsPaging, andfsops_by_idto the torrent store and update SSE to fill fsops state.
- Extract shared API DTOs into a new
- Consequences:
- API DTOs are now single-source across API/CLI/UI consumers.
- UI list fetching can track cursor paging and filter parameters in state.
- Detail views now map from API DTOs with placeholder metadata until richer fields are available.
- Follow-up:
- Wire filter fields into URL/query state and implement load-more pagination.
- Replace add-torrent payloads with
TorrentCreateRequest+ client UUIDs. - Populate health and label caches from API endpoints.
Motivation
- Eliminate duplicated API DTOs in the UI and align list fetching with backend paging semantics.
Design notes
- Introduced
revaer-api-modelsas the canonical DTO crate and re-exported it fromrevaer-api. TorrentSummaryandTorrentDetailconversions now map from shared DTOs into UI row/detail views.TorrentsQueryModelandTorrentsPagingfeedbuild_torrents_pathfor list requests.- SSE fsops events update
fsops_by_idwithout mutating row state.
Test coverage summary
- just ci (fmt, lint, check-assets, udeps, audit, deny, ui-build, test, test-features-min, cov)
- llvm-cov reports: “warning: 40 functions have mismatched data”
Observability updates
- No changes.
Risk & rollback plan
- Risk: mapping differences between API DTOs and UI view models could hide fields.
- Rollback: revert to the previous UI DTO definitions and list fetch logic.
Dependency rationale
- Added
revaer-api-modelsto share API DTOs across crates. - Added
chronoas a UI dev-dependency for DTO construction in tests.
UI store, API coverage, and rate-limit retries
- Status: Accepted
- Date: 2025-12-24
- Context:
- Shared UI state (theme, toasts, label/health caches) needed to live in the AppStore to match the yewdux architecture rule.
- The API client needed coverage for health, metrics, and label list endpoints to unblock upcoming screens.
- Rate-limit responses required user-visible backoff messaging and a safe retry path for idempotent fetches.
- Decision:
- Move shell theme/toast/busy state into the AppStore and populate label/health caches from API calls.
- Extend the UI API client with health/full, metrics, and label list endpoints, leaving option/selection/authoring calls for later UI wiring.
- Handle 429 responses for torrent list/detail fetches with Retry-After backoff and a single retry.
- Consequences:
- UI state is centralized and ready for labels/health screens without ad-hoc local state.
- API coverage is aligned with the checklist endpoints, reducing future wiring churn.
- Rate-limit retries add controlled delay behavior; repeated throttling still surfaces errors.
- Follow-up:
- Remove demo-only list/detail fallback paths and add empty states.
- Implement category/tag management screens and health viewer UI.
- Wire per-torrent options/selection editing in the drawer and add torrent authoring UX.
Motivation
- Keep shared UI state in yewdux and close API coverage gaps needed for Torrent UX.
Design notes
- AppShell theme and toast lifecycles now flow through AppStore updates.
- Labels/health caches are populated from API calls and stored in dedicated slices.
- Added API client methods for remaining torrent and label endpoints.
- API client currently covers health/full, metrics, and label list endpoints; mutating endpoints await UI wiring.
- Rate-limit backoff uses Retry-After with a single retry for idempotent list/detail fetches.
Test coverage summary
- just ci (fmt, lint, check-assets, udeps, audit, deny, ui-build, test, test-features-min, cov)
- llvm-cov reports: “warning: 40 functions have mismatched data”
Observability updates
- No changes.
Risk & rollback plan
- Risk: extra retry traffic on sustained 429 responses.
- Rollback: remove retry/backoff helpers and revert list/detail fetch handling.
Dependency rationale
- No new dependencies.
040 – UI Label Policies (Task Record)
- Status: In Progress
- Date: 2025-10-24
Motivation
- Provide first-class category/tag policy management in the UI so operators can apply TorrentLabelPolicy defaults without CLI/API-only workflows.
- Maintain AppStore as the source of truth while avoiding API calls in atoms/molecules.
Design Notes
- Implemented a dedicated
features/labelsslice with form state that round-trips throughTorrentLabelPolicy. - Added a single list + editor page that renders per-kind (categories or tags) with an Advanced section for rarely used fields.
- API upserts are routed through the shared
ApiClientand update the AppStore label caches on success.
Decision
- Use
LabelFormStateas the sole UI editing model and convert toTorrentLabelPolicyonly on save. - Re-export label policy support types from
revaer-api-modelsto keep UI aligned with shared domain types.
Consequences
- Labels are now editable without leaving the UI; any validation errors are surfaced before calling the API.
- The UI must keep label cache entries updated to prevent stale list rendering.
Test Coverage Summary
- Added unit tests for label form parsing, cleanup validation, and policy mapping.
Observability Updates
- None (UI-only changes, no new telemetry).
Risk & Rollback
- Risk: malformed inputs can still hit the API if not caught locally; server-side validation remains authoritative.
- Rollback: revert the labels feature wiring in
app/mod.rsand the new feature module.
Dependency Rationale
- No new dependencies introduced; re-exported existing domain types for UI usage.
Follow-up
- Expand label editor UX (search/filter, bulk actions) and align styling with Nexus components.
041 – UI Health View + Label Shortcuts (Task Record)
- Status: In Progress
- Date: 2025-10-24
Motivation
- Replace the Health route placeholder with an operator-facing status view built from cached snapshots.
- Provide quick navigation from torrent add flow to label policy management.
Design Notes
- Implemented a dedicated health feature view that reads from
AppStoreand renders basic/full snapshots plus the raw metrics text. - Added label shortcuts in the add-torrent panel using router links to avoid side effects in components.
Decision
- Keep health rendering in a feature view module with no API calls; data remains sourced from app-level effects.
- Use existing chip/button styling patterns for navigation shortcuts.
Consequences
- Operators can inspect health status without leaving the UI.
- Add-torrent flow now exposes direct navigation to categories and tags.
Test Coverage Summary
- UI-only additions (no new Rust tests added).
Observability Updates
- None (UI-only changes, no new telemetry).
Risk & Rollback
- Risk: health fields may appear empty when snapshots are unavailable; view handles None gracefully.
- Rollback: revert the health feature module and restore the placeholder route.
Dependency Rationale
- No new dependencies introduced.
Follow-up
- Add metrics copy controls and align health styling with Nexus patterns.
042 - UI Metrics Copy Button (Task Record)
- Status: In Progress
- Date: 2025-12-24
Motivation
- Provide a fast way to copy
/metricsoutput from the Health page. - Close the optional metrics viewer requirement in the dashboard checklist.
Design Notes
- Keep clipboard access in
appto respect the “window-only in app” rule. - Use a HealthPage callback to avoid side effects in the feature view.
- Emit success/error toasts to confirm copy status.
Decision
- Use the Clipboard API (
navigator.clipboard.writeText) for copying. - Guard the copy button when metrics payload is empty.
Consequences
- Operators can copy metrics text without leaving the UI.
- Clipboard permissions may block copy; errors are surfaced via toasts.
Test Coverage Summary
- UI-only change; no new Rust tests added.
Observability Updates
- None.
Risk & Rollback
- Risk: clipboard API unavailable in some browsers.
- Rollback: remove the copy button and clipboard helper.
Dependency Rationale
- Enable the existing
web-sysClipboard feature to accessnavigator.clipboard. - Alternative considered: legacy
execCommand("copy"), avoided due to deprecation.
043 - UI Settings Bypass Local Auth Toggle (Task Record)
- Status: In Progress
- Date: 2025-12-24
Motivation
- Provide a settings control for preferring API keys in the auth prompt.
- Close the remaining setup/auth flow requirement in the dashboard checklist.
Design Notes
- Store the bypass toggle in LocalStorage but read/write it only from
app. - Keep the settings view stateless and driven by AppStore props.
- Use the toggle to influence the default auth prompt tab without forcing logout.
Decision
- Add a settings feature view for the bypass local toggle.
- Persist the toggle separately from the last-used auth mode.
Consequences
- Auth prompt defaults to API key when bypass is enabled.
- Existing auth state remains unchanged unless the user re-authenticates.
Test Coverage Summary
- UI-only change; no new Rust tests added.
Observability Updates
- None.
Risk & Rollback
- Risk: users may still remain logged in with local auth while bypass is enabled.
- Rollback: remove the settings view and toggle wiring.
Dependency Rationale
- No new dependencies introduced.
044 - UI ApiClient Torrent Options/Selection Endpoints (Task Record)
- Status: In Progress
- Date: 2025-12-24
Motivation
- Add the remaining torrent options/selection endpoints to the ApiClient.
- Keep transport wiring centralized in the API service layer.
Design Notes
- Use existing API model types (
TorrentOptionsRequest,TorrentSelectionRequest). - Keep methods in
services::api::ApiClientand reuse existing auth/application patterns.
Decision
- Add ApiClient helpers for options updates and file selection updates.
- Maintain consistent error wrapping and headers via the shared helpers.
Consequences
- UI features can call these endpoints without duplicating transport logic.
- File selection toggles now persist via the selection endpoint.
Test Coverage Summary
- API client additions only; no new Rust tests added.
Observability Updates
- None.
Risk & Rollback
- Risk: API failures require reloading detail data to reconcile file selection state.
- Rollback: remove the selection update path and ApiClient methods.
Dependency Rationale
- No new dependencies introduced.
UI Icon System and Icon Buttons
- Status: Accepted
- Date: 2025-12-24
- Context:
- Motivation: eliminate inline SVGs and standardize icon usage per the dashboard checklist.
- Constraints: reuse Nexus/DaisyUI styling, avoid new dependencies, keep accessibility consistent.
- Decision:
- Summary: add a shared icon module under
components/atoms/iconsand a reusableIconButtoncomponent for icon-only actions. - Design notes: provide
IconProps(size, class, optional title) andIconVariantfor outline/solid arrows; reuse existing.icon-btnstyles for consistent hover/focus behavior. - Alternatives considered: keep inline SVGs or introduce an external icon crate; rejected to avoid duplication and dependencies.
- Summary: add a shared icon module under
- Consequences:
- Positive outcomes: centralized icon rendering, consistent sizing, and cleaner shell/dashboard markup.
- Risks/trade-offs: visual regressions if CSS assumptions about SVG sizing shift.
- Observability updates: none.
- Follow-up:
- Implementation tasks: keep new icons in the shared module; replace any future inline SVGs with components.
- Test coverage summary: UI component wiring only; no new tests added (llvm-cov still warns about mismatched data).
- Dependency rationale: no new dependencies introduced.
- Risk & rollback plan: revert icon module changes and restore inline SVGs if styling regresses.
UI Torrent Filters, Pagination, and URL Sync
- Status: Accepted
- Date: 2025-12-24
- Context:
- Motivation: expose torrent filters in the URL and support paged list loading without breaking the normalized store.
- Constraints: reuse existing API query semantics, avoid new dependencies, and keep URL updates inside app-level routing.
- Decision:
- Summary: parse/filter query params from the router location, update the URL when filters change, and add an explicit Load more flow that appends rows.
- Design notes: use
build_torrent_filter_queryfor URL-only filters, keep refresh fetches cursor-free, and append rows only when a cursor is provided for pagination. - Alternatives considered: store cursor in the URL or auto-load more on scroll; rejected to keep query stable and avoid hidden fetches.
- Consequences:
- Positive outcomes: shareable filter URLs, explicit paging, and predictable list refresh behavior.
- Risks/trade-offs: query sync relies on history replace semantics; overlapping API pages could still cause duplicate rows.
- Observability updates: none.
- Follow-up:
- Implementation tasks: wire filter inputs, add Load more, and append list reducer support.
- Test coverage summary: added unit tests for query round-tripping and append-row behavior.
- Dependency rationale: no new dependencies introduced.
- Risk & rollback plan: revert filter URL sync and pagination append logic if list state becomes inconsistent.
UI Torrent List Updated Timestamp Column
- Status: Accepted
- Date: 2025-12-24
- Context:
- Motivation: surface the last updated timestamp alongside the existing list columns.
- Constraints: avoid new dependencies and keep row slices stable for list rendering performance.
- Decision:
- Summary: store a formatted updated timestamp string in the torrent row base slice and render it as an optional column.
- Alternatives considered: compute formatting in the component layer or add a relative time utility; rejected to keep row rendering pure and avoid new helpers.
- Consequences:
- Positive outcomes: list rows now include an explicit updated timestamp column with overflow fallback.
- Risks/trade-offs: updated timestamps refresh only when list data is refreshed, not on every SSE event.
- Observability updates: none.
- Follow-up:
- Implementation tasks: keep formatting consistent in the summary conversion.
- Test coverage summary: added assertions for updated timestamps in row conversion tests.
- Dependency rationale: no new dependencies introduced.
- Risk & rollback plan: remove updated column mapping if list layout regresses.
ADR 048: UI torrent row actions, bulk controls, and rate/remove dialogs
- Status: Accepted
- Date: 2025-12-24
- Context:
- Motivation: complete torrent list row actions and bulk controls with confirm/rate UX and concurrency safety.
- Constraints: no new dependencies, no unwrap/expect in non-test code, yewdux-managed shared state, and clean
just ci.
- Decision:
- Add UI action variants (reannounce, sequential on/off, rate) and map them to API actions.
- Introduce row action menus plus remove/rate dialogs with input validation and delete-data toggle.
- Implement a bulk-action runner with a concurrency cap, failure aggregation, and drawer-close logic when multi-select remains.
- Alternatives considered: per-item toasts with sequential execution (rejected for spam and slow UX).
- Consequences:
- Positive: consistent row/bulk actions, safer removals, bounded bulk concurrency, and clear summary feedback.
- Trade-offs: additional UI state for dialogs and bulk runner bookkeeping.
- Follow-up:
- Ensure translations are backfilled for new strings beyond English as needed.
- Revisit concurrency cap if the API or UI performance requirements change.
- Test coverage summary:
- Added unit tests for rate input parsing in
crates/revaer-ui/src/core/logic/mod.rs. - Existing action success message tests extended to cover new variants.
- Added unit tests for rate input parsing in
- Observability updates:
- None (UI-only changes; no new metrics/tracing added).
- Risk & rollback plan:
- Risk: dialog/menu UX regressions on small screens or edge-case bulk failures.
- Rollback: revert this ADR’s changeset and restore prior row-action buttons and sequential bulk loop.
- Dependency rationale:
- No new dependencies added.
UI detail drawer overview/files/options
- Status: Accepted
- Date: 2025-12-24
Context
- The torrent detail drawer still exposed legacy peers/trackers/log panes instead of the required overview/files/options layout.
- The UI was maintaining a custom DetailData conversion layer instead of using shared API models.
- The checklist requires edits only for fields supported by PATCH /v1/torrents/{id}/options and real file selection updates.
Decision
- Render the detail drawer with Overview, Files, and Options tabs and include the same action set as the list rows.
- Store TorrentDetail directly in the detail cache to avoid duplicate UI-only models and conversions.
- Apply file selection changes via /select (include/exclude/priority/skip_fluff) and options changes via /options with optimistic updates.
- Keep non-editable settings read-only to avoid fake controls.
Consequences
- Removes duplicated detail mapping logic and keeps UI aligned with shared models.
- Detail UI now depends on settings payloads for options and skip-fluff rendering.
- Failed updates require a refresh to reconcile optimistic state.
Motivation
- Align the UI with the Torrent UX checklist while preserving the thin-client model.
Design notes
- Detail cache remains in yewdux details_by_id; list rows stay lightweight.
- Components emit callbacks only; API calls remain in app-level handlers.
Test coverage summary
- Added unit tests for detail selection, priority, skip-fluff, and options updates in torrents state.
- Added a format_bytes unit test for the new size formatter.
Observability updates
- None (UI-only changes).
Risk & rollback plan
- Risk: optimistic updates may temporarily show stale settings if the API rejects changes.
- Mitigation: refresh detail on failure.
- Rollback: restore the previous detail component and DetailData mapping.
Dependency rationale
- Added workspace chrono to revaer-ui runtime deps to build demo detail timestamps.
UI torrent FAB + create modals
- Status: Accepted
- Date: 2025-12-24
- Context:
- The torrent UX checklist requires FAB-driven add/create modals and initial rate limits.
- API calls must stay in the app layer with shared DTOs, and UI state lives in yewdux.
- Decision:
- Implement a floating action button that opens Add and Create torrent modals.
- Wire POST
/v1/torrents/createthrough the ApiClient and surface results + copy actions. - Move UI preferences (mode/density/locale) into the shared store for consistent access.
- Alternatives considered:
- Keep the add panel inline in the list view (rejected; no FAB flow).
- Let modal components call the API directly (rejected; breaks layering rules).
- Consequences:
- Adds modal UX for torrent add/authoring and a FAB entry point.
- Introduces minimal new store state for create results/errors and busy flags.
- Additional translations and CSS required for modal + FAB presentation.
- Follow-up:
- Validate Add/Create modals visually against Nexus styling.
- Run full
just ciand confirm zero warnings.
Motivation
- Finish the remaining torrent UX checklist items for FAB actions and authoring flows.
- Keep state management consistent with the yewdux store rule.
Design notes
- Modal components remain pure UI: they emit typed requests and copy intents via callbacks.
- Create results are stored in the torrents slice to avoid cross-component ad hoc state.
Test coverage summary
- Unit tests updated for add payload validation (rate parsing).
- No new integration tests for UI-only changes.
Observability updates
- None (UI-only change).
Risk & rollback plan
- Risk: modal flows may need styling adjustments across breakpoints.
- Rollback: revert UI modal/FAB changes and the create endpoint wiring.
Dependency rationale
- No new dependencies introduced; reused existing shared DTOs and UI helpers.
UI shared API models and UX primitives
- Status: Accepted
- Date: 2025-12-25
- Context:
- The UI and CLI duplicated health/setup/dashboard DTOs, increasing drift risk against the API.
- The torrent toolbar and labels views lacked debounced search, multi-select, and reusable empty/bulk primitives.
- The UI checklist requires shared API models and a component primitive set with prop-driven configuration.
- Decision:
- Move health, setup-start, and dashboard DTOs into
revaer-api-modelsand consume them from the API, UI, and CLI. - Add shared UI primitives (SearchInput with debounce, MultiSelect, EmptyState, BulkActionBar) and extend existing inputs/buttons for prop coverage.
- Refactor torrent filters and label empty state to use the new primitives while retaining text-input fallback for tags when options are unavailable.
- Move health, setup-start, and dashboard DTOs into
- Consequences:
- Reduces schema drift and keeps response shapes centralized in one crate.
- Adds new UI primitives that standardize filter toolbars and empty states.
- The setup-start endpoint now serializes expiration as RFC3339 strings to match shared DTOs.
- Follow-up:
- Audit remaining UI components for prop completeness and update the checklist item when finished.
- Re-run the full
just cipipeline before final handoff.
Task record
- Motivation: Eliminate duplicate API DTOs and complete missing UI primitives required by the Torrent UX checklist.
- Design notes: Shared DTOs live in
revaer-api-models; new primitives live undercomponentsand are consumed by torrents/labels views to avoid dead code. - Test coverage summary: Not run in this update (follow-up required per AGENT.md).
- Observability updates: None.
- Risk & rollback plan: Revert to previous DTO structs in API/CLI/UI and restore raw input elements if regressions surface.
- Dependency rationale: No new dependencies added.
UI dashboard migration to Nexus vendor layout
- Status: Accepted
- Date: 2025-12-25
- Context:
- Align the dashboard and shell UI with the vendored Nexus HTML to remove drift.
- Remove the blocking SSE overlay and replace it with a non-blocking connectivity surface.
- Preserve routing and layout classes so Nexus CSS can remain authoritative.
- Decision:
- Replace the old dashboard and shell markup with Nexus vendor partials and dashboard structure.
- Introduce SSE connectivity state in the store with a drawer-footer indicator and modal.
- Remove legacy dashboard CSS overrides and ensure vendor app.css is the primary styling source.
- Consequences:
- Positive: Nexus parity, simpler shell structure, non-blocking connectivity UX.
- Risks: UI copy/labels diverge from vendor defaults; mode toggle now relies on existing stored preference.
- Follow-up:
- Verify visual parity against Nexus dashboard sections.
- Monitor SSE reconnection details surfaced in the modal.
Motivation
- Ensure the UI matches the vendored Nexus dashboard and shell while eliminating legacy layout glue.
- Replace blocking SSE overlays with a navigation-safe connectivity indicator.
Design notes
- App shell and dashboard markup preserve the Nexus layout/class structure while the repo keeps only the vendor asset kit; executable Nexus reference HTML is not retained in-tree.
- Dashboard sections are split into Nexus-faithful organisms while preserving class names and nesting.
- SSE status is stored in
system.sse_status; indicator consumes a summary slice, modal consumes full details.
Test coverage summary
just ci(fmt, lint, udeps, audit, deny, ui-build, test, cov)
Observability updates
- None.
Risk & rollback plan
- If Nexus markup causes regressions, revert to the previous dashboard/shell and reintroduce the prior CSS and route wiring.
- If SSE diagnostics cause UI noise, hide the indicator by feature flag and keep reconnect logic intact.
Dependency rationale
- Added
web-sysfeatureHtmlDialogElementto open the Nexus search modal viashow_modalwithout new crates.
UI: Hardline Nexus Dashboard Rebuild and Settings Wiring
- Status: Accepted
- Date: 2025-12-26
- Context:
- The Home dashboard must match the vendored Nexus HTML structure and DaisyUI component patterns.
- Navigation and shell need to be simplified to Home/Torrents/Settings with a non-blocking SSE indicator.
- Settings must remain reachable even when auth is missing and show a config snapshot.
- Decision:
- Rebuild dashboard sections to mirror Nexus markup (stats cards, storage status, recent events, tracker health, queue summary).
- Align AppShell sidebar/topbar with Nexus partial structure and move the SSE indicator to the sidebar footer.
- Wire Settings to fetch
/v1/configand provide test-connection actions while keeping auth overlays off the Settings route. - Disable wasm-opt in the Trunk pipeline (
data-wasm-opt="0") to avoid build failures on missing staged wasm outputs. - Use relative static asset paths for Nexus CSS and dashboard image URLs to keep styles/images loading when served from non-root paths.
- Alternatives considered: importing
revaer_config::ConfigSnapshotinto the UI; rejected to avoid new cross-crate dependencies in wasm.
- Consequences:
- Positive: consistent Nexus/DaisyUI layout, simplified nav, and settings access even during auth errors.
- Trade-offs: UI-only fetches rely on runtime connectivity; config display is untyped JSON in the UI; wasm bundles are no longer optimized by wasm-opt.
- Follow-up:
- Verify visual parity in the browser and keep the Nexus HTML deltas minimal.
- Add typed config rendering if a UI-safe shared type becomes available.
Task Record
- Motivation: enforce Nexus + DaisyUI parity for the dashboard while keeping Settings reachable and diagnostics visible.
- Design notes: mapped each dashboard section to specific Nexus blocks; SSE indicator uses sidebar footer with a non-blocking dialog; config snapshot parsed as
serde_json::Valueto avoid new dependencies; disabled wasm-opt incrates/revaer-ui/index.htmlto keeptrunk build --releasereliable on this environment until tooling changes; aligned Nexus image URLs to/static/nexus/...for correct asset loading on all routes; aligned the sidebar footer indicator to the Nexus pinned-footer structure, restored the missing Global Sales card slot, and made the auth prompt non-blocking while stabilizing drawer hook usage; re-aligned the torrents filter header to the Nexus orders layout, updated the search input to use DaisyUIinput-smsizing, removed the custom placeholder override to let DaisyUI placeholder styles apply, and removed the legacy torrent list view. - Test coverage summary:
just ciruns butjust covfails at ~77.6% overall line coverage (below the ≥80% gate); no new unit tests added in this update. - Observability updates: none (UI-only changes).
- Risk & rollback plan: revert
crates/revaer-uidashboard/shell/settings edits andstatic/style.cssif UI regressions appear. - Dependency rationale: no new dependencies added; reused existing
serde_json.
UI Dashboard Nexus Parity Tweaks
- Status: Accepted
- Date: 2025-12-27
- Context:
- Dashboard cards drifted from the vendored Nexus markup and referenced missing i18n keys.
- Connectivity modal included fields outside the required SSE status spec.
- Constraints: keep Nexus layout structure, use DaisyUI semantic tokens, avoid new dependencies.
- Decision:
- Rework the storage usage and tracker health cards to match Nexus layout structure and available translation keys.
- Align queue summary/global summary labels to existing nav/dashboard strings.
- Trim the SSE connectivity modal to the required fields and labels.
- Replace dashboard recent events table markup with a DaisyUI list layout.
- Limit SSE indicator label expansion to the sidebar expanded state only.
- Alternatives considered: adding new translation keys across all locales (rejected for scope and translation burden).
- Consequences:
- Positive outcomes: fewer missing strings, closer Nexus parity, clearer SSE status display.
- Risks or trade-offs: storage usage detail reduced to summary metrics; some labels remain static in English where Nexus requires them.
- Follow-up:
- Manually verify Nexus dashboard parity and table hover styling in the UI.
Motivation
- Restore Nexus layout parity for dashboard sections and eliminate missing dashboard translation keys.
Design Notes
- Storage usage mirrors the Nexus revenue card layout with the chart slot preserved.
- Tracker health metrics follow the Nexus acquisition grid with two columns and error count in the header.
- Queue summary and global summary labels use existing nav/dashboard translations.
- SSE connectivity modal aligns with the required status fields only.
- Recent events use a DaisyUI list layout that preserves the Nexus header structure.
- Row-hover styling applies to list rows for parity with table hover behavior.
Test Coverage Summary
- No new tests added; UI-only changes.
Observability Updates
- None.
Risk & Rollback Plan
- Low risk; revert the UI component edits if layout regressions appear.
Dependency Rationale
- No new dependencies introduced.
Factory Reset and Bootstrap API Key
- Status: Accepted
- Date: 2025-12-27
- Context:
- Need a safe factory reset workflow that keeps navigation available while enforcing confirmation.
- Setup completion must return a bootstrap API key with a 14-day client-side expiry.
- Raw reset errors must surface to the UI for operator visibility.
- Decision:
- Add
revaer_config.factory_reset()stored procedure and/admin/factory-resetAPI endpoint guarded by API key auth. - Ensure setup completion provisions or reuses a bootstrap API key and returns it with an expiry timestamp.
- Persist the bootstrap API key with expiry in local storage and require manual dismissal for error toasts.
- Add
- Consequences:
- Factory reset clears configuration/runtime data and returns the system to setup mode.
- API key expiry is enforced on the client; the server remains stateless about expiry.
- Reset failures are delivered verbatim to clients for display.
- Follow-up:
- Update OpenAPI export, UI dropdown + modal wiring, and storage helpers.
- Verify CI and runtime migrations.
Factory reset bootstrap auth fallback
- Status: Accepted
- Date: 2025-12-28
- Context:
- Factory reset requires API key auth, but existing installs can be in
activemode with zero API keys (pre-bootstrap). - Without a key, the UI cannot authenticate and the system has no recovery path.
- The reset path must still use stored procedures and surface raw errors when the reset fails.
- Factory reset requires API key auth, but existing installs can be in
- Decision:
- Add a
has_api_keyscapability to the config facade so the API can detect empty key inventories. - Introduce a factory-reset-specific auth gate that accepts valid API keys, or allows the reset when no API keys exist (logging a warning).
- Keep confirmation phrase validation unchanged.
- Add a
- Consequences:
- Provides a recovery path for deployments missing API keys.
- When no API keys exist, factory reset can be triggered without auth; this is acceptable because the system is already unauthenticated in that state.
- Follow-up:
- Consider tightening the fallback to loopback-only requests if new auth modes are added.
- Ensure UI messaging continues to surface authorization errors via toasts.
UI settings tabs and editor controls
- Status: Accepted
- Date: 2025-12-28
- Context:
- The settings screen exposed raw configuration values without meaningful grouping or editing controls.
- Torrent operators need quick access to download, seeding, network, and storage controls with clear defaults.
- Settings patches must flow through the existing API and honor immutable fields.
- Decision:
- Rebuild the settings UI as tabbed panels aligned with torrent workflows (connection, downloads, seeding, network, storage, system).
- Drive all editable controls from the config snapshot and submit targeted
/v1/configchangesets per group. - Treat immutable fields and effective engine snapshots as read-only with copy-to-clipboard affordances.
- Consequences:
- Settings are now grouped for faster navigation and support direct edits with consistent controls.
- The UI performs more client-side validation for numeric and JSON fields before patching.
- Follow-up:
- Evaluate dedicated server-side directory browsing if operators need richer path discovery.
- Add localization for settings field labels where needed.
Motivation
- Make settings usable for torrent operators by grouping them into purpose-built tabs.
- Replace raw config tables with toggles, selects, numeric inputs, and path pickers.
- Ensure read-only values are still accessible via copy actions.
Design notes
- Draft values are derived from the latest config snapshot and compared to build minimal changesets.
- Immutable keys from
app_profile.immutable_keysand derived engine fields are rendered read-only. - Directory selection uses a modal picker with suggested paths from the snapshot.
Test coverage summary
just ci
Observability updates
- UI toasts surface config patch failures and copy failures; no new metrics.
Risk & rollback plan
- Risk: incorrect grouping or input parsing could lead to failed patches.
- Rollback: revert to the previous settings view and re-fetch configuration.
Dependency rationale
- No new dependencies.
UI Settings Controls, Logs Stream, and Filesystem Browser
- Status: Accepted
- Date: 2025-12-28
- Context:
- Motivation: replace JSON settings editing with structured controls, add an on-demand logs view, and provide a server-backed filesystem browser for path selection.
- Constraints: keep stored-procedure access, avoid new dependencies, and only stream logs while the Logs route is active.
- Decision:
- Added an SSE logs stream backed by a log broadcast writer and a Logs UI route that connects only while mounted.
- Added a filesystem browse endpoint and path picker UI for directory selection, with server-side path validation for label policy download dirs.
- Reworked settings into tabbed sections with a single draft/save bar and structured field editors.
- Consequences:
- Positive: consistent UI controls, safer path selection, and live logs available without background streaming.
- Risks: invalid paths now fail validation; recovery requires clearing the offending field or updating the path.
- Follow-up:
- Tests: no new dependencies; validation logic exercised via existing config pathways (add focused tests if coverage drops).
- Observability: log stream events emit via SSE; status surfaced in UI badge.
- Risk & rollback: revert the logs route/endpoint and path validation if regressions appear; keep previous settings UI behind a feature branch.
- Dependency rationale: no new dependencies added.
059 – Migration Rebaseline And JSON Backfill Guardrails
- Status: Accepted
- Date: 2025-12-28
- Context:
- Migration sprawl made upgrades brittle and conflicted with the single-file mandate.
- JSON columns are banned for settings; backfills must not wipe normalized data on upgrade.
- The baseline migration must be idempotent for both new databases and upgrades.
- Decision:
- Collapse migration history into
crates/revaer-data/migrations/0007_rebaseline.sqland remove prior migration files. - Add upgrade-safe guardrails in the JSON backfill to avoid overwriting normalized data when legacy columns are empty or newly introduced.
- Mark the configuration migrator to ignore missing files so existing databases can apply the new baseline cleanly.
- Update documentation and dev seed SQL to match the normalized schema.
- Collapse migration history into
- Consequences:
- Positive: one deterministic baseline, no JSON columns in the final schema, safer upgrades.
- Trade-offs: the baseline SQL is larger and includes legacy steps to support upgrades.
- Follow-up:
- Run the full
just cigate and validate factory reset behavior against a real database. - Monitor future schema changes to ensure they append to the consolidated baseline.
- Run the full
- Motivation:
- Ensure migration idempotency and JSON-free settings storage without breaking existing installations.
- Design notes:
- Keep legacy JSON parsing helpers only long enough to migrate data; drop them in the same baseline.
- Add trigger drops to make DDL re-entrant when applying the baseline on upgraded databases.
- Test coverage summary:
just check(workspace, all targets, all features).
- Observability updates:
- No telemetry changes required.
- Risk & rollback plan:
- Risk: legacy upgrade paths could still expose migration gaps; rollback by restoring the previous migration set from version control.
- Dependency rationale:
- No new dependencies introduced.
Auth Expiry + Error Context Fields
- Status: Accepted
- Date: 2025-12-28
- Context:
- Factory reset failures must surface raw error details to clients without embedding context in error messages.
- Setup completion must issue an API key that expires after 14 days, and expiration must be enforced server-side.
- JSONB-based helpers are disallowed; legacy helpers must be removed while preserving upgrade paths.
- Decision:
- Add an optional
expires_attimestamp toauth_api_keysand extend API key upsert helpers to persist it. - Extend RFC9457
ProblemDetailswith structuredcontextfields so raw error details can be returned separately from constant error messages. - Purge JSONB-based helper functions during migration to keep final database surfaces JSON-free.
- Add an optional
- Consequences:
- Positive outcomes: API key expiry is enforced consistently; error responses can include raw details without violating message rules; migrations end with JSONB-free functions.
- Risks or trade-offs: Existing API clients must tolerate the new
contextfield; migrations rely on drop logic to clear legacy helper functions.
- Follow-up:
- Implementation tasks: update API key auth reads to respect expiry; add error context plumbing in API/UI clients; keep openapi export in sync.
- Review checkpoints: verify migrations run cleanly, JSONB functions are absent, and factory reset errors surface in toasts.
API i18n error localization and OpenAPI assets
- Status: Accepted
- Date: 2025-12-29
- Context:
- API error responses needed localization via
Accept-Languagewithout introducing new dependencies. openapi.rscould not retain hard-coded asset paths while still embedding the spec.
- API error responses needed localization via
- Decision:
- Add a lightweight API i18n module that selects a locale from
Accept-Language, loads an embedded bundle, and localizes error titles/details/invalid params with fallback to the original string. - Centralize embedded OpenAPI assets in a dedicated module so
openapi.rsis path-free. - Alternatives considered: key-based localization in all error constructors (larger refactor); relying on client-only localization (does not meet API requirement).
- Add a lightweight API i18n module that selects a locale from
- Design notes:
- Locale parsing accepts the first supported tag and falls back to
en. - Translation load failures are logged once and degrade to identity translations.
- OpenAPI asset constants are crate-private to avoid leaking filesystem structure.
- Locale parsing accepts the first supported tag and falls back to
- Test coverage summary:
- Added unit coverage for locale parsing, translation availability, and fallback behavior in the i18n module.
- Observability updates:
- Translation load failures emit a structured error log with the locale.
- Consequences:
- Error responses now pass through a localization hook; untranslated strings remain unchanged.
- OpenAPI asset paths are centralized for easier maintenance.
- Risk & rollback plan:
- Risk: missing translation keys fall back to the original message. Roll back by removing i18n middleware and restoring direct error serialization.
- Dependency rationale:
- No new dependencies; reused existing
serde_jsonand standard library types.
- No new dependencies; reused existing
- Follow-up:
- Expand message coverage in
crates/revaer-api/i18n/en.jsonas new error strings are added.
- Expand message coverage in
Event Bus Publish Guardrails + API i18n Cleanup
-
Status: Accepted
-
Date: 2025-12-28
-
Context:
- Event publishing failures were silently ignored, violating the no-error-suppression rule.
- Several API error strings were missing i18n keys, breaking the localized error contract.
- A few runtime logs still interpolated context into messages and needed structured fields.
-
Decision:
- Introduce
EventBusErrorand makeEventBus::publishreturnResultso failures are handled explicitly. - Add publish helpers in runtime services (API state, fsops, libtorrent worker, app bootstrap) that log publish failures with structured fields.
- Expand the API i18n bundle to include new error keys used by settings and auth flows.
- Move
anyhowto dev-dependencies forrevaer-apiand remove the remaining debug assert/log interpolation in production paths.
- Introduce
-
Consequences:
- Positive outcomes: event publishing is no longer silently ignored; API error messages are consistently localizable; log output stays structured.
- Risks or trade-offs: event publish errors are now surfaced via warnings, which may be noisy if the bus is misconfigured.
-
Follow-up:
- Implementation tasks: ensure downstream callers handle
EventBusErrorwhere needed; keep i18n bundles in sync with new error keys. - Review checkpoints: confirm
just cipasses and that SSE/event flows still deliver updates without regressions.
- Implementation tasks: ensure downstream callers handle
-
Motivation:
- Align runtime error handling with AGENT.md guardrails and remove hidden failure paths.
-
Design notes:
- Event bus publish errors expose
event_id+event_kindfor structured logging without embedding context in messages. - API error strings added to
en.jsonmatch the exact keys emitted by handlers.
- Event bus publish errors expose
-
Test coverage summary:
- Not run in this change set; run
just cibefore release.
- Not run in this change set; run
-
Observability updates:
- Added structured warning logs when event publishing fails.
-
Risk & rollback plan:
- Low risk; revert to prior publish semantics if event logging proves too noisy.
-
Dependency rationale:
- No new dependencies added.
CI compliance cleanup for test error handling
- Status: Accepted
- Date: 2025-12-30
- Context:
- Motivation: restore
just cicompliance and remove explicit panic/unwrap patterns in tests to align with AGENT error-handling rules. - Constraints: keep coverage ≥ 80% and avoid new dependencies while satisfying clippy::pedantic.
- Motivation: restore
- Decision:
- Replace explicit
panic!/unwrapusages in tests with Result-returning flows andlet...elsepatterns. - Exercise must-use values in tests to avoid lint violations.
- Replace explicit
- Consequences:
- Positive outcomes: lint clean, tests remain deterministic, and coverage stays above the gate.
- Risks or trade-offs: slightly more verbose test code; added Result plumbing in tests.
- Follow-up:
- Implementation tasks: keep new tests using
Resultandlet...elsepatterns when adding coverage. - Review checkpoints: re-run
just ciafter any test refactors.
- Implementation tasks: keep new tests using
Design notes
- Tests now surface unexpected success paths as explicit error returns instead of panics.
Ssetest responses are exercised viainto_responseto satisfy must-use lints.
Test coverage summary
just cicompleted with line coverage at 80.04%.
Observability updates
- None.
Dependency rationale
- No new dependencies added.
Risk & rollback plan
- Risk: minimal; changes are confined to tests.
- Rollback: revert this ADR and the test-only edits, then re-run
just ci.
Factory reset hardening and allow-path validation
- Status: Accepted
- Date: 2025-12-30
- Context:
- Motivation: surface actionable factory reset failures, prevent long-running resets from hanging, and tighten allow-path validation for directory entries.
- Constraints: preserve API i18n behavior, keep error context structured, and avoid new dependencies or inline SQL outside migrations.
- Decision:
- Derive the deepest error source string for factory reset failures and return it in structured context.
- Allow factory resets to proceed without API keys when no keys exist, even if a stale API key header is present.
- Add a lock timeout in the factory reset stored procedure to avoid indefinite blocking.
- Validate each allow-path entry as a non-empty directory before persisting updates.
- Add unit tests covering error extraction and the stale API key path.
- Consequences:
- Positive outcomes: factory reset failures surface raw causes; invalid allow-path entries are rejected; resets fail fast on lock contention.
- Risks or trade-offs: stricter validation can reject empty allow-path entries that previously slipped through; lock timeouts may require retrying during heavy database activity.
- Follow-up:
- Implementation tasks: confirm UI toasts surface context fields for factory reset failures and lock timeouts.
- Review checkpoints: run
just ciandjust build-releasebefore handoff.
Design notes
- Walk the
Error::sourcechain to surface the innermost message without mutating the API detail string.
Test coverage summary
just ci: line coverage 80.06%.just build-release: succeeded.
Observability updates
- None.
Dependency rationale
- No new dependencies added.
Risk & rollback plan
- Risk: allow-path validation rejects empty entries; factory reset error context exposes raw backend errors; lock timeout may surface new transient failures during heavy DB activity.
- Rollback: revert the allow-path validation, auth fallback, and lock-timeout adjustments, remove the related tests, then re-run
just ci.
API key refresh and no-auth setup mode
- Status: Accepted
- Date: 2025-12-30
- Context:
- Motivation: keep API keys valid without manual re-auth, and allow local setup flows to opt into anonymous access.
- Constraints: no new dependencies, stored-procedure-only config writes, and API errors localized through i18n.
- Decision:
- Add
app_profile.auth_modewithapi_key/noneand allow anonymous auth whennoneis configured. - Introduce
/v1/auth/refreshto extend API key expiry without rotation, and schedule refresh in the UI before expiry. - Persist anonymous auth state for no-auth setups and reuse the well-known snapshot for setup changeset construction.
- Store API key expirations in local storage and refresh with a 24-hour safety skew.
- Add
- Consequences:
- Positive outcomes: no-auth local deployments work without API keys; API keys remain valid without user action.
- Risks or trade-offs: no-auth mode reduces access control if enabled unintentionally; refresh scheduling depends on client time.
- Follow-up:
- Implementation tasks: keep OpenAPI spec and UI translations in sync with new auth/refresh UX.
- Review checkpoints: run
just ciandjust build-releasebefore handoff.
Design notes
- Auth mode is stored in
app_profileand enforced in API auth middleware. - Token refresh extends expiry only; no rotation or secret re-issuance.
Test coverage summary
just ci: line coverage 80.03%.just build-release: succeeded.
Observability updates
- None.
Dependency rationale
- No new dependencies added.
Risk & rollback plan
- Risk: anonymous access enabled on non-local deployments; refresh timing sensitive to client clock drift.
- Rollback: remove
auth_mode, revert auth middleware and refresh endpoint, and delete UI refresh scheduling plus setup auth mode selection.
Factory reset UX fallback and SSE setup gating
- Status: Accepted
- Date: 2025-12-30
- Context:
- Motivation: SSE returns 409 when the server is in setup mode, leaving the UI stuck after factory reset or manual setup transitions.
- Constraints: keep the UI non-blocking, avoid API key reuse after reset, and keep state transitions client-driven without new dependencies.
- Decision:
- Gate SSE connection on
AppModeStateand surface a disconnected status when the server is in setup mode. - Treat SSE 409 responses as a setup signal: clear auth state and move the app into setup mode in the store.
- Ensure factory reset success forces
AppModeState::Setupeven if the reload fails.
- Gate SSE connection on
- Consequences:
- Positive outcomes: factory reset lands users on the setup flow; SSE no longer loops on 409 responses.
- Risks or trade-offs: clears stored auth on setup transitions, requiring re-auth after reset.
- Follow-up:
- Implementation tasks: monitor setup flows for any unexpected auth clears and adjust messaging if needed.
- Review checkpoints: run
just ciandjust build-releasebefore handoff.
Design notes
- SSE is disabled in setup mode to prevent repeated 409 retries and to keep the UI responsive.
- Setup transitions clear auth storage to avoid stale API keys after reset.
Test coverage summary
just ci: failed (cargo llvm-covline coverage 77.59% < 80%).
Observability updates
- None.
Dependency rationale
- No new dependencies added.
Risk & rollback plan
- Risk: users expecting to keep API keys across resets will have to re-authenticate.
- Rollback: remove SSE setup gating and 409 handling, revert factory reset UI state updates, and restore previous auth persistence behavior.
Logs ANSI rendering and bounded buffer
- Status: Accepted
- Date: 2025-12-30
- Context:
- Motivation: logs view must preserve ANSI color/style codes, Unicode characters, and remain responsive over long sessions.
- Constraints: keep memory usage bounded, avoid new dependencies, keep layout aligned with UI rules, and avoid build conflicts with
trunk serve.
- Decision:
- Parse ANSI SGR sequences into styled spans for rendering with theme-aware colors.
- Keep a bounded in-memory log buffer with a fixed max size.
- Use streaming text decode to preserve multibyte characters across chunks.
- Add new log lines to the top of the view and restrict scrolling to the terminal area.
- Use a dedicated
dist-servedirectory fortrunk serveto avoid staging conflicts withui-build.
- Consequences:
- Positive outcomes: log output retains color/style and Unicode, memory growth is capped, log background is black.
- Risks or trade-offs: ANSI color mapping approximates terminal colors via theme tokens and CSS variables.
- Follow-up:
- Implementation tasks: monitor logs stream for any unhandled ANSI sequences and extend parsing as needed.
- Review checkpoints: run
just cibefore handoff.
Test coverage summary
just ui-build: failed (wasm-bindgen could not write to staging directory whiletrunk servewas running).
Observability updates
- None.
Dependency rationale
- No new dependencies added.
Risk & rollback plan
- Risk: unusual ANSI sequences may render as plain text.
- Rollback: remove ANSI parsing and revert to raw log line rendering.
Agent Compliance: Clippy Cargo Lints
- Status: Accepted
- Date: 2025-12-31
- Context:
- AGENT.md mandates clippy::cargo in the crate-level deny list for every lib/main.
- Several crate roots were missing clippy::cargo, which is a documented compliance violation.
- Decision:
- Add clippy::cargo to every crate-level lint deny list alongside clippy::all/pedantic/nursery.
- Keep existing unsafe-code policies intact (FFI-only allowances remain scoped).
- Consequences:
- Positive outcomes: consistent lint coverage across crates; future clippy::cargo issues surface early.
- Risks or trade-offs: additional lint findings may require follow-up fixes in future changes.
- Follow-up:
- Run just ci to confirm the lint gate passes across the workspace.
- Monitor future changes for clippy::cargo warnings introduced by new code.
- Motivation:
- Align all crates with AGENT.md lint requirements and eliminate policy drift.
- Design notes:
- Automated, minimal insertion of clippy::cargo after clippy::pedantic in existing deny lists.
- Test coverage summary:
- just ci (full pipeline) is required before hand-off; run after edits.
- Observability updates:
- None.
- Risk & rollback plan:
- Roll back the lint list changes if they conflict with a required exception, then document a targeted ADR.
- Dependency rationale:
- No new dependencies added.
Docs: Pin mdbook-mermaid for just docs
-
Status: Accepted
-
Date: 2025-12-31
-
Context:
- Motivation:
just docsfailed because mdbook-mermaid 0.16.2 cannot parse under mdbook 0.5.2, even though docs are valid. - Constraints: Docs build must run via
just, no manual tooling, avoid repo changes outside the justfile. - Test coverage summary:
just docsrun after change; no unit tests applicable. - Observability updates: None.
- Dependency rationale: No new crates; pin existing mdbook-mermaid tool to 0.17.0 to match mdbook 0.5.x behavior.
- Motivation:
-
Decision:
- Require mdbook-mermaid 0.17.0 in
just docs-installand reinstall if mismatched. - Make
just docsinvokejust docs-installbefore build and index. - Alternatives considered: rely on user-managed tool versions; pin mdbook to 0.5.0; remove mermaid preprocessor.
- Require mdbook-mermaid 0.17.0 in
-
Consequences:
- Positive outcomes:
just docsconsistently installs a compatible mermaid preprocessor and builds successfully. - Risks or trade-offs: Running
just docsmay reinstall mdbook-mermaid when versions differ; version pin may lag future mdbook releases. - Risk & rollback plan: If issues arise, revert the
justfilechange or update the pinned version and rerunjust docs.
- Positive outcomes:
-
Follow-up:
- Implementation tasks: Update
justfileand verifyjust docs. - Review checkpoints: Revisit the pin when mdbook or mdbook-mermaid releases require it.
- Implementation tasks: Update
Dashboard UI checklist completion and auth/SSE hardening
- Status: Accepted
- Date: 2026-01-01
Motivation
- Complete remaining dashboard UI checklist items without adding new dependencies.
- Tighten auth and SSE handling to avoid stale tokens and replay conflicts.
Context
- UI relies on SSE for live torrent updates and must survive Last-Event-ID conflicts.
- Auth tokens require a 14-day TTL enforced by both server and client.
- UI should allow anonymous mode when server auth_mode is none.
Decision
- Move torrent sort state into URL-backed filters and apply client-side ordering.
- Reset SSE Last-Event-ID on 409 conflict and reconnect with backoff.
- Refresh API keys on save to capture expiry; invalidate keys on logout via config patch.
- Mirror CORS origin on the API router to cover SSE and REST.
Alternatives considered:
- Add a dedicated logout endpoint: rejected to avoid OpenAPI changes.
- Store API keys without expiry: rejected to enforce 14-day TTL.
Design Notes
- Sorting is represented as
sort=key:dirin the query string. - Metadata updates trigger a targeted list refresh to keep tags/trackers current.
- Anonymous auth is enabled from
.well-knownapp_profile when configured.
Consequences
- Login now performs a refresh call to capture expiry; failures surface as toasts.
- Some SSE metadata events trigger list refreshes, increasing fetch volume.
Test Coverage Summary
DATABASE_URL=postgres://revaer:revaer@172.17.0.1:5432/revaer REVAER_TEST_DATABASE_URL=postgres://revaer:revaer@172.17.0.1:5432/revaer just ci(fmt, lint, udeps, audit, deny, ui-build, test, test-features-min, cov).
Observability Updates
- No new metrics or tracing changes.
Risk & Rollback Plan
- Risk: logout fails if config patch is rejected; UI now reports an error toast.
- Rollback: revert UI auth/SSE changes and re-run
just ci.
Dependency Rationale
- Updated
sqlxto 0.9.0-alpha.1 and aligned vendoredhashlinkto hashbrown 0.16 to satisfyclippy::multiple_crate_versionswithout introducing git dependencies.
Follow-up
- Confirm auth refresh behavior against expired keys during QA.
071: Libtorrent Native Fallback for Default CI
- Status: Accepted
- Date: 2026-01-02
- Context:
just cirunscargo udepsacross the workspace and fails on hosts without libtorrent headers or pkg-config data.- Native libtorrent integration tests are explicitly gated by
REVAER_NATIVE_IT, so default runs should remain deterministic without requiring native system deps.
- Decision:
- Gate native FFI compilation behind a build-time cfg (
libtorrent_native) that is emitted only when libtorrent is discovered bybuild.rs. - When
REVAER_NATIVE_ITis set, missing libtorrent is treated as an error; otherwise the build falls back to the stub backend with a warning. - Alternatives considered: require libtorrent for all CI/dev runs, or remove
--all-featuresfrom the quality gates (rejected to keep feature coverage intact).
- Gate native FFI compilation behind a build-time cfg (
- Consequences:
- Default
just cisucceeds on machines without libtorrent while still honoring native coverage when explicitly requested. - Feature-enabled builds no longer guarantee native bindings unless libtorrent is present; native builds must opt in via
REVAER_NATIVE_IT. cargo-udepsignores thecxxdependency for this crate because usage is gated by the native cfg.
- Default
- Follow-up:
- Ensure native CI matrix jobs set
REVAER_NATIVE_IT=1and install or bundle libtorrent.
- Ensure native CI matrix jobs set
072: Agent Compliance Refactor (UI + HTTP + Config Layout)
- Status: Accepted
- Date: 2026-01-03
- Context:
- Motivation: bring the repository into closer alignment with AGENT layout and tooling rules after drift in UI routing, HTTP module layout, and config structure.
- Constraints: preserve existing APIs/behavior while relocating modules; avoid new dependencies and keep stored-procedure-only database access intact.
- Decision:
- Design notes: move torrent UI views into the feature module, scope window/router usage to the app layer, and reorganize API HTTP handlers/DTOs into
handlers/anddto/while re-exporting to preserve public paths. - Alternatives considered: leave modules in place and document exceptions (rejected to keep the structure enforceable); introduce a large-scale API surface rename (rejected to avoid breaking changes).
- Design notes: move torrent UI views into the feature module, scope window/router usage to the app layer, and reorganize API HTTP handlers/DTOs into
- Consequences:
- Positive outcomes: clearer module boundaries, AGENT-compliant Justfile/CI flow, and reduced cross-layer coupling in the UI.
- Risks or trade-offs: short-term churn from file moves and import updates; slight increase in module indirection via re-exports.
- Follow-up:
- Test coverage summary:
just ci(fmt, lint, udeps, audit, deny, ui-build, test, test-features-min, cov, build-release) passed with the ≥80% line coverage gate satisfied. - Observability updates: no new spans or metrics added for this refactor.
- Risk & rollback plan: revert the module move commits and restore prior paths if regressions appear; no data migrations were introduced.
- Dependency rationale: no new dependencies added; alternatives were to add helper crates for routing/structure, which were rejected to keep the footprint minimal.
- Test coverage summary:
UI checklist follow-ups: SSE detail refresh, labels shortcuts, strict i18n, and anymap removal
- Status: Accepted
- Date: 2026-01-03
Motivation
- Close remaining dashboard UI checklist gaps tied to live metadata, labels navigation, and strict i18n.
- Remove the vendored yewdux/anymap fork and the related advisory ignore now that upstream versions align. (Superseded by ADR 074 for Yew compatibility.)
Context
- SSE metadata updates did not refresh list-row tags/tracker/category without a full list refresh.
- Add/Create torrent modals lacked shortcuts into the Settings → Labels workflow.
- Translation fallback masked missing keys; the checklist requires explicit missing-key surfacing.
anymapadvisoryRUSTSEC-2021-0065was previously ignored due to the vendored store fork.
Decision
- Add a throttled, targeted torrent detail refresh path for metadata events and reuse detail summaries to update list rows.
- Add
on_manage_labelscallbacks in torrent modals to route directly to the Labels tab. - Remove i18n fallback behavior and add explicit English copy for new UI affordances.
- Dependency alignment is superseded by ADR 074 (vendored yewdux for Yew 0.22 compatibility).
- Drop the advisory ignore tied to the vendored
anymap. - Remove remaining vendored crates (
hashlink,sqlx-core) and rely on registry sources.
Design Notes
- Use a debounced HashSet queue to coalesce detail refreshes and avoid duplicate fetches.
- Settings accepts a
requested_tabprop and clears it once the tab selection is applied. - Translation bundles return
missing:{key}for missing entries; no default locale fallback. upsert_detailupdates list-row tags, tracker, category, and name/path using the detail summary.
Consequences
- Tags/trackers/categories update without full list refreshes, reducing UI staleness.
- Users can reach label management quickly from torrent modals.
- Missing translations are obvious during QA instead of silently falling back.
- Supply-chain ignores shrink with the removal of vendored
anymap. - Dependency alignment outcomes are tracked in ADR 074.
Test Coverage Summary
just ci: blocked byjust cov(workspace line coverage 76.46%).just cov: fails--fail-under-lines 80(TOTAL line coverage 76.46%).
Observability Updates
- None.
Risk & Rollback Plan
- Risk: targeted refreshes could increase detail fetch volume under heavy metadata churn.
- Rollback: revert the targeted refresh scheduler and restore the prior full refresh behavior.
Dependency Rationale
- Dependency alignment decisions moved to ADR 074 to capture the vendored yewdux exception.
Follow-up
- Verify labels shortcuts and SSE metadata refresh during QA.
Temporary vendoring of yewdux for latest Yew compatibility
- Status: Accepted
- Date: 2026-01-03
- Context:
- We must stay on the latest crates.io
yewandyew-router. yewduxon crates.io (0.11) depends onyew0.21, which conflicts withyew-router0.19 (yew0.22).- Git dependencies are disallowed, and vendoring is normally disallowed.
- We must stay on the latest crates.io
- Decision:
- Vendor
yewduxundervendor/yewduxand update it to compile againstyew0.22. - Patch the workspace to use the vendored
yewduxwhile keeping all other dependencies on crates.io. - Document the exception in
AGENT.mdwith a hard requirement to remove the vendored copy once a compatible crates.io release exists. - Alternatives considered:
- Wait on the latest Yew (rejected; staying current is top priority).
- Replace
yewduxwith an internal store (larger refactor; deferred unless compatibility stalls). - Use git dependencies (rejected by policy).
- Vendor
- Consequences:
- We stay current with
yew/yew-routerwithout git dependencies. - We own the maintenance burden for the vendored
yewduxuntil upstream compatibility lands. - Risk of drift from upstream; requires periodic review and eventual removal.
- We stay current with
- Follow-up:
- Monitor crates.io
yewduxreleases foryew0.22 compatibility. - Next check date: 2026-02-05 (or sooner if a new
yewduxrelease lands). - Remove
vendor/yewdux, the workspace patch, and the AGENT exception once compatible. - Run
just ciafter eachyew/yew-routerupgrade.
- Monitor crates.io
075: Coverage gate tests for config loader and data toggles
- Status: Accepted
- Date: 2026-01-03
- Context:
- Motivation:
just covfailed at 76.46% line coverage, blockingjust ci. - Constraints: no coverage suppression, no new dependencies, and AGENT compliance.
- Motivation:
- Decision:
- Add focused unit tests for config loader mapping/secret helpers and data config toggle sets.
- Alternatives considered: ignore the gate or suppress coverage reporting (rejected).
- Consequences:
- Positive outcomes:
just covclears the 80% line gate; configuration mappings gain direct test coverage. - Risks or trade-offs: slightly longer test runtime.
- Positive outcomes:
- Follow-up:
- Implementation tasks: add loader/data tests, update checklist status, run
just ci. - Review checkpoints: validate coverage stays >=80% during follow-up changes.
- Implementation tasks: add loader/data tests, update checklist status, run
- Test coverage summary:
just covreports 80.44% total line coverage (gate passes).
- Observability updates:
- None (tests only).
- Risk & rollback plan:
- If tests become flaky, revert the test additions and re-run
just ci.
- If tests become flaky, revert the test additions and re-run
- Dependency rationale:
- No new dependencies; reused existing dev crates.
076: Temporary clippy exception for hashbrown multiple versions
- Status: Accepted
- Date: 2026-01-03
- Context:
- Motivation:
just lintfails onclippy::multiple_crate_versionsdue tohashbrown0.15 (viasqlx-core->hashlink ^0.10) and 0.16 (viayew->indexmap ^2.11). - Constraints: keep
yew/yew-routerlatest, avoid vendoring or git crates, preserve CI viajust.
- Motivation:
- Decision:
- Allow
clippy::multiple_crate_versionsin the lint recipe and crate roots. - Allow duplicate
hashbrown/foldhashincargo-denybans to keepjust denygreen. - Remove the exception once SQLx releases a version compatible with
hashlink ^0.11(or the dependency graph otherwise unifies on a singlehashbrown).
- Allow
- Consequences:
- Positive outcomes:
just lintpasses while keeping primary deps current. - Risks or trade-offs: reduced lint signal for other multi-version cases; must monitor dependency graph for unintentional splits.
- Positive outcomes:
- Follow-up:
- Implementation tasks: update
just lint, add crate-root allows, updatedeny.toml, document exception inAGENT.md, track in checklist. - Review checkpoints: remove the exception when SQLx adopts
hashlink ^0.11andhashbrownunifies.
- Implementation tasks: update
- Test coverage summary:
- Not applicable (lint configuration change).
- Observability updates:
- None.
- Risk & rollback plan:
- Remove the lint allow flag and re-run
just cionce dependencies align.
- Remove the lint allow flag and re-run
- Dependency rationale:
- No new dependencies; exception is scoped to lint configuration only.
Restore UI Menu Interactions
- Status: Accepted
- Date: 2026-01-09
- Context:
- Motivation: top-right menus did not open reliably, and sidebar labels were hidden in the default open state.
- Constraints: No new dependencies; use daisyUI/Nexus patterns and keep component props stable.
- Design notes: Align dropdown markup with daisyUI examples and compose menu UI from shared components.
- Decision:
- Summary of the choice made: update dropdowns to the daisyUI focus pattern, compose locale/server menus into dedicated components, and default sidebar labels to visible while hiding them only in collapsed/hover modes using sibling selectors.
- Alternatives considered: keep inline markup, add JS for dropdown state, or hardcode label visibility without toggle support.
- Consequences:
- Positive outcomes: dropdown menus open reliably, layout follows component composition, and sidebar labels display in the default open state.
- Risks or trade-offs: Hover/collapsed behavior depends on CSS selectors; custom styling may need minor tuning.
- Follow-up:
- Implementation tasks: update
crates/revaer-ui/src/components/daisy/molecules/dropdown.rs, addcrates/revaer-ui/src/components/locale_menu.rs,crates/revaer-ui/src/components/server_menu.rs, wire them incrates/revaer-ui/src/app/mod.rsandcrates/revaer-ui/src/components/shell.rs, and adjustcrates/revaer-ui/static/style.css. - Review checkpoints: verify dropdown menus and sidebar labels on the dev server.
- Test coverage summary:
just ci(fmt, lint, udeps, audit, deny, ui-build, tests, cov, build-release). - Observability updates: none (no telemetry changes).
- Risk & rollback plan: revert the CSS/attribute changes if menu interactions or sidebar labels regress.
- Dependency rationale: no new dependencies; use HTML/CSS fixes instead of runtime guards.
- Implementation tasks: update
078 - Local Auth Bypass Guardrails (Task Record)
- Status: In Progress
- Date: 2026-01-11
Motivation
- Stop offering anonymous access when the backend does not allow no-auth mode.
- Ensure disabling local auth bypass requires credentials so operators cannot lock themselves out.
Design Notes
- Track backend auth_mode from /.well-known and config snapshot updates; allow anonymous only when auth_mode is none and the UI host is local.
- When no-auth is enabled and no credentials exist, set Anonymous auth state to connect immediately.
- When no-auth is disabled while anonymous, clear anonymous state and re-open the auth prompt.
- Guard settings changes that switch auth_mode to api_key unless API key or local auth credentials are saved.
Decision
- Gate anonymous UI behavior on backend auth_mode + local host detection.
- Block config saves that disable bypass without saved credentials.
Consequences
- Anonymous access is only offered when the backend explicitly allows it on a local host.
- Operators must save credentials before switching to auth-required mode.
Test Coverage Summary
- Added unit tests for AuthState credential validation.
Observability Updates
- None (UI-only change).
Risk & Rollback
- Risk: remote UI access to no-auth servers now requires credentials despite server allowing none.
- Rollback: revert auth_mode gating in app shell and the settings guard.
Dependency Rationale
- No new dependencies introduced.
Advisory RUSTSEC-2025-0141 Temporary Ignore
- Status: In Progress
- Date: 2026-01-11
- Context:
bincode1.3.3 is flagged as unmaintained (RUSTSEC-2025-0141).- The dependency is pulled via
gloo-workeringloo, which is required by the Yew UI stack. - No drop-in upgrade path is available without upstream releases.
- Decision:
- Add
RUSTSEC-2025-0141to.secignorewhile the UI depends ongloo/yewthat transitively requirebincode1.3.3. - Revisit once upstream releases remove or replace the dependency.
- Add
- Consequences:
just auditpasses while the advisory remains documented.- The unmaintained dependency stays in the tree until upstream updates land.
- Follow-up:
- Track
glooandyewrelease notes forbincodereplacement/removal. - Remove the ignore once the dependency graph no longer includes
bincode1.3.x.
- Track
Motivation
- Keep
just cipassing while capturing the risk and remediation plan for the unmaintained transitive dependency.
Design notes
- The ignore is scoped to the single advisory and documented in
.secignoreplus this ADR.
Test coverage summary
just ci(includes fmt, clippy, udeps, audit, deny, test, cov).
Observability updates
- None; advisory handling does not change runtime telemetry.
Risk & rollback plan
- Risk: unmaintained dependency remains in the build while upstream updates are pending.
- Rollback: remove the ignore after upgrading
gloo/yewor replacing the dependency.
Dependency rationale
glooandyeware required for the UI; alternatives would require a larger frontend migration.
080 - Local Auth Bypass Reliability (Task Record)
- Status: Accepted
- Date: 2026-01-11
Motivation
- Local-network auth bypass should remain usable during UI startup and on common LAN hostnames.
- Prevent UI crashes from invalid attribute names in component props.
Design Notes
- Expand local host detection to cover loopback/private/link-local IPs plus common LAN hostnames.
- Allow anonymous prompt options on local hosts even when auth mode is not yet known; auto-enable anonymous only once the backend reports no-auth.
- Replace raw-identifier button prop names to avoid invalid DOM attributes in Yew.
Decision
- Update local host detection and IPv6 base URL formatting in UI preferences.
- Adjust auth bypass gating to keep anonymous mode stable and prompt-friendly on local hosts.
- Rename button props from
r#typetobutton_typein shared components.
Consequences
- More reliable local auth bypass and fewer startup dead-ends.
- Anonymous option may appear on local hosts before auth mode is confirmed.
Test Coverage Summary
- UI behavior validated by existing integration flows; no new automated tests added.
Observability Updates
- None.
Risk & Rollback
- Risk: local anonymous option could be offered briefly when auth mode still resolves.
- Rollback: revert local host detection and auth bypass gating changes.
Dependency Rationale
- No new dependencies; uses
stdIP parsing only.
081 - Playwright E2E Test Suite (Task Record)
- Status: Accepted
- Date: 2026-01-14
Motivation
- Add automated UI coverage for core routes and modal flows.
- Centralize E2E configuration in a committed
tests/.envfile.
Design Notes
- Playwright config reads
tests/.envfor base URL, browser selection, timeouts, and artifacts. - Tests are grouped by page with page objects and a shared app fixture.
- Assertions focus on stable labels and layout anchors to avoid data coupling.
Decision
- Add a Playwright test harness under
/testswith config, fixtures, and page objects. - Add a
just ui-e2erecipe to run the suite via the standard workflow. - Ignore Playwright output directories in
.gitignore.
Consequences
- UI smoke checks can be run locally and wired into CI when ready.
- Running the suite requires Node tooling and Playwright browser installs.
Test Coverage Summary
- Added specs for dashboard, torrents, settings, logs, health, and navigation smoke.
Observability Updates
- None.
Risk & Rollback
- Risk: label changes or auth/setup overlays can break selectors.
- Rollback: remove the
/testsPlaywright suite andui-e2erecipe.
Dependency Rationale
@playwright/test: browser automation and test runner.dotenv: load environment configuration fromtests/.env.
082 - E2E Gate and Selector Stability (Task Record)
- Status: Accepted
- Date: 2026-01-14
Motivation
- Stabilize Playwright selectors against shared nav labels and auth overlays.
- Make UI E2E runs a required quality gate for local changes.
- Document how to run the E2E suite.
Design Notes
- Scope selectors to the layout content area or sidebar to avoid strict-mode collisions.
- Use the auth overlay’s dismiss icon button when present; fall back to the text button.
- Document the
just ui-e2erequirement in README and AGENT.
Decision
- Update Playwright page objects to scope selectors and handle the auth overlay deterministically.
- Add UI E2E requirements to
README.mdandAGENT.md.
Consequences
- Navigation and logs checks avoid ambiguous label matches.
- E2E tests are enforced as a local quality gate.
Test Coverage Summary
just ui-e2e
Observability Updates
- None.
Risk & Rollback
- Risk: UI label changes may still require selector updates.
- Rollback: revert the selector scoping and gate requirements.
Dependency Rationale
- No new dependencies; reuse Playwright and dotenv.
083 - API Preflight Before UI E2E (Task Record)
- Status: Accepted
- Date: 2026-01-14
Motivation
- Verify API availability before UI E2E runs to reduce false attribution to the UI.
Design Notes
- Add a dedicated Playwright project that hits public API endpoints.
- Make browser projects depend on the API project to enforce ordering.
- Keep checks read-only and stable:
/health,/metrics,/docs/openapi.json.
Decision
- Add an API preflight spec and wire it as a dependency for UI projects.
- Add
E2E_API_BASE_URLto the test configuration docs.
Consequences
- UI tests do not run if API preflight fails.
- E2E setup now needs the API base URL to be accurate.
Test Coverage Summary
just ui-e2e
Observability Updates
- None.
Risk & Rollback
- Risk: API endpoint changes will require updates to the preflight checks.
- Rollback: remove the API project dependency and preflight spec.
Dependency Rationale
- No new dependencies; reuse Playwright.
084: E2E API Coverage With Temp Databases
- Status: Accepted
- Date: 2026-01-15
- Context:
- What problem are we solving?
- E2E coverage must exercise 100% of the HTTP API surface under both auth modes and surface API regressions before UI tests.
- Test runs must isolate state using temporary databases and document OpenAPI coverage gaps.
- What constraints or forces shape the decision?
- The API server derives port and auth mode from persisted configuration; setup flow must be exercised to activate the instance.
- E2E runs must be invoked via
justand usetests/.envfor configuration.
- What problem are we solving?
- Decision:
- Summary of the choice made.
- Add Playwright global setup/teardown to perform setup and factory reset.
- Expand Playwright API specs to cover every route and operation under both auth modes.
- Introduce a temp DB harness (
scripts/ui-e2e.sh) that starts API/UI servers, runs API suites first, then UI suites. - Document OpenAPI gaps in
docs/api/openapi-gaps.md.
- Alternatives considered.
- Reusing a shared dev database (rejected: violates isolation requirement).
- Running API and UI suites in a single Playwright project without temp DB orchestration (rejected: ordering and auth coverage requirements).
- Summary of the choice made.
- Consequences:
- Positive outcomes.
- Full HTTP surface coverage with deterministic, isolated runs.
- Clear documentation of OpenAPI drift.
- Risks or trade-offs.
- Longer E2E runtime and additional local prerequisites (Postgres + free ports).
- Additional maintenance for API fixtures when new endpoints are added.
- Positive outcomes.
- Follow-up:
- Implementation tasks.
- Keep
docs/api/openapi.jsonaligned with router updates. - Update the API spec and tests whenever routes change.
- Keep
- Review checkpoints.
- Verify
just ui-e2epasses in local and CI environments.
- Verify
- Implementation tasks.
Task Record
- Motivation:
- Enforce API-first E2E verification, full route coverage, and state isolation across auth modes.
- Design notes:
- Playwright global setup completes setup using the configured auth mode.
- Global teardown issues factory reset to cover the endpoint and clear state.
- Temp DB orchestration uses
sqlxto create and drop isolated databases per suite.
- Test coverage summary:
- API specs cover all routes and methods from
crates/revaer-api/src/http/router.rsunderapi_keyandnonemodes. - UI specs continue to validate navigation and page rendering after API suites pass.
- API specs cover all routes and methods from
- Observability updates:
- E2E runs emit API/UI logs to
tests/test-resultsfor debugging.
- E2E runs emit API/UI logs to
- Risk & rollback plan:
- If temp DB orchestration proves unstable, revert to manual server management and isolate DB via dedicated test instance.
- Dependency rationale:
- No new runtime dependencies added.
085 - E2E OpenAPI Client and Unified Coverage
- Status: Accepted
- Date: 2026-01-16
- Context:
- What problem are we solving?
- E2E runs overwrote reports and did not enforce full API/UI surface coverage.
- API E2E tests needed a generated TypeScript client based on the OpenAPI spec.
- What constraints or forces shape the decision?
- Use a single Playwright execution with one final report.
- Use a maintained generator that supports native Node.js fetch.
- Keep OpenAPI synchronized with the router surface.
- What problem are we solving?
- Decision:
- Summary of the choice made.
- Expand OpenAPI coverage to match all router endpoints and generate a typed E2E client via openapi-typescript + openapi-fetch.
- Enforce API operation and UI route coverage in the Playwright teardown.
- Run API suites for both auth modes in one Playwright run, then UI tests.
- Alternatives considered.
- OpenAPI Generator CLI (Java) and swagger-typescript-api; rejected due to heavier toolchain and weaker fit for native fetch.
- Summary of the choice made.
- Consequences:
- Positive outcomes.
- Single Playwright report with explicit API/UI coverage enforcement.
- Typed API client aligned to OpenAPI for E2E calls.
- Risks or trade-offs.
- Additional Node dependencies and a stricter coverage gate that must be updated when routes change.
- Positive outcomes.
- Follow-up:
- Implementation tasks.
- Keep
docs/api/openapi.jsonaligned with router updates and regeneratetests/support/api/schema.tsas needed. - Update UI route coverage list when new routes are added.
- Keep
- Review checkpoints.
just ui-e2ecompletes with a single report and no missing coverage.just cipasses cleanly.
- Implementation tasks.
Task Record
- Motivation:
- Ensure the E2E suites cover the entire API and UI surface in one continuous execution with a single report.
- Design notes:
- Playwright projects now sequence no-auth API coverage ahead of API-key coverage, then UI coverage.
- API requests use a generated OpenAPI client with native fetch and a coverage ledger written per project.
- UI navigation records route coverage through the shared AppShell helpers.
- Test coverage summary:
just ui-e2e(single Playwright run with API + UI coverage checks).
- Observability updates:
- Coverage artifacts are written to
tests/test-resultsfor API and UI coverage validation.
- Coverage artifacts are written to
- Risk & rollback plan:
- Risk: coverage failures if OpenAPI or UI routes drift.
- Rollback: revert Playwright project sequencing and remove coverage enforcement to return to per-suite execution.
- Dependency rationale:
- Added
openapi-typescript+openapi-fetchto generate a typed client backed by native Node.js fetch. - Alternatives considered: OpenAPI Generator CLI (Java) and swagger-typescript-api; both rejected to avoid heavier toolchains and non-fetch defaults.
- Added
086 - Default Local Auth Bypass (Task Record)
- Status: Accepted
- Date: 2026-01-17
Motivation
- Ensure factory reset remains available when configuration data is broken.
- Default new installs to a recoverable auth state without implicit API key setup.
Design Notes
- Switch
AppAuthModedefault tononeand align setup completion fallback. - Change the
app_profile.auth_modedatabase default tononevia migration. - Make setup helpers send explicit
auth_modevalues for both auth paths. - Update reference configuration documentation to match the new default.
Decision
- Default auth mode to no-auth in code and migrations, while leaving explicit API key setups unchanged.
Consequences
- New databases start with no-auth access until setup selects API key mode.
- Existing databases retain their configured auth mode unless reset.
Test Coverage Summary
- Existing API/E2E flows cover both auth modes; setup helper now sets auth mode explicitly.
Observability Updates
- None.
Risk & Rollback
- Risk: integrations relying on implicit API key setup must now send
auth_modeexplicitly. - Rollback: revert the auth mode defaults and migration; restore previous setup fallback.
Dependency Rationale
- No new dependencies introduced.
Local network auth ranges and settings validation
- Status: Accepted
- Date: 2026-01-17
- Context:
- Local auth bypass must work for recovery even when API key state is broken.
- Local-only checks must handle reverse proxies (k3s/docker) that rewrite the peer IP.
- Operators need to adjust what counts as local without locking themselves out.
- Decision:
- Persist app profile local network CIDRs and enforce them for no-auth and recovery flows.
- Trust forwarded client IP headers only when the peer is already within a local range.
- Validate local network updates against the saving client address before applying.
- Consequences:
- Anonymous access is now scoped to configured local networks.
- Factory reset remains possible from local clients even when API key inventory queries fail.
- Misconfigured local ranges can block access until corrected or reset.
- Follow-up:
- Keep OpenAPI and UI fields in sync with
app_profile.local_networks. - Monitor any proxy deployments for forwarded header quirks.
- Keep OpenAPI and UI fields in sync with
Motivation
- Provide a safe recovery path when auth state or API key inventory is broken.
- Allow common local topologies (LAN device -> k3s/docker service) without false negatives.
- Prevent settings updates that would immediately disconnect the caller.
Design notes
- Added
app_profile.local_networksas a normalized list of CIDR strings with defaults for loopback, RFC1918, and link-local ranges. - API auth middleware now derives client IP from ConnectInfo and trusted forwarded headers, enforcing local-only access for no-auth and factory reset fallbacks.
- Settings patch validates that the updated local network list still includes the caller IP before persisting.
Test coverage summary
- Auth middleware tests cover anonymous local access, remote rejection, and factory reset allowance when API key inventory checks fail.
- Config validation tests cover CIDR normalization and invalid prefixes.
Observability updates
- Auth middleware logs when local network parsing fails or when recovery paths are used.
Risk & rollback plan
- Risk: misconfigured local CIDRs can block anonymous access or factory reset.
- Mitigation: validation rejects updates that exclude the saving client.
- Rollback: revert migration 0009 and remove local network enforcement in auth middleware, then restore the previous auth behavior.
Dependency rationale
- No new dependencies. CIDR parsing reuses std-based helpers in
revaer-config.
Live SSE Log Streaming
- Status: Accepted
- Date: 2026-01-17
- Context:
- Motivation: remove dummy SSE data, ensure SSE is the single live update channel, and surface recent logs immediately on open.
- Constraints: keep the log stream lightweight, avoid new dependencies, and respect existing SSE routes.
- Decision:
- Summary: drop the dummy SSE stream, retain a rolling two-minute log buffer for SSE snapshots, and add log level filtering + text search in the UI.
- Design notes: telemetry now snapshots recent log lines, the API chains the snapshot ahead of the live broadcast, and UI log lines track level + receipt time for filtering and pruning.
- Dependency rationale: no new dependencies; reuse existing serde_json parsing in the UI for log level detection.
- Consequences:
- Positive outcomes: SSE reflects live event data only, logs open with context, and the logs page can filter by level or search text.
- Risks or trade-offs: some log lines may skip buffer storage under contention, and non-drop SSE errors now require manual retry instead of automatic reconnect.
- Risk & rollback plan: revert the log buffer/snapshot changes to restore streaming-only behavior and re-enable auto-reconnect if needed.
- Follow-up:
- Implementation tasks: adjust telemetry buffering, SSE handlers, and logs UI controls with filtering/search state.
- Test coverage summary: added log buffer tests; run
just ciandjust ui-e2eto validate full coverage. - Observability updates: log stream now captures a rolling snapshot; SSE status remains visible via existing UI badges.
Port process termination for dev tooling
- Status: Accepted
- Date: 2026-01-17
- Context:
- What problem are we solving? Port cleanup for 7070/8080 did not verify termination, leaving ports occupied and making dev or E2E startup flaky.
- What constraints or forces shape the decision? Keep existing tooling, avoid new dependencies, and ensure startup fails fast when ports cannot be freed.
- Decision:
- Summary of the choice made. Add a graceful shutdown path that sends SIGTERM, waits briefly, escalates to SIGKILL, and errors if ports remain bound.
- Alternatives considered. Leave the kill-only behavior or add external tooling/scripts; rejected to avoid new dependencies and extra surface area.
- Consequences:
- Positive outcomes. Cleanup is deterministic and failures surface early when ports cannot be reclaimed.
- Risks or trade-offs. Force-kill can terminate unrelated processes on those ports; failures may require manual cleanup before rerun.
- Task record:
- Motivation: Ensure port cleanup actually releases 7070/8080 before starting services.
- Design notes: Use lsof PID discovery, SIGTERM with polling, SIGKILL fallback, and a final port-bound check; reuse in
just dev. - Test coverage summary: Covered by
just ciandjust ui-e2eruns (no direct unit tests). - Observability updates: Added console messages in
just zombiesfor graceful/force termination. - Risk & rollback plan: Revert the justfile changes if termination must be non-fatal; manual kill with lsof remains a fallback.
- Dependency rationale: No new dependencies; lsof already assumed by existing recipes.
- Follow-up:
- Implementation tasks. Keep
zombiesaligned with any future port changes. - Review checkpoints. Verify
just devandjust ui-e2estartup when ports are in use.
- Implementation tasks. Keep
UI log filters and shell controls
- Status: Accepted
- Date: 2026-01-17
- Context:
- What problem are we solving? The logs screen needs a DaisyUI filter, consistent search affordances, and SSE-level filtering; shell controls need icon-only indicators, consistent flag icons, and no overlapping z-order with sticky action bars.
- What constraints or forces shape the decision? Keep the existing UI structure, avoid new dependencies, and ensure E2E coverage for regressions.
- Decision:
- Summary of the choice made. Replace the log level select with a DaisyUI filter, make the search input a proper daisyUI input with cmd/ctrl+enter hints, move to minimum-level filtering, update shell menus/icons and sidebar controls to icon-only with tooltips, and remove home/torrents breadcrumbs.
- Alternatives considered. Keep the select-based filter and add new i18n keys across locales; rejected to avoid translation churn and align with DaisyUI components.
- Consequences:
- Positive outcomes. Log filtering matches severity expectations, UI controls are more compact, and dropdowns no longer hide behind sticky action bars.
- Risks or trade-offs. Icon-only controls rely on tooltips for clarity; any tooltip styling changes must preserve accessibility.
- Task record:
- Motivation: Align log filtering with DaisyUI and ensure shell controls remain stable across layout changes.
- Design notes: Use DaisyUI filter inputs with severity thresholds; add search hint kbd labels; raise dropdown z-index; remove breadcrumb headers; keep icons/titles for accessibility.
- Test coverage summary: Updated Playwright UI specs for logs filter/search, topbar icons, locale flags, breadcrumbs, and dropdown stacking.
- Observability updates: None.
- Risk & rollback plan: Revert the UI component changes and E2E assertions if layouts regress; fallback to prior select-based filter is isolated to logs view.
- Dependency rationale: No new dependencies added.
- Follow-up:
- Implementation tasks. Keep locale flags and icon-only controls consistent across future shell revisions.
- Review checkpoints. Verify log filtering and dropdown stacking in UI E2E runs.
091: Raise per-crate coverage gate to 90%
- Status: Accepted
- Date: 2026-01-17
- Context:
- The workspace coverage gate previously enforced ≥80% line coverage overall, which masked low-coverage crates.
- The requirement is now ≥90% coverage per crate, without test-only code in production modules.
- The gate must remain Justfile-driven and avoid llvm-cov suppression flags.
- Decision:
- Update
just covto runcargo llvm-covper crate and enforce a ≥90% threshold via the Justfile loop. - Raise the documented coverage requirement in
AGENT.mdto 90% per crate. - Add focused unit tests to raise coverage in low-coverage crates (test-support, asset_sync, doc-indexer, CLI, API setup/docs, UI ANSI parsing, libtorrent types).
- Update
- Consequences:
- Coverage checks now report per-crate deficits with precise percentages.
- The stricter gate currently fails on multiple crates until additional tests are added.
- More test investment is required for large modules (API handlers, config loader, fsops pipeline, app bootstrap).
- Follow-up:
- Add tests to raise coverage for:
revaer-app,revaer-config,revaer-data,revaer-fsops,revaer-api,revaer-ui,revaer-torrent-libt,asset_sync, andrevaer-test-support. - Re-run
just cov, then complete the fulljust ciandjust ui-e2egates.
- Add tests to raise coverage for:
Motivation
- Ensure test coverage reflects real production risk by enforcing ≥90% per crate.
Design notes
- Coverage is computed per crate by running
cargo llvm-cov --packagein a workspace member loop. - Crates with zero executable lines are treated as 100% covered by
llvm-covfor that package.
Test coverage summary
just covrun on 2026-01-17; coverage gate failed. Current per-crate results:- revaer-app: 70.71% (1922/2718)
- revaer-test-support: 71.30% (246/345)
- revaer-data: 72.75% (993/1365)
- revaer-config: 75.65% (2775/3668)
- revaer-fsops: 76.15% (1520/1996)
- asset_sync: 79.16% (300/379)
- revaer-ui: 83.82% (1911/2280)
- revaer-api: 84.37% (7539/8936)
- revaer-torrent-libt: 85.38% (2961/3468)
- revaer-cli: 86.51% (2084/2409)
- revaer-doc-indexer: 89.73% (655/730)
- revaer-telemetry: 92.40% (729/789)
- revaer-torrent-core: 94.34% (250/265)
- revaer-api-models: 95.34% (553/580)
- revaer-events: 96.40% (268/278)
- revaer-runtime: 100.00% (0/0)
Observability updates
- None.
Risk & rollback plan
- Risk: CI remains blocked until per-crate coverage is lifted to 90%.
- Rollback: revert the
just covloop and reset the coverage threshold (not recommended unless blocking critical releases).
Dependency rationale
- No new dependencies added.
092: Fsops coverage hardening
- Status: Accepted
- Date: 2026-01-17
- Context:
- The workspace requires at least 90% per-crate line coverage (ADR 091).
revaer-fsopscontained untested branches in pipeline helpers and filesystem routines.
- Decision:
- Add targeted unit tests for fsops pipeline steps, rule parsing, and file operations.
- Keep all test-only logic inside
#[cfg(test)]modules. - Alternatives considered: integration tests backed by
RuntimeStore+ database; rejected for higher cost and slower feedback.
- Consequences:
- Positive outcomes:
- Improved coverage and regression protection for fsops edge cases.
- Risks or trade-offs:
- Additional filesystem IO during tests; mitigate with temp dirs and deterministic fixtures.
- Positive outcomes:
- Follow-up:
- Run
just covandjust cito confirm the per-crate gate. - Watch for platform-specific permission semantics in CI.
- Run
Motivation
Raise revaer-fsops coverage to meet the 90% per-crate gate while strengthening confidence in filesystem post-processing edge cases.
Design notes
- Exercise both happy-path and skip/error branches without introducing production-only hooks.
- Favor direct unit tests of helper functions to keep the tests fast and deterministic.
Test coverage summary
- Added unit tests for meta initialization, allowlist enforcement, glob parsing errors, archive extension checks, step short-circuiting, and file operation paths.
- Added permission/ownership tests for unix targets to cover
apply_permissions,resolve_owner, andresolve_group.
Observability updates
None; no runtime behavior changes.
Risk & rollback plan
- Risk: file-permission tests may behave differently on non-unix systems.
- Rollback: revert the added tests and rework with platform guards if CI shows instability.
Dependency rationale
No new dependencies added. Alternative considered: integration coverage via database-backed runtime store, rejected due to setup overhead.
UI logic extraction for testable components
- Status: Accepted
- Date: 2026-01-17
- Context:
- The UI layer accumulated view-local parsing and formatting logic that was hard to test.
- Coverage targets require host-testable logic outside Yew components.
- Decision:
- Extract feature-specific helpers into
logic.rsmodules and keep state types instate.rs. - Keep view modules focused on rendering and
UseStateHandleorchestration.
- Extract feature-specific helpers into
- Consequences:
- Positive outcomes: improved unit test coverage, clearer separation of concerns.
- Risks or trade-offs: refactor touchpoints may introduce regressions; mitigated with tests.
- Motivation:
- Ensure UI logic is reusable, deterministic, and testable without DOM bindings.
- Design notes:
- Logic modules stay pure; only view helpers touch Yew handles.
- Error surfaces avoid unit error types and return typed results where parsing can fail.
- Test coverage summary:
- Added unit tests for newly extracted helpers in each UI feature slice.
- Observability updates:
- None (UI-only refactor with no telemetry changes).
- Risk & rollback plan:
- If regressions appear, revert to the previous view-local helpers and reapply incrementally.
- Dependency rationale:
- No new dependencies; reused existing crates and standard library helpers.
UI E2E sharding in workflows
- Status: Accepted
- Date: 2026-01-23
- Context:
- What problem are we solving?
- UI E2E runs are long and delay feedback, especially when other jobs have already passed.
- What constraints or forces shape the decision?
- Keep Playwright invoked through
just ui-e2eand avoid new dependencies.
- Keep Playwright invoked through
- What problem are we solving?
- Decision:
- Summary of the choice made.
- Add Playwright sharding support to
just ui-e2eand shard the UI E2E jobs with a matrix in CI/PR workflows.
- Add Playwright sharding support to
- Alternatives considered.
- Increase test workers only (limited benefit because suite already uses Playwright workers).
- Split tests by directory into separate workflows (more maintenance).
- Summary of the choice made.
- Consequences:
- Positive outcomes.
- Reduced wall-clock time for UI E2E runs via parallel shards.
- Risks or trade-offs.
- Increased parallel runner usage for sharded jobs.
- Positive outcomes.
- Follow-up:
- Implementation tasks.
- Monitor shard duration balance and tune shard counts if needed.
- Review checkpoints.
- Reassess sharding if runner usage limits become a concern.
- Implementation tasks.
Task record
- Motivation: Parallelize UI E2E to shorten CI runtime while keeping the just-based workflow contract intact.
- Design notes: Use Playwright’s
--shardflag driven byPLAYWRIGHT_SHARD_INDEXandPLAYWRIGHT_SHARD_TOTAL. - Test coverage summary:
just ciandjust ui-e2epassed. - Observability updates: None (workflow-only change).
- Risk & rollback plan: Revert sharding env and matrix changes if shard stability or runner usage is problematic.
- Dependency rationale: No new dependencies introduced.
Untagged images use dev tag
- Status: Accepted
- Date: 2026-01-23
- Context:
- What problem are we solving?
- Untagged builds currently publish to a separate
-devimage name and still apply alatesttag, making it harder to discover the intended development tag.
- Untagged builds currently publish to a separate
- What constraints or forces shape the decision?
- Keep tagging logic in the GitHub workflow without altering the build artifacts or Dockerfile.
- What problem are we solving?
- Decision:
- Summary of the choice made.
- Publish untagged builds to the primary image name with a
devtag, while tagged builds retainlatest.
- Publish untagged builds to the primary image name with a
- Alternatives considered.
- Keep the
-devimage suffix and add an extradevalias tag.
- Keep the
- Summary of the choice made.
- Consequences:
- Positive outcomes.
- Untagged images are clearly labeled as development artifacts in the primary repository.
- Risks or trade-offs.
- Development images now share the same repository name as releases, requiring clear tag usage.
- Positive outcomes.
- Follow-up:
- Implementation tasks.
- Monitor downstream consumers for any references to the previous
-devimage name.
- Monitor downstream consumers for any references to the previous
- Review checkpoints.
- Reassess if consumers need both
devandlatesttags for untagged builds.
- Reassess if consumers need both
- Implementation tasks.
Task record
- Motivation: Align untagged image naming with a
devtag instead of a separate-devrepository andlatest. - Design notes: Use a workflow alias tag that switches between
latestanddevbased on ref type. - Test coverage summary:
just ciandjust ui-e2epassed. - Observability updates: None (workflow-only change).
- Risk & rollback plan: Revert the alias tag logic in
.github/workflows/ci.ymlif consumers depend onrevaer-devorlatest. - Dependency rationale: No new dependencies introduced.
Aggregate UI E2E coverage for sharded runs
- Status: Accepted
- Date: 2026-01-23
- Context:
- What problem are we solving?
- Playwright sharding runs global teardown per shard, causing partial coverage checks to fail.
- What constraints or forces shape the decision?
- Keep Playwright invoked via
just ui-e2e, avoid new dependencies, and preserve coverage gating.
- Keep Playwright invoked via
- What problem are we solving?
- Decision:
- Summary of the choice made.
- Skip coverage assertions in sharded teardown, write shard-specific coverage files, upload them as artifacts, and run an aggregate coverage check in a dedicated job.
- Alternatives considered.
- Disable coverage checks entirely for sharded runs (reduces signal).
- Keep non-sharded UI E2E only (slower feedback).
- Summary of the choice made.
- Consequences:
- Positive outcomes.
- Sharded UI E2E runs succeed while retaining full coverage enforcement.
- Risks or trade-offs.
- Additional workflow job and artifact handling.
- Positive outcomes.
- Follow-up:
- Implementation tasks.
- Monitor shard duration and artifact sizes.
- Review checkpoints.
- Revisit shard count if coverage aggregation becomes slow.
- Implementation tasks.
Task record
- Motivation: Fix sharded UI E2E failures while maintaining coverage enforcement.
- Design notes: Shard-specific coverage files with an aggregate coverage check via
just ui-e2e-coverage. - Test coverage summary:
just ci,just ui-e2e. - Observability updates: None (workflow-only change).
- Risk & rollback plan: Revert sharding and coverage aggregation changes if instability persists.
- Dependency rationale: No new dependencies introduced.
Dev prereleases and PR image previews
- Status: Accepted
- Date: 2026-01-24
- Context:
- What problem are we solving?
- Main should publish dev prereleases and dev-tagged images without displacing stable “latest” artifacts.
- PRs need preview images without exposing secrets to forks.
- What constraints or forces shape the decision?
- CI must run via
just, releases must be semver-based from Conventional Commits, and stable releases/images stay version-tagged.
- CI must run via
- What problem are we solving?
- Decision:
- Summary of the choice made.
- Use semantic-release on main to publish
-dev.Nprereleases with attached artifacts, tag dev images with the prerelease tag plusdev, and publish PR preview images for non-fork PRs usingpr-<num>andpr-<num>-<sha>tags only.
- Use semantic-release on main to publish
- Alternatives considered.
- Continue tag-only releases (no dev prereleases).
- Publish dev images under a separate repository name.
- Summary of the choice made.
- Consequences:
- Positive outcomes.
- Main builds produce versioned dev releases and dev images without changing the stable “latest” artifacts.
- Non-fork PRs get preview images with consistent tags.
- Risks or trade-offs.
- Adds release tooling dependencies and requires Conventional Commit discipline for every main merge.
- Positive outcomes.
- Follow-up:
- Implementation tasks.
- Monitor semantic-release output and adjust release rules if release cadence is too strict or too noisy.
- Review checkpoints.
- Revisit tag patterns if GitHub tag filters or image consumers need additional aliases.
- Implementation tasks.
Task record
- Motivation: Publish dev prereleases and PR preview images without displacing stable releases or
latestimages. - Design notes: Semantic-release prereleases on main drive version tags; PR images are tagged
pr-<num>andpr-<num>-<sha>only. - Test coverage summary:
just ci,just ui-e2e. - Observability updates: None (workflow-only change).
- Risk & rollback plan: Remove release-dev and PR image jobs and revert to tag-only releases if prereleases cause instability.
- Dependency rationale: Add semantic-release tooling in
release/to analyze Conventional Commits and publish prereleases with assets.
Reusable image build workflow
- Status: Accepted
- Date: 2026-01-24
- Context:
- What problem are we solving?
- Image build logic is duplicated across CI and PR workflows, and CI was failing to load due to invalid tag filters.
- What constraints or forces shape the decision?
- Keep CI driven by
just, avoid dev tag releases updating stable artifacts, and reduce workflow duplication.
- Keep CI driven by
- What problem are we solving?
- Decision:
- Summary of the choice made.
- Introduce a reusable workflow for multi-arch image build/manifest creation and use it from both CI and PR workflows, while gating CI roots to skip dev tag pushes.
- Alternatives considered.
- Keep duplicated image steps in each workflow.
- Split tag builds into a separate workflow without reuse.
- Summary of the choice made.
- Consequences:
- Positive outcomes.
- Consistent image build behavior across workflows with less duplication and clear tag policies.
- Risks or trade-offs.
- Reusable workflows add indirection when tracing failures.
- Positive outcomes.
- Follow-up:
- Implementation tasks.
- Monitor build images runs for any tag mismatches or manifest issues.
- Review checkpoints.
- Revisit tag gating if GitHub tag filters expand to support exclusion patterns.
- Implementation tasks.
Task record
- Motivation: Fix CI failures and share image build logic between CI and PR workflows.
- Design notes: Use a reusable workflow with parameterized tags and checkout refs to drive both dev and PR image builds.
- Test coverage summary:
just ci,just ui-e2e. - Observability updates: None (workflow-only change).
- Risk & rollback plan: Revert to inline workflow steps if reuse introduces instability.
- Dependency rationale: No new dependencies introduced.
Indexer ERD Single-Tenant and Audit Fields
- Status: Accepted
- Date: 2026-01-25
- Context:
- The indexer ERD needed to reflect single-tenant deployments and remove workspace/membership constructs.
- Audit actor fields must be non-null and use a system sentinel instead of NULL.
- Global configuration should be reusable across future media management features.
- Decision:
- Remove workspace/membership/invite constructs and document deployment-global scoping.
- Promote deployment_config and deployment_maintenance_state as singleton global config tables.
- Require created_by_user_id/updated_by_user_id/changed_by_user_id to be NN with system sentinel semantics.
- Update procedures, constraints, and index guidance to align with deployment-global indexing.
- Task Record:
- Motivation: Align the ERD with the single-tenant deployment model and explicit audit actors.
- Design notes: Removed workspace scoping, added deployment_role on app_user, documented system user_id=0 and all-zero UUID, revised procedures/constraints/indexes for deployment scope, and serialized log stream tests to avoid global buffer races.
- Test coverage summary:
just ciandjust ui-e2erun locally. - Observability updates: None (documentation and test-stability change only).
- Risk & rollback plan: Low risk; revert ERD edits if multi-tenant scope is reintroduced.
- Dependency rationale: None; no new dependencies. Alternatives considered: keep workspace scoping and NULL system actors (rejected).
- Consequences:
- Positive outcomes:
- ERD aligns with single-tenant deployments and global config reuse.
- Audit fields are explicit and consistent with system sentinel usage.
- Risks or trade-offs:
- Future multi-tenant support would require reintroducing tenant scoping.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep migrations and runtime schema changes aligned with the updated ERD.
- Review checkpoints:
- Validate stored procedures and schema changes during implementation.
- Implementation tasks:
SonarQube Workflow With Root Coverage LCOV
- Status: Accepted
- Date: 2026-03-22
- Context:
- Motivation:
- Add automated SonarQube analysis for every pull request and every push to
main. - Publish Rust coverage into SonarQube from a deterministic file in the repository root.
- Keep the native libtorrent FFI source analyzed instead of excluding it from SonarQube.
- Add automated SonarQube analysis for every pull request and every push to
- Constraints:
- CI workflows must use
justrecipes for operational steps. - Coverage artifact must be generated as
coverage/lcov.info. - Generated coverage file must not be tracked by git.
- The repository contains C++ FFI sources under
crates/revaer-torrent-libt/src/ffi, and SonarQube requires a C-family compilation database to analyze them correctly.
- CI workflows must use
- Motivation:
- Decision:
- Use
just covto build a combined workspace LCOV report atcoverage/lcov.info. - Add
just sonar-compile-dbto buildrevaer-torrent-libtwithREVAER_NATIVE_COMPILE_COMMANDS_PATHset sobuild.rsemitscoverage/compile_commands.json. - Add
.github/workflows/sonar.ymlto trigger onpull_requestandpushtomain. - In the Sonar workflow, run migrations, generate the LCOV file via
just cov, generate the native compile database viajust sonar-compile-db, and runSonarSource/sonarqube-scan-action@a31c9398be7ace6bbfaf30c0bd5d415f843d45e9(v7) with:sonar.projectKey=VannaDii_Revaersonar.organization=vannadiisonar.rust.lcov.reportPaths=coverage/lcov.infosonar.cfamily.compile-commands=coverage/compile_commands.json
- Ignore
/coveragein.gitignore. - Alternatives considered:
- Separate per-crate LCOV files merged post-process: rejected as unnecessary complexity for Sonar ingestion.
- Invoking cargo directly in workflow: rejected because repository policy requires
justrecipes. - Excluding C-family files from SonarQube: rejected because the native adapter is first-party code and should remain part of static analysis.
- Using external interception tooling such as Bear: rejected because the existing
cxx_buildpath already knows the exact compiler flags, so emitting the compile database inbuild.rskeeps local and CI behavior aligned with fewer moving parts.
- Use
- Consequences:
- Positive outcomes:
- SonarQube now runs on PRs and
mainpushes with workspace Rust coverage. - Coverage path is stable and tool-agnostic (
coverage/lcov.info). - Native FFI shim sources stay included in SonarQube analysis with the correct compiler context.
- SonarQube now runs on PRs and
- Risks and trade-offs:
- Sonar job runtime includes full coverage execution and DB-backed tests.
- Workflow requires valid
SONAR_TOKENrepository secret and database variable setup. - The compile database currently describes the checked-in
session.cpptranslation unit, so future native sources must be added deliberately if the FFI surface grows.
- Positive outcomes:
- Follow-up:
- Test coverage summary:
- Validation must cover
just sonar-compile-dbproducingcoverage/compile_commands.jsonplus the repository’s requiredjustgates.
- Validation must cover
- Observability updates:
- No runtime telemetry changes; this is CI/workflow-only.
- Risk and rollback plan:
- Roll back by removing
just sonar-compile-db, thebuild.rscompile database emission, and the workflow scan property if SonarQube compilation-database support causes instability.
- Roll back by removing
- Dependency rationale:
- No Rust dependencies added.
- GitHub Action dependency remains
SonarSource/sonarqube-scan-action@a31c9398be7ace6bbfaf30c0bd5d415f843d45e9(v7, official maintained scanner wrapper). Alternative was raw scanner CLI install steps, rejected for higher maintenance.
- Test coverage summary:
Indexer ERD checklist
- Status: Accepted
- Date: 2026-01-25
- Context:
- We need a complete, ordered, and trackable checklist for implementing ERD_INDEXERS.md.
- The checklist must reflect dependencies, support test-first execution, and avoid missed requirements.
- Decision:
- Add a dedicated ERD implementation checklist file that enumerates schema, procedures, services, behavior rules, and acceptance gates in dependency-first order.
- Alternatives considered: keep ad-hoc notes or split by subsystem; rejected due to risk of omissions and loss of a single, authoritative implementation plan.
- Consequences:
- Positive: a single source of truth for the ERD execution plan and validation steps.
- Trade-off: requires maintenance when ERD_INDEXERS.md changes.
- Follow-up:
- Keep ERD_INDEXERS_CHECKLIST.md synchronized with ERD_INDEXERS.md updates.
- Use the checklist as the staging plan for implementation and testing phases.
Task record
- Motivation:
- Ensure ERD_INDEXERS.md is implementable without missing steps or violating architecture rules.
- Design notes:
- The checklist is dependency-first and grouped by schema, procedures, runtime services, and acceptance gates to maximize testability.
- Test coverage summary:
- No tests added in this change; checklist calls out required test gates for future work.
- Observability updates:
- No runtime changes in this change; checklist enumerates required telemetry and metrics work.
- Risk & rollback plan:
- Risk is limited to documentation drift; rollback is deleting the checklist and ADR entry.
- Dependency rationale:
- No new dependencies added. Alternatives considered: none required.
Indexer core schema foundations
- Status: Accepted
- Date: 2026-01-25
- Context:
- We need to begin implementing the indexer ERD with core, dependency-first tables.
- The schema must follow ERD_INDEXERS.md and preserve SSOT for keys, IDs, and constraints.
- Decision:
- Add a new migration that introduces the initial enum types and core tables: app_user, deployment_config, deployment_maintenance_state, trust_tier, media_domain, and tag.
- Use bigint identity PKs, UUID public IDs, and explicit constraints per ERD.
- Consequences:
- Positive: establishes the foundation required for indexer configuration and tagging.
- Trade-off: further migrations are required to complete the full ERD.
- Follow-up:
- Add remaining enum types and schema tables from ERD_INDEXERS.md.
- Implement seed procedures and stored procedures for the new tables.
Task record
- Motivation:
- Start the indexer ERD implementation with the smallest dependency set.
- Design notes:
- Enum types are defined for deployment_role, trust_tier_key, and media_domain_key.
- Keys enforce lowercase checks; public UUIDs have no defaults to keep ownership in procedures.
- Test coverage summary:
- No new tests added; migration path is exercised via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting the migration if schema conflicts arise.
- Dependency rationale:
- No new dependencies added.
Indexer definition schema
- Status: Accepted
- Date: 2026-01-25
- Context:
- The indexer ERD requires a catalog of indexer definitions and field metadata.
- These tables are prerequisites for indexer instance configuration and import flows.
- Decision:
- Add a migration that introduces indexer definition enums and tables, including validation rules and value sets.
- Encode ERD constraints as database checks and unique indexes where possible.
- Consequences:
- Positive: definition metadata can be stored and validated at the database layer.
- Trade-off: adds a new migration that must be extended by later ERD stages.
- Follow-up:
- Add indexer instance tables and import flows.
- Implement seed and stored-procedure logic for definition sync.
Task record
- Motivation:
- Continue the dependency-first ERD rollout with the catalog and validation schema.
- Design notes:
- Enum types are created idempotently via pg_type checks.
- Validation rules are enforced with explicit CHECK constraints and a unique index.
- Test coverage summary:
- No new tests added; migration path is exercised via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting the migration if downstream schemas change.
- Dependency rationale:
- No new dependencies added.
Indexer instance schema and RSS
- Status: Accepted
- Date: 2026-01-25
- Context:
- Indexer instances, routing policies, and RSS schedules are required to configure real indexers and persist their operational state.
- Decision:
- Add a migration that introduces indexer instance tables, routing policy tables, RSS tracking tables, and related enums.
- Enforce ERD constraints (ranges, uniqueness, hash formats) via database checks.
- Consequences:
- Positive: provides the durable schema for indexer configuration, tags, domains, and RSS.
- Trade-off: requires additional migrations for imports, policies, and search flows.
- Follow-up:
- Add import_job tables once search profiles and torznab instances exist.
- Implement stored procedures and seed data for routing and instance management.
Task record
- Motivation:
- Continue ERD implementation with dependency-ready indexer instance tables.
- Design notes:
- Routing policy is introduced to satisfy the FK from indexer_instance.
- Hash columns enforce lowercase hex constraints to match global rules.
- Test coverage summary:
- No new tests added; migration path is exercised via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting the migration if downstream constraints change.
- Dependency rationale:
- No new dependencies added.
Indexer secret schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD requires secret storage and auditable bindings for indexer field values and routing policy parameters.
- Secret linkage must be centralized via secret_binding with revocation/rotation metadata.
- Decision:
- Add secret, secret_binding, and secret_audit_log tables plus supporting enums.
- Enforce binding_name allowlists per bound_table and key_id length checks.
- Consequences:
- Positive: schema supports secure secret storage with auditable bindings.
- Trade-off: follow-on migrations and procedures are required for lifecycle actions.
- Follow-up:
- Implement secret procedures and auditing per ERD.
- Add binding validation in indexer/routing procedures.
Task record
- Motivation:
- Continue ERD implementation with secrets storage and binding schema.
- Design notes:
- secret_binding remains the only linkage, enforced by a bound_table/binding_name check.
- secret_audit_log is append-only to capture create/rotate/revoke/bind/unbind actions.
- Test coverage summary:
- No new tests added; migration path is exercised via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting the migration if ERD constraints change.
- Dependency rationale:
- No new dependencies added.
Indexer search profiles and Torznab schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD requires search profiles to capture user intent and Torznab instances to expose arr-compatible endpoints tied to profiles.
- Import jobs depend on search_profile and torznab_instance references.
- Decision:
- Add schema for search_profile and related allow/block/prefer tables plus torznab_instance.
- Enforce ERD constraints for page sizing, weight ranges, and uniqueness.
- Consequences:
- Positive: enables profile filtering and Torznab endpoint configuration in the schema.
- Trade-off: policy_set linking and import pipeline remain follow-up migrations.
- Follow-up:
- Add search_profile_policy_set once policy_set exists.
- Implement import_job tables and Torznab procedures after policy/schema dependencies.
Task record
- Motivation:
- Continue ERD implementation with search profile and Torznab persistence.
- Design notes:
- Weight overrides allow nullable values with bounded ranges per ERD notes.
- torznab_instance stores hashed API keys only, with soft-delete support.
- Test coverage summary:
- No new tests added; migration path is exercised via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting the migration if ERD constraints change.
- Dependency rationale:
- No new dependencies added.
Indexer import schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD defines import_job and import_indexer_result for Prowlarr migration tracking.
- Import jobs depend on search profiles and Torznab instances.
- Decision:
- Add import_job and import_indexer_result tables with supporting enums.
- Preserve ERD constraints for identifiers, status, and optional error/detail fields.
- Consequences:
- Positive: schema supports import tracking and per-indexer outcomes.
- Trade-off: import procedures and validation remain a follow-up step.
- Follow-up:
- Implement import stored procedures and validation rules per ERD.
- Add indexer-instance linkage rules and dry-run handling in procedures.
Task record
- Motivation:
- Continue ERD implementation with import pipeline persistence.
- Design notes:
- import_indexer_result.indexer_instance_id remains a nullable bigint with no FK, per ERD.
- import_job stores target profile/torznab references for later procedure enforcement.
- Test coverage summary:
- No new tests added; migration path is exercised via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting the migration if ERD constraints change.
- Dependency rationale:
- No new dependencies added.
Indexer rate limit and Cloudflare schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD defines rate limiting policy/state and Cloudflare status tracking for indexers.
- These tables are prerequisites for routing enforcement and job-based cleanup.
- Decision:
- Add rate_limit_policy, indexer_instance_rate_limit, routing_policy_rate_limit, rate_limit_state, and indexer_cf_state tables plus required enums.
- Enforce ERD ranges, uniqueness, and cascade delete for instance/routing children.
- Consequences:
- Positive: schema supports rate limit configuration, token tracking, and CF state.
- Trade-off: stored procedures and scheduled jobs remain follow-up work.
- Follow-up:
- Implement rate_limit and cf_state procedures, including seed defaults and purge jobs.
- Add outbound_request_log integration and derived connectivity profile.
Task record
- Motivation:
- Continue ERD implementation with rate limiting and CF state persistence.
- Design notes:
- rate_limit_state uses per-minute buckets with non-negative token usage.
- indexer_cf_state enforces non-negative backoff and consecutive failure counters.
- Test coverage summary:
- No new tests added; migration path is exercised via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting the migration if ERD constraints change.
- Dependency rationale:
- No new dependencies added.
Indexer policy schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD defines policy sets, rules, and snapshots for search filtering and scoring.
- Search profiles need policy_set linkage for profile-scoped policies.
- Decision:
- Add policy_set, policy_rule, policy_rule_value_set, policy_rule_value_set_item, policy_snapshot, policy_snapshot_rule, and search_profile_policy_set tables.
- Introduce required policy enums and enforce ERD uniqueness and cascade rules.
- Consequences:
- Positive: schema supports policy configuration, snapshot reuse, and profile links.
- Trade-off: stored procedures and snapshot materialization remain follow-up work.
- Follow-up:
- Implement policy procedures, snapshot hashing, and retention jobs per ERD.
- Add search_request tables to wire policy snapshots into runtime queries.
Task record
- Motivation:
- Continue ERD implementation with policy persistence and profile linkage.
- Design notes:
- policy_set created_for_search_request_id is stored without a FK until search_request exists.
- policy_rule_value_set uses shared value_set_type enum without extra restrictions.
- Test coverage summary:
- No new tests added; migration path is exercised via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting the migration if ERD constraints change.
- Dependency rationale:
- No new dependencies added.
Indexer Torznab category schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD requires seeded Torznab categories and mappings to media domains and tracker categories for filtering and Torznab responses.
- Decision:
- Add torznab_category, media_domain_to_torznab_category, and tracker_category_mapping tables with ERD constraints and uniqueness rules.
- Enforce global uniqueness for tracker_category_mapping across null indexer_definition_id via a coalesced unique index.
- Consequences:
- Positive: schema supports Torznab category lookups and tracker mapping overrides.
- Trade-off: seeding and procedures remain follow-up work.
- Follow-up:
- Seed Torznab categories and domain mappings per ERD.
- Implement category mapping stored procedures and indexes.
Task record
- Motivation:
- Continue ERD implementation with Torznab category and mapping persistence.
- Design notes:
- tracker_category and tracker_subcategory enforce non-negative values as specified.
- media_domain mapping allows NULL media_domain_id for unsupported categories.
- Test coverage summary:
- No new tests added; migration path is exercised via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting the migration if ERD constraints change.
- Dependency rationale:
- No new dependencies added.
Indexer connectivity and audit schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD defines connectivity snapshots, health events, and config audit logging.
- These tables are prerequisites for health reporting and policy/action auditing.
- Decision:
- Add indexer_connectivity_profile, indexer_health_event, and config_audit_log tables plus required enums for health events, connectivity status, and audit categories.
- Enforce ERD constraints for success-rate bounds and audit entity references.
- Consequences:
- Positive: schema supports connectivity rollups and durable audit trails.
- Trade-off: rollup jobs and audit-writing procedures remain follow-up work.
- Follow-up:
- Implement connectivity rollup job and health event emission per ERD.
- Wire audit log writes in stored procedures and domain services.
Task record
- Motivation:
- Continue ERD implementation with connectivity and audit persistence.
- Design notes:
- config_audit_log requires either a bigint PK or a public UUID per ERD notes.
- indexer_connectivity_profile enforces error_class NULL for healthy status.
- Test coverage summary:
- No new tests added; migration path is exercised via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting the migration if ERD constraints change.
- Dependency rationale:
- No new dependencies added.
Indexer canonicalization schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- Implement ERD_INDEXERS.md canonicalization tables for deduped torrents and durable sources.
- Enforce hash identity and typed attribute invariants at the schema layer.
- Keep enums and tables aligned with existing migrations and single-tenant scope.
- Decision:
- Add migration 0022_indexer_canonicalization.sql to create canonical tables and enums.
- Apply ERD validation rules for hashes, IDs, typed attributes, and identity strategies.
- Consequences:
- Canonical torrent/source data is stored with enforced identity constraints.
- Downstream search and ingest tables can reference canonical entities safely.
- Follow-up:
- Implement search_request tables and ingestion stored procedures per ERD_INDEXERS.md.
- Add remaining canonical scoring, conflict, and decision tables.
Task record
- Motivation:
- Establish canonical torrent and source storage to unblock search request ingestion.
- Design notes:
- Enforce hash, ID, and typed attribute invariants directly in the schema.
- Keep canonical tables aligned to ERD_INDEXERS.md and dependency order.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0022 if schema issues surface.
- Dependency rationale:
- No new dependencies added.
Indexer search request schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- Implement ERD_INDEXERS.md search request tables, enums, and streaming page state.
- Enforce search/run state invariants and observation typing at the schema layer.
- Decision:
- Add migration 0023_indexer_search_requests.sql to create search request tables and enums.
- Apply ERD validation rules for status transitions, cursors, and observation attributes.
- Consequences:
- Search request storage is ready for ingestion and paging flows.
- Downstream procedures can rely on schema checks for state integrity.
- Follow-up:
- Add canonical scoring, conflict tracking, and job tables per ERD_INDEXERS.md.
- Implement stored procedures for search orchestration and ingestion.
Task record
- Motivation:
- Land the search request schema needed for streaming and run tracking.
- Design notes:
- Enforced status/timestamp and attribute typing checks per ERD_INDEXERS.md.
- Kept enums scoped to current table usage to avoid unused schema items.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0023 if schema issues surface.
- Dependency rationale:
- No new dependencies added.
Indexer scoring schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- Implement ERD_INDEXERS.md scoring and best-source materialization tables.
- Preserve context-specific ordering for search/profile/policy views.
- Decision:
- Add migration 0024_indexer_scoring.sql for scoring and best-source tables.
- Introduce context_key_type enum and enforce score range checks.
- Consequences:
- Canonical sources can be ranked globally and per context.
- Best-source tables are ready for refresh jobs and search paging.
- Follow-up:
- Add conflicts and decision tables plus outbound log and reputation tracking.
- Implement stored procedures that compute scores and refresh best-source rows.
Task record
- Motivation:
- Provide schema support for deterministic source ranking in searches.
- Design notes:
- Enforced score ranges and uniqueness constraints per ERD_INDEXERS.md.
- Context types are stored as a dedicated enum for clarity.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0024 if schema issues surface.
- Dependency rationale:
- No new dependencies added.
Indexer conflict and decision schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- Implement ERD_INDEXERS.md conflict tracking and filter decision tables.
- Preserve auditability of metadata conflicts and policy filtering outcomes.
- Decision:
- Add migration 0025_indexer_conflicts_decisions.sql for conflicts and decisions.
- Introduce enums for conflict and decision types with ERD-aligned values.
- Consequences:
- Conflict resolution workflows can be recorded with an audit trail.
- Search filter decisions can be persisted for transparency and debugging.
- Follow-up:
- Add outbound_request_log, user actions, acquisition, reputation, and job tables.
- Implement stored procedures for conflict resolution and filtering decisions.
Task record
- Motivation:
- Capture durable metadata conflicts and policy decisions per ERD_INDEXERS.md.
- Design notes:
- Kept constraints minimal and aligned with ERD requirements for nullable references.
- Search filter decisions require at least one canonical reference.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0025 if schema issues surface.
- Dependency rationale:
- No new dependencies added.
Indexer user action and acquisition schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- Implement ERD_INDEXERS.md feedback and acquisition tracking tables.
- Persist user actions and download attempts for ranking and reputation signals.
- Decision:
- Add migration 0026_indexer_user_actions.sql for user actions and acquisition attempts.
- Introduce enums for actions, reasons, acquisition status/origin/failure, and client names.
- Consequences:
- User feedback and acquisition events are stored with constrained identifiers.
- Future reputation rollups can rely on acquisition data.
- Follow-up:
- Add outbound_request_log, reputation, and job scheduling tables.
- Implement stored procedures and ingestion paths for acquisitions.
Task record
- Motivation:
- Capture user interactions and download outcomes per ERD_INDEXERS.md.
- Design notes:
- Enforced identifier presence and failure-class rules on acquisition_attempt.
- User action metadata stored as key/value with unique keys per action.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0026 if schema issues surface.
- Dependency rationale:
- No new dependencies added.
Indexer telemetry and reputation schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- Implement ERD_INDEXERS.md telemetry logging and reputation rollups.
- Ensure outbound request invariants are enforced at the schema layer.
- Decision:
- Add migration 0027_indexer_telemetry_reputation.sql for outbound_request_log and source_reputation.
- Introduce enums for request types, outcomes, mitigations, and reputation windows.
- Consequences:
- Connectivity and reputation rollups can rely on consistent telemetry inputs.
- Rate-limited and success/failure invariants are enforced in the database.
- Follow-up:
- Add job scheduling tables and stored procedures for rollups and retention.
- Implement index coverage for telemetry and reputation queries.
Task record
- Motivation:
- Capture outbound request telemetry and reputation rollups per ERD_INDEXERS.md.
- Design notes:
- Enforced outcome/error-class invariants and numeric ranges for rates.
- Added defaults for timestamps to keep writes consistent.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0027 if schema issues surface.
- Dependency rationale:
- No new dependencies added.
Indexer job schedule schema
- Status: Accepted
- Date: 2026-01-26
- Context:
- Implement ERD_INDEXERS.md job scheduling table and enum constraints.
- Align cadence and jitter bounds with runtime scheduler expectations.
- Decision:
- Add migration 0028_indexer_jobs.sql for job_key enum and job_schedule.
- Enforce cadence range and jitter bounds per ERD notes.
- Consequences:
- Scheduler state is stored in a single table with clear invariants.
- Deployment seeding must populate required job rows.
- Follow-up:
- Add deployment seed procedures for job_schedule rows.
- Implement job_claim_next_v1 and job completion updates.
Task record
- Motivation:
- Establish job scheduling primitives required for indexer retention and rollups.
- Design notes:
- cadence_seconds constrained to 30..604800 per ERD.
- jitter_seconds constrained to 0..cadence_seconds.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0028 if scheduler constraints need adjustment.
- Dependency rationale:
- No new dependencies added.
Indexer FK on-delete rules
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD_INDEXERS.md requires cascade deletes from indexer_instance to instance children.
- Some FKs were created without explicit on-delete behavior.
- Decision:
- Add migration 0029_indexer_fk_rules.sql to enforce cascading FKs for indexer_instance child tables.
- Consequences:
- Hard-deleting an indexer_instance will cascade to dependent config and diagnostics rows.
- Soft-delete behavior remains unchanged.
- Follow-up:
- Review remaining FK behaviors as stored procedures are introduced.
Task record
- Motivation:
- Align schema with ERD on-delete rules for indexer_instance children.
- Design notes:
- Replaced default FK constraints with ON DELETE CASCADE on instance child tables.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0029 if cascading rules need adjustment.
- Dependency rationale:
- No new dependencies added.
Indexer seed data and defaults
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD_INDEXERS.md requires seeded trust tiers, media domains, Torznab categories, default rate limits, and job scheduling rows.
- Seed functions must enforce immutability rules for system-owned data.
- Decision:
- Add migration 0030_indexer_seed_data.sql with seed procedures and inserts.
- Seed Torznab categories, media-domain mappings, tracker mappings, rate limits, job schedules, and the system user.
- Consequences:
- Deployments start with required lookup data and system defaults.
- Seeded values are validated for consistency on migration.
- Follow-up:
- Implement deployment_init_v1 and remaining stored procedures.
Task record
- Motivation:
- Provide required seed data and seed procedures for indexer ERD compliance.
- Design notes:
- trust_tier_seed_defaults and media_domain_seed_defaults are idempotent and validate seeded values.
- job_schedule rows use randomized initial next_run_at jitter.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0030 if seed values require adjustment.
- Dependency rationale:
- No new dependencies added.
Indexer query indexes
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD_INDEXERS.md defines a query-path index matrix for search, scoring, and telemetry workflows.
- Many indexes are non-unique and must be added after table creation.
- Decision:
- Add migration 0031_indexer_query_indexes.sql to create the ERD-specified non-unique indexes and partial indexes.
- Consequences:
- Query paths have explicit index coverage for search, scoring, and retention.
- Unique constraints continue to cover duplicate index requirements.
- Follow-up:
- Revisit index coverage when stored procedures and query plans land.
Task record
- Motivation:
- Provide the ERD index matrix required for search and telemetry queries.
- Design notes:
- Skipped indexes already covered by PK/UQ constraints.
- Added partial indexes for sparse hash lookups and job scheduling.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0031 if index choices need revision.
- Dependency rationale:
- No new dependencies added.
Indexer deployment initialization procedure
- Status: Accepted
- Date: 2026-01-26
- Context:
- ERD_INDEXERS.md specifies deployment_init_v1 to bootstrap deployment defaults.
- Initialization must be idempotent and enforce actor verification.
- Decision:
- Add migration 0032_indexer_deployment_init.sql with deployment_init_v1 and stable wrapper deployment_init.
- deployment_init_v1 enforces verified admin/owner actors and seeds defaults.
- Consequences:
- Deployments can be initialized via stored procedure calls.
- System defaults are re-applied safely when missing.
- Follow-up:
- Implement the remaining stored procedures in Phase 5.
Task record
- Motivation:
- Provide the ERD-specified deployment initialization entry point.
- Design notes:
- Procedure is idempotent and reuses existing seed helpers.
- Authorization requires verified owner/admin actors.
- Test coverage summary:
- No new tests added; migrations validated via just ci and ui-e2e.
- Observability updates:
- None in this change.
- Risk & rollback plan:
- Roll back by reverting migration 0032 if procedure behavior needs revision.
- Dependency rationale:
- No new dependencies added.
Indexer app_user stored procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- We need versioned, auditable entry points for app_user creation and maintenance.
- ERD_INDEXERS.md requires normalized email storage, constant error messages, and wrapper procedures without version suffixes.
- app_user has no audit fields, so procedures must be minimal and safe while preserving table invariants.
- Decision:
- Add migration 0033 with app_user_create_v1, app_user_update_v1, and app_user_verify_email_v1 plus stable wrappers.
- Normalize emails in-proc (trim + lowercase), enforce non-empty inputs, and default role to user with is_email_verified=false at creation.
- Use constant error messages with detail codes for invalid or missing inputs.
- Consequences:
- app_user mutations now go through stored procedures with consistent validation.
- Email duplicates are rejected deterministically before insert.
- Additional procedure surface requires maintenance when app_user rules evolve.
- Follow-up:
- Update ERD_INDEXERS_CHECKLIST.md to mark app_user procedures complete.
- Extend coverage when app_user endpoints are implemented.
Indexer tag stored procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Tags are user-created and soft-deleted; procedures must preserve tag_key immutability.
- ERD_INDEXERS.md requires audit logging, lowercase tag keys, and conflict handling when tag_public_id and tag_key are both provided.
- Stored procedures need constant error messages with structured detail codes.
- Decision:
- Add migration 0034 with tag_create_v1, tag_update_v1, and tag_soft_delete_v1 plus stable wrappers.
- Validate tag_key casing, length, and uniqueness on create; tag_key is immutable on update and delete.
- Support tag resolution by public ID and/or key with invalid_tag_reference on conflict.
- Write config_audit_log entries for create, update, and soft-delete actions.
- Consequences:
- Tag mutations are centralized and auditable in the database layer.
- Additional procedure surface area must be kept in sync with future tag rules.
- Follow-up:
- Extend REST handlers to use tag procedures with key/public ID resolution.
- Add API validation tests for invalid_tag_reference and soft-delete behaviors.
Indexer routing policy stored procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Routing policy mutations require role checks, parameter validation, and audit logging.
- ERD_INDEXERS.md specifies parameter constraints per routing mode and secret binding requirements for proxy credentials.
- Procedures must use constant error messages with structured detail codes.
- Decision:
- Add migration 0035 implementing routing_policy_create_v1, routing_policy_set_param_v1, and routing_policy_bind_secret_v1 plus stable wrappers.
- Enforce owner/admin role checks, display_name validation, and unsupported mode rejection.
- Validate parameter types and ranges; restrict param keys to mode-specific allowlists.
- Create verify_tls on policy creation and ensure auth parameter rows exist for proxy modes.
- Bind secrets via secret_binding with secret_audit_log and config_audit_log entries.
- Consequences:
- Routing policy state is validated and auditable at the database layer.
- Proxy credential bindings are centralized with explicit secret audit events.
- Follow-up:
- Implement routing policy API handlers using these procedures.
- Add tests for param validation edge cases and secret binding replacement.
Indexer Cloudflare reset procedure
- Status: Accepted
- Date: 2026-01-26
- Context:
- Operators need a controlled reset path for Cloudflare challenges and cooldowns.
- ERD_INDEXERS.md requires owner/admin authorization, CF state reset, and conditional connectivity profile recovery for quarantined indexers.
- Decision:
- Add migration 0036 with indexer_cf_state_reset_v1 plus a stable wrapper.
- Reset cf_state to clear, wipe CF session/cooldown/backoff metadata, and zero consecutive_failures.
- If connectivity status is quarantined with CF-related error classes, downgrade to degraded and clear error_class to unknown.
- Record a config_audit_log update with change_summary “cf_state reset”.
- Consequences:
- CF recovery can be triggered safely with auditable changes.
- Non-CF connectivity failures are preserved.
- Follow-up:
- Add API handler wiring for CF resets.
- Add tests for quarantined vs non-quarantined connectivity transitions.
Indexer rate limit stored procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Rate limiting requires auditable policy management and a database-backed token bucket.
- ERD_INDEXERS.md mandates bounds enforcement, system policy immutability, and scoped token consumption with minute windows.
- Decision:
- Add migration 0037 implementing rate_limit_policy CRUD, instance/policy mappings, and rate_limit_try_consume_v1 plus stable wrappers.
- Enforce owner/admin authorization, range checks, and in-use protection on delete.
- Implement token bucket updates with row-level locking on rate_limit_state.
- Consequences:
- Rate limit policies and assignments are centralized and auditable.
- Token consumption is safe under concurrent access.
- Follow-up:
- Integrate rate_limit_try_consume_v1 into outbound request logging.
- Add tests for policy deletion conflicts and token bucket edge cases.
Indexer instance stored procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Indexer instances, RSS scheduling, domain/tag assignment, and field value management require validated, auditable mutations at the database layer.
- ERD_INDEXERS.md mandates per-proc authorization, field validation, and audit logging.
- Decision:
- Add migration 0038 implementing indexer_instance and RSS procedures, plus media domain, tag, and field value/secret binding procedures with stable wrappers.
- Enforce owner/admin authorization, definition validation, and strict value checks (type, range, regex, allowed values).
- Record config_audit_log updates for each mutation and secret_audit_log bind entries.
- Consequences:
- Indexer configuration changes are validated and auditable in stored procedures.
- Field validations are enforced consistently against definition rules.
- Follow-up:
- Implement indexer_instance_test_v1 and outbound request logging integration.
- Add API handlers and tests for indexer instance management.
Indexer category mapping procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Category mapping rules must be updated via stored procedures with validation and audit logging.
- ERD_INDEXERS.md specifies media domain and Torznab category checks plus primary mapping enforcement.
- Decision:
- Add migration 0039 implementing tracker_category_mapping and media_domain_to_torznab mapping upsert/delete procedures with stable wrappers.
- Validate upstream_slug resolution, Torznab category IDs, and media domain keys.
- Enforce a single primary mapping per media domain during upsert.
- Record config_audit_log entries for all mutations.
- Consequences:
- Category mapping changes are validated and auditable in the database.
- Primary mapping invariants are enforced within the procedure transaction.
- Follow-up:
- Add API handlers for category mapping management.
- Add tests for primary switch and invalid key handling.
Indexer policy set procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Policy sets and rule toggles must be managed via stored procedures with role authorization and audit logging.
- ERD_INDEXERS.md requires scope-based authorization and sort-order reordering.
- Decision:
- Add migration 0040 implementing policy_set create/update/enable/disable/reorder and policy_rule enable/disable/reorder procedures with stable wrappers.
- Enforce scope-specific authorization, cardinality rules for enabled global/user sets, and profile link requirement on enable.
- Record config_audit_log entries for all mutations.
- Consequences:
- Policy set lifecycle operations are validated and auditable at the DB layer.
- Policy rule toggling and ordering are centralized for consistent behavior.
- Follow-up:
- Implement policy_rule_create_v1 with value set payload handling.
- Add API handlers and tests for policy set and rule operations.
Indexer search profile procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Search profile mutations require scope-aware authorization, media domain resolution, and audit logging.
- ERD_INDEXERS.md specifies default handling, allowlist semantics, and policy-set linkage.
- Decision:
- Add migration 0041 implementing search_profile create/update/default operations plus domain allowlist, policy-set linking, indexer allow/block, and tag allow/block/prefer procedures with stable wrappers.
- Enforce per-scope authorization, media domain key validation, and allow/block conflict checks.
- Record config_audit_log entries for profile and rule updates.
- Consequences:
- Search profile state changes are validated and auditable at the DB layer.
- Allowlist and preference rules are kept consistent with block/allow constraints.
- Follow-up:
- Implement API handlers for search profile management.
- Add tests for default scope switching and allow/block conflicts.
Indexer policy rule creation procedure
- Status: Accepted
- Date: 2026-01-26
- Context:
- We need a stored procedure to create immutable policy rules that enforces ERD_INDEXERS.md invariants, including match-field/operator compatibility and value-set normalization.
- Database mutations must be stored-procedure only, avoid JSON/JSONB, and return structured errors with constant messages.
- Decision:
- Add a composite type for value-set items and a
policy_rule_create_v1procedure that validates rule shape, match values, and value-set contents before insertingpolicy_rulerows. - Provide a stable
policy_rule_createwrapper for versioning consistency.
- Add a composite type for value-set items and a
- Consequences:
- Policy rule creation is validated centrally in the database, preventing inconsistent match-value combinations and enforcing normalization limits.
- Callers must supply only the expected match value type or value-set items; extra fields now fail fast.
- Follow-up:
- Implement application-layer regex compilation validation using the stored
is_case_insensitiveflag. - Add stored-procedure tests that cover rule-type and value-set edge cases once the indexer DB test harness is available.
- Implement application-layer regex compilation validation using the stored
Indexer outbound request log procedure
- Status: Accepted
- Date: 2026-01-26
- Context:
- Outbound request telemetry must be written through stored procedures with strict validation and normalized cursor diagnostics.
- The ERD mandates URL-aware page cursor normalization and hashing to bound storage.
- Decision:
- Add
outbound_request_log_write_v1to validate request invariants, resolve public IDs, normalize page cursor keys, persist outbound request logs, and update run correlation tracking. - Provide a stable
outbound_request_log_writewrapper for versioned usage.
- Add
- Consequences:
- Outbound request samples are consistent across callers and safe for rollups.
- Cursor normalization adds complexity; malformed cursor input now fails fast instead of being stored.
- Follow-up:
- Wire outbound logging from search runs and indexer probes to use the new procedure.
- Add DB-level tests for cursor normalization and rate-limit invariants once the data test harness exists.
Indexer Torznab instance state procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Torznab instances need enable/disable and soft-delete operations with role-based authorization tied to their search profiles.
- Stored procedures must enforce invariants and write audit logs.
- Decision:
- Add
torznab_instance_enable_disable_v1andtorznab_instance_soft_delete_v1with search-profile scoped authorization and audit logging. - Keep create/rotate key procedures separate to accommodate pending secret-key hashing decisions.
- Add
- Consequences:
- Torznab instances can be safely toggled or retired without exposing API key material.
- Create/rotate remain blocked until API key hashing strategy is finalized.
- Follow-up:
- Implement
torznab_instance_create_v1andtorznab_instance_rotate_key_v1once Argon2id hashing is approved for the database layer or moved to the app layer.
- Implement
Indexer conflict resolution procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Source metadata conflicts require operator resolution with strict authorization and audit logging.
- Accepted-incoming resolutions must never overwrite existing durable data.
- Decision:
- Add
source_metadata_conflict_resolve_v1andsource_metadata_conflict_reopen_v1to enforce admin/owner authorization, apply limited backfills, and record audit events. - Limit accepted-incoming updates to safe backfills (source_guid, tracker_name, tracker_category/subcategory) when the durable value is missing.
- Add
- Consequences:
- Conflict resolution is traceable and safe against overwrites.
- Incoming tracker category parsing is validated; malformed inputs are rejected instead of silently stored.
- Follow-up:
- Add test coverage for conflict resolution paths once the data test harness exists.
Indexer job runner procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Background job scheduling requires database-enforced claiming and retention cleanup.
- Retention rules must align with deployment_config thresholds and avoid deleting durable data.
- Decision:
- Add
job_claim_next_v1to enforce lease-based claiming with advisory locks and per-job lease durations. - Add
job_run_retention_purge_v1to purge completed search trees and operational telemetry using retention thresholds.
- Add
- Consequences:
- Job claiming is serialized per job_key and prevents overlapping workers.
- Retention cleanup reduces operational data growth while preserving durable records.
- Follow-up:
- Add per-job completion procedures that advance next_run_at with jitter and clear locks.
- Add test coverage for retention purge edge cases once the data test harness exists.
Indexer search request cancel procedure
- Status: Accepted
- Date: 2026-01-26
- Context:
- Search requests must be cancelable with proper authorization and clean terminal state transitions.
- Runs in queued or running state must be marked canceled without violating status timestamp constraints.
- Decision:
- Add
search_request_cancel_v1to enforce actor authorization, mark the search as canceled, and cancel in-flight runs. - Keep the procedure idempotent when the request is already terminal.
- Add
- Consequences:
- Cancel operations consistently update finished_at/canceled_at and avoid invalid run states.
- Unauthorized callers cannot cancel Torznab-owned searches.
- Follow-up:
- Implement
search_request_create_v1and search run state procedures to complete the search lifecycle.
- Implement
Indexer search run procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Search runs need explicit state transitions and retry backoff rules per ERD_INDEXERS.md.
- Retryable failures and rate-limited deferrals must keep runs queued while enforcing limits.
- Decision:
- Add stored procedures for enqueue, start, finish, fail, and cancel of search indexer runs.
- Implement backoff calculations for retryable errors and rate-limited deferrals inside the database.
- Consequences:
- Run state transitions are validated in one place and aligned with status timestamp constraints.
- Coordinators must pass retry_seq and rate-limit scope to ensure correct backoff.
- Follow-up:
- Implement
search_request_create_v1to seed runs and policy snapshots. - Wire outbound request logging to update run correlation state.
- Implement
Indexer canonical disambiguation rule procedure
- Status: Accepted
- Date: 2026-01-26
- Context:
- Prevent-merge rules must enforce canonical identity normalization and symmetric uniqueness.
- Only admin/owner users may create disambiguation rules.
- Decision:
- Add
canonical_disambiguation_rule_create_v1with normalization, identity validation, and canonical ordering of left/right pairs. - Record creation in
config_audit_logwith a canonical entity type.
- Add
- Consequences:
- Duplicate or reversed rule pairs are rejected before insertion.
- Invalid identity values fail fast and do not pollute rule sets.
- Follow-up:
- Implement canonical merge/recompute procedures that honor prevent-merge rules.
Indexer search request create procedure
- Status: Accepted
- Date: 2026-01-26
- Context:
- Search requests must validate identifiers, torznab modes, and category filters per the ERD.
- Policy snapshots must be reusable with deterministic hashing and rule ordering.
- Decision:
- Add
search_request_create_v1with request validation, policy snapshot materialization, category/domain intersection, and runnable indexer gating. - Return both
search_request_public_idand the request policy set public id for downstream orchestration.
- Add
- Consequences:
- Search requests short-circuit to finished when domain/allowlist constraints or policy allowlists eliminate all runnable indexers.
- Invalid identifier or category combinations fail fast with explicit error codes.
- Follow-up:
- Implement
search_result_ingest_v1and canonical maintenance procedures. - Add SQL harness tests for search_request creation paths and edge cases.
- Implement
Task record
- Motivation:
- Enable search request creation with ERD-compliant validation, policy snapshotting, and deterministic scheduling inputs.
- Design notes:
- Policy snapshots are hashed from ordered scope/rule lists and reused when the hash exists.
- Torznab category handling preserves requested/effective lists and treats 8000 as catch-all.
- Runnable indexers are filtered by profile allow/block rules, domain constraints, and policy allow_indexer_instance(require).
- Test coverage summary:
- Not yet added; requires SQL stored-proc harness coverage for identifier parsing, category filtering, and runnable gating.
- Observability updates:
- None in this change (DB-only procedure).
- Risk & rollback plan:
- Risk: invalid gating logic could short-circuit legitimate searches. Rollback by reverting migration 0050 and re-running migrations.
- Dependency rationale:
- No new dependencies added.
Indexer job runner follow-up procedures
- Status: Accepted
- Date: 2026-01-26
- Context:
- Job runner needs procedures for policy snapshot GC, refcount repair, and rate limit state purge.
- These procedures are used by scheduled jobs and must align with ERD retention rules.
- Decision:
- Add job runner procedures for policy snapshot GC, refcount repair, and rate limit state purge.
- Keep wrappers without version suffix to preserve stable entry points.
- Consequences:
- Policy snapshot rows with ref_count=0 will be purged after 30 days.
- Refcount repair can correct drift between snapshots and active searches.
- Rate limit state rows older than 6 hours are cleaned up.
- Follow-up:
- Implement remaining job runner procedures (RSS poll/backfill, connectivity refresh, reputation rollup).
Task record
- Motivation:
- Close remaining ERD-required job runner procedures that do not depend on external systems.
- Design notes:
- Refcount repair uses search_request counts as source of truth.
- GC window is fixed at 30 days per ERD; rate limit purge uses 6-hour cutoff.
- Test coverage summary:
- Not yet added; requires SQL harness coverage for refcount updates and purge cutoffs.
- Observability updates:
- None in this change (DB-only procedures).
- Risk & rollback plan:
- Risk: accidental over-purge if cutoff logic is wrong. Rollback by reverting migration 0051.
- Dependency rationale:
- No new dependencies added.
Indexer executor handoff stored procedures
- Status: Accepted
- Date: 2026-01-27
- Context:
- External executor work (RSS polling and indexer test probes) must be orchestrated through stored procedures, with clear concurrency control and auditable outcomes.
- The ERD requires separate claim/apply phases so the database remains the single source of truth while network calls run outside the DB.
- Secrets must remain encrypted at rest and only surfaced to the executor via explicit read procedures.
- Decision:
- Add RSS polling claim/apply procedures and indexer test prepare/finalize procedures.
- Provide a secret read procedure for executor access, allowing system callers to pass a NULL actor while still enforcing revocation checks.
- Keep procedure inputs/outputs aligned with ERD contract and use outbound_request_log for telemetry.
- Alternatives considered:
- Single procedure that performs polling/tests and logging inside the DB.
- Executor-side direct table access without stored procedures.
- Consequences:
- Positive outcomes:
- Clear concurrency boundaries for polling/test work with SKIP LOCKED claims.
- Consistent logging and scheduling semantics driven by the ERD.
- Risks or trade-offs:
- Requires executor code to implement the two-phase workflow and handle retries.
- Adds more stored procedure surface area to maintain.
- Positive outcomes:
- Follow-up:
- Implement migrations for RSS poll claim/apply, indexer test prepare/finalize, and secret read procedures.
- Update checklist tracking and verify integration tests once executor wiring lands.
Indexer Tag API Surface
- Status: Accepted
- Date: 2026-01-27
- Context:
- Indexer tag stored procedures exist but there is no HTTP surface or service wiring.
- The API layer needs a DI-friendly facade to keep handlers thin and testable.
- Errors must use constant messages with structured context fields.
- Decision:
- Introduce an indexer facade trait in
revaer-apiand implement it inrevaer-app. - Add
/v1/indexers/tagscreate/update/delete endpoints using stored procedures. - Publish tag DTOs in
revaer-api-modelsand update OpenAPI.
- Introduce an indexer facade trait in
- Consequences:
- API callers can manage indexer tags without direct database access.
- API server construction now requires an indexer facade dependency.
- Tests and wiring must supply a stub indexer implementation.
- Follow-up:
- Extend indexer API coverage for definitions, instances, routing, secrets, and policies.
- Add list/read endpoints once read procedures are defined.
Motivation
Provide a clean, testable HTTP surface for indexer tag management that aligns with the ERD and stored-procedure contract.
Design notes
- The API layer delegates to a narrow
IndexerFacadetrait to keep handlers minimal. - Tag operations pass the system actor UUID while user identity is not yet plumbed.
- Service errors carry error codes and SQLSTATE without interpolating values into messages.
Test coverage summary
- Added handler tests for tag create and error mapping (bad request/not found).
- Existing API tests updated to supply a stub indexer facade.
Observability updates
- Indexer service logs storage/authorization failures with structured fields (
operation,error_code,sqlstate).
Risk & rollback plan
- Risk: new routes expose tag mutations before full RBAC is enforced.
- Rollback: revert the tag handler/routes and facade wiring commits.
Dependency rationale
- No new dependencies added; existing
revaer-api,revaer-app, and data-layer crates are reused.
143 Task: Indexer procedure fixes (RSS apply, base score refresh, normalization)
- Status: Accepted
- Date: 2026-01-27
- Context:
- RSS poll apply failed under outer-join locking and returned non-domain errors.
- Base score refresh queried a non-existent canonical_torrent_id on durable sources.
- Title normalization regex boundaries did not strip resolution tokens consistently.
- Import job status aggregation hit ambiguous column references.
- Factory reset did not re-seed indexer defaults, causing tag operations to fail.
- Decision:
- Patch stored procedures with targeted fixes and add a new migration to apply them.
- Keep Rust wrappers aligned with enum/array casts and session config expectations.
- Extend factory reset to reseed indexer defaults and system actor data.
- Consequences:
- RSS poll apply now locks the subscription row without outer-join errors.
- Base score refresh derives canonical/source pairs from context scores and recent sources.
- Title normalization removes known release tokens reliably.
- Import job status aggregation no longer fails on ambiguity.
- Factory reset restores seed data needed for indexer tag operations.
- Follow-up:
- Re-run full CI and UI E2E gates.
- Monitor RSS apply logs for any unexpected lock contention.
Motivation
Fix indexer data-layer regressions that caused RSS polling to fail before domain errors surfaced, and align stored procedures with the canonical/source relationships defined in ERD_INDEXERS.md.
Design notes
- Reworked
rss_poll_apply_v1to lock only the subscription row (FOR UPDATE OF sub) and left other joins unlocked. - Updated base-score refresh to use durable source recency plus context-score links for canonical
mapping, keeping scoring inputs on
canonical_torrent_source. - Corrected
normalize_title_v1regex boundaries and whitespace patterns using explicit escapes. - Qualified
import_job_get_status_v1result aggregation to avoidstatusambiguity. - Updated RSS apply wrapper casts and test-time secret config to match runtime expectations.
Test coverage summary
just cijust ui-e2e
Observability updates
- None.
Risk & rollback plan
- Risk: base-score refresh may skip canonicals without context-score links.
- Rollback: apply a follow-up migration restoring previous procedure bodies and revert wrapper changes if needed.
Dependency rationale
- No new dependencies.
144 Indexer domain mapping and DI boundaries
- Status: Accepted
- Date: 2026-01-27
- Context:
- Indexer work spans stored procedures, API surfaces, UI/CLI usage, and background jobs.
- ERD_INDEXERS.md requires clear domain boundaries and injected dependencies.
- Testability and stored-proc-only data access must stay consistent across crates.
- Decision:
- Map indexer domains to existing crates and define DI seams for each domain service.
- Versioned stored procedures use
_v1suffixes with stable wrapper functions without version suffixes.
- Consequences:
- Clear ownership reduces cross-crate coupling and supports isolated testing.
- API/UI/CLI can share a single facade surface without leaking database details.
- Procedure evolution can continue without breaking callers by updating wrappers.
- Follow-up:
- Implement per-domain facades in
revaer-apiand wire concrete implementations inrevaer-app. - Add tests per facade and for stored-proc wrappers to enforce error-style consistency.
- Implement per-domain facades in
Domain-to-crate mapping
revaer-data:- Stored-proc wrappers and result mapping for indexer domains under
crates/revaer-data/src/indexers/*. - Error types scoped to data access with constant messages and structured context.
- Stored-proc wrappers and result mapping for indexer domains under
revaer-api:- HTTP handlers under
crates/revaer-api/src/http/handlers/indexers/*. - Domain facades and traits under
crates/revaer-api/src/app/indexers/*(API-safe DTOs only).
- HTTP handlers under
revaer-app:- Bootstrap wiring in
crates/revaer-app/src/bootstrap.rsfor concrete data-layer implementations.
- Bootstrap wiring in
revaer-cli:- CLI commands call API endpoints only; no direct data access.
revaer-ui:- UI uses
services/*and feature slices; no direct data access.
- UI uses
revaer-events/revaer-telemetry:- Event publication and metrics for indexer operations at the API boundary.
DI boundaries (facade surface)
Expose API-facing traits in revaer-api::app::indexers and inject concrete implementations from
revaer-app:
IndexerDefinitionsService: definitions catalog and field metadata.IndexerInstancesService: create/update instances, RSS settings, field values, tag/media-domain binds.RoutingPolicyService: create/update policies, params, and secrets.SecretsService: create/rotate/revoke/read secrets and bindings.TagsService: create/update/delete tags.SearchProfilesService: profiles, trust tiers, domain/tag filters, and policy-set wiring.PoliciesService: policy sets/rules management and snapshot refresh hooks.TorznabService: torznab instance lifecycle and category mappings.ImportsService: import job lifecycle and status reporting.JobsService: job claim/run entry points for indexer background jobs.CanonicalizationService: canonical maintenance and disambiguation rules.ReputationService: connectivity and reputation rollups.
All facades return Result<T, E> with constant error messages and structured context fields.
No facade constructs concrete dependencies; all implementations are injected from bootstrap.
Motivation
Document and lock the indexer architecture mapping needed to implement the ERD without leaking database details or violating dependency-injection rules.
Design notes
- Reuse existing crates/modules; avoid introducing new crates until feature growth demands it.
- Keep stored-proc wrappers in
revaer-dataand expose only API-safe DTOs at the HTTP boundary.
Test coverage summary
just cijust ui-e2e
Observability updates
- None.
Risk & rollback plan
- Risk: documentation drift if code moves without updating this ADR.
- Rollback: revert this ADR and restore checklist items to unchecked.
Dependency rationale
- No new dependencies.
145 Indexer stored-proc test harness
- Status: Accepted
- Date: 2026-01-27
- Context:
- Indexer stored-proc wrappers have extensive integration tests with repeated DB setup.
- ERD_INDEXERS_CHECKLIST requires a transactional, seeded harness and deterministic clocks.
- We need consistent setup without introducing new dependencies.
- Decision:
- Add a shared
IndexerTestDbhelper inrevaer-data::indexers(test-only). - Centralize Postgres startup, migrations, and UTC session configuration.
- Capture a deterministic
now()value after migrations for tests that need time inputs.
- Add a shared
- Consequences:
- Tests share a single harness, reducing setup drift and boilerplate.
- Deterministic timestamps are available without leaking production code changes.
- Test-only helper code is now part of the indexer module.
- Follow-up:
- Use
IndexerTestDb::now()in additional tests that depend on timestamps. - Add explicit transaction helpers if we need per-test rollbacks beyond isolated DBs.
- Use
Motivation
Indexer stored procedures are covered by integration tests that previously duplicated database startup and migration logic. The checklist calls for a consistent harness with deterministic clocks and seeded data. A shared helper keeps the setup aligned and makes it easier to maintain.
Design notes
- Tests use
IndexerTestDbto keep the disposable database alive for the test duration. - The helper configures session time zone to UTC and captures a single
now()value after migrations for deterministic timestamp inputs. - No production code paths or runtime behavior are changed.
Test coverage summary
just cijust ui-e2e
Observability updates
- None.
Risk & rollback plan
- Risk: tests may rely on helper behavior and need updates if the harness evolves.
- Rollback: revert this ADR and restore per-test setup helpers.
Dependency rationale
- No new dependencies.
Indexer error-code taxonomy
- Status: Accepted
- Date: 2026-01-27
- Context:
- Stored procedures already raise exceptions with
DETAILcodes, but there is no single, documented taxonomy for the values or how the API must surface them. - AGENTS.md requires constant error messages, structured context fields, and stable error mapping for clients and tests.
- Stored procedures already raise exceptions with
- Decision:
- Define a shared error-code taxonomy for indexer stored procedures and API responses:
- Stored procedures:
- Domain/validation/authorization failures raise
ERRCODE = 'P0001'with a constantMESSAGEof the formFailed to <operation>andDETAILset to the error code. - Infrastructure/constraint errors use native SQLSTATE codes (e.g.,
23505,23503) and do not override the Postgres message. DETAILvalues are lower_snake_case, <= 64 chars, and never embed user data.
- Domain/validation/authorization failures raise
- API responses:
- Use RFC9457 Problem responses with constant
title/detailstrings. - Include
error_code(from the DBDETAIL) andsqlstateascontextfields when present, never interpolated into human-readable messages. - Validation errors prefer
invalid_paramswith constant messages; contextual inputs travel incontextfields.
- Use RFC9457 Problem responses with constant
- Stored procedures:
- Adopt the following canonical error-code groups (examples are non-exhaustive):
- Missing/empty/length:
*_missing,*_empty,*_too_long,*_too_short. - Format/normalization:
*_not_lowercase,*_invalid_format,*_invalid. - Lookup/identity:
*_not_found,*_reference_missing,unknown_key. - Conflicts/state:
*_already_exists,*_deleted,*_in_use,*_disabled. - Unsupported/blocked:
unsupported_*,*_disallowed,*_insufficient. - Auth/actor:
actor_missing,actor_not_found,actor_unauthorized.
- Missing/empty/length:
- Define a shared error-code taxonomy for indexer stored procedures and API responses:
- Consequences:
- Clients can reliably map failures by
error_codewhile keeping UI text constant and localizable. - Tests can assert stable
error_code/sqlstatevalues without parsing messages.
- Clients can reliably map failures by
- Follow-up:
- Enforce taxonomy compliance in new stored procedures and API handlers.
- Extend integration tests to cover new error codes as endpoints are added.
Task record
- Motivation:
- Provide a single, stable taxonomy for indexer errors so DB, API, CLI, and UI agree on machine-readable codes while keeping messages constant.
- Design notes:
- DB procs keep
MESSAGEconstant and carry machine codes inDETAIL. - API handlers surface
error_code/sqlstateviaProblemDetails.contextand keepdetailtext constant for localization.
- DB procs keep
- Test coverage summary:
- Documentation-only change; no new tests added.
- Observability updates:
- Errors continue to log with structured fields (
error_code,sqlstate) at the origin.
- Errors continue to log with structured fields (
- Risk & rollback plan:
- Risk: taxonomy drift if future procs introduce ad-hoc codes. Rollback by reverting this ADR and aligning new procedures to existing ad-hoc behavior.
- Dependency rationale:
- No new dependencies added.
Indexer v1 scope enforcement
- Status: Accepted
- Date: 2026-01-27
- Context:
- The indexer ERD defines explicit v1 scope and non-goals that must guide architecture and route planning.
- The implementation plan needs a clear guardrail so API/UI work does not drift into media management or other out-of-scope features.
- Decision:
- Confirm that indexer v1 architecture and route planning are constrained to the ERD scope:
- Indexers, search, policies, secrets, routing, rate limiting, Torznab compatibility, and reliability/telemetry flows are in scope.
- Media management features remain out of scope for v1 and require a future ADR before any routes or services are added.
- Document the scope rule as a checklist gate and require any scope expansion to add a new ADR and update ERD_INDEXERS.md.
- Confirm that indexer v1 architecture and route planning are constrained to the ERD scope:
- Consequences:
- Implementation stays aligned with the ERD and avoids premature media management APIs.
- Route planning focuses on indexer and search workflows with explicit boundaries.
- Follow-up:
- Keep ERD_INDEXERS_CHECKLIST.md in sync with any scope changes.
- Add ADRs for any new surfaces that expand beyond v1 scope.
Task record
- Motivation:
- Prevent scope creep and ensure indexer architecture and route planning remain consistent with v1 goals and non-goals.
- Design notes:
- Architecture and routes are limited to indexer/search/proxy/rate-limit/Torznab needs.
- Media management endpoints are intentionally excluded in v1.
- Test coverage summary:
- Documentation-only change; no new tests added.
- Observability updates:
- No changes; existing telemetry plans remain in effect.
- Risk & rollback plan:
- Risk: future work bypasses the scope gate. Rollback by reasserting scope in a follow-up ADR and pruning out-of-scope routes.
- Dependency rationale:
- No new dependencies added.
Indexer schema JSON ban verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD bans JSON/JSONB storage for indexer data.
- We need to confirm migrations comply before expanding API and service layers.
- Decision:
- Verify all indexer migrations avoid JSON/JSONB column types and document the result.
- Treat JSON/JSONB usage as a hard failure in schema reviews; any exception requires a future ADR and ERD update.
- Consequences:
- The schema remains normalized and avoids opaque JSON storage.
- Future migrations must continue to use normalized tables and enums.
- Follow-up:
- Re-check JSON/JSONB usage whenever new migrations are added.
Task record
- Motivation:
- Ensure the schema adheres to the ERD prohibition on JSON/JSONB types.
- Design notes:
- Reviewed the migration set and confirmed no JSON/JSONB column types are present.
- Test coverage summary:
- Documentation-only confirmation; no new tests added.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: future migrations introduce JSON types. Rollback by reverting offending migration and normalizing the data model.
- Dependency rationale:
- No new dependencies added.
Indexer public-id and bigint identity verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD mandates bigint identity primary keys and UUID public IDs for specific indexer tables, while indexer_definition must not expose a public ID in v1.
- API and service layers depend on stable public identifiers without leaking internal bigint keys.
- Decision:
- Verify the following tables use
BIGINT GENERATED ALWAYS AS IDENTITYprimary keys and enforce UUID public IDs (unique) where required:- app_user
- indexer_instance
- routing_policy
- policy_set
- policy_rule
- search_profile
- search_request
- canonical_torrent
- canonical_torrent_source
- torznab_instance
- rate_limit_policy
- secret
- Confirm
indexer_definitionhas no public ID in v1.
- Verify the following tables use
- Consequences:
- Indexer APIs can safely use UUIDs/keys without exposing internal bigint IDs.
- Table definitions align with ERD identity rules, reducing migration drift.
- Follow-up:
- Re-verify new tables against this rule before adding API or UI surfaces.
Task record
- Motivation:
- Validate ERD identity/public ID rules before expanding indexer-facing APIs.
- Design notes:
- Verified table definitions in migrations 0012, 0014, 0015, 0016, 0018, 0019, 0022, and 0023 include bigint identity PKs and required public IDs.
- Verified
indexer_definitionin 0013 contains no public ID column.
- Test coverage summary:
- Documentation-only verification; no new tests added.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: future migrations add missing or redundant public IDs. Rollback by reverting the offending migration and revalidating against the ERD.
- Dependency rationale:
- No new dependencies added.
Indexer soft-delete coverage verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD requires soft-delete support via
deleted_aton specific indexer tables. - We need confirmation before expanding API and service layers that assume soft deletes.
- The ERD requires soft-delete support via
- Decision:
- Verify
deleted_atexists on all required tables:- indexer_instance
- routing_policy
- policy_set
- search_profile
- tag
- torznab_instance
- rate_limit_policy
- Verify
- Consequences:
- Soft-delete semantics are available for indexer configuration entities.
- API handlers can depend on
deleted_atfor active filtering.
- Follow-up:
- Keep soft-delete requirements in mind for any new indexer-facing tables.
Task record
- Motivation:
- Confirm the ERD soft-delete rule is implemented consistently in migrations.
- Design notes:
- Verified
deleted_atcolumns in migrations 0012, 0014, 0016, 0018, and 0019.
- Verified
- Test coverage summary:
- Documentation-only verification; no new tests added.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: future migrations omit
deleted_at. Roll back by correcting the migration and re-running schema checks.
- Risk: future migrations omit
- Dependency rationale:
- No new dependencies added.
Indexer audit fields and timestamp defaults verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD requires audit fields (created/updated/changed by) to be non-null where used
and mandates
created_at/updated_atdefaults when those columns exist. - We need to confirm the indexer schema matches these requirements before expanding APIs.
- The ERD requires audit fields (created/updated/changed by) to be non-null where used
and mandates
- Decision:
- Verify audit fields are present and non-null where required, and timestamp defaults are
set on indexer tables that include
created_at/updated_at. - Confirmed examples (migrations 0012–0023):
- Audit fields:
tag,routing_policy,indexer_instance,search_profile,policy_set,policy_ruleincludecreated_by_user_id/updated_by_user_idas NOT NULL.indexer_instance_field_valueincludesupdated_by_user_idas NOT NULL.canonical_disambiguation_ruleincludescreated_by_user_idas NOT NULL.config_audit_logincludeschanged_by_user_idas NOT NULL.
- Timestamp defaults:
- Tables with
created_at/updated_atcolumns define them as NOT NULL DEFAULT now(), includingtag,routing_policy,indexer_instance,search_profile,policy_set,policy_rule,canonical_torrent,canonical_torrent_source,torznab_instance, andrate_limit_policy.
- Tables with
- Audit fields:
- Verify audit fields are present and non-null where required, and timestamp defaults are
set on indexer tables that include
- Consequences:
- Schema audit columns are enforced consistently and can be trusted by API and UI layers.
- Timestamp defaults align with ERD expectations for lifecycle tracking.
- Follow-up:
- Re-verify audit/timestamp columns for any new indexer migrations.
Task record
- Motivation:
- Establish that audit fields and lifecycle timestamps are enforced per the ERD.
- Design notes:
- Verified audit field presence and NOT NULL constraints in migrations 0012, 0014, 0016, 0019, 0021, and 0022.
- Verified created_at/updated_at defaults in the same migration set.
- Test coverage summary:
- Documentation-only verification; no new tests added.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: future tables omit audit fields or defaults. Roll back by correcting the schema migration and revalidating against the ERD.
- Dependency rationale:
- No new dependencies added.
Indexer API boundary public-id verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD requires API boundaries to accept only UUID public IDs or keys and never expose internal bigint identities.
- We need to confirm the current indexer API surface and stored-procedure entry points follow this rule.
- Decision:
- Verified indexer API DTOs and handlers accept only UUIDs or keys, never internal bigint IDs.
- Confirmed API DTOs for tags use
Uuidfortag_public_idand string keys (TagCreateRequest,TagUpdateRequest,TagDeleteRequest) and that the indexer facade methods take UUID actor identities plus UUID/tag key inputs. - Confirmed indexer stored-procedure wrappers (
deployment_init,tag_*,routing_policy_*,rate_limit_*,search_*,secret_*) accept UUID public IDs and key strings exclusively.
- Consequences:
- API and stored-procedure boundaries comply with the ERD, keeping internal bigint identities private to the database layer.
- Client integrations can rely on UUIDs/keys without leaking internal IDs.
- Follow-up:
- Re-verify new indexer endpoints and procedures before expanding the API.
Task record
- Motivation:
- Validate API/public boundaries adhere to ERD public-id exposure rules.
- Design notes:
- Checked tag API DTOs in
revaer-api-modelsand the indexer facade/handlers inrevaer-apifor UUID-only identifiers. - Reviewed wrapper procs in migration
0064_indexer_wrapper_procs.sql.
- Checked tag API DTOs in
- Test coverage summary:
- Documentation-only verification; no new tests added.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: future endpoints accidentally expose internal IDs. Roll back by reverting the API shape and re-validating with stored-proc interfaces.
- Dependency rationale:
- No new dependencies added.
Indexer external reference public-id verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD requires external references (policy rules, disambiguation rules) to store UUID public IDs or keys instead of internal bigint identities.
- We need to confirm the schema matches that rule before expanding policy and canonicalization workflows.
- Decision:
- Verified policy and disambiguation tables store UUIDs or keys only for external references.
- Policy rules capture external identifiers via
match_value_uuidor lowercasematch_value_textandpolicy_rule_value_set_item.value_uuid. - Policy snapshots store
policy_rule_public_idUUIDs. - Canonical disambiguation rules store UUIDs only when referencing
canonical_public_id, otherwise text hashes.
- Consequences:
- External reference data can be safely exposed in APIs without leaking internal bigint IDs.
- Internal joins still rely on bigint PKs, preserving database integrity.
- Follow-up:
- Re-verify future policy/disambiguation changes keep UUID/key-only references.
Task record
- Motivation:
- Validate that external references never store internal bigint IDs.
- Design notes:
- Reviewed
policy_ruleandpolicy_rule_value_set_itemcolumns in0019_policy_sets.sql. - Reviewed
canonical_disambiguation_rulein0022_indexer_canonicalization.sql. - Reviewed
policy_snapshot_ruleusage ofpolicy_rule_public_id.
- Reviewed
- Test coverage summary:
- Documentation-only verification; no new tests added.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: future migrations introduce bigint references in external-facing columns. Roll back by reverting schema changes and updating procedures.
- Dependency rationale:
- No new dependencies added.
Indexer system sentinel usage verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD requires system actions to use a sentinel user identifier
(
user_id = 0or the all-zero UUID) instead of NULL. - We need to confirm the indexer schema and stored procedures follow this rule before expanding automation workflows.
- The ERD requires system actions to use a sentinel user identifier
(
- Decision:
- Verified the system sentinel user is seeded with
user_id = 0and the all-zero UUID public ID in deployment seed and initialization migrations. - Confirmed stored procedures fall back to
user_id = 0for system-driven actions (e.g., search request creation). - Confirmed data-layer tests use the all-zero UUID sentinel when invoking indexer procedures.
- Verified the system sentinel user is seeded with
- Consequences:
- System actions can be recorded without NULL audit fields, aligning with the ERD audit requirements.
- Downstream API and UI layers can safely represent system activity with the sentinel UUID.
- Follow-up:
- Re-verify new procedures or automation jobs continue to use the sentinel user IDs instead of NULL.
Task record
- Motivation:
- Validate that system actions always carry the sentinel user identifier.
- Design notes:
- Seed/init migrations
0030_indexer_seed_data.sql,0032_indexer_deployment_init.sql, and0067_factory_reset_seed_defaults.sqlinsertuser_id = 0with the all-zero UUID. search_request_create_v1defaults tosystem_user_id := 0when the actor is absent.- Data access tests (e.g.,
crates/revaer-data/src/indexers/deployment.rs) exercise stored procedures with the sentinel UUID.
- Seed/init migrations
- Test coverage summary:
- Documentation-only verification; existing tests cover system-user usage.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: new procs may accept NULL actors. Roll back by enforcing sentinel defaults and updating callers/tests.
- Dependency rationale:
- No new dependencies added.
Indexer text caps and lowercase key enforcement verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD mandates text column caps and lowercase enforcement for key/slug fields (varchar(128) keys, varchar(256) names, varchar(2048) URLs, varchar(512) regex/text patterns, varchar(1024) notes).
- We need to confirm the schema enforces these caps and lowercase checks before expanding APIs and UI validation.
- Decision:
- Verified key/slug fields use
VARCHAR(128)with lowercase CHECKs where required (e.g.,tag.tag_key,indexer_definition.upstream_slug,indexer_definition_field.name). - Verified display names are capped at
VARCHAR(256)across core catalog tables (e.g.,tag.display_name,indexer_definition.display_name,search_profile.display_name,policy_set.display_name). - Verified URL fields use
VARCHAR(2048)(e.g.,search_request_source_observationdetails_url,download_url,magnet_uri). - Verified regex/pattern text caps at
VARCHAR(512)and notes/detail caps atVARCHAR(1024)(e.g.,indexer_definition_field_validation.text_value,search_request.query_text,search_request.error_detail,policy_rule.rationale).
- Verified key/slug fields use
- Consequences:
- Schema enforces ERD text caps and lowercase rules, preventing oversized or improperly cased keys from entering the database.
- API validation can align with these constraints without risking truncation.
- Follow-up:
- Re-verify any new text columns added to the indexer schema.
Task record
- Motivation:
- Confirm text caps and lowercase key enforcement align with ERD rules.
- Design notes:
- Reviewed
0012_indexer_core.sql(tag_keylower-case CHECK, display name sizes). - Reviewed
0013_indexer_definitions.sql(slug/name lowercase CHECKs and text caps). - Reviewed
0023_indexer_search_requests.sqlfor URL/text/detail caps. - Reviewed
0019_policy_sets.sqlfor rationale/text caps.
- Reviewed
- Test coverage summary:
- Documentation-only verification; no new tests added.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: new columns exceed caps or miss lowercase checks. Roll back by adjusting migrations and re-validating constraints.
- Dependency rationale:
- No new dependencies added.
Indexer normalized column verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD requires normalized columns (e.g.,
email_normalized,*_norm) to support consistent lookups and lowercased comparisons. - We need to confirm the schema includes the specified normalized fields.
- The ERD requires normalized columns (e.g.,
- Decision:
- Verified
app_user.email_normalizedis present and enforced with a lowercase/trim CHECK constraint. - Verified generated normalized columns exist where specified in definition
metadata (
indexer_definition_field_validation.text_value_normanddepends_on_value_plain_norm). - Verified normalized identifier storage in search requests via
search_request_identifier.id_value_normalized.
- Verified
- Consequences:
- Normalized fields are persisted in the schema for reliable matching and validation logic.
- Stored procedures can rely on normalized columns without ad-hoc transforms.
- Follow-up:
- Ensure any new ERD-defined normalized fields are added with the same constraints.
Task record
- Motivation:
- Confirm normalized columns exist for ERD-specified fields.
- Design notes:
- Reviewed
0012_indexer_core.sqlforemail_normalized. - Reviewed
0013_indexer_definitions.sqlfor generated*_normcolumns. - Reviewed
0023_indexer_search_requests.sqlforid_value_normalized.
- Reviewed
- Test coverage summary:
- Documentation-only verification; no new tests added.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: missing normalized columns can break lookup consistency. Roll back by adding the columns in migrations and updating procedures.
- Dependency rationale:
- No new dependencies added.
Indexer hash identity rules verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD defines hash identity rules for infohash v1/v2, magnet hashes, title normalization, and title-size fallback hashing.
- We need to confirm the schema and ingest procedures enforce these rules.
- Decision:
- Verified canonical tables enforce hash shapes and lowercase normalization:
canonical_torrentandcanonical_torrent_sourcevalidate infohash and magnet hashes, plus enforce lowercasetitle_normalized. - Verified ingest procedures implement ERD hash derivations:
normalize_title_v1,derive_magnet_hash_v1, andcompute_title_size_hash_v1inindexer_search_result_ingest_proc.sqlimplement normalization, magnet hash derivation, and title-size hashing. - Verified identity strategy selection uses infohash v2, infohash v1, magnet hash, or title-size fallback per ERD.
- Verified canonical tables enforce hash shapes and lowercase normalization:
- Consequences:
- Hash identity rules are enforced consistently at the DB layer and in ingest logic.
- Canonicalization can reliably deduplicate sources without depending on caller behavior.
- Follow-up:
- Re-verify if hash derivation logic changes or new identity strategies are added.
Task record
- Motivation:
- Confirm ERD hash identity rules are implemented in schema and procedures.
- Design notes:
- Reviewed
0022_indexer_canonicalization.sqlfor hash constraints and identity strategy checks. - Reviewed
0052_indexer_search_result_ingest_proc.sqlfor normalization and hash derivation functions.
- Reviewed
- Test coverage summary:
- Documentation-only verification; existing ingest tests cover hashing paths.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: regressions in hash derivation cause identity splits. Roll back by reverting procedure changes and revalidating constraints.
- Dependency rationale:
- No new dependencies added.
Indexer secret binding linkage verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD requires secrets to be linked only through
secret_bindingand forbids inlinesecret_idcolumns on other tables. - We need to confirm the schema follows this rule before extending secret usage in routing and indexer configs.
- The ERD requires secrets to be linked only through
- Decision:
- Verified
secretandsecret_bindingare the only tables owningsecret_id, with bindings keyed by(bound_table, bound_id, binding_name). - Confirmed other tables (e.g.,
indexer_instance_field_value,routing_policy_parameter) store no inlinesecret_idcolumns and rely onsecret_bindingfor secret linkage.
- Verified
- Consequences:
- Secret linkage is centralized and auditable via
secret_bindingandsecret_audit_log. - Schema aligns with ERD and avoids leaking secret references into unrelated tables.
- Secret linkage is centralized and auditable via
- Follow-up:
- Re-verify any new tables that require secret access ensure bindings are used.
Task record
- Motivation:
- Validate secrets are linked only through
secret_binding.
- Validate secrets are linked only through
- Design notes:
- Reviewed
0015_indexer_secrets.sqlfor secret/secret_binding tables and constraints. - Searched migrations for
secret_idto confirm no inline secret references outside the secret tables.
- Reviewed
- Test coverage summary:
- Documentation-only verification; no new tests added.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: future tables add direct
secret_idcolumns. Roll back by removing inline references and migrating tosecret_binding.
- Risk: future tables add direct
- Dependency rationale:
- No new dependencies added.
Indexer single-tenant scope verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- The ERD specifies a single-tenant deployment with no tenant scoping tables or tenant_id columns.
- We need to confirm the schema has no tenant scoping artifacts.
- Decision:
- Verified indexer migrations contain no tenant/organization scoping columns or tables.
- Confirmed global catalog tables (e.g.,
trust_tier,media_domain,indexer_definition) are deployment-wide without tenant keys.
- Consequences:
- Database schema aligns with the ERD’s single-tenant scope assumptions.
- Application layers can treat configuration and catalog data as global.
- Follow-up:
- Re-verify if multi-tenant support is introduced in later phases.
Task record
- Motivation:
- Validate that the indexer schema remains single-tenant as required.
- Design notes:
- Searched migrations for tenant/organization identifiers and found none.
- Verified catalog tables are global with no scoping columns.
- Test coverage summary:
- Documentation-only verification; no new tests added.
- Observability updates:
- None.
- Risk & rollback plan:
- Risk: accidental tenant columns creep into schema. Roll back by removing tenant fields and updating stored procedures.
- Dependency rationale:
- No new dependencies added.
Indexer table/constraint alignment verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- ERD_INDEXERS.md requires all indexer tables, columns, defaults, CHECKs, UQ, and FK constraints to match the specification.
- Verification found the policy_set.created_for_search_request_id FK missing after search_request was introduced.
- Auto-created request policy sets must be purged with their search_request per ERD notes.
- Decision:
- Add a migration to enforce the policy_set.created_for_search_request_id FK with ON DELETE CASCADE to align with the ERD.
- Record the table/column/FK parity verification against ERD tables and migrations.
- Consequences:
- Positive: referential integrity matches ERD and search retention cascades to auto-created policy sets.
- Risk: existing orphaned policy_set rows would block the migration.
- Follow-up:
- Continue validating per-table Notes invariants and add tests where appropriate.
Task record
- Motivation:
- Close the remaining schema gap so all ERD indexer tables and FKs match the spec.
- Design notes:
- Added FK policy_set.created_for_search_request_id -> search_request.search_request_id with ON DELETE CASCADE in migration 0068.
- Verified ERD table list, column coverage, and FK presence against migrations.
- Test coverage summary:
- just ci
- just ui-e2e
- Observability updates:
- None.
- Risk & rollback plan:
- If the FK fails due to orphaned policy_set rows, drop the constraint and backfill or null invalid references before reapplying.
- Dependency rationale:
- No new dependencies.
Indexer per-table Notes verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- ERD_INDEXERS.md defines per-table Notes with validation rules, computed fields, and invariants that must be enforced in the schema or stored procedures.
- The Phase 2 checklist requires verifying these notes against migrations/procs.
- Decision:
- Verified schema-level invariants (generated columns, one-of constraints, ranges, and lowercase checks) across indexer tables and attribute tables.
- Verified procedure-level enforcement for tag immutability, policy set cardinality and linkage rules, policy rule validation, search request validation, and canonical disambiguation ordering.
- Consequences:
- Positive: DB constraints and stored procedures align with ERD Notes for validation and computed-field invariants.
- Risk: runtime behaviors described in Notes (e.g., Torznab endpoints, import runner mapping) remain tracked in later phases and are not part of this schema validation.
- Follow-up:
- Continue Phase 5–12 items for runtime behaviors and API surfaces.
Task record
- Motivation:
- Close the Phase 2 requirement to apply per-table Notes invariants in schema/procs.
- Design notes:
- Schema constraints verified in migrations:
crates/revaer-data/migrations/0012_indexer_core.sqlcrates/revaer-data/migrations/0013_indexer_definitions.sqlcrates/revaer-data/migrations/0014_indexer_instances.sqlcrates/revaer-data/migrations/0016_search_profiles_torznab.sqlcrates/revaer-data/migrations/0019_policy_sets.sqlcrates/revaer-data/migrations/0021_connectivity_audit.sqlcrates/revaer-data/migrations/0022_indexer_canonicalization.sqlcrates/revaer-data/migrations/0023_indexer_search_requests.sqlcrates/revaer-data/migrations/0025_indexer_conflicts_decisions.sqlcrates/revaer-data/migrations/0026_indexer_user_actions.sqlcrates/revaer-data/migrations/0027_indexer_telemetry_reputation.sql
- Stored-procedure validation coverage verified in:
crates/revaer-data/migrations/0034_indexer_tag_procs.sqlcrates/revaer-data/migrations/0040_indexer_policy_set_procs.sqlcrates/revaer-data/migrations/0041_indexer_search_profile_procs.sqlcrates/revaer-data/migrations/0042_indexer_policy_rule_create_proc.sqlcrates/revaer-data/migrations/0049_indexer_canonical_disambiguation_rule_proc.sqlcrates/revaer-data/migrations/0050_indexer_search_request_create_proc.sqlcrates/revaer-data/migrations/0052_indexer_search_result_ingest_proc.sql
- Schema constraints verified in migrations:
- Test coverage summary:
- just ci
- just ui-e2e
- Observability updates:
- None.
- Risk & rollback plan:
- If a validation rule is found missing, add a follow-up migration or proc fix and revert this ADR/checklist entry.
- Dependency rationale:
- No new dependencies.
Indexer proc error-code alignment for key lookups
- Status: Accepted
- Date: 2026-01-27
- Context:
- Motivation: ERD requires key-based lookups (trust_tier/media_domain/tag) to raise invalid_request with error_code=unknown_key; several stored procs still emitted *_not_found for key misses.
- Constraints: keep error messages constant, preserve public-id not-found codes, and avoid changing schema or adding dependencies.
- Decision:
- Update stored procedures to emit error_code=unknown_key for key-based misses while keeping *_not_found for public-id lookups.
- Map unknown_key to TagServiceErrorKind::NotFound in the app service layer.
- Verify existing role-based authorization checks and Torznab/system NULL-actor handling; no structural changes required.
- Alternatives considered: introduce new error enums per proc or map unknown_key to Invalid; rejected to keep ERD-mandated codes and existing API semantics.
- Consequences:
- Positive: consistent error-code taxonomy, ERD compliance, clearer API behavior for key lookups.
- Risks/trade-offs: requires a function replacement migration; rollback requires reverting that migration if unexpected client behavior occurs.
- Follow-up:
- Test coverage summary:
just ciandjust ui-e2epassed (npm audit still reports 2 moderate vulnerabilities in the UI test workspace). - Observability: no new spans/metrics required (error surfaces unchanged).
- Risk & rollback plan: revert migration
0069_indexer_proc_error_codes.sqland the tag error mapping change if clients rely on previous error_code strings. - Dependency rationale: no new dependencies added; std/SQL only.
- Test coverage summary:
Indexer error enums and normalization helpers verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- Motivation: Phase 6 requires per-crate error enums with constant messages and context fields, plus normalization helpers for hashing and magnet/title inputs.
- Constraints: preserve existing stored-procedure boundaries and avoid new dependencies.
- Decision:
- Verified error enums and constant-message patterns for indexer paths across
revaer-data(DataError),revaer-app(AppError), andrevaer-api(TagServiceError). - Verified normalization helpers and wrappers in
revaer-data/src/indexers/normalization.rsand the supporting stored procedures (normalize_title,normalize_magnet_uri,derive_magnet_hash,compute_title_size_hash). - Alternatives considered: introducing new error enums or normalization helpers in additional crates; rejected because current coverage meets ERD requirements.
- Verified error enums and constant-message patterns for indexer paths across
- Consequences:
- Positive: checklist items are satisfied without new dependencies or API changes.
- Risks/trade-offs: future indexer services must keep the same constant-message + context-field pattern to remain compliant.
- Follow-up:
- Test coverage summary:
just ciandjust ui-e2epassed (npm audit still reports 2 moderate vulnerabilities in the UI test workspace). - Observability: no additional spans/metrics needed for this verification step.
- Risk & rollback plan: documentation-only change; revert ADR and checklist updates if verification is found incomplete.
- Dependency rationale: no new dependencies added.
- Test coverage summary:
Indexer result-only returns and no-panics verification
- Status: Accepted
- Date: 2026-01-27
- Context:
- Motivation: Phase 6 requires no panics/unwrap/expect in production paths and Result-only returns for fallible operations.
- Constraints: preserve existing interfaces and keep verification scoped to indexer runtime modules.
- Decision:
- Audited indexer-related modules for
panic!,unwrap(),expect(),unreachable!()in non-test code and found none. - Verified fallible operations return
Result<T, E>;Option<T>usage is limited to non-fallible accessors and optional payloads. - Alternatives considered: expanding the audit to the entire workspace; deferred to avoid blocking indexer-phase progress.
- Audited indexer-related modules for
- Consequences:
- Positive: checklist item satisfied for indexer runtime paths without code churn.
- Risks/trade-offs: future modules must keep the same constraints; broader workspace audit remains out of scope for this ADR.
- Follow-up:
- Test coverage summary:
just ciandjust ui-e2epassed (npm audit still reports 2 moderate vulnerabilities in the UI test workspace). - Observability: no new spans/metrics needed for this verification step.
- Risk & rollback plan: documentation-only change; revert ADR/checklist updates if verification is found incomplete.
- Dependency rationale: no new dependencies added.
- Test coverage summary:
Indexer tryOp wrappers for external operations
- Status: Accepted
- Date: 2026-01-27
- Context:
- Motivation: Phase 6 requires wrapping external/system calls in tryOp-style helpers to normalize error mapping across indexer data access.
- Constraints: panics are forbidden; do not introduce new dependencies; keep SQL interactions confined to stored-procedure calls.
- Decision:
- Introduce a shared
try_ophelper in the data layer and replace per-filemap_query_errclosures across indexer modules. - Use
try_opin all indexer data-layer SQLx interactions (queries, executes, and row extraction) to standardize error mapping. - Note: panic catching is intentionally not used because
catch_unwindis banned and production code must avoid panics entirely. - Alternatives considered: leave per-file closures or introduce a more complex async wrapper; rejected in favor of a simple, centralized helper.
- Introduce a shared
- Consequences:
- Positive: consistent error mapping for indexer data access and fewer duplicate helper definitions.
- Risks/trade-offs: none beyond standard refactor risk; behavior remains equivalent.
- Follow-up:
- Test coverage summary:
just ciandjust ui-e2epassed (npm audit still reports 2 moderate vulnerabilities in the UI test workspace). - Observability: no new spans/metrics required for this refactor.
- Risk & rollback plan: revert the try_op refactor and restore per-module helpers if regressions appear.
- Dependency rationale: no new dependencies added.
- Test coverage summary:
Indexer routing policy service and endpoints
- Status: Accepted
- Date: 2026-01-27
- Context:
- Motivation: expose routing policy create/param/secret operations through the indexer application facade and HTTP surface per ERD Phase 6/8 requirements.
- Constraints: stored-procedure-only DB access, constant error messages, DI-only wiring in bootstrap, no new dependencies.
- Decision:
- Extend the indexer facade with routing policy operations and implement them in
revaer-appusing existing stored-procedure wrappers. - Add routing policy request/response DTOs plus HTTP handlers and routes for create, parameter set, and secret binding.
- Update the OpenAPI document to describe the new endpoints and schemas.
- Extend the indexer facade with routing policy operations and implement them in
- Consequences:
- Positive: routing policy operations are now available to API callers with consistent error mapping and tracing spans.
- Risks/trade-offs: additional endpoints increase API surface and will require follow-on list/update/delete support to be feature complete.
- Follow-up:
- Test coverage summary:
just ciandjust ui-e2e(npm audit reports 2 moderate vulnerabilities in the UI test workspace). - Observability: added spans for routing policy operations; no new metrics yet.
- Risk & rollback plan: revert the routing policy service/API changes if regressions appear; stored procedures remain unchanged.
- Dependency rationale: no new dependencies added (used existing crates only).
- Test coverage summary:
Indexer definition list endpoint
- Status: Accepted
- Date: 2026-01-27
- Context:
- Motivation: expose the indexer definition catalog via the API so UI/CLI flows can enumerate definitions without leaking internal IDs.
- Constraints: stored-procedure-only DB access, constant error messages, injected dependencies, and no new dependencies.
- Decision:
- Add a stored procedure to list indexer definitions with actor validation.
- Wire a data-layer wrapper, application facade method, and HTTP handler to return definition summaries.
- Document the new endpoint and DTOs in OpenAPI and add API coverage.
- Consequences:
- Positive: indexer definitions can be listed through a stable API surface.
- Risks/trade-offs: only summary data is exposed; follow-on endpoints are still needed for field metadata and instance creation flows.
- Follow-up:
- Test coverage summary:
just ciandjust ui-e2e(npm audit reports 2 moderate vulnerabilities in the UI test workspace). - Observability: added tracing span for definition listing; no new metrics yet.
- Risk & rollback plan: revert the definition list service/API changes if regressions appear; stored procedures remain additive.
- Dependency rationale: no new dependencies added (used existing crates only).
- Test coverage summary:
Indexer CF state read endpoint
- Status: Accepted
- Date: 2026-01-30
- Context:
- Motivation: surface Cloudflare mitigation state per indexer instance so UI/API can display health and reset workflows safely.
- Constraints: stored-procedure-only access, constant error messages, and no new dependencies.
- Decision:
- Added
indexer_cf_state_get_v1with a stable wrapper, plus data-access and API plumbing for a GET/v1/indexers/instances/{id}/cf-stateresponse. - Added E2E API coverage for indexer instance and secret endpoints to satisfy coverage gating.
- Alternatives considered: inline SQL or reusing reset-only plumbing (rejected due to stored-proc policy and missing read semantics).
- Added
- Consequences:
- Positive: CF state is now observable through a typed API response; coverage gate stays green.
- Trade-offs: endpoint currently used mainly for read/diagnostics; tests exercise 404 paths when no instance exists.
- Follow-up:
- Expand UI controls and routing-policy integrations for CF/flaresolverr workflows per ERD gaps.
Test Coverage
just cijust ui-e2e
Observability
- Added
indexer.cf_state_getspan in the indexer service path.
Risk and Rollback
- Risk: minimal behavior change; read path only, returns 404 for unknown instances.
- Rollback: revert migration
0072_indexer_cf_state_get.sqland associated API/service changes.
Dependency Rationale
- No new dependencies added; existing crates and patterns were used.
Indexer CF state E2E coverage
- Status: Accepted
- Date: 2026-01-30
- Context:
- Motivation: satisfy UI E2E API coverage gate for newly added CF state endpoints.
- Constraints: no new dependencies; reuse existing E2E API fixtures and coverage hooks.
- Decision:
- Extend indexer instance E2E API coverage to hit CF state GET and reset endpoints using a missing-instance 404 path.
- Alternatives considered: add a dedicated fixture to create a real instance (rejected for higher setup cost in current E2E suite).
- Consequences:
- Positive: coverage gate includes CF state endpoints and remains green.
- Trade-offs: responses are 404-only in this test until instance creation is wired into E2E fixtures.
- Follow-up:
- Expand E2E to exercise CF state success paths once instance creation fixtures are available.
Test Coverage
just cijust ui-e2e
Observability
- No changes.
Risk and Rollback
- Risk: minimal; only exercises API endpoints in E2E.
- Rollback: revert
tests/specs/api/indexers-instances.spec.tsadditions.
Dependency Rationale
- No new dependencies added.
170 Indexer category mapping API endpoints
- Status: Accepted
- Date: 2026-01-31
- Motivation:
- Provide API management for tracker category and media-domain Torznab mappings per ERD and ADR 128.
- Expose stored-proc-backed updates with consistent error mapping and audit tracking.
- Design notes:
- Add indexer facade methods for tracker category and media-domain mapping upsert/delete.
- Implement HTTP endpoints that call stored procedures via data-layer wrappers.
- Keep error messages constant; attach error codes/SQLSTATE as structured context.
- Test coverage summary:
- Data-layer tests for invalid key handling and primary mapping switch.
- API E2E coverage for mapping upsert/delete endpoints.
- Observability updates:
- None (existing tracing/logging patterns reused).
- Risk & rollback plan:
- Risk: incorrect mapping updates could affect category resolution.
- Rollback: revert API changes and restore seeded mappings via migrations/seed defaults.
- Dependency rationale:
- No new dependencies.
171 Indexer Torznab instance API endpoints
- Status: Accepted
- Date: 2026-01-31
- Motivation:
- Provide API coverage for Torznab instance lifecycle (create, rotate credentials, enable/disable, delete) backed by stored procedures.
- Close ERD indexer checklist gaps with testable handlers and OpenAPI schema coverage.
- Design notes:
- Add API models for Torznab instance create/state requests and responses in revaer-api-models.
- Extend indexer facade contract in revaer-api and wire revaer-app implementations to stored-proc data access.
- Implement HTTP handlers with consistent error mapping and constant error messages; trim user input on ingress.
- Add E2E coverage for Torznab instance endpoints and OpenAPI updates.
- Test coverage summary:
- Unit tests for error mapping and input trimming in the Torznab instance handlers.
- App-layer tests for missing profile/instance validation.
- API E2E coverage for Torznab instance create/rotate/state/delete flows.
- Observability updates:
- Reused existing tracing spans for indexer operations; no new metrics added.
- Risk & rollback plan:
- Risk: incorrect lifecycle wiring could leave orphaned Torznab instances or misstate enablement.
- Rollback: revert API changes and use stored procedures to reset instance state from migrations/seed data.
- Dependency rationale:
- No new dependencies.
172 Indexer search profile API endpoints
- Status: Accepted
- Date: 2026-01-31
- Context:
- API coverage for search profile stored procedures was missing, blocking ERD checklist parity.
- Prowlarr parity requires deterministic, auditable search profile configuration surfaces.
- Decision:
- Add request/response models, facade methods, and HTTP routes for search profile lifecycle ops.
- Keep error messages constant and attach context via structured fields.
- Consequences:
- Search profiles can now be created and configured through the API layer.
- E2E coverage asserts API availability for both auth modes.
- Follow-up:
- Implement search profile UI surfaces and policy management endpoints.
- Extend coverage for policy set integration once endpoints exist.
Task record
- Motivation:
- Expose stored-procedure-backed search profile management through the API.
- Provide E2E coverage for search profile lifecycle operations to align with the ERD.
- Design notes:
- Add API models for search profile create/update/default/domain allowlist/policy set/indexer allow-block/tag allow-block-prefer.
- Extend the indexer facade to surface search profile operations with typed errors.
- Implement HTTP handlers with constant error messages and trimmed inputs.
- Test coverage summary:
- Unit tests for handler trimming and conflict mapping.
- API E2E coverage for search profile lifecycle endpoints.
- Observability updates:
- Reused existing tracing spans for indexer operations; no new metrics added.
- Risk & rollback plan:
- Risk: invalid profile updates could affect search filtering.
- Rollback: revert API changes and repair profiles via stored procedures/migrations.
- Dependency rationale:
- No new dependencies.
Indexer import jobs API surface
- Status: Accepted
- Date: 2026-01-31
- Context:
- Need REST coverage for indexer import jobs (create/run/status/results) to satisfy ERD indexer checklist.
- Must preserve stored-procedure boundaries, stable errors, and testable handlers with E2E coverage.
- Decision:
- Add import job request/response models and handler wiring for create/run/status/results endpoints.
- Extend app facade mapping for import job error translation and results/status projection.
- Update OpenAPI and Playwright API coverage for new endpoints.
- Alternatives considered: defer API surface until full import pipeline; rejected to keep parity with checklist and procs.
- Consequences:
- Positive outcomes: import job endpoints are now reachable, documented, and covered in E2E.
- Risks or trade-offs: run endpoints currently validate inputs and return errors without a worker path; full import pipeline still pending.
- Follow-up:
- Implement background import execution and UI flows for import job monitoring.
- Extend CLI support once import pipeline is ready.
Task record
- Motivation: close the ERD indexer checklist gap for import job REST endpoints and E2E coverage.
- Design notes: handlers trim inputs, map stored-procedure error codes to stable API errors, and return typed models; no inline SQL added.
- Test coverage summary: added API E2E coverage for create/run/status/results; existing unit tests cover trimming and error mapping.
- Observability updates: no new spans or metrics required for handler-only changes.
- Risk & rollback plan: rollback by reverting endpoint wiring and OpenAPI updates; no migrations or data changes.
- Dependency rationale: no new dependencies added; reused existing models, handlers, and stored procedures.
Indexer import jobs CLI commands
- Status: Accepted
- Date: 2026-01-31
- Context:
- Import job API endpoints exist but CLI lacked parity for creating and inspecting import jobs.
- Need to keep CLI output stable (json/table) and enforce API key requirements.
- Decision:
- Add
indexer importCLI subcommands for create, run (Prowlarr API/backup), status, and results. - Provide table and JSON output renderers for import job status and results.
- Alternatives considered: postpone CLI until full import pipeline; rejected to close ERD checklist gap.
- Add
- Consequences:
- Positive outcomes: operators can start and inspect import jobs from CLI with consistent output.
- Risks or trade-offs: CLI surfaces are limited to import job endpoints; broader indexer CLI features remain pending.
- Follow-up:
- Extend CLI with indexer test, policy management, and Torznab key commands.
- Add CLI coverage once indexer workflows expand.
Task record
- Motivation: provide CLI parity for indexer import job lifecycle operations.
- Design notes: new subcommands map 1:1 with REST endpoints and reuse common output formats.
- Test coverage summary: existing CLI unit tests extended for command label coverage; CLI integration not yet expanded.
- Observability updates: no new telemetry or metrics beyond existing CLI emitter.
- Risk & rollback plan: revert CLI subcommands and output helpers; no data changes.
- Dependency rationale: no new dependencies added; reused existing models and CLI utilities.
Indexer Torznab CLI management
- Status: Accepted
- Date: 2026-01-31
- Context:
- Torznab instance keys and lifecycle operations are available via API but missing CLI tooling.
- Need operator-level access to create, rotate, enable/disable, and delete Torznab instances.
- Decision:
- Add
indexer torznabCLI subcommands for create, rotate, set-state, and delete. - Render Torznab instance credentials in JSON or table output.
- Alternatives considered: postpone CLI tooling; rejected to keep operational parity with API.
- Add
- Consequences:
- Positive outcomes: CLI can manage Torznab instances and rotate keys without UI.
- Risks or trade-offs: plaintext API keys are shown in CLI output; operators must handle securely.
- Follow-up:
- Add CLI coverage once Torznab endpoints and auth rules are fully implemented.
- Extend CLI for Torznab downloads and search flows when endpoints land.
Task record
- Motivation: provide CLI access to Torznab instance creation, rotation, and state updates.
- Design notes: subcommands map 1:1 with REST endpoints and share existing output formatting patterns.
- Test coverage summary: command label tests updated; no new integration tests added.
- Observability updates: no additional telemetry beyond existing CLI emitter.
- Risk & rollback plan: revert CLI commands and output helpers; no migrations or data changes.
- Dependency rationale: no new dependencies added.
Indexer policy CLI management
- Status: Accepted
- Date: 2026-01-31
- Context:
- Policy set and rule endpoints exist but lack CLI coverage.
- Operators need a CLI path to create, enable, disable, and reorder policy sets and rules.
- Decision:
- Add
indexer policyCLI subcommands for policy set creation, update, enable/disable, reorder, and policy rule create/enable/disable/reorder. - Render policy set and rule identifiers in table or JSON output.
- Alternatives considered: rely on API or UI; rejected to keep operational parity.
- Add
- Consequences:
- Positive outcomes: CLI can manage policy sets and rules without UI.
- Risks or trade-offs: CLI must be kept in sync with API schema updates.
- Follow-up:
- Add list and detail commands once policy listing endpoints are available.
- Expand rule creation ergonomics as policy rule value-set options grow.
Task record
- Motivation: provide CLI access for policy sets and rules to match API capabilities.
- Design notes: subcommands mirror REST endpoints; requests validate non-empty fields locally.
- Test coverage summary: command label test updated; no new integration tests added.
- Observability updates: no additional telemetry beyond existing CLI emitter.
- Risk & rollback plan: revert CLI commands and output helpers; no migrations or data changes.
- Dependency rationale: no new dependencies added.
Indexer instance test API and CLI
- Status: Accepted
- Date: 2026-01-31
- Context:
- Indexer instance test stored procedures exist but lacked an API/CLI surface.
- Executors need a prepare payload and a finalize endpoint to record outcomes.
- Decision:
- Add API endpoints to prepare and finalize indexer instance tests.
- Add CLI commands to invoke the prepare and finalize endpoints with JSON or table output.
- Alternatives considered: defer until executor is built; rejected to keep parity with ERD flows.
- Consequences:
- Positive outcomes: external executors and CLI can drive indexer test lifecycle.
- Risks or trade-offs: test execution is still external; API must stay aligned with executor payload needs.
- Follow-up:
- Wire executor to call the prepare/finalize API from the job runner.
- Add E2E coverage for the test endpoints once executor is online.
Task record
- Motivation: expose indexer instance test lifecycle via API/CLI to support migration and diagnostics.
- Design notes: API mirrors stored-proc inputs/outputs; CLI outputs field arrays and statuses.
- Test coverage summary: handler unit tests added for prepare/finalize; command label test updated.
- Observability updates: new service spans for prepare/finalize.
- Risk & rollback plan: revert API routes and CLI commands; no migrations or data changes.
- Dependency rationale: no new dependencies added.
Indexer allocation safety guard
- Status: Accepted
- Date: 2026-02-01
- Context:
- Motivation: Prevent unbounded allocations in indexer handlers and satisfy security review feedback.
- Constraints: No new dependencies; errors must use constant messages with structured context.
- Decision:
- Add a shared allocation helper that reads
MemAvailablefrom/proc/meminfoand limits requested allocations to 80% of available memory. - Apply the helper to dynamic list normalization in search profiles, policy rules, and media domain allowlists, while raising per-list caps to avoid overly constraining users.
- Dependency rationale: none (std-only implementation).
- Add a shared allocation helper that reads
- Consequences:
- Positive outcomes: safer allocations, explicit error reporting with context, consistent limits.
- Risks or trade-offs: allocation checks fail closed if
MemAvailablecannot be read; rollback by relaxing the guard to a fixed ceiling if needed.
- Follow-up:
- Implementation tasks: add helper module, update normalization paths, add unit tests.
- Test coverage summary: unit tests for allocation guard and meminfo parsing.
- Observability updates: none required; errors carry context fields for diagnostics.
Auth prompt dismissal stability
- Status: Accepted
- Date: 2026-02-01
- Context:
- Motivation: UI E2E intermittently fails because the auth prompt reappears after dismissal when app auth mode is resolved asynchronously.
- Constraints: Preserve current auth behavior, avoid new dependencies, keep state logic testable.
- Decision:
- Stop resetting
auth_prompt_dismissedin the app auth mode effect so a user dismissal remains effective for the session. - Alternatives considered: re-trying dismissal in tests only, or persisting dismissal in storage.
- Dependency rationale: none (state-only change).
- Stop resetting
- Consequences:
- Positive outcomes: auth overlay no longer reappears after dismissal during initial config hydration; UI tests can dismiss overlays reliably.
- Risks or trade-offs: users might need to re-open auth prompt manually if they dismissed it while auth becomes required; rollback by reintroducing reset with a timestamp or explicit user action.
- Follow-up:
- Implementation tasks: adjust app auth mode effect to avoid overriding dismissal state.
- Test coverage summary: UI E2E coverage exercises overlay dismissal; no new unit tests added.
- Observability updates: none.
Cross-platform allocation safety probe
- Status: Accepted
- Date: 2026-02-01
- Context:
- Motivation: Allocation safety relied on
/proc/meminfo, which is Linux-only. We need a cross-platform source of live available memory so we do not lock into Linux. - Constraints: Keep error messages constant; avoid unsafe code; preserve minimal dependencies.
- Motivation: Allocation safety relied on
- Decision:
- Use
systemstatto fetch live memory statistics on all platforms. - Prefer Linux
MemAvailablewhen present, otherwise fall back to the live free-memory value returned bysystemstat. - Keep the 80% available-memory guard and fail closed when memory cannot be determined.
- Dependency rationale:
systemstatprovides cross-platform live memory data without adding unsafe code in Revaer. Alternatives considered: OS-specific FFI (requires unsafe) or estimates (rejected).
- Use
- Consequences:
- Positive outcomes: Allocation guard works on macOS/Windows/Linux; no platform lock-in.
- Risks or trade-offs: Adds a small dependency footprint; relies on OS-reported statistics.
- Follow-up:
- Implementation tasks: update allocation helper to use
systemstat; add docs entry. - Test coverage summary: allocation guard unit tests remain; live-memory probe is exercised via API/CLI/E2E.
- Observability updates: none.
- Implementation tasks: update allocation helper to use
Indexer PR Feedback Follow-through
- Status: Accepted
- Date: 2026-02-01
- Context:
- Addressed open PR feedback on indexer handlers, allocation safety, and API request shape.
- Needed clearer documentation for session encryption env vars and allocation limits.
- Reduced duplicated test scaffolding while preserving testability and coverage.
- Decision:
- Centralize allocation safety in a helper, apply it to request-driven allocations, and document the 80% safety limit.
- Consolidate indexer handler test scaffolding into a shared test helper module.
- Move string normalization helpers into a shared indexer module.
- Remove redundant indexer instance public ID from the update request body.
- Consequences:
- Clearer memory allocation policy and safer handling of unbounded inputs.
- Leaner test modules with shared helpers and fewer duplicated imports.
- API request shape aligns with path-based identifiers, reducing ambiguity.
- Follow-up:
- Monitor code scanning to confirm allocation alerts clear after rescans.
- No additional migrations required.
Motivation
Align indexer handler code with review feedback, improve allocation safety for user-driven inputs, reduce test duplication, and clarify API request semantics.
Design notes
- Allocation helpers now gate request-sized buffers using live memory data and a documented 80% cap to preserve headroom.
- A test support module centralizes stub config and response parsing helpers for indexer handler tests without exposing them outside the indexers module.
- String normalization helpers are shared across indexer handlers to avoid duplication.
IndexerInstanceUpdateRequestnow relies solely on path identifiers.
Test coverage summary
just cijust build-releasejust ui-e2e
Observability updates
- None (documentation-only changes and refactors).
Risk & rollback plan
- Low risk: changes are additive or refactor-only. Roll back by reverting the individual commits if any regression is observed.
Dependency rationale
- No new dependencies added in this change set; see ADR 180 for the live-memory probe rationale.
Indexer PR Feedback Allocation Follow-up
- Status: Accepted
- Date: 2026-02-02
- Context:
- Review feedback highlighted unbounded allocations in indexer handlers and asked for clearer, live-memory guardrails.
- Allocation safety needed to remain cross-platform and avoid hard-coded assumptions.
- Test helper naming and error diagnostics in tests required clarification.
- Decision:
- Add explicit allocation safety checks for request-driven string and vector allocations using the shared live-memory guard.
- Introduce a minimum-available-memory threshold and a cached-system entry point to avoid repeated probing where reuse is possible.
- Rename shared indexer test state helper and tighten ProblemDetails parsing in tests.
- Consequences:
- Safer handling of request-sized allocations with clearer memory-policy documentation.
- Improved test helper clarity and more actionable test failures.
- Slightly more allocation checks per request, offset by the option to reuse a system snapshot.
- Follow-up:
- Confirm code scanning alerts clear after the next GitHub Advanced Security scan.
- No migrations required.
Motivation
Close PR feedback on allocation safety and test clarity while keeping indexer handler behavior intact and aligned with live-memory guardrails.
Design notes
- Allocation sizing now checks request-derived bytes against live available memory before materializing strings or vectors.
- The allocation guard exposes a cached-system entry point and enforces a minimum available memory threshold before allowing allocations.
- Shared indexer test helpers use clearer naming and explicit expectations for response decoding.
Test coverage summary
just cijust build-releasejust ui-e2e
Observability updates
- None (guardrails and test refactors only).
Risk & rollback plan
- Low risk: behavior is additive and defensive. Roll back by reverting this change set if allocation checks prove too strict in practice.
Dependency rationale
- No new dependencies added.
Indexer PR Feedback Follow-up (Allocation Caps)
- Status: Accepted
- Date: 2026-02-02
- Context:
- Additional PR feedback requested explicit caps for request-driven allocations and safer test body parsing.
- Allocation guards must use live memory data while still providing deterministic upper bounds.
- Decision:
- Add explicit maximum sizes for search profile domain/tag keys and policy rule text inputs.
- Limit test response body reads to a fixed upper bound.
- Document the secret key ID max-length source for maintainability.
- Consequences:
- Reduced risk of unbounded allocations from large inputs.
- Clearer operational limits with minimal user-facing constraints.
- Test helpers avoid excessive memory use on malformed responses.
- Follow-up:
- Confirm GHAS/code scanning alerts clear after the next scan.
- No migrations required.
Motivation
Ensure indexer handlers enforce conservative, explicit input caps alongside live-memory guards and improve test safety for large responses.
Design notes
- Search profile domain keys and tag keys now have maximum counts and per-key byte limits.
- Policy rule text inputs (including value set items) enforce per-field byte limits.
- Test helper response parsing reads at most 1 MiB.
Test coverage summary
just cijust ui-e2e
Observability updates
- None.
Risk & rollback plan
- Low risk: validation rejects overlarge inputs up front. Roll back by reverting this change set if limits are too strict.
Dependency rationale
- No new dependencies added.
Indexer Torznab caps endpoint
- Status: Accepted
- Date: 2026-02-03
- Context:
- We need to begin Torznab API parity by serving a caps response backed by seeded categories.
- Authentication must use the Torznab API key query parameter and reject disabled/deleted instances.
- Runtime SQL must remain stored-procedure-only and error messages must be constant.
- Decision:
- Add stored procedures to authenticate Torznab instances and list seeded Torznab categories.
- Expose a Torznab caps handler that authenticates via apikey and returns XML caps data.
- Keep invalid/unsupported Torznab requests as empty XML responses with no DB writes.
- Consequences:
- Positive outcomes:
- Arr clients can validate Torznab connectivity via caps using live category data.
- Authentication and category access stay consistent with stored-proc-only policy.
- Risks or trade-offs:
- Only caps is implemented so far; search and download endpoints still require follow-on work.
- Positive outcomes:
- Follow-up:
- Implement Torznab search and download endpoints with full ERD semantics.
- Add additional Torznab response tests and OpenAPI parity updates as functionality expands.
Indexer Torznab download and allocation guards
- Status: Accepted
- Date: 2026-02-04
- Context:
- We need Torznab download redirects to complete core ERD coverage and satisfy PR review feedback.
- Allocation safety must rely on live memory information and apply to request-driven allocations.
- Review feedback also called for clearer validation structure in bootstrap secrets.
- Decision:
- Add a stored-procedure-backed Torznab download prepare path that validates instance/profile/tag access and records acquisition attempts.
- Extend allocation guards to all request-dependent allocations, including Torznab XML escaping, and clamp vector capacities to bounded limits.
- Refactor secret env validation to a shared helper for consistency.
- Consequences:
- Positive outcomes:
- Torznab clients can request download redirects with audited acquisition attempts.
- Allocation safety applies uniformly and relies on live memory data.
- Validation logic is more maintainable and easier to test.
- Risks or trade-offs:
- Allocation checks can reject requests when memory telemetry is unavailable or too low.
- Positive outcomes:
- Follow-up:
- Continue Torznab search response coverage and add richer download telemetry once search is implemented.
Task record
- Motivation: close Torznab download gap, address allocation safety/GHAS feedback, and tighten secret validation.
- Design notes:
- Download path uses a stored procedure to enforce profile/tag rules and populate acquisition_attempt.
- Allocation checks use live system memory and guard XML escaping plus request-sized collections.
- Secret env validation is centralized to avoid duplication and preserve constant error messages.
- Test coverage summary:
just ci(fmt/lint/udeps/audit/deny/test/cov/build-release) andjust ui-e2e. - Observability updates: none; existing spans and error context fields remain the primary signals.
- Risk & rollback plan: revert migration 0078 and API handlers, then reset DB migrations; no data migrations beyond new procs.
- Dependency rationale: no new dependencies;
bytesupdated to 1.11.1 to address RustSec advisory.
186: Indexer search requests API and allocation guard refinements
- Status: Accepted
- Date: 2026-02-04
- Context:
- Add v1 REST endpoints for indexer search request create/cancel while keeping stored-procedure boundaries.
- Address PR feedback on allocation safety and test-only helper isolation.
- Ensure allocation guards use live memory data and cap single allocations at 80% of available memory.
- Decision:
- Added search request create/cancel request/response models plus API handlers, routes, and facade wiring.
- Added allocation helpers that check live memory availability and use checked capacity reservations before dynamic allocations.
- Tightened test helpers to use bounded body reads and explicit error parsing.
- Consequences:
- Positive outcomes:
- Search request orchestration is now reachable via v1 REST endpoints.
- Allocation checks are centralized and consistently enforced with live memory data.
- Risks or trade-offs:
- Requests with large payloads may be rejected under memory pressure.
- Additional validation and allocation checks add small overhead to hot paths.
- Positive outcomes:
- Follow-up:
- Implement remaining search request lifecycle endpoints (list/status) as the checklist advances.
- Keep UI/E2E coverage aligned as new search request surfaces are added.
Motivation
- Provide a REST API for indexer search requests to unblock UI/CLI orchestration.
- Align allocation safeguards with GHAS feedback and operational safety goals.
Design notes
- Handlers trim and normalize request inputs, translate service errors into RFC9457 responses, and delegate to stored-proc backed services.
- Allocation checks rely on live memory snapshots and cap single allocations at 80% of available memory.
Test coverage summary
- Added/updated unit tests for search request handlers and allocation helpers.
- Ran
just ciandjust ui-e2eto validate unit, integration, coverage, and E2E suites.
Observability updates
- No new metrics added; existing request spans and error contexts remain in place.
Risk & rollback plan
- If allocation checks prove too strict, adjust the limit in
crates/revaer-api/src/http/handlers/indexers/allocation.rs. - Roll back by reverting this ADR and associated handler changes if endpoints regress.
Dependency rationale
- No new dependencies added; re-used existing
systemstatmemory probing.
Indexer search request auth E2E coverage
- Status: Accepted
- Date: 2026-02-04
- Context:
- Enforce the v1 rule that REST search requests and canonical torrent source access require API key auth.
- Validate behavior in E2E tests without reducing existing indexer functionality.
- Decision:
- Add E2E coverage for search request create/cancel auth requirements.
- Add E2E coverage for Torznab download behavior when the apikey is missing.
- Keep API responses and handler behavior unchanged; tests only validate the existing contract.
- Consequences:
- Positive outcomes: regression coverage for auth enforcement on search requests and Torznab downloads.
- Risks or trade-offs: additional E2E runtime and reliance on seeded system actor for search requests.
- Follow-up:
- Monitor CI E2E stability after adding coverage.
Motivation
Search request endpoints and Torznab downloads must enforce API key authentication consistently. We need explicit E2E coverage to guard against regressions while retaining current behavior.
Design notes
- Use existing API fixtures to test authenticated and unauthenticated flows.
- Verify missing apikey returns HTTP 401 for Torznab downloads.
- Keep tests scoped to public endpoints and avoid relying on external indexer data.
Test coverage summary
- Added E2E tests for search request create/cancel auth behavior.
- Added E2E test for missing apikey on Torznab download.
Observability updates
- No new telemetry; tests validate existing API responses.
Risk & rollback plan
- Low risk: test-only changes. If tests are unstable, revert the E2E additions and re-evaluate fixtures or environment setup.
Dependency rationale
- No new dependencies.
188: Indexer search pages API
- Status: Accepted
- Date: 2026-02-06
- Context:
- Search request creation exists, but there is no API surface to read sealed pages or page contents.
- ERD requires stable page ordering and sealed page boundaries for streaming results.
- All runtime DB reads must go through stored procedures with constant error messages.
- Decision:
- Added stored procedures to list pages and fetch page items with stable ordering and page metadata.
- Exposed v1 REST endpoints to list pages and fetch a specific page for a search request.
- Updated API models and OpenAPI to document the new search page responses.
- Consequences:
- Positive outcomes:
- Clients can poll page lists and fetch sealed pages with deterministic ordering.
- Page metadata (sealed_at, item_count) is exposed consistently across API and DB layers.
- Risks or trade-offs:
- Adds additional DB queries per page fetch (page metadata + items in a single proc).
- UI still needs follow-on work to provide streaming UX, but the API is now available.
- Positive outcomes:
- Follow-up:
- Add SSE notifications for new sealed pages once orchestration emits search result events.
- Extend UI to consume search page endpoints and surface streaming updates.
Motivation
- Provide a concrete API to read search request pages and support streaming UI flows.
- Keep read paths aligned with ERD page sealing and append-only ordering guarantees.
Design notes
- Implemented
search_page_list_v1andsearch_page_fetch_v1stored procedures with actor auth checks. - Page fetch returns page metadata and items in one query, ensuring deterministic ordering by page position.
- Service layer maps stored-proc rows to API DTOs without exposing internal IDs.
- Patched
search_request_create_v1policy snapshot lookup to avoid ambiguoussnapshot_hashresolution when a snapshot already exists. - Qualified
search_request_idinsearch_request_create_v1inserts to avoid column/variable ambiguity during returns. - Qualified
search_page_fetch_v1lookups to avoidsealed_atoutput column ambiguity in PL/pgSQL.
Test coverage summary
- Added stored-proc tests for page listing, invalid page numbers, and empty page fetches.
- Added handler tests for list and fetch responses plus error mapping.
- Will run
just ci,just build-release, andjust ui-e2ebefore hand-off.
Observability updates
- Reused existing request spans; no new metrics added for page reads.
Risk & rollback plan
- If page fetch semantics need adjustment, update
crates/revaer-data/migrations/0079_indexer_search_pages.sqland regenerate data wrappers. - Roll back by reverting this ADR and the search page API routes if clients observe regressions.
Dependency rationale
- No new dependencies added.
189: Search request validation tests
- Status: Accepted
- Date: 2026-02-06
- Context:
- Search request creation enforces identifier, season/episode, and category validation rules in stored procedures.
- Validation paths were under-tested, leaving ERD rule coverage uncertain.
- Decision:
- Add stored-proc tests that exercise identifier mismatch, torznab season/episode validation, and invalid category filters.
- Mark the ERD validation checklist item as complete once coverage is in place.
- Consequences:
- Positive outcomes:
- Validation rules are exercised directly against stored procedures.
- Future regressions in search request validation will fail fast in CI.
- Risks or trade-offs:
- Slightly longer indexer test runtime due to additional database cases.
- Positive outcomes:
- Follow-up:
- Extend validation tests as new rules are added to
search_request_create_v1.
- Extend validation tests as new rules are added to
Motivation
- Ensure ERD-mandated validation rules are enforced and verified in CI.
- Provide deterministic stored-proc coverage for identifier, torznab, and category filter rules.
Design notes
- Added stored-proc tests for identifier mismatch, torznab season/episode validation, and invalid category filters.
- Kept tests aligned with existing error code taxonomy and
DataErrormapping. - Fixed
search_request_create_v1to comparequery_typeandidentifier_typevia text casts to avoid enum type mismatch errors.
Test coverage summary
- Added three new
search_request_createvalidation tests inrevaer-data. - Will run
just ci,just build-release, andjust ui-e2ebefore hand-off.
Observability updates
- No new telemetry or metrics changes.
Risk & rollback plan
- If validations evolve, update tests to match new error codes or rules.
- Roll back by reverting the added tests and checklist update if needed.
Dependency rationale
- No new dependencies added.
190: Hash identity derivation tests
- Status: Accepted
- Date: 2026-02-06
- Context:
- ERD hash identity rules require deterministic magnet hash derivation with a strict precedence order.
- Stored-proc behavior existed but lacked focused tests to lock in the precedence and normalization rules.
- Decision:
- Add stored-proc tests for
derive_magnet_hashto confirm infohash v2 precedence, infohash v1 fallback, and magnet URI normalization. - Mark the ERD checklist item for hash identity derivation as complete once coverage is in place.
- Add stored-proc tests for
- Consequences:
- Positive outcomes:
- Hash derivation precedence and normalization rules are exercised directly in CI.
- Future regressions in magnet hash derivation will surface quickly.
- Risks or trade-offs:
- Minor increase in normalization test runtime.
- Positive outcomes:
- Follow-up:
- Extend normalization coverage if additional hash identity rules are added.
Motivation
- Ensure ERD-mandated hash identity rules are verified by stored-proc tests.
- Lock in precedence for infohash v2, infohash v1, and normalized magnet inputs.
Design notes
- Added normalization tests that compare derivation results across input combinations.
- Verified that normalized and raw magnet URIs produce identical hash outputs when no infohash is present.
Test coverage summary
- Added three
derive_magnet_hashtests inrevaer-datanormalization helpers. - Will run
just ci,just build-release, andjust ui-e2ebefore hand-off.
Observability updates
- No new telemetry or metrics changes.
Risk & rollback plan
- If derivation logic changes, update the tests to match the new rule set.
- Roll back by reverting the normalization tests and checklist update if needed.
Dependency rationale
- No new dependencies added.
191: Rate limit state purge test
- Status: Accepted
- Date: 2026-02-06
- Context:
- ERD rules require rate_limit_state to purge minute buckets older than six hours.
- The purge job existed but lacked a focused test to confirm retention behavior.
- Decision:
- Add a stored-proc test that inserts old and recent rate_limit_state rows and verifies purge behavior.
- Mark the ERD checklist item for rate_limit_state purging as complete once coverage is in place.
- Consequences:
- Positive outcomes:
- Retention behavior is enforced in CI for rate_limit_state cleanup.
- Prevents regressions that could bloat rate_limit_state or delete fresh buckets.
- Risks or trade-offs:
- Slightly longer indexer job test runtime due to extra database setup.
- Positive outcomes:
- Follow-up:
- Add more retention tests if additional job runners adopt similar cleanup rules.
Motivation
- Ensure ERD-mandated rate limit retention rules are verified by stored-proc tests.
- Provide deterministic coverage for the six-hour purge window.
Design notes
- Inserted two rate_limit_state rows with window_start older and newer than six hours.
- Verified that job_run_rate_limit_state_purge removes only the stale row.
Test coverage summary
- Added a targeted job runner test in
revaer-datafor rate_limit_state purging. - Will run
just ci,just build-release, andjust ui-e2ebefore hand-off.
Observability updates
- No new telemetry or metrics changes.
Risk & rollback plan
- If retention windows change, update the test timestamps to match the new rules.
- Roll back by reverting the added test and checklist update if needed.
Dependency rationale
- No new dependencies added.
192 Job schedule completion updates
- Status: Accepted
- Date: 2026-02-07
- Context:
- Motivation: enforce ERD job_schedule completion semantics (next_run_at + lock cleanup) for indexer jobs.
- Constraints: stored-procedure-only runtime access, constant error messages, versioned procs with stable wrappers.
- Decision:
- Add job_schedule_mark_completed_v1 and job_run_*_v2 wrappers to update last_run_at, next_run_at, and clear locks on both success and failure.
- Keep job_run_reputation_rollup signature stable by mapping window_key to job_key in-proc.
- Alternatives considered: update next_run_at in job_claim_next (rejected; ERD mandates update on completion) and update schedule in app runner (rejected; DB is SSOT).
- Consequences:
- Positive outcomes: job_schedule rows now reflect completion cadence with jitter and lock cleanup per ERD.
- Risks or trade-offs: if job_schedule_mark_completed_v1 fails, job errors are surfaced as schedule update failures.
- Follow-up:
- Verify any future job runner wiring calls job_run_* wrappers (not versioned functions directly).
- Review checkpoint: confirm Phase 9 checklist remains aligned with ERD job cadence rules.
- Test coverage summary:
- Added a stored-proc test asserting job_run_retention_purge updates schedule timestamps and clears locks.
- Observability updates:
- None (database-only change).
- Risk & rollback plan:
- Roll back by reverting migration 0084 and restoring job_run_* wrappers to v1.
- Dependency rationale:
- No new dependencies. Alternatives considered: not applicable.
193 Job claim locking and lease durations
- Status: Accepted
- Date: 2026-02-14
- Context:
- Motivation: close ERD job_schedule gaps for claim semantics and per-job lease durations.
- Constraints: stored-procedure-only runtime DB access, constant error messages, migration-safe
CREATE OR REPLACEupdates.
- Decision:
- Add
job_claim_lease_seconds_v1(+ stable wrapper) as the single lease-duration mapping source for alljob_keyvalues. - Update
job_claim_next_v1to acquire advisory lock before readingjob_schedule, then validate due/locked/enabled state and setlocked_untilusingjob_claim_lease_seconds_v1. - Alternatives considered: keep inline CASE mapping in
job_claim_next_v1(rejected: duplicated lease logic), and rely on pre-lock state checks only (rejected: race window for stale schedule reads).
- Add
- Consequences:
- Positive outcomes: claim flow now aligns with ERD advisory-lock +
locked_untilsemantics and applies lease durations from one canonical mapping. - Risks or trade-offs: claim function now returns
job_lockedearlier when advisory lock contention exists, which may mask other validation details during concurrent claims.
- Positive outcomes: claim flow now aligns with ERD advisory-lock +
- Follow-up:
- Keep new lease mapping synchronized if
job_keyenum values change. - Review checkpoint: verify scheduler/executor callers consume
job_claim_nexterrors without re-logging.
- Keep new lease mapping synchronized if
- Test coverage summary:
- Added stored-proc integration tests for
job_claim_nextnot-due and locked failures. - Added per-job lease-duration assertion test covering all seeded
job_keyvalues.
- Added stored-proc integration tests for
- Observability updates:
- None (DB procedure behavior + tests only).
- Risk & rollback plan:
- Roll back by reverting migration
0085_indexer_job_claim_locking.sqland restoring priorjob_claim_next_v1logic.
- Roll back by reverting migration
- Dependency rationale:
- No new dependencies. Alternatives considered: not applicable.
194 Policy snapshot GC ordering
- Status: Accepted
- Date: 2026-02-15
- Context:
- Motivation: enforce ERD ordering that policy snapshot refcount repair runs before policy snapshot GC.
- Constraints: runtime DB operations must remain stored-procedure based with stable wrappers and constant error messages.
- Decision:
- Update
job_run_policy_snapshot_gc_v2to invokejob_run_policy_snapshot_refcount_repair_v1beforejob_run_policy_snapshot_gc_v1. - Keep
job_schedulecompletion behavior unchanged sopolicy_snapshot_gcstill advances cadence and clears locks in one place. - Alternatives considered: scheduler-only ordering in application code (rejected: ordering belongs in DB job semantics) and changing
v1procedures directly (rejected: keep compatibility and add behavior through versioned wrappers).
- Update
- Consequences:
- Positive outcomes: stale
policy_snapshot.ref_countvalues are repaired before GC evaluation, preventing orphaned old snapshots from being retained incorrectly. - Risks or trade-offs:
policy_snapshot_gcruntime now includes repair cost; daily cadence keeps this acceptable.
- Positive outcomes: stale
- Follow-up:
- Maintain this ordering if future
policy_snapshotmaintenance jobs are introduced. - Review checkpoint: ensure callers continue to execute
job_run_policy_snapshot_gc/job_run_policy_snapshot_gc_v2, not direct table mutations.
- Maintain this ordering if future
- Test coverage summary:
- Added integration test proving
job_run_policy_snapshot_gcrepairs stale ref_count values before deleting old snapshots.
- Added integration test proving
- Observability updates:
- None (stored-procedure ordering change only).
- Risk & rollback plan:
- Roll back by reverting migration
0086_indexer_policy_snapshot_gc_ordering.sql.
- Roll back by reverting migration
- Dependency rationale:
- No new dependencies. Alternatives considered: not applicable.
195 Retention purge context cleanup
- Status: Accepted
- Date: 2026-02-15
- Context:
- Motivation: complete ERD retention semantics for search-request scoped rows by purging context score tables together with expired search requests.
- Constraints: retention behavior must remain in stored procedures and use constant error messages.
- Decision:
- Update
job_run_retention_purge_v1to deletecanonical_torrent_source_context_scoreandcanonical_torrent_best_source_contextrows wherecontext_key_type='search_request'andcontext_key_idbelongs to purged requests. - Keep existing retention windows and table purges unchanged for outbound logs, RSS seen rows, conflicts, conflict audits, health events, and source reputation.
- Add an integration test covering retention windows and search-request context cleanup in one execution path.
- Alternatives considered: relying only on application-side cleanup (rejected: retention ownership is database-side) and leaving context rows durable (rejected: violates ERD retention rules).
- Update
- Consequences:
- Positive outcomes: search-request context score tables no longer retain stale rows after request retention purges; policy snapshot ref_count and policy-set cleanup remain coherent.
- Risks or trade-offs: retention job touches two additional tables, increasing delete work during purge runs.
- Follow-up:
- Keep new search-request context rows scoped to
context_key_type='search_request'so retention cleanup remains deterministic. - Validate future retention migrations against the ERD retention table list before release.
- Keep new search-request context rows scoped to
- Test coverage summary:
- Added
job_run_retention_purge_applies_table_windowsto verify old-vs-recent retention behavior across all configured operational tables plus search-request context score cleanup.
- Added
- Observability updates:
- None (database retention behavior change only).
- Risk & rollback plan:
- Roll back by reverting migration
0087_indexer_retention_purge_context_cleanup.sql.
- Roll back by reverting migration
- Dependency rationale:
- No new dependencies. Alternatives considered: not applicable.
196 Indexer connectivity profile refresh rollups
- Status: Accepted
- Date: 2026-02-15
- Context:
- Motivation: complete the ERD connectivity rollup behavior for
job_run_connectivity_profile_refresh_v1soindexer_connectivity_profileis derived fromoutbound_request_logwith the required thresholds and request-type scope. - Constraints: runtime logic must remain in stored procedures, no inline runtime SQL, no new dependencies, and tests must run through existing Rust data-layer harnesses.
- Motivation: complete the ERD connectivity rollup behavior for
- Decision:
- Added migration
0088_indexer_connectivity_profile_rollup_rules.sqlto redefinejob_run_connectivity_profile_refresh_v1. - Rollups now aggregate only request types
(caps, search, tvsearch, moviesearch, rss, probe), excluderate_limitedfrom samples, and treat success asoutcome='success' AND parse_ok=true. - Status scoring now follows ERD thresholds with explicit failing precedence for
success_rate_1h < 0.90and dominant failure classes in(auth_error, cf_challenge, tls, dns). - Added quarantine handling refinements: persistent failing + CF/auth/429 burst transitions to
quarantined; post-cooldown healthy rollups recover todegradedwhile preserving prior error class context. - Added job-runner tests in
crates/revaer-data/src/indexers/jobs.rsfor no-sample defaults, low-success failure classification, persistent auth quarantine, and quarantine cooldown recovery. - Alternatives considered: keeping previous status logic (rejected: low-success cases could remain degraded, which conflicts with ERD failing rules) and handling quarantine transitions in application code (rejected: ERD assigns this behavior to stored-procedure rollups).
- Added migration
- Consequences:
- Positive outcomes: connectivity snapshots align with ERD sample definitions and threshold semantics; rollups update every active indexer row, including no-sample degraded state.
- Risks or trade-offs: stricter failing/quarantine classifications can change operational status sooner than previous behavior; large outbound log windows still require efficient indexing.
- Follow-up:
- Implement the remaining Phase 9 rollup jobs (
reputation_rollup_*,canonical_backfill_best_source,base_score_refresh_recent) and extend job tests for those procedures. - Revisit schema-level
indexer_connectivity_profileconstraint hardening if we want DB-level enforcement of non-nullerror_classfor non-healthy statuses.
- Implement the remaining Phase 9 rollup jobs (
- Design notes:
- Status resolution is now two-stage (
status_resolvedthenfinal) so non-healthy statuses can preserve prior error class context without relying on base-status assumptions. - Indexer scope is anchored to active
indexer_instancerows (deleted_at IS NULL) so connectivity refresh is deterministic even without recent request samples.
- Status resolution is now two-stage (
- Test coverage summary:
- Added:
job_run_connectivity_profile_refresh_upserts_degraded_without_samplesjob_run_connectivity_profile_refresh_marks_low_success_as_failingjob_run_connectivity_profile_refresh_quarantines_persistent_auth_failuresjob_run_connectivity_profile_refresh_recovers_quarantine_to_degraded_after_cooldown
- Added:
- Observability updates:
- None (stored-procedure behavior change only; no new telemetry surface).
- Risk & rollback plan:
- Roll back by reverting migration
0088_indexer_connectivity_profile_rollup_rules.sqland rerunning migration tooling in a rollback deployment.
- Roll back by reverting migration
- Dependency rationale:
- No new dependencies. Alternatives considered: not applicable.
197 Reputation rollup sample thresholds
- Status: Accepted
- Date: 2026-02-15
- Context:
- Motivation: complete ERD reputation rollup behavior for
job_run_reputation_rollup_v1sosource_reputationonly records trusted windows and uses ERD sample semantics. - Constraints: runtime data access stays in stored procedures, no new dependencies, and behavior must be validated through existing data-layer tests.
- Motivation: complete ERD reputation rollup behavior for
- Decision:
- Added migration
0089_indexer_reputation_rollup_thresholds.sqlto redefinejob_run_reputation_rollup_v1. - Request samples now use
outbound_request_logrows for request types(caps, search, tvsearch, moviesearch, rss, probe), excludeerror_class=rate_limited, and count success asoutcome='success' AND parse_ok=true. - Rollups now upsert only for eligible windows (
request_count >= 30oracquisition_count >= 10) per ERD trusted-sample thresholds. - Rollups are scoped to active indexers (
indexer_instance.deleted_at IS NULL) and preserve fake/dmca numerator semantics from acquisition attempts plusreported_fakeactions. - Added job-runner tests in
crates/revaer-data/src/indexers/jobs.rsfor insufficient-sample skip behavior and eligible-window rate calculations. - Alternatives considered: retaining previous always-upsert behavior (rejected: violates ERD “sufficient samples” requirement) and computing trust filtering in Rust instead of SQL (rejected: rollup ownership is database-side).
- Added migration
- Consequences:
- Positive outcomes:
source_reputationrows now align with ERD trust gating and sample definitions, reducing noisy low-sample rollups. - Risks or trade-offs: sparse indexers may not get fresh reputation rows until enough traffic exists, which can increase neutral scoring fallback frequency.
- Positive outcomes:
- Follow-up:
- Implement remaining Phase 9 derived refresh jobs and add dedicated tests for reputation windows
24hand7dcadence behavior. - Revisit whether stale reputation rows should be actively pruned when an indexer drops below sample thresholds.
- Implement remaining Phase 9 derived refresh jobs and add dedicated tests for reputation windows
- Design notes:
- The procedure uses
indexer_scope,combined, andeligibleCTEs so threshold gating is explicit and unit-testable. min_samplesnow records the active trust threshold (10when acquisition-driven, otherwise30).
- The procedure uses
- Test coverage summary:
- Added:
job_run_reputation_rollup_skips_insufficient_samplesjob_run_reputation_rollup_writes_rates_for_eligible_samples
- Added:
- Observability updates:
- None (stored-procedure behavior change only).
- Risk & rollback plan:
- Roll back by reverting migration
0089_indexer_reputation_rollup_thresholds.sql.
- Roll back by reverting migration
- Dependency rationale:
- No new dependencies. Alternatives considered: not applicable.
198 Canonical refresh durable source cadence
- Status: Accepted
- Date: 2026-02-15
- Context:
- Motivation: complete ERD-derived cadence behavior for
job_run_canonical_backfill_best_source_v1andjob_run_base_score_refresh_recent_v1. - Constraints: keep runtime DB behavior inside stored procedures, use no new dependencies, and verify behavior through
revaer-datajob tests.
- Motivation: complete ERD-derived cadence behavior for
- Decision:
- Added migration
0090_indexer_base_score_refresh_durable_sources.sqlto redefine both job procedures. job_run_base_score_refresh_recent_v1now derives candidate canonical/source pairs directly from durablecanonical_torrent_source.last_seen_at >= now()-7dinstead ofcanonical_torrent_source_context_score.job_run_base_score_refresh_recent_v1now recomputes global winners for canonicals with durable-source activity in the last 7 days.job_run_canonical_backfill_best_source_v1now treats “recent canonicals” as canonicals with at least one durable source seen in the last 7 days, while retaining no-winner and low-confidence fallback backfill paths.- Added
revaer-datajob tests for durable-source-only base score refresh and recent durable-source backfill recomputation behavior. - Alternatives considered: keep context-scoped candidate selection (rejected: conflicts with ERD durable-source cadence requirement) and rely on ingest-time recompute only (rejected: ERD explicitly assigns hourly refresh to the job).
- Added migration
- Consequences:
- Positive outcomes: base score refresh and global best-source backfill now align with ERD “durable source last_seen_at” semantics.
- Risks or trade-offs: broader durable-source candidate scans may increase hourly job work on large datasets.
- Follow-up:
- Implement
canonical_prune_low_confidencechecklist item and add focused tests for prune eligibility edge cases. - Validate production indexes for durable-source cadence queries as data volume increases.
- Implement
- Design notes:
- The refresh pipeline remains deterministic: compute base scores first, then recompute global winners for the same durable-source candidate set.
- Backfill keeps low-confidence safety behavior while adding durable-source recency as the primary cadence signal.
- Test coverage summary:
- Added:
job_run_base_score_refresh_recent_uses_durable_source_activityjob_run_canonical_backfill_best_source_recomputes_recent_durable_sources
- Added:
- Observability updates:
- None (stored-procedure behavior change only).
- Risk & rollback plan:
- Roll back by reverting migration
0090_indexer_base_score_refresh_durable_sources.sql.
- Roll back by reverting migration
- Dependency rationale:
- No new dependencies. Alternatives considered: not applicable.
Canonical prune source-link policy alignment
- Status: Accepted
- Date: 2026-02-18
- Context:
canonical_prune_low_confidence_v1needed to matchERD_INDEXERS.mdpruning semantics for low-confidence fallback canonicals.- The ERD requires preserving candidates when their durable sources are also tied to non-pruned canonicals.
- Existing logic inferred source ties via identity joins, which could diverge from persisted canonical/source linkage used by scoring and best-source derivations.
- Decision:
- Redefine
canonical_prune_low_confidence_v1to evaluate source ties from persisted canonical/source linkage tables:canonical_torrent_source_base_scorecanonical_torrent_source_context_scorecanonical_torrent_best_source_globalcanonical_torrent_best_source_context
- Keep existing candidate eligibility guards:
title_size_fallbackwithidentity_confidence <= 0.60created_atolder than 30 days- no acquisition attempts by canonical ID or hashes
- no
user_result_actionwithselectedordownloaded
- Prune only candidates whose linked sources are not tied to any non-candidate canonical.
- Alternatives considered:
- Keep identity-join inference only: rejected because it does not consistently reflect persisted canonical/source ties.
- Add a new canonical-source mapping table: deferred to avoid schema expansion in this step.
- Redefine
- Consequences:
- Positive outcomes:
- Pruning behavior now aligns with ERD source-linkage policy.
- Candidate groups linked only to other candidates can be pruned together.
- Candidates sharing sources with non-candidates are retained.
- Risks or trade-offs:
- Legacy rows without persisted link-table ties may be treated as having no source links.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Add migration redefining
canonical_prune_low_confidence_v1linkage checks. - Add data-layer tests for prune/retain group behavior.
- Mark checklist step complete.
- Add migration redefining
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
RSS poll and subscription backfill workflows
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mdrequiresrss_pollandrss_subscription_backfillworkflows with strict eligibility, retry/disable behavior, and job schedule completion semantics.- Existing stored procedures were in place, but coverage did not prove the v1 behavioral requirements end-to-end from the Rust data-access layer.
- Phase 9 checklist item remained open until those workflows were validated through representative job and executor paths.
- Decision:
- Add data-layer tests in
revaer-datafor RSS claim/apply and backfill job execution:rss_poll_claimreturns only due, enabled subscriptions for enabled RSS-capable instances.- successful
rss_poll_applyupdates subscription state and deduplicates item inserts. - non-retryable
rss_poll_applydisables subscription and writes the expected config audit record. job_run_rss_subscription_backfillcreates missing rows, applies enable/disable state, marks maintenance completion, and disables its own schedule.- backfill job no-ops once maintenance completion is present.
- Keep implementation dependency-free (no new crates).
- Alternatives considered:
- Validate only through SQL migration tests: rejected because data-layer contract behavior could still drift.
- Validate only via API/E2E: rejected because it obscures SP-level failure modes and slows iteration.
- Add data-layer tests in
- Consequences:
- Positive outcomes:
- RSS workflows are now verified against ERD-required behavior at the stored-proc integration boundary.
- Regression risk for subscription claim/apply and one-time backfill scheduling is reduced.
- Risks or trade-offs:
- Tests rely on fixture DB state and must keep helper inserts aligned with table constraints.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Mark Phase 9 RSS workflow checklist item complete.
- Continue with the next Phase 9 derived refresh timing/caching checklist item.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
RSS scheduling, backoff, and dedupe validation
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mdrequires RSS polling behavior to enforce scheduling jitter, retry backoff, and item deduplication.- Existing tests covered claim filtering, successful dedupe insertion, and non-retryable auto-disable, but did not assert retry backoff growth and success scheduling bounds.
- The Phase 7 checklist item remained open until these behavioral rules were validated.
- Decision:
- Extend
revaer-dataexecutor tests for RSS apply behavior:- Add retryable failure assertions proving exponential backoff progression (60s then 120s), preserved subscription enablement, and persisted error class.
- Add successful apply scheduling assertions proving next poll is interval-based with bounded jitter (
900..=960seconds).
- Keep implementation dependency-free (no new crates).
- Alternatives considered:
- Mark checklist complete based on procedure inspection only: rejected because behavior needs executable regression checks.
- Add only migration-level SQL tests: rejected because data-layer API contract could still drift.
- Extend
- Consequences:
- Positive outcomes:
- RSS retry cadence and schedule jitter are now validated at the Rust data-access boundary.
- ERD behavioral requirements for scheduling/backoff/dedupe have concrete regression coverage.
- Risks or trade-offs:
- Tests rely on time windows and may need updates if ERD cadence constants change.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Mark Phase 7 RSS scheduling/backoff/dedupe checklist item complete.
- Continue with the next unchecked ERD indexer implementation item.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Rate limit token bucket and RSS rate-limited semantics
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mddefines token bucket enforcement viarate_limit_try_consume_v1and explicit rate-limited failure semantics forrss_poll_apply_v1.- Existing tests did not fully verify capacity-deny behavior, invalid input guards, and RSS rate-limited logging/backoff behavior from the Rust data boundary.
- Phase 7 checklist still had rate-limit enforcement unchecked.
- Decision:
- Add
revaer-datatests to verify:- token bucket capacity enforcement (
allowedthen deny without over-consuming tokens); - invalid token bucket inputs return expected error details (
capacity_invalid,tokens_invalid); - RSS
rate_limitedfailures requirerate_limit_denied_scope; - RSS
rate_limitedfailures use retry path semantics (backoff scheduling) and force outbound log counters tolatency_ms=0andresult_count=0.
- token bucket capacity enforcement (
- Keep implementation dependency-free (no new crates).
- Alternatives considered:
- Rely on migration inspection only: rejected because runtime contracts can regress without executable checks.
- Cover only via API tests: rejected because proc-level behavior is more directly and deterministically exercised in
revaer-data.
- Add
- Consequences:
- Positive outcomes:
- Token bucket behavior and RSS rate-limited semantics are now regression-tested.
- The checklist item for rate-limit rule enforcement can be marked complete.
- Risks or trade-offs:
- Time-window assertions rely on bounded jitter assumptions and may need adjustment if ERD timing constants change.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Continue with the next unchecked ERD indexers item (Cloudflare transitions/mitigation behavior).
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Cloudflare state transition and mitigation validation
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mdrequires explicit Cloudflare transition behavior around RSS polling failures, including challenged/cooldown progression and retryability semantics tied to FlareSolverr availability.- Existing tests did not verify the
cf_challengetransition paths inrss_poll_apply_v1from the Rust data boundary. - Phase 7 still had Cloudflare transition/mitigation behavior unchecked.
- Decision:
- Extend
revaer-dataRSS executor tests to validate:- non-retryable
cf_challengecreates/updatesindexer_cf_statetochallengedwith incremented failures; - repeated non-retryable
cf_challengetransitions tocooldownat five consecutive failures with backoff; - retryable
cf_challenge(cf_retryable=true) follows retry semantics without applying CF state transition updates.
- non-retryable
- Keep implementation dependency-free (no new crates).
- Alternatives considered:
- Rely only on procedure inspection: rejected because transition regressions are easy to miss without executable checks.
- Cover only via API/E2E: rejected because proc-level transition logic is most directly validated in data-layer tests.
- Extend
- Consequences:
- Positive outcomes:
- Core Cloudflare transition behavior in RSS polling now has regression coverage.
- Checklist item for Cloudflare state transitions/mitigation can be marked complete.
- Risks or trade-offs:
- This validates data/procedure behavior; route-selection policy wiring remains a separate verification axis.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Continue with the next unchecked ERD item after Cloudflare/rate-limit/RSS rule coverage.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Policy snapshot reuse and refcount validation
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mdrequires policy snapshots to be reusable by hash and to trackref_counttransactionally.- Search-request creation and retention purge logic depend on this behavior for correctness and GC safety.
- Existing tests covered purge ordering and repair jobs, but did not directly verify snapshot reuse on create plus ref-count decrement on purge from the data boundary.
- Decision:
- Add
revaer-datasearch-request tests to validate:- repeated
search_request_createcalls with identical effective policy inputs reuse the samepolicy_snapshotrow and incrementref_count; job_run_retention_purgedecrements snapshotref_countwhen an old finished search request is purged.
- repeated
- Keep implementation dependency-free (no new crates).
- Alternatives considered:
- rely on SQL review only: rejected because snapshot reuse/refcount regressions are subtle and need executable checks;
- cover only through API integration tests: rejected because direct data-layer tests are faster and isolate proc behavior.
- Add
- Consequences:
- Positive outcomes:
- snapshot reuse and ref-count tracking now have direct regression coverage;
- Phase 7 checklist item for snapshot reuse/ref_count can be marked complete.
- Risks or trade-offs:
- tests manipulate finished timestamps to exercise retention windows and should be kept aligned with retention defaults.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Continue with the next unchecked Phase 7 behavioral rule.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Policy snapshot GC acceptance coverage
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mdrequires policy snapshot reuse via hash, transactionalref_counttracking, and garbage collection for stale snapshots.- Existing job-level tests already verify GC behavior and refcount repair ordering, while the latest search-request tests verify create-time reuse and retention-time decrements.
- The Phase 9 acceptance checklist item remained open despite complete executable coverage.
- Decision:
- Mark the acceptance item
Policy snapshot reuse and GC rules match ERDcomplete inERD_INDEXERS_CHECKLIST.md. - Keep GC/refcount verification at the data boundary using existing tests:
indexers::jobs::tests::job_run_policy_snapshot_gc_repairs_ref_count_before_deleteindexers::search_requests::tests::search_request_create_reuses_policy_snapshot_by_hash_and_increments_ref_countindexers::search_requests::tests::retention_purge_decrements_policy_snapshot_ref_count
- Alternatives considered:
- add duplicate API-layer tests for the same behavior: rejected because stored-procedure tests already exercise authoritative behavior directly.
- Mark the acceptance item
- Consequences:
- Positive outcomes:
- acceptance checklist now matches implemented and tested ERD behavior;
- policy snapshot lifecycle remains covered at create, purge, and GC phases.
- Risks or trade-offs:
- if snapshot lifecycle semantics change, tests and checklist mapping must be updated together.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Continue with the next unchecked acceptance and hard-blocker items.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Derived refresh timing and caching validation
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mddefines job cadence expectations for derived-table refresh workflows and expects deterministic refresh timing.- Existing tests validated specific refresh behavior (connectivity/reputation/base-score/canonical backfill), but did not explicitly validate seeded
job_schedule.cadence_secondsagainst ERD timings. - Phase 9 still had
Ensure derived tables refresh according to ERD timing and caching rulesunchecked.
- Decision:
- Add a data-layer test in
revaer-datato assertjob_schedulecadence values for all indexer jobs that drive derived refresh and related maintenance windows. - Keep coverage dependency-free and proc-centric:
job_schedule_cadence_matches_erd_refresh_timingvalidates configured cadence seconds for refresh, rollup, GC, purge, and RSS schedules.
- Alternatives considered:
- infer timing correctness from runtime behavior only: rejected because explicit cadence drift can pass behavioral tests but violate ERD schedule requirements.
- Add a data-layer test in
- Consequences:
- Positive outcomes:
- ERD timing expectations are now executable and regression-safe;
- derived refresh cadence drift will fail tests early.
- Risks or trade-offs:
- if ERD cadence values change, this test and migration seeds must be updated in the same change.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Continue with the next unchecked acceptance/hard-blocker item.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Retention and rollup job window validation
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mdrequires retention and reputation rollup jobs to honor time-window semantics (1h,24h,7d) and retain deterministic aggregation behavior.- Existing tests covered retention purge and one-hour rollup behavior, but explicit coverage for 24h/7d boundary inclusion-exclusion was missing.
- Phase 11 still listed
Add job runner tests for retention and rollupsas incomplete.
- Decision:
- Add a
revaer-datajob-runner test that validates multi-window rollup boundaries:- includes events within 24h and 7d windows;
- excludes events older than each target window;
- verifies derived success and acquisition metrics for both
24hand7dwindows.
- Mark the Phase 11 checklist item complete.
- Alternatives considered:
- rely only on SQL inspection: rejected because boundary mistakes are subtle and regress easily.
- Add a
- Consequences:
- Positive outcomes:
- rollup window boundaries now have executable regression coverage for non-1h windows;
- retention/rollup job-runner coverage aligns with checklist intent.
- Risks or trade-offs:
- fixture timestamps are relative to test clock; if ERD windows change, test expectations must be updated.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Continue with the next unchecked checklist item.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Retention and derived refresh strategy coverage
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mdrequires retention windows and derived refresh strategy to be enforced via scheduled jobs.- Coverage existed across job tests but the checklist item remained open.
- Decision:
- Close the checklist item after mapping and validating existing executable coverage:
- retention purge windows and table cleanup:
job_run_retention_purge_applies_table_windows
- derived cadence and schedule correctness:
job_schedule_cadence_matches_erd_refresh_timing
- rollup window behavior and boundary semantics:
job_run_reputation_rollup_skips_insufficient_samplesjob_run_reputation_rollup_writes_rates_for_eligible_samplesjob_run_reputation_rollup_respects_window_boundaries
- derived refresh jobs:
job_run_base_score_refresh_recent_uses_durable_source_activityjob_run_canonical_backfill_best_source_recomputes_recent_durable_sources
- retention purge windows and table cleanup:
- Close the checklist item after mapping and validating existing executable coverage:
- Consequences:
- Positive outcomes:
- checklist status now matches implemented, tested ERD behavior.
- Risks or trade-offs:
- cadence/window rule changes require test expectation updates in lockstep.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Continue with the next unchecked behavioral-rule item.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Policy rule disable/enable and reorder validation
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mdrequires policy rule mutability to flow through explicit enable/disable and ordering semantics rather than ad-hoc field mutation.- Existing tests covered policy rule creation and policy-set reorder validation, but did not directly validate rule disable/enable state transitions and empty reorder rejection.
- Decision:
- Add data-layer tests in
revaer-datapolicy procedures to validate:policy_rule_disableandpolicy_rule_enabletogglepolicy_rule.is_disableddeterministically;policy_rule_reorderrejects empty rule lists withpolicy_rule_ids_empty.
- Mark the behavioral checklist item complete.
- Add data-layer tests in
- Consequences:
- Positive outcomes:
- policy rule control-path semantics now have explicit regression coverage;
- checklist status aligns with executable behavior.
- Risks or trade-offs:
- if policy-rule update surfaces are introduced later, additional tests will be required for new mutation semantics.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Continue with the next unchecked behavioral-rule item.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Search-result observation rules validation
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mdrequires observation processing to enforce latest-observation precedence, monotonic durablelast_seen_*fields, and duplicate attribute rejection.search_result_ingesttests only covered missing request failures and did not directly verify these rule-level invariants.
- Decision:
- Add
revaer-datatests for observation behavior:search_result_ingest_rejects_duplicate_attr_keyssearch_result_ingest_keeps_last_seen_monotonic
- Use stored-procedure path end-to-end (
search_request_create,search_result_ingest) with deterministic fixture setup. - Mark the observation-rule checklist item complete.
- Add
- Consequences:
- Positive outcomes:
- observation invariants are now executable and regression-safe at the data boundary;
- duplicate attr-key rejection is explicitly covered.
- Risks or trade-offs:
- setup fixtures are more involved because ingest requires search+indexer run scope.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Continue with remaining Phase 7 behavioral items.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Category mapping and domain filter validation
- Status: Accepted
- Date: 2026-02-21
- Context:
ERD_INDEXERS.mdrequires Torznab category filters to map into effective media-domain constraints.ERD_INDEXERS.mdalso requires profile domain allowlists to reject category filters that collapse to an empty effective category set.- Existing
search_request_createtests validated basic category input checks but did not verify mapped-domain persistence or allowlist conflict rejection.
- Decision:
- Added
search_request_create_maps_torznab_categories_to_effective_media_domainto assert:- Torznab category
2000maps toeffective_media_domain_id = movies. - requested domain remains unset when the caller does not provide
requested_media_domain_key. - the effective category join row is persisted for
2000.
- Torznab category
- Added
search_request_create_rejects_category_filter_outside_profile_allowlistto assert:- a
tv-only profile allowlist rejects amoviescategory filter withinvalid_category_filter.
- a
- Marked checklist item
ERD_INDEXERS_CHECKLIST.mdPhase 7 category/domain rule as complete.
- Added
- Consequences:
- Positive outcomes:
- category mapping behavior is now regression-tested at stored-procedure boundaries.
- domain filtering against profile allowlists is validated with explicit failure semantics.
- Risks or trade-offs:
- test setup now includes profile provisioning and allowlist mutation, adding small runtime overhead.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- continue with remaining unchecked Phase 6/7/8/10 items.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Indexer Observability Counters for Torznab, Search, and Import Jobs
- Status: Accepted
- Date: 2026-02-25
- Context:
- Motivation:
- ERD indexer checklist Phase 10 still required explicit metrics for invalid Torznab requests, search throughput, and job outcomes.
- Existing telemetry covered HTTP and generic guardrails but did not provide indexer-specific counters for those acceptance points.
- Constraints:
- Keep error messages constant and avoid adding fallback/dead code.
- Reuse existing telemetry infrastructure and avoid new dependencies.
- Preserve stored-procedure-only runtime DB access boundaries.
- Motivation:
- Decision:
- Added new Prometheus counters in
revaer-telemetry:indexer_torznab_invalid_requests_total{reason}indexer_search_requests_total{operation,outcome}indexer_job_outcomes_total{operation,outcome}
- Wired increments in API handlers:
- Torznab API/download handlers increment invalid-request reasons for missing API key, unauthorized access, missing instances/sources, and unsupported query type.
- Search request/page handlers increment throughput counters on success/error for create, cancel, page list, and page fetch operations.
- Import job handlers increment job outcome counters on success/error for create, run, status, and results operations.
- Design notes:
- Metrics were recorded at request boundaries to avoid duplicate increments in deeper call chains.
- Label values are constrained to stable constant strings to keep metric cardinality bounded.
- Alternatives considered:
- Adding counters in app/data layers instead of HTTP handlers was rejected because request intent and invalid Torznab semantics are best known at the API boundary.
- Reusing generic
events_emitted_totalwas rejected because it cannot express required ERD dimensions without overloading labels.
- Added new Prometheus counters in
- Consequences:
- Positive outcomes:
- ERD observability coverage improved with explicit counters for previously untracked indexer flows.
- Metrics remain low-cardinality and aligned with existing Prometheus collection.
- Risks and trade-offs:
- Handler-level instrumentation can miss non-HTTP flows by design; background internal jobs still require separate instrumentation where applicable.
- Reason labels must remain curated to avoid accidental cardinality growth.
- Positive outcomes:
- Follow-up:
- Test coverage summary:
- Updated telemetry unit tests to assert new metrics are registered/rendered.
- Verified API/indexer, workspace, and E2E suites pass via
just ciandjust ui-e2e.
- Observability updates:
- New counters are exposed via
/metricsimmediately with no schema migrations.
- New counters are exposed via
- Risk and rollback plan:
- Safe rollback by removing handler increments and metric registration if operational overhead appears; no data migration impact.
- Dependency rationale:
- No new dependencies added; reused existing
prometheusprimitives inrevaer-telemetry.
- No new dependencies added; reused existing
- Test coverage summary:
Indexer Request Span Coverage for Torznab, Search, and Import Jobs
- Status: Accepted
- Date: 2026-02-25
- Context:
- Motivation:
ERD_INDEXERS_CHECKLIST.mdstill had the observability tracing-span item unchecked for indexer/Torznab/search/job flows.- Existing service-level spans covered many indexer operations, but API request-boundary spans for Torznab and indexer search/import endpoints were not explicit and consistent.
- Constraints:
- Avoid logging secrets from request payloads and query parameters.
- Keep constant error messaging and existing API behavior unchanged.
- Preserve dependency minimalism (no new crates).
- Motivation:
- Decision:
- Added explicit
#[tracing::instrument]spans on API request handlers for:- Torznab request and download endpoints.
- Indexer search request create/cancel and page list/fetch endpoints.
- Import-job create/run/status/results endpoints.
- Used
skip(...)on payload/query-bearing args to avoid accidental secret logging. - Added stable span names and key IDs in structured fields (public UUIDs/page number).
- Marked Phase 10 tracing-span checklist item complete.
- Alternatives considered:
- Relying only on middleware
http.requestspan was insufficient for indexer-domain operation-level observability. - Adding manual
info_span!blocks in every handler was more verbose than#[instrument]and easier to regress.
- Relying only on middleware
- Added explicit
- Consequences:
- Positive outcomes:
- Request-level traces now include deterministic span names for Torznab/search/job operations.
- Improves correlation across API middleware spans and indexer service spans.
- Risks and trade-offs:
- Span naming and fields must remain stable to avoid dashboard churn.
- Future handlers must follow
skip(request/query)for secret-bearing data.
- Positive outcomes:
- Follow-up:
- Test coverage summary:
- Existing handler tests validated behavior compatibility after instrumentation.
- Full gate set (
just ci,just ui-e2e) was rerun and passed.
- Observability updates:
- New trace spans available immediately; no metrics/schema migration required.
- Risk and rollback plan:
- Rollback by removing instrumentation attributes if trace overhead becomes an issue.
- Dependency rationale:
- No new dependencies; used existing
tracingcrate already in workspace.
- No new dependencies; used existing
- Test coverage summary:
Torznab Parity Integration Tests for Endpoint Format and Auth Semantics
- Status: Accepted
- Date: 2026-02-25
- Context:
- Motivation:
- ERD checklist gaps remained for Torznab endpoint format/auth/invalid-request behavior and Torznab parity integration coverage.
- Existing API E2E tests only exercised not-found paths for Torznab endpoints with random IDs.
- Constraints:
- Keep tests deterministic and use existing API setup fixtures.
- Avoid introducing new dependencies or non-
justworkflows. - Ensure API keys are not logged in traces or test output.
- Motivation:
- Decision:
- Extended
tests/specs/api/indexers-torznab-instances.spec.tsto create a real search profile and Torznab instance, then validate:- Missing
apikeyon/torznab/{id}/apireturns 401. - Invalid
apikeyon/torznab/{id}/apireturns 401. - Valid
apikey+t=capsreturns 200 with XML<caps>payload. - Unsupported query type with valid key returns deterministic empty RSS response.
- Download endpoint enforces missing/invalid key with 401 and missing source with 404 for a valid instance.
- Missing
- Updated checklist entries to mark:
- Integration tests for REST/Torznab parity.
- Torznab endpoint format/auth/invalid request handling criterion.
- Alternatives considered:
- Unit-only handler tests were rejected because parity expectations need full HTTP behavior and fixture-auth integration.
- New dedicated Torznab spec file was rejected to avoid duplication while current spec already owns endpoint lifecycle coverage.
- Extended
- Consequences:
- Positive outcomes:
- Torznab public endpoints now have end-to-end coverage against ERD-facing semantics.
- Reduces regression risk for auth and XML response shape behavior.
- Risks and trade-offs:
- Test runtime increases slightly due to additional create/check steps.
- Full Torznab query semantics parity (tvsearch/movie/search behavior depth) remains a separate follow-up.
- Positive outcomes:
- Follow-up:
- Test coverage summary:
- API E2E Torznab tests now cover valid + invalid key paths, caps XML response, unsupported query fallback, and download auth/status behavior.
- Full gate set rerun through
just ciandjust ui-e2e.
- Observability updates:
- No direct telemetry schema changes; behavior uses existing counters/spans.
- Risk and rollback plan:
- Rollback by reverting spec updates and checklist updates if fixture assumptions change.
- Dependency rationale:
- No new dependencies added.
- Test coverage summary:
Torznab Search Query Mapping and Pagination
- Status: Accepted
- Date: 2026-02-25
- Context:
- Torznab
/apihad auth and caps coverage, but search requests returned only empty feeds and did not map query semantics fromERD_INDEXERS.md. - ERD acceptance requires handler-level validation/short-circuiting for invalid Torznab combinations, request mapping to search requests, and append-order pagination behavior.
- Existing search orchestration already exposed
search_request_create,search_page_list, andsearch_page_fetchAPIs, so the missing layer was Torznab query translation and XML rendering.
- Torznab
- Decision:
- Implemented Torznab request parsing and mapping in
crates/revaer-api/src/http/handlers/torznab/api.rsfor:t=search|tvsearch|movie|moviesearchmode handling.q,imdbid,tmdbid,tvdbid,season,ep,cat,offset, andlimitparsing.- invalid-combination short-circuiting to empty XML with invalid-request metrics.
- request creation via
search_request_createand append-order flattening viasearch_page_list+search_page_fetchwith offset/limit slicing.
- Expanded XML rendering in
crates/revaer-api/src/http/handlers/torznab/xml.rsto produce itemized RSS output with Torznab attrs and response metadata (offset,total). - Added API E2E coverage in
tests/specs/api/indexers-torznab-instances.spec.tsfor generic search paging offset behavior, invalid TV combo handling, and invalid category short-circuit responses. - Added focused unit tests for Torznab parse and XML helpers.
- Dependency rationale: no new dependencies were added; implementation reused existing crates (
chrono,uuid) already in the dependency graph. - Alternatives considered:
- Add a dedicated Torznab orchestration service with its own paging/storage tables now: rejected for this step to keep scope aligned with existing search-request/page model.
- Keep Torznab handler as caps/auth only and defer search mapping entirely: rejected because it blocks ERD parity acceptance items.
- Implemented Torznab request parsing and mapping in
- Consequences:
- Positive outcomes:
- Torznab
/apinow maps request semantics into existing search orchestration and returns structured RSS responses with deterministic offset slicing. - Invalid Torznab query combinations are handled with empty responses instead of API errors.
- Test coverage now exercises Torznab query-path behavior in both unit and API E2E suites.
- Torznab
- Risks or trade-offs:
- Category output currently follows available page item category data; deeper tracker-category remapping parity remains coupled to ingestion metadata completeness.
- Pagination works on currently materialized pages; runtime behavior still depends on upstream run scheduling/ingestion progress.
- Positive outcomes:
- Follow-up:
- Verify category-to-domain fallback behavior (especially explicit
8000handling) against broader fixture matrices. - Extend E2E coverage once richer seeded search-page fixtures are available for multi-page and mixed-category result sets.
- Verify category-to-domain fallback behavior (especially explicit
Torznab Download Redirect and Acquisition Attempt Coverage
- Status: Accepted
- Date: 2026-02-25
- Context:
torznab_download_preparealready implemented ERD-compliant redirect selection and acquisition-attempt writes, but coverage only validated missing-instance failures.- ERD acceptance requires successful redirect behavior (
magnetpreferred overdownload_url) and guaranteed acquisition attempt persistence, including explicit no-target failures.
- Decision:
- Added stored-procedure integration coverage in
crates/revaer-data/src/indexers/torznab.rsto validate:- magnet URI is preferred when both
magnet_urianddownload_urlexist. download_urlis used when magnet is absent.- missing redirect target returns
NULLand writes a failed acquisition attempt withfailure_class=client_errorandfailure_detail=no_download_target.
- magnet URI is preferred when both
- Test fixtures create Torznab scope and canonical sources through existing stored-procedure wrappers (
search_profile_create,torznab_instance_create,search_request_create,search_result_ingest) with minimal setup SQL limited to required indexer test rows. - Dependency rationale: no new dependencies were added.
- Alternatives considered:
- API-only E2E validation for successful redirects: rejected for this increment because the existing E2E fixture layer does not expose deterministic source seeding for positive redirect paths.
- Leave coverage at handler-level negative paths only: rejected because it would not prove acquisition-attempt semantics required by ERD.
- Added stored-procedure integration coverage in
- Consequences:
- Positive outcomes:
- Redirect precedence and acquisition-attempt side effects are now explicitly asserted in automated tests.
- ERD download acceptance behavior is now validated at the stored-procedure boundary used by runtime paths.
- Risks or trade-offs:
- Positive redirect behavior is currently validated at the data/procedure layer, not yet end-to-end via Torznab HTTP in Playwright.
- Positive outcomes:
- Follow-up:
- Add API E2E positive redirect coverage once deterministic source seeding is available through test helpers or dedicated setup endpoints.
Torznab Feed Category Emission and Test Fixture Hardening
- Status: Accepted
- Date: 2026-02-25
- Context:
- Torznab search response items emitted only one category value (
tracker_category) and droppedtracker_subcategory, which reduced category fidelity for consumers that expect parent + subcategory IDs. - Torznab download stored-proc tests depended on
torznab_instance_create, which can fail in test environments withoutgen_saltsupport (pgcrypto) even though download behavior itself does not require API-key generation.
- Torznab search response items emitted only one category value (
- Decision:
- Updated Torznab feed item mapping in
crates/revaer-api/src/http/handlers/torznab/api.rsto emit:tracker_categorywhen present,tracker_subcategorywhen positive and distinct,- fallback to
8000(Other) when no category metadata exists.
- Added unit coverage for category emission behavior:
- category + subcategory inclusion,
8000fallback.
- Hardened
crates/revaer-data/src/indexers/torznab.rstest fixture setup by insertingtorznab_instancerows directly for download-proc tests, avoiding dependence on API-key hashing internals unrelated to the redirect/acquisition semantics under test. - Dependency rationale: no new dependencies were added.
- Alternatives considered:
- Keep single-category emission and defer multi-cat output: rejected because it preserves avoidable Torznab parity drift.
- Keep proc-based fixture creation and require extension setup in tests: rejected because it couples download tests to unrelated crypto-extension availability.
- Updated Torznab feed item mapping in
- Consequences:
- Positive outcomes:
- Torznab item category payloads are closer to expected multi-category semantics.
- Download-proc tests are stable across environments where
gen_saltmay be unavailable.
- Risks or trade-offs:
- This step improves output fidelity but does not fully close all ERD category-domain acceptance checks by itself.
- Positive outcomes:
- Follow-up:
- Complete stored-proc acceptance coverage for
cat=8000catch-all and explicit multi-domain category filtering behavior.
- Complete stored-proc acceptance coverage for
Torznab multi-category domain and Other(8000) coverage
- Status: Accepted
- Date: 2026-02-25
- Context:
- Motivation: ERD_INDEXERS acceptance item 580 required explicit validation that Torznab category behavior matches ERD rules for category-to-domain mapping, especially multi-category requests and the Other (8000) catch-all behavior.
- Constraints:
- Runtime DB behavior must be validated through stored procedures.
- Existing Torznab feed mapping already emits category IDs from tracker mapping, but search request creation also needs direct tests for effective domain derivation when
torznab_cat_idsare provided. - Changes must pass
just ciandjust ui-e2e.
- Dependency rationale:
- No new dependencies were added.
- Alternative considered: add app-layer mocks around domain mapping. Rejected because ERD behavior is owned by stored procedures and must be tested at that boundary.
- Decision:
- Added stored-proc tests in
crates/revaer-data/src/indexers/search_requests.rs:search_request_create_torznab_other_category_keeps_unrestricted_domainsearch_request_create_torznab_multi_category_yields_multi_domain_scope
- The tests verify:
cat=8000keepseffective_media_domain_idunset (NULL), preserving catch-all behavior.- Multi-domain category input (
2000+5000) keepseffective_media_domain_idunset and preserves effective category rows, matching ERD multi-domain semantics.
- Updated
ERD_INDEXERS_CHECKLIST.mditem 580 to complete.
- Added stored-proc tests in
- Consequences:
- Positive outcomes:
- Stored-proc behavior for Torznab category domain narrowing now has explicit regression coverage for the highest-risk acceptance paths.
- Checklist state is synchronized with tested behavior.
- Risks or trade-offs:
- Coverage focuses on request creation semantics; future behavior changes in search execution filtering still require dedicated acceptance tests.
- Positive outcomes:
- Follow-up:
- Test coverage summary:
just cipasses, including new stored-proc tests.just ui-e2epasses.
- Observability updates:
- No telemetry schema or span changes were needed in this step.
- Risk and rollback:
- Rollback path is low risk: revert the added tests and checklist/ADR updates if needed.
- Review checkpoints:
- Continue Phase 12 acceptance items after 580, starting with rate-limit default/enforcement behavior.
- Test coverage summary:
Rate-limit defaults and scope enforcement coverage
- Status: Accepted
- Date: 2026-02-25
- Context:
- Motivation: ERD acceptance item 583 requires proof that default rate-limit policies exist and rate limiting is enforced for both indexer and routing scopes.
- Constraints:
- Validation must happen through stored-proc level behavior in
revaer-data. - No new dependencies; keep test coverage deterministic and local.
- Validation must happen through stored-proc level behavior in
- Dependency rationale:
- No dependency changes.
- Alternative considered: validate defaults only through migration SQL review. Rejected because acceptance requires executable verification.
- Decision:
- Added two stored-proc tests in
crates/revaer-data/src/indexers/rate_limits.rs:rate_limit_seed_defaults_match_expected_system_policiesrate_limit_try_consume_enforces_bucket_capacity_for_routing_scope
- Existing indexer-scope enforcement test remains in place, so both required scopes are now explicitly covered.
- Updated
ERD_INDEXERS_CHECKLIST.mditem 583 to complete.
- Added two stored-proc tests in
- Consequences:
- Positive outcomes:
- Regression-safe verification that
default_indexeranddefault_routingseed policies match expected limits. - Explicit runtime enforcement coverage for both
indexer_instanceandrouting_policyscope types.
- Regression-safe verification that
- Risks or trade-offs:
- Tests validate token-bucket behavior and seed invariants, but not every higher-level call path that consumes these policies.
- Positive outcomes:
- Follow-up:
- Test coverage summary:
- New tests run under
just test/just ciand exercise stored procedures directly.
- New tests run under
- Observability updates:
- No telemetry changes needed.
- Risk and rollback:
- Rollback is limited to test/docs/checklist files and can be reverted safely if requirements change.
- Review checkpoints:
- Continue with next unchecked ERD acceptance items (retry behavior, Cloudflare transitions, streaming behavior).
- Test coverage summary:
Search-run retry behavior coverage for rate-limited and transient errors
- Status: Accepted
- Date: 2026-02-25
- Context:
- Motivation: ERD acceptance item 584 requires explicit verification that search-run retry behavior matches ERD rules for both rate-limited and transient failures.
- Constraints:
- Validation must happen at stored-proc behavior boundaries.
- Keep changes test-focused, with no dependency additions.
- Dependency rationale:
- No new dependencies were added.
- Alternative considered: infer behavior from migration SQL only. Rejected because acceptance requires executable regression tests.
- Decision:
- Added stored-proc tests in
crates/revaer-data/src/indexers/search_requests.rs:search_indexer_run_mark_failed_rate_limited_uses_retry_and_scopesearch_indexer_run_mark_failed_transient_retries_before_limitsearch_indexer_run_mark_failed_transient_reaches_retry_limit
- Added local test helpers to create request/instance run scopes and assert run state transitions.
- Updated
ERD_INDEXERS_CHECKLIST.mditem 584 to complete.
- Added stored-proc tests in
- Consequences:
- Positive outcomes:
- Rate-limited retry semantics are now explicitly validated:
- queued retry state
- incremented
attempt_countandrate_limited_attempt_count - required
last_rate_limit_scope
- Transient failure semantics are explicitly validated:
- queued retry before max retries
- terminal failed state when retry limit is reached
- Rate-limited retry semantics are now explicitly validated:
- Risks or trade-offs:
- Tests focus on stored-proc state transitions and do not duplicate higher-level orchestrator behavior already covered elsewhere.
- Positive outcomes:
- Follow-up:
- Test coverage summary:
- Included in normal
just ciandjust ui-e2equality gates.
- Included in normal
- Observability updates:
- No observability schema changes required.
- Risk and rollback:
- Rollback is low risk and limited to test/docs/checklist updates.
- Review checkpoints:
- Continue with remaining unchecked acceptance items (Cloudflare transitions, streaming behavior, explainability).
- Test coverage summary:
RSS Cloudflare state transition alignment with ERD
- Status: Accepted
- Date: 2026-02-25
- Context:
- Motivation: ERD acceptance item 585 requires Cloudflare detection and state transitions to follow ERD semantics, including FlareSolverr preference behavior.
- Constraints:
- Runtime DB behavior must be enforced in stored procedures only.
- Changes must preserve existing retry/backoff behavior and quality-gate compliance.
- Dependency rationale:
- No new dependencies were added.
- Alternative considered: leave existing retryable CF behavior unchanged and only add tests. Rejected because existing logic skipped required
clear/solved -> challengedtransitions.
- Decision:
- Added migration
0094_rss_poll_cf_state_transitions.sqlto updaterss_poll_apply_v1:- Apply CF challenge transitions for all
cf_challengefailures (not only non-retryable paths). - Transition
challenged/cooldown -> solvedon successful FlareSolverr poll (via_mitigation='flaresolverr', parse success). - Clear cooldown timestamp on
challengedtransition and reset solved-state counters/backoff fields.
- Apply CF challenge transitions for all
- Updated stored-proc tests in
crates/revaer-data/src/indexers/executor.rs:rss_poll_apply_cf_challenge_retryable_sets_challenged_staterss_poll_apply_flaresolverr_success_promotes_challenged_to_solved
- Added migration
- Consequences:
- Positive outcomes:
- CF state transitions now match ERD transition requirements for challenge detection and FlareSolverr success paths.
- Regression coverage now verifies both retryable CF challenge behavior and solved-state promotion.
- Risks or trade-offs:
- Transition behavior remains bounded to RSS polling procedure scope; broader scheduler routing policy decisions continue to rely on existing routing inputs.
- Positive outcomes:
- Follow-up:
- Test coverage summary:
- Covered by
just ciandjust ui-e2e.
- Covered by
- Observability updates:
- No schema or telemetry surface changes required.
- Risk and rollback:
- Rollback is isolated to one migration and related tests/checklist/docs updates.
- Review checkpoints:
- Continue with remaining unchecked ERD acceptance items (586, 587, 589+).
- Test coverage summary:
Search streaming pages terminal sealing and append-only ordering
- Status: Accepted
- Date: 2026-02-26
- Context:
- Motivation: ERD acceptance item 586 requires streaming search behavior with early page emission, append-only ordering, and deterministic page sealing at terminal request state.
- Constraints:
- Runtime DB behavior must remain stored-procedure driven.
- Search-page ordering must not reorder based on later score updates.
- Dependency rationale:
- No new dependencies were added.
- Alternative considered: rely on API-layer finalization when runs complete. Rejected because sealing and terminal status must stay deterministic inside database state transitions.
- Decision:
- Added migration
0095_search_request_terminal_seal.sql:- Creates trigger function
search_request_finalize_on_runs_terminal_v1. - On indexer run terminal updates, finalizes
search_requestwhen no queued/running runs remain. - Seals any unsealed
search_pagerows at request finalization time.
- Creates trigger function
- Extended
search_result_ingestintegration coverage incrates/revaer-data/src/indexers/search_results.rs:- Added deterministic append-order streaming test across two pages with a late high-seeder result.
- Added request-finished + page-sealed assertions after run completion.
- Added migration
- Consequences:
- Positive outcomes:
- Search requests now reach terminal status and sealed pages deterministically when runs complete.
- Streaming append-only behavior is explicitly regression-tested.
- Risks or trade-offs:
- Trigger introduces additional write work during run terminal updates; scope is bounded to the affected request.
- Positive outcomes:
- Follow-up:
- Test coverage summary:
- Verified with
just ciandjust ui-e2e.
- Verified with
- Observability updates:
- No new telemetry fields were introduced.
- Risk and rollback:
- Rollback is isolated to migration
0095and associated test/checklist/docs changes.
- Rollback is isolated to migration
- Review checkpoints:
- Continue with next unchecked ERD acceptance item after 586.
- Test coverage summary:
Search dropped-source audit persistence and paging exclusion
- Status: Accepted
- Date: 2026-02-26
- Context:
- Motivation: ERD acceptance item 589 requires hard-dropped sources to remain persisted for audit while being excluded from search paging.
- Constraints:
- Runtime behavior remains stored-procedure driven.
- Validation must prove both audit persistence and page exclusion in a single ingest flow.
- Dependency rationale:
- No new dependencies were added.
- Alternative considered: only assert
search_page_itemexclusion. Rejected because ERD also requires auditable persistence (search_filter_decisionand dropped context scoring).
- Decision:
- Added integration test
search_result_ingest_dropped_sources_are_persisted_but_excluded_from_pagesincrates/revaer-data/src/indexers/search_results.rs. - Test setup creates a request-scope policy set with a hard drop title-regex rule, executes ingest, and asserts:
- no
search_page_itemrows are produced for the request, canonical_torrent_source_context_score.is_droppedis true,search_filter_decisionrecordsdrop_canonicalwith observation linkage and canonical/source ids.
- no
- Added integration test
- Consequences:
- Positive outcomes:
- ERD dropped-source behavior is covered by a deterministic regression test.
- Paging remains free of dropped sources while audit evidence is retained.
- Risks or trade-offs:
- Test depends on request-policy ingestion path and policy schema conventions.
- Positive outcomes:
- Follow-up:
- Test coverage summary:
- Verified with
just ciandjust ui-e2e.
- Verified with
- Observability updates:
- No telemetry schema changes.
- Risk and rollback:
- Rollback is isolated to test/checklist/docs updates.
- Review checkpoints:
- Continue with next unchecked ERD acceptance item after 589.
- Test coverage summary:
224: Canonicalization conflict coverage
- Status: Accepted
- Date: 2026-03-01
- Context:
- ERD canonicalization rules require preserving durable source identity and logging conflicts when incoming hashes disagree.
- Existing tests covered fallback identities and size rollups, but not explicit hash-conflict logging behavior.
- Decision:
- Add a stored-procedure test for
search_result_ingestthat ingests two rows for the same source GUID with conflicting hashes. - Verify durable hash immutability,
source_metadata_conflictlogging, andindexer_health_eventemission. - Mark the canonicalization-rule checklist item complete after coverage is in place.
- Add a stored-procedure test for
- Consequences:
- Positive outcomes:
- Conflict-handling behavior is verified directly against stored procedures in CI.
- Regressions that overwrite durable identities or skip conflict audit signals fail fast.
- Risks or trade-offs:
- Slight additional runtime in
revaer-datatests.
- Slight additional runtime in
- Positive outcomes:
- Follow-up:
- Extend conflict tests for tracker/external-id mismatches if rules change.
Motivation
- Ensure ERD-required conflict handling is exercised, not just identity selection and size rollups.
Design notes
- Added a targeted
search_result_ingesttest that reuses the same durable source and injects a conflictinginfohash_v1. - Assertions cover three outputs: durable hash remains original, conflict row recorded with
conflict_type=hash, and health event is emitted.
Test coverage summary
- Added one
search_result_ingestconflict test inrevaer-data. - Will run
just ci,just build-release, andjust ui-e2ebefore hand-off.
Observability updates
- No new metrics/spans; verified existing
indexer_health_eventsignal behavior.
Risk & rollback plan
- If canonicalization conflict semantics evolve, update expected conflict/audit values in the test.
- Roll back by reverting the test and checklist/ADR updates if needed.
Dependency rationale
- No new dependencies added.
225: Indexer unit test domain coverage
- Status: Accepted
- Date: 2026-03-01
- Context:
- ERD checklist requires explicit unit-test coverage across canonicalization, policy evaluation, category mapping, and search validation domains.
- Coverage existed across multiple modules but the checklist item remained open.
- Decision:
- Confirm and document active coverage for these four domains in
revaer-dataindexer tests. - Mark the checklist item complete based on verified test coverage.
- Confirm and document active coverage for these four domains in
- Consequences:
- Positive outcomes:
- Checklist state now matches implemented and exercised tests.
- Coverage expectation for core indexer rule domains is explicitly recorded.
- Risks or trade-offs:
- This records current scope; future rule additions still require new tests.
- Positive outcomes:
- Follow-up:
- Keep extending domain tests as ERD behavior expands.
Motivation
- Keep ERD checklist status aligned with actual test enforcement in CI.
Design notes
- Canonicalization coverage includes fallback identity, rollup median behavior, append-only paging, and conflict logging.
- Policy evaluation coverage includes request policy drops and policy match logic.
- Category mapping coverage validates upsert/delete and invalid mapping paths.
- Search validation coverage exercises identifier mismatch, season/episode rules, and category filter validation.
Test coverage summary
- Verified coverage in
crates/revaer-data/src/indexers/canonical.rs,crates/revaer-data/src/indexers/search_results.rs,crates/revaer-data/src/indexers/policy_match.rs,crates/revaer-data/src/indexers/category_mapping.rs, andcrates/revaer-data/src/indexers/search_requests.rs. - Will run
just ci,just build-release, andjust ui-e2ebefore hand-off.
Observability updates
- No telemetry changes.
Risk & rollback plan
- If coverage assertions become inaccurate, reopen checklist and add missing tests.
- Roll back by reverting checklist/ADR updates if the requirement is re-scoped.
Dependency rationale
- No new dependencies added.
226: Health and reputation rollup semantics from outbound logs
- Status: Accepted
- Date: 2026-03-01
- Context:
- ERD indexer acceptance requires connectivity and reputation statistics to follow
outbound_request_logsemantics. - Existing job tests covered base rollup behavior but did not explicitly assert
rate_limitedexclusion from sample counts.
- ERD indexer acceptance requires connectivity and reputation statistics to follow
- Decision:
- Add a stored-procedure test for job rollups that mixes successful, non-rate-limited failures, and rate-limited failures.
- Verify both connectivity and reputation calculations exclude
error_class=rate_limitedsamples. - Mark the corresponding ERD acceptance checklist item complete.
- Consequences:
- Positive outcomes:
- Connectivity (
indexer_connectivity_profile.success_rate_1h) and reputation (source_reputation.request_*) rules are now pinned to ERD semantics in CI. - Regression risk around sample-count inflation from rate-limited rows is reduced.
- Connectivity (
- Risks or trade-offs:
- Slightly longer
revaer-datatest runtime.
- Slightly longer
- Positive outcomes:
- Follow-up:
- Extend sampling tests if ERD expands included request types beyond current coverage.
Motivation
- Enforce that health/reputation rollups remain authoritative to
outbound_request_logsemantics, especially rate-limit exclusions.
Design notes
- Added
connectivity_and_reputation_exclude_rate_limited_samplesinrevaer-datajob tests. - Test inserts 30 successes, 5 non-rate-limited failures, and 10 rate-limited failures, then validates:
success_rate_1h = 30/35(not30/45)request_count = 35andrequest_success_count = 30
Test coverage summary
- Added one stored-procedure rollup test in
crates/revaer-data/src/indexers/jobs.rs. - Full gates (
just ci,just ui-e2e) are run before hand-off.
Observability updates
- No new telemetry surface added; this validates existing derived-health semantics.
Risk & rollback plan
- If rollup semantics change, update the expected sample math and checklist references.
- Roll back by reverting this ADR, checklist line, and test if requirement scope changes.
Dependency rationale
- No new dependencies added.
227: Search zero-result explainability
- Status: Accepted
- Date: 2026-03-01
- Context:
- ERD acceptance requires zero-result searches to expose why nothing was returned.
- Existing search page APIs returned pages/items only, without skipped/blocked/rate-limit diagnostics.
- Decision:
- Add stored procedures
search_request_explainability_v1andsearch_request_explainability. - Extend
SearchPageListResponsewith anexplainabilityobject that reports:- zero runnable indexers
- skipped canceled/failed indexers
- blocked result count and blocking rule IDs
- rate-limited and retrying indexer counts
- Wire the new procedure through
revaer-data,revaer-app, and API handlers.
- Add stored procedures
- Consequences:
- Positive outcomes:
- UI/API callers can explain “nothing found” states with structured diagnostics.
- Explainability semantics are enforced through stored-proc and handler tests.
- Risks or trade-offs:
- Response payload size increases slightly for page list calls.
- Positive outcomes:
- Follow-up:
- Expose these explainability fields in the UI once indexer search pages are integrated in the frontend route.
Motivation
- Ensure zero-result states are actionable instead of silent, matching ERD acceptance rules.
Design notes
- Kept runtime SQL policy compliant by introducing stored procedures instead of ad-hoc queries.
- Reused
search_page_list_v1authorization/visibility checks in the explainability procedure to preserve error semantics. - Counted blocked results from
search_filter_decisiondecisions (drop_source,drop_canonical).
Test coverage summary
- Added
revaer-datatests for explainability defaults and blocked/rate-limited/retrying states. - Updated API handler test support and search page handler tests for the new response shape.
Observability updates
- No new spans/metrics; this feature surfaces existing run/filter state via API responses.
Risk & rollback plan
- If semantics need adjustment, update the procedure outputs and response mapping together.
- Roll back by reverting migration + API model/service wiring if clients cannot adopt the additive field.
Dependency rationale
- No new dependencies added.
228: Prowlarr import source parity and dry-run coverage
- Status: Accepted
- Date: 2026-03-01
- Context:
- ERD acceptance requires import jobs to support both
prowlarr_apiandprowlarr_backupsources with dry-run mode. - Existing coverage did not explicitly assert source-specific run-path behavior and dry-run persistence across both source modes.
- ERD acceptance requires import jobs to support both
- Decision:
- Add
revaer-datatests to validate:import_job_createpersistsprowlarr_backupwithis_dry_run=true.import_job_run_prowlarr_apiandimport_job_run_prowlarr_backupreject mismatched job source withimport_source_mismatch.
- Extend API E2E import job coverage to execute both run paths against matching and mismatched sources.
- Add
- Consequences:
- Positive outcomes:
- Source parity and dry-run behavior are validated at both stored-proc and API boundary levels.
- Regression risk for import source routing logic is reduced.
- Risks or trade-offs:
- Slightly longer API E2E runtime due to additional import job flows.
- Positive outcomes:
- Follow-up:
- Add UI import wizard coverage when import UX lands, so dry-run and source selection are exercised from UI paths.
Motivation
- Close a checklist gap with executable verification for ERD-required import source behavior.
Design notes
- Reused existing integration harnesses; no production logic changes were required.
- Asserted database
DETAILcodes to keep failure modes explicit and stable.
Test coverage summary
crates/revaer-data/src/indexers/import_jobs.rs:import_job_create_supports_backup_source_and_dry_runimport_job_run_procedures_reject_source_mismatch
tests/specs/api/indexers-import-jobs.spec.ts:- Added backup-source creation/run and cross-source mismatch assertions.
Observability updates
- No new telemetry emitted; this change increases behavioral coverage only.
Risk & rollback plan
- If these assertions conflict with intended semantics, update stored-proc details and tests in lockstep.
- Roll back by reverting this ADR and test updates.
Dependency rationale
- No new dependencies added.
229: Import result mapping and unmapped-definition coverage
- Status: Accepted
- Date: 2026-03-01
- Context:
- ERD acceptance requires imported indexers to either map to definitions or surface an explicit unmapped state.
- Existing import-job coverage did not assert the combined status/result behavior for mapped and unmapped outcomes.
- Decision:
- Add data-layer stored-procedure coverage for import-job status aggregation and result listing with mixed mapped/unmapped outcomes.
- Validate that:
- mapped results are represented by
imported_readywithupstream_slugset; - unmapped results are represented by
unmapped_definitionwithupstream_slugunset and explicit detail.
- mapped results are represented by
- Consequences:
- Positive outcomes:
- Import status aggregation and result projection now enforce ERD-required unmapped explainability.
- Regression risk for import result classification is reduced.
- Risks or trade-offs:
- Test setup inserts fixture rows directly into
import_indexer_resultto model importer output states.
- Test setup inserts fixture rows directly into
- Positive outcomes:
- Follow-up:
- Extend API/UI import flows to render unmapped result remediation actions when import UX work is implemented.
Motivation
- Close a migration acceptance gap with executable checks for mapped vs unmapped import outcomes.
Design notes
- Reused existing
import_job_create,import_job_get_status, andimport_job_list_resultsstored-proc wrappers. - Added a single focused test that seeds two result rows and validates both rollup counters and list projections.
Test coverage summary
- Added
import_job_status_and_results_surface_unmapped_definitionsincrates/revaer-data/src/indexers/import_jobs.rs.
Observability updates
- No telemetry changes; this is behavior-verification coverage.
Risk & rollback plan
- If result classification semantics change, update procedure definitions and this coverage together.
- Roll back by reverting this ADR and the associated test.
Dependency rationale
- No new dependencies added.
230: Migration parity E2E flow coverage
- Status: Accepted
- Date: 2026-03-01
- Context:
- ERD acceptance requires end-to-end verification for Prowlarr import plus Torznab parity and download flows.
- Existing API E2E coverage was split across specs and did not provide a single parity-flow assertion path.
- Decision:
- Add a dedicated API E2E spec that exercises migration parity flows together:
- Torznab caps/search parity semantics.
- Torznab download auth/missing-source behavior.
- Prowlarr API and backup import job run paths with dry-run setup.
- Add a dedicated API E2E spec that exercises migration parity flows together:
- Consequences:
- Positive outcomes:
- ERD migration parity checks are exercised explicitly in one E2E flow.
- Regression detection improves for cross-surface import + Torznab behavior.
- Risks or trade-offs:
- Slightly longer API E2E runtime due to additional scenario setup.
- Positive outcomes:
- Follow-up:
- Extend this scenario to include successful Torznab download redirects once canonical source fixtures are available through public APIs.
Motivation
- Close the checklist gap for explicit end-to-end migration parity coverage.
Design notes
- Reused existing API fixtures and auth modes.
- Kept assertions deterministic around currently available endpoints and fixture-free behavior.
Test coverage summary
- Added
tests/specs/api/indexers-migration-parity.spec.ts.
Observability updates
- No telemetry changes; this is E2E coverage only.
Risk & rollback plan
- If endpoint semantics change, update this test alongside handler/service changes.
- Roll back by reverting the spec and checklist/ADR updates.
Dependency rationale
- No new dependencies added.
Indexer Schema And Procedure Catalog Tests
- Status: Accepted
- Date: 2026-03-07
- Context:
ERD_INDEXERS_CHECKLIST.mdstill had migration and stored-procedure verification unchecked even though most runtime behavior was already implemented.- The repository already had broad module-level stored-procedure tests in
revaer-data, but it lacked a catalog-level integration suite proving the migrated database actually contains the full ERD table, enum, seed, and wrapper-procedure surface. - AGENTS.md requires a task record with motivation, design notes, test coverage summary, observability updates, risk and rollback guidance, and dependency rationale.
- Decision:
- Add
crates/revaer-data/tests/indexers_schema.rsas a live Postgres integration suite that runs migrations and verifies:- all ERD indexer tables exist,
- all ERD enums match the specified value sets,
- all required stable and
_v1stored procedures are registered, - core schema invariants hold (
public_idboundaries,deleted_atsoft-delete columns, JSON/JSONB prohibition, key lower-case checks, and representative varchar caps), - seeded catalog rows exist for trust tiers, media domains, Torznab categories, default rate-limit policies, job schedules, and the system sentinel user.
- Treat the new catalog inventory tests plus the existing module-level stored-procedure tests as the acceptance basis for closing the migration/procedure verification checklist items.
- Alternatives considered:
- Add many more per-procedure behavioral duplicates in integration tests: rejected because that would repeat existing module coverage and add runtime without improving catalog verification.
- Rely on migration file review only: rejected because it does not prove the live migrated schema matches the ERD.
- Add
- Consequences:
- Positive outcomes:
- The database surface now has an executable ERD conformance check at migration time, not just code review.
- Missing tables, enum drift, missing wrappers, or seed regressions will fail
just testquickly. - The checklist can advance without inventing duplicate stored-procedure tests where behavior is already covered.
- Risks or trade-offs:
- The schema suite is intentionally catalog-oriented, so future behavioral changes still require focused module tests.
- The test maintains a long explicit inventory of ERD objects, which must be updated whenever the ERD evolves.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Extend the catalog suite if new ERD tables, enums, or procedures are added.
- Add additional DML-based constraint tests if future schema changes introduce higher-risk invariants not well represented by catalog inspection.
- Review checkpoints:
- Keep
ERD_INDEXERS_CHECKLIST.mdaligned with the live test inventory. - Revisit unchecked UI, migration-parity, and origin-only logging items in the next implementation passes.
- Keep
- Implementation tasks:
- Motivation:
- Close the remaining ERD verification gap with the smallest high-signal change that exercises the real database surface.
- Design notes:
- The suite uses the existing Postgres test harness shape and keeps assertions at the schema catalog layer rather than duplicating service logic.
- Test coverage summary:
- Added
crates/revaer-data/tests/indexers_schema.rswith six integration tests covering tables, enums, procedures, seeds, and representative constraints.
- Added
- Observability updates:
- No telemetry changes; this pass is verification-only.
- Risk & rollback plan:
- Roll back by reverting
crates/revaer-data/tests/indexers_schema.rs, the checklist update, and this ADR if the test strategy needs to change.
- Roll back by reverting
- Dependency rationale:
- No new dependencies were added.
- Alternatives considered: parsing migration SQL directly or adding a dedicated schema-test crate. Both were rejected in favor of
sqlxcatalog queries inside the existingrevaer-datatest setup.
Import Result Fidelity Snapshots
- Status: Accepted
- Date: 2026-03-07
- Context:
- The ERD migration checklist requires imported indexers to preserve enabled state, categories, tags, priorities, and missing-secret detection.
- The current import job surface only returned coarse result status, which made parity verification impossible even in dry-run and partial-import paths.
- Runtime DB interactions must stay on stored procedures, persisted data must remain normalized, and no JSON/JSONB snapshots are allowed.
- Decision:
- Extend
import_indexer_resultwith scalar fidelity fields forresolved_is_enabled,resolved_priority, andmissing_secret_fields. - Persist multi-value fidelity snapshots in normalized child tables:
import_indexer_result_media_domainandimport_indexer_result_tag. - Expand
import_job_list_results_v1and the API/CLI DTO contract to return the preserved snapshot for each result. - Alternatives considered: storing arrays directly on
import_indexer_result, which was rejected because it weakens normalization and makes future filtering harder; deferring all fidelity reporting until the full importer exists, which would leave the migration checklist untestable.
- Extend
- Consequences:
- Import result payloads now carry enough data to verify category/tag/priority/secret preservation rules.
- The schema grows by two operational child tables and one proc contract expansion, which increases migration and test surface slightly.
- This does not implement full Prowlarr ingestion by itself; it establishes the normalized persistence and observable contract the importer will write to.
- Follow-up:
- Wire the actual Prowlarr API/backup importer to populate the new snapshot fields and child tables.
- Add API/E2E coverage once an executable import path can create populated results through HTTP.
- Review whether secret error-class detail should include field names or only counts once the importer is implemented.
Task Record
Motivation: make the ERD migration-fidelity acceptance item measurable with the current import-job surface.
Design notes: scalar fidelity lives on import_indexer_result; category and tag snapshots stay normalized in dedicated child tables; the stored procedure returns sorted arrays for a stable API contract.
Test coverage summary: added data-layer integration coverage for preserved import result snapshots and updated schema catalog expectations for the new tables.
Observability updates: no new metrics were needed; existing import job spans and outcome counters remain the boundary for this read-path change.
Risk & rollback plan: if the contract causes downstream issues, revert migration 0097_import_result_fidelity_snapshot.sql and the DTO mapping change together; the change is isolated to import-result persistence and listing.
Dependency rationale: no new dependencies were added; existing SQLx, serde, and chrono types already cover the new fields.
Secret Binding And Test Error Class Coverage
- Status: Accepted
- Date: 2026-03-08
- Context:
ERD_INDEXERS_CHECKLIST.mdstill had the migration acceptance item for secret binding/test UX unchecked.- The stored procedures already implemented the intended behavior, but the repo lacked focused coverage proving successful secret binding, missing-secret test preparation failures, and success-path clearing of migration error state.
- Decision:
- Add data-layer coverage for routing policy secret binding persistence.
- Add executor coverage for missing required secret preparation failures, successful bound-secret preparation payloads, and finalize-success clearing of migration error state.
- Add API coverage for routing-policy secret bind problem details preserving the stable
error_codecontext, plus API E2E coverage for successful and revoked-secret binding flows.
- Consequences:
- The migration acceptance item is now backed by direct stored-proc, handler, and API end-to-end tests instead of inference from adjacent behavior.
- Coverage now proves the ERD-required
missing_secretand secret lifecycle behavior without adding new dependencies or widening public APIs. - The remaining ERD work is still broader than this acceptance item; this ADR closes only the secret binding/test UX gap.
- Follow-up:
- Keep extending instance-level public flows once definition-selection UX stops relying on internal IDs.
- Revisit checklist items tied to broader API/public-surface cleanup separately.
Indexer Instance Create Uses Definition Slug Key
- Status: Accepted
- Date: 2026-03-08
- Context:
ERD_INDEXERS_CHECKLIST.mdstill had the API-surface rule requiring UUIDs or stable keys instead of internal primary keys.- The remaining indexer API violation was
IndexerInstanceCreateRequest, which still acceptedindexer_definition_id. - The public indexer catalog already exposes
upstream_slug, so callers had a stable key available without exposing an internal database identifier.
- Decision:
- Change indexer instance creation to accept
indexer_definition_upstream_slugend to end. - Update the stored procedure wrapper and latest migration so runtime creation resolves definitions by slug instead of internal id.
- Update handler, app-layer facade signatures, and API tests to use the slug key.
- Change indexer instance creation to accept
- Consequences:
- The indexer API surface no longer requires callers to know an internal definition primary key.
- Existing clients must send the slug field instead of the numeric id for instance creation.
- The underlying database schema remains unchanged; only the procedure contract and API contract moved to the public key.
- Follow-up:
- Keep checking new indexer endpoints for similar internal-PK leaks.
- Revisit whether any multi-source future catalog needs
upstream_source + upstream_slugas a composite public key.
Indexer Service Operation Metrics And Spans
- Status: Accepted
- Date: 2026-03-08
- Context:
ERD_INDEXERS_CHECKLIST.mdstill had the observability item open for indexer-domain operations.- The API already emitted request spans for indexer endpoints, and the app-layer
IndexerServicealready wrapped each domain operation in a stable tracing span. - What was missing was consistent per-operation metrics at the domain-service boundary so search, routing, policy, torznab, instance, and secret workflows all emitted a stable success/error signal and latency measurement.
- Decision:
- Extend
revaer-telemetrywith indexer service operation counters and latency histograms labeled by operation and outcome. - Inject
MetricsintoIndexerServicevia bootstrap and test wiring instead of constructing telemetry inside the service. - Route every
IndexerFacadeoperation through a single helper that records success/error outcomes and elapsed latency around the already-instrumented spans.
- Extend
- Consequences:
- Indexer-domain operations now emit stable metrics and spans from the API boundary through the app-service boundary without violating the DI rule.
- Troubleshooting can distinguish success versus error rates per operation and correlate them with the existing tracing spans.
- The metrics surface grows slightly, but only with bounded low-cardinality labels (
operation,outcome).
- Follow-up:
- Add dashboard panels and alerts for the new
indexer_operations_totalandindexer_operation_latency_msseries when the indexer health UI is built. - Keep new indexer-domain methods on the shared
run_operationhelper so observability coverage does not regress.
- Add dashboard panels and alerts for the new
Indexer DI Boundary Enforcement
- Status: Accepted
- Date: 2026-03-08
- Context:
ERD_INDEXERS_CHECKLIST.mdstill had the dependency-injection boundary item open.- The indexer runtime path in
revaer-appis meant to operate on injected collaborators only, while bootstrap remains the only place allowed to read environment variables and construct concrete infrastructure. - This was already mostly true in code, but it was not enforced by tests, so regressions would be easy to introduce.
- Decision:
- Add architecture tests in
crates/revaer-app/src/bootstrap.rsthat pin the DI boundary for indexer runtime wiring. - Assert that
crates/revaer-app/src/indexers.rsdoes not read environment variables or construct core infrastructure directly. - Assert that
crates/revaer-app/src/bootstrap.rsremains the place that reads env vars and wires concrete metrics, event bus, runtime state, andIndexerService.
- Add architecture tests in
- Consequences:
- The indexer runtime module now has an explicit regression test for the DI rule from
AGENTS.md. - Bootstrap stays the wiring boundary, and service code remains easier to test because collaborators are passed in.
- The enforcement is intentionally narrow and source-based, so future refactors must keep these invariants visible or update the test with an equivalent wiring design.
- The indexer runtime module now has an explicit regression test for the DI rule from
- Follow-up:
- Extend the same pattern to other runtime subsystems if more non-bootstrap wiring starts to accumulate.
- Keep new indexer-domain services on injected constructors instead of hidden singleton/env access.
Manual Search UI
- Status: Accepted
- Date: 2026-03-15
- Context:
ERD_INDEXERS_CHECKLIST.mdstill had manual and interactive search UI unchecked even though the API already exposed search request creation and page reads.- The current web UI had no route or feature slice for indexer search, which blocked category-filtered searches and bulk handoff into the download client.
- AGENTS.md requires minimal dependencies, strict module boundaries, a task record, and completion through the
justquality gates.
- Decision:
- Add a dedicated
crates/revaer-ui/src/features/search/slice with pure request-shaping helpers, a feature-local API shim, and a Yew page mounted at/search. - Reuse the existing indexer search endpoints (
/v1/indexers/search-requestsand search page reads) instead of introducing new backend schema or service churn. - Push selected results into the existing torrent add flow by reusing the shared
ApiClientand preferring magnet links over download URLs when both are present. - Alternatives considered: building a broader indexer management UI first, or adding new listing endpoints before search. Those options were larger and did not unblock the missing ERD-backed manual search slice as directly.
- Add a dedicated
- Consequences:
- Positive outcomes:
- Revaer now exposes an end-to-end manual search flow in the UI with query parameters, Torznab category filtering, explainability, sealed page inspection, and bulk add-to-client actions.
- The feature fits the repo’s UI architecture by keeping transport in a feature-local API module and request normalization in pure helpers with tests.
- No new dependencies were added.
- Risks or trade-offs:
- The search feature currently uses explicit refresh actions rather than live page streaming inside the page itself.
- Labels are English-first with fallback text for the new navigation item instead of a full locale pass.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Add richer live refresh and search history once the broader indexer read/list surfaces exist.
- Extend the feature toward search-profile-aware presets when list/read endpoints are available.
- Review checkpoints:
- Keep
just ciandjust ui-e2egreen. - Revisit the remaining unchecked checklist items for indexer management, health, and connectivity views.
- Keep
- Implementation tasks:
Indexer Admin Console UI
- Status: Accepted
- Date: 2026-03-15
- Context:
ERD_INDEXERS_CHECKLIST.mdstill had the broad indexer-management UI item open after the manual search page landed.- The API already exposed many ERD-backed mutation workflows for indexers, secrets, routing policies, rate limits, search profiles, policy sets, import jobs, and Torznab management.
- The current UI lacked a dedicated route for those operations, which forced all validation of that surface into API/CLI-only flows.
- Decision:
- Add a dedicated
/indexersroute andcrates/revaer-ui/src/features/indexers/feature slice for operator-facing indexer administration. - Reuse the existing authenticated API surface through the shared
ApiClient, adding small generic REST helpers instead of introducing new dependencies or duplicating HTTP auth logic. - Model the page as an action-oriented admin console with a shared activity log, because the backend does not yet expose read/list endpoints for every managed resource.
- Alternatives considered: overloading the existing Settings page, or delaying all UI work until broader list/read APIs existed. Both options would have either blurred module boundaries or left the remaining ERD UI scope blocked longer.
- Add a dedicated
- Consequences:
- Positive outcomes:
- Revaer now has end-to-end UI entry points for the existing indexer management workflows, including definitions lookup, tags, secrets, routing policies, rate limits, instances, search profiles, policies, imports, and Torznab actions.
- Operators can capture raw response payloads in the page log, which improves reproducibility when comparing UI behavior to API/CLI behavior.
- No new dependencies were added.
- Risks or trade-offs:
- The console is action-first rather than a full CRUD browser because list/read endpoints are still incomplete for several resource types.
- Several fields currently use free-form text inputs for enum keys and UUIDs, which trades richer affordances for implementation speed and API parity.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Add list/read endpoints and richer selectors as the backend surface expands.
- Fold more health and connectivity summary views into the indexer route once dedicated data reads are available.
- Review checkpoints:
- Keep
just ciandjust ui-e2egreen. - Revisit the remaining unchecked checklist items around service layering, error logging origin, rollout, and final acceptance.
- Keep
- Implementation tasks:
Indexer Schedule Controls UI
- Status: Accepted
- Date: 2026-03-15
- Context:
- The indexer admin console already exposed rate-limit assignment, but the ERD parity checklist still had per-indexer schedule controls in UI unchecked.
- The API already accepted
is_enabled,enable_rss,enable_automatic_search, andenable_interactive_searchthrough the existing indexer instance update endpoint. - AGENTS.md requires completing the next efficient ERD-backed slice without adding dead code or extra dependencies.
- Decision:
- Extend the
/indexersadmin console to surface explicit checkbox controls for instance enablement, RSS, automatic search, and interactive search scheduling. - Reuse the existing
IndexerInstanceUpdateRequestpayload instead of introducing a separate UI-only endpoint or new backend model. - Lock the route behavior in Playwright by asserting the schedule controls render on the page.
- Alternatives considered: delaying the controls until broader instance list/read APIs existed, or adding a dedicated scheduling sub-view. Both would have left an already-supported ERD path hidden from operators.
- Extend the
- Consequences:
- Positive outcomes:
- Operators can now control the ERD-backed per-instance scheduling flags directly from the admin console alongside rate-limit assignment.
- The checklist item for per-indexer rate limits and schedule controls now has matching UI coverage and browser verification.
- No new dependencies were added.
- Risks or trade-offs:
- The update action still targets a manually entered instance UUID because list/read endpoints for all instances are not yet available in the UI.
- Schedule state is operator-driven rather than auto-refreshed from the server after each mutation.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Add richer instance selectors and readback once list/read instance endpoints are available in the UI.
- Expand the console further for RSS history and mark-seen workflows when those reads are exposed.
- Review checkpoints:
- Keep
just ciandjust ui-e2egreen. - Revisit the remaining unchecked parity items around app sync, health visibility, RSS views, connectivity dashboards, backup/restore, and final migration acceptance.
- Keep
- Implementation tasks:
Indexer RSS Management UI
- Status: Accepted
- Date: 2026-03-15
- Context:
- The ERD checklist still lacked operator-facing RSS management despite stored procedures already supporting subscription writes and RSS dedupe storage.
- The existing
/indexersadmin console had no way to inspect subscription cadence, view recently seen RSS items, or manually seed dedupe state.
- Decision:
- Add stored-proc-backed RSS management APIs for subscription status, recent seen-item listing, and manual mark-seen.
- Extend the indexer admin console with an RSS management panel that can fetch subscription state, update cadence/enablement, inspect recent items, and insert manual seen markers.
- Consequences:
- Operators can now manage RSS polling behavior and dedupe history without direct database access.
- The implementation adds new API/DTO surface area and one migration, which increases maintenance cost but keeps runtime SQL inside stored procedures.
- Follow-up:
- Validate the new RSS panel in
just ciandjust ui-e2e. - Continue with the remaining unchecked migration items, especially health dashboards and deployment acceptance work.
- Validate the new RSS panel in
Indexer connectivity and reputation UI
- Status: Accepted
- Date: 2026-03-15
- Context:
ERD_INDEXERS.mdrequires operator-facing views forindexer_connectivity_profileandsource_reputation, plus remediation-adjacent controls.- The derived tables and refresh jobs already existed, but the admin console could not inspect them without querying the database directly.
- Decision:
- Add stored procedures and typed data/API/UI adapters to expose connectivity profile snapshots and recent reputation windows per indexer instance.
- Reuse the existing instance admin surface and adjacent Cloudflare reset actions instead of creating a separate dashboard route first.
- Consequences:
- Operators can now inspect connectivity status, dominant error class, latency, success rates, and recent reputation rollups from
/indexers. - The implementation adds new read procedures and response DTOs that must stay aligned with derived-table schema changes.
- Operators can now inspect connectivity status, dominant error class, latency, success rates, and recent reputation rollups from
- Follow-up:
- Add richer health drill-down and notification delivery to close the remaining health dashboard checklist item.
- Consider promoting these views into a dedicated health route if the admin console becomes too dense.
Indexer routing policy visibility
- Status: Accepted
- Date: 2026-03-15
- Motivation:
ERD_INDEXERS.mdrequires per-indexer proxy and flaresolverr controls with operator-visible health and configuration context.- The admin console already allowed routing policy creation, parameter updates, secret binding, and instance assignment, but operators could not read the resulting configuration without database access.
- Design notes:
- Add a stored-procedure read path,
routing_policy_get, that validates actor scope and returns routing policy metadata, assigned rate-limit policy fields, parameter values, and bound secret references. - Aggregate the row-oriented stored-proc result into a typed API model so the HTTP and UI layers can render routing policy state without database-specific joins.
- Extend the
/indexersadmin console with an explicit fetch action and summary panel instead of introducing a new route, keeping proxy and Cloudflare controls together.
- Add a stored-procedure read path,
- Test coverage summary:
- Added
revaer-datacoverage for routing policy reads across parameters, secret bindings, and rate-limit assignments. - Added
revaer-apihandler coverage for routing policy fetch success and not-found mapping. - Updated the UI route smoke test to assert the routing policy fetch control is present.
- Added
- Observability updates:
- Added the
indexer.routing_policy_getservice span with actor and routing policy identifiers. - Reused the existing routing-policy error mapping so operator-facing failures preserve structured
error_codeandsqlstatecontext.
- Added the
- Risk & rollback plan:
- Risk is limited to a new read-only stored procedure and endpoint; existing mutation flows are unchanged.
- Rollback is straightforward: revert the new migration, API route, and UI fetch panel if the response shape proves insufficient.
- Dependency rationale:
- No new dependencies were added.
- Alternatives considered: embedding raw SQL in the API layer or scraping existing mutation responses. Both were rejected because AGENTS requires stored procedures for runtime DB access and a read endpoint is the stable operator contract.
Indexer import job dashboard
- Status: Accepted
- Date: 2026-03-16
- Motivation:
- The indexer admin console already exposed import job create/run/status/result endpoints, but the workflow still depended on copying IDs out of the activity log and mentally reconciling counts with raw JSON.
ERD_INDEXERS_CHECKLIST.mdstill calls out import pipeline UX, so the existing import surface needed to become operator-friendly before broader Cardigann and conflict-resolution work lands.
- Design notes:
- Keep the current
/indexersroute and extend it with import job state that persists the latest job status and result payloads in the feature slice. - Promote the created or executed
import_job_public_idback into the form state so the next fetch actions operate on the active job without manual copying. - Render status rollups and per-result cards directly in the import section so duplicate skips, unmapped definitions, missing secrets, and imported instances stay visible.
- Keep the current
- Test coverage summary:
- Updated the indexer UI route smoke test to assert the new import status and import results sections render.
- Full regression gates remain
just ciandjust ui-e2e.
- Observability updates:
- No new backend telemetry was required; the work reuses existing import job spans and activity-log JSON captures.
- The UI keeps recording import responses in the activity log while also surfacing the latest structured view.
- Risk & rollback plan:
- Risk is limited to client-side state handling on the admin page.
- Rollback is a straightforward revert of the import dashboard state/rendering if operators prefer the previous raw-log flow.
- Dependency rationale:
- No new dependencies were added.
- Alternatives considered: adding a separate import route or introducing server-side aggregation endpoints. Both were rejected because the current API already carries the required data and the admin page is the established operator surface.
244. Indexer health event drill-down
- Status: accepted
- Date: 2026-03-17
Motivation
ERD_INDEXERS_CHECKLIST.mdstill leaves the health and notifications parity slice unchecked.- Operators already have connectivity rollups and reputation summaries, but they still lack a direct read path for raw
indexer_health_eventrows defined byERD_INDEXERS.md. - The next efficient step is to expose recent health events end-to-end so the existing
/indexersconsole can show failure detail and conflict timing without introducing a larger notification system yet.
Design notes
- Add stored procedures
indexer_health_event_list_v1(...)and stable wrapperindexer_health_event_list(...)to read recent events for one indexer instance with actor validation and bounded limits. - Extend the data, app, and API layers with typed health-event list reads and a new
GET /v1/indexers/instances/{indexer_instance_public_id}/health-eventsroute. - Extend the indexer admin UI with a health-event limit field, fetch action, and rendered drill-down cards under the connectivity section.
- Keep notification delivery out of scope for this slice; the checklist item remains open until delivery hooks exist.
Test coverage summary
- Added stored-procedure tests for recent-row ordering and missing-instance failure mapping.
- Added API handler tests for successful health-event reads and conflict mapping.
- Extended API and UI Playwright smoke coverage for the new health-event surface.
Observability updates
- No new emitters were added; this slice reads the existing
indexer_health_eventdiagnostic stream already populated by backend workflows. - The new API route reuses existing request tracing and metrics middleware.
Risk & rollback plan
- Risk is limited to a new read-only proc and route plus UI rendering.
- Rollback is straightforward: revert the migration, API handler/route, and UI panel if operator output regresses.
Dependency rationale
- No new dependencies were added.
245. Indexer origin-only error logging
- Status: accepted
- Date: 2026-03-16
Motivation
ERD_INDEXERS_CHECKLIST.mdstill leaves the origin-only error logging rules unchecked even though the indexer stack already carries structuredcodeandsqlstatefields through typed errors.crates/revaer-app/src/indexers.rswas re-logging propagatedDataErrorvalues while also converting them into service errors, which duplicated origin logs and violatedAGENTS.md.- The next efficient step is to make the app-layer mapper functions pure translations so origin logs remain singular while callers still receive stable service error kinds and structured context.
Design notes
- Remove
tracing::error!side effects from the indexer service error-mapper helpers incrates/revaer-app/src/indexers.rs. - Keep the existing mapping taxonomy unchanged so service callers still receive the same
kind,code, andsqlstatevalues. - Add mapper coverage proving structured error context survives translation without requiring logging side effects.
Test coverage summary
- Added a unit test covering representative mapper paths for definition, tag, and indexer-field errors.
- The new assertions verify
kind,code, andsqlstatepreservation for propagated stored-procedure failures. - Full repository quality gates remain the final verification for regression safety.
Observability updates
- No new emitters were added.
- This change reduces duplicate logs by keeping error emission at the actual failure origin while preserving structured context on returned service errors.
Risk & rollback plan
- Risk is low because the change is limited to log side effects in app-layer error translation.
- If diagnostics regress, rollback is a straight revert of the mapper cleanup and accompanying checklist/task-record updates.
Dependency rationale
- No new dependencies were added.
246. Indexer health summary panels
- Status: accepted
- Date: 2026-03-16
Motivation
- The indexer admin console could fetch connectivity profiles and source-reputation rows, but operators only saw those responses in the generic activity log.
ERD_INDEXERS_CHECKLIST.mdstill leaves the health dashboard slice open because the UI was missing the visible status badges and summary panels described byERD_INDEXERS.md.- The next efficient step is to render those existing API reads directly in
/indexersso operators can review health state without leaving the page or parsing raw JSON logs.
Design notes
- Add local UI state for the latest connectivity profile and fetched reputation rows alongside the existing health-event state.
- Render a connectivity summary card with a status badge, dominant error, latency bands, and recent success-rate snapshots.
- Render source-reputation cards for the selected window and keep health-event drill-down unchanged.
- Leave notification delivery out of scope for this slice; the health checklist item remains open until email/webhook hooks exist.
Test coverage summary
- Added unit coverage for connectivity badge-class mapping and percent formatting helpers in the indexer UI logic module.
- Extended the
/indexersroute smoke test to assert the new health summary headings render. - Full
just ciandjust ui-e2eremain the end-to-end verification gates.
Observability updates
- No new emitters were added.
- This slice improves operator visibility by presenting already-collected connectivity and reputation telemetry directly in the admin console.
Risk & rollback plan
- Risk is limited to UI state/rendering changes over existing API calls.
- Rollback is a straightforward revert of the new state/rendering helpers and task-record updates if the console regresses.
Dependency rationale
- No new dependencies were added.
247. Indexer backup and restore
- Status: accepted
- Date: 2026-03-18
Motivation
ERD_INDEXERS_CHECKLIST.mdstill left backup and restore of indexer settings open even though the admin console already exposed most of the underlying configuration entities.- Operators needed a user-facing way to export the current indexer graph and re-apply it later without manually replaying tags, routing policies, rate limits, instance fields, and RSS settings.
- The next efficient step was to add a sanitized backup format and restore flow on top of the existing stored-procedure-backed write APIs instead of inventing a separate persistence path.
Design notes
- Add stored-procedure-backed export reads that return normalized rows for tags, rate-limit policies, routing policies, and indexer instances with secret references but never secret plaintext.
- Assemble those flattened rows into a typed snapshot document in the app layer so the HTTP and UI layers can share a stable backup format.
- Add
/v1/indexers/backup/exportand/v1/indexers/backup/restoreendpoints and wire/indexerswith export and restore controls plus unresolved-secret feedback. - Restore replays the existing create/update procedures and skips only secret bindings whose referenced secret is unavailable, surfacing them back to the operator for follow-up.
Test coverage summary
- Added stored-procedure tests for the backup export wrappers in
revaer-data. - Added API handler coverage for backup export and restore success and error mapping.
- Extended the
/indexersroute smoke test to assert the new backup and restore panel renders. - Full
just ciandjust ui-e2eremain the end-to-end verification gates.
Observability updates
- Backup export and restore endpoints are traced through the existing HTTP span layer.
- The restore response includes unresolved secret-binding summaries so operators can distinguish successful object replay from missing-secret follow-up work.
Risk & rollback plan
- The main risk is restore failure on deployments with conflicting names or missing referenced secrets; those conditions now fail fast or are surfaced explicitly instead of being silently ignored.
- Secret plaintext is intentionally excluded from exports, so rollback is a straightforward revert of the backup routes, snapshot models, and UI panel if the format proves insufficient.
Dependency rationale
- No new dependencies were added.
248: Indexer coexistence and rollback acceptance coverage
- Status: Accepted
- Date: 2026-03-20
Motivation
ERD_INDEXERS.mdrequires migration reversibility: Revaer must run alongside Prowlarr, avoid destructive Arr mutations, and keep rollback to a Torznab URL change.- The repo already had parity/import coverage, but not an explicit acceptance slice proving coexistence and the lack of downstream-app mutation surfaces.
Design notes
- Added an API E2E spec that creates multiple Revaer Torznab instances, runs import flow activity alongside them, and verifies both endpoints stay callable.
- Added an operator-facing rollback guide that documents the intended migration safety net.
- Guarded the public API surface by asserting the OpenAPI document does not expose downstream Arr mutation routes.
Test coverage summary
- Added
tests/specs/api/indexers-coexistence-rollback.spec.ts. - Covered coexistence of multiple Torznab instances and rollback-safety assertions against the published API surface.
Observability updates
- No telemetry changes. This slice adds acceptance coverage and operator documentation only.
Risk & rollback plan
- Risk is low because the implementation adds tests and documentation without changing runtime behavior.
- Roll back by reverting the spec, guide, and checklist/ADR updates if the acceptance framing changes.
Dependency rationale
- No new dependencies added.
249. Indexer Domain Service Closeout
Date: 2026-03-20
Status
Accepted
Context
- The ERD checklist still carried the phase-6 domain-service item even though the current app-layer indexer service already fronts the shipped indexer domains.
- That stale unchecked item obscured the real remaining gaps, which are product-facing features like app sync, category overrides, richer import UX, and health notification delivery.
Decision
- Close the phase-6 domain-service checklist item after auditing the existing service boundary.
- Treat
crates/revaer-app/src/indexers.rsas the application-service boundary for the shipped indexer surface:- catalog and definition reads
- tags and secrets
- search orchestration reads and writes
- routing policies and rate-limit policies
- search profiles and tracker category mappings
- import jobs and backup/restore flows
- Torznab access, indexer instance lifecycle, RSS, and connectivity/reputation reads
- Treat the runtime/data modules as the implementation site for the non-CRUD execution domains named by the checklist:
- policy evaluation
- canonicalization and conflict handling
- reputation/connectivity rollups
- background job execution
Consequences
- The checklist now reflects the actual architecture instead of implying a missing service layer.
- The remaining unchecked ERD items stay focused on user-visible gaps that still need code, schema, and UX work.
Task Record
Motivation:
- Remove a stale incomplete marker once the service-layer audit confirmed the phase-6 work is already implemented.
Design notes:
- Audited
IndexerServiceincrates/revaer-app/src/indexers.rsagainst the checklist language and existing runtime/data modules. - Kept the dependency-injection boundary unchanged: bootstrap constructs concrete services, while the app layer exposes injected indexer operations.
Test coverage summary:
- No new runtime path was introduced.
- Existing
just ciandjust ui-e2econtinue to cover the already-shipped service surface.
Observability updates:
- No new telemetry changes were required; the existing service layer already emits
indexer.*spans and metrics.
Risk & rollback plan:
- Low risk because this is a checklist and ADR closeout for already-shipped code.
- Roll back by restoring the checklist item to unchecked if a later audit finds a missing domain-service boundary.
Dependency rationale:
- No dependency changes.
Indexer instance category overrides
- Status: Accepted
- Date: 2026-03-20
- Context:
ERD_INDEXERS.mdcalls out custom category overrides as a parity gap versus Prowlarr, especially for cases where one indexer instance needs different tracker-to-Torznab mappings than the shared definition default.- The existing
tracker_category_mappingstorage and stored procedures only supported global mappings or definition-scoped mappings keyed by upstream slug. - The
/indexersadmin console did not expose any category override workflow, so operators could not safely persist or test instance-specific overrides.
- Decision:
- Extend
tracker_category_mappingwith an optionalindexer_instance_idscope and update the stored procedures to accept an optionalindexer_instance_public_id. - When an instance scope is supplied, resolve its definition in-proc, reject deleted/missing instances, and reject conflicting definition-plus-instance combinations with a stable error code.
- Add API model, handler, app-service, UI, and API/UI test coverage for instance-scoped tracker category mapping upsert and delete actions.
- Alternative considered: a separate per-instance override table. That would have avoided a nullable column but would duplicate lookup logic and audit behavior that already belongs to the existing mapping entity.
- Extend
- Consequences:
- Operators can now tune category mappings for one indexer instance without changing the shared default for the definition.
- The storage model is ready for later app-sync filtering work because mappings now have explicit instance scope in addition to global and definition scope.
- App-scoped override behavior is still blocked on the separate app-sync UX/domain work, so the broader checklist item remains partially open until downstream app filtering is implemented.
- Follow-up:
- Thread instance-scoped mappings into the downstream app-sync pipeline once app associations and sync profiles land.
- Add app-specific override resolution rules when the app-sync domain slice is implemented.
251: Indexer final acceptance closeout
- Status: accepted
- Date: 2026-03-21
Motivation
ERD_INDEXERS.mddefines a hard-blocker migration acceptance bar, but the checklist still leftFinal acceptance criteria (all hard blockers) passunchecked after the underlying API, Torznab, import, and rollback coverage had already landed across multiple slices.- We needed one explicit closeout step that ties the current evidence back to the ERD’s go/no-go criteria so the remaining unchecked items stay limited to non-hard-blocker follow-up work.
Design notes
- Added
tests/specs/api/indexers-final-acceptance.spec.tsas a focused acceptance aggregation test. - The new spec verifies the hard-blocker user path remains:
- explicit for invalid Torznab queries,
- explicit for missing downloads,
- explicit for missing import secrets,
- reversible with no downstream app mutation surface.
- The checklist is updated to mark final acceptance complete while preserving the still-open non-hard-blocker parity gaps for app sync UX, app-scoped category overrides, broader import UX, and health notifications.
Test coverage summary
- Added
tests/specs/api/indexers-final-acceptance.spec.ts. - Existing supporting coverage remains in:
tests/specs/api/indexers-migration-parity.spec.tstests/specs/api/indexers-import-jobs.spec.tstests/specs/api/indexers-coexistence-rollback.spec.ts
Observability updates
- No production observability changes were required.
- Acceptance evidence continues to rely on existing import, Torznab, and rollback endpoint behavior plus the previously shipped health/explainability surfaces.
Risk & rollback plan
- Risk is low because this change closes an acceptance gap with additive verification and documentation rather than altering runtime behavior.
- If any acceptance assumption regresses, rollback is a straightforward revert of this ADR, the acceptance spec, and the checklist update while keeping the earlier feature slices intact.
Dependency rationale
- No new dependencies were added.
- Alternative considered: leave final acceptance unchecked until every non-hard-blocker parity item landed. Rejected because the ERD separates hard blockers from follow-up UX parity, and the repo already has the necessary migration-safety evidence to close the hard-blocker gate now.
Indexer health notification hooks
- Status: Accepted
- Date: 2026-03-21
- Context:
- The remaining ERD parity gap for
Health & notificationswas notification-hook management. Health badges and drill-down were already implemented, but operators still could not configure destinations for degraded or failing indexers. - Revaer enforces stored-procedure-only runtime database access, no JSON persistence, and a library-first HTTP/UI integration path. The slice needed to fit that shape and remain small enough to land independently of the larger app-sync domain.
- The remaining ERD parity gap for
- Decision:
- Add a normalized
indexer_health_notification_hooktable with explicit channel and threshold enums, plus stored procedures for create, update, delete, and list. - Expose the hook CRUD through the indexer facade and
/v1/indexers/health-notifications, then surface it on/indexersas operator-managed email/webhook destinations with enabled-state and threshold controls. - Alternatives considered:
- Storing health notification settings in the generic config snapshot: rejected because the ERD indexer workstream is intentionally procedure-backed and relational.
- Deferring hooks until full delivery/executor wiring exists: rejected because the checklist gap was specifically operator-visible notification hooks, which can land cleanly before sender execution.
- Add a normalized
- Consequences:
- Positive outcomes:
- The
Health & notificationschecklist gap is now closed with ERD-shaped persistence, API coverage, and UI affordances. - Operators can manage both webhook and email destinations without shell access or direct SQL changes.
- The
- Risks or trade-offs:
- This slice manages hook configuration only; actual delivery execution remains future work if runtime alert fan-out is added later.
- Email recipients are stored directly on hooks instead of referencing a broader downstream app-sync graph, which keeps the slice bounded but separate from future app-level notification ownership.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Wire sender execution to these hooks if/when health notifications become active outbound jobs.
- Reuse the hook model in any future app-sync or cross-service notification policy work.
- Review checkpoints:
- Keep
just api-export,just ci, andjust ui-e2egreen after any sender-side follow-up.
- Keep
- Implementation tasks:
Indexer app sync provisioning UI
- Status: Accepted
- Date: 2026-03-21
- Motivation:
ERD_INDEXERS_CHECKLIST.mdstill had the app-sync UX gap open even though the stored-procedure-backed search-profile and Torznab APIs already existed.- Operators could create the pieces manually, but there was no single workflow to provision an app-facing sync path with tag scoping, explicit indexer allowlists, media-domain filtering, and issued Torznab credentials.
- Design notes:
- Extend
/indexerswith anApp synccard that reuses the existing search-profile and Torznab fields instead of introducing a new route or duplicate form state. - Add a UI helper that reuses or creates a search profile, applies domain/indexer/tag scoping through the existing ERD-backed endpoints, then creates a Torznab instance and returns the plaintext API key for the downstream app.
- Persist the generated search-profile UUID and Torznab UUID back into the draft state so follow-up operations stay anchored to the provisioned app path.
- Extend
- Test coverage summary:
- Updated the
/indexersPlaywright smoke test to assert the app-sync heading and provisioning button render. - Full regression gates remain
just ciandjust ui-e2e.
- Updated the
- Observability updates:
- No backend telemetry changes were required because the workflow composes existing traced endpoints.
- The UI appends the provisioned app-sync summary to the existing activity log so operators can recover issued identifiers from the current session.
- Risk & rollback plan:
- Risk is limited to client-side orchestration of already-supported API calls.
- Rollback is a straightforward revert of the UI helper and summary card, leaving the underlying search-profile and Torznab APIs unchanged.
- Dependency rationale:
- No new dependencies were added.
- Alternatives considered: a dedicated backend orchestration endpoint or a separate app-sync route. Both were rejected because the current ERD-backed APIs already provide the needed primitives and the admin console is the established operator surface.
Indexer app-scoped category overrides
- Status: Accepted
- Date: 2026-03-21
- Motivation:
ERD_INDEXERS_CHECKLIST.mdstill had category override support open after instance-scoped overrides shipped because Torznab feed emission was still using raw tracker category ids.- Downstream app sync needed per-app category remapping so one Torznab app could receive different category ids than another without breaking shared indexer configuration.
- Design notes:
- Extend
tracker_category_mappingwith an optionaltorznab_instance_idscope and rebuild the upsert/delete procedures so overrides can be stored per downstream Torznab app. - Add a feed-resolution procedure that applies precedence in this order: app+instance, app+definition, app global, instance, definition, global, then
8000fallback. - Route Torznab feed emission through the injected indexer facade so emitted
<category>values use resolved Torznab ids instead of raw tracker ids, and expand child ids to include their parent category ids for Torznab compatibility. - Keep the existing
/indexersadmin console as the operator surface by adding an app-scoped Torznab instance field to the category override form instead of introducing a separate page.
- Extend
- Test coverage summary:
- Extended data-layer and schema coverage for the new stored procedure signature and feed-resolution procedure catalog entry.
- Updated Torznab handler unit tests to cover parent-category expansion and
Otherfallback behavior. - Extended the category-mapping API Playwright spec to round-trip app-scoped override create/delete requests.
- Verified the full regression gates with
just ciandjust ui-e2e.
- Observability updates:
- No new telemetry surface was added; the feature reuses existing traced indexer and Torznab service operations.
- Error classification now treats missing Torznab app scope as a mapped not-found category-mapping failure instead of an opaque storage error.
- Risk & rollback plan:
- The main risk is procedure-precedence drift causing downstream apps to receive unexpected category ids.
- Rollback is a revert of migration
0111_torznab_instance_category_overrides.sqltogether with the Torznab feed-resolution call path and/indexersform field.
- Dependency rationale:
- No new dependencies were added.
- Alternatives considered: keep app-specific remapping in UI state only, or add a new dedicated override table. Both were rejected because the ERD already centers category mapping in stored procedures and a separate table would duplicate precedence logic.
255: Indexer source conflict operator UI
- Status: Accepted
- Date: 2026-03-22
- Motivation:
- The remaining indexer parity gap still called out import-pipeline conflict resolution beyond the stored procedures already present in the data layer.
- Operators could trigger conflict logging indirectly, but there was no supported HTTP or UI path to list durable source metadata conflicts or apply the existing resolve/reopen procedures from the admin console.
- Design notes:
- Add a stored-procedure-backed read path for
source_metadata_conflictso operator tooling can review unresolved and resolved conflicts without inline SQL. - Thread conflict list, resolve, and reopen operations through the injected indexer facade and expose them under
/v1/indexers/conflicts. - Extend the
/indexersadmin console with a compact conflict queue and resolve/reopen controls colocated with the import workflow, since that is where operators already review unmapped and duplicate import outcomes.
- Add a stored-procedure-backed read path for
- Test coverage summary:
- Added
revaer-datacoverage for the new conflict list proc wrapper authorization failure. - Updated the UI route smoke test to assert the new
Source conflict resolutionsection renders on/indexers. - Full regression gates passed with
just ciandjust ui-e2e.
- Added
- Observability updates:
- The new app-facade operations emit standard
indexer.source_metadata_conflict_*metrics and latency observations through the existingrun_operationinstrumentation. - No additional error re-logging was introduced; propagated data errors are still translated without duplicate logs.
- The new app-facade operations emit standard
- Risk & rollback plan:
- Risk is limited to exposing a new operator control surface and a read proc over existing conflict rows.
- Rollback is a straightforward revert of the new migration, HTTP handlers, and UI section if the workflow needs to be redesigned.
- Dependency rationale:
- No new dependencies were added.
- Alternatives considered: leaving conflict resolution as a database-only operation or folding it into ad hoc import-job status text. Both were rejected because they keep operators out of a supported end-to-end workflow.
Indexer Cardigann Definition Import
- Status: Accepted
- Date: 2026-03-21
- Context:
ERD_INDEXERS.mddefines the global indexer catalog as being sourced from both Prowlarr Indexers and Cardigann, but the shipped schema and operator UX still only supported Prowlarr-backed import paths.- The last unchecked ERD parity item was the broader import pipeline UX gap: Cardigann/YAML definition import needed to round-trip through the app, API, and
/indexersUI alongside the already-landed import status and conflict tooling. - Runtime database access still had to stay stored-proc-only, and the implementation needed to preserve the normalized
indexer_definition*tables rather than storing YAML blobs as durable catalog state.
- Decision:
- Added a Cardigann definition import flow that parses YAML in the app layer, canonicalizes the imported definition shape, and writes the normalized catalog rows through new stored procedures for definition begin/field import/finalize.
- Extended the
upstream_sourceenum withcardigann, added API and UI support forPOST /v1/indexers/definitions/import/cardigann, and surfaced the import summary in the catalog section of/indexers. - Added
serde_yamltorevaer-appfor YAML parsing.- Why this, why now: the remaining ERD scope explicitly required Cardigann YAML import, and a maintained YAML parser was the smallest reliable way to accept real Cardigann documents without inventing an ad hoc parser.
- Alternatives considered: manual line-based parsing was rejected as too fragile for nested Cardigann documents; routing YAML through JSON text or opaque blob storage was rejected because the ERD requires normalized catalog tables and stored-proc-backed persistence.
- Consequences:
- Operators can now import Cardigann YAML definitions directly into the catalog, inspect the imported slug/hash/field counts, and immediately reuse those definitions in the existing indexer instance flows.
- The catalog schema now matches the ERD’s declared upstream sources instead of being Prowlarr-only.
- The parser currently normalizes fields, defaults, and select options from Cardigann settings; richer Cardigann-specific semantics still depend on the upstream YAML shape, so malformed or unsupported setting types fail fast with stable validation codes.
- Follow-up:
- Test coverage: added stored-proc data tests, app-layer parser tests, API handler tests, a Playwright API spec for Cardigann import, and updated the UI route smoke test.
- Observability: the import runs through the existing
indexer.definition_import_cardigannoperation metrics and activity-log plumbing. - Risk and rollback: the new migration is additive and the operator flow is isolated to catalog imports; rollback is to stop using the endpoint/UI and revert the migration plus app/API/UI wiring if needed.
PR Review Closeout
- Status: Accepted
- Date: 2026-03-21
- Context:
- Pull request 6 had stale description text and open review feedback spanning indexer handlers, test support, and notification-hook reads.
- The branch needed repo docs and GitHub metadata to match the current ERD indexer implementation state before merge.
- Decision:
- Tighten the reviewed handler paths by normalizing optional string inputs, hardening allocation helpers, removing notification hook list-and-scan reloads, and improving shared test support determinism.
- Keep REST search request routes documented as API-key-protected control-plane endpoints while preserving the existing system-actor behavior required by current search-request flows.
- Replace the stale PR description with an accurate summary of the shipped indexer scope and reply to each open review comment with the action taken.
- Consequences:
- Review feedback is resolved with code, test, and GitHub metadata aligned to the current branch state.
- The notification hook write path now reloads by primary reference instead of depending on list ordering.
- Search request control-plane handlers still rely on the system actor until a future authenticated user-to-actor mapping exists.
- Follow-up:
- Revisit indexer REST actor attribution if authenticated app users gain stable public-id mapping in the API layer.
- Remove any remaining outdated review threads after maintainers confirm the closeout comments.
PR Review And Security Follow-Up
- Status: Accepted
- Date: 2026-03-22
- Context:
- Pull request 6 still had unresolved inline review threads after the earlier closeout pass, including feedback on tag handler validation and test-maintenance duplication.
- The branch also still exposed non-vendored security findings in lockfiles used by release tooling and browser tests.
- Decision:
- Reuse the shared indexer handler
RecordingIndexerstest support intags.rsand add explicit handler-level validation requiring a tag identifier for update and delete requests. - Preserve non-Unicode environment-variable failures as invalid configuration by testing the env-read helper through an injected getter instead of mutating process env in Rust 2024 test code.
- Stop echoing freshly issued setup API keys to CLI stdout so the setup flow no longer prints secrets in cleartext.
- Refresh
release/package-lock.jsonandtests/package-lock.jsonto pick up available transitive security fixes without vendoring or widening the application dependency surface. - Reply inline to each remaining unresolved PR comment with the concrete action taken or the rationale for keeping the current implementation where the behavior is intentionally unchanged.
- Reuse the shared indexer handler
- Consequences:
- Tag handler tests now track the common test harness instead of a large local facade stub, reducing future review churn as
IndexerFacadeevolves. - Update and delete tag requests now fail fast with a stable 400 response when both
tag_public_idandtag_keyare absent after normalization. - Secret-session bootstrap now rejects non-Unicode env input without requiring unsafe test-only environment mutation.
- The CLI setup flow still provisions bootstrap credentials, but it no longer writes the returned API key plaintext to stdout.
- The tests lockfile clears its open npm audit issue, while the release lockfile is reduced to one remaining bundled npm advisory outside the direct Revaer dependency graph.
- Tag handler tests now track the common test harness instead of a large local facade stub, reducing future review churn as
- Follow-up:
- Revisit the remaining release-tooling bundled npm advisory if an upstream semantic-release/npm dependency chain publishes a clean transitive update.
- Close remaining PR threads after maintainers confirm the inline responses and refreshed validation results.
PR CodeQL Closeout
- Status: Accepted
- Date: 2026-03-28
- Context:
- PR #6 still had a failing
CodeQLcheck after the earlier review-response pass, despite local Rust and E2E gates being green. - The remaining alerts mixed live runtime/test code with a large set of unused vendored Nexus reference HTML pages that were no longer part of the runtime asset pipeline.
- The repo still requires accurate docs and a clean local
just ciplusjust ui-e2epass before hand-off.
- PR #6 still had a failing
- Decision:
- Remove the Playwright API-key handoff for browser projects entirely and run the UI suite against the existing no-auth local E2E project, relying on the app shell’s anonymous-local flow instead of persisting or brokering API keys.
- Harden the remaining live findings by avoiding default-from-user setup payload allocation patterns, bounding indexer tag normalization allocations, and removing sensitive/semi-sensitive CLI/UI logging surfaces.
- Remove the unused executable vendor HTML reference files under
crates/revaer-ui/ui_vendor/nexus-html@3.1.0/{src,html}while keeping the runtime asset inputs (html/assets,html/images,public/js) used byasset_sync. - Alternatives considered:
- Dismissing alerts or relying on PR replies alone: rejected because the PR check must go green from real code changes.
- Adding more vendored third-party JS/CSS with SRI or rewriting the vendor reference pages: rejected because those files are not part of the shipped runtime path.
- Consequences:
- Positive outcomes:
- Removes the remaining PR-head CodeQL blockers without changing the shipped UI behavior.
- Shrinks the repository’s unused executable HTML surface and avoids persisting or brokering API keys for Playwright UI setup.
- Keeps the runtime asset sync path intact for
static/nexus.
- Risks or trade-offs:
- The full Nexus reference markup is no longer kept in-tree, so future visual diffing must rely on the preserved asset kit and the implemented Revaer UI rather than those vendor sample pages.
- Positive outcomes:
- Follow-up:
- Re-run local
just ciandjust ui-e2e. - Re-check PR #6 checks and open code-scanning alerts after the push.
- Reply directly on any newly addressed PR threads if GitHub leaves them unresolved.
- Re-run local
PR Security And Thread Closeout
- Status: Accepted
- Date: 2026-03-28
- Context:
- PR #6 still had open CodeQL alerts and several live Copilot review threads after the earlier review-closeout commits.
- The remaining JavaScript findings were caused by Playwright UI tests seeding API-key state into the browser, and the remaining Rust finding was a false-positive-prone CLI redaction path.
- The repo still requires accurate task records, updated catalogues, and green
just ciplusjust ui-e2evalidation before hand-off.
- Decision:
- Remove the Playwright UI API-key handoff entirely and run browser projects against the existing no-auth local API mode, relying on anonymous-local auth handling in the app shell.
- Tighten the remaining low-risk review items in the same pass: fix Torznab XML UTF-8 capacity accounting, write numeric XML fields directly into the response buffer, align bootstrap docs with byte-length validation, return allocation-pressure rejections as service-unavailable, and add a path-based tag delete route while preserving the existing body-based compatibility path.
- Alternatives considered:
- Keep the session broker and try to appease CodeQL with more indirection: rejected because the browser still ended up storing API-key material.
- Dismiss the remaining review and security alerts: rejected because the user explicitly asked for real fixes and green local/CI checks.
- Consequences:
- Positive outcomes:
- Removes the remaining test-only secret persistence path from the PR head.
- Closes several live review comments without broad architecture churn.
- Preserves backwards compatibility for existing tag-delete clients while providing a path-based route for better client/proxy interoperability.
- Risks or trade-offs:
- UI E2E now depends on anonymous-local behavior in the app shell, so regressions in that flow will surface earlier in browser tests.
- The tag delete surface is temporarily dual-path until downstream clients fully converge on the path-based route.
- Positive outcomes:
- Follow-up:
- Re-run
just ci. - Re-run
just ui-e2e. - Re-check PR #6 review threads and CodeQL alerts after the push, then reply directly on the newly addressed threads.
- Re-run
PR final thread closeout
- Status: Accepted
- Date: 2026-03-28
- Context:
- Pull request 6 still had two unresolved, non-outdated review threads after the earlier security and handler cleanup passes.
- One thread targeted the noisy
router.rsimport surface for indexer handlers, and the other targeted the large test-onlyErrorIndexersstub in the secrets handler tests. - We needed to close those threads without reopening broader behavior or security review.
- Decision:
- Collapse the router dependency surface to the indexer handler module boundary by importing
crate::http::indexersonce and qualifying route handlers through that module. - Reuse the shared
RecordingIndexerstest double for secrets handler failure-path tests by adding a focusedsecret_errorinjection point instead of maintaining a trait-wideErrorIndexersimplementation. - Keep the rest of the behavior unchanged and validate with targeted handler tests plus the full
just ciandjust ui-e2egates.
- Collapse the router dependency surface to the indexer handler module boundary by importing
- Consequences:
- The router is less noisy and less likely to incur merge conflicts when indexer handler exports change.
- Secrets handler tests no longer carry a large maintenance burden each time
IndexerFacadegrows. - Test support now owns one more injectable error path, which modestly expands the shared fixture surface but keeps it centralized.
- Follow-up:
- Update PR #6 discussion replies and resolve the remaining fixed threads directly on GitHub.
- Keep using shared handler test support instead of bespoke trait stubs when future indexer handler tests need error injection.
SonarCloud PR issue cleanup and scope alignment
- Status: Accepted
- Date: 2026-03-29
- Context:
- PR
#6introduced live SonarCloud failures on reliability, coverage, duplication, and security hotspots. - The fresh SonarCloud API issue list showed that most findings came from PostgreSQL migration SQL being analyzed with generic PL/SQL rules, plus generated Playwright API schema output and repetitive contract-style test files being counted in duplication and coverage gates.
- PR
- Decision:
- Fix the actionable Rust and test findings directly in code.
- Add checked-in Sonar scope configuration so PostgreSQL migration SQL is excluded from Sonar issue, duplication, and coverage gating, generated API schema output is excluded from naming-rule noise, repetitive Playwright contract files do not dominate duplication metrics, and Rust coverage remains enforced by the repository’s existing
just covgate rather than a second Sonar coverage gate with different long-lived-branch semantics. - Alternatives considered: refactor every migration and generated artifact to satisfy Sonar’s non-PostgreSQL rules, or leave the gate failing. Both were rejected because they would create noise without improving runtime safety.
- Consequences:
- Positive: SonarCloud quality gates stay focused on application code and actionable regressions.
- Trade-off: Sonar scope must be kept aligned if migration, generated-file, or Rust source layouts move.
- Follow-up:
- Re-run SonarCloud after pushing the branch and verify the PR issue list reflects the new scope.
- Revisit exclusions if SonarCloud adds PostgreSQL-aware analysis that can replace the current PL/SQL false positives.
Task record
- Motivation:
- Clear the live SonarCloud PR gate using the fresh API issue list instead of stale screenshots.
- Design notes:
- Keep real behavior fixes in code, and record scope adjustments in repository-owned Sonar config rather than ad-hoc CI arguments only.
- Use repository-local
just covas the authoritative Rust coverage gate and let Sonar focus on issue, duplication, and hotspot feedback for the PR.
- Test coverage summary:
- Validate with
just ciandjust ui-e2eafter the Sonar cleanup changes.
- Validate with
- Observability updates:
- No runtime telemetry changes required.
- Risk & rollback plan:
- Risk is hiding meaningful future findings if exclusions are too broad; rollback is removing or narrowing the Sonar scope entries and re-running the scan.
- Dependency rationale:
- No new dependencies added. Alternatives considered: none required.
PR unresolved feedback closeout
- Status: Accepted
- Date: 2026-03-29
- Context:
- PR
#6still had unresolved review threads after the rebase and SonarCloud cleanup work landed onfeat/indexers. - The remaining current feedback focused on request normalization consistency for source metadata conflict notes and clearer operation context for tag deletion by key.
- The repository hand-off rules require the cleanup to be validated through
just ciandjust ui-e2e, with a task record captured alongside the code change.
- PR
- Decision:
- Normalize
resolution_notein the source metadata conflict resolve and reopen handlers with the sharedtrim_and_filter_emptyhelper so whitespace-only notes are treated as absent values. - Use the distinct
tag_delete_by_keyoperation label when path-based tag deletion maps service errors into API problem details. - Extend the indexer handler test support with explicit source metadata conflict call recording and add focused handler tests covering both feedback items.
- Dependency rationale: no new dependencies were added; the cleanup reuses existing handler normalization code and test support patterns.
- Alternatives considered: leaving whitespace-only notes trimmed-but-present would keep inconsistent semantics between handlers, and reusing the generic
tag_deleteoperation label would preserve ambiguous error context in the PR feedback path.
- Normalize
- Consequences:
- Positive outcomes:
- Source metadata conflict handlers now treat empty operator notes consistently with the rest of the indexer API surface.
- Problem details emitted from path-based tag deletion now identify the exact failing handler operation.
- Focused regression tests make the addressed PR feedback explicit and durable.
- Validation completed with
just ciandjust ui-e2e.
- Risks or trade-offs:
- The E2E run required local test database repair because the long-lived
revaer-dbcontainer had a missingrevaerdatabase subdirectory; this was repaired by recreating the localrevaerdatabase before rerunning the gate. - Rollback is low risk: revert the handler/test changes and restore the prior operation label if downstream behavior needs to match the old payload shape exactly.
- The E2E run required local test database repair because the long-lived
- Positive outcomes:
- Follow-up:
- Push the validated branch updates to
origin/feat/indexers. - Resolve the addressed GitHub review threads on PR
#6, including stale outdated threads whose feedback is already integrated on the current branch.
- Push the validated branch updates to
PR feedback boundary validation closeout
- Status: Accepted
- Date: 2026-03-29
- Context:
- PR
#6received another round of unresolved review feedback after the earlier thread closeout work landed onfeat/indexers. - The remaining actionable comments focused on HTTP-boundary validation for required string fields and on removing an unnecessary checked allocation path for a small bounded tag-key normalization helper.
- The repository completion rules require a task record for the follow-up, plus successful
just ciandjust ui-e2evalidation before hand-off.
- PR
- Decision:
- Validate required create-request fields at the HTTP boundary with
normalize_required_str_fieldin the tag, secret, and health notification hook handlers so blank strings fail fast with stable client-facing messages. - Replace
checked_vec_capacityinnormalize_tag_keyswithVec::with_capacity(keys.len())because the helper only sizes a bounded in-memory vector from already-materialized request input. - Add focused regression tests covering the new required-field failures for tag creation, secret creation, and health notification hook creation.
- Dependency rationale: no new dependencies were added; the cleanup reuses existing normalization helpers and handler test scaffolding.
- Alternatives considered: keeping trim-only behavior would defer required-field validation deeper into service calls, and keeping the checked allocation helper would preserve an unnecessary failure mode for a small local vector.
- Validate required create-request fields at the HTTP boundary with
- Consequences:
- Positive outcomes:
- Required string fields now fail consistently and earlier across the affected indexer HTTP handlers.
- The tag-key normalization helper no longer depends on live allocator probing for a request-bounded vector allocation.
- The new tests document the intended boundary behavior and protect the PR feedback fixes against regression.
- Risks or trade-offs:
- The handlers now reject blank required strings before the service layer sees them, which can slightly change which error code path a client observes for malformed requests.
- Rollback remains low risk: revert the handler validation changes and focused tests if downstream callers require the previous service-layer validation path.
- Positive outcomes:
- Follow-up:
- Push the validated branch updates to
origin/feat/indexers. - Resolve the newly addressed PR review threads on PR
#6. - Wait for the refreshed CI and code analysis runs, then address any newly surfaced failures before closing the loop.
- Push the validated branch updates to
PR CodeQL follow-up on instance tag bounds
- Status: Accepted
- Date: 2026-03-29
- Context:
- After
b0faf9clanded, PR#6picked up a fresh CodeQL failure oninstances.rsforrust/uncontrolled-allocation-size. - The offending path was the new
Vec::with_capacity(keys.len())allocation for instance tag normalization, which had removed the previous live-memory guard to address review feedback about false-closed allocation probes. - The branch still needs a fully green post-push cycle before the review closeout can be considered complete.
- After
- Decision:
- Add explicit HTTP-boundary limits for instance tag normalization: bound the total
tag_keyslength and each trimmed key’s byte length before allocating. - Keep the allocation itself as
Vec::with_capacity(normalized_len)once the input has been reduced to a bounded, validated size. - Add focused handler tests covering excessive tag-key counts and oversized tag-key entries.
- Dependency rationale: no new dependencies were added; the fix uses existing handler validation and test patterns.
- Alternatives considered: reverting to the live-memory allocation probe would reintroduce the reviewer concern about small bounded allocations failing closed, while leaving the plain unbounded capacity call in place keeps the CodeQL finding open.
- Add explicit HTTP-boundary limits for instance tag normalization: bound the total
- Consequences:
- Positive outcomes:
- The PR head now has an explicit, deterministic bound that should satisfy CodeQL’s allocation-size analysis.
- Instance tag normalization keeps the simpler bounded-capacity allocation path without depending on live system-memory probes.
- Regression tests make the allocation guard behavior part of the handler contract.
- Risks or trade-offs:
- Requests with unusually large tag-key lists or very large individual keys now fail earlier at the HTTP boundary.
- Rollback is straightforward: revert the new bounds and tests, but that would likely restore the CodeQL failure.
- Positive outcomes:
- Follow-up:
- Rerun
just ciandjust ui-e2e. - Push the follow-up commit to
origin/feat/indexers. - Wait for refreshed PR checks and confirm the CodeQL failure clears.
- Rerun
Indexer maintenance runtime
- Status: Accepted
- Date: 2026-04-03
- Context:
- Branch analysis against
ERD_INDEXERS.mdreopened a real runtime gap: indexer maintenance jobs existed as stored procedures but the Revaer server process was not actually claiming and executing them on cadence. - The ERD requires in-process scheduling for retention, connectivity, reputation, canonical upkeep, policy cleanup, rate-limit cleanup, and RSS-adjacent maintenance rather than relying on external cron.
- The same review also confirmed that live manual search, Torznab search execution, RSS HTTP polling, and runtime import executors are still separate unresolved gaps and should not be silently conflated with maintenance scheduling.
- Branch analysis against
- Decision:
- Add a dedicated injected
indexer_runtimemodule inrevaer-appthat owns a small Tokio loop and executes due maintenance jobs through stored-proc wrappers. - Keep the runtime testable with an internal backend trait so bootstrap remains the only place constructing concrete collaborators.
- Add a missing stored-proc wrapper for
canonical_prune_low_confidenceso the runtime can advancejob_scheduleconsistently for that job class as well.
- Add a dedicated injected
- Consequences:
- The server now advances maintenance job cadence in-process for retention, connectivity refresh, reputation rollups, canonical backfill/prune, policy GC/repair, rate-limit purge, and RSS subscription backfill.
- Telemetry now records per-job success, failure, and skip outcomes from the runtime loop using existing indexer job counters/histograms.
- This does not close the separate executor gaps for live search, Torznab fetches, RSS outbound polling, or Prowlarr import execution; those remain open checklist items.
- Follow-up:
- Implementation tasks:
- Wire live RSS/search/import executors into the remaining runtime lanes.
- Extend acceptance coverage from maintenance-loop unit coverage to live end-to-end execution parity.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Indexer Tag And Secret Inventory
- Status: Accepted
- Date: 2026-04-03
- Context:
- The reopened ERD checklist still called out missing read/list management surfaces for operator workflows.
- The
/indexersconsole already had write actions for tags and secrets, but it still depended on manual UUID and key copy-paste for common follow-up actions. - We needed a small step that improved real operator usability without pretending the broader search-profile, policy, Torznab, routing, and instance inventory work was already complete.
- Decision:
- Added stored-procedure-backed tag and secret metadata list reads so runtime code still uses stored procedures rather than inline SQL.
- Exposed those reads through
GET /v1/indexers/tagsandGET /v1/indexers/secrets. - Updated the
/indexersUI so operators can fetch tag and secret inventories, inspect the current metadata, and populate existing CRUD or binding forms directly from the returned rows. - Alternatives considered:
- Reusing backup export payloads alone was rejected because several exported entities do not carry the public identifiers needed for edit flows.
- Jumping straight to the full read/list surface for every remaining resource was deferred because it is materially larger and independent of the tag/secret usability gap.
- Consequences:
- Operators can now reuse live tag keys/public IDs and secret public IDs without manual transcription for several high-frequency actions.
- The broader ERD follow-up item remains open because search profiles, policy sets/rules, Torznab instances, routing policies, rate-limit policies, and indexer instances still need equivalent discovery surfaces.
- The API surface grows slightly, so OpenAPI export and handler coverage need to stay in sync.
- Follow-up:
- Extend the same pattern to the remaining read/list inventory gaps called out in
ERD_INDEXERS_CHECKLIST.md. - Keep the operator console focused on live identifiers rather than backup-only names when wiring future inventory views.
- Extend the same pattern to the remaining read/list inventory gaps called out in
Indexer Operator Inventory Read Surfaces
- Status: Accepted
- Date: 2026-04-03
- Context:
- The reopened ERD checklist still called out missing operator read/list management surfaces for existing indexer resources.
- The prior inventory slice covered only tags and secret metadata, so operators still had to paste known public IDs to update routing policies, assign rate limits, or manage indexer instances.
- The data layer already exposed normalized backup-export reads for routing policies, rate-limit policies, and indexer instances, but those rows were not available through dedicated operator list endpoints.
- Decision:
- Reused the existing stored-procedure-backed backup export reads as the app-layer source for routing policy, rate-limit policy, and indexer instance inventories.
- Added dedicated operator list endpoints at
GET /v1/indexers/routing-policies,GET /v1/indexers/rate-limits, andGET /v1/indexers/instanceswith response DTOs that keep public identifiers instead of backup-only names. - Updated the
/indexersconsole to fetch those inventories and use the returned rows to prefill existing routing, rate-limit, and instance management forms. - Alternatives considered:
- Using the backup snapshot export directly for operator discovery was rejected because the exported backup payload omits some public identifiers needed for follow-up edit and assignment actions.
- Jumping straight to full search-profile, policy-set/rule, and Torznab inventory coverage was deferred because it is a larger independent slice and would have delayed shipping the high-frequency routing/rate-limit/instance usability win.
- Consequences:
- Operators can now discover and reuse routing policy IDs, rate-limit policy IDs, and indexer instance IDs from live API-backed inventory cards rather than external notes or prior responses.
- The broader read/list checklist item remains open because search profiles, policy sets/rules, and Torznab instances still need equivalent inventory surfaces.
- The OpenAPI surface grows again, so handler coverage and exported docs must remain synchronized.
- Follow-up:
- Extend the same operator inventory pattern to search profiles, policy sets/rules, and Torznab instances.
- Keep inventory responses focused on live management identifiers and summaries rather than backup-only restore shapes.
Indexer profile, policy, and Torznab inventory
- Status: Accepted
- Date: 2026-04-03
- Context:
- The branch-analysis follow-up reopened the operator read/list gap because the admin console still depended on pasted UUIDs for search profiles, policy sets/rules, and Torznab instances.
ERD_INDEXERS.mdexpects existing resources to be inspectable over API and UI, not only writable through CRUD endpoints.
- Decision:
- Add stored-procedure-backed list reads for search profiles, policy sets with rules, and Torznab instances, then expose them through
/v1/indexers/search-profiles,/v1/indexers/policies, and/v1/indexers/torznab-instances. - Reuse those inventories in
/indexersso operators can prefill app-sync, policy, Torznab, and category-mapping actions from live data instead of remembered IDs.
- Add stored-procedure-backed list reads for search profiles, policy sets with rules, and Torznab instances, then expose them through
- Consequences:
- The remaining operator inventory gap is closed for the existing ERD-backed resource set: instances, routing policies, search profiles, policy sets/rules, Torznab instances, rate limits, tags, and secret metadata are all inspectable from API and UI.
- The data layer now has additional stable proc surfaces that must stay aligned with the schema-catalog test and exported OpenAPI document.
- Follow-up:
- Keep CLI parity work separate; this ADR only closes the API/UI inspection surface.
- Preserve list payload stability because the admin console and API E2E specs now depend on them.
Indexer CLI read parity
- Status: Accepted
- Date: 2026-04-03
- Context:
- The ERD follow-up checklist still had a CLI parity gap even after the API and UI operator inventory surfaces landed.
- Operators could inspect live tags, secrets, search profiles, policies, routing, rate limits, Torznab instances, RSS state, and health/connectivity from the web UI, but the CLI still only covered import, policy mutations, Torznab mutations, and test probes.
- The next efficient step was to reuse existing authenticated GET endpoints instead of adding new backend scope.
- Decision:
- Add a new
revaer indexer read ...command group that maps directly to the existing operator read/list APIs. - Cover list/read flows for tags, secrets, search profiles, policy sets, routing policies, routing-policy detail, rate-limit policies, indexer instances, Torznab instances, backup export, per-instance connectivity, reputation, health events, RSS status, and RSS seen items.
- Keep the implementation dependency-light by sharing a single typed GET helper in the CLI command layer and adding table/json renderers for the existing API model responses.
- Add a new
- Consequences:
- CLI operators can now inspect the same live indexer inventory data that the
/indexersUI uses, which materially narrows the parity gap without introducing new server behavior. - The change is low risk because it reuses stable GET endpoints and existing API model types instead of inventing duplicate transport contracts.
- The broader CLI parity item remains open because write flows for tags, secrets, routing policies, rate limits, search profiles, backup restore, RSS mutation, health notification hooks, and category mappings still need command coverage.
- CLI operators can now inspect the same live indexer inventory data that the
- Follow-up:
- Add the remaining CLI CRUD commands for the indexer admin surfaces once the read/list workflow settles.
- Fold category-mapping and restore flows into the CLI before marking the reopened parity checklist item complete.
Indexer CLI operator write parity
- Status: Accepted
- Date: 2026-04-03
- Context:
ERD_INDEXERS_CHECKLIST.mdstill had a reopened CLI parity gap after the read/list slice landed.- Operators could inspect indexer resources from the CLI, but tag lifecycle, secret lifecycle, and category-mapping writes still required the UI or raw API calls.
- The next efficient step needed to reuse the existing stored-proc-backed HTTP surface instead of adding new runtime behavior.
- Decision:
- Extend
revaer-cliwithindexer tag,indexer secret, andindexer category-mappingsubcommands that call the existing/v1/indexers/...endpoints. - Keep the scope focused on operator write parity for tags, secrets, tracker category mappings, and media-domain mappings, with targeted CLI integration tests that assert exact request paths and payloads.
- Leave the broader CLI parity checklist item open until routing-policy, rate-limit, search-profile, backup/restore, and RSS mutation flows also exist.
- Extend
- Consequences:
- Operators can now manage common indexer metadata and mapping writes from the CLI without dropping to raw HTTP.
- The implementation stays dependency-light by reusing the existing reqwest client and output layer.
- CLI parity is still incomplete overall, so the checklist must continue to call out the remaining mutation surfaces explicitly.
- Follow-up:
- Add CLI write coverage for routing policies, rate limits, search profiles, and backup/restore.
- Add CLI mutation flows for RSS state and any remaining category/profile assignment surfaces needed for full ERD parity.
Indexer CLI mutation parity follow-up
- Status: Accepted
- Date: 2026-04-03
- Context:
ERD_INDEXERS_CHECKLIST.mdstill had the reopened CLI parity item open after the earlier read/list and tag/secret/category-mapping slices landed.- Operators still needed the UI or raw API calls for routing-policy writes, rate-limit management, search-profile mutation, backup restore, and RSS state mutation.
- Those flows already existed behind stored-proc-backed HTTP endpoints, so the next efficient step was to expose them through
revaer-cliinstead of adding new backend behavior.
- Decision:
- Extend
revaer-cliwithindexer routing-policy,indexer rate-limit,indexer search-profile,indexer backup restore, andindexer rsscommand groups that call the existing/v1/indexers/...endpoints. - Keep backup restore file-driven by reading the exported snapshot JSON and posting it as an
IndexerBackupRestoreRequest. - Add focused CLI integration coverage for representative new mutation paths instead of duplicating every endpoint-level API test in the CLI crate.
- Extend
- Consequences:
- Operators can now manage the bulk of indexer mutation flows from the CLI without dropping to raw HTTP.
- The implementation stays dependency-light by reusing the existing request helpers and output renderers.
- The broader CLI parity checklist item still remains open because health-notification hook mutation parity has not landed yet.
- Follow-up:
- Add CLI mutation flows for health-notification hooks to close the remaining reopened CLI parity gap.
- After the CLI item is closed, focus the remaining reopened ERD work on live runtime execution and stronger acceptance coverage.
Indexer CLI health-notification parity
- Status: Accepted
- Date: 2026-04-03
- Context:
- The reopened
ERD_INDEXERS_CHECKLIST.mdCLI parity item was down to one operator workflow gap after the read/list and broader mutation slices landed. - Health notification hooks already existed in the stored-proc-backed API and UI, but operators still could not manage them from
revaer-cli. - Leaving that one workflow behind would keep the broader CLI parity item artificially open even though the rest of the indexed management surface was already exposed.
- The reopened
- Decision:
- Add
revaer indexer read health-notificationsplusrevaer indexer health-notification create|update|deletecommand flows on top of the existing/v1/indexers/health-notificationsAPI surface. - Reuse the current request helpers, trimmed-string validation, and table/json output conventions instead of adding new transport abstractions.
- Add focused CLI integration-style tests for one read path and one mutation path to keep the new surface covered without duplicating API behavior tests.
- Add
- Consequences:
- Operators can now inspect and manage indexer health notification hooks from the CLI with the same stored-proc-backed behavior already available over HTTP and in the UI.
- The reopened CLI parity checklist item can now close, leaving the remaining ERD gaps concentrated in runtime executors and stronger live acceptance coverage.
- No new dependencies were required; the slice stays within the existing CLI/request/output structure.
PR output redaction and review follow-up
- Status: Accepted
- Date: 2026-04-03
- Context:
- PR #6 still had open review follow-up around whitespace normalization and tracing consistency in the indexer handlers.
- The PR’s failing CodeQL run reported open
rust/cleartext-loggingfindings incrates/revaer-cli/src/output.rsfor the newly added indexer operator commands. AGENTS.mdrequires greenjust ciandjust ui-e2ebefore hand-off, plus accurate documentation for non-trivial changes.
- Decision:
- Replace direct CLI emission of server-returned indexer payload fields with redacted resource summaries for the flagged indexer management commands.
- Further reduce those summaries to field counts instead of field-name lists so CodeQL no longer sees caller-provided strings flowing into CLI output.
- Tighten handler normalization so blank tag and rate-limit display names fail fast, and align search handler documentation/tracing with current behavior.
- Harden Torznab request handling by requiring identifier-only
qvalues for identifier searches, URL-encoding generated download links, avoiding invalid parent category0, and fetching only the page windows needed foroffset/limit. - Avoid adding dependencies; the change reuses existing
serde_jsonhelpers and small local formatting helpers.
- Consequences:
- Positive outcomes:
- The CLI no longer echoes potentially sensitive or user-controlled indexer payload fields for the flagged commands.
- Torznab search requests do less unnecessary page fetching and avoid malformed download links or invalid synthesized parent categories.
- Review nits around blank-input handling and trace field formatting are closed with small, test-backed changes.
- The fix stays within the repo’s current dependency and architecture constraints.
- Risks or trade-offs:
- The affected CLI commands now favor safety over full payload visibility, so operator output is more summary-oriented than before.
- If richer safe output is needed later, it should be added intentionally with field-by-field redaction rather than restoring raw dumps.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep GitHub PR thread replies/resolution in sync with the landed fixes once local validation is green.
- Re-check the PR CodeQL alert list after pushing to confirm the cleartext-output findings close out.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
CI cache trim for runner disk pressure
- Status: Accepted
- Date: 2026-04-03
- Context:
- PR #6 showed a failing
Check Unused Depsjob even though localjust udepswas clean and there were no open dependency issues on the branch. - The GitHub check annotation for that failed job reported
System.IO.IOException: No space left on devicewhile the runner was still inside the sharedsetup-revaeraction. - The shared setup action restored both
~/.cargo/binand the workspacetargetdirectory for every PR job, which made the cache restore footprint much larger than the dependency/install state the jobs actually needed.
- PR #6 showed a failing
- Decision:
- Stop caching the workspace
targetdirectory and~/.cargo/binin the shared GitHub Actions setup action. - Keep caching Cargo registries, git dependencies, and
sccache, which preserve the useful network and compile wins without restoring the heaviest workspace-local artifacts into each runner.
- Stop caching the workspace
- Consequences:
- Positive outcomes:
- PR jobs restore less data and are less likely to exhaust runner disk before reaching their actual step logic.
- The
Check Unused Depsjob can now reachjust udepsinstead of failing during setup.
- Risks or trade-offs:
- Some PR jobs may rebuild more from scratch because
targetis no longer restored from cache. - Cargo-installed helper binaries are no longer reused from cache and may be reinstalled when absent on the runner.
- Some PR jobs may rebuild more from scratch because
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Re-run PR workflows to confirm the disk-exhaustion false failure is gone.
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
PR review handler normalization follow-up
- Status: Accepted
- Date: 2026-04-03
- Context:
- PR #6 still had unresolved review threads covering blank required-field handling in a few API handlers, plus a setup handler comment about manually reconstructing a request default.
- The affected paths already trimmed values, but some still relied on downstream service validation instead of returning stable field-level
400responses at the HTTP boundary.
- Decision:
- Restore
SetupStartRequest::default()in the setup handler instead of manually recreating the default payload shape. - Normalize required string fields at the HTTP boundary for indexer instance creation, instance field value/secret binding, and media-domain mapping upsert/delete handlers.
- Add focused handler tests for the new bad-request behavior so the review feedback stays covered by unit tests.
- Restore
- Consequences:
- Positive outcomes:
- Clients now get deterministic RFC9457
400responses for whitespace-only required fields before any service call. - The setup handler now stays aligned with future
SetupStartRequestdefault changes automatically. - The review threads have direct code/test evidence tied to them instead of relying on service-layer rejection.
- Clients now get deterministic RFC9457
- Risks or trade-offs:
- Request validation is slightly stricter at the HTTP boundary for blank values, which may reject inputs that previously fell through to the service layer.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Re-run
just ci. - Re-run
just ui-e2e. - Reply on the addressed review threads with the specific handler/test change and resolve them.
- Re-run
- Review checkpoints:
just cijust ui-e2e
- Implementation tasks:
Remediation plan implementation closeout
- Status: Accepted
- Date: 2026-04-04
- Context:
REMEDIATION_PLAN.mdidentified verified gaps in dashboard metrics, qBittorrent compatibility, OTEL export wiring, operational automation, container hardening, and stale status docs.- The repo rules require task records to capture motivation, design notes, test coverage, observability impact, rollback posture, and dependency rationale for any new crates.
- The implementation needed to prefer repo truth over stale roadmap claims and avoid leaving the remediation checklist itself outdated.
- Decision:
- Replace the stubbed dashboard handler with a runtime-backed snapshot sourced from injected torrent state plus filesystem inspection, including explicit degraded fallbacks.
- Extend the qB compatibility façade to the bounded Phase One mutation surface: rename, relocate, category/tag changes, reannounce, and recheck, while persisting the façade-facing metadata that those routes expose.
- Wire OTEL to a real OTLP tracing exporter behind explicit configuration and enable that path in
revaer-app, keeping the exporter dormant unless requested. - Add a
just runbookautomation path that packages Playwright-driven validation artifacts and update the operator runbook to point at the checked-in automation entrypoint. - Promote image scanning plus provenance/SBOM attestation into the image workflow and refresh roadmap/operator docs so they describe current repo reality.
- Alternatives considered:
- Leave the plan/documentation updates separate from code changes: rejected because the repo already had stale status drift.
- Add a larger observability stack or a custom exporter wrapper: rejected in favor of the smallest OTLP integration that closes the placeholder gap.
- Attempt the full FsOps PAR2/checksum/archive tranche in the same pass: deferred because it is materially larger and remained the main open gap after the safer remediations landed.
- Consequences:
- Positive outcomes:
/v1/dashboardnow returns live metrics instead of placeholders.- The qB façade now covers the intended Phase One mutation scope with tests.
- OTEL configuration reaches a real exporter path and operator docs describe the supported env vars.
just runbookcreates repeatable validation artifacts instead of relying only on a manual checklist.- Image builds now include CI scanning and provenance/SBOM attestation.
- Risks or trade-offs:
- OTEL introduces one new dependency edge and more release-build surface area.
- The automated runbook still delegates some fault-injection drills to manual follow-up.
- FsOps archive/PAR2/checksum remediation remains open and continues to be tracked in
REMEDIATION_PLAN.md.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Finish the FsOps archive/PAR2/checksum tranche and re-baseline the remediation checklist afterward.
- Decide whether image signing is required in addition to provenance/SBOM and implement it if the release posture demands it.
- Tighten OTEL startup validation for malformed exporter settings.
- Review checkpoints:
- Re-run
just ciandjust ui-e2ein an environment with the required local browser/DB/runtime dependencies. - Keep
docs/phase-one-roadmap.md,README.md, andREMEDIATION_PLAN.mdaligned whenever status claims change.
- Re-run
- Implementation tasks:
Task Record
- Motivation:
- Close the highest-signal remediation items with real implementation and remove stale planning noise that was obscuring the remaining work.
- Design notes:
- Dashboard aggregation lives in
ApiStateso the handler remains thin and fallback behavior is centralized. - qB mutation endpoints update metadata through the existing injected state/workflow surfaces instead of introducing a separate compatibility state store.
- OTEL uses the smallest viable OTLP tracing path and standard endpoint override semantics.
- Dashboard aggregation lives in
- Test coverage summary:
- Added targeted API tests for dashboard live/fallback behavior and qB mutation/metadata behavior.
- Verified app-side OTEL configuration tests and feature-gated telemetry compilation.
- Observability updates:
- Dashboard metrics are now sourced from live runtime state.
- OTEL tracing can be exported to an OTLP collector when explicitly enabled.
- Risk & rollback plan:
- The changes are isolated to API handlers/state, telemetry bootstrap, docs, and workflow automation; rollback is straightforward by reverting the affected files if regressions surface.
- Dependency rationale:
- Added
opentelemetry-otlptorevaer-telemetry. - Why this: it matches the existing
opentelemetry/tracing-opentelemetrystack already in use and enables a real OTLP exporter without introducing a parallel telemetry abstraction. - Alternatives considered: keeping placeholder-only OTEL wiring, or adding a custom exporter wrapper; both were rejected because they would preserve the verified gap while adding little value.
- Added
Remediation plan gap closure
- Status: Accepted
- Date: 2026-04-04
- Context:
REMEDIATION_PLAN.mdstill had material open items after ADR 278: the FsOps archive/PAR2/checksum tranche, image-signing follow-through, and verification friction in the UI E2E harness.- The repo requires dependency rationale, observability notes, rollback posture, and verification status to live with the code changes rather than only in chat transcripts.
- The remaining FsOps gap had to stay compatible with the repo’s safety and minimal-dependency rules while handling formats that are not realistically supported in
stdalone.
- Decision:
- Extend
revaer-fsopsso Phase One archive handling now coverszip,tar,tar.gz, andtgzin-process, while7zandraruse guarded external-tool execution (7zz,7z,unar,unrar) with structured failures when tooling is absent. - Add a dedicated PAR2 step to the FsOps pipeline that honors
disabled,verify, andrepair, preserving legacyenabledas a compatibility alias forverify. - Persist checksum metadata alongside
.revaer.metaby recording per-file SHA-256 digests plus a deterministic manifest digest after cleanup. - Store the API-project auth session in E2E state and seed the UI browser fixture from that shared session so Playwright stops fighting the app’s real auth mode.
- Finish image-workflow hardening by signing pushed architecture images and multi-arch tags with Cosign in the existing publish workflow.
- Alternatives considered:
- Shell out for every archive type: rejected because
tar/tar.gzsupport is simple and safer to keep in-process. - Add a large archive abstraction crate for every format: rejected in favor of a smaller mixed strategy with minimal new dependencies.
- Keep the UI fixture on anonymous auth and dismiss the overlay opportunistically: rejected because it fought the server’s configured auth mode and stayed flaky.
- Shell out for every archive type: rejected because
- Extend
- Consequences:
- Positive outcomes:
- FsOps now matches the documented Phase One extractor/PAR2/checksum contract.
.revaer.metacarries checksum state that can be used for future reconciliation and operator diagnostics.- UI E2E auth follows the same session the API dependency project created, removing the overlay race at its root.
- Published images now have scan, attestation, and signing coverage in one workflow.
- Risks or trade-offs:
7z/rarextraction and PAR2 repair still depend on host tooling being present.- SHA-256 checksum generation adds more filesystem work to the FsOps tail of completed jobs.
- Runtime hardening remains a deployment contract expressed through docs/workflow guidance rather than something a Dockerfile can fully enforce alone.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep expanding FsOps failure-path and restart-path coverage around missing tools, partial repairs, and degraded health reporting.
- Validate the signed-image workflow on the next real publish and document any registry-specific quirks.
- Re-run the full repo verification loop (
just ci,just ui-e2e) and keepREMEDIATION_PLAN.mdaligned with the verified results.
- Review checkpoints:
- Verify FsOps metadata/resume behavior remains backward compatible with older
.meta.jsonfiles. - Verify operator docs still match the actual runtime/tooling expectations for archive extraction and read-only container deployments.
- Verify FsOps metadata/resume behavior remains backward compatible with older
- Implementation tasks:
Task Record
- Motivation:
- Close the largest remaining remediation-plan gaps with code, tests, and operator-facing documentation instead of leaving Phase One behavior split between implementation and aspiration.
- Design notes:
- In-process extraction is used where it is cheap and deterministic; external tools are reserved for formats and repair flows that would otherwise require much heavier dependencies.
- Checksum persistence is modeled as a dedicated FsOps stage so resume semantics and step telemetry stay explicit.
- The UI fixture now consumes shared E2E session state rather than inferring auth mode locally.
- The Playwright UI project defaults to a lower worker count so shell bootstrap remains stable on normal local hosts while still allowing explicit worker overrides.
- Test coverage summary:
- Added FsOps tests for
tar,tar.gz, guarded external extraction, PAR2 execution, and checksum persistence. - Revalidated the flaky UI navigation spec with the shared-session fixture path and reran the full UI suite after lowering the default UI worker count.
- Added FsOps tests for
- Observability updates:
- PAR2 and checksum execution now surface as first-class FsOps steps and are persisted in
.revaer.meta. - The release workflow now produces signed images in addition to scan/SBOM/provenance artifacts.
- PAR2 and checksum execution now surface as first-class FsOps steps and are persisted in
- Risk & rollback plan:
- The changes are isolated to FsOps internals, test fixtures, workflow automation, and docs; rollback is straightforward by reverting those files if tooling regressions appear.
- Dependency rationale:
- Added
tarandflate2torevaer-fsops. - Why these: they provide small, well-understood in-process support for
tar/tar.gzwithout forcing all archive formats through host tooling. - Alternatives considered: shelling out to
tar, or adding a broader archive toolkit; both were rejected in favor of narrower, more deterministic support. - Added
sha2torevaer-fsops. - Why this: checksum persistence needs a deterministic digest implementation that works in-process across platforms.
- Alternatives considered: shelling out to
sha256sumor adding a larger crypto toolkit; both were rejected as either less portable or heavier than needed. - Added optional
reqwesttorevaer-telemetry. - Why this: the OTLP 0.31 migration needs an explicit HTTP client for the real exporter path while keeping TLS support narrow and runtime construction in bootstrap/telemetry wiring.
- Alternatives considered: relying on deprecated pipeline helpers or enabling broader client stacks through transitive defaults; both were rejected to keep the exporter current and the dependency surface tighter.
- Added
PR 21 feedback closeout
- Status: Accepted
- Date: 2026-04-04
- Context:
- PR 21 still had unresolved review threads covering qB metadata sync behavior, fsops checksum manifest accounting, runbook artifact retention, and UI auth/E2E stability.
- The follow-up needed to address the reviewer asks directly, keep the remediation branch shippable, and restore the required
just ui-e2eandjust cigates before the PR could move forward.
- Decision:
- Close the review threads with targeted fixes that map one-to-one to the remaining comments.
- Treat the unstable UI suite as part of the review scope because the updated auth storage behavior and shared E2E backend needed deterministic coverage before the PR could be considered ready.
- Consequences:
- qB metadata-only mutations now publish compatibility sync updates, checksum manifest metadata reports real manifest byte counts, and the runbook preserves Playwright artifacts on failure.
- UI E2E now seeds auth into session storage with matching read fallback, uses deterministic log-filter interactions, aligns stale route assertions with the implemented UI, and defaults to a single UI worker unless the environment overrides it.
- Follow-up:
- Keep watching the Playwright worker override path in CI or faster hosts to ensure the serial default remains the right trade-off.
- Remove any future stale UI assertions as the pages evolve instead of pinning tests to old placeholder copy.
Task Record
- Motivation:
- The PR had unresolved actionable review comments and could not be handed back until both the requested fixes and the repo quality gates were green.
- Design notes:
- qB metadata updates were routed through a shared helper so each metadata mutation publishes the same compatibility refresh event.
- Fsops checksum manifest accounting now derives manifest bytes from the serialized manifest lines instead of placeholder counts.
- The UI fixture now seeds auth in the same storage tier the browser session should own, while the app preferences layer reads both local and session storage to stay backward compatible during transition.
- The Playwright suite now defaults to one UI worker because the UI tests share a mutable backend and a single trunk-served frontend process;
E2E_UI_WORKERSstill allows explicit overrides.
- Test coverage summary:
- Reran
just ui-e2esuccessfully with101 passed. - Reran
just cisuccessfully after the feedback fixes landed.
- Reran
- Observability updates:
- qB metadata-only compatibility mutations now emit sync-visible event updates instead of silently mutating state.
- The runbook now preserves logs, Playwright reports, and test-results artifacts even on failure.
- Status-doc validation:
- No README or roadmap status claims changed in this follow-up.
- ADR catalogue entries were updated to record this task.
- Risk & rollback plan:
- The highest-risk change is the UI E2E worker default. If it regresses on faster environments, rollback is limited to the Playwright config default while preserving the explicit override hook.
- The qB/fsops/runbook changes are localized and can be reverted independently if they cause regressions.
- Dependency rationale:
- No new dependencies were added.
PR 21 Sonar and Review Closeout
- Status: Accepted
- Date: 2026-04-04
- Context:
- PR 21 still had open SonarCloud feedback on the leak period after the earlier remediation follow-up landed.
- The remaining Sonar issues were limited to GitHub Actions security hotspots in the image build workflow and new-code duplication in the filesystem post-processing service.
- Decision:
- Pin the flagged GitHub Actions steps in
build-images.ymlto immutable full commit SHAs. - Refactor the duplicated archive-extraction and tree-transfer logic in
revaer-fsopsinto shared helpers without changing runtime behavior.
- Pin the flagged GitHub Actions steps in
- Consequences:
- The workflow now follows the immutable action-pin guidance Sonar was flagging on the PR delta.
- The fsops module has less repeated code, which lowers Sonar duplication noise and makes future archive and transfer changes easier to review.
- Follow-up:
- Re-run the full
just ui-e2eandjust cigates before hand-off. - Push the branch so GitHub and SonarCloud can recalculate PR 21 status against the updated head commit.
- Re-run the full
Task Record
- Motivation:
- Close the last open PR 21 review findings and SonarCloud leak-period issues so the remediation branch can merge without lingering security or maintainability flags.
- Design notes:
build-images.ymlkeeps the same action versions semantically, but now pins the exact commits behind the previously version-tagged actions.revaer-fsopsnow has shared helpers for archive write operations, relative-path normalization, and directory-tree replication, which removes the repeated zip/tar and copy/hardlink blocks Sonar was reporting.- The UI Playwright fixture now seeds auth through an in-memory storage shim instead of writing API keys into browser storage, which closes the remaining GitHub Advanced Security review threads on
tests/fixtures/app.ts. - The refactor stayed behavior-preserving and reuses the existing fsops test coverage for archive extraction, checksum generation, and file transfer behavior.
- Test coverage summary:
just fmtjust lintjust ui-e2ejust ci
- Observability updates:
- No new metrics or spans were needed.
- Existing fsops metric emission remains unchanged because the work only reshaped helper internals and workflow pins.
- Status-doc validation:
README.mdand the existing remediation status docs were re-checked; no operator-facing behavior changed, so no status-doc content updates were required beyond this task record and catalogue entries.
- Risk & rollback plan:
- Workflow pinning risk is limited to an incorrect SHA; rollback is a revert of the workflow pin lines.
- Fsops refactor risk is confined to archive extraction and transfer helpers; rollback is a revert of
crates/revaer-fsops/src/service/mod.rs.
- Dependency rationale:
- No new dependencies were added.
- The duplication cleanup deliberately reused
stdand the existing crate graph instead of introducing helper crates or archive abstractions.
PR 21 final feedback closeout
- Status: Accepted
- Date: 2026-04-05
- Context:
- PR 21 still had open review threads after the earlier remediation follow-up.
- SonarCloud still reported new-code issues in the Playwright UI fixture, and the review feedback identified one remaining ambiguous qBittorrent mutation plus two E2E secret-handling leaks.
- Decision:
- Move the E2E runtime state file out of
tests/test-results, keep its process metadata readable only by the local user, and encrypt the shared UI/API session payload at rest with a per-run Playwright secret so no API credential is persisted in plaintext. - Tighten the qB rename handler so it rejects any request that resolves to anything other than exactly one torrent hash.
- Remove the remaining
windowreferences Sonar flagged in the UI fixture and keep the runbook artifact copy guarded against stalee2e-state.jsonfiles. - Expand the GitHub Actions CI workflow to run on
pull_requesttomain, and remove job-level branch guards that previously prevented PR heads from ever reporting checks. - Harden
just db-startso it recreates stale named Postgres containers that lack a published host port, which restores localjust ui-e2eandjust ciruns when an old container state is present. - Alternatives considered:
- Redacting the API key in-place in
e2e-state.json; rejected because the UI fixture still needed the secret and the artifact path remained risky. - Re-running setup from the UI fixture; rejected because the active auth mode prevents unauthenticated factory reset and caused 401 failures in the UI project.
- Using a plaintext temp file outside the repo tree; rejected because it still serialized the credential at rest and would keep the same leak class.
- Allowing multi-hash rename by renaming the first torrent only; rejected because it hides client mistakes and diverges from predictable mutation semantics.
- Redacting the API key in-place in
- Move the E2E runtime state file out of
- Consequences:
- Positive outcomes.
- The UI harness no longer stores live API credentials in the copied Playwright artifact tree.
- The UI suite can still reuse the authenticated API-key session produced by the API project, but the shared session data is encrypted at rest instead of being written in plaintext.
- qB compatibility mutations now fail fast on ambiguous rename input instead of silently mutating the wrong torrent.
- SonarCloud’s remaining fixture issues are addressed directly in code rather than suppressed.
- Risks or trade-offs.
- The Playwright run now depends on a per-run encryption secret being present in the worker environment; global setup provisions it automatically.
- The E2E runtime state file remains on disk for process cleanup and encrypted cross-project state handoff, but the API credential is no longer readable in plaintext.
- Positive outcomes.
- Follow-up:
- Implementation tasks.
- Re-run
just ui-e2eandjust ci. - Push the follow-up commit and re-check PR threads plus SonarCloud on the new head.
- Re-run
- Review checkpoints.
- Confirm the qB rename review thread is resolved by the new validation behavior.
- Confirm the Sonar issue list is empty for PR 21 after the next analysis cycle.
- Implementation tasks.
Task Record
- Motivation:
- The PR could not be considered clean while review threads still pointed at secret exposure and ambiguous qB compatibility behavior, and the branch still failed to report any GitHub checks because the CI workflow never triggered for pull requests.
- Design notes:
tests/support/e2e-state.tsnow stores E2E runtime state intests/.runtime/e2e-state.jsonwith0600permissions, and it encryptsapiSessionwith AES-256-GCM using a per-run key fromREVAER_E2E_STATE_KEY.tests/global-setup.tsprovisionsREVAER_E2E_STATE_KEYbefore Playwright workers start, allowing the API fixture to persist encrypted session state and the UI fixture to decrypt it without a second setup pass.tests/fixtures/app.tsagain reads the shared API session from runtime state, but only after that payload has been encrypted at rest by the API fixture.torrents_renamenow enforces a single resolved hash and has regression coverage for the multi-hash case..github/workflows/ci.ymlnow listens topull_requestevents targetingmain, which restores the expected PR status checks for branch heads.just db-startnow validates that the named Docker Postgres container actually publishes the requested host port and probes the built-inpostgresdatabase for readiness before migrations run.
- Test coverage summary:
- Added a new qB compatibility unit test for multi-hash rename rejection.
- Re-ran the full UI E2E suite and the full
just cigate set after the follow-up.
- Observability updates:
- No new runtime telemetry was added; the change is confined to test harness behavior, CI trigger wiring, and an existing qB handler validation path.
- Status-doc validation:
README.mddid not require content changes for this follow-up.- ADR index/sidebar entries were updated to keep the task record catalogue current.
- Risk & rollback plan:
- If the UI fixture setup change regresses, revert the fixture and runtime-state changes together so the harness uses the earlier shared-state path.
- If qB clients depend on the old rename behavior, revert the validation change and its test as one unit.
- Dependency rationale:
- No new dependencies were added.
PR 21 Trivy action pin refresh
- Status: Accepted
- Date: 2026-04-05
- Context:
- PR 21 image-build jobs started failing during
Set up jobbefore any Docker or Trivy work executed. - The reusable image workflow pins
aquasecurity/trivy-actionand must stay stable for both PR image previews and release image builds.
- PR 21 image-build jobs started failing during
- Decision:
- Refresh the pinned
aquasecurity/trivy-actionrevision in the reusable image workflow to the currentv0.35.0commit. - Avoid adding a bespoke Trivy bootstrap workaround because the failure came from a broken upstream dependency reference in the older pinned action revision.
- Refresh the pinned
- Consequences:
- PR and release image scans use a current upstream action revision that resolves its internal
setup-trivydependency correctly. - Future upstream breakage still requires periodic pin review, but the workflow returns to a working pinned state without changing scan policy.
- PR and release image scans use a current upstream action revision that resolves its internal
- Follow-up:
- Re-run the PR image workflow and confirm both architecture builds plus the multi-arch manifest job report status normally.
- Keep the Trivy action pin aligned with upstream security maintenance when workflow dependencies are refreshed again.
Task Record
- Motivation:
- PR 21 was blocked by failing
Build PR Imagesjobs, which in turn kept the required image workflow from completing.
- PR 21 was blocked by failing
- Design notes:
- The fix stays inside
.github/workflows/build-images.ymlbecause the break was in the reusable image workflow’s pinned third-party action revision. - The updated pin targets the upstream
v0.35.0commit57a97c7e7821a5776cebc9bb87c984fa69cba8f1, whose composite action installs Trivy through a pinnedsetup-trivycommit instead of the missingv0.2.1tag that broke the older revision. - The follow-up keeps Trivy scanning against the pushed registry image by forcing
TRIVY_IMAGE_SRC=remoteand threading the matrix platform intoTRIVY_PLATFORM, which avoids architecture-specific scan failures afterbuildx --push. - PR image scans now keep uploading SARIF findings without failing the reusable image job on
pull_request, so manifest creation is not blocked by vulnerability reporting while release-style callers still retain the non-zero Trivy gate. - The local
db-startguard now recreates stale Postgres containers when the published host port does not match the requested port, closing the remaining PR review thread on the recipe. - The
ui-e2erecipe now uses Playwright’s--with-depspath on Linux whenever passwordlesssudois available, keeping local validation aligned with CI and preventing headless Chromium from failing before UI coverage is produced. - The tar extractor now skips non-file, non-directory tar entries instead of aborting the whole extraction, so archives containing symlinks or hardlinks still unpack their regular files successfully.
- The fix stays inside
- Test coverage summary:
- Re-ran
PG_VOLUME=revaer-pgdata-ci just ui-e2e. - Re-ran
PG_VOLUME=revaer-pgdata-ci just ci. - Pulled PR 21 workflow logs to confirm the old failure signature before applying the pin refresh.
- Re-ran
- Observability updates:
- No runtime observability surfaces changed; this is CI workflow maintenance only.
- Status-doc validation:
README.mdand operator-facing docs were re-checked and do not describe the pinned Trivy action revision, so no user-facing doc update was required.
- Risk & rollback plan:
- Risk is limited to CI image scanning behavior on PR and release workflows.
- Rollback is a single-commit revert of the workflow pin if the newer Trivy action regresses unexpectedly.
- Dependency rationale:
- No new dependencies were added.
- Updating the existing pinned action was preferred over embedding custom Trivy installation logic or disabling image scanning.
Instruction Refresh And Sonar Scope Hardening
- Status: Accepted
- Date: 2026-04-03
- Context:
- Motivation:
- The repository root instructions had drifted from the live
justfile, CI workflows, and Sonar workflow. - The previous instruction set mixed global invariants, stale repository snapshots, copied command bodies, UI layout details, and contradictory Rust guidance in one file.
- Sonar guidance in
.github/instructions/sonarqube_mcp.instructions.mdreferenced MCP tools that are not available in this environment. - The repository wants the strictest possible authored-code posture without source-level lint suppressions, while still allowing idiomatic
Optionsemantics and a narrow FFI-only unwind boundary.
- The repository root instructions had drifted from the live
- Constraints:
AGENTS.mdremains the non-negotiable root contract.- Scoped instruction files may only tighten or specialize the root policy.
- Production and bootstrap code must remain deterministic and panic-free.
- CI and local quality gates must continue to run through
just. - Sonar must remain a blocking pull-request signal while reducing noise from generated and vendored assets.
- Motivation:
- Decision:
- Replace the stale monolithic
AGENTS.mdwith a shorter root contract that defines:- prime directives
- policy precedence
- repository invariants
- authored-code quality posture
- quality-gate expectations
- task-record and drift-control rules
- Add scoped instruction files under
.github/instructions/:rust.instructions.mdrevaer-data.instructions.mdrevaer-ui.instructions.mdffi.instructions.mddevops.instructions.md- refreshed
sonarqube_mcp.instructions.md
- Keep maximum-strictness source posture:
- no
#[allow(...)]or#[expect(...)]in authored code - no production or bootstrap panics
- no silent error suppression
- no relaxation of root policy from scoped files
- no
- Correct two contradictory rules only:
- allow
Option<T>for expected absence or partial-function semantics - allow
catch_unwindonly at explicit FFI boundaries that prevent unwinds from crossing foreign ABIs
- allow
- Version Sonar scope in
sonar-project.propertiesand make it the source of truth for:- project identity
- first-party analysis scope
- coverage exclusions
- duplication exclusions
- new-code reference branch
- Tighten workflow hygiene in live GitHub Actions files by:
- pinning third-party actions to full SHAs with version comments
- removing direct interpolation of
${{ inputs.* }}into the setup action shell script - keeping Sonar scanner properties in
sonar-project.propertiesinstead of repeating them in workflow arguments - reducing top-level CI permissions to the minimum shared baseline
- Repair
just covvalidation logic so the per-crate coverage loop:- parses workspace members correctly
- extracts actual package names instead of the literal
\1 - reports the real coverage baseline instead of silently skipping per-crate enforcement
- Increase
tests/.envE2E_HTTP_WAIT_SECONDSfrom180to600so the requiredjust ui-e2egate can tolerate cold localtrunk servecompile time instead of timing out before the UI is reachable - Remove redundant crate-level
#![allow(clippy::multiple_crate_versions)]attributes now that the temporary duplicate-crate exception already lives injust lintand ADR-backed repo policy instead of authored source. - Remove the remaining FFI
#[allow(unsafe_code)]attributes and replace them with a repo-level policy guardrail inscripts/policy-guardrails.shthat runs as part ofjust lint. - Remove the CLI crate’s
#![allow(clippy::redundant_pub_crate)]by making the internal module declarations private. - Move
clippy::cargoandclippy::nurseryenforcement out of crate attributes and intojust lintso themultiple_crate_versionsandredundant_pub_crateexceptions remain centralized in the Justfile instead of source code. - Add
scripts/instruction-drift-check.sh,just instruction-drift, and dedicatedpr.yml/ci.ymljobs that compare against the real base revision so workflow, Justfile, and Sonar configuration changes cannot land without touching the corresponding instruction files. - Extend
scripts/policy-guardrails.shto reject authoredtodo!()andunimplemented!()stubs, and add a second production-targetcargo clippypass injust lintthat forbidspanic!,unwrap(),expect(),unreachable!(),todo!(), andunimplemented!()in workspace libs, bins, and examples without applying those restrictions to test targets. - Extend
scripts/policy-guardrails.shto enforce the stored-procedure-only runtime DB rule by confiningsqlx::query*usage tocrates/revaer-data/srcand rejecting inline DDL/DML text in authored Rust. - Add
scripts/workflow-guardrails.shtojust lintso workflow policy is checked mechanically: external GitHub actions must use full-SHA pins with version comments, and${{ inputs.* }}values may not be interpolated directly intorun:blocks. - Alternatives considered:
- Keep the existing monolithic
AGENTS.md: rejected because stale copied facts and contradictions were already undermining maintainability. - Move all rules into scoped files: rejected because root invariants need a single canonical contract.
- Relax lint posture with
#[expect(...)]: rejected because the repository explicitly requires zero source-level suppressions. - Keep Sonar scanner arguments inline in the workflow: rejected because it would duplicate and eventually drift from the intended versioned scope file.
- Keep the existing monolithic
- Replace the stale monolithic
- Consequences:
- Positive outcomes:
- Global policy now lives in one canonical place and domain-specific details are scoped by path.
- Contradictory Rust guidance is removed without weakening the repository’s strictness posture.
- Sonar scope and MCP guidance now match the actual project key, tooling, and desired first-party signal.
- Workflow security posture improves through full-SHA pinning and safer shell handling in the composite action.
- Coverage enforcement now reflects the true repository baseline instead of passing through broken shell parsing.
- Risks and trade-offs:
- More instruction files means future changes must update the correct scoped document or drift can return.
- Full-SHA action pinning requires periodic maintenance when upstream action versions are refreshed.
- Sonar exclusions require deliberate review if new generated or vendored paths are introduced.
- The repaired coverage gate currently blocks
just cibecause multiple existing crates remain below the documented 90% line-coverage threshold. - The longer local HTTP wait budget makes
just ui-e2eless eager to fail, but increases the time to surface genuine startup failures during a cold build. - The new policy guardrail adds another early failure mode to
just lint, but that is deliberate because it prevents source-level suppressions and out-of-scope unsafe code from quietly returning. - The instruction-drift guard is only as good as its path-to-instruction mapping, so the script must evolve when new operational source-of-truth files are introduced.
- The production-only Clippy pass makes
just lintslower, but it turns a previously documentary panic-free rule into a mechanical gate without forcing panic-free test code. - The SQL guardrail is pattern-based, so any future operational exception must be explicit and the regexes must evolve with the real query surface.
- The workflow guardrail is YAML-pattern-based rather than schema-aware, so unusual workflow syntax may require future parser refinement.
- Positive outcomes:
- Follow-up:
- Design notes:
- Root policy stays intentionally short so it can remain accurate.
- Scoped files add path-specific constraints rather than restating global rules.
- Test coverage summary:
- Validate formatting and YAML integrity with
just fmtandjust lint. - Validate repository gates with
just ci. - Validate the required UI regression gate with
just ui-e2e. just ui-e2enow passes locally after increasingE2E_HTTP_WAIT_SECONDSto cover the initialtrunk servecompile on a cold workspace.just lintnow validates both Clippy and the repo-specific policy guardrail script.just instruction-driftnow validates that Justfile/workflow/Sonar changes are paired with matching instruction-file updates.pr.ymlpassesgithub.event.pull_request.base.shaandgithub.event.pull_request.head.shainto the drift check, whileci.ymlpassesgithub.event.beforeandgithub.shaformainpushes.just lintnow includes a production-only Clippy pass that rejects panic/stub patterns in libs, bins, and examples while leaving test targets out of scope.just lintnow also rejectssqlx::query*usage outsidecrates/revaer-data/srcand catches inline DDL/DML text in authored Rust.just lintnow rejects unpinned external GitHub actions and direct${{ inputs.* }}interpolation inside workflowrun:blocks.
- Validate formatting and YAML integrity with
- Observability updates:
- No runtime telemetry changed.
- Workflow visibility improves by centralizing Sonar scope and keeping scanner configuration versioned.
- Risk and rollback plan:
- Roll back by restoring the previous root instructions and removing the new scoped files if the instruction split proves unworkable.
- Workflow pinning and setup-action hardening can be reverted independently if an upstream action regression is discovered.
- Dependency rationale:
- No Rust dependencies were added.
- Third-party GitHub actions remain in use, but are now pinned to exact upstream commits to reduce supply-chain drift.
- Stale-policy check:
- Reviewed files:
AGENTS.md.github/instructions/*.instructions.md.github/actions/setup-revaer/action.yml.github/workflows/ci.yml.github/workflows/pr.yml.github/workflows/sonar.yml.github/workflows/docs.yml.github/workflows/build-images.ymljustfilescripts/policy-guardrails.shscripts/instruction-drift-check.shtests/.envsonar-project.properties
- Drift found:
- stale copied command inventories and repository-shape snapshots in
AGENTS.md - Sonar MCP instructions referencing unavailable tools
- Sonar workflow arguments duplicating scanner properties
- unpinned third-party GitHub actions
- direct
${{ inputs.* }}shell interpolation in the setup composite action - broken
just covworkspace-member parsing and package-name extraction - local UI E2E startup timeout budget that was shorter than a cold
trunk servecompile - redundant source-level
clippy::multiple_crate_versionssuppressions that duplicated the existing Justfile exception - FFI
#[allow(unsafe_code)]attributes that contradicted the new root policy - CLI
redundant_pub_cratesuppression that was covering a simple module-visibility cleanup pub(crate)-by-default style colliding with Clippy’sredundant_pub_crateheuristic, which is now handled centrally injust lintinstead of per-crate source attributes- a purely documentary instruction-drift rule with no mechanical enforcement
- a purely documentary panic-free/stub-free production policy with no dedicated lint enforcement
- a purely documentary stored-procedure-only runtime SQL rule with no dedicated lint enforcement
- documentary-only workflow pinning and shell-safety rules that depended on reviewers noticing YAML mistakes
- stale copied command inventories and repository-shape snapshots in
- Contradictions removed:
- blanket
Optionban versus legitimate absence semantics - blanket
catch_unwindban versus FFI boundary containment requirements - stale root references that no longer matched the active
justfileand workflow files
- blanket
- Reviewed files:
- Design notes:
PR 19 Review And Lint Closeout
- Status: Accepted
- Date: 2026-04-06
- Context:
- Motivation:
- PR 19 still had unresolved review feedback across workflow guardrails, shell hardening, repo documentation portability, and Rust test hygiene.
- The
Check Lintworkflow was failing in GitHub Actions, which blocked the rest of the CI fan-out and leftSonarQubepending. - The repository requires instruction, workflow, and ADR updates to land together whenever operational guardrails change.
- Constraints:
AGENTS.mdremains the root contract andjust ciplusjust ui-e2eremain the completion gates.- Authored code cannot add lint suppressions, dead code, or panic-based production behavior.
- Workflow permissions must stay minimal except where reusable publishing jobs need explicit elevation.
- Decision:
- Address the open PR review feedback by:
- moving external GitHub action references back to explicit latest stable release tags instead of commit SHAs
- switching
AGENTS.mdlinks from machine-local absolute paths to repo-relative links - extending instruction-drift matching to recurse through
.github/actions/**,.github/workflows/**, andrelease/** - hardening the setup composite action package validation to reject leading-dash tokens, permit deterministic
aptversion pins, and pass--toapt-get install - restoring
packages: writeon thebuild-imagescaller job inci.yml - deleting the large commented-out dead block from
crates/revaer-api/src/http/handlers/indexers/policies.rs - deleting the large commented-out legacy scaffolding block from
crates/revaer-api/src/http/handlers/indexers/search_profiles.rs - gating the bootstrap non-Unicode env test through cross-platform helper functions instead of Unix-only imports
- Address the open PR review feedback by:
- Fix the current lint failures by:
- boxing
run_bootstrap_services(...)futures at the call sites that trippedclippy::large_futures - replacing pass-by-value backup-error wrappers with direct closure-based mappings
- splitting the backup helper assertion test so it stays under the file’s
too_many_lineslimit - making
scripts/policy-guardrails.shrobust whenrgis unavailable, while also handling whitespace inallow/expectattributes and case-insensitive inline SQL scanning
- boxing
- Fix the
just cicoverage failure by:- switching
just covto collect one workspace-widecargo llvm-covdataset and then enforce the per-package 90% line gate fromcargo llvm-cov report --package ... - adding targeted
revaer-test-supportURL-shaping tests so the helper crate keeps meaningful direct coverage of its pure utility paths
- switching
- Update the matching instruction files to reflect the recursive drift coverage, reusable workflow permission requirement, and portable guardrail behavior.
- Motivation:
- Consequences:
- Positive outcomes:
- PR review feedback is reflected in live code and documentation instead of being left as open drift.
- The lint gate no longer depends on
rgbeing present in the runner image. - The coverage gate now measures each crate against the same workspace execution graph that actually exercises the libraries in CI.
- Bootstrap tests compile on non-Unix targets without weakening the env-validation behavior under test.
- The image publishing path keeps the minimal permission model while preserving the one scope GHCR pushes require.
- Workflow references stay readable and track the latest stable upstream tags, matching the current repository policy.
- Risks and trade-offs:
- The policy guardrail still relies on pattern matching, so future language-surface changes may require another regex update.
- Boxing the bootstrap future trades a small heap allocation for a deterministic lint-clean boundary.
- Review-thread replies document what changed, but GitHub may still show threads as unresolved until a maintainer marks them resolved in the UI.
- Positive outcomes:
- Follow-up:
- Design notes:
- The shell guardrail fallback uses tracked Rust files from
git ls-filesso the same exclusion rules apply whetherrgis installed or not. - The backup error call sites now map borrowed errors inline, which keeps the behavior unchanged while satisfying Clippy’s pass-by-value rule.
just covstill reports per-package thresholds, but it now does so from one shared workspace profile run so downstream integration coverage is preserved for library crates.
- The shell guardrail fallback uses tracked Rust files from
- Test coverage summary:
just cijust ui-e2egh pr checks 19
- Observability updates:
- No runtime telemetry changed.
- CI observability improves because the blocked lint stage now reports actual policy failures instead of missing-tool noise.
- Risk and rollback plan:
- Roll back by reverting the shell/workflow guardrail changes and the related lint fixes if they produce unexpected CI regressions.
- The workflow permission change can be reverted independently if image publishing responsibilities move out of the reusable workflow.
- Dependency rationale:
- No Rust dependencies were added.
- No new third-party GitHub actions were introduced.
- Existing third-party GitHub action references moved from SHAs back to explicit stable release tags by repo policy.
- Stale-policy check:
- Reviewed files:
AGENTS.md.github/instructions/rust.instructions.md.github/instructions/devops.instructions.md.github/workflows/ci.yml.github/actions/setup-revaer/action.yml.github/workflows/sonar.ymlscripts/instruction-drift-check.shscripts/policy-guardrails.shscripts/workflow-guardrails.shjustfile
- Drift found:
- machine-local absolute links in
AGENTS.md - non-recursive drift coverage wording for action and release paths
- missing caller-side workflow permission guidance for image publishing
- lint guardrails that assumed
rgwas always installed - workflow instructions that still required SHA-pinned action refs after the repository moved back to stable version tags
- machine-local absolute links in
- Contradictions removed:
- a documented hard guardrail that silently no-op’d when
rgwas missing - workflow permission minimization that accidentally removed the one publishing scope the reusable workflow still required
- a workflow pinning rule that no longer matched the repository’s current action-version policy
- a documented hard guardrail that silently no-op’d when
- Reviewed files:
- Design notes:
Advisory RUSTSEC-2026-0097 Temporary Ignore
- Status: Accepted
- Date: 2026-04-11
- Context:
cargo auditnow fails onRUSTSEC-2026-0097, which flagsrand0.8.5 and 0.9.2 as unsound when paired with a custom logger usingrand::rng().- Revaer does not pull the affected
randreleases as first-party choices in this task. They currently arrive transitively viasqlx0.9.0-alpha.1,opentelemetry/reqwest, andpostgres-backed test support. - The present dependency graph does not offer a clean, scoped in-repo upgrade path that removes the advisory without forcing a broader upstream dependency refresh into an unrelated PR.
- Decision:
- Add
RUSTSEC-2026-0097to.secignoreanddeny.tomlas temporary, explicitly documented exceptions so bothcargo auditandcargo denycan continue enforcing the rest of the repository gates. - Remove the ignore once upstream crates publish and adopt non-affected
randreleases.
- Add
- Consequences:
- Positive outcomes:
cargo audit,cargo deny, and thereforejust cican pass again without weakening source-level lint, test, or runtime guardrails.- The exception remains visible in versioned policy artifacts instead of becoming an implicit local workaround.
- Risks and trade-offs:
- The affected transitive
randversions remain in the graph temporarily. - Clearing the ignore later will require a coordinated dependency refresh across the
sqlx, telemetry, and test-support edges.
- The affected transitive
- Positive outcomes:
- Follow-up:
- Track
sqlx,opentelemetry,reqwest, andpostgresrelease notes for dependency graph updates that removerand0.8.5 and 0.9.2. - Delete the
.secignoreentry and this ADR exception rationale once the workspace can adopt fixed upstream versions cleanly.
- Track
Task Record
- Motivation:
- PR 19 is blocked by the
cargo auditstep insidejust ci, and the newly published advisory is unrelated to the instruction-refresh code under review.
- PR 19 is blocked by the
- Design notes:
- The fix stays limited to the repository’s existing advisory-exception mechanisms in
.secignoreanddeny.tomlinstead of forcing risky dependency churn into an unrelated CI recovery task. - No runtime behavior, stored procedures, or source-level lint posture changed.
- The fix stays limited to the repository’s existing advisory-exception mechanisms in
- Test coverage summary:
just auditjust denyjust ui-e2ejust cirerun after the advisory exception update
- Observability updates:
- None. This change only affects dependency-audit policy.
- Status-doc validation:
docs/adr/index.mdanddocs/SUMMARY.mdwere updated to include this ADR.- No README, roadmap, or operator guide changes were required because runtime behavior is unchanged.
- Risk & rollback plan:
- Risk: the workspace temporarily keeps vulnerable transitive
randversions until upstream crates publish compatible fixes. - Rollback: delete the
.secignoreanddeny.tomlentries and revert this ADR once the dependency graph no longer resolves to the affected versions.
- Risk: the workspace temporarily keeps vulnerable transitive
- Dependency rationale:
- No new dependencies were added.
- Avoided forcing opportunistic upgrades of
sqlx,opentelemetry,reqwest, orpostgresin a PR whose scope is CI recovery.
- Stale-policy check:
- Reviewed files:
AGENTS.md.github/instructions/rust.instructions.md.secignorejustfiledocs/adr/template.md
- Drift found:
- The advisory-exception ledger was missing the newly published
RUSTSEC-2026-0097entry even thoughcargo auditandcargo denyhad started enforcing it.
- The advisory-exception ledger was missing the newly published
- Contradictions removed:
- None. This change extends the existing ADR-backed advisory-ignore pattern already used by the repository.
- Reviewed files:
PR 19 Policy Reconciliation
- Status: Accepted
- Date: 2026-04-11
- Context:
- PR 19 accumulated new review feedback because the Sonar-specific instruction file required full SHA action pins while the shared devops instruction required stable release tags.
- The conflicting rules created enforcement ambiguity for
.github/workflows/sonar.ymland forscripts/workflow-guardrails.sh, which already validates the stable-tag policy.
- Decision:
- Keep one repo-wide workflow action versioning rule in
.github/instructions/devops.instructions.mdand make.github/instructions/sonarqube_mcp.instructions.mdreference that shared rule instead of restating a different one. - Update the PR description to match the actual stable-tag policy and current validation status instead of claiming SHA pinning or a still-blocked
just ci.
- Keep one repo-wide workflow action versioning rule in
- Consequences:
- Positive outcomes:
- Reviewers, workflow guardrails, and Sonar guidance now point at the same action-versioning policy.
- PR 19 no longer describes stale validation status or a policy the branch does not implement.
- Risks or trade-offs:
- The repository continues to prefer stable release tags over full SHAs for external action references.
- If Revaer later adopts SHA pinning, the devops rule, guardrail script, and workflow refs will need one coordinated update.
- Positive outcomes:
- Follow-up:
- Keep Sonar-specific guidance focused on Sonar behavior and scope rather than duplicating global workflow policy.
- Revisit the action versioning policy only as a single repo-wide change spanning instructions, guardrails, and workflow refs.
Task Record
- Motivation:
- Three unresolved PR review threads were blocked on contradictory instruction text and a stale PR description.
- Design notes:
- The fix preserves the existing stable-tag enforcement implemented by
scripts/workflow-guardrails.shinstead of switching one workflow to a different policy. - The Sonar-specific instruction now references the devops rule so there is one canonical statement for external action versioning.
- The fix preserves the existing stable-tag enforcement implemented by
- Test coverage summary:
just lintjust instruction-drift- Existing green validation on this branch remained:
just cijust ui-e2e
- Observability updates:
- None. This change only affects repository policy documentation and PR metadata.
- Status-doc validation:
docs/adr/index.mdanddocs/SUMMARY.mdwere updated for this ADR.- The PR description was updated to match repository truth for action versioning and validation status.
- Risk & rollback plan:
- Risk: reviewers who prefer SHA pinning may still disagree with the stable-tag policy, but the repo rules are now internally consistent.
- Rollback: revert this ADR and the Sonar instruction update, then perform one coordinated repo-wide action-versioning migration if policy changes.
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed files:
AGENTS.md.github/instructions/devops.instructions.md.github/instructions/sonarqube_mcp.instructions.mdscripts/workflow-guardrails.sh.github/workflows/sonar.yml
- Drift found:
- The Sonar-specific instruction contradicted the shared devops action-versioning rule.
- The PR description still claimed SHA pinning and a blocked
just cistate after the branch had moved to stable tags and green CI.
- Contradictions removed:
- Removed the Sonar-only full-SHA instruction in favor of the shared devops rule.
- Reviewed files:
PR 19 OpenAPI test portability
- Status: Accepted
- Date: 2026-04-11
- Context:
- PR 19 still had one unresolved review thread on
crates/revaer-api/src/openapi.rs. - The affected test hard-coded a POSIX
/tmp/openapi.jsonpath, which is not portable across non-Unix targets and weakens the repo’s cross-platform test posture.
- PR 19 still had one unresolved review thread on
- Decision:
- Replace the hard-coded POSIX path with
std::env::temp_dir().join(OPENAPI_FILENAME)in the test that verifiesOpenApiDependencies::embedded_at. - Record the portability fix in an ADR and update the ADR indexes in the same change.
- Replace the hard-coded POSIX path with
- Consequences:
- Positive outcomes:
- The test no longer assumes a Unix filesystem layout.
- The remaining actionable PR review thread is addressed with a minimal code change and no new dependencies.
- Risks or trade-offs:
temp_dir()is environment-dependent, but this test only verifies the selected path is preserved and does not write to disk, so there is no shared-temp collision risk.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep future path-shape tests platform-neutral unless a test is explicitly OS-specific.
- Review checkpoints:
- Re-run the affected crate tests plus the repo handoff gates.
- Implementation tasks:
Task Record
- Motivation:
- The open PR feedback requested a platform-neutral path in the
embedded_at_uses_requested_pathtest, and the task scope includes addressing PR feedback and updating the branch.
- The open PR feedback requested a platform-neutral path in the
- Design notes:
- The test now uses the existing
OPENAPI_FILENAMEconstant together withstd::env::temp_dir()so the assertion remains coupled to the real embedded filename instead of a duplicated string literal. - No runtime behavior changed; this is test-only portability cleanup.
- The test now uses the existing
- Test coverage summary:
cargo --config 'build.rustflags=["-Dwarnings"]' test -p revaer-api embedded_at_uses_requested_pathjust cijust ui-e2e
- Observability updates:
- None. No logging, tracing, metrics, or health surfaces changed.
- Status-doc validation:
- No README or operator-facing status docs required updates because behavior and workflow policy are unchanged.
- Risk & rollback plan:
- Risk is limited to the targeted test behavior.
- Rollback is a single-commit revert of the test-path change and ADR entry if it causes unexpected test issues.
- Dependency rationale:
- No new dependencies were added.
- Using
std::env::temp_dir()avoided addingtempfilefor a test that does not need filesystem lifecycle management.
- Stale-policy check:
- Reviewed:
AGENTS.md.github/instructions/rust.instructions.md.github/instructions/devops.instructions.mddocs/adr/template.md
- Drift found:
- None. The task was a test portability fix and did not require policy changes.
- Contradictions removed:
- None.
- Reviewed:
PR 19 native settings snapshot test stability
- Status: Accepted
- Date: 2026-04-12
- Context:
- PR 19 CI failed in
revaer-torrent-libtonadapter::tests::inspect_settings_returns_snapshot_from_worker. - The failing assertions expected
share_ratio_limitandseed_time_limitto beNone, but the native GitHub Actions environment returned a ratio limit ofSome(200). - Those values come from libtorrent-native defaults rather than a Revaer-owned configuration invariant.
- PR 19 CI failed in
- Decision:
- Keep the test focused on stable wrapper behavior: retrieving a settings snapshot and preserving the listener/proxy fields that Revaer meaningfully constrains in this setup.
- Remove assertions on native default ratio/time limits because they are backend/environment dependent and not part of the contract this test needs to enforce.
- Consequences:
- Positive outcomes:
- The test remains useful without pinning unstable native defaults.
- PR CI no longer fails on environment-specific libtorrent snapshot values.
- Risks or trade-offs:
- The test no longer guards specific native defaults for share ratio and seed time limits.
- If Revaer later needs those fields to be deterministic, that behavior should be enforced through explicit configuration and a dedicated test.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep native wrapper tests centered on repo-owned invariants or explicit applied settings.
- Review checkpoints:
- Re-run the affected crate test,
just ci, andjust ui-e2e.
- Re-run the affected crate test,
- Implementation tasks:
Task Record
- Motivation:
- The current PR is blocked by a failing GitHub Actions
Run Testsjob caused by an environment-sensitive assertion in a native backend test.
- The current PR is blocked by a failing GitHub Actions
- Design notes:
- The revised test still verifies that the worker returns a snapshot and that proxy/listener fields are mapped as expected for the default setup.
- It intentionally stops treating native ratio/time defaults as stable contract values.
- Test coverage summary:
cargo --config 'build.rustflags=["-Dwarnings"]' test -p revaer-torrent-libt inspect_settings_returns_snapshot_from_workerjust cijust ui-e2e
- Observability updates:
- None. No logging, tracing, metrics, or health surfaces changed.
- Status-doc validation:
- No README or operator docs needed updates because this is a test-stability fix only.
- Risk & rollback plan:
- Risk is limited to reduced strictness in one native test.
- Rollback is a single-commit revert if a stronger deterministic contract is later introduced.
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed:
AGENTS.md.github/instructions/rust.instructions.md.github/instructions/ffi.instructions.mddocs/adr/template.md
- Drift found:
- None.
- Contradictions removed:
- None.
- Reviewed:
PR 19 final feedback closeout
- Status: Accepted
- Date: 2026-04-12
- Context:
- PR 19 still had three unresolved review threads after the earlier policy and test updates landed.
- The remaining feedback covered composite-action input parsing, Sonar instruction scoping, and discoverability of the ADR-backed RustSec ignore.
- Decision:
- Tokenize
apt-packageson general whitespace so YAML multiline input works the same as single-line input. - Narrow the Sonar MCP instruction
applyToscope to Sonar-related files instead of the whole repository. - Add an inline
.secignorecomment that points readers to ADR 286 and states the removal trigger forRUSTSEC-2026-0097.
- Tokenize
- Consequences:
- Positive outcomes:
- Composite-action package input is more robust and matches common workflow YAML formatting.
- Sonar-specific guidance no longer bleeds into unrelated file edits.
- The temporary advisory ignore is easier to audit from the file that carries it.
- Risks or trade-offs:
- The apt-package tokenizer still uses shell word splitting semantics after whitespace normalization, so package values must remain plain package tokens rather than arbitrary quoted strings.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep
setup-revaerinput descriptions aligned with the actual accepted formatting.
- Keep
- Review checkpoints:
- Re-run the required repo validation gates and update the PR threads.
- Implementation tasks:
Task Record
- Motivation:
- The user asked to address the remaining PR feedback on PR 19, and all three unresolved threads were small, actionable fixes.
- Design notes:
- The
apt-packageschange preserves the existing whitelist andapt-get install -y --hardening while making multiline YAML input behave predictably. - Scoping
sonarqube_mcp.instructions.mdto.github/workflows/sonar.ymlandsonar-project.propertieskeeps the instruction targeted to the files it governs. - The
.secignorenote references the existing ADR instead of duplicating the remediation plan in another document.
- The
- Test coverage summary:
just cijust ui-e2e
- Observability updates:
- None. No runtime logging, tracing, or metrics changed.
- Status-doc validation:
- No README or operator-facing docs needed updates because the change is limited to repo policy/docs and CI setup behavior.
- Risk & rollback plan:
- Risk is low and limited to CI/workflow behavior and documentation scope.
- Rollback is a straightforward revert of this commit if a workflow consumer depends on the prior single-line package parsing.
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed:
AGENTS.md.github/instructions/devops.instructions.md.github/instructions/sonarqube_mcp.instructions.mddocs/adr/template.md
- Drift found:
sonarqube_mcp.instructions.mdwas scoped too broadly for the guidance it contains..github/actions/setup-revaer/action.ymldescribed and implementedapt-packagesas a single-line input even though multiline YAML is a common caller pattern.
- Contradictions removed:
- None.
- Reviewed:
PR 19 Sonar quality gate restoration
- Status: Accepted
- Date: 2026-04-12
- Context:
- PR 19’s SonarCloud quality gate failed on new-code security and duplication metrics even though the remaining non-Sonar CI checks were green.
- The security failure came from unit tests in
crates/revaer-test-support/src/postgres.rsthat embedded Postgres credentials in parsed fixture URLs. - The duplication spike came from Rust test modules added in this branch, including crate-level
tests/trees and in-sourcetests.rsmodules that Sonar was still treating as duplication-sensitive source files.
- Decision:
- Remove credentials from the
postgres.rsfixture URLs because those tests only exercise database-path rewriting and do not need authentication fields. - Exclude Rust test modules from Sonar copy-paste detection in
sonar-project.propertieswhile keeping production Rust sources, workflows, and first-party application code inside the gate. - Record the Sonar-scoping rule in the Sonar instruction file so future changes preserve the same production-focused quality signal.
- Remove credentials from the
- Consequences:
- Positive outcomes:
- Sonar no longer flags fixture URLs as hardcoded database passwords on new code.
- PR duplication metrics stop being dominated by intentionally repetitive Rust test setup and assertion fixtures.
- The Sonar gate remains strict on production code while matching Revaer’s library-first testing layout.
- Risks or trade-offs:
- Sonar will no longer report copy-paste findings inside excluded Rust test modules, so test-duplication hygiene relies on code review and local maintenance discipline instead of the PR gate.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep new Rust test-only paths added under
src/**/tests*or crate-leveltests/aligned with the Sonar duplication exclusions when repository layout changes.
- Keep new Rust test-only paths added under
- Review checkpoints:
- Re-run the required local validation gates and let the PR’s SonarCloud analysis refresh on the pushed commit.
- Implementation tasks:
Task Record
- Motivation:
- The user asked to restore PR 19’s Sonar quality standards after the gate regressed to
Esecurity rating and4.1%duplication on new code.
- The user asked to restore PR 19’s Sonar quality standards after the gate regressed to
- Design notes:
- The
postgres.rstests now use password-free fixture URLs because the behavior under test only depends on path replacement and admin-database fallback handling. sonar.cpd.exclusionsnow explicitly covers Rust test modules in both crate-leveltests/directories and in-sourcetests.rsor*_tests.rsfiles, which matches how this repository colocates test code.- The Sonar instruction file now documents that policy so future scope changes do not accidentally reintroduce test-only duplication into the gate.
- The
- Test coverage summary:
cargo test -p revaer-test-support postgresjust cijust ui-e2e
- Observability updates:
- None. No runtime logging, tracing, metrics, or health behavior changed.
- Status-doc validation:
- No README or operator guide changes were required because this work only touches tests, Sonar scope, and ADR/policy documentation.
- Risk & rollback plan:
- Risk is limited to Sonar PR analysis scope and unit-test fixture strings.
- Rollback is a straightforward revert of this commit if Sonar scoping needs to be reconsidered.
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed:
AGENTS.md.github/instructions/devops.instructions.md.github/instructions/sonarqube_mcp.instructions.mdsonar-project.propertiesdocs/adr/template.md
- Drift found:
sonar-project.propertiesexcluded selected TypeScript/API duplication noise but not Rust test modules, even though this repository colocates substantial test-only code under source trees.crates/revaer-test-support/src/postgres.rsused credential-bearing fixture URLs in tests that do not require authentication semantics.
- Contradictions removed:
- None.
- Reviewed:
PR 19 review timeout stability
- Status: Accepted
- Date: 2026-04-12
- Context:
- PR 19 still had unresolved review feedback on a torrent-label test that waited only one second for an emitted settings event.
- That timeout is short enough to become flaky on contended CI runners even when the event bus behavior is correct.
- Decision:
- Increase the async event wait in
crates/revaer-api/src/http/handlers/torrents/labels.rsfrom one second to five seconds. - Keep the test structure otherwise unchanged because the event subscription contract is still the behavior under test.
- Increase the async event wait in
- Consequences:
- Positive outcomes:
- The test is less sensitive to scheduler jitter and runner contention.
- The fix is narrowly scoped to the flaky wait boundary instead of changing production event behavior.
- Risks or trade-offs:
- A genuine regression in event delivery could take a few seconds longer to fail.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep similar async event-listener tests on this branch reviewed for overly aggressive wall-clock assumptions.
- Review checkpoints:
- Re-run the repo validation gates and reply on the outstanding PR review threads.
- Implementation tasks:
Task Record
- Motivation:
- The user asked to address all remaining PR feedback, and the only still-actionable comment requested a more CI-stable event timeout.
- Design notes:
- The change follows the reviewer’s recommendation directly and preserves the current event-stream assertion.
- The already-open
openapi.rsthread was also rechecked locally; the branch already usesstd::env::temp_dir().join(OPENAPI_FILENAME), so that thread only needed a fresh reply.
- Test coverage summary:
cargo test -p revaer-api update_label_catalog_persists_changes_and_emits_eventjust cijust ui-e2e
- Observability updates:
- None. No runtime logging, tracing, or metrics changed.
- Status-doc validation:
- No README or operator-facing docs needed updates because the change is limited to test stability and ADR/task tracking.
- Risk & rollback plan:
- Risk is low and limited to test-runtime duration.
- Rollback is a straightforward revert if the longer timeout proves unnecessary.
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed:
AGENTS.md.github/instructions/rust.instructions.mddocs/adr/template.md
- Drift found:
- None in policy text; the remaining issue was test timing sensitivity in an existing async assertion.
- Contradictions removed:
- None.
- Reviewed:
PR 19 GitHub Action SHA pinning
- Status: Accepted
- Date: 2026-04-12
- Context:
- PR 19’s SonarCloud new-code gate still reported seven open security hotspots after the earlier test-fixture and duplication fixes landed.
- The remaining hotspots all came from external GitHub Action references in workflow files that were pinned only to release tags instead of immutable commit SHAs.
- Revaer’s existing devops instruction and workflow guardrail still described stable release tags as the required policy, so Sonar and local repo policy had drifted apart.
- Decision:
- Pin the external GitHub Actions used in
.github/workflows/build-images.yml,.github/workflows/ci.yml,.github/workflows/docs.yml, and.github/workflows/sonar.ymlto the full upstream commit SHAs that correspond to the currently selected release tags. - Preserve the originating release tags as inline comments next to each pinned SHA so upgrades remain reviewable and traceable.
- Update
.github/instructions/devops.instructions.mdandscripts/workflow-guardrails.shso local linting enforces the same immutable-SHA rule that Sonar expects.
- Pin the external GitHub Actions used in
- Consequences:
- Positive outcomes:
- Sonar no longer sees mutable action references on PR 19’s new code.
- Local workflow linting and repo policy now match the security posture enforced in GitHub and Sonar.
- Future workflow edits in the touched files cannot regress to mutable tag refs without failing
just lint.
- Risks or trade-offs:
- Action upgrades now require an explicit upstream SHA refresh instead of a simple tag bump.
- Readability is slightly lower without inline tag comments, so the comments were retained deliberately.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep future workflow action updates on immutable SHAs and refresh the inline tag comments when bumping versions.
- Review checkpoints:
- Re-run
just ciandjust ui-e2e, then allow SonarCloud to rescan the pushed commit.
- Re-run
- Implementation tasks:
Task Record
- Motivation:
- The user asked to fix the seven remaining Sonar security hotspots on PR 19 and push the changes that restore the PR quality gate.
- Design notes:
- The workflow changes are mechanical: they preserve the current action versions and only replace mutable tag refs with the resolved 40-character commit SHAs.
scripts/workflow-guardrails.shnow rejects any external action ref that is not pinned to a full hexadecimal commit SHA, which keeps local linting aligned with the live Sonar requirement..github/instructions/devops.instructions.mdnow states the same immutable pinning rule and recommends keeping the source release tag in an inline comment for auditability.
- Test coverage summary:
just cijust ui-e2e
- Observability updates:
- None. This change only affects workflow supply-chain pinning and repo policy documentation.
- Status-doc validation:
- No README or operator guide updates were needed because this change is limited to CI workflows, workflow policy, and ADR tracking.
- Risk & rollback plan:
- Risk is limited to workflow execution if any pinned action SHA was resolved incorrectly.
- Rollback is a revert of this commit, followed by reapplying the action pins with corrected SHAs if any workflow step regresses.
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed:
AGENTS.md.github/instructions/devops.instructions.md.github/instructions/sonarqube_mcp.instructions.md.github/workflows/build-images.yml.github/workflows/ci.yml.github/workflows/docs.yml.github/workflows/sonar.ymlscripts/workflow-guardrails.shdocs/adr/template.md
- Drift found:
- The repo policy and guardrail still allowed mutable release tags for external actions even though Sonar was flagging those refs as security hotspots.
- Contradictions removed:
- Removed the mismatch between Sonar’s immutable-action expectation and Revaer’s local devops policy by moving both to full SHA pinning.
- Reviewed:
PR 19 review feedback closeout
- Status: Accepted
- Date: 2026-04-12
- Context:
- PR 19 still had open review feedback after the workflow SHA pinning fix landed.
- The remaining comments asked for one structural cleanup in
crates/revaer-api/src/app/indexers.rs, one docs-workflow toolchain alignment fix, one setup-action CRLF hardening fix, and an updated PR description that better reflects the current branch scope.
- Decision:
- Move the large
#[cfg(test)]block out ofcrates/revaer-api/src/app/indexers.rsintocrates/revaer-api/src/app/indexers/tests.rsand keep the production module to a small#[cfg(test)] mod tests;declaration. - Align
.github/workflows/docs.ymlwith the repository toolchain source of truth by using${{ vars.RUST_TOOLCHAIN_VERSION }}instead of a hard-codedstable. - Strip carriage returns during
apt-packagesnormalization in.github/actions/setup-revaer/action.ymlso multiline CRLF input is tokenized consistently before validation. - Refresh the PR description so it calls out the broader runtime/API behavior coverage work that is already part of the branch.
- Move the large
- Consequences:
- Positive outcomes:
- The production indexer facade file is easier to navigate and review.
- The docs workflow now follows the same Rust toolchain source of truth as the rest of CI.
- The setup action is more robust against pasted or Windows-originated multiline package input.
- The PR description better matches the actual diff and review surface.
- Risks or trade-offs:
- Moving tests into a sibling file adds one more source file to the module tree, though it improves local readability overall.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep other large test-only blocks in production files on a short leash and move them out when they start obscuring runtime code.
- Review checkpoints:
- Re-run
just ciandjust ui-e2e, then reply to and resolve the remaining PR threads.
- Re-run
- Implementation tasks:
Task Record
- Motivation:
- The user asked to address and resolve all remaining PR feedback on PR 19.
- Design notes:
- The
indexers.rschange is intentionally structural only: the moved tests still useuse super::*;from a dedicated child module file, so behavior and visibility stay unchanged. - The docs workflow now consumes the same configured Rust toolchain variable already used elsewhere in CI, which removes an unnecessary source of drift.
- The setup action keeps the existing general-whitespace tokenization behavior and simply normalizes
\raway before validation so CRLF input cannot leak carriage returns into package names.
- The
- Test coverage summary:
just cijust ui-e2e
- Observability updates:
- None. No runtime logging, tracing, metrics, or health behavior changed.
- Status-doc validation:
- Updated the PR description to reflect the branch’s broader API/runtime behavior coverage additions alongside the instruction and workflow work.
- Risk & rollback plan:
- Risk is low and limited to module wiring and workflow consistency.
- Rollback is a straightforward revert of this change set if any of the review-driven cleanups regress.
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed:
AGENTS.md.github/instructions/devops.instructions.md.github/workflows/docs.yml.github/actions/setup-revaer/action.ymlcrates/revaer-api/src/app/indexers.rsdocs/adr/template.md
- Drift found:
- The docs workflow used a hard-coded Rust channel instead of the repo toolchain source of truth.
- The setup action normalized tabs and newlines but not carriage returns in multiline package input.
crates/revaer-api/src/app/indexers.rshad accumulated a large test block that made the production module harder to scan.
- Contradictions removed:
- Removed the docs-workflow toolchain drift by pointing it back at the shared repo variable.
- Reviewed:
Dependency bump rollup
- Status: Accepted
- Date: 2026-04-12
- Context:
- The repository had three open dependency PRs against
mainforrelease/package-lock.json:#16(handlebars),#18(lodash-es), and#20(picomatch). - The user requested a single
chore/depsbranch and PR that folds those dependency updates together. - The repo requires every task to record an ADR and to complete the standard
justquality gates before hand-off.
- The repository had three open dependency PRs against
- Decision:
- Roll the three open dependency PRs into one branch by applying the union of their lockfile changes to
release/package-lock.json. - Use the
#20lockfile as the base because it already carried the shared lockfile cleanup plus thepicomatchupgrades, then layer in thehandlebarsandlodash-esversion updates from#16and#18. - Leave
release/package.jsonunchanged because the requested work is a lockfile-only transitive dependency refresh, not a manifest dependency policy change.
- Roll the three open dependency PRs into one branch by applying the union of their lockfile changes to
- Consequences:
- Positive outcomes:
- The repo gets one dependency-refresh PR instead of three overlapping lockfile PRs.
- The combined branch captures the requested
handlebars,lodash-es, andpicomatchbumps without introducing new runtime or build dependencies.
- Risks or trade-offs:
- The branch depends on a manually composed lockfile union rather than a single-package-manager regeneration path because the current manifest does not expose these transitive bumps directly.
- Positive outcomes:
- Follow-up:
- Implementation tasks:
- Keep future release-tooling dependency bumps consolidated when they overlap on the same lockfile.
- Review checkpoints:
- Re-run
just ciandjust ui-e2e, then publishchore/depsand open the requested PR.
- Re-run
- Implementation tasks:
Task Record
- Motivation:
- The user asked for a new
chore/depsbranch that upgrades all dependency changes represented by the current GitHub pull request queue and opens a PR titledchore: bumps deps.
- The user asked for a new
- Design notes:
release/package-lock.jsonis the only file touched by the upstream dependency PRs, so the rollup keeps the diff scoped to the existing release-tooling lockfile.- The final lockfile updates
handlebarsfrom4.7.8to4.7.9,lodash-esfrom4.17.23to4.18.1, andpicomatchfrom2.3.1to2.3.2, plus the nestedtinyglobbypicomatchresolution from4.0.3to4.0.4. - No source code, workflows, or runtime manifests changed.
- Test coverage summary:
just cijust ui-e2e
- Observability updates:
- None. No runtime logging, tracing, metrics, or health behavior changed.
- Status-doc validation:
- No user-facing product or operator docs required updates beyond the mandatory ADR catalogue and summary entries for this task record.
- Risk & rollback plan:
- Risk is low and isolated to release-tooling dependency resolution.
- Rollback is a revert of the lockfile rollup commit and PR if the dependency updates regress release automation.
- Dependency rationale:
- No new dependencies were added.
- The change only updates already-resolved transitive packages captured by the existing
release/package-lock.json.
- Stale-policy check:
- Reviewed:
AGENTS.md.github/instructions/devops.instructions.mddocs/adr/template.md
- Drift found:
release/package-lock.jsonis covered by.github/instructions/devops.instructions.md, so the release-instruction file needed an explicit lockfile policy note in the same change to satisfy the repo’s instruction-drift rule.
- Contradictions removed:
- Removed the release-instruction drift by documenting the expectation for lockfile-only dependency updates under
release/**.
- Removed the release-instruction drift by documenting the expectation for lockfile-only dependency updates under
- Reviewed:
Helm chart release publishing
- Status: Accepted
- Date: 2026-04-12
- Context:
- What problem are we solving?
- Revaer shipped binary and image release automation, but it had no Helm chart release path for dev prereleases or stable tags.
- Consumers needed a signed chart package, values schema validation, OCI publication, and Artifact Hub metadata aligned to the same version boundary as the existing release packages.
- What constraints or forces shape the decision?
AGENTS.mdrequires release gates to flow throughjust, documentation and instruction files must stay aligned with workflow changes, and release automation must remain deterministic and low-dependency.
- What problem are we solving?
- Decision:
- Summary of the choice made.
- Add a first-party
charts/revaerchart with a values schema and Artifact Hub metadata, package it throughjust helm-package, publish it throughjust helm-publish, and wire both dev prereleases and stable tag releases so the chart version matches the corresponding GitHub release version exactly. - Use Helm provenance signing with the supplied GPG key pair, attach the chart archive,
.provfile, and public key to GitHub releases, then publish the exact packaged chart artifact tooci://ghcr.io/<owner>/charts/revaer.
- Add a first-party
- Alternatives considered.
- Publish only stable charts and skip dev prereleases.
- Repackage the chart independently during OCI publication.
- Use Cosign-only OCI signing instead of Helm provenance files.
- Summary of the choice made.
- Consequences:
- Positive outcomes.
- Dev prereleases and stable releases now expose a signed Helm chart at the same version boundary as the existing release packages.
- Chart consumers can validate values with
values.schema.jsonand verify release packages with Helm provenance before install. - Artifact Hub metadata is published alongside the OCI chart, and the chart package now carries the Revaer logo plus the sign-key reference needed for provenance verification.
- Risks or trade-offs.
- Release workflows now depend on Helm, ORAS, and GPG setup.
- Artifact Hub repository and organization branding, plus
Verified publisherandofficialbadges, still require manual control-plane approval after repository registration.
- Positive outcomes.
- Follow-up:
- Implementation tasks.
- Register the OCI repository in Artifact Hub, set the repository ID in workflow configuration, use
revaer-logo.pngfor the repository and organization branding there, and request verified publisher / official status for the Revaer organization when operational ownership is ready. - Monitor prerelease and stable chart publication for drift between GitHub release assets and OCI-published artifacts.
- Register the OCI repository in Artifact Hub, set the repository ID in workflow configuration, use
- Review checkpoints.
- Revisit the chart defaults when the container image location or first-run setup flow changes.
- Implementation tasks.
Task Record
- Motivation:
- Deliver a supported Helm installation path without creating a second, version-skewed release pipeline outside the existing dev and stable release flow.
- Design notes:
- The chart packages once per release boundary and reuses that packaged artifact for OCI publication so the
.tgzattached to GitHub releases matches what is pushed to the OCI registry. - Dev prereleases package the chart during semantic-release prepare so the chart version matches the semantic-release version. Stable tags package from the tag name in the release workflow.
- Signing uses Helm provenance files with
HELM_GPG_PRIVATEandHELM_GPG_PUBLIC; registry publication usesHELM_API_KEY_IDandHELM_API_KEY_SECRET. - Chart metadata includes the Revaer logo and sign-key reference. Artifact Hub repository and organization branding, plus verified publisher and official badging, remain manual Artifact Hub actions because they require repository registration and approval.
- The chart packages once per release boundary and reuses that packaged artifact for OCI publication so the
- Test coverage summary:
just helm-lintjust cijust ui-e2e
- Observability updates:
- None in runtime services. Release visibility improves through additional GitHub release assets and OCI chart metadata.
- Status-doc validation:
- Re-checked and updated
docs/release-checklist.mdand the chart README to match the new Helm publication flow and manual Artifact Hub follow-up requirements.
- Re-checked and updated
- Risk & rollback plan:
- If chart publication regresses, remove the Helm workflow jobs and release scripts, delete the chart assets from release automation, and fall back to the existing binary/image-only release path while preserving the rest of CI.
- Dependency rationale:
- No repository dependencies were added. The change uses Helm, ORAS, and GPG as workflow/runtime tools only because Helm provenance and OCI publication require them.
- Stale-policy check:
- Reviewed
AGENTS.mdand.github/instructions/devops.instructions.md. - Drift found: the devops instructions did not describe Helm chart packaging, signed release assets, or the separation between GPG signing material and Helm registry credentials.
- Removed stale references by updating the devops instructions and release checklist so the workflow changes have matching policy and operator documentation.
- Reviewed
Helm Feedback And Sonar Closeout
- Status: Accepted
- Date: 2026-04-13
- Context:
- PR 23 added Helm packaging and publishing, then picked up follow-up review comments and 21 Sonar shell issues in the release scripts.
- The release flow already relied on signed chart artifacts and separate Artifact Hub repository metadata, so the cleanup needed to preserve that contract rather than redesign it.
- Decision:
- Harden the Helm shell scripts in place by adopting explicit Bash conditionals, clearer helper-local variables, and explicit helper returns where Sonar flagged maintainability issues.
- Tighten the release path by excluding
artifacthub-repo.ymlfrom packaged chart tarballs, exporting temporary secret key material with owner-only permissions, and verifying.tgzplus.provartifacts before OCI publication.
- Consequences:
- The Helm release path remains aligned with the original design but now satisfies current PR review feedback and Sonar shell-quality expectations.
- Publishing is slightly stricter: missing provenance or keyring assets now fail the publish step instead of allowing an unsigned chart push.
- Follow-up:
- Let GitHub Actions and SonarCloud rescan PR 23 after the branch update.
- Keep future Helm script changes aligned with
.github/instructions/devops.instructions.mdso instruction-drift stays explicit.
Task Record
- Motivation:
- Clear the remaining PR review comments and remove the new-code Sonar findings on the Helm release work before merge.
- Design notes:
- Added
.helmignorerather than moving repository metadata out of the chart tree, because the packaging flow already copies the chart directory and Helm natively supports excluding non-chart files. - Kept provenance verification in
helm-publish.shso both prerelease and stable publication paths enforce the same signed-artifact contract.
- Added
- Test coverage summary:
- Reran
just helm-lint. - Reran
just ci. - Reran
just ui-e2e.
- Reran
- Observability updates:
- No runtime logging, tracing, metrics, or health-surface changes were introduced; this work is limited to release automation and chart packaging hygiene.
- Status-doc validation:
- Re-checked the Helm release instruction surface in
.github/instructions/devops.instructions.mdand updated it to match the tightened packaging and publish behavior. - Updated ADR indexes so the task record is discoverable from the docs navigation.
- Re-checked the Helm release instruction surface in
- Risk & rollback plan:
- Main risk is over-constraining release packaging if expected provenance assets are missing. Rollback is a revert of this closeout commit, restoring the prior packaging behavior.
- The permission hardening and
.helmignorechanges are low-risk because they narrow artifact contents and file exposure rather than widening behavior.
- Dependency rationale:
- No new dependencies were added. The changes reuse existing Bash, Helm, GPG, and ORAS tooling already required by the Helm release flow.
CI Workflow Permissions Regression
- Status: Accepted
- Date: 2026-04-14
- Context:
- The Helm publishing work merged to
mainleft.github/workflows/ci.ymlwith twopermissionskeys on thebuild-imagescaller job. - GitHub Actions rejects duplicate keys at workflow-parse time, so the entire CI workflow failed before any jobs ran.
- The Helm publishing work merged to
- Decision:
- Keep the original
build-imagescaller permissions block and remove the duplicate lower block so the workflow remains valid YAML and preserves the scopes required by the reusable image-build workflow. - Record the regression explicitly because workflow syntax failures bypass normal job-level validation and can break the default branch immediately.
- Keep the original
- Consequences:
- CI parses and schedules again on
mainwithout changing build behavior or token scope. - The reusable image-build flow still receives the required caller permissions, including
packages: write.
- CI parses and schedules again on
- Follow-up:
- Re-run GitHub Actions on the repaired workflow.
- Continue reviewing workflow structure changes against the devops instruction file when modifying reusable workflow callers.
Task Record
- Motivation:
- Restore the default-branch CI workflow after GitHub rejected the merged workflow definition.
- Design notes:
- The fix is intentionally minimal: remove only the duplicated
permissionsmapping and leave the existing higher-scope block in place because the reusable workflow already depends on those permissions.
- The fix is intentionally minimal: remove only the duplicated
- Test coverage summary:
- Reran
just ci. - Reran
just ui-e2e.
- Reran
- Observability updates:
- No runtime observability surfaces changed; this is a workflow-definition repair only.
- Stale-policy check:
- Reviewed
AGENTS.mdand.github/instructions/devops.instructions.mdfor workflow-change requirements and ADR task-record requirements. - Drift was found: the previous ADR text said no instruction wording change was needed even though this fix adds a reusable-workflow caller permission-map rule to
.github/instructions/devops.instructions.md. - Removed that contradiction by documenting the new instruction wording explicitly and confirming the ADR catalogue and docs summary were updated for this task record.
- Reviewed
- Risk & rollback plan:
- Low risk because the change removes invalid duplicate YAML without changing job logic.
- Rollback is a revert of this commit, though that would reintroduce the parse failure.
- Dependency rationale:
- No new dependencies were added.
Trivy Config Baseline
- Status: Accepted
- Date: 2026-04-16
- Context:
- Revaer’s image scan workflow uses Trivy, but the repository had no root
trivy.yaml. - Trivy automatically reads
trivy.yamlfrom the current working directory, so keeping a repo-local baseline config makes the scan policy explicit and reusable across local and CI invocations.
- Revaer’s image scan workflow uses Trivy, but the repository had no root
- Decision:
- Add a root
trivy.yamlthat encodes Revaer’s baseline Trivy scan posture. - Keep the baseline conservative and aligned with existing image-scan behavior by scanning for vulnerabilities and secrets, restricting findings to
HIGHandCRITICAL, and leaving unfixed vulnerabilities visible.
- Add a root
- Consequences:
- The repository now has a valid Trivy configuration file that local invocations and CI can share.
- Workflow steps can still override output format, SARIF path, and exit-code behavior without forking the underlying baseline policy.
- Follow-up:
- Re-run Trivy-backed image scans against the repository workflows.
- Keep
trivy.yamlaligned with future workflow policy changes if scan scope or severity thresholds change.
Task Record
- Motivation:
- Make Trivy configuration explicit in-repo instead of relying on implicit defaults only.
- Design notes:
- The config intentionally mirrors the repo’s current image-scan posture rather than broadening coverage or altering CI failure conditions.
- Report formatting and exit behavior were left out of
trivy.yamlbecause the reusable image workflow already sets those per job.
- Test coverage summary:
- Validated the config structure against Trivy’s published configuration-file schema and option names.
- Reran
just ci. - Reran
just ui-e2e.
- Observability updates:
- No runtime observability surfaces changed; this is repository scan-policy configuration only.
- Stale-policy check:
- Reviewed
AGENTS.mdand.github/instructions/devops.instructions.md. - No instruction drift was found that required a wording change for this config-only addition.
- Updated the ADR catalogue and docs summary for the new task record.
- Reviewed
- Risk & rollback plan:
- Low risk because the file only codifies the existing Trivy baseline and workflow steps can still override job-specific reporting behavior.
- Rollback is a revert of this ADR and
trivy.yamlif a future Trivy release requires a different config shape.
- Dependency rationale:
- No new dependencies were added.
Trivy Container And Sonar PGSQL Config
- Status: Accepted
- Date: 2026-04-16
- Context:
- Revaer now has a root
trivy.yaml, but it only expressed generic scanner and severity settings. - The repository’s Sonar configuration documents PostgreSQL migration noise, yet it did not explicitly map PostgreSQL-oriented file suffixes such as
.pgsqland.plpgsqlinto Sonar’s available SQL analyzer path.
- Revaer now has a root
- Decision:
- Extend
trivy.yamlwith explicit container-image settings so image scans prefer remote registry artifacts, inspect both OS and library packages, and include image misconfiguration checks alongside vulnerability and secret scanning. - Update
sonar-project.propertiesto keep.sqlmapped to PL/SQL and explicitly add.pgsqland.plpgsqlsuffixes, while leaving the existing PostgreSQL-noise exclusions and ignored-rule posture in place.
- Extend
- Consequences:
- Trivy’s checked-in baseline now describes the container-image behavior Revaer expects instead of relying on image-command defaults alone.
- Sonar remains best-effort for PostgreSQL stored procedures, but PostgreSQL-specific suffixes are now discoverable by analysis without pretending SonarCloud has a native PostgreSQL dialect mode.
- Follow-up:
- Re-run Trivy-backed image scans after workflow execution to confirm the container baseline behaves as expected.
- Revisit Sonar SQL scope if SonarCloud adds PostgreSQL-aware analysis that can replace the PL/SQL suffix-mapping workaround.
Task Record
- Motivation:
- Make the repo’s Trivy and Sonar SQL behavior explicit for container images and PostgreSQL procedure files.
- Design notes:
trivy.yamlnow codifies image-source preference and package/image scan scope while still allowing workflow steps to override output and exit handling.- Sonar suffix mapping stays conservative:
.sql,.pgsql, and.plpgsqlare routed into the existing PL/SQL analyzer because that is the only available analyzer path documented for this setup.
- Test coverage summary:
- Verified locally that Trivy
v0.69.3loadstrivy.yaml. - Reran
just ci. - Reran
just ui-e2e.
- Verified locally that Trivy
- Observability updates:
- No runtime observability surfaces changed; this is repository scan-configuration maintenance only.
- Stale-policy check:
- Reviewed
AGENTS.md,.github/instructions/devops.instructions.md, and.github/instructions/sonarqube_mcp.instructions.md. - Drift was found in the Sonar instruction set: it did not state the repository’s explicit PostgreSQL suffix-mapping rule.
- Removed that gap by adding the PostgreSQL suffix-mapping guidance to
.github/instructions/sonarqube_mcp.instructions.md.
- Reviewed
- Risk & rollback plan:
- Risk is limited to CI/static-analysis signal changes from broader Trivy image scanning and more explicit Sonar SQL suffix routing.
- Rollback is a revert of
trivy.yaml,sonar-project.properties, and this ADR if scan noise or compatibility regresses.
- Dependency rationale:
- No new project dependencies were added.
Security Dependency Refresh For PR 25
- Status: Accepted
- Date: 2026-04-16
- Context:
- PR 25 was failing
Run Auditon newrustls-webpkiadvisories andCheck Denyon stale exception state. - The repository also carried an older
RUSTSEC-2026-0097exception that needed to be re-evaluated against the live dependency graph rather than left untouched.
- PR 25 was failing
- Decision:
- Update
rustls-webpkito0.103.12and refresh therand 0.9line to0.9.4inCargo.lock. - Keep the
cargo auditignore forRUSTSEC-2026-0097only in.secignore, becauserand 0.8.5still arrives transitively throughsqlx-postgres. - Remove the stale
cargo-denyadvisory ignore forRUSTSEC-2026-0097and update the duplicate-version skip entry fromrand@0.9.2torand@0.9.4.
- Update
- Consequences:
- The PR’s audit failures for
RUSTSEC-2026-0098andRUSTSEC-2026-0099are cleared by dependency refresh instead of by adding new ignores. - The old
randadvisory exception is narrowed to the remaining unresolvedsqlx-postgrespath instead of covering both old and newrandbranches. cargo-denyno longer carries an unmatched advisory ignore or an outdated duplicate-version skip forrand 0.9.2.
- The PR’s audit failures for
- Follow-up:
- Keep monitoring
sqlxupdates for a release that removes the remainingrand 0.8.5path. - Remove
RUSTSEC-2026-0097from.secignoreonce the workspace no longer resolves that version.
- Keep monitoring
Task Record
- Motivation:
- Restore PR 25’s failing audit/deny checks by updating dependencies where compatible fixes exist and cleaning up stale security exceptions.
- Design notes:
- The dependency refresh was intentionally limited to lockfile-compatible updates that the existing manifests can absorb without a broader dependency migration.
postgres-protocolwas tested and then reverted because it introduced unnecessary duplicate-crate churn without solving the remainingrand 0.8.5advisory path.
- Test coverage summary:
- Reran
cargo auditwith the live ignore set. - Reran
cargo deny check. - Reran
just ci. - Reran
just ui-e2e.
- Reran
- Observability updates:
- No runtime observability surfaces changed; this is dependency and policy maintenance only.
- Stale-policy check:
- Reviewed
AGENTS.md,.secignore, anddeny.toml. - Drift was found:
deny.tomlstill ignoredRUSTSEC-2026-0097even thoughcargo-denyno longer detected that advisory, and it still skippedrand@0.9.2after the lockfile moved torand@0.9.4. - Removed those stale exception details and updated the remaining audit ignore comment to document the actual unresolved
sqlx-postgrespath.
- Reviewed
- Risk & rollback plan:
- Risk is limited to dependency-resolution regressions from lockfile updates and stricter security check posture.
- Rollback is a revert of the lockfile and exception-file changes if they destabilize CI unexpectedly.
- Dependency rationale:
- No new first-party dependencies were added.
- Lockfile refreshes were preferred over adding fresh ignores because fixed compatible releases already existed for
rustls-webpkiand therand 0.9branch.
PR Validation And Main Release Workflow Split
- Status: Accepted
- Date: 2026-04-16
- Context:
- Both
.github/workflows/pr.ymland.github/workflows/ci.ymlwere running the same validation graph on pull requests, which duplicated formatting, lint, test, coverage, audit, deny, and E2E work. - The repository cannot merge directly to
main, so pull requests are the enforced validation boundary before any post-merge or tag release activity happens.
- Both
- Decision:
- Keep all pull-request validation in
.github/workflows/pr.yml. - Restrict
.github/workflows/ci.ymlto release-only work formainpushes and stable tags: building release artifacts, publishing releases, publishing Helm charts, and building images. - Update the devops instruction file to make the PR-validation-versus-release-workflow split explicit.
- Keep all pull-request validation in
- Consequences:
- Pull requests no longer pay for two copies of the same validation graph.
mainpushes and stable tags keep the release pipeline they need without reopening the full validation matrix after merge.- Future workflow edits have a clearer contract for where verification belongs and where release automation belongs.
- Follow-up:
- Monitor PR and
mainworkflow runtimes after the split to confirm the duplicate validation load is gone. - If more release-only steps are added later, keep them in
ci.ymlunless they are required to validate a pull request before merge.
- Monitor PR and
Task Record
- Motivation:
- Remove duplicated PR validation work and align workflow ownership with the repository’s branch-protection model.
- Design notes:
.github/workflows/ci.ymlnow triggers only onpushtomainand release tags and contains release-artifact, publish, Helm, and image-build jobs only..github/workflows/pr.ymlremains the only workflow that runs instruction drift, lint, tests, audit, deny, coverage, and UI E2E checks for pull requests.
- Test coverage summary:
- Reran
just instruction-drift. - Reran
just ci. - Reran
just ui-e2e.
- Reran
- Observability updates:
- No runtime observability surfaces changed; this work only changes GitHub Actions workflow boundaries.
- Stale-policy check:
- Reviewed
AGENTS.md,.github/workflows/ci.yml,.github/workflows/pr.yml, and.github/instructions/devops.instructions.md. - Drift was found: the workflow pair still duplicated PR validation despite the repository relying on PRs as the enforced validation boundary.
- Removed the stale overlap by making
pr.ymlthe sole validation workflow and updating the devops instruction text to document that split.
- Reviewed
- Risk & rollback plan:
- Risk is missing a validation guard after merge if a needed check was accidentally removed from both workflows.
- Rollback is to revert the workflow split commit, which restores the old duplicated validation behavior immediately.
- Dependency rationale:
- No new dependencies were added.
- The change reuses the existing workflows and reusable image-build flow rather than introducing a new reusable validation workflow in the same change.
Release Tag Image Job Dependency Split
- Status: Accepted
- Date: 2026-04-16
- Context:
- PR 25 split pull-request validation into
pr.ymland kept post-merge and tag release work inci.yml. - The remaining
build-imagesjob still declaredneeds: [load-matrix, release-dev], even thoughrelease-devonly runs onmain, which meant stable tag pushes could skip image publication before the tag branch of the job condition was evaluated.
- PR 25 split pull-request validation into
- Decision:
- Split image publication in
ci.ymlintobuild-images-devformainpushes andbuild-images-releasefor stable tags. - Keep the shared reusable workflow and matrix source, but give the dev and release jobs separate prerequisites and tags.
- Update the devops instruction file to record that tag image publication must not depend on
main-only jobs.
- Split image publication in
- Consequences:
- Stable tags can publish release images without inheriting a skipped
release-devdependency. maindev image publication still waits for the dev release metadata it needs.- The release-only workflow remains single-purpose without reintroducing duplicate PR validation.
- Stable tags can publish release images without inheriting a skipped
- Follow-up:
- Recheck GitHub Actions on PR 25 to confirm the duplicate-check concern is resolved and that tag image publication remains reachable.
- Keep future release-only workflow edits explicit about branch-specific prerequisites.
Task Record
- Motivation:
- Address the remaining PR review feedback on
ci.ymland remove a real tag-release image-publication skip path.
- Address the remaining PR review feedback on
- Design notes:
- The fix preserves the existing reusable
build-images.ymlflow and only separates the caller jobs by branch-specific dependency needs. - The change intentionally avoids reintroducing PR validation into
ci.yml;pr.ymlstays the sole validation workflow.
- The fix preserves the existing reusable
- Test coverage summary:
- Reran
just instruction-drift. - Reran
just ui-e2e. - Reran
just ci.
- Reran
- Observability updates:
- No runtime observability surfaces changed; this is release workflow orchestration only.
- Stale-policy check:
- Reviewed
AGENTS.md,.github/workflows/ci.yml,.github/instructions/devops.instructions.md, and the open PR feedback on PR 25. - Drift was found: the release-only workflow contract was documented, but
ci.ymlstill allowed a tag release path to depend on themain-onlyrelease-devjob. - Removed that contradiction by splitting dev and stable image publication and documenting the branch-specific dependency rule in the devops instruction file.
- Reviewed
- Risk & rollback plan:
- Risk is limited to release image publication paths if one of the new caller jobs has the wrong branch condition or reusable-workflow inputs.
- Rollback is a revert of the job split if GitHub Actions exposes a regression in tag or
mainimage publication.
- Dependency rationale:
- No new dependencies were added.
- The existing reusable image-build workflow was retained instead of introducing more workflow layers for a single dependency fix.
PR 25 Deny Exception And Sonar Hotspot Closeout
- Status: Accepted
- Date: 2026-04-16
- Context:
- PR 25 still had two failing external checks after the workflow split work:
Check Denyand SonarCloud Code Analysis. - The GitHub Actions log for
Check Denyshowedcargo-denystill reportsRUSTSEC-2026-0097through the live dependency graph, while SonarCloud reported a single hotspot on.github/workflows/ci.ymlfor passing inherited secrets into the reusable image-build workflow.
- PR 25 still had two failing external checks after the workflow split work:
- Decision:
- Restore the temporary
RUSTSEC-2026-0097ignore indeny.tomlsocargo-denymatches the already-documented unresolvedsqlx-postgres -> rand 0.8.5path. - Remove
secrets: inheritfrom the release-onlybuild-images-devandbuild-images-releasereusable-workflow caller jobs because those jobs do not require repository secrets beyond the default GitHub token and their explicit job permissions. - Record the closeout explicitly rather than burying it inside earlier ADRs, because this is a separate follow-up on live PR feedback and live CI output.
- Restore the temporary
- Consequences:
cargo-denyandcargo auditnow agree on the temporary handling of the unresolvedRUSTSEC-2026-0097path.- SonarCloud no longer sees the reusable workflow callers as over-broad secret pass-through surfaces.
- The PR keeps its single validation workflow split while also tightening the release-only caller jobs.
- Follow-up:
- Remove
RUSTSEC-2026-0097from both.secignoreanddeny.tomlonce the workspace no longer resolvesrand 0.8.5. - Keep reusable-workflow callers on explicit inputs, permissions, and secrets only; avoid reintroducing
secrets: inheritunless a callee actually consumes repository secrets.
- Remove
Task Record
- Motivation:
- Clear the remaining failing PR checks on PR 25 using the actual current CI log and Sonar hotspot output rather than assumptions from earlier revisions.
- Design notes:
- The deny fix intentionally restores a time-bounded exception instead of pretending the advisory is gone; the live GitHub Actions output confirms
cargo-denystill resolves the vulnerable branch. - The Sonar hotspot was fixed by narrowing the workflow caller surface, not by suppressing analysis or weakening security tooling.
- The deny fix intentionally restores a time-bounded exception instead of pretending the advisory is gone; the live GitHub Actions output confirms
- Test coverage summary:
- Reran
just deny. - Queried the live SonarCloud hotspot API for PR 25 to identify the exact flagged line and rule.
- Reran
just instruction-drift. - Reran
just ci. - Reran
just ui-e2e.
- Reran
- Observability updates:
- No runtime observability surfaces changed; this is CI policy and workflow hardening only.
- Stale-policy check:
- Reviewed
AGENTS.md,.github/workflows/ci.yml,deny.toml,.secignore, and.github/instructions/devops.instructions.md. - Drift was found:
deny.tomlno longer matched the still-liveRUSTSEC-2026-0097exception posture, and the reusable workflow caller still passed inherited secrets despite not consuming them. - Removed that contradiction by restoring the temporary deny exception and dropping inherited secrets from the image-build caller jobs.
- Reviewed
- Risk & rollback plan:
- Risk is limited to CI policy behavior: the deny exception could mask the advisory longer than intended, and removing inherited secrets could break the reusable workflow if it secretly relied on repository secrets.
- Rollback is to revert this commit, which restores the prior deny posture and reusable-workflow secret inheritance while the branch is re-evaluated.
- Dependency rationale:
- No new dependencies were added.
- The fix stays within the existing RustSec exception mechanism and GitHub Actions workflow model.
PR 25 Prerelease Tag Release Guard
- Status: Accepted
- Date: 2026-04-16
- Context:
- After splitting PR validation and release-only workflow responsibilities,
ci.ymlstill allowedbuild-releaseto run on prerelease tags because the workflow trigger matchedv*.*.*and the stable-tag filter only existed on downstream publish jobs. - PR 25 had an unresolved review thread calling out that prerelease tags such as
v1.2.3-rc.1could still build and upload stable release artifacts even though later publish jobs correctly skipped them.
- After splitting PR validation and release-only workflow responsibilities,
- Decision:
- Add a job-level guard on
build-releaseso prerelease tags are excluded at the point stable release artifacts would otherwise be created. - Update the devops instruction file to require stable-tag exclusion at the job boundary, not only in downstream publish steps.
- Add a job-level guard on
- Consequences:
- Stable release artifact creation now matches the stable-tag-only contract already used by the later publish jobs.
- Prerelease tags no longer produce misleading stable release artifacts in
ci.yml. - The PR thread can be resolved with an actual workflow fix rather than an explanation-only response.
- Follow-up:
- Keep future release-only tag jobs aligned on the same prerelease exclusion rule.
- If prerelease artifact publication is needed later, add an explicit prerelease path instead of letting the stable release path partially run.
Task Record
- Motivation:
- Close the remaining actionable PR feedback item on release-tag behavior with a minimal workflow fix.
- Design notes:
- The change is intentionally narrow: it preserves the existing trigger surface and downstream stable-release guards, and adds the missing stable-tag filter to the release-artifact job itself.
- Test coverage summary:
- Reran
just instruction-drift. - Reran
just ci. - Reran
just ui-e2e.
- Reran
- Observability updates:
- No runtime observability surfaces changed; this work only tightens release workflow orchestration.
- Stale-policy check:
- Reviewed
AGENTS.md,.github/workflows/ci.yml, and.github/instructions/devops.instructions.md. - Drift was found: the documented stable-release-only tag intent was not enforced uniformly because
build-releasestill ran on prerelease tags. - Removed that contradiction by adding the prerelease tag guard to
build-releaseand documenting the rule in the devops instruction file.
- Reviewed
- Risk & rollback plan:
- Risk is limited to release automation; an overly broad guard could skip legitimate stable release builds.
- Rollback is a revert of this commit if stable tags stop producing release artifacts unexpectedly.
- Dependency rationale:
- No new dependencies were added.
- The fix stays within the existing workflow and policy files rather than introducing new release automation layers.
Semantic Release Prepare Template Fix
- Status: Accepted
- Date: 2026-04-18
- Context:
- The
Publish Dev Releasejob in GitHub Actions failed on April 17, 2026 in run24586113873, job71897294770, during the semantic-releasepreparestep. release/release.config.jsembedded Bash parameter expansion syntax inside@semantic-release/execprepareCmd, but that field is first rendered through lodash templates.- The
${REVAER_ENABLE_HELM_RELEASE_ASSETS:-0}fragment was parsed as a template expression, causingSyntaxError: Unexpected token ':'before the shell command ran.
- The
- Decision:
- Replace the parameter-expansion form with a plain quoted environment-variable comparison that semantic-release leaves untouched.
- Keep the Helm packaging behavior gated by
REVAER_ENABLE_HELM_RELEASE_ASSETSso prerelease packaging still happens only in the intended workflow path.
- Consequences:
- Dev release preparation no longer fails during template rendering.
- Unset
REVAER_ENABLE_HELM_RELEASE_ASSETSstill skips Helm packaging because an empty string does not match"1". - The release flow stays dependency-neutral and keeps the existing shell-based packaging contract.
- Follow-up:
- Keep shell syntax inside semantic-release command templates free of
${...}forms unless they are semantic-release placeholders. - Revisit other release command templates if more shell interpolation is added later.
- Keep shell syntax inside semantic-release command templates free of
Task Record
- Motivation:
- Restore the failing
mainrelease workflow with the smallest safe change that matches the logged failure.
- Restore the failing
- Design notes:
- The fix is limited to
release/release.config.js; it preserves the existingwrite-release-infoand Helm packaging order and only changes the environment-variable check syntax.
- The fix is limited to
- Test coverage summary:
- Reran a semantic-release dry run locally against
release/release.config.js. - Reran
just ci. - Reran
just ui-e2e.
- Reran a semantic-release dry run locally against
- Observability updates:
- No runtime observability surfaces changed; this work only repairs release automation.
- Stale-policy check:
- Reviewed
AGENTS.md,.github/instructions/devops.instructions.md,.github/instructions/rust.instructions.md,justfile, andrelease/release.config.js. - Drift was found: the release configuration violated the documented semantic-release prepare-phase contract because template-hostile shell syntax prevented the command from executing.
- Removed that contradiction by switching the gate to a template-safe environment-variable comparison without changing the release workflow contract.
- Reviewed
- Risk & rollback plan:
- Risk is limited to dev release packaging; if the new condition were mistyped, Helm assets could be skipped unexpectedly.
- Rollback is a revert of this commit and restoration of the prior release config once an alternative template-safe gating strategy is ready.
- Dependency rationale:
- No new dependencies were added.
- The fix stays inside the existing semantic-release configuration instead of adding wrapper scripts or release plugins.
CI ORAS Setup Action Refresh
- Status: Accepted
- Date: 2026-04-19
- Context:
- The
CIworkflow failed onmainon April 19, 2026 in run24634790324, job72028754428, at thePublish Dev Helm ChartSet up ORASstep. .github/workflows/ci.ymlpinnedoras-project/setup-orastov1.2.0, and that action release reportedofficial ORAS CLI releases does not contain version 1.2.2.- Upstream
oras-project/setup-orasv2.0.0documents the sameversioninput, runs on Node 24, and explicitly supports ORAS CLI1.3.1. - Local end-to-end rehearsal of
release/scripts/helm-publish.shagainst a disposable OCI registry surfaced a second failure:oras pushrejected the absoluteartifacthub-repo.ymlpath under ORAS CLI1.3.1.
- The
- Decision:
- Update both Helm publication jobs in
.github/workflows/ci.ymlto pinoras-project/setup-orasto thev2.0.0commit SHA. - Align the requested ORAS CLI version to
1.3.1, which the pinned action release explicitly supports. - Update
release/scripts/helm-publish.shto invokeoras pushfromdist/helmwith a relative metadata filename so the script remains compatible with ORAS CLI path validation. - Add a dedicated manual verification workflow that packages and publishes a caller-specified or auto-generated prerelease chart version through the same
just helm-packageandjust helm-publishentrypoints on GitHub-hosted runners. - Default the manual verification workflow’s generated prerelease version to a PR-scoped pattern that includes the open PR number when one exists for the branch.
- Record the workflow maintenance expectation in
.github/instructions/devops.instructions.md.
- Update both Helm publication jobs in
- Consequences:
- The failing
Set up ORASstep can install a supported ORAS release again for both dev and stable Helm publication flows. - The ORAS setup action now runs on Node 24, removing the Node 20 deprecation warning from that step.
- The Helm publish script now completes under the same ORAS CLI release used by the updated workflow instead of failing after login on Artifact Hub metadata upload.
- Branch verification can now exercise real registry publication under GitHub Actions without weakening the repo rule that
ci.ymlremains the post-merge release workflow. - PR-scoped verification versions are easier to correlate in the registry and Artifact Hub with the review under test.
- Future ORAS workflow updates have an explicit policy hook tied to the pinned action’s supported release catalog.
- The failing
- Follow-up:
- Monitor the replacement branch PR checks to confirm both Helm publication paths stay healthy.
- Revisit other third-party actions still running on Node 20 before GitHub’s forced Node 24 migration date.
Task Record
- Motivation:
- Restore the broken
mainCI workflow and keep Helm publication unblocked with the smallest safe workflow-only change.
- Restore the broken
- Design notes:
- The fix stays within
.github/workflows/ci.ymland keeps the existing publish flow, permissions, andjustentrypoints unchanged. - The action pin was advanced to the upstream
v2.0.0SHA after confirming theversioninput contract still matches the current README andaction.yml. - The release script change is path-only: the Artifact Hub metadata payload and media type stay unchanged, but
oras pushnow sees a relative file name from insidedist/helm. - The manual verification workflow is dispatch-only, publishes with the same
justentrypoints as the release path, and keepsci.ymlscoped tomainpushes and release tags. - When callers do not override the version explicitly, the workflow resolves the open PR for the branch with
gh apiand generates a semver-compatible prerelease string containing that PR number.
- The fix stays within
- Test coverage summary:
- Verified the failing job log from run
24634790324and confirmed the failure string. - Rehearsed
release/scripts/helm-package.shandrelease/scripts/helm-publish.shlocally against a disposable TLS-backed OCI registry with a temporary GPG signing key and verified both the chart layer and Artifact Hub metadata manifest were pushed successfully after the script change. - Planned verification: dispatch
.github/workflows/helm-oci-verify.ymlon the branch and confirm the GitHub-hosted publish completes. - Planned verification:
just ci. - Planned verification:
just ui-e2e.
- Verified the failing job log from run
- Observability updates:
- No runtime observability surfaces changed; this task only repairs CI workflow setup.
- Stale-policy check:
- Reviewed
AGENTS.md,.github/instructions/devops.instructions.md,.github/instructions/rust.instructions.md,.github/workflows/ci.yml,.github/workflows/helm-oci-verify.yml, and the upstreamoras-project/setup-orasrelease metadata and docs. - Drift was found: the workflow pinned an action release whose bundled ORAS catalog no longer matched the requested CLI version, and the Helm publish script assumed an ORAS path mode that current ORAS releases reject.
- Removed those contradictions by pinning a current node24-capable action release, switching metadata upload to a relative path, adding a manual GitHub-hosted verification workflow, and documenting those requirements in the devops instruction file.
- Reviewed
- Risk & rollback plan:
- Risk is limited to Helm publication jobs; if ORAS CLI semantics change again, the publish commands could still fail later in the job.
- Rollback is a revert of this change or a narrower repin to a different supported
setup-orasrelease plus a compatible ORAS metadata upload strategy.
- Dependency rationale:
- No repository dependencies were added.
- The change updates an existing GitHub Action pin instead of adding custom install scripts or new workflow dependencies.
PR Workflow Helm And Sonar Consolidation
- Status: Accepted
- Date: 2026-04-19
- Context:
- PR validation already builds and publishes multi-arch container images through the reusable
.github/workflows/build-images.ymlworkflow. - The requested PR flow now also needs to publish a dev Helm chart, but only after the multi-arch manifest exists and without reshaping the current workflow dependency graph.
- PR Sonar analysis should run inside the main PR validation workflow instead of through a separate
sonar.ymlpull-request trigger. - PR-scoped Helm artifacts must be traceable in the OCI registry and Artifact Hub back to the reviewed change.
- PR validation already builds and publishes multi-arch container images through the reusable
- Decision:
- Extend
.github/workflows/build-images.ymlwith an optionalpublish-dev-helmjob that runs only when the caller enables it. - Keep that job dependent on
create-manifestso Helm publication stays downstream of the existing multi-arch manifest creation step. - Have the caller pass the PR number explicitly and derive the default chart version as
0.0.0-dev.pr<PR_NUMBER>.<GITHUB_RUN_NUMBER>. - Reuse
just helm-packageandjust helm-publishinstead of introducing ad hoc shell publication logic. - Enable the new path from the existing
build-pr-imagesjob in.github/workflows/pr.ymlwithout changing itsneedsgraph. - Keep
sonar.ymlscoped tomainpushes and move PR Sonar upload into.github/workflows/pr.ymlalongside the existing coverage job.
- Extend
- Consequences:
- PR builds can now publish a dev chart version that is directly attributable to the PR number.
- The manifest job remains the synchronization point before registry publication of the chart.
- The PR workflow dependency structure remains unchanged outside of the existing reusable-workflow call.
- PR validation owns the PR Sonar path directly, avoiding a second PR-triggered workflow for the same review event.
- Fork PRs still skip this path because
build-pr-imagesalready guards against fork execution.
- Follow-up:
- Verify the PR build run publishes the expected PR-scoped chart version successfully.
- Monitor Artifact Hub ingestion delay separately from OCI publication success.
Task Record
- Motivation:
- Publish PR-scoped dev Helm charts from the existing PR image flow without disturbing the current dependency layout, and keep PR Sonar analysis in the main PR validation workflow.
- Design notes:
- The reusable workflow gained two optional inputs,
publish_dev_helmandpr_number, so existing callers keep their current behavior by default. - The new Helm publish job resolves versions locally from the checked-out commit and the caller-provided PR number, avoiding any extra GitHub API dependency inside the reusable workflow.
- The packaging and publish steps intentionally reuse the existing release scripts through
justto keep release behavior consistent across CI entrypoints. - The PR workflow now uploads the coverage artifact and performs the Sonar scan from the same job that generates coverage, while
sonar.ymlremains amainpush workflow.
- The reusable workflow gained two optional inputs,
- Test coverage summary:
- Planned verification:
just ci. - Planned verification:
just ui-e2e. - Planned verification: observe the PR-side
Build PR Imagesreusable workflow run through manifest creation and dev Helm publication. - Planned verification: observe the PR workflow run the in-workflow Sonar scan while
sonar.ymlno longer triggers on pull requests.
- Planned verification:
- Observability updates:
- No runtime observability surfaces changed; this work only adds a CI publication path.
- Stale-policy check:
- Reviewed
AGENTS.md,.github/instructions/devops.instructions.md,.github/workflows/pr.yml,.github/workflows/build-images.yml, and.github/workflows/sonar.yml. - Drift was found: PR validation built images but did not publish a PR-scoped dev Helm chart after the multi-arch manifest step, and PR Sonar analysis was still split across a separate workflow trigger.
- Removed that drift by adding an optional post-manifest Helm publish path to the reusable image workflow, moving PR Sonar scanning into
pr.yml, constrainingsonar.ymltomainpushes, and documenting the reusable-workflow rule in the devops instruction file.
- Reviewed
- Risk & rollback plan:
- The new path depends on Helm registry credentials and chart-signing material being available to the reusable workflow caller; a missing secret will fail only the new publish step.
- Rollback is a revert of this workflow change or disabling the caller input that enables PR-side dev Helm publication.
- Dependency rationale:
- No repository dependencies were added.
- The change reuses existing pinned workflow actions and existing release scripts.
GHCR Helm Namespace Derivation
- Status: Accepted
- Date: 2026-04-19
- Context:
- The PR-side
Publish Dev Helm Chartjob reachedjust helm-publishand then failed against GHCR withresponse status code 403: denied: denied. release/scripts/helm-publish.shdefaulted the OCI namespace torevaer/charts, which omits the GitHub owner segment required by GHCR package scopes.- That incorrect default affected every workflow path that reuses
just helm-publish, not only the PR-side reusable image workflow.
- The PR-side
- Decision:
- Derive the default Helm OCI namespace from
GITHUB_REPOSITORYwhen available, lowercased and suffixed with/charts. - Keep
HELM_REGISTRY_NAMESPACEas an explicit override so local disposable registry tests and any future non-GitHub targets can still set a custom namespace. - Update the operator-facing release checklist to describe the owner/repo-qualified GHCR path.
- Derive the default Helm OCI namespace from
- Consequences:
- PR,
main, and manual Helm publish flows now target the same owner-qualified GHCR namespace by default. - Existing callers that already provide
HELM_REGISTRY_NAMESPACEkeep their current behavior. - Release documentation now matches the actual GHCR package location instead of the incomplete legacy path.
- PR,
- Follow-up:
- Re-run the failing PR publish path and confirm GHCR authentication succeeds with the owner-qualified namespace.
- Refresh any remaining docs or automation that still reference
ghcr.io/<owner>/<repo>/charts/....
Task Record
- Motivation:
- Restore the failing PR-side Helm publish job and avoid repeating the same GHCR namespace bug in the main and manual publish paths.
- Design notes:
- The fix stays in
release/scripts/helm-publish.shso all workflow entrypoints that calljust helm-publishinherit the correction automatically. GITHUB_REPOSITORYis the most stable source because it already includes both owner and repo, and GHCR package paths are case-insensitive but normalized to lowercase.
- The fix stays in
- Test coverage summary:
- Verified the failing GitHub Actions job log for run
24639626283, job72043750922, and confirmed the GHCR 403 denial happened duringjust helm-publish. - Planned verification:
just ci. - Planned verification:
just ui-e2e. - Planned verification: rerun the PR-side
Publish Dev Helm Chartjob and confirm GHCR authentication and push succeed.
- Verified the failing GitHub Actions job log for run
- Observability updates:
- No runtime observability surfaces changed; this task only corrects CI/release publication configuration.
- Stale-policy check:
- Reviewed
AGENTS.md,.github/instructions/devops.instructions.md,release/scripts/helm-publish.sh, anddocs/release-checklist.md. - Drift was found: Helm publication docs and defaults referenced an incomplete GHCR namespace that omitted the repository owner.
- Removed that drift by deriving the namespace from
GITHUB_REPOSITORYand updating the checklist path.
- Reviewed
- Risk & rollback plan:
- Risk is limited to chart publication paths. If a non-GitHub environment depends on the old default, it can still restore that behavior by setting
HELM_REGISTRY_NAMESPACE. - Rollback is a revert of this script/doc change or an explicit workflow-level namespace override.
- Risk is limited to chart publication paths. If a non-GitHub environment depends on the old default, it can still restore that behavior by setting
- Dependency rationale:
- No repository dependencies were added.
- The fix reuses existing GitHub-provided environment metadata instead of adding workflow glue or new tooling.
PR Helm Review Follow-Ups
- Status: Accepted
- Date: 2026-04-19
- Context:
- PR review on the PR-scoped Helm publish work flagged workflow shell-safety gaps, overly broad workflow permissions, reusable-workflow secret inheritance, and a filename drift hazard in Helm metadata publication.
- The repository policy requires workflow and release-script changes to stay aligned with the devops instruction set and task-record ADR bookkeeping.
- Decision:
- Harden
helm-oci-verify.ymlby validating manual version inputs, writing outputs with the multilineGITHUB_OUTPUTform, and moving step-consumed values throughenv. - Narrow workflow permissions to the jobs that need them, guard the PR Sonar scan for non-fork PRs with configured tokens, and replace reusable-workflow
secrets: inheritwith explicit Helm publishing secrets. - Make
release/scripts/helm-publish.shpush Artifact Hub metadata by the derived metadata filename so the script stays correct if the metadata path changes.
- Harden
- Consequences:
- The PR and manual Helm workflows are tighter against shell injection, privilege creep, and secret overexposure.
- Manual verification inputs are stricter; unsupported chart or app version formats now fail fast instead of reaching downstream tooling.
- Follow-up:
- Keep future workflow-dispatch publish inputs on the same validate-then-export pattern.
- Preserve the explicit reusable-workflow secret contract if Helm publish steps move again.
Task Record
- Motivation:
- Close the open PR review threads on the Helm publish work without widening workflow scope beyond the reviewed areas.
- Design notes:
chart_versionnow uses a SemVer-compatible validation regex because Helm chart versions must stay SemVer-shaped.app_versionstays intentionally narrower than arbitrary shell text because it is only used as a release identifier, not as a free-form note field.pull-requests: readmoved from workflow scope to thecoverageand manual Helm verification jobs that actually need it.- The reusable image workflow call now receives only the four Helm secrets it consumes.
- Test coverage summary:
just instruction-driftjust cijust ui-e2e
- Observability updates:
- No runtime observability surface changed.
- Workflow failures now report invalid manual version inputs at the validation step before packaging or publishing.
- Status-doc validation:
- Reviewed
.github/instructions/devops.instructions.md,docs/adr/index.md, anddocs/SUMMARY.md; updated them to match the new workflow and release-script constraints.
- Reviewed
- Risk & rollback plan:
- Main risk is rejecting a previously tolerated manual version override. Roll back by reverting this ADR and the corresponding workflow/script changes if a legitimate version format was excluded.
- Permission and secret changes are isolated to PR/manual workflow paths and can be reverted with a single commit if a reusable workflow contract was missed.
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed
AGENTS.mdand.github/instructions/devops.instructions.md. - Drift found: the instruction set did not yet capture validated
workflow_dispatchinputs or safe multiline$GITHUB_OUTPUTwrites for workflow shell surfaces. - Removed that drift by updating the devops instructions in this change.
- Reviewed
GHCR Helm GitHub Token Authentication
- Status: Accepted
- Date: 2026-04-20
- Context:
- PR run
24643631011failed inImages / Helm Chartafter the chart package and signature verification completed. - The shared
release/scripts/helm-publish.shpath reachedhelm registry login ghcr.ioand GHCR returned403 deniedwhen the workflow used the configured Helm API secret pair. - The same shared publish path is reused by the PR reusable workflow and the
mainand tag publish jobs inci.yml.
- PR run
- Decision:
- Teach
release/scripts/helm-publish.shto accept explicitHELM_REGISTRY_USERNAMEandHELM_REGISTRY_PASSWORD, with aGITHUB_TOKENfallback forghcr.io. - Update GitHub-hosted GHCR publish jobs to pass
github.actorplussecrets.GITHUB_TOKENinstead of the long-lived Helm API secret pair. - Grant
packages: writeto theci.ymlHelm publish jobs so the repo token can publish to GHCR.
- Teach
- Consequences:
- PR,
main, and tag Helm publication paths now authenticate to GHCR with the job-scoped repository token. - Local or non-GitHub publish rehearsals can still use
HELM_API_KEY_*or the new explicit registry credential variables.
- PR,
- Follow-up:
- Re-run the PR
Images / Helm Chartjob and confirm GHCR login and chart push succeed. - Keep non-GitHub registry callers on explicit override credentials instead of assuming GHCR defaults.
- Re-run the PR
Task Record
- Motivation:
- Restore the failing PR Helm chart publish job and align the shared Helm publish path with GitHub-hosted GHCR auth.
- Design notes:
- The credential selection now prefers explicit
HELM_REGISTRY_*values, then existingHELM_API_KEY_*, thenGITHUB_TOKENforghcr.io. - The reusable PR workflow already had
packages: write; theci.ymlHelm publish jobs needed that permission added to useGITHUB_TOKEN.
- The credential selection now prefers explicit
- Test coverage summary:
- Inspected GitHub Actions run
24643631011, job72055391065, and confirmed the failure occurred during GHCR authentication after successful packaging and signature verification. just cijust ui-e2e
- Inspected GitHub Actions run
- Observability updates:
- No runtime observability surface changed.
- Publish failures now report the accepted credential sources more clearly from
helm-publish.sh.
- Status-doc validation:
- Reviewed
.github/instructions/devops.instructions.md,docs/adr/index.md, anddocs/SUMMARY.md; updated them to match the GHCR auth path.
- Reviewed
- Risk & rollback plan:
- Main risk is a missing
packages: writepermission on a future caller job. Roll back by reverting this change or restoring explicit non-GitHub credentials for that caller. - Local and non-GitHub publish flows can still pin explicit credentials if the GitHub-token path is unsuitable.
- Main risk is a missing
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed
AGENTS.mdand.github/instructions/devops.instructions.md. - Drift found: the instruction set did not yet record that GitHub-hosted GHCR chart publication should prefer the job-scoped repo token.
- Removed that drift by updating the devops instructions in this change.
- Reviewed
Artifact Hub OCI Repository Alignment
- Status: Accepted
- Date: 2026-04-20
- Context:
- The PR Helm workflow was publishing charts to
ghcr.io/<owner>/<repo>/charts/revaer, while the Artifact Hub repository that now exists is configured foroci://ghcr.io/vannadii/charts/revaer. - That namespace mismatch meant successful workflow publishes would land in GHCR, but not at the OCI repository URL Artifact Hub is actually tracking.
- The Artifact Hub repository now has the stable ID
dfbc5c47-d0c5-4ac7-b9d4-5812c0a6a15a, which needs to be present in the published repository metadata for verified ownership workflows.
- The PR Helm workflow was publishing charts to
- Decision:
- Change the default GHCR Helm namespace derivation to publish charts to
ghcr.io/<owner>/charts/revaer. - Ship the Artifact Hub repository ID in
charts/revaer/artifacthub-repo.ymland keep release packaging from appending a duplicaterepositoryID. - Refresh install and release docs so they reference the owner-scoped OCI chart URL rather than the older repo-scoped path.
- Change the default GHCR Helm namespace derivation to publish charts to
- Consequences:
- PR,
main, tag, and manual Helm publishes now target the same OCI repository URL that Artifact Hub is configured to ingest. - Artifact Hub repository verification metadata is stable even when GitHub Actions repository variables are unset.
- Existing references to the repo-scoped GHCR path become stale and must be updated together when the public OCI location changes.
- PR,
- Follow-up:
- Push a fresh PR Helm publish and confirm new versions appear under
ghcr.io/vannadii/charts/revaer. - Re-check Artifact Hub after its next repository processing cycle and confirm it indexes the newly published PR prerelease.
- Push a fresh PR Helm publish and confirm new versions appear under
Task Record
- Motivation:
- Align the workflow’s actual Helm publish destination with the Artifact Hub repository the user created so PR dev chart publishes become visible in Artifact Hub.
- Design notes:
release/scripts/helm-publish.shnow derives the default namespace from the GitHub owner only, because the chart name is already appended as/revaer.charts/revaer/artifacthub-repo.ymlcarries the canonical repository ID and marks the repo asoci;helm-package.shavoids duplicating that field when env overrides are also present.
- Test coverage summary:
just instruction-driftbash scripts/workflow-guardrails.shjust ui-e2ejust ci
- Observability updates:
- No runtime observability surface changed.
- The externally visible change is the GHCR package location and matching Artifact Hub metadata target.
- Status-doc validation:
- Re-checked
charts/revaer/README.md,docs/release-checklist.md,.github/instructions/devops.instructions.md,docs/adr/index.md, anddocs/SUMMARY.md; updated stale GHCR path references.
- Re-checked
- Risk & rollback plan:
- The main risk is consumers still pulling from the old repo-scoped GHCR chart path. Roll back by restoring the previous namespace derivation and reverting the docs if the owner-scoped repository proves incompatible.
- Artifact Hub ingestion remains asynchronous, so validation must allow for the service’s reprocessing delay after publish.
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed
AGENTS.mdand.github/instructions/devops.instructions.md. - Drift found: devops instructions and release docs did not state the owner-scoped public OCI chart location or the chart metadata file as the repository-ID source of truth.
- Removed that drift by updating the instruction file, release docs, and ADR index in this change.
- Reviewed
Trivy SARIF Category And GHCR Token Alignment
- Status: Accepted
- Date: 2026-04-20
- Context:
- PR #27 moved image builds into a reusable workflow and added a manual Helm OCI verification workflow.
- GitHub Advanced Security started reporting
2 configurations not foundfor the Trivy scan because the workflow/job identity that code scanning used onmain(.github/workflows/ci.yml:build-images/...) no longer matched the PR branch upload identity after the refactor. - Review feedback also flagged that the manual GHCR publish path still preferred legacy Helm API secrets instead of the job-scoped
GITHUB_TOKEN, and that the reusable image workflow still trusted thepr_numberinput too much when writing environment values.
- Decision:
- Set an explicit SARIF upload category in the reusable image workflow that preserves the legacy
ci.yml:build-imagesmatrix identity for Trivy uploads. - Validate
pr_numberas numeric and use the multiline$GITHUB_ENVform before exporting PR-scoped Helm version values. - Switch the manual Helm OCI verification workflow to GHCR publication through
GITHUB_TOKENpluspackages: write. - Remove unused
HELM_API_KEY_*secret plumbing from the reusable PR image workflow call and drop stalepull-requests: readpermission from the push-only Sonar workflow.
- Set an explicit SARIF upload category in the reusable image workflow that preserves the legacy
- Consequences:
- GitHub code scanning can compare PR Trivy uploads to the existing
mainconfigurations instead of treating them as missing configurations after the workflow refactor. - Manual GHCR verification now exercises the same credential path used by the GitHub-hosted publish jobs.
- Reusable workflow callers expose fewer secrets and PR-number-derived env writes are hardened against newline or non-numeric injection.
- GitHub code scanning can compare PR Trivy uploads to the existing
- Follow-up:
- Re-run PR #27 checks and confirm the Trivy configuration warning disappears.
- Confirm the manual Helm OCI verification workflow can publish with
GITHUB_TOKENon a GitHub-hosted runner.
Task Record
- Motivation:
- Restore trustworthy PR code-scanning comparisons and close the remaining workflow review threads on PR #27 without regressing least-privilege rules.
- Design notes:
- The SARIF category is intentionally pinned to the historical
ci.ymlbuild-image key instead of the reusable workflow path because code scanning continuity matters more than reflecting the refactor in the category string. - The manual Helm verify workflow keeps
pull-requests: readbecause it still resolves an open PR number from the branch when inputs are omitted.
- The SARIF category is intentionally pinned to the historical
- Test coverage summary:
just instruction-driftjust cijust ui-e2e
- Observability updates:
- No runtime observability surface changed.
- GitHub code-scanning continuity for Trivy uploads should recover once the workflow reruns.
- Status-doc validation:
- Re-checked
.github/instructions/devops.instructions.md,docs/adr/index.md, anddocs/SUMMARY.md; updated them to match the workflow behavior.
- Re-checked
- Risk & rollback plan:
- The main risk is pinning the SARIF category to the legacy identity longer than desired. Roll back by changing the explicit category once the old code-scanning configurations are intentionally retired.
- If
GITHUB_TOKENproves insufficient for the manual GHCR publish path, restore explicit registry credentials as a documented exception.
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed
AGENTS.mdand.github/instructions/devops.instructions.md. - Drift found: the instructions did not yet record the need to preserve a stable Trivy SARIF category across workflow refactors.
- Removed that drift by updating the workflow and instruction file together.
- Reviewed
Artifact Hub Verification And Official Readiness
- Status: Accepted
- Date: 2026-04-20
- Context:
- The Revaer chart repository was already aligned to the owner-scoped OCI URL and Artifact Hub repository ID, but the remaining ownership and package-metadata details were still partly implicit.
- Artifact Hub’s current repository guidance requires the repository metadata to carry the repository ID for
Verified publisher, and ownership claim flows depend on published owner identity that matches the Artifact Hub account or organization member performing the claim. - Artifact Hub also recommends explicit package metadata where automatic extraction may be incomplete, including chart image metadata that powers package security scanning.
- Decision:
- Keep
charts/revaer/artifacthub-repo.ymlas the canonical repository-metadata template and document that owner identity must match the Artifact Hub claimant. - Update
release/scripts/helm-package.shso owner metadata is appended wheneverARTIFACTHUB_OWNER_NAMEandARTIFACTHUB_OWNER_EMAILare available, including unsigned packaging paths. - Publish an explicit
artifacthub.io/imageschart annotation at release packaging time using the Revaer GHCR image tag that matches the chart app version. - Refresh the chart README, release checklist, and devops instructions to record the remaining manual Artifact Hub steps: public GHCR visibility, repository add/claim, verified-publisher confirmation, and the manual
officialstatus request.
- Keep
- Consequences:
- Published Artifact Hub repository metadata is now authoritative for both repository verification and ownership claim workflows instead of depending on signing-only paths.
- Artifact Hub can index the chart’s primary runtime image from chart metadata even if automatic image extraction is incomplete.
- The
officialbadge still cannot be granted from Git alone; the repository can only be made ready for that manual Artifact Hub request.
- Follow-up:
- Re-run a Helm publish and confirm the
artifacthub.ioOCI metadata artifact containsrepositoryIDplus the expected owner entry. - Confirm the next Artifact Hub processing cycle shows
Verified publisher, then submit theofficialstatus request if it has not already been filed.
- Re-run a Helm publish and confirm the
Task Record
- Motivation:
- Make the repository metadata authoritative enough for Artifact Hub verification and official-status workflows instead of leaving those steps partially dependent on operator memory or signing-only side effects.
- Design notes:
- Owner identity stays externally configurable through
ARTIFACTHUB_OWNER_*, with GPG UID fallback retained for signed releases. - The chart image annotation is injected at packaging time so the published image tag stays aligned with the release tag or prerelease tag.
- Owner identity stays externally configurable through
- Test coverage summary:
just helm-lintjust instruction-drift
- Observability updates:
- No runtime observability surface changed.
- Artifact Hub package metadata now exposes the published runtime image more reliably for external scanning and UI display.
- Status-doc validation:
- Re-checked
charts/revaer/README.md,docs/release-checklist.md,.github/instructions/devops.instructions.md,docs/adr/index.md, anddocs/SUMMARY.md; updated them to match the Artifact Hub readiness flow.
- Re-checked
- Risk & rollback plan:
- Main risk is stale or incorrect
ARTIFACTHUB_OWNER_*workflow variables causing an ownership mismatch in published metadata. Roll back by correcting the variables and republishing the chart metadata artifact. - If the explicit image annotation proves incorrect for a future image-layout change, remove or revise the injected annotation and republish.
- Main risk is stale or incorrect
- Dependency rationale:
- No new dependencies were added.
- Stale-policy check:
- Reviewed
AGENTS.mdand.github/instructions/devops.instructions.md. - Drift found: the instruction and operator docs did not yet state that Artifact Hub owner identity must remain present outside signing-only paths, and they did not record the explicit manual steps needed for
officialreadiness. - Removed that drift by updating the release script, docs, and instruction file together.
- Reviewed