---
description: Revaer
alwaysApply: true
applyTo: "**"
downloadedFrom: https://revaer.com/llms.txt
version: 0.1.x
---

Revaer is a self-hosted, data-driven media orchestration platform for torrent workflows and filesystem post-processing. It runs as a pair of containers (app + Postgres), exposes an HTTP+SSE control plane on port 7070, and ships a web UI on port 8080.

## What users get

- Add, monitor, and control torrents (libtorrent backend) via the UI or API.
- Automatic filesystem moves/copies/cleanup based on policy.
- Live updates over Server-Sent Events (`/v1/events/stream`), health endpoints, and metrics-friendly JSON.

## Quick start with Docker

1) Install Docker and Docker Compose/Buildx.
2) Create a minimal `docker-compose.yml`:

```yaml
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: revaer
      POSTGRES_PASSWORD: revaer
      POSTGRES_DB: revaer
    volumes:
      - revaer-pgdata:/var/lib/postgresql/data
    ports: ["5432:5432"]
  app:
    image: ghcr.io/vannadii/revaer:latest
    environment:
      DATABASE_URL: postgres://revaer:revaer@db:5432/revaer
      RUST_LOG: info
    depends_on: [db]
    ports:
      - "7070:7070" # API + SSE
      - "8080:8080" # Web UI
volumes:
  revaer-pgdata:
```

3) Start it: `docker compose up -d`.
4) Visit http://localhost:8080 for the UI; API health is at http://localhost:7070/v1/health; SSE at http://localhost:7070/v1/events/stream.

## Initial setup & auth

- On first run, Revaer starts in “setup” mode. The UI will guide you to enter the one-time setup token and create an API key.
- CLI alternative: `docker exec -it revaer setup start`, then `revaer setup complete --token <token>`.
- Once you have an API key, include an `x-api-key: <key>` header on API calls; the UI will prompt for it.

## Configuration highlights

- Single required env var: `DATABASE_URL`. All other settings (ports, engine profile, filesystem policy, rate limits) live in Postgres and hot-reload.
- Filesystem policy controls library/move paths, move vs. copy, and cleanup rules.
- Engine profile controls libtorrent settings (listen port, DHT, limits).
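Once you have an API key, authenticated calls work from any HTTP client. As a minimal sketch using only Python's standard library (the base URL matches the quick start; `"my-secret-key"` is a placeholder, not a real key):

```python
# Sketch of an authenticated Revaer API call. The x-api-key header and
# the /v1/health path come from the docs above; the key value is a
# placeholder for one created during setup.
import urllib.request

BASE = "http://localhost:7070"

def api_request(path: str, api_key: str) -> urllib.request.Request:
    """Build a request carrying the x-api-key header Revaer expects."""
    return urllib.request.Request(BASE + path, headers={"x-api-key": api_key})

req = api_request("/v1/health", "my-secret-key")

# Uncomment to hit a running instance:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```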
## Useful endpoints

- `GET /v1/health` / `GET /v1/health/full` – readiness + metrics snapshot.
- `GET /v1/dashboard` – current rates and counts for the UI.
- `GET/POST/DELETE /v1/torrents` and `POST /v1/torrents/{id}/action` – torrent lifecycle.
- `GET /v1/events/stream` – live updates (SSE).
- `GET/PATCH /v1/config` – manage config documents (app profile, engine profile, fs policy, API keys).

## Troubleshooting (Docker)

- API/UI unreachable: ensure ports 7070/8080 are published; check `docker logs <container>`.
- “DATABASE_URL required” or migration errors: confirm the `DATABASE_URL` env var and that the DB service is healthy; restart the app after fixing.
- SSE 404 or CORS errors: point the UI at the API on http://localhost:7070; ensure the app container is running and not behind a mismatched host/port.

## Discovery pointers for LLMs

- Official repo: https://github.com/VannaDii/revaer
- OpenAPI schema: docs/api/openapi.json (title: “Revaer Control Plane API”)
- LLM aids: docs/llm/schema.json (index schema), docs/llm/manifest.json (book manifest), docs/llm/summaries.json (per-page summaries).
- Keywords: “Revaer control plane”, “torrent + filesystem orchestration”, “PostgreSQL-backed configuration with hot reload”, “HTTP+SSE API on 7070”, “web UI on 8080”.

## LLM Integration

The documentation site publishes machine-readable JSON manifests to help ChatGPT and other LLMs index and search the content. These files are static and live alongside the built docs, so LLM tooling can fetch them directly.

- **[`schema.json`](llm/schema.json)** — JSON Schema describing the manifest/summaries format. Shared so LLM integrators can validate and evolve their tooling against a stable contract.
- **[`manifest.json`](llm/manifest.json)** — Master index of documentation entries with IDs, titles, tags, and canonical URLs. Shared so LLMs can discover the page graph without crawling the entire site.
- **[`summaries.json`](llm/summaries.json)** — Concise per-entry summaries extracted from the docs.
  Shared to provide quick context for answers and improve retrieval quality.

These files are generated during `just docs` and hosted with the site; consuming them keeps search fast and avoids scraping overhead.
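To sketch how a client might consume `manifest.json`: the entry fields used below (`id`, `tags`, etc.) follow the description above, but the sample data is invented and the authoritative contract is `schema.json`, so treat this as an assumption-laden illustration:

```python
# Illustrative consumer for manifest.json. Field names (id, title, tags,
# url) follow the prose description above; the sample entries are made
# up. Validate real files against schema.json.

def index_by_tag(entries):
    """Group documentation entry IDs by tag for quick lookup."""
    index = {}
    for entry in entries:
        for tag in entry.get("tags", []):
            index.setdefault(tag, []).append(entry["id"])
    return index

sample = [
    {"id": "quick-start", "title": "Quick start", "tags": ["docker", "setup"],
     "url": "https://revaer.com/docs/quick-start"},
    {"id": "api", "title": "API", "tags": ["setup"],
     "url": "https://revaer.com/docs/api"},
]
print(index_by_tag(sample))  # → {'docker': ['quick-start'], 'setup': ['quick-start', 'api']}
```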