Merge pull request #1134 from dyptan-io/beets-httpshell

beets: add simple http server to execute beets commands remotely
aptalca 2026-03-10 15:19:07 -04:00 committed by GitHub
commit e0aa7d42cd
27 changed files with 442 additions and 110 deletions


@@ -3,4 +3,4 @@
.github
.gitattributes
READMETEMPLATE.md
README.md

.gitattributes vendored

@@ -14,4 +14,4 @@
*.pdf diff=astextplain
*.PDF diff=astextplain
*.rtf diff=astextplain
*.RTF diff=astextplain


@@ -12,10 +12,10 @@ on:
env:
  GITHUB_REPO: "linuxserver/docker-mods" #don't modify
  ENDPOINT: "linuxserver/mods" #don't modify
-  BASEIMAGE: "replace_baseimage" #replace
-  MODNAME: "replace_modname" #replace
  BASEIMAGE: "beets" #replace
  MODNAME: "httpshell" #replace
  MOD_VERSION: ${{ inputs.mod_version }} #don't modify
-  MULTI_ARCH: "true" #set to false if not needed
  MULTI_ARCH: "false" #set to false if not needed
jobs:
  set-vars:
jobs:
set-vars:
@@ -61,4 +61,4 @@ jobs:
MODNAME: ${{ needs.set-vars.outputs.MODNAME }}
MULTI_ARCH: ${{ needs.set-vars.outputs.MULTI_ARCH }}
MOD_VERSION: ${{ needs.set-vars.outputs.MOD_VERSION }}
MOD_VERSION_OVERRIDE: ${{ needs.set-vars.outputs.MOD_VERSION_OVERRIDE }}


@@ -7,4 +7,4 @@ on:
- '**/check'
jobs:
permission_check:
uses: linuxserver/github-workflows/.github/workflows/init-svc-executable-permissions.yml@v1

.gitignore vendored

@@ -40,4 +40,4 @@ $RECYCLE.BIN/
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk


@@ -2,7 +2,7 @@
FROM scratch

-LABEL maintainer="username"
LABEL maintainer="dyptan-io"

# copy local files
COPY root/ /


@@ -1,33 +0,0 @@
# syntax=docker/dockerfile:1

## Buildstage ##
FROM ghcr.io/linuxserver/baseimage-alpine:3.20 AS buildstage

RUN \
  echo "**** install packages ****" && \
  apk add --no-cache \
    curl && \
  echo "**** grab rclone ****" && \
  mkdir -p /root-layer && \
  if [ $(uname -m) = "x86_64" ]; then \
    echo "Downloading x86_64 tarball" && \
    curl -o \
      /root-layer/rclone.deb -L \
      "https://downloads.rclone.org/v1.47.0/rclone-v1.47.0-linux-amd64.deb"; \
  elif [ $(uname -m) = "aarch64" ]; then \
    echo "Downloading aarch64 tarball" && \
    curl -o \
      /root-layer/rclone.deb -L \
      "https://downloads.rclone.org/v1.47.0/rclone-v1.47.0-linux-arm64.deb"; \
  fi

# copy local files
COPY root/ /root-layer/

## Single layer deployed image ##
FROM scratch
LABEL maintainer="username"

# Add files from buildstage
COPY --from=buildstage /root-layer/ /

README.md

@@ -1,25 +1,219 @@
-# Rsync - Docker mod for openssh-server
-This mod adds rsync to openssh-server, to be installed/updated during container start.
-In openssh-server docker arguments, set an environment variable `DOCKER_MODS=linuxserver/mods:openssh-server-rsync`
-If adding multiple mods, enter them in an array separated by `|`, such as `DOCKER_MODS=linuxserver/mods:openssh-server-rsync|linuxserver/mods:openssh-server-mod2`
-# Mod creation instructions
-* Fork the repo, create a new branch based on the branch `template`.
-* Edit the `Dockerfile` for the mod. `Dockerfile.complex` is only an example and included for reference; it should be deleted when done.
-* Inspect the `root` folder contents. Edit, add and remove as necessary.
-* After all init scripts and services are created, run `find ./ -path "./.git" -prune -o \( -name "run" -o -name "finish" -o -name "check" \) -not -perm -u=x,g=x,o=x -print -exec chmod +x {} +` to fix permissions.
-* Edit this readme with pertinent info, delete these instructions.
-* Finally edit the `.github/workflows/BuildImage.yml`. Customize the vars for `BASEIMAGE` and `MODNAME`. Set the versioning logic and `MULTI_ARCH` if needed.
-* Ask the team to create a new branch named `<baseimagename>-<modname>`. Baseimage should be the name of the image the mod will be applied to. The new branch will be based on the `template` branch.
-* Submit PR against the branch created by the team.
-## Tips and tricks

# beets-httpshell

A [LinuxServer.io Docker Mod](https://github.com/linuxserver/docker-mods) for the [beets](https://github.com/linuxserver/docker-beets) container that adds a lightweight HTTP API to execute `beet` CLI commands remotely.

The mod runs a Python 3 HTTP server (no extra dependencies) that maps URL paths to beet subcommands. Any beet command can be invoked — there is no hardcoded command list.

> **⚠️ Security Warning:** The HTTP API has no authentication or authorization. Any client that can reach the server can execute arbitrary beet commands. It is your responsibility to ensure the API is not exposed to untrusted networks — use firewall rules, Docker network isolation, or a reverse proxy with authentication to restrict access.

## Installation

1. Configure your selected Docker container with the port, volume, and environment settings from the *original container documentation* here: **[linuxserver/beets](https://hub.docker.com/r/linuxserver/beets "Beets Docker container")**
2. Add the **DOCKER_MODS** environment variable to your `compose.yml` file or `docker run` command, as follows:
   - `DOCKER_MODS=linuxserver/mods:beets-httpshell`
3. Map the HTTP API port so it is accessible from outside the container. The default port is `5555` (configurable via `HTTPSHELL_PORT`). Add `5555:5555` to your port mappings:

<details>
<summary>Example Docker Compose YAML Configuration</summary>

```yaml
---
services:
  beets:
    image: lscr.io/linuxserver/beets:latest
    container_name: beets
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - DOCKER_MODS=linuxserver/mods:beets-httpshell
      - HTTPSHELL_PORT=5555
    volumes:
      - /path/to/config:/config
      - /path/to/music:/music
      - /path/to/downloads:/downloads
    ports:
      - 8337:8337
      - 5555:5555
    restart: unless-stopped
```
</details>
-* Some images have helpers built in, these images are currently:
-* [Openvscode-server](https://github.com/linuxserver/docker-openvscode-server/pull/10/files)
-* [Code-server](https://github.com/linuxserver/docker-code-server/pull/95)
<details>
<summary>Example Docker Run Command</summary>
```bash
docker run -d \
  --name=beets \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e DOCKER_MODS=linuxserver/mods:beets-httpshell \
  -e HTTPSHELL_PORT=5555 \
  -p 8337:8337 \
  -p 5555:5555 \
  -v /path/to/config:/config \
  -v /path/to/music:/music \
  -v /path/to/downloads:/downloads \
  --restart unless-stopped \
  lscr.io/linuxserver/beets:latest
```
</details>
4. Start the container.
### Environment Variables
| Variable | Default | Description |
|---|---|---|
| `BEET_CMD` | `/lsiopy/bin/beet` | Path to the `beet` binary |
| `BEET_CONFIG` | `/config/config.yaml` | Path to the beets config file |
| `HTTPSHELL_PORT` | `5555` | Port the HTTP server listens on |
| `HTTPSHELL_BLOCKING_TIMEOUT` | `30` | Seconds to wait for the lock in `block` mode before the job is queued |
## API Usage
### Execute a command
```
POST /<command>
Content-Type: application/json
["arg1", "arg2", ...]
```
The URL path is the beet subcommand. The optional `?mode=` query parameter selects the execution mode (`parallel`, `queue`, or `block`; default `parallel`). The JSON body is an array of string arguments; an empty body or `[]` means no arguments. A `GET` request to `/` or `/health` returns server status and the current queue size.
**Response** (200 OK):
```json
{
  "command": "stats",
  "args": [],
  "exit_code": 0,
  "stdout": "Tracks: 1234\nTotal time: 3.2 days\n...",
  "stderr": ""
}
```
### Examples
```bash
# Get library stats (default parallel mode)
curl -X POST http://localhost:5555/stats

# List all tracks by an artist
curl -X POST http://localhost:5555/list \
  -H "Content-Type: application/json" \
  -d '["artist:Radiohead"]'

# Import music in parallel (returns result when done, runs in parallel with other requests)
curl -X POST http://localhost:5555/import \
  -H "Content-Type: application/json" \
  -d '["--quiet", "--incremental", "/downloads/music"]'

# Queue an import (returns 202 immediately, runs in background)
curl -X POST 'http://localhost:5555/import?mode=queue' \
  -H "Content-Type: application/json" \
  -d '["--quiet", "/downloads/music"]'

# Update the library
curl -X POST http://localhost:5555/update

# Get beets configuration
curl -X POST http://localhost:5555/config

# Remove tracks matching a query (force, delete files)
curl -X POST http://localhost:5555/remove \
  -H "Content-Type: application/json" \
  -d '["artist:test", "-d", "-f"]'

# Move items to a new directory
curl -X POST http://localhost:5555/move \
  -H "Content-Type: application/json" \
  -d '["artist:Radiohead", "-d", "/music/favorites"]'
```
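The same calls can be made from Python's standard library. Below is a minimal client sketch; the helper name `beet_request` and the `http://localhost:5555` default are illustrative, not part of the mod.

```python
import json
import urllib.request

def beet_request(command, args=None, mode="parallel", base_url="http://localhost:5555"):
    """POST a beet subcommand to the httpshell API and return the parsed JSON reply."""
    data = json.dumps(args or []).encode()
    req = urllib.request.Request(
        f"{base_url}/{command}?mode={mode}",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running container):
#   result = beet_request("list", ["artist:Radiohead"])
#   print(result["exit_code"], result["stdout"])
```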
## Execution Modes
The execution mode is controlled per request via the `?mode=` query parameter; if omitted, it defaults to `parallel`.
### `parallel` (default)
Each request runs its command immediately in its own thread. Multiple commands execute in parallel. The response is returned when the command finishes.
```
Request 1 ──▶ [runs command] ──▶ 200 response
Request 2 ──▶ [runs command] ──▶ 200 response (runs in parallel)
```
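The parallel behavior comes from Python's `ThreadingHTTPServer`, which the mod's server is built on: each connection is handled in its own thread. A self-contained sketch of that property (the handler and timings are illustrative):

```python
import threading
import time
import urllib.request
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.5)  # stand-in for a long-running beet command
        body = b"done"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

# Port 0 lets the OS pick a free port for the demo.
server = ThreadingHTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

start = time.monotonic()
threads = [
    threading.Thread(
        target=urllib.request.urlopen,
        args=(f"http://127.0.0.1:{server.server_port}/",),
    )
    for _ in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
server.shutdown()
# Both 0.5 s requests finish in roughly 0.5 s total rather than 1.0 s,
# because each one ran in its own thread.
```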
### `block`
Each request waits for a global lock. If the lock is acquired within `HTTPSHELL_BLOCKING_TIMEOUT` seconds, the command runs and the result is returned (200). If the timeout expires, the job is queued and a 202 is returned instead. This ensures commands run one at a time.
```
Request 1 ──▶ [acquires lock, runs command] ──▶ 200 response
Request 2 ──▶ [waits for lock... acquired] ──▶ 200 response
Request 3 ──▶ [waits for lock... timeout] ──▶ 202 (queued)
```
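Block mode reduces to `threading.Lock.acquire(timeout=...)`: run under the lock if it arrives in time, otherwise fall back to queueing. A standalone sketch of that pattern (handler name, sleep, and timeout values are illustrative):

```python
import threading
import time

lock = threading.Lock()
results: list[tuple[str, str]] = []

def blocking_handler(name: str, timeout: float) -> None:
    # Mirrors block mode: run under the lock if acquired in time, else queue.
    if lock.acquire(timeout=timeout):
        try:
            time.sleep(0.6)  # stand-in for a long beet command
            results.append((name, "200 ran"))
        finally:
            lock.release()
    else:
        results.append((name, "202 queued"))

threads = [
    threading.Thread(target=blocking_handler, args=(f"req{i}", 0.1))
    for i in range(3)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# One request wins the lock and runs to completion; the other two time out
# while it holds the lock and report as queued.
```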
### `queue`
Every request returns `202 Accepted` immediately. Commands are placed in a FIFO queue and executed one at a time by a background worker. Useful for commands that shouldn't overlap (e.g., `import`).
```
Request 1 ──▶ 202 (queued, position 1)
Request 2 ──▶ 202 (queued, position 2)
[worker runs command 1, then command 2]
```
**202 Response:**
```json
{
"status": "queued",
"command": "import",
"args": ["/downloads/album"],
"queue_size": 1
}
```
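Queue mode is the classic single-worker FIFO pattern: producers enqueue and return immediately while one background thread drains the queue in order. A minimal sketch (the `None` sentinel shutdown is only for the demo; the mod's worker runs forever):

```python
import queue
import threading

job_queue: queue.Queue = queue.Queue()
processed: list[str] = []

def worker() -> None:
    # Single background worker: jobs run one at a time, in FIFO order.
    while True:
        job = job_queue.get()
        if job is None:          # sentinel so the demo can shut down cleanly
            job_queue.task_done()
            break
        processed.append(job)    # stand-in for run_beet(command, args)
        job_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
for cmd in ["import", "update", "stats"]:
    job_queue.put(cmd)           # the HTTP handler would answer 202 here
job_queue.join()                 # demo only: wait for the queue to drain
job_queue.put(None)
```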
## Lidarr Integration Example
Use the remote beets HTTP server from a Lidarr import script to automatically import downloads. In Lidarr, go to **Settings → Media Management → Importing → +** and set the **Import Script Path** to the path of the script below.
Create the script at a path accessible to Lidarr (e.g., `/config/scripts/beets-import.sh`):
```bash
#!/usr/bin/env bash

if [ -z "$lidarr_sourcepath" ]; then
  echo "Error: lidarr_sourcepath environment variable not set"
  exit 1
fi

curl -X POST --fail-with-body \
  -H "Content-Type: application/json" \
  -d "[\"-q\",\"$lidarr_sourcepath\"]" \
  'http://beets:5555/import?mode=block'

if [ $? -ne 0 ]; then
  echo "Import request failed"
  exit 1
fi
```
> **Note:** The script uses `?mode=block` so Lidarr waits for the import to complete before proceeding. The default `parallel` mode would also work, but it allows concurrent imports, and Lidarr's library sync may then miss changes made by an in-flight import. Adjust the hostname (`beets`) and port (`5555`) to match your setup.
## Mod Structure
```text
root/
├── usr/local/bin/
│   └── beets-httpshell.py           # HTTP server script
└── etc/s6-overlay/s6-rc.d/
    ├── svc-mod-beets-httpshell/     # longrun service (HTTP server)
    └── user/contents.d/
        └── svc-mod-beets-httpshell
```


@@ -1,30 +0,0 @@
#!/usr/bin/with-contenv bash

# This is the init file used for adding os or pip packages to install lists.
# It takes advantage of the built-in init-mods-package-install init script that comes with the baseimages.
# If using this, we need to make sure we set this init as a dependency of init-mods-package-install so this one runs first

if ! command -v apprise; then
  echo "**** Adding apprise and its deps to package install lists ****"
  echo "apprise" >> /mod-pip-packages-to-install.list
  ## Ubuntu
  if [ -f /usr/bin/apt ]; then
    echo "\
      python3 \
      python3-pip \
      runc" >> /mod-repo-packages-to-install.list
  fi
  # Alpine
  if [ -f /sbin/apk ]; then
    echo "\
      cargo \
      libffi-dev \
      openssl-dev \
      python3 \
      python3-dev \
      python3 \
      py3-pip" >> /mod-repo-packages-to-install.list
  fi
else
  echo "**** apprise already installed, skipping ****"
fi


@@ -1 +0,0 @@
/etc/s6-overlay/s6-rc.d/init-mod-imagename-modname-add-package/run


@@ -1,8 +0,0 @@
#!/usr/bin/with-contenv bash
# This is an install script that is designed to run after init-mods-package-install
# so it can take advantage of packages installed
# init-mods-end depends on this script so that later init and services wait until this script exits
echo "**** Setting up apprise ****"
apprise blah blah


@@ -1 +0,0 @@
/etc/s6-overlay/s6-rc.d/init-mod-imagename-modname-install/run


@@ -0,0 +1,3 @@
#!/usr/bin/with-contenv bash
exec s6-setuidgid abc python3 /usr/local/bin/beets-httpshell.py


@@ -0,0 +1 @@
longrun


@@ -1,7 +0,0 @@
#!/usr/bin/with-contenv bash
# This is an example service that would run for the mod
# It depends on init-services, the baseimage hook for start of all longrun services
exec \
  s6-setuidgid abc run my app


@@ -0,0 +1,217 @@
#!/usr/bin/env python3
"""Lightweight HTTP server that proxies requests to beet CLI commands."""
import json
import logging
import os
import shlex
import subprocess
import sys
import threading
import queue
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler
from typing import NamedTuple
from urllib.parse import urlparse, parse_qs

logger = logging.getLogger("httpshell")

BEET_CMD = os.environ.get("BEET_CMD", "/lsiopy/bin/beet")
BEET_CONFIG = os.environ.get("BEET_CONFIG", "/config/config.yaml")
PORT = int(os.environ.get("HTTPSHELL_PORT", "5555"))
BLOCKING_TIMEOUT = int(os.environ.get("HTTPSHELL_BLOCKING_TIMEOUT", "30"))
DEFAULT_MODE = "parallel"
VALID_MODES = {"parallel", "queue", "block"}

job_queue: queue.Queue[tuple[str, list[str]]] = queue.Queue()
blocking_lock = threading.Lock()


class CommandResult(NamedTuple):
    exit_code: int
    stdout: str
    stderr: str


def _read_stream(stream, lines: list[str], label: str) -> None:
    for line in stream:
        lines.append(line)
        logger.info("[%s] %s", label, line.rstrip())


def run_beet(command: str, args: list[str]) -> CommandResult:
    """Execute a beet CLI command, stream output, and return the result."""
    cmd = [BEET_CMD, "-c", BEET_CONFIG, command]
    cmd.extend(args)
    logger.info("> %s", shlex.join(cmd))
    try:
        proc = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
        )
    except FileNotFoundError:
        return CommandResult(-1, "", f"Command not found: {BEET_CMD}")
    stdout_lines: list[str] = []
    stderr_lines: list[str] = []
    stderr_thread = threading.Thread(
        target=_read_stream, args=(proc.stderr, stderr_lines, command), daemon=True
    )
    stderr_thread.start()
    _read_stream(proc.stdout, stdout_lines, command)
    stderr_thread.join(timeout=5)
    try:
        proc.wait(timeout=3600)
    except subprocess.TimeoutExpired:
        proc.kill()
        return CommandResult(-1, "", "Command timed out after 3600 seconds")
    logger.info("[%s] exited with code %d", command, proc.returncode)
    return CommandResult(proc.returncode, "".join(stdout_lines), "".join(stderr_lines))


def queue_worker() -> None:
    """Background worker that processes queued jobs sequentially."""
    while True:
        command, args = job_queue.get()
        try:
            run_beet(command, args)
        except Exception:
            logger.exception("queued job failed: %s %s", command, args)
        finally:
            job_queue.task_done()


class RequestHandler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        parsed = urlparse(self.path)
        if parsed.path.rstrip("/") in ("", "/health"):
            self._send_json(200, {
                "status": "ok",
                "default_mode": DEFAULT_MODE,
                "queue_size": job_queue.qsize(),
            })
        else:
            self._send_json(405, {"error": "Use POST to execute commands"})

    def do_POST(self) -> None:
        parsed = urlparse(self.path)
        path = parsed.path.strip("/")
        if not path:
            self._send_json(400, {"error": "No command specified. Use POST /<command>"})
            return
        params = parse_qs(parsed.query)
        mode = params.get("mode", [DEFAULT_MODE])[0].lower()
        if mode not in VALID_MODES:
            self._send_json(400, {"error": f"Invalid mode '{mode}'. Use parallel, queue, or block"})
            return
        parts = path.split("/")
        command = parts[0]
        args = list(parts[1:])
        body_args = self._parse_body_args()
        if body_args is None:
            return
        args.extend(body_args)
        {"parallel": self._handle_parallel, "queue": self._handle_queued,
         "block": self._handle_blocking}[mode](command, args)

    def _parse_body_args(self) -> list[str] | None:
        """Parse JSON body for arguments. Returns None on invalid input (error already sent)."""
        content_length = int(self.headers.get("Content-Length", 0))
        if content_length == 0:
            return []
        body = self.rfile.read(content_length)
        try:
            parsed_body = json.loads(body)
        except json.JSONDecodeError as e:
            self._send_json(400, {"error": f"Invalid JSON: {e}"})
            return None
        if not isinstance(parsed_body, list):
            self._send_json(400, {"error": "Request body must be a JSON array of arguments"})
            return None
        return [str(a) for a in parsed_body]

    def _handle_parallel(self, command: str, args: list[str]) -> None:
        result = run_beet(command, args)
        self._send_result(command, args, result)

    def _handle_queued(self, command: str, args: list[str]) -> None:
        job_queue.put((command, args))
        self._send_json(202, {
            "status": "queued",
            "command": command,
            "args": args,
            "queue_size": job_queue.qsize(),
        })

    def _handle_blocking(self, command: str, args: list[str]) -> None:
        if blocking_lock.acquire(timeout=BLOCKING_TIMEOUT):
            try:
                result = run_beet(command, args)
                self._send_result(command, args, result)
            finally:
                blocking_lock.release()
        else:
            job_queue.put((command, args))
            self._send_json(202, {
                "status": "queued",
                "message": f"Lock not acquired within {BLOCKING_TIMEOUT}s, job queued",
                "command": command,
                "args": args,
                "queue_size": job_queue.qsize(),
            })

    def _send_result(self, command: str, args: list[str], result: CommandResult) -> None:
        self._send_json(200, {
            "command": command,
            "args": args,
            "exit_code": result.exit_code,
            "stdout": result.stdout,
            "stderr": result.stderr,
        })

    def _send_json(self, status_code: int, data: dict) -> None:
        body = json.dumps(data).encode()
        self.send_response(status_code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt: str, *args) -> None:
        logger.info("%s - %s", self.address_string(), fmt % args)


def main() -> None:
    logging.basicConfig(
        format="[httpshell] %(message)s",
        level=logging.INFO,
        stream=sys.stderr,
    )
    threading.Thread(target=queue_worker, daemon=True).start()
    server = ThreadingHTTPServer(("0.0.0.0", PORT), RequestHandler)
    logger.info("Starting server on port %d (default mode: %s)", PORT, DEFAULT_MODE)
    logger.info("beet command: %s -c %s", BEET_CMD, BEET_CONFIG)
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        pass
    finally:
        server.server_close()
        logger.info("Server stopped")


if __name__ == "__main__":
    main()