147 Commits

Author SHA1 Message Date
Mukhtar Akere
39945616f3 Remove Proxy feature 2025-04-13 13:03:43 +01:00
Mukhtar Akere
8029cd3840 Add support for adding torrent file 2025-04-13 12:40:31 +01:00
Mukhtar Akere
19b8664146 fix action 2025-04-13 11:36:48 +01:00
Mukhtar Akere
8ea128446c fix action 2025-04-13 11:35:36 +01:00
Mukhtar Akere
391900e93d fix action 2025-04-13 11:32:17 +01:00
Mukhtar Akere
5987028f05 fix action 2025-04-13 11:31:01 +01:00
Mukhtar Akere
7492f629f9 Add documentation, finalizing experimental 2025-04-13 11:29:08 +01:00
Mukhtar Akere
101ae4197e fix multi-api key bug 2025-04-11 00:05:09 +01:00
Mukhtar Akere
a357897222 - Fix bandwidth limit error
- Add cooldowns for fair usage limit bug
- Fix repair bugs
2025-04-09 20:00:06 +01:00
Mukhtar Akere
92177b150b Fix url in UI 2025-04-08 21:00:40 +01:00
Mukhtar Akere
9011420ac3 Add invalid link reset worker 2025-04-08 17:48:01 +01:00
Mukhtar Akere
4b5e18df94 - Deprecate proxy
- Add Proxy for each debrid
- Add support for multiple-API keys
- Use internal http.Client for streaming
- Bug fixes etc
2025-04-08 17:30:24 +01:00
Mukhtar Akere
4659cd4273 Performance improvements; import speedup 2025-04-03 11:24:30 +01:00
Mukhtar Akere
7d954052ae - Refactor code
- Add better logging for 429 when streaming
- Fix minor issues
2025-04-01 06:37:10 +01:00
Mukhtar Akere
8bf164451c Fix re-insertion 2025-03-31 08:47:27 +01:00
Mukhtar Akere
5792305a66 Fix sample check rd 2025-03-31 08:23:11 +01:00
Mukhtar Akere
f9addaed36 minor 2025-03-31 06:15:41 +01:00
Mukhtar Akere
face86e151 - Cleanup webdav
- Include a re-insert feature for botched torrents
- Other minor bug fixes
2025-03-31 06:11:04 +01:00
Mukhtar Akere
cf28f42db4 Update readme, fix minor config bugs 2025-03-29 00:23:10 +01:00
Mukhtar Akere
dc2301eb98 Fixes:
- Add support for multiple api keys
- Fix minor bugs, removes goroutine mem leaks
2025-03-28 23:44:21 +01:00
Mukhtar Akere
f9bc7ad914 Fixes
- Be conservative about the number of goroutines
- Minor fixes
- Add Webdav to ui
- Add more configs to UI
2025-03-28 00:25:02 +01:00
Mukhtar Akere
4ae5de99e8 Fix deleting torrent bug 2025-03-27 09:01:33 +01:00
Mukhtar Akere
d49fbea60f - Add more limits to the number of goroutines
- Add goroutine stats to logs
- Fix issues with repair
2025-03-27 08:24:40 +01:00
Mukhtar Akere
7bd38736b1 Fix file naming 2025-03-26 21:12:01 +01:00
Mukhtar Akere
56bca562f4 Fix duplicate links for files 2025-03-24 20:39:35 +01:00
Mukhtar Akere
9469c98df7 Add support for different folder naming; minor bug fixes 2025-03-24 12:12:38 +01:00
Mukhtar Akere
8c13da5d30 Improve streaming 2025-03-23 09:32:19 +01:00
Mukhtar Akere
e2f792d5ab hotfix xml 2025-03-22 06:05:53 +01:00
Mukhtar Akere
49875446b4 Fix header writing 2025-03-22 00:30:00 +01:00
Mukhtar Akere
738474be16 Experimental usability stage 2025-03-22 00:17:07 +01:00
Mukhtar Akere
d10b679584 Fix regex 2025-03-21 17:58:06 +01:00
Mukhtar Akere
f93d489956 Fix regex 2025-03-21 17:55:19 +01:00
Mukhtar Akere
8d494fc277 Update repair; fix minor bugs with naming 2025-03-21 04:10:16 +01:00
Mukhtar Akere
0c68364a6a Improvements:
- Improvised caching for stats, using metadata on ls
- Integrated into the downloading system
- Fix minor bugs noticed
- Still experimental, sike
2025-03-20 10:42:51 +01:00
Mukhtar Akere
50c775ca74 Fix naming to accurately depict zurg 2025-03-19 05:31:36 +01:00
Mukhtar Akere
0d178992ef Improve webdav; add workers for refreshes 2025-03-19 03:08:22 +01:00
Mukhtar Akere
5d2fabe20b initializing webdav server 2025-03-18 10:02:10 +01:00
Mukhtar Akere
fa469c64c6 Merge branch 'beta' into experimental 2025-03-16 09:31:31 +01:00
Mukhtar Akere
26f6f384a3 Fix arr download_uncached settings 2025-03-16 05:43:25 +01:00
Mukhtar Akere
b91aa1db38 Add a precacher to significantly improve importing to arrs/plex 2025-03-15 23:12:37 +01:00
Mukhtar Akere
e2ff3b26de Add Umask support 2025-03-15 21:30:19 +01:00
Mukhtar Akere
2d29996d2c experimental 2025-03-15 21:08:15 +01:00
Mukhtar Akere
b4e4db27fb Hotfix for sonarr search
2025-03-13 09:07:35 +01:00
Mukhtar Akere
c0589d4ad2 fix repair; repair.json, remove arr details from endpoint 2025-03-12 04:53:01 +01:00
Mukhtar Akere
a30861984c Fix saveTofile; add a global panic handler and a recoverer for all functions 2025-03-11 18:08:03 +01:00
Mukhtar Akere
4f92b135d4 hotfix repair; handle 206 requests; increase log retention 2025-03-11 04:22:51 +01:00
Mukhtar Akere
2b2a682218 - Fix ARR flaky bug
- Refined download uncached options
- Deprecate qbittorrent log level
- Skip Repair for specified arr
2025-03-09 03:56:34 +01:00
Mukhtar Akere
a83f3d72ce Changelog 0.5.0 2025-03-05 20:15:10 +01:00
Mukhtar Akere
1c06407900 Hotfixes
2025-03-02 14:33:58 +01:00
Mukhtar Akere
b1a3d8b762 Minor bug fixes 2025-02-28 21:25:47 +01:00
Mukhtar Akere
0e25de0e3c Hotfix 2025-02-28 20:48:30 +01:00
Mukhtar Akere
e741a0e32b Hotfix 2025-02-28 20:21:45 +01:00
Mukhtar Akere
84bd93805f try to fix memory hogging 2025-02-28 16:05:04 +01:00
Mukhtar Akere
fce2ce28c7 Finalize workflow 2025-02-28 04:06:51 +01:00
Mukhtar Akere
302a461efd Update workflows 2025-02-28 04:03:13 +01:00
Mukhtar Akere
7eb021aac1 - Add ghcr
- Add checks for arr url and token
2025-02-28 03:57:26 +01:00
Mukhtar Akere
7a989ccf2b hotfix v0.4.2 2025-02-28 03:33:11 +01:00
Mukhtar Akere
f04d7ac86e hotfixes 2025-02-28 03:10:14 +01:00
Mukhtar Akere
65fb2d1e7c revamp deployment 2025-02-28 00:54:11 +01:00
Mukhtar Akere
46beac7227 Changelog 0.4.2 2025-02-28 00:38:31 +01:00
Mukhtar Akere
e0e71b0f7e Wraps up v0.4.1
2025-02-24 00:02:28 +01:00
Mukhtar Akere
3b463adf09 Hotfix 2025-02-22 00:37:38 +01:00
Mukhtar Akere
7af0de76cc Add support for same infohashes but different categories 2025-02-22 00:14:13 +01:00
Mukhtar Akere
108da305b3 - Update Readme
- Add funding.yml
- Add Arr Queue cleaner worker
- Rewrote worker
2025-02-19 23:52:53 +01:00
Mukhtar Akere
9a7bff04ef - Fix alldebrid bug
- Minor cleanup
- Speed gains
2025-02-19 01:20:05 +01:00
Mukhtar Akere
325e6c912c finalize beta-0.4.1 2025-02-14 02:42:45 +01:00
Mukhtar Akere
6a24c372f5 Hotfix for Proxy 2025-02-13 17:31:54 +01:00
Mukhtar Akere
32ecc08a72 Merge branch 'feat/add-auth' into beta 2025-02-13 14:25:56 +01:00
Mukhtar Akere
4b2f601df2 hotfix qbittorrent 2025-02-13 14:19:07 +01:00
Mukhtar Akere
bfd2596367 fix mounts; backward compatibility 2025-02-13 05:07:14 +01:00
Mukhtar Akere
6f4f72d781 hotfix auth checks 2025-02-13 02:20:45 +01:00
Mukhtar Akere
14341d30bc More cleanup, more refactoring, more energy, more passion, more footwork 2025-02-13 02:08:18 +01:00
Mukhtar Akere
878f78468f init adding rclone 2025-02-12 22:11:58 +01:00
Mukhtar Akere
c386495d3d Add auth 2025-02-09 23:47:02 +01:00
Mukhtar Akere
1614e29f8f Add Speed to callbacks 2025-02-09 19:17:20 +01:00
Mukhtar Akere
186a24cc4a Fix repair worker 2025-02-07 23:42:09 +01:00
Mukhtar Akere
16c825d5ba feat: restructure code; add size and ext checks (#39)
- Refactor code
- Add file size and extension checkers
- Change repair workflow to use zurg
2025-02-04 02:07:19 -08:00
Mukhtar Akere
8ca3cb32f3 Merge branch 'beta' of github.com:sirrobot01/debrid-blackhole into beta 2025-02-01 04:29:27 +01:00
Mukhtar Akere
c5b975a721 Fix Radarr Movie search 2025-02-01 04:29:01 +01:00
Elias Benbourenane
2fa6737f31 Ability to upload torrent files (#24)
- Upload Torrent file OR magnet URI
2025-01-31 18:41:00 -08:00
Mukhtar Akere
f40cd9ba56 Hotfix: Improve final docker image 2025-02-01 03:37:37 +01:00
Mukhtar Akere
bca96dd858 Hotfix: Dockerfile log file perm 2025-02-01 03:05:42 +01:00
Elias Benbourenane
1b9b7e203e Use toast notifications over JavaScript alerts (#37)
Implement UI toast notifications
2025-01-31 16:31:59 -08:00
Elias Benbourenane
99b4a3152d Torrent list state filtering (#33)
* perf: Switched from DOM-based to state-based in the main render loop logic

This removes the need to make complicated CSS selectors that would slow down the app.

It also improves debugability and readability.

* feat: Client-side state filtering

* style: Don't wrap the torrent list's header on small screens

* perf: Keep a dictionary of DOM element references
2025-01-31 14:46:44 -08:00
Mukhtar Akere
297715bf6e Add Download Progress tracking; early errors for invalid debrid torrent status (#35) 2025-01-31 14:45:49 -08:00
Mukhtar Akere
64995d0bf3 Add Healthcheck; Search missing immediately (#36) 2025-01-31 14:45:30 -08:00
Elias Benbourenane
92504cc8e0 feat: Remember the last used download options (#30) 2025-01-30 05:23:37 +01:00
Elias Benbourenane
12f89b3047 feat: Selectable torrent list items with a 'delete selected' button (#29) 2025-01-30 05:03:02 +01:00
Elias Benbourenane
5d7ddbd208 fix: Don't immediately download torrents with the magnet link handler (#28) 2025-01-30 05:02:28 +01:00
Elias Benbourenane
84b70464da feat: Keep track of magnet link handler state (#26) 2025-01-30 05:02:08 +01:00
Elias Benbourenane
092a028ad9 Disabled text wrapping on the torrent list table to enhance mobile view (#23) 2025-01-29 03:00:41 +01:00
Elias Benbourenane
530de20276 Handle multiple torrent submissions (#16)
* feat: Handle multiple torrent submissions
2025-01-28 13:12:43 -08:00
Mukhtar Akere
e2eb11056d Remove Healthcheck #22
* Move image to use distroless with PGID and PUID

* remove healthcheck
2025-01-28 13:12:22 -08:00
Mukhtar Akere
1a2504ff6c Move image to use distroless with PGID and PUID (#21) 2025-01-28 12:15:40 -08:00
Jamie Isaksen
07d632309a add healthcheck to monitor the app (#20) 2025-01-28 12:10:22 -08:00
Elias Benbourenane
d9b06fb518 Magnet link handler (#15)
* feat: Magnet link handler registration on the config page
2025-01-28 12:02:33 -08:00
Jamie Isaksen
d58b327957 correct typo (#17) 2025-01-26 19:11:30 +01:00
Mukhtar Akere
bba90cb89a Finalize v0.4.0
2025-01-25 11:40:30 +01:00
Mukhtar Akere
fc5c6e2869 Finalize v0.4.0 2025-01-24 23:33:08 +01:00
Mukhtar Akere
66f4965ec8 Fix arr storage 2025-01-23 03:11:24 +01:00
Mukhtar Akere
dc16f0d8a1 Fix arr storage 2025-01-23 03:06:11 +01:00
Mukhtar Akere
0741ddf999 Fix versioning 2025-01-23 02:31:55 +01:00
Mukhtar Akere
2ae4bd571e Fix Log file permissions 2025-01-23 02:11:37 +01:00
Mukhtar Akere
0b1c1af8b8 Fix Repair checks. Handle false positives 2025-01-23 01:35:28 +01:00
Mukhtar Akere
74a55149fc Features:
- Add file logging, server
- Fix minor repair bug
- Wrap up beta
2025-01-23 00:27:12 +01:00
Mukhtar Akere
cfb0051b04 Fix AllDebrid symlink bug 2025-01-19 08:56:52 +01:00
Mukhtar Akere
a986c4b5d0 Hotfix ui templates 2025-01-18 04:13:56 +01:00
Mukhtar Akere
3841b7751e Changelog v0.4.0 2025-01-18 03:49:05 +01:00
Mukhtar Akere
ea73572557 - Add shining UI
- Revamp deployment process
- Fix Alldebrid file node bug
2025-01-13 20:18:59 +01:00
Mukhtar Akere
7cb41a0e8b Fix getting mount path 2025-01-11 23:10:05 +01:00
Mukhtar Akere
451c17cdf7 Merge branch 'beta' of github.com:sirrobot01/debrid-blackhole into beta 2025-01-11 23:09:00 +01:00
Mukhtar Akere
c39eebea0d [BETA] Changelog 0.3.4 (#14)
- Add repair worker
- Fix AllDebrid bugs with single movies/series
- Fix Torbox bugs
2025-01-11 07:21:49 -08:00
Mukhtar Akere
03c9657945 Add repair worker 2025-01-09 19:44:38 +01:00
Mukhtar Akere
28e5342c66 Add AllDebrid support 2025-01-01 17:12:18 +01:00
Mukhtar Akere
eeb3a31b05 Fix rar files, remove srt 2024-12-27 22:30:36 +01:00
Mukhtar Akere
e9d3e120f3 Hotfix
2024-12-25 00:06:47 +01:00
Mukhtar Akere
104df3c33c Changelog 0.3.2 2024-12-25 00:00:47 +01:00
Kai Gohegan
810c9d705e Update storage.go (#10) 2024-12-24 14:59:33 -08:00
Kai Gohegan
4ff00859a3 Update Dockerfile (#9) 2024-12-24 14:59:08 -08:00
Mukhtar Akere
b77dbcc4f4 Fix magnet conversion
2024-12-19 00:12:29 +01:00
Mukhtar Akere
58c0aafab1 Fix docker.yml 2024-12-18 17:24:40 +01:00
Mukhtar Akere
357da54083 Fix docker.yml 2024-12-18 17:20:59 +01:00
Mukhtar Akere
88a7196eaf Hotfix 2024-12-18 17:01:26 +01:00
Mukhtar Akere
abc86a0460 Changelog 0.3.1 2024-12-18 16:51:00 +01:00
robertRogerPresident
dd0b7efdff fix toboxInfo struct type (#6)
Co-authored-by: Tangui <tanguidaoudal@yahoo.fr>
2024-12-16 11:56:09 -08:00
Mukhtar Akere
7359f280b0 Make sure torrents get deleted on failure 2024-12-12 17:38:53 +01:00
Mukhtar Akere
4eb3539347 Fix docker.yml 2024-11-30 16:06:28 +01:00
Mukhtar Akere
9fb1118475 Fix docker.yml 2024-11-30 16:03:33 +01:00
Mukhtar Akere
07491b43fe Fix docker.yml 2024-11-30 16:01:07 +01:00
Mukhtar Akere
8f7c9a19c5 Fix goreleaser
2024-11-30 15:50:42 +01:00
Mukhtar Akere
a51364d150 Changelog 0.3.0 2024-11-30 15:46:58 +01:00
Mukhtar Akere
df2aa4e361 0.2.7:
- Add support for multiple debrid providers
- Add Torbox support
- Add support for configurable debrid cache checks
- Add support for configurable debrid download uncached torrents
2024-11-25 16:48:23 +01:00
Mukhtar Akere
b51cb954f8 Merge branch 'beta' 2024-11-25 16:39:47 +01:00
Mukhtar Akere
8bdb2e3547 Hotfix & Updated Readme 2024-11-23 23:41:49 +01:00
Mukhtar Akere
2c9a076cd2 Hotfix 2024-11-23 21:10:42 +01:00
Mukhtar Akere
d2a77620bc Features:
- Add Torbox(Tested)
- Fix RD cache check
- Minor fixes
2024-11-23 19:52:15 +01:00
Mukhtar Akere
4b8f1ccfb6 Changelog 0.2.6 2024-10-08 15:43:38 +01:00
Mukhtar Akere
f118c5b794 Changelog 0.2.5 2024-10-01 11:17:31 +01:00
Mukhtar Akere
f6c6144601 Changelog 0.2.4 2024-09-22 16:33:26 +01:00
Mukhtar Akere
ff74e279d9 Wrap up file downloading feature 2024-09-22 16:28:31 +01:00
Mukhtar Akere
ba147ac56c Add support for Downloader 2024-09-20 21:09:26 +01:00
Mukhtar Akere
01981114cb Changelog 0.2.3 2024-09-17 00:29:02 +01:00
Mukhtar Akere
2ec0354881 Hotfix: Download Uncached 2024-09-15 22:28:07 +01:00
Mukhtar Akere
329e4c60f5 Changelog 0.2.2 2024-09-15 03:33:28 +01:00
Mukhtar Akere
d5e07dc961 Random Fixes:
- Fix uncached error
- Fix naming and race conditions
2024-09-14 01:44:18 +01:00
Mukhtar Akere
f622cbfe63 Random Fixes:
- Fix uncached error
- Fix naming and race conditions
2024-09-14 01:42:52 +01:00
Mukhtar Akere
9511f3e99e Changelog 0.2.0 (#1)
* Changelog 0.2.0
2024-09-11 22:01:10 -07:00
112 changed files with 15044 additions and 1615 deletions

52
.air.toml Normal file

@@ -0,0 +1,52 @@
root = "."
testdata_dir = "testdata"
tmp_dir = "tmp"
[build]
args_bin = ["--config", "data/"]
bin = "./tmp/main"
cmd = "bash -c 'go build -ldflags \"-X github.com/sirrobot01/decypharr/pkg/version.Version=0.0.0 -X github.com/sirrobot01/decypharr/pkg/version.Channel=dev\" -o ./tmp/main .'"
delay = 1000
exclude_dir = ["assets", "tmp", "vendor", "testdata", "data"]
exclude_file = []
exclude_regex = ["_test.go"]
exclude_unchanged = false
follow_symlink = false
full_bin = ""
include_dir = []
include_ext = ["go", "tpl", "tmpl", "html", ".json"]
include_file = []
kill_delay = "0s"
log = "build-errors.log"
poll = false
poll_interval = 0
post_cmd = []
pre_cmd = []
rerun = false
rerun_delay = 500
send_interrupt = false
stop_on_error = false
[color]
app = ""
build = "yellow"
main = "magenta"
runner = "green"
watcher = "cyan"
[log]
main_only = false
silent = false
time = false
[misc]
clean_on_exit = false
[proxy]
app_port = 0
enabled = false
proxy_port = 0
[screen]
clear_on_rebuild = false
keep_scroll = true


@@ -5,4 +5,8 @@ docker-compose.yml
.DS_Store
**/.idea/
*.magnet
**.torrent
**.torrent
torrents.json
**/dist/
*.json
.ven/**

2
.github/FUNDING.yml vendored Normal file

@@ -0,0 +1,2 @@
github: sirrobot01
buy_me_a_coffee: sirrobot01

85
.github/workflows/beta-docker.yml vendored Normal file

@@ -0,0 +1,85 @@
name: Beta Docker Build
on:
push:
branches:
- beta
permissions:
contents: read
packages: write
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Calculate beta version
id: calculate_version
run: |
LATEST_TAG=$(git tag | grep -v 'beta' | sort -V | tail -n1)
echo "Found latest tag: ${LATEST_TAG}"
IFS='.' read -r -a VERSION_PARTS <<< "$LATEST_TAG"
MAJOR="${VERSION_PARTS[0]}"
MINOR="${VERSION_PARTS[1]}"
PATCH="${VERSION_PARTS[2]}"
NEW_PATCH=$((PATCH + 1))
BETA_VERSION="${MAJOR}.${MINOR}.${NEW_PATCH}"
echo "Calculated beta version: ${BETA_VERSION}"
echo "beta_version=${BETA_VERSION}" >> $GITHUB_ENV
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Cache Docker layers
uses: actions/cache@v3
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Login to GitHub Container Registry
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push beta Docker image
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64,linux/arm/v7
push: true
tags: |
cy01/blackhole:beta
ghcr.io/${{ github.repository_owner }}/decypharr:beta
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
build-args: |
VERSION=${{ env.beta_version }}
CHANNEL=beta
- name: Move cache
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache

29
.github/workflows/deploy-docs.yml vendored Normal file

@@ -0,0 +1,29 @@
name: ci
on:
push:
branches:
- main
- beta
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Configure Git Credentials
run: |
git config user.name github-actions[bot]
git config user.email 41898282+github-actions[bot]@users.noreply.github.com
- uses: actions/setup-python@v5
with:
python-version: 3.x
- run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
- uses: actions/cache@v4
with:
key: mkdocs-material-${{ env.cache_id }}
path: .cache
restore-keys: |
mkdocs-material-
- run: pip install mkdocs-material
- run: cd docs && mkdocs gh-deploy --force

33
.github/workflows/goreleaser.yml vendored Normal file

@@ -0,0 +1,33 @@
name: GoReleaser
on:
push:
tags:
- '*'
permissions:
contents: write
jobs:
goreleaser:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.22'
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v5
with:
distribution: goreleaser
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_CHANNEL: stable

77
.github/workflows/release-docker.yml vendored Normal file

@@ -0,0 +1,77 @@
name: Release Docker Build
on:
push:
tags:
- '*'
permissions:
contents: read
packages: write
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Get tag name
id: get_tag
run: |
TAG_NAME=${GITHUB_REF#refs/tags/}
echo "tag_name=${TAG_NAME}" >> $GITHUB_ENV
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Cache Docker layers
uses: actions/cache@v3
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Login to GitHub Container Registry
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push release Docker image
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64,linux/arm/v7
push: true
tags: |
cy01/blackhole:latest
cy01/blackhole:${{ env.tag_name }}
ghcr.io/${{ github.repository_owner }}/decypharr:latest
ghcr.io/${{ github.repository_owner }}/decypharr:${{ env.tag_name }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
build-args: |
VERSION=${{ env.tag_name }}
CHANNEL=stable
- name: Move cache
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache

6
.gitignore vendored

@@ -1,6 +1,5 @@
data/
config.json
docker-compose.yml
.idea/
.DS_Store
*.torrent
@@ -9,3 +8,8 @@ docker-compose.yml
*.log
*.log.*
dist/
tmp/**
torrents.json
logs/**
auth.json
.ven/


@@ -1,8 +1,7 @@
version: 2
version: 1
before:
hooks:
# You may remove this if you don't use go modules.
- go mod tidy
builds:
@@ -16,19 +15,22 @@ builds:
- amd64
- arm
- arm64
ldflags:
- -s -w
- -X github.com/sirrobot01/decypharr/pkg/version.Version={{.Version}}
- -X github.com/sirrobot01/decypharr/pkg/version.Channel={{.Env.RELEASE_CHANNEL}}
archives:
- format: tar.gz
# this name template makes the OS and Arch compatible with the results of `uname`.
name_template: >-
{{ .ProjectName }}_
decypharr_
{{- title .Os }}_
{{- if eq .Arch "amd64" }}x86_64
{{- else if eq .Arch "386" }}i386
{{- else }}{{ .Arch }}{{ end }}
{{- if .Arm }}v{{ .Arm }}{{ end }}
# use zip for windows archives
format_overrides:
- goos: windows
format: zip


@@ -1,34 +0,0 @@
#### 0.1.0
- Initial Release
- Added Real Debrid Support
- Added Arrs Support
- Added Proxy Support
- Added Basic Authentication for Proxy
- Added Rate Limiting for Debrid Providers
#### 0.1.1
- Added support for "No Blackhole" for Arrs
- Added support for "Cached Only" for Proxy
- Bug Fixes
#### 0.1.2
- Bug fixes
- Code cleanup
- Get available hashes at once
#### 0.1.3
- Searching for infohashes in the xml description/summary/comments
- Added local cache support
- Added max cache size
- Rewrite blackhole.go
- Bug fixes
- Fixed indexer getting disabled
- Fixed blackhole not working
#### 0.1.4
- Rewrote Report log
- Fix YTS, 1337x not grabbing infohash
- Fix Torrent symlink bug
-


@@ -1,26 +1,66 @@
FROM --platform=$BUILDPLATFORM golang:1.22 as builder
# Stage 1: Build binaries
FROM --platform=$BUILDPLATFORM golang:1.23-alpine as builder
ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG VERSION=0.0.0
ARG CHANNEL=dev
# Set destination for COPY
WORKDIR /app
# Download Go modules
COPY go.mod go.sum ./
RUN go mod download
RUN --mount=type=cache,target=/go/pkg/mod \
go mod download -x
# Copy the source code. Note the slash at the end, as explained in
# https://docs.docker.com/reference/dockerfile/#copy
ADD . .
COPY . .
# Build
RUN CGO_ENABLED=0 GOOS=$(echo $TARGETPLATFORM | cut -d '/' -f1) GOARCH=$(echo $TARGETPLATFORM | cut -d '/' -f2) go build -o /blackhole
# Build main binary
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH \
go build -trimpath \
-ldflags="-w -s -X github.com/sirrobot01/decypharr/pkg/version.Version=${VERSION} -X github.com/sirrobot01/decypharr/pkg/version.Channel=${CHANNEL}" \
-o /decypharr
FROM scratch
COPY --from=builder /blackhole /blackhole
# Build healthcheck (optimized)
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH \
go build -trimpath -ldflags="-w -s" \
-o /healthcheck cmd/healthcheck/main.go
EXPOSE 8181
# Stage 2: Create directory structure
FROM alpine:3.19 as dirsetup
RUN mkdir -p /app/logs && \
mkdir -p /app/cache && \
chmod 777 /app/logs && \
touch /app/logs/decypharr.log && \
chmod 666 /app/logs/decypharr.log
# Run
CMD ["/blackhole", "--config", "/app/config.json"]
# Stage 3: Final image
FROM gcr.io/distroless/static-debian12:nonroot
LABEL version = "${VERSION}-${CHANNEL}"
LABEL org.opencontainers.image.source = "https://github.com/sirrobot01/decypharr"
LABEL org.opencontainers.image.title = "decypharr"
LABEL org.opencontainers.image.authors = "sirrobot01"
LABEL org.opencontainers.image.documentation = "https://github.com/sirrobot01/decypharr/blob/main/README.md"
# Copy binaries
COPY --from=builder --chown=nonroot:nonroot /decypharr /usr/bin/decypharr
COPY --from=builder --chown=nonroot:nonroot /healthcheck /usr/bin/healthcheck
# Copy pre-made directory structure
COPY --from=dirsetup --chown=nonroot:nonroot /app /app
# Metadata
ENV LOG_PATH=/app/logs
EXPOSE 8181 8282
VOLUME ["/app"]
USER nonroot:nonroot
HEALTHCHECK CMD ["/usr/bin/healthcheck"]
CMD ["/usr/bin/decypharr", "--config", "/app"]

21
LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Mukhtar Akere
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

153
README.md

@@ -1,123 +1,96 @@
### GoBlackHole(with Debrid Proxy Support)
# DecyphArr
This is a Golang implementation go Torrent Blackhole with a **Real Debrid Proxy Support**.
![ui](docs/docs/images/main.png)
#### Uses
- Torrent Blackhole that supports the Arrs(Sonarr, Radarr, etc)
- Proxy support for the Arrs
**DecyphArr** is an implementation of qBittorrent with **multiple Debrid service support**, written in Go.
The proxy is useful in filtering out un-cached Real Debrid torrents
## What is DecyphArr?
### Changelog
DecyphArr combines the power of qBittorrent with popular Debrid services to enhance your media management. It provides a familiar interface for Sonarr, Radarr, and other \*Arr applications while leveraging the capabilities of Debrid providers.
- View the [CHANGELOG.md](CHANGELOG.md) for the latest changes
## Features
- 🔄 Mock qBittorrent API that supports the Arrs (Sonarr, Radarr, Lidarr, etc.)
- 🖥️ Full-fledged UI for managing torrents
- 🛡️ Proxy support for filtering out un-cached Debrid torrents
- 🔌 Multiple Debrid providers support
- 📁 WebDAV server support for each debrid provider
- 🔧 Repair Worker for missing files
## Supported Debrid Providers
- [Real Debrid](https://real-debrid.com)
- [Torbox](https://torbox.app)
- [Debrid Link](https://debrid-link.com)
- [All Debrid](https://alldebrid.com)
## Quick Start
### Docker (Recommended)
#### Installation
##### Docker Compose
```yaml
version: '3.7'
services:
blackhole:
decypharr:
image: cy01/blackhole:latest # or cy01/blackhole:beta
container_name: blackhole
container_name: decypharr
ports:
- "8282:8282" # qBittorrent
user: "1000:1000"
volumes:
- ./logs:/app/logs
- ~/plex/media:/media
- ~/plex/media/symlinks/:/media/symlinks/
- ~/plex/configs/blackhole/config.json:/app/config.json # Config file, see below
- /mnt/:/mnt
- ./configs/:/app # config.json must be in this directory
environment:
- PUID=1000
- PGID=1000
- UMASK=002
restart: unless-stopped
```
##### Binary
Download the binary from the releases page and run it with the config file.
## Documentation
```bash
./blackhole --config /path/to/config.json
```
For complete documentation, please visit our [Documentation](https://sirrobot01.github.io/debrid-blackhole/).
The documentation includes:
- Detailed installation instructions
- Configuration guide
- Usage with Sonarr/Radarr
- WebDAV setup
- Repair Worker information
- ...and more!
## Basic Configuration
#### Config
```json
{
"debrid": {
"name": "realdebrid",
"host": "https://api.real-debrid.com/rest/1.0",
"api_key": "realdebrid_api_key",
"folder": "data/realdebrid/torrents/",
"rate_limit": "250/minute"
},
"arrs": [
"debrids": [
{
"watch_folder": "data/sonarr/",
"completed_folder": "data/sonarr/completed/",
"token": "sonarr_api_key",
"url": "http://localhost:8787"
},
{
"watch_folder": "data/radarr/",
"completed_folder": "data/radarr/completed/",
"token": "radarr_api_key",
"url": "http://localhost:7878"
},
{
"watch_folder": "data/radarr4k/",
"completed_folder": "data/radarr4k/completed/",
"token": "radarr4k_api_key",
"url": "http://localhost:7878"
"name": "realdebrid",
"host": "https://api.real-debrid.com/rest/1.0",
"api_key": "your_api_key_here",
"folder": "/mnt/remote/realdebrid/__all__/",
"use_webdav": true
}
],
"proxy": {
"enabled": true,
"port": "8181",
"debug": false,
"username": "username",
"password": "password",
"cached_only": true
"qbittorrent": {
"port": "8282",
"download_folder": "/mnt/symlinks/",
"categories": ["sonarr", "radarr"]
},
"max_cache_size": 1000
"repair": {
"enabled": false,
"interval": "12h",
"run_on_start": false
},
"use_auth": false,
"log_level": "info"
}
```
#### Config Notes
##### Debrid Config
- This config key is important as it's used for both Blackhole and Proxy
## Contributing
##### Arrs Config
- An empty array will disable Blackhole for the Arrs
- The `watch_folder` is the folder where the Blackhole will watch for torrents
- The `completed_folder` is the folder where the Blackhole will move the completed torrents
- The `token` is the API key for the Arr(This is optional, I think)
Contributions are welcome! Please feel free to submit a Pull Request.
##### Proxy Config
- The `enabled` key is used to enable the proxy
- The `port` key is the port the proxy will listen on
- The `debug` key is used to enable debug logs
- The `username` and `password` keys are used for basic authentication
- The `cached_only` means only cached torrents will be returned
-
### Proxy
The proxy is useful in filtering out un-cached Real Debrid torrents.
The proxy is a simple HTTP proxy that requires basic authentication. The proxy can be enabled by setting the `proxy.enabled` to `true` in the config file.
The proxy listens on the port `8181` by default. The username and password can be set in the config file.
Setting Up Proxy in Arr
- Sonarr/Radarr
- Settings -> General -> Use Proxy
- Hostname: `localhost` # or the IP of the server
- Port: `8181` # or the port set in the config file
- Username: `username` # or the username set in the config file
- Password: `password` # or the password set in the config file
- Bypass Proxy for Local Addresses -> `No`
### TODO
- [ ] Add more Debrid Providers
- [ ] Add more Proxy features
- [ ] Add more tests
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.


@@ -1,186 +0,0 @@
package cmd
import (
"fmt"
"github.com/fsnotify/fsnotify"
"goBlack/common"
"goBlack/debrid"
"log"
"os"
"path/filepath"
"sync"
"time"
)
type Blackhole struct {
config *common.Config
deb debrid.Service
cache *common.Cache
}
func NewBlackhole(config *common.Config, deb debrid.Service, cache *common.Cache) *Blackhole {
return &Blackhole{
config: config,
deb: deb,
cache: cache,
}
}
func fileReady(path string) bool {
_, err := os.Stat(path)
return !os.IsNotExist(err) // Returns true if the file exists
}
func checkFileLoop(wg *sync.WaitGroup, dir string, file debrid.TorrentFile, ready chan<- debrid.TorrentFile) {
defer wg.Done()
ticker := time.NewTicker(1 * time.Second) // Check every second
defer ticker.Stop()
path := filepath.Join(dir, file.Path)
for {
select {
case <-ticker.C:
if fileReady(path) {
ready <- file
return
}
}
}
}
func (b *Blackhole) processFiles(arr *debrid.Arr, torrent *debrid.Torrent) {
var wg sync.WaitGroup
files := torrent.Files
ready := make(chan debrid.TorrentFile, len(files))
log.Printf("Checking %d files...", len(files))
for _, file := range files {
wg.Add(1)
go checkFileLoop(&wg, arr.Debrid.Folder, file, ready)
}
go func() {
wg.Wait()
close(ready)
}()
for r := range ready {
log.Println("File is ready:", r.Name)
b.createSymLink(arr, torrent)
}
go torrent.Cleanup(true)
fmt.Printf("%s downloaded", torrent.Name)
}
func (b *Blackhole) createSymLink(arr *debrid.Arr, torrent *debrid.Torrent) {
path := filepath.Join(arr.CompletedFolder, torrent.Folder)
err := os.MkdirAll(path, os.ModePerm)
if err != nil {
log.Printf("Failed to create directory: %s\n", path)
}
for _, file := range torrent.Files {
// Combine the directory and filename to form a full path
fullPath := filepath.Join(path, file.Name) // completedFolder/MyTVShow/MyTVShow.S01E01.720p.mkv
// Create a symbolic link if file doesn't exist
torrentPath := filepath.Join(arr.Debrid.Folder, torrent.Folder, file.Name) // debridFolder/MyTVShow/MyTVShow.S01E01.720p.mkv
_ = os.Symlink(torrentPath, fullPath)
}
}
func watcher(watcher *fsnotify.Watcher, events map[string]time.Time) {
for {
select {
case event, ok := <-watcher.Events:
if !ok {
return
}
if event.Op&fsnotify.Write == fsnotify.Write {
if filepath.Ext(event.Name) == ".torrent" || filepath.Ext(event.Name) == ".magnet" {
events[event.Name] = time.Now()
}
}
case err, ok := <-watcher.Errors:
if !ok {
return
}
log.Println("ERROR:", err)
}
}
}
func (b *Blackhole) processFilesDebounced(arr *debrid.Arr, events map[string]time.Time, debouncePeriod time.Duration) {
ticker := time.NewTicker(1 * time.Second) // Check every second
defer ticker.Stop()
for range ticker.C {
for file, lastEventTime := range events {
if time.Since(lastEventTime) >= debouncePeriod {
log.Printf("Torrent file detected: %s", file)
// Process the torrent file
torrent, err := b.deb.Process(arr, file)
if err != nil && torrent != nil {
// remove torrent file
torrent.Cleanup(true)
_ = torrent.MarkAsFailed()
log.Printf("Error processing torrent file: %s", err)
}
if err == nil && torrent != nil && len(torrent.Files) > 0 {
go b.processFiles(arr, torrent)
}
delete(events, file) // remove file from channel
}
}
}
}
func (b *Blackhole) startArr(arr *debrid.Arr) {
log.Printf("Watching: %s", arr.WatchFolder)
w, err := fsnotify.NewWatcher()
if err != nil {
log.Println(err)
}
defer func(w *fsnotify.Watcher) {
err := w.Close()
if err != nil {
log.Println(err)
}
}(w)
events := make(map[string]time.Time)
go watcher(w, events)
if err = w.Add(arr.WatchFolder); err != nil {
log.Println("Error Watching folder:", err)
return
}
b.processFilesDebounced(arr, events, 1*time.Second)
}
func (b *Blackhole) Start() {
log.Println("[*] Starting Blackhole")
var wg sync.WaitGroup
for _, conf := range b.config.Arrs {
wg.Add(1)
defer wg.Done()
headers := map[string]string{
"X-Api-Key": conf.Token,
}
client := common.NewRLHTTPClient(nil, headers)
arr := &debrid.Arr{
Debrid: b.config.Debrid,
WatchFolder: conf.WatchFolder,
CompletedFolder: conf.CompletedFolder,
Token: conf.Token,
URL: conf.URL,
Client: client,
}
go b.startArr(arr)
}
wg.Wait()
}

113
cmd/decypharr/main.go Normal file

@@ -0,0 +1,113 @@
package decypharr
import (
"context"
"fmt"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/pkg/qbit"
"github.com/sirrobot01/decypharr/pkg/server"
"github.com/sirrobot01/decypharr/pkg/service"
"github.com/sirrobot01/decypharr/pkg/version"
"github.com/sirrobot01/decypharr/pkg/web"
"github.com/sirrobot01/decypharr/pkg/webdav"
"github.com/sirrobot01/decypharr/pkg/worker"
"os"
"runtime/debug"
"strconv"
"sync"
"syscall"
)
func Start(ctx context.Context) error {
if umaskStr := os.Getenv("UMASK"); umaskStr != "" {
umask, err := strconv.ParseInt(umaskStr, 8, 32)
if err != nil {
return fmt.Errorf("invalid UMASK value: %s", umaskStr)
}
// Set umask
syscall.Umask(int(umask))
}
cfg := config.Get()
var wg sync.WaitGroup
errChan := make(chan error)
_log := logger.GetDefaultLogger()
_log.Info().Msgf("Starting Decypher (%s)", version.GetInfo().String())
_log.Info().Msgf("Default Log Level: %s", cfg.LogLevel)
svc := service.New()
_qbit := qbit.New()
srv := server.New()
_webdav := webdav.New()
ui := web.New(_qbit).Routes()
webdavRoutes := _webdav.Routes()
qbitRoutes := _qbit.Routes()
// Register routes
srv.Mount("/", ui)
srv.Mount("/api/v2", qbitRoutes)
srv.Mount("/webdav", webdavRoutes)
safeGo := func(f func() error) {
wg.Add(1)
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
stack := debug.Stack()
_log.Error().
Interface("panic", r).
Str("stack", string(stack)).
Msg("Recovered from panic in goroutine")
// Send error to channel so the main goroutine is aware
errChan <- fmt.Errorf("panic: %v", r)
}
}()
if err := f(); err != nil {
errChan <- err
}
}()
}
safeGo(func() error {
return _webdav.Start(ctx)
})
safeGo(func() error {
return srv.Start(ctx)
})
safeGo(func() error {
return worker.Start(ctx)
})
if cfg.Repair.Enabled {
safeGo(func() error {
err := svc.Repair.Start(ctx)
if err != nil {
_log.Error().Err(err).Msg("Error during repair")
}
return nil // Not propagating repair errors to terminate the app
})
}
go func() {
wg.Wait()
close(errChan)
}()
// Wait for context cancellation or completion or error
select {
case err := <-errChan:
return err
case <-ctx.Done():
return ctx.Err()
}
}

22
cmd/healthcheck/main.go Normal file

@@ -0,0 +1,22 @@
package main
import (
"cmp"
"net/http"
"os"
)
func main() {
port := cmp.Or(os.Getenv("QBIT_PORT"), "8282")
resp, err := http.Get("http://localhost:" + port + "/api/v2/app/version")
if err != nil {
os.Exit(1)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
os.Exit(1)
}
os.Exit(0)
}


@@ -1,39 +0,0 @@
package cmd
import (
"cmp"
"goBlack/common"
"goBlack/debrid"
"sync"
)
func Start(config *common.Config) {
maxCacheSize := cmp.Or(config.MaxCacheSize, 1000)
cache := common.NewCache(maxCacheSize)
deb := debrid.NewDebrid(config.Debrid, cache)
var wg sync.WaitGroup
if config.Proxy.Enabled {
proxy := NewProxy(*config, deb, cache)
wg.Add(1)
go func() {
defer wg.Done()
proxy.Start()
}()
}
if len(config.Arrs) > 0 {
blackhole := NewBlackhole(config, deb, cache)
wg.Add(1)
go func() {
defer wg.Done()
blackhole.Start()
}()
}
// Wait indefinitely
wg.Wait()
}


@@ -1,321 +0,0 @@
package cmd
import (
"bytes"
"cmp"
"encoding/xml"
"fmt"
"github.com/elazarl/goproxy"
"github.com/elazarl/goproxy/ext/auth"
"github.com/valyala/fastjson"
"goBlack/common"
"goBlack/debrid"
"io"
"log"
"net/http"
"os"
"regexp"
"strings"
"sync"
)
type RSS struct {
XMLName xml.Name `xml:"rss"`
Text string `xml:",chardata"`
Version string `xml:"version,attr"`
Atom string `xml:"atom,attr"`
Torznab string `xml:"torznab,attr"`
Channel struct {
Text string `xml:",chardata"`
Link struct {
Text string `xml:",chardata"`
Rel string `xml:"rel,attr"`
Type string `xml:"type,attr"`
} `xml:"link"`
Title string `xml:"title"`
Items []Item `xml:"item"`
} `xml:"channel"`
}
type Item struct {
Text string `xml:",chardata"`
Title string `xml:"title"`
Description string `xml:"description"`
GUID string `xml:"guid"`
ProwlarrIndexer struct {
Text string `xml:",chardata"`
ID string `xml:"id,attr"`
Type string `xml:"type,attr"`
} `xml:"prowlarrindexer"`
Comments string `xml:"comments"`
PubDate string `xml:"pubDate"`
Size string `xml:"size"`
Link string `xml:"link"`
Category []string `xml:"category"`
Enclosure struct {
Text string `xml:",chardata"`
URL string `xml:"url,attr"`
Length string `xml:"length,attr"`
Type string `xml:"type,attr"`
} `xml:"enclosure"`
TorznabAttrs []struct {
Text string `xml:",chardata"`
Name string `xml:"name,attr"`
Value string `xml:"value,attr"`
} `xml:"attr"`
}
type Proxy struct {
port string
enabled bool
debug bool
username string
password string
cachedOnly bool
debrid debrid.Service
cache *common.Cache
}
func NewProxy(config common.Config, deb debrid.Service, cache *common.Cache) *Proxy {
cfg := config.Proxy
port := cmp.Or(os.Getenv("PORT"), cfg.Port, "8181")
return &Proxy{
port: port,
enabled: cfg.Enabled,
debug: cfg.Debug,
username: cfg.Username,
password: cfg.Password,
cachedOnly: cfg.CachedOnly,
debrid: deb,
cache: cache,
}
}
func (p *Proxy) ProcessJSONResponse(resp *http.Response) *http.Response {
if resp == nil || resp.Body == nil {
return resp
}
body, err := io.ReadAll(resp.Body)
if err != nil {
return resp
}
err = resp.Body.Close()
if err != nil {
return nil
}
var par fastjson.Parser
v, err := par.ParseBytes(body)
if err != nil {
// If it's not JSON, return the original response
resp.Body = io.NopCloser(bytes.NewReader(body))
return resp
}
// Modify the JSON
// Serialize the modified JSON back to bytes
modifiedBody := v.MarshalTo(nil)
// Set the modified body back to the response
resp.Body = io.NopCloser(bytes.NewReader(modifiedBody))
resp.ContentLength = int64(len(modifiedBody))
resp.Header.Set("Content-Length", string(rune(len(modifiedBody))))
return resp
}
func (p *Proxy) ProcessResponse(resp *http.Response) *http.Response {
if resp == nil || resp.Body == nil {
return resp
}
contentType := resp.Header.Get("Content-Type")
switch contentType {
case "application/json":
return resp // p.ProcessJSONResponse(resp)
case "application/xml":
return p.ProcessXMLResponse(resp)
case "application/rss+xml":
return p.ProcessXMLResponse(resp)
default:
return resp
}
}
func getItemsHash(items []Item) map[string]string {
var wg sync.WaitGroup
idHashMap := sync.Map{} // Use sync.Map for concurrent access
for _, item := range items {
wg.Add(1)
go func(item Item) {
defer wg.Done()
hash := strings.ToLower(item.getHash())
if hash != "" {
idHashMap.Store(item.GUID, hash) // Store directly into sync.Map
}
}(item)
}
wg.Wait()
// Convert sync.Map to regular map
finalMap := make(map[string]string)
idHashMap.Range(func(key, value interface{}) bool {
finalMap[key.(string)] = value.(string)
return true
})
return finalMap
}
func (item Item) getHash() string {
infohash := ""
for _, attr := range item.TorznabAttrs {
if attr.Name == "infohash" {
return attr.Value
}
}
if strings.Contains(item.GUID, "magnet:?") {
magnet, err := common.GetMagnetInfo(item.GUID)
if err == nil && magnet != nil && magnet.InfoHash != "" {
return magnet.InfoHash
}
}
magnetLink := item.Link
if magnetLink == "" {
// We can't check the availability of the torrent without a magnet link or infohash
return ""
}
if strings.Contains(magnetLink, "magnet:?") {
magnet, err := common.GetMagnetInfo(magnetLink)
if err == nil && magnet != nil && magnet.InfoHash != "" {
return magnet.InfoHash
}
}
//Check Description for infohash
hash := common.ExtractInfoHash(item.Description)
if hash == "" {
// Check Title for infohash
hash = common.ExtractInfoHash(item.Comments)
}
infohash = hash
if infohash == "" {
//Get torrent file from http link
//Takes too long, not worth it
//magnet, err := common.OpenMagnetHttpURL(magnetLink)
//if err == nil && magnet != nil && magnet.InfoHash != "" {
// log.Printf("Magnet: %s", magnet.InfoHash)
//}
}
return infohash
}
func (p *Proxy) ProcessXMLResponse(resp *http.Response) *http.Response {
if resp == nil || resp.Body == nil {
return resp
}
body, err := io.ReadAll(resp.Body)
if err != nil {
log.Println("Error reading response body:", err)
resp.Body = io.NopCloser(bytes.NewReader(body))
return resp
}
err = resp.Body.Close()
if err != nil {
return nil
}
var rss RSS
err = xml.Unmarshal(body, &rss)
if err != nil {
log.Printf("Error unmarshalling XML: %v", err)
resp.Body = io.NopCloser(bytes.NewReader(body))
return resp
}
indexer := ""
if len(rss.Channel.Items) > 0 {
indexer = rss.Channel.Items[0].ProwlarrIndexer.Text
}
// Step 4: Extract infohash or magnet URI, manipulate data
IdsHashMap := getItemsHash(rss.Channel.Items)
hashes := make([]string, 0)
for _, hash := range IdsHashMap {
if hash != "" {
hashes = append(hashes, hash)
}
}
availableHashesMap := p.debrid.IsAvailable(hashes)
newItems := make([]Item, 0, len(rss.Channel.Items))
if len(hashes) > 0 {
for _, item := range rss.Channel.Items {
hash := IdsHashMap[item.GUID]
if hash == "" {
continue
}
isCached, exists := availableHashesMap[hash]
if !exists || !isCached {
continue
}
newItems = append(newItems, item)
}
}
log.Printf("[%s Report]: %d/%d items are cached || Found %d infohash", indexer, len(newItems), len(rss.Channel.Items), len(hashes))
rss.Channel.Items = newItems
// rss.Channel.Items = newItems
modifiedBody, err := xml.MarshalIndent(rss, "", " ")
if err != nil {
log.Printf("Error marshalling XML: %v", err)
resp.Body = io.NopCloser(bytes.NewReader(body))
return resp
}
modifiedBody = append([]byte(xml.Header), modifiedBody...)
// Set the modified body back to the response
resp.Body = io.NopCloser(bytes.NewReader(modifiedBody))
return resp
}
func UrlMatches(re *regexp.Regexp) goproxy.ReqConditionFunc {
return func(req *http.Request, ctx *goproxy.ProxyCtx) bool {
return re.MatchString(req.URL.String())
}
}
func (p *Proxy) Start() {
username, password := p.username, p.password
proxy := goproxy.NewProxyHttpServer()
if username != "" || password != "" {
// Set up basic auth for proxy
auth.ProxyBasic(proxy, "my_realm", func(user, pwd string) bool {
return user == username && password == pwd
})
}
proxy.OnRequest(goproxy.ReqHostMatches(regexp.MustCompile("^.443$"))).HandleConnect(goproxy.AlwaysMitm)
proxy.OnResponse(
UrlMatches(regexp.MustCompile("^.*/api\\?t=(search|tvsearch|movie)(&.*)?$")),
goproxy.StatusCodeIs(http.StatusOK, http.StatusAccepted)).DoFunc(
func(resp *http.Response, ctx *goproxy.ProxyCtx) *http.Response {
return p.ProcessResponse(resp)
})
proxy.Verbose = p.debug
portFmt := fmt.Sprintf(":%s", p.port)
log.Printf("[*] Starting proxy server on %s\n", portFmt)
log.Fatal(http.ListenAndServe(fmt.Sprintf("%s", portFmt), proxy))
}


@@ -1,88 +0,0 @@
package common
import (
"sync"
)
type Cache struct {
data map[string]struct{}
order []string
maxItems int
mu sync.RWMutex
}
func NewCache(maxItems int) *Cache {
if maxItems <= 0 {
maxItems = 1000
}
return &Cache{
data: make(map[string]struct{}, maxItems),
order: make([]string, 0, maxItems),
maxItems: maxItems,
}
}
func (c *Cache) Add(value string) {
c.mu.Lock()
defer c.mu.Unlock()
if _, exists := c.data[value]; !exists {
if len(c.order) >= c.maxItems {
delete(c.data, c.order[0])
c.order = c.order[1:]
}
c.data[value] = struct{}{}
c.order = append(c.order, value)
}
}
func (c *Cache) AddMultiple(values map[string]bool) {
c.mu.Lock()
defer c.mu.Unlock()
for value := range values {
if _, exists := c.data[value]; !exists {
if len(c.order) >= c.maxItems {
delete(c.data, c.order[0])
c.order = c.order[1:]
}
c.data[value] = struct{}{}
c.order = append(c.order, value)
}
}
}
func (c *Cache) Get(index int) (string, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
if index < 0 || index >= len(c.order) {
return "", false
}
return c.order[index], true
}
func (c *Cache) GetMultiple(values []string) map[string]bool {
c.mu.RLock()
defer c.mu.RUnlock()
result := make(map[string]bool, len(values))
for _, value := range values {
if _, exists := c.data[value]; exists {
result[value] = true
}
}
return result
}
func (c *Cache) Exists(value string) bool {
c.mu.RLock()
defer c.mu.RUnlock()
_, exists := c.data[value]
return exists
}
func (c *Cache) Len() int {
c.mu.RLock()
defer c.mu.RUnlock()
return len(c.order)
}


@@ -1,58 +0,0 @@
package common
import (
"encoding/json"
"log"
"os"
)
type DebridConfig struct {
Name string `json:"name"`
Host string `json:"host"`
APIKey string `json:"api_key"`
Folder string `json:"folder"`
DownloadUncached bool `json:"download_uncached"`
RateLimit string `json:"rate_limit"` // 200/minute or 10/second
}
type Config struct {
Debrid DebridConfig `json:"debrid"`
Arrs []struct {
WatchFolder string `json:"watch_folder"`
CompletedFolder string `json:"completed_folder"`
Token string `json:"token"`
URL string `json:"url"`
} `json:"arrs"`
Proxy struct {
Port string `json:"port"`
Enabled bool `json:"enabled"`
Debug bool `json:"debug"`
Username string `json:"username"`
Password string `json:"password"`
CachedOnly bool `json:"cached_only"`
}
MaxCacheSize int `json:"max_cache_size"`
}
func LoadConfig(path string) (*Config, error) {
// Load the config file
file, err := os.Open(path)
if err != nil {
return nil, err
}
defer func(file *os.File) {
err := file.Close()
if err != nil {
log.Fatal(err)
}
}(file)
decoder := json.NewDecoder(file)
config := &Config{}
err = decoder.Decode(config)
if err != nil {
return nil, err
}
return config, nil
}


@@ -1,38 +0,0 @@
package common
import (
"regexp"
)
var (
VIDEOMATCH = "(?i)(\\.)(YUV|WMV|WEBM|VOB|VIV|SVI|ROQ|RMVB|RM|OGV|OGG|NSV|MXF|MTS|M2TS|TS|MPG|MPEG|M2V|MP2|MPE|MPV|MP4|M4P|M4V|MOV|QT|MNG|MKV|FLV|DRC|AVI|ASF|AMV)$"
SUBMATCH = "(?i)(\\.)(SRT|SUB|SBV|ASS|VTT|TTML|DFXP|STL|SCC|CAP|SMI|TTXT|TDS|USF|JSS|SSA|PSB|RT|LRC|SSB)$"
SeasonMatch = "(?i)(?:season|s)[.\\-_\\s]?(\\d+)"
)
func RegexMatch(regex string, value string) bool {
re := regexp.MustCompile(regex)
return re.MatchString(value)
}
func RemoveExtension(value string) string {
re := regexp.MustCompile(VIDEOMATCH)
// Find the last index of the matched extension
loc := re.FindStringIndex(value)
if loc != nil {
return value[:loc[0]]
} else {
return value
}
}
func RegexFind(regex string, value string) string {
re := regexp.MustCompile(regex)
match := re.FindStringSubmatch(value)
if len(match) > 0 {
return match[0]
} else {
return ""
}
}


@@ -1,130 +0,0 @@
package common
import (
"crypto/tls"
"fmt"
"golang.org/x/time/rate"
"io"
"log"
"net/http"
"regexp"
"strconv"
"time"
)
type RLHTTPClient struct {
client *http.Client
Ratelimiter *rate.Limiter
Headers map[string]string
}
func (c *RLHTTPClient) Doer(req *http.Request) (*http.Response, error) {
if c.Ratelimiter != nil {
err := c.Ratelimiter.Wait(req.Context())
if err != nil {
return nil, err
}
}
resp, err := c.client.Do(req)
if err != nil {
return nil, err
}
return resp, nil
}
func (c *RLHTTPClient) Do(req *http.Request) (*http.Response, error) {
var resp *http.Response
var err error
backoff := time.Millisecond * 500
for i := 0; i < 3; i++ {
resp, err = c.Doer(req)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusTooManyRequests {
return resp, nil
}
// Close the response body to prevent resource leakage
resp.Body.Close()
// Wait for the backoff duration before retrying
time.Sleep(backoff)
// Exponential backoff
backoff *= 2
}
return resp, fmt.Errorf("max retries exceeded")
}
func (c *RLHTTPClient) MakeRequest(method string, url string, body io.Reader) ([]byte, error) {
req, err := http.NewRequest(method, url, body)
if err != nil {
return nil, err
}
if c.Headers != nil {
for key, value := range c.Headers {
req.Header.Set(key, value)
}
}
res, err := c.Do(req)
if err != nil {
return nil, err
}
statusOk := strconv.Itoa(res.StatusCode)[0] == '2'
if !statusOk {
return nil, fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
defer func(Body io.ReadCloser) {
err := Body.Close()
if err != nil {
log.Println(err)
}
}(res.Body)
return io.ReadAll(res.Body)
}
func NewRLHTTPClient(rl *rate.Limiter, headers map[string]string) *RLHTTPClient {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
c := &RLHTTPClient{
client: &http.Client{
Transport: tr,
},
Ratelimiter: rl,
Headers: headers,
}
return c
}
func ParseRateLimit(rateStr string) *rate.Limiter {
if rateStr == "" {
return nil
}
re := regexp.MustCompile(`(\d+)/(minute|second)`)
matches := re.FindStringSubmatch(rateStr)
if len(matches) != 3 {
return nil
}
count, err := strconv.Atoi(matches[1])
if err != nil {
return nil
}
unit := matches[2]
switch unit {
case "minute":
reqsPerSecond := float64(count) / 60.0
return rate.NewLimiter(rate.Limit(reqsPerSecond), 5)
case "second":
return rate.NewLimiter(rate.Limit(float64(count)), 5)
default:
return nil
}
}


@@ -1,112 +0,0 @@
package debrid
import (
"github.com/anacrolix/torrent/metainfo"
"goBlack/common"
"path/filepath"
)
type Service interface {
SubmitMagnet(torrent *Torrent) (*Torrent, error)
CheckStatus(torrent *Torrent) (*Torrent, error)
DownloadLink(torrent *Torrent) error
Process(arr *Arr, magnet string) (*Torrent, error)
IsAvailable(infohashes []string) map[string]bool
}
type Debrid struct {
Host string `json:"host"`
APIKey string
DownloadUncached bool
client *common.RLHTTPClient
cache *common.Cache
}
func NewDebrid(dc common.DebridConfig, cache *common.Cache) Service {
switch dc.Name {
case "realdebrid":
return NewRealDebrid(dc, cache)
default:
return NewRealDebrid(dc, cache)
}
}
func GetTorrentInfo(filePath string) (*Torrent, error) {
// Open and read the .torrent file
if filepath.Ext(filePath) == ".torrent" {
return getTorrentInfo(filePath)
} else {
return torrentFromMagnetFile(filePath)
}
}
func torrentFromMagnetFile(filePath string) (*Torrent, error) {
magnetLink := common.OpenMagnetFile(filePath)
magnet, err := common.GetMagnetInfo(magnetLink)
if err != nil {
return nil, err
}
torrent := &Torrent{
InfoHash: magnet.InfoHash,
Name: magnet.Name,
Size: magnet.Size,
Magnet: magnet,
Filename: filePath,
}
return torrent, nil
}
func getTorrentInfo(filePath string) (*Torrent, error) {
mi, err := metainfo.LoadFromFile(filePath)
if err != nil {
return nil, err
}
hash := mi.HashInfoBytes()
infoHash := hash.HexString()
info, err := mi.UnmarshalInfo()
if err != nil {
return nil, err
}
magnet := &common.Magnet{
InfoHash: infoHash,
Name: info.Name,
Size: info.Length,
Link: mi.Magnet(&hash, &info).String(),
}
torrent := &Torrent{
InfoHash: infoHash,
Name: info.Name,
Size: info.Length,
Magnet: magnet,
Filename: filePath,
}
return torrent, nil
}
func GetLocalCache(infohashes []string, cache *common.Cache) ([]string, map[string]bool) {
result := make(map[string]bool)
hashes := make([]string, len(infohashes))
if len(infohashes) == 0 {
return hashes, result
}
if len(infohashes) == 1 {
if cache.Exists(infohashes[0]) {
return hashes, map[string]bool{infohashes[0]: true}
}
return infohashes, result
}
cachedHashes := cache.GetMultiple(infohashes)
for _, h := range infohashes {
_, exists := cachedHashes[h]
if !exists {
hashes = append(hashes, h)
} else {
result[h] = true
}
}
return hashes, result
}


@@ -1,194 +0,0 @@
package debrid
import (
"encoding/json"
"fmt"
"goBlack/common"
"goBlack/debrid/structs"
"log"
"net/http"
gourl "net/url"
"path/filepath"
"strconv"
"strings"
)
type RealDebrid struct {
Host string `json:"host"`
APIKey string
DownloadUncached bool
client *common.RLHTTPClient
cache *common.Cache
}
func (r *RealDebrid) Process(arr *Arr, magnet string) (*Torrent, error) {
torrent, err := GetTorrentInfo(magnet)
torrent.Arr = arr
if err != nil {
return torrent, err
}
log.Printf("Torrent Name: %s", torrent.Name)
if !r.DownloadUncached {
hash, exists := r.IsAvailable([]string{torrent.InfoHash})[torrent.InfoHash]
if !exists || !hash {
return torrent, fmt.Errorf("torrent is not cached")
}
log.Printf("Torrent: %s is cached", torrent.Name)
}
torrent, err = r.SubmitMagnet(torrent)
if err != nil || torrent.Id == "" {
return nil, err
}
return r.CheckStatus(torrent)
}
func (r *RealDebrid) IsAvailable(infohashes []string) map[string]bool {
// Check if the infohashes are available in the local cache
hashes, result := GetLocalCache(infohashes, r.cache)
if len(hashes) == 0 {
// Either all the infohashes are locally cached or none are
r.cache.AddMultiple(result)
return result
}
// Divide hashes into groups of 100
for i := 0; i < len(hashes); i += 200 {
end := i + 200
if end > len(hashes) {
end = len(hashes)
}
// Filter out empty strings
validHashes := make([]string, 0, end-i)
for _, hash := range hashes[i:end] {
if hash != "" {
validHashes = append(validHashes, hash)
}
}
// If no valid hashes in this batch, continue to the next batch
if len(validHashes) == 0 {
continue
}
hashStr := strings.Join(validHashes, "/")
url := fmt.Sprintf("%s/torrents/instantAvailability/%s", r.Host, hashStr)
resp, err := r.client.MakeRequest(http.MethodGet, url, nil)
if err != nil {
log.Println("Error checking availability:", err)
return result
}
var data structs.RealDebridAvailabilityResponse
err = json.Unmarshal(resp, &data)
if err != nil {
log.Println("Error marshalling availability:", err)
return result
}
for _, h := range hashes[i:end] {
hosters, exists := data[strings.ToLower(h)]
if exists && len(hosters.Rd) > 0 {
result[h] = true
}
}
}
r.cache.AddMultiple(result) // Add the results to the cache
return result
}
func (r *RealDebrid) SubmitMagnet(torrent *Torrent) (*Torrent, error) {
url := fmt.Sprintf("%s/torrents/addMagnet", r.Host)
payload := gourl.Values{
"magnet": {torrent.Magnet.Link},
}
var data structs.RealDebridAddMagnetSchema
resp, err := r.client.MakeRequest(http.MethodPost, url, strings.NewReader(payload.Encode()))
if err != nil {
return nil, err
}
err = json.Unmarshal(resp, &data)
log.Printf("Torrent: %s added with id: %s\n", torrent.Name, data.Id)
torrent.Id = data.Id
return torrent, nil
}
func (r *RealDebrid) CheckStatus(torrent *Torrent) (*Torrent, error) {
url := fmt.Sprintf("%s/torrents/info/%s", r.Host, torrent.Id)
for {
resp, err := r.client.MakeRequest(http.MethodGet, url, nil)
if err != nil {
return torrent, err
}
var data structs.RealDebridTorrentInfo
err = json.Unmarshal(resp, &data)
status := data.Status
torrent.Folder = common.RemoveExtension(data.OriginalFilename)
if status == "error" || status == "dead" || status == "magnet_error" {
return torrent, fmt.Errorf("torrent: %s has error", torrent.Name)
} else if status == "waiting_files_selection" {
files := make([]TorrentFile, 0)
for _, f := range data.Files {
name := filepath.Base(f.Path)
if !common.RegexMatch(common.VIDEOMATCH, name) && !common.RegexMatch(common.SUBMATCH, name) {
continue
}
fileId := f.ID
file := &TorrentFile{
Name: name,
Path: filepath.Join(torrent.Folder, name),
Size: int64(f.Bytes),
Id: strconv.Itoa(fileId),
}
files = append(files, *file)
}
torrent.Files = files
if len(files) == 0 {
return torrent, fmt.Errorf("no video files found")
}
filesId := make([]string, 0)
for _, f := range files {
filesId = append(filesId, f.Id)
}
p := gourl.Values{
"files": {strings.Join(filesId, ",")},
}
payload := strings.NewReader(p.Encode())
_, err = r.client.MakeRequest(http.MethodPost, fmt.Sprintf("%s/torrents/selectFiles/%s", r.Host, torrent.Id), payload)
if err != nil {
return torrent, err
}
} else if status == "downloaded" {
log.Printf("Torrent: %s downloaded\n", torrent.Name)
err = r.DownloadLink(torrent)
if err != nil {
return torrent, err
}
break
} else if status == "downloading" {
return torrent, fmt.Errorf("torrent is uncached")
}
}
return torrent, nil
}
func (r *RealDebrid) DownloadLink(torrent *Torrent) error {
return nil
}
func NewRealDebrid(dc common.DebridConfig, cache *common.Cache) *RealDebrid {
rl := common.ParseRateLimit(dc.RateLimit)
headers := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
}
client := common.NewRLHTTPClient(rl, headers)
return &RealDebrid{
Host: dc.Host,
APIKey: dc.APIKey,
DownloadUncached: dc.DownloadUncached,
client: client,
cache: cache,
}
}

View File

@@ -1,96 +0,0 @@
package structs
import (
"encoding/json"
"fmt"
)
type RealDebridAvailabilityResponse map[string]Hoster
func (r *RealDebridAvailabilityResponse) UnmarshalJSON(data []byte) error {
// First, try to unmarshal as an object
var objectData map[string]Hoster
err := json.Unmarshal(data, &objectData)
if err == nil {
*r = objectData
return nil
}
// If that fails, try to unmarshal as an array
var arrayData []map[string]Hoster
err = json.Unmarshal(data, &arrayData)
if err != nil {
return fmt.Errorf("failed to unmarshal as both object and array: %v", err)
}
// If it's an array, use the first element
if len(arrayData) > 0 {
*r = arrayData[0]
return nil
}
// If it's an empty array, initialize as an empty map
*r = make(map[string]Hoster)
return nil
}
type Hoster struct {
Rd []map[string]FileVariant `json:"rd"`
}
func (h *Hoster) UnmarshalJSON(data []byte) error {
// Attempt to unmarshal into the expected structure (an object with an "rd" key)
type Alias Hoster
var obj Alias
if err := json.Unmarshal(data, &obj); err == nil {
*h = Hoster(obj)
return nil
}
// If unmarshalling into an object fails, check if it's an empty array
var arr []interface{}
if err := json.Unmarshal(data, &arr); err == nil && len(arr) == 0 {
// It's an empty array; initialize with no entries
*h = Hoster{Rd: nil}
return nil
}
// If both attempts fail, return an error
return fmt.Errorf("hoster: cannot unmarshal JSON data: %s", string(data))
}
type FileVariant struct {
Filename string `json:"filename"`
Filesize int `json:"filesize"`
}
type RealDebridAddMagnetSchema struct {
Id string `json:"id"`
Uri string `json:"uri"`
}
type RealDebridTorrentInfo struct {
ID string `json:"id"`
Filename string `json:"filename"`
OriginalFilename string `json:"original_filename"`
Hash string `json:"hash"`
Bytes int `json:"bytes"`
OriginalBytes int `json:"original_bytes"`
Host string `json:"host"`
Split int `json:"split"`
Progress int `json:"progress"`
Status string `json:"status"`
Added string `json:"added"`
Files []struct {
ID int `json:"id"`
Path string `json:"path"`
Bytes int `json:"bytes"`
Selected int `json:"selected"`
} `json:"files"`
Links []string `json:"links"`
Ended string `json:"ended,omitempty"`
Speed int `json:"speed,omitempty"`
Seeders int `json:"seeders,omitempty"`
}
// 5e6e2e77fd3921a7903a41336c844cc409bf8788/14527C07BDFDDFC642963238BB6E7507B9742947/66A1CD1A5C7F4014877A51AC2620E857E3BB4D16

View File

@@ -1,142 +0,0 @@
package debrid
import (
"encoding/json"
"goBlack/common"
"log"
"net/http"
gourl "net/url"
"os"
"strconv"
"strings"
)
type Arr struct {
WatchFolder string `json:"watch_folder"`
CompletedFolder string `json:"completed_folder"`
Debrid common.DebridConfig `json:"debrid"`
Token string `json:"token"`
URL string `json:"url"`
Client *common.RLHTTPClient
}
type ArrHistorySchema struct {
Page int `json:"page"`
PageSize int `json:"pageSize"`
SortKey string `json:"sortKey"`
SortDirection string `json:"sortDirection"`
TotalRecords int `json:"totalRecords"`
Records []struct {
ID int `json:"id"`
DownloadID string `json:"downloadId"`
} `json:"records"`
}
type Torrent struct {
Id string `json:"id"`
InfoHash string `json:"info_hash"`
Name string `json:"name"`
Folder string `json:"folder"`
Filename string `json:"filename"`
Size int64 `json:"size"`
Magnet *common.Magnet `json:"magnet"`
Files []TorrentFile `json:"files"`
Status string `json:"status"`
Arr *Arr
}
type TorrentFile struct {
Id string `json:"id"`
Name string `json:"name"`
Size int64 `json:"size"`
Path string `json:"path"`
}
func (arr *Arr) GetHeaders() map[string]string {
return map[string]string{
"X-Api-Key": arr.Token,
}
}
func (arr *Arr) GetURL() string {
url, _ := gourl.JoinPath(arr.URL, "api/v3/")
return url
}
func getEventId(eventType string) int {
switch eventType {
case "grabbed":
return 1
case "seriesFolderDownloaded":
return 2
case "DownloadFolderImported":
return 3
case "DownloadFailed":
return 4
case "DownloadIgnored":
return 7
default:
return 0
}
}
func (arr *Arr) GetHistory(downloadId, eventType string) *ArrHistorySchema {
eventId := getEventId(eventType)
query := gourl.Values{}
if downloadId != "" {
query.Add("downloadId", downloadId)
}
if eventId != 0 {
query.Add("eventId", strconv.Itoa(eventId))
}
query.Add("pageSize", "100")
url := arr.GetURL() + "history/" + "?" + query.Encode()
resp, err := arr.Client.MakeRequest(http.MethodGet, url, nil)
if err != nil {
return nil
}
var data *ArrHistorySchema
err = json.Unmarshal(resp, &data)
if err != nil {
return nil
}
return data
}
func (t *Torrent) Cleanup(remove bool) {
if remove {
err := os.Remove(t.Filename)
if err != nil {
return
}
}
}
func (t *Torrent) MarkAsFailed() error {
downloadId := strings.ToUpper(t.Magnet.InfoHash)
history := t.Arr.GetHistory(downloadId, "grabbed")
if history == nil {
return nil
}
torrentId := 0
for _, record := range history.Records {
if strings.EqualFold(record.DownloadID, downloadId) {
torrentId = record.ID
break
}
}
if torrentId != 0 {
url, err := gourl.JoinPath(t.Arr.GetURL(), "history/failed/", strconv.Itoa(torrentId))
if err != nil {
return err
}
_, err = t.Arr.Client.MakeRequest(http.MethodPost, url, nil)
if err == nil {
log.Printf("Marked torrent: %s as failed", t.Name)
}
}
return nil
}

171
docs/docs/changelog.md Normal file
View File

@@ -0,0 +1,171 @@
# Changelog
## 0.5.0
- A more refined repair worker (with more control)
- UI Improvements
- Pagination for torrents
- Dark mode
- Ordered torrents table
- Fix Arr API flaky behavior
- Discord Notifications
- Minor bug fixes
- Add Tautulli support
- playback_failed event triggers a repair
- Miscellaneous improvements
- Add an option to skip the repair worker for a specific arr
- Arr specific uncached downloading option
- Option to download uncached torrents from UI
- Remove QbitTorrent Log level (Use the global log level)
## 0.4.2
- Hotfixes
- Fix saving torrents error
- Fix bugs with the UI
- Speed improvements
## 0.4.1
- Adds optional UI authentication
- Downloaded Torrents persist on restart
- Fixes
- Fix Alldebrid struggling to find the correct file
- Minor bug fixes and speed gains
- A new cleanup worker to clean up ARR queues
## 0.4.0
- Add support for multiple debrid providers
- A full-fledged UI for adding torrents, repairing files, viewing config and managing torrents
- Fix issues with Alldebrid
- Fix file traversal bug
- Fix files with no parent directory
- Logging
- Add a more robust logging system
- Add logging to a file
- Add logging to the UI
- Qbittorrent
- Add support for tags (creating, deleting, listing)
- Add support for categories (creating, deleting, listing)
- Fix issues with arr sending torrents using a different content type
## 0.3.3
- Add AllDebrid Support
- Fix Torbox not downloading uncached torrents
- Fix Rar files being downloaded
## 0.3.2
- Fix DebridLink not downloading
- Fix Torbox with uncached torrents
- Add new /internal/cached endpoint to check if a hash is cached
- Implement per-debrid local cache
- Fix file check for torbox
- Other minor bug fixes
## 0.3.1
- Add DebridLink Support
- Refactor error handling
## 0.3.0
- Add UI for adding torrents
- Refactoring of the code
- Fix Torbox bug
- Update CI/CD
- Update Readme
## 0.2.7
- Add support for multiple debrid providers
- Add Torbox support
- Add support for configurable debrid cache checks
- Add support for configurable debrid download uncached torrents
## 0.2.6
- Delete torrent for empty matched files
- Update Readme
## 0.2.5
- Fix ContentPath not being set prior
- Rewrote Readme
- Cleaned up the code
## 0.2.4
- Add file download support (Sequential Download)
- Fix http handler error
- Fix *arrs map failing concurrently
- Fix cache not being updated
## 0.2.3
- Delete uncached items from RD
- Fail if the torrent is not cached (optional)
- Fix cache not being updated
## 0.2.2
- Fix name mismatch in the cache
- Fix directory mapping with mounts
- Add Support for refreshing the *arrs
## 0.2.1
- Fix uncached torrents not being downloaded
- Minor bug fixes
- Fix Race condition in the cache and file system
## 0.2.0
- Implement 0.2.0-beta changes
- Removed Blackhole
- Added QbitTorrent API
- Cleaned up the code
## 0.2.0-beta
- Switch to QbitTorrent API instead of Blackhole
- Rewrote the whole codebase
## 0.1.4
- Rewrote Report log
- Fix YTS, 1337x not grabbing infohash
- Fix Torrent symlink bug
## 0.1.3
- Searching for infohashes in the xml description/summary/comments
- Added local cache support
- Added max cache size
- Rewrite blackhole.go
- Bug fixes
- Fixed indexer getting disabled
- Fixed blackhole not working
## 0.1.2
- Bug fixes
- Code cleanup
- Get available hashes at once
## 0.1.1
- Added support for "No Blackhole" for Arrs
- Added support for "Cached Only" for Proxy
- Bug Fixes
## 0.1.0
- Initial Release
- Added Real Debrid Support
- Added Arrs Support
- Added Proxy Support
- Added Basic Authentication for Proxy
- Added Rate Limiting for Debrid Providers

View File

@@ -0,0 +1,75 @@
# Arr Applications Configuration
DecyphArr can integrate directly with Sonarr, Radarr, and other Arr applications. This section explains how to configure the Arr integration in your `config.json` file.
## Basic Configuration
The Arr applications are configured under the `arrs` key:
```json
"arrs": [
{
"name": "sonarr",
"host": "http://sonarr:8989",
"token": "your-sonarr-api-key",
"cleanup": true
},
{
"name": "radarr",
"host": "http://radarr:7878",
"token": "your-radarr-api-key",
"cleanup": true
}
]
```
!!! note
    This configuration is optional if you've already set up the qBittorrent client in your Arr applications with the correct host and token information. It's particularly useful for the Repair Worker functionality.
### Configuration Options
Each Arr application supports the following options:
- `name`: The name of the Arr application, which should match the category in qBittorrent
- `host`: The host URL of the Arr application, including protocol and port
- `token`: The API token/key of the Arr application
- `cleanup`: Whether to clean up the Arr queue (removes completed downloads). This is only useful for Sonarr.
### Finding Your API Key
#### Sonarr/Radarr/Lidarr
1. In Sonarr/Radarr/Lidarr, go to Settings > General
2. Look for "API Key" in the "General" section
3. Copy the API key
### Multiple Arr Applications
You can configure multiple Arr applications by adding more entries to the arrs array:
```json
"arrs": [
{
"name": "sonarr",
"host": "http://sonarr:8989",
"token": "your-sonarr-api-key",
"cleanup": true
},
{
"name": "sonarr-anime",
"host": "http://sonarr-anime:8989",
"token": "your-sonarr-anime-api-key",
"cleanup": true
},
{
"name": "radarr",
"host": "http://radarr:7878",
"token": "your-radarr-api-key",
"cleanup": false
},
{
"name": "lidarr",
"host": "http://lidarr:8686",
"token": "your-lidarr-api-key",
"cleanup": false
}
]
```

View File

@@ -0,0 +1,123 @@
# Debrid Providers Configuration
DecyphArr supports multiple Debrid providers. This section explains how to configure each provider in your `config.json` file.
## Basic Configuration
Each Debrid provider is configured in the `debrids` array:
```json
"debrids": [
{
"name": "realdebrid",
"host": "https://api.real-debrid.com/rest/1.0",
"api_key": "your-api-key",
"folder": "/mnt/remote/realdebrid/__all__/"
},
{
"name": "alldebrid",
"host": "https://api.alldebrid.com/v4",
"api_key": "your-api-key",
"folder": "/mnt/remote/alldebrid/downloads/"
}
]
```
### Provider Options
Each Debrid provider accepts the following configuration options:
#### Basic Options
- `name`: The name of the Debrid provider (realdebrid, alldebrid, debridlink, torbox)
- `host`: The API endpoint of the Debrid provider
- `api_key`: Your API key for the Debrid service (can be comma-separated for multiple keys)
- `folder`: The folder where your Debrid content is mounted (via webdav, rclone, zurg, etc.)
#### Advanced Options
- `download_api_keys`: Array of API keys used specifically for downloading torrents (defaults to the same as api_key)
- `rate_limit`: Rate limit for API requests (null by default)
- `download_uncached`: Whether to download uncached torrents (disabled by default)
- `check_cached`: Whether to check if torrents are cached (disabled by default)
- `use_webdav`: Whether to create a WebDAV server for this Debrid provider (disabled by default)
### Using Multiple API Keys
For services that support it, you can provide multiple download API keys for better load balancing:
```json
{
"name": "realdebrid",
"host": "https://api.real-debrid.com/rest/1.0",
"api_key": "key1",
"download_api_keys": ["key1", "key2", "key3"],
"folder": "/mnt/remote/realdebrid/__all__/"
}
```
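The exact way DecyphArr distributes requests across these keys is internal to the application, but the general idea is simple rotation so that no single key absorbs all of the download traffic. As a rough illustration only (the `keyRotator` type below is hypothetical and not DecyphArr's actual code), a round-robin picker in Go could look like this:
```go
package main

import (
	"fmt"
	"sync/atomic"
)

// keyRotator hands out download API keys in round-robin order so that
// requests are spread evenly across all configured keys.
type keyRotator struct {
	keys []string
	next atomic.Uint64
}

// Next returns the next key, wrapping around after the last one.
func (r *keyRotator) Next() string {
	n := r.next.Add(1) - 1
	return r.keys[n%uint64(len(r.keys))]
}

func main() {
	r := &keyRotator{keys: []string{"key1", "key2", "key3"}}
	for i := 0; i < 5; i++ {
		fmt.Println(r.Next()) // key1, key2, key3, key1, key2
	}
}
```
Each outgoing download request would then pick its key with `Next()` before authenticating against the provider.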
### Example Configuration
#### Real Debrid
```json
{
"name": "realdebrid",
"host": "https://api.real-debrid.com/rest/1.0",
"api_key": "your-api-key",
"folder": "/mnt/remote/realdebrid/__all__/",
"rate_limit": null,
"download_uncached": false,
"check_cached": true,
"use_webdav": true
}
```
#### All Debrid
```json
{
"name": "alldebrid",
"host": "https://api.alldebrid.com/v4",
"api_key": "your-api-key",
"folder": "/mnt/remote/alldebrid/torrents/",
"rate_limit": null,
"download_uncached": false,
"check_cached": true,
"use_webdav": true
}
```
#### Debrid Link
```json
{
"name": "debridlink",
"host": "https://debrid-link.com/api/v2",
"api_key": "your-api-key",
"folder": "/mnt/remote/debridlink/torrents/",
"rate_limit": null,
"download_uncached": false,
"check_cached": true,
"use_webdav": true
}
```
#### Torbox
```json
{
"name": "torbox",
"host": "https://api.torbox.com/v1",
"api_key": "your-api-key",
"folder": "/mnt/remote/torbox/torrents/",
"rate_limit": null,
"download_uncached": false,
"check_cached": true,
"use_webdav": true
}
```

View File

@@ -0,0 +1,69 @@
# General Configuration
This section covers the basic configuration options for DecyphArr that apply to the entire application.
## Basic Settings
Here are the fundamental configuration options:
```json
{
"use_auth": false,
"log_level": "info",
"discord_webhook_url": "",
"min_file_size": 0,
"max_file_size": 0,
"allowed_file_types": [".mp4", ".mkv", ".avi", ...]
}
```
### Configuration Options
#### Log Level
The `log_level` setting determines how verbose the application logs will be:
- `debug`: Detailed information, useful for troubleshooting
- `info`: General operational information (default)
- `warn`: Warning messages
- `error`: Error messages only
- `trace`: Very detailed information, including all requests and responses
#### Authentication
The `use_auth` option enables basic authentication for the UI:
```json
"use_auth": true
```
When enabled, you'll need to provide a username and password to access the DecyphArr interface.
#### File Size Limits
You can set minimum and maximum file size limits for torrents:
```json
"min_file_size": 0, // Minimum file size in bytes (0 = no minimum)
"max_file_size": 0 // Maximum file size in bytes (0 = no maximum)
```
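Conceptually, a file passes the size filter only when it falls inside both bounds, and a bound of `0` is treated as disabled. The sketch below (simplified, not DecyphArr's exact implementation) shows that check in Go:
```go
package main

import "fmt"

// isSizeAllowed reports whether a file of the given size (in bytes) passes the
// configured limits. A limit of 0 means "no limit", and a size of 0 is allowed
// because the size may simply not have been reported yet.
func isSizeAllowed(size, minSize, maxSize int64) bool {
	if size == 0 {
		return true
	}
	if minSize > 0 && size < minSize {
		return false
	}
	if maxSize > 0 && size > maxSize {
		return false
	}
	return true
}

func main() {
	fmt.Println(isSizeAllowed(500_000_000, 100_000_000, 0)) // true: above the minimum, no maximum set
	fmt.Println(isSizeAllowed(50_000_000, 100_000_000, 0))  // false: below the configured minimum
}
```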
#### Allowed File Types
You can restrict the types of files that DecyphArr will process by specifying allowed file extensions. This is useful for filtering out unwanted file types.
```json
"allowed_file_types": [
".mp4", ".mkv", ".avi", ".mov",
".m4v", ".mpg", ".mpeg", ".wmv",
".m4a", ".mp3", ".flac", ".wav"
]
```
If not specified, all movie, TV show, and music file types are allowed by default.
#### Discord Notifications
To receive notifications on Discord, add your webhook URL:
```json
"discord_webhook_url": "https://discord.com/api/webhooks/..."
```
This will send notifications for various events, such as successful downloads or errors.

View File

@@ -0,0 +1,44 @@
# Configuration Overview
DecyphArr uses a JSON configuration file to manage its settings. This file should be named `config.json` and placed in your configured directory.
## Basic Configuration
Here's a minimal configuration to get started:
```json
{
"debrids": [
{
"name": "realdebrid",
"host": "https://api.real-debrid.com/rest/1.0",
"api_key": "realdebrid_key",
"folder": "/mnt/remote/realdebrid/__all__/"
}
],
"qbittorrent": {
"port": "8282",
"download_folder": "/mnt/symlinks/",
"categories": ["sonarr", "radarr"]
},
"repair": {
"enabled": false,
"interval": "12h",
"run_on_start": false
},
"use_auth": false,
"log_level": "info"
}
```
### Configuration Sections
DecyphArr's configuration is divided into several sections:
- [General Configuration](general.md) - Basic settings like logging and authentication
- [Debrid Providers](debrid.md) - Configure one or more Debrid services
- [qBittorrent Settings](qbittorrent.md) - Settings for the qBittorrent API
- [Arr Integration](arrs.md) - Configuration for Sonarr, Radarr, etc.
### Full Configuration Example
For a complete configuration file with all available options, see our [full configuration example](../extras/config.full.json).

View File

@@ -0,0 +1,74 @@
# qBittorrent Configuration
DecyphArr emulates a qBittorrent instance to integrate with Arr applications. This section explains how to configure the qBittorrent settings in your `config.json` file.
## Basic Configuration
The qBittorrent functionality is configured under the `qbittorrent` key:
```json
"qbittorrent": {
"port": "8282",
"download_folder": "/mnt/symlinks/",
"categories": ["sonarr", "radarr", "lidarr"],
"refresh_interval": 5
}
```
### Configuration Options
#### Essential Settings
- `port`: The port on which the qBittorrent API will listen (default: 8282)
- `download_folder`: The folder where symlinks or downloaded files will be placed
- `categories`: An array of categories to organize downloads (usually matches your Arr applications)
#### Advanced Settings
- `refresh_interval`: How often (in seconds) to refresh the Arrs Monitored Downloads (default: 5)
#### Categories
Categories help organize your downloads and match them to specific Arr applications. Typically, you'll want to configure categories that match your Sonarr, Radarr, or other Arr applications:
```json
"categories": ["sonarr", "radarr", "lidarr", "readarr"]
```
When setting up your Arr applications to connect to DecyphArr, you'll specify these same category names.
#### Download Folder
The `download_folder` setting specifies where DecyphArr will place downloaded files or create symlinks:
```json
"download_folder": "/mnt/symlinks/"
```
This folder should:
- Be accessible to DecyphArr
- Be accessible to your Arr applications
- Have sufficient space if downloading files locally
#### Port Configuration
The `port` setting determines which port the qBittorrent API will listen on:
```json
"port": "8282"
```
Ensure this port:
- Is not used by other applications
- Is accessible to your Arr applications
- Is properly exposed if using Docker (see the Docker Compose example in the Installation guide)
#### Refresh Interval
The `refresh_interval` setting controls how often DecyphArr checks for updates from your Arr applications:
```json
"refresh_interval": 5
```
This value is in seconds. Lower values provide more responsive updates but may increase CPU usage.

View File

@@ -0,0 +1,96 @@
{
"debrids": [
{
"name": "realdebrid",
"host": "https://api.real-debrid.com/rest/1.0",
"api_key": "realdebrid_key",
"folder": "/mnt/remote/realdebrid/__all__/",
"download_api_keys": [],
"proxy": "",
"rate_limit": "250/minute",
"download_uncached": false,
"check_cached": false,
"use_webdav": true,
"torrents_refresh_interval": "15s",
"folder_naming": "original_no_ext",
"auto_expire_links_after": "3d",
"rc_url": "http://your-ip-address:9990",
"rc_user": "your_rclone_rc_user",
"rc_pass": "your_rclone_rc_pass"
},
{
"name": "torbox",
"host": "https://api.torbox.app/v1",
"api_key": "torbox_api_key",
"folder": "/mnt/remote/torbox/torrents/",
"rate_limit": "250/minute",
"download_uncached": false,
"check_cached": true
},
{
"name": "debridlink",
"host": "https://debrid-link.com/api/v2",
"api_key": "debridlink_key",
"folder": "/mnt/remote/debridlink/torrents/",
"rate_limit": "250/minute",
"download_uncached": false,
"check_cached": false
},
{
"name": "alldebrid",
"host": "http://api.alldebrid.com/v4.1",
"api_key": "alldebrid_key",
"folder": "/mnt/remote/alldebrid/magnet/",
"rate_limit": "600/minute",
"download_uncached": false,
"check_cached": false
}
],
"max_cache_size": 1000,
"qbittorrent": {
"port": "8282",
"download_folder": "/mnt/symlinks/",
"categories": ["sonarr", "radarr"],
"refresh_interval": 5,
"skip_pre_cache": false
},
"arrs": [
{
"name": "sonarr",
"host": "http://radarr:8989",
"token": "arr_key",
"cleanup": true,
"skip_repair": true,
"download_uncached": false
},
{
"name": "radarr",
"host": "http://radarr:7878",
"token": "arr_key",
"cleanup": false,
"download_uncached": false
},
{
"name": "lidarr",
"host": "http://lidarr:7878",
"token": "arr_key",
"cleanup": false,
"skip_repair": true,
"download_uncached": false
}
],
"repair": {
"enabled": false,
"interval": "12h",
"run_on_start": false,
"zurg_url": "",
"use_webdav": false,
"auto_process": false
},
"log_level": "info",
"min_file_size": "",
"max_file_size": "",
"allowed_file_types": [],
"use_auth": false,
"discord_webhook_url": "https://discord.com/api/webhooks/..."
}

View File

@@ -0,0 +1,5 @@
[decypharr]
type = webdav
url = http://decypharr:8282/webdav/realdebrid
vendor = other
pacer_min_sleep = 0

View File

@@ -0,0 +1,40 @@
# Features Overview
DecyphArr extends the functionality of qBittorrent by integrating with Debrid services, providing several powerful features that enhance your media management experience.
## Core Features
### Mock qBittorrent API
DecyphArr implements a complete qBittorrent-compatible API that can be used with Sonarr, Radarr, Lidarr, and other Arr applications. This allows you to:
- Seamlessly integrate with your existing Arr setup
- Use familiar interfaces to manage your downloads
- Benefit from Debrid services without changing your workflow
### Comprehensive UI
The DecyphArr user interface provides:
- Torrent management capabilities
- Status monitoring
- Configuration options
- Multiple Debrid provider integration
## Advanced Features
DecyphArr includes several advanced features that extend its capabilities:
- [Repair Worker](repair-worker.md): Identifies and fixes issues with your media files
- [WebDAV Server](webdav.md): Provides direct access to your Debrid files
## Supported Debrid Providers
DecyphArr supports multiple Debrid providers:
- Real Debrid
- Torbox
- Debrid Link
- All Debrid
Each provider can be configured separately, allowing you to use one or multiple services simultaneously.

View File

@@ -0,0 +1,41 @@
# Repair Worker
The Repair Worker is a powerful feature that helps maintain the health of your media library by scanning for and fixing issues with files.
## What It Does
The Repair Worker performs the following tasks:
- Searches for broken symlinks or file references
- Identifies missing files in your library
- Locates deleted or unreadable files
- Automatically repairs issues when possible
## Configuration
To enable and configure the Repair Worker, add the following to your `config.json`:
```json
"repair": {
"enabled": true,
"interval": "12h",
"run_on_start": false,
"use_webdav": false,
"zurg_url": "http://localhost:9999",
"auto_process": true
}
```
### Configuration Options
- `enabled`: Set to `true` to enable the Repair Worker.
- `interval`: The time interval for the Repair Worker to run (e.g., `12h`, `1d`).
- `run_on_start`: If set to `true`, the Repair Worker will run immediately after DecyphArr starts.
- `use_webdav`: If set to `true`, the Repair Worker will use WebDAV for file operations.
- `zurg_url`: The URL for the Zurg service (if using).
- `auto_process`: If set to `true`, the Repair Worker will automatically process files that it finds issues with.
### Performance Tips
- For users of the WebDAV server, enable `use_webdav` for significantly faster repairs
- If using Zurg, set the `zurg_url` parameter to greatly improve repair speed

View File

@@ -0,0 +1,60 @@
# WebDAV Server
DecyphArr includes a built-in WebDAV server that provides direct access to your Debrid files, making them easily accessible to media players and other applications.
## Overview
While most Debrid providers have their own WebDAV servers, DecyphArr's implementation offers faster access and additional features. The WebDAV server is served on DecyphArr's main port (`8282` by default) under the `/webdav` path.
## Accessing the WebDAV Server
- URL: `http://localhost:8282/webdav` or `http://<your-server-ip>:8282/webdav`
## Configuration
You can configure WebDAV settings either globally or per-Debrid provider in your `config.json`:
```json
"webdav": {
"torrents_refresh_interval": "15s",
"download_links_refresh_interval": "40m",
"folder_naming": "original_no_ext",
"auto_expire_links_after": "3d",
"rc_url": "http://localhost:5572",
"rc_user": "username",
"rc_pass": "password"
}
```
### Configuration Options
- `torrents_refresh_interval`: Interval for refreshing torrent data (e.g., `15s`, `1m`, `1h`).
- `download_links_refresh_interval`: Interval for refreshing download links (e.g., `40m`, `1h`).
- `workers`: Number of concurrent workers for processing requests.
- `folder_naming`: Naming convention for folders:
- `original_no_ext`: Original file name without extension
- `original`: Original file name with extension
- `filename`: Torrent filename
- `filename_no_ext`: Torrent filename without extension
- `id`: Torrent ID
- `auto_expire_links_after`: Time after which download links will expire (e.g., `3d`, `1w`).
- `rc_url`, `rc_user`, `rc_pass`: Rclone RC configuration used to trigger VFS refreshes on your rclone mount (see the sketch below)
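When the RC options are set, DecyphArr can ask your rclone mount to refresh its directory cache so newly added torrents appear without waiting for the next poll. The underlying mechanism is rclone's remote-control API (the `vfs/refresh` command). The snippet below is only a sketch of such a call made directly against rclone, using the example credentials from the configuration above; it is not DecyphArr's own implementation:
```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Values matching the rc_url/rc_user/rc_pass options shown above.
	rcURL := "http://localhost:5572"

	// rclone's remote-control API exposes vfs/refresh as a POST endpoint; an
	// empty body refreshes the directory cache from the root of the mount.
	req, err := http.NewRequest(http.MethodPost, rcURL+"/vfs/refresh", strings.NewReader(""))
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("username", "password")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("rclone rc responded with", resp.Status)
}
```
Running `rclone rc vfs/refresh --url http://localhost:5572 --user username --pass password` from a shell should achieve the same result.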
### Using with Media Players
The WebDAV server works well with media players like:
- Infuse
- VidHub
- Plex (via mounting)
- Kodi
### Mounting with Rclone
You can mount the WebDAV server locally using Rclone. Example configuration:
```conf
[decypharr]
type = webdav
url = http://localhost:8282/webdav/realdebrid
vendor = other
```
For a complete Rclone configuration example, see our [sample rclone.conf](../extras/rclone.conf).

BIN
docs/docs/images/logo.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.2 MiB

BIN
docs/docs/images/main.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 188 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 264 KiB

28
docs/docs/index.md Normal file
View File

@@ -0,0 +1,28 @@
# DecyphArr
![DecyphArr UI](images/main.png)
**DecyphArr** is an implementation of QbitTorrent with **Multiple Debrid service support**, written in Go.
## What is DecyphArr?
DecyphArr combines the power of QBittorrent with popular Debrid services to enhance your media management. It provides a familiar interface for Sonarr, Radarr, and other \*Arr applications while leveraging the capabilities of Debrid providers.
## Key Features
- 🔄 Mock qBittorrent API that supports Sonarr, Radarr, Lidarr and other Arr applications
- 🖥️ Full-fledged UI for managing torrents
- 🔌 Multiple Debrid providers support
- 📁 WebDAV server support for each Debrid provider
- 🔧 Repair Worker for missing files
## Supported Debrid Providers
- [Real Debrid](https://real-debrid.com)
- [Torbox](https://torbox.app)
- [Debrid Link](https://debrid-link.com)
- [All Debrid](https://alldebrid.com)
## Getting Started
Check out our [Installation Guide](installation.md) to get started with DecyphArr.

71
docs/docs/installation.md Normal file
View File

@@ -0,0 +1,71 @@
# Installation
There are multiple ways to install and run DecyphArr. Choose the method that works best for your setup.
## Docker Installation (Recommended)
Docker is the easiest way to get started with DecyphArr.
### Available Docker Registries
You can use either Docker Hub or GitHub Container Registry to pull the image:
- Docker Hub: `cy01/blackhole:latest`
- GitHub Container Registry: `ghcr.io/sirrobot01/decypharr:latest`
### Docker Tags
- `latest`: The latest stable release
- `beta`: The latest beta release
- `vX.Y.Z`: A specific version (e.g., `v0.1.0`)
- `nightly`: The latest nightly build (usually unstable)
- `experimental`: The latest experimental build (highly unstable)
### Docker Compose Setup
Create a `docker-compose.yml` file with the following content:
```yaml
version: '3.7'
services:
decypharr:
image: cy01/blackhole:latest # or cy01/blackhole:beta
container_name: decypharr
ports:
- "8282:8282" # qBittorrent
- "8181:8181" # Proxy
user: "1000:1000"
volumes:
- /mnt/:/mnt
- ./configs/:/app # config.json must be in this directory
environment:
- PUID=1000
- PGID=1000
- UMASK=002
- QBIT_PORT=8282 # qBittorrent Port (optional)
- PORT=8181 # Proxy Port (optional)
restart: unless-stopped
depends_on:
- rclone # If you are using rclone with docker
```
Run the Docker Compose setup:
```bash
docker-compose up -d
```
## Binary Installation
If you prefer not to use Docker, you can download and run the binary directly.
1. Download the binary from the releases page
2. Create a configuration file (see Configuration)
3. Run the binary:
```bash
chmod +x decypharr
./decypharr --config /path/to/config
```
The config directory should contain your config.json file.

39
docs/docs/usage.md Normal file
View File

@@ -0,0 +1,39 @@
# Usage Guide
This guide will help you get started with DecyphArr after installation.
## Basic Setup
1. Create your `config.json` file (see [Configuration](configuration/index.md) for details)
2. Start the DecyphArr service using Docker or binary
3. Access the UI at `http://localhost:8282` (or your configured host/port)
4. Connect your Arr applications (Sonarr, Radarr, etc.)
## Connecting to Sonarr/Radarr
To connect DecyphArr to your Sonarr or Radarr instance:
1. In Sonarr/Radarr, go to **Settings → Download Client → Add Client → qBittorrent**
2. Configure the following settings:
- **Host**: `localhost` (or the IP of your DecyphArr server)
- **Port**: `8282` (or your configured qBittorrent port)
- **Username**: `http://sonarr:8989` (your Arr host with http/https)
- **Password**: `sonarr_token` (your Arr API token)
- **Category**: e.g., `sonarr`, `radarr` (match what you configured in DecyphArr)
- **Use SSL**: `No`
- **Sequential Download**: `No` or `Yes` (set to `Yes` if you want to download torrents locally instead of symlinking)
3. Click **Test** to verify the connection
4. Click **Save** to add the download client
![Sonarr/Radarr Setup](images/sonarr-setup.png)
## Using the UI
The DecyphArr UI provides a familiar qBittorrent-like interface with additional features for Debrid services:
- View and manage all your torrents
- Monitor download status
- Check cache status across different Debrid providers
- Access WebDAV functionality
Access the UI at `http://localhost:8282` or your configured host/port.

77
docs/mkdocs.yml Normal file
View File

@@ -0,0 +1,77 @@
site_name: Decypharr
site_url: https://sirrobot01.github.io/decypharr
site_description: QbitTorrent with Debrid Support
repo_url: https://github.com/sirrobot01/decypharr
repo_name: sirrobot01/decypharr
edit_uri: blob/main/docs
theme:
name: material
logo: images/logo.png
font:
text: Roboto
code: Roboto Mono
palette:
- media: "(prefers-color-scheme: light)"
scheme: default
primary: indigo
accent: indigo
toggle:
icon: material/weather-night
name: Switch to dark mode
- media: "(prefers-color-scheme: dark)"
scheme: slate
primary: indigo
accent: indigo
toggle:
icon: material/weather-sunny
name: Switch to light mode
features:
- navigation.search.highlight
- navigation.search.suggest
- navigation.search.share
- navigation.search.suggest
- navigation.search.share
- navigation.search.highlight
- navigation.search.suggest
- navigation.search.share
icon:
repo: fontawesome/brands/github
markdown_extensions:
- admonition
- pymdownx.details
- pymdownx.superfences
- pymdownx.highlight
- pymdownx.inlinehilite
- pymdownx.tabbed
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:materialx.emoji.to_svg
- attr_list
- md_in_html
- def_list
- toc:
permalink: true
nav:
- Home: index.md
- Installation: installation.md
- Usage: usage.md
- Configuration:
- Overview: configuration/index.md
- General: configuration/general.md
- Debrid Providers: configuration/debrid.md
- qBittorrent: configuration/qbittorrent.md
- Arr Integration: configuration/arrs.md
- Features:
- Overview: features/index.md
- Repair Worker: features/repair-worker.md
- WebDAV: features/webdav.md
- Changelog: changelog.md
plugins:
- search
- tags

33
go.mod
View File

@@ -1,23 +1,42 @@
module goBlack
module github.com/sirrobot01/decypharr
go 1.22
go 1.23.0
toolchain go1.23.2
require (
github.com/anacrolix/torrent v1.55.0
github.com/beevik/etree v1.5.0
github.com/cavaliergopher/grab/v3 v3.0.1
github.com/elazarl/goproxy v0.0.0-20240726154733-8b0c20506380
github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2
github.com/fsnotify/fsnotify v1.7.0
github.com/go-chi/chi/v5 v5.1.0
github.com/goccy/go-json v0.10.5
github.com/google/uuid v1.6.0
github.com/gorilla/sessions v1.4.0
github.com/puzpuzpuz/xsync/v3 v3.5.1
github.com/rs/zerolog v1.33.0
github.com/valyala/fastjson v1.6.4
golang.org/x/time v0.6.0
golang.org/x/crypto v0.33.0
golang.org/x/net v0.35.0
golang.org/x/sync v0.12.0
golang.org/x/time v0.8.0
gopkg.in/natefinch/lumberjack.v2 v2.2.1
)
require (
github.com/anacrolix/missinggo v1.3.0 // indirect
github.com/anacrolix/missinggo/v2 v2.7.3 // indirect
github.com/bradfitz/iter v0.0.0-20191230175014-e8f45d346db8 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/gorilla/securecookie v1.1.2 // indirect
github.com/huandu/xstrings v1.3.2 // indirect
golang.org/x/net v0.27.0 // indirect
golang.org/x/sys v0.22.0 // indirect
golang.org/x/text v0.16.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/rogpeppe/go-internal v1.13.1 // indirect
github.com/stretchr/testify v1.10.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/text v0.22.0 // indirect
)

70
go.sum
View File

@@ -36,6 +36,8 @@ github.com/anacrolix/tagflag v1.1.0/go.mod h1:Scxs9CV10NQatSmbyjqmqmeQNwGzlNe0CM
github.com/anacrolix/torrent v1.55.0 h1:s9yh/YGdPmbN9dTa+0Inh2dLdrLQRvEAj1jdFW/Hdd8=
github.com/anacrolix/torrent v1.55.0/go.mod h1:sBdZHBSZNj4de0m+EbYg7vvs/G/STubxu/GzzNbojsE=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/beevik/etree v1.5.0 h1:iaQZFSDS+3kYZiGoc9uKeOkUY3nYMXOKLl6KIJxiJWs=
github.com/beevik/etree v1.5.0/go.mod h1:gPNJNaBGVZ9AwsidazFZyygnd+0pAU38N4D+WemwKNs=
github.com/benbjohnson/immutable v0.2.0/go.mod h1:uc6OHo6PN2++n98KHLxW8ef4W42ylHiQSENghE1ezxI=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@@ -44,11 +46,15 @@ github.com/bradfitz/iter v0.0.0-20140124041915-454541ec3da2/go.mod h1:PyRFw1Lt2w
github.com/bradfitz/iter v0.0.0-20190303215204-33e6a9893b0c/go.mod h1:PyRFw1Lt2wKX4ZVSQ2mk+PeDa1rxyObEDlApuIsUKuo=
github.com/bradfitz/iter v0.0.0-20191230175014-e8f45d346db8 h1:GKTyiRCL6zVf5wWaqKnf+7Qs6GbEPfd4iMOitWzXJx8=
github.com/bradfitz/iter v0.0.0-20191230175014-e8f45d346db8/go.mod h1:spo1JLcs67NmW1aVLEgtA8Yy1elc+X8y5SRW1sFW4Og=
github.com/cavaliergopher/grab/v3 v3.0.1 h1:4z7TkBfmPjmLAAmkkAZNX/6QJ1nNFdv3SdIHXju0Fr4=
github.com/cavaliergopher/grab/v3 v3.0.1/go.mod h1:1U/KNnD+Ft6JJiYoYBAimKH2XrYptb8Kl3DFGmsjpq4=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v0.0.0-20180421182945-02af3965c54e/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
@@ -62,19 +68,22 @@ github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2/go.mod h1:gNh8
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/glycerine/go-unsnap-stream v0.0.0-20180323001048-9f0cb55181dd/go.mod h1:/20jfyN9Y5QPEAprSgKAUr+glWDY39ZiUEAYOEv5dsE=
github.com/glycerine/go-unsnap-stream v0.0.0-20181221182339-f9677308dec2/go.mod h1:/20jfyN9Y5QPEAprSgKAUr+glWDY39ZiUEAYOEv5dsE=
github.com/glycerine/go-unsnap-stream v0.0.0-20190901134440-81cf024a9e0a/go.mod h1:/20jfyN9Y5QPEAprSgKAUr+glWDY39ZiUEAYOEv5dsE=
github.com/glycerine/goconvey v0.0.0-20180728074245-46e3a41ad493/go.mod h1:Ogl1Tioa0aV7gstGFO7KhffUsb9M4ydbEbbxpcEDc24=
github.com/glycerine/goconvey v0.0.0-20190315024820-982ee783a72e/go.mod h1:Ogl1Tioa0aV7gstGFO7KhffUsb9M4ydbEbbxpcEDc24=
github.com/glycerine/goconvey v0.0.0-20190410193231-58a59202ab31/go.mod h1:Ogl1Tioa0aV7gstGFO7KhffUsb9M4ydbEbbxpcEDc24=
github.com/go-chi/chi/v5 v5.1.0 h1:acVI1TYaD+hhedDJ3r54HyA6sExp3HfXq7QWEEY/xMw=
github.com/go-chi/chi/v5 v5.1.0/go.mod h1:DslCQbL2OYiznFReuXYUmQ2hGd1aDpCnlMNITLSKoi8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
@@ -100,12 +109,20 @@ github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v0.0.0-20181103185306-d547d1d9531e/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v0.0.0-20190309154008-847fc94819f9/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v0.0.0-20190910122728-9d188e94fb99/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/securecookie v1.1.2 h1:YCIWL56dvtr73r6715mJs5ZvhtnY73hBvEF8kXD8ePA=
github.com/gorilla/securecookie v1.1.2/go.mod h1:NfCASbcHqRSY+3a8tlWJwsQap2VX5pwzwo4h3eOamfo=
github.com/gorilla/sessions v1.4.0 h1:kpIYOp/oi6MG/p5PgxApU8srsSw9tuFbt46Lt7auzqQ=
github.com/gorilla/sessions v1.4.0/go.mod h1:FLWm50oby91+hl7p/wRxDth9bWSuk0qVL2emc7lT5ik=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huandu/xstrings v1.0.0/go.mod h1:4qWG/gcEcfX4z/mBDHJ++3ReCw9ibxbsNJbcucJdbSo=
@@ -128,6 +145,12 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -145,8 +168,9 @@ github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
@@ -163,10 +187,15 @@ github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/puzpuzpuz/xsync/v3 v3.5.1 h1:GJYJZwO6IdxN/IKbneznS6yPkVC+c3zyY/j19c++5Fg=
github.com/puzpuzpuz/xsync/v3 v3.5.1/go.mod h1:VjzYrABPabuM4KyBh1Ftq6u8nhwY5tBPKP9jpmh0nnA=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/go-charset v0.0.0-20180617210344-2471d30d28b4/go.mod h1:qgYeAmZ5ZIpBWTGllZSQnw97Dj+woV0toclVaRGI8pc=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.33.0 h1:1cU2KZkvPxNyfgEmhHAz/1A9Bz+llsdYzklWFzgp0r8=
github.com/rs/zerolog v1.33.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/ryszard/goskiplist v0.0.0-20150312221310-2dfbae5fcf46/go.mod h1:uAQ5PCi+MFsC7HjREoAz1BU+Mq60+05gifQSsHSDG/8=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
@@ -180,8 +209,8 @@ github.com/stretchr/testify v1.2.1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXf
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tinylib/msgp v1.0.2/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tinylib/msgp v1.1.0/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tinylib/msgp v1.1.2/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
@@ -194,6 +223,8 @@ go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.33.0 h1:IOBPskki6Lysi0lo9qQvbxiQ+FvsCC/YWOecCHAixus=
golang.org/x/crypto v0.33.0/go.mod h1:bVdXmD7IV/4GdElGPozy6U7lWdRXA4qyRVGJV57uQ5M=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -209,8 +240,8 @@ golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73r
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.27.0 h1:5K3Njcw06/l2y9vpGCSdcxWOYHOUk3dVNGDXN+FvAys=
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
golang.org/x/net v0.35.0 h1:T5GQRQb2y08kTAByq9L4/bz8cipCdA8FbRTXewonqY8=
golang.org/x/net v0.35.0/go.mod h1:EglIi67kWsHKlRzzVMUD93VMSWGFOMSZgxFjparz1Qk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -218,6 +249,8 @@ golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -229,14 +262,17 @@ golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200413165638-669c56c373c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg=
golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -263,6 +299,8 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=

316
internal/config/config.go Normal file
View File

@@ -0,0 +1,316 @@
package config
import (
"cmp"
"errors"
"fmt"
"github.com/goccy/go-json"
"os"
"path/filepath"
"sync"
)
var (
instance *Config
once sync.Once
configPath string
)
type Debrid struct {
Name string `json:"name"`
Host string `json:"host"`
APIKey string `json:"api_key"`
DownloadAPIKeys []string `json:"download_api_keys"`
Folder string `json:"folder"`
DownloadUncached bool `json:"download_uncached"`
CheckCached bool `json:"check_cached"`
RateLimit string `json:"rate_limit"` // 200/minute or 10/second
Proxy string `json:"proxy"`
UseWebDav bool `json:"use_webdav"`
WebDav
}
type QBitTorrent struct {
Username string `json:"username"`
Password string `json:"password"`
Port string `json:"port"`
DownloadFolder string `json:"download_folder"`
Categories []string `json:"categories"`
RefreshInterval int `json:"refresh_interval"`
SkipPreCache bool `json:"skip_pre_cache"`
}
type Arr struct {
Name string `json:"name"`
Host string `json:"host"`
Token string `json:"token"`
Cleanup bool `json:"cleanup"`
SkipRepair bool `json:"skip_repair"`
DownloadUncached *bool `json:"download_uncached"`
}
type Repair struct {
Enabled bool `json:"enabled"`
Interval string `json:"interval"`
RunOnStart bool `json:"run_on_start"`
ZurgURL string `json:"zurg_url"`
AutoProcess bool `json:"auto_process"`
UseWebDav bool `json:"use_webdav"`
Workers int `json:"workers"`
}
type Auth struct {
Username string `json:"username"`
Password string `json:"password"`
}
type WebDav struct {
TorrentsRefreshInterval string `json:"torrents_refresh_interval"`
DownloadLinksRefreshInterval string `json:"download_links_refresh_interval"`
Workers int `json:"workers"`
AutoExpireLinksAfter string `json:"auto_expire_links_after"`
// Folder
FolderNaming string `json:"folder_naming"`
// Rclone
RcUrl string `json:"rc_url"`
RcUser string `json:"rc_user"`
RcPass string `json:"rc_pass"`
}
type Config struct {
LogLevel string `json:"log_level"`
Debrids []Debrid `json:"debrids"`
MaxCacheSize int `json:"max_cache_size"`
QBitTorrent QBitTorrent `json:"qbittorrent"`
Arrs []Arr `json:"arrs"`
Repair Repair `json:"repair"`
WebDav WebDav `json:"webdav"`
AllowedExt []string `json:"allowed_file_types"`
MinFileSize string `json:"min_file_size"` // Minimum file size to download, 10MB, 1GB, etc
MaxFileSize string `json:"max_file_size"` // Maximum file size to download (0 means no limit)
Path string `json:"-"` // Path to save the config file
UseAuth bool `json:"use_auth"`
Auth *Auth `json:"-"`
DiscordWebhook string `json:"discord_webhook_url"`
}
func (c *Config) JsonFile() string {
return filepath.Join(c.Path, "config.json")
}
func (c *Config) AuthFile() string {
return filepath.Join(c.Path, "auth.json")
}
func (c *Config) loadConfig() error {
// Load the config file
if configPath == "" {
return fmt.Errorf("config path not set")
}
c.Path = configPath
file, err := os.ReadFile(c.JsonFile())
if err != nil {
return err
}
if err := json.Unmarshal(file, &c); err != nil {
return fmt.Errorf("error unmarshaling config: %w", err)
}
for i, debrid := range c.Debrids {
c.Debrids[i] = c.updateDebrid(debrid)
}
if len(c.AllowedExt) == 0 {
c.AllowedExt = getDefaultExtensions()
}
// Load the auth file
c.Auth = c.GetAuth()
// Validate the config
if err := validateConfig(c); err != nil {
return err
}
return nil
}
func validateDebrids(debrids []Debrid) error {
if len(debrids) == 0 {
return errors.New("no debrids configured")
}
errChan := make(chan error, len(debrids))
var wg sync.WaitGroup
for _, debrid := range debrids {
// Basic field validation
if debrid.Host == "" {
return errors.New("debrid host is required")
}
if debrid.APIKey == "" {
return errors.New("debrid api key is required")
}
if debrid.Folder == "" {
return errors.New("debrid folder is required")
}
// Check folder existence
//wg.Add(1)
//go func(folder string) {
// defer wg.Done()
// if _, err := os.Stat(folder); os.IsNotExist(err) {
// errChan <- fmt.Errorf("debrid folder does not exist: %s", folder)
// }
//}(debrid.Folder)
}
// Wait for all checks to complete
go func() {
wg.Wait()
close(errChan)
}()
// Return first error if any
if err := <-errChan; err != nil {
return err
}
return nil
}
//func validateQbitTorrent(config *QBitTorrent) error {
// if config.DownloadFolder == "" {
// return errors.New("qbittorent download folder is required")
// }
// if _, err := os.Stat(config.DownloadFolder); os.IsNotExist(err) {
// return fmt.Errorf("qbittorent download folder(%s) does not exist", config.DownloadFolder)
// }
// return nil
//}
func validateConfig(config *Config) error {
// Run validations concurrently
if err := validateDebrids(config.Debrids); err != nil {
return fmt.Errorf("debrids validation error: %w", err)
}
return nil
}
func SetConfigPath(path string) error {
configPath = path
return nil
}
func Get() *Config {
once.Do(func() {
instance = &Config{} // Initialize instance first
if err := instance.loadConfig(); err != nil {
fmt.Fprintf(os.Stderr, "configuration error: %v\n", err)
os.Exit(1)
}
})
return instance
}
func (c *Config) GetMinFileSize() int64 {
// 0 means no limit
if c.MinFileSize == "" {
return 0
}
s, err := parseSize(c.MinFileSize)
if err != nil {
return 0
}
return s
}
func (c *Config) GetMaxFileSize() int64 {
// 0 means no limit
if c.MaxFileSize == "" {
return 0
}
s, err := parseSize(c.MaxFileSize)
if err != nil {
return 0
}
return s
}
func (c *Config) IsSizeAllowed(size int64) bool {
if size == 0 {
return true // Maybe the debrid hasn't reported the size yet
}
if c.GetMinFileSize() > 0 && size < c.GetMinFileSize() {
return false
}
if c.GetMaxFileSize() > 0 && size > c.GetMaxFileSize() {
return false
}
return true
}
func (c *Config) GetAuth() *Auth {
if !c.UseAuth {
return nil
}
if c.Auth == nil {
c.Auth = &Auth{}
if _, err := os.Stat(c.AuthFile()); err == nil {
file, err := os.ReadFile(c.AuthFile())
if err == nil {
_ = json.Unmarshal(file, c.Auth)
}
}
}
return c.Auth
}
func (c *Config) SaveAuth(auth *Auth) error {
c.Auth = auth
data, err := json.Marshal(auth)
if err != nil {
return err
}
return os.WriteFile(c.AuthFile(), data, 0644)
}
func (c *Config) NeedsSetup() bool {
if c.UseAuth {
return c.GetAuth().Username == ""
}
return false
}
func (c *Config) updateDebrid(d Debrid) Debrid {
if len(d.DownloadAPIKeys) == 0 {
d.DownloadAPIKeys = append(d.DownloadAPIKeys, d.APIKey)
}
if !d.UseWebDav {
return d
}
if d.TorrentsRefreshInterval == "" {
d.TorrentsRefreshInterval = cmp.Or(c.WebDav.TorrentsRefreshInterval, "15s") // 15 seconds
}
if d.WebDav.DownloadLinksRefreshInterval == "" {
d.DownloadLinksRefreshInterval = cmp.Or(c.WebDav.DownloadLinksRefreshInterval, "40m") // 40 minutes
}
if d.Workers == 0 {
d.Workers = cmp.Or(c.WebDav.Workers, 30) // 30 workers
}
if d.FolderNaming == "" {
d.FolderNaming = cmp.Or(c.WebDav.FolderNaming, "original_no_ext")
}
if d.AutoExpireLinksAfter == "" {
d.AutoExpireLinksAfter = cmp.Or(c.WebDav.AutoExpireLinksAfter, "3d") // 3 days
}
return d
}
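A minimal usage sketch (not part of the diff above, and the "/data" path is only an assumption): the config package is a process-wide singleton, so SetConfigPath must run before the first Get(), which loads and validates config.json exactly once via sync.Once and applies the per-debrid WebDav defaults shown in updateDebrid.

package main

import (
	"fmt"
	"log"

	"github.com/sirrobot01/decypharr/internal/config"
)

func main() {
	// Point the package at the data folder containing config.json (hypothetical path).
	if err := config.SetConfigPath("/data"); err != nil {
		log.Fatal(err)
	}
	// First call loads, validates and caches the config; later calls reuse the instance.
	cfg := config.Get()
	fmt.Println(cfg.LogLevel, len(cfg.Debrids))
}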

75
internal/config/misc.go Normal file
View File

@@ -0,0 +1,75 @@
package config
import (
"path/filepath"
"sort"
"strconv"
"strings"
)
func (c *Config) IsAllowedFile(filename string) bool {
ext := strings.ToLower(filepath.Ext(filename))
if ext == "" {
return false
}
// Remove the leading dot
ext = ext[1:]
for _, allowed := range c.AllowedExt {
if ext == allowed {
return true
}
}
return false
}
func getDefaultExtensions() []string {
videoExts := strings.Split("webm,m4v,3gp,nsv,ty,strm,rm,rmvb,m3u,ifo,mov,qt,divx,xvid,bivx,nrg,pva,wmv,asf,asx,ogm,ogv,m2v,avi,bin,dat,dvr-ms,mpg,mpeg,mp4,avc,vp3,svq3,nuv,viv,dv,fli,flv,wpl,img,iso,vob,mkv,mk3d,ts,wtv,m2ts", ",")
musicExts := strings.Split("MP3,WAV,FLAC,OGG,WMA,AIFF,ALAC,M4A,APE,AC3,DTS,M4P,MID,MIDI,MKA,MP2,MPA,RA,VOC,WV,AMR", ",")
// Combine both slices
allExts := append(videoExts, musicExts...)
// Convert to lowercase
for i, ext := range allExts {
allExts[i] = strings.ToLower(ext)
}
// Remove duplicates
seen := make(map[string]struct{})
var unique []string
for _, ext := range allExts {
if _, ok := seen[ext]; !ok {
seen[ext] = struct{}{}
unique = append(unique, ext)
}
}
sort.Strings(unique)
return unique
}
func parseSize(sizeStr string) (int64, error) {
sizeStr = strings.ToUpper(strings.TrimSpace(sizeStr))
// Absolute size-based cache
multiplier := 1.0
if strings.HasSuffix(sizeStr, "GB") {
multiplier = 1024 * 1024 * 1024
sizeStr = strings.TrimSuffix(sizeStr, "GB")
} else if strings.HasSuffix(sizeStr, "MB") {
multiplier = 1024 * 1024
sizeStr = strings.TrimSuffix(sizeStr, "MB")
} else if strings.HasSuffix(sizeStr, "KB") {
multiplier = 1024
sizeStr = strings.TrimSuffix(sizeStr, "KB")
}
size, err := strconv.ParseFloat(sizeStr, 64)
if err != nil {
return 0, err
}
return int64(size * multiplier), nil
}
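An illustrative sketch of how the size helpers above combine, assuming a hypothetical 10MB minimum and 2GB maximum; a zero size is always allowed because the debrid may not have reported it yet.

package config

import "fmt"

// Illustrative only; the limits below are hypothetical.
func ExampleConfig_IsSizeAllowed() {
	c := &Config{MinFileSize: "10MB", MaxFileSize: "2GB"}
	fmt.Println(c.IsSizeAllowed(0))         // true: size not reported yet
	fmt.Println(c.IsSizeAllowed(5 << 20))   // false: below the 10MB minimum
	fmt.Println(c.IsSizeAllowed(700 << 20)) // true
	fmt.Println(c.IsSizeAllowed(3 << 30))   // false: above the 2GB maximum
}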

97
internal/logger/logger.go Normal file
View File

@@ -0,0 +1,97 @@
package logger
import (
"fmt"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"gopkg.in/natefinch/lumberjack.v2"
"os"
"path/filepath"
"strings"
"sync"
)
var (
once sync.Once
logger zerolog.Logger
)
func GetLogPath() string {
cfg := config.Get()
logsDir := filepath.Join(cfg.Path, "logs")
if _, err := os.Stat(logsDir); os.IsNotExist(err) {
if err := os.MkdirAll(logsDir, 0755); err != nil {
panic(fmt.Sprintf("Failed to create logs directory: %v", err))
}
}
return filepath.Join(logsDir, "decypharr.log")
}
func New(prefix string) zerolog.Logger {
level := config.Get().LogLevel
rotatingLogFile := &lumberjack.Logger{
Filename: GetLogPath(),
MaxSize: 10,
MaxAge: 15,
Compress: true,
}
consoleWriter := zerolog.ConsoleWriter{
Out: os.Stdout,
TimeFormat: "2006-01-02 15:04:05",
NoColor: false, // Set to true if you don't want colors
FormatLevel: func(i interface{}) string {
return strings.ToUpper(fmt.Sprintf("| %-6s|", i))
},
FormatMessage: func(i interface{}) string {
return fmt.Sprintf("[%s] %v", prefix, i)
},
}
fileWriter := zerolog.ConsoleWriter{
Out: rotatingLogFile,
TimeFormat: "2006-01-02 15:04:05",
NoColor: true, // No colors in file output
FormatLevel: func(i interface{}) string {
return strings.ToUpper(fmt.Sprintf("| %-6s|", i))
},
FormatMessage: func(i interface{}) string {
return fmt.Sprintf("[%s] %v", prefix, i)
},
}
multi := zerolog.MultiLevelWriter(consoleWriter, fileWriter)
logger := zerolog.New(multi).
With().
Timestamp().
Logger().
Level(zerolog.InfoLevel)
// Set the log level
level = strings.ToLower(level)
switch level {
case "debug":
logger = logger.Level(zerolog.DebugLevel)
case "info":
logger = logger.Level(zerolog.InfoLevel)
case "warn":
logger = logger.Level(zerolog.WarnLevel)
case "error":
logger = logger.Level(zerolog.ErrorLevel)
case "trace":
logger = logger.Level(zerolog.TraceLevel)
}
return logger
}
func GetDefaultLogger() zerolog.Logger {
once.Do(func() {
logger = New("decypharr")
})
return logger
}
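A short sketch of how a component might obtain its own prefixed logger; the "qbit" prefix is hypothetical, and the config path is assumed to have been set already since New reads the config for the log level and log directory.

package main

import "github.com/sirrobot01/decypharr/internal/logger"

func main() {
	// Assumes config.SetConfigPath has already been called elsewhere.
	log := logger.New("qbit") // hypothetical component prefix
	log.Info().Str("category", "movies").Msg("torrent added")
	log.Debug().Msg("only emitted when log_level is debug or trace")
}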

100
internal/request/discord.go Normal file
View File

@@ -0,0 +1,100 @@
package request
import (
"bytes"
"fmt"
"github.com/goccy/go-json"
"github.com/sirrobot01/decypharr/internal/config"
"io"
"net/http"
"strings"
)
type DiscordEmbed struct {
Title string `json:"title"`
Description string `json:"description"`
Color int `json:"color"`
}
type DiscordWebhook struct {
Embeds []DiscordEmbed `json:"embeds"`
}
func getDiscordColor(status string) int {
switch status {
case "success":
return 3066993
case "error":
return 15158332
case "warning":
return 15844367
case "pending":
return 3447003
default:
return 0
}
}
func getDiscordHeader(event string) string {
switch event {
case "download_complete":
return "[Decypharr] Download Completed"
case "download_failed":
return "[Decypharr] Download Failed"
case "repair_pending":
return "[Decypharr] Repair Completed, Awaiting action"
case "repair_complete":
return "[Decypharr] Repair Complete"
default:
// split the event string and capitalize the first letter of each word
evs := strings.Split(event, "_")
for i, ev := range evs {
if ev == "" {
continue
}
evs[i] = strings.ToUpper(ev[:1]) + ev[1:]
}
return "[Decypharr] " + strings.Join(evs, " ")
}
}
func SendDiscordMessage(event string, status string, message string) error {
cfg := config.Get()
webhookURL := cfg.DiscordWebhook
if webhookURL == "" {
return nil
}
// Create the proper Discord webhook structure
webhook := DiscordWebhook{
Embeds: []DiscordEmbed{
{
Title: getDiscordHeader(event),
Description: message,
Color: getDiscordColor(status),
},
},
}
payload, err := json.Marshal(webhook)
if err != nil {
return fmt.Errorf("failed to marshal discord payload: %v", err)
}
req, err := http.NewRequest(http.MethodPost, webhookURL, bytes.NewReader(payload))
if err != nil {
return fmt.Errorf("failed to create discord request: %v", err)
}
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return fmt.Errorf("failed to send discord message: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
bodyBytes, _ := io.ReadAll(resp.Body)
return fmt.Errorf("discord returned error status code: %s, body: %s", resp.Status, string(bodyBytes))
}
return nil
}
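A hedged call-site sketch: SendDiscordMessage is a no-op when no webhook URL is configured, so callers can invoke it unconditionally. The event name and message below are made up for illustration.

package main

import (
	"log"

	"github.com/sirrobot01/decypharr/internal/request"
)

func main() {
	// Hypothetical notification after a torrent finishes importing.
	if err := request.SendDiscordMessage("download_complete", "success", "Ubuntu 24.04 ISO imported"); err != nil {
		log.Printf("discord notification failed: %v", err)
	}
}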

View File

@@ -0,0 +1,29 @@
package request
type HTTPError struct {
StatusCode int
Message string
Code string
}
func (e *HTTPError) Error() string {
return e.Message
}
var HosterUnavailableError = &HTTPError{
StatusCode: 503,
Message: "Hoster is unavailable",
Code: "hoster_unavailable",
}
var TrafficExceededError = &HTTPError{
StatusCode: 503,
Message: "Traffic exceeded",
Code: "traffic_exceeded",
}
var ErrLinkBroken = &HTTPError{
StatusCode: 404,
Message: "File is unavailable",
Code: "file_unavailable",
}

476
internal/request/request.go Normal file
View File

@@ -0,0 +1,476 @@
package request
import (
"bytes"
"compress/gzip"
"context"
"crypto/tls"
"fmt"
"github.com/goccy/go-json"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/logger"
"golang.org/x/net/proxy"
"golang.org/x/time/rate"
"io"
"math"
"math/rand"
"net"
"net/http"
"net/url"
"regexp"
"strconv"
"strings"
"sync"
"time"
)
func JoinURL(base string, paths ...string) (string, error) {
// Split the last path component to separate query parameters
lastPath := paths[len(paths)-1]
parts := strings.Split(lastPath, "?")
paths[len(paths)-1] = parts[0]
joined, err := url.JoinPath(base, paths...)
if err != nil {
return "", err
}
// Add back query parameters if they exist
if len(parts) > 1 {
return joined + "?" + parts[1], nil
}
return joined, nil
}
var (
once sync.Once
instance *Client
)
type ClientOption func(*Client)
// Client represents an HTTP client with additional capabilities
type Client struct {
client *http.Client
rateLimiter *rate.Limiter
headers map[string]string
headersMu sync.RWMutex
maxRetries int
timeout time.Duration
skipTLSVerify bool
retryableStatus map[int]struct{}
logger zerolog.Logger
proxy string
// cooldown
statusCooldowns map[int]time.Duration
statusCooldownsMu sync.RWMutex
lastStatusTime map[int]time.Time
lastStatusTimeMu sync.RWMutex
}
func WithStatusCooldown(statusCode int, cooldown time.Duration) ClientOption {
return func(c *Client) {
c.statusCooldownsMu.Lock()
if c.statusCooldowns == nil {
c.statusCooldowns = make(map[int]time.Duration)
}
c.statusCooldowns[statusCode] = cooldown
c.statusCooldownsMu.Unlock()
}
}
// WithMaxRetries sets the maximum number of retry attempts
func WithMaxRetries(maxRetries int) ClientOption {
return func(c *Client) {
c.maxRetries = maxRetries
}
}
// WithTimeout sets the request timeout
func WithTimeout(timeout time.Duration) ClientOption {
return func(c *Client) {
c.timeout = timeout
}
}
func WithRedirectPolicy(policy func(req *http.Request, via []*http.Request) error) ClientOption {
return func(c *Client) {
c.client.CheckRedirect = policy
}
}
// WithRateLimiter sets a rate limiter
func WithRateLimiter(rl *rate.Limiter) ClientOption {
return func(c *Client) {
c.rateLimiter = rl
}
}
// WithHeaders sets default headers
func WithHeaders(headers map[string]string) ClientOption {
return func(c *Client) {
c.headersMu.Lock()
c.headers = headers
c.headersMu.Unlock()
}
}
func (c *Client) SetHeader(key, value string) {
c.headersMu.Lock()
c.headers[key] = value
c.headersMu.Unlock()
}
func WithLogger(logger zerolog.Logger) ClientOption {
return func(c *Client) {
c.logger = logger
}
}
func WithTransport(transport *http.Transport) ClientOption {
return func(c *Client) {
c.client.Transport = transport
}
}
// WithRetryableStatus adds status codes that should trigger a retry
func WithRetryableStatus(statusCodes ...int) ClientOption {
return func(c *Client) {
for _, code := range statusCodes {
c.retryableStatus[code] = struct{}{}
}
}
}
func WithProxy(proxyURL string) ClientOption {
return func(c *Client) {
c.proxy = proxyURL
}
}
// doRequest performs a single HTTP request with rate limiting
func (c *Client) doRequest(req *http.Request) (*http.Response, error) {
if c.rateLimiter != nil {
err := c.rateLimiter.Wait(req.Context())
if err != nil {
return nil, fmt.Errorf("rate limiter wait: %w", err)
}
}
return c.client.Do(req)
}
// Do performs an HTTP request with retries for certain status codes
func (c *Client) Do(req *http.Request) (*http.Response, error) {
// Save the request body for reuse in retries
var bodyBytes []byte
var err error
if req.Body != nil {
bodyBytes, err = io.ReadAll(req.Body)
if err != nil {
return nil, fmt.Errorf("reading request body: %w", err)
}
req.Body.Close()
}
backoff := time.Millisecond * 500
var resp *http.Response
for attempt := 0; attempt <= c.maxRetries; attempt++ {
// Reset the request body if it exists
if bodyBytes != nil {
req.Body = io.NopCloser(bytes.NewReader(bodyBytes))
}
// Apply headers
c.headersMu.RLock()
if c.headers != nil {
for key, value := range c.headers {
req.Header.Set(key, value)
}
}
c.headersMu.RUnlock()
if attempt > 0 && resp != nil {
c.statusCooldownsMu.RLock()
cooldown, exists := c.statusCooldowns[resp.StatusCode]
c.statusCooldownsMu.RUnlock()
if exists {
c.lastStatusTimeMu.RLock()
lastTime, timeExists := c.lastStatusTime[resp.StatusCode]
c.lastStatusTimeMu.RUnlock()
if timeExists {
elapsed := time.Since(lastTime)
if elapsed < cooldown {
// We need to wait longer for this status code
waitTime := cooldown - elapsed
select {
case <-req.Context().Done():
return nil, req.Context().Err()
case <-time.After(waitTime):
// Continue after waiting
}
}
}
}
}
resp, err = c.doRequest(req)
if err == nil {
c.lastStatusTimeMu.Lock()
c.lastStatusTime[resp.StatusCode] = time.Now()
c.lastStatusTimeMu.Unlock()
}
if err != nil {
// Check if this is a network error that might be worth retrying
if attempt < c.maxRetries {
// Apply backoff with jitter
jitter := time.Duration(rand.Int63n(int64(backoff / 4)))
sleepTime := backoff + jitter
select {
case <-req.Context().Done():
return nil, req.Context().Err()
case <-time.After(sleepTime):
// Continue to next retry attempt
}
// Exponential backoff
backoff *= 2
continue
}
return nil, err
}
// Check if the status code is retryable
if _, ok := c.retryableStatus[resp.StatusCode]; !ok || attempt == c.maxRetries {
return resp, nil
}
// Close the response body before retrying
resp.Body.Close()
// Apply backoff with jitter
jitter := time.Duration(rand.Int63n(int64(backoff / 4)))
sleepTime := backoff + jitter
select {
case <-req.Context().Done():
return nil, req.Context().Err()
case <-time.After(sleepTime):
// Continue to next retry attempt
}
// Exponential backoff
backoff *= 2
}
return nil, fmt.Errorf("max retries exceeded")
}
// MakeRequest performs an HTTP request and returns the response body as bytes
func (c *Client) MakeRequest(req *http.Request) ([]byte, error) {
res, err := c.Do(req)
if err != nil {
return nil, err
}
defer func() {
if err := res.Body.Close(); err != nil {
c.logger.Printf("Failed to close response body: %v", err)
}
}()
bodyBytes, err := io.ReadAll(res.Body)
if err != nil {
return nil, fmt.Errorf("reading response body: %w", err)
}
if res.StatusCode < 200 || res.StatusCode >= 300 {
return nil, fmt.Errorf("HTTP error %d: %s", res.StatusCode, string(bodyBytes))
}
return bodyBytes, nil
}
func (c *Client) Get(url string) (*http.Response, error) {
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
return nil, fmt.Errorf("creating GET request: %w", err)
}
return c.Do(req)
}
// New creates a new HTTP client with the specified options
func New(options ...ClientOption) *Client {
client := &Client{
maxRetries: 3,
skipTLSVerify: true,
retryableStatus: map[int]struct{}{
http.StatusTooManyRequests: struct{}{},
http.StatusInternalServerError: struct{}{},
http.StatusBadGateway: struct{}{},
http.StatusServiceUnavailable: struct{}{},
http.StatusGatewayTimeout: struct{}{},
},
logger: logger.New("request"),
timeout: 60 * time.Second,
proxy: "",
headers: make(map[string]string), // Initialize headers map
statusCooldowns: make(map[int]time.Duration),
lastStatusTime: make(map[int]time.Time),
}
// default http client
client.client = &http.Client{
Timeout: client.timeout,
}
// Apply options before configuring transport
for _, option := range options {
option(client)
}
// Check if transport was set by WithTransport option
if client.client.Transport == nil {
transport := &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: client.skipTLSVerify,
},
// Connection pooling
MaxIdleConns: 100,
MaxIdleConnsPerHost: 50,
MaxConnsPerHost: 100,
// Timeouts
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ResponseHeaderTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
// TCP keep-alive
DialContext: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).DialContext,
// Enable HTTP/2
ForceAttemptHTTP2: true,
// Disable compression to save CPU
DisableCompression: false,
}
// Configure proxy if needed
if client.proxy != "" {
if strings.HasPrefix(client.proxy, "socks5://") {
// Handle SOCKS5 proxy
socksURL, err := url.Parse(client.proxy)
if err != nil {
client.logger.Error().Msgf("Failed to parse SOCKS5 proxy URL: %v", err)
} else {
auth := &proxy.Auth{}
if socksURL.User != nil {
auth.User = socksURL.User.Username()
password, _ := socksURL.User.Password()
auth.Password = password
}
dialer, err := proxy.SOCKS5("tcp", socksURL.Host, auth, proxy.Direct)
if err != nil {
client.logger.Error().Msgf("Failed to create SOCKS5 dialer: %v", err)
} else {
transport.DialContext = func(ctx context.Context, network, addr string) (net.Conn, error) {
return dialer.Dial(network, addr)
}
}
}
} else {
proxyURL, err := url.Parse(client.proxy)
if err != nil {
client.logger.Error().Msgf("Failed to parse proxy URL: %v", err)
} else {
transport.Proxy = http.ProxyURL(proxyURL)
}
}
} else {
transport.Proxy = http.ProxyFromEnvironment
}
// Set the transport to the client
client.client.Transport = transport
}
return client
}
func ParseRateLimit(rateStr string) *rate.Limiter {
if rateStr == "" {
return nil
}
re := regexp.MustCompile(`(\d+)/(minute|second)`)
matches := re.FindStringSubmatch(rateStr)
if len(matches) != 3 {
return nil
}
count, err := strconv.Atoi(matches[1])
if err != nil {
return nil
}
unit := matches[2]
switch unit {
case "minute":
reqsPerSecond := float64(count) / 60.0
burstSize := int(math.Max(30, float64(count)*0.25))
return rate.NewLimiter(rate.Limit(reqsPerSecond), burstSize)
case "second":
burstSize := int(math.Max(30, float64(count)*5))
return rate.NewLimiter(rate.Limit(float64(count)), burstSize)
default:
return nil
}
}
func JSONResponse(w http.ResponseWriter, data interface{}, code int) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
err := json.NewEncoder(w).Encode(data)
if err != nil {
return
}
}
func Gzip(body []byte) []byte {
var b bytes.Buffer
if len(body) == 0 {
return nil
}
gz := gzip.NewWriter(&b)
_, err := gz.Write(body)
if err != nil {
return nil
}
err = gz.Close()
if err != nil {
return nil
}
return b.Bytes()
}
func Default() *Client {
once.Do(func() {
instance = New()
})
return instance
}
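A sketch (with a made-up endpoint and API key) showing how the options above compose: a rate limiter built from ParseRateLimit, the default retryable statuses, and an extra cooldown applied after 429 responses.

package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/sirrobot01/decypharr/internal/request"
)

func main() {
	client := request.New(
		request.WithRateLimiter(request.ParseRateLimit("200/minute")),
		request.WithMaxRetries(5),
		request.WithStatusCooldown(http.StatusTooManyRequests, time.Minute),
		request.WithHeaders(map[string]string{"Authorization": "Bearer <api-key>"}),
	)
	// Hypothetical endpoint; Get retries on 429/5xx with exponential backoff.
	resp, err := client.Get("https://api.example.com/status")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}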

2
internal/utils/file.go Normal file
View File

@@ -0,0 +1,2 @@
package utils

View File

@@ -1,18 +1,23 @@
package common
package utils
import (
"bufio"
"bytes"
"context"
"encoding/base32"
"encoding/hex"
"fmt"
"github.com/anacrolix/torrent/metainfo"
"github.com/sirrobot01/decypharr/internal/request"
"io"
"log"
"math/rand"
"net/http"
"net/url"
"os"
"path/filepath"
"regexp"
"strings"
"time"
)
type Magnet struct {
@@ -20,6 +25,69 @@ type Magnet struct {
InfoHash string
Size int64
Link string
File []byte
}
func (m *Magnet) IsTorrent() bool {
return m.File != nil
}
func GetMagnetFromFile(file io.Reader, filePath string) (*Magnet, error) {
var (
m *Magnet
err error
)
if filepath.Ext(filePath) == ".torrent" {
torrentData, err := io.ReadAll(file)
if err != nil {
return nil, err
}
m, err = GetMagnetFromBytes(torrentData)
if err != nil {
return nil, err
}
} else {
// .magnet file
magnetLink := ReadMagnetFile(file)
m, err = GetMagnetInfo(magnetLink)
if err != nil {
return nil, err
}
}
m.Name = strings.TrimSuffix(filePath, filepath.Ext(filePath))
return m, nil
}
func GetMagnetFromUrl(url string) (*Magnet, error) {
if strings.HasPrefix(url, "magnet:") {
return GetMagnetInfo(url)
} else if strings.HasPrefix(url, "http") {
return OpenMagnetHttpURL(url)
}
return nil, fmt.Errorf("invalid url")
}
func GetMagnetFromBytes(torrentData []byte) (*Magnet, error) {
// Parse the torrent metainfo from the raw bytes
mi, err := metainfo.Load(bytes.NewReader(torrentData))
if err != nil {
return nil, err
}
hash := mi.HashInfoBytes()
infoHash := hash.HexString()
info, err := mi.UnmarshalInfo()
if err != nil {
return nil, err
}
log.Println("InfoHash: ", infoHash)
magnet := &Magnet{
InfoHash: infoHash,
Name: info.Name,
Size: info.Length,
Link: mi.Magnet(&hash, &info).String(),
File: torrentData,
}
return magnet, nil
}
func OpenMagnetFile(filePath string) string {
@@ -34,13 +102,15 @@ func OpenMagnetFile(filePath string) string {
return
}
}(file) // Ensure the file is closed after the function ends
return ReadMagnetFile(file)
}
// Create a scanner to read the file line by line
func ReadMagnetFile(file io.Reader) string {
scanner := bufio.NewScanner(file)
for scanner.Scan() {
magnetLink := scanner.Text()
if magnetLink != "" {
return magnetLink
content := scanner.Text()
if content != "" {
return content
}
}
@@ -62,28 +132,11 @@ func OpenMagnetHttpURL(magnetLink string) (*Magnet, error) {
return
}
}(resp) // Ensure the response is closed after the function ends
// Create a scanner to read the file line by line
mi, err := metainfo.Load(resp.Body)
torrentData, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
return nil, fmt.Errorf("error reading response body: %v", err)
}
hash := mi.HashInfoBytes()
infoHash := hash.HexString()
info, err := mi.UnmarshalInfo()
if err != nil {
return nil, err
}
log.Println("InfoHash: ", infoHash)
magnet := &Magnet{
InfoHash: infoHash,
Name: info.Name,
Size: info.Length,
Link: mi.Magnet(&hash, &info).String(),
}
return magnet, nil
return GetMagnetFromBytes(torrentData)
}
func GetMagnetInfo(magnetLink string) (*Magnet, error) {
@@ -115,15 +168,6 @@ func GetMagnetInfo(magnetLink string) (*Magnet, error) {
return magnet, nil
}
func RandomString(length int) string {
const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
b := make([]byte, length)
for i := range b {
b[i] = charset[rand.Intn(len(charset))]
}
return string(b)
}
func ExtractInfoHash(magnetDesc string) string {
const prefix = "xt=urn:btih:"
start := strings.Index(magnetDesc, prefix)
@@ -167,3 +211,57 @@ func processInfoHash(input string) (string, error) {
// If we get here, it's not a valid infohash and we couldn't convert it
return "", fmt.Errorf("invalid infohash: %s", input)
}
func GetInfohashFromURL(url string) (string, error) {
// Download the torrent file
var magnetLink string
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
redirectFunc := func(req *http.Request, via []*http.Request) error {
if len(via) >= 3 {
return fmt.Errorf("stopped after 3 redirects")
}
if strings.HasPrefix(req.URL.String(), "magnet:") {
// Stop the redirect chain
magnetLink = req.URL.String()
return http.ErrUseLastResponse
}
return nil
}
client := request.New(
request.WithTimeout(30*time.Second),
request.WithRedirectPolicy(redirectFunc),
)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
return "", err
}
resp, err := client.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
if magnetLink != "" {
return ExtractInfoHash(magnetLink), nil
}
mi, err := metainfo.Load(resp.Body)
if err != nil {
return "", err
}
hash := mi.HashInfoBytes()
infoHash := hash.HexString()
return infoHash, nil
}
func ConstructMagnet(infoHash, name string) *Magnet {
// Create a magnet link from the infohash and name
name = url.QueryEscape(strings.TrimSpace(name))
magnetUri := fmt.Sprintf("magnet:?xt=urn:btih:%s&dn=%s", infoHash, name)
return &Magnet{
InfoHash: infoHash,
Name: name,
Size: 0,
Link: magnetUri,
}
}

15
internal/utils/misc.go Normal file
View File

@@ -0,0 +1,15 @@
package utils
func RemoveItem[S ~[]E, E comparable](s S, values ...E) S {
result := make(S, 0, len(s))
outer:
for _, item := range s {
for _, v := range values {
if item == v {
continue outer
}
}
result = append(result, item)
}
return result
}

58
internal/utils/regex.go Normal file
View File

@@ -0,0 +1,58 @@
package utils
import (
"path/filepath"
"regexp"
"strings"
)
var (
VIDEOMATCH = "(?i)(\\.)(webm|m4v|3gp|nsv|ty|strm|rm|rmvb|m3u|ifo|mov|qt|divx|xvid|bivx|nrg|pva|wmv|asf|asx|ogm|ogv|m2v|avi|bin|dat|dvr-ms|mpg|mpeg|mp4|avc|vp3|svq3|nuv|viv|dv|fli|flv|wpl|img|iso|vob|mkv|mk3d|ts|wtv|m2ts)$"
MUSICMATCH = "(?i)(\\.)(mp2|mp3|m4a|m4b|m4p|ogg|oga|opus|wma|wav|wv|flac|ape|aif|aiff|aifc)$"
)
var SAMPLEMATCH = `(?i)(^|[\\/])(sample|trailer|thumb|special|extras?)s?([\s._-]|$|/)|(\(sample\))|(-\s*sample)`
func RegexMatch(regex string, value string) bool {
re := regexp.MustCompile(regex)
return re.MatchString(value)
}
func RemoveInvalidChars(value string) string {
return strings.Map(func(r rune) rune {
if r == filepath.Separator || r == ':' {
return r
}
if filepath.IsAbs(string(r)) {
return r
}
if strings.ContainsRune(filepath.VolumeName("C:"+string(r)), r) {
return r
}
if r < 32 || strings.ContainsRune(`<>:"/\|?*`, r) {
return -1
}
return r
}, value)
}
func RemoveExtension(value string) string {
re := regexp.MustCompile(VIDEOMATCH + "|" + MUSICMATCH)
// Find the last index of the matched extension
loc := re.FindStringIndex(value)
if loc != nil {
return value[:loc[0]]
} else {
return value
}
}
func IsMediaFile(path string) bool {
mediaPattern := VIDEOMATCH + "|" + MUSICMATCH
return RegexMatch(mediaPattern, path)
}
func IsSampleFile(path string) bool {
return RegexMatch(SAMPLEMATCH, path)
}
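Illustrative behaviour of the matchers above against hypothetical paths:

package utils

import "fmt"

func ExampleIsMediaFile() {
	fmt.Println(IsMediaFile("Show.S01E01.1080p.mkv"))        // true: matches VIDEOMATCH
	fmt.Println(IsMediaFile("cover.jpg"))                     // false
	fmt.Println(IsSampleFile("Movie.2024/Sample/movie.mkv"))  // true: "Sample" folder component
	fmt.Println(RemoveExtension("Movie.2024.1080p.mkv"))      // Movie.2024.1080p
}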

43
main.go
View File

@@ -1,22 +1,51 @@
package main
import (
"context"
"flag"
"goBlack/cmd"
"goBlack/common"
"github.com/sirrobot01/decypharr/cmd/decypharr"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/pkg/version"
"log"
"net/http"
_ "net/http/pprof" // registers pprof handlers
"os"
"os/signal"
"runtime/debug"
"syscall"
)
func main() {
defer func() {
if r := recover(); r != nil {
log.Printf("FATAL: Recovered from panic in main: %v\n", r)
debug.PrintStack()
}
}()
if version.GetInfo().Channel == "dev" {
log.Println("Running in dev mode")
go func() {
if err := http.ListenAndServe(":6060", nil); err != nil {
log.Fatalf("pprof server failed: %v", err)
}
}()
}
var configPath string
flag.StringVar(&configPath, "config", "config.json", "path to the config file")
flag.StringVar(&configPath, "config", "/data", "path to the data folder")
flag.Parse()
// Load the config file
conf, err := common.LoadConfig(configPath)
if err != nil {
if err := config.SetConfigPath(configPath); err != nil {
log.Fatal(err)
}
cmd.Start(conf)
config.Get()
// Create a context that's cancelled on SIGINT/SIGTERM
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
defer stop()
if err := decypharr.Start(ctx); err != nil {
log.Fatal(err)
}
}

189
pkg/arr/arr.go Normal file
View File

@@ -0,0 +1,189 @@
package arr
import (
"bytes"
"fmt"
"github.com/goccy/go-json"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/request"
"io"
"net/http"
"strconv"
"strings"
"sync"
"time"
)
// Type is a type of arr
type Type string
const (
Sonarr Type = "sonarr"
Radarr Type = "radarr"
Lidarr Type = "lidarr"
Readarr Type = "readarr"
)
type Arr struct {
Name string `json:"name"`
Host string `json:"host"`
Token string `json:"token"`
Type Type `json:"type"`
Cleanup bool `json:"cleanup"`
SkipRepair bool `json:"skip_repair"`
DownloadUncached *bool `json:"download_uncached"`
client *request.Client
}
func New(name, host, token string, cleanup, skipRepair bool, downloadUncached *bool) *Arr {
return &Arr{
Name: name,
Host: host,
Token: strings.TrimSpace(token),
Type: InferType(host, name),
Cleanup: cleanup,
SkipRepair: skipRepair,
DownloadUncached: downloadUncached,
client: request.New(),
}
}
func (a *Arr) Request(method, endpoint string, payload interface{}) (*http.Response, error) {
if a.Token == "" || a.Host == "" {
return nil, fmt.Errorf("arr not configured")
}
url, err := request.JoinURL(a.Host, endpoint)
if err != nil {
return nil, err
}
var body io.Reader
if payload != nil {
b, err := json.Marshal(payload)
if err != nil {
return nil, err
}
body = bytes.NewReader(b)
}
req, err := http.NewRequest(method, url, body)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("X-Api-Key", a.Token)
if a.client == nil {
a.client = request.New()
}
var resp *http.Response
for attempts := 0; attempts < 5; attempts++ {
resp, err = a.client.Do(req)
if err != nil {
return nil, err
}
// If we got a 401, wait briefly and retry
if resp.StatusCode == http.StatusUnauthorized {
resp.Body.Close() // Don't leak response bodies
if attempts < 4 { // Don't sleep on the last attempt
time.Sleep(time.Duration(attempts+1) * 100 * time.Millisecond)
continue
}
}
return resp, nil
}
return resp, err
}
func (a *Arr) Validate() error {
if a.Token == "" || a.Host == "" {
return nil
}
resp, err := a.Request("GET", "/api/v3/health", nil)
if err != nil {
return err
}
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("arr test failed: %s", resp.Status)
}
return nil
}
type Storage struct {
Arrs map[string]*Arr // name -> arr
mu sync.RWMutex
}
func InferType(host, name string) Type {
switch {
case strings.Contains(host, "sonarr") || strings.Contains(name, "sonarr"):
return Sonarr
case strings.Contains(host, "radarr") || strings.Contains(name, "radarr"):
return Radarr
case strings.Contains(host, "lidarr") || strings.Contains(name, "lidarr"):
return Lidarr
case strings.Contains(host, "readarr") || strings.Contains(name, "readarr"):
return Readarr
default:
return ""
}
}
func NewStorage() *Storage {
arrs := make(map[string]*Arr)
for _, a := range config.Get().Arrs {
name := a.Name
arrs[name] = New(name, a.Host, a.Token, a.Cleanup, a.SkipRepair, a.DownloadUncached)
}
return &Storage{
Arrs: arrs,
}
}
func (as *Storage) AddOrUpdate(arr *Arr) {
as.mu.Lock()
defer as.mu.Unlock()
if arr.Name == "" {
return
}
as.Arrs[arr.Name] = arr
}
func (as *Storage) Get(name string) *Arr {
as.mu.RLock()
defer as.mu.RUnlock()
return as.Arrs[name]
}
func (as *Storage) GetAll() []*Arr {
as.mu.RLock()
defer as.mu.RUnlock()
arrs := make([]*Arr, 0, len(as.Arrs))
for _, arr := range as.Arrs {
if arr.Host != "" && arr.Token != "" {
arrs = append(arrs, arr)
}
}
return arrs
}
func (a *Arr) Refresh() error {
payload := struct {
Name string `json:"name"`
}{
Name: "RefreshMonitoredDownloads",
}
resp, err := a.Request(http.MethodPost, "api/v3/command", payload)
if err != nil {
return fmt.Errorf("failed to refresh: %w", err)
}
defer resp.Body.Close()
if statusOk := strconv.Itoa(resp.StatusCode)[0] == '2'; statusOk {
return nil
}
return fmt.Errorf("failed to refresh: unexpected status %s", resp.Status)
}
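A usage sketch (host, name and token are placeholders): New infers the arr type from the host or name, Validate probes the health endpoint, and Refresh queues a RefreshMonitoredDownloads command.

package main

import (
	"log"

	"github.com/sirrobot01/decypharr/pkg/arr"
)

func main() {
	a := arr.New("sonarr-4k", "http://sonarr:8989", "<api-key>", false, false, nil)
	if err := a.Validate(); err != nil {
		log.Fatalf("sonarr unreachable: %v", err)
	}
	if err := a.Refresh(); err != nil {
		log.Printf("refresh failed: %v", err)
	}
}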

276
pkg/arr/content.go Normal file
View File

@@ -0,0 +1,276 @@
package arr
import (
"context"
"fmt"
"github.com/goccy/go-json"
"golang.org/x/sync/errgroup"
"net/http"
"strconv"
"strings"
)
type episode struct {
Id int `json:"id"`
EpisodeFileID int `json:"episodeFileId"`
}
type sonarrSearch struct {
Name string `json:"name"`
SeasonNumber int `json:"seasonNumber"`
SeriesId int `json:"seriesId"`
}
type radarrSearch struct {
Name string `json:"name"`
MovieIds []int `json:"movieIds"`
}
func (a *Arr) GetMedia(mediaId string) ([]Content, error) {
// Get series
if a.Type == Radarr {
return GetMovies(a, mediaId)
}
// This is likely Sonarr
resp, err := a.Request(http.MethodGet, fmt.Sprintf("api/v3/series?tvdbId=%s", mediaId), nil)
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusNotFound {
// This is likely Radarr
return GetMovies(a, mediaId)
}
a.Type = Sonarr
type series struct {
Title string `json:"title"`
Id int `json:"id"`
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("failed to get series: %s", resp.Status)
}
var data []series
if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, fmt.Errorf("failed to decode series: %v", err)
}
// Get series files
contents := make([]Content, 0)
for _, d := range data {
resp, err = a.Request(http.MethodGet, fmt.Sprintf("api/v3/episodefile?seriesId=%d", d.Id), nil)
if err != nil {
continue
}
var ct Content
var seriesFiles []seriesFile
episodeFileIDMap := make(map[int]int)
func() {
defer resp.Body.Close()
if err = json.NewDecoder(resp.Body).Decode(&seriesFiles); err != nil {
return
}
ct = Content{
Title: d.Title,
Id: d.Id,
}
}()
resp, err = a.Request(http.MethodGet, fmt.Sprintf("api/v3/episode?seriesId=%d", d.Id), nil)
if err != nil {
continue
}
func() {
defer resp.Body.Close()
var episodes []episode
if err = json.NewDecoder(resp.Body).Decode(&episodes); err != nil {
return
}
for _, e := range episodes {
episodeFileIDMap[e.EpisodeFileID] = e.Id
}
}()
files := make([]ContentFile, 0)
for _, file := range seriesFiles {
eId, ok := episodeFileIDMap[file.Id]
if !ok {
eId = 0
}
if file.Id == 0 || file.Path == "" {
// Skip files without path
continue
}
files = append(files, ContentFile{
FileId: file.Id,
Path: file.Path,
Id: d.Id,
EpisodeId: eId,
SeasonNumber: file.SeasonNumber,
})
}
if len(files) == 0 {
// Skip series without files
continue
}
ct.Files = files
contents = append(contents, ct)
}
return contents, nil
}
func GetMovies(a *Arr, tvId string) ([]Content, error) {
resp, err := a.Request(http.MethodGet, fmt.Sprintf("api/v3/movie?tmdbId=%s", tvId), nil)
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusNotFound {
// This is likely Lidarr or Readarr
return nil, fmt.Errorf("failed to get movies: %s", resp.Status)
}
a.Type = Radarr
var movies []Movie
if err = json.NewDecoder(resp.Body).Decode(&movies); err != nil {
return nil, fmt.Errorf("failed to decode movies: %v", err)
}
contents := make([]Content, 0)
for _, movie := range movies {
if movie.MovieFile.Id == 0 || movie.MovieFile.Path == "" {
// Skip movies without files
continue
}
ct := Content{
Title: movie.Title,
Id: movie.Id,
}
files := make([]ContentFile, 0)
files = append(files, ContentFile{
FileId: movie.MovieFile.Id,
Id: movie.Id,
Path: movie.MovieFile.Path,
})
ct.Files = files
contents = append(contents, ct)
}
return contents, nil
}
// searchSonarr searches for missing files in the arr
// map ids are series id and season number
func (a *Arr) searchSonarr(files []ContentFile) error {
ids := make(map[string]any)
for _, f := range files {
// Join series id and season number
id := fmt.Sprintf("%d-%d", f.Id, f.SeasonNumber)
ids[id] = nil
}
g, ctx := errgroup.WithContext(context.Background())
// Limit concurrent goroutines
g.SetLimit(10)
for id := range ids {
id := id
g.Go(func() error {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
parts := strings.Split(id, "-")
if len(parts) != 2 {
return fmt.Errorf("invalid id: %s", id)
}
seriesId, err := strconv.Atoi(parts[0])
if err != nil {
return err
}
seasonNumber, err := strconv.Atoi(parts[1])
if err != nil {
return err
}
payload := sonarrSearch{
Name: "SeasonSearch",
SeasonNumber: seasonNumber,
SeriesId: seriesId,
}
resp, err := a.Request(http.MethodPost, "api/v3/command", payload)
if err != nil {
return fmt.Errorf("failed to automatic search: %v", err)
}
if resp.StatusCode >= 300 || resp.StatusCode < 200 {
return fmt.Errorf("failed to automatic search. Status Code: %s", resp.Status)
}
return nil
})
}
if err := g.Wait(); err != nil {
return err
}
return nil
}
func (a *Arr) searchRadarr(files []ContentFile) error {
ids := make([]int, 0)
for _, f := range files {
ids = append(ids, f.Id)
}
payload := radarrSearch{
Name: "MoviesSearch",
MovieIds: ids,
}
resp, err := a.Request(http.MethodPost, "api/v3/command", payload)
if err != nil {
return fmt.Errorf("failed to automatic search: %v", err)
}
if statusOk := strconv.Itoa(resp.StatusCode)[0] == '2'; !statusOk {
return fmt.Errorf("failed to automatic search. Status Code: %s", resp.Status)
}
return nil
}
func (a *Arr) SearchMissing(files []ContentFile) error {
switch a.Type {
case Sonarr:
return a.searchSonarr(files)
case Radarr:
return a.searchRadarr(files)
default:
return fmt.Errorf("unknown arr type: %s", a.Type)
}
}
func (a *Arr) DeleteFiles(files []ContentFile) error {
ids := make([]int, 0)
for _, f := range files {
ids = append(ids, f.FileId)
}
var payload interface{}
switch a.Type {
case Sonarr:
payload = struct {
EpisodeFileIds []int `json:"episodeFileIds"`
}{
EpisodeFileIds: ids,
}
_, err := a.Request(http.MethodDelete, "api/v3/episodefile/bulk", payload)
if err != nil {
return err
}
case Radarr:
payload = struct {
MovieFileIds []int `json:"movieFileIds"`
}{
MovieFileIds: ids,
}
_, err := a.Request(http.MethodDelete, "api/v3/moviefile/bulk", payload)
if err != nil {
return err
}
default:
return fmt.Errorf("unknown arr type: %s", a.Type)
}
return nil
}

191
pkg/arr/history.go Normal file
View File

@@ -0,0 +1,191 @@
package arr
import (
"github.com/goccy/go-json"
"io"
"net/http"
gourl "net/url"
"strconv"
"strings"
)
type HistorySchema struct {
Page int `json:"page"`
PageSize int `json:"pageSize"`
SortKey string `json:"sortKey"`
SortDirection string `json:"sortDirection"`
TotalRecords int `json:"totalRecords"`
Records []struct {
ID int `json:"id"`
DownloadID string `json:"downloadId"`
} `json:"records"`
}
type QueueResponseScheme struct {
Page int `json:"page"`
PageSize int `json:"pageSize"`
SortKey string `json:"sortKey"`
SortDirection string `json:"sortDirection"`
TotalRecords int `json:"totalRecords"`
Records []QueueSchema `json:"records"`
}
type QueueSchema struct {
SeriesId int `json:"seriesId"`
EpisodeId int `json:"episodeId"`
SeasonNumber int `json:"seasonNumber"`
Title string `json:"title"`
Status string `json:"status"`
TrackedDownloadStatus string `json:"trackedDownloadStatus"`
TrackedDownloadState string `json:"trackedDownloadState"`
StatusMessages []struct {
Title string `json:"title"`
Messages []string `json:"messages"`
} `json:"statusMessages"`
DownloadId string `json:"downloadId"`
Protocol string `json:"protocol"`
DownloadClient string `json:"downloadClient"`
DownloadClientHasPostImportCategory bool `json:"downloadClientHasPostImportCategory"`
Indexer string `json:"indexer"`
OutputPath string `json:"outputPath"`
EpisodeHasFile bool `json:"episodeHasFile"`
Id int `json:"id"`
}
func (a *Arr) GetHistory(downloadId, eventType string) *HistorySchema {
query := gourl.Values{}
if downloadId != "" {
query.Add("downloadId", downloadId)
}
query.Add("eventType", eventType)
query.Add("pageSize", "100")
url := "api/v3/history" + "?" + query.Encode()
resp, err := a.Request(http.MethodGet, url, nil)
if err != nil {
return nil
}
defer resp.Body.Close()
var data *HistorySchema
if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil
}
return data
}
func (a *Arr) GetQueue() []QueueSchema {
query := gourl.Values{}
query.Add("page", "1")
query.Add("pageSize", "200")
results := make([]QueueSchema, 0)
for {
url := "api/v3/queue" + "?" + query.Encode()
resp, err := a.Request(http.MethodGet, url, nil)
if err != nil {
break
}
func() {
defer func(Body io.ReadCloser) {
err := Body.Close()
if err != nil {
return
}
}(resp.Body)
var data QueueResponseScheme
if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return
}
results = append(results, data.Records...)
if len(results) >= data.TotalRecords {
// We've fetched all records
err = io.EOF // Signal to exit the loop
return
}
query.Set("page", strconv.Itoa(data.Page+1))
}()
if err != nil {
break
}
}
return results
}
func (a *Arr) CleanupQueue() error {
queue := a.GetQueue()
type messedUp struct {
id int
episodeId int
seasonNum int
}
cleanups := make(map[int][]messedUp)
for _, q := range queue {
isMessedUp := false
if q.Protocol == "torrent" && q.Status == "completed" && q.TrackedDownloadStatus == "warning" && q.TrackedDownloadState == "importPending" {
messages := q.StatusMessages
if len(messages) > 0 {
for _, m := range messages {
if strings.Contains(strings.Join(m.Messages, " "), "No files found are eligible for import in") {
isMessedUp = true
break
}
if strings.Contains(m.Title, "One or more episodes expected in this release were not imported or missing from the release") {
isMessedUp = true
break
}
}
}
}
if isMessedUp {
cleanups[q.SeriesId] = append(cleanups[q.SeriesId], messedUp{
id: q.Id,
episodeId: q.EpisodeId,
seasonNum: q.SeasonNumber,
})
}
}
if len(cleanups) == 0 {
return nil
}
queueIds := make([]int, 0)
for _, c := range cleanups {
// Delete the messed up episodes from queue
for _, m := range c {
queueIds = append(queueIds, m.id)
}
}
// Delete the messed up episodes from queue
payload := struct {
Ids []int `json:"ids"`
}{
Ids: queueIds,
}
// Blocklist the hash (it's typically incomplete), then re-search for the episode
query := gourl.Values{}
query.Add("removeFromClient", "true")
query.Add("blocklist", "true")
query.Add("skipRedownload", "false")
query.Add("changeCategory", "false")
url := "api/v3/queue/bulk" + "?" + query.Encode()
_, err := a.Request(http.MethodDelete, url, payload)
if err != nil {
return err
}
return nil
}

209
pkg/arr/import.go Normal file
View File

@@ -0,0 +1,209 @@
package arr
import (
"fmt"
"github.com/goccy/go-json"
"io"
"net/http"
gourl "net/url"
"strconv"
"time"
)
type ImportResponseSchema struct {
Path string `json:"path"`
RelativePath string `json:"relativePath"`
FolderName string `json:"folderName"`
Name string `json:"name"`
Size int `json:"size"`
Series struct {
Title string `json:"title"`
SortTitle string `json:"sortTitle"`
Status string `json:"status"`
Ended bool `json:"ended"`
Overview string `json:"overview"`
Network string `json:"network"`
AirTime string `json:"airTime"`
Images []struct {
CoverType string `json:"coverType"`
RemoteUrl string `json:"remoteUrl"`
} `json:"images"`
OriginalLanguage struct {
Id int `json:"id"`
Name string `json:"name"`
} `json:"originalLanguage"`
Seasons []struct {
SeasonNumber int `json:"seasonNumber"`
Monitored bool `json:"monitored"`
} `json:"seasons"`
Year int `json:"year"`
Path string `json:"path"`
QualityProfileId int `json:"qualityProfileId"`
SeasonFolder bool `json:"seasonFolder"`
Monitored bool `json:"monitored"`
MonitorNewItems string `json:"monitorNewItems"`
UseSceneNumbering bool `json:"useSceneNumbering"`
Runtime int `json:"runtime"`
TvdbId int `json:"tvdbId"`
TvRageId int `json:"tvRageId"`
TvMazeId int `json:"tvMazeId"`
TmdbId int `json:"tmdbId"`
FirstAired time.Time `json:"firstAired"`
LastAired time.Time `json:"lastAired"`
SeriesType string `json:"seriesType"`
CleanTitle string `json:"cleanTitle"`
ImdbId string `json:"imdbId"`
TitleSlug string `json:"titleSlug"`
Certification string `json:"certification"`
Genres []string `json:"genres"`
Tags []interface{} `json:"tags"`
Added time.Time `json:"added"`
Ratings struct {
Votes int `json:"votes"`
Value float64 `json:"value"`
} `json:"ratings"`
LanguageProfileId int `json:"languageProfileId"`
Id int `json:"id"`
} `json:"series"`
SeasonNumber int `json:"seasonNumber"`
Episodes []struct {
SeriesId int `json:"seriesId"`
TvdbId int `json:"tvdbId"`
EpisodeFileId int `json:"episodeFileId"`
SeasonNumber int `json:"seasonNumber"`
EpisodeNumber int `json:"episodeNumber"`
Title string `json:"title"`
AirDate string `json:"airDate"`
AirDateUtc time.Time `json:"airDateUtc"`
Runtime int `json:"runtime"`
Overview string `json:"overview"`
HasFile bool `json:"hasFile"`
Monitored bool `json:"monitored"`
AbsoluteEpisodeNumber int `json:"absoluteEpisodeNumber"`
UnverifiedSceneNumbering bool `json:"unverifiedSceneNumbering"`
Id int `json:"id"`
FinaleType string `json:"finaleType,omitempty"`
} `json:"episodes"`
ReleaseGroup string `json:"releaseGroup"`
Quality struct {
Quality struct {
Id int `json:"id"`
Name string `json:"name"`
Source string `json:"source"`
Resolution int `json:"resolution"`
} `json:"quality"`
Revision struct {
Version int `json:"version"`
Real int `json:"real"`
IsRepack bool `json:"isRepack"`
} `json:"revision"`
} `json:"quality"`
Languages []struct {
Id int `json:"id"`
Name string `json:"name"`
} `json:"languages"`
QualityWeight int `json:"qualityWeight"`
CustomFormats []interface{} `json:"customFormats"`
CustomFormatScore int `json:"customFormatScore"`
IndexerFlags int `json:"indexerFlags"`
ReleaseType string `json:"releaseType"`
Rejections []struct {
Reason string `json:"reason"`
Type string `json:"type"`
} `json:"rejections"`
Id int `json:"id"`
}
type ManualImportRequestFile struct {
Path string `json:"path"`
SeriesId int `json:"seriesId"`
SeasonNumber int `json:"seasonNumber"`
EpisodeIds []int `json:"episodeIds"`
Quality struct {
Quality struct {
Id int `json:"id"`
Name string `json:"name"`
Source string `json:"source"`
Resolution int `json:"resolution"`
} `json:"quality"`
Revision struct {
Version int `json:"version"`
Real int `json:"real"`
IsRepack bool `json:"isRepack"`
} `json:"revision"`
} `json:"quality"`
Languages []struct {
Id int `json:"id"`
Name string `json:"name"`
} `json:"languages"`
ReleaseGroup string `json:"releaseGroup"`
CustomFormats []interface{} `json:"customFormats"`
CustomFormatScore int `json:"customFormatScore"`
IndexerFlags int `json:"indexerFlags"`
ReleaseType string `json:"releaseType"`
Rejections []struct {
Reason string `json:"reason"`
Type string `json:"type"`
} `json:"rejections"`
}
type ManualImportRequestSchema struct {
Name string `json:"name"`
Files []ManualImportRequestFile `json:"files"`
ImportMode string `json:"importMode"`
}
func (a *Arr) Import(path string, seriesId int, seasons []int) (io.ReadCloser, error) {
query := gourl.Values{}
query.Add("folder", path)
if seriesId != 0 {
query.Add("seriesId", strconv.Itoa(seriesId))
}
url := "api/v3/manualimport" + "?" + query.Encode()
resp, err := a.Request(http.MethodGet, url, nil)
if err != nil {
return nil, fmt.Errorf("failed to import, invalid file: %w", err)
}
defer resp.Body.Close()
var data []ImportResponseSchema
if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, fmt.Errorf("failed to decode response: %w", err)
}
var files []ManualImportRequestFile
for _, d := range data {
episodesIds := []int{}
for _, e := range d.Episodes {
episodesIds = append(episodesIds, e.Id)
}
file := ManualImportRequestFile{
Path: d.Path,
SeriesId: d.Series.Id,
SeasonNumber: d.SeasonNumber,
EpisodeIds: episodesIds,
Quality: d.Quality,
Languages: d.Languages,
ReleaseGroup: d.ReleaseGroup,
CustomFormats: d.CustomFormats,
CustomFormatScore: d.CustomFormatScore,
IndexerFlags: d.IndexerFlags,
ReleaseType: d.ReleaseType,
Rejections: d.Rejections,
}
files = append(files, file)
}
request := ManualImportRequestSchema{
Name: "ManualImport",
Files: files,
ImportMode: "copy",
}
url = "api/v3/command"
resp, err = a.Request(http.MethodPost, url, request)
if err != nil {
return nil, fmt.Errorf("failed to import: %w", err)
}
defer resp.Body.Close()
return resp.Body, nil
}

39
pkg/arr/types.go Normal file
View File

@@ -0,0 +1,39 @@
package arr
type Movie struct {
Title string `json:"title"`
OriginalTitle string `json:"originalTitle"`
Path string `json:"path"`
MovieFile struct {
MovieId int `json:"movieId"`
RelativePath string `json:"relativePath"`
Path string `json:"path"`
Id int `json:"id"`
} `json:"movieFile"`
Id int `json:"id"`
}
type ContentFile struct {
Name string `json:"name"`
Path string `json:"path"`
Id int `json:"id"`
EpisodeId int `json:"showId"`
FileId int `json:"fileId"`
TargetPath string `json:"targetPath"`
IsSymlink bool `json:"isSymlink"`
IsBroken bool `json:"isBroken"`
SeasonNumber int `json:"seasonNumber"`
}
type Content struct {
Title string `json:"title"`
Id int `json:"id"`
Files []ContentFile `json:"files"`
}
type seriesFile struct {
SeriesId int `json:"seriesId"`
SeasonNumber int `json:"seasonNumber"`
Path string `json:"path"`
Id int `json:"id"`
}

View File

@@ -0,0 +1,383 @@
package alldebrid
import (
"fmt"
"github.com/goccy/go-json"
"github.com/puzpuzpuz/xsync/v3"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"net/http"
gourl "net/url"
"path/filepath"
"slices"
"strconv"
"sync"
"time"
)
type AllDebrid struct {
Name string
Host string `json:"host"`
APIKey string
DownloadKeys *xsync.MapOf[string, types.Account]
DownloadUncached bool
client *request.Client
MountPath string
logger zerolog.Logger
CheckCached bool
}
func New(dc config.Debrid) *AllDebrid {
rl := request.ParseRateLimit(dc.RateLimit)
headers := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
}
_log := logger.New(dc.Name)
client := request.New(
request.WithHeaders(headers),
request.WithLogger(_log),
request.WithRateLimiter(rl),
request.WithProxy(dc.Proxy),
)
accounts := xsync.NewMapOf[string, types.Account]()
for idx, key := range dc.DownloadAPIKeys {
id := strconv.Itoa(idx)
accounts.Store(id, types.Account{
Name: key,
ID: id,
Token: key,
})
}
return &AllDebrid{
Name: "alldebrid",
Host: dc.Host,
APIKey: dc.APIKey,
DownloadKeys: accounts,
DownloadUncached: dc.DownloadUncached,
client: client,
MountPath: dc.Folder,
logger: logger.New(dc.Name),
CheckCached: dc.CheckCached,
}
}
func (ad *AllDebrid) GetName() string {
return ad.Name
}
func (ad *AllDebrid) GetLogger() zerolog.Logger {
return ad.logger
}
func (ad *AllDebrid) IsAvailable(hashes []string) map[string]bool {
// AllDebrid does not support checking cached infohashes, so nothing is reported as cached
result := make(map[string]bool)
return result
}
func (ad *AllDebrid) SubmitMagnet(torrent *types.Torrent) (*types.Torrent, error) {
url := fmt.Sprintf("%s/magnet/upload", ad.Host)
query := gourl.Values{}
query.Add("magnets[]", torrent.Magnet.Link)
url += "?" + query.Encode()
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := ad.client.MakeRequest(req)
if err != nil {
return nil, err
}
var data UploadMagnetResponse
err = json.Unmarshal(resp, &data)
if err != nil {
return nil, err
}
magnets := data.Data.Magnets
if len(magnets) == 0 {
return nil, fmt.Errorf("error adding torrent")
}
magnet := magnets[0]
torrentId := strconv.Itoa(magnet.ID)
torrent.Id = torrentId
return torrent, nil
}
func getAlldebridStatus(statusCode int) string {
switch {
case statusCode == 4:
return "downloaded"
case statusCode >= 0 && statusCode <= 3:
return "downloading"
default:
return "error"
}
}
func flattenFiles(files []MagnetFile, parentPath string, index *int) map[string]types.File {
result := make(map[string]types.File)
cfg := config.Get()
for _, f := range files {
currentPath := f.Name
if parentPath != "" {
currentPath = filepath.Join(parentPath, f.Name)
}
if f.Elements != nil {
// This is a folder, recurse into it
subFiles := flattenFiles(f.Elements, currentPath, index)
for k, v := range subFiles {
if _, ok := result[k]; ok {
// File already exists, use path as key
result[v.Path] = v
} else {
result[k] = v
}
}
} else {
// This is a file
fileName := filepath.Base(f.Name)
// Skip sample files
if utils.IsSampleFile(f.Name) {
continue
}
if !cfg.IsAllowedFile(fileName) {
continue
}
if !cfg.IsSizeAllowed(f.Size) {
continue
}
*index++
file := types.File{
Id: strconv.Itoa(*index),
Name: fileName,
Size: f.Size,
Path: currentPath,
Link: f.Link,
}
result[file.Name] = file
}
}
return result
}
func (ad *AllDebrid) UpdateTorrent(t *types.Torrent) error {
url := fmt.Sprintf("%s/magnet/status?id=%s", ad.Host, t.Id)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := ad.client.MakeRequest(req)
if err != nil {
return err
}
var res TorrentInfoResponse
err = json.Unmarshal(resp, &res)
if err != nil {
ad.logger.Info().Msgf("Error unmarshalling torrent info: %s", err)
return err
}
data := res.Data.Magnets
status := getAlldebridStatus(data.StatusCode)
name := data.Filename
t.Name = name
t.Status = status
t.Filename = name
t.OriginalFilename = name
t.Folder = name
t.MountPath = ad.MountPath
t.Debrid = ad.Name
if status == "downloaded" {
t.Bytes = data.Size
t.Progress = float64(data.Downloaded) / float64(data.Size) * 100
t.Speed = data.DownloadSpeed
t.Seeders = data.Seeders
index := -1
files := flattenFiles(data.Files, "", &index)
t.Files = files
}
return nil
}
func (ad *AllDebrid) CheckStatus(torrent *types.Torrent, isSymlink bool) (*types.Torrent, error) {
for {
err := ad.UpdateTorrent(torrent)
if err != nil || torrent == nil {
return torrent, err
}
status := torrent.Status
if status == "downloaded" {
ad.logger.Info().Msgf("Torrent: %s downloaded", torrent.Name)
if !isSymlink {
err = ad.GenerateDownloadLinks(torrent)
if err != nil {
return torrent, err
}
}
break
} else if slices.Contains(ad.GetDownloadingStatus(), status) {
if !torrent.DownloadUncached {
return torrent, fmt.Errorf("torrent: %s not cached", torrent.Name)
}
// Return early while the torrent is still downloading; this prevents an infinite
// loop now that submission is synchronous and processing is asynchronous
return torrent, nil
} else {
return torrent, fmt.Errorf("torrent: %s has error", torrent.Name)
}
}
return torrent, nil
}
func (ad *AllDebrid) DeleteTorrent(torrentId string) error {
url := fmt.Sprintf("%s/magnet/delete?id=%s", ad.Host, torrentId)
req, _ := http.NewRequest(http.MethodGet, url, nil)
if _, err := ad.client.MakeRequest(req); err != nil {
return err
}
ad.logger.Info().Msgf("Torrent %s deleted from AD", torrentId)
return nil
}
func (ad *AllDebrid) GenerateDownloadLinks(t *types.Torrent) error {
filesCh := make(chan types.File, len(t.Files))
errCh := make(chan error, len(t.Files))
var wg sync.WaitGroup
wg.Add(len(t.Files))
for _, file := range t.Files {
go func(file types.File) {
defer wg.Done()
link, accountId, err := ad.GetDownloadLink(t, &file)
if err != nil {
errCh <- err
return
}
file.DownloadLink = link
file.Generated = time.Now()
file.AccountId = accountId
if link == "" {
errCh <- fmt.Errorf("error getting download links %w", err)
return
}
filesCh <- file
}(file)
}
go func() {
wg.Wait()
close(filesCh)
close(errCh)
}()
files := make(map[string]types.File, len(t.Files))
for file := range filesCh {
files[file.Name] = file
}
// Check for errors
for err := range errCh {
if err != nil {
return err // Return the first error encountered
}
}
t.Files = files
return nil
}
func (ad *AllDebrid) GetDownloadLink(t *types.Torrent, file *types.File) (string, string, error) {
url := fmt.Sprintf("%s/link/unlock", ad.Host)
query := gourl.Values{}
query.Add("link", file.Link)
url += "?" + query.Encode()
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := ad.client.MakeRequest(req)
if err != nil {
return "", "", err
}
var data DownloadLink
if err = json.Unmarshal(resp, &data); err != nil {
return "", "", err
}
link := data.Data.Link
if link == "" {
return "", "", fmt.Errorf("error getting download links %s", data.Error.Message)
}
return link, "0", nil
}
func (ad *AllDebrid) GetCheckCached() bool {
return ad.CheckCached
}
func (ad *AllDebrid) GetTorrents() ([]*types.Torrent, error) {
url := fmt.Sprintf("%s/magnet/status?status=ready", ad.Host)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := ad.client.MakeRequest(req)
torrents := make([]*types.Torrent, 0)
if err != nil {
return torrents, err
}
var res TorrentsListResponse
err = json.Unmarshal(resp, &res)
if err != nil {
ad.logger.Info().Msgf("Error unmarshalling torrent info: %s", err)
return torrents, err
}
for _, magnet := range res.Data.Magnets {
torrents = append(torrents, &types.Torrent{
Id: strconv.Itoa(magnet.Id),
Name: magnet.Filename,
Bytes: magnet.Size,
Status: getAlldebridStatus(magnet.StatusCode),
Filename: magnet.Filename,
OriginalFilename: magnet.Filename,
Files: make(map[string]types.File),
InfoHash: magnet.Hash,
Debrid: ad.Name,
MountPath: ad.MountPath,
})
}
return torrents, nil
}
func (ad *AllDebrid) GetDownloads() (map[string]types.DownloadLinks, error) {
return nil, nil
}
func (ad *AllDebrid) GetDownloadingStatus() []string {
return []string{"downloading"}
}
func (ad *AllDebrid) GetDownloadUncached() bool {
return ad.DownloadUncached
}
func (ad *AllDebrid) CheckLink(link string) error {
return nil
}
func (ad *AllDebrid) GetMountPath() string {
return ad.MountPath
}
func (ad *AllDebrid) DisableAccount(accountId string) {
}
func (ad *AllDebrid) ResetActiveDownloadKeys() {
}

View File

@@ -0,0 +1,83 @@
package alldebrid
type errorResponse struct {
Code string `json:"code"`
Message string `json:"message"`
}
type MagnetFile struct {
Name string `json:"n"`
Size int64 `json:"s"`
Link string `json:"l"`
Elements []MagnetFile `json:"e"`
}
type magnetInfo struct {
Id int `json:"id"`
Filename string `json:"filename"`
Size int64 `json:"size"`
Hash string `json:"hash"`
Status string `json:"status"`
StatusCode int `json:"statusCode"`
UploadDate int `json:"uploadDate"`
Downloaded int64 `json:"downloaded"`
Uploaded int64 `json:"uploaded"`
DownloadSpeed int64 `json:"downloadSpeed"`
UploadSpeed int64 `json:"uploadSpeed"`
Seeders int `json:"seeders"`
CompletionDate int `json:"completionDate"`
Type string `json:"type"`
Notified bool `json:"notified"`
Version int `json:"version"`
NbLinks int `json:"nbLinks"`
Files []MagnetFile `json:"files"`
}
type TorrentInfoResponse struct {
Status string `json:"status"`
Data struct {
Magnets magnetInfo `json:"magnets"`
} `json:"data"`
Error *errorResponse `json:"error"`
}
type TorrentsListResponse struct {
Status string `json:"status"`
Data struct {
Magnets []magnetInfo `json:"magnets"`
} `json:"data"`
Error *errorResponse `json:"error"`
}
type UploadMagnetResponse struct {
Status string `json:"status"`
Data struct {
Magnets []struct {
Magnet string `json:"magnet"`
Hash string `json:"hash"`
Name string `json:"name"`
FilenameOriginal string `json:"filename_original"`
Size int64 `json:"size"`
Ready bool `json:"ready"`
ID int `json:"id"`
} `json:"magnets"`
}
Error *errorResponse `json:"error"`
}
type DownloadLink struct {
Status string `json:"status"`
Data struct {
Link string `json:"link"`
Host string `json:"host"`
Filename string `json:"filename"`
Streaming []interface{} `json:"streaming"`
Paws bool `json:"paws"`
Filesize int `json:"filesize"`
Id string `json:"id"`
Path []struct {
Name string `json:"n"`
Size int `json:"s"`
} `json:"path"`
} `json:"data"`
Error *errorResponse `json:"error"`
}

803
pkg/debrid/debrid/cache.go Normal file
View File

@@ -0,0 +1,803 @@
package debrid
import (
"bufio"
"context"
"errors"
"fmt"
"github.com/goccy/go-json"
"github.com/puzpuzpuz/xsync/v3"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"os"
"path/filepath"
"runtime"
"strconv"
"sync"
"sync/atomic"
"time"
)
type WebDavFolderNaming string
const (
WebDavUseFileName WebDavFolderNaming = "filename"
WebDavUseOriginalName WebDavFolderNaming = "original"
WebDavUseFileNameNoExt WebDavFolderNaming = "filename_no_ext"
WebDavUseOriginalNameNoExt WebDavFolderNaming = "original_no_ext"
WebDavUseID WebDavFolderNaming = "id"
)
type PropfindResponse struct {
Data []byte
GzippedData []byte
Ts time.Time
}
type CachedTorrent struct {
*types.Torrent
AddedOn time.Time `json:"added_on"`
IsComplete bool `json:"is_complete"`
}
type downloadLinkCache struct {
Link string
AccountId string
ExpiresAt time.Time
}
type RepairType string
const (
RepairTypeReinsert RepairType = "reinsert"
RepairTypeDelete RepairType = "delete"
)
type RepairRequest struct {
Type RepairType
TorrentID string
Priority int
FileName string
}
type Cache struct {
dir string
client types.Client
logger zerolog.Logger
torrents *xsync.MapOf[string, *CachedTorrent] // key: torrent.Id, value: *CachedTorrent
torrentsNames *xsync.MapOf[string, *CachedTorrent] // key: torrent.Name, value: torrent
listings atomic.Value
downloadLinks *xsync.MapOf[string, downloadLinkCache]
invalidDownloadLinks *xsync.MapOf[string, string]
PropfindResp *xsync.MapOf[string, PropfindResponse]
folderNaming WebDavFolderNaming
// repair
repairChan chan RepairRequest
repairsInProgress *xsync.MapOf[string, struct{}]
// config
workers int
torrentRefreshInterval time.Duration
downloadLinksRefreshInterval time.Duration
autoExpiresLinksAfter time.Duration
// refresh mutex
listingRefreshMu sync.RWMutex // for refreshing torrents
downloadLinksRefreshMu sync.RWMutex // for refreshing download links
torrentsRefreshMu sync.RWMutex // for refreshing torrents
saveSemaphore chan struct{}
ctx context.Context
}
func New(dc config.Debrid, client types.Client) *Cache {
cfg := config.Get()
torrentRefreshInterval, err := time.ParseDuration(dc.TorrentsRefreshInterval)
if err != nil {
torrentRefreshInterval = time.Second * 15
}
downloadLinksRefreshInterval, err := time.ParseDuration(dc.DownloadLinksRefreshInterval)
if err != nil {
downloadLinksRefreshInterval = time.Minute * 40
}
autoExpiresLinksAfter, err := time.ParseDuration(dc.AutoExpireLinksAfter)
if err != nil {
autoExpiresLinksAfter = time.Hour * 24
}
workers := runtime.NumCPU() * 50
if dc.Workers > 0 {
workers = dc.Workers
}
return &Cache{
dir: filepath.Join(cfg.Path, "cache", dc.Name), // path to save cache files
torrents: xsync.NewMapOf[string, *CachedTorrent](),
torrentsNames: xsync.NewMapOf[string, *CachedTorrent](),
invalidDownloadLinks: xsync.NewMapOf[string, string](),
client: client,
logger: logger.New(fmt.Sprintf("%s-webdav", client.GetName())),
workers: workers,
downloadLinks: xsync.NewMapOf[string, downloadLinkCache](),
torrentRefreshInterval: torrentRefreshInterval,
downloadLinksRefreshInterval: downloadLinksRefreshInterval,
PropfindResp: xsync.NewMapOf[string, PropfindResponse](),
folderNaming: WebDavFolderNaming(dc.FolderNaming),
autoExpiresLinksAfter: autoExpiresLinksAfter,
repairsInProgress: xsync.NewMapOf[string, struct{}](),
saveSemaphore: make(chan struct{}, 50),
ctx: context.Background(),
}
}
func (c *Cache) Start(ctx context.Context) error {
if err := os.MkdirAll(c.dir, 0755); err != nil {
return fmt.Errorf("failed to create cache directory: %w", err)
}
c.ctx = ctx
if err := c.Sync(); err != nil {
return fmt.Errorf("failed to sync cache: %w", err)
}
// initial download links
go func() {
c.refreshDownloadLinks()
}()
go func() {
err := c.Refresh()
if err != nil {
c.logger.Error().Err(err).Msg("Failed to start cache refresh worker")
}
}()
c.repairChan = make(chan RepairRequest, 100)
go c.repairWorker()
return nil
}
func (c *Cache) load() (map[string]*CachedTorrent, error) {
torrents := make(map[string]*CachedTorrent)
var results sync.Map
if err := os.MkdirAll(c.dir, 0755); err != nil {
return torrents, fmt.Errorf("failed to create cache directory: %w", err)
}
files, err := os.ReadDir(c.dir)
if err != nil {
return torrents, fmt.Errorf("failed to read cache directory: %w", err)
}
// Get only json files
var jsonFiles []os.DirEntry
for _, file := range files {
if !file.IsDir() && filepath.Ext(file.Name()) == ".json" {
jsonFiles = append(jsonFiles, file)
}
}
if len(jsonFiles) == 0 {
return torrents, nil
}
// Create channels with appropriate buffering
workChan := make(chan os.DirEntry, min(c.workers, len(jsonFiles)))
// Create a wait group for workers
var wg sync.WaitGroup
// Start workers
for i := 0; i < c.workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
now := time.Now()
for {
file, ok := <-workChan
if !ok {
return // Channel closed, exit goroutine
}
fileName := file.Name()
filePath := filepath.Join(c.dir, fileName)
data, err := os.ReadFile(filePath)
if err != nil {
c.logger.Debug().Err(err).Msgf("Failed to read file: %s", filePath)
continue
}
var ct CachedTorrent
if err := json.Unmarshal(data, &ct); err != nil {
c.logger.Debug().Err(err).Msgf("Failed to unmarshal file: %s", filePath)
continue
}
isComplete := true
if len(ct.Files) != 0 {
// Only keep torrents whose files all have links; incomplete entries are skipped so they can be re-synced.
for _, f := range ct.Files {
if f.Link == "" {
isComplete = false
break
}
}
if isComplete {
addedOn, err := time.Parse(time.RFC3339, ct.Added)
if err != nil {
addedOn = now
}
ct.AddedOn = addedOn
ct.IsComplete = true
results.Store(ct.Id, &ct)
}
}
}
}()
}
// Feed work to workers
for _, file := range jsonFiles {
workChan <- file
}
// Signal workers that no more work is coming
close(workChan)
// Wait for all workers to complete
wg.Wait()
// Convert sync.Map to regular map
results.Range(func(key, value interface{}) bool {
id, _ := key.(string)
torrent, _ := value.(*CachedTorrent)
torrents[id] = torrent
return true
})
return torrents, nil
}
func (c *Cache) Sync() error {
defer c.logger.Info().Msg("WebDav server sync complete")
cachedTorrents, err := c.load()
if err != nil {
c.logger.Debug().Err(err).Msg("Failed to load cache")
}
torrents, err := c.client.GetTorrents()
if err != nil {
return fmt.Errorf("failed to sync torrents: %v", err)
}
c.logger.Info().Msgf("Got %d torrents from %s", len(torrents), c.client.GetName())
newTorrents := make([]*types.Torrent, 0)
idStore := make(map[string]struct{}, len(torrents))
for _, t := range torrents {
idStore[t.Id] = struct{}{}
if _, ok := cachedTorrents[t.Id]; !ok {
newTorrents = append(newTorrents, t)
}
}
// Check for deleted torrents
deletedTorrents := make([]string, 0)
for _, t := range cachedTorrents {
if _, ok := idStore[t.Id]; !ok {
deletedTorrents = append(deletedTorrents, t.Id)
}
}
if len(deletedTorrents) > 0 {
c.logger.Info().Msgf("Found %d deleted torrents", len(deletedTorrents))
for _, id := range deletedTorrents {
if _, ok := cachedTorrents[id]; ok {
delete(cachedTorrents, id)
c.removeFromDB(id)
}
}
}
// Write these torrents to the cache
c.setTorrents(cachedTorrents)
c.logger.Info().Msgf("Loaded %d torrents from cache", len(cachedTorrents))
if len(newTorrents) > 0 {
c.logger.Info().Msgf("Found %d new torrents", len(newTorrents))
if err := c.sync(newTorrents); err != nil {
return fmt.Errorf("failed to sync torrents: %v", err)
}
}
return nil
}
func (c *Cache) sync(torrents []*types.Torrent) error {
// Create channels with appropriate buffering
workChan := make(chan *types.Torrent, min(c.workers, len(torrents)))
// Use an atomic counter for progress tracking
var processed int64
var errorCount int64
// Create a wait group for workers
var wg sync.WaitGroup
// Start workers
for i := 0; i < c.workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case t, ok := <-workChan:
if !ok {
return // Channel closed, exit goroutine
}
if err := c.ProcessTorrent(t, false); err != nil {
c.logger.Error().Err(err).Str("torrent", t.Name).Msg("sync error")
atomic.AddInt64(&errorCount, 1)
}
count := atomic.AddInt64(&processed, 1)
if count%1000 == 0 {
c.refreshListings()
c.logger.Info().Msgf("Progress: %d/%d torrents processed", count, len(torrents))
}
case <-c.ctx.Done():
return // Context cancelled, exit goroutine
}
}
}()
}
// Feed work to workers; stop early if the context is cancelled
feed:
for _, t := range torrents {
select {
case workChan <- t:
// Work sent successfully
case <-c.ctx.Done():
break feed // Context cancelled, stop feeding
}
}
// Signal workers that no more work is coming
close(workChan)
// Wait for all workers to complete
wg.Wait()
c.refreshListings()
c.logger.Info().Msgf("Sync complete: %d torrents processed, %d errors", len(torrents), errorCount)
return nil
}
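// GetTorrentFolder returns the virtual WebDAV folder name for a torrent according to the configured
// naming scheme: filename, original filename (each with or without extension), or the debrid ID.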
func (c *Cache) GetTorrentFolder(torrent *types.Torrent) string {
switch c.folderNaming {
case WebDavUseFileName:
return torrent.Filename
case WebDavUseOriginalName:
return torrent.OriginalFilename
case WebDavUseFileNameNoExt:
return utils.RemoveExtension(torrent.Filename)
case WebDavUseOriginalNameNoExt:
return utils.RemoveExtension(torrent.OriginalFilename)
case WebDavUseID:
return torrent.Id
default:
return torrent.Filename
}
}
func (c *Cache) setTorrent(t *CachedTorrent) {
c.torrents.Store(t.Id, t)
c.torrentsNames.Store(c.GetTorrentFolder(t.Torrent), t)
c.SaveTorrent(t)
}
func (c *Cache) setTorrents(torrents map[string]*CachedTorrent) {
for _, t := range torrents {
c.torrents.Store(t.Id, t)
c.torrentsNames.Store(c.GetTorrentFolder(t.Torrent), t)
}
c.refreshListings()
c.SaveTorrents()
}
func (c *Cache) GetListing() []os.FileInfo {
if v, ok := c.listings.Load().([]os.FileInfo); ok {
return v
}
return nil
}
func (c *Cache) Close() error {
return nil
}
func (c *Cache) GetTorrents() map[string]*CachedTorrent {
torrents := make(map[string]*CachedTorrent)
c.torrents.Range(func(key string, value *CachedTorrent) bool {
torrents[key] = value
return true
})
return torrents
}
func (c *Cache) GetTorrent(id string) *CachedTorrent {
if t, ok := c.torrents.Load(id); ok {
return t
}
return nil
}
func (c *Cache) GetTorrentByName(name string) *CachedTorrent {
if t, ok := c.torrentsNames.Load(name); ok {
return t
}
return nil
}
func (c *Cache) SaveTorrents() {
c.torrents.Range(func(key string, value *CachedTorrent) bool {
c.SaveTorrent(value)
return true
})
}
func (c *Cache) SaveTorrent(ct *CachedTorrent) {
marshaled, err := json.MarshalIndent(ct, "", " ")
if err != nil {
c.logger.Debug().Err(err).Msgf("Failed to marshal torrent: %s", ct.Id)
return
}
// Store just the essential info needed for the file operation
saveInfo := struct {
id string
jsonData []byte
}{
id: ct.Torrent.Id,
jsonData: marshaled,
}
// Try to acquire semaphore without blocking
select {
case c.saveSemaphore <- struct{}{}:
go func() {
defer func() { <-c.saveSemaphore }()
c.saveTorrent(saveInfo.id, saveInfo.jsonData)
}()
default:
c.saveTorrent(saveInfo.id, saveInfo.jsonData)
}
}
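// saveTorrent persists the torrent JSON atomically: the data is written to a unique temp file
// which is then renamed over the target, so concurrent readers never observe a partial write.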
func (c *Cache) saveTorrent(id string, data []byte) {
fileName := id + ".json"
filePath := filepath.Join(c.dir, fileName)
// Use a unique temporary filename for concurrent safety
tmpFile := filePath + ".tmp." + strconv.FormatInt(time.Now().UnixNano(), 10)
f, err := os.Create(tmpFile)
if err != nil {
c.logger.Debug().Err(err).Msgf("Failed to create file: %s", tmpFile)
return
}
// Track if we've closed the file
fileClosed := false
defer func() {
// Only close if not already closed
if !fileClosed {
_ = f.Close()
}
// Clean up the temp file if it still exists and rename failed
_ = os.Remove(tmpFile)
}()
w := bufio.NewWriter(f)
if _, err := w.Write(data); err != nil {
c.logger.Debug().Err(err).Msgf("Failed to write data: %s", tmpFile)
return
}
if err := w.Flush(); err != nil {
c.logger.Debug().Err(err).Msgf("Failed to flush data: %s", tmpFile)
return
}
// Close the file before renaming
_ = f.Close()
fileClosed = true
if err := os.Rename(tmpFile, filePath); err != nil {
c.logger.Debug().Err(err).Msgf("Failed to rename file: %s", tmpFile)
return
}
}
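// ProcessTorrent adds a torrent coming from the debrid service to the cache. Torrents with missing
// file links are refreshed once; if they are still incomplete they are not cached (the reinsert path is currently disabled).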
func (c *Cache) ProcessTorrent(t *types.Torrent, refreshRclone bool) error {
isComplete := func(files map[string]types.File) bool {
_complete := len(files) > 0
for _, file := range files {
if file.Link == "" {
_complete = false
break
}
}
return _complete
}
if !isComplete(t.Files) {
if err := c.client.UpdateTorrent(t); err != nil {
return fmt.Errorf("failed to update torrent: %w", err)
}
}
if !isComplete(t.Files) {
c.logger.Debug().Msgf("Torrent %s is still not complete. Triggering a reinsert (disabled)", t.Id)
//ct, err := c.reInsertTorrent(t)
//if err != nil {
// c.logger.Debug().Err(err).Msgf("Failed to reinsert torrent %s", t.Id)
// return err
//}
//c.logger.Debug().Msgf("Reinserted torrent %s", ct.Id)
} else {
addedOn, err := time.Parse(time.RFC3339, t.Added)
if err != nil {
addedOn = time.Now()
}
ct := &CachedTorrent{
Torrent: t,
IsComplete: len(t.Files) > 0,
AddedOn: addedOn,
}
c.setTorrent(ct)
}
if refreshRclone {
c.refreshListings()
}
return nil
}
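// GetDownloadLink resolves an unrestricted download link for a file, in order of preference:
// the in-memory link cache, the restricted link already on the cached torrent (refreshing or
// reinserting the torrent if that link is missing), and finally the debrid API itself.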
func (c *Cache) GetDownloadLink(torrentId, filename, fileLink string) string {
// Check link cache
if dl := c.checkDownloadLink(fileLink); dl != "" {
return dl
}
ct := c.GetTorrent(torrentId)
if ct == nil {
return ""
}
file := ct.Files[filename]
if file.Link == "" {
// file link is empty, refresh the torrent to get restricted links
ct = c.refreshTorrent(ct) // Refresh the torrent from the debrid
if ct == nil {
return ""
} else {
file = ct.Files[filename]
}
}
// If file.Link is still empty, try to reinsert the torrent
if file.Link == "" {
c.logger.Debug().Msgf("File link is empty for %s. Release is probably nerfed", filename)
torrentName := ct.Name
newCt, err := c.reInsertTorrent(ct.Torrent)
if err != nil {
c.logger.Debug().Err(err).Msgf("Failed to reinsert torrent %s", torrentName)
return ""
}
ct = newCt
file = ct.Files[filename]
c.logger.Debug().Msgf("Reinserted torrent %s", ct.Name)
}
c.logger.Trace().Msgf("Getting download link for %s", filename)
downloadLink, accountId, err := c.client.GetDownloadLink(ct.Torrent, &file)
if err != nil {
if errors.Is(err, request.HosterUnavailableError) {
torrentName := ct.Name
c.logger.Debug().Err(err).Msgf("Hoster is unavailable. Triggering repair for %s", torrentName)
ct, err := c.reInsertTorrent(ct.Torrent)
if err != nil {
c.logger.Debug().Err(err).Msgf("Failed to reinsert torrent %s", torrentName)
return ""
}
c.logger.Debug().Msgf("Reinserted torrent %s", ct.Name)
file = ct.Files[filename]
// Retry getting the download link
downloadLink, accountId, err = c.client.GetDownloadLink(ct.Torrent, &file)
if err != nil {
c.logger.Debug().Err(err).Msgf("Failed to get download link for %s", file.Link)
return ""
}
if downloadLink == "" {
c.logger.Debug().Msgf("Download link is empty for %s", file.Link)
return ""
}
file.DownloadLink = downloadLink
file.Generated = time.Now()
file.AccountId = accountId
ct.Files[filename] = file
go func() {
c.updateDownloadLink(file.Link, downloadLink, accountId)
c.setTorrent(ct)
}()
return file.DownloadLink
} else if errors.Is(err, request.TrafficExceededError) {
// This is likely a fair usage limit error
} else {
c.logger.Debug().Err(err).Msgf("Failed to get download link for %s", file.Link)
return ""
}
}
file.DownloadLink = downloadLink
file.Generated = time.Now()
file.AccountId = accountId
ct.Files[filename] = file
go func() {
c.updateDownloadLink(file.Link, downloadLink, file.AccountId)
c.setTorrent(ct)
}()
return file.DownloadLink
}
func (c *Cache) GenerateDownloadLinks(t *CachedTorrent) {
if err := c.client.GenerateDownloadLinks(t.Torrent); err != nil {
c.logger.Error().Err(err).Msg("Failed to generate download links")
}
for _, file := range t.Files {
c.updateDownloadLink(file.Link, file.DownloadLink, file.AccountId)
}
c.SaveTorrent(t)
}
func (c *Cache) AddTorrent(t *types.Torrent) error {
if len(t.Files) == 0 {
if err := c.client.UpdateTorrent(t); err != nil {
return fmt.Errorf("failed to update torrent: %w", err)
}
}
addedOn, err := time.Parse(time.RFC3339, t.Added)
if err != nil {
addedOn = time.Now()
}
ct := &CachedTorrent{
Torrent: t,
IsComplete: len(t.Files) > 0,
AddedOn: addedOn,
}
c.setTorrent(ct)
c.refreshListings()
go c.GenerateDownloadLinks(ct)
return nil
}
func (c *Cache) updateDownloadLink(link, downloadLink string, accountId string) {
c.downloadLinks.Store(link, downloadLinkCache{
Link: downloadLink,
ExpiresAt: time.Now().Add(c.autoExpiresLinksAfter),
AccountId: accountId,
})
}
func (c *Cache) checkDownloadLink(link string) string {
if dl, ok := c.downloadLinks.Load(link); ok {
if dl.ExpiresAt.After(time.Now()) && !c.IsDownloadLinkInvalid(dl.Link) {
return dl.Link
}
}
return ""
}
func (c *Cache) MarkDownloadLinkAsInvalid(link, downloadLink, reason string) {
c.invalidDownloadLinks.Store(downloadLink, reason)
// Remove the download api key from active
if reason == "bandwidth_exceeded" {
if dl, ok := c.downloadLinks.Load(link); ok {
if dl.AccountId != "" && dl.Link == downloadLink {
c.client.DisableAccount(dl.AccountId)
}
}
}
c.downloadLinks.Delete(link) // Remove the download link from cache
}
func (c *Cache) IsDownloadLinkInvalid(downloadLink string) bool {
if reason, ok := c.invalidDownloadLinks.Load(downloadLink); ok {
c.logger.Debug().Msgf("Download link %s is invalid: %s", downloadLink, reason)
return true
}
return false
}
func (c *Cache) GetClient() types.Client {
return c.client
}
func (c *Cache) DeleteTorrent(id string) error {
c.logger.Info().Msgf("Deleting torrent %s", id)
c.torrentsRefreshMu.Lock()
defer c.torrentsRefreshMu.Unlock()
if t, ok := c.torrents.Load(id); ok {
_ = c.client.DeleteTorrent(id) // Skip error handling, we don't care if it fails
c.torrents.Delete(id)
c.torrentsNames.Delete(c.GetTorrentFolder(t.Torrent))
c.removeFromDB(id)
c.refreshListings()
}
return nil
}
func (c *Cache) DeleteTorrents(ids []string) {
c.logger.Info().Msgf("Deleting %d torrents", len(ids))
for _, id := range ids {
if t, ok := c.torrents.Load(id); ok {
c.torrents.Delete(id)
c.torrentsNames.Delete(c.GetTorrentFolder(t.Torrent))
c.removeFromDB(id)
}
}
c.refreshListings()
}
func (c *Cache) removeFromDB(torrentId string) {
// Moves the torrent file to the trash
filePath := filepath.Join(c.dir, torrentId+".json")
// Check if the file exists
if _, err := os.Stat(filePath); errors.Is(err, os.ErrNotExist) {
return
}
// Move the file to the trash
trashPath := filepath.Join(c.dir, "trash", torrentId+".json")
if err := os.MkdirAll(filepath.Dir(trashPath), 0755); err != nil {
return
}
if err := os.Rename(filePath, trashPath); err != nil {
return
}
}
func (c *Cache) OnRemove(torrentId string) {
c.logger.Debug().Msgf("OnRemove triggered for %s", torrentId)
err := c.DeleteTorrent(torrentId)
if err != nil {
c.logger.Error().Err(err).Msgf("Failed to delete torrent: %s", torrentId)
return
}
}
func (c *Cache) GetLogger() zerolog.Logger {
return c.logger
}

View File

@@ -0,0 +1,86 @@
package debrid
import (
"fmt"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/debrid/alldebrid"
"github.com/sirrobot01/decypharr/pkg/debrid/debrid_link"
"github.com/sirrobot01/decypharr/pkg/debrid/realdebrid"
"github.com/sirrobot01/decypharr/pkg/debrid/torbox"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
func createDebridClient(dc config.Debrid) types.Client {
switch dc.Name {
case "realdebrid":
return realdebrid.New(dc)
case "torbox":
return torbox.New(dc)
case "debridlink":
return debrid_link.New(dc)
case "alldebrid":
return alldebrid.New(dc)
default:
return realdebrid.New(dc)
}
}
func ProcessTorrent(d *Engine, magnet *utils.Magnet, a *arr.Arr, isSymlink, overrideDownloadUncached bool) (*types.Torrent, error) {
debridTorrent := &types.Torrent{
InfoHash: magnet.InfoHash,
Magnet: magnet,
Name: magnet.Name,
Arr: a,
Size: magnet.Size,
Files: make(map[string]types.File),
}
errs := make([]error, 0)
for index, db := range d.Clients {
logger := db.GetLogger()
logger.Info().Msgf("Processing debrid: %s", db.GetName())
// Override first, arr second, debrid third
if overrideDownloadUncached {
debridTorrent.DownloadUncached = true
} else if a.DownloadUncached != nil {
// Arr cached is set
debridTorrent.DownloadUncached = *a.DownloadUncached
} else {
debridTorrent.DownloadUncached = db.GetDownloadUncached()
}
logger.Info().Msgf("Torrent Hash: %s", debridTorrent.InfoHash)
if db.GetCheckCached() {
hash, exists := db.IsAvailable([]string{debridTorrent.InfoHash})[debridTorrent.InfoHash]
if !exists || !hash {
logger.Info().Msgf("Torrent: %s is not cached", debridTorrent.Name)
continue
} else {
logger.Info().Msgf("Torrent: %s is cached (or downloading)", debridTorrent.Name)
}
}
dbt, err := db.SubmitMagnet(debridTorrent)
if dbt != nil {
dbt.Arr = a
}
if err != nil || dbt == nil || dbt.Id == "" {
errs = append(errs, err)
continue
}
logger.Info().Msgf("Torrent: %s(id=%s) submitted to %s", dbt.Name, dbt.Id, db.GetName())
d.LastUsed = index
return db.CheckStatus(dbt, isSymlink)
}
err := fmt.Errorf("failed to process torrent")
for _, e := range errs {
err = fmt.Errorf("%w\n%w", err, e)
}
return nil, err
}

View File

@@ -0,0 +1,55 @@
package debrid
import (
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
type Engine struct {
Clients map[string]types.Client
Caches map[string]*Cache
LastUsed string
}
func NewEngine() *Engine {
cfg := config.Get()
clients := make(map[string]types.Client)
caches := make(map[string]*Cache)
for _, dc := range cfg.Debrids {
client := createDebridClient(dc)
logger := client.GetLogger()
if dc.UseWebDav {
caches[dc.Name] = New(dc, client)
logger.Info().Msg("Debrid Service started with WebDAV")
} else {
logger.Info().Msg("Debrid Service started")
}
clients[dc.Name] = client
}
d := &Engine{
Clients: clients,
LastUsed: "",
Caches: caches,
}
return d
}
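// Get returns the most recently used client, or an arbitrary configured client if none has been used yet.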
func (d *Engine) Get() types.Client {
if d.LastUsed == "" {
for _, c := range d.Clients {
return c
}
}
return d.Clients[d.LastUsed]
}
func (d *Engine) GetByName(name string) types.Client {
return d.Clients[name]
}
func (d *Engine) GetDebrids() map[string]types.Client {
return d.Clients
}
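// Example (illustrative only, not part of this package): one way the engine and the
// per-debrid WebDAV caches could be wired together at startup. The startDebrid name
// and error handling here are hypothetical.
//
//	func startDebrid(ctx context.Context) (*Engine, error) {
//		engine := NewEngine()
//		for name, cache := range engine.Caches {
//			if err := cache.Start(ctx); err != nil {
//				return nil, fmt.Errorf("start webdav cache %s: %w", name, err)
//			}
//		}
//		return engine, nil
//	}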

View File

@@ -0,0 +1,249 @@
package debrid
import (
"fmt"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"io"
"net/http"
"os"
"slices"
"sort"
"strings"
"sync"
"time"
)
type fileInfo struct {
name string
size int64
mode os.FileMode
modTime time.Time
isDir bool
}
func (fi *fileInfo) Name() string { return fi.name }
func (fi *fileInfo) Size() int64 { return fi.size }
func (fi *fileInfo) Mode() os.FileMode { return fi.mode }
func (fi *fileInfo) ModTime() time.Time { return fi.modTime }
func (fi *fileInfo) IsDir() bool { return fi.isDir }
func (fi *fileInfo) Sys() interface{} { return nil }
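// refreshListings rebuilds the cached directory listing (one virtual folder per torrent, sorted by name),
// regenerates the PROPFIND XML and asks rclone to drop its VFS directory cache.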
func (c *Cache) refreshListings() {
if c.listingRefreshMu.TryLock() {
defer c.listingRefreshMu.Unlock()
} else {
return
}
// Copy the torrents to a string|time map
torrentsTime := make(map[string]time.Time, c.torrents.Size())
torrents := make([]string, 0, c.torrents.Size())
c.torrentsNames.Range(func(key string, value *CachedTorrent) bool {
torrentsTime[key] = value.AddedOn
torrents = append(torrents, key)
return true
})
// Sort the torrents by name
sort.Strings(torrents)
files := make([]os.FileInfo, 0, len(torrents))
for _, t := range torrents {
files = append(files, &fileInfo{
name: t,
size: 0,
mode: 0755 | os.ModeDir,
modTime: torrentsTime[t],
isDir: true,
})
}
// Atomic store of the complete ready-to-use slice
c.listings.Store(files)
_ = c.refreshXml()
if err := c.RefreshRclone(); err != nil {
c.logger.Trace().Err(err).Msg("Failed to refresh rclone") // silent error
}
}
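// refreshTorrents diffs the debrid account against the local cache: torrents that disappeared upstream
// are removed locally and newly added ones are processed by a bounded worker pool.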
func (c *Cache) refreshTorrents() {
if c.torrentsRefreshMu.TryLock() {
defer c.torrentsRefreshMu.Unlock()
} else {
return
}
// Create a copy of the current torrents to avoid concurrent issues
torrents := make(map[string]string, c.torrents.Size()) // a map of id and name
c.torrents.Range(func(key string, t *CachedTorrent) bool {
torrents[t.Id] = t.Name
return true
})
// Get new torrents from the debrid service
debTorrents, err := c.client.GetTorrents()
if err != nil {
c.logger.Debug().Err(err).Msg("Failed to get torrents")
return
}
if len(debTorrents) == 0 {
// Maybe an error occurred
return
}
// Get the newly added torrents only
_newTorrents := make([]*types.Torrent, 0)
idStore := make(map[string]struct{}, len(debTorrents))
for _, t := range debTorrents {
idStore[t.Id] = struct{}{}
if _, ok := torrents[t.Id]; !ok {
_newTorrents = append(_newTorrents, t)
}
}
// Check for deleted torrents
deletedTorrents := make([]string, 0)
for id := range torrents {
if _, ok := idStore[id]; !ok {
deletedTorrents = append(deletedTorrents, id)
}
}
newTorrents := make([]*types.Torrent, 0)
for _, t := range _newTorrents {
if !slices.Contains(deletedTorrents, t.Id) {
newTorrents = append(newTorrents, t)
}
}
if len(deletedTorrents) > 0 {
c.DeleteTorrents(deletedTorrents)
}
if len(newTorrents) == 0 {
return
}
c.logger.Info().Msgf("Found %d new torrents", len(newTorrents))
workChan := make(chan *types.Torrent, min(100, len(newTorrents)))
errChan := make(chan error, len(newTorrents))
var wg sync.WaitGroup
for i := 0; i < c.workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for t := range workChan {
select {
case <-c.ctx.Done():
return
default:
}
if err := c.ProcessTorrent(t, true); err != nil {
c.logger.Debug().Err(err).Msgf("Failed to process new torrent %s", t.Id)
errChan <- err
}
}
}()
}
feedLoop:
for _, t := range newTorrents {
select {
case <-c.ctx.Done():
break feedLoop // stop feeding once the context is cancelled
case workChan <- t:
}
}
close(workChan)
wg.Wait()
c.logger.Debug().Msgf("Processed %d new torrents", len(newTorrents))
}
func (c *Cache) RefreshRclone() error {
client := request.Default()
cfg := config.Get().WebDav
if cfg.RcUrl == "" {
return nil
}
// Create form data
data := "dir=__all__&dir2=torrents"
// Create a POST request with form URL-encoded content
forgetReq, err := http.NewRequest("POST", fmt.Sprintf("%s/vfs/forget", cfg.RcUrl), strings.NewReader(data))
if err != nil {
return err
}
if cfg.RcUser != "" && cfg.RcPass != "" {
forgetReq.SetBasicAuth(cfg.RcUser, cfg.RcPass)
}
// Set the appropriate content type for form data
forgetReq.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// Send the request
forgetResp, err := client.Do(forgetReq)
if err != nil {
return err
}
defer forgetResp.Body.Close()
if forgetResp.StatusCode != 200 {
body, _ := io.ReadAll(forgetResp.Body)
return fmt.Errorf("failed to forget rclone: %s - %s", forgetResp.Status, string(body))
}
return nil
}
func (c *Cache) refreshTorrent(t *CachedTorrent) *CachedTorrent {
_torrent := t.Torrent
err := c.client.UpdateTorrent(_torrent)
if err != nil {
c.logger.Debug().Msgf("Failed to get torrent files for %s: %v", t.Id, err)
return nil
}
if len(t.Files) == 0 {
return nil
}
addedOn, err := time.Parse(time.RFC3339, _torrent.Added)
if err != nil {
addedOn = time.Now()
}
ct := &CachedTorrent{
Torrent: _torrent,
AddedOn: addedOn,
IsComplete: len(t.Files) > 0,
}
c.setTorrent(ct)
return ct
}
func (c *Cache) refreshDownloadLinks() {
if c.downloadLinksRefreshMu.TryLock() {
defer c.downloadLinksRefreshMu.Unlock()
} else {
return
}
downloadLinks, err := c.client.GetDownloads()
if err != nil {
c.logger.Debug().Err(err).Msg("Failed to get download links")
}
for k, v := range downloadLinks {
// keep links that are still within the auto-expiry window; drop the rest
timeSince := time.Since(v.Generated)
if timeSince < c.autoExpiresLinksAfter {
c.downloadLinks.Store(k, downloadLinkCache{
Link: v.DownloadLink,
ExpiresAt: v.Generated.Add(c.autoExpiresLinksAfter - timeSince),
})
} else {
c.downloadLinks.Delete(k)
}
}
c.logger.Debug().Msgf("Refreshed %d download links", len(downloadLinks))
}

168
pkg/debrid/debrid/repair.go Normal file
View File

@@ -0,0 +1,168 @@
package debrid
import (
"errors"
"fmt"
"github.com/puzpuzpuz/xsync/v3"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"slices"
"time"
)
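// IsTorrentBroken reports whether any of the given files (all files when filenames is empty)
// is missing its restricted link, or has a link the hoster no longer serves.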
func (c *Cache) IsTorrentBroken(t *CachedTorrent, filenames []string) bool {
// Check torrent files
isBroken := false
files := make(map[string]types.File)
if len(filenames) > 0 {
for name, f := range t.Files {
if slices.Contains(filenames, name) {
files[name] = f
}
}
} else {
files = t.Files
}
// Check empty links
for _, f := range files {
// Check if file is missing
if f.Link == "" {
// refresh torrent and then break
t = c.refreshTorrent(t)
break
}
}
files = t.Files
for _, f := range files {
// Check if file link is still missing
if f.Link == "" {
isBroken = true
break
} else {
// Check if file.Link not in the downloadLink Cache
if err := c.client.CheckLink(f.Link); err != nil {
if errors.Is(err, request.HosterUnavailableError) {
isBroken = true
break
}
}
}
}
return isBroken
}
func (c *Cache) repairWorker() {
// This watches a channel for torrents to repair
for req := range c.repairChan {
torrentId := req.TorrentID
if _, inProgress := c.repairsInProgress.Load(torrentId); inProgress {
c.logger.Debug().Str("torrentId", torrentId).Msg("Skipping duplicate repair request")
continue
}
c.logger.Debug().Str("torrentId", torrentId).Msg("Received repair request")
// Get the torrent from the cache
cachedTorrent, ok := c.torrents.Load(torrentId)
if !ok || cachedTorrent == nil {
c.logger.Warn().Str("torrentId", torrentId).Msg("Torrent not found in cache")
continue
}
switch req.Type {
case RepairTypeReinsert:
c.logger.Debug().Str("torrentId", torrentId).Msg("Reinserting torrent")
// reInsertTorrent marks the repair as in progress and clears the marker when it returns
if _, err := c.reInsertTorrent(cachedTorrent.Torrent); err != nil {
c.logger.Error().Err(err).Str("torrentId", torrentId).Msg("Failed to reinsert torrent")
continue
}
case RepairTypeDelete:
c.logger.Debug().Str("torrentId", torrentId).Msg("Deleting torrent")
if err := c.DeleteTorrent(torrentId); err != nil {
c.logger.Error().Err(err).Str("torrentId", torrentId).Msg("Failed to delete torrent")
continue
}
}
}
}
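// reInsertTorrent re-adds a torrent to the debrid service from its magnet (reconstructing the magnet
// from the info hash if needed), waits for it to be ready, caches the new copy and deletes the old one.
// It claims the per-torrent repair slot for the duration of the call.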
func (c *Cache) reInsertTorrent(torrent *types.Torrent) (*CachedTorrent, error) {
// Claim the repair slot for this torrent; bail out if a repair is already running
oldID := torrent.Id
if _, loaded := c.repairsInProgress.LoadOrStore(oldID, struct{}{}); loaded {
return nil, fmt.Errorf("repair already in progress for torrent %s", oldID)
}
defer c.repairsInProgress.Delete(oldID)
// If the magnet is missing, reconstruct it from the info hash and name
if torrent.Magnet == nil {
torrent.Magnet = utils.ConstructMagnet(torrent.InfoHash, torrent.Name)
}
defer func() {
err := c.DeleteTorrent(oldID)
if err != nil {
c.logger.Error().Err(err).Str("torrentId", oldID).Msg("Failed to delete old torrent")
}
}()
// Submit the magnet to the debrid service
torrent.Id = ""
var err error
torrent, err = c.client.SubmitMagnet(torrent)
if err != nil {
// Remove the old torrent from the cache and debrid service
return nil, fmt.Errorf("failed to submit magnet: %w", err)
}
// Check if the torrent was submitted
if torrent == nil || torrent.Id == "" {
return nil, fmt.Errorf("failed to submit magnet: empty torrent")
}
torrent.DownloadUncached = false // Set to false, avoid re-downloading
torrent, err = c.client.CheckStatus(torrent, true)
if err != nil && torrent != nil {
// Torrent is likely in progress
_ = c.DeleteTorrent(torrent.Id)
return nil, fmt.Errorf("failed to check status: %w", err)
}
if torrent == nil {
return nil, fmt.Errorf("failed to check status: empty torrent")
}
// Update the torrent in the cache
addedOn, err := time.Parse(time.RFC3339, torrent.Added)
if err != nil {
addedOn = time.Now()
}
// Make sure every file got a link back; otherwise drop the new copy
for _, f := range torrent.Files {
if f.Link == "" {
// Delete the new torrent
_ = c.DeleteTorrent(torrent.Id)
return nil, fmt.Errorf("failed to reinsert torrent: empty link")
}
}
ct := &CachedTorrent{
Torrent: torrent,
IsComplete: len(torrent.Files) > 0,
AddedOn: addedOn,
}
c.setTorrent(ct)
c.refreshListings()
return ct, nil
}
func (c *Cache) resetInvalidLinks() {
c.invalidDownloadLinks = xsync.NewMapOf[string, string]()
c.client.ResetActiveDownloadKeys() // Reset the active download keys
}

View File

@@ -0,0 +1,75 @@
package debrid
import "time"
func (c *Cache) Refresh() error {
// For now, only the torrent refresh and the invalid-links reset run on a schedule; the download-links worker stays disabled.
//go c.refreshDownloadLinksWorker()
go c.refreshTorrentsWorker()
go c.resetInvalidLinksWorker()
return nil
}
func (c *Cache) refreshDownloadLinksWorker() {
refreshTicker := time.NewTicker(c.downloadLinksRefreshInterval)
defer refreshTicker.Stop()
for range refreshTicker.C {
c.refreshDownloadLinks()
}
}
func (c *Cache) refreshTorrentsWorker() {
refreshTicker := time.NewTicker(c.torrentRefreshInterval)
defer refreshTicker.Stop()
for range refreshTicker.C {
c.refreshTorrents()
}
}
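// resetInvalidLinksWorker clears the invalid-link cache and re-enables disabled download keys once a day
// at 00:00 CET, which is assumed to be when debrid providers reset their fair-usage counters.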
func (c *Cache) resetInvalidLinksWorker() {
// Calculate time until next 00:00 CET
now := time.Now()
loc, err := time.LoadLocation("CET")
if err != nil {
// Fallback if CET timezone can't be loaded
c.logger.Error().Err(err).Msg("Failed to load CET timezone, using local time")
loc = time.Local
}
nowInCET := now.In(loc)
next := time.Date(
nowInCET.Year(),
nowInCET.Month(),
nowInCET.Day(),
0, 0, 0, 0,
loc,
)
// If it's already past 00:00 CET today, schedule for tomorrow
if nowInCET.After(next) {
next = next.Add(24 * time.Hour)
}
// Duration until the next 00:00 CET
initialWait := next.Sub(nowInCET)
// Set up initial timer
timer := time.NewTimer(initialWait)
defer timer.Stop()
c.logger.Debug().Msgf("Scheduled Links Reset at %s (in %s)", next.Format("2006-01-02 15:04:05 MST"), initialWait)
// Wait for the first execution
<-timer.C
c.resetInvalidLinks()
// Now set up the daily ticker
refreshTicker := time.NewTicker(24 * time.Hour)
defer refreshTicker.Stop()
for range refreshTicker.C {
c.resetInvalidLinks()
}
}

125
pkg/debrid/debrid/xml.go Normal file
View File

@@ -0,0 +1,125 @@
package debrid
import (
"fmt"
"github.com/beevik/etree"
"github.com/sirrobot01/decypharr/internal/request"
"net/http"
"os"
path "path/filepath"
"time"
)
func (c *Cache) refreshXml() error {
parents := []string{"__all__", "torrents"}
torrents := c.GetListing()
for _, parent := range parents {
if err := c.refreshParentXml(torrents, parent); err != nil {
return fmt.Errorf("failed to refresh XML for %s: %v", parent, err)
}
}
c.logger.Trace().Msgf("Refreshed XML cache for %s", c.client.GetName())
return nil
}
func (c *Cache) refreshParentXml(torrents []os.FileInfo, parent string) error {
// Define the WebDAV namespace
davNS := "DAV:"
// Create the root multistatus element
doc := etree.NewDocument()
doc.CreateProcInst("xml", `version="1.0" encoding="UTF-8"`)
multistatus := doc.CreateElement("D:multistatus")
multistatus.CreateAttr("xmlns:D", davNS)
// Get the current timestamp in RFC1123 format (WebDAV format)
currentTime := time.Now().UTC().Format(http.TimeFormat)
// Add the parent directory
baseUrl := path.Clean(fmt.Sprintf("/webdav/%s/%s", c.client.GetName(), parent))
parentPath := fmt.Sprintf("%s/", baseUrl)
addDirectoryResponse(multistatus, parentPath, parent, currentTime)
// Add torrents to the XML
for _, torrent := range torrents {
name := torrent.Name()
// Note the path structure change - parent first, then torrent name
torrentPath := fmt.Sprintf("/webdav/%s/%s/%s/",
c.client.GetName(),
parent,
name,
)
addDirectoryResponse(multistatus, torrentPath, name, currentTime)
}
// Convert to XML string
xmlData, err := doc.WriteToBytes()
if err != nil {
return fmt.Errorf("failed to generate XML: %v", err)
}
// Store in cache
key0 := fmt.Sprintf("propfind:%s:0", baseUrl)
key1 := fmt.Sprintf("propfind:%s:1", baseUrl)
res := PropfindResponse{
Data: xmlData,
GzippedData: request.Gzip(xmlData),
Ts: time.Now(),
}
c.PropfindResp.Store(key0, res)
c.PropfindResp.Store(key1, res)
return nil
}
func addDirectoryResponse(multistatus *etree.Element, href, displayName, modTime string) *etree.Element {
responseElem := multistatus.CreateElement("D:response")
// Add href - ensure it's properly formatted
hrefElem := responseElem.CreateElement("D:href")
hrefElem.SetText(href)
// Add propstat
propstatElem := responseElem.CreateElement("D:propstat")
// Add prop
propElem := propstatElem.CreateElement("D:prop")
// Add resource type (collection = directory)
resourceTypeElem := propElem.CreateElement("D:resourcetype")
resourceTypeElem.CreateElement("D:collection")
// Add display name
displayNameElem := propElem.CreateElement("D:displayname")
displayNameElem.SetText(displayName)
// Add last modified time
lastModElem := propElem.CreateElement("D:getlastmodified")
lastModElem.SetText(modTime)
// Add content type for directories
contentTypeElem := propElem.CreateElement("D:getcontenttype")
contentTypeElem.SetText("httpd/unix-directory")
// Add length (size) - directories typically have zero size
contentLengthElem := propElem.CreateElement("D:getcontentlength")
contentLengthElem.SetText("0")
// Add supported lock
lockElem := propElem.CreateElement("D:supportedlock")
lockEntryElem := lockElem.CreateElement("D:lockentry")
lockScopeElem := lockEntryElem.CreateElement("D:lockscope")
lockScopeElem.CreateElement("D:exclusive")
lockTypeElem := lockEntryElem.CreateElement("D:locktype")
lockTypeElem.CreateElement("D:write")
// Add status
statusElem := propstatElem.CreateElement("D:status")
statusElem.SetText("HTTP/1.1 200 OK")
return responseElem
}

View File

@@ -0,0 +1,387 @@
package debrid_link
import (
"bytes"
"fmt"
"github.com/goccy/go-json"
"github.com/puzpuzpuz/xsync/v3"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"slices"
"strconv"
"time"
"net/http"
"strings"
)
type DebridLink struct {
Name string
Host string `json:"host"`
APIKey string
DownloadKeys *xsync.MapOf[string, types.Account]
DownloadUncached bool
client *request.Client
MountPath string
logger zerolog.Logger
CheckCached bool
}
func (dl *DebridLink) GetName() string {
return dl.Name
}
func (dl *DebridLink) GetLogger() zerolog.Logger {
return dl.logger
}
func (dl *DebridLink) IsAvailable(hashes []string) map[string]bool {
// Check if the infohashes are available in the local cache
result := make(map[string]bool)
// Divide hashes into groups of 100
for i := 0; i < len(hashes); i += 100 {
end := i + 100
if end > len(hashes) {
end = len(hashes)
}
// Filter out empty strings
validHashes := make([]string, 0, end-i)
for _, hash := range hashes[i:end] {
if hash != "" {
validHashes = append(validHashes, hash)
}
}
// If no valid hashes in this batch, continue to the next batch
if len(validHashes) == 0 {
continue
}
hashStr := strings.Join(validHashes, ",")
url := fmt.Sprintf("%s/seedbox/cached/%s", dl.Host, hashStr)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := dl.client.MakeRequest(req)
if err != nil {
dl.logger.Info().Msgf("Error checking availability: %v", err)
return result
}
var data AvailableResponse
err = json.Unmarshal(resp, &data)
if err != nil {
dl.logger.Info().Msgf("Error marshalling availability: %v", err)
return result
}
if data.Value == nil {
return result
}
value := *data.Value
for _, h := range hashes[i:end] {
_, exists := value[h]
if exists {
result[h] = true
}
}
}
return result
}
func (dl *DebridLink) UpdateTorrent(t *types.Torrent) error {
url := fmt.Sprintf("%s/seedbox/list?ids=%s", dl.Host, t.Id)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := dl.client.MakeRequest(req)
if err != nil {
return err
}
var res TorrentInfo
err = json.Unmarshal(resp, &res)
if err != nil {
return err
}
if !res.Success {
return fmt.Errorf("error getting torrent")
}
if res.Value == nil {
return fmt.Errorf("torrent not found")
}
dt := *res.Value
if len(dt) == 0 {
return fmt.Errorf("torrent not found")
}
data := dt[0]
status := "downloading"
if data.Status == 100 {
status = "downloaded"
}
name := utils.RemoveInvalidChars(data.Name)
t.Id = data.ID
t.Name = name
t.Bytes = data.TotalSize
t.Folder = name
t.Progress = data.DownloadPercent
t.Status = status
t.Speed = data.DownloadSpeed
t.Seeders = data.PeersConnected
t.Filename = name
t.OriginalFilename = name
cfg := config.Get()
for _, f := range data.Files {
if !cfg.IsSizeAllowed(f.Size) {
continue
}
file := types.File{
Id: f.ID,
Name: f.Name,
Size: f.Size,
Path: f.Name,
DownloadLink: f.DownloadURL,
Link: f.DownloadURL,
}
t.Files[f.Name] = file
}
return nil
}
func (dl *DebridLink) SubmitMagnet(t *types.Torrent) (*types.Torrent, error) {
url := fmt.Sprintf("%s/seedbox/add", dl.Host)
payload := map[string]string{"url": t.Magnet.Link}
jsonPayload, _ := json.Marshal(payload)
req, _ := http.NewRequest(http.MethodPost, url, bytes.NewBuffer(jsonPayload))
resp, err := dl.client.MakeRequest(req)
if err != nil {
return nil, err
}
var res SubmitTorrentInfo
err = json.Unmarshal(resp, &res)
if err != nil {
return nil, err
}
if !res.Success || res.Value == nil {
return nil, fmt.Errorf("error adding torrent")
}
data := *res.Value
status := "downloading"
name := utils.RemoveInvalidChars(data.Name)
t.Id = data.ID
t.Name = name
t.Bytes = data.TotalSize
t.Folder = name
t.Progress = data.DownloadPercent
t.Status = status
t.Speed = data.DownloadSpeed
t.Seeders = data.PeersConnected
t.Filename = name
t.OriginalFilename = name
t.MountPath = dl.MountPath
t.Debrid = dl.Name
for _, f := range data.Files {
file := types.File{
Id: f.ID,
Name: f.Name,
Size: f.Size,
Path: f.Name,
Link: f.DownloadURL,
DownloadLink: f.DownloadURL,
Generated: time.Now(),
}
t.Files[f.Name] = file
}
return t, nil
}
func (dl *DebridLink) CheckStatus(torrent *types.Torrent, isSymlink bool) (*types.Torrent, error) {
for {
err := dl.UpdateTorrent(torrent)
if err != nil || torrent == nil {
return torrent, err
}
status := torrent.Status
if status == "downloaded" {
dl.logger.Info().Msgf("Torrent: %s downloaded", torrent.Name)
err = dl.GenerateDownloadLinks(torrent)
if err != nil {
return torrent, err
}
break
} else if slices.Contains(dl.GetDownloadingStatus(), status) {
if !torrent.DownloadUncached {
return torrent, fmt.Errorf("torrent: %s not cached", torrent.Name)
}
// Break out of the loop if the torrent is downloading.
// This is necessary to prevent infinite loop since we moved to sync downloading and async processing
return torrent, nil
} else {
return torrent, fmt.Errorf("torrent: %s has error", torrent.Name)
}
}
return torrent, nil
}
func (dl *DebridLink) DeleteTorrent(torrentId string) error {
url := fmt.Sprintf("%s/seedbox/%s/remove", dl.Host, torrentId)
req, _ := http.NewRequest(http.MethodDelete, url, nil)
if _, err := dl.client.MakeRequest(req); err != nil {
return err
}
dl.logger.Info().Msgf("Torrent: %s deleted from DebridLink", torrentId)
return nil
}
func (dl *DebridLink) GenerateDownloadLinks(t *types.Torrent) error {
// Download links are already generated
return nil
}
func (dl *DebridLink) GetDownloads() (map[string]types.DownloadLinks, error) {
return nil, nil
}
func (dl *DebridLink) GetDownloadLink(t *types.Torrent, file *types.File) (string, string, error) {
return file.DownloadLink, "0", nil
}
func (dl *DebridLink) GetDownloadingStatus() []string {
return []string{"downloading"}
}
func (dl *DebridLink) GetCheckCached() bool {
return dl.CheckCached
}
func (dl *DebridLink) GetDownloadUncached() bool {
return dl.DownloadUncached
}
func New(dc config.Debrid) *DebridLink {
rl := request.ParseRateLimit(dc.RateLimit)
headers := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
"Content-Type": "application/json",
}
_log := logger.New(dc.Name)
client := request.New(
request.WithHeaders(headers),
request.WithLogger(_log),
request.WithRateLimiter(rl),
request.WithProxy(dc.Proxy),
)
accounts := xsync.NewMapOf[string, types.Account]()
for idx, key := range dc.DownloadAPIKeys {
id := strconv.Itoa(idx)
accounts.Store(id, types.Account{
Name: key,
ID: id,
Token: key,
})
}
return &DebridLink{
Name: "debridlink",
Host: dc.Host,
APIKey: dc.APIKey,
DownloadKeys: accounts,
DownloadUncached: dc.DownloadUncached,
client: client,
MountPath: dc.Folder,
logger: logger.New(dc.Name),
CheckCached: dc.CheckCached,
}
}
func (dl *DebridLink) GetTorrents() ([]*types.Torrent, error) {
page := 0
perPage := 100
torrents := make([]*types.Torrent, 0)
for {
t, err := dl.getTorrents(page, perPage)
if err != nil {
break
}
if len(t) == 0 {
break
}
torrents = append(torrents, t...)
page++
}
return torrents, nil
}
func (dl *DebridLink) getTorrents(page, perPage int) ([]*types.Torrent, error) {
url := fmt.Sprintf("%s/seedbox/list?page=%d&perPage=%d", dl.Host, page, perPage)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := dl.client.MakeRequest(req)
torrents := make([]*types.Torrent, 0)
if err != nil {
return torrents, err
}
var res TorrentInfo
err = json.Unmarshal(resp, &res)
if err != nil {
dl.logger.Info().Msgf("Error unmarshalling torrent info: %s", err)
return torrents, err
}
if !res.Success || res.Value == nil {
return torrents, fmt.Errorf("error getting torrents")
}
data := *res.Value
if len(data) == 0 {
return torrents, nil
}
for _, t := range data {
if t.Status != 100 {
continue
}
torrent := &types.Torrent{
Id: t.ID,
Name: t.Name,
Bytes: t.TotalSize,
Status: "downloaded",
Filename: t.Name,
OriginalFilename: t.Name,
InfoHash: t.HashString,
Files: make(map[string]types.File),
Debrid: dl.Name,
MountPath: dl.MountPath,
}
cfg := config.Get()
for _, f := range t.Files {
if !cfg.IsSizeAllowed(f.Size) {
continue
}
file := types.File{
Id: f.ID,
Name: f.Name,
Size: f.Size,
Path: f.Name,
DownloadLink: f.DownloadURL,
Link: f.DownloadURL,
}
torrent.Files[f.Name] = file
}
torrents = append(torrents, torrent)
}
return torrents, nil
}
func (dl *DebridLink) CheckLink(link string) error {
return nil
}
func (dl *DebridLink) GetMountPath() string {
return dl.MountPath
}
func (dl *DebridLink) DisableAccount(accountId string) {
}
func (dl *DebridLink) ResetActiveDownloadKeys() {
}

View File

@@ -0,0 +1,45 @@
package debrid_link
type APIResponse[T any] struct {
Success bool `json:"success"`
Value *T `json:"value"` // Use pointer to allow nil
}
type AvailableResponse APIResponse[map[string]map[string]struct {
Name string `json:"name"`
HashString string `json:"hashString"`
Files []struct {
Name string `json:"name"`
Size int `json:"size"`
} `json:"files"`
}]
type debridLinkTorrentInfo struct {
ID string `json:"id"`
Name string `json:"name"`
HashString string `json:"hashString"`
UploadRatio float64 `json:"uploadRatio"`
ServerID string `json:"serverId"`
Wait bool `json:"wait"`
PeersConnected int `json:"peersConnected"`
Status int `json:"status"`
TotalSize int64 `json:"totalSize"`
Files []struct {
ID string `json:"id"`
Name string `json:"name"`
DownloadURL string `json:"downloadUrl"`
Size int64 `json:"size"`
DownloadPercent int `json:"downloadPercent"`
} `json:"files"`
Trackers []struct {
Announce string `json:"announce"`
} `json:"trackers"`
Created int64 `json:"created"`
DownloadPercent float64 `json:"downloadPercent"`
DownloadSpeed int64 `json:"downloadSpeed"`
UploadSpeed int64 `json:"uploadSpeed"`
}
type TorrentInfo APIResponse[[]debridLinkTorrentInfo]
type SubmitTorrentInfo APIResponse[debridLinkTorrentInfo]

View File

@@ -0,0 +1,728 @@
package realdebrid
import (
"bytes"
"errors"
"fmt"
"github.com/goccy/go-json"
"github.com/puzpuzpuz/xsync/v3"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"io"
"net/http"
gourl "net/url"
"path/filepath"
"slices"
"sort"
"strconv"
"strings"
"sync"
"time"
)
type RealDebrid struct {
Name string
Host string `json:"host"`
APIKey string
DownloadKeys *xsync.MapOf[string, types.Account] // index | Account
DownloadUncached bool
client *request.Client
downloadClient *request.Client
MountPath string
logger zerolog.Logger
CheckCached bool
}
func New(dc config.Debrid) *RealDebrid {
rl := request.ParseRateLimit(dc.RateLimit)
headers := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
}
_log := logger.New(dc.Name)
accounts := xsync.NewMapOf[string, types.Account]()
firstDownloadKey := dc.APIKey
if len(dc.DownloadAPIKeys) > 0 {
firstDownloadKey = dc.DownloadAPIKeys[0]
}
for idx, key := range dc.DownloadAPIKeys {
id := strconv.Itoa(idx)
accounts.Store(id, types.Account{
Name: key,
ID: id,
Token: key,
})
}
downloadHeaders := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", firstDownloadKey),
}
downloadClient := request.New(
request.WithHeaders(downloadHeaders),
request.WithRateLimiter(rl),
request.WithLogger(_log),
request.WithMaxRetries(5),
request.WithRetryableStatus(429, 447),
request.WithProxy(dc.Proxy),
request.WithStatusCooldown(447, 10*time.Second), // 447 is a fair use error
)
client := request.New(
request.WithHeaders(headers),
request.WithRateLimiter(rl),
request.WithLogger(_log),
request.WithMaxRetries(5),
request.WithRetryableStatus(429),
request.WithProxy(dc.Proxy),
)
return &RealDebrid{
Name: "realdebrid",
Host: dc.Host,
APIKey: dc.APIKey,
DownloadKeys: accounts,
DownloadUncached: dc.DownloadUncached,
client: client,
downloadClient: downloadClient,
MountPath: dc.Folder,
logger: logger.New(dc.Name),
CheckCached: dc.CheckCached,
}
}
func (r *RealDebrid) GetName() string {
return r.Name
}
func (r *RealDebrid) GetLogger() zerolog.Logger {
return r.logger
}
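// getSelectedFiles maps the files Real-Debrid reports as selected to their restricted links,
// relying on the links slice being ordered the same way as the selected files.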
func getSelectedFiles(t *types.Torrent, data TorrentInfo) map[string]types.File {
selectedFiles := make([]types.File, 0)
for _, f := range data.Files {
if f.Selected == 1 {
name := filepath.Base(f.Path)
file := types.File{
Name: name,
Path: name,
Size: f.Bytes,
Id: strconv.Itoa(f.ID),
}
selectedFiles = append(selectedFiles, file)
}
}
files := make(map[string]types.File)
for index, f := range selectedFiles {
if index >= len(data.Links) {
break
}
f.Link = data.Links[index]
files[f.Name] = f
}
return files
}
// getTorrentFiles returns the torrent files from the torrent info,
// skipping samples and files excluded by the configured name/size filters.
func getTorrentFiles(t *types.Torrent, data TorrentInfo) map[string]types.File {
files := make(map[string]types.File)
cfg := config.Get()
idx := 0
for _, f := range data.Files {
name := filepath.Base(f.Path)
if utils.IsSampleFile(f.Path) {
// Skip sample files
continue
}
if !cfg.IsAllowedFile(name) {
continue
}
if !cfg.IsSizeAllowed(f.Bytes) {
continue
}
file := types.File{
Name: name,
Path: name,
Size: f.Bytes,
Id: strconv.Itoa(f.ID),
}
files[name] = file
idx++
}
return files
}
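// IsAvailable checks the instant-availability endpoint in batches of 200 hashes and reports
// which info hashes have at least one cached RD variant.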
func (r *RealDebrid) IsAvailable(hashes []string) map[string]bool {
// Check if the infohashes are available in the local cache
result := make(map[string]bool)
// Divide hashes into groups of 200
for i := 0; i < len(hashes); i += 200 {
end := i + 200
if end > len(hashes) {
end = len(hashes)
}
// Filter out empty strings
validHashes := make([]string, 0, end-i)
for _, hash := range hashes[i:end] {
if hash != "" {
validHashes = append(validHashes, hash)
}
}
// If no valid hashes in this batch, continue to the next batch
if len(validHashes) == 0 {
continue
}
hashStr := strings.Join(validHashes, "/")
url := fmt.Sprintf("%s/torrents/instantAvailability/%s", r.Host, hashStr)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := r.client.MakeRequest(req)
if err != nil {
r.logger.Info().Msgf("Error checking availability: %v", err)
return result
}
var data AvailabilityResponse
err = json.Unmarshal(resp, &data)
if err != nil {
r.logger.Info().Msgf("Error marshalling availability: %v", err)
return result
}
for _, h := range hashes[i:end] {
hosters, exists := data[strings.ToLower(h)]
if exists && len(hosters.Rd) > 0 {
result[h] = true
}
}
}
return result
}
func (r *RealDebrid) SubmitMagnet(t *types.Torrent) (*types.Torrent, error) {
if t.Magnet.IsTorrent() {
return r.addTorrent(t)
}
return r.addMagnet(t)
}
func (r *RealDebrid) addTorrent(t *types.Torrent) (*types.Torrent, error) {
url := fmt.Sprintf("%s/torrents/addTorrent", r.Host)
var data AddMagnetSchema
req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(t.Magnet.File))
if err != nil {
return nil, err
}
req.Header.Add("Content-Type", "application/x-bittorrent")
resp, err := r.client.MakeRequest(req)
if err != nil {
return nil, err
}
if err = json.Unmarshal(resp, &data); err != nil {
return nil, err
}
t.Id = data.Id
t.Debrid = r.Name
t.MountPath = r.MountPath
return t, nil
}
func (r *RealDebrid) addMagnet(t *types.Torrent) (*types.Torrent, error) {
url := fmt.Sprintf("%s/torrents/addMagnet", r.Host)
payload := gourl.Values{
"magnet": {t.Magnet.Link},
}
var data AddMagnetSchema
req, _ := http.NewRequest(http.MethodPost, url, strings.NewReader(payload.Encode()))
resp, err := r.client.MakeRequest(req)
if err != nil {
return nil, err
}
if err = json.Unmarshal(resp, &data); err != nil {
return nil, err
}
t.Id = data.Id
t.Debrid = r.Name
t.MountPath = r.MountPath
return t, nil
}
func (r *RealDebrid) UpdateTorrent(t *types.Torrent) error {
url := fmt.Sprintf("%s/torrents/info/%s", r.Host, t.Id)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := r.client.MakeRequest(req)
if err != nil {
return err
}
var data TorrentInfo
err = json.Unmarshal(resp, &data)
if err != nil {
return err
}
t.Name = data.Filename
t.Bytes = data.Bytes
t.Folder = data.OriginalFilename
t.Progress = data.Progress
t.Status = data.Status
t.Speed = data.Speed
t.Seeders = data.Seeders
t.Filename = data.Filename
t.OriginalFilename = data.OriginalFilename
t.Links = data.Links
t.MountPath = r.MountPath
t.Debrid = r.Name
t.Added = data.Added
t.Files = getSelectedFiles(t, data) // Get selected files
return nil
}
func (r *RealDebrid) CheckStatus(t *types.Torrent, isSymlink bool) (*types.Torrent, error) {
url := fmt.Sprintf("%s/torrents/info/%s", r.Host, t.Id)
req, _ := http.NewRequest(http.MethodGet, url, nil)
for {
resp, err := r.client.MakeRequest(req)
if err != nil {
r.logger.Info().Msgf("Error checking torrent status: %v", err)
return t, err
}
var data TorrentInfo
if err = json.Unmarshal(resp, &data); err != nil {
return t, err
}
status := data.Status
t.Name = data.Filename // Important because some magnets change the name
t.Folder = data.OriginalFilename
t.Filename = data.Filename
t.OriginalFilename = data.OriginalFilename
t.Bytes = data.Bytes
t.Progress = data.Progress
t.Speed = data.Speed
t.Seeders = data.Seeders
t.Links = data.Links
t.Status = status
t.Debrid = r.Name
t.MountPath = r.MountPath
if status == "waiting_files_selection" {
t.Files = getTorrentFiles(t, data)
if len(t.Files) == 0 {
return t, fmt.Errorf("no video files found")
}
filesId := make([]string, 0)
for _, f := range t.Files {
filesId = append(filesId, f.Id)
}
p := gourl.Values{
"files": {strings.Join(filesId, ",")},
}
payload := strings.NewReader(p.Encode())
req, _ := http.NewRequest(http.MethodPost, fmt.Sprintf("%s/torrents/selectFiles/%s", r.Host, t.Id), payload)
_, err = r.client.MakeRequest(req)
if err != nil {
return t, err
}
} else if status == "downloaded" {
t.Files = getSelectedFiles(t, data) // Get selected files
r.logger.Info().Msgf("Torrent: %s downloaded to RD", t.Name)
if !isSymlink {
err = r.GenerateDownloadLinks(t)
if err != nil {
return t, err
}
}
break
} else if slices.Contains(r.GetDownloadingStatus(), status) {
if !t.DownloadUncached {
return t, fmt.Errorf("torrent: %s not cached", t.Name)
}
return t, nil
} else {
return t, fmt.Errorf("torrent: %s has error: %s", t.Name, status)
}
}
return t, nil
}
func (r *RealDebrid) DeleteTorrent(torrentId string) error {
url := fmt.Sprintf("%s/torrents/delete/%s", r.Host, torrentId)
req, _ := http.NewRequest(http.MethodDelete, url, nil)
if _, err := r.client.MakeRequest(req); err != nil {
return err
}
r.logger.Info().Msgf("Torrent: %s deleted from RD", torrentId)
return nil
}
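// GenerateDownloadLinks unrestricts every selected file concurrently and stores the resulting download links back on the torrent.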
func (r *RealDebrid) GenerateDownloadLinks(t *types.Torrent) error {
filesCh := make(chan types.File, len(t.Files))
errCh := make(chan error, len(t.Files))
var wg sync.WaitGroup
wg.Add(len(t.Files))
for _, f := range t.Files {
go func(file types.File) {
defer wg.Done()
link, accountId, err := r.GetDownloadLink(t, &file)
if err != nil {
errCh <- err
return
}
file.DownloadLink = link
file.AccountId = accountId
filesCh <- file
}(f)
}
go func() {
wg.Wait()
close(filesCh)
close(errCh)
}()
// Collect results
files := make(map[string]types.File, len(t.Files))
for file := range filesCh {
files[file.Name] = file
}
// Check for errors
for err := range errCh {
if err != nil {
return err // Return the first error encountered
}
}
t.Files = files
return nil
}
func (r *RealDebrid) CheckLink(link string) error {
url := fmt.Sprintf("%s/unrestrict/check", r.Host)
payload := gourl.Values{
"link": {link},
}
req, _ := http.NewRequest(http.MethodPost, url, strings.NewReader(payload.Encode()))
resp, err := r.client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusNotFound {
return request.HosterUnavailableError // File has been removed
}
return nil
}
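// _getDownloadLink calls /unrestrict/link for a single file and maps known Real-Debrid error codes to sentinel errors.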
func (r *RealDebrid) _getDownloadLink(file *types.File) (string, error) {
url := fmt.Sprintf("%s/unrestrict/link/", r.Host)
payload := gourl.Values{
"link": {file.Link},
}
req, _ := http.NewRequest(http.MethodPost, url, strings.NewReader(payload.Encode()))
resp, err := r.downloadClient.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
// Read the response body to get the error message
b, err := io.ReadAll(resp.Body)
if err != nil {
return "", err
}
var data ErrorResponse
if err = json.Unmarshal(b, &data); err != nil {
return "", err
}
switch data.ErrorCode {
case 19, 24:
return "", request.HosterUnavailableError // file removed or link no longer available
case 23, 34, 36:
return "", request.TrafficExceededError // traffic/bandwidth exceeded
default:
return "", fmt.Errorf("realdebrid API error: Status: %d || Code: %d", resp.StatusCode, data.ErrorCode)
}
}
b, err := io.ReadAll(resp.Body)
if err != nil {
return "", err
}
var data UnrestrictResponse
if err = json.Unmarshal(b, &data); err != nil {
return "", err
}
return data.Download, nil
}
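// GetDownloadLink tries each active download account in turn, moving on to the next key when its traffic limit is exceeded.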
func (r *RealDebrid) GetDownloadLink(t *types.Torrent, file *types.File) (string, string, error) {
defer r.downloadClient.SetHeader("Authorization", fmt.Sprintf("Bearer %s", r.APIKey))
var (
downloadLink string
accountId string
err error
)
accounts := r.getActiveAccounts()
if len(accounts) < 1 {
// No active download keys. It's likely that the key has reached bandwidth limit
return "", "", fmt.Errorf("no active download keys")
}
for _, account := range accounts {
r.downloadClient.SetHeader("Authorization", fmt.Sprintf("Bearer %s", account.Token))
downloadLink, err = r._getDownloadLink(file)
if err != nil {
if errors.Is(err, request.TrafficExceededError) {
// This key has hit its traffic limit; try the next account
continue
}
// Any other error is fatal; don't retry with another key
return "", "", err
} else {
// If we successfully generated a link, break the loop
accountId = account.ID
file.AccountId = accountId
break
}
}
if downloadLink != "" {
// If we successfully generated a link, return it
return downloadLink, accountId, nil
}
// If we reach here, every active key failed (most likely traffic exceeded)
if errors.Is(err, request.TrafficExceededError) {
return "", "", request.TrafficExceededError
}
return "", "", fmt.Errorf("error generating download link: %v", err)
}
func (r *RealDebrid) GetCheckCached() bool {
return r.CheckCached
}
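// getTorrents fetches a single page of torrents and returns the total count reported by the X-Total-Count header.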
func (r *RealDebrid) getTorrents(offset int, limit int) (int, []*types.Torrent, error) {
url := fmt.Sprintf("%s/torrents?limit=%d", r.Host, limit)
torrents := make([]*types.Torrent, 0)
if offset > 0 {
url = fmt.Sprintf("%s&offset=%d", url, offset)
}
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := r.client.Do(req)
if err != nil {
return 0, torrents, err
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusNoContent {
return 0, torrents, nil
}
if resp.StatusCode != http.StatusOK {
return 0, torrents, fmt.Errorf("realdebrid API error: %d", resp.StatusCode)
}
body, err := io.ReadAll(resp.Body)
if err != nil {
return 0, torrents, err
}
totalItems, _ := strconv.Atoi(resp.Header.Get("X-Total-Count"))
var data []TorrentsResponse
if err = json.Unmarshal(body, &data); err != nil {
return 0, nil, err
}
filenames := map[string]struct{}{}
for _, t := range data {
if t.Status != "downloaded" {
continue
}
if _, exists := filenames[t.Filename]; exists {
continue
}
torrents = append(torrents, &types.Torrent{
Id: t.Id,
Name: t.Filename,
Bytes: t.Bytes,
Progress: t.Progress,
Status: t.Status,
Filename: t.Filename,
OriginalFilename: t.Filename,
Links: t.Links,
Files: make(map[string]types.File),
InfoHash: t.Hash,
Debrid: r.Name,
MountPath: r.MountPath,
Added: t.Added.Format(time.RFC3339),
})
filenames[t.Filename] = struct{}{}
}
return totalItems, torrents, nil
}
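// GetTorrents pages through /torrents until all items reported by the first request have been fetched.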
func (r *RealDebrid) GetTorrents() ([]*types.Torrent, error) {
limit := 5000
// Get first batch and total count
totalItems, firstBatch, err := r.getTorrents(0, limit)
if err != nil {
return nil, err
}
allTorrents := firstBatch
// Calculate remaining requests
remaining := totalItems - len(firstBatch)
if remaining <= 0 {
return allTorrents, nil
}
var fetchError error
// Calculate how many more batches we need (ceiling division)
batchCount := (remaining + limit - 1) / limit
for i := 1; i <= batchCount; i++ {
_, batch, err := r.getTorrents(i*limit, limit)
if err != nil {
fetchError = err
continue
}
allTorrents = append(allTorrents, batch...)
}
if fetchError != nil {
return nil, fetchError
}
return allTorrents, nil
}
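// GetDownloads lists previously generated download links, keyed by the original (restricted) link.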
func (r *RealDebrid) GetDownloads() (map[string]types.DownloadLinks, error) {
links := make(map[string]types.DownloadLinks)
offset := 0
limit := 1000
accounts := r.getActiveAccounts()
if len(accounts) < 1 {
// No active download keys. It's likely that the key has reached bandwidth limit
return nil, fmt.Errorf("no active download keys")
}
r.downloadClient.SetHeader("Authorization", fmt.Sprintf("Bearer %s", accounts[0].Token))
for {
dl, err := r._getDownloads(offset, limit)
if err != nil {
break
}
if len(dl) == 0 {
break
}
for _, d := range dl {
if _, exists := links[d.Link]; exists {
// Results are ordered by date, so keep only the first (newest) entry per link
continue
}
links[d.Link] = d
}
offset += len(dl)
}
return links, nil
}
func (r *RealDebrid) _getDownloads(offset int, limit int) ([]types.DownloadLinks, error) {
url := fmt.Sprintf("%s/downloads?limit=%d", r.Host, limit)
if offset > 0 {
url = fmt.Sprintf("%s&offset=%d", url, offset)
}
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := r.downloadClient.MakeRequest(req)
if err != nil {
return nil, err
}
var data []DownloadsResponse
if err = json.Unmarshal(resp, &data); err != nil {
return nil, err
}
links := make([]types.DownloadLinks, 0)
for _, d := range data {
links = append(links, types.DownloadLinks{
Filename: d.Filename,
Size: d.Filesize,
Link: d.Link,
DownloadLink: d.Download,
Generated: d.Generated,
Id: d.Id,
})
}
return links, nil
}
func (r *RealDebrid) GetDownloadingStatus() []string {
return []string{"downloading", "magnet_conversion", "queued", "compressing", "uploading"}
}
func (r *RealDebrid) GetDownloadUncached() bool {
return r.DownloadUncached
}
func (r *RealDebrid) GetMountPath() string {
return r.MountPath
}
func (r *RealDebrid) DisableAccount(accountId string) {
if value, ok := r.DownloadKeys.Load(accountId); ok {
value.Disabled = true
r.DownloadKeys.Store(accountId, value)
r.logger.Info().Msgf("Disabled account Index: %s", value.ID)
}
}
func (r *RealDebrid) ResetActiveDownloadKeys() {
r.DownloadKeys.Range(func(key string, value types.Account) bool {
value.Disabled = false
r.DownloadKeys.Store(key, value)
return true
})
}
func (r *RealDebrid) getActiveAccounts() []types.Account {
accounts := make([]types.Account, 0)
r.DownloadKeys.Range(func(key string, value types.Account) bool {
if value.Disabled {
return true
}
accounts = append(accounts, value)
return true
})
sort.Slice(accounts, func(i, j int) bool {
return accounts[i].ID < accounts[j].ID
})
return accounts
}


@@ -0,0 +1,141 @@
package realdebrid
import (
"fmt"
"github.com/goccy/go-json"
"time"
)
type AvailabilityResponse map[string]Hoster
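// UnmarshalJSON below tolerates both shapes the endpoint returns: a hash-to-hoster object, or an (often empty) array.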
func (r *AvailabilityResponse) UnmarshalJSON(data []byte) error {
// First, try to unmarshal as an object
var objectData map[string]Hoster
err := json.Unmarshal(data, &objectData)
if err == nil {
*r = objectData
return nil
}
// If that fails, try to unmarshal as an array
var arrayData []map[string]Hoster
err = json.Unmarshal(data, &arrayData)
if err != nil {
return fmt.Errorf("failed to unmarshal as both object and array: %v", err)
}
// If it's an array, use the first element
if len(arrayData) > 0 {
*r = arrayData[0]
return nil
}
// If it's an empty array, initialize as an empty map
*r = make(map[string]Hoster)
return nil
}
type Hoster struct {
Rd []map[string]FileVariant `json:"rd"`
}
func (h *Hoster) UnmarshalJSON(data []byte) error {
// Attempt to unmarshal into the expected structure (an object with an "rd" key)
type Alias Hoster
var obj Alias
if err := json.Unmarshal(data, &obj); err == nil {
*h = Hoster(obj)
return nil
}
// If unmarshalling into an object fails, check if it's an empty array
var arr []interface{}
if err := json.Unmarshal(data, &arr); err == nil && len(arr) == 0 {
// It's an empty array; initialize with no entries
*h = Hoster{Rd: nil}
return nil
}
// If both attempts fail, return an error
return fmt.Errorf("hoster: cannot unmarshal JSON data: %s", string(data))
}
type FileVariant struct {
Filename string `json:"filename"`
Filesize int `json:"filesize"`
}
type AddMagnetSchema struct {
Id string `json:"id"`
Uri string `json:"uri"`
}
type TorrentInfo struct {
ID string `json:"id"`
Filename string `json:"filename"`
OriginalFilename string `json:"original_filename"`
Hash string `json:"hash"`
Bytes int64 `json:"bytes"`
OriginalBytes int64 `json:"original_bytes"`
Host string `json:"host"`
Split int `json:"split"`
Progress float64 `json:"progress"`
Status string `json:"status"`
Added string `json:"added"`
Files []struct {
ID int `json:"id"`
Path string `json:"path"`
Bytes int64 `json:"bytes"`
Selected int `json:"selected"`
} `json:"files"`
Links []string `json:"links"`
Ended string `json:"ended,omitempty"`
Speed int64 `json:"speed,omitempty"`
Seeders int `json:"seeders,omitempty"`
}
type UnrestrictResponse struct {
Id string `json:"id"`
Filename string `json:"filename"`
MimeType string `json:"mimeType"`
Filesize int64 `json:"filesize"`
Link string `json:"link"`
Host string `json:"host"`
Chunks int `json:"chunks"`
Crc int `json:"crc"`
Download string `json:"download"`
Streamable int `json:"streamable"`
}
type TorrentsResponse struct {
Id string `json:"id"`
Filename string `json:"filename"`
Hash string `json:"hash"`
Bytes int64 `json:"bytes"`
Host string `json:"host"`
Split int64 `json:"split"`
Progress float64 `json:"progress"`
Status string `json:"status"`
Added time.Time `json:"added"`
Links []string `json:"links"`
Ended time.Time `json:"ended"`
}
type DownloadsResponse struct {
Id string `json:"id"`
Filename string `json:"filename"`
MimeType string `json:"mimeType"`
Filesize int64 `json:"filesize"`
Link string `json:"link"`
Host string `json:"host"`
HostIcon string `json:"host_icon"`
Chunks int64 `json:"chunks"`
Download string `json:"download"`
Streamable int `json:"streamable"`
Generated time.Time `json:"generated"`
}
type ErrorResponse struct {
Error string `json:"error"`
ErrorCode int `json:"error_code"`
}

pkg/debrid/torbox/torbox.go Normal file

@@ -0,0 +1,383 @@
package torbox
import (
"bytes"
"fmt"
"github.com/goccy/go-json"
"github.com/puzpuzpuz/xsync/v3"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"mime/multipart"
"net/http"
gourl "net/url"
"path"
"path/filepath"
"slices"
"strconv"
"strings"
"sync"
)
type Torbox struct {
Name string
Host string `json:"host"`
APIKey string
DownloadKeys *xsync.MapOf[string, types.Account]
DownloadUncached bool
client *request.Client
MountPath string
logger zerolog.Logger
CheckCached bool
}
func New(dc config.Debrid) *Torbox {
rl := request.ParseRateLimit(dc.RateLimit)
headers := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
}
_log := logger.New(dc.Name)
client := request.New(
request.WithHeaders(headers),
request.WithRateLimiter(rl),
request.WithLogger(_log),
request.WithProxy(dc.Proxy),
)
accounts := xsync.NewMapOf[string, types.Account]()
for idx, key := range dc.DownloadAPIKeys {
id := strconv.Itoa(idx)
accounts.Store(id, types.Account{
Name: key,
ID: id,
Token: key,
})
}
return &Torbox{
Name: "torbox",
Host: dc.Host,
APIKey: dc.APIKey,
DownloadKeys: accounts,
DownloadUncached: dc.DownloadUncached,
client: client,
MountPath: dc.Folder,
logger: _log,
CheckCached: dc.CheckCached,
}
}
func (tb *Torbox) GetName() string {
return tb.Name
}
func (tb *Torbox) GetLogger() zerolog.Logger {
return tb.logger
}
func (tb *Torbox) IsAvailable(hashes []string) map[string]bool {
// Check which info hashes are cached on Torbox
result := make(map[string]bool)
// Query the hashes in batches of 100
for i := 0; i < len(hashes); i += 100 {
end := i + 100
if end > len(hashes) {
end = len(hashes)
}
// Filter out empty strings
validHashes := make([]string, 0, end-i)
for _, hash := range hashes[i:end] {
if hash != "" {
validHashes = append(validHashes, hash)
}
}
// If no valid hashes in this batch, continue to the next batch
if len(validHashes) == 0 {
continue
}
hashStr := strings.Join(validHashes, ",")
url := fmt.Sprintf("%s/api/torrents/checkcached?hash=%s", tb.Host, hashStr)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := tb.client.MakeRequest(req)
if err != nil {
tb.logger.Info().Msgf("Error checking availability: %v", err)
return result
}
var res AvailableResponse
err = json.Unmarshal(resp, &res)
if err != nil {
tb.logger.Info().Msgf("Error unmarshalling availability: %v", err)
return result
}
if res.Data == nil {
return result
}
for h, c := range *res.Data {
if c.Size > 0 {
result[strings.ToUpper(h)] = true
}
}
}
return result
}
func (tb *Torbox) SubmitMagnet(torrent *types.Torrent) (*types.Torrent, error) {
url := fmt.Sprintf("%s/api/torrents/createtorrent", tb.Host)
payload := &bytes.Buffer{}
writer := multipart.NewWriter(payload)
_ = writer.WriteField("magnet", torrent.Magnet.Link)
err := writer.Close()
if err != nil {
return nil, err
}
req, _ := http.NewRequest(http.MethodPost, url, payload)
req.Header.Set("Content-Type", writer.FormDataContentType())
resp, err := tb.client.MakeRequest(req)
if err != nil {
return nil, err
}
var data AddMagnetResponse
err = json.Unmarshal(resp, &data)
if err != nil {
return nil, err
}
if data.Data == nil {
return nil, fmt.Errorf("error adding torrent")
}
dt := *data.Data
torrentId := strconv.Itoa(dt.Id)
torrent.Id = torrentId
torrent.MountPath = tb.MountPath
torrent.Debrid = tb.Name
return torrent, nil
}
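// getTorboxStatus maps Torbox's download_state plus the finished flag onto the status strings used by the rest of the app.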
func getTorboxStatus(status string, finished bool) string {
if finished {
return "downloaded"
}
downloading := []string{"completed", "cached", "paused", "downloading", "uploading",
"checkingResumeData", "metaDL", "pausedUP", "queuedUP", "checkingUP",
"forcedUP", "allocating", "pausedDL", "queuedDL", "checkingDL",
"forcedDL", "moving"}
switch {
case slices.Contains(downloading, status):
return "downloading"
default:
return "error"
}
}
func (tb *Torbox) UpdateTorrent(t *types.Torrent) error {
url := fmt.Sprintf("%s/api/torrents/mylist/?id=%s", tb.Host, t.Id)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := tb.client.MakeRequest(req)
if err != nil {
return err
}
var res InfoResponse
err = json.Unmarshal(resp, &res)
if err != nil {
return err
}
data := res.Data
name := data.Name
t.Name = name
t.Bytes = data.Size
t.Folder = name
t.Progress = data.Progress * 100
t.Status = getTorboxStatus(data.DownloadState, data.DownloadFinished)
t.Speed = data.DownloadSpeed
t.Seeders = data.Seeds
t.Filename = name
t.OriginalFilename = name
t.MountPath = tb.MountPath
t.Debrid = tb.Name
cfg := config.Get()
for _, f := range data.Files {
fileName := filepath.Base(f.Name)
if utils.IsSampleFile(f.AbsolutePath) {
// Skip sample files
continue
}
if !cfg.IsAllowedFile(fileName) {
continue
}
if !cfg.IsSizeAllowed(f.Size) {
continue
}
file := types.File{
Id: strconv.Itoa(f.Id),
Name: fileName,
Size: f.Size,
Path: fileName,
}
t.Files[fileName] = file
}
var cleanPath string
if len(t.Files) > 0 {
cleanPath = path.Clean(data.Files[0].Name)
} else {
cleanPath = path.Clean(data.Name)
}
t.OriginalFilename = strings.Split(cleanPath, "/")[0]
t.Debrid = tb.Name
return nil
}
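// CheckStatus refreshes the Torbox torrent until it is downloaded, returning early for downloading or error states.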
func (tb *Torbox) CheckStatus(torrent *types.Torrent, isSymlink bool) (*types.Torrent, error) {
for {
err := tb.UpdateTorrent(torrent)
if err != nil || torrent == nil {
return torrent, err
}
status := torrent.Status
if status == "downloaded" {
tb.logger.Info().Msgf("Torrent: %s downloaded", torrent.Name)
if !isSymlink {
err = tb.GenerateDownloadLinks(torrent)
if err != nil {
return torrent, err
}
}
break
} else if slices.Contains(tb.GetDownloadingStatus(), status) {
if !torrent.DownloadUncached {
return torrent, fmt.Errorf("torrent: %s not cached", torrent.Name)
}
// Break out of the loop if the torrent is downloading.
// This is necessary to prevent infinite loop since we moved to sync downloading and async processing
return torrent, nil
} else {
return torrent, fmt.Errorf("torrent: %s has error", torrent.Name)
}
}
return torrent, nil
}
func (tb *Torbox) DeleteTorrent(torrentId string) error {
url := fmt.Sprintf("%s/api/torrents/controltorrent/%s", tb.Host, torrentId)
payload := map[string]string{"torrent_id": torrentId, "action": "Delete"}
jsonPayload, _ := json.Marshal(payload)
req, _ := http.NewRequest(http.MethodDelete, url, bytes.NewBuffer(jsonPayload))
if _, err := tb.client.MakeRequest(req); err != nil {
return err
}
tb.logger.Info().Msgf("Torrent %s deleted from Torbox", torrentId)
return nil
}
func (tb *Torbox) GenerateDownloadLinks(t *types.Torrent) error {
filesCh := make(chan types.File, len(t.Files))
errCh := make(chan error, len(t.Files))
var wg sync.WaitGroup
wg.Add(len(t.Files))
for _, f := range t.Files {
// Pass the loop variable into the goroutine to avoid capturing the shared iteration variable
go func(file types.File) {
defer wg.Done()
link, accountId, err := tb.GetDownloadLink(t, &file)
if err != nil {
errCh <- err
return
}
file.DownloadLink = link
file.AccountId = accountId
filesCh <- file
}(f)
}
go func() {
wg.Wait()
close(filesCh)
close(errCh)
}()
// Collect results
files := make(map[string]types.File, len(t.Files))
for file := range filesCh {
files[file.Name] = file
}
// Check for errors
for err := range errCh {
if err != nil {
return err // Return the first error encountered
}
}
t.Files = files
return nil
}
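// GetDownloadLink requests a direct download URL for a single file via /api/torrents/requestdl.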
func (tb *Torbox) GetDownloadLink(t *types.Torrent, file *types.File) (string, string, error) {
url := fmt.Sprintf("%s/api/torrents/requestdl/", tb.Host)
query := gourl.Values{}
query.Add("torrent_id", t.Id)
query.Add("token", tb.APIKey)
query.Add("file_id", file.Id)
url += "?" + query.Encode()
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := tb.client.MakeRequest(req)
if err != nil {
return "", "", err
}
var data DownloadLinksResponse
if err = json.Unmarshal(resp, &data); err != nil {
return "", "", err
}
if data.Data == nil {
return "", "", fmt.Errorf("error getting download links")
}
link := *data.Data
return link, "0", nil
}
func (tb *Torbox) GetDownloadingStatus() []string {
return []string{"downloading"}
}
func (tb *Torbox) GetCheckCached() bool {
return tb.CheckCached
}
func (tb *Torbox) GetTorrents() ([]*types.Torrent, error) {
return nil, nil
}
func (tb *Torbox) GetDownloadUncached() bool {
return tb.DownloadUncached
}
func (tb *Torbox) GetDownloads() (map[string]types.DownloadLinks, error) {
return nil, nil
}
func (tb *Torbox) CheckLink(link string) error {
return nil
}
func (tb *Torbox) GetMountPath() string {
return tb.MountPath
}
func (tb *Torbox) DisableAccount(accountId string) {
}
func (tb *Torbox) ResetActiveDownloadKeys() {
}


@@ -0,0 +1,75 @@
package torbox
import "time"
type APIResponse[T any] struct {
Success bool `json:"success"`
Error any `json:"error"`
Detail string `json:"detail"`
Data *T `json:"data"` // Use pointer to allow nil
}
type AvailableResponse APIResponse[map[string]struct {
Name string `json:"name"`
Size int `json:"size"`
Hash string `json:"hash"`
}]
type AddMagnetResponse APIResponse[struct {
Id int `json:"torrent_id"`
Hash string `json:"hash"`
}]
type torboxInfo struct {
Id int `json:"id"`
AuthId string `json:"auth_id"`
Server int `json:"server"`
Hash string `json:"hash"`
Name string `json:"name"`
Magnet interface{} `json:"magnet"`
Size int64 `json:"size"`
Active bool `json:"active"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
DownloadState string `json:"download_state"`
Seeds int `json:"seeds"`
Peers int `json:"peers"`
Ratio float64 `json:"ratio"`
Progress float64 `json:"progress"`
DownloadSpeed int64 `json:"download_speed"`
UploadSpeed int `json:"upload_speed"`
Eta int `json:"eta"`
TorrentFile bool `json:"torrent_file"`
ExpiresAt interface{} `json:"expires_at"`
DownloadPresent bool `json:"download_present"`
Files []struct {
Id int `json:"id"`
Md5 interface{} `json:"md5"`
Hash string `json:"hash"`
Name string `json:"name"`
Size int64 `json:"size"`
Zipped bool `json:"zipped"`
S3Path string `json:"s3_path"`
Infected bool `json:"infected"`
Mimetype string `json:"mimetype"`
ShortName string `json:"short_name"`
AbsolutePath string `json:"absolute_path"`
} `json:"files"`
DownloadPath string `json:"download_path"`
InactiveCheck int `json:"inactive_check"`
Availability int `json:"availability"`
DownloadFinished bool `json:"download_finished"`
Tracker interface{} `json:"tracker"`
TotalUploaded int `json:"total_uploaded"`
TotalDownloaded int `json:"total_downloaded"`
Cached bool `json:"cached"`
Owner string `json:"owner"`
SeedTorrent bool `json:"seed_torrent"`
AllowZipped bool `json:"allow_zipped"`
LongTermSeeding bool `json:"long_term_seeding"`
TrackerMessage interface{} `json:"tracker_message"`
}
type InfoResponse APIResponse[torboxInfo]
type DownloadLinksResponse APIResponse[string]


@@ -0,0 +1,26 @@
package types
import (
"github.com/rs/zerolog"
)
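// Client is the interface each debrid provider (Real-Debrid, Torbox, ...) implements.
// A typical flow, roughly: SubmitMagnet to add a torrent, CheckStatus until it is
// downloaded, then GenerateDownloadLinks (or symlinking against GetMountPath) to expose the files.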
type Client interface {
SubmitMagnet(tr *Torrent) (*Torrent, error)
CheckStatus(tr *Torrent, isSymlink bool) (*Torrent, error)
GenerateDownloadLinks(tr *Torrent) error
GetDownloadLink(tr *Torrent, file *File) (string, string, error)
DeleteTorrent(torrentId string) error
IsAvailable(infohashes []string) map[string]bool
GetCheckCached() bool
GetDownloadUncached() bool
UpdateTorrent(torrent *Torrent) error
GetTorrents() ([]*Torrent, error)
GetName() string
GetLogger() zerolog.Logger
GetDownloadingStatus() []string
GetDownloads() (map[string]DownloadLinks, error)
CheckLink(link string) error
GetMountPath() string
DisableAccount(string)
ResetActiveDownloadKeys()
}

pkg/debrid/types/torrent.go Normal file

@@ -0,0 +1,128 @@
package types
import (
"fmt"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
"os"
"path/filepath"
"sync"
"time"
)
type Torrent struct {
Id string `json:"id"`
InfoHash string `json:"info_hash"`
Name string `json:"name"`
Folder string `json:"folder"`
Filename string `json:"filename"`
OriginalFilename string `json:"original_filename"`
Size int64 `json:"size"`
Bytes int64 `json:"bytes"` // Size of only the files that are downloaded
Magnet *utils.Magnet `json:"magnet"`
Files map[string]File `json:"files"`
Status string `json:"status"`
Added string `json:"added"`
Progress float64 `json:"progress"`
Speed int64 `json:"speed"`
Seeders int `json:"seeders"`
Links []string `json:"links"`
MountPath string `json:"mount_path"`
Debrid string `json:"debrid"`
Arr *arr.Arr `json:"arr"`
Mu sync.Mutex `json:"-"`
SizeDownloaded int64 `json:"-"` // This is used for local download
DownloadUncached bool `json:"-"`
}
type DownloadLinks struct {
Filename string `json:"filename"`
Link string `json:"link"`
DownloadLink string `json:"download_link"`
Generated time.Time `json:"generated"`
Size int64 `json:"size"`
Id string `json:"id"`
}
func (t *Torrent) GetSymlinkFolder(parent string) string {
return filepath.Join(parent, t.Arr.Name, t.Folder)
}
func (t *Torrent) GetMountFolder(rClonePath string) (string, error) {
_log := logger.GetDefaultLogger()
possiblePaths := []string{
t.OriginalFilename,
t.Filename,
utils.RemoveExtension(t.OriginalFilename),
}
for _, path := range possiblePaths {
_p := filepath.Join(rClonePath, path)
_log.Trace().Msgf("Checking path: %s", _p)
_, err := os.Stat(_p)
if !os.IsNotExist(err) {
return path, nil
}
}
return "", fmt.Errorf("no path found")
}
type File struct {
Id string `json:"id"`
Name string `json:"name"`
Size int64 `json:"size"`
Path string `json:"path"`
Link string `json:"link"`
DownloadLink string `json:"download_link"`
AccountId string `json:"account_id"`
Generated time.Time `json:"generated"`
}
func (f *File) IsValid() bool {
cfg := config.Get()
name := filepath.Base(f.Path)
if utils.IsSampleFile(f.Path) {
return false
}
if !cfg.IsAllowedFile(name) {
return false
}
if !cfg.IsSizeAllowed(f.Size) {
return false
}
if f.Link == "" {
return false
}
return true
}
func (t *Torrent) Cleanup(remove bool) {
if remove {
err := os.Remove(t.Filename)
if err != nil {
return
}
}
}
func (t *Torrent) GetFile(id string) *File {
for _, f := range t.Files {
if f.Id == id {
return &f
}
}
return nil
}
type Account struct {
ID string `json:"id"`
Disabled bool `json:"disabled"`
Name string `json:"name"`
Token string `json:"token"`
}

pkg/qbit/downloader.go Normal file

@@ -0,0 +1,268 @@
package qbit
import (
"fmt"
"github.com/cavaliergopher/grab/v3"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
debrid "github.com/sirrobot01/decypharr/pkg/debrid/types"
"io"
"os"
"path/filepath"
"sync"
"time"
)
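// Download fetches url into filename using grab, reporting progress (bytes delta and current speed) roughly once per second via progressCallback.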
func Download(client *grab.Client, url, filename string, progressCallback func(int64, int64)) error {
req, err := grab.NewRequest(filename, url)
if err != nil {
return err
}
resp := client.Do(req)
t := time.NewTicker(time.Second)
defer t.Stop()
var lastReported int64
Loop:
for {
select {
case <-t.C:
current := resp.BytesComplete()
speed := int64(resp.BytesPerSecond())
if current != lastReported {
if progressCallback != nil {
progressCallback(current-lastReported, speed)
}
lastReported = current
}
case <-resp.Done:
break Loop
}
}
// Report final bytes
if progressCallback != nil {
progressCallback(resp.BytesComplete()-lastReported, 0)
}
return resp.Err()
}
func (q *QBit) ProcessManualFile(torrent *Torrent) (string, error) {
debridTorrent := torrent.DebridTorrent
q.logger.Info().Msgf("Downloading %d files...", len(debridTorrent.Files))
torrentPath := filepath.Join(q.DownloadFolder, debridTorrent.Arr.Name, utils.RemoveExtension(debridTorrent.OriginalFilename))
torrentPath = utils.RemoveInvalidChars(torrentPath)
err := os.MkdirAll(torrentPath, os.ModePerm)
if err != nil {
// add previous error to the error and return
return "", fmt.Errorf("failed to create directory: %s: %v", torrentPath, err)
}
q.downloadFiles(torrent, torrentPath)
return torrentPath, nil
}
func (q *QBit) downloadFiles(torrent *Torrent, parent string) {
debridTorrent := torrent.DebridTorrent
var wg sync.WaitGroup
semaphore := make(chan struct{}, 5)
totalSize := int64(0)
for _, file := range debridTorrent.Files {
totalSize += file.Size
}
debridTorrent.Mu.Lock()
debridTorrent.SizeDownloaded = 0 // Reset downloaded bytes
debridTorrent.Progress = 0 // Reset progress
debridTorrent.Mu.Unlock()
progressCallback := func(downloaded int64, speed int64) {
debridTorrent.Mu.Lock()
defer debridTorrent.Mu.Unlock()
torrent.Mu.Lock()
defer torrent.Mu.Unlock()
// Update total downloaded bytes
debridTorrent.SizeDownloaded += downloaded
debridTorrent.Speed = speed
// Calculate overall progress
if totalSize > 0 {
debridTorrent.Progress = float64(debridTorrent.SizeDownloaded) / float64(totalSize) * 100
}
q.UpdateTorrentMin(torrent, debridTorrent)
}
client := &grab.Client{
UserAgent: "qBitTorrent",
HTTPClient: request.New(request.WithTimeout(0)),
}
for _, file := range debridTorrent.Files {
if file.DownloadLink == "" {
q.logger.Info().Msgf("No download link found for %s", file.Name)
continue
}
wg.Add(1)
semaphore <- struct{}{}
go func(file debrid.File) {
defer wg.Done()
defer func() { <-semaphore }()
filename := file.Name // use the file's name (not its link) as the destination filename
err := Download(
client,
file.DownloadLink,
filepath.Join(parent, filename),
progressCallback,
)
if err != nil {
q.logger.Error().Msgf("Failed to download %s: %v", filename, err)
} else {
q.logger.Info().Msgf("Downloaded %s", filename)
}
}(file)
}
wg.Wait()
q.logger.Info().Msgf("Downloaded all files for %s", debridTorrent.Name)
}
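// ProcessSymlink resolves the torrent's folder on the rclone mount and symlinks its files into the download folder instead of copying them.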
func (q *QBit) ProcessSymlink(torrent *Torrent) (string, error) {
debridTorrent := torrent.DebridTorrent
files := debridTorrent.Files
if len(files) == 0 {
return "", fmt.Errorf("no video files found")
}
q.logger.Info().Msgf("Checking symlinks for %d files...", len(files))
rCloneBase := debridTorrent.MountPath
torrentPath, err := q.getTorrentPath(rCloneBase, debridTorrent) // /MyTVShow/
// This returns filename.ext for alldebrid instead of the parent folder filename/
torrentFolder := torrentPath
if err != nil {
return "", fmt.Errorf("failed to get torrent path: %v", err)
}
// Check if the torrent path is a file
torrentRclonePath := filepath.Join(rCloneBase, torrentPath) // leave it as is
if debridTorrent.Debrid == "alldebrid" && utils.IsMediaFile(torrentPath) {
// Alldebrid hotfix for single file torrents
torrentFolder = utils.RemoveExtension(torrentFolder)
torrentRclonePath = rCloneBase // /mnt/rclone/magnets/ // Remove the filename since it's in the root folder
}
return q.createSymlinks(debridTorrent, torrentRclonePath, torrentFolder) // verify cos we're using external webdav
}
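// createSymlinks waits for each file to appear on the mount, symlinks it into the category folder, then optionally pre-caches the file headers.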
func (q *QBit) createSymlinks(debridTorrent *debrid.Torrent, rclonePath, torrentFolder string) (string, error) {
files := debridTorrent.Files
torrentSymlinkPath := filepath.Join(q.DownloadFolder, debridTorrent.Arr.Name, torrentFolder)
err := os.MkdirAll(torrentSymlinkPath, os.ModePerm)
if err != nil {
return "", fmt.Errorf("failed to create directory: %s: %v", torrentSymlinkPath, err)
}
pending := make(map[string]debrid.File)
for _, file := range files {
pending[file.Path] = file
}
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
filePaths := make([]string, 0, len(pending))
for len(pending) > 0 {
<-ticker.C
for path, file := range pending {
fullFilePath := filepath.Join(rclonePath, file.Path)
if _, err := os.Stat(fullFilePath); !os.IsNotExist(err) {
q.logger.Info().Msgf("File is ready: %s", file.Path)
_filePath := q.createSymLink(torrentSymlinkPath, rclonePath, file)
filePaths = append(filePaths, _filePath)
delete(pending, path)
}
}
}
if q.SkipPreCache {
return torrentSymlinkPath, nil
}
// Pre-cache the file headers (first 256KB plus a small chunk at the 1MB offset) in the background
go func() {
if err := q.preCacheFile(debridTorrent.Name, filePaths); err != nil {
q.logger.Error().Msgf("Failed to pre-cache file: %s", err)
}
}()
return torrentSymlinkPath, nil
}
func (q *QBit) getTorrentPath(rclonePath string, debridTorrent *debrid.Torrent) (string, error) {
for {
torrentPath, err := debridTorrent.GetMountFolder(rclonePath)
if err == nil {
q.logger.Debug().Msgf("Found torrent path: %s", torrentPath)
return torrentPath, err
}
time.Sleep(100 * time.Millisecond)
}
}
func (q *QBit) createSymLink(path string, torrentMountPath string, file debrid.File) string {
// Combine the directory and filename to form a full path
fullPath := filepath.Join(path, file.Name) // /mnt/symlinks/{category}/MyTVShow/MyTVShow.S01E01.720p.mkv
// Create a symbolic link if file doesn't exist
torrentFilePath := filepath.Join(torrentMountPath, file.Path) // debridFolder/MyTVShow/MyTVShow.S01E01.720p.mkv
err := os.Symlink(torrentFilePath, fullPath)
if err != nil {
// It's okay if the symlink already exists
q.logger.Debug().Msgf("Failed to create symlink: %s: %v", fullPath, err)
}
return torrentFilePath
}
func (q *QBit) preCacheFile(name string, filePaths []string) error {
q.logger.Trace().Msgf("Pre-caching file: %s", name)
if len(filePaths) == 0 {
return fmt.Errorf("no file paths provided")
}
for _, filePath := range filePaths {
func() {
file, err := os.Open(filePath)
if err != nil {
return
}
defer func() {
_ = file.Close()
}()
// Pre-cache the file header (first 256KB) and 64KB at the 1MB offset, in 16KB chunks.
q.readSmallChunks(file, 0, 256*1024, 16*1024)
q.readSmallChunks(file, 1024*1024, 64*1024, 16*1024)
}()
}
return nil
}
func (q *QBit) readSmallChunks(file *os.File, startPos int64, totalToRead int, chunkSize int) {
_, err := file.Seek(startPos, 0)
if err != nil {
return
}
buf := make([]byte, chunkSize)
bytesRemaining := totalToRead
for bytesRemaining > 0 {
toRead := chunkSize
if bytesRemaining < chunkSize {
toRead = bytesRemaining
}
n, err := file.Read(buf[:toRead])
if err != nil {
if err == io.EOF {
break
}
return
}
bytesRemaining -= n
}
}

pkg/qbit/http.go Normal file

@@ -0,0 +1,402 @@
package qbit
import (
"context"
"encoding/base64"
"github.com/go-chi/chi/v5"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/service"
"net/http"
"path/filepath"
"strings"
)
func decodeAuthHeader(header string) (string, string, error) {
encodedTokens := strings.Split(header, " ")
if len(encodedTokens) != 2 {
return "", "", nil
}
encodedToken := encodedTokens[1]
bytes, err := base64.StdEncoding.DecodeString(encodedToken)
if err != nil {
return "", "", err
}
bearer := string(bytes)
colonIndex := strings.LastIndex(bearer, ":")
if colonIndex == -1 {
// Malformed credentials: no "host:token" separator
return "", "", nil
}
host := bearer[:colonIndex]
token := bearer[colonIndex+1:]
return host, token, nil
}
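// CategoryContext extracts the category from the query string or (multipart) form and stores it on the request context.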
func (q *QBit) CategoryContext(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
category := strings.TrimSpace(r.URL.Query().Get("category"))
if category == "" {
// Get from form
_ = r.ParseForm()
category = r.Form.Get("category")
if category == "" {
// Get from multipart form
_ = r.ParseMultipartForm(32 << 20)
category = r.FormValue("category")
}
}
ctx := context.WithValue(r.Context(), "category", strings.TrimSpace(category))
next.ServeHTTP(w, r.WithContext(ctx))
})
}
func (q *QBit) authContext(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
host, token, err := decodeAuthHeader(r.Header.Get("Authorization"))
category := r.Context().Value("category").(string)
svc := service.GetService()
// Check if arr exists
a := svc.Arr.Get(category)
if a == nil {
downloadUncached := false
a = arr.New(category, "", "", false, false, &downloadUncached)
}
if err == nil {
host = strings.TrimSpace(host)
if host != "" {
a.Host = host
}
token = strings.TrimSpace(token)
if token != "" {
a.Token = token
}
}
svc.Arr.AddOrUpdate(a)
ctx := context.WithValue(r.Context(), "arr", a)
next.ServeHTTP(w, r.WithContext(ctx))
})
}
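// HashesCtx collects torrent hashes from the URL parameter ("hash1|hash2|...") or the form and stores them on the request context.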
func HashesCtx(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_hashes := chi.URLParam(r, "hashes")
var hashes []string
if _hashes != "" {
hashes = strings.Split(_hashes, "|")
}
if hashes == nil {
// Get hashes from form
_ = r.ParseForm()
hashes = r.Form["hashes"]
}
for i, hash := range hashes {
hashes[i] = strings.TrimSpace(hash)
}
ctx := context.WithValue(r.Context(), "hashes", hashes)
next.ServeHTTP(w, r.WithContext(ctx))
})
}
func (q *QBit) handleLogin(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
_arr := ctx.Value("arr").(*arr.Arr)
if _arr == nil {
// No arr
_, _ = w.Write([]byte("Ok."))
return
}
if err := _arr.Validate(); err != nil {
q.logger.Info().Msgf("Error validating arr: %v", err)
}
_, _ = w.Write([]byte("Ok."))
}
func (q *QBit) handleVersion(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte("v4.3.2"))
}
func (q *QBit) handleWebAPIVersion(w http.ResponseWriter, r *http.Request) {
_, _ = w.Write([]byte("2.7"))
}
func (q *QBit) handlePreferences(w http.ResponseWriter, r *http.Request) {
preferences := NewAppPreferences()
preferences.WebUiUsername = q.Username
preferences.SavePath = q.DownloadFolder
preferences.TempPath = filepath.Join(q.DownloadFolder, "temp")
request.JSONResponse(w, preferences, http.StatusOK)
}
func (q *QBit) handleBuildInfo(w http.ResponseWriter, r *http.Request) {
res := BuildInfo{
Bitness: 64,
Boost: "1.75.0",
Libtorrent: "1.2.11.0",
Openssl: "1.1.1i",
Qt: "5.15.2",
Zlib: "1.2.11",
}
request.JSONResponse(w, res, http.StatusOK)
}
func (q *QBit) handleShutdown(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}
func (q *QBit) handleTorrentsInfo(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
category := ctx.Value("category").(string)
filter := strings.TrimSpace(r.URL.Query().Get("filter"))
hashes, _ := ctx.Value("hashes").([]string)
torrents := q.Storage.GetAllSorted(category, filter, hashes, "added_on", false)
request.JSONResponse(w, torrents, http.StatusOK)
}
func (q *QBit) handleTorrentsAdd(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
// Parse form based on content type
contentType := r.Header.Get("Content-Type")
if strings.Contains(contentType, "multipart/form-data") {
if err := r.ParseMultipartForm(32 << 20); err != nil {
q.logger.Info().Msgf("Error parsing multipart form: %v", err)
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
} else if strings.Contains(contentType, "application/x-www-form-urlencoded") {
if err := r.ParseForm(); err != nil {
q.logger.Info().Msgf("Error parsing form: %v", err)
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
} else {
http.Error(w, "Invalid content type", http.StatusBadRequest)
return
}
isSymlink := strings.ToLower(r.FormValue("sequentialDownload")) != "true"
category := r.FormValue("category")
atleastOne := false
ctx = context.WithValue(ctx, "isSymlink", isSymlink)
// Handle magnet URLs
if urls := r.FormValue("urls"); urls != "" {
var urlList []string
for _, u := range strings.Split(urls, "\n") {
urlList = append(urlList, strings.TrimSpace(u))
}
for _, url := range urlList {
if err := q.AddMagnet(ctx, url, category); err != nil {
q.logger.Info().Msgf("Error adding magnet: %v", err)
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
atleastOne = true
}
}
// Handle torrent files
if r.MultipartForm != nil && r.MultipartForm.File != nil {
if files := r.MultipartForm.File["torrents"]; len(files) > 0 {
for _, fileHeader := range files {
if err := q.AddTorrent(ctx, fileHeader, category); err != nil {
q.logger.Info().Msgf("Error adding torrent: %v", err)
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
atleastOne = true
}
}
}
if !atleastOne {
http.Error(w, "No valid URLs or torrents provided", http.StatusBadRequest)
return
}
w.WriteHeader(http.StatusOK)
}
func (q *QBit) handleTorrentsDelete(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
hashes, _ := ctx.Value("hashes").([]string)
if len(hashes) == 0 {
http.Error(w, "No hashes provided", http.StatusBadRequest)
return
}
category := ctx.Value("category").(string)
for _, hash := range hashes {
q.Storage.Delete(hash, category)
}
w.WriteHeader(http.StatusOK)
}
func (q *QBit) handleTorrentsPause(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
hashes, _ := ctx.Value("hashes").([]string)
category := ctx.Value("category").(string)
for _, hash := range hashes {
torrent := q.Storage.Get(hash, category)
if torrent == nil {
continue
}
go q.PauseTorrent(torrent)
}
w.WriteHeader(http.StatusOK)
}
func (q *QBit) handleTorrentsResume(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
hashes, _ := ctx.Value("hashes").([]string)
category := ctx.Value("category").(string)
for _, hash := range hashes {
torrent := q.Storage.Get(hash, category)
if torrent == nil {
continue
}
go q.ResumeTorrent(torrent)
}
w.WriteHeader(http.StatusOK)
}
func (q *QBit) handleTorrentRecheck(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
hashes, _ := ctx.Value("hashes").([]string)
category := ctx.Value("category").(string)
for _, hash := range hashes {
torrent := q.Storage.Get(hash, category)
if torrent == nil {
continue
}
go q.RefreshTorrent(torrent)
}
w.WriteHeader(http.StatusOK)
}
func (q *QBit) handleCategories(w http.ResponseWriter, r *http.Request) {
var categories = map[string]TorrentCategory{}
for _, cat := range q.Categories {
path := filepath.Join(q.DownloadFolder, cat)
categories[cat] = TorrentCategory{
Name: cat,
SavePath: path,
}
}
request.JSONResponse(w, categories, http.StatusOK)
}
func (q *QBit) handleCreateCategory(w http.ResponseWriter, r *http.Request) {
err := r.ParseForm()
if err != nil {
http.Error(w, "Failed to parse form data", http.StatusBadRequest)
return
}
name := r.Form.Get("category")
if name == "" {
http.Error(w, "No name provided", http.StatusBadRequest)
return
}
q.Categories = append(q.Categories, name)
request.JSONResponse(w, nil, http.StatusOK)
}
func (q *QBit) handleTorrentProperties(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
hash := r.URL.Query().Get("hash")
torrent := q.Storage.Get(hash, ctx.Value("category").(string))
properties := q.GetTorrentProperties(torrent)
request.JSONResponse(w, properties, http.StatusOK)
}
func (q *QBit) handleTorrentFiles(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
hash := r.URL.Query().Get("hash")
torrent := q.Storage.Get(hash, ctx.Value("category").(string))
if torrent == nil {
return
}
files := q.GetTorrentFiles(torrent)
request.JSONResponse(w, files, http.StatusOK)
}
func (q *QBit) handleSetCategory(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
category := ctx.Value("category").(string)
hashes, _ := ctx.Value("hashes").([]string)
torrents := q.Storage.GetAll("", "", hashes)
for _, torrent := range torrents {
torrent.Category = category
q.Storage.AddOrUpdate(torrent)
}
request.JSONResponse(w, nil, http.StatusOK)
}
func (q *QBit) handleAddTorrentTags(w http.ResponseWriter, r *http.Request) {
err := r.ParseForm()
if err != nil {
http.Error(w, "Failed to parse form data", http.StatusBadRequest)
return
}
ctx := r.Context()
hashes, _ := ctx.Value("hashes").([]string)
tags := strings.Split(r.FormValue("tags"), ",")
for i, tag := range tags {
tags[i] = strings.TrimSpace(tag)
}
torrents := q.Storage.GetAll("", "", hashes)
for _, t := range torrents {
q.SetTorrentTags(t, tags)
}
request.JSONResponse(w, nil, http.StatusOK)
}
func (q *QBit) handleRemoveTorrentTags(w http.ResponseWriter, r *http.Request) {
err := r.ParseForm()
if err != nil {
http.Error(w, "Failed to parse form data", http.StatusBadRequest)
return
}
ctx := r.Context()
hashes, _ := ctx.Value("hashes").([]string)
tags := strings.Split(r.FormValue("tags"), ",")
for i, tag := range tags {
tags[i] = strings.TrimSpace(tag)
}
torrents := q.Storage.GetAll("", "", hashes)
for _, torrent := range torrents {
q.RemoveTorrentTags(torrent, tags)
}
request.JSONResponse(w, nil, http.StatusOK)
}
func (q *QBit) handleGetTags(w http.ResponseWriter, r *http.Request) {
request.JSONResponse(w, q.Tags, http.StatusOK)
}
func (q *QBit) handleCreateTags(w http.ResponseWriter, r *http.Request) {
err := r.ParseForm()
if err != nil {
http.Error(w, "Failed to parse form data", http.StatusBadRequest)
return
}
tags := strings.Split(r.FormValue("tags"), ",")
for i, tag := range tags {
tags[i] = strings.TrimSpace(tag)
}
q.AddTags(tags)
request.JSONResponse(w, nil, http.StatusOK)
}

pkg/qbit/import.go Normal file

@@ -0,0 +1,90 @@
package qbit
import (
"fmt"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/debrid"
"github.com/sirrobot01/decypharr/pkg/service"
"time"
"github.com/google/uuid"
"github.com/sirrobot01/decypharr/pkg/arr"
)
type ImportRequest struct {
ID string `json:"id"`
Path string `json:"path"`
Magnet *utils.Magnet `json:"magnet"`
Arr *arr.Arr `json:"arr"`
IsSymlink bool `json:"isSymlink"`
SeriesId int `json:"series"`
Seasons []int `json:"seasons"`
Episodes []string `json:"episodes"`
DownloadUncached bool `json:"downloadUncached"`
Failed bool `json:"failed"`
FailedAt time.Time `json:"failedAt"`
Reason string `json:"reason"`
Completed bool `json:"completed"`
CompletedAt time.Time `json:"completedAt"`
Async bool `json:"async"`
}
type ManualImportResponseSchema struct {
Priority string `json:"priority"`
Status string `json:"status"`
Result string `json:"result"`
Queued time.Time `json:"queued"`
Trigger string `json:"trigger"`
SendUpdatesToClient bool `json:"sendUpdatesToClient"`
UpdateScheduledTask bool `json:"updateScheduledTask"`
Id int `json:"id"`
}
func NewImportRequest(magnet *utils.Magnet, arr *arr.Arr, isSymlink, downloadUncached bool) *ImportRequest {
return &ImportRequest{
ID: uuid.NewString(),
Magnet: magnet,
Arr: arr,
Failed: false,
Completed: false,
Async: false,
IsSymlink: isSymlink,
DownloadUncached: downloadUncached,
}
}
func (i *ImportRequest) Fail(reason string) {
i.Failed = true
i.FailedAt = time.Now()
i.Reason = reason
}
func (i *ImportRequest) Complete() {
i.Completed = true
i.CompletedAt = time.Now()
}
func (i *ImportRequest) Process(q *QBit) (err error) {
// Use this for now.
// This sends the torrent to the arr
svc := service.GetService()
torrent := createTorrentFromMagnet(i.Magnet, i.Arr.Name, "manual")
debridTorrent, err := debrid.ProcessTorrent(svc.Debrid, i.Magnet, i.Arr, i.IsSymlink, i.DownloadUncached)
if err != nil || debridTorrent == nil {
if debridTorrent != nil {
dbClient := service.GetDebrid().GetByName(debridTorrent.Debrid)
go func() {
_ = dbClient.DeleteTorrent(debridTorrent.Id)
}()
}
if err == nil {
err = fmt.Errorf("failed to process torrent")
}
return err
}
torrent = q.UpdateTorrentMin(torrent, debridTorrent)
q.Storage.AddOrUpdate(torrent)
go q.ProcessFiles(torrent, debridTorrent, i.Arr, i.IsSymlink)
return nil
}

pkg/qbit/misc.go Normal file

@@ -0,0 +1,28 @@
package qbit
import (
"github.com/google/uuid"
"github.com/sirrobot01/decypharr/internal/utils"
"strings"
)
func createTorrentFromMagnet(magnet *utils.Magnet, category, source string) *Torrent {
torrent := &Torrent{
ID: uuid.NewString(),
Hash: strings.ToLower(magnet.InfoHash),
Name: magnet.Name,
Size: magnet.Size,
Category: category,
Source: source,
State: "downloading",
MagnetUri: magnet.Link,
Tracker: "udp://tracker.opentrackr.org:1337",
UpLimit: -1,
DlLimit: -1,
AutoTmm: false,
Ratio: 1,
RatioLimit: 1,
}
return torrent
}

pkg/qbit/qbit.go Normal file

@@ -0,0 +1,41 @@
package qbit
import (
"cmp"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"os"
"path/filepath"
)
type QBit struct {
Username string `json:"username"`
Password string `json:"password"`
Port string `json:"port"`
DownloadFolder string `json:"download_folder"`
Categories []string `json:"categories"`
Storage *TorrentStorage
logger zerolog.Logger
Tags []string
RefreshInterval int
SkipPreCache bool
}
func New() *QBit {
_cfg := config.Get()
cfg := _cfg.QBitTorrent
port := cmp.Or(cfg.Port, os.Getenv("QBIT_PORT"), "8282")
refreshInterval := cmp.Or(cfg.RefreshInterval, 10)
return &QBit{
Username: cfg.Username,
Password: cfg.Password,
Port: port,
DownloadFolder: cfg.DownloadFolder,
Categories: cfg.Categories,
Storage: NewTorrentStorage(filepath.Join(_cfg.Path, "torrents.json")),
logger: logger.New("qbit"),
RefreshInterval: refreshInterval,
SkipPreCache: cfg.SkipPreCache,
}
}

pkg/qbit/routes.go Normal file

@@ -0,0 +1,42 @@
package qbit
import (
"github.com/go-chi/chi/v5"
"net/http"
)
func (q *QBit) Routes() http.Handler {
r := chi.NewRouter()
r.Use(q.CategoryContext)
r.Group(func(r chi.Router) {
r.Use(q.authContext)
r.Post("/auth/login", q.handleLogin)
r.Route("/torrents", func(r chi.Router) {
r.Use(HashesCtx)
r.Get("/info", q.handleTorrentsInfo)
r.Post("/add", q.handleTorrentsAdd)
r.Post("/delete", q.handleTorrentsDelete)
r.Get("/categories", q.handleCategories)
r.Post("/createCategory", q.handleCreateCategory)
r.Post("/setCategory", q.handleSetCategory)
r.Post("/addTags", q.handleAddTorrentTags)
r.Post("/removeTags", q.handleRemoveTorrentTags)
r.Post("/createTags", q.handleCreateTags)
r.Get("/tags", q.handleGetTags)
r.Get("/pause", q.handleTorrentsPause)
r.Get("/resume", q.handleTorrentsResume)
r.Get("/recheck", q.handleTorrentRecheck)
r.Get("/properties", q.handleTorrentProperties)
r.Get("/files", q.handleTorrentFiles)
})
r.Route("/app", func(r chi.Router) {
r.Get("/version", q.handleVersion)
r.Get("/webapiVersion", q.handleWebAPIVersion)
r.Get("/preferences", q.handlePreferences)
r.Get("/buildInfo", q.handleBuildInfo)
r.Get("/shutdown", q.handleShutdown)
})
})
return r
}

pkg/qbit/storage.go Normal file

@@ -0,0 +1,234 @@
package qbit
import (
"fmt"
"github.com/goccy/go-json"
"os"
"sort"
"sync"
)
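// keyPair builds the storage map key "<hash>|<category>", defaulting the category to "uncategorized".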
func keyPair(hash, category string) string {
if category == "" {
category = "uncategorized"
}
return fmt.Sprintf("%s|%s", hash, category)
}
type Torrents = map[string]*Torrent
type TorrentStorage struct {
torrents Torrents
mu sync.RWMutex
filename string // Added to store the filename for persistence
}
func loadTorrentsFromJSON(filename string) (Torrents, error) {
data, err := os.ReadFile(filename)
if err != nil {
return nil, err
}
torrents := make(Torrents)
if err := json.Unmarshal(data, &torrents); err != nil {
return nil, err
}
return torrents, nil
}
func NewTorrentStorage(filename string) *TorrentStorage {
// Open the JSON file and read the data
torrents, err := loadTorrentsFromJSON(filename)
if err != nil {
torrents = make(Torrents)
}
// Create a new TorrentStorage
return &TorrentStorage{
torrents: torrents,
filename: filename,
}
}
func (ts *TorrentStorage) Add(torrent *Torrent) {
ts.mu.Lock()
defer ts.mu.Unlock()
ts.torrents[keyPair(torrent.Hash, torrent.Category)] = torrent
go func() {
err := ts.saveToFile()
if err != nil {
fmt.Println(err)
}
}()
}
func (ts *TorrentStorage) AddOrUpdate(torrent *Torrent) {
ts.mu.Lock()
defer ts.mu.Unlock()
ts.torrents[keyPair(torrent.Hash, torrent.Category)] = torrent
go func() {
err := ts.saveToFile()
if err != nil {
fmt.Println(err)
}
}()
}
func (ts *TorrentStorage) Get(hash, category string) *Torrent {
ts.mu.RLock()
defer ts.mu.RUnlock()
torrent, exists := ts.torrents[keyPair(hash, category)]
if !exists && category == "" {
// Try to find the torrent without knowing the category
for _, t := range ts.torrents {
if t.Hash == hash {
return t
}
}
}
return torrent
}
func (ts *TorrentStorage) GetAll(category string, filter string, hashes []string) []*Torrent {
ts.mu.RLock()
defer ts.mu.RUnlock()
torrents := make([]*Torrent, 0)
for _, torrent := range ts.torrents {
if category != "" && torrent.Category != category {
continue
}
if filter != "" && torrent.State != filter {
continue
}
torrents = append(torrents, torrent)
}
if len(hashes) > 0 {
filtered := make([]*Torrent, 0)
for _, hash := range hashes {
for _, torrent := range torrents {
if torrent.Hash == hash {
filtered = append(filtered, torrent)
}
}
}
torrents = filtered
}
return torrents
}
func (ts *TorrentStorage) GetAllSorted(category string, filter string, hashes []string, sortBy string, ascending bool) []*Torrent {
torrents := ts.GetAll(category, filter, hashes)
if sortBy != "" {
sort.Slice(torrents, func(i, j int) bool {
// If ascending is false, swap i and j to get descending order
if !ascending {
i, j = j, i
}
switch sortBy {
case "name":
return torrents[i].Name < torrents[j].Name
case "size":
return torrents[i].Size < torrents[j].Size
case "added_on":
return torrents[i].AddedOn < torrents[j].AddedOn
case "completed":
return torrents[i].Completed < torrents[j].Completed
case "progress":
return torrents[i].Progress < torrents[j].Progress
case "state":
return torrents[i].State < torrents[j].State
case "category":
return torrents[i].Category < torrents[j].Category
case "dlspeed":
return torrents[i].Dlspeed < torrents[j].Dlspeed
case "upspeed":
return torrents[i].Upspeed < torrents[j].Upspeed
case "ratio":
return torrents[i].Ratio < torrents[j].Ratio
default:
// Default sort by added_on
return torrents[i].AddedOn < torrents[j].AddedOn
}
})
}
return torrents
}
func (ts *TorrentStorage) Update(torrent *Torrent) {
ts.mu.Lock()
defer ts.mu.Unlock()
ts.torrents[keyPair(torrent.Hash, torrent.Category)] = torrent
go func() {
err := ts.saveToFile()
if err != nil {
fmt.Println(err)
}
}()
}
func (ts *TorrentStorage) Delete(hash, category string) {
ts.mu.Lock()
defer ts.mu.Unlock()
key := keyPair(hash, category)
torrent, exists := ts.torrents[key]
if !exists && category == "" {
// Remove the torrent without knowing the category
for k, t := range ts.torrents {
if t.Hash == hash {
key = k
torrent = t
break
}
}
}
delete(ts.torrents, key)
if torrent == nil {
return
}
// Delete the torrent folder
if torrent.ContentPath != "" {
err := os.RemoveAll(torrent.ContentPath)
if err != nil {
return
}
}
go func() {
err := ts.saveToFile()
if err != nil {
fmt.Println(err)
}
}()
}
func (ts *TorrentStorage) DeleteMultiple(hashes []string) {
ts.mu.Lock()
defer ts.mu.Unlock()
for _, hash := range hashes {
for key, torrent := range ts.torrents {
if torrent.Hash == hash {
delete(ts.torrents, key)
}
}
}
go func() {
err := ts.saveToFile()
if err != nil {
fmt.Println(err)
}
}()
}
func (ts *TorrentStorage) Save() error {
return ts.saveToFile()
}
// saveToFile is a helper function to write the current state to the JSON file
func (ts *TorrentStorage) saveToFile() error {
ts.mu.RLock()
data, err := json.MarshalIndent(ts.torrents, "", " ")
ts.mu.RUnlock()
if err != nil {
return err
}
return os.WriteFile(ts.filename, data, 0644)
}

pkg/qbit/torrent.go Normal file

@@ -0,0 +1,345 @@
package qbit
import (
"cmp"
"context"
"fmt"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
db "github.com/sirrobot01/decypharr/pkg/debrid/debrid"
debrid "github.com/sirrobot01/decypharr/pkg/debrid/types"
"github.com/sirrobot01/decypharr/pkg/service"
"io"
"mime/multipart"
"os"
"path/filepath"
"slices"
"strings"
"time"
)
// All torrent-related helpers go here
func (q *QBit) AddMagnet(ctx context.Context, url, category string) error {
magnet, err := utils.GetMagnetFromUrl(url)
if err != nil {
return fmt.Errorf("error parsing magnet link: %w", err)
}
err = q.Process(ctx, magnet, category)
if err != nil {
return fmt.Errorf("failed to process torrent: %w", err)
}
return nil
}
func (q *QBit) AddTorrent(ctx context.Context, fileHeader *multipart.FileHeader, category string) error {
file, err := fileHeader.Open()
if err != nil {
return fmt.Errorf("error opening file %s: %w", fileHeader.Filename, err)
}
defer file.Close()
var reader io.Reader = file
magnet, err := utils.GetMagnetFromFile(reader, fileHeader.Filename)
if err != nil {
return fmt.Errorf("error reading file: %s \n %w", fileHeader.Filename, err)
}
err = q.Process(ctx, magnet, category)
if err != nil {
return fmt.Errorf("failed to process torrent: %w", err)
}
return nil
}
func (q *QBit) Process(ctx context.Context, magnet *utils.Magnet, category string) error {
svc := service.GetService()
torrent := createTorrentFromMagnet(magnet, category, "auto")
a, ok := ctx.Value("arr").(*arr.Arr)
if !ok {
return fmt.Errorf("arr not found in context")
}
isSymlink, _ := ctx.Value("isSymlink").(bool) // comma-ok form avoids a panic when the context value is missing
debridTorrent, err := db.ProcessTorrent(svc.Debrid, magnet, a, isSymlink, false)
if err != nil || debridTorrent == nil {
if debridTorrent != nil {
dbClient := service.GetDebrid().GetByName(debridTorrent.Debrid)
go func() {
_ = dbClient.DeleteTorrent(debridTorrent.Id)
}()
}
if err == nil {
err = fmt.Errorf("failed to process torrent")
}
return err
}
torrent = q.UpdateTorrentMin(torrent, debridTorrent)
q.Storage.AddOrUpdate(torrent)
go q.ProcessFiles(torrent, debridTorrent, a, isSymlink) // We can send async for file processing not to delay the response
return nil
}
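// ProcessFiles polls the debrid provider until the torrent leaves a downloading state, then
// either symlinks against the internal webdav cache / zurg mount or processes the files manually,
// notifying Discord and the arr on completion. It is intended to run in its own goroutine.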
func (q *QBit) ProcessFiles(torrent *Torrent, debridTorrent *debrid.Torrent, arr *arr.Arr, isSymlink bool) {
svc := service.GetService()
client := svc.Debrid.GetByName(debridTorrent.Debrid)
for debridTorrent.Status != "downloaded" {
q.logger.Debug().Msgf("%s <- (%s) Download Progress: %.2f%%", debridTorrent.Debrid, debridTorrent.Name, debridTorrent.Progress)
dbT, err := client.CheckStatus(debridTorrent, isSymlink)
if err != nil {
q.logger.Error().Msgf("Error checking status: %v", err)
go func() {
err := client.DeleteTorrent(debridTorrent.Id)
if err != nil {
q.logger.Error().Msgf("Error deleting torrent: %v", err)
}
}()
q.MarkAsFailed(torrent)
if err := arr.Refresh(); err != nil {
q.logger.Error().Msgf("Error refreshing arr: %v", err)
}
return
}
debridTorrent = dbT
torrent = q.UpdateTorrentMin(torrent, debridTorrent)
// Exit the loop once the torrent is no longer in a downloading status to prevent memory buildup
if !slices.Contains(client.GetDownloadingStatus(), debridTorrent.Status) {
break
}
time.Sleep(time.Duration(q.RefreshInterval) * time.Second)
}
var (
torrentSymlinkPath string
err error
)
debridTorrent.Arr = arr
// File is done downloading at this stage
// Check if debrid supports webdav by checking cache
if isSymlink {
cache, ok := svc.Debrid.Caches[debridTorrent.Debrid]
if ok {
q.logger.Info().Msgf("Using internal webdav for %s", debridTorrent.Debrid)
// Use webdav to download the file
if err := cache.AddTorrent(debridTorrent); err != nil {
q.logger.Error().Msgf("Error adding torrent to cache: %v", err)
q.MarkAsFailed(torrent)
return
}
rclonePath := filepath.Join(debridTorrent.MountPath, cache.GetTorrentFolder(debridTorrent)) // /mnt/remote/realdebrid/MyTVShow
torrentFolderNoExt := utils.RemoveExtension(debridTorrent.Name)
torrentSymlinkPath, err = q.createSymlinks(debridTorrent, rclonePath, torrentFolderNoExt) // /mnt/symlinks/{category}/MyTVShow/
} else {
// User is using either zurg or debrid webdav
torrentSymlinkPath, err = q.ProcessSymlink(torrent) // /mnt/symlinks/{category}/MyTVShow/
}
} else {
torrentSymlinkPath, err = q.ProcessManualFile(torrent)
}
if err != nil {
q.MarkAsFailed(torrent)
go func() {
err := client.DeleteTorrent(debridTorrent.Id)
if err != nil {
q.logger.Error().Msgf("Error deleting torrent: %v", err)
}
}()
q.logger.Info().Msgf("Error: %v", err)
return
}
torrent.TorrentPath = torrentSymlinkPath
q.UpdateTorrent(torrent, debridTorrent)
go func() {
if err := request.SendDiscordMessage("download_complete", "success", torrent.discordContext()); err != nil {
q.logger.Error().Msgf("Error sending discord message: %v", err)
}
}()
if err := arr.Refresh(); err != nil {
q.logger.Error().Msgf("Error refreshing arr: %v", err)
}
}
func (q *QBit) MarkAsFailed(t *Torrent) *Torrent {
t.State = "error"
q.Storage.AddOrUpdate(t)
go func() {
if err := request.SendDiscordMessage("download_failed", "error", t.discordContext()); err != nil {
q.logger.Error().Msgf("Error sending discord message: %v", err)
}
}()
return t
}
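// UpdateTorrentMin maps the debrid torrent's fields onto the qBittorrent-style Torrent without
// contacting the provider: progress is normalised from a 0-100 value to a 0-1 fraction, and the
// completed/uploaded counters and ETA are derived from it.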
func (q *QBit) UpdateTorrentMin(t *Torrent, debridTorrent *debrid.Torrent) *Torrent {
if debridTorrent == nil {
return t
}
addedOn, err := time.Parse(time.RFC3339, debridTorrent.Added)
if err != nil {
addedOn = time.Now()
}
totalSize := debridTorrent.Bytes
progress := cmp.Or(debridTorrent.Progress, 100)
progress = progress / 100.0
sizeCompleted := int64(float64(totalSize) * progress)
var speed int64
if debridTorrent.Speed != 0 {
speed = debridTorrent.Speed
}
var eta int
if speed != 0 {
eta = int((totalSize - sizeCompleted) / speed)
}
t.ID = debridTorrent.Id
t.Name = debridTorrent.Name
t.AddedOn = addedOn.Unix()
t.DebridTorrent = debridTorrent
t.Debrid = debridTorrent.Debrid
t.Size = totalSize
t.Completed = sizeCompleted
t.Downloaded = sizeCompleted
t.DownloadedSession = sizeCompleted
t.Uploaded = sizeCompleted
t.UploadedSession = sizeCompleted
t.AmountLeft = totalSize - sizeCompleted
t.Progress = progress
t.Eta = eta
t.Dlspeed = speed
t.Upspeed = speed
t.SavePath = filepath.Join(q.DownloadFolder, t.Category) + string(os.PathSeparator)
t.ContentPath = filepath.Join(t.SavePath, t.Name) + string(os.PathSeparator)
return t
}
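// UpdateTorrent refreshes the torrent from the debrid provider, then waits (polling every 100ms,
// bounded by a timeout) until the symlink/content path is ready before marking it pausedUP.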
func (q *QBit) UpdateTorrent(t *Torrent, debridTorrent *debrid.Torrent) *Torrent {
if debridTorrent == nil {
return t
}
_db := service.GetDebrid().GetByName(debridTorrent.Debrid)
if debridTorrent.Status != "downloaded" {
_ = _db.UpdateTorrent(debridTorrent)
}
t = q.UpdateTorrentMin(t, debridTorrent)
t.ContentPath = t.TorrentPath + string(os.PathSeparator)
if t.IsReady() {
t.State = "pausedUP"
q.Storage.Update(t)
return t
}
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
timeout := time.After(10 * time.Minute) // create the timeout once; a time.After inside the select would be recreated on every tick and never fire
for {
select {
case <-ticker.C:
if t.IsReady() {
t.State = "pausedUP"
q.Storage.Update(t)
return t
}
t = q.UpdateTorrent(t, debridTorrent)
case <-timeout:
return t
}
}
}
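// ResumeTorrent, PauseTorrent and RefreshTorrent are no-op stubs; debrid-backed torrents have no
// local transfer to control, so they simply report success to API consumers.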
func (q *QBit) ResumeTorrent(t *Torrent) bool {
return true
}
func (q *QBit) PauseTorrent(t *Torrent) bool {
return true
}
func (q *QBit) RefreshTorrent(t *Torrent) bool {
return true
}
func (q *QBit) GetTorrentProperties(t *Torrent) *TorrentProperties {
return &TorrentProperties{
AdditionDate: t.AddedOn,
Comment: "Debrid Blackhole <https://github.com/sirrobot01/decypharr>",
CreatedBy: "Debrid Blackhole <https://github.com/sirrobot01/decypharr>",
CreationDate: t.AddedOn,
DlLimit: -1,
UpLimit: -1,
DlSpeed: t.Dlspeed,
UpSpeed: t.Upspeed,
TotalSize: t.Size,
TotalUploaded: t.Uploaded,
TotalDownloaded: t.Downloaded,
TotalUploadedSession: t.UploadedSession,
TotalDownloadedSession: t.DownloadedSession,
LastSeen: time.Now().Unix(),
NbConnectionsLimit: 100,
Peers: 0,
PeersTotal: 2,
SeedingTime: 1,
Seeds: 100,
ShareRatio: 100,
}
}
func (q *QBit) GetTorrentFiles(t *Torrent) []*TorrentFile {
files := make([]*TorrentFile, 0)
if t.DebridTorrent == nil {
return files
}
for _, file := range t.DebridTorrent.Files {
files = append(files, &TorrentFile{
Name: file.Path,
Size: file.Size,
})
}
return files
}
func (q *QBit) SetTorrentTags(t *Torrent, tags []string) bool {
torrentTags := strings.Split(t.Tags, ",")
for _, tag := range tags {
if tag == "" {
continue
}
if !slices.Contains(torrentTags, tag) {
torrentTags = append(torrentTags, tag)
}
if !slices.Contains(q.Tags, tag) {
q.Tags = append(q.Tags, tag)
}
}
t.Tags = strings.Join(torrentTags, ",")
q.Storage.Update(t)
return true
}
func (q *QBit) RemoveTorrentTags(t *Torrent, tags []string) bool {
torrentTags := strings.Split(t.Tags, ",")
newTorrentTags := utils.RemoveItem(torrentTags, tags...)
q.Tags = utils.RemoveItem(q.Tags, tags...)
t.Tags = strings.Join(newTorrentTags, ",")
q.Storage.Update(t)
return true
}
func (q *QBit) AddTags(tags []string) bool {
for _, tag := range tags {
if tag == "" {
continue
}
if !slices.Contains(q.Tags, tag) {
q.Tags = append(q.Tags, tag)
}
}
return true
}
func (q *QBit) RemoveTags(tags []string) bool {
q.Tags = utils.RemoveItem(q.Tags, tags...)
return true
}

442
pkg/qbit/types.go Normal file

@@ -0,0 +1,442 @@
package qbit
import (
"fmt"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"sync"
)
type BuildInfo struct {
Libtorrent string `json:"libtorrent"`
Bitness int `json:"bitness"`
Boost string `json:"boost"`
Openssl string `json:"openssl"`
Qt string `json:"qt"`
Zlib string `json:"zlib"`
}
type AppPreferences struct {
AddTrackers string `json:"add_trackers"`
AddTrackersEnabled bool `json:"add_trackers_enabled"`
AltDlLimit int `json:"alt_dl_limit"`
AltUpLimit int `json:"alt_up_limit"`
AlternativeWebuiEnabled bool `json:"alternative_webui_enabled"`
AlternativeWebuiPath string `json:"alternative_webui_path"`
AnnounceIp string `json:"announce_ip"`
AnnounceToAllTiers bool `json:"announce_to_all_tiers"`
AnnounceToAllTrackers bool `json:"announce_to_all_trackers"`
AnonymousMode bool `json:"anonymous_mode"`
AsyncIoThreads int `json:"async_io_threads"`
AutoDeleteMode int `json:"auto_delete_mode"`
AutoTmmEnabled bool `json:"auto_tmm_enabled"`
AutorunEnabled bool `json:"autorun_enabled"`
AutorunProgram string `json:"autorun_program"`
BannedIPs string `json:"banned_IPs"`
BittorrentProtocol int `json:"bittorrent_protocol"`
BypassAuthSubnetWhitelist string `json:"bypass_auth_subnet_whitelist"`
BypassAuthSubnetWhitelistEnabled bool `json:"bypass_auth_subnet_whitelist_enabled"`
BypassLocalAuth bool `json:"bypass_local_auth"`
CategoryChangedTmmEnabled bool `json:"category_changed_tmm_enabled"`
CheckingMemoryUse int `json:"checking_memory_use"`
CreateSubfolderEnabled bool `json:"create_subfolder_enabled"`
CurrentInterfaceAddress string `json:"current_interface_address"`
CurrentNetworkInterface string `json:"current_network_interface"`
Dht bool `json:"dht"`
DiskCache int `json:"disk_cache"`
DiskCacheTtl int `json:"disk_cache_ttl"`
DlLimit int `json:"dl_limit"`
DontCountSlowTorrents bool `json:"dont_count_slow_torrents"`
DyndnsDomain string `json:"dyndns_domain"`
DyndnsEnabled bool `json:"dyndns_enabled"`
DyndnsPassword string `json:"dyndns_password"`
DyndnsService int `json:"dyndns_service"`
DyndnsUsername string `json:"dyndns_username"`
EmbeddedTrackerPort int `json:"embedded_tracker_port"`
EnableCoalesceReadWrite bool `json:"enable_coalesce_read_write"`
EnableEmbeddedTracker bool `json:"enable_embedded_tracker"`
EnableMultiConnectionsFromSameIp bool `json:"enable_multi_connections_from_same_ip"`
EnableOsCache bool `json:"enable_os_cache"`
EnablePieceExtentAffinity bool `json:"enable_piece_extent_affinity"`
EnableSuperSeeding bool `json:"enable_super_seeding"`
EnableUploadSuggestions bool `json:"enable_upload_suggestions"`
Encryption int `json:"encryption"`
ExportDir string `json:"export_dir"`
ExportDirFin string `json:"export_dir_fin"`
FilePoolSize int `json:"file_pool_size"`
IncompleteFilesExt bool `json:"incomplete_files_ext"`
IpFilterEnabled bool `json:"ip_filter_enabled"`
IpFilterPath string `json:"ip_filter_path"`
IpFilterTrackers bool `json:"ip_filter_trackers"`
LimitLanPeers bool `json:"limit_lan_peers"`
LimitTcpOverhead bool `json:"limit_tcp_overhead"`
LimitUtpRate bool `json:"limit_utp_rate"`
ListenPort int `json:"listen_port"`
Locale string `json:"locale"`
Lsd bool `json:"lsd"`
MailNotificationAuthEnabled bool `json:"mail_notification_auth_enabled"`
MailNotificationEmail string `json:"mail_notification_email"`
MailNotificationEnabled bool `json:"mail_notification_enabled"`
MailNotificationPassword string `json:"mail_notification_password"`
MailNotificationSender string `json:"mail_notification_sender"`
MailNotificationSmtp string `json:"mail_notification_smtp"`
MailNotificationSslEnabled bool `json:"mail_notification_ssl_enabled"`
MailNotificationUsername string `json:"mail_notification_username"`
MaxActiveDownloads int `json:"max_active_downloads"`
MaxActiveTorrents int `json:"max_active_torrents"`
MaxActiveUploads int `json:"max_active_uploads"`
MaxConnec int `json:"max_connec"`
MaxConnecPerTorrent int `json:"max_connec_per_torrent"`
MaxRatio int `json:"max_ratio"`
MaxRatioAct int `json:"max_ratio_act"`
MaxRatioEnabled bool `json:"max_ratio_enabled"`
MaxSeedingTime int `json:"max_seeding_time"`
MaxSeedingTimeEnabled bool `json:"max_seeding_time_enabled"`
MaxUploads int `json:"max_uploads"`
MaxUploadsPerTorrent int `json:"max_uploads_per_torrent"`
OutgoingPortsMax int `json:"outgoing_ports_max"`
OutgoingPortsMin int `json:"outgoing_ports_min"`
Pex bool `json:"pex"`
PreallocateAll bool `json:"preallocate_all"`
ProxyAuthEnabled bool `json:"proxy_auth_enabled"`
ProxyIp string `json:"proxy_ip"`
ProxyPassword string `json:"proxy_password"`
ProxyPeerConnections bool `json:"proxy_peer_connections"`
ProxyPort int `json:"proxy_port"`
ProxyTorrentsOnly bool `json:"proxy_torrents_only"`
ProxyType int `json:"proxy_type"`
ProxyUsername string `json:"proxy_username"`
QueueingEnabled bool `json:"queueing_enabled"`
RandomPort bool `json:"random_port"`
RecheckCompletedTorrents bool `json:"recheck_completed_torrents"`
ResolvePeerCountries bool `json:"resolve_peer_countries"`
RssAutoDownloadingEnabled bool `json:"rss_auto_downloading_enabled"`
RssMaxArticlesPerFeed int `json:"rss_max_articles_per_feed"`
RssProcessingEnabled bool `json:"rss_processing_enabled"`
RssRefreshInterval int `json:"rss_refresh_interval"`
SavePath string `json:"save_path"`
SavePathChangedTmmEnabled bool `json:"save_path_changed_tmm_enabled"`
SaveResumeDataInterval int `json:"save_resume_data_interval"`
ScanDirs ScanDirs `json:"scan_dirs"`
ScheduleFromHour int `json:"schedule_from_hour"`
ScheduleFromMin int `json:"schedule_from_min"`
ScheduleToHour int `json:"schedule_to_hour"`
ScheduleToMin int `json:"schedule_to_min"`
SchedulerDays int `json:"scheduler_days"`
SchedulerEnabled bool `json:"scheduler_enabled"`
SendBufferLowWatermark int `json:"send_buffer_low_watermark"`
SendBufferWatermark int `json:"send_buffer_watermark"`
SendBufferWatermarkFactor int `json:"send_buffer_watermark_factor"`
SlowTorrentDlRateThreshold int `json:"slow_torrent_dl_rate_threshold"`
SlowTorrentInactiveTimer int `json:"slow_torrent_inactive_timer"`
SlowTorrentUlRateThreshold int `json:"slow_torrent_ul_rate_threshold"`
SocketBacklogSize int `json:"socket_backlog_size"`
StartPausedEnabled bool `json:"start_paused_enabled"`
StopTrackerTimeout int `json:"stop_tracker_timeout"`
TempPath string `json:"temp_path"`
TempPathEnabled bool `json:"temp_path_enabled"`
TorrentChangedTmmEnabled bool `json:"torrent_changed_tmm_enabled"`
UpLimit int `json:"up_limit"`
UploadChokingAlgorithm int `json:"upload_choking_algorithm"`
UploadSlotsBehavior int `json:"upload_slots_behavior"`
Upnp bool `json:"upnp"`
UpnpLeaseDuration int `json:"upnp_lease_duration"`
UseHttps bool `json:"use_https"`
UtpTcpMixedMode int `json:"utp_tcp_mixed_mode"`
WebUiAddress string `json:"web_ui_address"`
WebUiBanDuration int `json:"web_ui_ban_duration"`
WebUiClickjackingProtectionEnabled bool `json:"web_ui_clickjacking_protection_enabled"`
WebUiCsrfProtectionEnabled bool `json:"web_ui_csrf_protection_enabled"`
WebUiDomainList string `json:"web_ui_domain_list"`
WebUiHostHeaderValidationEnabled bool `json:"web_ui_host_header_validation_enabled"`
WebUiHttpsCertPath string `json:"web_ui_https_cert_path"`
WebUiHttpsKeyPath string `json:"web_ui_https_key_path"`
WebUiMaxAuthFailCount int `json:"web_ui_max_auth_fail_count"`
WebUiPort int `json:"web_ui_port"`
WebUiSecureCookieEnabled bool `json:"web_ui_secure_cookie_enabled"`
WebUiSessionTimeout int `json:"web_ui_session_timeout"`
WebUiUpnp bool `json:"web_ui_upnp"`
WebUiUsername string `json:"web_ui_username"`
WebUiPassword string `json:"web_ui_password"`
SSLKey string `json:"ssl_key"`
SSLCert string `json:"ssl_cert"`
RSSDownloadRepack string `json:"rss_download_repack_proper_episodes"`
RSSSmartEpisodeFilters string `json:"rss_smart_episode_filters"`
WebUiUseCustomHttpHeaders bool `json:"web_ui_use_custom_http_headers"`
WebUiUseCustomHttpHeadersEnabled bool `json:"web_ui_use_custom_http_headers_enabled"`
}
type ScanDirs struct{}
type TorrentCategory struct {
Name string `json:"name"`
SavePath string `json:"savePath"`
}
type Torrent struct {
ID string `json:"id"`
DebridTorrent *types.Torrent `json:"-"`
Debrid string `json:"debrid"`
TorrentPath string `json:"-"`
AddedOn int64 `json:"added_on,omitempty"`
AmountLeft int64 `json:"amount_left"`
AutoTmm bool `json:"auto_tmm"`
Availability float64 `json:"availability,omitempty"`
Category string `json:"category,omitempty"`
Completed int64 `json:"completed"`
CompletionOn int `json:"completion_on,omitempty"`
ContentPath string `json:"content_path"`
DlLimit int `json:"dl_limit"`
Dlspeed int64 `json:"dlspeed"`
Downloaded int64 `json:"downloaded"`
DownloadedSession int64 `json:"downloaded_session"`
Eta int `json:"eta"`
FlPiecePrio bool `json:"f_l_piece_prio,omitempty"`
ForceStart bool `json:"force_start,omitempty"`
Hash string `json:"hash"`
LastActivity int64 `json:"last_activity,omitempty"`
MagnetUri string `json:"magnet_uri,omitempty"`
MaxRatio int `json:"max_ratio,omitempty"`
MaxSeedingTime int `json:"max_seeding_time,omitempty"`
Name string `json:"name,omitempty"`
NumComplete int `json:"num_complete,omitempty"`
NumIncomplete int `json:"num_incomplete,omitempty"`
NumLeechs int `json:"num_leechs,omitempty"`
NumSeeds int `json:"num_seeds,omitempty"`
Priority int `json:"priority,omitempty"`
Progress float64 `json:"progress"`
Ratio int `json:"ratio,omitempty"`
RatioLimit int `json:"ratio_limit,omitempty"`
SavePath string `json:"save_path"`
SeedingTimeLimit int `json:"seeding_time_limit,omitempty"`
SeenComplete int64 `json:"seen_complete,omitempty"`
SeqDl bool `json:"seq_dl"`
Size int64 `json:"size,omitempty"`
State string `json:"state,omitempty"`
SuperSeeding bool `json:"super_seeding"`
Tags string `json:"tags,omitempty"`
TimeActive int `json:"time_active,omitempty"`
TotalSize int64 `json:"total_size,omitempty"`
Tracker string `json:"tracker,omitempty"`
UpLimit int64 `json:"up_limit,omitempty"`
Uploaded int64 `json:"uploaded,omitempty"`
UploadedSession int64 `json:"uploaded_session,omitempty"`
Upspeed int64 `json:"upspeed,omitempty"`
Source string `json:"source,omitempty"`
Mu sync.Mutex `json:"-"`
}
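// IsReady reports whether the torrent has nothing left to download and its symlink/target path has been set.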
func (t *Torrent) IsReady() bool {
return t.AmountLeft <= 0 && t.TorrentPath != ""
}
func (t *Torrent) discordContext() string {
format := `
**Name:** %s
**Arr:** %s
**Hash:** %s
**MagnetURI:** %s
**Debrid:** %s
`
return fmt.Sprintf(format, t.Name, t.Category, t.Hash, t.MagnetUri, t.Debrid)
}
type TorrentProperties struct {
AdditionDate int64 `json:"addition_date,omitempty"`
Comment string `json:"comment,omitempty"`
CompletionDate int64 `json:"completion_date,omitempty"`
CreatedBy string `json:"created_by,omitempty"`
CreationDate int64 `json:"creation_date,omitempty"`
DlLimit int `json:"dl_limit,omitempty"`
DlSpeed int64 `json:"dl_speed,omitempty"`
DlSpeedAvg int `json:"dl_speed_avg,omitempty"`
Eta int `json:"eta,omitempty"`
LastSeen int64 `json:"last_seen,omitempty"`
NbConnections int `json:"nb_connections,omitempty"`
NbConnectionsLimit int `json:"nb_connections_limit,omitempty"`
Peers int `json:"peers,omitempty"`
PeersTotal int `json:"peers_total,omitempty"`
PieceSize int64 `json:"piece_size,omitempty"`
PiecesHave int64 `json:"pieces_have,omitempty"`
PiecesNum int64 `json:"pieces_num,omitempty"`
Reannounce int `json:"reannounce,omitempty"`
SavePath string `json:"save_path,omitempty"`
SeedingTime int `json:"seeding_time,omitempty"`
Seeds int `json:"seeds,omitempty"`
SeedsTotal int `json:"seeds_total,omitempty"`
ShareRatio int `json:"share_ratio,omitempty"`
TimeElapsed int64 `json:"time_elapsed,omitempty"`
TotalDownloaded int64 `json:"total_downloaded,omitempty"`
TotalDownloadedSession int64 `json:"total_downloaded_session,omitempty"`
TotalSize int64 `json:"total_size,omitempty"`
TotalUploaded int64 `json:"total_uploaded,omitempty"`
TotalUploadedSession int64 `json:"total_uploaded_session,omitempty"`
TotalWasted int64 `json:"total_wasted,omitempty"`
UpLimit int `json:"up_limit,omitempty"`
UpSpeed int64 `json:"up_speed,omitempty"`
UpSpeedAvg int `json:"up_speed_avg,omitempty"`
}
type TorrentFile struct {
Index int `json:"index,omitempty"`
Name string `json:"name,omitempty"`
Size int64 `json:"size,omitempty"`
Progress int `json:"progress,omitempty"`
Priority int `json:"priority,omitempty"`
IsSeed bool `json:"is_seed,omitempty"`
PieceRange []int `json:"piece_range,omitempty"`
Availability float64 `json:"availability,omitempty"`
}
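// NewAppPreferences returns qBittorrent-like default preferences, presumably served to clients
// querying the qBittorrent app/preferences endpoint.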
func NewAppPreferences() *AppPreferences {
preferences := &AppPreferences{
AddTrackers: "",
AddTrackersEnabled: false,
AltDlLimit: 10240,
AltUpLimit: 10240,
AlternativeWebuiEnabled: false,
AlternativeWebuiPath: "",
AnnounceIp: "",
AnnounceToAllTiers: true,
AnnounceToAllTrackers: false,
AnonymousMode: false,
AsyncIoThreads: 4,
AutoDeleteMode: 0,
AutoTmmEnabled: false,
AutorunEnabled: false,
AutorunProgram: "",
BannedIPs: "",
BittorrentProtocol: 0,
BypassAuthSubnetWhitelist: "",
BypassAuthSubnetWhitelistEnabled: false,
BypassLocalAuth: false,
CategoryChangedTmmEnabled: false,
CheckingMemoryUse: 32,
CreateSubfolderEnabled: true,
CurrentInterfaceAddress: "",
CurrentNetworkInterface: "",
Dht: true,
DiskCache: -1,
DiskCacheTtl: 60,
DlLimit: 0,
DontCountSlowTorrents: false,
DyndnsDomain: "changeme.dyndns.org",
DyndnsEnabled: false,
DyndnsPassword: "",
DyndnsService: 0,
DyndnsUsername: "",
EmbeddedTrackerPort: 9000,
EnableCoalesceReadWrite: true,
EnableEmbeddedTracker: false,
EnableMultiConnectionsFromSameIp: false,
EnableOsCache: true,
EnablePieceExtentAffinity: false,
EnableSuperSeeding: false,
EnableUploadSuggestions: false,
Encryption: 0,
ExportDir: "",
ExportDirFin: "",
FilePoolSize: 40,
IncompleteFilesExt: false,
IpFilterEnabled: false,
IpFilterPath: "",
IpFilterTrackers: false,
LimitLanPeers: true,
LimitTcpOverhead: false,
LimitUtpRate: true,
ListenPort: 31193,
Locale: "en",
Lsd: true,
MailNotificationAuthEnabled: false,
MailNotificationEmail: "",
MailNotificationEnabled: false,
MailNotificationPassword: "",
MailNotificationSender: "qBittorrentNotification@example.com",
MailNotificationSmtp: "smtp.changeme.com",
MailNotificationSslEnabled: false,
MailNotificationUsername: "",
MaxActiveDownloads: 3,
MaxActiveTorrents: 5,
MaxActiveUploads: 3,
MaxConnec: 500,
MaxConnecPerTorrent: 100,
MaxRatio: -1,
MaxRatioAct: 0,
MaxRatioEnabled: false,
MaxSeedingTime: -1,
MaxSeedingTimeEnabled: false,
MaxUploads: -1,
MaxUploadsPerTorrent: -1,
OutgoingPortsMax: 0,
OutgoingPortsMin: 0,
Pex: true,
PreallocateAll: false,
ProxyAuthEnabled: false,
ProxyIp: "0.0.0.0",
ProxyPassword: "",
ProxyPeerConnections: false,
ProxyPort: 8080,
ProxyTorrentsOnly: false,
ProxyType: 0,
ProxyUsername: "",
QueueingEnabled: false,
RandomPort: false,
RecheckCompletedTorrents: false,
ResolvePeerCountries: true,
RssAutoDownloadingEnabled: false,
RssMaxArticlesPerFeed: 50,
RssProcessingEnabled: false,
RssRefreshInterval: 30,
SavePathChangedTmmEnabled: false,
SaveResumeDataInterval: 60,
ScanDirs: ScanDirs{},
ScheduleFromHour: 8,
ScheduleFromMin: 0,
ScheduleToHour: 20,
ScheduleToMin: 0,
SchedulerDays: 0,
SchedulerEnabled: false,
SendBufferLowWatermark: 10,
SendBufferWatermark: 500,
SendBufferWatermarkFactor: 50,
SlowTorrentDlRateThreshold: 2,
SlowTorrentInactiveTimer: 60,
SlowTorrentUlRateThreshold: 2,
SocketBacklogSize: 30,
StartPausedEnabled: false,
StopTrackerTimeout: 1,
TempPathEnabled: false,
TorrentChangedTmmEnabled: true,
UpLimit: 0,
UploadChokingAlgorithm: 1,
UploadSlotsBehavior: 0,
Upnp: true,
UpnpLeaseDuration: 0,
UseHttps: false,
UtpTcpMixedMode: 0,
WebUiAddress: "*",
WebUiBanDuration: 3600,
WebUiClickjackingProtectionEnabled: true,
WebUiCsrfProtectionEnabled: true,
WebUiDomainList: "*",
WebUiHostHeaderValidationEnabled: true,
WebUiHttpsCertPath: "",
WebUiHttpsKeyPath: "",
WebUiMaxAuthFailCount: 5,
WebUiPort: 8080,
WebUiSecureCookieEnabled: true,
WebUiSessionTimeout: 3600,
WebUiUpnp: false,
// Fields in the struct but not in the JSON (set to zero values):
WebUiPassword: "",
SSLKey: "",
SSLCert: "",
RSSDownloadRepack: "",
RSSSmartEpisodeFilters: "",
WebUiUseCustomHttpHeaders: false,
WebUiUseCustomHttpHeadersEnabled: false,
}
return preferences
}

159
pkg/repair/clean.go Normal file

@@ -0,0 +1,159 @@
package repair
//func (r *Repair) clean(job *Job) error {
// // Create a new error group
// g, ctx := errgroup.WithContext(context.Background())
//
// uniqueItems := make(map[string]string)
// mu := sync.Mutex{}
//
// // Limit concurrent goroutines
// g.SetLimit(10)
//
// for _, a := range job.Arrs {
// a := a // Capture range variable
// g.Go(func() error {
// // Check if context was canceled
// select {
// case <-ctx.Done():
// return ctx.Err()
// default:
// }
//
// items, err := r.cleanArr(job, a, "")
// if err != nil {
// r.logger.Error().Err(err).Msgf("Error cleaning %s", a)
// return err
// }
//
// // Safely append the found items to the shared slice
// if len(items) > 0 {
// mu.Lock()
// for k, v := range items {
// uniqueItems[k] = v
// }
// mu.Unlock()
// }
//
// return nil
// })
// }
//
// if err := g.Wait(); err != nil {
// return err
// }
//
// if len(uniqueItems) == 0 {
// job.CompletedAt = time.Now()
// job.Status = JobCompleted
//
// go func() {
// if err := request.SendDiscordMessage("repair_clean_complete", "success", job.discordContext()); err != nil {
// r.logger.Error().Msgf("Error sending discord message: %v", err)
// }
// }()
//
// return nil
// }
//
// cache := r.deb.Caches["realdebrid"]
// if cache == nil {
// return fmt.Errorf("cache not found")
// }
// torrents := cache.GetTorrents()
//
// dangling := make([]string, 0)
// for _, t := range torrents {
// if _, ok := uniqueItems[t.Name]; !ok {
// dangling = append(dangling, t.Id)
// }
// }
//
// r.logger.Info().Msgf("Found %d delapitated items", len(dangling))
//
// if len(dangling) == 0 {
// job.CompletedAt = time.Now()
// job.Status = JobCompleted
// return nil
// }
//
// client := r.deb.Clients["realdebrid"]
// if client == nil {
// return fmt.Errorf("client not found")
// }
// for _, id := range dangling {
// err := client.DeleteTorrent(id)
// if err != nil {
// return err
// }
// }
//
// return nil
//}
//
//func (r *Repair) cleanArr(j *Job, _arr string, tmdbId string) (map[string]string, error) {
// uniqueItems := make(map[string]string)
// a := r.arrs.Get(_arr)
//
// r.logger.Info().Msgf("Starting repair for %s", a.Name)
// media, err := a.GetMedia(tmdbId)
// if err != nil {
// r.logger.Info().Msgf("Failed to get %s media: %v", a.Name, err)
// return uniqueItems, err
// }
//
// // Create a new error group
// g, ctx := errgroup.WithContext(context.Background())
//
// mu := sync.Mutex{}
//
// // Limit concurrent goroutines
// g.SetLimit(runtime.NumCPU() * 4)
//
// for _, m := range media {
// m := m // Create a new variable scoped to the loop iteration
// g.Go(func() error {
// // Check if context was canceled
// select {
// case <-ctx.Done():
// return ctx.Err()
// default:
// }
//
// u := r.getUniquePaths(m)
// for k, v := range u {
// mu.Lock()
// uniqueItems[k] = v
// mu.Unlock()
// }
// return nil
// })
// }
//
// if err := g.Wait(); err != nil {
// return uniqueItems, err
// }
//
// r.logger.Info().Msgf("Repair completed for %s. %d unique items", a.Name, len(uniqueItems))
// return uniqueItems, nil
//}
//func (r *Repair) getUniquePaths(media arr.Content) map[string]string {
// // Use zurg setup to check file availability with zurg
// // This reduces bandwidth usage significantly
//
// uniqueParents := make(map[string]string)
// files := media.Files
// for _, file := range files {
// target := getSymlinkTarget(file.Path)
// if target != "" {
// file.IsSymlink = true
// dir, f := filepath.Split(target)
// parent := filepath.Base(filepath.Clean(dir))
// // Set target path folder/file.mkv
// file.TargetPath = f
// uniqueParents[parent] = target
// }
// }
// return uniqueParents
//}

149
pkg/repair/misc.go Normal file

@@ -0,0 +1,149 @@
package repair
import (
"fmt"
"github.com/sirrobot01/decypharr/pkg/arr"
"os"
"path/filepath"
"strconv"
"strings"
"time"
)
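// parseSchedule accepts either a time of day ("HH:MM", interpreted as the next occurrence) or an
// interval such as "30m", "12h" or "1d"; an empty schedule defaults to one hour.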
func parseSchedule(schedule string) (time.Duration, error) {
if schedule == "" {
return time.Hour, nil // default 60m
}
// Check if it's a time-of-day format (HH:MM)
if strings.Contains(schedule, ":") {
return parseTimeOfDay(schedule)
}
// Otherwise treat as duration interval
return parseDurationInterval(schedule)
}
func parseTimeOfDay(schedule string) (time.Duration, error) {
now := time.Now()
scheduledTime, err := time.Parse("15:04", schedule)
if err != nil {
return 0, fmt.Errorf("invalid time format: %s. Use HH:MM in 24-hour format", schedule)
}
// Convert scheduled time to today
scheduleToday := time.Date(
now.Year(), now.Month(), now.Day(),
scheduledTime.Hour(), scheduledTime.Minute(), 0, 0,
now.Location(),
)
if scheduleToday.Before(now) {
scheduleToday = scheduleToday.Add(24 * time.Hour)
}
return scheduleToday.Sub(now), nil
}
func parseDurationInterval(interval string) (time.Duration, error) {
if len(interval) < 2 {
return 0, fmt.Errorf("invalid interval format: %s", interval)
}
numStr := interval[:len(interval)-1]
unit := interval[len(interval)-1]
num, err := strconv.Atoi(numStr)
if err != nil {
return 0, fmt.Errorf("invalid number in interval: %s", numStr)
}
switch unit {
case 'm':
return time.Duration(num) * time.Minute, nil
case 'h':
return time.Duration(num) * time.Hour, nil
case 'd':
return time.Duration(num) * 24 * time.Hour, nil
case 's':
return time.Duration(num) * time.Second, nil
default:
return 0, fmt.Errorf("invalid unit in interval: %c", unit)
}
}
func fileIsSymlinked(file string) bool {
info, err := os.Lstat(file)
if err != nil {
return false
}
return info.Mode()&os.ModeSymlink != 0
}
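// getSymlinkTarget resolves a symlink to its target, turning relative targets into absolute paths
// against the link's directory; it returns "" for non-symlinks.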
func getSymlinkTarget(file string) string {
if fileIsSymlinked(file) {
target, err := os.Readlink(file)
if err != nil {
return ""
}
if !filepath.IsAbs(target) {
dir := filepath.Dir(file)
target = filepath.Join(dir, target)
}
return target
}
return ""
}
func fileIsReadable(filePath string) error {
// First check if file exists and is accessible
info, err := os.Stat(filePath)
if err != nil {
return err
}
// Check if it's a regular file
if !info.Mode().IsRegular() {
return fmt.Errorf("not a regular file")
}
// Try to read the first 1024 bytes
err = checkFileStart(filePath)
if err != nil {
return err
}
return nil
}
func checkFileStart(filePath string) error {
f, err := os.Open(filePath)
if err != nil {
return err
}
defer f.Close()
// Read first 1kb
buffer := make([]byte, 1024)
_, err = f.Read(buffer)
if err != nil {
return err
}
return nil
}
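// collectFiles groups an item's files by the directory their symlinks resolve to (the torrent
// folder), recording each file's target basename so callers can probe one representative file per torrent.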
func collectFiles(media arr.Content) map[string][]arr.ContentFile {
uniqueParents := make(map[string][]arr.ContentFile)
files := media.Files
for _, file := range files {
target := getSymlinkTarget(file.Path)
if target != "" {
file.IsSymlink = true
dir, f := filepath.Split(target)
torrentNamePath := filepath.Clean(dir)
// Set target path folder/file.mkv
file.TargetPath = f
uniqueParents[torrentNamePath] = append(uniqueParents[torrentNamePath], file)
}
}
return uniqueParents
}

780
pkg/repair/repair.go Normal file

@@ -0,0 +1,780 @@
package repair
import (
"context"
"fmt"
"github.com/goccy/go-json"
"github.com/google/uuid"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/debrid/debrid"
"golang.org/x/sync/errgroup"
"net"
"net/http"
"net/url"
"os"
"path/filepath"
"runtime"
"sort"
"strings"
"sync"
"time"
)
type Repair struct {
Jobs map[string]*Job
arrs *arr.Storage
deb *debrid.Engine
duration time.Duration
runOnStart bool
ZurgURL string
IsZurg bool
useWebdav bool
autoProcess bool
logger zerolog.Logger
filename string
workers int
ctx context.Context
}
func New(arrs *arr.Storage, engine *debrid.Engine) *Repair {
cfg := config.Get()
duration, err := parseSchedule(cfg.Repair.Interval)
if err != nil {
duration = time.Hour * 24
}
workers := runtime.NumCPU() * 20
if cfg.Repair.Workers > 0 {
workers = cfg.Repair.Workers
}
r := &Repair{
arrs: arrs,
logger: logger.New("repair"),
duration: duration,
runOnStart: cfg.Repair.RunOnStart,
ZurgURL: cfg.Repair.ZurgURL,
useWebdav: cfg.Repair.UseWebDav,
autoProcess: cfg.Repair.AutoProcess,
filename: filepath.Join(cfg.Path, "repair.json"),
deb: engine,
workers: workers,
ctx: context.Background(),
}
if r.ZurgURL != "" {
r.IsZurg = true
}
// Load jobs from file
r.loadFromFile()
return r
}
func (r *Repair) Start(ctx context.Context) error {
cfg := config.Get()
r.ctx = ctx
if r.runOnStart {
r.logger.Info().Msgf("Running initial repair")
go func() {
if err := r.AddJob([]string{}, []string{}, r.autoProcess, true); err != nil {
r.logger.Error().Err(err).Msg("Error running initial repair")
}
}()
}
ticker := time.NewTicker(r.duration)
defer ticker.Stop()
r.logger.Info().Msgf("Starting repair worker with %v interval", r.duration)
for {
select {
case <-r.ctx.Done():
r.logger.Info().Msg("Repair worker stopped")
return nil
case t := <-ticker.C:
r.logger.Info().Msgf("Running repair at %v", t.Format("15:04:05"))
if err := r.AddJob([]string{}, []string{}, r.autoProcess, true); err != nil {
r.logger.Error().Err(err).Msg("Error running repair")
}
// If using time-of-day schedule, reset the ticker for next day
if strings.Contains(cfg.Repair.Interval, ":") {
ticker.Reset(r.duration)
}
r.logger.Info().Msgf("Next scheduled repair at %v", t.Add(r.duration).Format("15:04:05"))
}
}
}
type JobStatus string
const (
JobStarted JobStatus = "started"
JobPending JobStatus = "pending"
JobFailed JobStatus = "failed"
JobCompleted JobStatus = "completed"
JobProcessing JobStatus = "processing"
)
type Job struct {
ID string `json:"id"`
Arrs []string `json:"arrs"`
MediaIDs []string `json:"media_ids"`
StartedAt time.Time `json:"created_at"`
BrokenItems map[string][]arr.ContentFile `json:"broken_items"`
Status JobStatus `json:"status"`
CompletedAt time.Time `json:"finished_at"`
FailedAt time.Time `json:"failed_at"`
AutoProcess bool `json:"auto_process"`
Recurrent bool `json:"recurrent"`
Error string `json:"error"`
}
func (j *Job) discordContext() string {
format := `
**ID**: %s
**Arrs**: %s
**Media IDs**: %s
**Status**: %s
**Started At**: %s
**Completed At**: %s
`
dateFmt := "2006-01-02 15:04:05"
return fmt.Sprintf(format, j.ID, strings.Join(j.Arrs, ","), strings.Join(j.MediaIDs, ", "), j.Status, j.StartedAt.Format(dateFmt), j.CompletedAt.Format(dateFmt))
}
func (r *Repair) getArrs(arrNames []string) []string {
arrs := make([]string, 0)
if len(arrNames) == 0 {
// No specific arrs, get all
// Also check if any arrs are set to skip repair
_arrs := r.arrs.GetAll()
for _, a := range _arrs {
if a.SkipRepair {
continue
}
arrs = append(arrs, a.Name)
}
} else {
for _, name := range arrNames {
a := r.arrs.Get(name)
if a == nil || a.Host == "" || a.Token == "" {
continue
}
arrs = append(arrs, a.Name)
}
}
return arrs
}
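// jobKey builds a deduplication key from the arr names and media IDs so repeated repair requests
// reuse the same job entry.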
func jobKey(arrNames []string, mediaIDs []string) string {
return fmt.Sprintf("%s-%s", strings.Join(arrNames, ","), strings.Join(mediaIDs, ","))
}
func (r *Repair) reset(j *Job) {
// Update job for rerun
j.Status = JobStarted
j.StartedAt = time.Now()
j.CompletedAt = time.Time{}
j.FailedAt = time.Time{}
j.BrokenItems = nil
j.Error = ""
if j.Recurrent || j.Arrs == nil {
j.Arrs = r.getArrs([]string{}) // Get new arrs
}
}
func (r *Repair) newJob(arrsNames []string, mediaIDs []string) *Job {
arrs := r.getArrs(arrsNames)
return &Job{
ID: uuid.New().String(),
Arrs: arrs,
MediaIDs: mediaIDs,
StartedAt: time.Now(),
Status: JobStarted,
}
}
func (r *Repair) preRunChecks() error {
if r.useWebdav {
if len(r.deb.Caches) == 0 {
return fmt.Errorf("no caches found")
}
return nil
}
// Check if zurg url is reachable
if !r.IsZurg {
return nil
}
resp, err := http.Get(fmt.Sprint(r.ZurgURL, "/http/version.txt"))
if err != nil {
r.logger.Debug().Err(err).Msgf("Precheck failed: Failed to reach zurg at %s", r.ZurgURL)
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
r.logger.Debug().Msgf("Precheck failed: Zurg returned %d", resp.StatusCode)
return fmt.Errorf("zurg precheck returned status %d", resp.StatusCode)
}
return nil
}
func (r *Repair) AddJob(arrsNames []string, mediaIDs []string, autoProcess, recurrent bool) error {
key := jobKey(arrsNames, mediaIDs)
job, ok := r.Jobs[key]
if job != nil && job.Status == JobStarted {
return fmt.Errorf("job already running")
}
if !ok {
job = r.newJob(arrsNames, mediaIDs)
}
job.AutoProcess = autoProcess
job.Recurrent = recurrent
r.reset(job)
r.Jobs[key] = job
go r.saveToFile()
go func() {
if err := r.repair(job); err != nil {
r.logger.Error().Err(err).Msg("Error running repair")
r.logger.Error().Err(err).Msg("Error running repair")
job.FailedAt = time.Now()
job.Error = err.Error()
job.Status = JobFailed
job.CompletedAt = time.Now()
}
}()
return nil
}
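// repair runs pre-checks, then fans out one goroutine per arr via an errgroup, collecting broken
// files per arr and updating the job status (and Discord notifications) based on the outcome.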
func (r *Repair) repair(job *Job) error {
defer r.saveToFile()
if err := r.preRunChecks(); err != nil {
return err
}
// Use a mutex to protect concurrent access to brokenItems
var mu sync.Mutex
brokenItems := map[string][]arr.ContentFile{}
g, ctx := errgroup.WithContext(r.ctx)
for _, a := range job.Arrs {
a := a // Capture range variable
g.Go(func() error {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
var items []arr.ContentFile
var err error
if len(job.MediaIDs) == 0 {
items, err = r.repairArr(job, a, "")
if err != nil {
r.logger.Error().Err(err).Msgf("Error repairing %s", a)
return err
}
} else {
for _, id := range job.MediaIDs {
someItems, err := r.repairArr(job, a, id)
if err != nil {
r.logger.Error().Err(err).Msgf("Error repairing %s with ID %s", a, id)
return err
}
items = append(items, someItems...)
}
}
// Safely append the found items to the shared slice
if len(items) > 0 {
mu.Lock()
brokenItems[a] = items
mu.Unlock()
}
return nil
})
}
// Wait for all goroutines to complete and check for errors
if err := g.Wait(); err != nil {
job.FailedAt = time.Now()
job.Error = err.Error()
job.Status = JobFailed
job.CompletedAt = time.Now()
go func() {
if err := request.SendDiscordMessage("repair_failed", "error", job.discordContext()); err != nil {
r.logger.Error().Msgf("Error sending discord message: %v", err)
}
}()
return err
}
if len(brokenItems) == 0 {
job.CompletedAt = time.Now()
job.Status = JobCompleted
go func() {
if err := request.SendDiscordMessage("repair_complete", "success", job.discordContext()); err != nil {
r.logger.Error().Msgf("Error sending discord message: %v", err)
}
}()
return nil
}
job.BrokenItems = brokenItems
if job.AutoProcess {
// Job is already processed
job.CompletedAt = time.Now() // Mark as completed
job.Status = JobCompleted
go func() {
if err := request.SendDiscordMessage("repair_complete", "success", job.discordContext()); err != nil {
r.logger.Error().Msgf("Error sending discord message: %v", err)
}
}()
} else {
job.Status = JobPending
go func() {
if err := request.SendDiscordMessage("repair_pending", "pending", job.discordContext()); err != nil {
r.logger.Error().Msgf("Error sending discord message: %v", err)
}
}()
}
return nil
}
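// repairArr fetches the arr's media (optionally filtered by a TMDB/TVDB id), verifies the mounts
// are reachable, then checks each item for broken files with a bounded worker pool; with
// AutoProcess enabled the broken files are deleted and re-searched immediately.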
func (r *Repair) repairArr(j *Job, _arr string, tmdbId string) ([]arr.ContentFile, error) {
brokenItems := make([]arr.ContentFile, 0)
a := r.arrs.Get(_arr)
r.logger.Info().Msgf("Starting repair for %s", a.Name)
media, err := a.GetMedia(tmdbId)
if err != nil {
r.logger.Info().Msgf("Failed to get %s media: %v", a.Name, err)
return brokenItems, err
}
r.logger.Info().Msgf("Found %d %s media", len(media), a.Name)
if len(media) == 0 {
r.logger.Info().Msgf("No %s media found", a.Name)
return brokenItems, nil
}
// Check first media to confirm mounts are accessible
if !r.isMediaAccessible(media[0]) {
r.logger.Info().Msgf("Skipping repair. Parent directory not accessible for. Check your mounts")
return brokenItems, nil
}
// Mutex for brokenItems
var mu sync.Mutex
var wg sync.WaitGroup
workerChan := make(chan arr.Content, min(len(media), r.workers))
for i := 0; i < r.workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for m := range workerChan {
select {
case <-r.ctx.Done():
return
default:
}
items := r.getBrokenFiles(m)
if items != nil {
r.logger.Debug().Msgf("Found %d broken files for %s", len(items), m.Title)
if j.AutoProcess {
r.logger.Info().Msgf("Auto processing %d broken items for %s", len(items), m.Title)
// Delete broken items
if err := a.DeleteFiles(items); err != nil {
r.logger.Debug().Msgf("Failed to delete broken items for %s: %v", m.Title, err)
}
// Search for missing items
if err := a.SearchMissing(items); err != nil {
r.logger.Debug().Msgf("Failed to search missing items for %s: %v", m.Title, err)
}
}
mu.Lock()
brokenItems = append(brokenItems, items...)
mu.Unlock()
}
}
}()
}
feed:
for _, m := range media {
select {
case <-r.ctx.Done():
break feed // a plain break would only exit the select, not the loop
default:
workerChan <- m
}
}
close(workerChan)
wg.Wait()
if len(brokenItems) == 0 {
r.logger.Info().Msgf("No broken items found for %s", a.Name)
return brokenItems, nil
}
r.logger.Info().Msgf("Repair completed for %s. %d broken items found", a.Name, len(brokenItems))
return brokenItems, nil
}
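// isMediaAccessible checks the first file's symlink target and verifies that its grandparent
// directory (the mount root, e.g. /mnt/zurg/torrents) still exists, as a cheap guard against
// running repairs while a mount is down.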
func (r *Repair) isMediaAccessible(m arr.Content) bool {
files := m.Files
if len(files) == 0 {
return false
}
firstFile := files[0]
r.logger.Debug().Msgf("Checking parent directory for %s", firstFile.Path)
//if _, err := os.Stat(firstFile.Path); os.IsNotExist(err) {
// r.logger.Debug().Msgf("Parent directory not accessible for %s", firstFile.Path)
// return false
//}
// Check symlink parent directory
symlinkPath := getSymlinkTarget(firstFile.Path)
r.logger.Debug().Msgf("Checking symlink parent directory for %s", symlinkPath)
if symlinkPath != "" {
parentSymlink := filepath.Dir(filepath.Dir(symlinkPath)) // /mnt/zurg/torrents/movie/movie.mkv -> /mnt/zurg/torrents
if _, err := os.Stat(parentSymlink); os.IsNotExist(err) {
return false
}
}
return true
}
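// getBrokenFiles picks the verification strategy: the internal webdav caches when enabled, zurg's
// HTTP endpoint when a zurg URL is configured, otherwise a direct read of the files on disk.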
func (r *Repair) getBrokenFiles(media arr.Content) []arr.ContentFile {
if r.useWebdav {
return r.getWebdavBrokenFiles(media)
} else if r.IsZurg {
return r.getZurgBrokenFiles(media)
} else {
return r.getFileBrokenFiles(media)
}
}
func (r *Repair) getFileBrokenFiles(media arr.Content) []arr.ContentFile {
// This checks symlink target, try to get read a tiny bit of the file
brokenFiles := make([]arr.ContentFile, 0)
uniqueParents := collectFiles(media)
for parent, f := range uniqueParents {
// Probe the first file in the resolved torrent folder: stat it and read a small chunk
firstFile := f[0]
// Read a tiny bit of the file
if err := fileIsReadable(firstFile.Path); err != nil {
r.logger.Debug().Msgf("Broken file found at: %s", parent)
brokenFiles = append(brokenFiles, f...)
continue
}
}
if len(brokenFiles) == 0 {
r.logger.Debug().Msgf("No broken files found for %s", media.Title)
return nil
}
r.logger.Debug().Msgf("%d broken files found for %s", len(brokenFiles), media.Title)
return brokenFiles
}
func (r *Repair) getZurgBrokenFiles(media arr.Content) []arr.ContentFile {
// Use zurg setup to check file availability with zurg
// This reduces bandwidth usage significantly
brokenFiles := make([]arr.ContentFile, 0)
uniqueParents := collectFiles(media)
tr := &http.Transport{
TLSHandshakeTimeout: 60 * time.Second,
DialContext: (&net.Dialer{
Timeout: 20 * time.Second,
KeepAlive: 30 * time.Second,
}).DialContext,
}
client := request.New(request.WithTimeout(0), request.WithTransport(tr))
// Access zurg url + symlink folder + first file(encoded)
for parent, f := range uniqueParents {
r.logger.Debug().Msgf("Checking %s", parent)
torrentName := url.PathEscape(filepath.Base(parent))
encodedFile := url.PathEscape(f[0].TargetPath)
fullURL := fmt.Sprintf("%s/http/__all__/%s/%s", r.ZurgURL, torrentName, encodedFile)
// Check file stat first
if _, err := os.Stat(f[0].Path); os.IsNotExist(err) {
r.logger.Debug().Msgf("Broken symlink found: %s", fullURL)
brokenFiles = append(brokenFiles, f...)
continue
}
resp, err := client.Get(fullURL)
if err != nil {
r.logger.Debug().Err(err).Msgf("Failed to reach %s", fullURL)
brokenFiles = append(brokenFiles, f...)
continue
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
r.logger.Debug().Msgf("Failed to get download url for %s", fullURL)
resp.Body.Close()
brokenFiles = append(brokenFiles, f...)
continue
}
downloadUrl := resp.Request.URL.String()
resp.Body.Close()
if downloadUrl != "" {
r.logger.Trace().Msgf("Found download url: %s", downloadUrl)
} else {
r.logger.Debug().Msgf("Failed to get download url for %s", fullURL)
brokenFiles = append(brokenFiles, f...)
continue
}
}
if len(brokenFiles) == 0 {
r.logger.Debug().Msgf("No broken files found for %s", media.Title)
return nil
}
r.logger.Debug().Msgf("%d broken files found for %s", len(brokenFiles), media.Title)
return brokenFiles
}
func (r *Repair) getWebdavBrokenFiles(media arr.Content) []arr.ContentFile {
// Use internal webdav setup to check file availability
caches := r.deb.Caches
if len(caches) == 0 {
r.logger.Info().Msg("No caches found. Can't use webdav")
return nil
}
clients := r.deb.Clients
if len(clients) == 0 {
r.logger.Info().Msg("No clients found. Can't use webdav")
return nil
}
brokenFiles := make([]arr.ContentFile, 0)
uniqueParents := collectFiles(media)
// Look up each torrent in the matching debrid cache and verify its files
for torrentPath, f := range uniqueParents {
r.logger.Debug().Msgf("Checking %s", torrentPath)
// Get the debrid first
dir := filepath.Dir(torrentPath)
debridName := ""
for _, client := range clients {
mountPath := client.GetMountPath()
if mountPath == "" {
continue
}
if filepath.Clean(mountPath) == filepath.Clean(dir) {
debridName = client.GetName()
break
}
}
if debridName == "" {
r.logger.Debug().Msgf("No debrid found for %s. Skipping", torrentPath)
continue
}
cache, ok := caches[debridName]
if !ok {
r.logger.Debug().Msgf("No cache found for %s. Skipping", debridName)
continue
}
// Check if torrent exists
torrentName := filepath.Clean(filepath.Base(torrentPath))
torrent := cache.GetTorrentByName(torrentName)
if torrent == nil {
r.logger.Debug().Msgf("No torrent found for %s. Skipping", torrentName)
continue
}
files := make([]string, 0)
for _, file := range f {
files = append(files, file.TargetPath)
}
if cache.IsTorrentBroken(torrent, files) {
r.logger.Debug().Msgf("[webdav] Broken symlink found: %s", torrentPath)
// Delete the torrent?
brokenFiles = append(brokenFiles, f...)
continue
}
}
if len(brokenFiles) == 0 {
r.logger.Debug().Msgf("No broken files found for %s", media.Title)
return nil
}
r.logger.Debug().Msgf("%d broken files found for %s", len(brokenFiles), media.Title)
return brokenFiles
}
func (r *Repair) GetJob(id string) *Job {
for _, job := range r.Jobs {
if job.ID == id {
return job
}
}
return nil
}
func (r *Repair) GetJobs() []*Job {
jobs := make([]*Job, 0)
for _, job := range r.Jobs {
jobs = append(jobs, job)
}
sort.Slice(jobs, func(i, j int) bool {
return jobs[i].StartedAt.After(jobs[j].StartedAt)
})
return jobs
}
func (r *Repair) ProcessJob(id string) error {
job := r.GetJob(id)
if job == nil {
return fmt.Errorf("job %s not found", id)
}
// Validate the job state before processing
if job.Status != JobPending {
return fmt.Errorf("job %s not pending", id)
}
if job.StartedAt.IsZero() {
return fmt.Errorf("job %s not started", id)
}
if !job.CompletedAt.IsZero() {
return fmt.Errorf("job %s already completed", id)
}
if !job.FailedAt.IsZero() {
return fmt.Errorf("job %s already failed", id)
}
brokenItems := job.BrokenItems
if len(brokenItems) == 0 {
r.logger.Info().Msgf("No broken items found for job %s", id)
job.CompletedAt = time.Now()
job.Status = JobCompleted
return nil
}
g, ctx := errgroup.WithContext(r.ctx)
g.SetLimit(r.workers)
for arrName, items := range brokenItems {
items := items
arrName := arrName
g.Go(func() error {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
a := r.arrs.Get(arrName)
if a == nil {
r.logger.Error().Msgf("Arr %s not found", arrName)
return nil
}
if err := a.DeleteFiles(items); err != nil {
r.logger.Error().Err(err).Msgf("Failed to delete broken items for %s", arrName)
return nil
}
// Search for missing items
if err := a.SearchMissing(items); err != nil {
r.logger.Error().Err(err).Msgf("Failed to search missing items for %s", arrName)
return nil
}
return nil
})
}
// Update job status to in-progress
job.Status = JobProcessing
r.saveToFile()
// Launch a goroutine to wait for completion and update the job
go func() {
if err := g.Wait(); err != nil {
job.FailedAt = time.Now()
job.Error = err.Error()
job.CompletedAt = time.Now()
job.Status = JobFailed
r.logger.Error().Err(err).Msgf("Job %s failed", id)
} else {
job.CompletedAt = time.Now()
job.Status = JobCompleted
r.logger.Info().Msgf("Job %s completed successfully", id)
}
r.saveToFile()
}()
return nil
}
func (r *Repair) saveToFile() {
// Save jobs to file
data, err := json.Marshal(r.Jobs)
if err != nil {
r.logger.Debug().Err(err).Msg("Failed to marshal jobs")
return
}
_ = os.WriteFile(r.filename, data, 0644)
}
func (r *Repair) loadFromFile() {
data, err := os.ReadFile(r.filename)
if err != nil && os.IsNotExist(err) {
r.Jobs = make(map[string]*Job)
return
}
_jobs := make(map[string]*Job)
err = json.Unmarshal(data, &_jobs)
if err != nil {
r.logger.Trace().Err(err).Msg("Failed to unmarshal jobs; resetting")
r.Jobs = make(map[string]*Job)
return
}
jobs := make(map[string]*Job)
for k, v := range _jobs {
if v.Status != JobPending {
// Only pending jobs survive a restart; drop completed, failed and in-flight entries
continue
}
jobs[k] = v
}
r.Jobs = jobs
}
func (r *Repair) DeleteJobs(ids []string) {
for _, id := range ids {
if id == "" {
continue
}
for k, job := range r.Jobs {
if job.ID == id {
delete(r.Jobs, k)
}
}
}
go r.saveToFile()
}

133
pkg/server/server.go Normal file

@@ -0,0 +1,133 @@
package server
import (
"context"
"errors"
"fmt"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/goccy/go-json"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"io"
"net/http"
"os"
"os/signal"
"runtime"
"syscall"
)
type Server struct {
router *chi.Mux
logger zerolog.Logger
}
func New() *Server {
l := logger.New("http")
r := chi.NewRouter()
r.Use(middleware.Recoverer)
r.Handle("/static/*", http.StripPrefix("/static/", http.FileServer(http.Dir("static"))))
return &Server{
router: r,
logger: l,
}
}
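// Start registers the webhook, log and stats routes, serves on the configured qBittorrent port,
// and shuts down gracefully on SIGINT/SIGTERM or context cancellation.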
func (s *Server) Start(ctx context.Context) error {
cfg := config.Get()
// Register routes
// Register webhooks
s.router.Post("/webhooks/tautulli", s.handleTautulli)
// Register logs
s.router.Get("/logs", s.getLogs)
s.router.Get("/stats", s.getStats)
port := fmt.Sprintf(":%s", cfg.QBitTorrent.Port)
s.logger.Info().Msgf("Server started on %s", port)
srv := &http.Server{
Addr: port,
Handler: s.router,
}
ctx, stop := signal.NotifyContext(ctx, os.Interrupt, syscall.SIGTERM)
defer stop()
go func() {
if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
s.logger.Info().Msgf("Error starting server: %v", err)
stop()
}
}()
<-ctx.Done()
s.logger.Info().Msg("Shutting down gracefully...")
return srv.Shutdown(context.Background())
}
func (s *Server) AddRoutes(routes func(r chi.Router) http.Handler) {
routes(s.router)
}
func (s *Server) Mount(pattern string, handler http.Handler) {
s.router.Mount(pattern, handler)
}
func (s *Server) getLogs(w http.ResponseWriter, r *http.Request) {
logFile := logger.GetLogPath()
// Open and read the file
file, err := os.Open(logFile)
if err != nil {
http.Error(w, "Error reading log file", http.StatusInternalServerError)
return
}
defer func(file *os.File) {
err := file.Close()
if err != nil {
s.logger.Debug().Err(err).Msg("Error closing log file")
}
}(file)
// Set headers
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("Content-Disposition", "inline; filename=application.log")
w.Header().Set("Cache-Control", "no-cache, no-store, must-revalidate")
w.Header().Set("Pragma", "no-cache")
w.Header().Set("Expires", "0")
// Stream the file
_, err = io.Copy(w, file)
if err != nil {
s.logger.Debug().Err(err).Msg("Error streaming log file")
http.Error(w, "Error streaming log file", http.StatusInternalServerError)
return
}
}
func (s *Server) getStats(w http.ResponseWriter, r *http.Request) {
var memStats runtime.MemStats
runtime.ReadMemStats(&memStats)
stats := map[string]interface{}{
// Memory stats
"heap_alloc_mb": fmt.Sprintf("%.2fMB", float64(memStats.HeapAlloc)/1024/1024),
"total_alloc_mb": fmt.Sprintf("%.2fMB", float64(memStats.TotalAlloc)/1024/1024),
"sys_mb": fmt.Sprintf("%.2fMB", float64(memStats.Sys)/1024/1024),
// GC stats
"gc_cycles": memStats.NumGC,
// Goroutine stats
"goroutines": runtime.NumGoroutine(),
// System info
"num_cpu": runtime.NumCPU(),
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
if err := json.NewEncoder(w).Encode(stats); err != nil {
s.logger.Error().Err(err).Msg("Failed to encode stats")
}
}

54
pkg/server/webhook.go Normal file

@@ -0,0 +1,54 @@
package server
import (
"cmp"
"github.com/goccy/go-json"
"github.com/sirrobot01/decypharr/pkg/service"
"net/http"
)
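// handleTautulli accepts a Tautulli webhook with a "tautulli" topic and a TMDB or TVDB id, and
// queues a one-off repair job for that media item (optionally auto-processing it).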
func (s *Server) handleTautulli(w http.ResponseWriter, r *http.Request) {
// Verify it's a POST request
if r.Method != http.MethodPost {
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
return
}
// Parse the JSON body from Tautulli
var payload struct {
Type string `json:"type"`
TvdbID string `json:"tvdb_id"`
TmdbID string `json:"tmdb_id"`
Topic string `json:"topic"`
AutoProcess bool `json:"autoProcess"`
}
if err := json.NewDecoder(r.Body).Decode(&payload); err != nil {
s.logger.Error().Err(err).Msg("Failed to parse webhook body")
http.Error(w, "Failed to parse webhook body: "+err.Error(), http.StatusBadRequest)
return
}
if payload.Topic != "tautulli" {
http.Error(w, "Invalid topic", http.StatusBadRequest)
return
}
if payload.TmdbID == "" && payload.TvdbID == "" {
http.Error(w, "Invalid ID", http.StatusBadRequest)
return
}
svc := service.GetService()
repair := svc.Repair
mediaId := cmp.Or(payload.TmdbID, payload.TvdbID)
if repair == nil {
http.Error(w, "Repair service is not enabled", http.StatusInternalServerError)
return
}
if err := repair.AddJob([]string{}, []string{mediaId}, payload.AutoProcess, false); err != nil {
http.Error(w, "Failed to add job: "+err.Error(), http.StatusInternalServerError)
return
}
}

55
pkg/service/service.go Normal file

@@ -0,0 +1,55 @@
package service
import (
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/debrid/debrid"
"github.com/sirrobot01/decypharr/pkg/repair"
"sync"
)
type Service struct {
Repair *repair.Repair
Arr *arr.Storage
Debrid *debrid.Engine
}
var (
instance *Service
once sync.Once
)
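// New lazily builds the shared Service singleton (arr storage, debrid engine and repair worker)
// exactly once via sync.Once; later calls return the same instance.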
func New() *Service {
once.Do(func() {
arrs := arr.NewStorage()
deb := debrid.NewEngine()
instance = &Service{
Repair: repair.New(arrs, deb),
Arr: arrs,
Debrid: deb,
}
})
return instance
}
// GetService returns the singleton instance
func GetService() *Service {
if instance == nil {
instance = New()
}
return instance
}
func Update() *Service {
arrs := arr.NewStorage()
deb := debrid.NewEngine()
instance = &Service{
Repair: repair.New(arrs, deb),
Arr: arrs,
Debrid: deb,
}
return instance
}
func GetDebrid() *debrid.Engine {
return GetService().Debrid
}

24
pkg/version/version.go Normal file

@@ -0,0 +1,24 @@
package version
import "fmt"
type Info struct {
Version string `json:"version"`
Channel string `json:"channel"`
}
func (i Info) String() string {
return fmt.Sprintf("%s-%s", i.Version, i.Channel)
}
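// Version and Channel default to empty strings; they are presumably injected at build time
// (e.g. go build -ldflags "-X github.com/sirrobot01/decypharr/pkg/version.Version=v1.0.0").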
var (
Version = ""
Channel = ""
)
func GetInfo() Info {
return Info{
Version: Version,
Channel: Channel,
}
}

38
pkg/web/routes.go Normal file

@@ -0,0 +1,38 @@
package web
import (
"github.com/go-chi/chi/v5"
"net/http"
)
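// Routes exposes the login and setup pages publicly and wraps everything else (dashboard,
// download, repair, config and the /internal JSON API) in the auth middleware.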
func (ui *Handler) Routes() http.Handler {
r := chi.NewRouter()
r.Get("/login", ui.LoginHandler)
r.Post("/login", ui.LoginHandler)
r.Get("/setup", ui.SetupHandler)
r.Post("/setup", ui.SetupHandler)
r.Group(func(r chi.Router) {
r.Use(ui.authMiddleware)
r.Get("/", ui.IndexHandler)
r.Get("/download", ui.DownloadHandler)
r.Get("/repair", ui.RepairHandler)
r.Get("/config", ui.ConfigHandler)
r.Route("/internal", func(r chi.Router) {
r.Get("/arrs", ui.handleGetArrs)
r.Post("/add", ui.handleAddContent)
r.Post("/repair", ui.handleRepairMedia)
r.Get("/repair/jobs", ui.handleGetRepairJobs)
r.Post("/repair/jobs/{id}/process", ui.handleProcessRepairJob)
r.Delete("/repair/jobs", ui.handleDeleteRepairJob)
r.Get("/torrents", ui.handleGetTorrents)
r.Delete("/torrents/{category}/{hash}", ui.handleDeleteTorrent)
r.Delete("/torrents/", ui.handleDeleteTorrents)
r.Get("/config", ui.handleGetConfig)
r.Get("/version", ui.handleGetVersion)
})
})
return r
}

495
pkg/web/server.go Normal file

@@ -0,0 +1,495 @@
package web
import (
"embed"
"fmt"
"github.com/goccy/go-json"
"github.com/gorilla/sessions"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/qbit"
"github.com/sirrobot01/decypharr/pkg/service"
"golang.org/x/crypto/bcrypt"
"html/template"
"net/http"
"strings"
"github.com/go-chi/chi/v5"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/version"
)
type AddRequest struct {
Url string `json:"url"`
Arr string `json:"arr"`
File string `json:"file"`
NotSymlink bool `json:"notSymlink"`
Content string `json:"content"`
Seasons []string `json:"seasons"`
Episodes []string `json:"episodes"`
}
type ArrResponse struct {
Name string `json:"name"`
Url string `json:"url"`
}
type ContentResponse struct {
ID string `json:"id"`
Title string `json:"title"`
Type string `json:"type"`
ArrID string `json:"arr"`
}
type RepairRequest struct {
ArrName string `json:"arr"`
MediaIds []string `json:"mediaIds"`
Async bool `json:"async"`
AutoProcess bool `json:"autoProcess"`
}
//go:embed web/*
var content embed.FS
type Handler struct {
qbit *qbit.QBit
logger zerolog.Logger
}
func New(qbit *qbit.QBit) *Handler {
return &Handler{
qbit: qbit,
logger: logger.New("ui"),
}
}
var (
store = sessions.NewCookieStore([]byte("your-secret-key")) // TODO: replace this hard-coded placeholder with a randomly generated secret loaded from config
templates *template.Template
)
func init() {
templates = template.Must(template.ParseFS(
content,
"web/layout.html",
"web/index.html",
"web/download.html",
"web/repair.html",
"web/config.html",
"web/login.html",
"web/setup.html",
))
store.Options = &sessions.Options{
Path: "/",
MaxAge: 86400 * 7,
HttpOnly: false, // note: the auth cookie stays readable from JavaScript; prefer true unless the UI needs it
}
}
func (ui *Handler) authMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check if setup is needed
cfg := config.Get()
if cfg.NeedsSetup() && r.URL.Path != "/setup" {
http.Redirect(w, r, "/setup", http.StatusSeeOther)
return
}
if !cfg.UseAuth {
next.ServeHTTP(w, r)
return
}
// Skip auth check for setup page
if r.URL.Path == "/setup" {
next.ServeHTTP(w, r)
return
}
session, _ := store.Get(r, "auth-session")
auth, ok := session.Values["authenticated"].(bool)
if !ok || !auth {
http.Redirect(w, r, "/login", http.StatusSeeOther)
return
}
next.ServeHTTP(w, r)
})
}
func (ui *Handler) verifyAuth(username, password string) bool {
// Compare the supplied credentials against the configured username and bcrypt password hash
if username == "" {
return false
}
auth := config.Get().GetAuth()
if auth == nil {
return false
}
if username != auth.Username {
return false
}
err := bcrypt.CompareHashAndPassword([]byte(auth.Password), []byte(password))
return err == nil
}
func (ui *Handler) LoginHandler(w http.ResponseWriter, r *http.Request) {
if r.Method == "GET" {
data := map[string]interface{}{
"Page": "login",
"Title": "Login",
}
if err := templates.ExecuteTemplate(w, "layout", data); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
return
}
var credentials struct {
Username string `json:"username"`
Password string `json:"password"`
}
if err := json.NewDecoder(r.Body).Decode(&credentials); err != nil {
http.Error(w, "Invalid request", http.StatusBadRequest)
return
}
if ui.verifyAuth(credentials.Username, credentials.Password) {
session, _ := store.Get(r, "auth-session")
session.Values["authenticated"] = true
session.Values["username"] = credentials.Username
if err := session.Save(r, w); err != nil {
http.Error(w, "Error saving session", http.StatusInternalServerError)
return
}
http.Redirect(w, r, "/", http.StatusSeeOther)
return
}
http.Error(w, "Invalid credentials", http.StatusUnauthorized)
}
func (ui *Handler) LogoutHandler(w http.ResponseWriter, r *http.Request) {
session, _ := store.Get(r, "auth-session")
session.Values["authenticated"] = false
session.Options.MaxAge = -1
err := session.Save(r, w)
if err != nil {
return
}
http.Redirect(w, r, "/login", http.StatusSeeOther)
}
func (ui *Handler) SetupHandler(w http.ResponseWriter, r *http.Request) {
cfg := config.Get()
authCfg := cfg.GetAuth()
if !cfg.NeedsSetup() {
http.Redirect(w, r, "/", http.StatusSeeOther)
return
}
if r.Method == "GET" {
data := map[string]interface{}{
"Page": "setup",
"Title": "Setup",
}
if err := templates.ExecuteTemplate(w, "layout", data); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
return
}
// Handle POST (setup attempt)
username := r.FormValue("username")
password := r.FormValue("password")
confirmPassword := r.FormValue("confirmPassword")
if password != confirmPassword {
http.Error(w, "Passwords do not match", http.StatusBadRequest)
return
}
// Hash the password
hashedPassword, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
if err != nil {
http.Error(w, "Error processing password", http.StatusInternalServerError)
return
}
// Set the credentials
authCfg.Username = username
authCfg.Password = string(hashedPassword)
if err := cfg.SaveAuth(authCfg); err != nil {
http.Error(w, "Error saving credentials", http.StatusInternalServerError)
return
}
// Create a session
session, _ := store.Get(r, "auth-session")
session.Values["authenticated"] = true
session.Values["username"] = username
if err := session.Save(r, w); err != nil {
http.Error(w, "Error saving session", http.StatusInternalServerError)
return
}
http.Redirect(w, r, "/", http.StatusSeeOther)
}
func (ui *Handler) IndexHandler(w http.ResponseWriter, r *http.Request) {
data := map[string]interface{}{
"Page": "index",
"Title": "Torrents",
}
if err := templates.ExecuteTemplate(w, "layout", data); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
func (ui *Handler) DownloadHandler(w http.ResponseWriter, r *http.Request) {
data := map[string]interface{}{
"Page": "download",
"Title": "Download",
}
if err := templates.ExecuteTemplate(w, "layout", data); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
func (ui *Handler) RepairHandler(w http.ResponseWriter, r *http.Request) {
data := map[string]interface{}{
"Page": "repair",
"Title": "Repair",
}
if err := templates.ExecuteTemplate(w, "layout", data); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
func (ui *Handler) ConfigHandler(w http.ResponseWriter, r *http.Request) {
data := map[string]interface{}{
"Page": "config",
"Title": "Config",
}
if err := templates.ExecuteTemplate(w, "layout", data); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
func (ui *Handler) handleGetArrs(w http.ResponseWriter, r *http.Request) {
svc := service.GetService()
request.JSONResponse(w, svc.Arr.GetAll(), http.StatusOK)
}
func (ui *Handler) handleAddContent(w http.ResponseWriter, r *http.Request) {
if err := r.ParseMultipartForm(32 << 20); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
svc := service.GetService()
results := make([]*qbit.ImportRequest, 0)
errs := make([]string, 0)
arrName := r.FormValue("arr")
notSymlink := r.FormValue("notSymlink") == "true"
downloadUncached := r.FormValue("downloadUncached") == "true"
_arr := svc.Arr.Get(arrName)
if _arr == nil {
_arr = arr.New(arrName, "", "", false, false, &downloadUncached)
}
// Handle URLs
if urls := r.FormValue("urls"); urls != "" {
var urlList []string
for _, u := range strings.Split(urls, "\n") {
if trimmed := strings.TrimSpace(u); trimmed != "" {
urlList = append(urlList, trimmed)
}
}
for _, url := range urlList {
magnet, err := utils.GetMagnetFromUrl(url)
if err != nil {
errs = append(errs, fmt.Sprintf("Failed to parse URL %s: %v", url, err))
continue
}
importReq := qbit.NewImportRequest(magnet, _arr, !notSymlink, downloadUncached)
if err := importReq.Process(ui.qbit); err != nil {
errs = append(errs, fmt.Sprintf("URL %s: %v", url, err))
continue
}
results = append(results, importReq)
}
}
// Handle torrent/magnet files
if files := r.MultipartForm.File["files"]; len(files) > 0 {
for _, fileHeader := range files {
file, err := fileHeader.Open()
if err != nil {
errs = append(errs, fmt.Sprintf("Failed to open file %s: %v", fileHeader.Filename, err))
continue
}
magnet, err := utils.GetMagnetFromFile(file, fileHeader.Filename)
if err != nil {
errs = append(errs, fmt.Sprintf("Failed to parse torrent file %s: %v", fileHeader.Filename, err))
continue
}
importReq := qbit.NewImportRequest(magnet, _arr, !notSymlink, downloadUncached)
err = importReq.Process(ui.qbit)
if err != nil {
errs = append(errs, fmt.Sprintf("File %s: %v", fileHeader.Filename, err))
continue
}
results = append(results, importReq)
}
}
request.JSONResponse(w, struct {
Results []*qbit.ImportRequest `json:"results"`
Errors []string `json:"errors,omitempty"`
}{
Results: results,
Errors: errs,
}, http.StatusOK)
}
func (ui *Handler) handleRepairMedia(w http.ResponseWriter, r *http.Request) {
var req RepairRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
svc := service.GetService()
if svc.Repair == nil {
http.Error(w, "Repair service is not enabled", http.StatusInternalServerError)
return
}
var arrs []string
if req.ArrName != "" {
_arr := svc.Arr.Get(req.ArrName)
if _arr == nil {
http.Error(w, "No Arrs found to repair", http.StatusNotFound)
return
}
arrs = append(arrs, req.ArrName)
}
if req.Async {
go func() {
if err := svc.Repair.AddJob(arrs, req.MediaIds, req.AutoProcess, false); err != nil {
ui.logger.Error().Err(err).Msg("Failed to repair media")
}
}()
request.JSONResponse(w, "Repair process started", http.StatusOK)
return
}
if err := svc.Repair.AddJob(arrs, req.MediaIds, req.AutoProcess, false); err != nil {
http.Error(w, fmt.Sprintf("Failed to repair: %v", err), http.StatusInternalServerError)
return
}
request.JSONResponse(w, "Repair completed", http.StatusOK)
}
func (ui *Handler) handleGetVersion(w http.ResponseWriter, r *http.Request) {
v := version.GetInfo()
request.JSONResponse(w, v, http.StatusOK)
}
func (ui *Handler) handleGetTorrents(w http.ResponseWriter, r *http.Request) {
request.JSONResponse(w, ui.qbit.Storage.GetAll("", "", nil), http.StatusOK)
}
func (ui *Handler) handleDeleteTorrent(w http.ResponseWriter, r *http.Request) {
hash := chi.URLParam(r, "hash")
category := chi.URLParam(r, "category") // the route is /torrents/{category}/{hash}, so read the path segment rather than a query param
if hash == "" {
http.Error(w, "No hash provided", http.StatusBadRequest)
return
}
ui.qbit.Storage.Delete(hash, category)
w.WriteHeader(http.StatusOK)
}
func (ui *Handler) handleDeleteTorrents(w http.ResponseWriter, r *http.Request) {
hashesStr := r.URL.Query().Get("hashes")
if hashesStr == "" {
http.Error(w, "No hashes provided", http.StatusBadRequest)
return
}
hashes := strings.Split(hashesStr, ",")
ui.qbit.Storage.DeleteMultiple(hashes)
w.WriteHeader(http.StatusOK)
}
func (ui *Handler) handleGetConfig(w http.ResponseWriter, r *http.Request) {
cfg := config.Get()
arrCfgs := make([]config.Arr, 0)
svc := service.GetService()
for _, a := range svc.Arr.GetAll() {
arrCfgs = append(arrCfgs, config.Arr{
Host: a.Host,
Name: a.Name,
Token: a.Token,
Cleanup: a.Cleanup,
SkipRepair: a.SkipRepair,
DownloadUncached: a.DownloadUncached,
})
}
cfg.Arrs = arrCfgs
request.JSONResponse(w, cfg, http.StatusOK)
}
func (ui *Handler) handleGetRepairJobs(w http.ResponseWriter, r *http.Request) {
svc := service.GetService()
request.JSONResponse(w, svc.Repair.GetJobs(), http.StatusOK)
}
func (ui *Handler) handleProcessRepairJob(w http.ResponseWriter, r *http.Request) {
id := chi.URLParam(r, "id")
if id == "" {
http.Error(w, "No job ID provided", http.StatusBadRequest)
return
}
svc := service.GetService()
if err := svc.Repair.ProcessJob(id); err != nil {
ui.logger.Error().Err(err).Msg("Failed to process repair job")
http.Error(w, "Failed to process repair job: "+err.Error(), http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusOK)
}
func (ui *Handler) handleDeleteRepairJob(w http.ResponseWriter, r *http.Request) {
// Read ids from body
var req struct {
IDs []string `json:"ids"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
if len(req.IDs) == 0 {
http.Error(w, "No job IDs provided", http.StatusBadRequest)
return
}
svc := service.GetService()
svc.Repair.DeleteJobs(req.IDs)
w.WriteHeader(http.StatusOK)
}

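For illustration, a sketch of the multipart body handleAddContent parses (field names come from the handler above and the path /internal/add from routes.go; host and values are placeholders):

package main

import (
	"bytes"
	"mime/multipart"
	"net/http"
)

func main() {
	var buf bytes.Buffer
	mw := multipart.NewWriter(&buf)
	_ = mw.WriteField("arr", "radarr")                   // category / arr name
	_ = mw.WriteField("urls", "magnet:?xt=urn:btih:...") // one magnet or torrent URL per line
	_ = mw.WriteField("notSymlink", "false")
	_ = mw.WriteField("downloadUncached", "false")
	// .torrent uploads would go under the "files" field, one part per file
	_ = mw.Close()
	req, _ := http.NewRequest(http.MethodPost, "http://localhost:8282/internal/add", &buf)
	req.Header.Set("Content-Type", mw.FormDataContentType())
	resp, err := http.DefaultClient.Do(req)
	if err == nil {
		defer resp.Body.Close()
	}
}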
495
pkg/web/web/config.html Normal file

@@ -0,0 +1,495 @@
{{ define "config" }}
<div class="container mt-4">
<div class="card">
<div class="card-header">
<h4 class="mb-0"><i class="bi bi-gear me-2"></i>Configuration</h4>
</div>
<div class="card-body">
<form id="configForm">
<div class="section mb-5">
<h5 class="border-bottom pb-2">General Configuration</h5>
<div class="row">
<div class="col-md-6">
<div class="form-group">
<label for="log-level">Log Level</label>
<select class="form-select" name="log_level" id="log-level" disabled>
<option value="info">Info</option>
<option value="debug">Debug</option>
<option value="warn">Warning</option>
<option value="error">Error</option>
<option value="trace">Trace</option>
</select>
</div>
</div>
<!-- Register Magnet Link Button -->
<div class="col-md-6">
<label>
<!-- Empty label to keep the button aligned -->
</label>
<div class="btn btn-primary w-100" onclick="registerMagnetLinkHandler()" id="registerMagnetLink">
Open Magnet Links in Decypharr
</div>
</div>
<div class="col-md-6 mt-3">
<div class="form-group">
<label for="discordWebhookUrl">Discord Webhook URL</label>
<div class="input-group">
<textarea
class="form-control"
id="discordWebhookUrl"
name="discord_webhook_url"
disabled
placeholder="https://discord..."></textarea>
</div>
</div>
</div>
<div class="col-md-6 mt-3">
<div class="form-group">
<label for="allowedExtensions">Allowed File Extensions</label>
<div class="input-group">
<textarea
class="form-control"
id="allowedExtensions"
name="allowed_file_types"
disabled
placeholder="mkv, mp4, avi, etc.">
</textarea>
</div>
</div>
</div>
<div class="col-md-6 mt-3">
<div class="form-group">
<label for="minFileSize">Minimum File Size</label>
<input type="text"
class="form-control"
id="minFileSize"
name="min_file_size"
disabled
placeholder="e.g., 10MB, 1GB">
<small class="form-text text-muted">Minimum file size to download (0 for no limit)</small>
</div>
</div>
<div class="col-md-6 mt-3">
<div class="form-group">
<label for="maxFileSize">Maximum File Size</label>
<input type="text"
class="form-control"
id="maxFileSize"
name="max_file_size"
disabled
placeholder="e.g., 50GB, 100MB">
<small class="form-text text-muted">Maximum file size to download (0 for no limit)</small>
</div>
</div>
</div>
</div>
<!-- Debrid Configuration -->
<div class="section mb-5">
<h5 class="border-bottom pb-2">Debrids</h5>
<div id="debridConfigs"></div>
</div>
<!-- QBitTorrent Configuration -->
<div class="section mb-5">
<h5 class="border-bottom pb-2">QBitTorrent</h5>
<div class="row">
<div class="col-md-6 mb-3">
<label class="form-label">Username</label>
<input type="text" disabled class="form-control" name="qbit.username">
</div>
<div class="col-md-6 mb-3">
<label class="form-label">Password</label>
<input type="password" disabled class="form-control" name="qbit.password">
</div>
<div class="col-md-6 mb-3">
<label class="form-label">Port</label>
<input type="text" disabled class="form-control" name="qbit.port">
</div>
<div class="col-md-6 mb-3">
<label class="form-label">Symlink/Download Folder</label>
<input type="text" disabled class="form-control" name="qbit.download_folder">
</div>
<div class="col-md-6 mb-3">
<label class="form-label">Refresh Interval (seconds)</label>
<input type="number" class="form-control" name="qbit.refresh_interval">
</div>
<div class="col-md-6 mb-3">
<input type="checkbox" disabled class="form-check-input" name="qbit.skip_pre_cache">
<label class="form-check-label">Skip Pre-Cache On Download (pre-caching fetches a small part of each file up front to speed up import)</label>
</div>
</div>
</div>
<!-- Arr Configurations -->
<div class="section mb-5">
<h5 class="border-bottom pb-2">Arrs</h5>
<div id="arrConfigs"></div>
</div>
<!-- Repair Configuration -->
<div class="section mb-5">
<h5 class="border-bottom pb-2">Repair Configuration</h5>
<div class="row">
<div class="col-md-3 mb-3">
<label class="form-label">Interval</label>
<input type="text" disabled class="form-control" name="repair.interval" placeholder="e.g., 24h">
</div>
<div class="col-md-4 mb-3">
<label class="form-label">Zurg URL</label>
<input type="text" disabled class="form-control" name="repair.zurg_url" placeholder="http://zurg:9999">
</div>
</div>
<div class="col-12">
<div class="form-check me-3 d-inline-block">
<input type="checkbox" disabled class="form-check-input" name="repair.enabled" id="repairEnabled">
<label class="form-check-label" for="repairEnabled">Enable Repair</label>
</div>
<div class="form-check me-3 d-inline-block">
<input type="checkbox" disabled class="form-check-input" name="repair.use_webdav" id="repairUseWebdav">
<label class="form-check-label" for="repairUseWebdav">Use Webdav</label>
</div>
<div class="form-check me-3 d-inline-block">
<input type="checkbox" disabled class="form-check-input" name="repair.run_on_start" id="repairOnStart">
<label class="form-check-label" for="repairOnStart">Run on Start</label>
</div>
<div class="form-check d-inline-block">
<input type="checkbox" disabled class="form-check-input" name="repair.auto_process" id="autoProcess">
<label class="form-check-label" for="autoProcess">Auto Process (scheduled jobs will be processed automatically)</label>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
<script>
// Templates for dynamic elements
const debridTemplate = (index) => `
<div class="config-item position-relative mb-3 p-3 border rounded">
<div class="row mb-2">
<div class="col-md-6 mb-3">
<label class="form-label">Name</label>
<input type="text" disabled class="form-control" name="debrid[${index}].name" required>
</div>
<div class="col-md-6 mb-3">
<label class="form-label">Host</label>
<input type="text" disabled class="form-control" name="debrid[${index}].host" required>
</div>
<div class="col-md-6 mb-3">
<label class="form-label">API Key</label>
<input type="password" disabled class="form-control" name="debrid[${index}].api_key" required>
</div>
<div class="col-md-6 mb-3">
<label class="form-label">Mount Folder</label>
<input type="text" disabled class="form-control" name="debrid[${index}].folder">
</div>
<div class="col-md-6 mb-3">
<label class="form-label">Rate Limit</label>
<input type="text" disabled class="form-control" name="debrid[${index}].rate_limit" placeholder="e.g., 200/minute">
</div>
<div class="col-12">
<div class="form-check me-3 d-inline-block">
<input type="checkbox" disabled class="form-check-input" name="debrid[${index}].download_uncached">
<label class="form-check-label">Download Uncached</label>
</div>
<div class="form-check d-inline-block">
<input type="checkbox" disabled class="form-check-input" name="debrid[${index}].check_cached">
<label class="form-check-label">Check Cached</label>
</div>
</div>
</div>
<div class="row mt-3 webdav-${index} d-none">
<h6 class="pb-2">Webdav</h6>
<div class="col-md-3 mb-3">
<label class="form-label">Torrents Refresh Interval</label>
<input type="text" disabled class="form-control" name="debrid[${index}].torrents_refresh_interval" placeholder="15s" required>
</div>
<div class="col-md-3 mb-3">
<label class="form-label">Download Links Refresh Interval</label>
<input type="text" disabled class="form-control" name="debrid[${index}].download_links_refresh_interval" placeholder="24h" required>
</div>
<div class="col-md-3 mb-3">
<label class="form-label">Expire Links After</label>
<input type="text" disabled class="form-control" name="debrid[${index}].auto_expire_links_after" placeholder="24h" required>
</div>
<div class="col-md-3 mb-3">
<label class="form-label">Folder Naming Structure</label>
<select class="form-select" name="debrid[${index}].folder_naming" disabled>
<option value="filename">File name</option>
<option value="filename_no_ext">File name with No Ext</option>
<option value="original">Original name</option>
<option value="original_no_ext">Original name with No Ext</option>
<option value="id">Use ID</option>
</select>
</div>
<div class="col-md-3 mb-3">
<label class="form-label">Number of Workers</label>
<input type="text" disabled class="form-control" name="debrid[${index}].workers" required placeholder="e.g., 20">
</div>
<div class="col-md-3 mb-3">
<label class="form-label">Rclone RC URL</label>
<input type="text" disabled class="form-control" name="debrid[${index}].rc_url" placeholder="e.g., http://localhost:9990">
</div>
<div class="col-md-3 mb-3">
<label class="form-label">Rclone RC User</label>
<input type="text" disabled class="form-control" name="debrid[${index}].rc_user">
</div>
<div class="col-md-3 mb-3">
<label class="form-label">Rclone RC Password</label>
<input type="password" disabled class="form-control" name="debrid[${index}].rc_pass">
</div>
</div>
</div>
`;
const arrTemplate = (index) => `
<div class="config-item position-relative mb-3 p-3 border rounded">
<div class="row">
<div class="col-md-4 mb-3">
<label class="form-label">Name</label>
<input type="text" disabled class="form-control" name="arr[${index}].name" required>
</div>
<div class="col-md-4 mb-3">
<label class="form-label">Host</label>
<input type="text" disabled class="form-control" name="arr[${index}].host" required>
</div>
<div class="col-md-4 mb-3">
<label class="form-label">API Token</label>
<input type="password" disabled class="form-control" name="arr[${index}].token" required>
</div>
</div>
<div class="row">
<div class="col-md-2 mb-3">
<div class="form-check">
<label class="form-check-label">Cleanup Queue</label>
<input type="checkbox" disabled class="form-check-input" name="arr[${index}].cleanup">
</div>
</div>
<div class="col-md-2 mb-3">
<div class="form-check">
<label class="form-check-label">Skip Repair</label>
<input type="checkbox" disabled class="form-check-input" name="arr[${index}].skip_repair">
</div>
</div>
<div class="col-md-2 mb-3">
<div class="form-check">
<label class="form-check-label">Download Uncached</label>
<input type="checkbox" disabled class="form-check-input" name="arr[${index}].download_uncached">
</div>
</div>
</div>
</div>
`;
// Main functionality
document.addEventListener('DOMContentLoaded', function() {
let debridCount = 0;
let arrCount = 0;
// Load existing configuration
fetch('/internal/config')
.then(response => response.json())
.then(config => {
// Load Debrid configs
config.debrids?.forEach(debrid => {
addDebridConfig(debrid);
});
// Load QBitTorrent config
if (config.qbittorrent) {
Object.entries(config.qbittorrent).forEach(([key, value]) => {
const input = document.querySelector(`[name="qbit.${key}"]`);
if (input) {
if (input.type === 'checkbox') {
input.checked = value;
} else {
input.value = value;
}
}
});
}
// Load Arr configs
config.arrs?.forEach(arr => {
addArrConfig(arr);
});
// Load Repair config
if (config.repair) {
Object.entries(config.repair).forEach(([key, value]) => {
const input = document.querySelector(`[name="repair.${key}"]`);
if (input) {
if (input.type === 'checkbox') {
input.checked = value;
} else {
input.value = value;
}
}
});
}
// Load general config
const logLevel = document.getElementById('log-level');
logLevel.value = config.log_level;
if (config.allowed_file_types && Array.isArray(config.allowed_file_types)) {
document.querySelector('[name="allowed_file_types"]').value = config.allowed_file_types.join(', ');
}
if (config.min_file_size) {
document.querySelector('[name="min_file_size"]').value = config.min_file_size;
}
if (config.max_file_size) {
document.querySelector('[name="max_file_size"]').value = config.max_file_size;
}
if (config.discord_webhook_url) {
document.querySelector('[name="discord_webhook_url"]').value = config.discord_webhook_url;
}
});
// Handle form submission
document.getElementById('configForm').addEventListener('submit', async (e) => {
e.preventDefault();
const formData = new FormData(e.target);
const config = {
debrids: [],
qbittorrent: {},
arrs: [],
repair: {}
};
// Process form data
for (let [key, value] of formData.entries()) {
if (key.startsWith('debrid[')) {
const match = key.match(/debrid\[(\d+)\]\.(.+)/);
if (match) {
const [_, index, field] = match;
if (!config.debrids[index]) config.debrids[index] = {};
config.debrids[index][field] = value;
}
} else if (key.startsWith('qbit.')) {
config.qbittorrent[key.replace('qbit.', '')] = value;
} else if (key.startsWith('arr[')) {
const match = key.match(/arr\[(\d+)\]\.(.+)/);
if (match) {
const [_, index, field] = match;
if (!config.arrs[index]) config.arrs[index] = {};
config.arrs[index][field] = value;
}
} else if (key.startsWith('repair.')) {
config.repair[key.replace('repair.', '')] = value;
}
}
// Clean up arrays (remove empty entries)
config.debrids = config.debrids.filter(Boolean);
config.arrs = config.arrs.filter(Boolean);
try {
const response = await fetch('/internal/config', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(config)
});
if (!response.ok) throw new Error(await response.text());
createToast('Configuration saved successfully!');
} catch (error) {
createToast(`Error saving configuration: ${error.message}`, 'error');
}
});
// Helper functions
function addDebridConfig(data = {}) {
const container = document.getElementById('debridConfigs');
container.insertAdjacentHTML('beforeend', debridTemplate(debridCount));
if (data) {
if (data.use_webdav) {
let _webCfg = container.querySelector(`.webdav-${debridCount}`);
if (_webCfg) {
_webCfg.classList.remove('d-none');
}
}
function setFieldValues(obj, prefix) {
Object.entries(obj).forEach(([key, value]) => {
const fieldName = prefix ? `${prefix}.${key}` : key;
// If value is an object and not null, recursively process nested fields
if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
setFieldValues(value, fieldName);
} else {
// Handle leaf values (actual form fields)
const input = container.querySelector(`[name="debrid[${debridCount}].${fieldName}"]`);
if (input) {
if (input.type === 'checkbox') {
input.checked = value;
} else {
input.value = value;
}
}
}
});
}
// Start processing with the root object
setFieldValues(data, '');
}
debridCount++;
}
function addArrConfig(data = {}) {
const container = document.getElementById('arrConfigs');
container.insertAdjacentHTML('beforeend', arrTemplate(arrCount));
if (data) {
Object.entries(data).forEach(([key, value]) => {
const input = container.querySelector(`[name="arr[${arrCount}].${key}"]`);
if (input) {
if (input.type === 'checkbox') {
input.checked = value;
} else {
input.value = value;
}
}
});
}
arrCount++;
}
});
// Register magnet link handler
function registerMagnetLinkHandler() {
if ('registerProtocolHandler' in navigator) {
try {
navigator.registerProtocolHandler(
'magnet',
`${window.location.origin}/download?magnet=%s`,
'DecyphArr'
);
localStorage.setItem('magnetHandler', 'true');
document.getElementById('registerMagnetLink').innerText = '✅ DecyphArr Can Open Magnet Links';
document.getElementById('registerMagnetLink').classList.add('bg-white', 'text-black');
console.log('Registered magnet link handler successfully.');
} catch (error) {
console.error('Failed to register magnet link handler:', error);
}
}
}
var magnetHandler = localStorage.getItem('magnetHandler');
if (magnetHandler === 'true') {
document.getElementById('registerMagnetLink').innerText = '✅ DecyphArr Can Open Magnet Links';
document.getElementById('registerMagnetLink').classList.add('bg-white', 'text-black');
}
</script>
{{ end }}

154
pkg/web/web/download.html Normal file

@@ -0,0 +1,154 @@
{{ define "download" }}
<div class="container mt-4">
<div class="card">
<div class="card-header">
<h4 class="mb-0"><i class="bi bi-cloud-download me-2"></i>Add New Download</h4>
</div>
<div class="card-body">
<form id="downloadForm" enctype="multipart/form-data">
<div class="mb-2">
<label for="magnetURI" class="form-label">Torrent(s)</label>
<textarea class="form-control" id="magnetURI" name="urls" rows="8" placeholder="Paste your magnet links or torrent URLs here, one per line..."></textarea>
</div>
<div class="mb-3">
<input type="file" class="form-control" id="torrentFiles" name="torrents" multiple accept=".torrent,.magnet">
</div>
<hr />
<div class="mb-3">
<label for="category" class="form-label">Enter Category</label>
<input type="text" class="form-control" id="category" name="arr" placeholder="Enter Category (e.g. sonarr, radarr, radarr4k)">
</div>
<div class="row mb-3">
<div class="col-md-2 mb-3">
<div class="form-check d-inline-block me-3">
<input type="checkbox" class="form-check-input" id="isSymlink" name="notSymlink">
<label class="form-check-label" for="isSymlink">No Symlinks</label>
</div>
</div>
<div class="col-md-2 mb-3">
<div class="form-check d-inline-block">
<input type="checkbox" class="form-check-input" name="downloadUncached" id="downloadUncached">
<label class="form-check-label" for="downloadUncached">Download Uncached</label>
</div>
</div>
</div>
<button type="submit" class="btn btn-primary" id="submitDownload">
<i class="bi bi-cloud-upload me-2"></i>Add to Download Queue
</button>
</form>
</div>
</div>
</div>
<script>
document.addEventListener('DOMContentLoaded', () => {
const loadSavedDownloadOptions = () => {
const savedCategory = localStorage.getItem('downloadCategory');
const savedSymlink = localStorage.getItem('downloadSymlink');
const savedDownloadUncached = localStorage.getItem('downloadUncached');
document.getElementById('category').value = savedCategory || '';
document.getElementById('isSymlink').checked = savedSymlink === 'true';
document.getElementById('downloadUncached').checked = savedDownloadUncached === 'true';
};
const saveCurrentDownloadOptions = () => {
const category = document.getElementById('category').value;
const isSymlink = document.getElementById('isSymlink').checked;
const downloadUncached = document.getElementById('downloadUncached').checked;
localStorage.setItem('downloadCategory', category);
localStorage.setItem('downloadSymlink', isSymlink.toString());
localStorage.setItem('downloadUncached', downloadUncached.toString());
};
// Load the last used download options from local storage
loadSavedDownloadOptions();
// Handle form submission
document.getElementById('downloadForm').addEventListener('submit', async (e) => {
e.preventDefault();
const submitBtn = document.getElementById('submitDownload');
const originalText = submitBtn.innerHTML;
submitBtn.disabled = true;
submitBtn.innerHTML = '<span class="spinner-border spinner-border-sm me-2"></span>Adding...';
try {
const formData = new FormData();
// Add URLs if present
const urls = document.getElementById('magnetURI').value
.split('\n')
.map(url => url.trim())
.filter(url => url.length > 0);
if (urls.length > 0) {
formData.append('urls', urls.join('\n'));
}
// Add torrent files if present
const fileInput = document.getElementById('torrentFiles');
for (let i = 0; i < fileInput.files.length; i++) {
formData.append('files', fileInput.files[i]);
}
if (urls.length + fileInput.files.length === 0) {
createToast('Please submit at least one torrent', 'warning');
return;
}
if (urls.length + fileInput.files.length > 100) {
createToast('Please submit up to 100 torrents at a time', 'warning');
return;
}
formData.append('arr', document.getElementById('category').value);
formData.append('notSymlink', document.getElementById('isSymlink').checked);
formData.append('downloadUncached', document.getElementById('downloadUncached').checked);
const response = await fetch('/internal/add', {
method: 'POST',
body: formData
});
const result = await response.json();
if (!response.ok) throw new Error(result.error || 'Unknown error');
if (result.errors && result.errors.length > 0) {
if (result.results.length > 0) {
createToast(`Added ${result.results.length} torrents with ${result.errors.length} errors:\n${result.errors.join('\n')}`, 'warning');
} else {
createToast(`Failed to add torrents:\n${result.errors.join('\n')}`, 'error');
}
} else {
createToast(`Successfully added ${result.results.length} torrents!`);
//document.getElementById('magnetURI').value = '';
//document.getElementById('torrentFiles').value = '';
}
} catch (error) {
createToast(`Error adding downloads: ${error.message}`, 'error');
} finally {
submitBtn.disabled = false;
submitBtn.innerHTML = originalText;
}
});
// Save the download options to local storage when they change
document.getElementById('category').addEventListener('change', saveCurrentDownloadOptions);
document.getElementById('isSymlink').addEventListener('change', saveCurrentDownloadOptions);
document.getElementById('downloadUncached').addEventListener('change', saveCurrentDownloadOptions);
// Read the URL parameters for a magnet link and add it to the download queue if found
const urlParams = new URLSearchParams(window.location.search);
const magnetURI = urlParams.get('magnet');
if (magnetURI) {
document.getElementById('magnetURI').value = magnetURI;
history.replaceState({}, document.title, window.location.pathname);
}
});
</script>
{{ end }}

416
pkg/web/web/index.html Normal file

@@ -0,0 +1,416 @@
{{ define "index" }}
<div class="container mt-4">
<div class="card">
<div class="card-header d-flex justify-content-between align-items-center gap-4">
<h4 class="mb-0 text-nowrap"><i class="bi bi-table me-2"></i>Active Torrents</h4>
<div class="d-flex align-items-center overflow-auto" style="flex-wrap: nowrap; gap: 0.5rem;">
<button class="btn btn-outline-danger btn-sm" id="batchDeleteBtn" style="display: none; flex-shrink: 0;">
<i class="bi bi-trash me-1"></i>Delete Selected
</button>
<button class="btn btn-outline-secondary btn-sm me-2" id="refreshBtn" style="flex-shrink: 0;">
<i class="bi bi-arrow-clockwise me-1"></i>Refresh
</button>
<select class="form-select form-select-sm d-inline-block w-auto me-2" id="stateFilter" style="flex-shrink: 0;">
<option value="">All States</option>
<option value="pausedup">Completed</option>
<option value="downloading">Downloading</option>
<option value="error">Error</option>
</select>
<select class="form-select form-select-sm d-inline-block w-auto" id="categoryFilter">
<option value="">All Categories</option>
</select>
<select class="form-select form-select-sm d-inline-block w-auto" id="sortSelector" style="flex-shrink: 0;">
<option value="added_on" selected>Date Added (Newest First)</option>
<option value="added_on_asc">Date Added (Oldest First)</option>
<option value="name_asc">Name (A-Z)</option>
<option value="name_desc">Name (Z-A)</option>
<option value="size_desc">Size (Largest First)</option>
<option value="size_asc">Size (Smallest First)</option>
<option value="progress_desc">Progress (Most First)</option>
<option value="progress_asc">Progress (Least First)</option>
</select>
</div>
</div>
<div class="card-body p-0">
<div class="table-responsive">
<table class="table table-hover mb-0">
<thead>
<tr>
<th>
<input type="checkbox" class="form-check-input" id="selectAll">
</th>
<th>Name</th>
<th>Size</th>
<th>Progress</th>
<th>Speed</th>
<th>Category</th>
<th>Debrid</th>
<th>State</th>
<th>Actions</th>
</tr>
</thead>
<tbody id="torrentsList">
</tbody>
</table>
</div>
<div class="d-flex justify-content-between align-items-center p-3 border-top">
<div class="pagination-info">
<span id="paginationInfo">Showing 0-0 of 0 torrents</span>
</div>
<nav aria-label="Torrents pagination">
<ul class="pagination pagination-sm m-0" id="paginationControls"></ul>
</nav>
</div>
</div>
</div>
</div>
<script>
let refs = {
torrentsList: document.getElementById('torrentsList'),
categoryFilter: document.getElementById('categoryFilter'),
stateFilter: document.getElementById('stateFilter'),
sortSelector: document.getElementById('sortSelector'),
selectAll: document.getElementById('selectAll'),
batchDeleteBtn: document.getElementById('batchDeleteBtn'),
refreshBtn: document.getElementById('refreshBtn'),
paginationControls: document.getElementById('paginationControls'),
paginationInfo: document.getElementById('paginationInfo')
};
let state = {
torrents: [],
selectedTorrents: new Set(),
categories: new Set(),
states: new Set(['downloading', 'pausedup', 'error']), // Set takes an iterable; passing strings directly would split them into characters
selectedCategory: refs.categoryFilter?.value || '',
selectedState: refs.stateFilter?.value || '',
sortBy: refs.sortSelector?.value || 'added_on',
itemsPerPage: 20,
currentPage: 1
};
const torrentRowTemplate = (torrent) => `
<tr data-hash="${torrent.hash}">
<td>
<input type="checkbox" class="form-check-input torrent-select" data-hash="${torrent.hash}" ${state.selectedTorrents.has(torrent.hash) ? 'checked' : ''}>
</td>
<td class="text-nowrap text-truncate overflow-hidden" style="max-width: 350px;" title="${torrent.name}">${torrent.name}</td>
<td class="text-nowrap">${formatBytes(torrent.size)}</td>
<td style="min-width: 150px;">
<div class="progress" style="height: 8px;">
<div class="progress-bar" role="progressbar"
style="width: ${(torrent.progress * 100).toFixed(1)}%"
aria-valuenow="${(torrent.progress * 100).toFixed(1)}"
aria-valuemin="0"
aria-valuemax="100"></div>
</div>
<small class="text-muted">${(torrent.progress * 100).toFixed(1)}%</small>
</td>
<td>${formatSpeed(torrent.dlspeed)}</td>
<td><span class="badge bg-secondary">${torrent.category || 'None'}</span></td>
<td>${torrent.debrid || 'None'}</td>
<td><span class="badge ${getStateColor(torrent.state)}">${torrent.state}</span></td>
<td>
<button class="btn btn-sm btn-outline-danger" onclick="deleteTorrent('${torrent.hash}', '${torrent.category}')">
<i class="bi bi-trash"></i>
</button>
</td>
</tr>
`;
function formatBytes(bytes) {
if (!bytes) return '0 B';
const k = 1024;
const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return `${parseFloat((bytes / Math.pow(k, i)).toFixed(2))} ${sizes[i]}`;
}
function formatSpeed(speed) {
return `${formatBytes(speed)}/s`;
}
function getStateColor(state) {
const stateColors = {
'downloading': 'bg-primary',
'pausedup': 'bg-success',
'error': 'bg-danger',
};
return stateColors[state?.toLowerCase()] || 'bg-secondary';
}
function updateUI() {
// Filter torrents by selected category and state
let filteredTorrents = state.torrents;
if (state.selectedCategory) {
filteredTorrents = filteredTorrents.filter(t => t.category === state.selectedCategory);
}
if (state.selectedState) {
// Compare case-insensitively so states like "pausedUP" match the lowercase filter values
filteredTorrents = filteredTorrents.filter(t => t.state?.toLowerCase() === state.selectedState.toLowerCase());
}
// Sort the filtered torrents
filteredTorrents = sortTorrents(filteredTorrents, state.sortBy);
const totalPages = Math.ceil(filteredTorrents.length / state.itemsPerPage);
if (state.currentPage > totalPages && totalPages > 0) {
state.currentPage = totalPages;
}
const paginatedTorrents = paginateTorrents(filteredTorrents);
// Update the torrents list table
refs.torrentsList.innerHTML = paginatedTorrents.map(torrent => torrentRowTemplate(torrent)).join('');
// Update the category filter dropdown
const currentCategories = Array.from(state.categories).sort();
const categoryOptions = ['<option value="">All Categories</option>']
.concat(currentCategories.map(cat =>
`<option value="${cat}" ${cat === state.selectedCategory ? 'selected' : ''}>${cat}</option>`
));
refs.categoryFilter.innerHTML = categoryOptions.join('');
// Clean up selected torrents that no longer exist
state.selectedTorrents = new Set(
Array.from(state.selectedTorrents)
.filter(hash => filteredTorrents.some(t => t.hash === hash))
);
// Update batch delete button visibility
refs.batchDeleteBtn.style.display = state.selectedTorrents.size > 0 ? '' : 'none';
// Update the select all checkbox state
refs.selectAll.checked = filteredTorrents.length > 0 && filteredTorrents.every(torrent => state.selectedTorrents.has(torrent.hash));
}
async function loadTorrents() {
try {
const response = await fetch('/internal/torrents');
const torrents = await response.json();
state.torrents = torrents;
state.categories = new Set(torrents.map(t => t.category).filter(Boolean));
updateUI();
} catch (error) {
console.error('Error loading torrents:', error);
}
}
function sortTorrents(torrents, sortBy) {
// Create a copy of the array to avoid mutating the original
const result = [...torrents];
// Parse the sort value to determine field and direction
const [field, direction] = sortBy.includes('_asc') || sortBy.includes('_desc')
? [sortBy.split('_').slice(0, -1).join('_'), sortBy.endsWith('_asc') ? 'asc' : 'desc']
: [sortBy, 'desc']; // Default to descending if not specified
result.sort((a, b) => {
let valueA, valueB;
// Get values based on field
switch (field) {
case 'name':
valueA = a.name?.toLowerCase() || '';
valueB = b.name?.toLowerCase() || '';
break;
case 'size':
valueA = a.size || 0;
valueB = b.size || 0;
break;
case 'progress':
valueA = a.progress || 0;
valueB = b.progress || 0;
break;
case 'added_on':
valueA = a.added_on || 0;
valueB = b.added_on || 0;
break;
default:
valueA = a[field] || 0;
valueB = b[field] || 0;
}
// Compare based on type
if (typeof valueA === 'string') {
return direction === 'asc'
? valueA.localeCompare(valueB)
: valueB.localeCompare(valueA);
} else {
return direction === 'asc'
? valueA - valueB
: valueB - valueA;
}
});
return result;
}
async function deleteTorrent(hash, category) {
if (!confirm('Are you sure you want to delete this torrent?')) return;
try {
await fetch(`/internal/torrents/${category}/${hash}`, {
method: 'DELETE'
});
await loadTorrents();
createToast('Torrent deleted successfully');
} catch (error) {
console.error('Error deleting torrent:', error);
createToast('Failed to delete torrent', 'error');
}
}
async function deleteSelectedTorrents() {
if (!confirm(`Are you sure you want to delete ${state.selectedTorrents.size} selected torrents?`)) return;
try {
// Comma-separated list of hashes
const hashes = Array.from(state.selectedTorrents).join(',');
await fetch(`/internal/torrents/?hashes=${encodeURIComponent(hashes)}`, {
method: 'DELETE'
});
await loadTorrents();
createToast('Selected torrents deleted successfully');
} catch (error) {
console.error('Error deleting torrents:', error);
createToast('Failed to delete some torrents', 'error');
}
}
function paginateTorrents(torrents) {
const totalItems = torrents.length;
const totalPages = Math.ceil(totalItems / state.itemsPerPage);
const startIndex = (state.currentPage - 1) * state.itemsPerPage;
const endIndex = Math.min(startIndex + state.itemsPerPage, totalItems);
// Update pagination info text
refs.paginationInfo.textContent =
`Showing ${totalItems > 0 ? startIndex + 1 : 0}-${endIndex} of ${totalItems} torrents`;
// Generate pagination controls
refs.paginationControls.innerHTML = '';
if (totalPages <= 1) {
return torrents.slice(startIndex, endIndex);
}
// Previous button
const prevLi = document.createElement('li');
prevLi.className = `page-item ${state.currentPage === 1 ? 'disabled' : ''}`;
prevLi.innerHTML = `
<a class="page-link" href="#" aria-label="Previous" ${state.currentPage === 1 ? 'tabindex="-1" aria-disabled="true"' : ''}>
<span aria-hidden="true">&laquo;</span>
</a>
`;
if (state.currentPage > 1) {
prevLi.querySelector('a').addEventListener('click', (e) => {
e.preventDefault();
state.currentPage--;
updateUI();
});
}
refs.paginationControls.appendChild(prevLi);
// Page numbers
const maxPageButtons = 5;
let startPage = Math.max(1, state.currentPage - Math.floor(maxPageButtons / 2));
let endPage = Math.min(totalPages, startPage + maxPageButtons - 1);
if (endPage - startPage + 1 < maxPageButtons) {
startPage = Math.max(1, endPage - maxPageButtons + 1);
}
for (let i = startPage; i <= endPage; i++) {
const pageLi = document.createElement('li');
pageLi.className = `page-item ${i === state.currentPage ? 'active' : ''}`;
pageLi.innerHTML = `<a class="page-link" href="#">${i}</a>`;
pageLi.querySelector('a').addEventListener('click', (e) => {
e.preventDefault();
state.currentPage = i;
updateUI();
});
refs.paginationControls.appendChild(pageLi);
}
// Next button
const nextLi = document.createElement('li');
nextLi.className = `page-item ${state.currentPage === totalPages ? 'disabled' : ''}`;
nextLi.innerHTML = `
<a class="page-link" href="#" aria-label="Next" ${state.currentPage === totalPages ? 'tabindex="-1" aria-disabled="true"' : ''}>
<span aria-hidden="true">&raquo;</span>
</a>
`;
if (state.currentPage < totalPages) {
nextLi.querySelector('a').addEventListener('click', (e) => {
e.preventDefault();
state.currentPage++;
updateUI();
});
}
refs.paginationControls.appendChild(nextLi);
return torrents.slice(startIndex, endIndex);
}
document.addEventListener('DOMContentLoaded', () => {
loadTorrents();
const refreshInterval = setInterval(loadTorrents, 5000);
refs.refreshBtn.addEventListener('click', loadTorrents);
refs.batchDeleteBtn.addEventListener('click', deleteSelectedTorrents);
refs.selectAll.addEventListener('change', (e) => {
const filteredTorrents = state.torrents.filter(t => {
if (state.selectedCategory && t.category !== state.selectedCategory) return false;
if (state.selectedState && t.state?.toLowerCase() !== state.selectedState.toLowerCase()) return false;
return true;
});
if (e.target.checked) {
filteredTorrents.forEach(torrent => state.selectedTorrents.add(torrent.hash));
} else {
filteredTorrents.forEach(torrent => state.selectedTorrents.delete(torrent.hash));
}
updateUI();
});
refs.torrentsList.addEventListener('change', (e) => {
if (e.target.classList.contains('torrent-select')) {
const hash = e.target.dataset.hash;
if (e.target.checked) {
state.selectedTorrents.add(hash);
} else {
state.selectedTorrents.delete(hash);
}
updateUI();
}
});
refs.categoryFilter.addEventListener('change', (e) => {
state.selectedCategory = e.target.value;
state.currentPage = 1; // Reset to first page
updateUI();
});
refs.stateFilter.addEventListener('change', (e) => {
state.selectedState = e.target.value;
state.currentPage = 1; // Reset to first page
updateUI();
});
refs.sortSelector.addEventListener('change', (e) => {
state.sortBy = e.target.value;
state.currentPage = 1; // Reset to first page
updateUI();
});
window.addEventListener('beforeunload', () => {
clearInterval(refreshInterval);
});
});
</script>
{{ end }}

Some files were not shown because too many files have changed in this diff Show More