318 Commits

Author SHA1 Message Date
Mukhtar Akere
207d43b13f fix issues with rclone on windows
2025-10-27 11:56:23 +01:00
Mukhtar Akere
9f9a85d302 fix gh workflow 2025-10-27 11:49:12 +01:00
Mukhtar Akere
2712315108 hotfix: downloader deleting files with multi-season
2025-10-27 11:45:12 +01:00
Mukhtar Akere
1f384ba4f7 remove ci/cd test 2025-10-26 11:41:39 +01:00
Mukhtar Akere
7db79e99ba minor fixes 2025-10-22 16:57:06 +01:00
Mukhtar Akere
ad394c86ee Merge branch 'beta' of github.com:sirrobot01/decypharr into beta 2025-10-22 16:52:04 +01:00
crashxer
7af90ebe47 Add feature to remove torrent tracker URLs from torrents for private tracker downloads (#99)
- Remove trackers from torrents/magnet URI

---------

Co-authored-by: Mukhtar Akere <akeremukhtar10@gmail.com>
2025-10-22 16:44:23 +01:00
Duc Nghiem Xuan
7032cc368b Update version badge link format (#163)
Fix version badge link to use correct version format.
2025-10-22 16:40:24 +01:00
rotecode
f21f5cad94 fix regression in webdav file removal logic (#162) 2025-10-21 10:50:26 +01:00
Mukhtar Akere
f93d1a5913 minor cleanup
2025-10-16 21:04:14 +01:00
Mukhtar Akere
2a4f09c06d Delete failed download link for next retry 2025-10-15 11:56:15 +01:00
Mukhtar Akere
b1b6353fb3 fix download_uncached bug 2025-10-13 20:37:59 +01:00
Mukhtar Akere
df7979c430 Fix some minor issues with authentication and qbit auth 2025-10-13 20:12:17 +01:00
Mukhtar Akere
726f97e13c chore:
- Rewrite arr storage to fix issues with repair
- Fix issues with restarts taking longer than expected
- Add bw_limit to rclone config
- Add support for skipping multi-season
- Other minor bug fixes
2025-10-13 17:02:50 +01:00
Mukhtar Akere
ab485adfc8 hotfix 2025-10-08 09:14:55 +01:00
Mukhtar Akere
700d00b802 - Fix issues with new setup
- Fix arr setup getting the wrong credentials
- Add file link invalidator
- Other minor bug fixes
2025-10-08 08:13:13 +01:00
Mukhtar Akere
22dae9efad Add a new worker that checks if an account is opened
2025-09-17 23:30:45 +01:00
Mukhtar Akere
3f0870cd1c torbox: fix pagination bug, fix download uncached bug 2025-09-16 21:34:58 +01:00
Mukhtar Akere
30b2db06e7 Rewrote account switching, fix some minor bugs here and there 2025-09-16 21:15:24 +01:00
Mukhtar Akere
76f5b85313 Fix issues with dir-cache-time, umask, and wrongly set gid/uid; add extra vfs options 2025-09-05 16:11:22 +01:00
Mukhtar Akere
85cd37f29b Revert former beta changes 2025-08-30 04:10:18 +01:00
Mukhtar Akere
aff12c2e4b Fix Added bug in torrent 2025-08-28 03:26:43 +01:00
Mukhtar Akere
d76ca032ab hotfix config update 2025-08-28 01:30:54 +01:00
Mukhtar Akere
8bb786c689 hotfix nil downloadLink 2025-08-27 23:57:49 +01:00
Mukhtar Akere
83058489b6 Add callback URL for post-processing 2025-08-27 13:02:43 +01:00
Mukhtar Akere
267cc2d32b Fix issues with account switching 2025-08-26 15:31:24 +01:00
Mukhtar Akere
eefe8a3901 Hotfix for download link generation and account switching 2025-08-24 21:54:26 +01:00
Mukhtar Akere
618eb73067 - Add support for multi-season imports
- Improve in-memory storage, which reduces memory usage
- Fix issues with rclone integration
2025-08-24 16:25:37 +01:00
Mukhtar Akere
f8667938b6 Add more rclone flags, fix minor issues 2025-08-23 06:00:07 +01:00
Mukhtar Akere
b0a698f15e - Improve memory footprint
- Add batch processing for arr repairs
2025-08-21 03:32:46 +01:00
Mukhtar Akere
2548c21e5b Fix rclone file log
2025-08-19 01:01:53 +01:00
Mukhtar Akere
1b03ccefbb Hotfix rclone logging flags 2025-08-19 00:55:43 +01:00
Mukhtar Akere
e3a249a9cc Fix issues with rclone mounting
2025-08-18 22:12:26 +01:00
Mukhtar Akere
8696db42d2 - Add more rclone support
- Add rclone log viewer
- Add more stats to Stats page
- Fix some minor bugs
2025-08-18 01:57:02 +01:00
Mukhtar Akere
742d8fb088 - Fix issues with cache dir
- Fix responsiveness issue with navbars
- Support user entry for users running as non-root
- Other minor fixes
2025-08-12 15:14:42 +01:00
Mukhtar Akere
a0e9f7f553 Fix issues with exit code on windows, fix gh-docs
2025-08-10 11:35:50 +01:00
Mukhtar Akere
4be4f6b293 Merge branch 'beta' 2025-08-10 11:09:11 +01:00
Mukhtar Akere
6c8949b831 Add auth to qbittorrent middleware 2025-08-09 20:25:16 +01:00
Mukhtar Akere
0dd1efb07c Final bug fixes 2025-08-09 19:57:32 +01:00
Mukhtar Akere
3aeb806033 Wrap up 1.1.0 2025-08-09 10:55:10 +01:00
Mukhtar Akere
7c8156eacf Fix nil map 2025-08-08 13:17:09 +01:00
Mukhtar Akere
d8a963f77f fix failed cache dir 2025-08-08 12:49:29 +01:00
Mukhtar Akere
27e7bc8f47 fix failed cache dir 2025-08-08 12:48:05 +01:00
Mukhtar Akere
1d243dd12b Add Stats page 2025-08-08 12:45:58 +01:00
Mukhtar Akere
b4efa22bfd Fix issues with no global config 2025-08-08 06:04:42 +01:00
Mukhtar Akere
6f9fafd7d8 Migrate to full rclone rcd 2025-08-08 05:22:52 +01:00
Mukhtar Akere
eba24c9d63 Fix issues with rclone setup 2025-08-07 05:31:07 +01:00
Mukhtar Akere
c620ba3d56 Add vfs cache poll interval 2025-08-05 12:29:55 +01:00
Mukhtar Akere
fab3a7e4f7 minor fixes, change help text 2025-08-05 11:49:52 +01:00
Mukhtar Akere
01615cb51e Cleanup mounts 2025-08-05 05:18:24 +01:00
Mukhtar Akere
cb63fc69f5 Final fix for writeheader 2025-08-05 05:01:34 +01:00
Mukhtar Akere
40755fbdde Fix issues with headers 2025-08-05 04:39:03 +01:00
Mukhtar Akere
d0ae839617 Fix issues with headers 2025-08-05 04:28:38 +01:00
Mukhtar Akere
ce972779c3 Fix superfluous header issue 2025-08-05 04:01:41 +01:00
Mukhtar Akere
139249a1f3 - Add mounting support
- Fix minor issues
2025-08-04 16:57:09 +01:00
Mukhtar Akere
a60d93677f Fix config.html 2025-07-24 03:07:20 +01:00
Mukhtar Akere
9c31ad266e Fix config.html 2025-07-24 03:03:18 +01:00
Mukhtar Akere
3d2fcf5656 Fix superfluous header, other minor bugs 2025-07-21 20:35:49 +01:00
Mukhtar Akere
afe577bf2f - Fix repair bugs
- Minor html/js bugs from new template
- Other minor issues
2025-07-13 06:30:02 +01:00
Mukhtar Akere
604402250e hotfix login and registration 2025-07-12 00:57:48 +01:00
Mukhtar Akere
74615a80ff Fix config.js 2025-07-11 13:17:43 +01:00
Sadman Sakib
b901bd5175 Feature/torbox provider improvements (#100)
- Add Torbox WebDAV implementation
- Fix Issues with sample and extension checks
2025-07-11 13:17:03 +01:00
Mukhtar Akere
8c56e59107 Fix some UI bugs; colors etc 2025-07-11 06:03:11 +01:00
Mukhtar Akere
b8b9e76753 Add seeders, add "Remove selected from debrid" button 2025-07-10 15:15:02 +01:00
Mukhtar Akere
6fb54d322e Fix dockerignore 2025-07-10 02:31:30 +01:00
Mukhtar Akere
cf61546bec Move to tailwind-build instead of CDNs 2025-07-10 02:17:35 +01:00
Mukhtar Akere
c72867ff57 Testing a new UI 2025-07-09 20:08:09 +01:00
Mukhtar Akere
fa6920f94a Merge branch 'beta'
2025-07-09 05:14:39 +01:00
Mukhtar Akere
dba5604d79 fix refresh rclone http client 2025-07-07 00:08:48 +01:00
iPromKnight
f656b7e4e2 feat: Allow deleting all __bad__ with a single button (#98) 2025-07-04 20:13:12 +01:00
Mukhtar Akere
c7b07137c5 Fix repair bug 2025-07-03 23:36:30 +01:00
Mukhtar Akere
c0aa4eaeba Fix modtime bug 2025-07-02 01:17:31 +01:00
Mukhtar Akere
2c90e518aa fix playback issues 2025-07-01 16:10:23 +01:00
Mukhtar Akere
dec7d93272 fix streaming 2025-07-01 15:28:19 +01:00
Mukhtar Akere
8d092615db Update stream client; Add repair strategy 2025-07-01 04:42:33 +01:00
iPromKnight
a4ee0973cc fix: AllDebrid webdav compatibility, and uncached downloads (#97) 2025-07-01 04:10:21 +01:00
Mukhtar Akere
ab12610346 Merge branch 'beta'
2025-06-26 21:15:22 +01:00
Mukhtar Akere
1d19be9013 hotfix repair html table 2025-06-26 07:31:12 +01:00
Mukhtar Akere
cee0e20fe1 hotfix repair and download rate limit 2025-06-26 06:08:50 +01:00
Mukhtar Akere
a3e698e04f Add repair and download rate limit 2025-06-26 05:45:20 +01:00
Mukhtar Akere
e123a2fd5e Hotfix issues with 1.0.3 2025-06-26 03:51:28 +01:00
Mukhtar Akere
817051589e Move to per-torrent repair; Fix issues with adding torrents 2025-06-23 18:54:52 +01:00
Mukhtar Akere
705de2d2bc Merge branch 'beta'
2025-06-23 12:00:53 +01:00
Mukhtar Akere
54c421a480 Update Docs 2025-06-23 11:59:26 +01:00
Mukhtar Akere
1b98b994b7 Add size to arr ContentFile 2025-06-19 18:23:38 +01:00
Mukhtar Akere
06096c3748 Hotfix empty arr setup 2025-06-19 17:58:30 +01:00
Mukhtar Akere
7474011ef0 Update repair tool 2025-06-19 15:56:01 +01:00
Mukhtar Akere
086aa3b1ff Improve Arr integrations 2025-06-19 14:40:12 +01:00
Mukhtar Akere
c15e9d8f70 Update repair 2025-06-18 12:44:05 +01:00
Mukhtar Akere
b2e99585f7 Fix issues with repair, move to a different streaming option 2025-06-18 10:42:44 +01:00
Mukhtar Akere
5661b05ec1 added CET timezone 2025-06-16 22:54:11 +01:00
Mukhtar Akere
b7226b21ec added CET timezone 2025-06-16 22:41:46 +01:00
Mukhtar Akere
605d5b81c2 Fix duration bug in config 2025-06-16 13:55:02 +01:00
Mukhtar Akere
8d87c602b9 - Add remove stalled torrent
- A few cleanups
2025-06-15 22:46:07 +01:00
Mukhtar Akere
7cf25f53e7 hotfix 2025-06-14 19:32:50 +01:00
Mukhtar Akere
22280f15cf cleanup torrent cache 2025-06-14 16:55:45 +01:00
Mukhtar Akere
a539aa53bd - Speed up repairs when checking links
- Remove run on start for repairs since it causes issues
- Add support for arr-specific debrid
- Support for queuing system
- Support for no-op when sending torrents to debrid
2025-06-14 16:09:28 +01:00
Mukhtar Akere
3efda45304 - Implement multi-download API tokens
- Move things around a bit
2025-06-08 19:06:17 +01:00
Mukhtar Akere
5bf1dab5e6 Torrent Queuing for Botched torrents (#83)
* Implement a queue for handling failed torrents

* Add checks for getting slots

* Few other cleanups, change some function names
2025-06-07 17:23:41 +01:00
Mukhtar Akere
84603b084b Some improvements to beta 2025-06-07 10:03:01 +01:00
Mukhtar Akere
dfcf8708f1 final prep for 1.0.3 2025-06-03 10:45:23 +01:00
Mukhtar Akere
30a1dd74a7 Add basic healthcheck 2025-06-02 20:45:39 +01:00
Mukhtar Akere
f041ef47a7 fix cloudflare, probably? 2025-06-02 20:04:41 +01:00
Mukhtar Akere
349a13468b fix cloudflare, maybe? 2025-06-02 15:44:03 +01:00
Mukhtar Akere
9c6c44d785 - Revamp decypharr arch
- Add callback_url, download_folder to addContent API
- Fix a few bugs
- More declarative UI keywords
- Speed up repairs
- A few other improvements/bug fixes
2025-06-02 12:57:36 +01:00
Mukhtar Akere
1cd09239f9 - Add more in-depth stats like number of torrents, profile details, etc.
- Add torrent ingest endpoints
- Add issue template
2025-05-29 04:05:44 +01:00
Elias Benbourenane
f9c49cbbef Torrent list context menu (#40)
* feat: Torrent list context menu

* style: Leave more padding on the context menu for smaller screens
2025-05-28 07:29:18 -07:00
Mukhtar Akere
60b8d87f1c hotfix rar PR 2025-05-28 00:14:43 +01:00
Elias Benbourenane
fbd6cd5038 Random access for RARed RealDebrid torrents (#61)
* feat: AI translated port of RARAR.py in Go

* feat: Extract and cache byte ranges of RARed RD torrents

* feat: Stream and download files with byte ranges if specified

* refactor: Use a more structured data format for byte ranges

* fix: Rework streaming to fix error handling

* perf: More efficient RAR file pre-processing

* feat: Made the RAR unpacker an optional config option

* refactor: Remove unnecessary Rar prefix for more idiomatic code

* refactor: More appropriate private method declaration

* feat: Error handling for parsing RARed torrents with retry requests and EOF validation

* fix: Correctly parse unicode file names

* fix: Handle special character conversion for RAR torrent file names

* refactor: Removed debug logs

* feat: Only allow two concurrent RAR unpacking tasks

* fix: Include "<" and ">" as unsafe chars for RAR unpacking

* refactor: Separate types into their own file

* refactor: Don't read RAR files on reader initialization
2025-05-27 16:10:23 -07:00
Mukhtar Akere
87bf8d0574 Merge branch 'beta'
2025-05-27 23:45:13 +01:00
Mukhtar Akere
7f25599b60 - Add support for per-file deletion
- Per-file repair instead of per-torrent
- Fix issues with LoadLocation
- Fix other minor bugs with torbox
2025-05-27 19:31:19 +01:00
Mukhtar Akere
d313ed0712 hotfix non-webdav symlinker
2025-05-26 00:16:46 +01:00
Mukhtar Akere
09202b88e9 Finalize Beta
2025-05-23 02:30:04 +01:00
Mukhtar Akere
d10a6ddedd - Add etags to stream url
- Support for non-File files with range instead of reading into memory
- Log more errors for realdebrid
2025-05-22 22:23:49 +01:00
Mukhtar Akere
7c1defc684 Add timer for non-webdav adds 2025-05-22 20:03:07 +01:00
Mukhtar Akere
83a453cd0c Add serve from rclone; add readiness check for each debrid, rather than waiting for all to be ready 2025-05-22 20:01:10 +01:00
Mukhtar Akere
a2bdad7c2a Add add_samples to available flags 2025-05-22 15:14:31 +01:00
Mukhtar Akere
57ccd67c83 Fix timeout in grab; remove pprof 2025-05-20 18:47:05 +01:00
Mukhtar Akere
5aa1c67544 - Add PROPFIND for root path
- Significantly reduce memory footprint
- Fix minor bugs
2025-05-20 12:57:27 +01:00
Mukhtar Akere
53748ea297 Fix file downloads bug 2025-05-18 13:26:43 +01:00
Mukhtar Akere
109d0a0c1c - Add cancellation context
- Other bug fixes
2025-05-17 21:23:43 +01:00
Mukhtar Akere
35a74d8dba - Fix Delete button in webdav
- Other bug fixes
2025-05-16 16:43:01 +01:00
Mukhtar Akere
b984697fe3 - Clean up the code
- Add delete button to webdav ui
- Some other bug fixes here and there
2025-05-15 02:42:38 +01:00
somesuchnonsense
690d7668c1 fixed windows filepath issues by delaying path to filepath conversion (#66) 2025-05-14 10:42:20 -07:00
Mukhtar Akere
3c8e6bae81 Move fully off zsync. Defaults to simple maps with mutexes 2025-05-14 14:55:18 +01:00
Mukhtar Akere
64edc5547d Revert download link error to debug 2025-05-13 14:08:49 +01:00
Mukhtar Akere
03a1d73825 Fix issues with __bad__ 2025-05-13 14:00:03 +01:00
Mukhtar Akere
3b018b3571 hotfix: bad torrent 2025-05-13 13:37:35 +01:00
Mukhtar Akere
e5b3e0741e remove count from re-insert since it's moot 2025-05-13 13:28:15 +01:00
Mukhtar Akere
36e681d0e6 - Add __bad__ for bad torrents
- Add colors to logs info
- Make logs a bit clearer
- Mark torrents as bad instead of deleting them
2025-05-13 13:17:26 +01:00
Mukhtar Akere
7c1bb52793 - Re-enable refresh torrents
- Fix issues with re-inserts etc
- Fix getting torrents and updating
2025-05-12 16:07:47 +01:00
Mukhtar Akere
9de7cfd73b - Improve propfind handler
- remove path escapes in fileinfo
- other minor fixes
2025-05-12 03:35:40 +01:00
Mukhtar Akere
ffb1745bf6 Add support for rclone refresh dirs instead of refreshing everything 2025-05-11 15:20:06 +01:00
Mukhtar Akere
0f56badb45 - Hotfixes;
- Speed improvements
2025-05-11 08:01:42 +01:00
Mukhtar Akere
8e464cdcea - Add support for virtual folders
- Minor bug fixes
2025-05-10 19:52:53 +01:00
Mukhtar Akere
4cdfd051f3 Increase debouncer time 2025-05-10 01:27:01 +01:00
Mukhtar Akere
e05c6d5028 - Retry RD 502 errors
- Fix issues with re-inserted torrents bugging out
- Fix version.txt
- Massive improvements in importing times
- Fix issues with config.json resetting
- Fix other minor issues
2025-05-10 01:04:51 +01:00
Mukhtar Akere
57de04b164 Optimize caching, speed up imports 2025-05-08 02:15:46 +01:00
Mukhtar Akere
0deb88e265 minor bug fixes; improvements, final-beta-pre-stable 2025-05-07 18:25:09 +01:00
Mukhtar Akere
21354529e7 fix timeout when downloading 2025-05-02 20:48:29 +01:00
Elias Benbourenane
ef820b5bf4 Consistent WebDav file sorting (#62)
* style: Sort webdav file lists by name
* style: Consistent sorting on the get torrents route
2025-05-02 11:11:58 -07:00
Mukhtar Akere
130433203f fix propfind cache 2025-04-30 00:52:15 +01:00
Mukhtar Akere
1248d99680 revamp rate limiter 2025-04-29 23:41:03 +01:00
Mukhtar Akere
c0703cb622 hotfix 2025-04-29 12:07:01 +01:00
Mukhtar Akere
6c2bfa811a Important hotfixes:
- Re-inserting botched torrents
- Fixing issues with failed download link
2025-04-29 11:36:16 +01:00
Mukhtar Akere
75a5bb90a3 Hotfix 2025-04-29 10:32:05 +01:00
Mukhtar Akere
1f190e3cb6 fix 2025-04-28 23:10:48 +01:00
Mukhtar Akere
5f06a244b8 Fix issues with duplicate names; other minor bug fixes 2025-04-28 23:06:44 +01:00
Mukhtar Akere
10467ff9f8 Fix bugs with deleted torrents from different names 2025-04-28 01:13:48 +01:00
Mukhtar Akere
f977c52571 - Fix nil checks
- Enable add arr to config page
- Other minor fixes
2025-04-27 23:43:19 +01:00
Mukhtar Akere
a3e64cc269 - Delete empty files in selected torrents
- Add more info to UI
- Add a global file download limit for local downloads
2025-04-27 01:16:24 +01:00
Mukhtar Akere
e8112a4647 hotfix 2025-04-26 21:13:09 +01:00
somesuchnonsense
6e2d1e1a7f updated filepaths for multiplatform support (#56)
- Migrate from Go's path to path/filepath for multi-OS support
2025-04-26 12:59:23 -07:00
Mukhtar Akere
bce51ecd4f hotfix 2025-04-25 15:21:49 +01:00
Mukhtar Akere
ae5e237379 Hotfix 2025-04-25 12:48:04 +01:00
Mukhtar Akere
07f1d0f28d - Fix symlinks % bug
- A cleaner settings page
- More bug fixes
2025-04-25 12:36:12 +01:00
Mukhtar Akere
267430e6fb Fixes
- Download Link fix
- reinsert fix
2025-04-23 16:38:55 +01:00
Mukhtar Akere
1a4db69b20 hotfix 2025-04-22 21:40:39 +01:00
Mukhtar Akere
3cc8ad3cdc wraps up duplicate names implementation 2025-04-22 21:24:33 +01:00
Mukhtar Akere
fb39e92a88 - Add support for merging files from torrents with the same name
- Add infohash as a folder naming
- Other minor bugs
2025-04-22 19:32:55 +01:00
Mukhtar Akere
2139d3a175 fix auth setup bug 2025-04-22 02:00:43 +01:00
Mukhtar Akere
32935ce3aa fix bugs; move to gocron for scheduled jobs 2025-04-21 23:23:35 +01:00
Mukhtar Akere
a27c5dd491 Hotfixes:
- Fix % error in url encode
- Fix alldebrid downloading bug
- Fix duplicate checks for newly added torrents
2025-04-20 00:44:58 +01:00
Mukhtar Akere
dc8ee3d150 Fix repair bug; fix torrent refreshes 2025-04-19 10:41:45 +01:00
Mukhtar Akere
52877107c9 - Fix alldebrid bug with webdav (for nested files)
- Add support for re-inserting broken files
- Other minor bug fixes
2025-04-18 15:56:52 +01:00
Mukhtar Akere
f34a371274 wrap up url escape bug; fix pausedUP bug 2025-04-17 19:12:28 +01:00
Mukhtar Akere
8c78da3f69 fix escape 2025-04-17 16:31:39 +01:00
Mukhtar Akere
1983e27124 - Fix url escape for webdav files
- Add support for bind address, url base
2025-04-17 15:34:47 +01:00
Mukhtar Akere
80615e06d1 - Fix url escape for webdav files
- Add support for bind address, url base
2025-04-17 15:26:58 +01:00
Mukhtar Akere
b5b6f0ff73 fix invalid characters in name 2025-04-17 01:07:40 +01:00
Mukhtar Akere
3fe9053aa8 fix progress report 2025-04-16 23:24:53 +01:00
Mukhtar Akere
c07a85f4d0 fix reinsert torrent error 2025-04-16 23:19:52 +01:00
Mukhtar Akere
af067cace9 Changelog 0.6.0 2025-04-16 17:31:50 +01:00
Mukhtar Akere
ea79e2a6fb Merge branch 'experimental' into beta 2025-04-16 11:35:53 +01:00
Mukhtar Akere
58fb4e6e14 finalize experimental 2025-04-16 11:32:42 +01:00
Mukhtar Akere
39945616f3 Remove Proxy feature 2025-04-13 13:03:43 +01:00
Mukhtar Akere
8029cd3840 Add support for adding torrent file 2025-04-13 12:40:31 +01:00
Mukhtar Akere
19b8664146 fix action 2025-04-13 11:36:48 +01:00
Mukhtar Akere
8ea128446c fix action 2025-04-13 11:35:36 +01:00
Mukhtar Akere
391900e93d fix action 2025-04-13 11:32:17 +01:00
Mukhtar Akere
5987028f05 fix action 2025-04-13 11:31:01 +01:00
Mukhtar Akere
7492f629f9 Add documentation, finalizing experimental 2025-04-13 11:29:08 +01:00
Mukhtar Akere
101ae4197e fix multi-api key bug 2025-04-11 00:05:09 +01:00
Mukhtar Akere
a357897222 - Fix bandwidth limit error
- Add cooldowns for fair usage limit bug
- Fix repair bugs
2025-04-09 20:00:06 +01:00
Mukhtar Akere
92177b150b Fix url in UI 2025-04-08 21:00:40 +01:00
Mukhtar Akere
9011420ac3 Add invalid link reset worker 2025-04-08 17:48:01 +01:00
Mukhtar Akere
4b5e18df94 - Deprecate proxy
- Add Proxy for each debrid
- Add support for multiple-API keys
- Use internal http.Client for streaming
- Bug fixes etc
2025-04-08 17:30:24 +01:00
Mukhtar Akere
003f73c456 Merge branch 'beta' of github.com:sirrobot01/debrid-blackhole into beta
2025-04-03 11:26:47 +01:00
Mukhtar Akere
b34935d490 hotfix un-cached downloading 2025-04-03 11:26:27 +01:00
Mukhtar Akere
4659cd4273 Performance improvements; import speedup 2025-04-03 11:24:30 +01:00
Mukhtar Akere
7d954052ae - Refactor code
- Add better logging for 429 responses when streaming
- Fix minor issues
2025-04-01 06:37:10 +01:00
Mukhtar Akere
8bf164451c Fix re-insertion 2025-03-31 08:47:27 +01:00
Mukhtar Akere
5792305a66 Fix sample check rd 2025-03-31 08:23:11 +01:00
Mukhtar Akere
f9addaed36 minor 2025-03-31 06:15:41 +01:00
Mukhtar Akere
face86e151 - Cleanup webdav
- Include a re-insert feature for botched torrents
- Other minor bug fixes
2025-03-31 06:11:04 +01:00
Mukhtar Akere
cf28f42db4 Update readme, fix minor config bugs 2025-03-29 00:23:10 +01:00
Mukhtar Akere
dc2301eb98 Fixes:
- Add support for multiple api keys
- Fix minor bugs, remove goroutine mem leaks
2025-03-28 23:44:21 +01:00
Mukhtar Akere
f9bc7ad914 Fixes
- Be conservative about the number of goroutines
- Minor fixes
- Add Webdav to ui
- Add more configs to UI
2025-03-28 00:25:02 +01:00
Mukhtar Akere
4ae5de99e8 Fix deleting torrent bug 2025-03-27 09:01:33 +01:00
Mukhtar Akere
d49fbea60f - Add more limits on the number of goroutines
- Add goroutine stats to logs
- Fix issues with repair
2025-03-27 08:24:40 +01:00
Mukhtar Akere
7bd38736b1 Fix for file naming 2025-03-26 21:12:01 +01:00
Mukhtar Akere
56bca562f4 Fix duplicate links for files 2025-03-24 20:39:35 +01:00
Mukhtar Akere
9469c98df7 Add support for different folder naming; minor bug fixes 2025-03-24 12:12:38 +01:00
Mukhtar Akere
8c13da5d30 Improve streaming 2025-03-23 09:32:19 +01:00
Mukhtar Akere
dc6ee2f020 fix umask for windows
2025-03-23 07:14:46 +01:00
Mukhtar Akere
fce0fc0215 update changelog 2025-03-22 06:11:15 +01:00
Mukhtar Akere
e2f792d5ab hotfix xml 2025-03-22 06:05:53 +01:00
Mukhtar Akere
49875446b4 Fix header writing 2025-03-22 00:30:00 +01:00
Mukhtar Akere
738474be16 Experimental usability stage 2025-03-22 00:17:07 +01:00
Mukhtar Akere
d10b679584 Fix regex 2025-03-21 17:58:06 +01:00
Mukhtar Akere
f93d489956 Fix regex 2025-03-21 17:55:19 +01:00
Mukhtar Akere
8d494fc277 Update repair; fix minor bugs with naming 2025-03-21 04:10:16 +01:00
Mukhtar Akere
0c68364a6a Improvements:
- An improvised caching for stats; using metadata on ls
- Integrated into the downloading system
- Fix minor bugs noticed
- Still experimental, sike
2025-03-20 10:42:51 +01:00
David Young
4e2fb9c74f Fix minor doc issues (#47)
Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2025-03-18 21:36:05 -07:00
Mukhtar Akere
50c775ca74 Fix naming to accurately depict zurg 2025-03-19 05:31:36 +01:00
Mukhtar Akere
0d178992ef Improve webdav; add workers for refreshes 2025-03-19 03:08:22 +01:00
Mukhtar Akere
5d2fabe20b initializing webdav server 2025-03-18 10:02:10 +01:00
Mukhtar Akere
fa469c64c6 Merge branch 'beta' into experimental 2025-03-16 09:31:31 +01:00
Mukhtar Akere
26f6f384a3 Fix arr download_uncached settings 2025-03-16 05:43:25 +01:00
Mukhtar Akere
b91aa1db38 Add a precacher to significantly improve importing to arrs/plex 2025-03-15 23:12:37 +01:00
Mukhtar Akere
e2ff3b26de Add Umask support 2025-03-15 21:30:19 +01:00
Mukhtar Akere
2d29996d2c experimental 2025-03-15 21:08:15 +01:00
Mukhtar Akere
b4e4db27fb Hotfix for sonarr search
2025-03-13 09:07:35 +01:00
Mukhtar Akere
c0589d4ad2 fix repair; repair.json, remove arr details from endpoint 2025-03-12 04:53:01 +01:00
Mukhtar Akere
a30861984c Fix saveTofile; add a global panic handler and a recoverer for all functions 2025-03-11 18:08:03 +01:00
Mukhtar Akere
4f92b135d4 hotfix repair; handle 206 requests; increase log retention 2025-03-11 04:22:51 +01:00
Mukhtar Akere
2b2a682218 - Fix ARR flaky bug
- Refined download uncached options
- Deprecate qbittorrent log level
- Skip Repair for specified arr
2025-03-09 03:56:34 +01:00
Mukhtar Akere
a83f3d72ce Changelog 0.5.0 2025-03-05 20:15:10 +01:00
Mukhtar Akere
1c06407900 Hotfixes
2025-03-02 14:33:58 +01:00
Mukhtar Akere
b1a3d8b762 Minor bug fixes 2025-02-28 21:25:47 +01:00
Mukhtar Akere
0e25de0e3c Hotfix 2025-02-28 20:48:30 +01:00
Mukhtar Akere
e741a0e32b Hotfix 2025-02-28 20:21:45 +01:00
Mukhtar Akere
84bd93805f try to fix memory hogging 2025-02-28 16:05:04 +01:00
Mukhtar Akere
fce2ce28c7 Finalize workflow 2025-02-28 04:06:51 +01:00
Mukhtar Akere
302a461efd Update workflows 2025-02-28 04:03:13 +01:00
Mukhtar Akere
7eb021aac1 - Add ghcr
- Add checks for arr url and token
2025-02-28 03:57:26 +01:00
Mukhtar Akere
7a989ccf2b hotfix v0.4.2 2025-02-28 03:33:11 +01:00
Mukhtar Akere
f04d7ac86e hotfixes 2025-02-28 03:10:14 +01:00
Mukhtar Akere
65fb2d1e7c revamp deployment 2025-02-28 00:54:11 +01:00
Mukhtar Akere
46beac7227 Changelog 0.4.2 2025-02-28 00:38:31 +01:00
Mukhtar Akere
e0e71b0f7e Wraps up v0.4.1
2025-02-24 00:02:28 +01:00
Mukhtar Akere
3b463adf09 Hotfix 2025-02-22 00:37:38 +01:00
Mukhtar Akere
7af0de76cc Add support for same infohashes but different categories 2025-02-22 00:14:13 +01:00
Mukhtar Akere
108da305b3 - Update Readme
- Add funding.yml
- Add Arr Queue cleaner worker
- Rewrote worker
2025-02-19 23:52:53 +01:00
Mukhtar Akere
9a7bff04ef - Fix alldebrid bug
- Minor cleanup
- Speed gains
2025-02-19 01:20:05 +01:00
Mukhtar Akere
325e6c912c finalize beta-0.4.1 2025-02-14 02:42:45 +01:00
Mukhtar Akere
6a24c372f5 Hotfix for Proxy 2025-02-13 17:31:54 +01:00
Mukhtar Akere
32ecc08a72 Merge branch 'feat/add-auth' into beta 2025-02-13 14:25:56 +01:00
Mukhtar Akere
4b2f601df2 hotfix qbittorrent 2025-02-13 14:19:07 +01:00
Mukhtar Akere
bfd2596367 fix mounts; backward compatibility 2025-02-13 05:07:14 +01:00
Mukhtar Akere
6f4f72d781 hotfix auth checks 2025-02-13 02:20:45 +01:00
Mukhtar Akere
14341d30bc More cleanup, more refactoring, more energy, more passion, more footwork 2025-02-13 02:08:18 +01:00
Mukhtar Akere
878f78468f init adding rclone 2025-02-12 22:11:58 +01:00
Mukhtar Akere
c386495d3d Add auth 2025-02-09 23:47:02 +01:00
Mukhtar Akere
1614e29f8f Add Speed to callbacks 2025-02-09 19:17:20 +01:00
Mukhtar Akere
186a24cc4a Fix repair worker 2025-02-07 23:42:09 +01:00
Mukhtar Akere
16c825d5ba feat: restructure code; add size and ext checks (#39)
- Refactor code
- Add file size and extension checkers
- Change repair workflow to use zurg
2025-02-04 02:07:19 -08:00
Mukhtar Akere
8ca3cb32f3 Merge branch 'beta' of github.com:sirrobot01/debrid-blackhole into beta 2025-02-01 04:29:27 +01:00
Mukhtar Akere
c5b975a721 Fix Radarr Movie search 2025-02-01 04:29:01 +01:00
Elias Benbourenane
2fa6737f31 Ability to upload torrent files (#24)
- Upload Torrent file OR magnet URI
2025-01-31 18:41:00 -08:00
Mukhtar Akere
f40cd9ba56 Hotfix: Improve final docker image 2025-02-01 03:37:37 +01:00
Mukhtar Akere
bca96dd858 Hotfix: Dockerfile log file perm 2025-02-01 03:05:42 +01:00
Elias Benbourenane
1b9b7e203e Use toast notifications over JavaScript alerts (#37)
Implement UI toast notifications
2025-01-31 16:31:59 -08:00
Elias Benbourenane
99b4a3152d Torrent list state filtering (#33)
* perf: Switched from DOM-based to state-based in the main render loop logic

This removes the need to make complicated CSS selectors that would slow down the app.

It also improves debuggability and readability.

* feat: Client-side state filtering

* style: Don't wrap the torrent list's header on small screens

* perf: Keep a dictionary of DOM element references
2025-01-31 14:46:44 -08:00
Mukhtar Akere
297715bf6e Add Download Progress tracking; early errors for invalid debrid torrent status (#35) 2025-01-31 14:45:49 -08:00
Mukhtar Akere
64995d0bf3 Add Healthcheck; Search missing immediately (#36) 2025-01-31 14:45:30 -08:00
Elias Benbourenane
92504cc8e0 feat: Remember the last used download options (#30) 2025-01-30 05:23:37 +01:00
Elias Benbourenane
12f89b3047 feat: Selectable torrent list items with a 'delete selected' button (#29) 2025-01-30 05:03:02 +01:00
Elias Benbourenane
5d7ddbd208 fix: Don't immediately download torrents with the magnet link handler (#28) 2025-01-30 05:02:28 +01:00
Elias Benbourenane
84b70464da feat: Keep track of magnet link handler state (#26) 2025-01-30 05:02:08 +01:00
Elias Benbourenane
092a028ad9 Disabled text wrapping on the torrent list table to enhance mobile view (#23) 2025-01-29 03:00:41 +01:00
Elias Benbourenane
530de20276 Handle multiple torrent submissions (#16)
* feat: Handle multiple torrent submissions
2025-01-28 13:12:43 -08:00
Mukhtar Akere
e2eb11056d Remove Healthcheck #22
* Move image to use distroless with PGID and PUID

* remove healthcheck
2025-01-28 13:12:22 -08:00
Mukhtar Akere
1a2504ff6c Move image to use distroless with PGID and PUID (#21) 2025-01-28 12:15:40 -08:00
Jamie Isaksen
07d632309a add healthcheck to monitor the app (#20) 2025-01-28 12:10:22 -08:00
Elias Benbourenane
d9b06fb518 Magnet link handler (#15)
* feat: Magnet link handler registration on the config page
2025-01-28 12:02:33 -08:00
Jamie Isaksen
d58b327957 correct typo (#17) 2025-01-26 19:11:30 +01:00
Mukhtar Akere
bba90cb89a Finalize v0.4.0
2025-01-25 11:40:30 +01:00
Mukhtar Akere
fc5c6e2869 Finalize v0.4.0 2025-01-24 23:33:08 +01:00
Mukhtar Akere
66f4965ec8 Fix arr storage 2025-01-23 03:11:24 +01:00
Mukhtar Akere
dc16f0d8a1 Fix arr storage 2025-01-23 03:06:11 +01:00
Mukhtar Akere
0741ddf999 Fix versioning 2025-01-23 02:31:55 +01:00
Mukhtar Akere
2ae4bd571e Fix Log file permissions 2025-01-23 02:11:37 +01:00
Mukhtar Akere
0b1c1af8b8 Fix Repair checks. Handle false positives 2025-01-23 01:35:28 +01:00
Mukhtar Akere
74a55149fc Features:
- Add file logging, server
- Fix minor repair bug
- Wrap up beta
2025-01-23 00:27:12 +01:00
Mukhtar Akere
cfb0051b04 Fix AllDebrid symlink bug 2025-01-19 08:56:52 +01:00
Mukhtar Akere
a986c4b5d0 Hotfix ui templates 2025-01-18 04:13:56 +01:00
Mukhtar Akere
3841b7751e Changelog v0.4.0 2025-01-18 03:49:05 +01:00
Mukhtar Akere
ea73572557 - Add shining UI
- Revamp deployment process
- Fix Alldebrid file node bug
2025-01-13 20:18:59 +01:00
Mukhtar Akere
7cb41a0e8b Fix getting mount path 2025-01-11 23:10:05 +01:00
Mukhtar Akere
451c17cdf7 Merge branch 'beta' of github.com:sirrobot01/debrid-blackhole into beta 2025-01-11 23:09:00 +01:00
Mukhtar Akere
c39eebea0d [BETA] Changelog 0.3.4 (#14)
- Add repair worker
- Fix AllDebrid bugs with single movies/series
- Fix Torbox bugs
2025-01-11 07:21:49 -08:00
Mukhtar Akere
03c9657945 Add repair worker 2025-01-09 19:44:38 +01:00
Mukhtar Akere
28e5342c66 Add AllDebrid support 2025-01-01 17:12:18 +01:00
Mukhtar Akere
eeb3a31b05 Fix rar files, remove srt 2024-12-27 22:30:36 +01:00
Mukhtar Akere
e9d3e120f3 Hotfix
2024-12-25 00:06:47 +01:00
Mukhtar Akere
104df3c33c Changelog 0.3.2 2024-12-25 00:00:47 +01:00
Kai Gohegan
810c9d705e Update storage.go (#10) 2024-12-24 14:59:33 -08:00
Kai Gohegan
4ff00859a3 Update Dockerfile (#9) 2024-12-24 14:59:08 -08:00
Mukhtar Akere
b77dbcc4f4 Fix magnet conversion
2024-12-19 00:12:29 +01:00
Mukhtar Akere
58c0aafab1 Fix docker.yml 2024-12-18 17:24:40 +01:00
Mukhtar Akere
357da54083 Fix docker.yml 2024-12-18 17:20:59 +01:00
Mukhtar Akere
88a7196eaf Hotfix 2024-12-18 17:01:26 +01:00
Mukhtar Akere
abc86a0460 Changelog 0.3.1 2024-12-18 16:51:00 +01:00
robertRogerPresident
dd0b7efdff fix toboxInfo struct type (#6)
Co-authored-by: Tangui <tanguidaoudal@yahoo.fr>
2024-12-16 11:56:09 -08:00
Mukhtar Akere
7359f280b0 Make sure torrents get deleted on failure 2024-12-12 17:38:53 +01:00
Mukhtar Akere
4eb3539347 Fix docker.yml 2024-11-30 16:06:28 +01:00
Mukhtar Akere
9fb1118475 Fix docker.yml 2024-11-30 16:03:33 +01:00
Mukhtar Akere
07491b43fe Fix docker.yml 2024-11-30 16:01:07 +01:00
Mukhtar Akere
8f7c9a19c5 Fix goreleaser
2024-11-30 15:50:42 +01:00
Mukhtar Akere
a51364d150 Changelog 0.3.0 2024-11-30 15:46:58 +01:00
Mukhtar Akere
df2aa4e361 0.2.7:
- Add support for multiple debrid providers
- Add Torbox support
- Add support for configurable debrid cache checks
- Add support for configurable debrid download uncached torrents
2024-11-25 16:48:23 +01:00
Mukhtar Akere
b51cb954f8 Merge branch 'beta' 2024-11-25 16:39:47 +01:00
Mukhtar Akere
8bdb2e3547 Hotfix & Updated Readme 2024-11-23 23:41:49 +01:00
Mukhtar Akere
2c9a076cd2 Hotfix 2024-11-23 21:10:42 +01:00
Mukhtar Akere
d2a77620bc Features:
- Add Torbox (Tested)
- Fix RD cache check
- Minor fixes
2024-11-23 19:52:15 +01:00
Mukhtar Akere
4b8f1ccfb6 Changelog 0.2.6 2024-10-08 15:43:38 +01:00
Mukhtar Akere
f118c5b794 Changelog 0.2.5 2024-10-01 11:17:31 +01:00
198 changed files with 32315 additions and 2969 deletions

52
.air.toml Normal file

@@ -0,0 +1,52 @@
root = "."
testdata_dir = "testdata"
tmp_dir = "tmp"
[build]
args_bin = ["--config", "data/"]
bin = "./tmp/main"
cmd = "bash -c 'go build -ldflags \"-X github.com/sirrobot01/decypharr/pkg/version.Version=0.0.0 -X github.com/sirrobot01/decypharr/pkg/version.Channel=dev\" -o ./tmp/main .'"
delay = 1000
exclude_dir = ["tmp", "vendor", "testdata", "data", "logs", "docs", "dist", "node_modules", ".ven"]
exclude_file = []
exclude_regex = ["_test.go"]
exclude_unchanged = false
follow_symlink = false
full_bin = ""
include_dir = []
include_ext = ["go", "tpl", "tmpl", "html", ".json", ".js", ".css"]
include_file = []
kill_delay = "1s"
log = "build-errors.log"
poll = false
poll_interval = 0
post_cmd = []
pre_cmd = []
rerun = false
rerun_delay = 500
send_interrupt = true
stop_on_error = true
[color]
app = ""
build = "yellow"
main = "magenta"
runner = "green"
watcher = "cyan"
[log]
main_only = false
silent = false
time = false
[misc]
clean_on_exit = false
[proxy]
app_port = 0
enabled = false
proxy_port = 0
[screen]
clear_on_rebuild = false
keep_scroll = true

.dockerignore

@@ -5,4 +5,29 @@ docker-compose.yml
.DS_Store
**/.idea/
*.magnet
**.torrent
**.torrent
torrents.json
**/dist/
*.json
.ven/**
docs/**
# Don't copy node modules
node_modules/
# Don't copy development files
.git/
.gitignore
*.md
.env*
*.log
# Build artifacts
decypharr
healthcheck
*.exe
.venv/
data/**
.stignore
.stfolder/**

2
.github/FUNDING.yml vendored Normal file

@@ -0,0 +1,2 @@
github: sirrobot01
buy_me_a_coffee: sirrobot01

76
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file

@@ -0,0 +1,76 @@
name: Bug Report
description: 'Report a new bug'
labels: ['Type: Bug', 'Status: Needs Triage']
body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an open or closed issue already exists for the bug you encountered. If a bug exists and is closed note that it may only be fixed in an unstable branch.
options:
- label: I have searched the existing open and closed issues
required: true
- type: textarea
attributes:
label: Current Behavior
description: A concise description of what you're experiencing.
validations:
required: true
- type: textarea
attributes:
label: Expected Behavior
description: A concise description of what you expected to happen.
validations:
required: true
- type: textarea
attributes:
label: Steps To Reproduce
description: Steps to reproduce the behavior.
placeholder: |
1. In this environment...
2. With this config...
3. Run '...'
4. See error...
validations:
required: false
- type: textarea
attributes:
label: Environment
description: |
examples:
- **OS**: Ubuntu 20.04
- **Version**: v1.0.0
- **Docker Install**: Yes
- **Browser**: Firefox 90 (If UI related)
value: |
- OS:
- Version:
- Docker Install:
- Browser:
render: markdown
validations:
required: true
- type: dropdown
attributes:
label: What branch are you running?
options:
- Main/Latest
- Beta
- Experimental
validations:
required: true
- type: textarea
attributes:
label: Trace Logs? **Not Optional**
description: |
Trace Logs
- are **required** for bug reports
- are not optional
validations:
required: true
- type: checkboxes
attributes:
label: Trace Logs have been provided as applicable
description: Trace logs are **generally required** and are not optional for all bug reports and contain `trace`. Info logs are invalid for bug reports and do not contain `debug` nor `trace`
options:
- label: I have read and followed the steps in the documentation link and provided the required trace logs - the logs contain `trace` - that are relevant and show this issue.
required: true

38
.github/ISSUE_TEMPLATE/feature_request.yml vendored Normal file

@@ -0,0 +1,38 @@
name: Feature Request
description: 'Suggest an idea for Decypharr'
labels: ['Type: Feature Request', 'Status: Needs Triage']
body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an open or closed issue already exists for the feature you are requesting. If a request exists and is closed note that it may only be fixed in an unstable branch.
options:
- label: I have searched the existing open and closed issues
required: true
- type: textarea
attributes:
label: Is your feature request related to a problem? Please describe
description: A clear and concise description of what the problem is.
validations:
required: true
- type: textarea
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: true
- type: textarea
attributes:
label: Anything else?
description: |
Links? References? Mockups? Anything that will give us more context about the feature you are requesting!
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
validations:
required: true

85
.github/workflows/beta-docker.yml vendored Normal file

@@ -0,0 +1,85 @@
name: Beta Docker Build
on:
push:
branches:
- beta
permissions:
contents: read
packages: write
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Calculate beta version
id: calculate_version
run: |
LATEST_TAG=$(git tag | grep -v 'beta' | sort -V | tail -n1)
echo "Found latest tag: ${LATEST_TAG}"
IFS='.' read -r -a VERSION_PARTS <<< "$LATEST_TAG"
MAJOR="${VERSION_PARTS[0]}"
MINOR="${VERSION_PARTS[1]}"
PATCH="${VERSION_PARTS[2]}"
NEW_PATCH=$((PATCH + 1))
BETA_VERSION="${MAJOR}.${MINOR}.${NEW_PATCH}"
echo "Calculated beta version: ${BETA_VERSION}"
echo "beta_version=${BETA_VERSION}" >> $GITHUB_ENV
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Cache Docker layers
uses: actions/cache@v3
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Login to GitHub Container Registry
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push beta Docker image
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64,linux/arm/v7
push: true
tags: |
cy01/blackhole:beta
ghcr.io/${{ github.repository_owner }}/decypharr:beta
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
build-args: |
VERSION=${{ env.beta_version }}
CHANNEL=beta
- name: Move cache
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache

28
.github/workflows/deploy-docs.yml vendored Normal file

@@ -0,0 +1,28 @@
name: ci
on:
push:
branches:
- main
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Configure Git Credentials
run: |
git config user.name github-actions[bot]
git config user.email 41898282+github-actions[bot]@users.noreply.github.com
- uses: actions/setup-python@v5
with:
python-version: 3.x
- run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
- uses: actions/cache@v4
with:
key: mkdocs-material-${{ env.cache_id }}
path: .cache
restore-keys: |
mkdocs-material-
- run: cd docs && pip install -r requirements.txt
- run: cd docs && mkdocs gh-deploy --force

33
.github/workflows/goreleaser.yml vendored Normal file

@@ -0,0 +1,33 @@
name: GoReleaser
on:
push:
tags:
- '*'
permissions:
contents: write
jobs:
goreleaser:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.24'
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v5
with:
distribution: goreleaser
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
RELEASE_CHANNEL: stable

77
.github/workflows/release-docker.yml vendored Normal file

@@ -0,0 +1,77 @@
name: Release Docker Build
on:
push:
tags:
- '*'
permissions:
contents: read
packages: write
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Get tag name
id: get_tag
run: |
TAG_NAME=${GITHUB_REF#refs/tags/}
echo "tag_name=${TAG_NAME}" >> $GITHUB_ENV
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Cache Docker layers
uses: actions/cache@v3
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Login to GitHub Container Registry
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push release Docker image
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64,linux/arm/v7
push: true
tags: |
cy01/blackhole:latest
cy01/blackhole:${{ env.tag_name }}
ghcr.io/${{ github.repository_owner }}/decypharr:latest
ghcr.io/${{ github.repository_owner }}/decypharr:${{ env.tag_name }}
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
build-args: |
VERSION=${{ env.tag_name }}
CHANNEL=stable
- name: Move cache
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache

13
.gitignore vendored

@@ -1,11 +1,22 @@
data/
config.json
docker-compose.yml
.idea/
.DS_Store
*.torrent
!testdata/*.torrent
*.magnet
!testdata/*.magnet
*.db
*.log
*.log.*
dist/
tmp/**
torrents.json
logs/**
auth.json
.ven/
.env
node_modules/
.venv/
.stignore
.stfolder/**

.goreleaser.yml

@@ -1,8 +1,7 @@
version: 2
version: 1
before:
hooks:
# You may remove this if you don't use go modules.
- go mod tidy
builds:
@@ -16,19 +15,22 @@ builds:
- amd64
- arm
- arm64
ldflags:
- -s -w
- -X github.com/sirrobot01/decypharr/pkg/version.Version={{.Version}}
- -X github.com/sirrobot01/decypharr/pkg/version.Channel={{.Env.RELEASE_CHANNEL}}
archives:
- format: tar.gz
# this name template makes the OS and Arch compatible with the results of `uname`.
name_template: >-
{{ .ProjectName }}_
decypharr_
{{- title .Os }}_
{{- if eq .Arch "amd64" }}x86_64
{{- else if eq .Arch "386" }}i386
{{- else }}{{ .Arch }}{{ end }}
{{- if .Arm }}v{{ .Arm }}{{ end }}
# use zip for windows archives
format_overrides:
- goos: windows
format: zip

CHANGELOG.md

@@ -1,70 +0,0 @@
#### 0.1.0
- Initial Release
- Added Real Debrid Support
- Added Arrs Support
- Added Proxy Support
- Added Basic Authentication for Proxy
- Added Rate Limiting for Debrid Providers
#### 0.1.1
- Added support for "No Blackhole" for Arrs
- Added support for "Cached Only" for Proxy
- Bug Fixes
#### 0.1.2
- Bug fixes
- Code cleanup
- Get available hashes at once
#### 0.1.3
- Searching for infohashes in the xml description/summary/comments
- Added local cache support
- Added max cache size
- Rewrite blackhole.go
- Bug fixes
- Fixed indexer getting disabled
- Fixed blackhole not working
#### 0.1.4
- Rewrote Report log
- Fix YTS, 1337x not grabbing infohash
- Fix Torrent symlink bug
#### 0.2.0-beta
- Switch to QbitTorrent API instead of Blackhole
- Rewrote the whole codebase
#### 0.2.0
- Implement 0.2.0-beta changes
- Removed Blackhole
- Added QbitTorrent API
- Cleaned up the code
#### 0.2.1
- Fix Uncached torrents not being downloaded/downloaded
- Minor bug fixed
- Fix Race condition in the cache and file system
#### 0.2.2
- Fix name mismatch in the cache
- Fix directory mapping with mounts
- Add Support for refreshing the *arrs
#### 0.2.3
- Delete uncached items from RD
- Fail if the torrent is not cached(optional)
- Fix cache not being updated
#### 0.2.4
- Add file download support(Sequential Download)
- Fix http handler error
- Fix *arrs map failing concurrently
- Fix cache not being updated

Dockerfile

@@ -1,27 +1,77 @@
FROM --platform=$BUILDPLATFORM golang:1.22 as builder
# Stage 1: Build binaries
FROM --platform=$BUILDPLATFORM golang:1.24-alpine as builder
ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG VERSION=0.0.0
ARG CHANNEL=dev
# Set destination for COPY
WORKDIR /app
# Download Go modules
COPY go.mod go.sum ./
RUN go mod download
RUN --mount=type=cache,target=/go/pkg/mod \
go mod download -x
# Copy the source code. Note the slash at the end, as explained in
# https://docs.docker.com/reference/dockerfile/#copy
ADD . .
COPY . .
# Build
RUN CGO_ENABLED=0 GOOS=$(echo $TARGETPLATFORM | cut -d '/' -f1) GOARCH=$(echo $TARGETPLATFORM | cut -d '/' -f2) go build -o /blackhole
# Build main binary
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH \
go build -trimpath \
-ldflags="-w -s -X github.com/sirrobot01/decypharr/pkg/version.Version=${VERSION} -X github.com/sirrobot01/decypharr/pkg/version.Channel=${CHANNEL}" \
-o /decypharr
FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /blackhole /blackhole
# Build healthcheck (optimized)
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH \
go build -trimpath -ldflags="-w -s" \
-o /healthcheck cmd/healthcheck/main.go
EXPOSE 8181
# Stage 2: Final image
FROM alpine:latest
# Run
CMD ["/blackhole", "--config", "/app/config.json"]
ARG VERSION=0.0.0
ARG CHANNEL=dev
LABEL version = "${VERSION}-${CHANNEL}"
LABEL org.opencontainers.image.source = "https://github.com/sirrobot01/decypharr"
LABEL org.opencontainers.image.title = "decypharr"
LABEL org.opencontainers.image.authors = "sirrobot01"
LABEL org.opencontainers.image.documentation = "https://github.com/sirrobot01/decypharr/blob/main/README.md"
# Install dependencies including rclone
RUN apk add --no-cache fuse3 ca-certificates su-exec shadow curl unzip && \
echo "user_allow_other" >> /etc/fuse.conf && \
case "$(uname -m)" in \
x86_64) ARCH=amd64 ;; \
aarch64) ARCH=arm64 ;; \
armv7l) ARCH=arm ;; \
*) echo "Unsupported architecture: $(uname -m)" && exit 1 ;; \
esac && \
curl -O "https://downloads.rclone.org/rclone-current-linux-${ARCH}.zip" && \
unzip "rclone-current-linux-${ARCH}.zip" && \
cp rclone-*/rclone /usr/local/bin/ && \
chmod +x /usr/local/bin/rclone && \
rm -rf rclone-* && \
apk del curl unzip
# Copy binaries and entrypoint
COPY --from=builder /decypharr /usr/bin/decypharr
COPY --from=builder /healthcheck /usr/bin/healthcheck
COPY scripts/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Set environment variables
ENV PUID=1000
ENV PGID=1000
ENV LOG_PATH=/app/logs
EXPOSE 8282
VOLUME ["/app"]
HEALTHCHECK --interval=10s --retries=10 CMD ["/usr/bin/healthcheck", "--config", "/app", "--basic"]
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/bin/decypharr", "--config", "/app"]

21
LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2025 Mukhtar Akere
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

170
README.md

@@ -1,142 +1,72 @@
### GoBlackHole(with Debrid Proxy Support)
# Decypharr
This is a Golang implementation go Torrent QbitTorrent with a **Real Debrid Proxy Support**.
![ui](docs/docs/images/main.png)
#### Uses
- Mock Qbittorent API that supports the Arrs(Sonarr, Radarr, etc)
- Proxy support for the Arrs
**Decypharr** is an implementation of QbitTorrent with **Multiple Debrid service support**, written in Go.
The proxy is useful in filtering out un-cached Real Debrid torrents
## What is Decypharr?
### Changelog
Decypharr combines the power of QBittorrent with popular Debrid services to enhance your media management. It provides a familiar interface for Sonarr, Radarr, and other \*Arr applications.
- View the [CHANGELOG.md](CHANGELOG.md) for the latest changes
## Features
- Mock Qbittorrent API that supports the Arrs (Sonarr, Radarr, Lidarr, etc)
- Full-fledged UI for managing torrents
- Multiple Debrid providers support
- WebDAV server support for each debrid provider
- Optional mounting of WebDAV to your system (using [Rclone](https://rclone.org/))
- Repair Worker for missing files
## Supported Debrid Providers
- [Real Debrid](https://real-debrid.com)
- [Torbox](https://torbox.app)
- [Debrid Link](https://debrid-link.com)
- [All Debrid](https://alldebrid.com)
## Quick Start
### Docker (Recommended)
#### Installation
##### Docker Compose
```yaml
version: '3.7'
services:
blackhole:
image: cy01/blackhole:latest # or cy01/blackhole:beta
container_name: blackhole
decypharr:
image: cy01/blackhole:latest
container_name: decypharr
ports:
- "8282:8282" # qBittorrent
- "8181:8181" # Proxy
user: "1000:1000"
- "8282:8282"
volumes:
- ./logs:/app/logs
- ~/plex/media:/media
- ~/plex/media/symlinks/:/media/symlinks/
- ~/plex/configs/blackhole/config.json:/app/config.json # Config file, see below
environment:
- PUID=1000
- PGID=1000
- UMASK=002
- QBIT_PORT=8282 # qBittorrent Port. This is optional. You can set this in the config file
- PORT=8181 # Proxy Port. This is optional. You can set this in the config file
- /mnt/:/mnt:rshared
- ./configs/:/app # config.json must be in this directory
restart: unless-stopped
devices:
- /dev/fuse:/dev/fuse:rwm
cap_add:
- SYS_ADMIN
security_opt:
- apparmor:unconfined
```
##### Binary
Download the binary from the releases page and run it with the config file.
## Documentation
```bash
./blackhole --config /path/to/config.json
```
For complete documentation, please visit our [Documentation](https://sirrobot01.github.io/decypharr/).
#### Config
```json
{
"debrid": {
"name": "realdebrid",
"host": "https://api.real-debrid.com/rest/1.0",
"api_key": "realdebrid_api_key",
"folder": "data/realdebrid/torrents/",
"rate_limit": "250/minute"
},
"proxy": {
"enabled": true,
"port": "8181",
"debug": false,
"username": "username",
"password": "password",
"cached_only": true
},
"max_cache_size": 1000,
"qbittorrent": {
"port": "8282",
"username": "admin", // deprecated
"password": "admin", // deprecated
"download_folder": "/media/symlinks/",
"categories": ["sonarr", "radarr"],
"refresh_interval": 5 // in seconds
}
}
```
The documentation includes:
#### Config Notes
##### Debrid Config
- This config key is important as it's used for both Blackhole and Proxy
- Detailed installation instructions
- Configuration guide
- Usage with Sonarr/Radarr
- WebDAV setup
- Repair Worker information
- ...and more!
##### Proxy Config
- The `enabled` key is used to enable the proxy
- The `port` key is the port the proxy will listen on
- The `debug` key is used to enable debug logs
- The `username` and `password` keys are used for basic authentication
- The `cached_only` means only cached torrents will be returned
## Basic Configuration
You can configure Decypharr through the Web UI or by editing the `config.json` file directly.
##### Qbittorrent Config
- The `port` key is the port the qBittorrent will listen on
- The `download_folder` is the folder where the torrents will be downloaded. e.g `/media/symlinks/`
- The `categories` key is used to filter out torrents based on the category. e.g `sonarr`, `radarr`
## Contributing
### Proxy
Contributions are welcome! Please feel free to submit a Pull Request.
The proxy is useful in filtering out un-cached Real Debrid torrents.
The proxy is a simple HTTP proxy that requires basic authentication. The proxy can be enabled by setting the `proxy.enabled` to `true` in the config file.
The proxy listens on the port `8181` by default. The username and password can be set in the config file.
Setting Up Proxy in Arr
- Sonarr/Radarr
- Settings -> General -> Use Proxy
- Hostname: `localhost` # or the IP of the server
- Port: `8181` # or the port set in the config file
- Username: `username` # or the username set in the config file
- Password: `password` # or the password set in the config file
- Bypass Proxy for Local Addresses -> `No`
### Qbittorrent
The qBittorrent is a mock qBittorrent API that supports the Arrs(Sonarr, Radarr, etc).
Setting Up Qbittorrent in Arr
- Sonarr/Radarr
- Settings -> Download Client -> Add Client -> qBittorrent
- Host: `localhost` # or the IP of the server
- Port: `8282` # or the port set in the config file/ docker-compose env
- Username: `http://sonarr:8989` # Your arr host with http/https
- Password: `sonarr_token` # Your arr token
- Category: e.g `sonarr`, `radarr`
- Use SSL -> `No`
- Sequential Download -> `No`|`Yes` (If you want to download the torrents locally instead of symlink)
- Test
- Save
### TODO
- [ ] A proper name!!!!
- [ ] Debrid
- [ ] Add more Debrid Providers
- [ ] Proxy
- [ ] Add more Proxy features
- [ ] Qbittorrent
- [ ] Add more Qbittorrent features
- [ ] Persist torrents on restart/server crash
- [ ] Add tests
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.

cmd/decypharr/main.go

@@ -0,0 +1,200 @@
package decypharr
import (
"context"
"fmt"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/pkg/qbit"
"github.com/sirrobot01/decypharr/pkg/server"
"github.com/sirrobot01/decypharr/pkg/version"
"github.com/sirrobot01/decypharr/pkg/web"
"github.com/sirrobot01/decypharr/pkg/webdav"
"github.com/sirrobot01/decypharr/pkg/wire"
"net/http"
"os"
"runtime"
"runtime/debug"
"strconv"
"sync"
)
func Start(ctx context.Context) error {
if umaskStr := os.Getenv("UMASK"); umaskStr != "" {
umask, err := strconv.ParseInt(umaskStr, 8, 32)
if err != nil {
return fmt.Errorf("invalid UMASK value: %s", umaskStr)
}
SetUmask(int(umask))
}
restartCh := make(chan struct{}, 1)
web.SetRestartFunc(func() {
select {
case restartCh <- struct{}{}:
default:
}
})
svcCtx, cancelSvc := context.WithCancel(ctx)
defer cancelSvc()
// Create the logger path if it doesn't exist
for {
cfg := config.Get()
_log := logger.Default()
// ascii banner
fmt.Printf(`
+-------------------------------------------------------+
| |
| ╔╦╗╔═╗╔═╗╦ ╦╔═╗╦ ╦╔═╗╦═╗╦═╗ |
| ║║║╣ ║ └┬┘╠═╝╠═╣╠═╣╠╦╝╠╦╝ (%s) |
| ═╩╝╚═╝╚═╝ ┴ ╩ ╩ ╩╩ ╩╩╚═╩╚═ |
| |
+-------------------------------------------------------+
| Log Level: %s |
+-------------------------------------------------------+
`, version.GetInfo(), cfg.LogLevel)
// Initialize services
qb := qbit.New()
wd := webdav.New()
ui := web.New().Routes()
webdavRoutes := wd.Routes()
qbitRoutes := qb.Routes()
// Register routes
handlers := map[string]http.Handler{
"/": ui,
"/api/v2": qbitRoutes,
"/webdav": webdavRoutes,
}
srv := server.New(handlers)
reset := func() {
// Reset the store and services
qb.Reset()
wire.Reset()
// refresh GC
runtime.GC()
}
done := make(chan struct{})
go func(ctx context.Context) {
if err := startServices(ctx, cancelSvc, wd, srv); err != nil {
_log.Error().Err(err).Msg("Error starting services")
cancelSvc()
}
close(done)
}(svcCtx)
select {
case <-ctx.Done():
// graceful shutdown
cancelSvc() // propagate to services
<-done // wait for them to finish
_log.Info().Msg("Decypharr has been stopped gracefully.")
reset() // reset store and services
return nil
case <-restartCh:
cancelSvc() // tell existing services to shut down
_log.Info().Msg("Restarting Decypharr...")
<-done // wait for them to finish
_log.Info().Msg("Decypharr has been restarted.")
reset() // reset store and services
// rebuild svcCtx off the original parent
svcCtx, cancelSvc = context.WithCancel(ctx)
}
}
}
func startServices(ctx context.Context, cancelSvc context.CancelFunc, wd *webdav.WebDav, srv *server.Server) error {
var wg sync.WaitGroup
errChan := make(chan error)
_log := logger.Default()
safeGo := func(f func() error) {
wg.Add(1)
go func() {
defer wg.Done()
defer func() {
if r := recover(); r != nil {
stack := debug.Stack()
_log.Error().
Interface("panic", r).
Str("stack", string(stack)).
Msg("Recovered from panic in goroutine")
// Send error to channel so the main goroutine is aware
errChan <- fmt.Errorf("panic: %v", r)
}
}()
if err := f(); err != nil {
errChan <- err
}
}()
}
safeGo(func() error {
return wd.Start(ctx)
})
safeGo(func() error {
return srv.Start(ctx)
})
// Start rclone RC server if enabled
safeGo(func() error {
rcManager := wire.Get().RcloneManager()
if rcManager == nil {
return nil
}
return rcManager.Start(ctx)
})
if cfg := config.Get(); cfg.Repair.Enabled {
safeGo(func() error {
repair := wire.Get().Repair()
if repair != nil {
if err := repair.Start(ctx); err != nil {
_log.Error().Err(err).Msg("repair failed")
}
}
return nil
})
}
safeGo(func() error {
wire.Get().StartWorkers(ctx)
return nil
})
go func() {
wg.Wait()
close(errChan)
}()
go func() {
for err := range errChan {
if err != nil {
_log.Error().Err(err).Msg("Service error detected")
// If the error is critical, return it to stop the main loop
if ctx.Err() == nil {
_log.Error().Msg("Stopping services due to error")
cancelSvc() // Cancel the service context to stop all services
}
}
}
}()
// Wait for context cancellation
<-ctx.Done()
_log.Debug().Msg("Services context cancelled")
return nil
}


@@ -0,0 +1,9 @@
//go:build !windows
package decypharr
import "syscall"
func SetUmask(umask int) {
syscall.Umask(umask)
}


@@ -0,0 +1,8 @@
//go:build windows
// +build windows
package decypharr
func SetUmask(umask int) {
// No-op on Windows
}

cmd/healthcheck/main.go

@@ -0,0 +1,175 @@
package main
import (
"cmp"
"context"
"encoding/json"
"flag"
"fmt"
"github.com/sirrobot01/decypharr/internal/config"
"net/http"
"os"
"strings"
"time"
)
// HealthStatus represents the status of various components
type HealthStatus struct {
QbitAPI bool `json:"qbit_api"`
WebUI bool `json:"web_ui"`
WebDAVService bool `json:"webdav_service"`
OverallStatus bool `json:"overall_status"`
}
func main() {
var (
configPath string
isBasicCheck bool
debug bool
)
flag.StringVar(&configPath, "config", "/data", "path to the data folder")
flag.BoolVar(&isBasicCheck, "basic", false, "perform basic health check without WebDAV")
flag.BoolVar(&debug, "debug", false, "enable debug mode for detailed output")
flag.Parse()
config.SetConfigPath(configPath)
cfg := config.Get()
// Get port from environment variable or use default
port := getEnvOrDefault("QBIT_PORT", cfg.Port)
webdavPath := ""
for _, debrid := range cfg.Debrids {
if debrid.UseWebDav {
webdavPath = debrid.Name
}
}
// Initialize status
status := HealthStatus{
QbitAPI: false,
WebUI: false,
WebDAVService: false,
OverallStatus: false,
}
// Create a context with timeout for all HTTP requests
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
baseUrl := cmp.Or(cfg.URLBase, "/")
if !strings.HasPrefix(baseUrl, "/") {
baseUrl = "/" + baseUrl
}
// Check qBittorrent API
if checkQbitAPI(ctx, baseUrl, port) {
status.QbitAPI = true
}
// Check Web UI
if checkWebUI(ctx, baseUrl, port) {
status.WebUI = true
}
if isBasicCheck {
status.WebDAVService = checkBaseWebdav(ctx, baseUrl, port)
} else {
// If not a basic check, check WebDAV with debrid path
if webdavPath != "" {
status.WebDAVService = checkDebridWebDAV(ctx, baseUrl, port, webdavPath)
} else {
// If no WebDAV path is set, consider it healthy
status.WebDAVService = true
}
}
// Determine overall status
// Consider the application healthy if core services are running
status.OverallStatus = status.QbitAPI && status.WebUI
if webdavPath != "" {
status.OverallStatus = status.OverallStatus && status.WebDAVService
}
// Optional: output health status as JSON for logging
if debug {
statusJSON, _ := json.MarshalIndent(status, "", " ")
fmt.Println(string(statusJSON))
}
// Exit with appropriate code
if status.OverallStatus {
os.Exit(0)
} else {
os.Exit(1)
}
}
func getEnvOrDefault(key, defaultValue string) string {
if value, exists := os.LookupEnv(key); exists {
return value
}
return defaultValue
}
func checkQbitAPI(ctx context.Context, baseUrl, port string) bool {
url := fmt.Sprintf("http://localhost:%s%sapi/v2/app/version", port, baseUrl)
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return false
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return false
}
defer resp.Body.Close()
return resp.StatusCode == http.StatusOK
}
func checkWebUI(ctx context.Context, baseUrl, port string) bool {
req, err := http.NewRequestWithContext(ctx, "GET", fmt.Sprintf("http://localhost:%s%s", port, baseUrl), nil)
if err != nil {
return false
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return false
}
defer resp.Body.Close()
return resp.StatusCode == http.StatusOK
}
func checkBaseWebdav(ctx context.Context, baseUrl, port string) bool {
url := fmt.Sprintf("http://localhost:%s%swebdav/", port, baseUrl)
req, err := http.NewRequestWithContext(ctx, "PROPFIND", url, nil)
if err != nil {
return false
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return false
}
defer resp.Body.Close()
return resp.StatusCode == http.StatusMultiStatus ||
resp.StatusCode == http.StatusOK
}
func checkDebridWebDAV(ctx context.Context, baseUrl, port, path string) bool {
url := fmt.Sprintf("http://localhost:%s%swebdav/%s", port, baseUrl, path)
req, err := http.NewRequestWithContext(ctx, "PROPFIND", url, nil)
if err != nil {
return false
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return false
}
defer resp.Body.Close()
return resp.StatusCode == http.StatusMultiStatus ||
resp.StatusCode == http.StatusOK
}


@@ -1,40 +0,0 @@
package cmd
import (
"cmp"
"goBlack/common"
"goBlack/pkg/debrid"
"goBlack/pkg/proxy"
"goBlack/pkg/qbit"
"sync"
)
func Start(config *common.Config) {
maxCacheSize := cmp.Or(config.MaxCacheSize, 1000)
cache := common.NewCache(maxCacheSize)
deb := debrid.NewDebrid(config.Debrid, cache)
var wg sync.WaitGroup
if config.Proxy.Enabled {
p := proxy.NewProxy(*config, deb, cache)
wg.Add(1)
go func() {
defer wg.Done()
p.Start()
}()
}
if config.QBitTorrent.Port != "" {
qb := qbit.NewQBit(config, deb, cache)
wg.Add(1)
go func() {
defer wg.Done()
qb.Start()
}()
}
// Wait indefinitely
wg.Wait()
}


@@ -1,88 +0,0 @@
package common
import (
"sync"
)
type Cache struct {
data map[string]struct{}
order []string
maxItems int
mu sync.RWMutex
}
func NewCache(maxItems int) *Cache {
if maxItems <= 0 {
maxItems = 1000
}
return &Cache{
data: make(map[string]struct{}, maxItems),
order: make([]string, 0, maxItems),
maxItems: maxItems,
}
}
func (c *Cache) Add(value string) {
c.mu.Lock()
defer c.mu.Unlock()
if _, exists := c.data[value]; !exists {
if len(c.order) >= c.maxItems {
delete(c.data, c.order[0])
c.order = c.order[1:]
}
c.data[value] = struct{}{}
c.order = append(c.order, value)
}
}
func (c *Cache) AddMultiple(values map[string]bool) {
c.mu.Lock()
defer c.mu.Unlock()
for value := range values {
if _, exists := c.data[value]; !exists {
if len(c.order) >= c.maxItems {
delete(c.data, c.order[0])
c.order = c.order[1:]
}
c.data[value] = struct{}{}
c.order = append(c.order, value)
}
}
}
func (c *Cache) Get(index int) (string, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
if index < 0 || index >= len(c.order) {
return "", false
}
return c.order[index], true
}
func (c *Cache) GetMultiple(values []string) map[string]bool {
c.mu.RLock()
defer c.mu.RUnlock()
result := make(map[string]bool, len(values))
for _, value := range values {
if _, exists := c.data[value]; exists {
result[value] = true
}
}
return result
}
func (c *Cache) Exists(value string) bool {
c.mu.RLock()
defer c.mu.RUnlock()
_, exists := c.data[value]
return exists
}
func (c *Cache) Len() int {
c.mu.RLock()
defer c.mu.RUnlock()
return len(c.order)
}


@@ -1,69 +0,0 @@
package common
import (
"encoding/json"
"log"
"os"
)
type DebridConfig struct {
Name string `json:"name"`
Host string `json:"host"`
APIKey string `json:"api_key"`
Folder string `json:"folder"`
DownloadUncached bool `json:"download_uncached"`
RateLimit string `json:"rate_limit"` // 200/minute or 10/second
}
type ProxyConfig struct {
Port string `json:"port"`
Enabled bool `json:"enabled"`
Debug bool `json:"debug"`
Username string `json:"username"`
Password string `json:"password"`
CachedOnly *bool `json:"cached_only"`
}
type QBitTorrentConfig struct {
Username string `json:"username"`
Password string `json:"password"`
Port string `json:"port"`
Debug bool `json:"debug"`
DownloadFolder string `json:"download_folder"`
Categories []string `json:"categories"`
RefreshInterval int `json:"refresh_interval"`
}
type Config struct {
Debrid DebridConfig `json:"debrid"`
Proxy ProxyConfig `json:"proxy"`
MaxCacheSize int `json:"max_cache_size"`
QBitTorrent QBitTorrentConfig `json:"qbittorrent"`
}
func LoadConfig(path string) (*Config, error) {
// Load the config file
file, err := os.Open(path)
if err != nil {
return nil, err
}
defer func(file *os.File) {
err := file.Close()
if err != nil {
log.Fatal(err)
}
}(file)
decoder := json.NewDecoder(file)
config := &Config{}
err = decoder.Decode(config)
if err != nil {
return nil, err
}
if config.Proxy.CachedOnly == nil {
config.Proxy.CachedOnly = new(bool)
*config.Proxy.CachedOnly = true
}
return config, nil
}


@@ -1,59 +0,0 @@
package common
import (
"path/filepath"
"regexp"
"strings"
)
var (
VIDEOMATCH = "(?i)(\\.)(YUV|WMV|WEBM|VOB|VIV|SVI|ROQ|RMVB|RM|OGV|OGG|NSV|MXF|MPG|MPEG|M2V|MP2|MPE|MPV|MP4|M4P|M4V|MOV|QT|MNG|MKV|FLV|DRC|AVI|ASF|AMV|MKA|F4V|3GP|3G2|DIVX|X264|X265)$"
MUSICMATCH = "(?i)(\\.)(?:MP3|WAV|FLAC|AAC|OGG|WMA|AIFF|ALAC|M4A|APE|AC3|DTS|M4P|MID|MIDI|MKA|MP2|MPA|RA|VOC|WV|AMR)$"
SUBMATCH = "(?i)(\\.)(SRT|SUB|SBV|ASS|VTT|TTML|DFXP|STL|SCC|CAP|SMI|TTXT|TDS|USF|JSS|SSA|PSB|RT|LRC|SSB)$"
SAMPLEMATCH = `(?i)(^|[\\/]|[._-])(sample|trailer|thumb)s?([._-]|$)`
)
func RegexMatch(regex string, value string) bool {
re := regexp.MustCompile(regex)
return re.MatchString(value)
}
func RemoveInvalidChars(value string) string {
return strings.Map(func(r rune) rune {
if r == filepath.Separator || r == ':' {
return r
}
if filepath.IsAbs(string(r)) {
return r
}
if strings.ContainsRune(filepath.VolumeName("C:"+string(r)), r) {
return r
}
if r < 32 || strings.ContainsRune(`<>:"/\|?*`, r) {
return -1
}
return r
}, value)
}
func RemoveExtension(value string) string {
re := regexp.MustCompile(VIDEOMATCH + "|" + SUBMATCH + "|" + SAMPLEMATCH + "|" + MUSICMATCH)
// Find the last index of the matched extension
loc := re.FindStringIndex(value)
if loc != nil {
return value[:loc[0]]
} else {
return value
}
}
func RegexFind(regex string, value string) string {
re := regexp.MustCompile(regex)
match := re.FindStringSubmatch(value)
if len(match) > 0 {
return match[0]
} else {
return ""
}
}


@@ -1,130 +0,0 @@
package common
import (
"crypto/tls"
"fmt"
"golang.org/x/time/rate"
"io"
"log"
"net/http"
"regexp"
"strconv"
"time"
)
type RLHTTPClient struct {
client *http.Client
Ratelimiter *rate.Limiter
Headers map[string]string
}
func (c *RLHTTPClient) Doer(req *http.Request) (*http.Response, error) {
if c.Ratelimiter != nil {
err := c.Ratelimiter.Wait(req.Context())
if err != nil {
return nil, err
}
}
resp, err := c.client.Do(req)
if err != nil {
return nil, err
}
return resp, nil
}
func (c *RLHTTPClient) Do(req *http.Request) (*http.Response, error) {
var resp *http.Response
var err error
backoff := time.Millisecond * 500
for i := 0; i < 3; i++ {
resp, err = c.Doer(req)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusTooManyRequests {
return resp, nil
}
// Close the response body to prevent resource leakage
resp.Body.Close()
// Wait for the backoff duration before retrying
time.Sleep(backoff)
// Exponential backoff
backoff *= 2
}
return resp, fmt.Errorf("max retries exceeded")
}
func (c *RLHTTPClient) MakeRequest(method string, url string, body io.Reader) ([]byte, error) {
req, err := http.NewRequest(method, url, body)
if err != nil {
return nil, err
}
if c.Headers != nil {
for key, value := range c.Headers {
req.Header.Set(key, value)
}
}
res, err := c.Do(req)
if err != nil {
return nil, err
}
statusOk := strconv.Itoa(res.StatusCode)[0] == '2'
if !statusOk {
return nil, fmt.Errorf("unexpected status code: %d", res.StatusCode)
}
defer func(Body io.ReadCloser) {
err := Body.Close()
if err != nil {
log.Println(err)
}
}(res.Body)
return io.ReadAll(res.Body)
}
func NewRLHTTPClient(rl *rate.Limiter, headers map[string]string) *RLHTTPClient {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
c := &RLHTTPClient{
client: &http.Client{
Transport: tr,
},
Ratelimiter: rl,
Headers: headers,
}
return c
}
func ParseRateLimit(rateStr string) *rate.Limiter {
if rateStr == "" {
return nil
}
re := regexp.MustCompile(`(\d+)/(minute|second)`)
matches := re.FindStringSubmatch(rateStr)
if len(matches) != 3 {
return nil
}
count, err := strconv.Atoi(matches[1])
if err != nil {
return nil
}
unit := matches[2]
switch unit {
case "minute":
reqsPerSecond := float64(count) / 60.0
return rate.NewLimiter(rate.Limit(reqsPerSecond), 5)
case "second":
return rate.NewLimiter(rate.Limit(float64(count)), 5)
default:
return nil
}
}

docs/docs/api-spec.yaml

@@ -0,0 +1,418 @@
openapi: 3.0.3
info:
title: Decypharr API
description: QbitTorrent with Debrid Support API
version: 1.0.0
contact:
name: Decypharr
url: https://github.com/sirrobot01/decypharr
servers:
- url: /api
description: API endpoints
security:
- cookieAuth: []
- bearerAuth: []
paths:
/arrs:
get:
summary: Get all configured Arrs
description: Retrieve a list of all configured Arr applications (Sonarr, Radarr, etc.)
tags:
- Arrs
responses:
'200':
description: Successfully retrieved Arrs
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/Arr'
/add:
post:
summary: Add content for processing
description: Add torrent files or magnet links for processing through debrid services
tags:
- Content
requestBody:
content:
multipart/form-data:
schema:
type: object
properties:
arr:
type: string
description: Name of the Arr application
action:
type: string
description: Action to perform
debrid:
type: string
description: Debrid service to use
callbackUrl:
type: string
description: Optional callback URL
downloadFolder:
type: string
description: Download folder path
downloadUncached:
type: boolean
description: Whether to download uncached content
urls:
type: string
description: Newline-separated URLs or magnet links
files:
type: array
items:
type: string
format: binary
description: Torrent files to upload
responses:
'200':
description: Content added successfully
content:
application/json:
schema:
type: object
properties:
results:
type: array
items:
$ref: '#/components/schemas/ImportRequest'
errors:
type: array
items:
type: string
'400':
description: Bad request
/repair:
post:
summary: Repair media
description: Start a repair process for specified media items
tags:
- Repair
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/RepairRequest'
responses:
'200':
description: Repair started or completed
content:
application/json:
schema:
type: string
'400':
description: Bad request
'404':
description: Arr not found
'500':
description: Internal server error
/repair/jobs:
get:
summary: Get repair jobs
description: Retrieve all repair jobs
tags:
- Repair
responses:
'200':
description: Successfully retrieved repair jobs
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/RepairJob'
delete:
summary: Delete repair jobs
description: Delete multiple repair jobs by IDs
tags:
- Repair
requestBody:
required: true
content:
application/json:
schema:
type: object
properties:
ids:
type: array
items:
type: string
required:
- ids
responses:
'200':
description: Jobs deleted successfully
'400':
description: Bad request
/repair/jobs/{id}/process:
post:
summary: Process repair job
description: Process a specific repair job by ID
tags:
- Repair
parameters:
- name: id
in: path
required: true
schema:
type: string
description: Job ID
responses:
'200':
description: Job processed successfully
'400':
description: Bad request
/repair/jobs/{id}/stop:
post:
summary: Stop repair job
description: Stop a running repair job by ID
tags:
- Repair
parameters:
- name: id
in: path
required: true
schema:
type: string
description: Job ID
responses:
'200':
description: Job stopped successfully
'400':
description: Bad request
'500':
description: Internal server error
/torrents:
get:
summary: Get all torrents
description: Retrieve all torrents sorted by added date
tags:
- Torrents
responses:
'200':
description: Successfully retrieved torrents
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/Torrent'
delete:
summary: Delete multiple torrents
description: Delete multiple torrents by hash list
tags:
- Torrents
parameters:
- name: hashes
in: query
required: true
schema:
type: string
description: Comma-separated list of torrent hashes
- name: removeFromDebrid
in: query
schema:
type: boolean
default: false
description: Whether to remove from debrid service
responses:
'200':
description: Torrents deleted successfully
'400':
description: Bad request
/torrents/{category}/{hash}:
delete:
summary: Delete single torrent
description: Delete a specific torrent by category and hash
tags:
- Torrents
parameters:
- name: category
in: path
required: true
schema:
type: string
description: Torrent category
- name: hash
in: path
required: true
schema:
type: string
description: Torrent hash
- name: removeFromDebrid
in: query
schema:
type: boolean
default: false
description: Whether to remove from debrid service
responses:
'200':
description: Torrent deleted successfully
'400':
description: Bad request
components:
securitySchemes:
cookieAuth:
type: apiKey
in: cookie
name: auth-session
bearerAuth:
type: http
scheme: bearer
bearerFormat: token
description: API token for authentication
schemas:
Arr:
type: object
properties:
name:
type: string
description: Name of the Arr application
host:
type: string
description: Host URL of the Arr application
token:
type: string
description: API token for the Arr application
cleanup:
type: boolean
description: Whether to cleanup after processing
skipRepair:
type: boolean
description: Whether to skip repair operations
downloadUncached:
type: boolean
description: Whether to download uncached content
selectedDebrid:
type: string
description: Selected debrid service
source:
type: string
description: Source of the Arr configuration
ImportRequest:
type: object
properties:
debridName:
type: string
description: Name of the debrid service
downloadFolder:
type: string
description: Download folder path
magnet:
type: string
description: Magnet link
arr:
$ref: '#/components/schemas/Arr'
action:
type: string
description: Action to perform
downloadUncached:
type: boolean
description: Whether to download uncached content
callbackUrl:
type: string
description: Callback URL
importType:
type: string
description: Type of import (API, etc.)
RepairRequest:
type: object
properties:
arrName:
type: string
description: Name of the Arr application
mediaIds:
type: array
items:
type: string
description: List of media IDs to repair
autoProcess:
type: boolean
description: Whether to auto-process the repair
async:
type: boolean
description: Whether to run repair asynchronously
required:
- arrName
RepairJob:
type: object
properties:
id:
type: string
description: Job ID
status:
type: string
description: Job status
arrName:
type: string
description: Associated Arr application
mediaIds:
type: array
items:
type: string
description: Media IDs being repaired
createdAt:
type: string
format: date-time
description: Job creation timestamp
Torrent:
type: object
properties:
hash:
type: string
description: Torrent hash
name:
type: string
description: Torrent name
category:
type: string
description: Torrent category
addedOn:
type: string
format: date-time
description: Date when torrent was added
size:
type: integer
description: Torrent size in bytes
progress:
type: number
format: float
description: Download progress (0-1)
status:
type: string
description: Torrent status
tags:
- name: Arrs
description: Arr application management
- name: Content
description: Content addition and processing
- name: Repair
description: Media repair operations
- name: Torrents
description: Torrent management
- name: Configuration
description: Application configuration
- name: Authentication
description: API token management

docs/docs/api.md

@@ -0,0 +1,90 @@
# API Documentation
Decypharr provides a RESTful API for managing torrents, debrid services, and Arr integrations. The API requires authentication and all endpoints are prefixed with `/api`.
## Authentication
The API supports two authentication methods:
### 1. Session-based Authentication (Cookies)
Log in through the web interface (`/login`) to establish an authenticated session. The session cookie (`auth-session`) will be automatically included in subsequent API requests from the same browser session.
### 2. API Token Authentication (Bearer Token)
Use API tokens for programmatic access. Include the token in the `Authorization` header for each request:
- `Authorization: Bearer <your-token>`
## Interactive API Documentation
<swagger-ui src="api-spec.yaml"/>
## API Endpoints Overview
### Arrs Management
- `GET /api/arrs` - Get all configured Arr applications (Sonarr, Radarr, etc.)
### Content Management
- `POST /api/add` - Add torrent files or magnet links for processing through debrid services
### Repair Operations
- `POST /api/repair` - Start repair process for media items
- `GET /api/repair/jobs` - Get all repair jobs
- `POST /api/repair/jobs/{id}/process` - Process a specific repair job
- `POST /api/repair/jobs/{id}/stop` - Stop a running repair job
- `DELETE /api/repair/jobs` - Delete multiple repair jobs
### Torrent Management
- `GET /api/torrents` - Get all torrents
- `DELETE /api/torrents/{category}/{hash}` - Delete a specific torrent
- `DELETE /api/torrents/` - Delete multiple torrents
## Usage Examples
### Adding Content via API
#### Using API Token:
```bash
curl -H "Authorization: Bearer $API_TOKEN" -X POST http://localhost:8080/api/add \
-F "arr=sonarr" \
-F "debrid=realdebrid" \
-F "urls=magnet:?xt=urn:btih:..." \
-F "downloadUncached=true"
-F "file=@/path/to/torrent/file.torrent"
-F "callbackUrl=http://your.callback.url/endpoint"
```
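The same request can be made programmatically. Below is a minimal Go sketch that mirrors the curl example above; the base URL, token, and form values are placeholders:
```go
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	var buf bytes.Buffer
	form := multipart.NewWriter(&buf)
	// Form fields mirror the curl example above; the values are placeholders.
	_ = form.WriteField("arr", "sonarr")
	_ = form.WriteField("debrid", "realdebrid")
	_ = form.WriteField("urls", "magnet:?xt=urn:btih:...")
	_ = form.WriteField("downloadUncached", "true")
	_ = form.Close()

	req, err := http.NewRequest(http.MethodPost, "http://localhost:8080/api/add", &buf)
	if err != nil {
		fmt.Println(err)
		return
	}
	req.Header.Set("Content-Type", form.FormDataContentType())
	req.Header.Set("Authorization", "Bearer "+os.Getenv("API_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```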
#### Using Session Cookies:
```bash
# Login first (this sets the session cookie)
curl -c cookies.txt -X POST http://localhost:8080/login \
-H "Content-Type: application/json" \
-d '{"username": "your_username", "password": "your_password"}'
# Then use the session cookie for API calls
curl -b cookies.txt -X POST http://localhost:8080/api/add \
-F "arr=sonarr" \
-F "debrid=realdebrid" \
-F "urls=magnet:?xt=urn:btih:..." \
-F "downloadUncached=true"
```
### Getting Torrents
```bash
# With API token
curl -H "Authorization: Bearer $API_TOKEN" -X GET http://localhost:8080/api/torrents
```
### Starting a Repair Job
```bash
# With API token
curl -H "Authorization: Bearer $API_TOKEN" -X POST http://localhost:8080/api/repair \
-H "Content-Type: application/json" \
-d '{
"arrName": "sonarr",
"mediaIds": ["123", "456"],
"autoProcess": true,
"async": true
}'
```
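For completeness, the same repair request in Go, using the `RepairRequest` fields shown in the API spec (values are placeholders):
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Field names follow the RepairRequest schema; values are placeholders.
	payload := map[string]any{
		"arrName":     "sonarr",
		"mediaIds":    []string{"123", "456"},
		"autoProcess": true,
		"async":       true,
	}
	body, _ := json.Marshal(payload)

	req, err := http.NewRequest(http.MethodPost, "http://localhost:8080/api/repair", bytes.NewReader(body))
	if err != nil {
		fmt.Println(err)
		return
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("API_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```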


@@ -0,0 +1,44 @@
# Features Overview
Decypharr extends the functionality of qBittorrent by integrating with Debrid services, providing several powerful features that enhance your media management experience.
## Core Features
### Mock qBittorrent API
Decypharr implements a complete qBittorrent-compatible API that can be used with Sonarr, Radarr, Lidarr, and other Arr applications. This allows you to:
- Seamlessly integrate with your existing Arr setup
- Use familiar interfaces to manage your downloads
- Benefit from Debrid services without changing your workflow
### Comprehensive UI
The Decypharr user interface provides:
- Torrent management capabilities
- Status monitoring
- Configuration options
- Multiple Debrid provider integration
## Advanced Features
Decypharr includes several advanced features that extend its capabilities:
- [Repair Support](repair-worker.md): Identifies and fixes issues with your media files
- WebDav Server: Provides direct access to your Debrid files
- Mounting Support: Allows you to mount Debrid services using [rclone](https://rclone.org), making it easy to access your files directly from your system
- Multiple Debrid Providers: Supports Real Debrid, Torbox, Debrid Link, and All Debrid, allowing you to choose the best service for your needs
## Supported Debrid Providers
Decypharr supports multiple Debrid providers:
- Real Debrid
- Torbox
- Debrid Link
- All Debrid
- Premiumize (Coming Soon)
- Usenet (Coming Soon)
Each provider can be configured separately, allowing you to use one or multiple services simultaneously.


@@ -0,0 +1,117 @@
# Private Tracker Downloads
It is against the rules of most private trackers to download using debrid services. That's because debrid services do not seed back.
Despite that, **many torrents from private trackers are cached on debrid services**.
This can happen if the exact same torrent is uploaded to a public tracker or if another user downloads the torrent from the private tracker using their debrid account.
However, you do **_NOT_** want to be the first person who downloads and caches the private tracker torrent because it is a very quick way to get your private tracker account banned.
Fortunately, decypharr offers a feature that allows you to check whether a private tracker torrent has _already_ been cached.
In a way, this feature lets you use your private trackers to find hashes for the latest releases that have not yet been indexed by zilean, torrentio, and other debrid-focused indexers.
This allows you to add private tracker torrents to your debrid account without breaking the most common private tracker rules. This significantly reduces the chance of account bans, **but please read the `Risks` section below** for more details and other precautions you should take.
## Risks
A lot of care has gone into ensuring this feature is compliant with most private tracker rules:
- The passkey is not leaked
- The private tracker announce URLs are not leaked
- The private tracker swarm is not leaked
- Even the torrent content is not leaked (by you)
You are merely downloading it from another source. It's not much different than downloading a torrent that has been uploaded to MegaUpload or another file hoster.
**But it is NOT completely risk-free.**
### Suspicious-looking activity
To use this feature, you must download the `.torrent` file from the private tracker. But since you will never leech the content, it can make your account look suspicious.
In fact, there is a strictly forbidden technique called `ghostleeching` that also requires downloading of the `.torrent` file, and tracker admins might suspect that this is what you are doing.
We know of one user who got banned from a Unit3D-based tracker for this.
**Here is what is recommended:**
- Be a good private tracker user in general. Perma-seed, upload, contribute
- Only enable `Interactive Search` in the arrs (disable `Automatic Search`)
- Only use it for content that is not on public sources yet, and you need to watch **RIGHT NOW** without having time to wait for the download to finish
- Do **NOT** use it to avoid seeding
### Accidentally disable this feature
Another big risk is that you might accidentally disable the feature. The consequence will be that you actually leech the torrent from the tracker, don't seed it, and expose the private swarm to an untrusted third party.
You should avoid this at all costs.
Therefore, to reduce the risk further, it is recommended to enable the feature using both methods:
1. Using the global `Always Remove Tracker URLs` setting in your decypharr `config.json`
2. And by enabling the `First and Last First` setting in Radarr / Sonarr
This way, if one of them gets disabled, you have another backup.
## How to enable this feature
### Always Remove Tracker URLs
- In the web UI under `Settings -> QBitTorrent -> Always Remove Tracker URLs`
- Or in your `config.json` by setting the `qbittorrent.always_rm_tracker_url` to `true`
This ensures that the Tracker URLs are removed from **ALL torrents** (regardless of whether they are public, private, or how they were added).
But this can make downloads of uncached torrents slower or stall because the tracker helps the client find peers to download from.
If the torrent file has no tracker URLs, the torrent client can try to find peers for public torrents using [DHT](https://en.wikipedia.org/wiki/Mainline_DHT). However, this may be less efficient than connecting to a tracker, and the downloads may be slower or stall.
If you only download cached torrents, there is no further downside to enabling this option.
### Only on specific Arr-app clients and indexers
Alternatively, you can toggle it only for specific download clients and indexers in the Arr-apps...
- Enable `Show Advanced Settings` in your Arr app
- Add a new download client in `Settings -> Download Clients` and call it something like `Decypharr (Private)`
- Enable the `First and Last First` checkbox, which will tell Decypharr to remove the tracker URLs
- Add a duplicate version of your private tracker indexer for Decypharr downloads
- Untick `Enable Automatic Search`
- Tick `Enable Interactive Search`
- Set `Download Client` to your new `Decypharr (Private)` client (requires `Show Advanced Settings`)
If you are using Prowlarr to sync your indexers, you can't set the `Download Client` in Prowlarr. You must update it directly in your Arr-apps after the indexers get synced. But future updates to the indexers won't reset the setting.
### Test it
After enabling the feature, try adding a [public torrent](https://ubuntu.com/download/alternative-downloads) through the Decypharr UI and a **public torrent** through your Arr-apps.
Then check the decypharr log for an entry like...
```log
Removed 2 tracker URLs from torrent file
```
If you see this log entry, it means the tracker URLs are being stripped from your torrents and you can safely enable it on private tracker indexers.
## How it works
When you add a new torrent through the QBitTorrent API or through the Web UI, decypharr converts your torrent into a magnet link and then uses your debrid service's API to download that magnet link.
The torrent magnet link contains:
1. The `info hash` that uniquely identifies the torrent, files, and file names
2. The torrent name
3. The URLs of the tracker to connect to
Private tracker URLs in torrents contain a `passkey`. This is a unique identifier that ties the torrent file to your private tracker account.
Only if the `passkey` is valid will the tracker allow the torrent client to connect and download the files. This is also how private torrent trackers measure your downloads and uploads.
The `Remove Tracker URLs` feature removes all the tracker URLs (which include your private `passkey`). This means when decypharr attempts to download the torrent, it only passes the `info hash` and torrent name to the debrid service.
Without the tracker URLs, your debrid service has no way to connect to the private tracker to download the files, and your `passkey` and the private torrent tracker swarm are not exposed.
**But if the torrent is already cached, it's immediately added to your account.**
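For illustration only, here is a minimal Go sketch of the idea: dropping every `tr=` parameter from a magnet link so only fields like the info hash (`xt`) and display name (`dn`) remain. This is not Decypharr's actual implementation.
```go
package main

import (
	"fmt"
	"strings"
)

// stripTrackerParams removes every tr= (announce URL) parameter from a magnet link,
// keeping fields such as the info hash (xt) and display name (dn).
func stripTrackerParams(magnet string) string {
	base, query, found := strings.Cut(magnet, "?")
	if !found {
		return magnet
	}
	kept := make([]string, 0)
	for _, param := range strings.Split(query, "&") {
		if strings.HasPrefix(param, "tr=") {
			continue // tracker URLs (and any private passkey they contain) are dropped
		}
		kept = append(kept, param)
	}
	return base + "?" + strings.Join(kept, "&")
}

func main() {
	// Placeholder magnet link with a fake private announce URL.
	magnet := "magnet:?xt=urn:btih:example-info-hash&dn=example.iso&tr=https%3A%2F%2Fprivate.example%2Fannounce%3Fpasskey%3Dsecret"
	fmt.Println(stripTrackerParams(magnet))
}
```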


@@ -0,0 +1,18 @@
# Repair Worker
![Repair Worker](../images/repair.png)
The Repair Worker is a powerful feature that helps maintain the health of your media library by scanning for and fixing issues with files.
## What It Does
The Repair Worker performs the following tasks (a minimal broken-symlink check is sketched after this list):
- Searches for broken symlinks or file references
- Identifies missing files in your library
- Locates deleted or unreadable files
- Automatically repairs issues when possible
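As a rough illustration of the first check, the following Go sketch walks a library folder and reports symlinks whose targets no longer resolve. The path is an example; this is not the Repair Worker's actual code.
```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// findBrokenSymlinks walks a library folder and reports symlinks whose targets
// are missing or unreadable.
func findBrokenSymlinks(root string) ([]string, error) {
	var broken []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.Type()&fs.ModeSymlink == 0 {
			return nil
		}
		if _, statErr := os.Stat(path); statErr != nil {
			broken = append(broken, path) // target no longer exists
		}
		return nil
	})
	return broken, err
}

func main() {
	// Example library path; adjust to your symlink/download folder.
	broken, err := findBrokenSymlinks("/media/symlinks")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range broken {
		fmt.Println("broken:", p)
	}
}
```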
## Configuration
You can enable and configure the Repair Worker in the Decypharr settings. It can be set to run at regular intervals, such as every 12 hours or daily.


@@ -0,0 +1,26 @@
### Downloading with Decypharr
While Decypharr provides a qBittorrent API for integration with media management applications, it also allows you to manually download torrents directly through its interface. This guide walks you through the process of downloading torrents with Decypharr.
- You can either use the Decypharr UI to add torrents manually or use its [API](../api.md) to automate the process.
## Manual Downloading
![Downloading UI](../images/download.png)
To manually download a torrent using Decypharr, follow these steps:
1. Navigate to the "Download" section in the Decypharr UI.
2. You can either upload torrent file(s) or paste magnet links directly into the input fields.
3. Select the action (defaults to Symlink).
4. Add any additional options, such as:
- *Download Folder*: Specify the folder where the downloaded files will be saved.
- *Arr Category*: Choose the category for the download, which helps in organizing files in your media management applications.
- **Post Download Action**: Select what to do after the download completes:
- **Create Symlink**: Create a symlink to the downloaded files in the mount folder (default). A minimal sketch of this action is shown at the end of this page.
- **Download**: Download the file directly.
- **No Action**: Do nothing after the download completes.
- **Debrid Provider**: Choose which Debrid service to use for the download (if you have multiple).
- **Download Uncached**: If enabled, Decypharr will attempt to download uncached files from the Debrid service.
Note:
- If you use an arr category, your download will go into **{download_folder}/{arr}**
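For reference, the **Create Symlink** action conceptually does something like the following Go sketch: it links a file from the mounted debrid folder into `{download_folder}/{arr}`. All paths below are examples, not Decypharr's actual code.
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Example paths: a file available in the rclone/WebDAV mount is linked
	// into {download_folder}/{arr}, where the Arr application expects it.
	source := "/mnt/remote/realdebrid/__all__/MyMovie/MyMovie.mkv" // file in the mount
	link := "/media/symlinks/radarr/MyMovie/MyMovie.mkv"           // symlink seen by the Arr

	if err := os.MkdirAll(filepath.Dir(link), 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.Symlink(source, link); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("symlinked", link, "->", source)
}
```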


@@ -0,0 +1,4 @@
# Guides for setting up Decypharr
- [Manual Downloading with Decypharr](downloading.md)
- [Internal Mounting](internal-mounting.md)


@@ -0,0 +1,85 @@
# Internal Mounting
This guide explains how to use Decypharr's internal mounting feature to eliminate the need for external rclone setup.
## Overview
![Decypharr Internal Mounting](../images/settings/rclone.png)
Instead of requiring users to install and configure rclone separately, Decypharr can now mount your WebDAV endpoints internally using rclone as a library dependency. This provides a seamless experience where files appear as regular filesystem paths without any external dependencies.
## Prerequisites
- **Docker users**: FUSE support may need to be enabled in the container depending on your Docker setup
- **macOS users**: May need [macFUSE](https://osxfuse.github.io/) installed for mounting functionality
- **Linux users**: FUSE should be available by default on most distributions
- **Windows users**: Mounting functionality may be limited
### Configuration Options
You can set the mount options in the Web UI or directly in the configuration file.
#### Note:
Check the Rclone documentation for more details on the available options: [Rclone Mount Options](https://rclone.org/commands/rclone_mount/).
## How It Works
1. **WebDAV Server**: Decypharr starts its internal WebDAV server for enabled providers
2. **Internal Mount**: Rclone is used internally to mount the WebDAV endpoint to a local filesystem path
3. **File Access**: Your applications can access files using regular filesystem paths like `/mnt/decypharr/realdebrid/__all__/MyMovie/` (see the sketch after this list)
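A quick way to confirm the mount is usable is to list it with ordinary filesystem calls, as in this small Go sketch; the path is the example from above, adjust it to your configured mount folder:
```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Example mount path; adjust to your configured mount folder.
	entries, err := os.ReadDir("/mnt/decypharr/realdebrid/__all__")
	if err != nil {
		fmt.Println("mount not available:", err)
		return
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```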
## Benefits
- **Automatic Setup**: Mounting is handled automatically by Decypharr using internal rclone rcd
- **Filesystem Access**: Files appear as regular directories and files
- **Seamless Integration**: Works with existing media servers without changes
## Docker Compose
```yaml
version: '3.8'
services:
decypharr:
image: sirrobot01/decypharr:latest
container_name: decypharr
ports:
- "8282:8282"
volumes:
- ./config:/config
- /mnt:/mnt:rshared # Important: use 'rshared' for mount propagation
devices:
- /dev/fuse:/dev/fuse:rwm
cap_add:
- SYS_ADMIN
security_opt:
- apparmor:unconfined
environment:
- UMASK=002
- PUID=1000 # Change to your user ID
- PGID=1000 # Change to your group ID
```
**Important Docker Notes:**
- Mount volumes with `:rshared` to allow mount propagation
- Include `/dev/fuse` device for FUSE mounting
## Troubleshooting
### Mount Failures
If mounting fails, check:
1. **FUSE Installation**:
- **macOS**: Install macFUSE from https://osxfuse.github.io/
- **Linux**: Install fuse package (`apt install fuse` or `yum install fuse`)
- **Docker**: Fuse is already included in the container, but ensure the host supports it
2. **Permissions**: Ensure the application has sufficient privileges
### No Mount Methods Available
If you see "no mount method available" errors:
1. **Check Platform Support**: Some platforms have limited FUSE support
2. **Install Dependencies**: Ensure FUSE libraries are installed
3. **Use WebDAV Directly**: Access files via `http://localhost:8282/webdav/provider/`
4. **External Mounting**: Use OS-native WebDAV mounting as fallback

Binary image assets added (not shown): docs/docs/images/logo.png, docs/docs/images/main.png, docs/docs/images/repair.png, docs/docs/images/webdav.png, and several other screenshots under docs/docs/images/.

docs/docs/index.md

@@ -0,0 +1,28 @@
# Decypharr
![Decypharr UI - Light Mode](images/main-light.png){: .light-mode-image}
![Decypharr UI - Dark Mode](images/main.png){: .dark-mode-image}
**Decypharr** is an implementation of QbitTorrent with **Multiple Debrid service support**, written in Go.
## What is Decypharr?
**TLDR**; Decypharr is a self-hosted, open-source download client that integrates with multiple Debrid services. It provides a user-friendly interface for managing files and supports popular media management applications like Sonarr and Radarr.
## Key Features
- Mock qBittorrent API that supports Sonarr, Radarr, Lidarr, and other Arr applications
- Multiple Debrid providers support
- WebDAV server support for each Debrid provider, with an optional mounting feature (using [rclone](https://rclone.org))
- Repair Worker for missing files, broken symlinks, etc.
## Supported Debrid Providers
- [Real Debrid](https://real-debrid.com)
- [Torbox](https://torbox.app)
- [Debrid Link](https://debrid-link.com)
- [All Debrid](https://alldebrid.com)
## Getting Started
Check out our [Installation Guide](installation.md) to get started with Decypharr.

docs/docs/installation.md

@@ -0,0 +1,107 @@
# Installation
There are multiple ways to install and run Decypharr. Choose the method that works best for your setup.
## Docker Installation (Recommended)
Docker is the easiest way to get started with Decypharr.
### Available Docker Registries
You can use either Docker Hub or GitHub Container Registry to pull the image:
- Docker Hub: `cy01/blackhole:latest`
- GitHub Container Registry: `ghcr.io/sirrobot01/decypharr:latest`
### Docker Tags
- `latest`: The latest stable release
- `beta`: The latest beta release
- `vX.Y.Z`: A specific version (e.g., `v0.1.0`)
- `experimental`: The latest experimental build (highly unstable)
### Docker CLI Setup
Pull the Docker image:
```bash
docker pull cy01/blackhole:latest
```
Run the Docker container:
```bash
docker run -d \
--name decypharr \
--restart unless-stopped \
-p 8282:8282 \
-v /mnt/:/mnt:rshared \
-v ./config/:/app \
--device /dev/fuse:/dev/fuse:rwm \
--cap-add SYS_ADMIN \
--security-opt apparmor:unconfined \
cy01/blackhole:latest
```
### Docker Compose Setup
Create a `docker-compose.yml` file with the following content:
```yaml
services:
decypharr:
image: cy01/blackhole:latest
container_name: decypharr
ports:
- "8282:8282"
volumes:
- /mnt/:/mnt:rshared
- ./config/:/app
restart: unless-stopped
devices:
- /dev/fuse:/dev/fuse:rwm
cap_add:
- SYS_ADMIN
security_opt:
- apparmor:unconfined
```
Run the Docker Compose setup:
```bash
docker-compose up -d
```
## Binary Installation
If you prefer not to use Docker, you can download and run the binary directly.
Download your OS-specific release from the [release page](https://github.com/sirrobot01/decypharr/releases).
Create a configuration file (see Configuration)
Run the binary:
```bash
chmod +x decypharr
./decypharr --config /path/to/config/folder
```
### Notes for Docker Users
- Ensure that the `/mnt/` directory is mounted correctly to access your media files.
- You can adjust the `PUID` and `PGID` environment variables to match your user and group IDs for proper file permissions.
- The `UMASK` environment variable can be set to control file permissions created by Decypharr.
##### Health Checks
- Health checks are disabled by default. You can enable them by adding a `healthcheck` section in your `docker-compose.yml` file.
- The health check verifies the availability of several parts of the application:
- The main web interface
- The qBittorrent API
- The WebDAV server (if enabled). You should disable health checks during the initial indexing as it can take a long time to complete.
```yaml
services:
decypharr:
...
...
healthcheck:
test: ["CMD", "/usr/bin/healthcheck", "--config", "/app/"]
interval: 10s
timeout: 10s
retries: 3
```


@@ -0,0 +1,24 @@
/* Light mode image - visible by default */
.light-mode-image {
display: block;
}
/* Dark mode image - hidden by default */
.dark-mode-image {
display: none;
}
/* When dark theme (slate) is active */
[data-md-color-scheme="slate"] .light-mode-image {
display: none;
}
[data-md-color-scheme="slate"] .dark-mode-image {
display: block;
}
/* Optional: smooth transition */
.light-mode-image,
.dark-mode-image {
transition: opacity 0.2s ease-in-out;
}

docs/docs/usage.md

@@ -0,0 +1,77 @@
# Usage Guide
This guide will help you get started with Decypharr after installation.
After installing Decypharr, you can access the web interface at `http://localhost:8282` or your configured host/port.
### Initial Configuration
If it's the first time you're accessing the UI, you will be prompted to set up your credentials. You can skip this step if you don't want to enable authentication. If you choose to set up credentials, enter a username, a password, and the password confirmation, then click **Save**. You will be redirected to the settings page.
### Debrid Configuration
![Decypharr Settings](images/settings/debrid.png)
- Click on **Debrid** in the tab
- Add your desired Debrid services (Real Debrid, Torbox, Debrid Link, All Debrid) by entering the required API keys or tokens.
- Set the **Mount/Rclone Folder**. This is where decypharr will look for added torrents to symlink them to your media library.
- If you're using internal webdav, do not forget the `/__all__` suffix
- Enable WebDAV
- You can leave the remaining settings as default for now.
### Qbittorrent Configuration
![Qbittorrent Settings](images/settings/qbittorent.png)
- Click on **Qbittorrent** in the tab
- Set the **Download Folder** to where you want Decypharr to save downloaded files. These files will be symlinked to the mount folder you configured earlier.
- Set **Always Remove Tracker URLs** if you want to always remove the tracker URLs from torrents and magnet links. This is useful if you want to [download private tracker torrents](features/private-tracker-downloads.md) without breaking the rules, but it will make uncached torrents always stall.
You can leave the remaining settings as default for now.
### Arrs Configuration
You can skip Arr configuration for now. Decypharr will auto-add them when you connect to Sonarr or Radarr later.
#### Connecting to Sonarr/Radarr
![Sonarr/Radarr Setup](images/settings/arr.png)
To connect Decypharr to your Sonarr or Radarr instance:
1. In Sonarr/Radarr, go to **Settings → Download Client → Add Client → qBittorrent**
2. Configure the following settings:
- **Host**: `localhost` (or the IP of your Decypharr server)
- **Port**: `8282` (or your configured qBittorrent port)
- **Username**: `http://sonarr:8989` (your Arr host with http/https)
- **Password**: `sonarr_token` (your Arr API token, you can get this from Sonarr/Radarr settings)
- **Category**: e.g., `sonarr`, `radarr` (match what you configured in Decypharr)
- **Use SSL**: `No`
- **Sequential Download**: `No` or `Yes` (if you want to download torrents locally instead of symlink)
- **First and Last First**: `No` by default or `Yes` if you want to remove torrent tracker URLs from the torrents. This can make it possible to [download private trackers torrents without breaking the rules](features/private-tracker-downloads.md).
3. Click **Test** to verify the connection
4. Click **Save** to add the download client
### Rclone Configuration
![Rclone Settings](images/settings/rclone.png)
If you want Decypharr to automatically mount WebDAV folders using Rclone, you need to set up Rclone first:
If you're using Docker, the rclone binary is already included in the container. If you're running Decypharr directly, make sure Rclone is installed on your system.
Enable **Mount**
- **Global Mount Path**: Set the path where you want to mount the WebDAV folders (e.g., `/mnt/remote`). Decypharr will create subfolders for each Debrid service. For example, if you set `/mnt/remote`, it will create `/mnt/remote/realdebrid`, `/mnt/remote/torbox`, etc. This should be the grandparent of your mount folder set in the Debrid configuration.
- **User ID**: Set the user ID for Rclone mounts (defaults to the `PUID` environment variable).
- **Group ID**: Set the group ID for Rclone mounts (defaults to the `PGID` environment variable).
- **Buffer Size**: Set the buffer size for Rclone mounts.
You should set the other options based on your use case. If you're not sure, leave them at their defaults. Check out the [Rclone documentation](https://rclone.org/commands/rclone_mount/) for more details.
### Repair Configuration
![Repair Settings](images/settings/repair.png)
Repair is an optional feature that allows you to fix missing files, symlinks, and other issues in your media library.
- Click on **Repair** in the tab
- Enable **Scheduled Repair** if you want Decypharr to automatically check for missing files at your specified interval.
- Set the **Repair Interval** to how often you want Decypharr to check for missing files (e.g. `1h`, `6h`, `12h`, `24h`; you can also use cron syntax like `0 0 * * *` for daily checks). A quick way to sanity-check these values is sketched after this list.
- Enable **WebDav** (you should enable this if you enabled WebDAV in the Debrid configuration)
- **Auto Process**: Enable this if you want Decypharr to automatically process repair jobs when they are done. This can delete the original files and symlinks, so be careful!
- **Worker Threads**: Set the number of worker threads for processing repair jobs. More threads can speed up the process but may consume more resources.
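If you want to sanity-check an interval or cron expression before saving it, a small Go sketch like the following works. It only validates the syntax (using the `robfig/cron` library already present in the project's dependencies); it is not how Decypharr schedules repairs internally.
```go
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron/v3"
)

func main() {
	// Duration-style intervals such as 1h, 6h, 12h, 24h.
	if d, err := time.ParseDuration("12h"); err == nil {
		fmt.Println("interval ok:", d)
	}
	// Cron-style schedules such as "0 0 * * *" (daily at midnight).
	if sched, err := cron.ParseStandard("0 0 * * *"); err == nil {
		fmt.Println("next run:", sched.Next(time.Now()))
	}
}
```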

docs/mkdocs.yml

@@ -0,0 +1,79 @@
site_name: Decypharr
site_url: https://sirrobot01.github.io/decypharr
site_description: QbitTorrent with Debrid Support
repo_url: https://github.com/sirrobot01/decypharr
repo_name: sirrobot01/decypharr
edit_uri: blob/main/docs
extra_css:
- styles/styles.css
theme:
name: material
logo: images/logo.png
font:
text: Roboto
code: Roboto Mono
palette:
- media: "(prefers-color-scheme: light)"
scheme: default
primary: indigo
accent: indigo
toggle:
icon: material/weather-night
name: Switch to dark mode
- media: "(prefers-color-scheme: dark)"
scheme: slate
primary: indigo
accent: indigo
toggle:
icon: material/weather-sunny
name: Switch to light mode
features:
- navigation.search.highlight
- navigation.search.suggest
- navigation.search.share
- navigation.search.suggest
- navigation.search.share
- navigation.search.highlight
- navigation.search.suggest
- navigation.search.share
icon:
repo: fontawesome/brands/github
markdown_extensions:
- admonition
- pymdownx.details
- pymdownx.superfences
- pymdownx.highlight
- pymdownx.inlinehilite
- pymdownx.tabbed
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
emoji_generator: !!python/name:materialx.emoji.to_svg
- attr_list
- md_in_html
- def_list
- toc:
permalink: true
nav:
- Home: index.md
- Installation: installation.md
- Usage: usage.md
- API Documentation: api.md
- Features:
- Overview: features/index.md
- Repair Worker: features/repair-worker.md
- Private Tracker Downloads: features/private-tracker-downloads.md
- Guides:
- Overview: guides/index.md
- Manual Downloading: guides/downloading.md
- Internal Mounting: guides/internal-mounting.md
plugins:
- search
- tags
- swagger-ui-tag

docs/requirements.txt

@@ -0,0 +1,3 @@
mkdocs==1.6.1
mkdocs-material==9.6.16
mkdocs-swagger-ui-tag==0.6.10

go.mod

@@ -1,28 +1,40 @@
module goBlack
module github.com/sirrobot01/decypharr
go 1.22
go 1.24.0
toolchain go1.24.3
require (
github.com/anacrolix/torrent v1.55.0
github.com/cavaliergopher/grab/v3 v3.0.1
github.com/elazarl/goproxy v0.0.0-20240726154733-8b0c20506380
github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2
github.com/go-chi/chi/v5 v5.1.0
github.com/go-chi/chi/v5 v5.2.2
github.com/go-co-op/gocron/v2 v2.16.1
github.com/google/uuid v1.6.0
github.com/valyala/fasthttp v1.55.0
github.com/valyala/fastjson v1.6.4
golang.org/x/time v0.6.0
github.com/gorilla/sessions v1.4.0
github.com/puzpuzpuz/xsync/v4 v4.1.0
github.com/robfig/cron/v3 v3.0.1
github.com/rs/zerolog v1.33.0
github.com/stanNthe5/stringbuf v0.0.3
go.uber.org/ratelimit v0.3.1
golang.org/x/crypto v0.39.0
golang.org/x/net v0.41.0
golang.org/x/sync v0.15.0
gopkg.in/natefinch/lumberjack.v2 v2.2.1
)
require (
github.com/anacrolix/missinggo v1.3.0 // indirect
github.com/anacrolix/missinggo/v2 v2.7.3 // indirect
github.com/andybalholm/brotli v1.1.0 // indirect
github.com/benbjohnson/clock v1.3.0 // indirect
github.com/bradfitz/iter v0.0.0-20191230175014-e8f45d346db8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/gorilla/securecookie v1.1.2 // indirect
github.com/huandu/xstrings v1.3.2 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
golang.org/x/net v0.27.0 // indirect
golang.org/x/text v0.16.0 // indirect
github.com/jonboulle/clockwork v0.5.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
golang.org/x/sys v0.33.0 // indirect
)

go.sum

@@ -35,9 +35,9 @@ github.com/anacrolix/tagflag v1.0.0/go.mod h1:1m2U/K6ZT+JZG0+bdMK6qauP49QT4wE5pm
github.com/anacrolix/tagflag v1.1.0/go.mod h1:Scxs9CV10NQatSmbyjqmqmeQNwGzlNe0CMUMIxqHIG8=
github.com/anacrolix/torrent v1.55.0 h1:s9yh/YGdPmbN9dTa+0Inh2dLdrLQRvEAj1jdFW/Hdd8=
github.com/anacrolix/torrent v1.55.0/go.mod h1:sBdZHBSZNj4de0m+EbYg7vvs/G/STubxu/GzzNbojsE=
github.com/andybalholm/brotli v1.1.0 h1:eLKJA0d02Lf0mVpIDgYnqXcUn0GqVmEFny3VuID1U3M=
github.com/andybalholm/brotli v1.1.0/go.mod h1:sms7XGricyQI9K10gOSf56VKKWS4oLer58Q+mhRPtnY=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/benbjohnson/clock v1.3.0 h1:ip6w0uFQkncKQ979AypyG0ER7mqUSBdKLOgAle/AT8A=
github.com/benbjohnson/clock v1.3.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/benbjohnson/immutable v0.2.0/go.mod h1:uc6OHo6PN2++n98KHLxW8ef4W42ylHiQSENghE1ezxI=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@@ -50,19 +50,17 @@ github.com/cavaliergopher/grab/v3 v3.0.1 h1:4z7TkBfmPjmLAAmkkAZNX/6QJ1nNFdv3SdIH
github.com/cavaliergopher/grab/v3 v3.0.1/go.mod h1:1U/KNnD+Ft6JJiYoYBAimKH2XrYptb8Kl3DFGmsjpq4=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v0.0.0-20180421182945-02af3965c54e/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/elazarl/goproxy v0.0.0-20240726154733-8b0c20506380 h1:1NyRx2f4W4WBRyg0Kys0ZbaNmDDzZ2R/C7DTi+bbsJ0=
github.com/elazarl/goproxy v0.0.0-20240726154733-8b0c20506380/go.mod h1:thX175TtLTzLj3p7N/Q9IiKZ7NF+p72cvL91emV0hzo=
github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2 h1:dWB6v3RcOy03t/bUadywsbyrQwCqZeNIEX6M1OtSZOM=
github.com/elazarl/goproxy/ext v0.0.0-20190711103511-473e67f1d7d2/go.mod h1:gNh8nYJoAm43RfaxurUnxr+N1PwuFV3ZMl/efxlIlY8=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
@@ -72,13 +70,16 @@ github.com/glycerine/go-unsnap-stream v0.0.0-20190901134440-81cf024a9e0a/go.mod
github.com/glycerine/goconvey v0.0.0-20180728074245-46e3a41ad493/go.mod h1:Ogl1Tioa0aV7gstGFO7KhffUsb9M4ydbEbbxpcEDc24=
github.com/glycerine/goconvey v0.0.0-20190315024820-982ee783a72e/go.mod h1:Ogl1Tioa0aV7gstGFO7KhffUsb9M4ydbEbbxpcEDc24=
github.com/glycerine/goconvey v0.0.0-20190410193231-58a59202ab31/go.mod h1:Ogl1Tioa0aV7gstGFO7KhffUsb9M4ydbEbbxpcEDc24=
github.com/go-chi/chi/v5 v5.1.0 h1:acVI1TYaD+hhedDJ3r54HyA6sExp3HfXq7QWEEY/xMw=
github.com/go-chi/chi/v5 v5.1.0/go.mod h1:DslCQbL2OYiznFReuXYUmQ2hGd1aDpCnlMNITLSKoi8=
github.com/go-chi/chi/v5 v5.2.2 h1:CMwsvRVTbXVytCk1Wd72Zy1LAsAh9GxMmSNWLHCG618=
github.com/go-chi/chi/v5 v5.2.2/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
github.com/go-co-op/gocron/v2 v2.16.1 h1:ux/5zxVRveCaCuTtNI3DiOk581KC1KpJbpJFYUEVYwo=
github.com/go-co-op/gocron/v2 v2.16.1/go.mod h1:opexeOFy5BplhsKdA7bzY9zeYih8I8/WNJ4arTIFPVc=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
@@ -101,9 +102,11 @@ github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5a
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
@@ -112,6 +115,10 @@ github.com/gopherjs/gopherjs v0.0.0-20190309154008-847fc94819f9/go.mod h1:wJfORR
github.com/gopherjs/gopherjs v0.0.0-20190910122728-9d188e94fb99/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/securecookie v1.1.2 h1:YCIWL56dvtr73r6715mJs5ZvhtnY73hBvEF8kXD8ePA=
github.com/gorilla/securecookie v1.1.2/go.mod h1:NfCASbcHqRSY+3a8tlWJwsQap2VX5pwzwo4h3eOamfo=
github.com/gorilla/sessions v1.4.0 h1:kpIYOp/oi6MG/p5PgxApU8srsSw9tuFbt46Lt7auzqQ=
github.com/gorilla/sessions v1.4.0/go.mod h1:FLWm50oby91+hl7p/wRxDth9bWSuk0qVL2emc7lT5ik=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huandu/xstrings v1.0.0/go.mod h1:4qWG/gcEcfX4z/mBDHJ++3ReCw9ibxbsNJbcucJdbSo=
@@ -119,14 +126,14 @@ github.com/huandu/xstrings v1.2.0/go.mod h1:DvyZB1rfVYsBIigL8HwpZgxHwXozlTgGqn63
github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/huandu/xstrings v1.3.2 h1:L18LIDzqlW6xN2rEkpdV8+oL/IXWJ1APd+vsdYy4Wdw=
github.com/huandu/xstrings v1.3.2/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/jonboulle/clockwork v0.5.0 h1:Hyh9A8u51kptdkR+cqRpT1EebBwTn1oK9YfGYbdFz6I=
github.com/jonboulle/clockwork v0.5.0/go.mod h1:3mZlmanh0g2NDKO5TWZVJAfofYk64M7XN3SzBPjZF60=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jtolds/gls v4.2.1+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
@@ -136,6 +143,13 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -153,8 +167,9 @@ github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
@@ -171,10 +186,16 @@ github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/puzpuzpuz/xsync/v4 v4.1.0 h1:x9eHRl4QhZFIPJ17yl4KKW9xLyVWbb3/Yq4SXpjF71U=
github.com/puzpuzpuz/xsync/v4 v4.1.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/go-charset v0.0.0-20180617210344-2471d30d28b4/go.mod h1:qgYeAmZ5ZIpBWTGllZSQnw97Dj+woV0toclVaRGI8pc=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.33.0 h1:1cU2KZkvPxNyfgEmhHAz/1A9Bz+llsdYzklWFzgp0r8=
github.com/rs/zerolog v1.33.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/ryszard/goskiplist v0.0.0-20150312221310-2dfbae5fcf46/go.mod h1:uAQ5PCi+MFsC7HjREoAz1BU+Mq60+05gifQSsHSDG/8=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
@@ -182,30 +203,34 @@ github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1
github.com/smartystreets/assertions v0.0.0-20190215210624-980c5ac6f3ac/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20181108003508-044398e4856c/go.mod h1:XDJAKZRPZ1CvBcN2aX5YOUTYGHki24fSF0Iv48Ibg0s=
github.com/smartystreets/goconvey v0.0.0-20190306220146-200a235640ff/go.mod h1:KSQcGKpxUMHk3nbYzs/tIBAM2iDooCn0BmttHOJEbLs=
github.com/stanNthe5/stringbuf v0.0.3 h1:3ChRipDckEY6FykaQ1Dowy3B+ZQa72EDBCasvT5+D1w=
github.com/stanNthe5/stringbuf v0.0.3/go.mod h1:hii5Vr+mucoWkNJlIYQVp8YvuPtq45fFnJEAhcPf2cQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tinylib/msgp v1.0.2/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tinylib/msgp v1.1.0/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tinylib/msgp v1.1.2/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.55.0 h1:Zkefzgt6a7+bVKHnu/YaYSOPfNYNisSVBo/unVCf8k8=
github.com/valyala/fasthttp v1.55.0/go.mod h1:NkY9JtkrpPKmgwV3HTaS2HWaJss9RSIsRVfcxxoHiOM=
github.com/valyala/fastjson v1.6.4 h1:uAUNq9Z6ymTgGhcm0UynUAB6tlbakBrz6CQFax3BXVQ=
github.com/valyala/fastjson v1.6.4/go.mod h1:CLCAqky6SMuOcxStkYQvblddUtoRxhYMGLrsQns1aXY=
github.com/willf/bitset v1.1.9/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
github.com/willf/bitset v1.1.10/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/ratelimit v0.3.1 h1:K4qVE+byfv/B3tC+4nYWP7v/6SimcO7HzHekoMNBma0=
go.uber.org/ratelimit v0.3.1/go.mod h1:6euWsTB6U/Nb3X++xEUXA8ciPJvr19Q/0h1+oDcJhRk=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -221,8 +246,8 @@ golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73r
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.27.0 h1:5K3Njcw06/l2y9vpGCSdcxWOYHOUk3dVNGDXN+FvAys=
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -230,6 +255,8 @@ golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -241,12 +268,13 @@ golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200413165638-669c56c373c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -273,6 +301,8 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=

19
internal/config/auth.go Normal file

@@ -0,0 +1,19 @@
package config
import "golang.org/x/crypto/bcrypt"
func VerifyAuth(username, password string) bool {
// Stored passwords are bcrypt hashes, so compare with bcrypt
if username == "" {
return false
}
auth := Get().GetAuth()
if auth == nil {
return false
}
if username != auth.Username {
return false
}
err := bcrypt.CompareHashAndPassword([]byte(auth.Password), []byte(password))
return err == nil
}
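
VerifyAuth only succeeds when the stored password is a bcrypt hash and use_auth is enabled. A minimal sketch of seeding such credentials and checking them; the config path is illustrative and everything else uses the functions defined above:

package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"

	"github.com/sirrobot01/decypharr/internal/config"
)

func main() {
	config.SetConfigPath("/tmp/decypharr") // illustrative path; Get() creates a default config here

	// Hash the plaintext so VerifyAuth's bcrypt comparison can succeed.
	hash, err := bcrypt.GenerateFromPassword([]byte("s3cret"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}
	if err := config.Get().SaveAuth(&config.Auth{Username: "admin", Password: string(hash)}); err != nil {
		panic(err)
	}

	fmt.Println(config.VerifyAuth("admin", "s3cret")) // true (use_auth defaults to true on a fresh config)
}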

519
internal/config/config.go Normal file

@@ -0,0 +1,519 @@
package config
import (
"cmp"
"crypto/rand"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"os"
"path/filepath"
"runtime"
"strings"
"sync"
)
type RepairStrategy string
const (
RepairStrategyPerFile RepairStrategy = "per_file"
RepairStrategyPerTorrent RepairStrategy = "per_torrent"
)
var (
instance *Config
once sync.Once
configPath string
)
type Debrid struct {
Name string `json:"name,omitempty"`
APIKey string `json:"api_key,omitempty"`
DownloadAPIKeys []string `json:"download_api_keys,omitempty"`
Folder string `json:"folder,omitempty"`
RcloneMountPath string `json:"rclone_mount_path,omitempty"` // Custom rclone mount path for this debrid service
DownloadUncached bool `json:"download_uncached,omitempty"`
CheckCached bool `json:"check_cached,omitempty"`
RateLimit string `json:"rate_limit,omitempty"` // 200/minute or 10/second
RepairRateLimit string `json:"repair_rate_limit,omitempty"`
DownloadRateLimit string `json:"download_rate_limit,omitempty"`
Proxy string `json:"proxy,omitempty"`
UnpackRar bool `json:"unpack_rar,omitempty"`
AddSamples bool `json:"add_samples,omitempty"`
MinimumFreeSlot int `json:"minimum_free_slot,omitempty"` // Minimum free slots required to use this debrid
Limit int `json:"limit,omitempty"` // Maximum number of total torrents
UseWebDav bool `json:"use_webdav,omitempty"`
WebDav
}
type QBitTorrent struct {
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
Port string `json:"port,omitempty"` // deprecated
DownloadFolder string `json:"download_folder,omitempty"`
Categories []string `json:"categories,omitempty"`
RefreshInterval int `json:"refresh_interval,omitempty"`
SkipPreCache bool `json:"skip_pre_cache,omitempty"`
MaxDownloads int `json:"max_downloads,omitempty"`
AlwaysRmTrackerUrls bool `json:"always_rm_tracker_urls,omitempty"`
}
type Arr struct {
Name string `json:"name,omitempty"`
Host string `json:"host,omitempty"`
Token string `json:"token,omitempty"`
Cleanup bool `json:"cleanup,omitempty"`
SkipRepair bool `json:"skip_repair,omitempty"`
DownloadUncached *bool `json:"download_uncached,omitempty"`
SelectedDebrid string `json:"selected_debrid,omitempty"`
Source string `json:"source,omitempty"` // The source of the arr, e.g. "auto", "config", "". Auto means it was automatically detected from the arr
}
type Repair struct {
Enabled bool `json:"enabled,omitempty"`
Interval string `json:"interval,omitempty"`
ZurgURL string `json:"zurg_url,omitempty"`
AutoProcess bool `json:"auto_process,omitempty"`
UseWebDav bool `json:"use_webdav,omitempty"`
Workers int `json:"workers,omitempty"`
ReInsert bool `json:"reinsert,omitempty"`
Strategy RepairStrategy `json:"strategy,omitempty"`
}
type Auth struct {
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
APIToken string `json:"api_token,omitempty"`
}
type Rclone struct {
// Global mount folder where all providers will be mounted as subfolders
Enabled bool `json:"enabled,omitempty"`
MountPath string `json:"mount_path,omitempty"`
RcPort string `json:"rc_port,omitempty"`
// Cache settings
CacheDir string `json:"cache_dir,omitempty"`
// VFS settings
VfsCacheMode string `json:"vfs_cache_mode,omitempty"` // off, minimal, writes, full
VfsCacheMaxAge string `json:"vfs_cache_max_age,omitempty"` // Maximum age of objects in the cache (default 1h)
VfsDiskSpaceTotal string `json:"vfs_disk_space_total,omitempty"` // Total disk space available for the cache (default off)
VfsCacheMaxSize string `json:"vfs_cache_max_size,omitempty"` // Maximum size of the cache (default off)
VfsCachePollInterval string `json:"vfs_cache_poll_interval,omitempty"` // How often to poll for changes (default 1m)
VfsReadChunkSize string `json:"vfs_read_chunk_size,omitempty"` // Read chunk size (default 128M)
VfsReadChunkSizeLimit string `json:"vfs_read_chunk_size_limit,omitempty"` // Max chunk size (default off)
VfsReadAhead string `json:"vfs_read_ahead,omitempty"` // read ahead size
BufferSize string `json:"buffer_size,omitempty"` // Buffer size for reading files (default 16M)
BwLimit string `json:"bw_limit,omitempty"` // Bandwidth limit (default off)
VfsCacheMinFreeSpace string `json:"vfs_cache_min_free_space,omitempty"`
VfsFastFingerprint bool `json:"vfs_fast_fingerprint,omitempty"`
VfsReadChunkStreams int `json:"vfs_read_chunk_streams,omitempty"`
AsyncRead *bool `json:"async_read,omitempty"` // Use async read for files
Transfers int `json:"transfers,omitempty"` // Number of transfers to use (default 4)
UseMmap bool `json:"use_mmap,omitempty"`
// File system settings
UID uint32 `json:"uid,omitempty"` // User ID for mounted files
GID uint32 `json:"gid,omitempty"` // Group ID for mounted files
Umask string `json:"umask,omitempty"`
// Timeout settings
AttrTimeout string `json:"attr_timeout,omitempty"` // Attribute cache timeout (default 1s)
DirCacheTime string `json:"dir_cache_time,omitempty"` // Directory cache time (default 5m)
// Performance settings
NoModTime bool `json:"no_modtime,omitempty"` // Don't read/write modification time
NoChecksum bool `json:"no_checksum,omitempty"` // Don't checksum files on upload
LogLevel string `json:"log_level,omitempty"`
}
type Config struct {
// server
BindAddress string `json:"bind_address,omitempty"`
URLBase string `json:"url_base,omitempty"`
Port string `json:"port,omitempty"`
LogLevel string `json:"log_level,omitempty"`
Debrids []Debrid `json:"debrids,omitempty"`
QBitTorrent QBitTorrent `json:"qbittorrent,omitempty"`
Arrs []Arr `json:"arrs,omitempty"`
Repair Repair `json:"repair,omitempty"`
WebDav WebDav `json:"webdav,omitempty"`
Rclone Rclone `json:"rclone,omitempty"`
AllowedExt []string `json:"allowed_file_types,omitempty"`
MinFileSize string `json:"min_file_size,omitempty"` // Minimum file size to download, 10MB, 1GB, etc
MaxFileSize string `json:"max_file_size,omitempty"` // Maximum file size to download (0 means no limit)
Path string `json:"-"` // Path to save the config file
UseAuth bool `json:"use_auth,omitempty"`
Auth *Auth `json:"-"`
DiscordWebhook string `json:"discord_webhook_url,omitempty"`
RemoveStalledAfter string `json:"remove_stalled_after,omitzero"`
CallbackURL string `json:"callback_url,omitempty"`
EnableWebdavAuth bool `json:"enable_webdav_auth,omitempty"`
}
func (c *Config) JsonFile() string {
return filepath.Join(c.Path, "config.json")
}
func (c *Config) AuthFile() string {
return filepath.Join(c.Path, "auth.json")
}
func (c *Config) TorrentsFile() string {
return filepath.Join(c.Path, "torrents.json")
}
func (c *Config) loadConfig() error {
// Load the config file
if configPath == "" {
return fmt.Errorf("config path not set")
}
c.Path = configPath
file, err := os.ReadFile(c.JsonFile())
if err != nil {
if os.IsNotExist(err) {
fmt.Printf("Config file not found, creating a new one at %s\n", c.JsonFile())
// Create a default config file if it doesn't exist
if err := c.createConfig(c.Path); err != nil {
return fmt.Errorf("failed to create config file: %w", err)
}
return c.Save()
}
return fmt.Errorf("error reading config file: %w", err)
}
if err := json.Unmarshal(file, &c); err != nil {
return fmt.Errorf("error unmarshaling config: %w", err)
}
c.setDefaults()
return nil
}
func validateDebrids(debrids []Debrid) error {
if len(debrids) == 0 {
return errors.New("no debrids configured")
}
for _, debrid := range debrids {
// Basic field validation
if debrid.APIKey == "" {
return errors.New("debrid api key is required")
}
if debrid.Folder == "" {
return errors.New("debrid folder is required")
}
}
return nil
}
func validateQbitTorrent(config *QBitTorrent) error {
if config.DownloadFolder == "" {
return errors.New("qbittorrent download folder is required")
}
if _, err := os.Stat(config.DownloadFolder); os.IsNotExist(err) {
return fmt.Errorf("qbittorrent download folder (%s) does not exist", config.DownloadFolder)
}
return nil
}
func validateRepair(config *Repair) error {
if !config.Enabled {
return nil
}
if config.Interval == "" {
return errors.New("repair interval is required")
}
return nil
}
func ValidateConfig(config *Config) error {
// Validate each section in turn
if err := validateDebrids(config.Debrids); err != nil {
return err
}
if err := validateQbitTorrent(&config.QBitTorrent); err != nil {
return err
}
if err := validateRepair(&config.Repair); err != nil {
return err
}
return nil
}
// generateAPIToken creates a new random API token
func generateAPIToken() (string, error) {
bytes := make([]byte, 32) // 256-bit token
if _, err := rand.Read(bytes); err != nil {
return "", err
}
return hex.EncodeToString(bytes), nil
}
func SetConfigPath(path string) {
configPath = path
}
func Get() *Config {
once.Do(func() {
instance = &Config{} // Initialize instance first
if err := instance.loadConfig(); err != nil {
_, _ = fmt.Fprintf(os.Stderr, "Configuration error: %v\n", err)
os.Exit(1)
}
})
return instance
}
func (c *Config) GetMinFileSize() int64 {
// 0 means no limit
if c.MinFileSize == "" {
return 0
}
s, err := ParseSize(c.MinFileSize)
if err != nil {
return 0
}
return s
}
func (c *Config) GetMaxFileSize() int64 {
// 0 means no limit
if c.MaxFileSize == "" {
return 0
}
s, err := ParseSize(c.MaxFileSize)
if err != nil {
return 0
}
return s
}
func (c *Config) IsSizeAllowed(size int64) bool {
if size == 0 {
return true // Maybe the debrid hasn't reported the size yet
}
if c.GetMinFileSize() > 0 && size < c.GetMinFileSize() {
return false
}
if c.GetMaxFileSize() > 0 && size > c.GetMaxFileSize() {
return false
}
return true
}
func (c *Config) SecretKey() string {
return cmp.Or(os.Getenv("DECYPHARR_SECRET_KEY"), "\"wqj(v%lj*!-+kf@4&i95rhh_!5_px5qnuwqbr%cjrvrozz_r*(\"")
}
func (c *Config) GetAuth() *Auth {
if !c.UseAuth {
return nil
}
if c.Auth == nil {
c.Auth = &Auth{}
if _, err := os.Stat(c.AuthFile()); err == nil {
file, err := os.ReadFile(c.AuthFile())
if err == nil {
_ = json.Unmarshal(file, c.Auth)
}
}
}
return c.Auth
}
func (c *Config) SaveAuth(auth *Auth) error {
c.Auth = auth
data, err := json.Marshal(auth)
if err != nil {
return err
}
return os.WriteFile(c.AuthFile(), data, 0644)
}
func (c *Config) CheckSetup() error {
return ValidateConfig(c)
}
func (c *Config) NeedsAuth() bool {
return c.UseAuth && (c.Auth == nil || c.Auth.Username == "" || c.Auth.Password == "")
}
func (c *Config) updateDebrid(d Debrid) Debrid {
workers := runtime.NumCPU() * 50
perDebrid := workers / len(c.Debrids)
var downloadKeys []string
if len(d.DownloadAPIKeys) > 0 {
downloadKeys = d.DownloadAPIKeys
} else {
// If no download API keys are specified, use the main API key
downloadKeys = []string{d.APIKey}
}
d.DownloadAPIKeys = downloadKeys
if !d.UseWebDav {
return d
}
if d.TorrentsRefreshInterval == "" {
d.TorrentsRefreshInterval = cmp.Or(c.WebDav.TorrentsRefreshInterval, "45s") // 45 seconds
}
if d.WebDav.DownloadLinksRefreshInterval == "" {
d.DownloadLinksRefreshInterval = cmp.Or(c.WebDav.DownloadLinksRefreshInterval, "40m") // 40 minutes
}
if d.Workers == 0 {
d.Workers = perDebrid
}
if d.FolderNaming == "" {
d.FolderNaming = cmp.Or(c.WebDav.FolderNaming, "original_no_ext")
}
if d.AutoExpireLinksAfter == "" {
d.AutoExpireLinksAfter = cmp.Or(c.WebDav.AutoExpireLinksAfter, "3d") // 3 days
}
// Merge debrid specified directories with global directories
directories := c.WebDav.Directories
if directories == nil {
directories = make(map[string]WebdavDirectories)
}
for name, dir := range d.Directories {
directories[name] = dir
}
d.Directories = directories
d.RcUrl = cmp.Or(d.RcUrl, c.WebDav.RcUrl)
d.RcUser = cmp.Or(d.RcUser, c.WebDav.RcUser)
d.RcPass = cmp.Or(d.RcPass, c.WebDav.RcPass)
return d
}
func (c *Config) setDefaults() {
for i, debrid := range c.Debrids {
c.Debrids[i] = c.updateDebrid(debrid)
}
if len(c.AllowedExt) == 0 {
c.AllowedExt = getDefaultExtensions()
}
c.Port = cmp.Or(c.Port, c.QBitTorrent.Port)
if c.URLBase == "" {
c.URLBase = "/"
}
// validate url base starts with /
if !strings.HasPrefix(c.URLBase, "/") {
c.URLBase = "/" + c.URLBase
}
if !strings.HasSuffix(c.URLBase, "/") {
c.URLBase += "/"
}
// Set repair defaults
if c.Repair.Strategy == "" {
c.Repair.Strategy = RepairStrategyPerTorrent
}
// Rclone defaults
if c.Rclone.Enabled {
c.Rclone.RcPort = cmp.Or(c.Rclone.RcPort, "5572")
if c.Rclone.AsyncRead == nil {
_asyncTrue := true
c.Rclone.AsyncRead = &_asyncTrue
}
c.Rclone.VfsCacheMode = cmp.Or(c.Rclone.VfsCacheMode, "off")
if c.Rclone.UID == 0 {
c.Rclone.UID = uint32(os.Getuid())
}
if c.Rclone.GID == 0 {
if runtime.GOOS == "windows" {
// On Windows, we use the current user's SID as GID
c.Rclone.GID = uint32(os.Getuid()) // Windows does not have GID, using UID instead
} else {
c.Rclone.GID = uint32(os.Getgid())
}
}
if c.Rclone.Transfers == 0 {
c.Rclone.Transfers = 4 // Default number of transfers
}
if c.Rclone.VfsCacheMode != "off" {
c.Rclone.VfsCachePollInterval = cmp.Or(c.Rclone.VfsCachePollInterval, "1m") // Clean cache every minute
}
c.Rclone.DirCacheTime = cmp.Or(c.Rclone.DirCacheTime, "5m")
c.Rclone.LogLevel = cmp.Or(c.Rclone.LogLevel, "INFO")
}
// Load the auth file
c.Auth = c.GetAuth()
// Generate API token if auth is enabled and no token exists
if c.UseAuth {
if c.Auth == nil {
c.Auth = &Auth{}
}
if c.Auth.APIToken == "" {
if token, err := generateAPIToken(); err == nil {
c.Auth.APIToken = token
// Save the updated auth config
_ = c.SaveAuth(c.Auth)
}
}
}
}
func (c *Config) Save() error {
c.setDefaults()
data, err := json.MarshalIndent(c, "", " ")
if err != nil {
return err
}
if err := os.WriteFile(c.JsonFile(), data, 0644); err != nil {
return err
}
return nil
}
func (c *Config) createConfig(path string) error {
// Create the directory if it doesn't exist
if err := os.MkdirAll(path, 0755); err != nil {
return fmt.Errorf("failed to create config directory: %w", err)
}
c.Path = path
c.URLBase = "/"
c.Port = "8282"
c.LogLevel = "info"
c.UseAuth = true
c.QBitTorrent = QBitTorrent{
DownloadFolder: filepath.Join(path, "downloads"),
Categories: []string{"sonarr", "radarr"},
RefreshInterval: 15,
}
return nil
}
// Reload forces a reload of the configuration from disk
func Reload() {
instance = nil
once = sync.Once{}
}
func DefaultFreeSlot() int {
return 10
}
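
The config package is a lazily loaded singleton: SetConfigPath must run before the first Get(), which writes a default config.json when none exists and exits the process if loading fails. A hedged bootstrap sketch with an illustrative path:

package main

import (
	"fmt"

	"github.com/sirrobot01/decypharr/internal/config"
)

func main() {
	config.SetConfigPath("/tmp/decypharr") // must happen before the first Get()

	cfg := config.Get() // creates a default config.json on first run; exits on load errors

	// CheckSetup runs the debrid, qBittorrent and repair validations in one shot.
	if err := cfg.CheckSetup(); err != nil {
		fmt.Println("setup incomplete:", err)
	}
	fmt.Println("listening on port", cfg.Port, "with URL base", cfg.URLBase) // 8282 and "/" by default
}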

75
internal/config/misc.go Normal file

@@ -0,0 +1,75 @@
package config
import (
"path/filepath"
"sort"
"strconv"
"strings"
)
func (c *Config) IsAllowedFile(filename string) bool {
ext := strings.ToLower(filepath.Ext(filename))
if ext == "" {
return false
}
// Remove the leading dot
ext = ext[1:]
for _, allowed := range c.AllowedExt {
if ext == allowed {
return true
}
}
return false
}
func getDefaultExtensions() []string {
videoExts := strings.Split("webm,m4v,3gp,nsv,ty,strm,rm,rmvb,m3u,ifo,mov,qt,divx,xvid,bivx,nrg,pva,wmv,asf,asx,ogm,ogv,m2v,avi,bin,dat,dvr-ms,mpg,mpeg,mp4,avc,vp3,svq3,nuv,viv,dv,fli,flv,wpl,vob,mkv,mk3d,ts,wtv,m2ts", ",")
musicExts := strings.Split("MP3,WAV,FLAC,OGG,WMA,AIFF,ALAC,M4A,APE,AC3,DTS,M4P,MID,MIDI,MKA,MP2,MPA,RA,VOC,WV,AMR", ",")
// Combine both slices
allExts := append(videoExts, musicExts...)
// Convert to lowercase
for i, ext := range allExts {
allExts[i] = strings.ToLower(ext)
}
// Remove duplicates
seen := make(map[string]struct{})
var unique []string
for _, ext := range allExts {
if _, ok := seen[ext]; !ok {
seen[ext] = struct{}{}
unique = append(unique, ext)
}
}
sort.Strings(unique)
return unique
}
func ParseSize(sizeStr string) (int64, error) {
sizeStr = strings.ToUpper(strings.TrimSpace(sizeStr))
// Parse human-readable sizes such as "10MB" or "1.5GB" into bytes
multiplier := 1.0
if strings.HasSuffix(sizeStr, "GB") {
multiplier = 1024 * 1024 * 1024
sizeStr = strings.TrimSuffix(sizeStr, "GB")
} else if strings.HasSuffix(sizeStr, "MB") {
multiplier = 1024 * 1024
sizeStr = strings.TrimSuffix(sizeStr, "MB")
} else if strings.HasSuffix(sizeStr, "KB") {
multiplier = 1024
sizeStr = strings.TrimSuffix(sizeStr, "KB")
}
size, err := strconv.ParseFloat(sizeStr, 64)
if err != nil {
return 0, err
}
return int64(size * multiplier), nil
}
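
ParseSize treats KB/MB/GB suffixes as base-1024 multiples and falls back to plain bytes, which is how the min_file_size and max_file_size strings are resolved. A small sketch of the expected conversions:

package main

import (
	"fmt"

	"github.com/sirrobot01/decypharr/internal/config"
)

func main() {
	for _, s := range []string{"512", "10MB", "1.5GB"} {
		n, err := config.ParseSize(s)
		fmt.Println(s, "=>", n, err) // 512, 10485760 and 1610612736 bytes
	}

	cfg := &config.Config{MinFileSize: "10MB", MaxFileSize: "2GB"}
	fmt.Println(cfg.IsSizeAllowed(5 * 1024 * 1024))   // false: below the minimum
	fmt.Println(cfg.IsSizeAllowed(700 * 1024 * 1024)) // true
	fmt.Println(cfg.IsSizeAllowed(0))                 // true: unknown sizes are always allowed
}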

26
internal/config/webdav.go Normal file

@@ -0,0 +1,26 @@
package config
type WebdavDirectories struct {
Filters map[string]string `json:"filters,omitempty"`
//SaveStrms bool `json:"save_streams,omitempty"`
}
type WebDav struct {
TorrentsRefreshInterval string `json:"torrents_refresh_interval,omitempty"`
DownloadLinksRefreshInterval string `json:"download_links_refresh_interval,omitempty"`
Workers int `json:"workers,omitempty"`
AutoExpireLinksAfter string `json:"auto_expire_links_after,omitempty"`
ServeFromRclone bool `json:"serve_from_rclone,omitempty"`
// Folder
FolderNaming string `json:"folder_naming,omitempty"`
// Rclone
RcUrl string `json:"rc_url,omitempty"`
RcUser string `json:"rc_user,omitempty"`
RcPass string `json:"rc_pass,omitempty"`
RcRefreshDirs string `json:"rc_refresh_dirs,omitempty"` // comma separated list of directories to refresh
// Directories
Directories map[string]WebdavDirectories `json:"directories,omitempty"`
}

114
internal/logger/logger.go Normal file

@@ -0,0 +1,114 @@
package logger
import (
"fmt"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"gopkg.in/natefinch/lumberjack.v2"
"os"
"path/filepath"
"strings"
"sync"
)
var (
once sync.Once
logger zerolog.Logger
)
func GetLogPath() string {
cfg := config.Get()
logsDir := filepath.Join(cfg.Path, "logs")
if _, err := os.Stat(logsDir); os.IsNotExist(err) {
if err := os.MkdirAll(logsDir, 0755); err != nil {
panic(fmt.Sprintf("Failed to create logs directory: %v", err))
}
}
return logsDir
}
func New(prefix string) zerolog.Logger {
level := config.Get().LogLevel
rotatingLogFile := &lumberjack.Logger{
Filename: filepath.Join(GetLogPath(), "decypharr.log"),
MaxSize: 10,
MaxAge: 15,
Compress: true,
}
consoleWriter := zerolog.ConsoleWriter{
Out: os.Stdout,
TimeFormat: "2006-01-02 15:04:05",
NoColor: false, // Set to true if you don't want colors
FormatLevel: func(i interface{}) string {
var colorCode string
switch strings.ToLower(fmt.Sprintf("%s", i)) {
case "debug":
colorCode = "\033[36m"
case "info":
colorCode = "\033[32m"
case "warn":
colorCode = "\033[33m"
case "error":
colorCode = "\033[31m"
case "fatal":
colorCode = "\033[35m"
case "panic":
colorCode = "\033[41m"
default:
colorCode = "\033[37m" // White
}
return fmt.Sprintf("%s| %-6s|\033[0m", colorCode, strings.ToUpper(fmt.Sprintf("%s", i)))
},
FormatMessage: func(i interface{}) string {
return fmt.Sprintf("[%s] %v", prefix, i)
},
}
fileWriter := zerolog.ConsoleWriter{
Out: rotatingLogFile,
TimeFormat: "2006-01-02 15:04:05",
NoColor: true, // No colors in file output
FormatLevel: func(i interface{}) string {
return strings.ToUpper(fmt.Sprintf("| %-6s|", i))
},
FormatMessage: func(i interface{}) string {
return fmt.Sprintf("[%s] %v", prefix, i)
},
}
multi := zerolog.MultiLevelWriter(consoleWriter, fileWriter)
logger := zerolog.New(multi).
With().
Timestamp().
Logger().
Level(zerolog.InfoLevel)
// Set the log level
level = strings.ToLower(level)
switch level {
case "debug":
logger = logger.Level(zerolog.DebugLevel)
case "info":
logger = logger.Level(zerolog.InfoLevel)
case "warn":
logger = logger.Level(zerolog.WarnLevel)
case "error":
logger = logger.Level(zerolog.ErrorLevel)
case "trace":
logger = logger.Level(zerolog.TraceLevel)
}
return logger
}
func Default() zerolog.Logger {
once.Do(func() {
logger = New("decypharr")
})
return logger
}
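
New builds a prefixed zerolog logger that writes coloured output to stdout and plain output to a rotating logs/decypharr.log, while Default memoizes a shared instance. A hedged usage sketch; the prefix and fields are illustrative, and a config path must be set first because the log level and directory come from the config:

package main

import (
	"github.com/sirrobot01/decypharr/internal/config"
	"github.com/sirrobot01/decypharr/internal/logger"
)

func main() {
	config.SetConfigPath("/tmp/decypharr") // log level and log directory are read from the config

	log := logger.New("webdav") // every message is rendered as "[webdav] ..."
	log.Info().Str("mount", "/mnt/remote").Msg("mount ready")
	log.Debug().Msg("only visible when log_level is debug or trace")

	logger.Default().Warn().Msg("shared fallback logger for packages without their own prefix")
}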

102
internal/request/discord.go Normal file

@@ -0,0 +1,102 @@
package request
import (
"bytes"
"encoding/json"
"fmt"
"github.com/sirrobot01/decypharr/internal/config"
"io"
"net/http"
"strings"
)
type DiscordEmbed struct {
Title string `json:"title"`
Description string `json:"description"`
Color int `json:"color"`
}
type DiscordWebhook struct {
Embeds []DiscordEmbed `json:"embeds"`
}
func getDiscordColor(status string) int {
switch status {
case "success":
return 3066993
case "error":
return 15158332
case "warning":
return 15844367
case "pending":
return 3447003
default:
return 0
}
}
func getDiscordHeader(event string) string {
switch event {
case "download_complete":
return "[Decypharr] Download Completed"
case "download_failed":
return "[Decypharr] Download Failed"
case "repair_pending":
return "[Decypharr] Repair Completed, Awaiting action"
case "repair_complete":
return "[Decypharr] Repair Complete"
case "repair_cancelled":
return "[Decypharr] Repair Cancelled"
default:
// split the event string and capitalize the first letter of each word
evs := strings.Split(event, "_")
for i, ev := range evs {
if ev == "" {
continue
}
evs[i] = strings.ToUpper(ev[:1]) + ev[1:]
}
return "[Decypharr] " + strings.Join(evs, " ")
}
}
func SendDiscordMessage(event string, status string, message string) error {
cfg := config.Get()
webhookURL := cfg.DiscordWebhook
if webhookURL == "" {
return nil
}
// Create the proper Discord webhook structure
webhook := DiscordWebhook{
Embeds: []DiscordEmbed{
{
Title: getDiscordHeader(event),
Description: message,
Color: getDiscordColor(status),
},
},
}
payload, err := json.Marshal(webhook)
if err != nil {
return fmt.Errorf("failed to marshal discord payload: %v", err)
}
req, err := http.NewRequest(http.MethodPost, webhookURL, bytes.NewReader(payload))
if err != nil {
return fmt.Errorf("failed to create discord request: %v", err)
}
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return fmt.Errorf("failed to send discord message: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
bodyBytes, _ := io.ReadAll(resp.Body)
return fmt.Errorf("discord returned error status code: %s, body: %s", resp.Status, string(bodyBytes))
}
return nil
}
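
SendDiscordMessage is a no-op when discord_webhook_url is unset, so call sites can fire and forget. A hedged sketch; repair_started is an illustrative event that falls through to the default title casing, and the config path is illustrative too:

package main

import (
	"log"

	"github.com/sirrobot01/decypharr/internal/config"
	"github.com/sirrobot01/decypharr/internal/request"
)

func main() {
	config.SetConfigPath("/tmp/decypharr") // webhook URL is read from discord_webhook_url in config.json

	// "download_complete" with "success" renders a green "[Decypharr] Download Completed" embed.
	if err := request.SendDiscordMessage("download_complete", "success", "Ubuntu 25.04 ISO finished downloading"); err != nil {
		log.Println("discord notification failed:", err)
	}

	// Unknown events fall through to the default case and are built from the snake_case name.
	_ = request.SendDiscordMessage("repair_started", "pending", "Repair job queued")
}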

464
internal/request/request.go Normal file

@@ -0,0 +1,464 @@
package request
import (
"bytes"
"context"
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"io"
"math/rand"
"net"
"net/http"
"net/url"
"strconv"
"strings"
"sync"
"time"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/logger"
"go.uber.org/ratelimit"
"golang.org/x/net/proxy"
)
func JoinURL(base string, paths ...string) (string, error) {
// Split the last path component to separate query parameters
lastPath := paths[len(paths)-1]
parts := strings.Split(lastPath, "?")
paths[len(paths)-1] = parts[0]
joined, err := url.JoinPath(base, paths...)
if err != nil {
return "", err
}
// Add back query parameters if they exist
if len(parts) > 1 {
return joined + "?" + parts[1], nil
}
return joined, nil
}
var (
once sync.Once
instance *Client
)
type ClientOption func(*Client)
// Client represents an HTTP client with additional capabilities
type Client struct {
client *http.Client
rateLimiter ratelimit.Limiter
headers map[string]string
headersMu sync.RWMutex
maxRetries int
timeout time.Duration
skipTLSVerify bool
retryableStatus map[int]struct{}
logger zerolog.Logger
proxy string
}
// WithMaxRetries sets the maximum number of retry attempts
func WithMaxRetries(maxRetries int) ClientOption {
return func(c *Client) {
c.maxRetries = maxRetries
}
}
// WithTimeout sets the request timeout
func WithTimeout(timeout time.Duration) ClientOption {
return func(c *Client) {
c.timeout = timeout
}
}
func WithRedirectPolicy(policy func(req *http.Request, via []*http.Request) error) ClientOption {
return func(c *Client) {
c.client.CheckRedirect = policy
}
}
// WithRateLimiter sets a rate limiter
func WithRateLimiter(rl ratelimit.Limiter) ClientOption {
return func(c *Client) {
c.rateLimiter = rl
}
}
// WithHeaders sets default headers
func WithHeaders(headers map[string]string) ClientOption {
return func(c *Client) {
c.headersMu.Lock()
c.headers = headers
c.headersMu.Unlock()
}
}
func (c *Client) SetHeader(key, value string) {
c.headersMu.Lock()
c.headers[key] = value
c.headersMu.Unlock()
}
func WithLogger(logger zerolog.Logger) ClientOption {
return func(c *Client) {
c.logger = logger
}
}
func WithTransport(transport *http.Transport) ClientOption {
return func(c *Client) {
c.client.Transport = transport
}
}
// WithRetryableStatus adds status codes that should trigger a retry
func WithRetryableStatus(statusCodes ...int) ClientOption {
return func(c *Client) {
c.retryableStatus = make(map[int]struct{}) // reset the map
for _, code := range statusCodes {
c.retryableStatus[code] = struct{}{}
}
}
}
func WithProxy(proxyURL string) ClientOption {
return func(c *Client) {
c.proxy = proxyURL
}
}
// doRequest performs a single HTTP request with rate limiting
func (c *Client) doRequest(req *http.Request) (*http.Response, error) {
if c.rateLimiter != nil {
select {
case <-req.Context().Done():
return nil, req.Context().Err()
default:
c.rateLimiter.Take()
}
}
return c.client.Do(req)
}
// Do performs an HTTP request with retries for certain status codes
func (c *Client) Do(req *http.Request) (*http.Response, error) {
// Save the request body for reuse in retries
var bodyBytes []byte
var err error
if req.Body != nil {
bodyBytes, err = io.ReadAll(req.Body)
if err != nil {
return nil, fmt.Errorf("reading request body: %w", err)
}
req.Body.Close()
}
backoff := time.Millisecond * 500
var resp *http.Response
for attempt := 0; attempt <= c.maxRetries; attempt++ {
// Reset the request body if it exists
if bodyBytes != nil {
req.Body = io.NopCloser(bytes.NewReader(bodyBytes))
}
// Apply headers
c.headersMu.RLock()
if c.headers != nil {
for key, value := range c.headers {
req.Header.Set(key, value)
}
}
c.headersMu.RUnlock()
resp, err = c.doRequest(req)
if err != nil {
// Check if this is a network error that might be worth retrying
if isRetryableError(err) && attempt < c.maxRetries {
// Apply backoff with jitter
jitter := time.Duration(rand.Int63n(int64(backoff / 4)))
sleepTime := backoff + jitter
select {
case <-req.Context().Done():
return nil, req.Context().Err()
case <-time.After(sleepTime):
// Continue to next retry attempt
}
// Exponential backoff
backoff *= 2
continue
}
return nil, err
}
// Check if the status code is retryable
if _, ok := c.retryableStatus[resp.StatusCode]; !ok || attempt == c.maxRetries {
return resp, nil
}
// Close the response body before retrying
resp.Body.Close()
// Apply backoff with jitter
jitter := time.Duration(rand.Int63n(int64(backoff / 4)))
sleepTime := backoff + jitter
select {
case <-req.Context().Done():
return nil, req.Context().Err()
case <-time.After(sleepTime):
// Continue to next retry attempt
}
// Exponential backoff
backoff *= 2
}
return nil, fmt.Errorf("max retries exceeded")
}
// MakeRequest performs an HTTP request and returns the response body as bytes
func (c *Client) MakeRequest(req *http.Request) ([]byte, error) {
res, err := c.Do(req)
if err != nil {
return nil, err
}
defer func() {
if err := res.Body.Close(); err != nil {
c.logger.Printf("Failed to close response body: %v", err)
}
}()
bodyBytes, err := io.ReadAll(res.Body)
if err != nil {
return nil, fmt.Errorf("reading response body: %w", err)
}
if res.StatusCode < 200 || res.StatusCode >= 300 {
return nil, fmt.Errorf("HTTP error %d: %s", res.StatusCode, string(bodyBytes))
}
return bodyBytes, nil
}
func (c *Client) Get(url string) (*http.Response, error) {
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
return nil, fmt.Errorf("creating GET request: %w", err)
}
return c.Do(req)
}
// New creates a new HTTP client with the specified options
func New(options ...ClientOption) *Client {
client := &Client{
maxRetries: 3,
skipTLSVerify: true,
retryableStatus: map[int]struct{}{
http.StatusTooManyRequests: struct{}{},
http.StatusInternalServerError: struct{}{},
http.StatusBadGateway: struct{}{},
http.StatusServiceUnavailable: struct{}{},
http.StatusGatewayTimeout: struct{}{},
},
logger: logger.New("request"),
timeout: 60 * time.Second,
proxy: "",
headers: make(map[string]string),
}
// default http client
client.client = &http.Client{
Timeout: client.timeout,
}
// Apply options before configuring transport
for _, option := range options {
option(client)
}
// Re-apply the timeout so WithTimeout takes effect on the already-created http.Client
client.client.Timeout = client.timeout
// Check if transport was set by WithTransport option
if client.client.Transport == nil {
transport := &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: client.skipTLSVerify,
},
DisableKeepAlives: false,
}
// Configure proxy if needed
SetProxy(transport, client.proxy)
// Set the transport to the client
client.client.Transport = transport
}
return client
}
func ParseRateLimit(rateStr string) ratelimit.Limiter {
if rateStr == "" {
return nil
}
parts := strings.SplitN(rateStr, "/", 2)
if len(parts) != 2 {
return nil
}
// parse count
count, err := strconv.Atoi(strings.TrimSpace(parts[0]))
if err != nil || count <= 0 {
return nil
}
// Set slack size to 10%
slackSize := count / 10
// normalize unit
unit := strings.ToLower(strings.TrimSpace(parts[1]))
unit = strings.TrimSuffix(unit, "s")
switch unit {
case "minute", "min":
return ratelimit.New(count, ratelimit.Per(time.Minute), ratelimit.WithSlack(slackSize))
case "second", "sec":
return ratelimit.New(count, ratelimit.Per(time.Second), ratelimit.WithSlack(slackSize))
case "hour", "hr":
return ratelimit.New(count, ratelimit.Per(time.Hour), ratelimit.WithSlack(slackSize))
case "day", "d":
return ratelimit.New(count, ratelimit.Per(24*time.Hour), ratelimit.WithSlack(slackSize))
default:
return nil
}
}
func JSONResponse(w http.ResponseWriter, data interface{}, code int) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
err := json.NewEncoder(w).Encode(data)
if err != nil {
return
}
}
func Default() *Client {
once.Do(func() {
instance = New()
})
return instance
}
func isRetryableError(err error) bool {
errString := err.Error()
// Connection reset and other network errors
if strings.Contains(errString, "connection reset by peer") ||
strings.Contains(errString, "read: connection reset") ||
strings.Contains(errString, "connection refused") ||
strings.Contains(errString, "network is unreachable") ||
strings.Contains(errString, "connection timed out") ||
strings.Contains(errString, "no such host") ||
strings.Contains(errString, "i/o timeout") ||
strings.Contains(errString, "unexpected EOF") ||
strings.Contains(errString, "TLS handshake timeout") {
return true
}
// Check for net.Error type which can provide more information
var netErr net.Error
if errors.As(err, &netErr) {
// Retry on timeout errors and temporary errors
return netErr.Timeout()
}
// Not a retryable error
return false
}
func SetProxy(transport *http.Transport, proxyURL string) {
if proxyURL != "" {
if strings.HasPrefix(proxyURL, "socks5://") {
// Handle SOCKS5 proxy
socksURL, err := url.Parse(proxyURL)
if err != nil {
return
} else {
auth := &proxy.Auth{}
if socksURL.User != nil {
auth.User = socksURL.User.Username()
password, _ := socksURL.User.Password()
auth.Password = password
}
dialer, err := proxy.SOCKS5("tcp", socksURL.Host, auth, proxy.Direct)
if err != nil {
return
} else {
transport.DialContext = func(ctx context.Context, network, addr string) (net.Conn, error) {
return dialer.Dial(network, addr)
}
}
}
} else {
_proxy, err := url.Parse(proxyURL)
if err != nil {
return
} else {
transport.Proxy = http.ProxyURL(_proxy)
}
}
} else {
transport.Proxy = http.ProxyFromEnvironment
}
return
}
func ValidateURL(urlStr string) error {
if urlStr == "" {
return fmt.Errorf("URL cannot be empty")
}
// Try parsing as full URL first
u, err := url.Parse(urlStr)
if err == nil && u.Scheme != "" && u.Host != "" {
// It's a full URL, validate scheme
if u.Scheme != "http" && u.Scheme != "https" {
return fmt.Errorf("URL scheme must be http or https")
}
return nil
}
// Check if it's a host:port format (no scheme)
if strings.Contains(urlStr, ":") && !strings.Contains(urlStr, "://") {
// Try parsing with http:// prefix
testURL := "http://" + urlStr
u, err := url.Parse(testURL)
if err != nil {
return fmt.Errorf("invalid host:port format: %w", err)
}
if u.Host == "" {
return fmt.Errorf("host is required in host:port format")
}
// Validate port number
if u.Port() == "" {
return fmt.Errorf("port is required in host:port format")
}
return nil
}
return fmt.Errorf("invalid URL format: %s", urlStr)
}
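
The client is assembled from functional options, and ParseRateLimit turns strings such as "200/minute" into the limiter that Do consults before every attempt. A hedged sketch of wiring a debrid-style client; the URL, header and proxy values are illustrative, and the config path is only needed because the package logger reads it:

package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/sirrobot01/decypharr/internal/config"
	"github.com/sirrobot01/decypharr/internal/request"
)

func main() {
	config.SetConfigPath("/tmp/decypharr") // the default request logger reads the config

	client := request.New(
		request.WithTimeout(30*time.Second),
		request.WithMaxRetries(5),
		request.WithRateLimiter(request.ParseRateLimit("200/minute")),
		request.WithHeaders(map[string]string{"Authorization": "Bearer <api-key>"}),
		request.WithRetryableStatus(http.StatusTooManyRequests, http.StatusBadGateway), // replaces the default set
		request.WithProxy("socks5://127.0.0.1:1080"),
	)

	req, err := http.NewRequest(http.MethodGet, "https://example.com/api/torrents", nil)
	if err != nil {
		panic(err)
	}
	body, err := client.MakeRequest(req) // retries retryable statuses with exponential backoff and jitter
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println(len(body), "bytes received")
}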


@@ -0,0 +1,45 @@
package testutil
import (
"os"
"path/filepath"
"strings"
)
// GetTestDataPath returns the path to the testdata directory in the project root
func GetTestDataPath() string {
return filepath.Join("..", "..", "testdata")
}
// GetTestDataFilePath returns the path to a specific file in the testdata directory
func GetTestDataFilePath(filename string) string {
return filepath.Join(GetTestDataPath(), filename)
}
// GetTestTorrentPath returns the path to the Ubuntu test torrent file
func GetTestTorrentPath() string {
return GetTestDataFilePath("ubuntu-25.04-desktop-amd64.iso.torrent")
}
// GetTestMagnetPath returns the path to the Ubuntu test magnet file
func GetTestMagnetPath() string {
return GetTestDataFilePath("ubuntu-25.04-desktop-amd64.iso.magnet")
}
// GetTestDataBytes reads and returns the raw bytes of a test data file
func GetTestDataBytes(filename string) ([]byte, error) {
filePath := GetTestDataFilePath(filename)
return os.ReadFile(filePath)
}
// GetTestDataContent reads and returns the content of a test data file
func GetTestDataContent(filename string) (string, error) {
content, err := GetTestDataBytes(filename)
return strings.TrimSpace(string(content)), err
}
// GetTestMagnetContent reads and returns the content of the Ubuntu test magnet file
func GetTestMagnetContent() (string, error) {
return GetTestDataContent("ubuntu-25.04-desktop-amd64.iso.magnet")
}
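
These helpers resolve fixtures relative to the calling package ("../../testdata"), so they suit packages two directories below the project root. An illustrative test sketch; the external test package name is an assumption:

package testutil_test

import (
	"testing"

	"github.com/sirrobot01/decypharr/internal/testutil"
)

func TestMagnetFixture(t *testing.T) {
	magnet, err := testutil.GetTestMagnetContent()
	if err != nil {
		t.Skipf("testdata not available: %v", err) // fixture files are optional in this sketch
	}
	if magnet == "" {
		t.Fatal("expected a non-empty magnet link")
	}
}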


@@ -0,0 +1,43 @@
package utils
import (
"sync"
"time"
)
type Debouncer[T any] struct {
mu sync.Mutex
timer *time.Timer
interval time.Duration
caller func(arg T)
}
func NewDebouncer[T any](interval time.Duration, caller func(arg T)) *Debouncer[T] {
return &Debouncer[T]{
interval: interval,
caller: caller,
}
}
func (d *Debouncer[T]) Call(arg T) {
d.mu.Lock()
defer d.mu.Unlock()
if d.timer != nil {
d.timer.Stop()
}
d.timer = time.AfterFunc(d.interval, func() {
d.caller(arg)
})
}
func (d *Debouncer[T]) Stop() {
d.mu.Lock()
defer d.mu.Unlock()
if d.timer != nil {
d.timer.Stop()
d.timer = nil
}
}
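
The generic debouncer postpones the callback until calls have been quiet for the configured interval, and only the most recent argument survives a burst. A hedged sketch; the import path assumes this file sits in internal/utils alongside the other utils files:

package main

import (
	"fmt"
	"time"

	"github.com/sirrobot01/decypharr/internal/utils"
)

func main() {
	// Coalesce bursts of refresh requests into one call 500ms after the last one.
	refresh := utils.NewDebouncer[string](500*time.Millisecond, func(dir string) {
		fmt.Println("refreshing", dir)
	})

	refresh.Call("/mnt/remote/movies")
	refresh.Call("/mnt/remote/tv") // supersedes the pending call above

	time.Sleep(time.Second) // only "refreshing /mnt/remote/tv" is printed
	refresh.Stop()
}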

47
internal/utils/error.go Normal file

@@ -0,0 +1,47 @@
package utils
import "errors"
type HTTPError struct {
StatusCode int
Message string
Code string
}
func (e *HTTPError) Error() string {
return e.Message
}
var HosterUnavailableError = &HTTPError{
StatusCode: 503,
Message: "Hoster is unavailable",
Code: "hoster_unavailable",
}
var TrafficExceededError = &HTTPError{
StatusCode: 503,
Message: "Traffic exceeded",
Code: "traffic_exceeded",
}
var ErrLinkBroken = &HTTPError{
StatusCode: 404,
Message: "File is unavailable",
Code: "file_unavailable",
}
var TorrentNotFoundError = &HTTPError{
StatusCode: 404,
Message: "Torrent not found",
Code: "torrent_not_found",
}
var TooManyActiveDownloadsError = &HTTPError{
StatusCode: 509,
Message: "Too many active downloads",
Code: "too_many_active_downloads",
}
func IsTooManyActiveDownloadsError(err error) bool {
// errors.Is matches the sentinel even when wrapped; errors.As with &TooManyActiveDownloadsError
// would match (and overwrite the sentinel with) any *HTTPError in the chain.
return errors.Is(err, TooManyActiveDownloadsError)
}

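The sentinel *HTTPError values give debrid clients stable status codes to wrap and check. A hedged sketch of how a call site might surface and inspect them; addTorrent and its message are illustrative:

package main

import (
	"errors"
	"fmt"

	"github.com/sirrobot01/decypharr/internal/utils"
)

// addTorrent stands in for a provider call that wraps the sentinel error.
func addTorrent() error {
	return fmt.Errorf("realdebrid: %w", utils.TooManyActiveDownloadsError)
}

func main() {
	err := addTorrent()
	fmt.Println(utils.IsTooManyActiveDownloadsError(err)) // true: the sentinel is found in the wrap chain

	var httpErr *utils.HTTPError
	if errors.As(err, &httpErr) {
		fmt.Println(httpErr.StatusCode, httpErr.Code) // 509 too_many_active_downloads
	}
}
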
137
internal/utils/file.go Normal file

@@ -0,0 +1,137 @@
package utils
import (
"fmt"
"io"
"net/url"
"os"
"strings"
)
func PathUnescape(path string) string {
// try to use url.PathUnescape
if unescaped, err := url.PathUnescape(path); err == nil {
return unescaped
}
// unescape %
unescapedPath := strings.ReplaceAll(path, "%25", "%")
// add others
return unescapedPath
}
func PreCacheFile(filePaths []string) error {
if len(filePaths) == 0 {
return fmt.Errorf("no file paths provided")
}
for _, filePath := range filePaths {
err := func(f string) error {
file, err := os.Open(f)
if err != nil {
if os.IsNotExist(err) {
// File has probably been moved by arr, return silently
return nil
}
return fmt.Errorf("failed to open file: %s: %v", f, err)
}
defer file.Close()
// Pre-cache the file header (first 256KB) using 16KB chunks.
if err := readSmallChunks(file, 0, 256*1024, 16*1024); err != nil {
return err
}
if err := readSmallChunks(file, 1024*1024, 64*1024, 16*1024); err != nil {
return err
}
return nil
}(filePath)
if err != nil {
return err
}
}
return nil
}
func readSmallChunks(file *os.File, startPos int64, totalToRead int, chunkSize int) error {
_, err := file.Seek(startPos, 0)
if err != nil {
return err
}
buf := make([]byte, chunkSize)
bytesRemaining := totalToRead
for bytesRemaining > 0 {
toRead := chunkSize
if bytesRemaining < chunkSize {
toRead = bytesRemaining
}
n, err := file.Read(buf[:toRead])
if err != nil {
if err == io.EOF {
break
}
return err
}
bytesRemaining -= n
}
return nil
}
func EnsureDir(dirPath string) error {
if dirPath == "" {
return fmt.Errorf("directory path is empty")
}
_, err := os.Stat(dirPath)
if os.IsNotExist(err) {
// Directory does not exist, create it
if err := os.MkdirAll(dirPath, 0755); err != nil {
return fmt.Errorf("failed to create directory: %v", err)
}
return nil
}
return err
}
func FormatSize(bytes int64) string {
const (
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB
)
var size float64
var unit string
switch {
case bytes >= TB:
size = float64(bytes) / TB
unit = "TB"
case bytes >= GB:
size = float64(bytes) / GB
unit = "GB"
case bytes >= MB:
size = float64(bytes) / MB
unit = "MB"
case bytes >= KB:
size = float64(bytes) / KB
unit = "KB"
default:
size = float64(bytes)
unit = "bytes"
}
// Format to 2 decimal places for larger units, no decimals for bytes
if unit == "bytes" {
return fmt.Sprintf("%.0f %s", size, unit)
}
return fmt.Sprintf("%.2f %s", size, unit)
}
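
PreCacheFile warms the first 256KB of each file plus a 64KB window at the 1MB mark so that players probing a freshly mounted file hit already-read ranges, while FormatSize renders byte counts for logs and the UI. A hedged sketch with illustrative paths:

package main

import (
	"fmt"

	"github.com/sirrobot01/decypharr/internal/utils"
)

func main() {
	// Missing files are skipped silently (the arr may already have moved them).
	if err := utils.PreCacheFile([]string{"/mnt/remote/movies/example.mkv"}); err != nil {
		fmt.Println("pre-cache failed:", err)
	}

	fmt.Println(utils.FormatSize(734003200))          // 700.00 MB
	fmt.Println(utils.FormatSize(512))                // 512 bytes
	fmt.Println(utils.FormatSize(1536 * 1024 * 1024)) // 1.50 GB
}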


@@ -1,80 +1,112 @@
package common
package utils
import (
"bufio"
"bytes"
"context"
"encoding/base32"
"encoding/hex"
"fmt"
"github.com/anacrolix/torrent/metainfo"
"io"
"log"
"math/rand"
"net/http"
"net/url"
"os"
"path"
"path/filepath"
"regexp"
"strings"
"time"
"github.com/anacrolix/torrent/metainfo"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
)
var (
hexRegex = regexp.MustCompile("^[0-9a-fA-F]{40}$")
)
type Magnet struct {
Name string
InfoHash string
Size int64
Link string
Name string `json:"name"`
InfoHash string `json:"infoHash"`
Size int64 `json:"size"`
Link string `json:"link"`
File []byte `json:"-"`
}
func GetMagnetFromFile(file io.Reader, filePath string) (*Magnet, error) {
func (m *Magnet) IsTorrent() bool {
return m.File != nil
}
// stripTrackersFromMagnet removes trackers from a magnet and returns a modified copy
func stripTrackersFromMagnet(mi metainfo.Magnet, fileType string) metainfo.Magnet {
originalTrackerCount := len(mi.Trackers)
if len(mi.Trackers) > 0 {
log := logger.Default()
mi.Trackers = nil
log.Printf("Removed %d tracker URLs from %s", originalTrackerCount, fileType)
}
return mi
}
func GetMagnetFromFile(file io.Reader, filePath string, rmTrackerUrls bool) (*Magnet, error) {
var (
m *Magnet
err error
)
if filepath.Ext(filePath) == ".torrent" {
mi, err := metainfo.Load(file)
torrentData, err := io.ReadAll(file)
if err != nil {
return nil, err
}
hash := mi.HashInfoBytes()
infoHash := hash.HexString()
info, err := mi.UnmarshalInfo()
m, err = GetMagnetFromBytes(torrentData, rmTrackerUrls)
if err != nil {
return nil, err
}
magnet := &Magnet{
InfoHash: infoHash,
Name: info.Name,
Size: info.Length,
Link: mi.Magnet(&hash, &info).String(),
}
return magnet, nil
} else {
// .magnet file
magnetLink := ReadMagnetFile(file)
return GetMagnetInfo(magnetLink)
m, err = GetMagnetInfo(magnetLink, rmTrackerUrls)
if err != nil {
return nil, err
}
}
m.Name = strings.TrimSuffix(filePath, filepath.Ext(filePath))
return m, nil
}
func GetMagnetFromUrl(url string) (*Magnet, error) {
func GetMagnetFromUrl(url string, rmTrackerUrls bool) (*Magnet, error) {
if strings.HasPrefix(url, "magnet:") {
return GetMagnetInfo(url)
return GetMagnetInfo(url, rmTrackerUrls)
} else if strings.HasPrefix(url, "http") {
return OpenMagnetHttpURL(url)
return OpenMagnetHttpURL(url, rmTrackerUrls)
}
return nil, fmt.Errorf("invalid url")
}
func OpenMagnetFile(filePath string) string {
file, err := os.Open(filePath)
func GetMagnetFromBytes(torrentData []byte, rmTrackerUrls bool) (*Magnet, error) {
// Parse the torrent metainfo from the raw bytes
mi, err := metainfo.Load(bytes.NewReader(torrentData))
if err != nil {
log.Println("Error opening file:", err)
return ""
return nil, err
}
defer func(file *os.File) {
err := file.Close()
if err != nil {
return
}
}(file) // Ensure the file is closed after the function ends
return ReadMagnetFile(file)
hash := mi.HashInfoBytes()
infoHash := hash.HexString()
info, err := mi.UnmarshalInfo()
if err != nil {
return nil, err
}
magnetMeta := mi.Magnet(&hash, &info)
if rmTrackerUrls {
magnetMeta = stripTrackersFromMagnet(magnetMeta, "torrent file")
}
magnet := &Magnet{
InfoHash: infoHash,
Name: info.Name,
Size: info.Length,
Link: magnetMeta.String(),
File: torrentData,
}
return magnet, nil
}
func ReadMagnetFile(file io.Reader) string {
@@ -88,12 +120,13 @@ func ReadMagnetFile(file io.Reader) string {
// Check for any errors during scanning
if err := scanner.Err(); err != nil {
log := logger.Default()
log.Println("Error reading file:", err)
}
return ""
}
func OpenMagnetHttpURL(magnetLink string) (*Magnet, error) {
func OpenMagnetHttpURL(magnetLink string, rmTrackerUrls bool) (*Magnet, error) {
resp, err := http.Get(magnetLink)
if err != nil {
return nil, fmt.Errorf("error making GET request: %v", err)
@@ -104,67 +137,43 @@ func OpenMagnetHttpURL(magnetLink string) (*Magnet, error) {
return
}
}(resp) // Ensure the response is closed after the function ends
// Read the raw .torrent data from the response body
mi, err := metainfo.Load(resp.Body)
torrentData, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
return nil, fmt.Errorf("error reading response body: %v", err)
}
hash := mi.HashInfoBytes()
infoHash := hash.HexString()
info, err := mi.UnmarshalInfo()
if err != nil {
return nil, err
}
log.Println("InfoHash: ", infoHash)
magnet := &Magnet{
InfoHash: infoHash,
Name: info.Name,
Size: info.Length,
Link: mi.Magnet(&hash, &info).String(),
}
return magnet, nil
return GetMagnetFromBytes(torrentData, rmTrackerUrls)
}
func GetMagnetInfo(magnetLink string) (*Magnet, error) {
func GetMagnetInfo(magnetLink string, rmTrackerUrls bool) (*Magnet, error) {
if magnetLink == "" {
return nil, fmt.Errorf("error getting magnet from file")
}
magnetURI, err := url.Parse(magnetLink)
mi, err := metainfo.ParseMagnetUri(magnetLink)
if err != nil {
return nil, fmt.Errorf("error parsing magnet link")
return nil, fmt.Errorf("error parsing magnet link: %w", err)
}
query := magnetURI.Query()
xt := query.Get("xt")
dn := query.Get("dn")
// Extract BTIH
parts := strings.Split(xt, ":")
btih := ""
if len(parts) > 2 {
btih = parts[2]
// Strip all announce URLs if requested
if rmTrackerUrls {
mi = stripTrackersFromMagnet(mi, "magnet link")
}
btih := mi.InfoHash.HexString()
dn := mi.DisplayName
// Reconstruct the magnet link using the (possibly modified) spec
finalLink := mi.String()
magnet := &Magnet{
InfoHash: btih,
Name: dn,
Size: 0,
Link: magnetLink,
Link: finalLink,
}
return magnet, nil
}
func RandomString(length int) string {
const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
b := make([]byte, length)
for i := range b {
b[i] = charset[rand.Intn(len(charset))]
}
return string(b)
}
func ExtractInfoHash(magnetDesc string) string {
const prefix = "xt=urn:btih:"
start := strings.Index(magnetDesc, prefix)
@@ -185,7 +194,6 @@ func ExtractInfoHash(magnetDesc string) string {
func processInfoHash(input string) (string, error) {
// Regular expression for a valid 40-character hex infohash
hexRegex := regexp.MustCompile("^[0-9a-fA-F]{40}$")
// If it's already a valid hex infohash, return it as is
if hexRegex.MatchString(input) {
@@ -209,30 +217,26 @@ func processInfoHash(input string) (string, error) {
return "", fmt.Errorf("invalid infohash: %s", input)
}
func NewLogger(prefix string, output *os.File) *log.Logger {
f := fmt.Sprintf("[%s] ", prefix)
return log.New(output, f, log.LstdFlags)
}
func GetInfohashFromURL(url string) (string, error) {
// Download the torrent file
var magnetLink string
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
client := &http.Client{
Timeout: 30 * time.Second,
CheckRedirect: func(req *http.Request, via []*http.Request) error {
if len(via) >= 3 {
return fmt.Errorf("stopped after 3 redirects")
}
if strings.HasPrefix(req.URL.String(), "magnet:") {
// Stop the redirect chain
magnetLink = req.URL.String()
return http.ErrUseLastResponse
}
return nil
},
redirectFunc := func(req *http.Request, via []*http.Request) error {
if len(via) >= 3 {
return fmt.Errorf("stopped after 3 redirects")
}
if strings.HasPrefix(req.URL.String(), "magnet:") {
// Stop the redirect chain
magnetLink = req.URL.String()
return http.ErrUseLastResponse
}
return nil
}
client := request.New(
request.WithTimeout(30*time.Second),
request.WithRedirectPolicy(redirectFunc),
)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
return "", err
@@ -255,21 +259,14 @@ func GetInfohashFromURL(url string) (string, error) {
return infoHash, nil
}
func JoinURL(base string, paths ...string) (string, error) {
// Parse the base URL
u, err := url.Parse(base)
if err != nil {
return "", err
func ConstructMagnet(infoHash, name string) *Magnet {
// Create a magnet link from the infohash and name
name = url.QueryEscape(strings.TrimSpace(name))
magnetUri := fmt.Sprintf("magnet:?xt=urn:btih:%s&dn=%s", infoHash, name)
return &Magnet{
InfoHash: infoHash,
Name: name,
Size: 0,
Link: magnetUri,
}
// Join the path components
u.Path = path.Join(u.Path, path.Join(paths...))
// Return the resulting URL as a string
return u.String(), nil
}
func FileReady(path string) bool {
_, err := os.Stat(path)
return !os.IsNotExist(err) // Returns true if the file exists
}
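
A rough sketch of the new rmTrackerUrls flag (not part of the diff), reusing the Ubuntu info hash from the accompanying tests; the magnet string is assembled by hand:

package main

import (
    "fmt"
    "log"

    "github.com/sirrobot01/decypharr/internal/utils"
)

func main() {
    raw := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8" +
        "&dn=ubuntu-25.04-desktop-amd64.iso" +
        "&tr=https%3A%2F%2Ftorrent.ubuntu.com%2Fannounce"

    // true strips every tr= parameter before the link is rebuilt.
    m, err := utils.GetMagnetInfo(raw, true)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(m.InfoHash) // 8a19577fb5f690970ca43a57ff1011ae202244b8
    fmt.Println(m.Link)     // magnet link without any tracker URLs
}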

View File

@@ -0,0 +1,198 @@
package utils
import (
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"strings"
"testing"
"github.com/sirrobot01/decypharr/internal/testutil"
)
// checkMagnet is a helper function that verifies magnet properties
func checkMagnet(t *testing.T, magnet *Magnet, expectedInfoHash, expectedName, expectedLink string, expectedTrackerCount int, shouldBeTorrent bool) {
t.Helper() // This marks the function as a test helper
// Verify basic properties
if magnet.Name != expectedName {
t.Errorf("Expected name '%s', got '%s'", expectedName, magnet.Name)
}
if magnet.InfoHash != expectedInfoHash {
t.Errorf("Expected InfoHash '%s', got '%s'", expectedInfoHash, magnet.InfoHash)
}
if magnet.Link != expectedLink {
t.Errorf("Expected Link '%s', got '%s'", expectedLink, magnet.Link)
}
// Verify the magnet link contains the essential info hash
if !strings.Contains(magnet.Link, "xt=urn:btih:"+expectedInfoHash) {
t.Error("Magnet link should contain info hash")
}
// Verify tracker count
trCount := strings.Count(magnet.Link, "tr=")
if trCount != expectedTrackerCount {
t.Errorf("Expected %d tracker URLs, got %d", expectedTrackerCount, trCount)
}
}
// testMagnetFromFile is a helper function for tests that use GetMagnetFromFile with file operations
func testMagnetFromFile(t *testing.T, filePath string, rmTrackerUrls bool, expectedInfoHash, expectedName, expectedLink string, expectedTrackerCount int) {
t.Helper()
file, err := os.Open(filePath)
if err != nil {
t.Fatalf("Failed to open torrent file %s: %v", filePath, err)
}
defer file.Close()
magnet, err := GetMagnetFromFile(file, filepath.Base(filePath), rmTrackerUrls)
if err != nil {
t.Fatalf("GetMagnetFromFile failed: %v", err)
}
checkMagnet(t, magnet, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount, true)
// Log the result
if rmTrackerUrls {
t.Logf("Generated clean magnet link: %s", magnet.Link)
} else {
t.Logf("Generated magnet link with trackers: %s", magnet.Link)
}
}
func TestGetMagnetFromFile_RealTorrentFile_StripTrue(t *testing.T) {
expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
expectedName := "ubuntu-25.04-desktop-amd64.iso"
expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso"
expectedTrackerCount := 0 // Should be 0 when stripping trackers
torrentPath := testutil.GetTestTorrentPath()
testMagnetFromFile(t, torrentPath, true, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}
func TestGetMagnetFromFile_RealTorrentFile_StripFalse(t *testing.T) {
expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
expectedName := "ubuntu-25.04-desktop-amd64.iso"
expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso&tr=https%3A%2F%2Ftorrent.ubuntu.com%2Fannounce&tr=https%3A%2F%2Fipv6.torrent.ubuntu.com%2Fannounce"
expectedTrackerCount := 2 // Should be 2 when preserving trackers
torrentPath := testutil.GetTestTorrentPath()
testMagnetFromFile(t, torrentPath, false, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}
func TestGetMagnetFromFile_MagnetFile_StripTrue(t *testing.T) {
expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
expectedName := "ubuntu-25.04-desktop-amd64.iso"
expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso"
expectedTrackerCount := 0 // Should be 0 when stripping trackers
torrentPath := testutil.GetTestMagnetPath()
testMagnetFromFile(t, torrentPath, true, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}
func TestGetMagnetFromFile_MagnetFile_StripFalse(t *testing.T) {
expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
expectedName := "ubuntu-25.04-desktop-amd64.iso"
expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso&tr=https%3A%2F%2Fipv6.torrent.ubuntu.com%2Fannounce&tr=https%3A%2F%2Ftorrent.ubuntu.com%2Fannounce"
expectedTrackerCount := 2
torrentPath := testutil.GetTestMagnetPath()
testMagnetFromFile(t, torrentPath, false, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}
func TestGetMagnetFromUrl_MagnetLink_StripTrue(t *testing.T) {
expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
expectedName := "ubuntu-25.04-desktop-amd64.iso"
expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso"
expectedTrackerCount := 0
// Load the magnet URL from the test file
magnetUrl, err := testutil.GetTestMagnetContent()
if err != nil {
t.Fatalf("Failed to load magnet URL from test file: %v", err)
}
magnet, err := GetMagnetFromUrl(magnetUrl, true)
if err != nil {
t.Fatalf("GetMagnetFromUrl failed: %v", err)
}
checkMagnet(t, magnet, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount, false)
t.Logf("Generated clean magnet link: %s", magnet.Link)
}
func TestGetMagnetFromUrl_MagnetLink_StripFalse(t *testing.T) {
expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
expectedName := "ubuntu-25.04-desktop-amd64.iso"
expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso&tr=https%3A%2F%2Fipv6.torrent.ubuntu.com%2Fannounce&tr=https%3A%2F%2Ftorrent.ubuntu.com%2Fannounce"
expectedTrackerCount := 2
// Load the magnet URL from the test file
magnetUrl, err := testutil.GetTestMagnetContent()
if err != nil {
t.Fatalf("Failed to load magnet URL from test file: %v", err)
}
magnet, err := GetMagnetFromUrl(magnetUrl, false)
if err != nil {
t.Fatalf("GetMagnetFromUrl failed: %v", err)
}
checkMagnet(t, magnet, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount, false)
t.Logf("Generated magnet link with trackers: %s", magnet.Link)
}
// testMagnetFromHttpTorrent is a helper function for tests that use GetMagnetFromUrl with HTTP torrent links
func testMagnetFromHttpTorrent(t *testing.T, torrentPath string, rmTrackerUrls bool, expectedInfoHash, expectedName, expectedLink string, expectedTrackerCount int) {
t.Helper()
// Read the torrent file content
torrentData, err := testutil.GetTestDataBytes(torrentPath)
if err != nil {
t.Fatalf("Failed to read torrent file: %v", err)
}
// Create a test HTTP server that serves the torrent file
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/x-bittorrent")
w.Write(torrentData)
}))
defer server.Close()
// Test the function with the mock server URL
magnet, err := GetMagnetFromUrl(server.URL, rmTrackerUrls)
if err != nil {
t.Fatalf("GetMagnetFromUrl failed: %v", err)
}
checkMagnet(t, magnet, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount, true)
// Log the result
if rmTrackerUrls {
t.Logf("Generated clean magnet link from HTTP torrent: %s", magnet.Link)
} else {
t.Logf("Generated magnet link with trackers from HTTP torrent: %s", magnet.Link)
}
}
func TestGetMagnetFromUrl_TorrentLink_StripTrue(t *testing.T) {
expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
expectedName := "ubuntu-25.04-desktop-amd64.iso"
expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso"
expectedTrackerCount := 0
testMagnetFromHttpTorrent(t, "ubuntu-25.04-desktop-amd64.iso.torrent", true, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}
func TestGetMagnetFromUrl_TorrentLink_StripFalse(t *testing.T) {
expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
expectedName := "ubuntu-25.04-desktop-amd64.iso"
expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso&tr=https%3A%2F%2Ftorrent.ubuntu.com%2Fannounce&tr=https%3A%2F%2Fipv6.torrent.ubuntu.com%2Fannounce"
expectedTrackerCount := 2
testMagnetFromHttpTorrent(t, "ubuntu-25.04-desktop-amd64.iso.torrent", false, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}

36
internal/utils/misc.go Normal file
View File

@@ -0,0 +1,36 @@
package utils
func RemoveItem[S ~[]E, E comparable](s S, values ...E) S {
result := make(S, 0, len(s))
outer:
for _, item := range s {
for _, v := range values {
if item == v {
continue outer
}
}
result = append(result, item)
}
return result
}
func Contains(slice []string, value string) bool {
for _, item := range slice {
if item == value {
return true
}
}
return false
}
func Mask(text string) string {
res := ""
if len(text) > 12 {
res = text[:8] + "****" + text[len(text)-4:]
} else if len(text) > 8 {
res = text[:4] + "****" + text[len(text)-2:]
} else {
res = "****"
}
return res
}
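
A tiny sketch of the generic remove helper and the token mask (illustrative, not from the repo):

package main

import (
    "fmt"

    "github.com/sirrobot01/decypharr/internal/utils"
)

func main() {
    hosts := []string{"sonarr", "radarr", "lidarr"}
    // RemoveItem keeps the original order and drops every listed value.
    fmt.Println(utils.RemoveItem(hosts, "radarr")) // [sonarr lidarr]

    // Tokens longer than 12 characters keep their first 8 and last 4 characters.
    fmt.Println(utils.Mask("abcdefghijklmnop")) // abcdefgh****mnop
}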

59
internal/utils/regex.go Normal file
View File

@@ -0,0 +1,59 @@
package utils
import (
"path/filepath"
"regexp"
"strings"
)
var (
videoMatch = "(?i)(\\.)(webm|m4v|3gp|nsv|ty|strm|rm|rmvb|m3u|ifo|mov|qt|divx|xvid|bivx|nrg|pva|wmv|asf|asx|ogm|ogv|m2v|avi|bin|dat|dvr-ms|mpg|mpeg|mp4|avc|vp3|svq3|nuv|viv|dv|fli|flv|wpl|vob|mkv|mk3d|ts|wtv|m2ts)$"
musicMatch = "(?i)(\\.)(mp2|mp3|m4a|m4b|m4p|ogg|oga|opus|wma|wav|wv|flac|ape|aif|aiff|aifc)$"
sampleMatch = `(?i)(^|[\s/\\])(sample|trailer|thumb|special|extras?)s?[-/]|(\((sample|trailer|thumb|special|extras?)s?\))|(-\s*(sample|trailer|thumb|special|extras?)s?)`
)
var (
mediaRegex = regexp.MustCompile(videoMatch + "|" + musicMatch)
sampleRegex = regexp.MustCompile(sampleMatch)
)
func RegexMatch(re *regexp.Regexp, value string) bool {
return re.MatchString(value)
}
func RemoveInvalidChars(value string) string {
return strings.Map(func(r rune) rune {
if r == filepath.Separator || r == ':' {
return r
}
if filepath.IsAbs(string(r)) {
return r
}
if strings.ContainsRune(filepath.VolumeName("C:"+string(r)), r) {
return r
}
if r < 32 || strings.ContainsRune(`<>:"/\|?*`, r) {
return -1
}
return r
}, value)
}
func RemoveExtension(value string) string {
if loc := mediaRegex.FindStringIndex(value); loc != nil {
return value[:loc[0]]
}
return value
}
func IsMediaFile(path string) bool {
return RegexMatch(mediaRegex, path)
}
func IsSampleFile(path string) bool {
filename := filepath.Base(path)
if strings.HasSuffix(strings.ToLower(filename), "sample.mkv") {
return true
}
return RegexMatch(sampleRegex, path)
}
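
A short sketch of how the matchers behave on typical release names (the file names are hypothetical):

package main

import (
    "fmt"

    "github.com/sirrobot01/decypharr/internal/utils"
)

func main() {
    fmt.Println(utils.IsMediaFile("Show.S01E01.1080p.mkv"))     // true
    fmt.Println(utils.IsSampleFile("Show.S01E01.sample.mkv"))   // true
    fmt.Println(utils.RemoveExtension("Show.S01E01.1080p.mkv")) // Show.S01E01.1080p
}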

View File

@@ -0,0 +1,56 @@
package utils
import (
"fmt"
"github.com/go-co-op/gocron/v2"
"github.com/robfig/cron/v3"
"strconv"
"strings"
"time"
)
// ConvertToJobDef converts a string interval to a gocron.JobDefinition.
func ConvertToJobDef(interval string) (gocron.JobDefinition, error) {
// Parse the interval string.
// Accepted formats: a daily clock time ("04:05"), a standard cron expression ("*/15 * * * *"),
// or a Go duration ("1h", "30m", "15s", "1h30m").
var jd gocron.JobDefinition
if t, ok := parseClockTime(interval); ok {
return gocron.DailyJob(1, gocron.NewAtTimes(
gocron.NewAtTime(uint(t.Hour()), uint(t.Minute()), uint(t.Second())),
)), nil
}
if _, err := cron.ParseStandard(interval); err == nil {
return gocron.CronJob(interval, false), nil
}
if dur, err := time.ParseDuration(interval); err == nil {
return gocron.DurationJob(dur), nil
}
return jd, fmt.Errorf("invalid interval format: %s", interval)
}
func parseClockTime(s string) (time.Time, bool) {
parts := strings.Split(s, ":")
if len(parts) != 2 {
return time.Time{}, false
}
h, err := strconv.Atoi(parts[0])
if err != nil || h < 0 || h > 23 {
return time.Time{}, false
}
m, err := strconv.Atoi(parts[1])
if err != nil || m < 0 || m > 59 {
return time.Time{}, false
}
now := time.Now()
// build a time.Time for today at h:m:00 in the local zone
t := time.Date(
now.Year(), now.Month(), now.Day(),
h, m, 0, 0,
time.Local,
)
return t, true
}
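
A minimal sketch of feeding the resulting JobDefinition into a gocron v2 scheduler (assuming gocron's NewScheduler/NewJob/Start API; the "30m" interval and the task body are placeholders):

package main

import (
    "log"
    "time"

    "github.com/go-co-op/gocron/v2"

    "github.com/sirrobot01/decypharr/internal/utils"
)

func main() {
    // "30m" is neither a clock time nor a cron expression, so it falls
    // through to time.ParseDuration and becomes a DurationJob.
    jd, err := utils.ConvertToJobDef("30m")
    if err != nil {
        log.Fatal(err)
    }

    s, err := gocron.NewScheduler()
    if err != nil {
        log.Fatal(err)
    }
    if _, err := s.NewJob(jd, gocron.NewTask(func() {
        log.Println("tick")
    })); err != nil {
        log.Fatal(err)
    }
    s.Start()

    time.Sleep(time.Hour) // keep the process alive for the sketch
}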

29
main.go
View File

@@ -1,22 +1,35 @@
package main
import (
"context"
"flag"
"goBlack/cmd"
"goBlack/common"
"github.com/sirrobot01/decypharr/cmd/decypharr"
"github.com/sirrobot01/decypharr/internal/config"
"log"
"os"
"os/signal"
"runtime/debug"
"syscall"
)
func main() {
defer func() {
if r := recover(); r != nil {
log.Printf("FATAL: Recovered from panic in main: %v\n", r)
debug.PrintStack()
}
}()
var configPath string
flag.StringVar(&configPath, "config", "config.json", "path to the config file")
flag.StringVar(&configPath, "config", "/data", "path to the data folder")
flag.Parse()
config.SetConfigPath(configPath)
config.Get()
// Load the config file
conf, err := common.LoadConfig(configPath)
if err != nil {
// Create a context canceled on SIGINT/SIGTERM
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
defer stop()
if err := decypharr.Start(ctx); err != nil {
log.Fatal(err)
}
cmd.Start(conf)
}

1624
package-lock.json generated Normal file

File diff suppressed because it is too large

19
package.json Normal file
View File

@@ -0,0 +1,19 @@
{
"name": "decypharr",
"version": "1.0.0",
"description": "Media management tool",
"scripts": {
"build-css": "tailwindcss -i ./pkg/web/assets/styles.css -o ./pkg/web/assets/build/css/styles.css --minify",
"minify-js": "node scripts/minify-js.js",
"download-assets": "node scripts/download-assets.js",
"build": "npm run build-css && npm run minify-js",
"build-all": "npm run download-assets && npm run build",
"dev": "npm run build && air"
},
"devDependencies": {
"tailwindcss": "^3.4.0",
"daisyui": "^4.12.10",
"terser": "^5.24.0",
"clean-css": "^5.3.3"
}
}

330
pkg/arr/arr.go Normal file
View File

@@ -0,0 +1,330 @@
package arr
import (
"bytes"
"cmp"
"context"
"crypto/tls"
"encoding/json"
"fmt"
"io"
"net/http"
"strings"
"sync"
"time"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
)
// Type is a type of arr
type Type string
var sharedClient = &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
},
Timeout: 60 * time.Second,
}
const (
Sonarr Type = "sonarr"
Radarr Type = "radarr"
Lidarr Type = "lidarr"
Readarr Type = "readarr"
Others Type = "others"
)
type Arr struct {
Name string `json:"name"`
Host string `json:"host"`
Token string `json:"token"`
Type Type `json:"type"`
Cleanup bool `json:"cleanup"`
SkipRepair bool `json:"skip_repair"`
DownloadUncached *bool `json:"download_uncached"`
SelectedDebrid string `json:"selected_debrid,omitempty"` // The debrid service selected for this arr
Source string `json:"source,omitempty"` // The source of the arr, e.g. "auto", "manual". Auto means it was automatically detected from the arr
}
func New(name, host, token string, cleanup, skipRepair bool, downloadUncached *bool, selectedDebrid, source string) *Arr {
return &Arr{
Name: name,
Host: host,
Token: strings.TrimSpace(token),
Type: InferType(host, name),
Cleanup: cleanup,
SkipRepair: skipRepair,
DownloadUncached: downloadUncached,
SelectedDebrid: selectedDebrid,
Source: source,
}
}
func (a *Arr) Request(method, endpoint string, payload interface{}) (*http.Response, error) {
if a.Token == "" || a.Host == "" {
return nil, fmt.Errorf("arr not configured")
}
url, err := request.JoinURL(a.Host, endpoint)
if err != nil {
return nil, err
}
var body io.Reader
if payload != nil {
b, err := json.Marshal(payload)
if err != nil {
return nil, err
}
body = bytes.NewReader(b)
}
req, err := http.NewRequest(method, url, body)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("X-Api-Key", a.Token)
var resp *http.Response
for attempts := 0; attempts < 5; attempts++ {
resp, err = sharedClient.Do(req)
if err != nil {
return nil, err
}
// If we got a 401, wait briefly and retry
if resp.StatusCode == http.StatusUnauthorized {
resp.Body.Close() // Don't leak response bodies
if attempts < 4 { // Don't sleep on the last attempt
time.Sleep(time.Duration(attempts+1) * 100 * time.Millisecond)
continue
}
}
return resp, nil
}
return resp, err
}
func (a *Arr) Validate() error {
if a.Token == "" || a.Host == "" {
return fmt.Errorf("arr not configured")
}
if request.ValidateURL(a.Host) != nil {
return fmt.Errorf("invalid arr host URL")
}
resp, err := a.Request("GET", "/api/v3/health", nil)
if err != nil {
return err
}
defer resp.Body.Close()
// If the response is not 200 or 404 (404 is the case for Lidarr, etc.), return an error
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNotFound {
return fmt.Errorf("failed to validate arr %s: %s", a.Name, resp.Status)
}
return nil
}
type Storage struct {
Arrs map[string]*Arr // name -> arr
mu sync.Mutex
logger zerolog.Logger
}
func (s *Storage) Cleanup() {
s.mu.Lock()
defer s.mu.Unlock()
s.Arrs = make(map[string]*Arr)
}
func InferType(host, name string) Type {
switch {
case strings.Contains(host, "sonarr") || strings.Contains(name, "sonarr"):
return Sonarr
case strings.Contains(host, "radarr") || strings.Contains(name, "radarr"):
return Radarr
case strings.Contains(host, "lidarr") || strings.Contains(name, "lidarr"):
return Lidarr
case strings.Contains(host, "readarr") || strings.Contains(name, "readarr"):
return Readarr
default:
return Others
}
}
func NewStorage() *Storage {
arrs := make(map[string]*Arr)
for _, a := range config.Get().Arrs {
if a.Host == "" || a.Token == "" || a.Name == "" {
continue // Skip if host or token is not set
}
name := a.Name
as := New(name, a.Host, a.Token, a.Cleanup, a.SkipRepair, a.DownloadUncached, a.SelectedDebrid, a.Source)
if request.ValidateURL(as.Host) != nil {
continue
}
arrs[a.Name] = as
}
return &Storage{
Arrs: arrs,
logger: logger.New("arr"),
}
}
func (s *Storage) AddOrUpdate(arr *Arr) {
s.mu.Lock()
defer s.mu.Unlock()
if arr.Host == "" || arr.Token == "" || arr.Name == "" {
return
}
// Check the host URL
if request.ValidateURL(arr.Host) != nil {
return
}
s.Arrs[arr.Name] = arr
}
func (s *Storage) Get(name string) *Arr {
s.mu.Lock()
defer s.mu.Unlock()
return s.Arrs[name]
}
func (s *Storage) GetAll() []*Arr {
s.mu.Lock()
defer s.mu.Unlock()
arrs := make([]*Arr, 0, len(s.Arrs))
for _, arr := range s.Arrs {
arrs = append(arrs, arr)
}
return arrs
}
func (s *Storage) SyncToConfig() []config.Arr {
s.mu.Lock()
defer s.mu.Unlock()
cfg := config.Get()
arrConfigs := make(map[string]config.Arr)
for _, a := range cfg.Arrs {
if a.Host == "" || a.Token == "" {
continue // Skip empty arrs
}
arrConfigs[a.Name] = a
}
for name, arr := range s.Arrs {
exists, ok := arrConfigs[name]
if ok {
// Update existing arr config
// Check if the host URL is valid
if request.ValidateURL(arr.Host) == nil {
exists.Host = arr.Host
}
exists.Token = cmp.Or(exists.Token, arr.Token)
exists.Cleanup = arr.Cleanup
exists.SkipRepair = arr.SkipRepair
exists.DownloadUncached = arr.DownloadUncached
exists.SelectedDebrid = arr.SelectedDebrid
arrConfigs[name] = exists
} else {
// Add new arr config
arrConfigs[name] = config.Arr{
Name: arr.Name,
Host: arr.Host,
Token: arr.Token,
Cleanup: arr.Cleanup,
SkipRepair: arr.SkipRepair,
DownloadUncached: arr.DownloadUncached,
SelectedDebrid: arr.SelectedDebrid,
Source: arr.Source,
}
}
}
// Convert map to slice
arrs := make([]config.Arr, 0, len(arrConfigs))
for _, a := range arrConfigs {
arrs = append(arrs, a)
}
return arrs
}
func (s *Storage) SyncFromConfig(arrs []config.Arr) {
s.mu.Lock()
defer s.mu.Unlock()
arrConfigs := make(map[string]*Arr)
for _, a := range arrs {
arrConfigs[a.Name] = New(a.Name, a.Host, a.Token, a.Cleanup, a.SkipRepair, a.DownloadUncached, a.SelectedDebrid, a.Source)
}
// Add or update arrs from config
for name, arr := range s.Arrs {
if ac, ok := arrConfigs[name]; ok {
// Update existing arr
// is the host URL valid?
if request.ValidateURL(ac.Host) == nil {
ac.Host = arr.Host
}
ac.Token = cmp.Or(ac.Token, arr.Token)
ac.Cleanup = arr.Cleanup
ac.SkipRepair = arr.SkipRepair
ac.DownloadUncached = arr.DownloadUncached
ac.SelectedDebrid = arr.SelectedDebrid
ac.Source = arr.Source
arrConfigs[name] = ac
} else {
arrConfigs[name] = arr
}
}
// Replace the arrs map
s.Arrs = arrConfigs
}
func (s *Storage) StartWorker(ctx context.Context) error {
ticker := time.NewTicker(10 * time.Second)
defer ticker.Stop()
for {
select {
case <-ticker.C:
s.cleanupArrsQueue()
case <-ctx.Done():
return nil
}
}
}
func (s *Storage) cleanupArrsQueue() {
arrs := make([]*Arr, 0)
for _, arr := range s.Arrs {
if !arr.Cleanup {
continue
}
arrs = append(arrs, arr)
}
if len(arrs) > 0 {
for _, arr := range arrs {
if err := arr.CleanupQueue(); err != nil {
s.logger.Error().Err(err).Msgf("Failed to cleanup arr %s", arr.Name)
}
}
}
}
func (a *Arr) Refresh() {
payload := struct {
Name string `json:"name"`
}{
Name: "RefreshMonitoredDownloads",
}
_, _ = a.Request(http.MethodPost, "api/v3/command", payload)
}
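
A minimal sketch of wiring up an Arr by hand (the name, host, and API key are placeholders; the repo normally builds these from config via NewStorage):

package main

import (
    "log"

    "github.com/sirrobot01/decypharr/pkg/arr"
)

func main() {
    // Name and host both contain "sonarr", so InferType resolves the type automatically.
    a := arr.New("sonarr-4k", "http://sonarr:8989", "example-api-key",
        false, false, nil, "", "manual")

    // Validate hits /api/v3/health and accepts 200 (or 404 for Lidarr/Readarr).
    if err := a.Validate(); err != nil {
        log.Fatalf("arr unreachable: %v", err)
    }
    log.Printf("connected to %s (%s)", a.Name, a.Type)
}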

335
pkg/arr/content.go Normal file
View File

@@ -0,0 +1,335 @@
package arr
import (
"context"
"encoding/json"
"fmt"
"golang.org/x/sync/errgroup"
"net/http"
"strconv"
"strings"
)
type episode struct {
Id int `json:"id"`
EpisodeFileID int `json:"episodeFileId"`
}
type sonarrSearch struct {
Name string `json:"name"`
SeasonNumber int `json:"seasonNumber"`
SeriesId int `json:"seriesId"`
}
type radarrSearch struct {
Name string `json:"name"`
MovieIds []int `json:"movieIds"`
}
func (a *Arr) GetMedia(mediaId string) ([]Content, error) {
// Get series
if a.Type == Radarr {
return GetMovies(a, mediaId)
}
// This is likely Sonarr
resp, err := a.Request(http.MethodGet, fmt.Sprintf("api/v3/series?tvdbId=%s", mediaId), nil)
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusNotFound {
// This is likely Radarr
return GetMovies(a, mediaId)
}
a.Type = Sonarr
type series struct {
Title string `json:"title"`
Id int `json:"id"`
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("failed to get series: %s", resp.Status)
}
var data []series
if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, fmt.Errorf("failed to decode series: %v", err)
}
// Get series files
contents := make([]Content, 0)
for _, d := range data {
resp, err = a.Request(http.MethodGet, fmt.Sprintf("api/v3/episodefile?seriesId=%d", d.Id), nil)
if err != nil {
continue
}
var ct Content
var seriesFiles []seriesFile
episodeFileIDMap := make(map[int]int)
func() {
defer resp.Body.Close()
if err = json.NewDecoder(resp.Body).Decode(&seriesFiles); err != nil {
return
}
ct = Content{
Title: d.Title,
Id: d.Id,
}
}()
resp, err = a.Request(http.MethodGet, fmt.Sprintf("api/v3/episode?seriesId=%d", d.Id), nil)
if err != nil {
continue
}
func() {
defer resp.Body.Close()
var episodes []episode
if err = json.NewDecoder(resp.Body).Decode(&episodes); err != nil {
return
}
for _, e := range episodes {
episodeFileIDMap[e.EpisodeFileID] = e.Id
}
}()
files := make([]ContentFile, 0)
for _, file := range seriesFiles {
eId, ok := episodeFileIDMap[file.Id]
if !ok {
eId = 0
}
if file.Id == 0 || file.Path == "" {
// Skip files without path
continue
}
files = append(files, ContentFile{
FileId: file.Id,
Path: file.Path,
Id: d.Id,
EpisodeId: eId,
SeasonNumber: file.SeasonNumber,
Size: file.Size,
})
}
if len(files) == 0 {
// Skip series without files
continue
}
ct.Files = files
contents = append(contents, ct)
}
return contents, nil
}
func GetMovies(a *Arr, tvId string) ([]Content, error) {
resp, err := a.Request(http.MethodGet, fmt.Sprintf("api/v3/movie?tmdbId=%s", tvId), nil)
if err != nil {
return nil, err
}
if resp.StatusCode == http.StatusNotFound {
// This is likely Lidarr or Readarr
return nil, fmt.Errorf("failed to get movies: %s", resp.Status)
}
a.Type = Radarr
defer resp.Body.Close()
var movies []Movie
if err = json.NewDecoder(resp.Body).Decode(&movies); err != nil {
return nil, fmt.Errorf("failed to decode movies: %v", err)
}
contents := make([]Content, 0)
for _, movie := range movies {
if movie.MovieFile.Id == 0 || movie.MovieFile.Path == "" {
// Skip movies without files
continue
}
ct := Content{
Title: movie.Title,
Id: movie.Id,
}
files := make([]ContentFile, 0)
files = append(files, ContentFile{
FileId: movie.MovieFile.Id,
Id: movie.Id,
Path: movie.MovieFile.Path,
Size: movie.MovieFile.Size,
})
ct.Files = files
contents = append(contents, ct)
}
return contents, nil
}
// searchSonarr searches for missing files in the arr
// map ids are series id and season number
func (a *Arr) searchSonarr(files []ContentFile) error {
ids := make(map[string]any)
for _, f := range files {
// Join series id and season number
id := fmt.Sprintf("%d-%d", f.Id, f.SeasonNumber)
ids[id] = nil
}
g, ctx := errgroup.WithContext(context.Background())
// Limit concurrent goroutines
g.SetLimit(10)
for id := range ids {
id := id
g.Go(func() error {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
parts := strings.Split(id, "-")
if len(parts) != 2 {
return fmt.Errorf("invalid id: %s", id)
}
seriesId, err := strconv.Atoi(parts[0])
if err != nil {
return err
}
seasonNumber, err := strconv.Atoi(parts[1])
if err != nil {
return err
}
payload := sonarrSearch{
Name: "SeasonSearch",
SeasonNumber: seasonNumber,
SeriesId: seriesId,
}
resp, err := a.Request(http.MethodPost, "api/v3/command", payload)
if err != nil {
return fmt.Errorf("failed to automatic search: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
return fmt.Errorf("failed to automatic search. Status Code: %s", resp.Status)
}
return nil
})
}
if err := g.Wait(); err != nil {
return err
}
return nil
}
func (a *Arr) searchRadarr(files []ContentFile) error {
ids := make([]int, 0)
for _, f := range files {
ids = append(ids, f.Id)
}
payload := radarrSearch{
Name: "MoviesSearch",
MovieIds: ids,
}
resp, err := a.Request(http.MethodPost, "api/v3/command", payload)
if err != nil {
return fmt.Errorf("failed to automatic search: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
return fmt.Errorf("failed to automatic search. Status Code: %s", resp.Status)
}
return nil
}
func (a *Arr) SearchMissing(files []ContentFile) error {
if len(files) == 0 {
return nil
}
return a.batchSearchMissing(files)
}
func (a *Arr) batchSearchMissing(files []ContentFile) error {
if len(files) == 0 {
return nil
}
BatchSize := 50
// Batch search for missing files
if len(files) > BatchSize {
for i := 0; i < len(files); i += BatchSize {
end := i + BatchSize
if end > len(files) {
end = len(files)
}
if err := a.searchMissing(files[i:end]); err != nil {
// continue searching the rest of the files
continue
}
}
return nil
}
return a.searchMissing(files)
}
func (a *Arr) searchMissing(files []ContentFile) error {
switch a.Type {
case Sonarr:
return a.searchSonarr(files)
case Radarr:
return a.searchRadarr(files)
default:
return fmt.Errorf("unknown arr type: %s", a.Type)
}
}
func (a *Arr) DeleteFiles(files []ContentFile) error {
if len(files) == 0 {
return nil
}
BatchSize := 50
// Batch delete files
if len(files) > BatchSize {
for i := 0; i < len(files); i += BatchSize {
end := i + BatchSize
if end > len(files) {
end = len(files)
}
if err := a.batchDeleteFiles(files[i:end]); err != nil {
// continue deleting the rest of the files
continue
}
}
return nil
}
return a.batchDeleteFiles(files)
}
func (a *Arr) batchDeleteFiles(files []ContentFile) error {
ids := make([]int, 0)
for _, f := range files {
ids = append(ids, f.FileId)
}
defer func() {
// Delete files, or at least try
for _, f := range files {
f.Delete()
}
}()
var payload interface{}
switch a.Type {
case Sonarr:
payload = struct {
EpisodeFileIds []int `json:"episodeFileIds"`
}{
EpisodeFileIds: ids,
}
_, err := a.Request(http.MethodDelete, "api/v3/episodefile/bulk", payload)
if err != nil {
return err
}
case Radarr:
payload = struct {
MovieFileIds []int `json:"movieFileIds"`
}{
MovieFileIds: ids,
}
_, err := a.Request(http.MethodDelete, "api/v3/moviefile/bulk", payload)
if err != nil {
return err
}
default:
return fmt.Errorf("unknown arr type: %s", a.Type)
}
return nil
}
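
Sketch of the repair-style flow the entry points above enable (placeholders throughout: name, host, API key, and TMDB id):

package main

import (
    "log"

    "github.com/sirrobot01/decypharr/pkg/arr"
)

func main() {
    a := arr.New("radarr", "http://radarr:7878", "example-api-key",
        false, false, nil, "", "manual")

    // For a Radarr instance GetMedia goes straight to the movie endpoint.
    contents, err := a.GetMedia("603") // TMDB id, illustrative only
    if err != nil {
        log.Fatal(err)
    }
    for _, c := range contents {
        // DeleteFiles works in batches of 50 and also attempts an on-disk delete per file.
        if err := a.DeleteFiles(c.Files); err != nil {
            log.Printf("delete failed for %s: %v", c.Title, err)
        }
    }
}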

191
pkg/arr/history.go Normal file
View File

@@ -0,0 +1,191 @@
package arr
import (
"encoding/json"
"io"
"net/http"
gourl "net/url"
"strconv"
"strings"
)
type HistorySchema struct {
Page int `json:"page"`
PageSize int `json:"pageSize"`
SortKey string `json:"sortKey"`
SortDirection string `json:"sortDirection"`
TotalRecords int `json:"totalRecords"`
Records []struct {
ID int `json:"id"`
DownloadID string `json:"downloadId"`
} `json:"records"`
}
type QueueResponseScheme struct {
Page int `json:"page"`
PageSize int `json:"pageSize"`
SortKey string `json:"sortKey"`
SortDirection string `json:"sortDirection"`
TotalRecords int `json:"totalRecords"`
Records []QueueSchema `json:"records"`
}
type QueueSchema struct {
SeriesId int `json:"seriesId"`
EpisodeId int `json:"episodeId"`
SeasonNumber int `json:"seasonNumber"`
Title string `json:"title"`
Status string `json:"status"`
TrackedDownloadStatus string `json:"trackedDownloadStatus"`
TrackedDownloadState string `json:"trackedDownloadState"`
StatusMessages []struct {
Title string `json:"title"`
Messages []string `json:"messages"`
} `json:"statusMessages"`
DownloadId string `json:"downloadId"`
Protocol string `json:"protocol"`
DownloadClient string `json:"downloadClient"`
DownloadClientHasPostImportCategory bool `json:"downloadClientHasPostImportCategory"`
Indexer string `json:"indexer"`
OutputPath string `json:"outputPath"`
EpisodeHasFile bool `json:"episodeHasFile"`
Id int `json:"id"`
}
func (a *Arr) GetHistory(downloadId, eventType string) *HistorySchema {
query := gourl.Values{}
if downloadId != "" {
query.Add("downloadId", downloadId)
}
query.Add("eventType", eventType)
query.Add("pageSize", "100")
url := "api/v3/history" + "?" + query.Encode()
resp, err := a.Request(http.MethodGet, url, nil)
if err != nil {
return nil
}
defer resp.Body.Close()
var data *HistorySchema
if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil
}
return data
}
func (a *Arr) GetQueue() []QueueSchema {
query := gourl.Values{}
query.Add("page", "1")
query.Add("pageSize", "200")
results := make([]QueueSchema, 0)
for {
url := "api/v3/queue" + "?" + query.Encode()
resp, err := a.Request(http.MethodGet, url, nil)
if err != nil {
break
}
func() {
defer func(Body io.ReadCloser) {
err := Body.Close()
if err != nil {
return
}
}(resp.Body)
var data QueueResponseScheme
if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return
}
results = append(results, data.Records...)
if len(results) >= data.TotalRecords {
// We've fetched all records
err = io.EOF // Signal to exit the loop
return
}
query.Set("page", strconv.Itoa(data.Page+1))
}()
if err != nil {
break
}
}
return results
}
func (a *Arr) CleanupQueue() error {
queue := a.GetQueue()
type messedUp struct {
id int
episodeId int
seasonNum int
}
cleanups := make(map[int][]messedUp)
for _, q := range queue {
isMessedUp := false
if q.Protocol == "torrent" && q.Status == "completed" && q.TrackedDownloadStatus == "warning" && q.TrackedDownloadState == "importPending" {
messages := q.StatusMessages
if len(messages) > 0 {
for _, m := range messages {
if strings.Contains(strings.Join(m.Messages, " "), "No files found are eligible") {
isMessedUp = true
break
}
if strings.Contains(m.Title, "One or more episodes expected in this release were not imported or missing from the release") {
isMessedUp = true
break
}
}
}
}
if isMessedUp {
cleanups[q.SeriesId] = append(cleanups[q.SeriesId], messedUp{
id: q.Id,
episodeId: q.EpisodeId,
seasonNum: q.SeasonNumber,
})
}
}
if len(cleanups) == 0 {
return nil
}
queueIds := make([]int, 0)
for _, c := range cleanups {
// Delete the messed up episodes from queue
for _, m := range c {
queueIds = append(queueIds, m.id)
}
}
// Delete the messed up episodes from queue
payload := struct {
Ids []int `json:"ids"`
}{
Ids: queueIds,
}
// Blocklist that hash (it's typically incomplete), then re-search the episode
query := gourl.Values{}
query.Add("removeFromClient", "true")
query.Add("blocklist", "true")
query.Add("skipRedownload", "false")
query.Add("changeCategory", "false")
url := "api/v3/queue/bulk" + "?" + query.Encode()
_, err := a.Request(http.MethodDelete, url, payload)
if err != nil {
return err
}
return nil
}

208
pkg/arr/import.go Normal file
View File

@@ -0,0 +1,208 @@
package arr
import (
"encoding/json"
"fmt"
"io"
"net/http"
gourl "net/url"
"strconv"
"time"
)
type ImportResponseSchema struct {
Path string `json:"path"`
RelativePath string `json:"relativePath"`
FolderName string `json:"folderName"`
Name string `json:"name"`
Size int `json:"size"`
Series struct {
Title string `json:"title"`
SortTitle string `json:"sortTitle"`
Status string `json:"status"`
Ended bool `json:"ended"`
Overview string `json:"overview"`
Network string `json:"network"`
AirTime string `json:"airTime"`
Images []struct {
CoverType string `json:"coverType"`
RemoteUrl string `json:"remoteUrl"`
} `json:"images"`
OriginalLanguage struct {
Id int `json:"id"`
Name string `json:"name"`
} `json:"originalLanguage"`
Seasons []struct {
SeasonNumber int `json:"seasonNumber"`
Monitored bool `json:"monitored"`
} `json:"seasons"`
Year int `json:"year"`
Path string `json:"path"`
QualityProfileId int `json:"qualityProfileId"`
SeasonFolder bool `json:"seasonFolder"`
Monitored bool `json:"monitored"`
MonitorNewItems string `json:"monitorNewItems"`
UseSceneNumbering bool `json:"useSceneNumbering"`
Runtime int `json:"runtime"`
TvdbId int `json:"tvdbId"`
TvRageId int `json:"tvRageId"`
TvMazeId int `json:"tvMazeId"`
TmdbId int `json:"tmdbId"`
FirstAired time.Time `json:"firstAired"`
LastAired time.Time `json:"lastAired"`
SeriesType string `json:"seriesType"`
CleanTitle string `json:"cleanTitle"`
ImdbId string `json:"imdbId"`
TitleSlug string `json:"titleSlug"`
Certification string `json:"certification"`
Genres []string `json:"genres"`
Tags []interface{} `json:"tags"`
Added time.Time `json:"added"`
Ratings struct {
Votes int `json:"votes"`
Value float64 `json:"value"`
} `json:"ratings"`
LanguageProfileId int `json:"languageProfileId"`
Id int `json:"id"`
} `json:"series"`
SeasonNumber int `json:"seasonNumber"`
Episodes []struct {
SeriesId int `json:"seriesId"`
TvdbId int `json:"tvdbId"`
EpisodeFileId int `json:"episodeFileId"`
SeasonNumber int `json:"seasonNumber"`
EpisodeNumber int `json:"episodeNumber"`
Title string `json:"title"`
AirDate string `json:"airDate"`
AirDateUtc time.Time `json:"airDateUtc"`
Runtime int `json:"runtime"`
Overview string `json:"overview"`
HasFile bool `json:"hasFile"`
Monitored bool `json:"monitored"`
AbsoluteEpisodeNumber int `json:"absoluteEpisodeNumber"`
UnverifiedSceneNumbering bool `json:"unverifiedSceneNumbering"`
Id int `json:"id"`
FinaleType string `json:"finaleType,omitempty"`
} `json:"episodes"`
ReleaseGroup string `json:"releaseGroup"`
Quality struct {
Quality struct {
Id int `json:"id"`
Name string `json:"name"`
Source string `json:"source"`
Resolution int `json:"resolution"`
} `json:"quality"`
Revision struct {
Version int `json:"version"`
Real int `json:"real"`
IsRepack bool `json:"isRepack"`
} `json:"revision"`
} `json:"quality"`
Languages []struct {
Id int `json:"id"`
Name string `json:"name"`
} `json:"languages"`
QualityWeight int `json:"qualityWeight"`
CustomFormats []interface{} `json:"customFormats"`
CustomFormatScore int `json:"customFormatScore"`
IndexerFlags int `json:"indexerFlags"`
ReleaseType string `json:"releaseType"`
Rejections []struct {
Reason string `json:"reason"`
Type string `json:"type"`
} `json:"rejections"`
Id int `json:"id"`
}
type ManualImportRequestFile struct {
Path string `json:"path"`
SeriesId int `json:"seriesId"`
SeasonNumber int `json:"seasonNumber"`
EpisodeIds []int `json:"episodeIds"`
Quality struct {
Quality struct {
Id int `json:"id"`
Name string `json:"name"`
Source string `json:"source"`
Resolution int `json:"resolution"`
} `json:"quality"`
Revision struct {
Version int `json:"version"`
Real int `json:"real"`
IsRepack bool `json:"isRepack"`
} `json:"revision"`
} `json:"quality"`
Languages []struct {
Id int `json:"id"`
Name string `json:"name"`
} `json:"languages"`
ReleaseGroup string `json:"releaseGroup"`
CustomFormats []interface{} `json:"customFormats"`
CustomFormatScore int `json:"customFormatScore"`
IndexerFlags int `json:"indexerFlags"`
ReleaseType string `json:"releaseType"`
Rejections []struct {
Reason string `json:"reason"`
Type string `json:"type"`
} `json:"rejections"`
}
type ManualImportRequestSchema struct {
Name string `json:"name"`
Files []ManualImportRequestFile `json:"files"`
ImportMode string `json:"importMode"`
}
func (a *Arr) Import(path string, seriesId int, seasons []int) (io.ReadCloser, error) {
query := gourl.Values{}
query.Add("folder", path)
if seriesId != 0 {
query.Add("seriesId", strconv.Itoa(seriesId))
}
url := "api/v3/manualimport" + "?" + query.Encode()
resp, err := a.Request(http.MethodGet, url, nil)
if err != nil {
return nil, fmt.Errorf("failed to import, invalid file: %w", err)
}
defer resp.Body.Close()
var data []ImportResponseSchema
if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, fmt.Errorf("failed to decode response: %w", err)
}
var files []ManualImportRequestFile
for _, d := range data {
episodesIds := []int{}
for _, e := range d.Episodes {
episodesIds = append(episodesIds, e.Id)
}
file := ManualImportRequestFile{
Path: d.Path,
SeriesId: d.Series.Id,
SeasonNumber: d.SeasonNumber,
EpisodeIds: episodesIds,
Quality: d.Quality,
Languages: d.Languages,
ReleaseGroup: d.ReleaseGroup,
CustomFormats: d.CustomFormats,
CustomFormatScore: d.CustomFormatScore,
IndexerFlags: d.IndexerFlags,
ReleaseType: d.ReleaseType,
Rejections: d.Rejections,
}
files = append(files, file)
}
request := ManualImportRequestSchema{
Name: "ManualImport",
Files: files,
ImportMode: "copy",
}
url = "api/v3/command"
resp, err = a.Request(http.MethodPost, url, request)
if err != nil {
return nil, fmt.Errorf("failed to import: %w", err)
}
defer resp.Body.Close()
return resp.Body, nil
}

51
pkg/arr/types.go Normal file
View File

@@ -0,0 +1,51 @@
package arr
import "os"
type Movie struct {
Title string `json:"title"`
OriginalTitle string `json:"originalTitle"`
Path string `json:"path"`
MovieFile struct {
MovieId int `json:"movieId"`
RelativePath string `json:"relativePath"`
Path string `json:"path"`
Id int `json:"id"`
Size int64 `json:"size"`
} `json:"movieFile"`
Id int `json:"id"`
}
type ContentFile struct {
Name string `json:"name"`
Path string `json:"path"`
Id int `json:"id"`
EpisodeId int `json:"showId"`
FileId int `json:"fileId"`
TargetPath string `json:"targetPath"`
IsSymlink bool `json:"isSymlink"`
IsBroken bool `json:"isBroken"`
SeasonNumber int `json:"seasonNumber"`
Processed bool `json:"processed"`
Size int64 `json:"size"`
}
func (file *ContentFile) Delete() {
// This is useful for when the Sonarr bulk delete fails (which happens fairly often)
// and we need to delete the file manually
_ = os.Remove(file.Path) // nolint:errcheck
}
type Content struct {
Title string `json:"title"`
Id int `json:"id"`
Files []ContentFile `json:"files"`
}
type seriesFile struct {
SeriesId int `json:"seriesId"`
SeasonNumber int `json:"seasonNumber"`
Path string `json:"path"`
Id int `json:"id"`
Size int64 `json:"size"`
}

View File

@@ -0,0 +1,119 @@
package account
import (
"fmt"
"net/http"
"sync/atomic"
"github.com/puzpuzpuz/xsync/v4"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
type Account struct {
Debrid string `json:"debrid"` // The debrid service name, e.g. "realdebrid"
links *xsync.Map[string, types.DownloadLink] // key is the sliced file link
Index int `json:"index"` // The index of the account in the config
Disabled atomic.Bool `json:"disabled"`
Token string `json:"token"`
TrafficUsed atomic.Int64 `json:"traffic_used"` // Traffic used in bytes
Username string `json:"username"` // Username for the account
httpClient *request.Client
// Account reactivation tracking
DisableCount atomic.Int32 `json:"disable_count"`
}
func (a *Account) Equals(other *Account) bool {
if other == nil {
return false
}
return a.Token == other.Token && a.Debrid == other.Debrid
}
func (a *Account) Client() *request.Client {
return a.httpClient
}
// sliceFileLink shortens a Real-Debrid file link to its first 39 characters; the result is used as the key for cached download links
func (a *Account) sliceFileLink(fileLink string) string {
if a.Debrid != "realdebrid" {
return fileLink
}
if len(fileLink) < 39 {
return fileLink
}
return fileLink[0:39]
}
func (a *Account) GetDownloadLink(fileLink string) (types.DownloadLink, error) {
slicedLink := a.sliceFileLink(fileLink)
dl, ok := a.links.Load(slicedLink)
if !ok {
return types.DownloadLink{}, types.ErrDownloadLinkNotFound
}
return dl, nil
}
func (a *Account) StoreDownloadLink(dl types.DownloadLink) {
slicedLink := a.sliceFileLink(dl.Link)
a.links.Store(slicedLink, dl)
}
func (a *Account) DeleteDownloadLink(fileLink string) {
slicedLink := a.sliceFileLink(fileLink)
a.links.Delete(slicedLink)
}
func (a *Account) ClearDownloadLinks() {
a.links.Clear()
}
func (a *Account) DownloadLinksCount() int {
return a.links.Size()
}
func (a *Account) StoreDownloadLinks(dls map[string]*types.DownloadLink) {
for _, dl := range dls {
a.StoreDownloadLink(*dl)
}
}
// MarkDisabled marks the account as disabled and increments the disable count
func (a *Account) MarkDisabled() {
a.Disabled.Store(true)
a.DisableCount.Add(1)
}
func (a *Account) Reset() {
a.DisableCount.Store(0)
a.Disabled.Store(false)
}
func (a *Account) CheckBandwidth() error {
// Get one of the stored download links to check if the account is still valid
downloadLink := ""
a.links.Range(func(key string, dl types.DownloadLink) bool {
if dl.DownloadLink != "" {
downloadLink = dl.DownloadLink
return false
}
return true
})
if downloadLink == "" {
return fmt.Errorf("no download link found")
}
// Let's check the download link status
req, err := http.NewRequest(http.MethodGet, downloadLink, nil)
if err != nil {
return err
}
// Use a simple client
client := http.DefaultClient
resp, err := client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusPartialContent {
return fmt.Errorf("account check failed with status code %d", resp.StatusCode)
}
return nil
}

View File

@@ -0,0 +1,239 @@
package account
import (
"fmt"
"slices"
"sync/atomic"
"github.com/puzpuzpuz/xsync/v4"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"go.uber.org/ratelimit"
)
const (
MaxDisableCount = 3
)
type Manager struct {
debrid string
current atomic.Pointer[Account]
accounts *xsync.Map[string, *Account]
logger zerolog.Logger
}
func NewManager(debridConf config.Debrid, downloadRL ratelimit.Limiter, logger zerolog.Logger) *Manager {
m := &Manager{
debrid: debridConf.Name,
accounts: xsync.NewMap[string, *Account](),
logger: logger,
}
var firstAccount *Account
for idx, token := range debridConf.DownloadAPIKeys {
if token == "" {
continue
}
headers := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", token),
}
account := &Account{
Debrid: debridConf.Name,
Token: token,
Index: idx,
links: xsync.NewMap[string, types.DownloadLink](),
httpClient: request.New(
request.WithRateLimiter(downloadRL),
request.WithLogger(logger),
request.WithHeaders(headers),
request.WithMaxRetries(3),
request.WithRetryableStatus(429, 447, 502),
request.WithProxy(debridConf.Proxy),
),
}
m.accounts.Store(token, account)
if firstAccount == nil {
firstAccount = account
}
}
m.current.Store(firstAccount)
return m
}
func (m *Manager) Active() []*Account {
activeAccounts := make([]*Account, 0)
m.accounts.Range(func(key string, acc *Account) bool {
if !acc.Disabled.Load() {
activeAccounts = append(activeAccounts, acc)
}
return true
})
slices.SortFunc(activeAccounts, func(i, j *Account) int {
return i.Index - j.Index
})
return activeAccounts
}
func (m *Manager) All() []*Account {
allAccounts := make([]*Account, 0)
m.accounts.Range(func(key string, acc *Account) bool {
allAccounts = append(allAccounts, acc)
return true
})
slices.SortFunc(allAccounts, func(i, j *Account) int {
return i.Index - j.Index
})
return allAccounts
}
func (m *Manager) Current() *Account {
// Fast path - most common case
current := m.current.Load()
if current != nil && !current.Disabled.Load() {
return current
}
// Slow path - find new current account
activeAccounts := m.Active()
if len(activeAccounts) == 0 {
// No active accounts left, try to use disabled ones
m.logger.Warn().Str("debrid", m.debrid).Msg("No active accounts available, all accounts are disabled")
allAccounts := m.All()
if len(allAccounts) == 0 {
m.logger.Error().Str("debrid", m.debrid).Msg("No accounts configured")
m.current.Store(nil)
return nil
}
m.current.Store(allAccounts[0])
return allAccounts[0]
}
newCurrent := activeAccounts[0]
m.current.Store(newCurrent)
return newCurrent
}
func (m *Manager) Disable(account *Account) {
if account == nil {
return
}
account.MarkDisabled()
// If we're disabling the current account, it will be replaced
// on the next Current() call - no need to proactively update
current := m.current.Load()
if current != nil && current.Token == account.Token {
// Optional: immediately find replacement
activeAccounts := m.Active()
if len(activeAccounts) > 0 {
m.current.Store(activeAccounts[0])
} else {
m.current.Store(nil)
}
}
}
func (m *Manager) Reset() {
m.accounts.Range(func(key string, acc *Account) bool {
acc.Reset()
return true
})
// Set current to first active account
activeAccounts := m.Active()
if len(activeAccounts) > 0 {
m.current.Store(activeAccounts[0])
} else {
m.current.Store(nil)
}
}
func (m *Manager) GetAccount(token string) (*Account, error) {
if token == "" {
return nil, fmt.Errorf("token cannot be empty")
}
acc, ok := m.accounts.Load(token)
if !ok {
return nil, fmt.Errorf("account not found for token")
}
return acc, nil
}
func (m *Manager) GetDownloadLink(fileLink string) (types.DownloadLink, error) {
current := m.Current()
if current == nil {
return types.DownloadLink{}, fmt.Errorf("no active account for debrid service %s", m.debrid)
}
return current.GetDownloadLink(fileLink)
}
func (m *Manager) GetAccountFromDownloadLink(downloadLink types.DownloadLink) (*Account, error) {
if downloadLink.Link == "" {
return nil, fmt.Errorf("cannot get account from empty download link")
}
if downloadLink.Token == "" {
return nil, fmt.Errorf("cannot get account from download link without token")
}
return m.GetAccount(downloadLink.Token)
}
func (m *Manager) StoreDownloadLink(downloadLink types.DownloadLink) {
if downloadLink.Link == "" || downloadLink.Token == "" {
return
}
account, err := m.GetAccount(downloadLink.Token)
if err != nil || account == nil {
return
}
account.StoreDownloadLink(downloadLink)
}
func (m *Manager) Stats() []map[string]any {
stats := make([]map[string]any, 0)
for _, acc := range m.All() {
maskedToken := utils.Mask(acc.Token)
accountDetail := map[string]any{
"in_use": acc.Equals(m.Current()),
"order": acc.Index,
"disabled": acc.Disabled.Load(),
"token_masked": maskedToken,
"username": acc.Username,
"traffic_used": acc.TrafficUsed.Load(),
"links_count": acc.DownloadLinksCount(),
"debrid": acc.Debrid,
}
stats = append(stats, accountDetail)
}
return stats
}
func (m *Manager) CheckAndResetBandwidth() {
found := false
m.accounts.Range(func(key string, acc *Account) bool {
if acc.Disabled.Load() && acc.DisableCount.Load() < MaxDisableCount {
if err := acc.CheckBandwidth(); err == nil {
acc.Disabled.Store(false)
found = true
m.logger.Info().Str("debrid", m.debrid).Str("token", utils.Mask(acc.Token)).Msg("Re-activated disabled account")
} else {
m.logger.Debug().Err(err).Str("debrid", m.debrid).Str("token", utils.Mask(acc.Token)).Msg("Account still disabled")
}
}
return true
})
if found {
// If we re-activated any account, reset current to first active
activeAccounts := m.Active()
if len(activeAccounts) > 0 {
m.current.Store(activeAccounts[0])
}
}
}
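
A rough sketch of the rotation behaviour (not from the repo): the tokens are placeholders, and the config.Debrid field names are taken from how NewManager reads them above:

package main

import (
    "fmt"
    "os"

    "github.com/rs/zerolog"
    "go.uber.org/ratelimit"

    "github.com/sirrobot01/decypharr/internal/config"
    "github.com/sirrobot01/decypharr/pkg/debrid/account"
)

func main() {
    conf := config.Debrid{
        Name:            "realdebrid",
        DownloadAPIKeys: []string{"token-a", "token-b"}, // placeholder tokens
    }
    m := account.NewManager(conf, ratelimit.New(5), zerolog.New(os.Stdout))

    // The first configured token becomes the current account.
    fmt.Println(m.Current().Index) // 0

    // Disabling the current account promotes the next active one.
    m.Disable(m.Current())
    fmt.Println(m.Current().Index) // 1
}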

View File

@@ -0,0 +1,31 @@
package common
import (
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/pkg/debrid/account"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
type Client interface {
SubmitMagnet(tr *types.Torrent) (*types.Torrent, error)
CheckStatus(tr *types.Torrent) (*types.Torrent, error)
GetFileDownloadLinks(tr *types.Torrent) error
GetDownloadLink(tr *types.Torrent, file *types.File) (types.DownloadLink, error)
DeleteTorrent(torrentId string) error
IsAvailable(infohashes []string) map[string]bool
GetDownloadUncached() bool
UpdateTorrent(torrent *types.Torrent) error
GetTorrent(torrentId string) (*types.Torrent, error)
GetTorrents() ([]*types.Torrent, error)
Name() string
Logger() zerolog.Logger
GetDownloadingStatus() []string
RefreshDownloadLinks() error
CheckLink(link string) error
GetMountPath() string
AccountManager() *account.Manager // Returns the active download account/token
GetProfile() (*types.Profile, error)
GetAvailableSlots() (int, error)
SyncAccounts() error // Updates each account's details (traffic, username, etc.)
DeleteDownloadLink(account *account.Account, downloadLink types.DownloadLink) error
}

View File

@@ -1,148 +1,366 @@
package debrid
import (
"cmp"
"context"
"errors"
"fmt"
"github.com/anacrolix/torrent/metainfo"
"goBlack/common"
"log"
"path/filepath"
"sync"
"time"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/debrid/common"
"github.com/sirrobot01/decypharr/pkg/debrid/providers/alldebrid"
"github.com/sirrobot01/decypharr/pkg/debrid/providers/debridlink"
"github.com/sirrobot01/decypharr/pkg/debrid/providers/realdebrid"
"github.com/sirrobot01/decypharr/pkg/debrid/providers/torbox"
debridStore "github.com/sirrobot01/decypharr/pkg/debrid/store"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"github.com/sirrobot01/decypharr/pkg/rclone"
"go.uber.org/ratelimit"
)
type Service interface {
SubmitMagnet(torrent *Torrent) (*Torrent, error)
CheckStatus(torrent *Torrent, isSymlink bool) (*Torrent, error)
GetDownloadLinks(torrent *Torrent) error
DeleteTorrent(torrent *Torrent)
IsAvailable(infohashes []string) map[string]bool
GetMountPath() string
GetDownloadUncached() bool
GetTorrent(id string) (*Torrent, error)
GetName() string
GetLogger() *log.Logger
}
type Debrid struct {
cache *debridStore.Cache // Could be nil if not using WebDAV
client common.Client // HTTP client for making requests to the debrid service
}
func (de *Debrid) Client() common.Client {
return de.client
}
func (de *Debrid) Cache() *debridStore.Cache {
return de.cache
}
func (de *Debrid) Reset() {
if de.cache != nil {
de.cache.Reset()
}
}
type Storage struct {
debrids map[string]*Debrid
mu sync.RWMutex
lastUsed string
}
func NewStorage(rcManager *rclone.Manager) *Storage {
cfg := config.Get()
_logger := logger.Default()
debrids := make(map[string]*Debrid)
bindAddress := cfg.BindAddress
if bindAddress == "" {
bindAddress = "localhost"
}
webdavUrl := fmt.Sprintf("http://%s:%s%s/webdav", bindAddress, cfg.Port, cfg.URLBase)
for _, dc := range cfg.Debrids {
client, err := createDebridClient(dc)
if err != nil {
_logger.Error().Err(err).Str("Debrid", dc.Name).Msg("failed to connect to debrid client")
continue
}
var (
cache *debridStore.Cache
mounter *rclone.Mount
)
_log := client.Logger()
if dc.UseWebDav {
if cfg.Rclone.Enabled && rcManager != nil {
mounter = rclone.NewMount(dc.Name, dc.RcloneMountPath, webdavUrl, rcManager)
}
cache = debridStore.NewDebridCache(dc, client, mounter)
_log.Info().Msg("Debrid Service started with WebDAV")
} else {
_log.Info().Msg("Debrid Service started")
}
debrids[dc.Name] = &Debrid{
cache: cache,
client: client,
}
}
d := &Storage{
debrids: debrids,
lastUsed: "",
}
return d
}
func (d *Storage) Debrid(name string) *Debrid {
d.mu.RLock()
defer d.mu.RUnlock()
if debrid, exists := d.debrids[name]; exists {
return debrid
}
return nil
}
func (d *Storage) StartWorker(ctx context.Context) error {
if ctx == nil {
ctx = context.Background()
}
// Start syncAccounts worker
go d.syncAccountsWorker(ctx)
// Start bandwidth reset worker
go d.checkBandwidthWorker(ctx)
return nil
}
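// Illustrative usage sketch (not part of the original change): wiring the
// background workers to an application context so they stop on shutdown.
// Passing a nil rclone.Manager simply disables WebDAV mounts.
//
// ctx, cancel := context.WithCancel(context.Background())
// defer cancel()
// store := NewStorage(nil)
// _ = store.StartWorker(ctx)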
func (d *Storage) checkBandwidthWorker(ctx context.Context) {
if ctx == nil {
ctx = context.Background()
}
ticker := time.NewTicker(30 * time.Minute)
go func() {
defer ticker.Stop() // release the ticker when the context is cancelled
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
d.checkAccountBandwidth()
}
}
}()
}
func (d *Storage) checkAccountBandwidth() {
d.mu.Lock()
defer d.mu.Unlock()
for _, debrid := range d.debrids {
if debrid == nil || debrid.client == nil {
continue
}
accountManager := debrid.client.AccountManager()
if accountManager == nil {
continue
}
accountManager.CheckAndResetBandwidth()
}
}
func (d *Storage) syncAccountsWorker(ctx context.Context) {
if ctx == nil {
ctx = context.Background()
}
_ = d.syncAccounts()
ticker := time.NewTicker(5 * time.Minute)
go func() {
defer ticker.Stop() // release the ticker when the context is cancelled
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
_ = d.syncAccounts()
}
}
}()
}
func (d *Storage) syncAccounts() error {
d.mu.Lock()
defer d.mu.Unlock()
for name, debrid := range d.debrids {
if debrid == nil || debrid.client == nil {
continue
}
_log := debrid.client.Logger()
if err := debrid.client.SyncAccounts(); err != nil {
_log.Error().Err(err).Msgf("Failed to sync account for %s", name)
continue
}
}
return nil
}
func (d *Storage) Debrids() map[string]*Debrid {
d.mu.RLock()
defer d.mu.RUnlock()
debridsCopy := make(map[string]*Debrid)
for name, debrid := range d.debrids {
if debrid != nil {
debridsCopy[name] = debrid
}
}
return debridsCopy
}
func (d *Storage) Client(name string) common.Client {
d.mu.RLock()
defer d.mu.RUnlock()
if client, exists := d.debrids[name]; exists {
return client.client
}
return nil
}
func (d *Storage) Reset() {
d.mu.Lock()
defer d.mu.Unlock()
// Reset all debrid clients and caches
for _, debrid := range d.debrids {
if debrid != nil {
debrid.Reset()
}
}
// Reinitialize the debrids map
d.debrids = make(map[string]*Debrid)
d.lastUsed = ""
}
func (d *Storage) Clients() map[string]common.Client {
d.mu.RLock()
defer d.mu.RUnlock()
clientsCopy := make(map[string]common.Client)
for name, debrid := range d.debrids {
if debrid != nil && debrid.client != nil {
clientsCopy[name] = debrid.client
}
}
return clientsCopy
}
func (d *Storage) Caches() map[string]*debridStore.Cache {
d.mu.RLock()
defer d.mu.RUnlock()
cachesCopy := make(map[string]*debridStore.Cache)
for name, debrid := range d.debrids {
if debrid != nil && debrid.cache != nil {
cachesCopy[name] = debrid.cache
}
}
return cachesCopy
}
func (d *Storage) FilterClients(filter func(common.Client) bool) map[string]common.Client {
d.mu.RLock()
defer d.mu.RUnlock()
filteredClients := make(map[string]common.Client)
for name, debrid := range d.debrids {
if debrid != nil && debrid.client != nil && filter(debrid.client) {
filteredClients[name] = debrid.client
}
}
return filteredClients
}
func createDebridClient(dc config.Debrid) (common.Client, error) {
rateLimits := map[string]ratelimit.Limiter{}
mainRL := request.ParseRateLimit(dc.RateLimit)
repairRL := request.ParseRateLimit(cmp.Or(dc.RepairRateLimit, dc.RateLimit))
downloadRL := request.ParseRateLimit(cmp.Or(dc.DownloadRateLimit, dc.RateLimit))
rateLimits["main"] = mainRL
rateLimits["repair"] = repairRL
rateLimits["download"] = downloadRL
switch dc.Name {
case "realdebrid":
return realdebrid.New(dc, rateLimits)
case "torbox":
return torbox.New(dc, rateLimits)
case "debridlink":
return debridlink.New(dc, rateLimits)
case "alldebrid":
return alldebrid.New(dc, rateLimits)
default:
return realdebrid.New(dc, rateLimits)
}
}
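// Illustrative sketch (not part of the original change): createDebridClient keys
// off dc.Name, so a config entry named "torbox" yields the Torbox provider. The
// literal below assumes config.Debrid's Name and APIKey fields; unreachable
// providers are skipped by NewStorage rather than aborting startup.
//
// client, err := createDebridClient(config.Debrid{Name: "torbox", APIKey: apiKey})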
func GetTorrentInfo(filePath string) (*Torrent, error) {
// Open and read the .torrent file
if filepath.Ext(filePath) == ".torrent" {
return getTorrentInfo(filePath)
} else {
return torrentFromMagnetFile(filePath)
}
}
func torrentFromMagnetFile(filePath string) (*Torrent, error) {
magnetLink := common.OpenMagnetFile(filePath)
magnet, err := common.GetMagnetInfo(magnetLink)
if err != nil {
return nil, err
}
torrent := &Torrent{
InfoHash: magnet.InfoHash,
Name: magnet.Name,
Size: magnet.Size,
Magnet: magnet,
Filename: filePath,
}
return torrent, nil
}
func getTorrentInfo(filePath string) (*Torrent, error) {
mi, err := metainfo.LoadFromFile(filePath)
if err != nil {
return nil, err
}
hash := mi.HashInfoBytes()
infoHash := hash.HexString()
info, err := mi.UnmarshalInfo()
if err != nil {
return nil, err
}
magnet := &common.Magnet{
InfoHash: infoHash,
Name: info.Name,
Size: info.Length,
Link: mi.Magnet(&hash, &info).String(),
}
torrent := &Torrent{
InfoHash: infoHash,
Name: info.Name,
Size: info.Length,
Magnet: magnet,
Filename: filePath,
}
return torrent, nil
}
func GetLocalCache(infohashes []string, cache *common.Cache) ([]string, map[string]bool) {
result := make(map[string]bool)
hashes := make([]string, 0, len(infohashes)) // zero length: appending below must not leave empty leading entries
if len(infohashes) == 0 {
return hashes, result
}
if len(infohashes) == 1 {
if cache.Exists(infohashes[0]) {
return hashes, map[string]bool{infohashes[0]: true}
}
return infohashes, result
}
cachedHashes := cache.GetMultiple(infohashes)
for _, h := range infohashes {
_, exists := cachedHashes[h]
if !exists {
hashes = append(hashes, h)
} else {
result[h] = true
}
}
return hashes, result
}
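// Illustrative sketch (not part of the original change): GetLocalCache splits
// infohashes into cache misses and hits so only the misses need a remote
// availability lookup. A hypothetical caller:
//
// misses, hits := GetLocalCache(infohashes, cache)
// for h, ok := range d.IsAvailable(misses) {
// hits[h] = ok
// }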
func Process(ctx context.Context, store *Storage, selectedDebrid string, magnet *utils.Magnet, a *arr.Arr, action string, overrideDownloadUncached bool) (*types.Torrent, error) {
debridTorrent := &types.Torrent{
InfoHash: magnet.InfoHash,
Magnet: magnet,
Name: magnet.Name,
Arr: a,
Size: magnet.Size,
Files: make(map[string]types.File),
}
clients := store.FilterClients(func(c common.Client) bool {
if selectedDebrid != "" && c.Name() != selectedDebrid {
return false
}
return true
})
if len(clients) == 0 {
return nil, fmt.Errorf("no debrid clients available")
}
errs := make([]error, 0, len(clients))
// Override first, arr second, debrid third
if !overrideDownloadUncached && a.DownloadUncached != nil {
// The arr-level download_uncached setting applies when no explicit override was passed
overrideDownloadUncached = *a.DownloadUncached
}
for _, db := range clients {
_logger := db.Logger()
_logger.Info().
Str("Debrid", db.Name()).
Str("Arr", a.Name).
Str("Hash", debridTorrent.InfoHash).
Str("Name", debridTorrent.Name).
Str("Action", action).
Msg("Processing torrent")
// If debrid-level DownloadUncached is true, it overrides everything
if db.GetDownloadUncached() || overrideDownloadUncached {
debridTorrent.DownloadUncached = true
}
dbt, err := db.SubmitMagnet(debridTorrent)
if err != nil || dbt == nil || dbt.Id == "" {
errs = append(errs, err)
continue
}
dbt.Arr = a
_logger.Info().Str("id", dbt.Id).Msgf("Torrent: %s submitted to %s", dbt.Name, db.Name())
store.lastUsed = db.Name()
torrent, err := db.CheckStatus(dbt)
if err != nil && torrent != nil && torrent.Id != "" {
// Delete the torrent if it was not downloaded
go func(id string) {
_ = db.DeleteTorrent(id)
}(torrent.Id)
}
if err != nil {
errs = append(errs, err)
continue
}
if torrent == nil {
errs = append(errs, fmt.Errorf("torrent %s returned nil after checking status", dbt.Name))
continue
}
return torrent, nil
}
if len(errs) == 0 {
return nil, fmt.Errorf("failed to process torrent: no clients available")
}
joinedErrors := errors.Join(errs...)
return nil, fmt.Errorf("failed to process torrent: %w", joinedErrors)
}
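// Illustrative usage sketch (not part of the original change): submitting a magnet
// and letting Process try every configured debrid (an empty selectedDebrid means
// "any"). The magnet and arr values are assumed to come from the caller's own
// parsing and configuration.
//
// torrent, err := Process(ctx, store, "", magnet, arrInstance, "symlink", false)
// if err != nil {
// // every client failed or rejected the torrent; the errors are joined
// }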


@@ -0,0 +1,504 @@
package alldebrid
import (
"encoding/json"
"fmt"
"net/http"
gourl "net/url"
"path/filepath"
"strconv"
"sync"
"time"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/account"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"go.uber.org/ratelimit"
)
type AllDebrid struct {
name string
Host string `json:"host"`
APIKey string
accountsManager *account.Manager
autoExpiresLinksAfter time.Duration
DownloadUncached bool
client *request.Client
Profile *types.Profile `json:"profile"`
MountPath string
logger zerolog.Logger
checkCached bool
addSamples bool
minimumFreeSlot int
}
func New(dc config.Debrid, ratelimits map[string]ratelimit.Limiter) (*AllDebrid, error) {
headers := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
}
_log := logger.New(dc.Name)
client := request.New(
request.WithHeaders(headers),
request.WithLogger(_log),
request.WithRateLimiter(ratelimits["main"]),
request.WithProxy(dc.Proxy),
)
autoExpiresLinksAfter, err := time.ParseDuration(dc.AutoExpireLinksAfter)
if autoExpiresLinksAfter == 0 || err != nil {
autoExpiresLinksAfter = 48 * time.Hour
}
return &AllDebrid{
name: "alldebrid",
Host: "http://api.alldebrid.com/v4.1",
APIKey: dc.APIKey,
accountsManager: account.NewManager(dc, ratelimits["download"], _log),
DownloadUncached: dc.DownloadUncached,
autoExpiresLinksAfter: autoExpiresLinksAfter,
client: client,
MountPath: dc.Folder,
logger: logger.New(dc.Name),
checkCached: dc.CheckCached,
addSamples: dc.AddSamples,
minimumFreeSlot: dc.MinimumFreeSlot,
}, nil
}
func (ad *AllDebrid) Name() string {
return ad.name
}
func (ad *AllDebrid) Logger() zerolog.Logger {
return ad.logger
}
func (ad *AllDebrid) IsAvailable(hashes []string) map[string]bool {
// AllDebrid does not support checking cached infohashes, so none can be
// confirmed as cached up front; return an empty map and let magnet submission decide.
result := make(map[string]bool)
return result
}
func (ad *AllDebrid) SubmitMagnet(torrent *types.Torrent) (*types.Torrent, error) {
url := fmt.Sprintf("%s/magnet/upload", ad.Host)
query := gourl.Values{}
query.Add("magnets[]", torrent.Magnet.Link)
url += "?" + query.Encode()
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := ad.client.MakeRequest(req)
if err != nil {
return nil, err
}
var data UploadMagnetResponse
err = json.Unmarshal(resp, &data)
if err != nil {
return nil, err
}
magnets := data.Data.Magnets
if len(magnets) == 0 {
return nil, fmt.Errorf("error adding torrent. No magnets returned")
}
magnet := magnets[0]
torrentId := strconv.Itoa(magnet.ID)
torrent.Id = torrentId
torrent.Added = time.Now().Format(time.RFC3339)
return torrent, nil
}
func getAlldebridStatus(statusCode int) string {
switch {
case statusCode == 4:
return "downloaded"
case statusCode >= 0 && statusCode <= 3:
return "downloading"
default:
return "error"
}
}
func (ad *AllDebrid) flattenFiles(torrentId string, files []MagnetFile, parentPath string, index *int) map[string]types.File {
result := make(map[string]types.File)
cfg := config.Get()
for _, f := range files {
currentPath := f.Name
if parentPath != "" {
currentPath = filepath.Join(parentPath, f.Name)
}
if f.Elements != nil {
// This is a folder, recurse into it
subFiles := ad.flattenFiles(torrentId, f.Elements, currentPath, index)
for k, v := range subFiles {
if _, ok := result[k]; ok {
// File already exists, use path as key
result[v.Path] = v
} else {
result[k] = v
}
}
} else {
// This is a file
fileName := filepath.Base(f.Name)
// Skip sample files
if !ad.addSamples && utils.IsSampleFile(f.Name) {
continue
}
if !cfg.IsAllowedFile(fileName) {
continue
}
if !cfg.IsSizeAllowed(f.Size) {
continue
}
*index++
file := types.File{
TorrentId: torrentId,
Id: strconv.Itoa(*index),
Name: fileName,
Size: f.Size,
Path: currentPath,
Link: f.Link,
}
result[file.Name] = file
}
}
return result
}
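// Illustrative sketch (not part of the original change): given AllDebrid's nested
// listing, flattenFiles produces path-aware entries. A hypothetical walk, assuming
// the configured file filters allow the entry:
//
// index := -1
// files := ad.flattenFiles("42", []MagnetFile{{
// Name: "Season 1",
// Elements: []MagnetFile{{Name: "e01.mkv", Size: 1 << 30, Link: "..."}},
// }}, "", &index)
// // files["e01.mkv"].Path == "Season 1/e01.mkv"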
func (ad *AllDebrid) GetTorrent(torrentId string) (*types.Torrent, error) {
url := fmt.Sprintf("%s/magnet/status?id=%s", ad.Host, torrentId)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := ad.client.MakeRequest(req)
if err != nil {
return nil, err
}
var res TorrentInfoResponse
err = json.Unmarshal(resp, &res)
if err != nil {
ad.logger.Error().Err(err).Msgf("Error unmarshalling torrent info")
return nil, err
}
data := res.Data.Magnets
status := getAlldebridStatus(data.StatusCode)
name := data.Filename
t := &types.Torrent{
Id: strconv.Itoa(data.Id),
Name: name,
Status: status,
Filename: name,
OriginalFilename: name,
Files: make(map[string]types.File),
InfoHash: data.Hash,
Debrid: ad.name,
MountPath: ad.MountPath,
Added: time.Unix(data.CompletionDate, 0).Format(time.RFC3339),
}
t.Bytes = data.Size
t.Seeders = data.Seeders
if status == "downloaded" {
t.Progress = 100
index := -1
files := ad.flattenFiles(t.Id, data.Files, "", &index)
t.Files = files
} else {
t.Progress = float64(data.Downloaded) / float64(data.Size) * 100
t.Speed = data.DownloadSpeed
}
return t, nil
}
func (ad *AllDebrid) UpdateTorrent(t *types.Torrent) error {
url := fmt.Sprintf("%s/magnet/status?id=%s", ad.Host, t.Id)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := ad.client.MakeRequest(req)
if err != nil {
return err
}
var res TorrentInfoResponse
err = json.Unmarshal(resp, &res)
if err != nil {
ad.logger.Error().Err(err).Msgf("Error unmarshalling torrent info")
return err
}
data := res.Data.Magnets
status := getAlldebridStatus(data.StatusCode)
name := data.Filename
t.Name = name
t.Status = status
t.Filename = name
t.OriginalFilename = name
t.Folder = name
t.MountPath = ad.MountPath
t.Debrid = ad.name
t.Bytes = data.Size
t.Seeders = data.Seeders
t.Added = time.Unix(data.CompletionDate, 0).Format(time.RFC3339)
if status == "downloaded" {
t.Progress = 100
index := -1
files := ad.flattenFiles(t.Id, data.Files, "", &index)
t.Files = files
} else {
t.Progress = float64(data.Downloaded) / float64(data.Size) * 100
t.Speed = data.DownloadSpeed
}
return nil
}
func (ad *AllDebrid) CheckStatus(torrent *types.Torrent) (*types.Torrent, error) {
for {
err := ad.UpdateTorrent(torrent)
if err != nil || torrent == nil {
return torrent, err
}
status := torrent.Status
if status == "downloaded" {
ad.logger.Info().Msgf("Torrent: %s downloaded", torrent.Name)
return torrent, nil
} else if utils.Contains(ad.GetDownloadingStatus(), status) {
if !torrent.DownloadUncached {
return torrent, fmt.Errorf("torrent: %s not cached", torrent.Name)
}
// Break out of the loop if the torrent is downloading.
// This is necessary to prevent infinite loop since we moved to sync downloading and async processing
return torrent, nil
} else {
return torrent, fmt.Errorf("torrent: %s has error", torrent.Name)
}
}
}
func (ad *AllDebrid) DeleteTorrent(torrentId string) error {
url := fmt.Sprintf("%s/magnet/delete?id=%s", ad.Host, torrentId)
req, _ := http.NewRequest(http.MethodGet, url, nil)
if _, err := ad.client.MakeRequest(req); err != nil {
return err
}
ad.logger.Info().Msgf("Torrent %s deleted from AD", torrentId)
return nil
}
func (ad *AllDebrid) GetFileDownloadLinks(t *types.Torrent) error {
filesCh := make(chan types.File, len(t.Files))
linksCh := make(chan types.DownloadLink, len(t.Files))
errCh := make(chan error, len(t.Files))
var wg sync.WaitGroup
wg.Add(len(t.Files))
for _, file := range t.Files {
go func(file types.File) {
defer wg.Done()
link, err := ad.GetDownloadLink(t, &file)
if err != nil {
errCh <- err
return
}
linksCh <- link
file.DownloadLink = link
filesCh <- file
}(file)
}
go func() {
wg.Wait()
close(filesCh)
close(linksCh)
close(errCh)
}()
files := make(map[string]types.File, len(t.Files))
for file := range filesCh {
files[file.Name] = file
}
// Drain the links channel; each file already carries its link via DownloadLink
links := make(map[string]types.DownloadLink, len(t.Files))
for link := range linksCh {
if link.Empty() {
continue
}
links[link.Link] = link
}
// Check for errors
for err := range errCh {
if err != nil {
return err
}
}
t.Files = files
return nil
}
func (ad *AllDebrid) GetDownloadLink(t *types.Torrent, file *types.File) (types.DownloadLink, error) {
url := fmt.Sprintf("%s/link/unlock", ad.Host)
query := gourl.Values{}
query.Add("link", file.Link)
url += "?" + query.Encode()
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := ad.client.MakeRequest(req)
if err != nil {
return types.DownloadLink{}, err
}
var data DownloadLink
if err = json.Unmarshal(resp, &data); err != nil {
return types.DownloadLink{}, err
}
if data.Error != nil {
return types.DownloadLink{}, fmt.Errorf("error getting download link: %s", data.Error.Message)
}
link := data.Data.Link
if link == "" {
return types.DownloadLink{}, fmt.Errorf("download link is empty")
}
now := time.Now()
dl := types.DownloadLink{
Token: ad.APIKey,
Link: file.Link,
DownloadLink: link,
Id: data.Data.Id,
Size: file.Size,
Filename: file.Name,
Generated: now,
ExpiresAt: now.Add(ad.autoExpiresLinksAfter),
}
// Set the download link in the account
ad.accountsManager.StoreDownloadLink(dl)
return dl, nil
}
func (ad *AllDebrid) GetTorrents() ([]*types.Torrent, error) {
url := fmt.Sprintf("%s/magnet/status?status=ready", ad.Host)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := ad.client.MakeRequest(req)
torrents := make([]*types.Torrent, 0)
if err != nil {
return torrents, err
}
var res TorrentsListResponse
err = json.Unmarshal(resp, &res)
if err != nil {
ad.logger.Error().Err(err).Msgf("Error unmarshalling torrent info")
return torrents, err
}
for _, magnet := range res.Data.Magnets {
torrents = append(torrents, &types.Torrent{
Id: strconv.Itoa(magnet.Id),
Name: magnet.Filename,
Bytes: magnet.Size,
Status: getAlldebridStatus(magnet.StatusCode),
Filename: magnet.Filename,
OriginalFilename: magnet.Filename,
Files: make(map[string]types.File),
InfoHash: magnet.Hash,
Debrid: ad.name,
MountPath: ad.MountPath,
Added: time.Unix(magnet.CompletionDate, 0).Format(time.RFC3339),
})
}
return torrents, nil
}
func (ad *AllDebrid) RefreshDownloadLinks() error {
return nil
}
func (ad *AllDebrid) GetDownloadingStatus() []string {
return []string{"downloading"}
}
func (ad *AllDebrid) GetDownloadUncached() bool {
return ad.DownloadUncached
}
func (ad *AllDebrid) CheckLink(link string) error {
return nil
}
func (ad *AllDebrid) GetMountPath() string {
return ad.MountPath
}
func (ad *AllDebrid) GetAvailableSlots() (int, error) {
// This function is a placeholder for AllDebrid
//TODO: Implement the logic to check available slots for AllDebrid
return 0, fmt.Errorf("GetAvailableSlots not implemented for AllDebrid")
}
func (ad *AllDebrid) GetProfile() (*types.Profile, error) {
if ad.Profile != nil {
return ad.Profile, nil
}
url := fmt.Sprintf("%s/user", ad.Host)
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
return nil, err
}
resp, err := ad.client.MakeRequest(req)
if err != nil {
return nil, err
}
var res UserProfileResponse
err = json.Unmarshal(resp, &res)
if err != nil {
ad.logger.Error().Err(err).Msgf("Error unmarshalling user profile")
return nil, err
}
if res.Status != "success" {
message := "unknown error"
if res.Error != nil {
message = res.Error.Message
}
return nil, fmt.Errorf("error getting user profile: %s", message)
}
userData := res.Data.User
expiration := time.Unix(userData.PremiumUntil, 0)
profile := &types.Profile{
Id: 1,
Name: ad.name,
Username: userData.Username,
Email: userData.Email,
Points: userData.FidelityPoints,
Premium: userData.PremiumUntil,
Expiration: expiration,
}
if userData.IsPremium {
profile.Type = "premium"
} else if userData.IsTrial {
profile.Type = "trial"
} else {
profile.Type = "free"
}
ad.Profile = profile
return profile, nil
}
func (ad *AllDebrid) AccountManager() *account.Manager {
return ad.accountsManager
}
func (ad *AllDebrid) SyncAccounts() error {
return nil
}
func (ad *AllDebrid) DeleteDownloadLink(account *account.Account, downloadLink types.DownloadLink) error {
account.DeleteDownloadLink(downloadLink.Link)
return nil
}


@@ -0,0 +1,133 @@
package alldebrid
import (
"encoding/json"
"fmt"
)
type errorResponse struct {
Code string `json:"code"`
Message string `json:"message"`
}
type MagnetFile struct {
Name string `json:"n"`
Size int64 `json:"s"`
Link string `json:"l"`
Elements []MagnetFile `json:"e"`
}
type magnetInfo struct {
Id int `json:"id"`
Filename string `json:"filename"`
Size int64 `json:"size"`
Hash string `json:"hash"`
Status string `json:"status"`
StatusCode int `json:"statusCode"`
UploadDate int64 `json:"uploadDate"`
Downloaded int64 `json:"downloaded"`
Uploaded int64 `json:"uploaded"`
DownloadSpeed int64 `json:"downloadSpeed"`
UploadSpeed int64 `json:"uploadSpeed"`
Seeders int `json:"seeders"`
CompletionDate int64 `json:"completionDate"`
Type string `json:"type"`
Notified bool `json:"notified"`
Version int `json:"version"`
NbLinks int `json:"nbLinks"`
Files []MagnetFile `json:"files"`
}
type Magnets []magnetInfo
type TorrentInfoResponse struct {
Status string `json:"status"`
Data struct {
Magnets magnetInfo `json:"magnets"`
} `json:"data"`
Error *errorResponse `json:"error"`
}
type TorrentsListResponse struct {
Status string `json:"status"`
Data struct {
Magnets Magnets `json:"magnets"`
} `json:"data"`
Error *errorResponse `json:"error"`
}
type UploadMagnetResponse struct {
Status string `json:"status"`
Data struct {
Magnets []struct {
Magnet string `json:"magnet"`
Hash string `json:"hash"`
Name string `json:"name"`
FilenameOriginal string `json:"filename_original"`
Size int64 `json:"size"`
Ready bool `json:"ready"`
ID int `json:"id"`
} `json:"magnets"`
}
Error *errorResponse `json:"error"`
}
type DownloadLink struct {
Status string `json:"status"`
Data struct {
Link string `json:"link"`
Host string `json:"host"`
Filename string `json:"filename"`
Streaming []interface{} `json:"streaming"`
Paws bool `json:"paws"`
Filesize int `json:"filesize"`
Id string `json:"id"`
Path []struct {
Name string `json:"n"`
Size int `json:"s"`
} `json:"path"`
} `json:"data"`
Error *errorResponse `json:"error"`
}
// UnmarshalJSON implements custom unmarshaling for Magnets type
// It can handle both an array of magnetInfo objects or a map with string keys.
// If the input is an array, it will be unmarshaled directly into the Magnets slice.
// If the input is a map, it will extract the values and append them to the Magnets slice.
// If the input is neither, it will return an error.
func (m *Magnets) UnmarshalJSON(data []byte) error {
// Try to unmarshal as array
var arr []magnetInfo
if err := json.Unmarshal(data, &arr); err == nil {
*m = arr
return nil
}
// Try to unmarshal as map
var obj map[string]magnetInfo
if err := json.Unmarshal(data, &obj); err == nil {
for _, v := range obj {
*m = append(*m, v)
}
return nil
}
return fmt.Errorf("magnets: unsupported JSON format")
}
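// Illustrative sketch (not part of the original change): both payload shapes
// AllDebrid is known to return decode into the same slice.
//
// var fromArray, fromMap Magnets
// _ = json.Unmarshal([]byte(`[{"id":1}]`), &fromArray) // array form
// _ = json.Unmarshal([]byte(`{"0":{"id":1}}`), &fromMap) // keyed-object form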
type UserProfileResponse struct {
Status string `json:"status"`
Error *errorResponse `json:"error"`
Data struct {
User struct {
Username string `json:"username"`
Email string `json:"email"`
IsPremium bool `json:"isPremium"`
IsSubscribed bool `json:"isSubscribed"`
IsTrial bool `json:"isTrial"`
PremiumUntil int64 `json:"premiumUntil"`
Lang string `json:"lang"`
FidelityPoints int `json:"fidelityPoints"`
LimitedHostersQuotas map[string]int `json:"limitedHostersQuotas"`
Notifications []string `json:"notifications"`
} `json:"user"`
} `json:"data"`
}


@@ -0,0 +1,525 @@
package debridlink
import (
"bytes"
"encoding/json"
"fmt"
"time"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/account"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"go.uber.org/ratelimit"
"net/http"
"strings"
)
type DebridLink struct {
name string
Host string `json:"host"`
APIKey string
accountsManager *account.Manager
DownloadUncached bool
client *request.Client
autoExpiresLinksAfter time.Duration
MountPath string
logger zerolog.Logger
checkCached bool
addSamples bool
Profile *types.Profile `json:"profile,omitempty"`
}
func New(dc config.Debrid, ratelimits map[string]ratelimit.Limiter) (*DebridLink, error) {
headers := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
"Content-Type": "application/json",
}
_log := logger.New(dc.Name)
client := request.New(
request.WithHeaders(headers),
request.WithLogger(_log),
request.WithRateLimiter(ratelimits["main"]),
request.WithProxy(dc.Proxy),
)
autoExpiresLinksAfter, err := time.ParseDuration(dc.AutoExpireLinksAfter)
if autoExpiresLinksAfter == 0 || err != nil {
autoExpiresLinksAfter = 48 * time.Hour
}
return &DebridLink{
name: "debridlink",
Host: "https://debrid-link.com/api/v2",
APIKey: dc.APIKey,
accountsManager: account.NewManager(dc, ratelimits["download"], _log),
DownloadUncached: dc.DownloadUncached,
autoExpiresLinksAfter: autoExpiresLinksAfter,
client: client,
MountPath: dc.Folder,
logger: logger.New(dc.Name),
checkCached: dc.CheckCached,
addSamples: dc.AddSamples,
}, nil
}
func (dl *DebridLink) Name() string {
return dl.name
}
func (dl *DebridLink) Logger() zerolog.Logger {
return dl.logger
}
func (dl *DebridLink) IsAvailable(hashes []string) map[string]bool {
// Check if the infohashes are available in the local cache
result := make(map[string]bool)
// Divide hashes into groups of 100
for i := 0; i < len(hashes); i += 100 {
end := i + 100
if end > len(hashes) {
end = len(hashes)
}
// Filter out empty strings
validHashes := make([]string, 0, end-i)
for _, hash := range hashes[i:end] {
if hash != "" {
validHashes = append(validHashes, hash)
}
}
// If no valid hashes in this batch, continue to the next batch
if len(validHashes) == 0 {
continue
}
hashStr := strings.Join(validHashes, ",")
url := fmt.Sprintf("%s/seedbox/cached/%s", dl.Host, hashStr)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := dl.client.MakeRequest(req)
if err != nil {
dl.logger.Error().Err(err).Msgf("Error checking availability")
return result
}
var data AvailableResponse
err = json.Unmarshal(resp, &data)
if err != nil {
dl.logger.Error().Err(err).Msgf("Error marshalling availability")
return result
}
if data.Value == nil {
return result
}
value := *data.Value
for _, h := range hashes[i:end] {
_, exists := value[h]
if exists {
result[h] = true
}
}
}
return result
}
func (dl *DebridLink) GetTorrent(torrentId string) (*types.Torrent, error) {
url := fmt.Sprintf("%s/seedbox/%s", dl.Host, torrentId)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := dl.client.MakeRequest(req)
if err != nil {
return nil, err
}
var res torrentInfo
err = json.Unmarshal(resp, &res)
if err != nil {
return nil, err
}
if !res.Success || res.Value == nil {
return nil, fmt.Errorf("error getting torrent")
}
data := *res.Value
if len(data) == 0 {
return nil, fmt.Errorf("torrent not found")
}
t := data[0]
name := utils.RemoveInvalidChars(t.Name)
torrent := &types.Torrent{
Id: t.ID,
Name: name,
Bytes: t.TotalSize,
Status: "downloaded",
Filename: name,
OriginalFilename: name,
Files: make(map[string]types.File), // must be initialized before files are added below
MountPath: dl.MountPath,
Debrid: dl.name,
Added: time.Unix(t.Created, 0).Format(time.RFC3339),
}
cfg := config.Get()
for _, f := range t.Files {
if !cfg.IsSizeAllowed(f.Size) {
continue
}
file := types.File{
TorrentId: t.ID,
Id: f.ID,
Name: f.Name,
Size: f.Size,
Path: f.Name,
Link: f.DownloadURL,
}
torrent.Files[file.Name] = file
}
return torrent, nil
}
func (dl *DebridLink) UpdateTorrent(t *types.Torrent) error {
url := fmt.Sprintf("%s/seedbox/list?ids=%s", dl.Host, t.Id)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := dl.client.MakeRequest(req)
if err != nil {
return err
}
var res torrentInfo
err = json.Unmarshal(resp, &res)
if err != nil {
return err
}
if !res.Success {
return fmt.Errorf("error getting torrent")
}
if res.Value == nil {
return fmt.Errorf("torrent not found")
}
dt := *res.Value
if len(dt) == 0 {
return fmt.Errorf("torrent not found")
}
data := dt[0]
status := "downloading"
if data.Status == 100 {
status = "downloaded"
}
name := utils.RemoveInvalidChars(data.Name)
t.Id = data.ID
t.Name = name
t.Bytes = data.TotalSize
t.Folder = name
t.Progress = data.DownloadPercent
t.Status = status
t.Speed = data.DownloadSpeed
t.Seeders = data.PeersConnected
t.Filename = name
t.OriginalFilename = name
t.Added = time.Unix(data.Created, 0).Format(time.RFC3339)
cfg := config.Get()
now := time.Now()
for _, f := range data.Files {
if !cfg.IsSizeAllowed(f.Size) {
continue
}
file := types.File{
TorrentId: t.Id,
Id: f.ID,
Name: f.Name,
Size: f.Size,
Path: f.Name,
Link: f.DownloadURL,
}
link := types.DownloadLink{
Token: dl.APIKey,
Filename: f.Name,
Link: f.DownloadURL,
DownloadLink: f.DownloadURL,
Generated: now,
ExpiresAt: now.Add(dl.autoExpiresLinksAfter),
}
file.DownloadLink = link
t.Files[f.Name] = file
dl.accountsManager.StoreDownloadLink(link)
}
return nil
}
func (dl *DebridLink) SubmitMagnet(t *types.Torrent) (*types.Torrent, error) {
url := fmt.Sprintf("%s/seedbox/add", dl.Host)
payload := map[string]string{"url": t.Magnet.Link}
jsonPayload, _ := json.Marshal(payload)
req, _ := http.NewRequest(http.MethodPost, url, bytes.NewBuffer(jsonPayload))
resp, err := dl.client.MakeRequest(req)
if err != nil {
return nil, err
}
var res SubmitTorrentInfo
err = json.Unmarshal(resp, &res)
if err != nil {
return nil, err
}
if !res.Success || res.Value == nil {
return nil, fmt.Errorf("error adding torrent")
}
data := *res.Value
status := "downloading"
name := utils.RemoveInvalidChars(data.Name)
t.Id = data.ID
t.Name = name
t.Bytes = data.TotalSize
t.Folder = name
t.Progress = data.DownloadPercent
t.Status = status
t.Speed = data.DownloadSpeed
t.Seeders = data.PeersConnected
t.Filename = name
t.OriginalFilename = name
t.MountPath = dl.MountPath
t.Debrid = dl.name
t.Added = time.Unix(data.Created, 0).Format(time.RFC3339)
now := time.Now()
for _, f := range data.Files {
file := types.File{
TorrentId: t.Id,
Id: f.ID,
Name: f.Name,
Size: f.Size,
Path: f.Name,
Link: f.DownloadURL,
}
link := types.DownloadLink{
Token: dl.APIKey,
Filename: f.Name,
Link: f.DownloadURL,
DownloadLink: f.DownloadURL,
Generated: now,
ExpiresAt: now.Add(dl.autoExpiresLinksAfter),
}
file.DownloadLink = link
t.Files[f.Name] = file
dl.accountsManager.StoreDownloadLink(link)
}
return t, nil
}
func (dl *DebridLink) CheckStatus(torrent *types.Torrent) (*types.Torrent, error) {
for {
err := dl.UpdateTorrent(torrent)
if err != nil || torrent == nil {
return torrent, err
}
status := torrent.Status
if status == "downloaded" {
dl.logger.Info().Msgf("Torrent: %s downloaded", torrent.Name)
return torrent, nil
} else if utils.Contains(dl.GetDownloadingStatus(), status) {
if !torrent.DownloadUncached {
return torrent, fmt.Errorf("torrent: %s not cached", torrent.Name)
}
// Break out of the loop if the torrent is downloading.
// This is necessary to prevent infinite loop since we moved to sync downloading and async processing
return torrent, nil
} else {
return torrent, fmt.Errorf("torrent: %s has error", torrent.Name)
}
}
}
func (dl *DebridLink) DeleteTorrent(torrentId string) error {
url := fmt.Sprintf("%s/seedbox/%s/remove", dl.Host, torrentId)
req, _ := http.NewRequest(http.MethodDelete, url, nil)
if _, err := dl.client.MakeRequest(req); err != nil {
return err
}
dl.logger.Info().Msgf("Torrent: %s deleted from DebridLink", torrentId)
return nil
}
func (dl *DebridLink) GetFileDownloadLinks(t *types.Torrent) error {
// Download links are already generated
return nil
}
func (dl *DebridLink) RefreshDownloadLinks() error {
return nil
}
func (dl *DebridLink) GetDownloadLink(t *types.Torrent, file *types.File) (types.DownloadLink, error) {
return dl.accountsManager.GetDownloadLink(file.Link)
}
func (dl *DebridLink) GetDownloadingStatus() []string {
return []string{"downloading"}
}
func (dl *DebridLink) GetDownloadUncached() bool {
return dl.DownloadUncached
}
func (dl *DebridLink) GetTorrents() ([]*types.Torrent, error) {
page := 0
perPage := 100
torrents := make([]*types.Torrent, 0)
for {
t, err := dl.getTorrents(page, perPage)
if err != nil {
break
}
if len(t) == 0 {
break
}
torrents = append(torrents, t...)
page++
}
return torrents, nil
}
func (dl *DebridLink) getTorrents(page, perPage int) ([]*types.Torrent, error) {
url := fmt.Sprintf("%s/seedbox/list?page=%d&perPage=%d", dl.Host, page, perPage)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := dl.client.MakeRequest(req)
torrents := make([]*types.Torrent, 0)
if err != nil {
return torrents, err
}
var res torrentInfo
err = json.Unmarshal(resp, &res)
if err != nil {
dl.logger.Error().Err(err).Msgf("Error unmarshalling torrent info")
return torrents, err
}
if !res.Success || res.Value == nil {
return torrents, fmt.Errorf("error listing torrents")
}
data := *res.Value
if len(data) == 0 {
return torrents, nil
}
for _, t := range data {
if t.Status != 100 {
continue
}
torrent := &types.Torrent{
Id: t.ID,
Name: t.Name,
Bytes: t.TotalSize,
Status: "downloaded",
Filename: t.Name,
OriginalFilename: t.Name,
InfoHash: t.HashString,
Files: make(map[string]types.File),
Debrid: dl.name,
MountPath: dl.MountPath,
Added: time.Unix(t.Created, 0).Format(time.RFC3339),
}
cfg := config.Get()
now := time.Now()
for _, f := range t.Files {
if !cfg.IsSizeAllowed(f.Size) {
continue
}
file := types.File{
TorrentId: torrent.Id,
Id: f.ID,
Name: f.Name,
Size: f.Size,
Path: f.Name,
Link: f.DownloadURL,
}
link := types.DownloadLink{
Token: dl.APIKey,
Filename: f.Name,
Link: f.DownloadURL,
DownloadLink: f.DownloadURL,
Generated: now,
ExpiresAt: now.Add(dl.autoExpiresLinksAfter),
}
file.DownloadLink = link
torrent.Files[f.Name] = file
dl.accountsManager.StoreDownloadLink(link)
}
torrents = append(torrents, torrent)
}
return torrents, nil
}
func (dl *DebridLink) CheckLink(link string) error {
return nil
}
func (dl *DebridLink) GetMountPath() string {
return dl.MountPath
}
func (dl *DebridLink) GetAvailableSlots() (int, error) {
//TODO: Implement the logic to check available slots for DebridLink
return 0, fmt.Errorf("GetAvailableSlots not implemented for DebridLink")
}
func (dl *DebridLink) GetProfile() (*types.Profile, error) {
if dl.Profile != nil {
return dl.Profile, nil
}
url := fmt.Sprintf("%s/account/infos", dl.Host)
req, err := http.NewRequest(http.MethodGet, url, nil)
if err != nil {
return nil, err
}
resp, err := dl.client.MakeRequest(req)
if err != nil {
return nil, err
}
var res UserInfo
err = json.Unmarshal(resp, &res)
if err != nil {
dl.logger.Error().Err(err).Msgf("Error unmarshalling user info")
return nil, err
}
if !res.Success || res.Value == nil {
return nil, fmt.Errorf("error getting user info")
}
data := *res.Value
expiration := time.Unix(data.PremiumLeft, 0)
profile := &types.Profile{
Id: 1,
Username: data.Username,
Name: dl.name,
Email: data.Email,
Points: data.Points,
Premium: data.PremiumLeft,
Expiration: expiration,
}
if data.PremiumLeft <= 0 {
// time.Unix(PremiumLeft, 0) is never the zero Time, so an IsZero() check here
// would never fire; fall back explicitly when no premium time is reported.
profile.Expiration = time.Now().AddDate(1, 0, 0) // Default to 1 year if no expiration
}
if data.PremiumLeft > 0 {
profile.Type = "premium"
} else {
profile.Type = "free"
}
dl.Profile = profile
return profile, nil
}
func (dl *DebridLink) AccountManager() *account.Manager {
return dl.accountsManager
}
func (dl *DebridLink) SyncAccounts() error {
return nil
}
func (dl *DebridLink) DeleteDownloadLink(account *account.Account, downloadLink types.DownloadLink) error {
account.DeleteDownloadLink(downloadLink.Link)
return nil
}


@@ -0,0 +1,54 @@
package debridlink
type APIResponse[T any] struct {
Success bool `json:"success"`
Value *T `json:"value"` // Use pointer to allow nil
}
type AvailableResponse APIResponse[map[string]map[string]struct {
Name string `json:"name"`
HashString string `json:"hashString"`
Files []struct {
Name string `json:"name"`
Size int `json:"size"`
} `json:"files"`
}]
type _torrentInfo struct {
ID string `json:"id"`
Name string `json:"name"`
HashString string `json:"hashString"`
UploadRatio float64 `json:"uploadRatio"`
ServerID string `json:"serverId"`
Wait bool `json:"wait"`
PeersConnected int `json:"peersConnected"`
Status int `json:"status"`
TotalSize int64 `json:"totalSize"`
Files []struct {
ID string `json:"id"`
Name string `json:"name"`
DownloadURL string `json:"downloadUrl"`
Size int64 `json:"size"`
DownloadPercent int `json:"downloadPercent"`
} `json:"files"`
Trackers []struct {
Announce string `json:"announce"`
} `json:"trackers"`
Created int64 `json:"created"`
DownloadPercent float64 `json:"downloadPercent"`
DownloadSpeed int64 `json:"downloadSpeed"`
UploadSpeed int64 `json:"uploadSpeed"`
}
type torrentInfo APIResponse[[]_torrentInfo]
type SubmitTorrentInfo APIResponse[_torrentInfo]
type UserInfo APIResponse[struct {
Username string `json:"username"`
Email string `json:"email"`
AccountType int `json:"accountType"`
PremiumLeft int64 `json:"premiumLeft"`
Points int `json:"pts"`
Trafficshare int `json:"trafficshare"`
}]
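// Illustrative sketch (not part of the original change): every Debrid-Link payload
// decodes through APIResponse[T]; the pointer Value distinguishes an absent body
// from a zero value. A hypothetical decode:
//
// var res UserInfo
// if err := json.Unmarshal(body, &res); err == nil && res.Success && res.Value != nil {
// fmt.Println(res.Value.Username)
// }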


@@ -0,0 +1 @@
package realdebrid

File diff suppressed because it is too large


@@ -1,13 +1,14 @@
package realdebrid
import (
"encoding/json"
"fmt"
"time"
)
type AvailabilityResponse map[string]Hoster
func (r *AvailabilityResponse) UnmarshalJSON(data []byte) error {
// First, try to unmarshal as an object
var objectData map[string]Hoster
err := json.Unmarshal(data, &objectData)
@@ -64,18 +65,18 @@ type FileVariant struct {
Filesize int `json:"filesize"`
}
type AddMagnetSchema struct {
Id string `json:"id"`
Uri string `json:"uri"`
}
type torrentInfo struct {
ID string `json:"id"`
Filename string `json:"filename"`
OriginalFilename string `json:"original_filename"`
Hash string `json:"hash"`
Bytes int64 `json:"bytes"`
OriginalBytes int64 `json:"original_bytes"`
Host string `json:"host"`
Split int `json:"split"`
Progress float64 `json:"progress"`
@@ -84,7 +85,7 @@ type RealDebridTorrentInfo struct {
Files []struct {
ID int `json:"id"`
Path string `json:"path"`
Bytes int64 `json:"bytes"`
Selected int `json:"selected"`
} `json:"files"`
Links []string `json:"links"`
@@ -93,15 +94,72 @@ type RealDebridTorrentInfo struct {
Seeders int `json:"seeders,omitempty"`
}
type UnrestrictResponse struct {
Id string `json:"id"`
Filename string `json:"filename"`
MimeType string `json:"mimeType"`
Filesize int64 `json:"filesize"`
Link string `json:"link"`
Host string `json:"host"`
Chunks int `json:"chunks"`
Crc int `json:"crc"`
Download string `json:"download"`
Streamable int `json:"streamable"`
}
type TorrentsResponse struct {
Id string `json:"id"`
Filename string `json:"filename"`
Hash string `json:"hash"`
Bytes int64 `json:"bytes"`
Host string `json:"host"`
Split int64 `json:"split"`
Progress float64 `json:"progress"`
Status string `json:"status"`
Added time.Time `json:"added"`
Links []string `json:"links"`
Ended time.Time `json:"ended"`
}
type DownloadsResponse struct {
Id string `json:"id"`
Filename string `json:"filename"`
MimeType string `json:"mimeType"`
Filesize int64 `json:"filesize"`
Link string `json:"link"`
Host string `json:"host"`
HostIcon string `json:"host_icon"`
Chunks int64 `json:"chunks"`
Download string `json:"download"`
Streamable int `json:"streamable"`
Generated time.Time `json:"generated"`
}
type ErrorResponse struct {
Error string `json:"error"`
ErrorCode int `json:"error_code"`
}
type profileResponse struct {
Id int64 `json:"id"`
Username string `json:"username"`
Email string `json:"email"`
Points int `json:"points"`
Locale string `json:"locale"`
Avatar string `json:"avatar"`
Type string `json:"type"`
Premium int64 `json:"premium"`
Expiration time.Time `json:"expiration"`
}
type AvailableSlotsResponse struct {
ActiveSlots int `json:"nb"`
TotalSlots int `json:"limit"`
}
type hostData struct {
Host map[string]int64 `json:"host"`
Bytes int64 `json:"bytes"`
}
type TrafficResponse map[string]hostData


@@ -0,0 +1,661 @@
package torbox
import (
"bytes"
"encoding/json"
"fmt"
"mime/multipart"
"net/http"
gourl "net/url"
"path"
"path/filepath"
"runtime"
"strconv"
"strings"
"sync"
"time"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/account"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"github.com/sirrobot01/decypharr/pkg/version"
"go.uber.org/ratelimit"
)
type Torbox struct {
name string
Host string `json:"host"`
APIKey string
accountsManager *account.Manager
autoExpiresLinksAfter time.Duration
DownloadUncached bool
client *request.Client
MountPath string
logger zerolog.Logger
checkCached bool
addSamples bool
}
func New(dc config.Debrid, ratelimits map[string]ratelimit.Limiter) (*Torbox, error) {
headers := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
"User-Agent": fmt.Sprintf("Decypharr/%s (%s; %s)", version.GetInfo(), runtime.GOOS, runtime.GOARCH),
}
_log := logger.New(dc.Name)
client := request.New(
request.WithHeaders(headers),
request.WithRateLimiter(ratelimits["main"]),
request.WithLogger(_log),
request.WithProxy(dc.Proxy),
)
autoExpiresLinksAfter, err := time.ParseDuration(dc.AutoExpireLinksAfter)
if autoExpiresLinksAfter == 0 || err != nil {
autoExpiresLinksAfter = 48 * time.Hour
}
return &Torbox{
name: "torbox",
Host: "https://api.torbox.app/v1",
APIKey: dc.APIKey,
accountsManager: account.NewManager(dc, ratelimits["download"], _log),
DownloadUncached: dc.DownloadUncached,
autoExpiresLinksAfter: autoExpiresLinksAfter,
client: client,
MountPath: dc.Folder,
logger: _log,
checkCached: dc.CheckCached,
addSamples: dc.AddSamples,
}, nil
}
func (tb *Torbox) Name() string {
return tb.name
}
func (tb *Torbox) Logger() zerolog.Logger {
return tb.logger
}
func (tb *Torbox) IsAvailable(hashes []string) map[string]bool {
// Check if the infohashes are available in the local cache
result := make(map[string]bool)
// Divide hashes into groups of 100
for i := 0; i < len(hashes); i += 100 {
end := i + 100
if end > len(hashes) {
end = len(hashes)
}
// Filter out empty strings
validHashes := make([]string, 0, end-i)
for _, hash := range hashes[i:end] {
if hash != "" {
validHashes = append(validHashes, hash)
}
}
// If no valid hashes in this batch, continue to the next batch
if len(validHashes) == 0 {
continue
}
hashStr := strings.Join(validHashes, ",")
url := fmt.Sprintf("%s/api/torrents/checkcached?hash=%s", tb.Host, hashStr)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := tb.client.MakeRequest(req)
if err != nil {
tb.logger.Error().Err(err).Msgf("Error checking availability")
return result
}
var res AvailableResponse
err = json.Unmarshal(resp, &res)
if err != nil {
tb.logger.Error().Err(err).Msgf("Error marshalling availability")
return result
}
if res.Data == nil {
return result
}
for h, c := range *res.Data {
if c.Size > 0 {
result[strings.ToUpper(h)] = true
}
}
}
return result
}
func (tb *Torbox) SubmitMagnet(torrent *types.Torrent) (*types.Torrent, error) {
url := fmt.Sprintf("%s/api/torrents/createtorrent", tb.Host)
payload := &bytes.Buffer{}
writer := multipart.NewWriter(payload)
_ = writer.WriteField("magnet", torrent.Magnet.Link)
if !torrent.DownloadUncached {
_ = writer.WriteField("add_only_if_cached", "true")
}
err := writer.Close()
if err != nil {
return nil, err
}
req, _ := http.NewRequest(http.MethodPost, url, payload)
req.Header.Set("Content-Type", writer.FormDataContentType())
resp, err := tb.client.MakeRequest(req)
if err != nil {
return nil, err
}
var data AddMagnetResponse
err = json.Unmarshal(resp, &data)
if err != nil {
return nil, err
}
if data.Data == nil {
return nil, fmt.Errorf("error adding torrent")
}
dt := *data.Data
torrentId := strconv.Itoa(dt.Id)
torrent.Id = torrentId
torrent.MountPath = tb.MountPath
torrent.Debrid = tb.name
torrent.Added = time.Now().Format(time.RFC3339)
return torrent, nil
}
func (tb *Torbox) getTorboxStatus(status string, finished bool) string {
if finished {
return "downloaded"
}
downloading := []string{"completed", "cached", "paused", "downloading", "uploading",
"checkingResumeData", "metaDL", "pausedUP", "queuedUP", "checkingUP",
"forcedUP", "allocating", "pausedDL", "queuedDL", "checkingDL", "forcedDL", "moving"}
var determinedStatus string
switch {
case utils.Contains(downloading, status):
determinedStatus = "downloading"
default:
determinedStatus = "error"
}
return determinedStatus
}
func (tb *Torbox) GetTorrent(torrentId string) (*types.Torrent, error) {
url := fmt.Sprintf("%s/api/torrents/mylist/?id=%s", tb.Host, torrentId)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := tb.client.MakeRequest(req)
if err != nil {
return nil, err
}
var res InfoResponse
err = json.Unmarshal(resp, &res)
if err != nil {
return nil, err
}
data := res.Data
if data == nil {
return nil, fmt.Errorf("error getting torrent")
}
t := &types.Torrent{
Id: strconv.Itoa(data.Id),
Name: data.Name,
Bytes: data.Size,
Folder: data.Name,
Progress: data.Progress * 100,
Status: tb.getTorboxStatus(data.DownloadState, data.DownloadFinished),
Speed: data.DownloadSpeed,
Seeders: data.Seeds,
Filename: data.Name,
OriginalFilename: data.Name,
MountPath: tb.MountPath,
Debrid: tb.name,
Files: make(map[string]types.File),
Added: data.CreatedAt.Format(time.RFC3339),
}
cfg := config.Get()
totalFiles := 0
skippedSamples := 0
skippedFileType := 0
skippedSize := 0
validFiles := 0
filesWithLinks := 0
for _, f := range data.Files {
totalFiles++
fileName := filepath.Base(f.Name)
if !tb.addSamples && utils.IsSampleFile(f.AbsolutePath) {
skippedSamples++
continue
}
if !cfg.IsAllowedFile(fileName) {
skippedFileType++
continue
}
if !cfg.IsSizeAllowed(f.Size) {
skippedSize++
continue
}
validFiles++
file := types.File{
TorrentId: t.Id,
Id: strconv.Itoa(f.Id),
Name: fileName,
Size: f.Size,
Path: f.Name,
}
// For downloaded torrents, set a placeholder link to indicate file is available
if data.DownloadFinished {
file.Link = fmt.Sprintf("torbox://%s/%d", t.Id, f.Id)
filesWithLinks++
}
t.Files[fileName] = file
}
// Log a processing summary at debug level, including why files were skipped
tb.logger.Debug().
Str("torrent_id", t.Id).
Str("torrent_name", t.Name).
Bool("download_finished", data.DownloadFinished).
Str("status", t.Status).
Int("total_files", totalFiles).
Int("skipped_samples", skippedSamples).
Int("skipped_file_type", skippedFileType).
Int("skipped_size", skippedSize).
Int("valid_files", validFiles).
Int("files_with_links", filesWithLinks).
Int("final_file_count", len(t.Files)).
Msg("Torrent file processing completed")
var cleanPath string
if len(t.Files) > 0 {
cleanPath = path.Clean(data.Files[0].Name)
} else {
cleanPath = path.Clean(data.Name)
}
t.OriginalFilename = strings.Split(cleanPath, "/")[0]
t.Debrid = tb.name
return t, nil
}
func (tb *Torbox) UpdateTorrent(t *types.Torrent) error {
url := fmt.Sprintf("%s/api/torrents/mylist/?id=%s", tb.Host, t.Id)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := tb.client.MakeRequest(req)
if err != nil {
return err
}
var res InfoResponse
err = json.Unmarshal(resp, &res)
if err != nil {
return err
}
data := res.Data
if data == nil {
return fmt.Errorf("error getting torrent")
}
name := data.Name
t.Name = name
t.Bytes = data.Size
t.Folder = name
t.Progress = data.Progress * 100
t.Status = tb.getTorboxStatus(data.DownloadState, data.DownloadFinished)
t.Speed = data.DownloadSpeed
t.Seeders = data.Seeds
t.Filename = name
t.OriginalFilename = name
t.MountPath = tb.MountPath
t.Debrid = tb.name
// Clear existing files map to rebuild it
t.Files = make(map[string]types.File)
cfg := config.Get()
validFiles := 0
filesWithLinks := 0
for _, f := range data.Files {
fileName := filepath.Base(f.Name)
if !tb.addSamples && utils.IsSampleFile(f.AbsolutePath) {
continue
}
if !cfg.IsAllowedFile(fileName) {
continue
}
if !cfg.IsSizeAllowed(f.Size) {
continue
}
validFiles++
file := types.File{
TorrentId: t.Id,
Id: strconv.Itoa(f.Id),
Name: fileName,
Size: f.Size,
Path: f.Name,
}
// For downloaded torrents, set a placeholder link to indicate file is available
if data.DownloadFinished {
file.Link = fmt.Sprintf("torbox://%s/%s", t.Id, strconv.Itoa(f.Id))
filesWithLinks++
}
t.Files[fileName] = file
}
var cleanPath string
if len(t.Files) > 0 {
cleanPath = path.Clean(data.Files[0].Name)
} else {
cleanPath = path.Clean(data.Name)
}
t.OriginalFilename = strings.Split(cleanPath, "/")[0]
t.Debrid = tb.name
return nil
}
func (tb *Torbox) CheckStatus(torrent *types.Torrent) (*types.Torrent, error) {
for {
err := tb.UpdateTorrent(torrent)
if err != nil || torrent == nil {
return torrent, err
}
status := torrent.Status
if status == "downloaded" {
tb.logger.Info().Msgf("Torrent: %s downloaded", torrent.Name)
return torrent, nil
} else if utils.Contains(tb.GetDownloadingStatus(), status) {
if !torrent.DownloadUncached {
return torrent, fmt.Errorf("torrent: %s not cached", torrent.Name)
}
// Break out of the loop if the torrent is downloading.
// This is necessary to prevent infinite loop since we moved to sync downloading and async processing
return torrent, nil
} else {
return torrent, fmt.Errorf("torrent: %s has error", torrent.Name)
}
}
}
func (tb *Torbox) DeleteTorrent(torrentId string) error {
url := fmt.Sprintf("%s/api/torrents/controltorrent/%s", tb.Host, torrentId)
payload := map[string]string{"torrent_id": torrentId, "action": "Delete"}
jsonPayload, _ := json.Marshal(payload)
req, _ := http.NewRequest(http.MethodDelete, url, bytes.NewBuffer(jsonPayload))
if _, err := tb.client.MakeRequest(req); err != nil {
return err
}
tb.logger.Info().Msgf("Torrent %s deleted from Torbox", torrentId)
return nil
}
func (tb *Torbox) GetFileDownloadLinks(t *types.Torrent) error {
filesCh := make(chan types.File, len(t.Files))
// Buffer linkCh as well: nothing drains it before wg.Wait(), so an unbuffered
// channel would block every sender and deadlock the WaitGroup.
linkCh := make(chan types.DownloadLink, len(t.Files))
errCh := make(chan error, len(t.Files))
var wg sync.WaitGroup
wg.Add(len(t.Files))
for _, file := range t.Files {
go func(file types.File) {
defer wg.Done()
link, err := tb.GetDownloadLink(t, &file)
if err != nil {
errCh <- err
return
}
if link.DownloadLink != "" {
linkCh <- link
file.DownloadLink = link
}
filesCh <- file
}(file)
}
go func() {
wg.Wait()
close(filesCh)
close(linkCh)
close(errCh)
}()
// Collect results
files := make(map[string]types.File, len(t.Files))
for file := range filesCh {
files[file.Name] = file
}
// Check for errors
for err := range errCh {
if err != nil {
return err // Return the first error encountered
}
}
t.Files = files
return nil
}
func (tb *Torbox) GetDownloadLink(t *types.Torrent, file *types.File) (types.DownloadLink, error) {
url := fmt.Sprintf("%s/api/torrents/requestdl/", tb.Host)
query := gourl.Values{}
query.Add("torrent_id", t.Id)
query.Add("token", tb.APIKey)
query.Add("file_id", file.Id)
url += "?" + query.Encode()
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := tb.client.MakeRequest(req)
if err != nil {
tb.logger.Error().
Err(err).
Str("torrent_id", t.Id).
Str("file_id", file.Id).
Msg("Failed to make request to Torbox API")
return types.DownloadLink{}, err
}
var data DownloadLinksResponse
if err = json.Unmarshal(resp, &data); err != nil {
tb.logger.Error().
Err(err).
Str("torrent_id", t.Id).
Str("file_id", file.Id).
Msg("Failed to unmarshal Torbox API response")
return types.DownloadLink{}, err
}
if data.Data == nil {
tb.logger.Error().
Str("torrent_id", t.Id).
Str("file_id", file.Id).
Bool("success", data.Success).
Interface("error", data.Error).
Str("detail", data.Detail).
Msg("Torbox API returned no data")
return types.DownloadLink{}, fmt.Errorf("error getting download links")
}
link := *data.Data
if link == "" {
tb.logger.Error().
Str("torrent_id", t.Id).
Str("file_id", file.Id).
Msg("Torbox API returned empty download link")
return types.DownloadLink{}, fmt.Errorf("error getting download links")
}
now := time.Now()
dl := types.DownloadLink{
Token: tb.APIKey,
Link: file.Link,
DownloadLink: link,
Id: file.Id,
Generated: now,
ExpiresAt: now.Add(tb.autoExpiresLinksAfter),
}
tb.accountsManager.StoreDownloadLink(dl)
return dl, nil
}
func (tb *Torbox) GetDownloadingStatus() []string {
return []string{"downloading"}
}
func (tb *Torbox) GetTorrents() ([]*types.Torrent, error) {
offset := 0
allTorrents := make([]*types.Torrent, 0)
for {
torrents, err := tb.getTorrents(offset)
if err != nil {
break
}
if len(torrents) == 0 {
break
}
allTorrents = append(allTorrents, torrents...)
offset += len(torrents)
}
return allTorrents, nil
}
func (tb *Torbox) getTorrents(offset int) ([]*types.Torrent, error) {
url := fmt.Sprintf("%s/api/torrents/mylist?offset=%d", tb.Host, offset)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := tb.client.MakeRequest(req)
if err != nil {
return nil, err
}
var res TorrentsListResponse
err = json.Unmarshal(resp, &res)
if err != nil {
return nil, err
}
if !res.Success || res.Data == nil {
return nil, fmt.Errorf("torbox API error: %v", res.Error)
}
torrents := make([]*types.Torrent, 0, len(*res.Data))
cfg := config.Get()
for _, data := range *res.Data {
t := &types.Torrent{
Id: strconv.Itoa(data.Id),
Name: data.Name,
Bytes: data.Size,
Folder: data.Name,
Progress: data.Progress * 100,
Status: tb.getTorboxStatus(data.DownloadState, data.DownloadFinished),
Speed: data.DownloadSpeed,
Seeders: data.Seeds,
Filename: data.Name,
OriginalFilename: data.Name,
MountPath: tb.MountPath,
Debrid: tb.name,
Files: make(map[string]types.File),
Added: data.CreatedAt.Format(time.RFC3339),
InfoHash: data.Hash,
}
// Process files
for _, f := range data.Files {
fileName := filepath.Base(f.Name)
if !tb.addSamples && utils.IsSampleFile(f.AbsolutePath) {
// Skip sample files
continue
}
if !cfg.IsAllowedFile(fileName) {
continue
}
if !cfg.IsSizeAllowed(f.Size) {
continue
}
file := types.File{
TorrentId: t.Id,
Id: strconv.Itoa(f.Id),
Name: fileName,
Size: f.Size,
Path: f.Name,
}
// For downloaded torrents, set a placeholder link to indicate file is available
if data.DownloadFinished {
file.Link = fmt.Sprintf("torbox://%s/%d", t.Id, f.Id)
}
t.Files[fileName] = file
}
// Set original filename based on first file or torrent name
var cleanPath string
if len(t.Files) > 0 {
cleanPath = path.Clean(data.Files[0].Name)
} else {
cleanPath = path.Clean(data.Name)
}
t.OriginalFilename = strings.Split(cleanPath, "/")[0]
torrents = append(torrents, t)
}
return torrents, nil
}
func (tb *Torbox) GetDownloadUncached() bool {
return tb.DownloadUncached
}
func (tb *Torbox) RefreshDownloadLinks() error {
return nil
}
func (tb *Torbox) CheckLink(link string) error {
return nil
}
func (tb *Torbox) GetMountPath() string {
return tb.MountPath
}
func (tb *Torbox) GetAvailableSlots() (int, error) {
//TODO: Implement the logic to check available slots for Torbox
return 0, fmt.Errorf("not implemented")
}
func (tb *Torbox) GetProfile() (*types.Profile, error) {
return nil, nil
}
func (tb *Torbox) AccountManager() *account.Manager {
return tb.accountsManager
}
func (tb *Torbox) SyncAccounts() error {
return nil
}
func (tb *Torbox) DeleteDownloadLink(account *account.Account, downloadLink types.DownloadLink) error {
account.DeleteDownloadLink(downloadLink.Link)
return nil
}


@@ -0,0 +1,77 @@
package torbox
import "time"
type APIResponse[T any] struct {
Success bool `json:"success"`
Error any `json:"error"`
Detail string `json:"detail"`
Data *T `json:"data"` // Use pointer to allow nil
}
type AvailableResponse APIResponse[map[string]struct {
Name string `json:"name"`
Size int `json:"size"`
Hash string `json:"hash"`
}]
type AddMagnetResponse APIResponse[struct {
Id int `json:"torrent_id"`
Hash string `json:"hash"`
}]
type torboxInfo struct {
Id int `json:"id"`
AuthId string `json:"auth_id"`
Server int `json:"server"`
Hash string `json:"hash"`
Name string `json:"name"`
Magnet interface{} `json:"magnet"`
Size int64 `json:"size"`
Active bool `json:"active"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
DownloadState string `json:"download_state"`
Seeds int `json:"seeds"`
Peers int `json:"peers"`
Ratio float64 `json:"ratio"`
Progress float64 `json:"progress"`
DownloadSpeed int64 `json:"download_speed"`
UploadSpeed int `json:"upload_speed"`
Eta int `json:"eta"`
TorrentFile bool `json:"torrent_file"`
ExpiresAt interface{} `json:"expires_at"`
DownloadPresent bool `json:"download_present"`
Files []struct {
Id int `json:"id"`
Md5 interface{} `json:"md5"`
Hash string `json:"hash"`
Name string `json:"name"`
Size int64 `json:"size"`
Zipped bool `json:"zipped"`
S3Path string `json:"s3_path"`
Infected bool `json:"infected"`
Mimetype string `json:"mimetype"`
ShortName string `json:"short_name"`
AbsolutePath string `json:"absolute_path"`
} `json:"files"`
DownloadPath string `json:"download_path"`
InactiveCheck int `json:"inactive_check"`
Availability float64 `json:"availability"`
DownloadFinished bool `json:"download_finished"`
Tracker interface{} `json:"tracker"`
TotalUploaded int `json:"total_uploaded"`
TotalDownloaded int `json:"total_downloaded"`
Cached bool `json:"cached"`
Owner string `json:"owner"`
SeedTorrent bool `json:"seed_torrent"`
AllowZipped bool `json:"allow_zipped"`
LongTermSeeding bool `json:"long_term_seeding"`
TrackerMessage interface{} `json:"tracker_message"`
}
type InfoResponse APIResponse[torboxInfo]
type DownloadLinksResponse APIResponse[string]
type TorrentsListResponse APIResponse[[]torboxInfo]


@@ -1,287 +0,0 @@
package debrid
import (
"encoding/json"
"fmt"
"goBlack/common"
"goBlack/pkg/debrid/structs"
"log"
"net/http"
gourl "net/url"
"os"
"path/filepath"
"strconv"
"strings"
)
type RealDebrid struct {
Host string `json:"host"`
APIKey string
DownloadUncached bool
client *common.RLHTTPClient
cache *common.Cache
MountPath string
logger *log.Logger
}
func (r *RealDebrid) GetMountPath() string {
return r.MountPath
}
func (r *RealDebrid) GetName() string {
return "realdebrid"
}
func (r *RealDebrid) GetLogger() *log.Logger {
return r.logger
}
func GetTorrentFiles(data structs.RealDebridTorrentInfo) []TorrentFile {
files := make([]TorrentFile, 0)
for _, f := range data.Files {
name := filepath.Base(f.Path)
if (!common.RegexMatch(common.VIDEOMATCH, name) &&
!common.RegexMatch(common.SUBMATCH, name) &&
!common.RegexMatch(common.MUSICMATCH, name)) || common.RegexMatch(common.SAMPLEMATCH, name) {
continue
}
fileId := f.ID
file := &TorrentFile{
Name: name,
Path: name,
Size: int64(f.Bytes),
Id: strconv.Itoa(fileId),
}
files = append(files, *file)
}
return files
}
func (r *RealDebrid) IsAvailable(infohashes []string) map[string]bool {
// Check if the infohashes are available in the local cache
hashes, result := GetLocalCache(infohashes, r.cache)
if len(hashes) == 0 {
// All remaining infohashes were found in the local cache
r.cache.AddMultiple(result)
return result
}
// Divide hashes into batches of 200
for i := 0; i < len(hashes); i += 200 {
end := i + 200
if end > len(hashes) {
end = len(hashes)
}
// Filter out empty strings
validHashes := make([]string, 0, end-i)
for _, hash := range hashes[i:end] {
if hash != "" {
validHashes = append(validHashes, hash)
}
}
// If no valid hashes in this batch, continue to the next batch
if len(validHashes) == 0 {
continue
}
hashStr := strings.Join(validHashes, "/")
url := fmt.Sprintf("%s/torrents/instantAvailability/%s", r.Host, hashStr)
resp, err := r.client.MakeRequest(http.MethodGet, url, nil)
if err != nil {
log.Println("Error checking availability:", err)
return result
}
var data structs.RealDebridAvailabilityResponse
err = json.Unmarshal(resp, &data)
if err != nil {
log.Println("Error marshalling availability:", err)
return result
}
for _, h := range hashes[i:end] {
hosters, exists := data[strings.ToLower(h)]
if exists && len(hosters.Rd) > 0 {
result[h] = true
}
}
}
r.cache.AddMultiple(result) // Add the results to the cache
return result
}
func (r *RealDebrid) SubmitMagnet(torrent *Torrent) (*Torrent, error) {
url := fmt.Sprintf("%s/torrents/addMagnet", r.Host)
payload := gourl.Values{
"magnet": {torrent.Magnet.Link},
}
var data structs.RealDebridAddMagnetSchema
resp, err := r.client.MakeRequest(http.MethodPost, url, strings.NewReader(payload.Encode()))
if err != nil {
return nil, err
}
err = json.Unmarshal(resp, &data)
log.Printf("Torrent: %s added with id: %s\n", torrent.Name, data.Id)
torrent.Id = data.Id
return torrent, nil
}
func (r *RealDebrid) GetTorrent(id string) (*Torrent, error) {
torrent := &Torrent{}
url := fmt.Sprintf("%s/torrents/info/%s", r.Host, id)
resp, err := r.client.MakeRequest(http.MethodGet, url, nil)
if err != nil {
return torrent, err
}
var data structs.RealDebridTorrentInfo
err = json.Unmarshal(resp, &data)
if err != nil {
return torrent, err
}
name := common.RemoveInvalidChars(data.OriginalFilename)
torrent.Id = id
torrent.Name = name
torrent.Bytes = data.Bytes
torrent.Folder = name
torrent.Progress = data.Progress
torrent.Status = data.Status
torrent.Speed = data.Speed
torrent.Seeders = data.Seeders
torrent.Filename = data.Filename
torrent.OriginalFilename = data.OriginalFilename
torrent.Links = data.Links
files := GetTorrentFiles(data)
torrent.Files = files
return torrent, nil
}
func (r *RealDebrid) CheckStatus(torrent *Torrent, isSymlink bool) (*Torrent, error) {
url := fmt.Sprintf("%s/torrents/info/%s", r.Host, torrent.Id)
for {
resp, err := r.client.MakeRequest(http.MethodGet, url, nil)
if err != nil {
log.Println("ERROR Checking file: ", err)
return torrent, err
}
var data structs.RealDebridTorrentInfo
err = json.Unmarshal(resp, &data)
status := data.Status
name := common.RemoveInvalidChars(data.OriginalFilename)
torrent.Name = name // Important because some magnet changes the name
torrent.Folder = name
torrent.Filename = data.Filename
torrent.OriginalFilename = data.OriginalFilename
torrent.Bytes = data.Bytes
torrent.Progress = data.Progress
torrent.Speed = data.Speed
torrent.Seeders = data.Seeders
torrent.Links = data.Links
torrent.Status = status
if status == "error" || status == "dead" || status == "magnet_error" {
return torrent, fmt.Errorf("torrent: %s has error", torrent.Name)
} else if status == "waiting_files_selection" {
files := GetTorrentFiles(data)
torrent.Files = files
if len(files) == 0 {
return torrent, fmt.Errorf("no video files found")
}
filesId := make([]string, 0)
for _, f := range files {
filesId = append(filesId, f.Id)
}
p := gourl.Values{
"files": {strings.Join(filesId, ",")},
}
payload := strings.NewReader(p.Encode())
_, err = r.client.MakeRequest(http.MethodPost, fmt.Sprintf("%s/torrents/selectFiles/%s", r.Host, torrent.Id), payload)
if err != nil {
return torrent, err
}
} else if status == "downloaded" {
files := GetTorrentFiles(data)
torrent.Files = files
log.Printf("Torrent: %s downloaded to RD\n", torrent.Name)
if !isSymlink {
err = r.GetDownloadLinks(torrent)
if err != nil {
return torrent, err
}
}
break
} else if status == "downloading" {
if !r.DownloadUncached {
go r.DeleteTorrent(torrent)
return torrent, fmt.Errorf("torrent: %s not cached", torrent.Name)
}
// Break out of the loop if the torrent is still downloading.
// This prevents an infinite loop now that downloading is synchronous and processing is asynchronous.
break
}
}
return torrent, nil
}
func (r *RealDebrid) DeleteTorrent(torrent *Torrent) {
url := fmt.Sprintf("%s/torrents/delete/%s", r.Host, torrent.Id)
_, err := r.client.MakeRequest(http.MethodDelete, url, nil)
if err == nil {
r.logger.Printf("Torrent: %s deleted\n", torrent.Name)
} else {
r.logger.Printf("Error deleting torrent: %s", err)
}
}
func (r *RealDebrid) GetDownloadLinks(torrent *Torrent) error {
url := fmt.Sprintf("%s/unrestrict/link/", r.Host)
downloadLinks := make([]TorrentDownloadLinks, 0)
for _, link := range torrent.Links {
if link == "" {
continue
}
payload := gourl.Values{
"link": {link},
}
resp, err := r.client.MakeRequest(http.MethodPost, url, strings.NewReader(payload.Encode()))
if err != nil {
return err
}
var data structs.RealDebridUnrestrictResponse
if err = json.Unmarshal(resp, &data); err != nil {
return err
}
download := TorrentDownloadLinks{
Link: data.Link,
Filename: data.Filename,
DownloadLink: data.Download,
}
downloadLinks = append(downloadLinks, download)
}
torrent.DownloadLinks = downloadLinks
return nil
}
func (r *RealDebrid) GetDownloadUncached() bool {
return r.DownloadUncached
}
func NewRealDebrid(dc common.DebridConfig, cache *common.Cache) *RealDebrid {
rl := common.ParseRateLimit(dc.RateLimit)
headers := map[string]string{
"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
}
client := common.NewRLHTTPClient(rl, headers)
logger := common.NewLogger(dc.Name, os.Stdout)
return &RealDebrid{
Host: dc.Host,
APIKey: dc.APIKey,
DownloadUncached: dc.DownloadUncached,
client: client,
cache: cache,
MountPath: dc.Folder,
logger: logger,
}
}

931 pkg/debrid/store/cache.go Normal file

@@ -0,0 +1,931 @@
package store
import (
"bufio"
"cmp"
"context"
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"net/http"
"os"
"path"
"path/filepath"
"regexp"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
_ "time/tzdata"
"github.com/go-co-op/gocron/v2"
"github.com/puzpuzpuz/xsync/v4"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/common"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"github.com/sirrobot01/decypharr/pkg/rclone"
"golang.org/x/sync/singleflight"
)
type WebDavFolderNaming string
const (
WebDavUseFileName WebDavFolderNaming = "filename"
WebDavUseOriginalName WebDavFolderNaming = "original"
WebDavUseFileNameNoExt WebDavFolderNaming = "filename_no_ext"
WebDavUseOriginalNameNoExt WebDavFolderNaming = "original_no_ext"
WebDavUseID WebDavFolderNaming = "id"
WebdavUseHash WebDavFolderNaming = "infohash"
)
type CachedTorrent struct {
*types.Torrent
AddedOn time.Time `json:"added_on"`
IsComplete bool `json:"is_complete"`
Bad bool `json:"bad"`
}
func (c CachedTorrent) copy() CachedTorrent {
return CachedTorrent{
Torrent: c.Torrent,
AddedOn: c.AddedOn,
IsComplete: c.IsComplete,
Bad: c.Bad,
}
}
type RepairType string
const (
RepairTypeReinsert RepairType = "reinsert"
RepairTypeDelete RepairType = "delete"
)
type RepairRequest struct {
Type RepairType
TorrentID string
Priority int
FileName string
}
type Cache struct {
dir string
client common.Client
logger zerolog.Logger
torrents *torrentCache
folderNaming WebDavFolderNaming
listingDebouncer *utils.Debouncer[bool]
// monitors
invalidDownloadLinks *xsync.Map[string, string]
repairRequest *xsync.Map[string, *reInsertRequest]
failedToReinsert *xsync.Map[string, struct{}]
failedLinksCounter *xsync.Map[string, *atomic.Int32] // link -> failure counter (pointer so increments mutate the stored value)
// repair
repairChan chan RepairRequest
// readiness
ready chan struct{}
// config
workers int
torrentRefreshInterval string
downloadLinksRefreshInterval string
// refresh mutex
downloadLinksRefreshMu sync.RWMutex // for refreshing download links
torrentsRefreshMu sync.RWMutex // for refreshing torrents
scheduler gocron.Scheduler
cetScheduler gocron.Scheduler
saveSemaphore chan struct{}
config config.Debrid
customFolders []string
mounter *rclone.Mount
downloadSG singleflight.Group
streamClient *http.Client
}
func NewDebridCache(dc config.Debrid, client common.Client, mounter *rclone.Mount) *Cache {
cfg := config.Get()
cet, err := time.LoadLocation("CET")
if err != nil {
cet, err = time.LoadLocation("Europe/Berlin") // Fallback to Berlin if CET fails
if err != nil {
cet = time.FixedZone("CET", 1*60*60) // Fallback to a fixed CET zone
}
}
cetSc, err := gocron.NewScheduler(gocron.WithLocation(cet))
if err != nil {
// If we can't create a CET scheduler, fallback to local time
cetSc, _ = gocron.NewScheduler(gocron.WithLocation(time.Local), gocron.WithGlobalJobOptions(
gocron.WithTags("decypharr-"+dc.Name)))
}
scheduler, err := gocron.NewScheduler(
gocron.WithLocation(time.Local),
gocron.WithGlobalJobOptions(
gocron.WithTags("decypharr-"+dc.Name)))
if err != nil {
// If we can't create a local scheduler, fallback to CET
scheduler = cetSc
}
var customFolders []string
dirFilters := map[string][]directoryFilter{}
for name, value := range dc.Directories {
for filterType, v := range value.Filters {
df := directoryFilter{filterType: filterType, value: v}
switch filterType {
case filterByRegex, filterByNotRegex:
df.regex = regexp.MustCompile(v)
case filterBySizeGT, filterBySizeLT:
df.sizeThreshold, _ = config.ParseSize(v)
case filterBLastAdded:
df.ageThreshold, _ = time.ParseDuration(v)
}
dirFilters[name] = append(dirFilters[name], df)
}
customFolders = append(customFolders, name)
}
_log := logger.New(fmt.Sprintf("%s-webdav", client.Name()))
transport := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
TLSHandshakeTimeout: 30 * time.Second,
ResponseHeaderTimeout: 60 * time.Second,
MaxIdleConns: 100,
MaxIdleConnsPerHost: 20,
IdleConnTimeout: 90 * time.Second,
DisableKeepAlives: false,
ForceAttemptHTTP2: false,
}
httpClient := &http.Client{
Transport: transport,
Timeout: 0,
}
c := &Cache{
dir: filepath.Join(cfg.Path, "cache", dc.Name), // path to save cache files
torrents: newTorrentCache(dirFilters),
client: client,
logger: _log,
workers: dc.Workers,
torrentRefreshInterval: dc.TorrentsRefreshInterval,
downloadLinksRefreshInterval: dc.DownloadLinksRefreshInterval,
folderNaming: WebDavFolderNaming(dc.FolderNaming),
saveSemaphore: make(chan struct{}, 50),
cetScheduler: cetSc,
scheduler: scheduler,
config: dc,
customFolders: customFolders,
mounter: mounter,
ready: make(chan struct{}),
invalidDownloadLinks: xsync.NewMap[string, string](),
repairRequest: xsync.NewMap[string, *reInsertRequest](),
failedToReinsert: xsync.NewMap[string, struct{}](),
failedLinksCounter: xsync.NewMap[string, *atomic.Int32](),
streamClient: httpClient,
repairChan: make(chan RepairRequest, 100), // Initialize the repair channel, max 100 requests buffered
}
c.listingDebouncer = utils.NewDebouncer[bool](100*time.Millisecond, func(refreshRclone bool) {
c.RefreshListings(refreshRclone)
})
return c
}
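NewDebridCache wires a 100 ms debouncer so that bursts of listing changes collapse into a single RefreshListings call. A sketch using the same helper API as it appears above (how the boolean arguments of coalesced calls combine is up to utils.Debouncer's implementation, which is not shown here):
d := utils.NewDebouncer[bool](100*time.Millisecond, func(refreshRclone bool) {
	refreshListings(refreshRclone) // hypothetical stand-in for c.RefreshListings
})
for i := 0; i < 10; i++ {
	d.Call(false) // ten rapid calls, roughly one callback after the quiet period
}
defer d.Stop()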
func (c *Cache) IsReady() chan struct{} {
return c.ready
}
func (c *Cache) StreamWithRclone() bool {
return c.config.ServeFromRclone
}
// Reset clears all internal state so the Cache can be reused without leaks.
// Call this after stopping the old Cache (so no goroutines are holding references),
// and before you discard the instance on a restart.
func (c *Cache) Reset() {
// Unmount first
if c.mounter != nil && c.mounter.IsMounted() {
if err := c.mounter.Unmount(); err != nil {
c.logger.Error().Err(err).Msgf("Failed to unmount %s", c.config.Name)
} else {
c.logger.Info().Msgf("Unmounted %s", c.config.Name)
}
}
go func() {
// Shutdown the scheduler (this will stop all jobs)
if err := c.scheduler.Shutdown(); err != nil {
c.logger.Error().Err(err).Msg("Failed to stop scheduler")
}
}()
// Stop the listing debouncer
c.listingDebouncer.Stop()
// Close the repair channel
if c.repairChan != nil {
close(c.repairChan)
}
// 1. Reset torrent storage
c.torrents.reset()
// 2. Clear the concurrent maps
c.invalidDownloadLinks = xsync.NewMap[string, string]()
c.repairRequest = xsync.NewMap[string, *reInsertRequest]()
c.failedToReinsert = xsync.NewMap[string, struct{}]()
c.failedLinksCounter = xsync.NewMap[string, *atomic.Int32]()
// 3. Rebuild the listing debouncer
c.listingDebouncer = utils.NewDebouncer[bool](
100*time.Millisecond,
func(refreshRclone bool) {
c.RefreshListings(refreshRclone)
},
)
// 4. Recreate the repair channel so the next Start() can spin it up
c.repairChan = make(chan RepairRequest, 100)
// Reset the ready channel
c.ready = make(chan struct{})
}
func (c *Cache) Start(ctx context.Context) error {
if err := os.MkdirAll(c.dir, 0755); err != nil {
return fmt.Errorf("failed to create cache directory: %w", err)
}
c.logger.Info().Msgf("Started indexing...")
if err := c.Sync(ctx); err != nil {
return fmt.Errorf("failed to sync cache: %w", err)
}
// Fire the ready channel
close(c.ready)
c.logger.Info().Msgf("Indexing complete, %d torrents loaded", len(c.torrents.getAll()))
// initial download links
go c.refreshDownloadLinks(ctx)
go c.repairWorker(ctx)
cfg := config.Get()
name := c.client.Name()
addr := cfg.BindAddress + ":" + cfg.Port + cfg.URLBase + "webdav/" + name + "/"
c.logger.Info().Msgf("%s WebDav server running at %s", name, addr)
if c.mounter != nil {
if err := c.mounter.Mount(ctx); err != nil {
c.logger.Error().Err(err).Msgf("Failed to mount %s", c.config.Name)
}
} else {
c.logger.Warn().Msgf("Mounting is disabled for %s", c.config.Name)
}
return nil
}
func (c *Cache) load(ctx context.Context) (map[string]CachedTorrent, error) {
mu := sync.Mutex{}
if err := os.MkdirAll(c.dir, 0755); err != nil {
return nil, fmt.Errorf("failed to create cache directory: %w", err)
}
files, err := os.ReadDir(c.dir)
if err != nil {
return nil, fmt.Errorf("failed to read cache directory: %w", err)
}
// Get only json files
var jsonFiles []os.DirEntry
for _, file := range files {
if !file.IsDir() && filepath.Ext(file.Name()) == ".json" {
jsonFiles = append(jsonFiles, file)
}
}
if len(jsonFiles) == 0 {
return nil, nil
}
// Create channels with appropriate buffering
workChan := make(chan os.DirEntry, min(c.workers, len(jsonFiles)))
// Create a wait group for workers
var wg sync.WaitGroup
torrents := make(map[string]CachedTorrent, len(jsonFiles))
// Start workers
for i := 0; i < c.workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for {
file, ok := <-workChan
if !ok {
return // Channel closed, exit goroutine
}
fileName := file.Name()
filePath := filepath.Join(c.dir, fileName)
data, err := os.ReadFile(filePath)
if err != nil {
c.logger.Error().Err(err).Msgf("Failed to read file: %s", filePath)
continue
}
var ct CachedTorrent
if err := json.Unmarshal(data, &ct); err != nil {
c.logger.Error().Err(err).Msgf("Failed to unmarshal file: %s", filePath)
continue
}
isComplete := true
if len(ct.GetFiles()) != 0 {
// Check that every file has a link; if any is missing, treat the cached torrent as incomplete and skip it.
fs := make(map[string]types.File, len(ct.GetFiles()))
for _, f := range ct.GetFiles() {
if f.Link == "" {
isComplete = false
break
}
f.TorrentId = ct.Id
fs[f.Name] = f
}
if isComplete {
if addedOn, err := time.Parse(time.RFC3339, ct.Added); err == nil {
ct.AddedOn = addedOn
}
ct.IsComplete = true
ct.Files = fs
ct.Name = path.Clean(ct.Name)
mu.Lock()
torrents[ct.Id] = ct
mu.Unlock()
}
}
}
}()
}
// Feed work to workers; stop early if the context is cancelled
feed:
for _, file := range jsonFiles {
select {
case <-ctx.Done():
break feed // a bare break would only exit the select, not the loop
default:
workChan <- file
}
}
// Signal workers that no more work is coming
close(workChan)
// Wait for all workers to complete
wg.Wait()
return torrents, nil
}
func (c *Cache) Sync(ctx context.Context) error {
cachedTorrents, err := c.load(ctx)
if err != nil {
c.logger.Error().Err(err).Msg("Failed to load cache")
}
torrents, err := c.client.GetTorrents()
if err != nil {
return fmt.Errorf("failed to sync torrents: %v", err)
}
totalTorrents := len(torrents)
c.logger.Info().Msgf("%d torrents found from %s", totalTorrents, c.client.Name())
newTorrents := make([]*types.Torrent, 0)
idStore := make(map[string]struct{}, totalTorrents)
for _, t := range torrents {
idStore[t.Id] = struct{}{}
if _, ok := cachedTorrents[t.Id]; !ok {
newTorrents = append(newTorrents, t)
}
}
// Check for deleted torrents
deletedTorrents := make([]string, 0)
for _, t := range cachedTorrents {
if _, ok := idStore[t.Id]; !ok {
deletedTorrents = append(deletedTorrents, t.Id)
}
}
if len(deletedTorrents) > 0 {
c.logger.Info().Msgf("Found %d deleted torrents", len(deletedTorrents))
for _, id := range deletedTorrents {
// Remove from cache and debrid service
delete(cachedTorrents, id)
// Remove the json file from disk
c.removeFile(id, false)
}
}
// Write these torrents to the cache
c.setTorrents(cachedTorrents, func() {
c.listingDebouncer.Call(false)
}) // Initial calls
c.logger.Info().Msgf("Loaded %d torrents from cache", len(cachedTorrents))
if len(newTorrents) > 0 {
c.logger.Info().Msgf("Found %d new torrents", len(newTorrents))
if err := c.sync(ctx, newTorrents); err != nil {
return fmt.Errorf("failed to sync torrents: %v", err)
}
}
return nil
}
func (c *Cache) sync(ctx context.Context, torrents []*types.Torrent) error {
// Create channels with appropriate buffering
workChan := make(chan *types.Torrent, min(c.workers, len(torrents)))
// Use an atomic counter for progress tracking
var processed int64
var errorCount int64
// Create a wait group for workers
var wg sync.WaitGroup
// Start workers
for i := 0; i < c.workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case t, ok := <-workChan:
if !ok {
return // Channel closed, exit goroutine
}
if err := c.ProcessTorrent(t); err != nil {
c.logger.Error().Err(err).Str("torrent", t.Name).Msg("sync error")
atomic.AddInt64(&errorCount, 1)
}
count := atomic.AddInt64(&processed, 1)
if count%1000 == 0 {
c.logger.Info().Msgf("Progress: %d/%d torrents processed", count, len(torrents))
}
case <-ctx.Done():
return // Context cancelled, exit goroutine
}
}
}()
}
// Feed work to workers; stop early if the context is cancelled
feed:
for _, t := range torrents {
select {
case workChan <- t:
// Work sent successfully
case <-ctx.Done():
break feed // a bare break would only exit the select, not the loop
}
}
// Signal workers that no more work is coming
close(workChan)
// Wait for all workers to complete
wg.Wait()
c.listingDebouncer.Call(false) // final refresh
c.logger.Info().Msgf("Sync complete: %d torrents processed, %d errors", len(torrents), errorCount)
return nil
}
func (c *Cache) GetTorrentFolder(torrent *types.Torrent) string {
switch c.folderNaming {
case WebDavUseFileName:
return path.Clean(torrent.Filename)
case WebDavUseOriginalName:
return path.Clean(torrent.OriginalFilename)
case WebDavUseFileNameNoExt:
return path.Clean(utils.RemoveExtension(torrent.Filename))
case WebDavUseOriginalNameNoExt:
return path.Clean(utils.RemoveExtension(torrent.OriginalFilename))
case WebDavUseID:
return torrent.Id
case WebdavUseHash:
return strings.ToLower(torrent.InfoHash)
default:
return path.Clean(torrent.Filename)
}
}
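As an illustration (assuming utils.RemoveExtension strips only the final extension), a torrent whose Filename is Show.S01E01.1080p.mkv is listed under Show.S01E01.1080p with filename_no_ext, while the id and infohash modes trade readability for folder names that survive renames.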
func (c *Cache) setTorrent(t CachedTorrent, callback func(torrent CachedTorrent)) {
torrentName := c.GetTorrentFolder(t.Torrent)
updatedTorrent := t.copy()
if o, ok := c.torrents.getByName(torrentName); ok && o.Id != t.Id {
// Another torrent with the same folder name exists: merge their files,
// keeping the copy from the most recently added torrent when names clash.
// Note: updatedTorrent shares t's underlying *types.Torrent, so assigning
// the merged files here also updates the torrent stored below.
mergedFiles := mergeFiles(o, updatedTorrent)
updatedTorrent.Files = mergedFiles
}
c.torrents.set(torrentName, t)
go c.SaveTorrent(t)
if callback != nil {
go callback(updatedTorrent)
}
}
func (c *Cache) setTorrents(torrents map[string]CachedTorrent, callback func()) {
for _, t := range torrents {
torrentName := c.GetTorrentFolder(t.Torrent)
updatedTorrent := t.copy()
if o, ok := c.torrents.getByName(torrentName); ok && o.Id != t.Id {
// Save the most recent torrent
mergedFiles := mergeFiles(o, updatedTorrent)
updatedTorrent.Files = mergedFiles
}
c.torrents.set(torrentName, t)
}
c.SaveTorrents()
if callback != nil {
callback()
}
}
// GetListing returns a sorted list of torrents (read-only)
func (c *Cache) GetListing(folder string) []os.FileInfo {
switch folder {
case "__all__", "torrents":
return c.torrents.getListing()
default:
return c.torrents.getFolderListing(folder)
}
}
func (c *Cache) GetCustomFolders() []string {
return c.customFolders
}
func (c *Cache) Close() error {
return nil
}
func (c *Cache) GetTorrents() map[string]CachedTorrent {
return c.torrents.getAll()
}
func (c *Cache) TotalTorrents() int {
return c.torrents.getAllCount()
}
func (c *Cache) GetTorrentByName(name string) *CachedTorrent {
if torrent, ok := c.torrents.getByName(name); ok {
return &torrent
}
return nil
}
func (c *Cache) GetTorrentsName() map[string]CachedTorrent {
return c.torrents.getAllByName()
}
func (c *Cache) GetTorrent(torrentId string) *CachedTorrent {
if torrent, ok := c.torrents.getByID(torrentId); ok {
return &torrent
}
return nil
}
func (c *Cache) SaveTorrents() {
torrents := c.torrents.getAll()
for _, torrent := range torrents {
c.SaveTorrent(torrent)
}
}
func (c *Cache) SaveTorrent(ct CachedTorrent) {
marshaled, err := json.MarshalIndent(ct, "", " ")
if err != nil {
c.logger.Error().Err(err).Msgf("Failed to marshal torrent: %s", ct.Id)
return
}
// Store just the essential info needed for the file operation
saveInfo := struct {
id string
jsonData []byte
}{
id: ct.Torrent.Id,
jsonData: marshaled,
}
// Try to acquire semaphore without blocking
select {
case c.saveSemaphore <- struct{}{}:
go func() {
defer func() { <-c.saveSemaphore }()
c.saveTorrent(saveInfo.id, saveInfo.jsonData)
}()
default:
c.saveTorrent(saveInfo.id, saveInfo.jsonData)
}
}
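The select/default above is a non-blocking semaphore acquire: up to 50 saves run in background goroutines, and once the semaphore is full the save happens inline, which throttles producers instead of queueing work without bound. The pattern in isolation, as a runnable sketch:
package main
import (
	"fmt"
	"sync"
	"time"
)
func main() {
	sem := make(chan struct{}, 2) // at most two background writers
	var wg sync.WaitGroup
	write := func(id int) { time.Sleep(10 * time.Millisecond); fmt.Println("wrote", id) }
	for i := 0; i < 5; i++ {
		select {
		case sem <- struct{}{}: // slot free: write in the background
			wg.Add(1)
			go func(i int) {
				defer wg.Done()
				defer func() { <-sem }() // release the slot
				write(i)
			}(i)
		default: // semaphore full: write inline for natural backpressure
			write(i)
		}
	}
	wg.Wait()
}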
func (c *Cache) saveTorrent(id string, data []byte) {
fileName := id + ".json"
filePath := filepath.Join(c.dir, fileName)
// Use a unique temporary filename for concurrent safety
tmpFile := filePath + ".tmp." + strconv.FormatInt(time.Now().UnixNano(), 10)
f, err := os.Create(tmpFile)
if err != nil {
c.logger.Error().Err(err).Msgf("Failed to create file: %s", tmpFile)
return
}
// Track if we've closed the file
fileClosed := false
defer func() {
// Only close if not already closed
if !fileClosed {
_ = f.Close()
}
// Clean up the temp file if it still exists and rename failed
_ = os.Remove(tmpFile)
}()
w := bufio.NewWriter(f)
if _, err := w.Write(data); err != nil {
c.logger.Error().Err(err).Msgf("Failed to write data: %s", tmpFile)
return
}
if err := w.Flush(); err != nil {
c.logger.Error().Err(err).Msgf("Failed to flush data: %s", tmpFile)
return
}
// Close the file before renaming
_ = f.Close()
fileClosed = true
if err := os.Rename(tmpFile, filePath); err != nil {
c.logger.Error().Err(err).Msgf("Failed to rename file: %s", tmpFile)
return
}
}
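saveTorrent is a write-then-rename: data goes to a uniquely named temp file that is renamed over the target, so readers never observe a half-written JSON file. A condensed sketch of the same idiom (assuming fmt, os, and time are imported):
// Atomically replace a file via a unique temp name + rename.
// On POSIX filesystems a rename within one volume is atomic, so readers
// see either the old contents or the new, never a partial write.
func atomicWrite(path string, data []byte) error {
	tmp := fmt.Sprintf("%s.tmp.%d", path, time.Now().UnixNano())
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	if err := os.Rename(tmp, path); err != nil {
		_ = os.Remove(tmp) // best-effort cleanup on failure
		return err
	}
	return nil
}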
func (c *Cache) ProcessTorrent(t *types.Torrent) error {
isComplete := func(files map[string]types.File) bool {
_complete := len(files) > 0
for _, file := range files {
if file.Link == "" {
_complete = false
break
}
}
return _complete
}
if !isComplete(t.Files) {
if err := c.client.UpdateTorrent(t); err != nil {
return fmt.Errorf("failed to update torrent: %w", err)
}
}
if !isComplete(t.Files) {
c.logger.Debug().
Str("torrent_id", t.Id).
Str("torrent_name", t.Name).
Int("total_files", len(t.Files)).
Msg("Torrent still not complete after refresh, marking as bad")
} else {
addedOn, err := time.Parse(time.RFC3339, t.Added)
if err != nil {
addedOn = time.Now()
}
ct := CachedTorrent{
Torrent: t,
IsComplete: len(t.Files) > 0,
AddedOn: addedOn,
}
c.setTorrent(ct, func(tor CachedTorrent) {
c.listingDebouncer.Call(false)
})
}
return nil
}
func (c *Cache) Add(t *types.Torrent) error {
if len(t.Files) == 0 {
c.logger.Warn().Msgf("Torrent %s has no files to add. Refreshing", t.Id)
if err := c.client.UpdateTorrent(t); err != nil {
return fmt.Errorf("failed to update torrent: %w", err)
}
}
addedOn, err := time.Parse(time.RFC3339, t.Added)
if err != nil {
addedOn = time.Now()
}
ct := CachedTorrent{
Torrent: t,
IsComplete: len(t.Files) > 0,
AddedOn: addedOn,
}
c.setTorrent(ct, func(tor CachedTorrent) {
c.RefreshListings(true)
})
go c.GetFileDownloadLinks(ct)
return nil
}
func (c *Cache) Client() common.Client {
return c.client
}
func (c *Cache) DeleteTorrent(id string) error {
c.torrentsRefreshMu.Lock()
defer c.torrentsRefreshMu.Unlock()
if c.deleteTorrent(id, true) {
go c.RefreshListings(true)
}
return nil
}
func (c *Cache) validateAndDeleteTorrents(torrents []string) {
wg := sync.WaitGroup{}
for _, torrent := range torrents {
wg.Add(1)
go func(t string) {
defer wg.Done()
// Check if torrent is truly deleted
if _, err := c.client.GetTorrent(t); err != nil {
c.deleteTorrent(t, false) // Since it's removed from debrid already
}
}(torrent)
}
wg.Wait()
c.listingDebouncer.Call(true)
}
// deleteTorrent deletes the torrent from the cache and debrid service
// It also handles torrents with the same name but different IDs
func (c *Cache) deleteTorrent(id string, removeFromDebrid bool) bool {
if torrent, ok := c.torrents.getByID(id); ok {
c.torrents.removeId(id) // Delete id from cache
defer func() {
c.removeFile(id, false)
if removeFromDebrid {
_ = c.client.DeleteTorrent(id) // Skip error handling, we don't care if it fails
}
}() // defer delete from debrid
torrentName := c.GetTorrentFolder(torrent.Torrent)
if t, ok := c.torrents.getByName(torrentName); ok {
newFiles := map[string]types.File{}
newId := ""
for _, file := range t.GetFiles() {
if file.TorrentId != "" && file.TorrentId != id {
if newId == "" && file.TorrentId != "" {
newId = file.TorrentId
}
newFiles[file.Name] = file
}
}
if len(newFiles) == 0 {
// Delete the torrent since no files are left
c.torrents.remove(torrentName)
} else {
t.Files = newFiles
newId = cmp.Or(newId, t.Id)
t.Id = newId
c.setTorrent(t, nil) // This gets called after calling deleteTorrent
}
}
return true
}
return false
}
func (c *Cache) DeleteTorrents(ids []string) {
c.logger.Info().Msgf("Deleting %d torrents", len(ids))
for _, id := range ids {
_ = c.deleteTorrent(id, true)
}
c.listingDebouncer.Call(true)
}
// removeFile deletes the torrent's cached JSON file, or moves it to the
// trash directory when moveToTrash is true.
func (c *Cache) removeFile(torrentId string, moveToTrash bool) {
filePath := filepath.Join(c.dir, torrentId+".json")
// Check if the file exists
if _, err := os.Stat(filePath); errors.Is(err, os.ErrNotExist) {
return
}
if !moveToTrash {
// If not moving to trash, delete the file directly
if err := os.Remove(filePath); err != nil {
c.logger.Error().Err(err).Msgf("Failed to remove file: %s", filePath)
return
}
return
}
// Move the file to the trash
trashPath := filepath.Join(c.dir, "trash", torrentId+".json")
if err := os.MkdirAll(filepath.Dir(trashPath), 0755); err != nil {
return
}
if err := os.Rename(filePath, trashPath); err != nil {
return
}
}
func (c *Cache) OnRemove(torrentId string) {
c.logger.Debug().Msgf("OnRemove triggered for %s", torrentId)
err := c.DeleteTorrent(torrentId)
if err != nil {
c.logger.Error().Err(err).Msgf("Failed to delete torrent: %s", torrentId)
return
}
}
// RemoveFile removes a file from the torrent cache.
// TODO: send a re-insert that also removes the file from the debrid service.
func (c *Cache) RemoveFile(torrentId string, filename string) error {
c.logger.Debug().Str("torrent_id", torrentId).Msgf("Removing file %s", filename)
torrent, ok := c.torrents.getByID(torrentId)
if !ok {
return fmt.Errorf("torrent %s not found", torrentId)
}
file, ok := torrent.GetFile(filename)
if !ok {
return fmt.Errorf("file %s not found in torrent %s", filename, torrentId)
}
file.Deleted = true
torrent.Files[filename] = file
// If the torrent has no files left, delete it
if len(torrent.GetFiles()) == 0 {
c.logger.Debug().Msgf("Torrent %s has no files left, deleting it", torrentId)
if err := c.DeleteTorrent(torrentId); err != nil {
return fmt.Errorf("failed to delete torrent %s: %w", torrentId, err)
}
return nil
}
c.setTorrent(torrent, func(torrent CachedTorrent) {
c.listingDebouncer.Call(true)
}) // Update the torrent in the cache
return nil
}
func (c *Cache) Logger() zerolog.Logger {
return c.logger
}
func (c *Cache) GetConfig() config.Debrid {
return c.config
}


@@ -0,0 +1,218 @@
package store
import (
"errors"
"fmt"
"sync/atomic"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
const (
MaxLinkFailures = 10
)
func (c *Cache) GetDownloadLink(torrentName, filename, fileLink string) (types.DownloadLink, error) {
// Refuse links that have already failed too many times
counter, ok := c.failedLinksCounter.Load(fileLink)
if ok && counter.Load() >= MaxLinkFailures {
return types.DownloadLink{}, fmt.Errorf("file link %s has failed %d times, not retrying", fileLink, counter.Load())
}
// Use singleflight to deduplicate concurrent requests
v, err, _ := c.downloadSG.Do(fileLink, func() (interface{}, error) {
// Double-check cache inside singleflight (another goroutine might have filled it)
if dl, err := c.checkDownloadLink(fileLink); err == nil && !dl.Empty() {
return dl, nil
}
// Fetch the download link
dl, err := c.fetchDownloadLink(torrentName, filename, fileLink)
if err != nil {
c.downloadSG.Forget(fileLink)
return types.DownloadLink{}, err
}
if dl.Empty() {
c.downloadSG.Forget(fileLink)
err = fmt.Errorf("download link is empty for %s in torrent %s", filename, torrentName)
return types.DownloadLink{}, err
}
return dl, nil
})
if err != nil {
return types.DownloadLink{}, err
}
return v.(types.DownloadLink), nil
}
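The singleflight group above guarantees that overlapping requests for the same fileLink share one upstream fetch, with Forget clearing the key on failure so the next caller retries rather than inheriting a cached error. A minimal self-contained example of golang.org/x/sync/singleflight:
package main
import (
	"fmt"
	"sync"
	"golang.org/x/sync/singleflight"
)
func main() {
	var g singleflight.Group
	calls := 0
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Goroutines whose calls overlap share one execution of the function below.
			v, _, shared := g.Do("link-key", func() (interface{}, error) {
				calls++ // runs once per non-overlapping burst, not once per caller
				return "https://example.test/dl", nil
			})
			fmt.Println(v, "shared:", shared)
		}()
	}
	wg.Wait()
	fmt.Println("upstream calls:", calls)
}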
func (c *Cache) fetchDownloadLink(torrentName, filename, fileLink string) (types.DownloadLink, error) {
emptyDownloadLink := types.DownloadLink{}
ct := c.GetTorrentByName(torrentName)
if ct == nil {
return emptyDownloadLink, fmt.Errorf("torrent not found")
}
file, ok := ct.GetFile(filename)
if !ok {
return emptyDownloadLink, fmt.Errorf("file %s not found in torrent %s", filename, torrentName)
}
if file.Link == "" {
// file link is empty, refresh the torrent to get restricted links
ct = c.refreshTorrent(file.TorrentId) // Refresh the torrent from the debrid
if ct == nil {
return emptyDownloadLink, fmt.Errorf("failed to refresh torrent")
} else {
file, ok = ct.GetFile(filename)
if !ok {
return emptyDownloadLink, fmt.Errorf("file %s not found in refreshed torrent %s", filename, torrentName)
}
}
}
// If file.Link is still empty, return
if file.Link == "" {
// Try to reinsert the torrent?
newCt, err := c.reInsertTorrent(ct)
if err != nil {
return emptyDownloadLink, fmt.Errorf("failed to reinsert torrent. %w", err)
}
ct = newCt
file, ok = ct.GetFile(filename)
if !ok {
return emptyDownloadLink, fmt.Errorf("file %s not found in reinserted torrent %s", filename, torrentName)
}
}
c.logger.Trace().Msgf("Getting download link for %s(%s)", filename, file.Link)
downloadLink, err := c.client.GetDownloadLink(ct.Torrent, &file)
if err != nil {
if errors.Is(err, utils.HosterUnavailableError) {
c.logger.Trace().
Str("token", utils.Mask(downloadLink.Token)).
Str("filename", filename).
Str("torrent_id", ct.Id).
Msg("Hoster unavailable, attempting to reinsert torrent")
newCt, err := c.reInsertTorrent(ct)
if err != nil {
return emptyDownloadLink, fmt.Errorf("failed to reinsert torrent: %w", err)
}
ct = newCt
file, ok = ct.GetFile(filename)
if !ok {
return emptyDownloadLink, fmt.Errorf("file %s not found in reinserted torrent %s", filename, torrentName)
}
// Retry getting the download link
downloadLink, err = c.client.GetDownloadLink(ct.Torrent, &file)
if err != nil {
return emptyDownloadLink, fmt.Errorf("retry failed to get download link: %w", err)
}
if downloadLink.Empty() {
return emptyDownloadLink, fmt.Errorf("download link is empty after retry")
}
return downloadLink, nil // retry succeeded
} else if errors.Is(err, utils.TrafficExceededError) {
// This is likely a fair usage limit error
return emptyDownloadLink, err
} else {
return emptyDownloadLink, fmt.Errorf("failed to get download link: %w", err)
}
}
if downloadLink.Empty() {
return emptyDownloadLink, fmt.Errorf("download link is empty")
}
return downloadLink, nil
}
func (c *Cache) GetFileDownloadLinks(t CachedTorrent) {
if err := c.client.GetFileDownloadLinks(t.Torrent); err != nil {
c.logger.Error().Err(err).Str("torrent", t.Name).Msg("Failed to generate download links")
return
}
}
func (c *Cache) checkDownloadLink(link string) (types.DownloadLink, error) {
dl, err := c.client.AccountManager().GetDownloadLink(link)
if err != nil {
return dl, err
}
if !c.downloadLinkIsInvalid(dl.DownloadLink) {
return dl, nil
}
return types.DownloadLink{}, fmt.Errorf("download link not found for %s", link)
}
func (c *Cache) IncrementFailedLinkCounter(link string) int32 {
counter, _ := c.failedLinksCounter.LoadOrCompute(link, func() (*atomic.Int32, bool) {
return new(atomic.Int32), false // false: keep the new counter in the map so increments persist
})
return counter.Add(1)
}
func (c *Cache) MarkLinkAsInvalid(downloadLink types.DownloadLink, reason string) {
// Increment file link error counter
c.IncrementFailedLinkCounter(downloadLink.Link)
c.invalidDownloadLinks.Store(downloadLink.DownloadLink, reason)
// Remove the download api key from active
if reason == "bandwidth_exceeded" {
// Disable the account
accountManager := c.client.AccountManager()
account, err := accountManager.GetAccount(downloadLink.Token)
if err != nil {
c.logger.Error().Err(err).Str("token", utils.Mask(downloadLink.Token)).Msg("Failed to get account to disable")
return
}
if account == nil {
c.logger.Error().Str("token", utils.Mask(downloadLink.Token)).Msg("Account not found to disable")
return
}
accountManager.Disable(account)
} else if reason == "link_not_found" {
// Let's try to delete the download link from the account, so we can fetch a new one next time
accountManager := c.client.AccountManager()
account, err := accountManager.GetAccount(downloadLink.Token)
if err != nil {
c.logger.Error().Err(err).Str("token", utils.Mask(downloadLink.Token)).Msg("Failed to get account to delete download link")
return
}
if account == nil {
c.logger.Error().Str("token", utils.Mask(downloadLink.Token)).Msg("Account not found to delete download link")
return
}
if err := c.client.DeleteDownloadLink(account, downloadLink); err != nil {
c.logger.Error().Err(err).Str("token", utils.Mask(downloadLink.Token)).Msg("Failed to delete download link from account")
return
}
}
}
func (c *Cache) downloadLinkIsInvalid(downloadLink string) bool {
if _, ok := c.invalidDownloadLinks.Load(downloadLink); ok {
return true
}
return false
}
func (c *Cache) GetDownloadByteRange(torrentName, filename string) (*[2]int64, error) {
ct := c.GetTorrentByName(torrentName)
if ct == nil {
return nil, fmt.Errorf("torrent not found")
}
file, ok := ct.GetFile(filename)
if !ok {
return nil, fmt.Errorf("file %s not found in torrent %s", filename, torrentName)
}
return file.ByteRange, nil
}
func (c *Cache) GetTotalActiveDownloadLinks() int {
total := 0
allAccounts := c.client.AccountManager().Active()
for _, acc := range allAccounts {
total += acc.DownloadLinksCount()
}
return total
}

43 pkg/debrid/store/misc.go Normal file

@@ -0,0 +1,43 @@
package store
import (
"sort"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
// mergeFiles merges the files from multiple torrents into a single map,
// keyed by file name. This is useful for deduplicating files across torrents.
// Torrents are ordered by AddedOn, earliest first, so when a file name occurs
// in several torrents the copy from the most recently added torrent wins.
func mergeFiles(torrents ...CachedTorrent) map[string]types.File {
merged := make(map[string]types.File)
// order torrents by added time
sort.Slice(torrents, func(i, j int) bool {
return torrents[i].AddedOn.Before(torrents[j].AddedOn)
})
for _, torrent := range torrents {
for _, file := range torrent.GetFiles() {
merged[file.Name] = file
}
}
return merged
}
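Since the torrents are sorted oldest to newest before merging, a later map write overwrites an earlier one and the newest torrent's copy of a duplicated name survives. The same last-write-wins shape with plain maps, purely illustrative:
older := map[string]string{"S01E01.mkv": "link-from-old-torrent"}
newer := map[string]string{"S01E01.mkv": "link-from-new-torrent"}
merged := map[string]string{}
for _, m := range []map[string]string{older, newer} { // oldest first
	for name, link := range m {
		merged[name] = link
	}
}
// merged["S01E01.mkv"] == "link-from-new-torrent"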
func (c *Cache) GetIngests() ([]types.IngestData, error) {
torrents := c.GetTorrents()
debridName := c.client.Name()
var ingests []types.IngestData
for _, torrent := range torrents {
ingests = append(ingests, types.IngestData{
Debrid: debridName,
Name: torrent.Filename,
Hash: torrent.InfoHash,
Size: torrent.Bytes,
})
}
return ingests, nil
}

253 pkg/debrid/store/refresh.go Normal file

@@ -0,0 +1,253 @@
package store
import (
"context"
"fmt"
"io"
"net/http"
"os"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
type fileInfo struct {
id string
name string
size int64
mode os.FileMode
modTime time.Time
isDir bool
}
func (fi *fileInfo) Name() string { return fi.name }
func (fi *fileInfo) Size() int64 { return fi.size }
func (fi *fileInfo) Mode() os.FileMode { return fi.mode }
func (fi *fileInfo) ModTime() time.Time { return fi.modTime }
func (fi *fileInfo) IsDir() bool { return fi.isDir }
func (fi *fileInfo) ID() string { return fi.id }
func (fi *fileInfo) Sys() interface{} { return nil }
func (c *Cache) RefreshListings(refreshRclone bool) {
// Rebuild the cached folder listings from the current torrents
c.torrents.refreshListing() // refresh torrent listings
if refreshRclone {
if err := c.refreshRclone(); err != nil {
c.logger.Error().Err(err).Msg("Failed to refresh rclone") // silent error
}
}
}
func (c *Cache) refreshTorrents(ctx context.Context) {
select {
case <-ctx.Done():
return
default:
}
if !c.torrentsRefreshMu.TryLock() {
return
}
defer c.torrentsRefreshMu.Unlock()
// Get all torrents from the debrid service
debTorrents, err := c.client.GetTorrents()
if err != nil {
c.logger.Error().Err(err).Msg("Failed to get torrents")
return
}
if len(debTorrents) == 0 {
// Maybe an error occurred
return
}
currentTorrentIds := make(map[string]struct{}, len(debTorrents))
for _, t := range debTorrents {
currentTorrentIds[t.Id] = struct{}{}
}
// Delete cached torrents that no longer exist on the debrid service
deletedTorrents := make([]string, 0)
cachedTorrents := c.torrents.getIdMaps()
for id := range cachedTorrents {
if _, exists := currentTorrentIds[id]; !exists {
deletedTorrents = append(deletedTorrents, id)
}
}
if len(deletedTorrents) > 0 {
go c.validateAndDeleteTorrents(deletedTorrents)
}
newTorrents := make([]*types.Torrent, 0)
for _, t := range debTorrents {
if _, exists := cachedTorrents[t.Id]; !exists {
newTorrents = append(newTorrents, t)
}
}
if len(newTorrents) == 0 {
return
}
c.logger.Trace().Msgf("Found %d new torrents", len(newTorrents))
workChan := make(chan *types.Torrent, min(100, len(newTorrents)))
var wg sync.WaitGroup
var counter atomic.Int64 // shared across workers, so it must be atomic
for i := 0; i < c.workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for t := range workChan {
if err := c.ProcessTorrent(t); err != nil {
c.logger.Error().Err(err).Msgf("Failed to process new torrent %s", t.Id)
}
counter.Add(1)
}
}()
}
for _, t := range newTorrents {
workChan <- t
}
close(workChan)
wg.Wait()
c.listingDebouncer.Call(true)
c.logger.Debug().Msgf("Processed %d new torrents", counter)
}
func (c *Cache) refreshRclone() error {
cfg := c.config
dirs := strings.FieldsFunc(cfg.RcRefreshDirs, func(r rune) bool {
return r == ',' || r == '&'
})
if len(dirs) == 0 {
dirs = []string{"__all__"}
}
if c.mounter != nil {
return c.mounter.RefreshDir(dirs)
} else {
return c.refreshRcloneWithRC(dirs)
}
}
func (c *Cache) refreshRcloneWithRC(dirs []string) error {
cfg := c.config
if cfg.RcUrl == "" {
return nil
}
client := http.DefaultClient
// Create form data
data := c.buildRcloneRequestData(dirs)
if err := c.sendRcloneRequest(client, "vfs/forget", data); err != nil {
c.logger.Error().Err(err).Msg("Failed to send rclone vfs/forget request")
}
if err := c.sendRcloneRequest(client, "vfs/refresh", data); err != nil {
c.logger.Error().Err(err).Msg("Failed to send rclone vfs/refresh request")
}
return nil
}
func (c *Cache) buildRcloneRequestData(dirs []string) string {
var data strings.Builder
for index, dir := range dirs {
if dir != "" {
if index == 0 {
data.WriteString("dir=" + dir)
} else {
data.WriteString("&dir" + fmt.Sprint(index+1) + "=" + dir)
}
}
}
return data.String()
}
func (c *Cache) sendRcloneRequest(client *http.Client, endpoint, data string) error {
req, err := http.NewRequest("POST", fmt.Sprintf("%s/%s", c.config.RcUrl, endpoint), strings.NewReader(data))
if err != nil {
return err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
if c.config.RcUser != "" && c.config.RcPass != "" {
req.SetBasicAuth(c.config.RcUser, c.config.RcPass)
}
resp, err := client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != 200 {
body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
return fmt.Errorf("failed to perform %s: %s - %s", endpoint, resp.Status, string(body))
}
_, _ = io.Copy(io.Discard, resp.Body)
return nil
}
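For reference, the two helpers above amount to form-encoded POSTs against rclone's remote control API; with RcUrl set to, say, http://127.0.0.1:5572 and directories movies and shows, the body sent to both vfs/forget and vfs/refresh reads dir=movies&dir2=shows (the dir, dir2, ... field naming mirrors buildRcloneRequestData above; verify the exact convention against your rclone version).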
func (c *Cache) refreshTorrent(torrentId string) *CachedTorrent {
if torrentId == "" {
c.logger.Error().Msg("Torrent ID is empty")
return nil
}
torrent, err := c.client.GetTorrent(torrentId)
if err != nil {
c.logger.Error().Err(err).Msgf("Failed to get torrent %s", torrentId)
return nil
}
addedOn, err := time.Parse(time.RFC3339, torrent.Added)
if err != nil {
addedOn = time.Now()
}
ct := CachedTorrent{
Torrent: torrent,
AddedOn: addedOn,
IsComplete: len(torrent.Files) > 0,
}
c.setTorrent(ct, func(torrent CachedTorrent) {
go c.listingDebouncer.Call(true)
})
return &ct
}
func (c *Cache) refreshDownloadLinks(ctx context.Context) {
select {
case <-ctx.Done():
return
default:
}
if !c.downloadLinksRefreshMu.TryLock() {
return
}
defer c.downloadLinksRefreshMu.Unlock()
if err := c.client.RefreshDownloadLinks(); err != nil {
c.logger.Error().Err(err).Msg("Failed to get download links")
return
}
c.logger.Debug().Msgf("Refreshed download links")
}

314 pkg/debrid/store/repair.go Normal file

@@ -0,0 +1,314 @@
package store
import (
"context"
"errors"
"fmt"
"sync"
"time"
"github.com/puzpuzpuz/xsync/v4"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
type reInsertRequest struct {
result *CachedTorrent
err error
done chan struct{}
}
func newReInsertRequest() *reInsertRequest {
return &reInsertRequest{
done: make(chan struct{}),
}
}
func (r *reInsertRequest) Complete(result *CachedTorrent, err error) {
r.result = result
r.err = err
close(r.done)
}
func (r *reInsertRequest) Wait() (*CachedTorrent, error) {
<-r.done
return r.result, r.err
}
func (c *Cache) markAsFailedToReinsert(torrentId string) {
c.failedToReinsert.Store(torrentId, struct{}{})
// Flag the torrent as bad so reInsertTorrent refuses further retries and listings can filter it out
if torrent, ok := c.torrents.getByID(torrentId); ok {
torrent.Bad = true
c.setTorrent(torrent, func(t CachedTorrent) {
c.RefreshListings(false)
})
}
}
func (c *Cache) markAsSuccessfullyReinserted(torrentId string) {
if _, ok := c.failedToReinsert.Load(torrentId); !ok {
return
}
c.failedToReinsert.Delete(torrentId)
if torrent, ok := c.torrents.getByID(torrentId); ok {
torrent.Bad = false
c.setTorrent(torrent, func(torrent CachedTorrent) {
c.RefreshListings(false)
})
}
}
// GetBrokenFiles checks the files in the torrent for broken links.
// It also attempts to reinsert the torrent if any files are broken.
func (c *Cache) GetBrokenFiles(t *CachedTorrent, filenames []string) []string {
files := make(map[string]types.File)
repairStrategy := config.Get().Repair.Strategy
brokenFiles := make([]string, 0)
if len(filenames) > 0 {
for name, f := range t.Files {
if utils.Contains(filenames, name) {
files[name] = f
}
}
} else {
files = t.Files
}
for _, f := range files {
// If any file is missing its link, refresh the torrent once and stop scanning
if f.Link == "" {
if newT := c.refreshTorrent(f.TorrentId); newT != nil {
t = newT
break // one refresh covers the whole torrent
}
c.logger.Error().Str("torrentId", t.Torrent.Id).Msg("Failed to refresh torrent")
return filenames // Return original filenames if refresh fails (torrent is somehow botched)
}
}
if t.Torrent == nil {
// Don't dereference t.Torrent here: it is nil at this point
c.logger.Error().Msg("Failed to refresh torrent: torrent is nil")
return filenames // Return original filenames if refresh fails (torrent is somehow botched)
}
files = t.Files
var wg sync.WaitGroup
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Use a mutex to protect brokenFiles slice and torrent-wide failure flag
var mu sync.Mutex
torrentWideFailed := false
wg.Add(len(files))
for _, f := range files {
go func(f types.File) {
defer wg.Done()
select {
case <-ctx.Done():
return
default:
}
if f.Link == "" {
mu.Lock()
if repairStrategy == config.RepairStrategyPerTorrent {
torrentWideFailed = true
mu.Unlock()
cancel() // Signal all other goroutines to stop
return
} else {
// per_file strategy - only mark this file as broken
brokenFiles = append(brokenFiles, f.Name)
}
mu.Unlock()
return
}
if err := c.client.CheckLink(f.Link); err != nil {
if errors.Is(err, utils.HosterUnavailableError) {
mu.Lock()
if repairStrategy == config.RepairStrategyPerTorrent {
torrentWideFailed = true
mu.Unlock()
cancel() // Signal all other goroutines to stop
return
} else {
// per_file strategy - only mark this file as broken
brokenFiles = append(brokenFiles, f.Name)
}
mu.Unlock()
}
}
}(f)
}
wg.Wait()
// Handle the result based on strategy
if repairStrategy == config.RepairStrategyPerTorrent && torrentWideFailed {
// Mark all files as broken for per_torrent strategy
for _, f := range files {
brokenFiles = append(brokenFiles, f.Name)
}
}
// For per_file strategy, brokenFiles already contains only the broken ones
// Try to reinsert the torrent if it's broken
if len(brokenFiles) > 0 && t.Torrent != nil {
// Check if the torrent is already in progress
if _, err := c.reInsertTorrent(t); err != nil {
c.logger.Error().Err(err).Str("torrentId", t.Torrent.Id).Msg("Failed to reinsert torrent")
return brokenFiles // Return broken files if reinsert fails
}
return nil // Return nil if the torrent was successfully reinserted
}
return brokenFiles
}
func (c *Cache) repairWorker(ctx context.Context) {
// This watches a channel for torrents to repair and can be cancelled via context
for {
select {
case <-ctx.Done():
return
case req, ok := <-c.repairChan:
// Channel was closed
if !ok {
c.logger.Debug().Msg("Repair channel closed, shutting down worker")
return
}
torrentId := req.TorrentID
c.logger.Debug().Str("torrentId", req.TorrentID).Msg("Received repair request")
// Get the torrent from the cache
cachedTorrent := c.GetTorrent(torrentId)
if cachedTorrent == nil {
c.logger.Warn().Str("torrentId", torrentId).Msg("Torrent not found in cache")
continue
}
switch req.Type {
case RepairTypeReinsert:
c.logger.Debug().Str("torrentId", torrentId).Msg("Reinserting torrent")
if _, err := c.reInsertTorrent(cachedTorrent); err != nil {
c.logger.Error().Err(err).Str("torrentId", cachedTorrent.Id).Msg("Failed to reinsert torrent")
continue
}
case RepairTypeDelete:
c.logger.Debug().Str("torrentId", torrentId).Msg("Deleting torrent")
if err := c.DeleteTorrent(torrentId); err != nil {
c.logger.Error().Err(err).Str("torrentId", torrentId).Msg("Failed to delete torrent")
continue
}
}
}
}
}
func (c *Cache) reInsertTorrent(ct *CachedTorrent) (result *CachedTorrent, err error) {
// Check if Magnet is not empty, if empty, reconstruct the magnet
torrent := ct.Torrent
oldID := torrent.Id // Store the old ID
if _, ok := c.failedToReinsert.Load(oldID); ok {
return ct, fmt.Errorf("can't retry re-insert for %s", torrent.Id)
}
if req, inFlight := c.repairRequest.Load(oldID); inFlight {
c.logger.Debug().Msgf("Waiting for existing reinsert request to complete for torrent %s", oldID)
return req.Wait()
}
req := newReInsertRequest()
c.repairRequest.Store(oldID, req)
// Complete the request and clean up on every exit path (including panics),
// so goroutines blocked in req.Wait() are always released
defer func() {
req.Complete(result, err)
c.repairRequest.Delete(oldID)
}()
// Submit the magnet to the debrid service
newTorrent := &types.Torrent{
Name: torrent.Name,
Magnet: utils.ConstructMagnet(torrent.InfoHash, torrent.Name),
InfoHash: torrent.InfoHash,
Size: torrent.Size,
Files: make(map[string]types.File),
Arr: torrent.Arr,
DownloadUncached: false,
}
newTorrent, err = c.client.SubmitMagnet(newTorrent)
if err != nil {
c.markAsFailedToReinsert(oldID)
// Remove the old torrent from the cache and debrid service
return ct, fmt.Errorf("failed to submit magnet: %w", err)
}
// Check if the torrent was submitted
if newTorrent == nil || newTorrent.Id == "" {
c.markAsFailedToReinsert(oldID)
return ct, fmt.Errorf("failed to submit magnet: empty torrent")
}
newTorrent.DownloadUncached = false // Set to false, avoid re-downloading
newTorrent, err = c.client.CheckStatus(newTorrent)
if err != nil {
if newTorrent != nil && newTorrent.Id != "" {
// Delete the torrent if it was not downloaded
_ = c.client.DeleteTorrent(newTorrent.Id)
}
c.markAsFailedToReinsert(oldID)
return ct, fmt.Errorf("failed to check torrent: %w", err)
}
// Update the torrent in the cache
addedOn, err := time.Parse(time.RFC3339, newTorrent.Added)
if err != nil {
addedOn = time.Now()
}
for _, f := range newTorrent.GetFiles() {
if f.Link == "" {
c.markAsFailedToReinsert(oldID)
return ct, fmt.Errorf("failed to reinsert torrent: empty link")
}
}
// Set torrent to newTorrent
newCt := CachedTorrent{
Torrent: newTorrent,
AddedOn: addedOn,
IsComplete: len(newTorrent.Files) > 0,
}
c.setTorrent(newCt, func(torrent CachedTorrent) {
c.RefreshListings(true)
})
ct = &newCt // Update ct to point to the new torrent
// We can safely delete the old torrent here
if oldID != "" {
if err := c.DeleteTorrent(oldID); err != nil {
return ct, fmt.Errorf("failed to delete old torrent: %w", err)
}
}
c.markAsSuccessfullyReinserted(oldID)
c.logger.Debug().Str("torrentId", torrent.Id).Msg("Torrent successfully reinserted")
return ct, nil
}
func (c *Cache) resetInvalidLinks(ctx context.Context) {
c.logger.Debug().Msgf("Resetting accounts")
c.invalidDownloadLinks = xsync.NewMap[string, string]()
c.client.AccountManager().Reset() // Reset the active download keys
// Refresh the download links
c.refreshDownloadLinks(ctx)
}

236 pkg/debrid/store/stream.go Normal file

@@ -0,0 +1,236 @@
package store
import (
"context"
"errors"
"fmt"
"io"
"math/rand"
"net"
"net/http"
"strings"
"time"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
const (
MaxNetworkRetries = 5
MaxLinkRetries = 10
)
type StreamError struct {
Err error
Retryable bool
LinkError bool // true if we should try a new link
}
func (e StreamError) Error() string {
return e.Err.Error()
}
// isConnectionError checks if the error is related to connection issues
func (c *Cache) isConnectionError(err error) bool {
if err == nil {
return false
}
errStr := err.Error()
// Check for common connection errors
if strings.Contains(errStr, "EOF") ||
strings.Contains(errStr, "connection reset by peer") ||
strings.Contains(errStr, "broken pipe") ||
strings.Contains(errStr, "connection refused") {
return true
}
// Check for net.Error types
var netErr net.Error
return errors.As(err, &netErr)
}
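The string checks catch wrapped connection failures whose concrete types are private, and the errors.As fallback then covers anything implementing net.Error anywhere in the wrap chain. In isolation (assuming errors, fmt, net, and os are imported):
// errors.As walks the wrap chain looking for a net.Error.
wrapped := fmt.Errorf("stream failed: %w", &net.OpError{Op: "read", Err: os.ErrDeadlineExceeded})
var netErr net.Error
if errors.As(wrapped, &netErr) {
	fmt.Println("network error, timeout:", netErr.Timeout()) // true here
}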
func (c *Cache) Stream(ctx context.Context, start, end int64, linkFunc func() (types.DownloadLink, error)) (*http.Response, error) {
var lastErr error
downloadLink, err := linkFunc()
if err != nil {
return nil, fmt.Errorf("failed to get download link: %w", err)
}
// Outer loop: Link retries
for retry := 0; retry < MaxLinkRetries; retry++ {
select {
case <-ctx.Done():
return nil, ctx.Err()
default:
}
resp, err := c.doRequest(ctx, downloadLink.DownloadLink, start, end)
if err != nil {
// Network/connection error
lastErr = err
c.logger.Trace().
Int("retries", retry).
Err(err).
Msg("Network request failed, retrying")
// Backoff and continue network retry
if retry < MaxLinkRetries-1 { // back off between attempts; the final failure falls through below
backoff := time.Duration(retry+1) * time.Second
jitter := time.Duration(rand.Intn(1000)) * time.Millisecond
select {
case <-time.After(backoff + jitter):
case <-ctx.Done():
return nil, ctx.Err()
}
continue
} else {
return nil, fmt.Errorf("network request failed after retries: %w", lastErr)
}
}
// Got response - check status
if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusPartialContent {
return resp, nil
}
// Bad status code - handle error
streamErr := c.handleHTTPError(resp, downloadLink)
resp.Body.Close()
if !streamErr.Retryable {
return nil, streamErr // Fatal error
}
if streamErr.LinkError {
lastErr = streamErr
// Try new link
downloadLink, err = linkFunc()
if err != nil {
return nil, fmt.Errorf("failed to get download link: %w", err)
}
continue
}
// Retryable HTTP error (429, 503, other 5xx) - retry with the same link
lastErr = streamErr
c.logger.Trace().
Err(lastErr).
Str("downloadLink", downloadLink.DownloadLink).
Str("link", downloadLink.Link).
Int("retries", retry).
Int("statusCode", resp.StatusCode).
Msg("HTTP error, retrying")
if retry < MaxNetworkRetries-1 {
backoff := time.Duration(retry+1) * time.Second
jitter := time.Duration(rand.Intn(1000)) * time.Millisecond
select {
case <-time.After(backoff + jitter):
case <-ctx.Done():
return nil, ctx.Err()
}
}
}
return nil, fmt.Errorf("stream failed after %d link retries: %w", MaxLinkRetries, lastErr)
}
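// Illustrative usage (not part of this file): callers wire Stream/StreamReader
// through a linkFunc closure that can mint a fresh types.DownloadLink whenever
// the current one is rejected. readRange below is a hypothetical helper; only
// Cache, StreamReader and types.DownloadLink are real symbols from this file:
//
//	func readRange(ctx context.Context, c *Cache, fetch func() (types.DownloadLink, error), start, end int64) ([]byte, error) {
//		rc, err := c.StreamReader(ctx, start, end, fetch)
//		if err != nil {
//			return nil, err
//		}
//		defer rc.Close()
//		return io.ReadAll(rc)
//	}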
func (c *Cache) StreamReader(ctx context.Context, start, end int64, linkFunc func() (types.DownloadLink, error)) (io.ReadCloser, error) {
resp, err := c.Stream(ctx, start, end, linkFunc)
if err != nil {
return nil, err
}
// Validate we got the expected content
if resp.ContentLength == 0 {
resp.Body.Close()
return nil, fmt.Errorf("received empty response")
}
return resp.Body, nil
}
func (c *Cache) doRequest(ctx context.Context, url string, start, end int64) (*http.Response, error) {
var lastErr error
// Retry loop specifically for connection-level failures (EOF, reset, etc.)
for connRetry := 0; connRetry < 3; connRetry++ {
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return nil, StreamError{Err: err, Retryable: false}
}
// Set range header
if start > 0 || end > 0 {
rangeHeader := fmt.Sprintf("bytes=%d-", start)
if end > 0 {
rangeHeader = fmt.Sprintf("bytes=%d-%d", start, end)
}
req.Header.Set("Range", rangeHeader)
}
// Set optimized headers for streaming
req.Header.Set("Connection", "keep-alive")
req.Header.Set("Accept-Encoding", "identity") // Disable compression for streaming
req.Header.Set("Cache-Control", "no-cache")
resp, err := c.streamClient.Do(req)
if err != nil {
lastErr = err
// Check if it's a connection error that we should retry
if c.isConnectionError(err) && connRetry < 2 {
// Brief backoff before retrying with fresh connection
time.Sleep(time.Duration(connRetry+1) * 100 * time.Millisecond)
continue
}
return nil, StreamError{Err: err, Retryable: true}
}
return resp, nil
}
return nil, StreamError{Err: fmt.Errorf("connection retry exhausted: %w", lastErr), Retryable: true}
}
func (c *Cache) handleHTTPError(resp *http.Response, downloadLink types.DownloadLink) StreamError {
switch resp.StatusCode {
case http.StatusNotFound:
c.MarkLinkAsInvalid(downloadLink, "link_not_found")
return StreamError{
Err: errors.New("download link not found"),
Retryable: true,
LinkError: true,
}
case http.StatusServiceUnavailable:
body, _ := io.ReadAll(resp.Body)
bodyStr := strings.ToLower(string(body))
if strings.Contains(bodyStr, "bandwidth") || strings.Contains(bodyStr, "traffic") {
c.MarkLinkAsInvalid(downloadLink, "bandwidth_exceeded")
return StreamError{
Err: errors.New("bandwidth limit exceeded"),
Retryable: true,
LinkError: true,
}
}
fallthrough
case http.StatusTooManyRequests:
return StreamError{
Err: fmt.Errorf("HTTP %d: rate limited", resp.StatusCode),
Retryable: true,
LinkError: false,
}
default:
retryable := resp.StatusCode >= 500
body, _ := io.ReadAll(resp.Body)
return StreamError{
Err: fmt.Errorf("HTTP %d: %s", resp.StatusCode, string(body)),
Retryable: retryable,
LinkError: false,
}
}
}
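// Illustrative branch on the classification above, mirroring what Stream does
// internally (assumes err originated from doRequest or handleHTTPError):
//
//	var se StreamError
//	if errors.As(err, &se) {
//		switch {
//		case !se.Retryable:
//			// fatal (e.g. a 4xx other than 404/429): surface the error
//		case se.LinkError:
//			// the link itself is dead (404, bandwidth exhausted): fetch a new one
//		default:
//			// transient (429, 5xx): back off, then retry the same link
//		}
//	}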

pkg/debrid/store/torrent.go Normal file

@@ -0,0 +1,476 @@
package store
import (
"fmt"
"os"
"regexp"
"sort"
"strings"
"sync"
"sync/atomic"
"time"
)
const (
filterByInclude string = "include"
filterByExclude string = "exclude"
filterByStartsWith string = "starts_with"
filterByEndsWith string = "ends_with"
filterByNotStartsWith string = "not_starts_with"
filterByNotEndsWith string = "not_ends_with"
filterByRegex string = "regex"
filterByNotRegex string = "not_regex"
filterByExactMatch string = "exact_match"
filterByNotExactMatch string = "not_exact_match"
filterBySizeGT string = "size_gt"
filterBySizeLT string = "size_lt"
filterBLastAdded string = "last_added"
)
type directoryFilter struct {
filterType string
value string
regex *regexp.Regexp // only for regex/not_regex
sizeThreshold int64 // only for size_gt/size_lt
ageThreshold time.Duration // only for last_added
}
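// Illustrative sketch: the config parsing that produces these filters is not
// shown in this diff. One plausible constructor from (type, value) string
// pairs could look like the following (buildDirectoryFilter is hypothetical
// and would additionally need the strconv import). Values are lowercased
// because torrentMatchDirectory compares against the lowercased torrent name,
// and all filters on a directory must match (AND semantics):
//
//	func buildDirectoryFilter(filterType, value string) (directoryFilter, error) {
//		f := directoryFilter{filterType: filterType, value: strings.ToLower(value)}
//		switch filterType {
//		case filterByRegex, filterByNotRegex:
//			re, err := regexp.Compile(value)
//			if err != nil {
//				return f, fmt.Errorf("invalid regex %q: %w", value, err)
//			}
//			f.regex = re
//		case filterBySizeGT, filterBySizeLT:
//			n, err := strconv.ParseInt(value, 10, 64)
//			if err != nil {
//				return f, fmt.Errorf("invalid size %q: %w", value, err)
//			}
//			f.sizeThreshold = n
//		case filterBLastAdded:
//			d, err := time.ParseDuration(value)
//			if err != nil {
//				return f, fmt.Errorf("invalid duration %q: %w", value, err)
//			}
//			f.ageThreshold = d
//		}
//		return f, nil
//	}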
type folders struct {
sync.RWMutex
listing map[string][]os.FileInfo // folder name to file listing
}
type CachedTorrentEntry struct {
CachedTorrent
deleted bool // Tombstone flag
}
type torrentCache struct {
mu sync.RWMutex
torrents []CachedTorrentEntry // Changed to store entries with tombstone
// Lookup indices
idIndex map[string]int
nameIndex map[string]int
// Compaction tracking
deletedCount atomic.Int32
compactThreshold int // Trigger compaction when deletedCount exceeds this
listing atomic.Value
folders folders
directoriesFilters map[string][]directoryFilter
sortNeeded atomic.Bool
}
type sortableFile struct {
id string
name string
modTime time.Time
size int64
bad bool
}
func newTorrentCache(dirFilters map[string][]directoryFilter) *torrentCache {
tc := &torrentCache{
torrents: []CachedTorrentEntry{},
idIndex: make(map[string]int),
nameIndex: make(map[string]int),
compactThreshold: 100, // Compact when 100+ deleted entries
folders: folders{
listing: make(map[string][]os.FileInfo),
},
directoriesFilters: dirFilters,
}
tc.sortNeeded.Store(false)
tc.listing.Store(make([]os.FileInfo, 0))
return tc
}
func (tc *torrentCache) reset() {
tc.mu.Lock()
tc.torrents = tc.torrents[:0] // Clear the slice
tc.idIndex = make(map[string]int) // Reset the ID index
tc.nameIndex = make(map[string]int) // Reset the name index
tc.deletedCount.Store(0)
tc.mu.Unlock()
// reset the sorted listing
tc.sortNeeded.Store(false)
tc.listing.Store(make([]os.FileInfo, 0))
// reset any per-folder views
tc.folders.Lock()
tc.folders.listing = make(map[string][]os.FileInfo)
tc.folders.Unlock()
}
func (tc *torrentCache) getByID(id string) (CachedTorrent, bool) {
tc.mu.RLock()
defer tc.mu.RUnlock()
if index, exists := tc.idIndex[id]; exists && index < len(tc.torrents) {
entry := tc.torrents[index]
if !entry.deleted {
return entry.CachedTorrent, true
}
}
return CachedTorrent{}, false
}
func (tc *torrentCache) getByName(name string) (CachedTorrent, bool) {
tc.mu.RLock()
defer tc.mu.RUnlock()
if index, exists := tc.nameIndex[name]; exists && index < len(tc.torrents) {
entry := tc.torrents[index]
if !entry.deleted {
return entry.CachedTorrent, true
}
}
return CachedTorrent{}, false
}
func (tc *torrentCache) set(name string, torrent CachedTorrent) {
tc.mu.Lock()
defer tc.mu.Unlock()
// Check if this torrent already exists (update case)
if existingIndex, exists := tc.idIndex[torrent.Id]; exists && existingIndex < len(tc.torrents) {
if !tc.torrents[existingIndex].deleted {
// Update existing entry
tc.torrents[existingIndex].CachedTorrent = torrent
tc.sortNeeded.Store(true)
return
}
}
// Add new torrent
entry := CachedTorrentEntry{
CachedTorrent: torrent,
deleted: false,
}
tc.torrents = append(tc.torrents, entry)
index := len(tc.torrents) - 1
tc.idIndex[torrent.Id] = index
tc.nameIndex[name] = index
tc.sortNeeded.Store(true)
}
func (tc *torrentCache) removeId(id string) {
tc.mu.Lock()
defer tc.mu.Unlock()
if index, exists := tc.idIndex[id]; exists && index < len(tc.torrents) {
if !tc.torrents[index].deleted {
// Mark as deleted (tombstone)
tc.torrents[index].deleted = true
tc.deletedCount.Add(1)
// Remove from indices
delete(tc.idIndex, id)
// Find and remove from name index
for name, idx := range tc.nameIndex {
if idx == index {
delete(tc.nameIndex, name)
break
}
}
tc.sortNeeded.Store(true)
// Trigger compaction if threshold exceeded
if tc.deletedCount.Load() > int32(tc.compactThreshold) {
go tc.compact()
}
}
}
}
func (tc *torrentCache) remove(name string) {
tc.mu.Lock()
defer tc.mu.Unlock()
if index, exists := tc.nameIndex[name]; exists && index < len(tc.torrents) {
if !tc.torrents[index].deleted {
// Mark as deleted (tombstone)
torrentID := tc.torrents[index].CachedTorrent.Id
tc.torrents[index].deleted = true
tc.deletedCount.Add(1)
// Remove from indices
delete(tc.nameIndex, name)
delete(tc.idIndex, torrentID)
tc.sortNeeded.Store(true)
// Trigger compaction if threshold exceeded
if tc.deletedCount.Load() > int32(tc.compactThreshold) {
go tc.compact()
}
}
}
}
// Compact removes tombstoned entries and rebuilds indices
func (tc *torrentCache) compact() {
tc.mu.Lock()
defer tc.mu.Unlock()
deletedCount := tc.deletedCount.Load()
if deletedCount == 0 {
return // Nothing to compact
}
// Create new slice with only non-deleted entries
newTorrents := make([]CachedTorrentEntry, 0, len(tc.torrents)-int(deletedCount))
newIdIndex := make(map[string]int, len(tc.idIndex))
newNameIndex := make(map[string]int, len(tc.nameIndex))
// Copy non-deleted entries
for oldIndex, entry := range tc.torrents {
if !entry.deleted {
newIndex := len(newTorrents)
newTorrents = append(newTorrents, entry)
// Find the name for this torrent (reverse lookup)
for name, nameIndex := range tc.nameIndex {
if nameIndex == oldIndex {
newNameIndex[name] = newIndex
break
}
}
newIdIndex[entry.CachedTorrent.Id] = newIndex
}
}
// Replace old data with compacted data
tc.torrents = newTorrents
tc.idIndex = newIdIndex
tc.nameIndex = newNameIndex
tc.deletedCount.Store(0)
tc.sortNeeded.Store(true)
}
func (tc *torrentCache) ForceCompact() {
tc.compact()
}
func (tc *torrentCache) GetStats() (total, active, deleted int) {
tc.mu.RLock()
defer tc.mu.RUnlock()
total = len(tc.torrents)
deleted = int(tc.deletedCount.Load())
active = total - deleted
return total, active, deleted
}
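// Illustrative lifecycle of the tombstone scheme (ct stands in for a populated
// CachedTorrent built elsewhere in this package):
//
//	tc := newTorrentCache(nil)
//	tc.set(ct.Name, ct)                      // slot appended; id/name indices set
//	tc.removeId(ct.Id)                       // tombstoned; the slot itself remains
//	total, active, deleted := tc.GetStats()  // 1, 0, 1
//	tc.ForceCompact()                        // slot reclaimed, indices rebuilt
//	total, active, deleted = tc.GetStats()   // 0, 0, 0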
func (tc *torrentCache) refreshListing() {
tc.mu.RLock()
all := make([]sortableFile, 0, len(tc.nameIndex))
for name, index := range tc.nameIndex {
if index < len(tc.torrents) && !tc.torrents[index].deleted {
t := tc.torrents[index].CachedTorrent
all = append(all, sortableFile{t.Id, name, t.AddedOn, t.Bytes, t.Bad})
}
}
tc.sortNeeded.Store(false)
tc.mu.RUnlock()
sort.Slice(all, func(i, j int) bool {
if all[i].name != all[j].name {
return all[i].name < all[j].name
}
return all[i].modTime.Before(all[j].modTime)
})
wg := sync.WaitGroup{}
wg.Add(1) // for the full listing
go func() {
defer wg.Done()
listing := make([]os.FileInfo, len(all))
for i, sf := range all {
listing[i] = &fileInfo{sf.id, sf.name, sf.size, 0755 | os.ModeDir, sf.modTime, true}
}
tc.listing.Store(listing)
}()
wg.Add(1)
// For __bad__
go func() {
defer wg.Done()
listing := make([]os.FileInfo, 0)
for _, sf := range all {
if sf.bad {
listing = append(listing, &fileInfo{
id: sf.id,
name: fmt.Sprintf("%s || %s", sf.name, sf.id),
size: sf.size,
mode: 0755 | os.ModeDir,
modTime: sf.modTime,
isDir: true,
})
}
}
tc.folders.Lock()
if len(listing) > 0 {
tc.folders.listing["__bad__"] = listing
} else {
delete(tc.folders.listing, "__bad__")
}
tc.folders.Unlock()
}()
now := time.Now()
wg.Add(len(tc.directoriesFilters)) // for each directory filter
for dir, filters := range tc.directoriesFilters {
go func(dir string, filters []directoryFilter) {
defer wg.Done()
var matched []os.FileInfo
for _, sf := range all {
if tc.torrentMatchDirectory(filters, sf, now) {
matched = append(matched, &fileInfo{
id: sf.id,
name: sf.name, size: sf.size,
mode: 0755 | os.ModeDir, modTime: sf.modTime, isDir: true,
})
}
}
tc.folders.Lock()
if len(matched) > 0 {
tc.folders.listing[dir] = matched
} else {
delete(tc.folders.listing, dir)
}
tc.folders.Unlock()
}(dir, filters)
}
wg.Wait()
}
func (tc *torrentCache) getListing() []os.FileInfo {
// Fast path: if we have a sorted list and no changes since last sort
if !tc.sortNeeded.Load() {
return tc.listing.Load().([]os.FileInfo)
}
// Slow path: need to sort
tc.refreshListing()
return tc.listing.Load().([]os.FileInfo)
}
func (tc *torrentCache) getFolderListing(folderName string) []os.FileInfo {
if folderName == "" {
// Resolve the root listing before taking the folders lock: getListing may
// call refreshListing, which itself locks folders and would deadlock here.
return tc.getListing()
}
tc.folders.RLock()
defer tc.folders.RUnlock()
if folder, ok := tc.folders.listing[folderName]; ok {
return folder
}
// If folder not found, return empty slice
return []os.FileInfo{}
}
func (tc *torrentCache) torrentMatchDirectory(filters []directoryFilter, file sortableFile, now time.Time) bool {
torrentName := strings.ToLower(file.name)
for _, filter := range filters {
matched := false
switch filter.filterType {
case filterByInclude:
matched = strings.Contains(torrentName, filter.value)
case filterByStartsWith:
matched = strings.HasPrefix(torrentName, filter.value)
case filterByEndsWith:
matched = strings.HasSuffix(torrentName, filter.value)
case filterByExactMatch:
matched = torrentName == filter.value
case filterByExclude:
matched = !strings.Contains(torrentName, filter.value)
case filterByNotStartsWith:
matched = !strings.HasPrefix(torrentName, filter.value)
case filterByNotEndsWith:
matched = !strings.HasSuffix(torrentName, filter.value)
case filterByRegex:
matched = filter.regex.MatchString(torrentName)
case filterByNotRegex:
matched = !filter.regex.MatchString(torrentName)
case filterByNotExactMatch:
matched = torrentName != filter.value
case filterBySizeGT:
matched = file.size > filter.sizeThreshold
case filterBySizeLT:
matched = file.size < filter.sizeThreshold
case filterBLastAdded:
matched = file.modTime.After(now.Add(-filter.ageThreshold))
}
if !matched {
return false // All filters must match
}
}
// If we get here, all filters matched
return true
}
func (tc *torrentCache) getAll() map[string]CachedTorrent {
tc.mu.RLock()
defer tc.mu.RUnlock()
result := make(map[string]CachedTorrent)
for _, entry := range tc.torrents {
if !entry.deleted {
result[entry.CachedTorrent.Id] = entry.CachedTorrent
}
}
return result
}
func (tc *torrentCache) getAllCount() int {
tc.mu.RLock()
defer tc.mu.RUnlock()
return len(tc.torrents) - int(tc.deletedCount.Load())
}
func (tc *torrentCache) getAllByName() map[string]CachedTorrent {
tc.mu.RLock()
defer tc.mu.RUnlock()
results := make(map[string]CachedTorrent, len(tc.nameIndex))
for name, index := range tc.nameIndex {
if index < len(tc.torrents) && !tc.torrents[index].deleted {
results[name] = tc.torrents[index].CachedTorrent
}
}
return results
}
func (tc *torrentCache) getIdMaps() map[string]struct{} {
tc.mu.RLock()
defer tc.mu.RUnlock()
res := make(map[string]struct{}, len(tc.idIndex))
for id, index := range tc.idIndex {
if index < len(tc.torrents) && !tc.torrents[index].deleted {
res[id] = struct{}{}
}
}
return res
}


@@ -0,0 +1,64 @@
package store
import (
"context"
"fmt"
"github.com/go-co-op/gocron/v2"
"github.com/sirrobot01/decypharr/internal/utils"
)
func (c *Cache) StartWorker(ctx context.Context) error {
// For now, we just want to refresh the listing and download links
// Stop any existing jobs before starting new ones
c.scheduler.RemoveByTags(fmt.Sprintf("decypharr-%s", c.GetConfig().Name))
// Schedule download link refresh job
if jd, err := utils.ConvertToJobDef(c.downloadLinksRefreshInterval); err != nil {
c.logger.Error().Err(err).Msg("Failed to convert download link refresh interval to job definition")
} else {
// Schedule the job
if _, err := c.scheduler.NewJob(jd, gocron.NewTask(func() {
c.refreshDownloadLinks(ctx)
}), gocron.WithContext(ctx)); err != nil {
c.logger.Error().Err(err).Msg("Failed to create download link refresh job")
} else {
c.logger.Debug().Msgf("Download link refresh job scheduled for every %s", c.downloadLinksRefreshInterval)
}
}
// Schedule torrent refresh job
if jd, err := utils.ConvertToJobDef(c.torrentRefreshInterval); err != nil {
c.logger.Error().Err(err).Msg("Failed to convert torrent refresh interval to job definition")
} else {
// Schedule the job
if _, err := c.scheduler.NewJob(jd, gocron.NewTask(func() {
c.refreshTorrents(ctx)
}), gocron.WithContext(ctx)); err != nil {
c.logger.Error().Err(err).Msg("Failed to create torrent refresh job")
} else {
c.logger.Debug().Msgf("Torrent refresh job scheduled for every %s", c.torrentRefreshInterval)
}
}
// Schedule the reset invalid links job
// This job runs daily at 00:00 CET
// and resets the invalid links in the cache
if jd, err := utils.ConvertToJobDef("00:00"); err != nil {
c.logger.Error().Err(err).Msg("Failed to convert link reset interval to job definition")
} else {
// Schedule the job
if _, err := c.cetScheduler.NewJob(jd, gocron.NewTask(func() {
c.resetInvalidLinks(ctx)
}), gocron.WithContext(ctx)); err != nil {
c.logger.Error().Err(err).Msg("Failed to create link reset job")
} else {
c.logger.Debug().Msgf("Link reset job scheduled for every midnight, CET")
}
}
// Start the scheduler
c.scheduler.Start()
c.cetScheduler.Start()
return nil
}
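// utils.ConvertToJobDef is not included in this diff; a minimal sketch of the
// behavior assumed above (a duration such as "15m" becomes an interval job, a
// clock time such as "00:00" becomes a daily job) could look like:
//
//	func ConvertToJobDef(s string) (gocron.JobDefinition, error) {
//		if d, err := time.ParseDuration(s); err == nil {
//			return gocron.DurationJob(d), nil
//		}
//		var h, m uint
//		if _, err := fmt.Sscanf(s, "%d:%d", &h, &m); err != nil {
//			return nil, fmt.Errorf("unrecognized interval %q: %w", s, err)
//		}
//		return gocron.DailyJob(1, gocron.NewAtTimes(gocron.NewAtTime(h, m, 0))), nil
//	}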

pkg/debrid/store/xml.go Normal file

@@ -0,0 +1 @@
package store


@@ -1,103 +0,0 @@
package debrid
import (
"goBlack/common"
"os"
"path/filepath"
)
type Arr struct {
Name string `json:"name"`
Token string `json:"token"`
Host string `json:"host"`
}
type ArrHistorySchema struct {
Page int `json:"page"`
PageSize int `json:"pageSize"`
SortKey string `json:"sortKey"`
SortDirection string `json:"sortDirection"`
TotalRecords int `json:"totalRecords"`
Records []struct {
ID int `json:"id"`
DownloadID string `json:"downloadId"`
} `json:"records"`
}
type Torrent struct {
Id string `json:"id"`
InfoHash string `json:"info_hash"`
Name string `json:"name"`
Folder string `json:"folder"`
Filename string `json:"filename"`
OriginalFilename string `json:"original_filename"`
Size int64 `json:"size"`
Bytes int64 `json:"bytes"` // Size of only the files that are downloaded
Magnet *common.Magnet `json:"magnet"`
Files []TorrentFile `json:"files"`
Status string `json:"status"`
Progress float64 `json:"progress"`
Speed int64 `json:"speed"`
Seeders int `json:"seeders"`
Links []string `json:"links"`
DownloadLinks []TorrentDownloadLinks `json:"download_links"`
Debrid *Debrid
Arr *Arr
}
type TorrentDownloadLinks struct {
Filename string `json:"filename"`
Link string `json:"link"`
DownloadLink string `json:"download_link"`
}
func (t *Torrent) GetSymlinkFolder(parent string) string {
return filepath.Join(parent, t.Arr.Name, t.Folder)
}
func (t *Torrent) GetMountFolder(rClonePath string) string {
pathWithNoExt := common.RemoveExtension(t.OriginalFilename)
if common.FileReady(filepath.Join(rClonePath, t.OriginalFilename)) {
return t.OriginalFilename
} else if common.FileReady(filepath.Join(rClonePath, t.Filename)) {
return t.Filename
} else if common.FileReady(filepath.Join(rClonePath, pathWithNoExt)) {
return pathWithNoExt
} else {
return ""
}
}
type TorrentFile struct {
Id string `json:"id"`
Name string `json:"name"`
Size int64 `json:"size"`
Path string `json:"path"`
}
func getEventId(eventType string) int {
switch eventType {
case "grabbed":
return 1
case "seriesFolderDownloaded":
return 2
case "DownloadFolderImported":
return 3
case "DownloadFailed":
return 4
case "DownloadIgnored":
return 7
default:
return 0
}
}
func (t *Torrent) Cleanup(remove bool) {
if remove {
err := os.Remove(t.Filename)
if err != nil {
return
}
}
}

pkg/debrid/types/error.go Normal file

@@ -0,0 +1,30 @@
package types
type Error struct {
Message string `json:"message"`
Code string `json:"code"`
}
func (e *Error) Error() string {
return e.Message
}
var NoActiveAccountsError = &Error{
Message: "No active accounts",
Code: "no_active_accounts",
}
var ErrDownloadLinkNotFound = &Error{
Message: "No download link found",
Code: "no_download_link",
}
var DownloadLinkExpiredError = &Error{
Message: "Download link expired",
Code: "download_link_expired",
}
var EmptyDownloadLinkError = &Error{
Message: "Download link is empty",
Code: "empty_download_link",
}
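// Because these sentinels are package-level values, callers match them with
// errors.Is; a minimal caller-side sketch (GetDownloadLink is hypothetical):
//
//	link, err := client.GetDownloadLink(torrent, file)
//	if errors.Is(err, types.ErrDownloadLinkNotFound) ||
//		errors.Is(err, types.DownloadLinkExpiredError) {
//		// regenerate the download link before retrying
//	}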

Some files were not shown because too many files have changed in this diff.