45 Commits

Author SHA1 Message Date
Mukhtar Akere
fa6920f94a Merge branch 'beta'
Some checks failed
GoReleaser / goreleaser (push) Has been cancelled
Release Docker Build / docker (push) Has been cancelled
2025-07-09 05:14:39 +01:00
Mukhtar Akere
dba5604d79 fix refresh rclone http client 2025-07-07 00:08:48 +01:00
iPromKnight
f656b7e4e2 feat: Allow deleting all __bad__ with a single button (#98) 2025-07-04 20:13:12 +01:00
Mukhtar Akere
c7b07137c5 Fix repair bug 2025-07-03 23:36:30 +01:00
Mukhtar Akere
c0aa4eaeba Fix modtime bug 2025-07-02 01:17:31 +01:00
Mukhtar Akere
2c90e518aa fix playback issues 2025-07-01 16:10:23 +01:00
Mukhtar Akere
dec7d93272 fix streaming 2025-07-01 15:28:19 +01:00
Mukhtar Akere
8d092615db Update stream client; Add repair strategy 2025-07-01 04:42:33 +01:00
iPromKnight
a4ee0973cc fix: AllDebrid webdav compatibility, and uncached downloads (#97) 2025-07-01 04:10:21 +01:00
Mukhtar Akere
ab12610346 Merge branch 'beta'
Some checks failed
GoReleaser / goreleaser (push) Has been cancelled
Release Docker Build / docker (push) Has been cancelled
2025-06-26 21:15:22 +01:00
Mukhtar Akere
1d19be9013 hotfix repair html table 2025-06-26 07:31:12 +01:00
Mukhtar Akere
cee0e20fe1 hotfix repair and download rate limit 2025-06-26 06:08:50 +01:00
Mukhtar Akere
a3e698e04f Add repair and download rate limit 2025-06-26 05:45:20 +01:00
Mukhtar Akere
e123a2fd5e Hotfix issues with 1.0.3 2025-06-26 03:51:28 +01:00
Mukhtar Akere
817051589e Move to per-torrent repair; Fix issues issues with adding torrents 2025-06-23 18:54:52 +01:00
Mukhtar Akere
705de2d2bc Merge branch 'beta'
Some checks failed
GoReleaser / goreleaser (push) Has been cancelled
Release Docker Build / docker (push) Has been cancelled
2025-06-23 12:00:53 +01:00
Mukhtar Akere
54c421a480 Update Docs 2025-06-23 11:59:26 +01:00
Mukhtar Akere
1b98b994b7 Add size to arr ContentFile 2025-06-19 18:23:38 +01:00
Mukhtar Akere
06096c3748 Hotfix empty arr setup 2025-06-19 17:58:30 +01:00
Mukhtar Akere
7474011ef0 Update repair tool 2025-06-19 15:56:01 +01:00
Mukhtar Akere
086aa3b1ff Improve Arr integerations 2025-06-19 14:40:12 +01:00
Mukhtar Akere
c15e9d8f70 Updste repair 2025-06-18 12:44:05 +01:00
Mukhtar Akere
b2e99585f7 Fix issues with repair, move to a different streaming option 2025-06-18 10:42:44 +01:00
Mukhtar Akere
5661b05ec1 added CET timezone 2025-06-16 22:54:11 +01:00
Mukhtar Akere
b7226b21ec added CET timezone 2025-06-16 22:41:46 +01:00
Mukhtar Akere
605d5b81c2 Fix duration bug in config 2025-06-16 13:55:02 +01:00
Mukhtar Akere
8d87c602b9 - Add remove stalled torrent
- Few cleanup
2025-06-15 22:46:07 +01:00
Mukhtar Akere
7cf25f53e7 hotfix 2025-06-14 19:32:50 +01:00
Mukhtar Akere
22280f15cf cleanup torrent cache 2025-06-14 16:55:45 +01:00
Mukhtar Akere
a539aa53bd - Speed up repairs when checking links
- Remove run on start for repairs since it causes issues
- Add support for arr-specific debrid
- Support for queuing system
- Support for no-op when sending torrents to debrid
2025-06-14 16:09:28 +01:00
Mukhtar Akere
3efda45304 - IMplement multi-download api tokens
- Move things around a bit
2025-06-08 19:06:17 +01:00
Mukhtar Akere
5bf1dab5e6 Torrent Queuing for Botched torrent (#83)
* Implement a queue for handling failed torrent

* Add checks for getting slots

* Few other cleanups, change some function names
2025-06-07 17:23:41 +01:00
Mukhtar Akere
84603b084b Some improvements to beta 2025-06-07 10:03:01 +01:00
Mukhtar Akere
dfcf8708f1 final prep for 1.0.3 2025-06-03 10:45:23 +01:00
Mukhtar Akere
30a1dd74a7 Add Basic healtcheck 2025-06-02 20:45:39 +01:00
Mukhtar Akere
f041ef47a7 fix cloudflare, probably? 2025-06-02 20:04:41 +01:00
Mukhtar Akere
349a13468b fix cloudflare, maybe? 2025-06-02 15:44:03 +01:00
Mukhtar Akere
9c6c44d785 - Revamp decypharr arch
- Add callback_ur, download_folder to addContent API
- Fix few bugs
- More declarative UI keywords
- Speed up repairs
- Few other improvements/bug fixes
2025-06-02 12:57:36 +01:00
Mukhtar Akere
1cd09239f9 - Add more indepth stats like number of torrents, profile details etc
- Add torrent ingest endpoints
- Add issue template
2025-05-29 04:05:44 +01:00
Elias Benbourenane
f9c49cbbef Torrent list context menu (#40)
* feat: Torrent list context menu

* style: Leave more padding on the context menu for smaller screens
2025-05-28 07:29:18 -07:00
Mukhtar Akere
60b8d87f1c hotfix rar PR 2025-05-28 00:14:43 +01:00
Elias Benbourenane
fbd6cd5038 Random access for RARed RealDebrid torrents (#61)
* feat: AI translated port of RARAR.py in Go

* feat: Extract and cache byte ranges of RARed RD torrents

* feat: Stream and download files with byte ranges if specified

* refactor: Use a more structured data format for byte ranges

* fix: Rework streaming to fix error handling

* perf: More efficient RAR file pre-processing

* feat: Made the RAR unpacker an optional config option

* refactor: Remove unnecessary Rar prefix for more idiomatic code

* refactor: More appropriate private method declaration

* feat: Error handling for parsing RARed torrents with retry requests and EOF validation

* fix: Correctly parse unicode file names

* fix: Handle special character conversion for RAR torrent file names

* refactor: Removed debug logs

* feat: Only allow two concurrent RAR unpacking tasks

* fix: Include "<" and ">" as unsafe chars for RAR unpacking

* refactor: Seperate types into their own file

* refactor: Don't read RAR files on reader initialization
2025-05-27 16:10:23 -07:00
Mukhtar Akere
87bf8d0574 Merge branch 'beta'
Some checks failed
GoReleaser / goreleaser (push) Has been cancelled
Release Docker Build / docker (push) Has been cancelled
2025-05-27 23:45:13 +01:00
Mukhtar Akere
7f25599b60 - Add support for per-file deletion
- Per-file repair instead of per-torrent
- Fix issues with LoadLocation
- Fix other minor bug fixes woth torbox
2025-05-27 19:31:19 +01:00
Mukhtar Akere
d313ed0712 hotfix non-webdav symlinker
Some checks failed
GoReleaser / goreleaser (push) Has been cancelled
Release Docker Build / docker (push) Has been cancelled
2025-05-26 00:16:46 +01:00
99 changed files with 6811 additions and 3581 deletions

.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file

@@ -0,0 +1,76 @@
name: Bug Report
description: 'Report a new bug'
labels: ['Type: Bug', 'Status: Needs Triage']
body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an open or closed issue already exists for the bug you encountered. If a bug exists and is closed, note that it may only be fixed in an unstable branch.
options:
- label: I have searched the existing open and closed issues
required: true
- type: textarea
attributes:
label: Current Behavior
description: A concise description of what you're experiencing.
validations:
required: true
- type: textarea
attributes:
label: Expected Behavior
description: A concise description of what you expected to happen.
validations:
required: true
- type: textarea
attributes:
label: Steps To Reproduce
description: Steps to reproduce the behavior.
placeholder: |
1. In this environment...
2. With this config...
3. Run '...'
4. See error...
validations:
required: false
- type: textarea
attributes:
label: Environment
description: |
examples:
- **OS**: Ubuntu 20.04
- **Version**: v1.0.0
- **Docker Install**: Yes
- **Browser**: Firefox 90 (If UI related)
value: |
- OS:
- Version:
- Docker Install:
- Browser:
render: markdown
validations:
required: true
- type: dropdown
attributes:
label: What branch are you running?
options:
- Main/Latest
- Beta
- Experimental
validations:
required: true
- type: textarea
attributes:
label: Trace Logs? **Not Optional**
description: |
Trace Logs
- are **required** for bug reports
- are not optional
validations:
required: true
- type: checkboxes
attributes:
label: Trace Logs have been provided as applicable
description: Trace logs are **required** for all bug reports and must contain `trace` entries. Info logs (which contain neither `debug` nor `trace`) are not sufficient for bug reports.
options:
- label: I have read and followed the steps in the documentation link and provided the required trace logs - the logs contain `trace` - that are relevant and show this issue.
required: true


@@ -0,0 +1,38 @@
name: Feature Request
description: 'Suggest an idea for Decypharr'
labels: ['Type: Feature Request', 'Status: Needs Triage']
body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an open or closed issue already exists for the feature you are requesting. If a request exists and is closed, note that it may only be fixed in an unstable branch.
options:
- label: I have searched the existing open and closed issues
required: true
- type: textarea
attributes:
label: Is your feature request related to a problem? Please describe
description: A clear and concise description of what the problem is.
validations:
required: true
- type: textarea
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: true
- type: textarea
attributes:
label: Anything else?
description: |
Links? References? Mockups? Anything that will give us more context about the feature you are requesting!
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
validations:
required: true


@@ -61,6 +61,8 @@ EXPOSE 8282
 VOLUME ["/app"]
 USER nonroot:nonroot
-HEALTHCHECK --interval=3s --retries=10 CMD ["/usr/bin/healthcheck", "--config", "/app"]
+# Base healthcheck
+HEALTHCHECK --interval=3s --retries=10 CMD ["/usr/bin/healthcheck", "--config", "/app", "--basic"]
 CMD ["/usr/bin/decypharr", "--config", "/app"]


@@ -1,21 +1,21 @@
-# DecyphArr
+# Decypharr
 ![ui](docs/docs/images/main.png)
-**DecyphArr** is an implementation of QbitTorrent with **Multiple Debrid service support**, written in Go.
+**Decypharr** is an implementation of QbitTorrent with **Multiple Debrid service support**, written in Go.
-## What is DecyphArr?
+## What is Decypharr?
-DecyphArr combines the power of QBittorrent with popular Debrid services to enhance your media management. It provides a familiar interface for Sonarr, Radarr, and other \*Arr applications while leveraging the capabilities of Debrid providers.
+Decypharr combines the power of QBittorrent with popular Debrid services to enhance your media management. It provides a familiar interface for Sonarr, Radarr, and other \*Arr applications.
 ## Features
-- 🔄 Mock Qbittorent API that supports the Arrs (Sonarr, Radarr, Lidarr etc)
-- 🖥️ Full-fledged UI for managing torrents
-- 🛡️ Proxy support for filtering out un-cached Debrid torrents
-- 🔌 Multiple Debrid providers support
-- 📁 WebDAV server support for each debrid provider
-- 🔧 Repair Worker for missing files
+- Mock Qbittorent API that supports the Arrs (Sonarr, Radarr, Lidarr etc)
+- Full-fledged UI for managing torrents
+- Proxy support for filtering out un-cached Debrid torrents
+- Multiple Debrid providers support
+- WebDAV server support for each debrid provider
+- Repair Worker for missing files
 ## Supported Debrid Providers
@@ -36,14 +36,9 @@ services:
     container_name: decypharr
     ports:
       - "8282:8282" # qBittorrent
-    user: "1000:1000"
     volumes:
       - /mnt/:/mnt
       - ./configs/:/app # config.json must be in this directory
-    environment:
-      - PUID=1000
-      - PGID=1000
-      - UMASK=002
     restart: unless-stopped
 ```


@@ -7,11 +7,10 @@ import (
 	"github.com/sirrobot01/decypharr/internal/logger"
 	"github.com/sirrobot01/decypharr/pkg/qbit"
 	"github.com/sirrobot01/decypharr/pkg/server"
-	"github.com/sirrobot01/decypharr/pkg/service"
+	"github.com/sirrobot01/decypharr/pkg/store"
 	"github.com/sirrobot01/decypharr/pkg/version"
 	"github.com/sirrobot01/decypharr/pkg/web"
 	"github.com/sirrobot01/decypharr/pkg/webdav"
-	"github.com/sirrobot01/decypharr/pkg/worker"
 	"net/http"
 	"os"
 	"runtime"
@@ -62,7 +61,7 @@ func Start(ctx context.Context) error {
 	qb := qbit.New()
 	wd := webdav.New()
-	ui := web.New(qb).Routes()
+	ui := web.New().Routes()
 	webdavRoutes := wd.Routes()
 	qbitRoutes := qb.Routes()
@@ -76,7 +75,7 @@ func Start(ctx context.Context) error {
 	done := make(chan struct{})
 	go func(ctx context.Context) {
-		if err := startServices(ctx, wd, srv); err != nil {
+		if err := startServices(ctx, cancelSvc, wd, srv); err != nil {
 			_log.Error().Err(err).Msg("Error starting services")
 			cancelSvc()
 		}
@@ -95,20 +94,20 @@ func Start(ctx context.Context) error {
 			_log.Info().Msg("Restarting Decypharr...")
 			<-done // wait for them to finish
 			qb.Reset()
-			service.Reset()
+			store.Reset()
 			// rebuild svcCtx off the original parent
 			svcCtx, cancelSvc = context.WithCancel(ctx)
 			runtime.GC()
 			config.Reload()
-			service.Reset()
+			store.Reset()
 			// loop will restart services automatically
 		}
 	}
 }
-func startServices(ctx context.Context, wd *webdav.WebDav, srv *server.Server) error {
+func startServices(ctx context.Context, cancelSvc context.CancelFunc, wd *webdav.WebDav, srv *server.Server) error {
 	var wg sync.WaitGroup
 	errChan := make(chan error)
@@ -146,11 +145,7 @@ func startServices(ctx context.Context, wd *webdav.WebDav, srv *server.Server) e
 	})
 	safeGo(func() error {
-		return worker.Start(ctx)
-	})
-	safeGo(func() error {
-		arr := service.GetService().Arr
+		arr := store.Get().Arr()
 		if arr == nil {
 			return nil
 		}
@@ -159,9 +154,9 @@ func startServices(ctx context.Context, wd *webdav.WebDav, srv *server.Server) e
 	if cfg := config.Get(); cfg.Repair.Enabled {
 		safeGo(func() error {
-			r := service.GetService().Repair
-			if r != nil {
-				if err := r.Start(ctx); err != nil {
+			repair := store.Get().Repair()
+			if repair != nil {
+				if err := repair.Start(ctx); err != nil {
 					_log.Error().Err(err).Msg("repair failed")
 				}
 			}
@@ -169,6 +164,10 @@ func startServices(ctx context.Context, wd *webdav.WebDav, srv *server.Server) e
 		})
 	}
+	safeGo(func() error {
+		return store.Get().StartQueueSchedule(ctx)
+	})
 	go func() {
 		wg.Wait()
 		close(errChan)
@@ -178,7 +177,11 @@ func startServices(ctx context.Context, wd *webdav.WebDav, srv *server.Server) e
 	for err := range errChan {
 		if err != nil {
 			_log.Error().Err(err).Msg("Service error detected")
-			// Don't shut down the whole app
+			// If the error is critical, return it to stop the main loop
+			if ctx.Err() == nil {
+				_log.Error().Msg("Stopping services due to error")
+				cancelSvc() // Cancel the service context to stop all services
+			}
 		}
 	}
 }()


@@ -22,8 +22,14 @@ type HealthStatus struct {
 }
 func main() {
-	var configPath string
+	var (
+		configPath   string
+		isBasicCheck bool
+		debug        bool
+	)
 	flag.StringVar(&configPath, "config", "/data", "path to the data folder")
+	flag.BoolVar(&isBasicCheck, "basic", false, "perform basic health check without WebDAV")
+	flag.BoolVar(&debug, "debug", false, "enable debug mode for detailed output")
 	flag.Parse()
 	config.SetConfigPath(configPath)
 	cfg := config.Get()
@@ -63,16 +69,17 @@ func main() {
 		status.WebUI = true
 	}
-	// Check WebDAV if enabled
-	if webdavPath != "" {
-		if checkWebDAV(ctx, baseUrl, port, webdavPath) {
+	if isBasicCheck {
+		status.WebDAVService = checkBaseWebdav(ctx, baseUrl, port)
+	} else {
+		// If not a basic check, check WebDAV with debrid path
+		if webdavPath != "" {
+			status.WebDAVService = checkDebridWebDAV(ctx, baseUrl, port, webdavPath)
+		} else {
+			// If no WebDAV path is set, consider it healthy
 			status.WebDAVService = true
 		}
-	} else {
-		// If WebDAV is not enabled, consider it healthy
-		status.WebDAVService = true
 	}
 	// Determine overall status
 	// Consider the application healthy if core services are running
 	status.OverallStatus = status.QbitAPI && status.WebUI
@@ -81,7 +88,7 @@ func main() {
 	}
 	// Optional: output health status as JSON for logging
-	if os.Getenv("DEBUG") == "true" {
+	if debug {
 		statusJSON, _ := json.MarshalIndent(status, "", " ")
 		fmt.Println(string(statusJSON))
 	}
@@ -132,7 +139,24 @@ func checkWebUI(ctx context.Context, baseUrl, port string) bool {
 	return resp.StatusCode == http.StatusOK
 }
-func checkWebDAV(ctx context.Context, baseUrl, port, path string) bool {
+func checkBaseWebdav(ctx context.Context, baseUrl, port string) bool {
+	url := fmt.Sprintf("http://localhost:%s%swebdav/", port, baseUrl)
+	req, err := http.NewRequestWithContext(ctx, "PROPFIND", url, nil)
+	if err != nil {
+		return false
+	}
+	resp, err := http.DefaultClient.Do(req)
+	if err != nil {
+		return false
+	}
+	defer resp.Body.Close()
+	return resp.StatusCode == http.StatusMultiStatus ||
+		resp.StatusCode == http.StatusOK
+}
+func checkDebridWebDAV(ctx context.Context, baseUrl, port, path string) bool {
 	url := fmt.Sprintf("http://localhost:%s%swebdav/%s", port, baseUrl, path)
 	req, err := http.NewRequestWithContext(ctx, "PROPFIND", url, nil)
 	if err != nil {
@@ -145,5 +169,7 @@ func checkWebDAV(ctx context.Context, baseUrl, port, path string) bool {
 	}
 	defer resp.Body.Close()
-	return resp.StatusCode == 207 || resp.StatusCode == http.StatusOK
+	return resp.StatusCode == http.StatusMultiStatus ||
+		resp.StatusCode == http.StatusOK
 }


@@ -14,7 +14,7 @@ Here are the fundamental configuration options:
   "discord_webhook_url": "",
   "min_file_size": 0,
   "max_file_size": 0,
-  "allowed_file_types": [".mp4", ".mkv", ".avi", ...],
+  "allowed_file_types": ["mp4", "mkv", "avi", ...],
 }
 ```
@@ -55,8 +55,8 @@ When enabled, you'll need to provide a username and password to access the Decyp
 You can set minimum and maximum file size limits for torrents:
 ```json
-"min_file_size": 0, // Minimum file size in bytes (0 = no minimum)
-"max_file_size": 0 // Maximum file size in bytes (0 = no maximum)
+"min_file_size": 0,
+"max_file_size": 0
 ```
 #### Allowed File Types
@@ -64,9 +64,9 @@ You can restrict the types of files that Decypharr will process by specifying al
 ```json
 "allowed_file_types": [
-  ".mp4", ".mkv", ".avi", ".mov",
-  ".m4v", ".mpg", ".mpeg", ".wmv",
-  ".m4a", ".mp3", ".flac", ".wav"
+  "mp4", "mkv", "avi", "mov",
+  "m4v", "mpg", "mpeg", "wmv",
+  "m4a", "mp3", "flac", "wav"
 ]
 ```
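The size and extension filters above can be combined into a single predicate. Below is a minimal Go sketch, assuming a hypothetical helper (`allowedFile` is illustrative, not Decypharr's actual code); note that the new config format drops the leading dot from extensions:

```go
package main

import (
	"fmt"
	"path/filepath"
	"slices"
	"strings"
)

// allowedFile reports whether a file passes the configured filters.
// Extensions are compared without the leading dot, matching the new
// config format ("mp4", not ".mp4"). A zero min/max size means "no limit".
func allowedFile(name string, size, minSize, maxSize int64, allowed []string) bool {
	if minSize > 0 && size < minSize {
		return false
	}
	if maxSize > 0 && size > maxSize {
		return false
	}
	ext := strings.TrimPrefix(strings.ToLower(filepath.Ext(name)), ".")
	return len(allowed) == 0 || slices.Contains(allowed, ext)
}

func main() {
	allowed := []string{"mp4", "mkv"}
	fmt.Println(allowedFile("movie.MKV", 5_000_000, 0, 0, allowed)) // extension matches case-insensitively
	fmt.Println(allowedFile("notes.txt", 1_000, 0, 0, allowed))    // extension not in the allow-list
}
```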


@@ -23,8 +23,7 @@ Here's a minimal configuration to get started:
   },
   "repair": {
     "enabled": false,
-    "interval": "12h",
-    "run_on_start": false
+    "interval": "12h"
   },
   "use_auth": false,
   "log_level": "info"


@@ -1,5 +1,7 @@
 # Repair Worker
+![Repair Worker](../images/repair.png)
 The Repair Worker is a powerful feature that helps maintain the health of your media library by scanning for and fixing issues with files.
 ## What It Does
@@ -19,7 +21,6 @@ To enable and configure the Repair Worker, add the following to your `config.jso
 "repair": {
   "enabled": true,
   "interval": "12h",
-  "run_on_start": false,
   "use_webdav": false,
   "zurg_url": "http://localhost:9999",
   "auto_process": true
@@ -30,7 +31,6 @@ To enable and configure the Repair Worker, add the following to your `config.jso
 - `enabled`: Set to `true` to enable the Repair Worker.
 - `interval`: The time interval for the Repair Worker to run (e.g., `12h`, `1d`).
-- `run_on_start`: If set to `true`, the Repair Worker will run immediately after Decypharr starts.
 - `use_webdav`: If set to `true`, the Repair Worker will use WebDAV for file operations.
 - `zurg_url`: The URL for the Zurg service (if using).
 - `auto_process`: If set to `true`, the Repair Worker will automatically process files that it finds issues with.


@@ -1,5 +1,7 @@
 # WebDAV Server
+![WebDAV Server](../images/webdav.png)
 Decypharr includes a built-in WebDAV server that provides direct access to your Debrid files, making them easily accessible to media players and other applications.


@@ -0,0 +1,22 @@
### Downloading with Decypharr
While Decypharr provides a qBittorrent API for integration with media management applications, it also allows you to download torrents manually through its interface. This guide walks you through downloading torrents with Decypharr.
- You can either use the Decypharr UI to add torrents manually or use its API to automate the process.
## Manual Downloading
![Downloading UI](../images/download.png)
To manually download a torrent using Decypharr, follow these steps:
1. **Access the Download Page**: Navigate to the "Download" section in the Decypharr UI.
2. **Add Torrents**: Upload torrent file(s) or paste magnet links directly into the input fields.
3. **Select the Action** (defaults to Symlink).
4. **Add any additional options**, such as:
   - **Download Folder**: Specify the folder where the downloaded files will be saved.
   - **Arr Category**: Choose the category for the download, which helps organize files in your media management applications.
   - **Debrid Provider**: Choose which Debrid service to use for the download (if you have multiple).
   - **File Size Limits**: Set minimum and maximum file size limits if needed.
   - **Allowed File Types**: Specify which file types are allowed for download.
Note:
- If you use an Arr category, your download will go into **{download_folder}/{arr}**
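The same add flow can be automated against Decypharr's qBittorrent-compatible API. The sketch below assumes the standard qBittorrent WebUI endpoint (`/api/v2/torrents/add`) and a local instance on port 8282; the magnet link, category, and save path are placeholders, and Decypharr's supported form fields may differ:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// buildAddRequest prepares a qBittorrent-style "add torrent" call.
// Decypharr mocks the qBittorrent API, so the endpoint and field names
// below follow the upstream WebUI API (assumption, not verified here).
func buildAddRequest(base, magnet, category, savepath string) (*http.Request, error) {
	form := url.Values{
		"urls":     {magnet},   // magnet link(s) to add
		"category": {category}, // maps to the Arr category
		"savepath": {savepath}, // download folder
	}
	req, err := http.NewRequest(http.MethodPost,
		base+"/api/v2/torrents/add",
		strings.NewReader(form.Encode()))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	return req, nil
}

func main() {
	req, err := buildAddRequest("http://localhost:8282",
		"magnet:?xt=urn:btih:...", "sonarr", "/mnt/symlinks/sonarr")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
	// Send with http.DefaultClient.Do(req) against a running instance.
}
```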


@@ -1,4 +1,5 @@
 # Guides for setting up Decypharr
 - [Setting up with Rclone](rclone.md)
+- [Manual Downloading with Decypharr](downloading.md)


@@ -5,7 +5,7 @@ This guide will help you set up Decypharr with Rclone, allowing you to use your
 #### Rclone
 Make sure you have Rclone installed and configured on your system. You can follow the [Rclone installation guide](https://rclone.org/install/) for instructions.
-It's recommended to use docker version of Rclone, as it provides a consistent environment across different platforms.
+It's recommended to use a docker version of Rclone, as it provides a consistent environment across different platforms.
 ### Steps
@@ -35,7 +35,7 @@ Create a `rclone.conf` file in `/opt/rclone/` with your Rclone configuration.
 ```conf
 [decypharr]
 type = webdav
-url = https://your-ip-or-domain:8282/webdav/realdebrid
+url = http://your-ip-or-domain:8282/webdav/realdebrid
 vendor = other
 pacer_min_sleep = 0
 ```
@@ -51,7 +51,7 @@ Create a `config.json` file in `/opt/decypharr/` with your Decypharr configurati
       "folder": "/mnt/remote/realdebrid/__all__/",
       "rate_limit": "250/minute",
       "use_webdav": true,
-      "rc_url": "http://your-ip-address:5572" // Rclone RC URL
+      "rc_url": "rclone:5572"
     }
   ],
   "qbittorrent": {
@@ -62,6 +62,11 @@ Create a `config.json` file in `/opt/decypharr/` with your Decypharr configurati
 ```
+### Docker Compose Setup
+- Check your current user and group IDs by running `id -u` and `id -g` in your terminal. You can use these values to set the `PUID` and `PGID` environment variables in the Docker Compose file.
+- You should also set `user` to your user ID and group ID in the Docker Compose file to ensure proper file permissions.
 Create a `docker-compose.yml` file with the following content:
 ```yaml
@@ -69,14 +74,14 @@ services:
   decypharr:
     image: cy01/blackhole:latest
     container_name: decypharr
-    user: "1000:1000"
+    user: "${PUID:-1000}:${PGID:-1000}"
     volumes:
-      - /mnt/:/mnt
+      - /mnt/:/mnt:rslave
       - /opt/decypharr/:/app
     environment:
-      - PUID=1000
-      - PGID=1000
       - UMASK=002
+      - PUID=1000 # Replace with your user ID
+      - PGID=1000 # Replace with your group ID
     ports:
       - "8282:8282/tcp"
     restart: unless-stopped
@@ -87,14 +92,11 @@ services:
     restart: unless-stopped
     environment:
       TZ: UTC
-      PUID: 1000
-      PGID: 1000
     ports:
       - 5572:5572
     volumes:
       - /mnt/remote/realdebrid:/data:rshared
       - /opt/rclone/rclone.conf:/config/rclone/rclone.conf
-      - /mnt:/mnt
     cap_add:
       - SYS_ADMIN
     security_opt:
@@ -105,9 +107,17 @@ services:
       decypharr:
         condition: service_healthy
         restart: true
-    command: "mount decypharr: /data --allow-non-empty --allow-other --uid=1000 --gid=1000 --umask=002 --dir-cache-time 10s --rc --rc-addr :5572 --rc-no-auth "
+    command: "mount decypharr: /data --allow-non-empty --allow-other --dir-cache-time 10s --rc --rc-addr :5572 --rc-no-auth"
 ```
+#### Docker Notes
+- Ensure that the `/mnt/` directory is mounted correctly to access your media files.
+- You can check your current user and group IDs and UMASK by running `id -a` and `umask` commands in your terminal.
+- You can adjust the `PUID` and `PGID` environment variables to match your user and group IDs for proper file permissions.
+- Also adding `--uid=$YOUR_PUID --gid=$YOUR_PGID` to the `rclone mount` command can help with permissions.
+- The `UMASK` environment variable can be set to control file permissions created by Decypharr.
 Start the containers:
 ```bash
 docker-compose up -d
@@ -132,7 +142,7 @@ For each provider, you'll need a different rclone. OR you can change your `rclon
 ```apache
 [decypharr]
 type = webdav
-url = https://your-ip-or-domain:8282/webdav/
+url = http://your-ip-or-domain:8282/webdav/
 vendor = other
 pacer_min_sleep = 0
 ```

Binary images added (not shown): one image (218 KiB), docs/docs/images/repair.png (226 KiB), docs/docs/images/webdav.png (62 KiB).

@@ -45,21 +45,15 @@ docker run -d \
 Create a `docker-compose.yml` file with the following content:
 ```yaml
-version: '3.7'
 services:
   decypharr:
     image: cy01/blackhole:latest
     container_name: decypharr
     ports:
       - "8282:8282"
-    user: "1000:1000"
     volumes:
-      - /mnt/:/mnt # Mount your media directory
+      - /mnt/:/mnt:rslave # Mount your media directory
       - ./config/:/app # config.json must be in this directory
     environment:
-      - PUID=1000
-      - PGID=1000
-      - UMASK=002
       - QBIT_PORT=8282 # qBittorrent Port (optional)
     restart: unless-stopped
 ```
@@ -73,9 +67,10 @@ docker-compose up -d
 ## Binary Installation
 If you prefer not to use Docker, you can download and run the binary directly.
-Download the binary from the releases page
+Download your OS-specific release from the [releases page](https://github.com/sirrobot01/decypharr/releases).
 Create a configuration file (see Configuration)
 Run the binary:
 ```bash
 chmod +x decypharr
 ./decypharr --config /path/to/config/folder
@@ -109,8 +104,28 @@ You can also configure Decypharr through the web interface, but it's recommended
 }
 ```
-### Few Notes
-- Make sure decypharr has access to the directories specified in the configuration file.
-- Ensure decypharr have write permissions to the qbittorrent download folder.
-- Make sure decypharr can write to the `./config/` directory.
+### Notes for Docker Users
+- Ensure that the `/mnt/` directory is mounted correctly to access your media files.
+- The `./config/` directory should contain your `config.json` file.
+- You can adjust the `PUID` and `PGID` environment variables to match your user and group IDs for proper file permissions.
+- The `UMASK` environment variable can be set to control file permissions created by Decypharr.
+##### Health Checks
+- Health checks are disabled by default. You can enable them by adding a `healthcheck` section in your `docker-compose.yml` file.
+- Health checks verify the availability of several parts of the application:
+  - The main web interface
+  - The qBittorrent API
+  - The WebDAV server (if enabled). You should disable health checks during the initial indexing, as it can take a long time to complete.
+```yaml
+services:
+  decypharr:
+    ...
+    healthcheck:
+      test: ["CMD", "/usr/bin/healthcheck", "--config", "/app/"]
+      interval: 5s
+      timeout: 10s
+      retries: 3
+```

go.mod

@@ -14,16 +14,17 @@ require (
github.com/robfig/cron/v3 v3.0.1
github.com/rs/zerolog v1.33.0
github.com/stanNthe5/stringbuf v0.0.3
go.uber.org/ratelimit v0.3.1
golang.org/x/crypto v0.33.0
golang.org/x/net v0.35.0
golang.org/x/sync v0.12.0
golang.org/x/time v0.8.0
gopkg.in/natefinch/lumberjack.v2 v2.2.1
)
require (
github.com/anacrolix/missinggo v1.3.0 // indirect
github.com/anacrolix/missinggo/v2 v2.7.3 // indirect
github.com/benbjohnson/clock v1.3.0 // indirect
github.com/bradfitz/iter v0.0.0-20191230175014-e8f45d346db8 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/google/go-cmp v0.6.0 // indirect

go.sum

@@ -36,6 +36,8 @@ github.com/anacrolix/tagflag v1.1.0/go.mod h1:Scxs9CV10NQatSmbyjqmqmeQNwGzlNe0CM
github.com/anacrolix/torrent v1.55.0 h1:s9yh/YGdPmbN9dTa+0Inh2dLdrLQRvEAj1jdFW/Hdd8=
github.com/anacrolix/torrent v1.55.0/go.mod h1:sBdZHBSZNj4de0m+EbYg7vvs/G/STubxu/GzzNbojsE=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/benbjohnson/clock v1.3.0 h1:ip6w0uFQkncKQ979AypyG0ER7mqUSBdKLOgAle/AT8A=
github.com/benbjohnson/clock v1.3.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/benbjohnson/immutable v0.2.0/go.mod h1:uc6OHo6PN2++n98KHLxW8ef4W42ylHiQSENghE1ezxI=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
@@ -216,8 +218,12 @@ github.com/willf/bitset v1.1.10/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPy
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/ratelimit v0.3.1 h1:K4qVE+byfv/B3tC+4nYWP7v/6SimcO7HzHekoMNBma0=
go.uber.org/ratelimit v0.3.1/go.mod h1:6euWsTB6U/Nb3X++xEUXA8ciPJvr19Q/0h1+oDcJhRk=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.33.0 h1:IOBPskki6Lysi0lo9qQvbxiQ+FvsCC/YWOecCHAixus=
@@ -266,8 +272,6 @@ golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.8.0 h1:9i3RxcPv3PZnitoVGMPDKZSq1xW1gK1Xy3ArNOGZfEg=
golang.org/x/time v0.8.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=


@@ -12,6 +12,13 @@ import (
"sync"
)
type RepairStrategy string
const (
RepairStrategyPerFile RepairStrategy = "per_file"
RepairStrategyPerTorrent RepairStrategy = "per_torrent"
)
var (
instance *Config
once sync.Once
@@ -19,15 +26,19 @@ var (
)
type Debrid struct {
Name string `json:"name,omitempty"`
APIKey string `json:"api_key,omitempty"`
DownloadAPIKeys []string `json:"download_api_keys,omitempty"`
Folder string `json:"folder,omitempty"`
DownloadUncached bool `json:"download_uncached,omitempty"`
CheckCached bool `json:"check_cached,omitempty"`
RateLimit string `json:"rate_limit,omitempty"` // 200/minute or 10/second
-Proxy string `json:"proxy,omitempty"`
-AddSamples bool `json:"add_samples,omitempty"`
+RepairRateLimit string `json:"repair_rate_limit,omitempty"`
+DownloadRateLimit string `json:"download_rate_limit,omitempty"`
+Proxy string `json:"proxy,omitempty"`
+UnpackRar bool `json:"unpack_rar,omitempty"`
+AddSamples bool `json:"add_samples,omitempty"`
+MinimumFreeSlot int `json:"minimum_free_slot,omitempty"` // Minimum active slots to use this debrid
UseWebDav bool `json:"use_webdav,omitempty"`
WebDav WebDav
@@ -51,17 +62,19 @@ type Arr struct {
Cleanup bool `json:"cleanup,omitempty"`
SkipRepair bool `json:"skip_repair,omitempty"`
DownloadUncached *bool `json:"download_uncached,omitempty"`
SelectedDebrid string `json:"selected_debrid,omitempty"`
Source string `json:"source,omitempty"` // The source of the arr, e.g. "auto", "config", "". Auto means it was automatically detected from the arr
}
type Repair struct {
Enabled bool `json:"enabled,omitempty"`
Interval string `json:"interval,omitempty"`
-RunOnStart bool `json:"run_on_start,omitempty"`
ZurgURL string `json:"zurg_url,omitempty"`
AutoProcess bool `json:"auto_process,omitempty"`
UseWebDav bool `json:"use_webdav,omitempty"`
Workers int `json:"workers,omitempty"`
ReInsert bool `json:"reinsert,omitempty"`
+Strategy RepairStrategy `json:"strategy,omitempty"`
}
type Auth struct {
@@ -75,19 +88,20 @@ type Config struct {
URLBase string `json:"url_base,omitempty"`
Port string `json:"port,omitempty"`
LogLevel string `json:"log_level,omitempty"`
Debrids []Debrid `json:"debrids,omitempty"`
QBitTorrent QBitTorrent `json:"qbittorrent,omitempty"`
Arrs []Arr `json:"arrs,omitempty"`
Repair Repair `json:"repair,omitempty"`
WebDav WebDav `json:"webdav,omitempty"`
AllowedExt []string `json:"allowed_file_types,omitempty"`
MinFileSize string `json:"min_file_size,omitempty"` // Minimum file size to download, 10MB, 1GB, etc
MaxFileSize string `json:"max_file_size,omitempty"` // Maximum file size to download (0 means no limit)
Path string `json:"-"` // Path to save the config file
UseAuth bool `json:"use_auth,omitempty"`
Auth *Auth `json:"-"`
DiscordWebhook string `json:"discord_webhook_url,omitempty"`
RemoveStalledAfter string `json:"remove_stalled_after,omitzero"`
}
func (c *Config) JsonFile() string {
@@ -97,6 +111,10 @@ func (c *Config) AuthFile() string {
return filepath.Join(c.Path, "auth.json")
}
func (c *Config) TorrentsFile() string {
return filepath.Join(c.Path, "torrents.json")
}
func (c *Config) loadConfig() error {
// Load the config file
if configPath == "" {
@@ -271,9 +289,15 @@ func (c *Config) updateDebrid(d Debrid) Debrid {
workers := runtime.NumCPU() * 50
perDebrid := workers / len(c.Debrids)
-if len(d.DownloadAPIKeys) == 0 {
-d.DownloadAPIKeys = append(d.DownloadAPIKeys, d.APIKey)
-}
+var downloadKeys []string
+if len(d.DownloadAPIKeys) > 0 {
+downloadKeys = d.DownloadAPIKeys
+} else {
+// If no download API keys are specified, use the main API key
+downloadKeys = []string{d.APIKey}
+}
+d.DownloadAPIKeys = downloadKeys
if !d.UseWebDav {
return d
@@ -336,6 +360,11 @@ func (c *Config) setDefaults() {
c.URLBase += "/"
}
// Set repair defaults
if c.Repair.Strategy == "" {
c.Repair.Strategy = RepairStrategyPerTorrent
}
// Load the auth file
c.Auth = c.GetAuth()
}
@@ -379,3 +408,7 @@ func Reload() {
instance = nil
once = sync.Once{}
}
func DefaultFreeSlot() int {
return 10
}
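Taken together, the new fields introduced in this diff would be set in `config.json` along these lines (a hedged sketch — the field names come from the struct tags above, but the values are purely illustrative):

```json
{
  "debrids": [
    {
      "name": "realdebrid",
      "api_key": "REDACTED",
      "rate_limit": "200/minute",
      "repair_rate_limit": "100/minute",
      "download_rate_limit": "10/second",
      "minimum_free_slot": 10
    }
  ],
  "repair": {
    "enabled": true,
    "strategy": "per_torrent"
  }
}
```

With `strategy` unset, `setDefaults` falls back to `per_torrent`, matching the move to per-torrent repair in this release.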


@@ -2,7 +2,6 @@ package request
import (
"bytes"
"compress/gzip"
"context"
"crypto/tls"
"encoding/json"
@@ -10,10 +9,9 @@ import (
"fmt"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/logger"
"go.uber.org/ratelimit"
"golang.org/x/net/proxy"
"golang.org/x/time/rate"
"io"
"math"
"math/rand"
"net"
"net/http"
@@ -53,7 +51,7 @@ type ClientOption func(*Client)
// Client represents an HTTP client with additional capabilities
type Client struct {
client *http.Client
-rateLimiter *rate.Limiter
+rateLimiter ratelimit.Limiter
headers map[string]string
headersMu sync.RWMutex
maxRetries int
@@ -85,7 +83,7 @@ func WithRedirectPolicy(policy func(req *http.Request, via []*http.Request) erro
}
// WithRateLimiter sets a rate limiter
-func WithRateLimiter(rl *rate.Limiter) ClientOption {
+func WithRateLimiter(rl ratelimit.Limiter) ClientOption {
return func(c *Client) {
c.rateLimiter = rl
}
@@ -137,9 +135,11 @@ func WithProxy(proxyURL string) ClientOption {
// doRequest performs a single HTTP request with rate limiting
func (c *Client) doRequest(req *http.Request) (*http.Response, error) {
if c.rateLimiter != nil {
-err := c.rateLimiter.Wait(req.Context())
-if err != nil {
-return nil, fmt.Errorf("rate limiter wait: %w", err)
-}
+select {
+case <-req.Context().Done():
+return nil, req.Context().Err()
+default:
+c.rateLimiter.Take()
+}
}
@@ -340,7 +340,10 @@ func New(options ...ClientOption) *Client {
return client
}
-func ParseRateLimit(rateStr string) *rate.Limiter {
+func ParseRateLimit(rateStr string) ratelimit.Limiter {
if rateStr == "" {
return nil
}
parts := strings.SplitN(rateStr, "/", 2)
if len(parts) != 2 {
return nil
@@ -352,23 +355,21 @@ func ParseRateLimit(rateStr string) *rate.Limiter {
return nil
}
// Set slack size to 10%
slackSize := count / 10
// normalize unit
unit := strings.ToLower(strings.TrimSpace(parts[1]))
unit = strings.TrimSuffix(unit, "s")
burstSize := int(math.Ceil(float64(count) * 0.1))
if burstSize < 1 {
burstSize = 1
}
if burstSize > count {
burstSize = count
}
switch unit {
case "minute", "min":
-return rate.NewLimiter(rate.Limit(float64(count)/60.0), burstSize)
+return ratelimit.New(count, ratelimit.Per(time.Minute), ratelimit.WithSlack(slackSize))
case "second", "sec":
-return rate.NewLimiter(rate.Limit(float64(count)), burstSize)
+return ratelimit.New(count, ratelimit.Per(time.Second), ratelimit.WithSlack(slackSize))
case "hour", "hr":
-return rate.NewLimiter(rate.Limit(float64(count)/3600.0), burstSize)
+return ratelimit.New(count, ratelimit.Per(time.Hour), ratelimit.WithSlack(slackSize))
case "day", "d":
return ratelimit.New(count, ratelimit.Per(24*time.Hour), ratelimit.WithSlack(slackSize))
default:
return nil
}
@@ -383,31 +384,6 @@ func JSONResponse(w http.ResponseWriter, data interface{}, code int) {
}
}
func Gzip(body []byte) []byte {
if len(body) == 0 {
return nil
}
// Check if the pool is nil
buf := bytes.NewBuffer(make([]byte, 0, len(body)))
gz, err := gzip.NewWriterLevel(buf, gzip.BestSpeed)
if err != nil {
return nil
}
if _, err := gz.Write(body); err != nil {
return nil
}
if err := gz.Close(); err != nil {
return nil
}
result := make([]byte, buf.Len())
copy(result, buf.Bytes())
return result
}
func Default() *Client {
once.Do(func() {
instance = New()
@@ -435,7 +411,7 @@ func isRetryableError(err error) bool {
var netErr net.Error
if errors.As(err, &netErr) {
// Retry on timeout errors and temporary errors
-return netErr.Timeout() || netErr.Temporary()
+return netErr.Timeout()
}
// Not a retryable error


@@ -1,4 +1,6 @@
-package request
+package utils
import "errors"
type HTTPError struct {
StatusCode int
@@ -33,3 +35,13 @@ var TorrentNotFoundError = &HTTPError{
Message: "Torrent not found",
Code: "torrent_not_found",
}
var TooManyActiveDownloadsError = &HTTPError{
StatusCode: 509,
Message: "Too many active downloads",
Code: "too_many_active_downloads",
}
func IsTooManyActiveDownloadsError(err error) bool {
return errors.As(err, &TooManyActiveDownloadsError)
}


@@ -1,7 +1,10 @@
package utils
import (
"fmt"
"io"
"net/url"
"os"
"strings"
)
@@ -19,3 +22,65 @@ func PathUnescape(path string) string {
return unescapedPath
}
func PreCacheFile(filePaths []string) error {
if len(filePaths) == 0 {
return fmt.Errorf("no file paths provided")
}
for _, filePath := range filePaths {
err := func(f string) error {
file, err := os.Open(f)
if err != nil {
if os.IsNotExist(err) {
// File has probably been moved by arr, return silently
return nil
}
return fmt.Errorf("failed to open file: %s: %v", f, err)
}
defer file.Close()
// Pre-cache the file header (first 256KB) using 16KB chunks.
if err := readSmallChunks(file, 0, 256*1024, 16*1024); err != nil {
return err
}
if err := readSmallChunks(file, 1024*1024, 64*1024, 16*1024); err != nil {
return err
}
return nil
}(filePath)
if err != nil {
return err
}
}
return nil
}
func readSmallChunks(file *os.File, startPos int64, totalToRead int, chunkSize int) error {
_, err := file.Seek(startPos, 0)
if err != nil {
return err
}
buf := make([]byte, chunkSize)
bytesRemaining := totalToRead
for bytesRemaining > 0 {
toRead := chunkSize
if bytesRemaining < chunkSize {
toRead = bytesRemaining
}
n, err := file.Read(buf[:toRead])
if err != nil {
if err == io.EOF {
break
}
return err
}
bytesRemaining -= n
}
return nil
}


@@ -25,11 +25,11 @@ var (
)
type Magnet struct {
-Name string
-InfoHash string
-Size int64
-Link string
-File []byte
+Name string `json:"name"`
+InfoHash string `json:"infoHash"`
+Size int64 `json:"size"`
+Link string `json:"link"`
+File []byte `json:"-"`
}
func (m *Magnet) IsTorrent() bool {
@@ -83,7 +83,6 @@ func GetMagnetFromBytes(torrentData []byte) (*Magnet, error) {
if err != nil {
return nil, err
}
log.Println("InfoHash: ", infoHash)
magnet := &Magnet{
InfoHash: infoHash,
Name: info.Name,


@@ -40,12 +40,10 @@ func RemoveInvalidChars(value string) string {
}
func RemoveExtension(value string) string {
-loc := mediaRegex.FindStringIndex(value)
-if loc != nil {
+if loc := mediaRegex.FindStringIndex(value); loc != nil {
return value[:loc[0]]
-} else {
-return value
-}
}
+return value
}
func IsMediaFile(path string) bool { func IsMediaFile(path string) bool {


@@ -3,6 +3,7 @@ package arr
import (
"bytes"
"context"
"crypto/tls"
"encoding/json"
"fmt"
"github.com/rs/zerolog"
@@ -11,7 +12,6 @@ import (
"github.com/sirrobot01/decypharr/internal/request"
"io"
"net/http"
"strconv"
"strings"
"sync"
"time"
@@ -20,6 +20,13 @@ import (
// Type is a type of arr
type Type string
var sharedClient = &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
},
Timeout: 60 * time.Second,
}
const (
Sonarr Type = "sonarr"
Radarr Type = "radarr"
@@ -35,10 +42,11 @@ type Arr struct {
Cleanup bool `json:"cleanup"`
SkipRepair bool `json:"skip_repair"`
DownloadUncached *bool `json:"download_uncached"`
-client *request.Client
+SelectedDebrid string `json:"selected_debrid,omitempty"` // The debrid service selected for this arr
+Source string `json:"source,omitempty"` // The source of the arr, e.g. "auto", "manual". Auto means it was automatically detected from the arr
}
-func New(name, host, token string, cleanup, skipRepair bool, downloadUncached *bool) *Arr {
+func New(name, host, token string, cleanup, skipRepair bool, downloadUncached *bool, selectedDebrid, source string) *Arr {
return &Arr{
Name: name,
Host: host,
@@ -47,7 +55,8 @@ func New(name, host, token string, cleanup, skipRepair bool, downloadUncached *b
Cleanup: cleanup,
SkipRepair: skipRepair,
DownloadUncached: downloadUncached,
-client: request.New(),
+SelectedDebrid: selectedDebrid,
+Source: source,
}
}
@@ -74,14 +83,11 @@ func (a *Arr) Request(method, endpoint string, payload interface{}) (*http.Respo
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("X-Api-Key", a.Token)
if a.client == nil {
a.client = request.New()
}
var resp *http.Response
for attempts := 0; attempts < 5; attempts++ {
-resp, err = a.client.Do(req)
+resp, err = sharedClient.Do(req)
if err != nil {
return nil, err
}
@@ -103,14 +109,16 @@ func (a *Arr) Request(method, endpoint string, payload interface{}) (*http.Respo
func (a *Arr) Validate() error {
if a.Token == "" || a.Host == "" {
-return nil
+return fmt.Errorf("arr not configured: %s", a.Name)
}
resp, err := a.Request("GET", "/api/v3/health", nil)
if err != nil {
return err
}
-if resp.StatusCode != http.StatusOK {
-return fmt.Errorf("arr test failed: %s", resp.Status)
+defer resp.Body.Close()
+// If response is not 200 or 404 (the latter is the case for Lidarr, etc.), return an error
+if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNotFound {
+return fmt.Errorf("failed to validate arr %s: %s", a.Name, resp.Status)
}
return nil
}
@@ -121,10 +129,10 @@ type Storage struct {
logger zerolog.Logger
}
-func (as *Storage) Cleanup() {
-as.mu.Lock()
-defer as.mu.Unlock()
-as.Arrs = make(map[string]*Arr)
+func (s *Storage) Cleanup() {
+s.mu.Lock()
+defer s.mu.Unlock()
+s.Arrs = make(map[string]*Arr)
}
func InferType(host, name string) Type {
@@ -145,8 +153,11 @@ func InferType(host, name string) Type {
func NewStorage() *Storage {
arrs := make(map[string]*Arr)
for _, a := range config.Get().Arrs {
if a.Host == "" || a.Token == "" || a.Name == "" {
continue // Skip if host or token is not set
}
name := a.Name
-arrs[name] = New(name, a.Host, a.Token, a.Cleanup, a.SkipRepair, a.DownloadUncached)
+arrs[name] = New(name, a.Host, a.Token, a.Cleanup, a.SkipRepair, a.DownloadUncached, a.SelectedDebrid, a.Source)
}
return &Storage{
Arrs: arrs,
@@ -154,46 +165,38 @@ func NewStorage() *Storage {
}
}
-func (as *Storage) AddOrUpdate(arr *Arr) {
-as.mu.Lock()
-defer as.mu.Unlock()
-if arr.Name == "" {
-return
-}
-as.Arrs[arr.Name] = arr
-}
+func (s *Storage) AddOrUpdate(arr *Arr) {
+s.mu.Lock()
+defer s.mu.Unlock()
+if arr.Host == "" || arr.Token == "" || arr.Name == "" {
+return
+}
+s.Arrs[arr.Name] = arr
+}
-func (as *Storage) Get(name string) *Arr {
-as.mu.Lock()
-defer as.mu.Unlock()
-return as.Arrs[name]
-}
+func (s *Storage) Get(name string) *Arr {
+s.mu.Lock()
+defer s.mu.Unlock()
+return s.Arrs[name]
+}
-func (as *Storage) GetAll() []*Arr {
-as.mu.Lock()
-defer as.mu.Unlock()
-arrs := make([]*Arr, 0, len(as.Arrs))
-for _, arr := range as.Arrs {
-if arr.Host != "" && arr.Token != "" {
-arrs = append(arrs, arr)
-}
-}
-return arrs
-}
+func (s *Storage) GetAll() []*Arr {
+s.mu.Lock()
+defer s.mu.Unlock()
+arrs := make([]*Arr, 0, len(s.Arrs))
+for _, arr := range s.Arrs {
+arrs = append(arrs, arr)
+}
+return arrs
+}
-func (as *Storage) Clear() {
-as.mu.Lock()
-defer as.mu.Unlock()
-as.Arrs = make(map[string]*Arr)
-}
-func (as *Storage) StartSchedule(ctx context.Context) error {
+func (s *Storage) StartSchedule(ctx context.Context) error {
ticker := time.NewTicker(10 * time.Second)
select {
case <-ticker.C:
-as.cleanupArrsQueue()
+s.cleanupArrsQueue()
case <-ctx.Done():
ticker.Stop()
return nil
@@ -201,9 +204,9 @@ func (as *Storage) StartSchedule(ctx context.Context) error {
return nil
}
-func (as *Storage) cleanupArrsQueue() {
+func (s *Storage) cleanupArrsQueue() {
arrs := make([]*Arr, 0)
-for _, arr := range as.Arrs {
+for _, arr := range s.Arrs {
if !arr.Cleanup {
continue
}
@@ -212,26 +215,18 @@ func (as *Storage) cleanupArrsQueue() {
if len(arrs) > 0 {
for _, arr := range arrs {
if err := arr.CleanupQueue(); err != nil {
-as.logger.Error().Err(err).Msgf("Failed to cleanup arr %s", arr.Name)
+s.logger.Error().Err(err).Msgf("Failed to cleanup arr %s", arr.Name)
}
}
}
}
-func (a *Arr) Refresh() error {
+func (a *Arr) Refresh() {
payload := struct {
Name string `json:"name"`
}{
Name: "RefreshMonitoredDownloads",
}
-resp, err := a.Request(http.MethodPost, "api/v3/command", payload)
-if err == nil && resp != nil {
-statusOk := strconv.Itoa(resp.StatusCode)[0] == '2'
-if statusOk {
-return nil
-}
-}
-return fmt.Errorf("failed to refresh: %v", err)
+_, _ = a.Request(http.MethodPost, "api/v3/command", payload)
}


@@ -105,6 +105,7 @@ func (a *Arr) GetMedia(mediaId string) ([]Content, error) {
Id: d.Id,
EpisodeId: eId,
SeasonNumber: file.SeasonNumber,
Size: file.Size,
})
}
if len(files) == 0 {
@@ -148,6 +149,7 @@ func GetMovies(a *Arr, tvId string) ([]Content, error) {
FileId: movie.MovieFile.Id,
Id: movie.Id,
Path: movie.MovieFile.Path,
Size: movie.MovieFile.Size,
})
ct.Files = files
contents = append(contents, ct)


@@ -205,5 +205,4 @@ func (a *Arr) Import(path string, seriesId int, seasons []int) (io.ReadCloser, e
}
defer resp.Body.Close()
return resp.Body, nil
}


@@ -11,6 +11,7 @@ type Movie struct {
RelativePath string `json:"relativePath"`
Path string `json:"path"`
Id int `json:"id"`
Size int64 `json:"size"`
} `json:"movieFile"`
Id int `json:"id"`
}
@@ -25,6 +26,8 @@ type ContentFile struct {
IsSymlink bool `json:"isSymlink"`
IsBroken bool `json:"isBroken"`
SeasonNumber int `json:"seasonNumber"`
Processed bool `json:"processed"`
Size int64 `json:"size"`
}
func (file *ContentFile) Delete() {
@@ -44,4 +47,5 @@ type seriesFile struct {
SeasonNumber int `json:"seasonNumber"`
Path string `json:"path"`
Id int `json:"id"`
Size int64 `json:"size"`
}

pkg/debrid/debrid.go (new file)

@@ -0,0 +1,241 @@
package debrid
import (
"context"
"errors"
"fmt"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/debrid/providers/alldebrid"
"github.com/sirrobot01/decypharr/pkg/debrid/providers/debrid_link"
"github.com/sirrobot01/decypharr/pkg/debrid/providers/realdebrid"
"github.com/sirrobot01/decypharr/pkg/debrid/providers/torbox"
"github.com/sirrobot01/decypharr/pkg/debrid/store"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"sync"
)
type Debrid struct {
cache *store.Cache // Could be nil if not using WebDAV
client types.Client // HTTP client for making requests to the debrid service
}
func (de *Debrid) Client() types.Client {
return de.client
}
func (de *Debrid) Cache() *store.Cache {
return de.cache
}
type Storage struct {
debrids map[string]*Debrid
mu sync.RWMutex
lastUsed string
}
func NewStorage() *Storage {
cfg := config.Get()
_logger := logger.Default()
debrids := make(map[string]*Debrid)
for _, dc := range cfg.Debrids {
client, err := createDebridClient(dc)
if err != nil {
_logger.Error().Err(err).Str("Debrid", dc.Name).Msg("failed to connect to debrid client")
continue
}
var cache *store.Cache
_log := client.Logger()
if dc.UseWebDav {
cache = store.NewDebridCache(dc, client)
_log.Info().Msg("Debrid Service started with WebDAV")
} else {
_log.Info().Msg("Debrid Service started")
}
debrids[dc.Name] = &Debrid{
cache: cache,
client: client,
}
}
d := &Storage{
debrids: debrids,
lastUsed: "",
}
return d
}
func (d *Storage) Debrid(name string) *Debrid {
d.mu.RLock()
defer d.mu.RUnlock()
if debrid, exists := d.debrids[name]; exists {
return debrid
}
return nil
}
func (d *Storage) Debrids() map[string]*Debrid {
d.mu.RLock()
defer d.mu.RUnlock()
debridsCopy := make(map[string]*Debrid)
for name, debrid := range d.debrids {
if debrid != nil {
debridsCopy[name] = debrid
}
}
return debridsCopy
}
func (d *Storage) Client(name string) types.Client {
d.mu.RLock()
defer d.mu.RUnlock()
if client, exists := d.debrids[name]; exists {
return client.client
}
return nil
}
func (d *Storage) Reset() {
d.mu.Lock()
d.debrids = make(map[string]*Debrid)
d.mu.Unlock()
d.lastUsed = ""
}
func (d *Storage) Clients() map[string]types.Client {
d.mu.RLock()
defer d.mu.RUnlock()
clientsCopy := make(map[string]types.Client)
for name, debrid := range d.debrids {
if debrid != nil && debrid.client != nil {
clientsCopy[name] = debrid.client
}
}
return clientsCopy
}
func (d *Storage) Caches() map[string]*store.Cache {
d.mu.RLock()
defer d.mu.RUnlock()
cachesCopy := make(map[string]*store.Cache)
for name, debrid := range d.debrids {
if debrid != nil && debrid.cache != nil {
cachesCopy[name] = debrid.cache
}
}
return cachesCopy
}
func (d *Storage) FilterClients(filter func(types.Client) bool) map[string]types.Client {
d.mu.Lock()
defer d.mu.Unlock()
filteredClients := make(map[string]types.Client)
for name, client := range d.debrids {
if client != nil && filter(client.client) {
filteredClients[name] = client.client
}
}
return filteredClients
}
func createDebridClient(dc config.Debrid) (types.Client, error) {
switch dc.Name {
case "realdebrid":
return realdebrid.New(dc)
case "torbox":
return torbox.New(dc)
case "debridlink":
return debrid_link.New(dc)
case "alldebrid":
return alldebrid.New(dc)
default:
return realdebrid.New(dc)
}
}
func Process(ctx context.Context, store *Storage, selectedDebrid string, magnet *utils.Magnet, a *arr.Arr, action string, overrideDownloadUncached bool) (*types.Torrent, error) {
debridTorrent := &types.Torrent{
InfoHash: magnet.InfoHash,
Magnet: magnet,
Name: magnet.Name,
Arr: a,
Size: magnet.Size,
Files: make(map[string]types.File),
}
clients := store.FilterClients(func(c types.Client) bool {
if selectedDebrid != "" && c.Name() != selectedDebrid {
return false
}
return true
})
if len(clients) == 0 {
return nil, fmt.Errorf("no debrid clients available")
}
errs := make([]error, 0, len(clients))
// Override first, arr second, debrid third
if overrideDownloadUncached {
debridTorrent.DownloadUncached = true
} else if a.DownloadUncached != nil {
// Arr cached is set
debridTorrent.DownloadUncached = *a.DownloadUncached
} else {
debridTorrent.DownloadUncached = false
}
for index, db := range clients {
_logger := db.Logger()
_logger.Info().
Str("Debrid", db.Name()).
Str("Arr", a.Name).
Str("Hash", debridTorrent.InfoHash).
Str("Name", debridTorrent.Name).
Str("Action", action).
Msg("Processing torrent")
if !overrideDownloadUncached && a.DownloadUncached == nil {
debridTorrent.DownloadUncached = db.GetDownloadUncached()
}
dbt, err := db.SubmitMagnet(debridTorrent)
if err != nil || dbt == nil || dbt.Id == "" {
errs = append(errs, err)
continue
}
dbt.Arr = a
_logger.Info().Str("id", dbt.Id).Msgf("Torrent: %s submitted to %s", dbt.Name, db.Name())
store.mu.Lock()
store.lastUsed = index
store.mu.Unlock()
torrent, err := db.CheckStatus(dbt)
if err != nil && torrent != nil && torrent.Id != "" {
// Delete the torrent if it was not downloaded
go func(id string) {
_ = db.DeleteTorrent(id)
}(torrent.Id)
}
if err != nil {
errs = append(errs, err)
continue
}
if torrent == nil {
errs = append(errs, fmt.Errorf("torrent %s returned nil after checking status", dbt.Name))
continue
}
return torrent, nil
}
if len(errs) == 0 {
return nil, fmt.Errorf("failed to process torrent: no clients available")
}
joinedErrors := errors.Join(errs...)
return nil, fmt.Errorf("failed to process torrent: %w", joinedErrors)
}


@@ -1,103 +0,0 @@
package debrid
import (
"fmt"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/debrid/alldebrid"
"github.com/sirrobot01/decypharr/pkg/debrid/debrid_link"
"github.com/sirrobot01/decypharr/pkg/debrid/realdebrid"
"github.com/sirrobot01/decypharr/pkg/debrid/torbox"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"strings"
)
func createDebridClient(dc config.Debrid) types.Client {
switch dc.Name {
case "realdebrid":
return realdebrid.New(dc)
case "torbox":
return torbox.New(dc)
case "debridlink":
return debrid_link.New(dc)
case "alldebrid":
return alldebrid.New(dc)
default:
return realdebrid.New(dc)
}
}
func ProcessTorrent(d *Engine, magnet *utils.Magnet, a *arr.Arr, isSymlink, overrideDownloadUncached bool) (*types.Torrent, error) {
debridTorrent := &types.Torrent{
InfoHash: magnet.InfoHash,
Magnet: magnet,
Name: magnet.Name,
Arr: a,
Size: magnet.Size,
Files: make(map[string]types.File),
}
errs := make([]error, 0, len(d.Clients))
// Override first, arr second, debrid third
if overrideDownloadUncached {
debridTorrent.DownloadUncached = true
} else if a.DownloadUncached != nil {
// Arr cached is set
debridTorrent.DownloadUncached = *a.DownloadUncached
} else {
debridTorrent.DownloadUncached = false
}
for index, db := range d.Clients {
logger := db.GetLogger()
logger.Info().Str("Debrid", db.GetName()).Str("Hash", debridTorrent.InfoHash).Msg("Processing torrent")
if !overrideDownloadUncached && a.DownloadUncached == nil {
debridTorrent.DownloadUncached = db.GetDownloadUncached()
}
//if db.GetCheckCached() {
// hash, exists := db.IsAvailable([]string{debridTorrent.InfoHash})[debridTorrent.InfoHash]
// if !exists || !hash {
// logger.Info().Msgf("Torrent: %s is not cached", debridTorrent.Name)
// continue
// } else {
// logger.Info().Msgf("Torrent: %s is cached(or downloading)", debridTorrent.Name)
// }
//}
dbt, err := db.SubmitMagnet(debridTorrent)
if err != nil || dbt == nil || dbt.Id == "" {
errs = append(errs, err)
continue
}
dbt.Arr = a
logger.Info().Str("id", dbt.Id).Msgf("Torrent: %s submitted to %s", dbt.Name, db.GetName())
d.LastUsed = index
torrent, err := db.CheckStatus(dbt, isSymlink)
if err != nil && torrent != nil && torrent.Id != "" {
// Delete the torrent if it was not downloaded
go func(id string) {
_ = db.DeleteTorrent(id)
}(torrent.Id)
}
return torrent, err
}
if len(errs) == 0 {
return nil, fmt.Errorf("failed to process torrent: no clients available")
}
if len(errs) == 1 {
return nil, fmt.Errorf("failed to process torrent: %w", errs[0])
} else {
errStrings := make([]string, 0, len(errs))
for _, err := range errs {
errStrings = append(errStrings, err.Error())
}
return nil, fmt.Errorf("failed to process torrent: %s", strings.Join(errStrings, ", "))
}
}


@@ -1,224 +0,0 @@
package debrid
import (
"errors"
"fmt"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"sync"
"time"
)
type linkCache struct {
Id string
link string
accountId string
expiresAt time.Time
}
type downloadLinkCache struct {
data map[string]linkCache
mu sync.Mutex
}
func newDownloadLinkCache() *downloadLinkCache {
return &downloadLinkCache{
data: make(map[string]linkCache),
}
}
func (c *downloadLinkCache) reset() {
c.mu.Lock()
c.data = make(map[string]linkCache)
c.mu.Unlock()
}
func (c *downloadLinkCache) Load(key string) (linkCache, bool) {
c.mu.Lock()
defer c.mu.Unlock()
dl, ok := c.data[key]
return dl, ok
}
func (c *downloadLinkCache) Store(key string, value linkCache) {
c.mu.Lock()
defer c.mu.Unlock()
c.data[key] = value
}
func (c *downloadLinkCache) Delete(key string) {
c.mu.Lock()
defer c.mu.Unlock()
delete(c.data, key)
}
type downloadLinkRequest struct {
result string
err error
done chan struct{}
}
func newDownloadLinkRequest() *downloadLinkRequest {
return &downloadLinkRequest{
done: make(chan struct{}),
}
}
func (r *downloadLinkRequest) Complete(result string, err error) {
r.result = result
r.err = err
close(r.done)
}
func (r *downloadLinkRequest) Wait() (string, error) {
<-r.done
return r.result, r.err
}
func (c *Cache) GetDownloadLink(torrentName, filename, fileLink string) (string, error) {
// Check link cache
if dl := c.checkDownloadLink(fileLink); dl != "" {
return dl, nil
}
// Use LoadOrStore so the check and the insert are atomic; otherwise two
// goroutines could both miss the Load and fetch the same link twice
req := newDownloadLinkRequest()
if existing, inFlight := c.downloadLinkRequests.LoadOrStore(fileLink, req); inFlight {
// Wait for the in-flight request to complete and reuse its result
return existing.(*downloadLinkRequest).Wait()
}
downloadLink, err := c.fetchDownloadLink(torrentName, filename, fileLink)
// Complete the request and remove it from the map
req.Complete(downloadLink, err)
c.downloadLinkRequests.Delete(fileLink)
return downloadLink, err
}
func (c *Cache) fetchDownloadLink(torrentName, filename, fileLink string) (string, error) {
ct := c.GetTorrentByName(torrentName)
if ct == nil {
return "", fmt.Errorf("torrent not found")
}
file := ct.Files[filename]
if file.Link == "" {
// file link is empty, refresh the torrent to get restricted links
ct = c.refreshTorrent(file.TorrentId) // Refresh the torrent from the debrid
if ct == nil {
return "", fmt.Errorf("failed to refresh torrent")
} else {
file = ct.Files[filename]
}
}
// If file.Link is still empty, return
if file.Link == "" {
// Try to reinsert the torrent?
newCt, err := c.reInsertTorrent(ct)
if err != nil {
return "", fmt.Errorf("failed to reinsert torrent. %w", err)
}
ct = newCt
file = ct.Files[filename]
}
c.logger.Trace().Msgf("Getting download link for %s(%s)", filename, file.Link)
downloadLink, err := c.client.GetDownloadLink(ct.Torrent, &file)
if err != nil {
if errors.Is(err, request.HosterUnavailableError) {
newCt, err := c.reInsertTorrent(ct)
if err != nil {
return "", fmt.Errorf("failed to reinsert torrent: %w", err)
}
ct = newCt
file = ct.Files[filename]
// Retry getting the download link
downloadLink, err = c.client.GetDownloadLink(ct.Torrent, &file)
if err != nil {
return "", err
}
if downloadLink == nil {
return "", fmt.Errorf("download link is empty for %s", filename)
}
c.updateDownloadLink(downloadLink)
return downloadLink.DownloadLink, nil
} else if errors.Is(err, request.TrafficExceededError) {
// This is likely a fair usage limit error
return "", err
} else {
return "", fmt.Errorf("failed to get download link: %w", err)
}
}
if downloadLink == nil {
return "", fmt.Errorf("download link is empty")
}
c.updateDownloadLink(downloadLink)
return downloadLink.DownloadLink, nil
}
func (c *Cache) GenerateDownloadLinks(t CachedTorrent) {
if err := c.client.GenerateDownloadLinks(t.Torrent); err != nil {
c.logger.Error().Err(err).Str("torrent", t.Name).Msg("Failed to generate download links")
return
}
for _, file := range t.Files {
if file.DownloadLink != nil {
c.updateDownloadLink(file.DownloadLink)
}
}
c.setTorrent(t, nil)
}
func (c *Cache) updateDownloadLink(dl *types.DownloadLink) {
c.downloadLinks.Store(dl.Link, linkCache{
Id: dl.Id,
link: dl.DownloadLink,
expiresAt: time.Now().Add(c.autoExpiresLinksAfterDuration),
accountId: dl.AccountId,
})
}
func (c *Cache) checkDownloadLink(link string) string {
if dl, ok := c.downloadLinks.Load(link); ok {
if dl.expiresAt.After(time.Now()) && !c.IsDownloadLinkInvalid(dl.link) {
return dl.link
}
}
return ""
}
func (c *Cache) MarkDownloadLinkAsInvalid(link, downloadLink, reason string) {
c.invalidDownloadLinks.Store(downloadLink, reason)
// Remove the download api key from active
if reason == "bandwidth_exceeded" {
if dl, ok := c.downloadLinks.Load(link); ok {
if dl.accountId != "" && dl.link == downloadLink {
c.client.DisableAccount(dl.accountId)
}
}
}
c.removeDownloadLink(link)
}
func (c *Cache) removeDownloadLink(link string) {
if dl, ok := c.downloadLinks.Load(link); ok {
// Delete dl from cache
c.downloadLinks.Delete(link)
// Delete dl from debrid
if dl.Id != "" {
_ = c.client.DeleteDownloadLink(dl.Id)
}
}
}
func (c *Cache) IsDownloadLinkInvalid(downloadLink string) bool {
if reason, ok := c.invalidDownloadLinks.Load(downloadLink); ok {
c.logger.Debug().Msgf("Download link %s is invalid: %s", downloadLink, reason)
return true
}
return false
}


@@ -1,61 +0,0 @@
package debrid
import (
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"sync"
)
type Engine struct {
Clients map[string]types.Client
clientsMu sync.Mutex
Caches map[string]*Cache
CacheMu sync.Mutex
LastUsed string
}
func NewEngine() *Engine {
cfg := config.Get()
clients := make(map[string]types.Client)
caches := make(map[string]*Cache)
for _, dc := range cfg.Debrids {
client := createDebridClient(dc)
logger := client.GetLogger()
if dc.UseWebDav {
caches[dc.Name] = New(dc, client)
logger.Info().Msg("Debrid Service started with WebDAV")
} else {
logger.Info().Msg("Debrid Service started")
}
clients[dc.Name] = client
}
d := &Engine{
Clients: clients,
LastUsed: "",
Caches: caches,
}
return d
}
func (d *Engine) GetClient(name string) types.Client {
d.clientsMu.Lock()
defer d.clientsMu.Unlock()
return d.Clients[name]
}
func (d *Engine) Reset() {
d.clientsMu.Lock()
d.Clients = make(map[string]types.Client)
d.clientsMu.Unlock()
d.CacheMu.Lock()
d.Caches = make(map[string]*Cache)
d.CacheMu.Unlock()
}
func (d *Engine) GetDebrids() map[string]types.Client {
return d.Clients
}


@@ -1 +0,0 @@
package debrid


@@ -18,20 +18,26 @@ import (
 )

 type AllDebrid struct {
-	Name string
+	name string
 	Host string `json:"host"`
 	APIKey string
-	accounts map[string]types.Account
+	accounts *types.Accounts
+	autoExpiresLinksAfter time.Duration
 	DownloadUncached bool
 	client *request.Client
 	MountPath string
 	logger zerolog.Logger
 	checkCached bool
 	addSamples bool
+	minimumFreeSlot int
 }

-func New(dc config.Debrid) *AllDebrid {
+func (ad *AllDebrid) GetProfile() (*types.Profile, error) {
+	return nil, nil
+}
+
+func New(dc config.Debrid) (*AllDebrid, error) {
 	rl := request.ParseRateLimit(dc.RateLimit)
 	headers := map[string]string{
@@ -45,34 +51,31 @@ func New(dc config.Debrid) *AllDebrid {
 		request.WithProxy(dc.Proxy),
 	)

-	accounts := make(map[string]types.Account)
-	for idx, key := range dc.DownloadAPIKeys {
-		id := strconv.Itoa(idx)
-		accounts[id] = types.Account{
-			Name: key,
-			ID: id,
-			Token: key,
-		}
-	}
+	autoExpiresLinksAfter, err := time.ParseDuration(dc.AutoExpireLinksAfter)
+	if autoExpiresLinksAfter == 0 || err != nil {
+		autoExpiresLinksAfter = 48 * time.Hour
+	}

 	return &AllDebrid{
-		Name: "alldebrid",
+		name: "alldebrid",
 		Host: "http://api.alldebrid.com/v4.1",
 		APIKey: dc.APIKey,
-		accounts: accounts,
+		accounts: types.NewAccounts(dc),
 		DownloadUncached: dc.DownloadUncached,
+		autoExpiresLinksAfter: autoExpiresLinksAfter,
 		client: client,
 		MountPath: dc.Folder,
 		logger: logger.New(dc.Name),
 		checkCached: dc.CheckCached,
 		addSamples: dc.AddSamples,
-	}
+		minimumFreeSlot: dc.MinimumFreeSlot,
+	}, nil
 }

-func (ad *AllDebrid) GetName() string {
-	return ad.Name
+func (ad *AllDebrid) Name() string {
+	return ad.name
 }

-func (ad *AllDebrid) GetLogger() zerolog.Logger {
+func (ad *AllDebrid) Logger() zerolog.Logger {
 	return ad.logger
 }
@@ -186,7 +189,7 @@ func (ad *AllDebrid) GetTorrent(torrentId string) (*types.Torrent, error) {
 	var res TorrentInfoResponse
 	err = json.Unmarshal(resp, &res)
 	if err != nil {
-		ad.logger.Info().Msgf("Error unmarshalling torrent info: %s", err)
+		ad.logger.Error().Err(err).Msgf("Error unmarshalling torrent info")
 		return nil, err
 	}
 	data := res.Data.Magnets
@@ -200,7 +203,7 @@ func (ad *AllDebrid) GetTorrent(torrentId string) (*types.Torrent, error) {
 		OriginalFilename: name,
 		Files: make(map[string]types.File),
 		InfoHash: data.Hash,
-		Debrid: ad.Name,
+		Debrid: ad.name,
 		MountPath: ad.MountPath,
 		Added: time.Unix(data.CompletionDate, 0).Format(time.RFC3339),
 	}
@@ -228,7 +231,7 @@ func (ad *AllDebrid) UpdateTorrent(t *types.Torrent) error {
 	var res TorrentInfoResponse
 	err = json.Unmarshal(resp, &res)
 	if err != nil {
-		ad.logger.Info().Msgf("Error unmarshalling torrent info: %s", err)
+		ad.logger.Error().Err(err).Msgf("Error unmarshalling torrent info")
 		return err
 	}
 	data := res.Data.Magnets
@@ -240,7 +243,7 @@ func (ad *AllDebrid) UpdateTorrent(t *types.Torrent) error {
 	t.OriginalFilename = name
 	t.Folder = name
 	t.MountPath = ad.MountPath
-	t.Debrid = ad.Name
+	t.Debrid = ad.name
 	t.Bytes = data.Size
 	t.Seeders = data.Seeders
 	t.Added = time.Unix(data.CompletionDate, 0).Format(time.RFC3339)
@@ -256,7 +259,7 @@ func (ad *AllDebrid) UpdateTorrent(t *types.Torrent) error {
 	return nil
 }

-func (ad *AllDebrid) CheckStatus(torrent *types.Torrent, isSymlink bool) (*types.Torrent, error) {
+func (ad *AllDebrid) CheckStatus(torrent *types.Torrent) (*types.Torrent, error) {
 	for {
 		err := ad.UpdateTorrent(torrent)
@@ -266,13 +269,7 @@ func (ad *AllDebrid) CheckStatus(torrent *types.Torrent, isSymlink bool) (*types
 		status := torrent.Status
 		if status == "downloaded" {
 			ad.logger.Info().Msgf("Torrent: %s downloaded", torrent.Name)
-			if !isSymlink {
-				err = ad.GenerateDownloadLinks(torrent)
-				if err != nil {
-					return torrent, err
-				}
-			}
-			break
+			return torrent, nil
 		} else if utils.Contains(ad.GetDownloadingStatus(), status) {
 			if !torrent.DownloadUncached {
 				return torrent, fmt.Errorf("torrent: %s not cached", torrent.Name)
@@ -285,7 +282,6 @@ func (ad *AllDebrid) CheckStatus(torrent *types.Torrent, isSymlink bool) (*types
 		}
 	}
-	return torrent, nil
 }

 func (ad *AllDebrid) DeleteTorrent(torrentId string) error {
@@ -298,8 +294,9 @@ func (ad *AllDebrid) DeleteTorrent(torrentId string) error {
 	return nil
 }

-func (ad *AllDebrid) GenerateDownloadLinks(t *types.Torrent) error {
+func (ad *AllDebrid) GetFileDownloadLinks(t *types.Torrent) error {
 	filesCh := make(chan types.File, len(t.Files))
+	linksCh := make(chan *types.DownloadLink, len(t.Files))
 	errCh := make(chan error, len(t.Files))

 	var wg sync.WaitGroup
errCh <- err errCh <- err
return return
} }
file.DownloadLink = link if link == nil {
if link != nil {
errCh <- fmt.Errorf("download link is empty") errCh <- fmt.Errorf("download link is empty")
return return
} }
linksCh <- link
file.DownloadLink = link
filesCh <- file filesCh <- file
}(file) }(file)
} }
go func() { go func() {
wg.Wait() wg.Wait()
close(filesCh) close(filesCh)
close(linksCh)
close(errCh) close(errCh)
}() }()
files := make(map[string]types.File, len(t.Files)) files := make(map[string]types.File, len(t.Files))
@@ -330,10 +329,22 @@ func (ad *AllDebrid) GenerateDownloadLinks(t *types.Torrent) error {
 		files[file.Name] = file
 	}

+	// Collect download links
+	links := make(map[string]*types.DownloadLink, len(t.Files))
+	for link := range linksCh {
+		if link == nil {
+			continue
+		}
+		links[link.Link] = link
+	}
+
+	// Update the files with download links
+	ad.accounts.SetDownloadLinks(links)
+
 	// Check for errors
 	for err := range errCh {
 		if err != nil {
-			return err // Return the first error encountered
+			return err
 		}
 	}
@@ -363,21 +374,18 @@ func (ad *AllDebrid) GetDownloadLink(t *types.Torrent, file *types.File) (*types
 	if link == "" {
 		return nil, fmt.Errorf("download link is empty")
 	}
+	now := time.Now()
 	return &types.DownloadLink{
 		Link: file.Link,
 		DownloadLink: link,
 		Id: data.Data.Id,
 		Size: file.Size,
 		Filename: file.Name,
-		Generated: time.Now(),
-		AccountId: "0",
+		Generated: now,
+		ExpiresAt: now.Add(ad.autoExpiresLinksAfter),
 	}, nil
 }

-func (ad *AllDebrid) GetCheckCached() bool {
-	return ad.checkCached
-}
-
 func (ad *AllDebrid) GetTorrents() ([]*types.Torrent, error) {
 	url := fmt.Sprintf("%s/magnet/status?status=ready", ad.Host)
 	req, _ := http.NewRequest(http.MethodGet, url, nil)
@@ -389,7 +397,7 @@ func (ad *AllDebrid) GetTorrents() ([]*types.Torrent, error) {
 	var res TorrentsListResponse
 	err = json.Unmarshal(resp, &res)
 	if err != nil {
-		ad.logger.Info().Msgf("Error unmarshalling torrent info: %s", err)
+		ad.logger.Error().Err(err).Msgf("Error unmarshalling torrent info")
 		return torrents, err
 	}
 	for _, magnet := range res.Data.Magnets {
@@ -402,7 +410,7 @@ func (ad *AllDebrid) GetTorrents() ([]*types.Torrent, error) {
 			OriginalFilename: magnet.Filename,
 			Files: make(map[string]types.File),
 			InfoHash: magnet.Hash,
-			Debrid: ad.Name,
+			Debrid: ad.name,
 			MountPath: ad.MountPath,
 			Added: time.Unix(magnet.CompletionDate, 0).Format(time.RFC3339),
 		})
@@ -411,7 +419,7 @@ func (ad *AllDebrid) GetTorrents() ([]*types.Torrent, error) {
 	return torrents, nil
 }

-func (ad *AllDebrid) GetDownloads() (map[string]types.DownloadLink, error) {
+func (ad *AllDebrid) GetDownloadLinks() (map[string]*types.DownloadLink, error) {
 	return nil, nil
 }
@@ -431,12 +439,16 @@ func (ad *AllDebrid) GetMountPath() string {
 	return ad.MountPath
 }

-func (ad *AllDebrid) DisableAccount(accountId string) {
-}
-
-func (ad *AllDebrid) ResetActiveDownloadKeys() {
-}
-
 func (ad *AllDebrid) DeleteDownloadLink(linkId string) error {
 	return nil
 }
+
+func (ad *AllDebrid) GetAvailableSlots() (int, error) {
+	// This function is a placeholder for AllDebrid
+	// TODO: Implement the logic to check available slots for AllDebrid
+	return 0, fmt.Errorf("GetAvailableSlots not implemented for AllDebrid")
+}
+
+func (ad *AllDebrid) Accounts() *types.Accounts {
+	return ad.accounts
+}


@@ -1,5 +1,10 @@
 package alldebrid

+import (
+	"encoding/json"
+	"fmt"
+)
+
 type errorResponse struct {
 	Code string `json:"code"`
 	Message string `json:"message"`
@@ -32,6 +37,8 @@ type magnetInfo struct {
 	Files []MagnetFile `json:"files"`
 }

+type Magnets []magnetInfo
+
 type TorrentInfoResponse struct {
 	Status string `json:"status"`
 	Data struct {
@@ -43,7 +50,7 @@ type TorrentInfoResponse struct {
 type TorrentsListResponse struct {
 	Status string `json:"status"`
 	Data struct {
-		Magnets []magnetInfo `json:"magnets"`
+		Magnets Magnets `json:"magnets"`
 	} `json:"data"`
 	Error *errorResponse `json:"error"`
 }
@@ -81,3 +88,27 @@ type DownloadLink struct {
 	} `json:"data"`
 	Error *errorResponse `json:"error"`
 }
// UnmarshalJSON implements custom unmarshaling for Magnets type
// It can handle both an array of magnetInfo objects or a map with string keys.
// If the input is an array, it will be unmarshaled directly into the Magnets slice.
// If the input is a map, it will extract the values and append them to the Magnets slice.
// If the input is neither, it will return an error.
func (m *Magnets) UnmarshalJSON(data []byte) error {
// Try to unmarshal as array
var arr []magnetInfo
if err := json.Unmarshal(data, &arr); err == nil {
*m = arr
return nil
}
// Try to unmarshal as map
var obj map[string]magnetInfo
if err := json.Unmarshal(data, &obj); err == nil {
for _, v := range obj {
*m = append(*m, v)
}
return nil
}
return fmt.Errorf("magnets: unsupported JSON format")
}


@@ -10,7 +10,6 @@ import (
 	"github.com/sirrobot01/decypharr/internal/request"
 	"github.com/sirrobot01/decypharr/internal/utils"
 	"github.com/sirrobot01/decypharr/pkg/debrid/types"
-	"strconv"
 	"time"

 	"net/http"
@@ -18,24 +17,64 @@ import (
 )

 type DebridLink struct {
-	Name string
+	name string
 	Host string `json:"host"`
 	APIKey string
-	accounts map[string]types.Account
+	accounts *types.Accounts
 	DownloadUncached bool
 	client *request.Client
+	autoExpiresLinksAfter time.Duration
 	MountPath string
 	logger zerolog.Logger
 	checkCached bool
 	addSamples bool
 }

-func (dl *DebridLink) GetName() string {
-	return dl.Name
+func New(dc config.Debrid) (*DebridLink, error) {
+	rl := request.ParseRateLimit(dc.RateLimit)
+	headers := map[string]string{
+		"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
+		"Content-Type": "application/json",
+	}
+	_log := logger.New(dc.Name)
+	client := request.New(
+		request.WithHeaders(headers),
+		request.WithLogger(_log),
+		request.WithRateLimiter(rl),
+		request.WithProxy(dc.Proxy),
+	)
+	autoExpiresLinksAfter, err := time.ParseDuration(dc.AutoExpireLinksAfter)
+	if autoExpiresLinksAfter == 0 || err != nil {
+		autoExpiresLinksAfter = 48 * time.Hour
+	}
+	return &DebridLink{
+		name: "debridlink",
+		Host: "https://debrid-link.com/api/v2",
+		APIKey: dc.APIKey,
+		accounts: types.NewAccounts(dc),
+		DownloadUncached: dc.DownloadUncached,
+		autoExpiresLinksAfter: autoExpiresLinksAfter,
+		client: client,
+		MountPath: dc.Folder,
+		logger: logger.New(dc.Name),
+		checkCached: dc.CheckCached,
+		addSamples: dc.AddSamples,
+	}, nil
 }

-func (dl *DebridLink) GetLogger() zerolog.Logger {
+func (dl *DebridLink) GetProfile() (*types.Profile, error) {
+	return nil, nil
+}
+
+func (dl *DebridLink) Name() string {
+	return dl.name
+}
+
+func (dl *DebridLink) Logger() zerolog.Logger {
 	return dl.logger
 }
@@ -68,13 +107,13 @@ func (dl *DebridLink) IsAvailable(hashes []string) map[string]bool {
 	req, _ := http.NewRequest(http.MethodGet, url, nil)
 	resp, err := dl.client.MakeRequest(req)
 	if err != nil {
-		dl.logger.Info().Msgf("Error checking availability: %v", err)
+		dl.logger.Error().Err(err).Msgf("Error checking availability")
 		return result
 	}
 	var data AvailableResponse
 	err = json.Unmarshal(resp, &data)
 	if err != nil {
-		dl.logger.Info().Msgf("Error marshalling availability: %v", err)
+		dl.logger.Error().Err(err).Msgf("Error marshalling availability")
 		return result
 	}
 	if data.Value == nil {
@@ -121,7 +160,7 @@ func (dl *DebridLink) GetTorrent(torrentId string) (*types.Torrent, error) {
 		Filename: name,
 		OriginalFilename: name,
 		MountPath: dl.MountPath,
-		Debrid: dl.Name,
+		Debrid: dl.name,
 		Added: time.Unix(t.Created, 0).Format(time.RFC3339),
 	}
 	cfg := config.Get()
@@ -135,14 +174,7 @@ func (dl *DebridLink) GetTorrent(torrentId string) (*types.Torrent, error) {
 			Name: f.Name,
 			Size: f.Size,
 			Path: f.Name,
-			DownloadLink: &types.DownloadLink{
-				Filename: f.Name,
-				Link: f.DownloadURL,
-				DownloadLink: f.DownloadURL,
-				Generated: time.Now(),
-				AccountId: "0",
-			},
 			Link: f.DownloadURL,
 		}
 		torrent.Files[file.Name] = file
 	}
@@ -191,6 +223,8 @@ func (dl *DebridLink) UpdateTorrent(t *types.Torrent) error {
 	t.OriginalFilename = name
 	t.Added = time.Unix(data.Created, 0).Format(time.RFC3339)
 	cfg := config.Get()
+	links := make(map[string]*types.DownloadLink)
+	now := time.Now()
 	for _, f := range data.Files {
 		if !cfg.IsSizeAllowed(f.Size) {
 			continue
@@ -201,17 +235,21 @@ func (dl *DebridLink) UpdateTorrent(t *types.Torrent) error {
 			Name: f.Name,
 			Size: f.Size,
 			Path: f.Name,
-			DownloadLink: &types.DownloadLink{
-				Filename: f.Name,
-				Link: f.DownloadURL,
-				DownloadLink: f.DownloadURL,
-				Generated: time.Now(),
-				AccountId: "0",
-			},
 			Link: f.DownloadURL,
 		}
+		link := &types.DownloadLink{
+			Filename: f.Name,
+			Link: f.DownloadURL,
+			DownloadLink: f.DownloadURL,
+			Generated: now,
+			ExpiresAt: now.Add(dl.autoExpiresLinksAfter),
+		}
+		links[file.Link] = link
+		file.DownloadLink = link
 		t.Files[f.Name] = file
 	}
+	dl.accounts.SetDownloadLinks(links)
 	return nil
 }
@@ -246,8 +284,11 @@ func (dl *DebridLink) SubmitMagnet(t *types.Torrent) (*types.Torrent, error) {
 	t.Filename = name
 	t.OriginalFilename = name
 	t.MountPath = dl.MountPath
-	t.Debrid = dl.Name
+	t.Debrid = dl.name
 	t.Added = time.Unix(data.Created, 0).Format(time.RFC3339)
+	links := make(map[string]*types.DownloadLink)
+	now := time.Now()
 	for _, f := range data.Files {
 		file := types.File{
 			TorrentId: t.Id,
@@ -256,22 +297,26 @@ func (dl *DebridLink) SubmitMagnet(t *types.Torrent) (*types.Torrent, error) {
 			Size: f.Size,
 			Path: f.Name,
 			Link: f.DownloadURL,
-			DownloadLink: &types.DownloadLink{
-				Filename: f.Name,
-				Link: f.DownloadURL,
-				DownloadLink: f.DownloadURL,
-				Generated: time.Now(),
-				AccountId: "0",
-			},
-			Generated: time.Now(),
+			Generated: now,
 		}
+		link := &types.DownloadLink{
+			Filename: f.Name,
+			Link: f.DownloadURL,
+			DownloadLink: f.DownloadURL,
+			Generated: now,
+			ExpiresAt: now.Add(dl.autoExpiresLinksAfter),
+		}
+		links[file.Link] = link
+		file.DownloadLink = link
 		t.Files[f.Name] = file
 	}
+	dl.accounts.SetDownloadLinks(links)
 	return t, nil
 }

-func (dl *DebridLink) CheckStatus(torrent *types.Torrent, isSymlink bool) (*types.Torrent, error) {
+func (dl *DebridLink) CheckStatus(torrent *types.Torrent) (*types.Torrent, error) {
 	for {
 		err := dl.UpdateTorrent(torrent)
 		if err != nil || torrent == nil {
@@ -280,11 +325,7 @@ func (dl *DebridLink) CheckStatus(torrent *types.Torrent, isSymlink bool) (*type
 		status := torrent.Status
 		if status == "downloaded" {
 			dl.logger.Info().Msgf("Torrent: %s downloaded", torrent.Name)
-			err = dl.GenerateDownloadLinks(torrent)
-			if err != nil {
-				return torrent, err
-			}
-			break
+			return torrent, nil
 		} else if utils.Contains(dl.GetDownloadingStatus(), status) {
 			if !torrent.DownloadUncached {
 				return torrent, fmt.Errorf("torrent: %s not cached", torrent.Name)
@@ -297,7 +338,6 @@ func (dl *DebridLink) CheckStatus(torrent *types.Torrent, isSymlink bool) (*type
 		}
 	}
-	return torrent, nil
 }

 func (dl *DebridLink) DeleteTorrent(torrentId string) error {
@@ -310,69 +350,27 @@ func (dl *DebridLink) DeleteTorrent(torrentId string) error {
 	return nil
 }

-func (dl *DebridLink) GenerateDownloadLinks(t *types.Torrent) error {
+func (dl *DebridLink) GetFileDownloadLinks(t *types.Torrent) error {
 	// Download links are already generated
 	return nil
 }

-func (dl *DebridLink) GetDownloads() (map[string]types.DownloadLink, error) {
+func (dl *DebridLink) GetDownloadLinks() (map[string]*types.DownloadLink, error) {
 	return nil, nil
 }

 func (dl *DebridLink) GetDownloadLink(t *types.Torrent, file *types.File) (*types.DownloadLink, error) {
-	return file.DownloadLink, nil
+	return dl.accounts.GetDownloadLink(file.Link)
 }

 func (dl *DebridLink) GetDownloadingStatus() []string {
 	return []string{"downloading"}
 }

-func (dl *DebridLink) GetCheckCached() bool {
-	return dl.checkCached
-}
-
 func (dl *DebridLink) GetDownloadUncached() bool {
 	return dl.DownloadUncached
 }

-func New(dc config.Debrid) *DebridLink {
-	rl := request.ParseRateLimit(dc.RateLimit)
-	headers := map[string]string{
-		"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
-		"Content-Type": "application/json",
-	}
-	_log := logger.New(dc.Name)
-	client := request.New(
-		request.WithHeaders(headers),
-		request.WithLogger(_log),
-		request.WithRateLimiter(rl),
-		request.WithProxy(dc.Proxy),
-	)
-	accounts := make(map[string]types.Account)
-	for idx, key := range dc.DownloadAPIKeys {
-		id := strconv.Itoa(idx)
-		accounts[id] = types.Account{
-			Name: key,
-			ID: id,
-			Token: key,
-		}
-	}
-	return &DebridLink{
-		Name: "debridlink",
-		Host: "https://debrid-link.com/api/v2",
-		APIKey: dc.APIKey,
-		accounts: accounts,
-		DownloadUncached: dc.DownloadUncached,
-		client: client,
-		MountPath: dc.Folder,
-		logger: logger.New(dc.Name),
-		checkCached: dc.CheckCached,
-		addSamples: dc.AddSamples,
-	}
-}
-
 func (dl *DebridLink) GetTorrents() ([]*types.Torrent, error) {
 	page := 0
 	perPage := 100
@@ -402,11 +400,12 @@ func (dl *DebridLink) getTorrents(page, perPage int) ([]*types.Torrent, error) {
var res torrentInfo var res torrentInfo
err = json.Unmarshal(resp, &res) err = json.Unmarshal(resp, &res)
if err != nil { if err != nil {
dl.logger.Info().Msgf("Error unmarshalling torrent info: %s", err) dl.logger.Error().Err(err).Msgf("Error unmarshalling torrent info")
return torrents, err return torrents, err
} }
data := *res.Value data := *res.Value
links := make(map[string]*types.DownloadLink)
if len(data) == 0 { if len(data) == 0 {
return torrents, nil return torrents, nil
@@ -424,11 +423,12 @@ func (dl *DebridLink) getTorrents(page, perPage int) ([]*types.Torrent, error) {
OriginalFilename: t.Name, OriginalFilename: t.Name,
InfoHash: t.HashString, InfoHash: t.HashString,
Files: make(map[string]types.File), Files: make(map[string]types.File),
Debrid: dl.Name, Debrid: dl.name,
MountPath: dl.MountPath, MountPath: dl.MountPath,
Added: time.Unix(t.Created, 0).Format(time.RFC3339), Added: time.Unix(t.Created, 0).Format(time.RFC3339),
} }
cfg := config.Get() cfg := config.Get()
now := time.Now()
for _, f := range t.Files { for _, f := range t.Files {
if !cfg.IsSizeAllowed(f.Size) { if !cfg.IsSizeAllowed(f.Size) {
continue continue
@@ -439,19 +439,23 @@ func (dl *DebridLink) getTorrents(page, perPage int) ([]*types.Torrent, error) {
Name: f.Name, Name: f.Name,
Size: f.Size, Size: f.Size,
Path: f.Name, Path: f.Name,
DownloadLink: &types.DownloadLink{ Link: f.DownloadURL,
Filename: f.Name,
Link: f.DownloadURL,
DownloadLink: f.DownloadURL,
Generated: time.Now(),
AccountId: "0",
},
Link: f.DownloadURL,
} }
link := &types.DownloadLink{
Filename: f.Name,
Link: f.DownloadURL,
DownloadLink: f.DownloadURL,
Generated: now,
ExpiresAt: now.Add(dl.autoExpiresLinksAfter),
}
links[file.Link] = link
file.DownloadLink = link
torrent.Files[f.Name] = file torrent.Files[f.Name] = file
} }
torrents = append(torrents, torrent) torrents = append(torrents, torrent)
} }
dl.accounts.SetDownloadLinks(links)
return torrents, nil return torrents, nil
} }
@@ -463,12 +467,15 @@ func (dl *DebridLink) GetMountPath() string {
return dl.MountPath return dl.MountPath
} }
func (dl *DebridLink) DisableAccount(accountId string) {
}
func (dl *DebridLink) ResetActiveDownloadKeys() {
}
func (dl *DebridLink) DeleteDownloadLink(linkId string) error { func (dl *DebridLink) DeleteDownloadLink(linkId string) error {
return nil return nil
} }
func (dl *DebridLink) GetAvailableSlots() (int, error) {
//TODO: Implement the logic to check available slots for DebridLink
return 0, fmt.Errorf("GetAvailableSlots not implemented for DebridLink")
}
func (dl *DebridLink) Accounts() *types.Accounts {
return dl.accounts
}


@@ -0,0 +1 @@
+package realdebrid


@@ -2,129 +2,237 @@ package realdebrid
 import (
 	"bytes"
+	"cmp"
 	"encoding/json"
 	"errors"
 	"fmt"
-	"github.com/rs/zerolog"
-	"github.com/sirrobot01/decypharr/internal/config"
-	"github.com/sirrobot01/decypharr/internal/logger"
-	"github.com/sirrobot01/decypharr/internal/request"
-	"github.com/sirrobot01/decypharr/internal/utils"
 	"github.com/sirrobot01/decypharr/pkg/debrid/types"
 	"io"
 	"net/http"
 	gourl "net/url"
 	"path/filepath"
-	"sort"
 	"strconv"
 	"strings"
 	"sync"
 	"time"
+	"github.com/rs/zerolog"
+	"github.com/sirrobot01/decypharr/internal/config"
+	"github.com/sirrobot01/decypharr/internal/logger"
+	"github.com/sirrobot01/decypharr/internal/request"
+	"github.com/sirrobot01/decypharr/internal/utils"
+	"github.com/sirrobot01/decypharr/pkg/rar"
 )
 type RealDebrid struct {
-	Name               string
+	name                  string
 	Host                  string `json:"host"`
 	APIKey                string
-	currentDownloadKey string
-	accounts           map[string]types.Account
-	accountsMutex      sync.RWMutex
+	accounts              *types.Accounts
 	DownloadUncached      bool
 	client                *request.Client
 	downloadClient        *request.Client
+	repairClient          *request.Client
+	autoExpiresLinksAfter time.Duration
-	MountPath          string
-	logger             zerolog.Logger
+	UnpackRar             bool
+	rarSemaphore          chan struct{}
-	checkCached        bool
-	addSamples         bool
+	Profile               *types.Profile
+	minimumFreeSlot       int // Minimum number of active pots to maintain (used for cached stuffs, etc.)
+	MountPath             string
+	logger                zerolog.Logger
+	checkCached           bool
+	addSamples            bool
 }
-func New(dc config.Debrid) *RealDebrid {
+func New(dc config.Debrid) (*RealDebrid, error) {
 	rl := request.ParseRateLimit(dc.RateLimit)
+	repairRl := request.ParseRateLimit(cmp.Or(dc.RepairRateLimit, dc.RateLimit))
+	downloadRl := request.ParseRateLimit(cmp.Or(dc.DownloadRateLimit, dc.RateLimit))
 	headers := map[string]string{
 		"Authorization": fmt.Sprintf("Bearer %s", dc.APIKey),
 	}
 	_log := logger.New(dc.Name)
-	accounts := make(map[string]types.Account)
-	currentDownloadKey := dc.DownloadAPIKeys[0]
-	for idx, key := range dc.DownloadAPIKeys {
-		id := strconv.Itoa(idx)
-		accounts[id] = types.Account{
-			Name:  key,
-			ID:    id,
-			Token: key,
-		}
-	}
-	downloadHeaders := map[string]string{
-		"Authorization": fmt.Sprintf("Bearer %s", currentDownloadKey),
-	}
-	return &RealDebrid{
-		Name:             "realdebrid",
-		Host:             "https://api.real-debrid.com/rest/1.0",
-		APIKey:           dc.APIKey,
-		accounts:         accounts,
-		DownloadUncached: dc.DownloadUncached,
+	autoExpiresLinksAfter, err := time.ParseDuration(dc.AutoExpireLinksAfter)
+	if autoExpiresLinksAfter == 0 || err != nil {
+		autoExpiresLinksAfter = 48 * time.Hour
+	}
+	r := &RealDebrid{
+		name:                  "realdebrid",
+		Host:                  "https://api.real-debrid.com/rest/1.0",
+		APIKey:                dc.APIKey,
+		accounts:              types.NewAccounts(dc),
+		DownloadUncached:      dc.DownloadUncached,
+		autoExpiresLinksAfter: autoExpiresLinksAfter,
+		UnpackRar:             dc.UnpackRar,
 		client: request.New(
 			request.WithHeaders(headers),
 			request.WithRateLimiter(rl),
 			request.WithLogger(_log),
-			request.WithMaxRetries(5),
+			request.WithMaxRetries(10),
 			request.WithRetryableStatus(429, 502),
 			request.WithProxy(dc.Proxy),
 		),
 		downloadClient: request.New(
-			request.WithHeaders(downloadHeaders),
+			request.WithRateLimiter(downloadRl),
 			request.WithLogger(_log),
 			request.WithMaxRetries(10),
 			request.WithRetryableStatus(429, 447, 502),
 			request.WithProxy(dc.Proxy),
 		),
-		currentDownloadKey: currentDownloadKey,
-		MountPath:          dc.Folder,
-		logger:             logger.New(dc.Name),
-		checkCached:        dc.CheckCached,
-		addSamples:         dc.AddSamples,
+		repairClient: request.New(
+			request.WithRateLimiter(repairRl),
+			request.WithHeaders(headers),
+			request.WithLogger(_log),
+			request.WithMaxRetries(4),
+			request.WithRetryableStatus(429, 502),
+			request.WithProxy(dc.Proxy),
+		),
+		MountPath:       dc.Folder,
+		logger:          logger.New(dc.Name),
+		rarSemaphore:    make(chan struct{}, 2),
+		checkCached:     dc.CheckCached,
+		addSamples:      dc.AddSamples,
+		minimumFreeSlot: dc.MinimumFreeSlot,
+	}
+	if _, err := r.GetProfile(); err != nil {
+		return nil, err
+	} else {
+		return r, nil
 	}
 }
-func (r *RealDebrid) GetName() string {
-	return r.Name
+func (r *RealDebrid) Name() string {
+	return r.name
 }
-func (r *RealDebrid) GetLogger() zerolog.Logger {
+func (r *RealDebrid) Logger() zerolog.Logger {
 	return r.logger
 }
-func getSelectedFiles(t *types.Torrent, data torrentInfo) map[string]types.File {
+func (r *RealDebrid) getSelectedFiles(t *types.Torrent, data torrentInfo) (map[string]types.File, error) {
+	files := make(map[string]types.File)
 	selectedFiles := make([]types.File, 0)
 	for _, f := range data.Files {
 		if f.Selected == 1 {
-			name := filepath.Base(f.Path)
-			file := types.File{
-				TorrentId: t.Id,
-				Name:      name,
-				Path:      name,
-				Size:      f.Bytes,
-				Id:        strconv.Itoa(f.ID),
-			}
-			selectedFiles = append(selectedFiles, file)
+			selectedFiles = append(selectedFiles, types.File{
+				TorrentId: t.Id,
+				Name:      filepath.Base(f.Path),
+				Path:      filepath.Base(f.Path),
+				Size:      f.Bytes,
+				Id:        strconv.Itoa(f.ID),
+			})
 		}
 	}
+	if len(selectedFiles) == 0 {
+		return files, nil
+	}
+	// Handle RARed torrents (single link, multiple files)
+	if len(data.Links) == 1 && len(selectedFiles) > 1 {
+		return r.handleRarArchive(t, data, selectedFiles)
+	}
+	// Standard case - map files to links
+	if len(selectedFiles) > len(data.Links) {
+		r.logger.Warn().Msgf("More files than links available: %d files, %d links for %s", len(selectedFiles), len(data.Links), t.Name)
+	}
+	for i, f := range selectedFiles {
+		if i < len(data.Links) {
+			f.Link = data.Links[i]
+			files[f.Name] = f
+		} else {
+			r.logger.Warn().Str("file", f.Name).Msg("No link available for file")
+		}
+	}
+	return files, nil
+}
+// handleRarArchive processes RAR archives with multiple files
+func (r *RealDebrid) handleRarArchive(t *types.Torrent, data torrentInfo, selectedFiles []types.File) (map[string]types.File, error) {
+	// This will block if 2 RAR operations are already in progress
+	r.rarSemaphore <- struct{}{}
+	defer func() {
+		<-r.rarSemaphore
+	}()
 	files := make(map[string]types.File)
-	for index, f := range selectedFiles {
-		if index >= len(data.Links) {
-			break
+	if !r.UnpackRar {
+		r.logger.Debug().Msgf("RAR file detected, but unpacking is disabled: %s", t.Name)
+		// Create a single file representing the RAR archive
+		file := types.File{
+			TorrentId: t.Id,
+			Id:        "0",
+			Name:      t.Name + ".rar",
+			Size:      0,
+			IsRar:     true,
+			ByteRange: nil,
+			Path:      t.Name + ".rar",
+			Link:      data.Links[0],
+			Generated: time.Now(),
 		}
-		f.Link = data.Links[index]
-		files[f.Name] = f
+		files[file.Name] = file
+		return files, nil
 	}
-	return files
+	r.logger.Info().Msgf("RAR file detected, unpacking: %s", t.Name)
+	linkFile := &types.File{TorrentId: t.Id, Link: data.Links[0]}
+	downloadLinkObj, err := r.GetDownloadLink(t, linkFile)
+	if err != nil {
+		return nil, fmt.Errorf("failed to get download link for RAR file: %w", err)
+	}
+	dlLink := downloadLinkObj.DownloadLink
+	reader, err := rar.NewReader(dlLink)
+	if err != nil {
+		return nil, fmt.Errorf("failed to create RAR reader: %w", err)
+	}
+	rarFiles, err := reader.GetFiles()
+	if err != nil {
+		return nil, fmt.Errorf("failed to read RAR files: %w", err)
+	}
+	// Create lookup map for faster matching
+	fileMap := make(map[string]*types.File)
+	for i := range selectedFiles {
+		// RD converts special chars to '_' for RAR file paths
+		// @TODO: there might be more special chars to replace
+		safeName := strings.NewReplacer("|", "_", "\"", "_", "\\", "_", "?", "_", "*", "_", ":", "_", "<", "_", ">", "_").Replace(selectedFiles[i].Name)
+		fileMap[safeName] = &selectedFiles[i]
+	}
+	now := time.Now()
+	for _, rarFile := range rarFiles {
+		if file, exists := fileMap[rarFile.Name()]; exists {
+			file.IsRar = true
+			file.ByteRange = rarFile.ByteRange()
+			file.Link = data.Links[0]
+			file.Generated = now
+			files[file.Name] = *file
+		} else if !rarFile.IsDirectory {
+			r.logger.Warn().Msgf("RAR file %s not found in torrent files", rarFile.Name())
+		}
+	}
+	return files, nil
 }
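The `rarSemaphore` above is the standard Go idiom of using a buffered channel as a counting semaphore: sends acquire a slot and block once the buffer is full, receives release it. A minimal standalone sketch of that pattern (names and the simulated work are illustrative, not from the repository):

```go
package main

import (
	"fmt"
	"sync"
)

// process fans out one goroutine per item, but a buffered channel caps how
// many of the expensive operations (standing in for RAR unpacks) run at once.
func process(items []string, limit int) []string {
	sem := make(chan struct{}, limit) // at most `limit` concurrent workers
	var mu sync.Mutex
	var wg sync.WaitGroup
	out := make([]string, 0, len(items))
	for _, it := range items {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks when full)
			defer func() { <-sem }() // release the slot on exit
			mu.Lock()
			out = append(out, name+":done")
			mu.Unlock()
		}(it)
	}
	wg.Wait()
	return out
}

func main() {
	res := process([]string{"a", "b", "c"}, 2)
	fmt.Println(len(res)) // 3 results, produced at most 2 at a time
}
```

Sending in the goroutine (rather than before spawning it) keeps the submitting loop non-blocking; the cost is that all goroutines are created up front and wait their turn.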
 // getTorrentFiles returns a list of torrent files from the torrent info
@@ -191,13 +299,13 @@ func (r *RealDebrid) IsAvailable(hashes []string) map[string]bool {
 	req, _ := http.NewRequest(http.MethodGet, url, nil)
 	resp, err := r.client.MakeRequest(req)
 	if err != nil {
-		r.logger.Info().Msgf("Error checking availability: %v", err)
+		r.logger.Error().Err(err).Msgf("Error checking availability")
 		return result
 	}
 	var data AvailabilityResponse
 	err = json.Unmarshal(resp, &data)
 	if err != nil {
-		r.logger.Info().Msgf("Error marshalling availability: %v", err)
+		r.logger.Error().Err(err).Msgf("Error marshalling availability")
 		return result
 	}
 	for _, h := range hashes[i:end] {
@@ -226,15 +334,30 @@ func (r *RealDebrid) addTorrent(t *types.Torrent) (*types.Torrent, error) {
 		return nil, err
 	}
 	req.Header.Add("Content-Type", "application/x-bittorrent")
-	resp, err := r.client.MakeRequest(req)
+	resp, err := r.client.Do(req)
 	if err != nil {
 		return nil, err
 	}
-	if err = json.Unmarshal(resp, &data); err != nil {
+	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusCreated {
+		// Handle multiple_downloads
+		if resp.StatusCode == 509 {
+			return nil, utils.TooManyActiveDownloadsError
+		}
+		bodyBytes, _ := io.ReadAll(resp.Body)
+		return nil, fmt.Errorf("realdebrid API error: Status: %d || Body: %s", resp.StatusCode, string(bodyBytes))
+	}
+	defer resp.Body.Close()
+	bodyBytes, err := io.ReadAll(resp.Body)
+	if err != nil {
+		return nil, fmt.Errorf("reading response body: %w", err)
+	}
+	if err = json.Unmarshal(bodyBytes, &data); err != nil {
 		return nil, err
 	}
 	t.Id = data.Id
-	t.Debrid = r.Name
+	t.Debrid = r.name
 	t.MountPath = r.MountPath
 	return t, nil
 }
@@ -246,15 +369,30 @@ func (r *RealDebrid) addMagnet(t *types.Torrent) (*types.Torrent, error) {
 	}
 	var data AddMagnetSchema
 	req, _ := http.NewRequest(http.MethodPost, url, strings.NewReader(payload.Encode()))
-	resp, err := r.client.MakeRequest(req)
+	resp, err := r.client.Do(req)
 	if err != nil {
 		return nil, err
 	}
-	if err = json.Unmarshal(resp, &data); err != nil {
+	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusCreated {
+		// Handle multiple_downloads
+		if resp.StatusCode == 509 {
+			return nil, utils.TooManyActiveDownloadsError
+		}
+		bodyBytes, _ := io.ReadAll(resp.Body)
+		return nil, fmt.Errorf("realdebrid API error: Status: %d || Body: %s", resp.StatusCode, string(bodyBytes))
+	}
+	defer resp.Body.Close()
+	bodyBytes, err := io.ReadAll(resp.Body)
+	if err != nil {
+		return nil, fmt.Errorf("reading response body: %w", err)
+	}
+	if err = json.Unmarshal(bodyBytes, &data); err != nil {
 		return nil, err
 	}
 	t.Id = data.Id
-	t.Debrid = r.Name
+	t.Debrid = r.name
 	t.MountPath = r.MountPath
 	return t, nil
 }
@@ -273,7 +411,7 @@ func (r *RealDebrid) GetTorrent(torrentId string) (*types.Torrent, error) {
 	}
 	if resp.StatusCode != http.StatusOK {
 		if resp.StatusCode == http.StatusNotFound {
-			return nil, request.TorrentNotFoundError
+			return nil, utils.TorrentNotFoundError
 		}
 		return nil, fmt.Errorf("realdebrid API error: Status: %d || Body: %s", resp.StatusCode, string(bodyBytes))
 	}
@@ -295,7 +433,7 @@ func (r *RealDebrid) GetTorrent(torrentId string) (*types.Torrent, error) {
 		Filename:         data.Filename,
 		OriginalFilename: data.OriginalFilename,
 		Links:            data.Links,
-		Debrid:           r.Name,
+		Debrid:           r.name,
 		MountPath:        r.MountPath,
 	}
 	t.Files = r.getTorrentFiles(t, data) // Get selected files
@@ -316,7 +454,7 @@ func (r *RealDebrid) UpdateTorrent(t *types.Torrent) error {
 	}
 	if resp.StatusCode != http.StatusOK {
 		if resp.StatusCode == http.StatusNotFound {
-			return request.TorrentNotFoundError
+			return utils.TorrentNotFoundError
 		}
 		return fmt.Errorf("realdebrid API error: Status: %d || Body: %s", resp.StatusCode, string(bodyBytes))
 	}
@@ -336,13 +474,14 @@ func (r *RealDebrid) UpdateTorrent(t *types.Torrent) error {
 	t.OriginalFilename = data.OriginalFilename
 	t.Links = data.Links
 	t.MountPath = r.MountPath
-	t.Debrid = r.Name
+	t.Debrid = r.name
 	t.Added = data.Added
-	t.Files = getSelectedFiles(t, data) // Get selected files
+	t.Files, _ = r.getSelectedFiles(t, data) // Get selected files
 	return nil
 }
-func (r *RealDebrid) CheckStatus(t *types.Torrent, isSymlink bool) (*types.Torrent, error) {
+func (r *RealDebrid) CheckStatus(t *types.Torrent) (*types.Torrent, error) {
 	url := fmt.Sprintf("%s/torrents/info/%s", r.Host, t.Id)
 	req, _ := http.NewRequest(http.MethodGet, url, nil)
 	for {
@@ -366,7 +505,7 @@ func (r *RealDebrid) CheckStatus(t *types.Torrent, isSymlink bool) (*types.Torre
 		t.Seeders = data.Seeders
 		t.Links = data.Links
 		t.Status = status
-		t.Debrid = r.Name
+		t.Debrid = r.name
 		t.MountPath = r.MountPath
 		if status == "waiting_files_selection" {
 			t.Files = r.getTorrentFiles(t, data)
@@ -387,18 +526,19 @@ func (r *RealDebrid) CheckStatus(t *types.Torrent, isSymlink bool) (*types.Torre
 				return t, err
 			}
 			if res.StatusCode != http.StatusNoContent {
+				if res.StatusCode == 509 {
+					return nil, utils.TooManyActiveDownloadsError
+				}
 				return t, fmt.Errorf("realdebrid API error: Status: %d", res.StatusCode)
 			}
 		} else if status == "downloaded" {
-			t.Files = getSelectedFiles(t, data) // Get selected files
-			r.logger.Info().Msgf("Torrent: %s downloaded to RD", t.Name)
-			if !isSymlink {
-				err = r.GenerateDownloadLinks(t)
-				if err != nil {
-					return t, err
-				}
-			}
-			break
+			t.Files, err = r.getSelectedFiles(t, data) // Get selected files
+			if err != nil {
+				return t, err
+			}
+			r.logger.Info().Msgf("Torrent: %s downloaded to RD", t.Name)
+			return t, nil
 		} else if utils.Contains(r.GetDownloadingStatus(), status) {
 			if !t.DownloadUncached {
 				return t, fmt.Errorf("torrent: %s not cached", t.Name)
@@ -409,7 +549,6 @@ func (r *RealDebrid) CheckStatus(t *types.Torrent, isSymlink bool) (*types.Torre
 		}
 	}
-	return t, nil
 }
 func (r *RealDebrid) DeleteTorrent(torrentId string) error {
@@ -422,46 +561,56 @@ func (r *RealDebrid) DeleteTorrent(torrentId string) error {
 	return nil
 }
-func (r *RealDebrid) GenerateDownloadLinks(t *types.Torrent) error {
-	filesCh := make(chan types.File, len(t.Files))
-	errCh := make(chan error, len(t.Files))
+func (r *RealDebrid) GetFileDownloadLinks(t *types.Torrent) error {
 	var wg sync.WaitGroup
-	wg.Add(len(t.Files))
-	for _, f := range t.Files {
+	var mu sync.Mutex
+	var firstErr error
+	files := make(map[string]types.File)
+	links := make(map[string]*types.DownloadLink)
+	_files := t.GetFiles()
+	wg.Add(len(_files))
+	for _, f := range _files {
 		go func(file types.File) {
 			defer wg.Done()
 			link, err := r.GetDownloadLink(t, &file)
 			if err != nil {
-				errCh <- err
+				mu.Lock()
+				if firstErr == nil {
+					firstErr = err
+				}
+				mu.Unlock()
+				return
+			}
+			if link == nil {
+				mu.Lock()
+				if firstErr == nil {
+					firstErr = fmt.Errorf("realdebrid API error: download link not found for file %s", file.Name)
+				}
+				mu.Unlock()
 				return
 			}
 			file.DownloadLink = link
-			filesCh <- file
+			mu.Lock()
+			files[file.Name] = file
+			links[link.Link] = link
+			mu.Unlock()
 		}(f)
 	}
-	go func() {
-		wg.Wait()
-		close(filesCh)
-		close(errCh)
-	}()
-	// Collect results
-	files := make(map[string]types.File, len(t.Files))
-	for file := range filesCh {
-		files[file.Name] = file
-	}
-	// Check for errors
-	for err := range errCh {
-		if err != nil {
-			return err // Return the first error encountered
-		}
-	}
+	wg.Wait()
+	if firstErr != nil {
+		return firstErr
+	}
+	// Add links to cache
+	r.accounts.SetDownloadLinks(links)
 	t.Files = files
 	return nil
 }
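The rewritten link generation swaps the channel-based collection for a mutex-guarded map plus a "first error wins" variable. That fan-out shape can be sketched in isolation (the function and its arguments here are illustrative stand-ins, not repository code):

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut runs f on every input concurrently, collects results under a mutex,
// and keeps only the first error seen - the same shape as the diff's
// GetFileDownloadLinks rewrite.
func fanOut(inputs []int, f func(int) (int, error)) (map[int]int, error) {
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		firstErr error
	)
	results := make(map[int]int)
	wg.Add(len(inputs))
	for _, in := range inputs {
		go func(n int) {
			defer wg.Done()
			v, err := f(n)
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				if firstErr == nil {
					firstErr = err // remember only the first failure
				}
				return
			}
			results[n] = v
		}(in)
	}
	wg.Wait()
	if firstErr != nil {
		return nil, firstErr
	}
	return results, nil
}

func main() {
	res, err := fanOut([]int{1, 2, 3}, func(n int) (int, error) { return n * n, nil })
	fmt.Println(len(res), err)
}
```

Compared to buffered channels sized to `len(t.Files)`, this avoids the extra collector goroutine and cannot deadlock if a worker exits without sending; the trade-off is that all workers still run to completion even after the first error.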
@@ -472,20 +621,24 @@ func (r *RealDebrid) CheckLink(link string) error {
 		"link": {link},
 	}
 	req, _ := http.NewRequest(http.MethodPost, url, strings.NewReader(payload.Encode()))
-	resp, err := r.client.Do(req)
+	resp, err := r.repairClient.Do(req)
 	if err != nil {
 		return err
 	}
 	if resp.StatusCode == http.StatusNotFound {
-		return request.HosterUnavailableError // File has been removed
+		return utils.HosterUnavailableError // File has been removed
 	}
 	return nil
 }
 func (r *RealDebrid) _getDownloadLink(file *types.File) (*types.DownloadLink, error) {
 	url := fmt.Sprintf("%s/unrestrict/link/", r.Host)
+	_link := file.Link
+	if strings.HasPrefix(file.Link, "https://real-debrid.com/d/") && len(file.Link) > 39 {
+		_link = file.Link[0:39]
+	}
 	payload := gourl.Values{
-		"link": {file.Link},
+		"link": {_link},
 	}
 	req, _ := http.NewRequest(http.MethodPost, url, strings.NewReader(payload.Encode()))
 	resp, err := r.downloadClient.Do(req)
@@ -506,17 +659,17 @@ func (r *RealDebrid) _getDownloadLink(file *types.File) (*types.DownloadLink, er
 	}
 	switch data.ErrorCode {
 	case 19:
-		return nil, request.HosterUnavailableError // File has been removed
+		return nil, utils.HosterUnavailableError // File has been removed
 	case 23:
-		return nil, request.TrafficExceededError
+		return nil, utils.TrafficExceededError
 	case 24:
-		return nil, request.HosterUnavailableError // Link has been nerfed
+		return nil, utils.HosterUnavailableError // Link has been nerfed
 	case 34:
-		return nil, request.TrafficExceededError // traffic exceeded
+		return nil, utils.TrafficExceededError // traffic exceeded
 	case 35:
-		return nil, request.HosterUnavailableError
+		return nil, utils.HosterUnavailableError
 	case 36:
-		return nil, request.TrafficExceededError // traffic exceeded
+		return nil, utils.TrafficExceededError // traffic exceeded
 	default:
 		return nil, fmt.Errorf("realdebrid API error: Status: %d || Code: %d", resp.StatusCode, data.ErrorCode)
 	}
@@ -532,58 +685,54 @@ func (r *RealDebrid) _getDownloadLink(file *types.File) (*types.DownloadLink, er
 	if data.Download == "" {
 		return nil, fmt.Errorf("realdebrid API error: download link not found")
 	}
+	now := time.Now()
 	return &types.DownloadLink{
 		Filename:     data.Filename,
 		Size:         data.Filesize,
 		Link:         data.Link,
 		DownloadLink: data.Download,
-		Generated:    time.Now(),
+		Generated:    now,
+		ExpiresAt:    now.Add(r.autoExpiresLinksAfter),
 	}, nil
 }
-func (r *RealDebrid) GetDownloadLink(t *types.Torrent, file *types.File) (*types.DownloadLink, error) {
-	if r.currentDownloadKey == "" {
-		// If no download key is set, use the first one
-		accounts := r.getActiveAccounts()
-		if len(accounts) < 1 {
-			// No active download keys. It's likely that the key has reached bandwidth limit
-			return nil, fmt.Errorf("no active download keys")
-		}
-		r.currentDownloadKey = accounts[0].Token
-	}
-	r.downloadClient.SetHeader("Authorization", fmt.Sprintf("Bearer %s", r.currentDownloadKey))
-	downloadLink, err := r._getDownloadLink(file)
-	retries := 0
-	if err != nil {
-		if errors.Is(err, request.TrafficExceededError) {
-			// Retries generating
-			retries = 5
-		} else {
-			// If the error is not traffic exceeded, return the error
-			return nil, err
-		}
-	}
-	backOff := 1 * time.Second
-	for retries > 0 {
-		downloadLink, err = r._getDownloadLink(file)
-		if err == nil {
-			return downloadLink, nil
-		}
-		if !errors.Is(err, request.TrafficExceededError) {
-			return nil, err
-		}
-		// Add a delay before retrying
-		time.Sleep(backOff)
-		backOff *= 2 // Exponential backoff
-	}
-	return downloadLink, nil
-}
-func (r *RealDebrid) GetCheckCached() bool {
-	return r.checkCached
-}
+func (r *RealDebrid) GetDownloadLink(t *types.Torrent, file *types.File) (*types.DownloadLink, error) {
+	accounts := r.accounts.All()
+	for _, account := range accounts {
+		r.downloadClient.SetHeader("Authorization", fmt.Sprintf("Bearer %s", account.Token))
+		downloadLink, err := r._getDownloadLink(file)
+		if err == nil {
+			return downloadLink, nil
+		}
+		retries := 0
+		if errors.Is(err, utils.TrafficExceededError) {
+			// Retries generating
+			retries = 5
+		} else {
+			// If the error is not traffic exceeded, return the error
+			return nil, err
+		}
+		backOff := 1 * time.Second
+		for retries > 0 {
+			downloadLink, err = r._getDownloadLink(file)
+			if err == nil {
+				return downloadLink, nil
+			}
+			if !errors.Is(err, utils.TrafficExceededError) {
+				return nil, err
+			}
+			// Add a delay before retrying
+			time.Sleep(backOff)
+			backOff *= 2 // Exponential backoff
+			retries--
+		}
+	}
+	return nil, fmt.Errorf("realdebrid API error: download link not found")
+}
 func (r *RealDebrid) getTorrents(offset int, limit int) (int, []*types.Torrent, error) {
@@ -634,7 +783,7 @@ func (r *RealDebrid) getTorrents(offset int, limit int) (int, []*types.Torrent,
 			Links:     t.Links,
 			Files:     make(map[string]types.File),
 			InfoHash:  t.Hash,
-			Debrid:    r.Name,
+			Debrid:    r.name,
 			MountPath: r.MountPath,
 			Added:     t.Added.Format(time.RFC3339),
 		})
@@ -672,18 +821,19 @@ func (r *RealDebrid) GetTorrents() ([]*types.Torrent, error) {
 	return allTorrents, nil
 }
-func (r *RealDebrid) GetDownloads() (map[string]types.DownloadLink, error) {
-	links := make(map[string]types.DownloadLink)
+func (r *RealDebrid) GetDownloadLinks() (map[string]*types.DownloadLink, error) {
+	links := make(map[string]*types.DownloadLink)
 	offset := 0
 	limit := 1000
-	accounts := r.getActiveAccounts()
+	accounts := r.accounts.All()
 	if len(accounts) < 1 {
 		// No active download keys. It's likely that the key has reached bandwidth limit
-		return nil, fmt.Errorf("no active download keys")
+		return links, fmt.Errorf("no active download keys")
 	}
-	r.downloadClient.SetHeader("Authorization", fmt.Sprintf("Bearer %s", accounts[0].Token))
+	activeAccount := accounts[0]
+	r.downloadClient.SetHeader("Authorization", fmt.Sprintf("Bearer %s", activeAccount.Token))
 	for {
 		dl, err := r._getDownloads(offset, limit)
 		if err != nil {
@@ -698,11 +848,12 @@ func (r *RealDebrid) GetDownloads() (map[string]types.DownloadLink, error) {
 				// This is ordered by date, so we can skip the rest
 				continue
 			}
-			links[d.Link] = d
+			links[d.Link] = &d
 		}
 		offset += len(dl)
 	}
 	return links, nil
 }
@@ -728,6 +879,7 @@ func (r *RealDebrid) _getDownloads(offset int, limit int) ([]types.DownloadLink,
 			Link:         d.Link,
 			DownloadLink: d.Download,
 			Generated:    d.Generated,
+			ExpiresAt:    d.Generated.Add(r.autoExpiresLinksAfter),
 			Id:           d.Id,
 		})
@@ -747,49 +899,6 @@ func (r *RealDebrid) GetMountPath() string {
 	return r.MountPath
 }
-func (r *RealDebrid) DisableAccount(accountId string) {
-	r.accountsMutex.Lock()
-	defer r.accountsMutex.Unlock()
-	if len(r.accounts) == 1 {
-		r.logger.Info().Msgf("Cannot disable last account: %s", accountId)
-		return
-	}
-	r.currentDownloadKey = ""
-	if value, ok := r.accounts[accountId]; ok {
-		value.Disabled = true
-		r.accounts[accountId] = value
-		r.logger.Info().Msgf("Disabled account Index: %s", value.ID)
-	}
-}
-func (r *RealDebrid) ResetActiveDownloadKeys() {
-	r.accountsMutex.Lock()
-	defer r.accountsMutex.Unlock()
-	for key, value := range r.accounts {
-		value.Disabled = false
-		r.accounts[key] = value
-	}
-}
-func (r *RealDebrid) getActiveAccounts() []types.Account {
-	r.accountsMutex.RLock()
-	defer r.accountsMutex.RUnlock()
-	accounts := make([]types.Account, 0)
-	for _, value := range r.accounts {
-		if value.Disabled {
-			continue
-		}
-		accounts = append(accounts, value)
-	}
-	// Sort accounts by ID
-	sort.Slice(accounts, func(i, j int) bool {
-		return accounts[i].ID < accounts[j].ID
-	})
-	return accounts
-}
 func (r *RealDebrid) DeleteDownloadLink(linkId string) error {
 	url := fmt.Sprintf("%s/downloads/delete/%s", r.Host, linkId)
 	req, _ := http.NewRequest(http.MethodDelete, url, nil)
@@ -798,3 +907,49 @@ func (r *RealDebrid) DeleteDownloadLink(linkId string) error {
 	}
 	return nil
 }
func (r *RealDebrid) GetProfile() (*types.Profile, error) {
if r.Profile != nil {
return r.Profile, nil
}
url := fmt.Sprintf("%s/user", r.Host)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := r.client.MakeRequest(req)
if err != nil {
return nil, err
}
var data profileResponse
if json.Unmarshal(resp, &data) != nil {
return nil, err
}
profile := &types.Profile{
Id: data.Id,
Username: data.Username,
Email: data.Email,
Points: data.Points,
Premium: data.Premium,
Expiration: data.Expiration,
Type: data.Type,
}
r.Profile = profile
return profile, nil
}
func (r *RealDebrid) GetAvailableSlots() (int, error) {
url := fmt.Sprintf("%s/torrents/activeCount", r.Host)
req, _ := http.NewRequest(http.MethodGet, url, nil)
resp, err := r.client.MakeRequest(req)
if err != nil {
return 0, err
}
var data AvailableSlotsResponse
if err := json.Unmarshal(resp, &data); err != nil {
return 0, fmt.Errorf("error unmarshalling available slots response: %w", err)
}
return data.TotalSlots - data.ActiveSlots - r.minimumFreeSlot, nil // Ensure we keep a minimum number of free slots
}
func (r *RealDebrid) Accounts() *types.Accounts {
return r.accounts
}


@@ -139,3 +139,20 @@ type ErrorResponse struct {
	Error     string `json:"error"`
	ErrorCode int    `json:"error_code"`
}
type profileResponse struct {
Id int64 `json:"id"`
Username string `json:"username"`
Email string `json:"email"`
Points int64 `json:"points"`
Locale string `json:"locale"`
Avatar string `json:"avatar"`
Type string `json:"type"`
Premium int `json:"premium"`
Expiration time.Time `json:"expiration"`
}
type AvailableSlotsResponse struct {
ActiveSlots int `json:"nb"`
TotalSlots int `json:"limit"`
}
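The slot arithmetic behind `GetAvailableSlots` can be exercised in isolation. This is a minimal sketch, assuming only the response shape above (`nb` active torrents, `limit` total) and a configured reserve of free slots; `availableSlots` and `minFree` are illustrative names, not part of the codebase.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AvailableSlotsResponse mirrors the struct above: "nb" is the number of
// active torrents, "limit" is the account's concurrent-torrent ceiling.
type AvailableSlotsResponse struct {
	ActiveSlots int `json:"nb"`
	TotalSlots  int `json:"limit"`
}

// availableSlots returns how many new torrents can be added while keeping
// minFree slots in reserve, matching the computation in GetAvailableSlots.
func availableSlots(body []byte, minFree int) (int, error) {
	var data AvailableSlotsResponse
	if err := json.Unmarshal(body, &data); err != nil {
		return 0, fmt.Errorf("error unmarshalling available slots response: %w", err)
	}
	return data.TotalSlots - data.ActiveSlots - minFree, nil
}

func main() {
	n, err := availableSlots([]byte(`{"nb": 3, "limit": 10}`), 2)
	fmt.Println(n, err) // 5 <nil>
}
```

Note the result can go negative when the account is saturated, so callers would typically clamp at zero before using it as a budget.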


@@ -24,10 +24,12 @@ import (
)

type Torbox struct {
-	Name             string
+	name             string
	Host             string `json:"host"`
	APIKey           string
-	accounts         map[string]types.Account
+	accounts         *types.Accounts
+	autoExpiresLinksAfter time.Duration
	DownloadUncached bool
	client           *request.Client
@@ -37,7 +39,11 @@ type Torbox struct {
	addSamples bool
}

-func New(dc config.Debrid) *Torbox {
+func (tb *Torbox) GetProfile() (*types.Profile, error) {
+	return nil, nil
+}
+
+func New(dc config.Debrid) (*Torbox, error) {
	rl := request.ParseRateLimit(dc.RateLimit)
	headers := map[string]string{
@@ -51,36 +57,31 @@ func New(dc config.Debrid) *Torbox {
	request.WithLogger(_log),
	request.WithProxy(dc.Proxy),
)

-	accounts := make(map[string]types.Account)
-	for idx, key := range dc.DownloadAPIKeys {
-		id := strconv.Itoa(idx)
-		accounts[id] = types.Account{
-			Name:  key,
-			ID:    id,
-			Token: key,
-		}
-	}
+	autoExpiresLinksAfter, err := time.ParseDuration(dc.AutoExpireLinksAfter)
+	if autoExpiresLinksAfter == 0 || err != nil {
+		autoExpiresLinksAfter = 48 * time.Hour
+	}

	return &Torbox{
-		Name:             "torbox",
+		name:             "torbox",
		Host:             "https://api.torbox.app/v1",
		APIKey:           dc.APIKey,
-		accounts:         accounts,
+		accounts:         types.NewAccounts(dc),
		DownloadUncached: dc.DownloadUncached,
+		autoExpiresLinksAfter: autoExpiresLinksAfter,
		client:           client,
		MountPath:        dc.Folder,
		logger:           _log,
		checkCached:      dc.CheckCached,
		addSamples:       dc.AddSamples,
-	}
+	}, nil
}

-func (tb *Torbox) GetName() string {
-	return tb.Name
+func (tb *Torbox) Name() string {
+	return tb.name
}

-func (tb *Torbox) GetLogger() zerolog.Logger {
+func (tb *Torbox) Logger() zerolog.Logger {
	return tb.logger
}
@@ -113,13 +114,13 @@ func (tb *Torbox) IsAvailable(hashes []string) map[string]bool {
	req, _ := http.NewRequest(http.MethodGet, url, nil)
	resp, err := tb.client.MakeRequest(req)
	if err != nil {
-		tb.logger.Info().Msgf("Error checking availability: %v", err)
+		tb.logger.Error().Err(err).Msgf("Error checking availability")
		return result
	}
	var res AvailableResponse
	err = json.Unmarshal(resp, &res)
	if err != nil {
-		tb.logger.Info().Msgf("Error marshalling availability: %v", err)
+		tb.logger.Error().Err(err).Msgf("Error unmarshalling availability")
		return result
	}
	if res.Data == nil {
@@ -162,7 +163,7 @@ func (tb *Torbox) SubmitMagnet(torrent *types.Torrent) (*types.Torrent, error) {
	torrentId := strconv.Itoa(dt.Id)
	torrent.Id = torrentId
	torrent.MountPath = tb.MountPath
-	torrent.Debrid = tb.Name
+	torrent.Debrid = tb.name
	return torrent, nil
}
@@ -211,7 +212,7 @@ func (tb *Torbox) GetTorrent(torrentId string) (*types.Torrent, error) {
		Filename:         data.Name,
		OriginalFilename: data.Name,
		MountPath:        tb.MountPath,
-		Debrid:           tb.Name,
+		Debrid:           tb.name,
		Files:            make(map[string]types.File),
		Added:            data.CreatedAt.Format(time.RFC3339),
	}
@@ -234,7 +235,7 @@ func (tb *Torbox) GetTorrent(torrentId string) (*types.Torrent, error) {
			Id:   strconv.Itoa(f.Id),
			Name: fileName,
			Size: f.Size,
-			Path: fileName,
+			Path: f.Name,
		}
		t.Files[fileName] = file
	}
@@ -246,7 +247,7 @@ func (tb *Torbox) GetTorrent(torrentId string) (*types.Torrent, error) {
	}
	t.OriginalFilename = strings.Split(cleanPath, "/")[0]
-	t.Debrid = tb.Name
+	t.Debrid = tb.name
	return t, nil
}
@@ -275,7 +276,7 @@ func (tb *Torbox) UpdateTorrent(t *types.Torrent) error {
	t.Filename = name
	t.OriginalFilename = name
	t.MountPath = tb.MountPath
-	t.Debrid = tb.Name
+	t.Debrid = tb.name
	cfg := config.Get()
	for _, f := range data.Files {
		fileName := filepath.Base(f.Name)
@@ -307,11 +308,11 @@ func (tb *Torbox) UpdateTorrent(t *types.Torrent) error {
	}
	t.OriginalFilename = strings.Split(cleanPath, "/")[0]
-	t.Debrid = tb.Name
+	t.Debrid = tb.name
	return nil
}

-func (tb *Torbox) CheckStatus(torrent *types.Torrent, isSymlink bool) (*types.Torrent, error) {
+func (tb *Torbox) CheckStatus(torrent *types.Torrent) (*types.Torrent, error) {
	for {
		err := tb.UpdateTorrent(torrent)
@@ -321,13 +322,7 @@ func (tb *Torbox) CheckStatus(torrent *types.Torrent, isSymlink bool) (*types.To
		status := torrent.Status
		if status == "downloaded" {
			tb.logger.Info().Msgf("Torrent: %s downloaded", torrent.Name)
-			if !isSymlink {
-				err = tb.GenerateDownloadLinks(torrent)
-				if err != nil {
-					return torrent, err
-				}
-			}
-			break
+			return torrent, nil
		} else if utils.Contains(tb.GetDownloadingStatus(), status) {
			if !torrent.DownloadUncached {
				return torrent, fmt.Errorf("torrent: %s not cached", torrent.Name)
@@ -340,7 +335,6 @@ func (tb *Torbox) CheckStatus(torrent *types.Torrent, isSymlink bool) (*types.To
		}
	}
-	return torrent, nil
}

func (tb *Torbox) DeleteTorrent(torrentId string) error {
@@ -355,8 +349,9 @@ func (tb *Torbox) DeleteTorrent(torrentId string) error {
	return nil
}

-func (tb *Torbox) GenerateDownloadLinks(t *types.Torrent) error {
+func (tb *Torbox) GetFileDownloadLinks(t *types.Torrent) error {
	filesCh := make(chan types.File, len(t.Files))
+	linkCh := make(chan *types.DownloadLink)
	errCh := make(chan error, len(t.Files))

	var wg sync.WaitGroup
@@ -369,13 +364,17 @@ func (tb *Torbox) GenerateDownloadLinks(t *types.Torrent) error {
				errCh <- err
				return
			}
-			file.DownloadLink = link
+			if link != nil {
+				linkCh <- link
+				file.DownloadLink = link
+			}
			filesCh <- file
		}()
	}

	go func() {
		wg.Wait()
		close(filesCh)
+		close(linkCh)
		close(errCh)
	}()
@@ -385,6 +384,13 @@ func (tb *Torbox) GenerateDownloadLinks(t *types.Torrent) error {
		files[file.Name] = file
	}
// Collect download links
for link := range linkCh {
if link != nil {
tb.accounts.SetDownloadLink(link.Link, link)
}
}
	// Check for errors
	for err := range errCh {
		if err != nil {
@@ -419,12 +425,13 @@ func (tb *Torbox) GetDownloadLink(t *types.Torrent, file *types.File) (*types.Do
	if link == "" {
		return nil, fmt.Errorf("error getting download links")
	}
+	now := time.Now()
	return &types.DownloadLink{
		Link:         file.Link,
		DownloadLink: link,
		Id:           file.Id,
-		AccountId:    "0",
-		Generated:    time.Now(),
+		Generated:    now,
+		ExpiresAt:    now.Add(tb.autoExpiresLinksAfter),
	}, nil
}
@@ -432,10 +439,6 @@ func (tb *Torbox) GetDownloadingStatus() []string {
	return []string{"downloading"}
}

-func (tb *Torbox) GetCheckCached() bool {
-	return tb.checkCached
-}
-
func (tb *Torbox) GetTorrents() ([]*types.Torrent, error) {
	return nil, nil
}
@@ -444,7 +447,7 @@ func (tb *Torbox) GetDownloadUncached() bool {
	return tb.DownloadUncached
}

-func (tb *Torbox) GetDownloads() (map[string]types.DownloadLink, error) {
+func (tb *Torbox) GetDownloadLinks() (map[string]*types.DownloadLink, error) {
	return nil, nil
}
@@ -456,13 +459,15 @@ func (tb *Torbox) GetMountPath() string {
	return tb.MountPath
}

-func (tb *Torbox) DisableAccount(accountId string) {
-}
-
-func (tb *Torbox) ResetActiveDownloadKeys() {
-}
-
func (tb *Torbox) DeleteDownloadLink(linkId string) error {
	return nil
}
func (tb *Torbox) GetAvailableSlots() (int, error) {
//TODO: Implement the logic to check available slots for Torbox
return 0, fmt.Errorf("not implemented")
}
func (tb *Torbox) Accounts() *types.Accounts {
return tb.accounts
}


@@ -1,4 +1,4 @@
-package debrid
+package store

import (
	"bufio"
@@ -6,6 +6,7 @@ import (
	"context"
	"errors"
	"fmt"
+	"github.com/sirrobot01/decypharr/pkg/debrid/types"
	"os"
	"path"
	"path/filepath"
@@ -22,7 +23,7 @@ import (
	"github.com/sirrobot01/decypharr/internal/config"
	"github.com/sirrobot01/decypharr/internal/logger"
	"github.com/sirrobot01/decypharr/internal/utils"
-	"github.com/sirrobot01/decypharr/pkg/debrid/types"
+	_ "time/tzdata"
)

type WebDavFolderNaming string
@@ -72,7 +73,6 @@ type Cache struct {
	logger   zerolog.Logger
	torrents *torrentCache

-	downloadLinks        *downloadLinkCache
	invalidDownloadLinks sync.Map

	folderNaming WebDavFolderNaming
@@ -89,10 +89,9 @@ type Cache struct {
	ready chan struct{}

	// config
	workers                      int
	torrentRefreshInterval       string
	downloadLinksRefreshInterval string
-	autoExpiresLinksAfterDuration time.Duration

	// refresh mutex
	downloadLinksRefreshMu sync.RWMutex // for refreshing download links
@@ -107,16 +106,26 @@ type Cache struct {
	customFolders []string
}

-func New(dc config.Debrid, client types.Client) *Cache {
+func NewDebridCache(dc config.Debrid, client types.Client) *Cache {
	cfg := config.Get()
-	cet, _ := time.LoadLocation("CET")
-	cetSc, _ := gocron.NewScheduler(gocron.WithLocation(cet))
-	scheduler, _ := gocron.NewScheduler(gocron.WithLocation(time.Local))
-
-	autoExpiresLinksAfter, err := time.ParseDuration(dc.AutoExpireLinksAfter)
-	if autoExpiresLinksAfter == 0 || err != nil {
-		autoExpiresLinksAfter = 48 * time.Hour
-	}
+	cet, err := time.LoadLocation("CET")
+	if err != nil {
+		cet, err = time.LoadLocation("Europe/Berlin") // Fallback to Berlin if CET fails
+		if err != nil {
+			cet = time.FixedZone("CET", 1*60*60) // Fallback to a fixed CET zone
+		}
+	}
+	cetSc, err := gocron.NewScheduler(gocron.WithLocation(cet))
+	if err != nil {
+		// If we can't create a CET scheduler, fallback to local time
+		cetSc, _ = gocron.NewScheduler(gocron.WithLocation(time.Local))
+	}
+	scheduler, err := gocron.NewScheduler(gocron.WithLocation(time.Local))
+	if err != nil {
+		// If we can't create a local scheduler, fallback to CET
+		scheduler = cetSc
+	}

	var customFolders []string
	dirFilters := map[string][]directoryFilter{}
	for name, value := range dc.Directories {
@@ -135,22 +144,20 @@ func New(dc config.Debrid, client types.Client) *Cache {
		customFolders = append(customFolders, name)
	}

-	_log := logger.New(fmt.Sprintf("%s-webdav", client.GetName()))
+	_log := logger.New(fmt.Sprintf("%s-webdav", client.Name()))
	c := &Cache{
		dir:      filepath.Join(cfg.Path, "cache", dc.Name), // path to save cache files
		torrents: newTorrentCache(dirFilters),
		client:   client,
		logger:   _log,
		workers:  dc.Workers,
-		downloadLinks:                newDownloadLinkCache(),
		torrentRefreshInterval:       dc.TorrentsRefreshInterval,
		downloadLinksRefreshInterval: dc.DownloadLinksRefreshInterval,
		folderNaming:                 WebDavFolderNaming(dc.FolderNaming),
-		autoExpiresLinksAfterDuration: autoExpiresLinksAfter,
		saveSemaphore:                make(chan struct{}, 50),
		cetScheduler:                 cetSc,
		scheduler:                    scheduler,

		config:        dc,
		customFolders: customFolders,
// 1. Reset torrent storage // 1. Reset torrent storage
c.torrents.reset() c.torrents.reset()
// 2. Reset download-link cache
c.downloadLinks.reset()
// 3. Clear any sync.Maps // 3. Clear any sync.Maps
c.invalidDownloadLinks = sync.Map{} c.invalidDownloadLinks = sync.Map{}
c.repairRequest = sync.Map{} c.repairRequest = sync.Map{}
@@ -220,9 +224,14 @@ func (c *Cache) Start(ctx context.Context) error {
		return fmt.Errorf("failed to create cache directory: %w", err)
	}

+	c.logger.Info().Msgf("Started indexing...")
	if err := c.Sync(ctx); err != nil {
		return fmt.Errorf("failed to sync cache: %w", err)
	}
+	// Fire the ready channel
+	close(c.ready)
+	c.logger.Info().Msgf("Indexing complete, %d torrents loaded", len(c.torrents.getAll()))

	// initial download links
	go c.refreshDownloadLinks(ctx)
@@ -231,13 +240,11 @@ func (c *Cache) Start(ctx context.Context) error {
		c.logger.Error().Err(err).Msg("Failed to start cache worker")
	}

-	c.repairChan = make(chan RepairRequest, 100)
+	c.repairChan = make(chan RepairRequest, 100) // Initialize the repair channel, max 100 requests buffered
	go c.repairWorker(ctx)

-	// Fire the ready channel
-	close(c.ready)
-
	cfg := config.Get()
-	name := c.client.GetName()
+	name := c.client.Name()
	addr := cfg.BindAddress + ":" + cfg.Port + cfg.URLBase + "webdav/" + name + "/"
	c.logger.Info().Msgf("%s WebDav server running at %s", name, addr)
@@ -307,10 +314,10 @@ func (c *Cache) load(ctx context.Context) (map[string]CachedTorrent, error) {
	}

	isComplete := true
-	if len(ct.Files) != 0 {
+	if len(ct.GetFiles()) != 0 {
		// Check if all files are valid, if not, delete the file.json and remove from cache.
-		fs := make(map[string]types.File, len(ct.Files))
-		for _, f := range ct.Files {
+		fs := make(map[string]types.File, len(ct.GetFiles()))
+		for _, f := range ct.GetFiles() {
			if f.Link == "" {
				isComplete = false
				break
@@ -368,7 +375,7 @@ func (c *Cache) Sync(ctx context.Context) error {
	totalTorrents := len(torrents)
-	c.logger.Info().Msgf("%d torrents found from %s", totalTorrents, c.client.GetName())
+	c.logger.Info().Msgf("%d torrents found from %s", totalTorrents, c.client.Name())

	newTorrents := make([]*types.Torrent, 0)
	idStore := make(map[string]struct{}, totalTorrents)
@@ -390,9 +397,11 @@ func (c *Cache) Sync(ctx context.Context) error {
	if len(deletedTorrents) > 0 {
		c.logger.Info().Msgf("Found %d deleted torrents", len(deletedTorrents))
		for _, id := range deletedTorrents {
-			if _, ok := cachedTorrents[id]; ok {
-				c.deleteTorrent(id, false) // delete from cache
-			}
+			// Remove from cache and debrid service
+			delete(cachedTorrents, id)
+			// Remove the json file from disk
+			c.removeFile(id, false)
		}
	}
@@ -505,9 +514,9 @@ func (c *Cache) setTorrent(t CachedTorrent, callback func(torrent CachedTorrent)
		updatedTorrent.Files = mergedFiles
	}
	c.torrents.set(torrentName, t, updatedTorrent)
-	c.SaveTorrent(t)
+	go c.SaveTorrent(t)
	if callback != nil {
-		callback(updatedTorrent)
+		go callback(updatedTorrent)
	}
}
@@ -550,6 +559,10 @@ func (c *Cache) GetTorrents() map[string]CachedTorrent {
	return c.torrents.getAll()
}

+func (c *Cache) TotalTorrents() int {
+	return c.torrents.getAllCount()
+}
+
func (c *Cache) GetTorrentByName(name string) *CachedTorrent {
	if torrent, ok := c.torrents.getByName(name); ok {
		return &torrent
@@ -557,6 +570,10 @@ func (c *Cache) GetTorrentByName(name string) *CachedTorrent {
	return nil
}

+func (c *Cache) GetTorrentsName() map[string]CachedTorrent {
+	return c.torrents.getAllByName()
+}
+
func (c *Cache) GetTorrent(torrentId string) *CachedTorrent {
	if torrent, ok := c.torrents.getByID(torrentId); ok {
		return &torrent
@@ -683,8 +700,9 @@ func (c *Cache) ProcessTorrent(t *types.Torrent) error {
	return nil
}

-func (c *Cache) AddTorrent(t *types.Torrent) error {
+func (c *Cache) Add(t *types.Torrent) error {
	if len(t.Files) == 0 {
+		c.logger.Warn().Msgf("Torrent %s has no files to add. Refreshing", t.Id)
		if err := c.client.UpdateTorrent(t); err != nil {
			return fmt.Errorf("failed to update torrent: %w", err)
		}
@@ -701,12 +719,12 @@ func (c *Cache) AddTorrent(t *types.Torrent) error {
	c.setTorrent(ct, func(tor CachedTorrent) {
		c.RefreshListings(true)
	})

-	go c.GenerateDownloadLinks(ct)
+	go c.GetFileDownloadLinks(ct)
	return nil
}

-func (c *Cache) GetClient() types.Client {
+func (c *Cache) Client() types.Client {
	return c.client
}
@@ -744,19 +762,19 @@ func (c *Cache) deleteTorrent(id string, removeFromDebrid bool) bool {
	if torrent, ok := c.torrents.getByID(id); ok {
		c.torrents.removeId(id) // Delete id from cache
		defer func() {
-			c.removeFromDB(id)
+			c.removeFile(id, false)
			if removeFromDebrid {
				_ = c.client.DeleteTorrent(id) // Skip error handling, we don't care if it fails
			}
		}() // defer delete from debrid

-		torrentName := torrent.Name
+		torrentName := c.GetTorrentFolder(torrent.Torrent)
		if t, ok := c.torrents.getByName(torrentName); ok {
			newFiles := map[string]types.File{}
			newId := ""
-			for _, file := range t.Files {
+			for _, file := range t.GetFiles() {
				if file.TorrentId != "" && file.TorrentId != id {
					if newId == "" && file.TorrentId != "" {
						newId = file.TorrentId
@@ -787,7 +805,7 @@ func (c *Cache) DeleteTorrents(ids []string) {
	c.listingDebouncer.Call(true)
}

-func (c *Cache) removeFromDB(torrentId string) {
+func (c *Cache) removeFile(torrentId string, moveToTrash bool) {
	// Moves the torrent file to the trash
	filePath := filepath.Join(c.dir, torrentId+".json")
@@ -796,6 +814,14 @@ func (c *Cache) removeFromDB(torrentId string) {
		return
	}

+	if !moveToTrash {
+		// If not moving to trash, delete the file directly
+		if err := os.Remove(filePath); err != nil {
+			c.logger.Error().Err(err).Msgf("Failed to remove file: %s", filePath)
+			return
+		}
+		return
+	}
+
	// Move the file to the trash
	trashPath := filepath.Join(c.dir, "trash", torrentId+".json")
	if err := os.MkdirAll(filepath.Dir(trashPath), 0755); err != nil {
@@ -815,6 +841,36 @@ func (c *Cache) OnRemove(torrentId string) {
	}
}

-func (c *Cache) GetLogger() zerolog.Logger {
+// RemoveFile removes a file from the torrent cache
// TODO sends a re-insert that removes the file from debrid
func (c *Cache) RemoveFile(torrentId string, filename string) error {
c.logger.Debug().Str("torrent_id", torrentId).Msgf("Removing file %s", filename)
torrent, ok := c.torrents.getByID(torrentId)
if !ok {
return fmt.Errorf("torrent %s not found", torrentId)
}
file, ok := torrent.GetFile(filename)
if !ok {
return fmt.Errorf("file %s not found in torrent %s", filename, torrentId)
}
file.Deleted = true
torrent.Files[filename] = file
// If the torrent has no files left, delete it
if len(torrent.GetFiles()) == 0 {
c.logger.Debug().Msgf("Torrent %s has no files left, deleting it", torrentId)
if err := c.DeleteTorrent(torrentId); err != nil {
return fmt.Errorf("failed to delete torrent %s: %w", torrentId, err)
}
return nil
}
c.setTorrent(torrent, func(torrent CachedTorrent) {
c.listingDebouncer.Call(true)
}) // Update the torrent in the cache
return nil
}
+func (c *Cache) Logger() zerolog.Logger {
	return c.logger
}


@@ -0,0 +1,193 @@
package store
import (
"errors"
"fmt"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
type downloadLinkRequest struct {
result string
err error
done chan struct{}
}
func newDownloadLinkRequest() *downloadLinkRequest {
return &downloadLinkRequest{
done: make(chan struct{}),
}
}
func (r *downloadLinkRequest) Complete(result string, err error) {
r.result = result
r.err = err
close(r.done)
}
func (r *downloadLinkRequest) Wait() (string, error) {
<-r.done
return r.result, r.err
}
func (c *Cache) GetDownloadLink(torrentName, filename, fileLink string) (string, error) {
// Check link cache
if dl, err := c.checkDownloadLink(fileLink); dl != "" && err == nil {
return dl, nil
}
if req, inFlight := c.downloadLinkRequests.Load(fileLink); inFlight {
// Wait for the other request to complete and use its result
result := req.(*downloadLinkRequest)
return result.Wait()
}
// Create a new request object
req := newDownloadLinkRequest()
c.downloadLinkRequests.Store(fileLink, req)
dl, err := c.fetchDownloadLink(torrentName, filename, fileLink)
if err != nil {
req.Complete("", err)
c.downloadLinkRequests.Delete(fileLink)
return "", err
}
if dl == nil || dl.DownloadLink == "" {
err = fmt.Errorf("download link is empty for %s in torrent %s", filename, torrentName)
req.Complete("", err)
c.downloadLinkRequests.Delete(fileLink)
return "", err
}
req.Complete(dl.DownloadLink, err)
c.downloadLinkRequests.Delete(fileLink)
return dl.DownloadLink, err
}
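The request-coalescing pattern that `GetDownloadLink` uses (the first caller for a file link performs the fetch, concurrent callers wait on `done` and reuse its result) can be sketched standalone. This is an illustrative sketch, not the codebase's implementation: `coalescer` and `get` are invented names, and it uses `sync.Map.LoadOrStore` instead of the separate `Load` then `Store` above, which also closes the small window where two goroutines could both start a fetch.

```go
package main

import (
	"fmt"
	"sync"
)

// downloadLinkRequest mirrors the struct above: the first caller for a key
// performs the fetch; later callers block on done and reuse the result.
type downloadLinkRequest struct {
	result string
	err    error
	done   chan struct{}
}

// coalescer is an illustrative stand-in for the Cache's downloadLinkRequests map.
type coalescer struct {
	inFlight sync.Map // fileLink -> *downloadLinkRequest
}

// get performs fetch at most once for concurrently arriving callers of the
// same key; losers of the race wait for the winner's result.
func (c *coalescer) get(key string, fetch func() (string, error)) (string, error) {
	req := &downloadLinkRequest{done: make(chan struct{})}
	if actual, loaded := c.inFlight.LoadOrStore(key, req); loaded {
		// Another request is in flight: wait for it and reuse its result.
		r := actual.(*downloadLinkRequest)
		<-r.done
		return r.result, r.err
	}
	// We won the race: do the fetch, publish the result, then clear the slot.
	req.result, req.err = fetch()
	close(req.done)
	c.inFlight.Delete(key)
	return req.result, req.err
}

func main() {
	c := &coalescer{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			link, err := c.get("file-link-1", func() (string, error) {
				return "https://example.com/dl/abc", nil
			})
			fmt.Println(link, err)
		}()
	}
	wg.Wait()
}
```

Deleting the map entry after completion means results are not cached beyond the in-flight window, which matches the code above: long-term caching is handled separately by the accounts' download-link store.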
func (c *Cache) fetchDownloadLink(torrentName, filename, fileLink string) (*types.DownloadLink, error) {
ct := c.GetTorrentByName(torrentName)
if ct == nil {
return nil, fmt.Errorf("torrent not found")
}
file, ok := ct.GetFile(filename)
if !ok {
return nil, fmt.Errorf("file %s not found in torrent %s", filename, torrentName)
}
if file.Link == "" {
// file link is empty, refresh the torrent to get restricted links
ct = c.refreshTorrent(file.TorrentId) // Refresh the torrent from the debrid
if ct == nil {
return nil, fmt.Errorf("failed to refresh torrent")
} else {
file, ok = ct.GetFile(filename)
if !ok {
return nil, fmt.Errorf("file %s not found in refreshed torrent %s", filename, torrentName)
}
}
}
// If file.Link is still empty, return
if file.Link == "" {
// Try to reinsert the torrent?
newCt, err := c.reInsertTorrent(ct)
if err != nil {
return nil, fmt.Errorf("failed to reinsert torrent: %w", err)
}
ct = newCt
file, ok = ct.GetFile(filename)
if !ok {
return nil, fmt.Errorf("file %s not found in reinserted torrent %s", filename, torrentName)
}
}
c.logger.Trace().Msgf("Getting download link for %s(%s)", filename, file.Link)
downloadLink, err := c.client.GetDownloadLink(ct.Torrent, &file)
if err != nil {
if errors.Is(err, utils.HosterUnavailableError) {
newCt, err := c.reInsertTorrent(ct)
if err != nil {
return nil, fmt.Errorf("failed to reinsert torrent: %w", err)
}
ct = newCt
file, ok = ct.GetFile(filename)
if !ok {
return nil, fmt.Errorf("file %s not found in reinserted torrent %s", filename, torrentName)
}
// Retry getting the download link
downloadLink, err = c.client.GetDownloadLink(ct.Torrent, &file)
if err != nil {
return nil, err
}
if downloadLink == nil {
return nil, fmt.Errorf("download link is empty for %s", filename)
}
return downloadLink, nil
} else if errors.Is(err, utils.TrafficExceededError) {
// This is likely a fair usage limit error
return nil, err
} else {
return nil, fmt.Errorf("failed to get download link: %w", err)
}
}
if downloadLink == nil {
return nil, fmt.Errorf("download link is empty")
}
// Set link to cache
go c.client.Accounts().SetDownloadLink(fileLink, downloadLink)
return downloadLink, nil
}
func (c *Cache) GetFileDownloadLinks(t CachedTorrent) {
if err := c.client.GetFileDownloadLinks(t.Torrent); err != nil {
c.logger.Error().Err(err).Str("torrent", t.Name).Msg("Failed to generate download links")
return
}
}
func (c *Cache) checkDownloadLink(link string) (string, error) {
dl, err := c.client.Accounts().GetDownloadLink(link)
if err != nil {
return "", err
}
if !c.downloadLinkIsInvalid(dl.DownloadLink) {
return dl.DownloadLink, nil
}
return "", fmt.Errorf("download link not found for %s", link)
}
func (c *Cache) MarkDownloadLinkAsInvalid(link, downloadLink, reason string) {
c.invalidDownloadLinks.Store(downloadLink, reason)
// Remove the download api key from active
if reason == "bandwidth_exceeded" {
// Disable the account
_, account, err := c.client.Accounts().GetDownloadLinkWithAccount(link)
if err != nil {
return
}
c.client.Accounts().Disable(account)
}
}
func (c *Cache) downloadLinkIsInvalid(downloadLink string) bool {
if reason, ok := c.invalidDownloadLinks.Load(downloadLink); ok {
c.logger.Debug().Msgf("Download link %s is invalid: %s", downloadLink, reason)
return true
}
return false
}
func (c *Cache) GetDownloadByteRange(torrentName, filename string) (*[2]int64, error) {
ct := c.GetTorrentByName(torrentName)
if ct == nil {
return nil, fmt.Errorf("torrent not found")
}
file := ct.Files[filename]
return file.ByteRange, nil
}
func (c *Cache) GetTotalActiveDownloadLinks() int {
return c.client.Accounts().GetLinksCount()
}


@@ -1,4 +1,4 @@
-package debrid
+package store

import (
	"github.com/sirrobot01/decypharr/pkg/debrid/types"
@@ -19,9 +19,24 @@ func mergeFiles(torrents ...CachedTorrent) map[string]types.File {
	})
	for _, torrent := range torrents {
-		for _, file := range torrent.Files {
+		for _, file := range torrent.GetFiles() {
			merged[file.Name] = file
		}
	}
	return merged
}
func (c *Cache) GetIngests() ([]types.IngestData, error) {
torrents := c.GetTorrents()
debridName := c.client.Name()
var ingests []types.IngestData
for _, torrent := range torrents {
ingests = append(ingests, types.IngestData{
Debrid: debridName,
Name: torrent.Filename,
Hash: torrent.InfoHash,
Size: torrent.Bytes,
})
}
return ingests, nil
}


@@ -1,4 +1,4 @@
-package debrid
+package store

import (
	"context"
@@ -136,67 +136,67 @@ func (c *Cache) refreshRclone() error {
return nil return nil
} }
client := &http.Client{ client := http.DefaultClient
Timeout: 10 * time.Second,
Transport: &http.Transport{
MaxIdleConns: 10,
IdleConnTimeout: 30 * time.Second,
DisableCompression: false,
MaxIdleConnsPerHost: 5,
},
}
// Create form data // Create form data
data := "" data := c.buildRcloneRequestData()
if err := c.sendRcloneRequest(client, "vfs/forget", data); err != nil {
c.logger.Error().Err(err).Msg("Failed to send rclone vfs/forget request")
}
if err := c.sendRcloneRequest(client, "vfs/refresh", data); err != nil {
c.logger.Error().Err(err).Msg("Failed to send rclone vfs/refresh request")
}
return nil
}
func (c *Cache) buildRcloneRequestData() string {
cfg := c.config
dirs := strings.FieldsFunc(cfg.RcRefreshDirs, func(r rune) bool { dirs := strings.FieldsFunc(cfg.RcRefreshDirs, func(r rune) bool {
return r == ',' || r == '&' return r == ',' || r == '&'
}) })
if len(dirs) == 0 { if len(dirs) == 0 {
data = "dir=__all__" return "dir=__all__"
} else { }
for index, dir := range dirs {
if dir != "" { var data strings.Builder
if index == 0 { for index, dir := range dirs {
data += "dir=" + dir if dir != "" {
} else { if index == 0 {
data += "&dir" + fmt.Sprint(index+1) + "=" + dir data.WriteString("dir=" + dir)
} } else {
data.WriteString("&dir" + fmt.Sprint(index+1) + "=" + dir)
} }
} }
} }
return data.String()
}
-	sendRequest := func(endpoint string) error {
-		req, err := http.NewRequest("POST", fmt.Sprintf("%s/%s", cfg.RcUrl, endpoint), strings.NewReader(data))
-		if err != nil {
-			return err
-		}
-		req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
-		if cfg.RcUser != "" && cfg.RcPass != "" {
-			req.SetBasicAuth(cfg.RcUser, cfg.RcPass)
-		}
-		resp, err := client.Do(req)
-		if err != nil {
-			return err
-		}
-		defer resp.Body.Close()
-		if resp.StatusCode != 200 {
-			body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
-			return fmt.Errorf("failed to perform %s: %s - %s", endpoint, resp.Status, string(body))
-		}
-		_, _ = io.Copy(io.Discard, resp.Body)
-		return nil
-	}
-	if err := sendRequest("vfs/forget"); err != nil {
-		return err
-	}
-	if err := sendRequest("vfs/refresh"); err != nil {
-		return err
-	}
+func (c *Cache) sendRcloneRequest(client *http.Client, endpoint, data string) error {
+	req, err := http.NewRequest("POST", fmt.Sprintf("%s/%s", c.config.RcUrl, endpoint), strings.NewReader(data))
+	if err != nil {
+		return err
+	}
+	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
+	if c.config.RcUser != "" && c.config.RcPass != "" {
+		req.SetBasicAuth(c.config.RcUser, c.config.RcPass)
+	}
+	resp, err := client.Do(req)
+	if err != nil {
+		return err
+	}
+	defer resp.Body.Close()
+	if resp.StatusCode != 200 {
+		body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
+		return fmt.Errorf("failed to perform %s: %s - %s", endpoint, resp.Status, string(body))
+	}
+	_, _ = io.Copy(io.Discard, resp.Body)
+	return nil
+}
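The refactor above extracts a form-data builder for rclone's remote-control API: each configured directory becomes a `dir=`, `dir2=`, `dir3=`… field, falling back to `dir=__all__`. A standalone sketch of that builder (the function name here is illustrative, not the project's API):

```go
package main

import (
	"fmt"
	"strings"
)

// buildRefreshData mirrors buildRcloneRequestData above: split the
// configured dirs on ',' or '&', fall back to "dir=__all__", and name
// subsequent form fields dir2, dir3, ... as rclone's RC API expects.
func buildRefreshData(refreshDirs string) string {
	dirs := strings.FieldsFunc(refreshDirs, func(r rune) bool {
		return r == ',' || r == '&'
	})
	if len(dirs) == 0 {
		return "dir=__all__"
	}
	var data strings.Builder
	for i, dir := range dirs {
		if dir == "" {
			continue
		}
		if i == 0 {
			data.WriteString("dir=" + dir)
		} else {
			data.WriteString("&dir" + fmt.Sprint(i+1) + "=" + dir)
		}
	}
	return data.String()
}

func main() {
	fmt.Println(buildRefreshData(""))          // dir=__all__
	fmt.Println(buildRefreshData("movies,tv")) // dir=movies&dir2=tv
}
```
The same payload is POSTed to both `vfs/forget` and `vfs/refresh`, which is why it is built once and passed into `sendRcloneRequest`.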
@@ -241,27 +241,14 @@ func (c *Cache) refreshDownloadLinks(ctx context.Context) {
	}
	defer c.downloadLinksRefreshMu.Unlock()
-	downloadLinks, err := c.client.GetDownloads()
+	links, err := c.client.GetDownloadLinks()
	if err != nil {
		c.logger.Error().Err(err).Msg("Failed to get download links")
		return
	}
-	for k, v := range downloadLinks {
-		// if link is generated in the last 24 hours, add it to cache
-		timeSince := time.Since(v.Generated)
-		if timeSince < c.autoExpiresLinksAfterDuration {
-			c.downloadLinks.Store(k, linkCache{
-				Id:        v.Id,
-				accountId: v.AccountId,
-				link:      v.DownloadLink,
-				expiresAt: v.Generated.Add(c.autoExpiresLinksAfterDuration - timeSince),
-			})
-		} else {
-			c.downloadLinks.Delete(k)
-		}
-	}
-	c.logger.Trace().Msgf("Refreshed %d download links", len(downloadLinks))
+	c.client.Accounts().SetDownloadLinks(links)
+	c.logger.Debug().Msgf("Refreshed %d download links", c.client.Accounts().GetLinksCount())
}


@@ -1,10 +1,10 @@
-package debrid
+package store

import (
	"context"
	"errors"
	"fmt"
-	"github.com/sirrobot01/decypharr/internal/request"
+	"github.com/sirrobot01/decypharr/internal/config"
	"github.com/sirrobot01/decypharr/internal/utils"
	"github.com/sirrobot01/decypharr/pkg/debrid/types"
	"sync"
@@ -59,11 +59,10 @@ func (c *Cache) markAsSuccessfullyReinserted(torrentId string) {
	}
}
-func (c *Cache) IsTorrentBroken(t *CachedTorrent, filenames []string) bool {
-	// Check torrent files
-	isBroken := false
+func (c *Cache) GetBrokenFiles(t *CachedTorrent, filenames []string) []string {
	files := make(map[string]types.File)
+	repairStrategy := config.Get().Repair.Strategy
+	brokenFiles := make([]string, 0)
	if len(filenames) > 0 {
		for name, f := range t.Files {
			if utils.Contains(filenames, name) {
@@ -73,8 +72,6 @@ func (c *Cache) IsTorrentBroken(t *CachedTorrent, filenames []string) bool {
	} else {
		files = t.Files
	}
-	// Check empty links
	for _, f := range files {
		// Check if file is missing
		if f.Link == "" {
@@ -83,44 +80,92 @@ func (c *Cache) IsTorrentBroken(t *CachedTorrent, filenames []string) bool {
			t = newT
		} else {
			c.logger.Error().Str("torrentId", t.Torrent.Id).Msg("Failed to refresh torrent")
-			return true
+			return filenames // Return original filenames if refresh fails (torrent is somehow botched)
			}
		}
	}
	if t.Torrent == nil {
		c.logger.Error().Str("torrentId", t.Torrent.Id).Msg("Failed to refresh torrent")
-		return true
+		return filenames // Return original filenames if refresh fails (torrent is somehow botched)
	}
	files = t.Files

+	var wg sync.WaitGroup
+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
+
+	// Use a mutex to protect brokenFiles slice and torrent-wide failure flag
+	var mu sync.Mutex
+	torrentWideFailed := false
+
+	wg.Add(len(files))
	for _, f := range files {
-		// Check if file link is still missing
-		if f.Link == "" {
-			isBroken = true
-			break
-		} else {
-			// Check if file.Link not in the downloadLink Cache
-			if err := c.client.CheckLink(f.Link); err != nil {
-				if errors.Is(err, request.HosterUnavailableError) {
-					isBroken = true
-					break
-				}
-			}
-		}
-	}
+		go func(f types.File) {
+			defer wg.Done()
+
+			select {
+			case <-ctx.Done():
+				return
+			default:
+			}
+
+			if f.Link == "" {
+				mu.Lock()
+				if repairStrategy == config.RepairStrategyPerTorrent {
+					torrentWideFailed = true
+					mu.Unlock()
+					cancel() // Signal all other goroutines to stop
+					return
+				} else {
+					// per_file strategy - only mark this file as broken
+					brokenFiles = append(brokenFiles, f.Name)
+				}
+				mu.Unlock()
+				return
+			}
+
+			if err := c.client.CheckLink(f.Link); err != nil {
+				if errors.Is(err, utils.HosterUnavailableError) {
+					mu.Lock()
+					if repairStrategy == config.RepairStrategyPerTorrent {
+						torrentWideFailed = true
+						mu.Unlock()
+						cancel() // Signal all other goroutines to stop
+						return
+					} else {
+						// per_file strategy - only mark this file as broken
+						brokenFiles = append(brokenFiles, f.Name)
+					}
+					mu.Unlock()
+				}
+			}
+		}(f)
+	}
+	wg.Wait()
+
+	// Handle the result based on strategy
+	if repairStrategy == config.RepairStrategyPerTorrent && torrentWideFailed {
+		// Mark all files as broken for per_torrent strategy
+		for _, f := range files {
+			brokenFiles = append(brokenFiles, f.Name)
+		}
+	}
+	// For per_file strategy, brokenFiles already contains only the broken ones

	// Try to reinsert the torrent if it's broken
-	if isBroken && t.Torrent != nil {
+	if len(brokenFiles) > 0 && t.Torrent != nil {
		// Check if the torrent is already in progress
		if _, err := c.reInsertTorrent(t); err != nil {
			c.logger.Error().Err(err).Str("torrentId", t.Torrent.Id).Msg("Failed to reinsert torrent")
-			return true
+			return brokenFiles // Return broken files if reinsert fails
		}
-		return false
+		return nil // Return nil if the torrent was successfully reinserted
	}
-	return isBroken
+	return brokenFiles
}
func (c *Cache) repairWorker(ctx context.Context) {
@@ -208,7 +253,7 @@ func (c *Cache) reInsertTorrent(ct *CachedTorrent) (*CachedTorrent, error) {
		return ct, fmt.Errorf("failed to submit magnet: empty torrent")
	}
	newTorrent.DownloadUncached = false // Set to false, avoid re-downloading
-	newTorrent, err = c.client.CheckStatus(newTorrent, true)
+	newTorrent, err = c.client.CheckStatus(newTorrent)
	if err != nil {
		if newTorrent != nil && newTorrent.Id != "" {
			// Delete the torrent if it was not downloaded
@@ -223,7 +268,7 @@ func (c *Cache) reInsertTorrent(ct *CachedTorrent) (*CachedTorrent, error) {
	if err != nil {
		addedOn = time.Now()
	}
-	for _, f := range newTorrent.Files {
+	for _, f := range newTorrent.GetFiles() {
		if f.Link == "" {
			c.markAsFailedToReinsert(oldID)
			return ct, fmt.Errorf("failed to reinsert torrent: empty link")
@@ -256,7 +301,11 @@ func (c *Cache) reInsertTorrent(ct *CachedTorrent) (*CachedTorrent, error) {
	return ct, nil
}

-func (c *Cache) resetInvalidLinks() {
+func (c *Cache) resetInvalidLinks(ctx context.Context) {
+	c.logger.Debug().Msgf("Resetting accounts")
	c.invalidDownloadLinks = sync.Map{}
-	c.client.ResetActiveDownloadKeys() // Reset the active download keys
+	c.client.Accounts().Reset() // Reset the active download keys
+	// Refresh the download links
+	c.refreshDownloadLinks(ctx)
}


@@ -1,4 +1,4 @@
-package debrid
+package store

import (
	"fmt"
@@ -40,13 +40,22 @@ type directoryFilter struct {
	ageThreshold time.Duration // only for last_added
}

+type torrents struct {
+	sync.RWMutex
+	byID   map[string]CachedTorrent
+	byName map[string]CachedTorrent
+}
+
+type folders struct {
+	sync.RWMutex
+	listing map[string][]os.FileInfo // folder name to file listing
+}
+
type torrentCache struct {
-	mu                 sync.Mutex
-	byID               map[string]CachedTorrent
-	byName             map[string]CachedTorrent
+	torrents           torrents
	listing            atomic.Value
-	folderListing      map[string][]os.FileInfo
-	folderListingMu    sync.RWMutex
+	folders            folders
	directoriesFilters map[string][]directoryFilter
	sortNeeded         atomic.Bool
}
@@ -62,9 +71,13 @@ type sortableFile struct {
func newTorrentCache(dirFilters map[string][]directoryFilter) *torrentCache {
	tc := &torrentCache{
-		byID:          make(map[string]CachedTorrent),
-		byName:        make(map[string]CachedTorrent),
-		folderListing: make(map[string][]os.FileInfo),
+		torrents: torrents{
+			byID:   make(map[string]CachedTorrent),
+			byName: make(map[string]CachedTorrent),
+		},
+		folders: folders{
+			listing: make(map[string][]os.FileInfo),
+		},
		directoriesFilters: dirFilters,
	}
@@ -74,41 +87,42 @@ func newTorrentCache(dirFilters map[string][]directoryFilter) *torrentCache {
}

func (tc *torrentCache) reset() {
-	tc.mu.Lock()
-	tc.byID = make(map[string]CachedTorrent)
-	tc.byName = make(map[string]CachedTorrent)
-	tc.mu.Unlock()
+	tc.torrents.Lock()
+	tc.torrents.byID = make(map[string]CachedTorrent)
+	tc.torrents.byName = make(map[string]CachedTorrent)
+	tc.torrents.Unlock()

	// reset the sorted listing
	tc.sortNeeded.Store(false)
	tc.listing.Store(make([]os.FileInfo, 0))

	// reset any per-folder views
-	tc.folderListingMu.Lock()
-	tc.folderListing = make(map[string][]os.FileInfo)
-	tc.folderListingMu.Unlock()
+	tc.folders.Lock()
+	tc.folders.listing = make(map[string][]os.FileInfo)
+	tc.folders.Unlock()
}

func (tc *torrentCache) getByID(id string) (CachedTorrent, bool) {
-	tc.mu.Lock()
-	defer tc.mu.Unlock()
-	torrent, exists := tc.byID[id]
+	tc.torrents.RLock()
+	defer tc.torrents.RUnlock()
+	torrent, exists := tc.torrents.byID[id]
	return torrent, exists
}

func (tc *torrentCache) getByName(name string) (CachedTorrent, bool) {
-	tc.mu.Lock()
-	defer tc.mu.Unlock()
-	torrent, exists := tc.byName[name]
+	tc.torrents.RLock()
+	defer tc.torrents.RUnlock()
+	torrent, exists := tc.torrents.byName[name]
	return torrent, exists
}

func (tc *torrentCache) set(name string, torrent, newTorrent CachedTorrent) {
-	tc.mu.Lock()
+	tc.torrents.Lock()
	// Set the id first
-	tc.byID[newTorrent.Id] = torrent // This is the unadulterated torrent
-	tc.byName[name] = newTorrent     // This is likely the modified torrent
-	tc.mu.Unlock()
+	tc.torrents.byName[name] = torrent
+	tc.torrents.byID[torrent.Id] = torrent // This is the unadulterated torrent
+	tc.torrents.Unlock()
	tc.sortNeeded.Store(true)
}
@@ -124,12 +138,12 @@ func (tc *torrentCache) getListing() []os.FileInfo {
}

func (tc *torrentCache) getFolderListing(folderName string) []os.FileInfo {
-	tc.folderListingMu.RLock()
-	defer tc.folderListingMu.RUnlock()
+	tc.folders.RLock()
+	defer tc.folders.RUnlock()
	if folderName == "" {
		return tc.getListing()
	}
-	if folder, ok := tc.folderListing[folderName]; ok {
+	if folder, ok := tc.folders.listing[folderName]; ok {
		return folder
	}
	// If folder not found, return empty slice
@@ -138,13 +152,13 @@ func (tc *torrentCache) getFolderListing(folderName string) []os.FileInfo {
func (tc *torrentCache) refreshListing() {
-	tc.mu.Lock()
-	all := make([]sortableFile, 0, len(tc.byName))
-	for name, t := range tc.byName {
+	tc.torrents.RLock()
+	all := make([]sortableFile, 0, len(tc.torrents.byName))
+	for name, t := range tc.torrents.byName {
		all = append(all, sortableFile{t.Id, name, t.AddedOn, t.Bytes, t.Bad})
	}
	tc.sortNeeded.Store(false)
-	tc.mu.Unlock()
+	tc.torrents.RUnlock()

	sort.Slice(all, func(i, j int) bool {
		if all[i].name != all[j].name {
@@ -157,17 +171,18 @@ func (tc *torrentCache) refreshListing() {
	wg.Add(1) // for all listing
	go func() {
+		defer wg.Done()
		listing := make([]os.FileInfo, len(all))
		for i, sf := range all {
			listing[i] = &fileInfo{sf.id, sf.name, sf.size, 0755 | os.ModeDir, sf.modTime, true}
		}
		tc.listing.Store(listing)
	}()
-	wg.Done()

	wg.Add(1)
	// For __bad__
	go func() {
+		defer wg.Done()
		listing := make([]os.FileInfo, 0)
		for _, sf := range all {
			if sf.bad {
@@ -181,15 +196,14 @@ func (tc *torrentCache) refreshListing() {
				})
			}
		}
-		tc.folderListingMu.Lock()
+		tc.folders.Lock()
		if len(listing) > 0 {
-			tc.folderListing["__bad__"] = listing
+			tc.folders.listing["__bad__"] = listing
		} else {
-			delete(tc.folderListing, "__bad__")
+			delete(tc.folders.listing, "__bad__")
		}
-		tc.folderListingMu.Unlock()
+		tc.folders.Unlock()
	}()
-	wg.Done()

	now := time.Now()
	wg.Add(len(tc.directoriesFilters)) // for each directory filter
@@ -207,13 +221,13 @@ func (tc *torrentCache) refreshListing() {
			}
		}
-		tc.folderListingMu.Lock()
+		tc.folders.Lock()
		if len(matched) > 0 {
-			tc.folderListing[dir] = matched
+			tc.folders.listing[dir] = matched
		} else {
-			delete(tc.folderListing, dir)
+			delete(tc.folders.listing, dir)
		}
-		tc.folderListingMu.Unlock()
+		tc.folders.Unlock()
	}(dir, filters)
}
@@ -264,35 +278,51 @@ func (tc *torrentCache) torrentMatchDirectory(filters []directoryFilter, file so
}

func (tc *torrentCache) getAll() map[string]CachedTorrent {
-	tc.mu.Lock()
-	defer tc.mu.Unlock()
-	result := make(map[string]CachedTorrent)
-	for name, torrent := range tc.byID {
+	tc.torrents.RLock()
+	defer tc.torrents.RUnlock()
+	result := make(map[string]CachedTorrent, len(tc.torrents.byID))
+	for name, torrent := range tc.torrents.byID {
		result[name] = torrent
	}
	return result
}

+func (tc *torrentCache) getAllCount() int {
+	tc.torrents.RLock()
+	defer tc.torrents.RUnlock()
+	return len(tc.torrents.byID)
+}
+
+func (tc *torrentCache) getAllByName() map[string]CachedTorrent {
+	tc.torrents.RLock()
+	defer tc.torrents.RUnlock()
+	results := make(map[string]CachedTorrent, len(tc.torrents.byName))
+	for name, torrent := range tc.torrents.byName {
+		results[name] = torrent
+	}
+	return results
+}
+
func (tc *torrentCache) getIdMaps() map[string]struct{} {
-	tc.mu.Lock()
-	defer tc.mu.Unlock()
-	res := make(map[string]struct{}, len(tc.byID))
-	for id := range tc.byID {
+	tc.torrents.RLock()
+	defer tc.torrents.RUnlock()
+	res := make(map[string]struct{}, len(tc.torrents.byID))
+	for id := range tc.torrents.byID {
		res[id] = struct{}{}
	}
	return res
}

func (tc *torrentCache) removeId(id string) {
-	tc.mu.Lock()
-	defer tc.mu.Unlock()
-	delete(tc.byID, id)
+	tc.torrents.Lock()
+	defer tc.torrents.Unlock()
+	delete(tc.torrents.byID, id)
	tc.sortNeeded.Store(true)
}

func (tc *torrentCache) remove(name string) {
-	tc.mu.Lock()
-	defer tc.mu.Unlock()
-	delete(tc.byName, name)
+	tc.torrents.Lock()
+	defer tc.torrents.Unlock()
+	delete(tc.torrents.byName, name)
	tc.sortNeeded.Store(true)
}
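The refactor in this file replaces a single `sync.Mutex` guarding loose maps with named structs that embed `sync.RWMutex`, so lookups can share a read lock while writers take the exclusive lock. A minimal sketch of the pattern, with plain strings standing in for `CachedTorrent`:

```go
package main

import (
	"fmt"
	"sync"
)

// torrents shows the locking pattern adopted above: embedding
// sync.RWMutex in the struct that owns the map ties the lock to the
// data it protects, and RLock lets concurrent readers proceed where
// the old sync.Mutex serialized every lookup.
type torrents struct {
	sync.RWMutex
	byID map[string]string
}

func (t *torrents) get(id string) (string, bool) {
	t.RLock()
	defer t.RUnlock()
	v, ok := t.byID[id]
	return v, ok
}

func (t *torrents) set(id, name string) {
	t.Lock()
	defer t.Unlock()
	t.byID[id] = name
}

func main() {
	ts := &torrents{byID: make(map[string]string)}
	ts.set("abc123", "Some.Movie.2024")
	name, ok := ts.get("abc123")
	fmt.Println(name, ok) // Some.Movie.2024 true
}
```
Grouping `byID`/`byName` under one embedded lock also prevents the two maps from being updated under different locks and drifting out of sync.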


@@ -1,4 +1,4 @@
-package debrid
+package store

import (
	"context"
@@ -45,7 +45,7 @@ func (c *Cache) StartSchedule(ctx context.Context) error {
	} else {
		// Schedule the job
		if _, err := c.cetScheduler.NewJob(jd, gocron.NewTask(func() {
-			c.resetInvalidLinks()
+			c.resetInvalidLinks(ctx)
		}), gocron.WithContext(ctx)); err != nil {
			c.logger.Error().Err(err).Msg("Failed to create link reset job")
		} else {

pkg/debrid/store/xml.go Normal file

@@ -0,0 +1 @@
package store

pkg/debrid/types/account.go Normal file

@@ -0,0 +1,243 @@
package types
import (
"github.com/sirrobot01/decypharr/internal/config"
"sync"
"time"
)
type Accounts struct {
current *Account
accounts []*Account
mu sync.RWMutex
}
func NewAccounts(debridConf config.Debrid) *Accounts {
accounts := make([]*Account, 0)
for idx, token := range debridConf.DownloadAPIKeys {
if token == "" {
continue
}
account := newAccount(debridConf.Name, token, idx)
accounts = append(accounts, account)
}
var current *Account
if len(accounts) > 0 {
current = accounts[0]
}
return &Accounts{
accounts: accounts,
current: current,
}
}
type Account struct {
Debrid string // e.g., "realdebrid", "torbox", etc.
Order int
Disabled bool
Token string
links map[string]*DownloadLink
mu sync.RWMutex
}
func (a *Accounts) All() []*Account {
a.mu.RLock()
defer a.mu.RUnlock()
activeAccounts := make([]*Account, 0)
for _, acc := range a.accounts {
if !acc.Disabled {
activeAccounts = append(activeAccounts, acc)
}
}
return activeAccounts
}
func (a *Accounts) Current() *Account {
a.mu.RLock()
if a.current != nil {
current := a.current
a.mu.RUnlock()
return current
}
a.mu.RUnlock()
a.mu.Lock()
defer a.mu.Unlock()
// Double-check after acquiring write lock
if a.current != nil {
return a.current
}
activeAccounts := make([]*Account, 0)
for _, acc := range a.accounts {
if !acc.Disabled {
activeAccounts = append(activeAccounts, acc)
}
}
if len(activeAccounts) > 0 {
a.current = activeAccounts[0]
}
return a.current
}
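`Accounts.Current` above uses double-checked locking: a cheap read lock first, and only when the cached value is missing does it upgrade to the write lock and re-check, since another goroutine may have filled it in between the two locks. A condensed sketch of the pattern (the `picker` type and string tokens are illustrative stand-ins):

```go
package main

import (
	"fmt"
	"sync"
)

// picker mirrors the double-checked locking in Accounts.Current:
// RLock for the fast path, then Lock plus a re-check before
// recomputing the cached value.
type picker struct {
	mu      sync.RWMutex
	current string
	pool    []string
}

func (p *picker) Current() string {
	p.mu.RLock()
	if p.current != "" {
		cur := p.current
		p.mu.RUnlock()
		return cur
	}
	p.mu.RUnlock()

	p.mu.Lock()
	defer p.mu.Unlock()
	// Double-check after acquiring the write lock
	if p.current != "" {
		return p.current
	}
	if len(p.pool) > 0 {
		p.current = p.pool[0]
	}
	return p.current
}

func main() {
	p := &picker{pool: []string{"token-a", "token-b"}}
	fmt.Println(p.Current()) // token-a
	fmt.Println(p.Current()) // token-a (cached)
}
```
The re-check after taking the write lock is what keeps two racing goroutines from both recomputing and clobbering each other's result.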
func (a *Accounts) Disable(account *Account) {
a.mu.Lock()
defer a.mu.Unlock()
account.disable()
if a.current == account {
var newCurrent *Account
for _, acc := range a.accounts {
if !acc.Disabled {
newCurrent = acc
break
}
}
a.current = newCurrent
}
}
func (a *Accounts) Reset() {
a.mu.Lock()
defer a.mu.Unlock()
for _, acc := range a.accounts {
acc.resetDownloadLinks()
acc.Disabled = false
}
if len(a.accounts) > 0 {
a.current = a.accounts[0]
} else {
a.current = nil
}
}
func (a *Accounts) GetDownloadLink(fileLink string) (*DownloadLink, error) {
if a.Current() == nil {
return nil, NoActiveAccountsError
}
dl, ok := a.Current().getLink(fileLink)
if !ok {
return nil, NoDownloadLinkError
}
if dl.ExpiresAt.IsZero() || dl.ExpiresAt.Before(time.Now()) {
return nil, DownloadLinkExpiredError
}
if dl.DownloadLink == "" {
return nil, EmptyDownloadLinkError
}
return dl, nil
}
func (a *Accounts) GetDownloadLinkWithAccount(fileLink string) (*DownloadLink, *Account, error) {
currentAccount := a.Current()
if currentAccount == nil {
return nil, nil, NoActiveAccountsError
}
dl, ok := currentAccount.getLink(fileLink)
if !ok {
return nil, nil, NoDownloadLinkError
}
if dl.ExpiresAt.IsZero() || dl.ExpiresAt.Before(time.Now()) {
return nil, currentAccount, DownloadLinkExpiredError
}
if dl.DownloadLink == "" {
return nil, currentAccount, EmptyDownloadLinkError
}
return dl, currentAccount, nil
}
func (a *Accounts) SetDownloadLink(fileLink string, dl *DownloadLink) {
if a.Current() == nil {
return
}
a.Current().setLink(fileLink, dl)
}
func (a *Accounts) DeleteDownloadLink(fileLink string) {
if a.Current() == nil {
return
}
a.Current().deleteLink(fileLink)
}
func (a *Accounts) GetLinksCount() int {
if a.Current() == nil {
return 0
}
return a.Current().LinksCount()
}
func (a *Accounts) SetDownloadLinks(links map[string]*DownloadLink) {
if a.Current() == nil {
return
}
a.Current().setLinks(links)
}
func newAccount(debridName, token string, index int) *Account {
return &Account{
Debrid: debridName,
Token: token,
Order: index,
links: make(map[string]*DownloadLink),
}
}
func (a *Account) getLink(fileLink string) (*DownloadLink, bool) {
a.mu.RLock()
defer a.mu.RUnlock()
dl, ok := a.links[a.sliceFileLink(fileLink)]
return dl, ok
}
func (a *Account) setLink(fileLink string, dl *DownloadLink) {
a.mu.Lock()
defer a.mu.Unlock()
a.links[a.sliceFileLink(fileLink)] = dl
}
func (a *Account) deleteLink(fileLink string) {
a.mu.Lock()
defer a.mu.Unlock()
delete(a.links, a.sliceFileLink(fileLink))
}
func (a *Account) resetDownloadLinks() {
a.mu.Lock()
defer a.mu.Unlock()
a.links = make(map[string]*DownloadLink)
}
func (a *Account) LinksCount() int {
a.mu.RLock()
defer a.mu.RUnlock()
return len(a.links)
}
func (a *Account) disable() {
a.Disabled = true
}
func (a *Account) setLinks(links map[string]*DownloadLink) {
a.mu.Lock()
defer a.mu.Unlock()
now := time.Now()
for _, dl := range links {
if !dl.ExpiresAt.IsZero() && dl.ExpiresAt.Before(now) {
// Expired, continue
continue
}
a.links[a.sliceFileLink(dl.Link)] = dl
}
}
// slice download link
func (a *Account) sliceFileLink(fileLink string) string {
if a.Debrid != "realdebrid" {
return fileLink
}
if len(fileLink) < 39 {
return fileLink
}
return fileLink[0:39]
}
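`sliceFileLink` above keys Real-Debrid links by their first 39 characters, treating the remainder as per-request noise; other providers use the whole link. A standalone sketch (the example URL is made up for illustration):

```go
package main

import "fmt"

// sliceKey mirrors sliceFileLink above: for Real-Debrid the first 39
// characters of a file link are stable enough to serve as the cache
// key; other providers, and links shorter than 39 characters, pass
// through unchanged.
func sliceKey(debrid, fileLink string) string {
	if debrid != "realdebrid" || len(fileLink) < 39 {
		return fileLink
	}
	return fileLink[:39]
}

func main() {
	link := "https://real-debrid.com/d/ABCDEFGHIJKLM/file.mkv"
	fmt.Println(sliceKey("realdebrid", link))
	fmt.Println(sliceKey("torbox", link) == link) // true
}
```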


@@ -6,23 +6,23 @@ import (
type Client interface {
	SubmitMagnet(tr *Torrent) (*Torrent, error)
-	CheckStatus(tr *Torrent, isSymlink bool) (*Torrent, error)
-	GenerateDownloadLinks(tr *Torrent) error
+	CheckStatus(tr *Torrent) (*Torrent, error)
+	GetFileDownloadLinks(tr *Torrent) error
	GetDownloadLink(tr *Torrent, file *File) (*DownloadLink, error)
	DeleteTorrent(torrentId string) error
	IsAvailable(infohashes []string) map[string]bool
-	GetCheckCached() bool
	GetDownloadUncached() bool
	UpdateTorrent(torrent *Torrent) error
	GetTorrent(torrentId string) (*Torrent, error)
	GetTorrents() ([]*Torrent, error)
-	GetName() string
-	GetLogger() zerolog.Logger
+	Name() string
+	Logger() zerolog.Logger
	GetDownloadingStatus() []string
-	GetDownloads() (map[string]DownloadLink, error)
+	GetDownloadLinks() (map[string]*DownloadLink, error)
	CheckLink(link string) error
	GetMountPath() string
-	DisableAccount(string)
-	ResetActiveDownloadKeys()
+	Accounts() *Accounts // Returns the active download accounts/tokens
	DeleteDownloadLink(linkId string) error
+	GetProfile() (*Profile, error)
+	GetAvailableSlots() (int, error)
}

pkg/debrid/types/error.go Normal file

@@ -0,0 +1,30 @@
package types
type Error struct {
Message string `json:"message"`
Code string `json:"code"`
}
func (e *Error) Error() string {
return e.Message
}
var NoActiveAccountsError = &Error{
Message: "No active accounts",
Code: "no_active_accounts",
}
var NoDownloadLinkError = &Error{
Message: "No download link found",
Code: "no_download_link",
}
var DownloadLinkExpiredError = &Error{
Message: "Download link expired",
Code: "download_link_expired",
}
var EmptyDownloadLinkError = &Error{
Message: "Download link is empty",
Code: "empty_download_link",
}


@@ -2,13 +2,14 @@ package types
import (
	"fmt"
-	"github.com/sirrobot01/decypharr/internal/logger"
-	"github.com/sirrobot01/decypharr/internal/utils"
-	"github.com/sirrobot01/decypharr/pkg/arr"
	"os"
	"path/filepath"
	"sync"
	"time"
+
+	"github.com/sirrobot01/decypharr/internal/logger"
+	"github.com/sirrobot01/decypharr/internal/utils"
+	"github.com/sirrobot01/decypharr/pkg/arr"
)

type Torrent struct {
@@ -29,27 +30,16 @@ type Torrent struct {
	Seeders   int      `json:"seeders"`
	Links     []string `json:"links"`
	MountPath string   `json:"mount_path"`
+	DeletedFiles []string `json:"deleted_files"`
	Debrid    string   `json:"debrid"`
	Arr       *arr.Arr `json:"arr"`
-	Mu        sync.Mutex `json:"-"`
-	SizeDownloaded   int64 `json:"-"` // This is used for local download
-	DownloadUncached bool  `json:"-"`
-}
-
-type DownloadLink struct {
-	Filename     string    `json:"filename"`
-	Link         string    `json:"link"`
-	DownloadLink string    `json:"download_link"`
-	Generated    time.Time `json:"generated"`
-	Size         int64     `json:"size"`
-	Id           string    `json:"id"`
-	AccountId    string    `json:"account_id"`
-}
-
-func (d *DownloadLink) String() string {
-	return d.DownloadLink
+
+	SizeDownloaded   int64 `json:"-"` // This is used for local download
+	DownloadUncached bool  `json:"-"`
+
+	sync.Mutex
}

func (t *Torrent) GetSymlinkFolder(parent string) string {
@@ -75,16 +65,37 @@ func (t *Torrent) GetMountFolder(rClonePath string) (string, error) {
	return "", fmt.Errorf("no path found")
}
func (t *Torrent) GetFile(filename string) (File, bool) {
f, ok := t.Files[filename]
if !ok {
return File{}, false
}
return f, !f.Deleted
}
func (t *Torrent) GetFiles() []File {
files := make([]File, 0, len(t.Files))
for _, f := range t.Files {
if !f.Deleted {
files = append(files, f)
}
}
return files
}
type File struct {
	TorrentId string `json:"torrent_id"`
	Id        string `json:"id"`
	Name      string `json:"name"`
	Size      int64  `json:"size"`
+	IsRar     bool   `json:"is_rar"`
+	ByteRange *[2]int64 `json:"byte_range,omitempty"`
	Path      string `json:"path"`
	Link      string `json:"link"`
-	DownloadLink *DownloadLink `json:"-"`
	AccountId string    `json:"account_id"`
	Generated time.Time `json:"generated"`
+	Deleted   bool      `json:"deleted"`
+	DownloadLink *DownloadLink `json:"-"`
}

func (t *Torrent) Cleanup(remove bool) {
@@ -96,18 +107,38 @@ func (t *Torrent) Cleanup(remove bool) {
	}
}

-func (t *Torrent) GetFile(id string) *File {
-	for _, f := range t.Files {
-		if f.Id == id {
-			return &f
-		}
-	}
-	return nil
-}
-
-type Account struct {
-	ID       string `json:"id"`
-	Disabled bool   `json:"disabled"`
-	Name     string `json:"name"`
-	Token    string `json:"token"`
+type IngestData struct {
+	Debrid string `json:"debrid"`
+	Name   string `json:"name"`
+	Hash   string `json:"hash"`
+	Size   int64  `json:"size"`
+}
+
+type Profile struct {
+	Name        string    `json:"name"`
+	Id          int64     `json:"id"`
+	Username    string    `json:"username"`
+	Email       string    `json:"email"`
+	Points      int64     `json:"points"`
+	Type        string    `json:"type"`
+	Premium     int       `json:"premium"`
+	Expiration  time.Time `json:"expiration"`
+	LibrarySize int       `json:"library_size"`
+	BadTorrents int       `json:"bad_torrents"`
+	ActiveLinks int       `json:"active_links"`
+}
+
+type DownloadLink struct {
+	Filename     string    `json:"filename"`
+	Link         string    `json:"link"`
+	DownloadLink string    `json:"download_link"`
+	Generated    time.Time `json:"generated"`
+	Size         int64     `json:"size"`
+	Id           string    `json:"id"`
+	ExpiresAt    time.Time
+}
+
+func (d *DownloadLink) String() string {
+	return d.DownloadLink
}

pkg/qbit/context.go Normal file

@@ -0,0 +1,178 @@
package qbit
import (
"context"
"encoding/base64"
"fmt"
"github.com/go-chi/chi/v5"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/store"
"net/http"
"net/url"
"strings"
)
type contextKey string
const (
categoryKey contextKey = "category"
hashesKey contextKey = "hashes"
arrKey contextKey = "arr"
)
func validateServiceURL(urlStr string) error {
if urlStr == "" {
return fmt.Errorf("URL cannot be empty")
}
// Try parsing as full URL first
u, err := url.Parse(urlStr)
if err == nil && u.Scheme != "" && u.Host != "" {
// It's a full URL, validate scheme
if u.Scheme != "http" && u.Scheme != "https" {
return fmt.Errorf("URL scheme must be http or https")
}
return nil
}
// Check if it's a host:port format (no scheme)
if strings.Contains(urlStr, ":") && !strings.Contains(urlStr, "://") {
// Try parsing with http:// prefix
testURL := "http://" + urlStr
u, err := url.Parse(testURL)
if err != nil {
return fmt.Errorf("invalid host:port format: %w", err)
}
if u.Host == "" {
return fmt.Errorf("host is required in host:port format")
}
// Validate port number
if u.Port() == "" {
return fmt.Errorf("port is required in host:port format")
}
return nil
}
return fmt.Errorf("invalid URL format: %s", urlStr)
}
func getCategory(ctx context.Context) string {
if category, ok := ctx.Value(categoryKey).(string); ok {
return category
}
return ""
}
func getHashes(ctx context.Context) []string {
if hashes, ok := ctx.Value(hashesKey).([]string); ok {
return hashes
}
return nil
}
func getArrFromContext(ctx context.Context) *arr.Arr {
if a, ok := ctx.Value(arrKey).(*arr.Arr); ok {
return a
}
return nil
}
func decodeAuthHeader(header string) (string, string, error) {
encodedTokens := strings.Split(header, " ")
if len(encodedTokens) != 2 {
return "", "", nil
}
encodedToken := encodedTokens[1]
bytes, err := base64.StdEncoding.DecodeString(encodedToken)
if err != nil {
return "", "", err
}
bearer := string(bytes)
	colonIndex := strings.LastIndex(bearer, ":")
	if colonIndex < 0 {
		return "", "", fmt.Errorf("malformed authorization header: missing host:token separator")
	}
	host := bearer[:colonIndex]
	token := bearer[colonIndex+1:]
return host, token, nil
}
func (q *QBit) categoryContext(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		category := strings.TrimSpace(r.URL.Query().Get("category"))
if category == "" {
// Get from form
_ = r.ParseForm()
category = r.Form.Get("category")
if category == "" {
// Get from multipart form
_ = r.ParseMultipartForm(32 << 20)
category = r.FormValue("category")
}
}
ctx := context.WithValue(r.Context(), categoryKey, strings.TrimSpace(category))
next.ServeHTTP(w, r.WithContext(ctx))
})
}
// authContext creates a middleware that extracts the Arr host and token from the Authorization header
// and adds it to the request context.
// This is used to identify the Arr instance for the request.
// Only a valid host and token will be added to the context/config. The rest are manual
func (q *QBit) authContext(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
host, token, err := decodeAuthHeader(r.Header.Get("Authorization"))
category := getCategory(r.Context())
arrs := store.Get().Arr()
// Check if arr exists
a := arrs.Get(category)
if a == nil {
// Arr is not configured, create a new one
downloadUncached := false
a = arr.New(category, "", "", false, false, &downloadUncached, "", "auto")
}
if err == nil {
host = strings.TrimSpace(host)
if host != "" {
a.Host = host
}
token = strings.TrimSpace(token)
if token != "" {
a.Token = token
}
}
a.Source = "auto"
if err := validateServiceURL(a.Host); err != nil {
// Return silently, no need to raise a problem. Just do not add the Arr to the context/config.json
next.ServeHTTP(w, r)
return
}
arrs.AddOrUpdate(a)
ctx := context.WithValue(r.Context(), arrKey, a)
next.ServeHTTP(w, r.WithContext(ctx))
})
}
func hashesContext(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_hashes := chi.URLParam(r, "hashes")
var hashes []string
if _hashes != "" {
hashes = strings.Split(_hashes, "|")
}
if hashes == nil {
// Get hashes from form
_ = r.ParseForm()
hashes = r.Form["hashes"]
}
for i, hash := range hashes {
hashes[i] = strings.TrimSpace(hash)
}
ctx := context.WithValue(r.Context(), hashesKey, hashes)
next.ServeHTTP(w, r.WithContext(ctx))
})
}

View File

@@ -1,371 +0,0 @@
package qbit
import (
"fmt"
"github.com/cavaliergopher/grab/v3"
"github.com/sirrobot01/decypharr/internal/utils"
debridTypes "github.com/sirrobot01/decypharr/pkg/debrid/types"
"io"
"net/http"
"os"
"path/filepath"
"sync"
"time"
)
func Download(client *grab.Client, url, filename string, progressCallback func(int64, int64)) error {
req, err := grab.NewRequest(filename, url)
if err != nil {
return err
}
resp := client.Do(req)
t := time.NewTicker(time.Second * 2)
defer t.Stop()
var lastReported int64
Loop:
for {
select {
case <-t.C:
current := resp.BytesComplete()
speed := int64(resp.BytesPerSecond())
if current != lastReported {
if progressCallback != nil {
progressCallback(current-lastReported, speed)
}
lastReported = current
}
case <-resp.Done:
break Loop
}
}
// Report final bytes
if progressCallback != nil {
progressCallback(resp.BytesComplete()-lastReported, 0)
}
return resp.Err()
}
func (q *QBit) ProcessManualFile(torrent *Torrent) (string, error) {
debridTorrent := torrent.DebridTorrent
q.logger.Info().Msgf("Downloading %d files...", len(debridTorrent.Files))
torrentPath := filepath.Join(q.DownloadFolder, debridTorrent.Arr.Name, utils.RemoveExtension(debridTorrent.OriginalFilename))
torrentPath = utils.RemoveInvalidChars(torrentPath)
err := os.MkdirAll(torrentPath, os.ModePerm)
if err != nil {
// add previous error to the error and return
return "", fmt.Errorf("failed to create directory: %s: %v", torrentPath, err)
}
q.downloadFiles(torrent, torrentPath)
return torrentPath, nil
}
func (q *QBit) downloadFiles(torrent *Torrent, parent string) {
debridTorrent := torrent.DebridTorrent
var wg sync.WaitGroup
totalSize := int64(0)
for _, file := range debridTorrent.Files {
totalSize += file.Size
}
debridTorrent.Mu.Lock()
debridTorrent.SizeDownloaded = 0 // Reset downloaded bytes
debridTorrent.Progress = 0 // Reset progress
debridTorrent.Mu.Unlock()
progressCallback := func(downloaded int64, speed int64) {
debridTorrent.Mu.Lock()
defer debridTorrent.Mu.Unlock()
torrent.Mu.Lock()
defer torrent.Mu.Unlock()
// Update total downloaded bytes
debridTorrent.SizeDownloaded += downloaded
debridTorrent.Speed = speed
// Calculate overall progress
if totalSize > 0 {
debridTorrent.Progress = float64(debridTorrent.SizeDownloaded) / float64(totalSize) * 100
}
q.UpdateTorrentMin(torrent, debridTorrent)
}
client := &grab.Client{
UserAgent: "Decypharr[QBitTorrent]",
HTTPClient: &http.Client{
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
},
},
}
errChan := make(chan error, len(debridTorrent.Files))
for _, file := range debridTorrent.Files {
if file.DownloadLink == nil {
q.logger.Info().Msgf("No download link found for %s", file.Name)
continue
}
wg.Add(1)
q.downloadSemaphore <- struct{}{}
go func(file debridTypes.File) {
defer wg.Done()
defer func() { <-q.downloadSemaphore }()
filename := file.Name
err := Download(
client,
file.DownloadLink.DownloadLink,
filepath.Join(parent, filename),
progressCallback,
)
if err != nil {
q.logger.Error().Msgf("Failed to download %s: %v", filename, err)
errChan <- err
} else {
q.logger.Info().Msgf("Downloaded %s", filename)
}
}(file)
}
wg.Wait()
close(errChan)
var errors []error
for err := range errChan {
if err != nil {
errors = append(errors, err)
}
}
if len(errors) > 0 {
q.logger.Error().Msgf("Errors occurred during download: %v", errors)
return
}
q.logger.Info().Msgf("Downloaded all files for %s", debridTorrent.Name)
}
func (q *QBit) ProcessSymlink(torrent *Torrent) (string, error) {
debridTorrent := torrent.DebridTorrent
files := debridTorrent.Files
if len(files) == 0 {
return "", fmt.Errorf("no video files found")
}
q.logger.Info().Msgf("Checking symlinks for %d files...", len(files))
rCloneBase := debridTorrent.MountPath
torrentPath, err := q.getTorrentPath(rCloneBase, debridTorrent) // /MyTVShow/
// This returns filename.ext for alldebrid instead of the parent folder filename/
torrentFolder := torrentPath
if err != nil {
return "", fmt.Errorf("failed to get torrent path: %v", err)
}
// Check if the torrent path is a file
torrentRclonePath := filepath.Join(rCloneBase, torrentPath) // leave it as is
if debridTorrent.Debrid == "alldebrid" && utils.IsMediaFile(torrentPath) {
// Alldebrid hotfix for single file torrents
torrentFolder = utils.RemoveExtension(torrentFolder)
torrentRclonePath = rCloneBase // /mnt/rclone/magnets/ // Remove the filename since it's in the root folder
}
return q.createSymlinks(debridTorrent, torrentRclonePath, torrentFolder) // verify cos we're using external webdav
}
func (q *QBit) createSymlinksWebdav(debridTorrent *debridTypes.Torrent, rclonePath, torrentFolder string) (string, error) {
files := debridTorrent.Files
symlinkPath := filepath.Join(q.DownloadFolder, debridTorrent.Arr.Name, torrentFolder) // /mnt/symlinks/{category}/MyTVShow/
err := os.MkdirAll(symlinkPath, os.ModePerm)
if err != nil {
return "", fmt.Errorf("failed to create directory: %s: %v", symlinkPath, err)
}
remainingFiles := make(map[string]debridTypes.File)
for _, file := range files {
remainingFiles[file.Name] = file
}
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
timeout := time.After(30 * time.Minute)
filePaths := make([]string, 0, len(files))
for len(remainingFiles) > 0 {
select {
case <-ticker.C:
entries, err := os.ReadDir(rclonePath)
if err != nil {
continue
}
// Check which files exist in this batch
for _, entry := range entries {
filename := entry.Name()
if file, exists := remainingFiles[filename]; exists {
fullFilePath := filepath.Join(rclonePath, filename)
fileSymlinkPath := filepath.Join(symlinkPath, file.Name)
if err := os.Symlink(fullFilePath, fileSymlinkPath); err != nil && !os.IsExist(err) {
q.logger.Debug().Msgf("Failed to create symlink: %s: %v", fileSymlinkPath, err)
} else {
filePaths = append(filePaths, fileSymlinkPath)
delete(remainingFiles, filename)
q.logger.Info().Msgf("File is ready: %s", file.Name)
}
}
}
case <-timeout:
q.logger.Warn().Msgf("Timeout waiting for files, %d files still pending", len(remainingFiles))
return symlinkPath, fmt.Errorf("timeout waiting for files")
}
}
if q.SkipPreCache {
return symlinkPath, nil
}
go func() {
if err := q.preCacheFile(debridTorrent.Name, filePaths); err != nil {
q.logger.Error().Msgf("Failed to pre-cache file: %s", err)
} else {
q.logger.Debug().Msgf("Pre-cached %d files", len(filePaths))
}
}() // Pre-cache the files in the background
// Pre-cache the first 256KB and 1MB of the file
return symlinkPath, nil
}
func (q *QBit) createSymlinks(debridTorrent *debridTypes.Torrent, rclonePath, torrentFolder string) (string, error) {
files := debridTorrent.Files
symlinkPath := filepath.Join(q.DownloadFolder, debridTorrent.Arr.Name, torrentFolder) // /mnt/symlinks/{category}/MyTVShow/
err := os.MkdirAll(symlinkPath, os.ModePerm)
if err != nil {
return "", fmt.Errorf("failed to create directory: %s: %v", symlinkPath, err)
}
remainingFiles := make(map[string]debridTypes.File)
for _, file := range files {
remainingFiles[file.Path] = file
}
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
timeout := time.After(30 * time.Minute)
filePaths := make([]string, 0, len(files))
for len(remainingFiles) > 0 {
select {
case <-ticker.C:
entries, err := os.ReadDir(rclonePath)
if err != nil {
continue
}
// Check which files exist in this batch
for _, entry := range entries {
filename := entry.Name()
if file, exists := remainingFiles[filename]; exists {
fullFilePath := filepath.Join(rclonePath, filename)
fileSymlinkPath := filepath.Join(symlinkPath, file.Name)
if err := os.Symlink(fullFilePath, fileSymlinkPath); err != nil && !os.IsExist(err) {
q.logger.Debug().Msgf("Failed to create symlink: %s: %v", fileSymlinkPath, err)
} else {
filePaths = append(filePaths, fileSymlinkPath)
delete(remainingFiles, filename)
q.logger.Info().Msgf("File is ready: %s", file.Name)
}
}
}
case <-timeout:
q.logger.Warn().Msgf("Timeout waiting for files, %d files still pending", len(remainingFiles))
return symlinkPath, fmt.Errorf("timeout waiting for files")
}
}
if q.SkipPreCache {
return symlinkPath, nil
}
go func() {
if err := q.preCacheFile(debridTorrent.Name, filePaths); err != nil {
q.logger.Error().Msgf("Failed to pre-cache file: %s", err)
} else {
q.logger.Trace().Msgf("Pre-cached %d files", len(filePaths))
}
}() // Pre-cache the files in the background
return symlinkPath, nil
}
func (q *QBit) getTorrentPath(rclonePath string, debridTorrent *debridTypes.Torrent) (string, error) {
for {
torrentPath, err := debridTorrent.GetMountFolder(rclonePath)
if err == nil {
q.logger.Debug().Msgf("Found torrent path: %s", torrentPath)
return torrentPath, err
}
time.Sleep(100 * time.Millisecond)
}
}
func (q *QBit) preCacheFile(name string, filePaths []string) error {
q.logger.Trace().Msgf("Pre-caching torrent: %s", name)
if len(filePaths) == 0 {
return fmt.Errorf("no file paths provided")
}
for _, filePath := range filePaths {
err := func(f string) error {
file, err := os.Open(f)
if err != nil {
if os.IsNotExist(err) {
// File has probably been moved by arr, return silently
return nil
}
return fmt.Errorf("failed to open file: %s: %v", f, err)
}
defer file.Close()
// Pre-cache the file header (first 256KB) using 16KB chunks.
if err := q.readSmallChunks(file, 0, 256*1024, 16*1024); err != nil {
return err
}
if err := q.readSmallChunks(file, 1024*1024, 64*1024, 16*1024); err != nil {
return err
}
return nil
}(filePath)
if err != nil {
return err
}
}
return nil
}
func (q *QBit) readSmallChunks(file *os.File, startPos int64, totalToRead int, chunkSize int) error {
_, err := file.Seek(startPos, 0)
if err != nil {
return err
}
buf := make([]byte, chunkSize)
bytesRemaining := totalToRead
for bytesRemaining > 0 {
toRead := chunkSize
if bytesRemaining < chunkSize {
toRead = bytesRemaining
}
n, err := file.Read(buf[:toRead])
if err != nil {
if err == io.EOF {
break
}
return err
}
bytesRemaining -= n
}
return nil
}

View File

@@ -1,114 +1,24 @@
package qbit
import (
-"context"
-"encoding/base64"
-"github.com/go-chi/chi/v5"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/pkg/arr"
-"github.com/sirrobot01/decypharr/pkg/service"
"net/http"
"path/filepath"
"strings"
)
-func decodeAuthHeader(header string) (string, string, error) {
-encodedTokens := strings.Split(header, " ")
-if len(encodedTokens) != 2 {
-return "", "", nil
-}
-encodedToken := encodedTokens[1]
-bytes, err := base64.StdEncoding.DecodeString(encodedToken)
-if err != nil {
-return "", "", err
-}
-bearer := string(bytes)
-colonIndex := strings.LastIndex(bearer, ":")
-host := bearer[:colonIndex]
-token := bearer[colonIndex+1:]
-return host, token, nil
-}
-func (q *QBit) CategoryContext(next http.Handler) http.Handler {
-return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-category := strings.Trim(r.URL.Query().Get("category"), "")
-if category == "" {
-// Get from form
-_ = r.ParseForm()
-category = r.Form.Get("category")
-if category == "" {
-// Get from multipart form
-_ = r.ParseMultipartForm(32 << 20)
-category = r.FormValue("category")
-}
-}
-ctx := context.WithValue(r.Context(), "category", strings.TrimSpace(category))
-next.ServeHTTP(w, r.WithContext(ctx))
-})
-}
-func (q *QBit) authContext(next http.Handler) http.Handler {
-return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-host, token, err := decodeAuthHeader(r.Header.Get("Authorization"))
-category := r.Context().Value("category").(string)
-svc := service.GetService()
-// Check if arr exists
-a := svc.Arr.Get(category)
-if a == nil {
-downloadUncached := false
-a = arr.New(category, "", "", false, false, &downloadUncached)
-}
-if err == nil {
-host = strings.TrimSpace(host)
-if host != "" {
-a.Host = host
-}
-token = strings.TrimSpace(token)
-if token != "" {
-a.Token = token
-}
-}
-svc.Arr.AddOrUpdate(a)
-ctx := context.WithValue(r.Context(), "arr", a)
-next.ServeHTTP(w, r.WithContext(ctx))
-})
-}
-func HashesCtx(next http.Handler) http.Handler {
-return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-_hashes := chi.URLParam(r, "hashes")
-var hashes []string
-if _hashes != "" {
-hashes = strings.Split(_hashes, "|")
-}
-if hashes == nil {
-// Get hashes from form
-_ = r.ParseForm()
-hashes = r.Form["hashes"]
-}
-for i, hash := range hashes {
-hashes[i] = strings.TrimSpace(hash)
-}
-ctx := context.WithValue(r.Context(), "hashes", hashes)
-next.ServeHTTP(w, r.WithContext(ctx))
-})
-}
func (q *QBit) handleLogin(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
-_arr := ctx.Value("arr").(*arr.Arr)
+_arr := getArrFromContext(ctx)
if _arr == nil {
-// No arr
+// Arr not in context, return OK
_, _ = w.Write([]byte("Ok."))
return
}
if err := _arr.Validate(); err != nil {
-q.logger.Info().Msgf("Error validating arr: %v", err)
+q.logger.Error().Err(err).Msgf("Error validating arr")
+http.Error(w, "Invalid arr configuration", http.StatusBadRequest)
}
_, _ = w.Write([]byte("Ok."))
}
@@ -122,7 +32,7 @@ func (q *QBit) handleWebAPIVersion(w http.ResponseWriter, r *http.Request) {
}
func (q *QBit) handlePreferences(w http.ResponseWriter, r *http.Request) {
-preferences := NewAppPreferences()
+preferences := getAppPreferences()
preferences.WebUiUsername = q.Username
preferences.SavePath = q.DownloadFolder
@@ -150,10 +60,10 @@ func (q *QBit) handleShutdown(w http.ResponseWriter, r *http.Request) {
func (q *QBit) handleTorrentsInfo(w http.ResponseWriter, r *http.Request) {
//log all url params
ctx := r.Context()
-category := ctx.Value("category").(string)
+category := getCategory(ctx)
filter := strings.Trim(r.URL.Query().Get("filter"), "")
-hashes, _ := ctx.Value("hashes").([]string)
+hashes := getHashes(ctx)
-torrents := q.Storage.GetAllSorted(category, filter, hashes, "added_on", false)
+torrents := q.storage.GetAllSorted(category, filter, hashes, "added_on", false)
request.JSONResponse(w, torrents, http.StatusOK)
}
@@ -164,13 +74,13 @@ func (q *QBit) handleTorrentsAdd(w http.ResponseWriter, r *http.Request) {
contentType := r.Header.Get("Content-Type")
if strings.Contains(contentType, "multipart/form-data") {
if err := r.ParseMultipartForm(32 << 20); err != nil {
-q.logger.Info().Msgf("Error parsing multipart form: %v", err)
+q.logger.Error().Err(err).Msgf("Error parsing multipart form")
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
} else if strings.Contains(contentType, "application/x-www-form-urlencoded") {
if err := r.ParseForm(); err != nil {
-q.logger.Info().Msgf("Error parsing form: %v", err)
+q.logger.Error().Err(err).Msgf("Error parsing form")
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
@@ -179,10 +89,18 @@ func (q *QBit) handleTorrentsAdd(w http.ResponseWriter, r *http.Request) {
return
}
-isSymlink := strings.ToLower(r.FormValue("sequentialDownload")) != "true"
+action := "symlink"
+if strings.ToLower(r.FormValue("sequentialDownload")) == "true" {
+action = "download"
+}
+debridName := r.FormValue("debrid")
category := r.FormValue("category")
+_arr := getArrFromContext(ctx)
+if _arr == nil {
+// Arr is not in context
+_arr = arr.New(category, "", "", false, false, nil, "", "")
+}
atleastOne := false
-ctx = context.WithValue(ctx, "isSymlink", isSymlink)
// Handle magnet URLs
if urls := r.FormValue("urls"); urls != "" {
@@ -191,8 +109,8 @@ func (q *QBit) handleTorrentsAdd(w http.ResponseWriter, r *http.Request) {
urlList = append(urlList, strings.TrimSpace(u))
}
for _, url := range urlList {
-if err := q.AddMagnet(ctx, url, category); err != nil {
+if err := q.addMagnet(ctx, url, _arr, debridName, action); err != nil {
-q.logger.Info().Msgf("Error adding magnet: %v", err)
+q.logger.Debug().Msgf("Error adding magnet: %s", err.Error())
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
@@ -204,8 +122,8 @@ func (q *QBit) handleTorrentsAdd(w http.ResponseWriter, r *http.Request) {
if r.MultipartForm != nil && r.MultipartForm.File != nil {
if files := r.MultipartForm.File["torrents"]; len(files) > 0 {
for _, fileHeader := range files {
-if err := q.AddTorrent(ctx, fileHeader, category); err != nil {
+if err := q.addTorrent(ctx, fileHeader, _arr, debridName, action); err != nil {
-q.logger.Info().Msgf("Error adding torrent: %v", err)
+q.logger.Debug().Err(err).Msgf("Error adding torrent")
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
@@ -224,14 +142,14 @@ func (q *QBit) handleTorrentsAdd(w http.ResponseWriter, r *http.Request) {
func (q *QBit) handleTorrentsDelete(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
-hashes, _ := ctx.Value("hashes").([]string)
+hashes := getHashes(ctx)
if len(hashes) == 0 {
http.Error(w, "No hashes provided", http.StatusBadRequest)
return
}
-category := ctx.Value("category").(string)
+category := getCategory(ctx)
for _, hash := range hashes {
-q.Storage.Delete(hash, category, false)
+q.storage.Delete(hash, category, false)
}
w.WriteHeader(http.StatusOK)
@@ -239,10 +157,10 @@
func (q *QBit) handleTorrentsPause(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
-hashes, _ := ctx.Value("hashes").([]string)
+hashes := getHashes(ctx)
-category := ctx.Value("category").(string)
+category := getCategory(ctx)
for _, hash := range hashes {
-torrent := q.Storage.Get(hash, category)
+torrent := q.storage.Get(hash, category)
if torrent == nil {
continue
}
@@ -254,10 +172,10 @@
func (q *QBit) handleTorrentsResume(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
-hashes, _ := ctx.Value("hashes").([]string)
+hashes := getHashes(ctx)
-category := ctx.Value("category").(string)
+category := getCategory(ctx)
for _, hash := range hashes {
-torrent := q.Storage.Get(hash, category)
+torrent := q.storage.Get(hash, category)
if torrent == nil {
continue
}
@@ -269,10 +187,10 @@
func (q *QBit) handleTorrentRecheck(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
-hashes, _ := ctx.Value("hashes").([]string)
+hashes := getHashes(ctx)
-category := ctx.Value("category").(string)
+category := getCategory(ctx)
for _, hash := range hashes {
-torrent := q.Storage.Get(hash, category)
+torrent := q.storage.Get(hash, category)
if torrent == nil {
continue
}
@@ -315,7 +233,7 @@ func (q *QBit) handleCreateCategory(w http.ResponseWriter, r *http.Request) {
func (q *QBit) handleTorrentProperties(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
hash := r.URL.Query().Get("hash")
-torrent := q.Storage.Get(hash, ctx.Value("category").(string))
+torrent := q.storage.Get(hash, getCategory(ctx))
properties := q.GetTorrentProperties(torrent)
request.JSONResponse(w, properties, http.StatusOK)
@@ -324,22 +242,21 @@
func (q *QBit) handleTorrentFiles(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
hash := r.URL.Query().Get("hash")
-torrent := q.Storage.Get(hash, ctx.Value("category").(string))
+torrent := q.storage.Get(hash, getCategory(ctx))
if torrent == nil {
return
}
-files := q.GetTorrentFiles(torrent)
-request.JSONResponse(w, files, http.StatusOK)
+request.JSONResponse(w, torrent.Files, http.StatusOK)
}
func (q *QBit) handleSetCategory(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
-category := ctx.Value("category").(string)
+category := getCategory(ctx)
-hashes, _ := ctx.Value("hashes").([]string)
+hashes := getHashes(ctx)
-torrents := q.Storage.GetAll("", "", hashes)
+torrents := q.storage.GetAll("", "", hashes)
for _, torrent := range torrents {
torrent.Category = category
-q.Storage.AddOrUpdate(torrent)
+q.storage.AddOrUpdate(torrent)
}
request.JSONResponse(w, nil, http.StatusOK)
}
@@ -351,14 +268,14 @@ func (q *QBit) handleAddTorrentTags(w http.ResponseWriter, r *http.Request) {
return
}
ctx := r.Context()
-hashes, _ := ctx.Value("hashes").([]string)
+hashes := getHashes(ctx)
tags := strings.Split(r.FormValue("tags"), ",")
for i, tag := range tags {
tags[i] = strings.TrimSpace(tag)
}
-torrents := q.Storage.GetAll("", "", hashes)
+torrents := q.storage.GetAll("", "", hashes)
for _, t := range torrents {
-q.SetTorrentTags(t, tags)
+q.setTorrentTags(t, tags)
}
request.JSONResponse(w, nil, http.StatusOK)
}
@@ -370,14 +287,14 @@ func (q *QBit) handleRemoveTorrentTags(w http.ResponseWriter, r *http.Request) {
return
}
ctx := r.Context()
-hashes, _ := ctx.Value("hashes").([]string)
+hashes := getHashes(ctx)
tags := strings.Split(r.FormValue("tags"), ",")
for i, tag := range tags {
tags[i] = strings.TrimSpace(tag)
}
-torrents := q.Storage.GetAll("", "", hashes)
+torrents := q.storage.GetAll("", "", hashes)
for _, torrent := range torrents {
-q.RemoveTorrentTags(torrent, tags)
+q.removeTorrentTags(torrent, tags)
}
request.JSONResponse(w, nil, http.StatusOK)
@@ -397,6 +314,6 @@ func (q *QBit) handleCreateTags(w http.ResponseWriter, r *http.Request) {
for i, tag := range tags {
tags[i] = strings.TrimSpace(tag)
}
-q.AddTags(tags)
+q.addTags(tags)
request.JSONResponse(w, nil, http.StatusOK)
}

View File

@@ -1,80 +0,0 @@
package qbit
import (
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/debrid"
"github.com/sirrobot01/decypharr/pkg/service"
"time"
"github.com/google/uuid"
"github.com/sirrobot01/decypharr/pkg/arr"
)
type ImportRequest struct {
ID string `json:"id"`
Path string `json:"path"`
Magnet *utils.Magnet `json:"magnet"`
Arr *arr.Arr `json:"arr"`
IsSymlink bool `json:"isSymlink"`
SeriesId int `json:"series"`
Seasons []int `json:"seasons"`
Episodes []string `json:"episodes"`
DownloadUncached bool `json:"downloadUncached"`
Failed bool `json:"failed"`
FailedAt time.Time `json:"failedAt"`
Reason string `json:"reason"`
Completed bool `json:"completed"`
CompletedAt time.Time `json:"completedAt"`
Async bool `json:"async"`
}
type ManualImportResponseSchema struct {
Priority string `json:"priority"`
Status string `json:"status"`
Result string `json:"result"`
Queued time.Time `json:"queued"`
Trigger string `json:"trigger"`
SendUpdatesToClient bool `json:"sendUpdatesToClient"`
UpdateScheduledTask bool `json:"updateScheduledTask"`
Id int `json:"id"`
}
func NewImportRequest(magnet *utils.Magnet, arr *arr.Arr, isSymlink, downloadUncached bool) *ImportRequest {
return &ImportRequest{
ID: uuid.NewString(),
Magnet: magnet,
Arr: arr,
Failed: false,
Completed: false,
Async: false,
IsSymlink: isSymlink,
DownloadUncached: downloadUncached,
}
}
func (i *ImportRequest) Fail(reason string) {
i.Failed = true
i.FailedAt = time.Now()
i.Reason = reason
}
func (i *ImportRequest) Complete() {
i.Completed = true
i.CompletedAt = time.Now()
}
func (i *ImportRequest) Process(q *QBit) (err error) {
// Use this for now.
// This sends the torrent to the arr
svc := service.GetService()
torrent := createTorrentFromMagnet(i.Magnet, i.Arr.Name, "manual")
debridTorrent, err := debrid.ProcessTorrent(svc.Debrid, i.Magnet, i.Arr, i.IsSymlink, i.DownloadUncached)
if err != nil {
return err
}
torrent = q.UpdateTorrentMin(torrent, debridTorrent)
q.Storage.AddOrUpdate(torrent)
go q.ProcessFiles(torrent, debridTorrent, i.Arr, i.IsSymlink)
return nil
}

View File

@@ -1,52 +1,38 @@
package qbit
import (
-"cmp"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
-"os"
-"path/filepath"
+"github.com/sirrobot01/decypharr/pkg/store"
)
type QBit struct {
-Username string `json:"username"`
-Password string `json:"password"`
-Port string `json:"port"`
-DownloadFolder string `json:"download_folder"`
-Categories []string `json:"categories"`
-Storage *TorrentStorage
-logger zerolog.Logger
-Tags []string
-RefreshInterval int
-SkipPreCache bool
-downloadSemaphore chan struct{}
+Username string
+Password string
+DownloadFolder string
+Categories []string
+storage *store.TorrentStorage
+logger zerolog.Logger
+Tags []string
}
func New() *QBit {
_cfg := config.Get()
cfg := _cfg.QBitTorrent
-port := cmp.Or(_cfg.Port, os.Getenv("QBIT_PORT"), "8282")
-refreshInterval := cmp.Or(cfg.RefreshInterval, 10)
return &QBit{
Username: cfg.Username,
Password: cfg.Password,
-Port: port,
DownloadFolder: cfg.DownloadFolder,
Categories: cfg.Categories,
-Storage: NewTorrentStorage(filepath.Join(_cfg.Path, "torrents.json")),
+storage: store.Get().Torrents(),
logger: logger.New("qbit"),
-RefreshInterval: refreshInterval,
-SkipPreCache: cfg.SkipPreCache,
-downloadSemaphore: make(chan struct{}, cmp.Or(cfg.MaxDownloads, 5)),
}
}
func (q *QBit) Reset() {
-if q.Storage != nil {
-q.Storage.Reset()
+if q.storage != nil {
+q.storage.Reset()
}
q.Tags = nil
-close(q.downloadSemaphore)
}

View File

@@ -7,12 +7,12 @@ import (
func (q *QBit) Routes() http.Handler {
r := chi.NewRouter()
-r.Use(q.CategoryContext)
+r.Use(q.categoryContext)
r.Group(func(r chi.Router) {
r.Use(q.authContext)
r.Post("/auth/login", q.handleLogin)
r.Route("/torrents", func(r chi.Router) {
-r.Use(HashesCtx)
+r.Use(hashesContext)
r.Get("/info", q.handleTorrentsInfo)
r.Post("/add", q.handleTorrentsAdd)
r.Post("/delete", q.handleTorrentsDelete)

View File

@@ -1,38 +1,35 @@
package qbit package qbit
import ( import (
"cmp"
"context" "context"
"fmt" "fmt"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils" "github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr" "github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/debrid/debrid" "github.com/sirrobot01/decypharr/pkg/store"
debridTypes "github.com/sirrobot01/decypharr/pkg/debrid/types"
"github.com/sirrobot01/decypharr/pkg/service"
"io" "io"
"mime/multipart" "mime/multipart"
"os"
"path/filepath"
"strings" "strings"
"time" "time"
) )
// All torrent related helpers goes here // All torrent-related helpers goes here
func (q *QBit) addMagnet(ctx context.Context, url string, arr *arr.Arr, debrid string, action string) error {
func (q *QBit) AddMagnet(ctx context.Context, url, category string) error {
magnet, err := utils.GetMagnetFromUrl(url) magnet, err := utils.GetMagnetFromUrl(url)
if err != nil { if err != nil {
return fmt.Errorf("error parsing magnet link: %w", err) return fmt.Errorf("error parsing magnet link: %w", err)
} }
err = q.Process(ctx, magnet, category) _store := store.Get()
importReq := store.NewImportRequest(debrid, q.DownloadFolder, magnet, arr, action, false, "", store.ImportTypeQBitTorrent)
err = _store.AddTorrent(ctx, importReq)
if err != nil { if err != nil {
return fmt.Errorf("failed to process torrent: %w", err) return fmt.Errorf("failed to process torrent: %w", err)
} }
return nil return nil
} }
func (q *QBit) AddTorrent(ctx context.Context, fileHeader *multipart.FileHeader, category string) error { func (q *QBit) addTorrent(ctx context.Context, fileHeader *multipart.FileHeader, arr *arr.Arr, debrid string, action string) error {
file, _ := fileHeader.Open() file, _ := fileHeader.Open()
defer file.Close() defer file.Close()
var reader io.Reader = file var reader io.Reader = file
@@ -40,226 +37,28 @@ func (q *QBit) AddTorrent(ctx context.Context, fileHeader *multipart.FileHeader,
if err != nil {
return fmt.Errorf("error reading file: %s \n %w", fileHeader.Filename, err)
}
-err = q.Process(ctx, magnet, category)
+_store := store.Get()
+importReq := store.NewImportRequest(debrid, q.DownloadFolder, magnet, arr, action, false, "", store.ImportTypeQBitTorrent)
+err = _store.AddTorrent(ctx, importReq)
if err != nil {
return fmt.Errorf("failed to process torrent: %w", err)
}
return nil
}
-func (q *QBit) Process(ctx context.Context, magnet *utils.Magnet, category string) error {
+func (q *QBit) ResumeTorrent(t *store.Torrent) bool {
svc := service.GetService()
torrent := createTorrentFromMagnet(magnet, category, "auto")
a, ok := ctx.Value("arr").(*arr.Arr)
if !ok {
return fmt.Errorf("arr not found in context")
}
isSymlink := ctx.Value("isSymlink").(bool)
debridTorrent, err := debrid.ProcessTorrent(svc.Debrid, magnet, a, isSymlink, false)
if err != nil || debridTorrent == nil {
if err == nil {
err = fmt.Errorf("failed to process torrent")
}
return err
}
torrent = q.UpdateTorrentMin(torrent, debridTorrent)
q.Storage.AddOrUpdate(torrent)
go q.ProcessFiles(torrent, debridTorrent, a, isSymlink) // We can send async for file processing not to delay the response
return nil
}
func (q *QBit) ProcessFiles(torrent *Torrent, debridTorrent *debridTypes.Torrent, arr *arr.Arr, isSymlink bool) {
svc := service.GetService()
client := svc.Debrid.GetClient(debridTorrent.Debrid)
downloadingStatuses := client.GetDownloadingStatus()
for debridTorrent.Status != "downloaded" {
q.logger.Debug().Msgf("%s <- (%s) Download Progress: %.2f%%", debridTorrent.Debrid, debridTorrent.Name, debridTorrent.Progress)
dbT, err := client.CheckStatus(debridTorrent, isSymlink)
if err != nil {
if dbT != nil && dbT.Id != "" {
// Delete the torrent if it was not downloaded
go func() {
_ = client.DeleteTorrent(dbT.Id)
}()
}
q.logger.Error().Msgf("Error checking status: %v", err)
q.MarkAsFailed(torrent)
go func() {
if err := arr.Refresh(); err != nil {
q.logger.Error().Msgf("Error refreshing arr: %v", err)
}
}()
return
}
debridTorrent = dbT
torrent = q.UpdateTorrentMin(torrent, debridTorrent)
// Exit the loop for downloading statuses to prevent memory buildup
if debridTorrent.Status == "downloaded" || !utils.Contains(downloadingStatuses, debridTorrent.Status) {
break
}
if !utils.Contains(client.GetDownloadingStatus(), debridTorrent.Status) {
break
}
time.Sleep(time.Duration(q.RefreshInterval) * time.Second)
}
var torrentSymlinkPath string
var err error
debridTorrent.Arr = arr
// Check if debrid supports webdav by checking cache
if isSymlink {
timer := time.Now()
cache, useWebdav := svc.Debrid.Caches[debridTorrent.Debrid]
if useWebdav {
q.logger.Info().Msgf("Using internal webdav for %s", debridTorrent.Debrid)
// Use webdav to download the file
if err := cache.AddTorrent(debridTorrent); err != nil {
q.logger.Error().Msgf("Error adding torrent to cache: %v", err)
q.MarkAsFailed(torrent)
return
}
rclonePath := filepath.Join(debridTorrent.MountPath, cache.GetTorrentFolder(debridTorrent)) // /mnt/remote/realdebrid/MyTVShow
torrentFolderNoExt := utils.RemoveExtension(debridTorrent.Name)
torrentSymlinkPath, err = q.createSymlinksWebdav(debridTorrent, rclonePath, torrentFolderNoExt) // /mnt/symlinks/{category}/MyTVShow/
} else {
// User is using either zurg or debrid webdav
torrentSymlinkPath, err = q.ProcessSymlink(torrent) // /mnt/symlinks/{category}/MyTVShow/
}
q.logger.Info().Msgf("Adding %s took %s", debridTorrent.Name, time.Since(timer))
} else {
torrentSymlinkPath, err = q.ProcessManualFile(torrent)
}
if err != nil {
q.MarkAsFailed(torrent)
go func() {
_ = client.DeleteTorrent(debridTorrent.Id)
}()
q.logger.Info().Msgf("Error: %v", err)
return
}
torrent.TorrentPath = torrentSymlinkPath
q.UpdateTorrent(torrent, debridTorrent)
go func() {
if err := request.SendDiscordMessage("download_complete", "success", torrent.discordContext()); err != nil {
q.logger.Error().Msgf("Error sending discord message: %v", err)
}
}()
if err := arr.Refresh(); err != nil {
q.logger.Error().Msgf("Error refreshing arr: %v", err)
}
}
func (q *QBit) MarkAsFailed(t *Torrent) *Torrent {
t.State = "error"
q.Storage.AddOrUpdate(t)
go func() {
if err := request.SendDiscordMessage("download_failed", "error", t.discordContext()); err != nil {
q.logger.Error().Msgf("Error sending discord message: %v", err)
}
}()
return t
}
func (q *QBit) UpdateTorrentMin(t *Torrent, debridTorrent *debridTypes.Torrent) *Torrent {
if debridTorrent == nil {
return t
}
addedOn, err := time.Parse(time.RFC3339, debridTorrent.Added)
if err != nil {
addedOn = time.Now()
}
totalSize := debridTorrent.Bytes
progress := (cmp.Or(debridTorrent.Progress, 0.0)) / 100.0
sizeCompleted := int64(float64(totalSize) * progress)
var speed int64
if debridTorrent.Speed != 0 {
speed = debridTorrent.Speed
}
var eta int
if speed != 0 {
eta = int((totalSize - sizeCompleted) / speed)
}
t.ID = debridTorrent.Id
t.Name = debridTorrent.Name
t.AddedOn = addedOn.Unix()
t.DebridTorrent = debridTorrent
t.Debrid = debridTorrent.Debrid
t.Size = totalSize
t.Completed = sizeCompleted
t.Downloaded = sizeCompleted
t.DownloadedSession = sizeCompleted
t.Uploaded = sizeCompleted
t.UploadedSession = sizeCompleted
t.AmountLeft = totalSize - sizeCompleted
t.Progress = progress
t.Eta = eta
t.Dlspeed = speed
t.Upspeed = speed
t.SavePath = filepath.Join(q.DownloadFolder, t.Category) + string(os.PathSeparator)
t.ContentPath = filepath.Join(t.SavePath, t.Name) + string(os.PathSeparator)
return t
}
func (q *QBit) UpdateTorrent(t *Torrent, debridTorrent *debridTypes.Torrent) *Torrent {
if debridTorrent == nil {
return t
}
if debridClient := service.GetDebrid().GetClient(debridTorrent.Debrid); debridClient != nil {
if debridTorrent.Status != "downloaded" {
_ = debridClient.UpdateTorrent(debridTorrent)
}
}
t = q.UpdateTorrentMin(t, debridTorrent)
t.ContentPath = t.TorrentPath + string(os.PathSeparator)
if t.IsReady() {
t.State = "pausedUP"
q.Storage.Update(t)
return t
}
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ticker.C:
if t.IsReady() {
t.State = "pausedUP"
q.Storage.Update(t)
return t
}
updatedT := q.UpdateTorrent(t, debridTorrent)
t = updatedT
case <-time.After(10 * time.Minute): // Add a timeout
return t
}
}
}
func (q *QBit) ResumeTorrent(t *Torrent) bool {
return true
}
-func (q *QBit) PauseTorrent(t *Torrent) bool {
+func (q *QBit) PauseTorrent(t *store.Torrent) bool {
return true
}
-func (q *QBit) RefreshTorrent(t *Torrent) bool {
+func (q *QBit) RefreshTorrent(t *store.Torrent) bool {
return true
}
-func (q *QBit) GetTorrentProperties(t *Torrent) *TorrentProperties {
+func (q *QBit) GetTorrentProperties(t *store.Torrent) *TorrentProperties {
return &TorrentProperties{
AdditionDate: t.AddedOn,
Comment: "Debrid Blackhole <https://github.com/sirrobot01/decypharr>",
@@ -284,21 +83,7 @@ func (q *QBit) GetTorrentProperties(t *Torrent) *TorrentProperties {
}
}
-func (q *QBit) GetTorrentFiles(t *Torrent) []*TorrentFile {
+func (q *QBit) setTorrentTags(t *store.Torrent, tags []string) bool {
files := make([]*TorrentFile, 0)
if t.DebridTorrent == nil {
return files
}
for _, file := range t.DebridTorrent.Files {
files = append(files, &TorrentFile{
Name: file.Path,
Size: file.Size,
})
}
return files
}
func (q *QBit) SetTorrentTags(t *Torrent, tags []string) bool {
torrentTags := strings.Split(t.Tags, ",")
for _, tag := range tags {
if tag == "" {
@@ -312,20 +97,20 @@ func (q *QBit) SetTorrentTags(t *Torrent, tags []string) bool {
}
}
t.Tags = strings.Join(torrentTags, ",")
-q.Storage.Update(t)
+q.storage.Update(t)
return true
}
-func (q *QBit) RemoveTorrentTags(t *Torrent, tags []string) bool {
+func (q *QBit) removeTorrentTags(t *store.Torrent, tags []string) bool {
torrentTags := strings.Split(t.Tags, ",")
newTorrentTags := utils.RemoveItem(torrentTags, tags...)
q.Tags = utils.RemoveItem(q.Tags, tags...)
t.Tags = strings.Join(newTorrentTags, ",")
-q.Storage.Update(t)
+q.storage.Update(t)
return true
}
-func (q *QBit) AddTags(tags []string) bool {
+func (q *QBit) addTags(tags []string) bool {
for _, tag := range tags {
if tag == "" {
continue
@@ -336,8 +121,3 @@ func (q *QBit) AddTags(tags []string) bool {
}
return true
}
func (q *QBit) RemoveTags(tags []string) bool {
q.Tags = utils.RemoveItem(q.Tags, tags...)
return true
}


@@ -1,11 +1,5 @@
package qbit
import (
"fmt"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"sync"
)
type BuildInfo struct {
Libtorrent string `json:"libtorrent"`
Bitness int `json:"bitness"`
@@ -172,76 +166,6 @@ type TorrentCategory struct {
SavePath string `json:"savePath"`
}
type Torrent struct {
ID string `json:"id"`
DebridTorrent *types.Torrent `json:"-"`
Debrid string `json:"debrid"`
TorrentPath string `json:"-"`
AddedOn int64 `json:"added_on,omitempty"`
AmountLeft int64 `json:"amount_left"`
AutoTmm bool `json:"auto_tmm"`
Availability float64 `json:"availability,omitempty"`
Category string `json:"category,omitempty"`
Completed int64 `json:"completed"`
CompletionOn int `json:"completion_on,omitempty"`
ContentPath string `json:"content_path"`
DlLimit int `json:"dl_limit"`
Dlspeed int64 `json:"dlspeed"`
Downloaded int64 `json:"downloaded"`
DownloadedSession int64 `json:"downloaded_session"`
Eta int `json:"eta"`
FlPiecePrio bool `json:"f_l_piece_prio,omitempty"`
ForceStart bool `json:"force_start,omitempty"`
Hash string `json:"hash"`
LastActivity int64 `json:"last_activity,omitempty"`
MagnetUri string `json:"magnet_uri,omitempty"`
MaxRatio int `json:"max_ratio,omitempty"`
MaxSeedingTime int `json:"max_seeding_time,omitempty"`
Name string `json:"name,omitempty"`
NumComplete int `json:"num_complete,omitempty"`
NumIncomplete int `json:"num_incomplete,omitempty"`
NumLeechs int `json:"num_leechs,omitempty"`
NumSeeds int `json:"num_seeds,omitempty"`
Priority int `json:"priority,omitempty"`
Progress float64 `json:"progress"`
Ratio int `json:"ratio,omitempty"`
RatioLimit int `json:"ratio_limit,omitempty"`
SavePath string `json:"save_path"`
SeedingTimeLimit int `json:"seeding_time_limit,omitempty"`
SeenComplete int64 `json:"seen_complete,omitempty"`
SeqDl bool `json:"seq_dl"`
Size int64 `json:"size,omitempty"`
State string `json:"state,omitempty"`
SuperSeeding bool `json:"super_seeding"`
Tags string `json:"tags,omitempty"`
TimeActive int `json:"time_active,omitempty"`
TotalSize int64 `json:"total_size,omitempty"`
Tracker string `json:"tracker,omitempty"`
UpLimit int64 `json:"up_limit,omitempty"`
Uploaded int64 `json:"uploaded,omitempty"`
UploadedSession int64 `json:"uploaded_session,omitempty"`
Upspeed int64 `json:"upspeed,omitempty"`
Source string `json:"source,omitempty"`
Mu sync.Mutex `json:"-"`
}
func (t *Torrent) IsReady() bool {
return (t.AmountLeft <= 0 || t.Progress == 1) && t.TorrentPath != ""
}
func (t *Torrent) discordContext() string {
format := `
**Name:** %s
**Arr:** %s
**Hash:** %s
**MagnetURI:** %s
**Debrid:** %s
`
return fmt.Sprintf(format, t.Name, t.Category, t.Hash, t.MagnetUri, t.Debrid)
}
type TorrentProperties struct {
AdditionDate int64 `json:"addition_date,omitempty"`
Comment string `json:"comment,omitempty"`
@@ -278,18 +202,7 @@ type TorrentProperties struct {
UpSpeedAvg int `json:"up_speed_avg,omitempty"`
}
-type TorrentFile struct {
+func getAppPreferences() *AppPreferences {
Index int `json:"index,omitempty"`
Name string `json:"name,omitempty"`
Size int64 `json:"size,omitempty"`
Progress int `json:"progress,omitempty"`
Priority int `json:"priority,omitempty"`
IsSeed bool `json:"is_seed,omitempty"`
PieceRange []int `json:"piece_range,omitempty"`
Availability float64 `json:"availability,omitempty"`
}
func NewAppPreferences() *AppPreferences {
preferences := &AppPreferences{
AddTrackers: "",
AddTrackersEnabled: false,

pkg/rar/rarar.go (new file, 701 lines)

@@ -0,0 +1,701 @@
// Source: https://github.com/eliasbenb/RARAR.py
// Note that this code only translates the original Python for RAR3 (not RAR5) support.
package rar
import (
"bytes"
"encoding/binary"
"errors"
"fmt"
"io"
"math/rand"
"net/http"
"strings"
"time"
"unicode/utf8"
)
// Constants from the Python code
var (
// Chunk sizes
DefaultChunkSize = 4096
HttpChunkSize = 32768
MaxSearchSize = 1 << 20 // 1MB
// RAR marker and block types
Rar3Marker = []byte{0x52, 0x61, 0x72, 0x21, 0x1A, 0x07, 0x00}
BlockFile = byte(0x74)
BlockHeader = byte(0x73)
BlockMarker = byte(0x72)
BlockEnd = byte(0x7B)
// Header flags
FlagDirectory = 0xE0
FlagHasHighSize = 0x100
FlagHasUnicodeName = 0x200
FlagHasData = 0x8000
)
// Compression methods
var CompressionMethods = map[byte]string{
0x30: "Store",
0x31: "Fastest",
0x32: "Fast",
0x33: "Normal",
0x34: "Good",
0x35: "Best",
}
// Error definitions
var (
ErrMarkerNotFound = errors.New("RAR marker not found within search limit")
ErrInvalidFormat = errors.New("invalid RAR format")
ErrNetworkError = errors.New("network error")
ErrRangeRequestsNotSupported = errors.New("server does not support range requests")
ErrCompressionNotSupported = errors.New("compression method not supported")
ErrDirectoryExtractNotSupported = errors.New("directory extract not supported")
)
// Name returns the base filename of the file
func (f *File) Name() string {
if i := strings.LastIndexAny(f.Path, "\\/"); i >= 0 {
return f.Path[i+1:]
}
return f.Path
}
func (f *File) ByteRange() *[2]int64 {
return &[2]int64{f.DataOffset, f.DataOffset + f.CompressedSize - 1}
}
func NewHttpFile(url string) (*HttpFile, error) {
client := &http.Client{}
file := &HttpFile{
URL: url,
Position: 0,
Client: client,
MaxRetries: 3,
RetryDelay: time.Second,
}
// Get file size
size, err := file.getFileSize()
if err != nil {
return nil, fmt.Errorf("failed to get file size: %w", err)
}
file.FileSize = size
return file, nil
}
func (f *HttpFile) doWithRetry(operation func() (interface{}, error)) (interface{}, error) {
var lastErr error
for attempt := 0; attempt <= f.MaxRetries; attempt++ {
if attempt > 0 {
// Jitter + exponential backoff delay
delay := f.RetryDelay * time.Duration(1<<uint(attempt-1))
jitter := time.Duration(rand.Int63n(int64(delay / 4)))
time.Sleep(delay + jitter)
}
result, err := operation()
if err == nil {
return result, nil
}
lastErr = err
// Only retry on network errors
if !errors.Is(err, ErrNetworkError) {
return nil, err
}
}
return nil, fmt.Errorf("after %d retries: %w", f.MaxRetries, lastErr)
}
// getFileSize gets the total file size from the server
func (f *HttpFile) getFileSize() (int64, error) {
result, err := f.doWithRetry(func() (interface{}, error) {
resp, err := f.Client.Head(f.URL)
if err != nil {
return int64(0), fmt.Errorf("%w: %v", ErrNetworkError, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return int64(0), fmt.Errorf("%w: unexpected status code: %d", ErrNetworkError, resp.StatusCode)
}
contentLength := resp.Header.Get("Content-Length")
if contentLength == "" {
return int64(0), fmt.Errorf("%w: content length not provided", ErrNetworkError)
}
var size int64
_, err = fmt.Sscanf(contentLength, "%d", &size)
if err != nil {
return int64(0), fmt.Errorf("%w: %v", ErrNetworkError, err)
}
return size, nil
})
if err != nil {
return 0, err
}
return result.(int64), nil
}
// ReadAt implements the io.ReaderAt interface
func (f *HttpFile) ReadAt(p []byte, off int64) (n int, err error) {
if len(p) == 0 {
return 0, nil
}
// Ensure we don't read past the end of the file
size := int64(len(p))
if f.FileSize > 0 {
remaining := f.FileSize - off
if remaining <= 0 {
return 0, io.EOF
}
if size > remaining {
size = remaining
p = p[:size]
}
}
result, err := f.doWithRetry(func() (interface{}, error) {
// Create HTTP request with Range header
req, err := http.NewRequest("GET", f.URL, nil)
if err != nil {
return 0, fmt.Errorf("%w: %v", ErrNetworkError, err)
}
end := off + size - 1
req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", off, end))
// Make the request
resp, err := f.Client.Do(req)
if err != nil {
return 0, fmt.Errorf("%w: %v", ErrNetworkError, err)
}
defer resp.Body.Close()
// Handle response
switch resp.StatusCode {
case http.StatusPartialContent:
// Read the content
bytesRead, err := io.ReadFull(resp.Body, p)
return bytesRead, err
case http.StatusOK:
// Some servers return the full content instead of partial
fullData, err := io.ReadAll(resp.Body)
if err != nil {
return 0, fmt.Errorf("%w: %v", ErrNetworkError, err)
}
if int64(len(fullData)) <= off {
return 0, io.EOF
}
end = off + size
if int64(len(fullData)) < end {
end = int64(len(fullData))
}
copy(p, fullData[off:end])
return int(end - off), nil
case http.StatusRequestedRangeNotSatisfiable:
// We're at EOF
return 0, io.EOF
default:
return 0, fmt.Errorf("%w: unexpected status code: %d", ErrNetworkError, resp.StatusCode)
}
})
if err != nil {
return 0, err
}
return result.(int), nil
}
// NewReader creates a new RAR3 reader
func NewReader(url string) (*Reader, error) {
file, err := NewHttpFile(url)
if err != nil {
return nil, err
}
reader := &Reader{
File: file,
ChunkSize: HttpChunkSize,
Files: make([]*File, 0),
}
// Find RAR marker
marker, err := reader.findMarker()
if err != nil {
return nil, err
}
reader.Marker = marker
pos := reader.Marker + int64(len(Rar3Marker)) // Skip marker block
headerData, err := reader.readBytes(pos, 7)
if err != nil {
return nil, err
}
if len(headerData) < 7 {
return nil, ErrInvalidFormat
}
headType := headerData[2]
headSize := int(binary.LittleEndian.Uint16(headerData[5:7]))
if headType != BlockHeader {
return nil, ErrInvalidFormat
}
// Store the position after the archive header
reader.HeaderEndPos = pos + int64(headSize)
return reader, nil
}
// readBytes reads a range of bytes from the file
func (r *Reader) readBytes(start int64, length int) ([]byte, error) {
if length <= 0 {
return []byte{}, nil
}
data := make([]byte, length)
n, err := r.File.ReadAt(data, start)
if err != nil && err != io.EOF {
return nil, err
}
if n < length {
// Partial read, return what we got
return data[:n], nil
}
return data, nil
}
// findMarker finds the RAR marker in the file
func (r *Reader) findMarker() (int64, error) {
// First try to find marker in the first chunk
firstChunkSize := 8192 // 8KB
chunk, err := r.readBytes(0, firstChunkSize)
if err != nil {
return 0, err
}
markerPos := bytes.Index(chunk, Rar3Marker)
if markerPos != -1 {
return int64(markerPos), nil
}
// If not found, continue searching
position := int64(firstChunkSize - len(Rar3Marker) + 1)
maxSearch := int64(MaxSearchSize)
for position < maxSearch {
chunkSize := min(r.ChunkSize, int(maxSearch-position))
chunk, err := r.readBytes(position, chunkSize)
if err != nil || len(chunk) == 0 {
break
}
markerPos := bytes.Index(chunk, Rar3Marker)
if markerPos != -1 {
return position + int64(markerPos), nil
}
// Move forward by chunk size minus the marker length
position += int64(max(1, len(chunk)-len(Rar3Marker)+1))
}
return 0, ErrMarkerNotFound
}
// decodeUnicode decodes RAR3 Unicode encoding
func decodeUnicode(asciiStr string, unicodeData []byte) string {
if len(unicodeData) == 0 {
return asciiStr
}
result := []rune{}
asciiPos := 0
dataPos := 0
highByte := byte(0)
for dataPos < len(unicodeData) {
flags := unicodeData[dataPos]
dataPos++
// Determine the number of character positions this flag byte controls
var flagBits uint
var flagCount int
var bitCount int
if flags&0x80 != 0 {
// Extended flag - controls up to 32 characters (16 bit pairs)
flagBits = uint(flags)
bitCount = 1
for (flagBits&(0x80>>bitCount) != 0) && dataPos < len(unicodeData) {
flagBits = ((flagBits & ((0x80 >> bitCount) - 1)) << 8) | uint(unicodeData[dataPos])
dataPos++
bitCount++
}
flagCount = bitCount * 4
} else {
// Simple flag - controls 4 characters (4 bit pairs)
flagBits = uint(flags)
flagCount = 4
}
// Process each 2-bit flag
for i := 0; i < flagCount; i++ {
if asciiPos >= len(asciiStr) && dataPos >= len(unicodeData) {
break
}
flagValue := (flagBits >> (i * 2)) & 0x03
switch flagValue {
case 0:
// Use ASCII character
if asciiPos < len(asciiStr) {
result = append(result, rune(asciiStr[asciiPos]))
asciiPos++
}
case 1:
// Unicode character with high byte 0
if dataPos < len(unicodeData) {
result = append(result, rune(unicodeData[dataPos]))
dataPos++
}
case 2:
// Unicode character with current high byte
if dataPos < len(unicodeData) {
lowByte := uint(unicodeData[dataPos])
dataPos++
result = append(result, rune(lowByte|(uint(highByte)<<8)))
}
case 3:
// Set new high byte
if dataPos < len(unicodeData) {
highByte = unicodeData[dataPos]
dataPos++
}
}
}
}
// Append any remaining ASCII characters
for asciiPos < len(asciiStr) {
result = append(result, rune(asciiStr[asciiPos]))
asciiPos++
}
return string(result)
}
// readFiles reads all file entries in the archive
func (r *Reader) readFiles() error {
pos := r.Marker
pos += int64(len(Rar3Marker)) // Skip marker block
// Read archive header
headerData, err := r.readBytes(pos, 7)
if err != nil {
return err
}
if len(headerData) < 7 {
return ErrInvalidFormat
}
headType := headerData[2]
headSize := int(binary.LittleEndian.Uint16(headerData[5:7]))
if headType != BlockHeader {
return ErrInvalidFormat
}
pos += int64(headSize) // Skip archive header
// Track whether we've found the end marker
foundEndMarker := false
// Process file entries
for !foundEndMarker {
headerData, err := r.readBytes(pos, 7)
if err != nil {
// Don't stop on EOF, might be temporary network error
// For definitive errors, return the error
if !errors.Is(err, io.EOF) && !errors.Is(err, ErrNetworkError) {
return fmt.Errorf("error reading block header: %w", err)
}
// If we get EOF or network error, retry a few times
retryCount := 0
maxRetries := 3
retryDelay := time.Second
for retryCount < maxRetries {
time.Sleep(retryDelay * time.Duration(1<<uint(retryCount)))
retryCount++
headerData, err = r.readBytes(pos, 7)
if err == nil && len(headerData) >= 7 {
break // Successfully got data
}
}
if len(headerData) < 7 {
return fmt.Errorf("failed to read block header after retries: %w", err)
}
}
if len(headerData) < 7 {
return fmt.Errorf("incomplete block header at position %d", pos)
}
headType := headerData[2]
headFlags := int(binary.LittleEndian.Uint16(headerData[3:5]))
headSize := int(binary.LittleEndian.Uint16(headerData[5:7]))
if headType == BlockEnd {
// End of archive
foundEndMarker = true
break
}
if headType == BlockFile {
// Get complete header data
completeHeader, err := r.readBytes(pos, headSize)
if err != nil || len(completeHeader) < headSize {
// Retry logic for incomplete headers
retryCount := 0
maxRetries := 3
retryDelay := time.Second
for retryCount < maxRetries && (err != nil || len(completeHeader) < headSize) {
time.Sleep(retryDelay * time.Duration(1<<uint(retryCount)))
retryCount++
completeHeader, err = r.readBytes(pos, headSize)
if err == nil && len(completeHeader) >= headSize {
break // Successfully got data
}
}
if len(completeHeader) < headSize {
return fmt.Errorf("failed to read complete file header after retries: %w", err)
}
}
fileInfo, err := r.parseFileHeader(completeHeader, pos)
if err == nil && fileInfo != nil {
r.Files = append(r.Files, fileInfo)
pos = fileInfo.NextOffset
} else {
pos += int64(headSize)
}
} else {
// Skip non-file block
pos += int64(headSize)
// Skip data if present
if headFlags&FlagHasData != 0 {
// Read data size
sizeData, err := r.readBytes(pos-4, 4)
if err != nil || len(sizeData) < 4 {
// Retry logic for data size read errors
retryCount := 0
maxRetries := 3
retryDelay := time.Second
for retryCount < maxRetries && (err != nil || len(sizeData) < 4) {
time.Sleep(retryDelay * time.Duration(1<<uint(retryCount)))
retryCount++
sizeData, err = r.readBytes(pos-4, 4)
if err == nil && len(sizeData) >= 4 {
break // Successfully got data
}
}
if len(sizeData) < 4 {
return fmt.Errorf("failed to read data size after retries: %w", err)
}
}
dataSize := int64(binary.LittleEndian.Uint32(sizeData))
pos += dataSize
}
}
}
if !foundEndMarker {
return fmt.Errorf("end marker not found in archive")
}
return nil
}
// parseFileHeader parses a file header and returns file info
func (r *Reader) parseFileHeader(headerData []byte, position int64) (*File, error) {
if len(headerData) < 7 {
return nil, fmt.Errorf("header data too short")
}
headType := headerData[2]
headFlags := int(binary.LittleEndian.Uint16(headerData[3:5]))
headSize := int(binary.LittleEndian.Uint16(headerData[5:7]))
if headType != BlockFile {
return nil, fmt.Errorf("not a file block")
}
// Check if we have enough data
if len(headerData) < 32 {
return nil, fmt.Errorf("file header too short")
}
// Parse basic file header fields
packSize := binary.LittleEndian.Uint32(headerData[7:11])
unpackSize := binary.LittleEndian.Uint32(headerData[11:15])
// fileOS := headerData[15]
fileCRC := binary.LittleEndian.Uint32(headerData[16:20])
// fileTime := binary.LittleEndian.Uint32(headerData[20:24])
// unpVer := headerData[24]
method := headerData[25]
nameSize := binary.LittleEndian.Uint16(headerData[26:28])
// fileAttr := binary.LittleEndian.Uint32(headerData[28:32])
// Handle high pack/unp sizes
highPackSize := uint32(0)
highUnpSize := uint32(0)
offset := 32 // Start after basic header fields
if headFlags&FlagHasHighSize != 0 {
if offset+8 <= len(headerData) {
highPackSize = binary.LittleEndian.Uint32(headerData[offset : offset+4])
highUnpSize = binary.LittleEndian.Uint32(headerData[offset+4 : offset+8])
}
offset += 8
}
// Calculate actual sizes
fullPackSize := int64(packSize) + (int64(highPackSize) << 32)
fullUnpSize := int64(unpackSize) + (int64(highUnpSize) << 32)
// Read filename
var fileName string
if offset+int(nameSize) <= len(headerData) {
fileNameBytes := headerData[offset : offset+int(nameSize)]
if headFlags&FlagHasUnicodeName != 0 {
zeroPos := bytes.IndexByte(fileNameBytes, 0)
if zeroPos != -1 {
// Try UTF-8 first
asciiPart := fileNameBytes[:zeroPos]
if utf8.Valid(asciiPart) {
fileName = string(asciiPart)
} else {
// Fall back to custom decoder
asciiStr := string(asciiPart)
unicodePart := fileNameBytes[zeroPos+1:]
fileName = decodeUnicode(asciiStr, unicodePart)
}
} else {
// No null byte
if utf8.Valid(fileNameBytes) {
fileName = string(fileNameBytes)
} else {
fileName = string(fileNameBytes) // Last resort
}
}
} else {
// Non-Unicode filename
if utf8.Valid(fileNameBytes) {
fileName = string(fileNameBytes)
} else {
fileName = string(fileNameBytes) // Fallback
}
}
} else {
fileName = fmt.Sprintf("UnknownFile%d", len(r.Files))
}
isDirectory := (headFlags & FlagDirectory) == FlagDirectory
// Calculate data offsets
dataOffset := position + int64(headSize)
nextOffset := dataOffset
// Only add data size if it's not a directory and has data
if !isDirectory && headFlags&FlagHasData != 0 {
nextOffset += fullPackSize
}
return &File{
Path: fileName,
Size: fullUnpSize,
CompressedSize: fullPackSize,
Method: method,
CRC: fileCRC,
IsDirectory: isDirectory,
DataOffset: dataOffset,
NextOffset: nextOffset,
}, nil
}
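The 64-bit size reconstruction in parseFileHeader combines the base 32-bit size field with the optional high dword present when `FlagHasHighSize` is set. With hypothetical field values:

```go
package main

import "fmt"

func main() {
	// RAR3 stores sizes as a 32-bit field plus an optional 32-bit "high"
	// field; the full size is low + high<<32. Values here are hypothetical.
	packSize := uint32(16)
	highPackSize := uint32(2)
	fullPackSize := int64(packSize) + (int64(highPackSize) << 32)
	fmt.Println(fullPackSize) // 8589934608, i.e. 2*4GiB + 16 bytes
}
```

Without the high dword, any member larger than 4 GiB would wrap around, which is why the parser advances `offset` by 8 whenever the flag is present.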
// GetFiles returns all files in the archive
func (r *Reader) GetFiles() ([]*File, error) {
if len(r.Files) == 0 {
err := r.readFiles()
if err != nil {
return nil, err
}
}
return r.Files, nil
}
// ExtractFile extracts a file from the archive
func (r *Reader) ExtractFile(file *File) ([]byte, error) {
if file.IsDirectory {
return nil, ErrDirectoryExtractNotSupported
}
// Only support "Store" method
if file.Method != 0x30 { // 0x30 = "Store"
return nil, ErrCompressionNotSupported
}
return r.readBytes(file.DataOffset, int(file.CompressedSize))
}
// Helper functions
func min(a, b int) int {
if a < b {
return a
}
return b
}
func max(a, b int) int {
if a > b {
return a
}
return b
}

pkg/rar/types.go (new file, 37 lines)

@@ -0,0 +1,37 @@
package rar
import (
"net/http"
"time"
)
// File represents a file entry in a RAR archive
type File struct {
Path string
Size int64
CompressedSize int64
Method byte
CRC uint32
IsDirectory bool
DataOffset int64
NextOffset int64
}
// Access point for a RAR archive served through HTTP
type HttpFile struct {
URL string
Position int64
Client *http.Client
FileSize int64
MaxRetries int
RetryDelay time.Duration
}
// Reader reads RAR3 format archives
type Reader struct {
File *HttpFile
ChunkSize int
Marker int64
HeaderEndPos int64 // Position after the archive header
Files []*File
}


@@ -1,159 +0,0 @@
package repair
//func (r *Repair) clean(job *Job) error {
// // Create a new error group
// g, ctx := errgroup.WithContext(context.Background())
//
// uniqueItems := make(map[string]string)
// mu := sync.Mutex{}
//
// // Limit concurrent goroutines
// g.SetLimit(10)
//
// for _, a := range job.Arrs {
// a := a // Capture range variable
// g.Go(func() error {
// // Check if context was canceled
// select {
// case <-ctx.Done():
// return ctx.Err()
// default:
// }
//
// items, err := r.cleanArr(job, a, "")
// if err != nil {
// r.logger.Error().Err(err).Msgf("Error cleaning %s", a)
// return err
// }
//
// // Safely append the found items to the shared slice
// if len(items) > 0 {
// mu.Lock()
// for k, v := range items {
// uniqueItems[k] = v
// }
// mu.Unlock()
// }
//
// return nil
// })
// }
//
// if err := g.Wait(); err != nil {
// return err
// }
//
// if len(uniqueItems) == 0 {
// job.CompletedAt = time.Now()
// job.Status = JobCompleted
//
// go func() {
// if err := request.SendDiscordMessage("repair_clean_complete", "success", job.discordContext()); err != nil {
// r.logger.Error().Msgf("Error sending discord message: %v", err)
// }
// }()
//
// return nil
// }
//
// cache := r.deb.Caches["realdebrid"]
// if cache == nil {
// return fmt.Errorf("cache not found")
// }
// torrents := cache.GetTorrents()
//
// dangling := make([]string, 0)
// for _, t := range torrents {
// if _, ok := uniqueItems[t.Name]; !ok {
// dangling = append(dangling, t.Id)
// }
// }
//
// r.logger.Info().Msgf("Found %d delapitated items", len(dangling))
//
// if len(dangling) == 0 {
// job.CompletedAt = time.Now()
// job.Status = JobCompleted
// return nil
// }
//
// client := r.deb.Clients["realdebrid"]
// if client == nil {
// return fmt.Errorf("client not found")
// }
// for _, id := range dangling {
// err := client.DeleteTorrent(id)
// if err != nil {
// return err
// }
// }
//
// return nil
//}
//
//func (r *Repair) cleanArr(j *Job, _arr string, tmdbId string) (map[string]string, error) {
// uniqueItems := make(map[string]string)
// a := r.arrs.Get(_arr)
//
// r.logger.Info().Msgf("Starting repair for %s", a.Name)
// media, err := a.GetMedia(tmdbId)
// if err != nil {
// r.logger.Info().Msgf("Failed to get %s media: %v", a.Name, err)
// return uniqueItems, err
// }
//
// // Create a new error group
// g, ctx := errgroup.WithContext(context.Background())
//
// mu := sync.Mutex{}
//
// // Limit concurrent goroutines
// g.SetLimit(runtime.NumCPU() * 4)
//
// for _, m := range media {
// m := m // Create a new variable scoped to the loop iteration
// g.Go(func() error {
// // Check if context was canceled
// select {
// case <-ctx.Done():
// return ctx.Err()
// default:
// }
//
// u := r.getUniquePaths(m)
// for k, v := range u {
// mu.Lock()
// uniqueItems[k] = v
// mu.Unlock()
// }
// return nil
// })
// }
//
// if err := g.Wait(); err != nil {
// return uniqueItems, err
// }
//
// r.logger.Info().Msgf("Repair completed for %s. %d unique items", a.Name, len(uniqueItems))
// return uniqueItems, nil
//}
//func (r *Repair) getUniquePaths(media arr.Content) map[string]string {
// // Use zurg setup to check file availability with zurg
// // This reduces bandwidth usage significantly
//
// uniqueParents := make(map[string]string)
// files := media.Files
// for _, file := range files {
// target := getSymlinkTarget(file.Path)
// if target != "" {
// file.IsSymlink = true
// dir, f := filepath.Split(target)
// parent := filepath.Base(filepath.Clean(dir))
// // Set target path folder/file.mkv
// file.TargetPath = f
// uniqueParents[parent] = target
// }
// }
// return uniqueParents
//}
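The commented-out getUniquePaths above resolves each library file's symlink target and groups targets by their parent directory name. A minimal, self-contained sketch of that resolution step (getSymlinkTargetSketch and uniqueParents are hypothetical stand-ins for the package's own helpers, not the project's API):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// getSymlinkTargetSketch resolves the target of a symlink, returning ""
// when the path is missing or not a symlink — mirroring how the code
// above only processes files that resolve to a target.
func getSymlinkTargetSketch(path string) string {
	info, err := os.Lstat(path)
	if err != nil || info.Mode()&os.ModeSymlink == 0 {
		return ""
	}
	target, err := os.Readlink(path)
	if err != nil {
		return ""
	}
	return target
}

// uniqueParents groups symlink targets by parent directory name,
// e.g. .../torrents/Movie/movie.mkv is keyed under "Movie".
func uniqueParents(paths []string) map[string]string {
	parents := make(map[string]string)
	for _, p := range paths {
		if target := getSymlinkTargetSketch(p); target != "" {
			dir, _ := filepath.Split(target)
			parents[filepath.Base(filepath.Clean(dir))] = target
		}
	}
	return parents
}

func main() {
	tmp, _ := os.MkdirTemp("", "repair")
	defer os.RemoveAll(tmp)
	// Simulate a mount layout: a real target file plus a symlink to it
	targetDir := filepath.Join(tmp, "torrents", "Movie")
	_ = os.MkdirAll(targetDir, 0o755)
	target := filepath.Join(targetDir, "movie.mkv")
	_ = os.WriteFile(target, []byte("x"), 0o644)
	link := filepath.Join(tmp, "movie.mkv")
	_ = os.Symlink(target, link)

	fmt.Println(uniqueParents([]string{link}))
}
```

Keying on the parent directory is what lets the repair pass compare a library symlink against the torrent folder it should point into.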


@@ -3,6 +3,8 @@ package repair
 import (
 	"fmt"
 	"github.com/sirrobot01/decypharr/pkg/arr"
+	"github.com/sirrobot01/decypharr/pkg/debrid/store"
+	"github.com/sirrobot01/decypharr/pkg/debrid/types"
 	"os"
 	"path/filepath"
 )
@@ -82,3 +84,96 @@ func collectFiles(media arr.Content) map[string][]arr.ContentFile {
 	}
 	return uniqueParents
 }
+func (r *Repair) checkTorrentFiles(torrentPath string, files []arr.ContentFile, clients map[string]types.Client, caches map[string]*store.Cache) []arr.ContentFile {
+	brokenFiles := make([]arr.ContentFile, 0)
+	emptyFiles := make([]arr.ContentFile, 0)
+	r.logger.Debug().Msgf("Checking %s", torrentPath)
+	// Get the debrid client
+	dir := filepath.Dir(torrentPath)
+	debridName := r.findDebridForPath(dir, clients)
+	if debridName == "" {
+		r.logger.Debug().Msgf("No debrid found for %s. Skipping", torrentPath)
+		return emptyFiles
+	}
+	cache, ok := caches[debridName]
+	if !ok {
+		r.logger.Debug().Msgf("No cache found for %s. Skipping", debridName)
+		return emptyFiles
+	}
+	tor, ok := r.torrentsMap.Load(debridName)
+	if !ok {
+		r.logger.Debug().Msgf("Could not find torrents for %s. Skipping", debridName)
+		return emptyFiles
+	}
+	torrentsMap := tor.(map[string]store.CachedTorrent)
+	// Check if torrent exists
+	torrentName := filepath.Clean(filepath.Base(torrentPath))
+	torrent, ok := torrentsMap[torrentName]
+	if !ok {
+		r.logger.Debug().Msgf("Can't find torrent %s in %s. Marking as broken", torrentName, debridName)
+		// Return all files as broken
+		return files
+	}
+	// Batch check files
+	filePaths := make([]string, len(files))
+	for i, file := range files {
+		filePaths[i] = file.TargetPath
+	}
+	brokenFilePaths := cache.GetBrokenFiles(&torrent, filePaths)
+	if len(brokenFilePaths) > 0 {
+		r.logger.Debug().Msgf("%d broken files found in %s", len(brokenFilePaths), torrentName)
+		// Create a set for O(1) lookup
+		brokenSet := make(map[string]bool, len(brokenFilePaths))
+		for _, brokenPath := range brokenFilePaths {
+			brokenSet[brokenPath] = true
+		}
+		// Filter broken files
+		for _, contentFile := range files {
+			if brokenSet[contentFile.TargetPath] {
+				brokenFiles = append(brokenFiles, contentFile)
+			}
+		}
+	}
+	return brokenFiles
+}
+
+func (r *Repair) findDebridForPath(dir string, clients map[string]types.Client) string {
+	// Check cache first
+	if debridName, exists := r.debridPathCache.Load(dir); exists {
+		return debridName.(string)
+	}
+	// Find debrid client
+	for _, client := range clients {
+		mountPath := client.GetMountPath()
+		if mountPath == "" {
+			continue
+		}
+		if filepath.Clean(mountPath) == filepath.Clean(dir) {
+			debridName := client.Name()
+			// Cache the result
+			r.debridPathCache.Store(dir, debridName)
+			return debridName
+		}
+	}
+	// Cache empty result to avoid repeated lookups
+	r.debridPathCache.Store(dir, "")
+	return ""
+}
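findDebridForPath above memoizes directory-to-debrid lookups in a sync.Map and, notably, caches misses too, so unresolvable directories are not re-scanned on every file. A standalone sketch of that pattern (the pathResolver type and its mount table are illustrative fixtures, not part of the codebase):

```go
package main

import (
	"fmt"
	"path/filepath"
	"sync"
)

// pathResolver memoizes dir -> name lookups, caching negative results
// ("" entries) alongside hits, as findDebridForPath does.
type pathResolver struct {
	mounts map[string]string // mount path -> debrid name (assumed fixture)
	cache  sync.Map          // dir -> name; "" marks a known miss
}

func (p *pathResolver) resolve(dir string) string {
	if v, ok := p.cache.Load(dir); ok {
		return v.(string) // cache hit: either a name or a cached miss
	}
	name := ""
	for mount, n := range p.mounts {
		if filepath.Clean(mount) == filepath.Clean(dir) {
			name = n
			break
		}
	}
	p.cache.Store(dir, name) // store hits and misses alike
	return name
}

func main() {
	r := &pathResolver{mounts: map[string]string{"/mnt/realdebrid/": "realdebrid"}}
	fmt.Println(r.resolve("/mnt/realdebrid")) // realdebrid
	fmt.Println(r.resolve("/mnt/unknown") == "")
}
```

Caching the empty string turns every repeated miss into a single map load instead of an O(clients) scan per file.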


@@ -3,6 +3,7 @@ package repair
 import (
 	"context"
 	"encoding/json"
+	"errors"
 	"fmt"
 	"github.com/go-co-op/gocron/v2"
 	"github.com/google/uuid"
@@ -12,7 +13,7 @@ import (
 	"github.com/sirrobot01/decypharr/internal/request"
 	"github.com/sirrobot01/decypharr/internal/utils"
 	"github.com/sirrobot01/decypharr/pkg/arr"
-	"github.com/sirrobot01/decypharr/pkg/debrid/debrid"
+	"github.com/sirrobot01/decypharr/pkg/debrid"
 	"golang.org/x/sync/errgroup"
 	"net"
 	"net/http"
@@ -29,9 +30,8 @@ import (
 type Repair struct {
 	Jobs map[string]*Job
 	arrs *arr.Storage
-	deb  *debrid.Engine
+	deb  *debrid.Storage
 	interval   string
-	runOnStart bool
 	ZurgURL   string
 	IsZurg    bool
 	useWebdav bool
@@ -40,7 +40,10 @@ type Repair struct {
 	filename  string
 	workers   int
 	scheduler gocron.Scheduler
-	ctx context.Context
+	debridPathCache sync.Map // debridPath:debridName cache. Emptied after each run
+	torrentsMap     sync.Map // debridName: map[string]store.CachedTorrent. Emptied after each run
+	ctx             context.Context
 }

 type JobStatus string
@@ -51,6 +54,7 @@ const (
 	JobFailed     JobStatus = "failed"
 	JobCompleted  JobStatus = "completed"
 	JobProcessing JobStatus = "processing"
+	JobCancelled  JobStatus = "cancelled"
 )

 type Job struct {
@@ -66,9 +70,12 @@ type Job struct {
 	Recurrent bool   `json:"recurrent"`
 	Error     string `json:"error"`
+
+	cancelFunc context.CancelFunc
+	ctx        context.Context
 }

-func New(arrs *arr.Storage, engine *debrid.Engine) *Repair {
+func New(arrs *arr.Storage, engine *debrid.Storage) *Repair {
 	cfg := config.Get()
 	workers := runtime.NumCPU() * 20
 	if cfg.Repair.Workers > 0 {
@@ -78,7 +85,6 @@ func New(arrs *arr.Storage, engine *debrid.Engine) *Repair {
 		arrs:     arrs,
 		logger:   logger.New("repair"),
 		interval: cfg.Repair.Interval,
-		runOnStart: cfg.Repair.RunOnStart,
 		ZurgURL:     cfg.Repair.ZurgURL,
 		useWebdav:   cfg.Repair.UseWebDav,
 		autoProcess: cfg.Repair.AutoProcess,
@@ -113,15 +119,6 @@
 }

 func (r *Repair) Start(ctx context.Context) error {
-	//r.ctx = ctx
-	if r.runOnStart {
-		r.logger.Info().Msgf("Running initial repair")
-		go func() {
-			if err := r.AddJob([]string{}, []string{}, r.autoProcess, true); err != nil {
-				r.logger.Error().Err(err).Msg("Error running initial repair")
-			}
-		}()
-	}
 	r.scheduler, _ = gocron.NewScheduler(gocron.WithLocation(time.Local))
@@ -217,10 +214,32 @@ func (r *Repair) newJob(arrsNames []string, mediaIDs []string) *Job {
 	}
 }

+// initRun initializes the repair run, setting up necessary configurations, checks and caches
+func (r *Repair) initRun(ctx context.Context) {
+	if r.useWebdav {
+		// Webdav use is enabled, initialize debrid torrent caches
+		caches := r.deb.Caches()
+		if len(caches) == 0 {
+			return
+		}
+		for name, cache := range caches {
+			r.torrentsMap.Store(name, cache.GetTorrentsName())
+		}
+	}
+}
+
+// onComplete is called when the repair job is completed
+func (r *Repair) onComplete() {
+	// Reset the per-run cache maps
+	r.torrentsMap = sync.Map{} // Clear the torrent map
+	r.debridPathCache = sync.Map{}
+}
+
 func (r *Repair) preRunChecks() error {
 	if r.useWebdav {
-		if len(r.deb.Caches) == 0 {
+		caches := r.deb.Caches()
+		if len(caches) == 0 {
 			return fmt.Errorf("no caches found")
 		}
 		return nil
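onComplete above drops all per-run state by replacing the sync.Map values wholesale; sync.Map versions predating Go 1.23 have no Clear method, so swapping in a fresh map is a common reset idiom. A small sketch of the pattern (runCache is hypothetical; it uses a pointer field, which sidesteps go vet's lock-copy warning that reassigning a sync.Map value would raise):

```go
package main

import (
	"fmt"
	"sync"
)

// runCache mirrors the per-run caches that onComplete resets.
type runCache struct {
	torrents *sync.Map
}

func newRunCache() *runCache {
	return &runCache{torrents: &sync.Map{}}
}

// reset swaps in a fresh map, dropping every entry at once. Only call
// this when no run is still reading the old map.
func (c *runCache) reset() {
	c.torrents = &sync.Map{}
}

func main() {
	c := newRunCache()
	c.torrents.Store("realdebrid", 42)
	c.reset()
	_, ok := c.torrents.Load("realdebrid")
	fmt.Println("present after reset:", ok) // present after reset: false
}
```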
@@ -254,31 +273,75 @@ func (r *Repair) AddJob(arrsNames []string, mediaIDs []string, autoProcess, recurrent bool) error {
 	job.AutoProcess = autoProcess
 	job.Recurrent = recurrent
 	r.reset(job)
+	job.ctx, job.cancelFunc = context.WithCancel(r.ctx)
 	r.Jobs[key] = job
 	go r.saveToFile()

 	go func() {
 		if err := r.repair(job); err != nil {
 			r.logger.Error().Err(err).Msg("Error running repair")
-			r.logger.Error().Err(err).Msg("Error running repair")
-			job.FailedAt = time.Now()
-			job.Error = err.Error()
-			job.Status = JobFailed
-			job.CompletedAt = time.Now()
+			if !errors.Is(job.ctx.Err(), context.Canceled) {
+				job.FailedAt = time.Now()
+				job.Error = err.Error()
+				job.Status = JobFailed
+				job.CompletedAt = time.Now()
+			} else {
+				job.FailedAt = time.Now()
+				job.Error = err.Error()
+				job.Status = JobFailed
+				job.CompletedAt = time.Now()
+			}
 		}
+		r.onComplete() // Clear caches and maps after job completion
 	}()
 	return nil
 }
+func (r *Repair) StopJob(id string) error {
+	job := r.GetJob(id)
+	if job == nil {
+		return fmt.Errorf("job %s not found", id)
+	}
+	// Check if job can be stopped
+	if job.Status != JobStarted && job.Status != JobProcessing {
+		return fmt.Errorf("job %s cannot be stopped (status: %s)", id, job.Status)
+	}
+	// Cancel the job
+	if job.cancelFunc != nil {
+		job.cancelFunc()
+		r.logger.Info().Msgf("Job %s cancellation requested", id)
+		go func() {
+			if job.Status == JobStarted || job.Status == JobProcessing {
+				job.Status = JobCancelled
+				job.BrokenItems = nil
+				job.ctx = nil // Clear context to prevent further processing
+				job.CompletedAt = time.Now()
+				job.Error = "Job was cancelled by user"
+				r.saveToFile()
+			}
+		}()
+		return nil
+	}
+	return fmt.Errorf("job %s cannot be cancelled", id)
+}
 func (r *Repair) repair(job *Job) error {
 	defer r.saveToFile()
 	if err := r.preRunChecks(); err != nil {
 		return err
 	}
+	// Initialize the run
+	r.initRun(job.ctx)
 	// Use a mutex to protect concurrent access to brokenItems
 	var mu sync.Mutex
 	brokenItems := map[string][]arr.ContentFile{}
-	g, ctx := errgroup.WithContext(r.ctx)
+	g, ctx := errgroup.WithContext(job.ctx)

 	for _, a := range job.Arrs {
 		a := a // Capture range variable
@@ -321,6 +384,14 @@ func (r *Repair) repair(job *Job) error {
 	// Wait for all goroutines to complete and check for errors
 	if err := g.Wait(); err != nil {
+		// Check if job was canceled
+		if errors.Is(ctx.Err(), context.Canceled) {
+			job.Status = JobCancelled
+			job.CompletedAt = time.Now()
+			job.Error = "Job was cancelled"
+			return fmt.Errorf("job cancelled")
+		}
 		job.FailedAt = time.Now()
 		job.Error = err.Error()
 		job.Status = JobFailed
@@ -367,7 +438,7 @@ func (r *Repair) repair(job *Job) error {
 	return nil
 }

-func (r *Repair) repairArr(j *Job, _arr string, tmdbId string) ([]arr.ContentFile, error) {
+func (r *Repair) repairArr(job *Job, _arr string, tmdbId string) ([]arr.ContentFile, error) {
 	brokenItems := make([]arr.ContentFile, 0)
 	a := r.arrs.Get(_arr)
@@ -384,9 +455,9 @@ func (r *Repair) repairArr(job *Job, _arr string, tmdbId string) ([]arr.ContentFile, error) {
 		return brokenItems, nil
 	}
 	// Check first media to confirm mounts are accessible
-	if !r.isMediaAccessible(media[0]) {
-		r.logger.Info().Msgf("Skipping repair. Parent directory not accessible. Check your mounts")
-		return brokenItems, nil
+	if err := r.checkMountUp(media); err != nil {
+		r.logger.Error().Err(err).Msgf("Mount check failed for %s", a.Name)
+		return brokenItems, fmt.Errorf("mount check failed: %w", err)
 	}
 	// Mutex for brokenItems
@@ -400,14 +471,14 @@ func (r *Repair) repairArr(job *Job, _arr string, tmdbId string) ([]arr.ContentFile, error) {
 			defer wg.Done()
 			for m := range workerChan {
 				select {
-				case <-r.ctx.Done():
+				case <-job.ctx.Done():
 					return
 				default:
 				}
-				items := r.getBrokenFiles(m)
+				items := r.getBrokenFiles(job, m)
 				if items != nil {
 					r.logger.Debug().Msgf("Found %d broken files for %s", len(items), m.Title)
-					if j.AutoProcess {
+					if job.AutoProcess {
 						r.logger.Info().Msgf("Auto processing %d broken items for %s", len(items), m.Title)
 						// Delete broken items
@@ -429,16 +500,17 @@ func (r *Repair) repairArr(job *Job, _arr string, tmdbId string) ([]arr.ContentFile, error) {
 		}()
 	}

-	for _, m := range media {
-		select {
-		case <-r.ctx.Done():
-			break
-		default:
-			workerChan <- m
-		}
-	}
-	close(workerChan)
+	go func() {
+		defer close(workerChan)
+		for _, m := range media {
+			select {
+			case <-job.ctx.Done():
+				return
+			case workerChan <- m:
+			}
+		}
+	}()

 	wg.Wait()
 	if len(brokenItems) == 0 {
 		r.logger.Info().Msgf("No broken items found for %s", a.Name)
@@ -449,58 +521,60 @@ func (r *Repair) repairArr(job *Job, _arr string, tmdbId string) ([]arr.ContentFile, error) {
 	return brokenItems, nil
 }

-func (r *Repair) isMediaAccessible(m arr.Content) bool {
-	files := m.Files
-	if len(files) == 0 {
-		return false
-	}
-	firstFile := files[0]
-	r.logger.Debug().Msgf("Checking parent directory for %s", firstFile.Path)
-	//if _, err := os.Stat(firstFile.Path); os.IsNotExist(err) {
-	//	r.logger.Debug().Msgf("Parent directory not accessible for %s", firstFile.Path)
-	//	return false
-	//}
-	// Check symlink parent directory
-	symlinkPath := getSymlinkTarget(firstFile.Path)
-	r.logger.Debug().Msgf("Checking symlink parent directory for %s", symlinkPath)
-	if symlinkPath != "" {
-		parentSymlink := filepath.Dir(filepath.Dir(symlinkPath)) // /mnt/zurg/torrents/movie/movie.mkv -> /mnt/zurg/torrents
-		if _, err := os.Stat(parentSymlink); os.IsNotExist(err) {
-			return false
-		}
-	}
-	return true
+// checkMountUp checks if the mounts are accessible
+func (r *Repair) checkMountUp(media []arr.Content) error {
+	firstMedia := media[0]
+	for _, m := range media {
+		if len(m.Files) > 0 {
+			firstMedia = m
+			break
+		}
+	}
+	files := firstMedia.Files
+	if len(files) == 0 {
+		return fmt.Errorf("no files found in media %s", firstMedia.Title)
+	}
+	firstFile := files[0]
+	symlinkPath := getSymlinkTarget(firstFile.Path)
+	if symlinkPath == "" {
+		return fmt.Errorf("no symlink target found for %s", firstFile.Path)
+	}
+	r.logger.Debug().Msgf("Checking symlink parent directory for %s", symlinkPath)
+	parentSymlink := filepath.Dir(filepath.Dir(symlinkPath)) // /mnt/zurg/torrents/movie/movie.mkv -> /mnt/zurg/torrents
+	if _, err := os.Stat(parentSymlink); os.IsNotExist(err) {
+		return fmt.Errorf("parent directory %s not accessible for %s", parentSymlink, firstFile.Path)
+	}
+	return nil
 }
-func (r *Repair) getBrokenFiles(media arr.Content) []arr.ContentFile {
+func (r *Repair) getBrokenFiles(job *Job, media arr.Content) []arr.ContentFile {
 	if r.useWebdav {
-		return r.getWebdavBrokenFiles(media)
+		return r.getWebdavBrokenFiles(job, media)
 	} else if r.IsZurg {
-		return r.getZurgBrokenFiles(media)
+		return r.getZurgBrokenFiles(job, media)
 	} else {
-		return r.getFileBrokenFiles(media)
+		return r.getFileBrokenFiles(job, media)
 	}
 }

-func (r *Repair) getFileBrokenFiles(media arr.Content) []arr.ContentFile {
+func (r *Repair) getFileBrokenFiles(job *Job, media arr.Content) []arr.ContentFile {
 	// This checks the symlink target and tries to read a tiny bit of the file
 	brokenFiles := make([]arr.ContentFile, 0)
 	uniqueParents := collectFiles(media)
-	for parent, f := range uniqueParents {
-		// Check file stat first
-		firstFile := f[0]
-		// Read a tiny bit of the file
-		if err := fileIsReadable(firstFile.Path); err != nil {
-			r.logger.Debug().Msgf("Broken file found at: %s", parent)
-			brokenFiles = append(brokenFiles, f...)
-			continue
+	for parent, files := range uniqueParents {
+		// Check each file's readability
+		for _, file := range files {
+			if err := fileIsReadable(file.Path); err != nil {
+				r.logger.Debug().Msgf("Broken file found at: %s", parent)
+				brokenFiles = append(brokenFiles, file)
+			}
 		}
 	}
 	if len(brokenFiles) == 0 {
@@ -511,7 +585,7 @@ func (r *Repair) getFileBrokenFiles(job *Job, media arr.Content) []arr.ContentFile {
 	return brokenFiles
 }

-func (r *Repair) getZurgBrokenFiles(media arr.Content) []arr.ContentFile {
+func (r *Repair) getZurgBrokenFiles(job *Job, media arr.Content) []arr.ContentFile {
 	// Use zurg setup to check file availability with zurg
 	// This reduces bandwidth usage significantly
@@ -526,41 +600,49 @@ func (r *Repair) getZurgBrokenFiles(job *Job, media arr.Content) []arr.ContentFile {
 	}
 	client := request.New(request.WithTimeout(0), request.WithTransport(tr))
 	// Access zurg url + symlink folder + first file(encoded)
-	for parent, f := range uniqueParents {
+	for parent, files := range uniqueParents {
 		r.logger.Debug().Msgf("Checking %s", parent)
 		torrentName := url.PathEscape(filepath.Base(parent))
-		encodedFile := url.PathEscape(f[0].TargetPath)
-		fullURL := fmt.Sprintf("%s/http/__all__/%s/%s", r.ZurgURL, torrentName, encodedFile)
-		// Check file stat first
-		if _, err := os.Stat(f[0].Path); os.IsNotExist(err) {
-			r.logger.Debug().Msgf("Broken symlink found: %s", fullURL)
-			brokenFiles = append(brokenFiles, f...)
+		if len(files) == 0 {
+			r.logger.Debug().Msgf("No files found for %s. Skipping", torrentName)
 			continue
 		}
-		resp, err := client.Get(fullURL)
-		if err != nil {
-			r.logger.Error().Err(err).Msgf("Failed to reach %s", fullURL)
-			brokenFiles = append(brokenFiles, f...)
-			continue
-		}
-		if resp.StatusCode < 200 || resp.StatusCode >= 300 {
-			r.logger.Debug().Msgf("Failed to get download url for %s", fullURL)
-			resp.Body.Close()
-			brokenFiles = append(brokenFiles, f...)
-			continue
-		}
-		downloadUrl := resp.Request.URL.String()
-		resp.Body.Close()
-		if downloadUrl != "" {
-			r.logger.Trace().Msgf("Found download url: %s", downloadUrl)
-		} else {
-			r.logger.Debug().Msgf("Failed to get download url for %s", fullURL)
-			brokenFiles = append(brokenFiles, f...)
-			continue
+		for _, file := range files {
+			encodedFile := url.PathEscape(file.TargetPath)
+			fullURL := fmt.Sprintf("%s/http/__all__/%s/%s", r.ZurgURL, torrentName, encodedFile)
+			// Check file stat first
+			if _, err := os.Stat(file.Path); os.IsNotExist(err) {
+				r.logger.Debug().Msgf("Broken symlink found: %s", fullURL)
+				brokenFiles = append(brokenFiles, file)
+				continue
+			}
+			resp, err := client.Get(fullURL)
+			if err != nil {
+				r.logger.Error().Err(err).Msgf("Failed to reach %s", fullURL)
+				brokenFiles = append(brokenFiles, file)
+				continue
+			}
+			if resp.StatusCode < 200 || resp.StatusCode >= 300 {
+				r.logger.Debug().Msgf("Failed to get download url for %s", fullURL)
+				if err := resp.Body.Close(); err != nil {
+					return nil
+				}
+				brokenFiles = append(brokenFiles, file)
+				continue
+			}
+			downloadUrl := resp.Request.URL.String()
+			if err := resp.Body.Close(); err != nil {
+				return nil
+			}
+			if downloadUrl != "" {
+				r.logger.Trace().Msgf("Found download url: %s", downloadUrl)
+			} else {
+				r.logger.Debug().Msgf("Failed to get download url for %s", fullURL)
+				brokenFiles = append(brokenFiles, file)
+				continue
+			}
 		}
 	}
 	if len(brokenFiles) == 0 {
@@ -571,16 +653,16 @@ func (r *Repair) getZurgBrokenFiles(media arr.Content) []arr.ContentFile {
 	return brokenFiles
 }

-func (r *Repair) getWebdavBrokenFiles(media arr.Content) []arr.ContentFile {
+func (r *Repair) getWebdavBrokenFiles(job *Job, media arr.Content) []arr.ContentFile {
 	// Use internal webdav setup to check file availability
-	caches := r.deb.Caches
+	caches := r.deb.Caches()
 	if len(caches) == 0 {
 		r.logger.Info().Msg("No caches found. Can't use webdav")
 		return nil
 	}
-	clients := r.deb.Clients
+	clients := r.deb.Clients()
 	if len(clients) == 0 {
 		r.logger.Info().Msg("No clients found. Can't use webdav")
 		return nil
@@ -588,55 +670,18 @@ func (r *Repair) getWebdavBrokenFiles(job *Job, media arr.Content) []arr.ContentFile {
 	brokenFiles := make([]arr.ContentFile, 0)
 	uniqueParents := collectFiles(media)
-	// Access zurg url + symlink folder + first file(encoded)
-	for torrentPath, f := range uniqueParents {
-		r.logger.Debug().Msgf("Checking %s", torrentPath)
-		// Get the debrid first
-		dir := filepath.Dir(torrentPath)
-		debridName := ""
-		for _, client := range clients {
-			mountPath := client.GetMountPath()
-			if mountPath == "" {
-				continue
-			}
-			if filepath.Clean(mountPath) == filepath.Clean(dir) {
-				debridName = client.GetName()
-				break
-			}
-		}
-		if debridName == "" {
-			r.logger.Debug().Msgf("No debrid found for %s. Skipping", torrentPath)
-			continue
-		}
-		cache, ok := caches[debridName]
-		if !ok {
-			r.logger.Debug().Msgf("No cache found for %s. Skipping", debridName)
-			continue
-		}
-		// Check if torrent exists
-		torrentName := filepath.Clean(filepath.Base(torrentPath))
-		torrent := cache.GetTorrentByName(torrentName)
-		if torrent == nil {
-			r.logger.Debug().Msgf("No torrent found for %s. Skipping", torrentName)
-			brokenFiles = append(brokenFiles, f...)
-			continue
-		}
-		files := make([]string, 0)
-		for _, file := range f {
-			files = append(files, file.TargetPath)
-		}
-		if cache.IsTorrentBroken(torrent, files) {
-			r.logger.Debug().Msgf("[webdav] Broken symlink found: %s", torrentPath)
-			// Delete the torrent?
-			brokenFiles = append(brokenFiles, f...)
-			continue
-		}
-	}
+	for torrentPath, files := range uniqueParents {
+		select {
+		case <-job.ctx.Done():
+			return brokenFiles
+		default:
+		}
+		brokenFilesForTorrent := r.checkTorrentFiles(torrentPath, files, clients, caches)
+		if len(brokenFilesForTorrent) > 0 {
+			brokenFiles = append(brokenFiles, brokenFilesForTorrent...)
+		}
+	}
 	if len(brokenFiles) == 0 {
-		r.logger.Debug().Msgf("No broken files found for %s", media.Title)
 		return nil
 	}
 	r.logger.Debug().Msgf("%d broken files found for %s", len(brokenFiles), media.Title)
@@ -669,7 +714,6 @@ func (r *Repair) ProcessJob(id string) error {
 	if job == nil {
 		return fmt.Errorf("job %s not found", id)
 	}
-	// All validation checks remain the same
 	if job.Status != JobPending {
 		return fmt.Errorf("job %s not pending", id)
 	}
@@ -691,7 +735,11 @@ func (r *Repair) ProcessJob(id string) error {
 		return nil
 	}

-	g, ctx := errgroup.WithContext(r.ctx)
+	if job.ctx == nil || job.ctx.Err() != nil {
+		job.ctx, job.cancelFunc = context.WithCancel(r.ctx)
+	}
+
+	g, ctx := errgroup.WithContext(job.ctx)
 	g.SetLimit(r.workers)
 	for arrName, items := range brokenItems {

pkg/server/debug.go Normal file

@@ -0,0 +1,122 @@
package server
import (
"fmt"
"github.com/go-chi/chi/v5"
"github.com/sirrobot01/decypharr/internal/request"
debridTypes "github.com/sirrobot01/decypharr/pkg/debrid/types"
"github.com/sirrobot01/decypharr/pkg/store"
"net/http"
"runtime"
)
func (s *Server) handleIngests(w http.ResponseWriter, r *http.Request) {
ingests := make([]debridTypes.IngestData, 0)
_store := store.Get()
debrids := _store.Debrid()
if debrids == nil {
http.Error(w, "Debrid service is not enabled", http.StatusInternalServerError)
return
}
for _, cache := range debrids.Caches() {
if cache == nil {
s.logger.Error().Msg("Debrid cache is nil, skipping")
continue
}
data, err := cache.GetIngests()
if err != nil {
s.logger.Error().Err(err).Msg("Failed to get ingests from debrid cache")
http.Error(w, "Failed to get ingests: "+err.Error(), http.StatusInternalServerError)
return
}
ingests = append(ingests, data...)
}
request.JSONResponse(w, ingests, 200)
}
func (s *Server) handleIngestsByDebrid(w http.ResponseWriter, r *http.Request) {
debridName := chi.URLParam(r, "debrid")
if debridName == "" {
http.Error(w, "Debrid name is required", http.StatusBadRequest)
return
}
_store := store.Get()
debrids := _store.Debrid()
if debrids == nil {
http.Error(w, "Debrid service is not enabled", http.StatusInternalServerError)
return
}
caches := debrids.Caches()
cache, exists := caches[debridName]
if !exists {
http.Error(w, "Debrid cache not found: "+debridName, http.StatusNotFound)
return
}
data, err := cache.GetIngests()
if err != nil {
s.logger.Error().Err(err).Msg("Failed to get ingests from debrid cache")
http.Error(w, "Failed to get ingests: "+err.Error(), http.StatusInternalServerError)
return
}
request.JSONResponse(w, data, 200)
}
func (s *Server) handleStats(w http.ResponseWriter, r *http.Request) {
var memStats runtime.MemStats
runtime.ReadMemStats(&memStats)
stats := map[string]any{
// Memory stats
"heap_alloc_mb": fmt.Sprintf("%.2fMB", float64(memStats.HeapAlloc)/1024/1024),
"total_alloc_mb": fmt.Sprintf("%.2fMB", float64(memStats.TotalAlloc)/1024/1024),
"memory_used": fmt.Sprintf("%.2fMB", float64(memStats.Sys)/1024/1024),
// GC stats
"gc_cycles": memStats.NumGC,
// Goroutine stats
"goroutines": runtime.NumGoroutine(),
// System info
"num_cpu": runtime.NumCPU(),
// OS info
"os": runtime.GOOS,
"arch": runtime.GOARCH,
"go_version": runtime.Version(),
}
debrids := store.Get().Debrid()
if debrids == nil {
request.JSONResponse(w, stats, http.StatusOK)
return
}
clients := debrids.Clients()
caches := debrids.Caches()
profiles := make([]*debridTypes.Profile, 0)
for debridName, client := range clients {
profile, err := client.GetProfile()
if err != nil {
s.logger.Error().Err(err).Msg("Failed to get debrid profile")
continue
}
// Set the name only after the error check: profile may be nil on error
profile.Name = debridName
cache, ok := caches[debridName]
if ok {
// Get torrent data
profile.LibrarySize = cache.TotalTorrents()
profile.BadTorrents = len(cache.GetListing("__bad__"))
profile.ActiveLinks = cache.GetTotalActiveDownloadLinks()
}
profiles = append(profiles, profile)
}
stats["debrids"] = profiles
request.JSONResponse(w, stats, http.StatusOK)
}


@@ -9,12 +9,10 @@ import (
 	"github.com/rs/zerolog"
 	"github.com/sirrobot01/decypharr/internal/config"
 	"github.com/sirrobot01/decypharr/internal/logger"
-	"github.com/sirrobot01/decypharr/internal/request"
 	"io"
 	"net/http"
 	"net/url"
 	"os"
-	"runtime"
 )

 type Server struct {
@@ -45,8 +43,12 @@ func New(handlers map[string]http.Handler) *Server {
 	//logs
 	r.Get("/logs", s.getLogs)

-	//stats
-	r.Get("/stats", s.getStats)
+	//debugs
+	r.Route("/debug", func(r chi.Router) {
+		r.Get("/stats", s.handleStats)
+		r.Get("/ingests", s.handleIngests)
+		r.Get("/ingests/{debrid}", s.handleIngestsByDebrid)
+	})

 	//webhooks
 	r.Post("/webhooks/tautulli", s.handleTautulli)
@@ -68,7 +70,7 @@ func (s *Server) Start(ctx context.Context) error {
 	go func() {
 		if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
-			s.logger.Info().Msgf("Error starting server: %v", err)
+			s.logger.Error().Err(err).Msgf("Error starting server")
 		}
 	}()
@@ -101,36 +103,5 @@ func (s *Server) getLogs(w http.ResponseWriter, r *http.Request) {
 	w.Header().Set("Expires", "0")

 	// Stream the file
-	_, err = io.Copy(w, file)
-	if err != nil {
-		s.logger.Error().Err(err).Msg("Error streaming log file")
-		http.Error(w, "Error streaming log file", http.StatusInternalServerError)
-		return
-	}
-}
-
-func (s *Server) getStats(w http.ResponseWriter, r *http.Request) {
-	var memStats runtime.MemStats
-	runtime.ReadMemStats(&memStats)
-	stats := map[string]interface{}{
-		// Memory stats
-		"heap_alloc_mb":  fmt.Sprintf("%.2fMB", float64(memStats.HeapAlloc)/1024/1024),
-		"total_alloc_mb": fmt.Sprintf("%.2fMB", float64(memStats.TotalAlloc)/1024/1024),
-		"memory_used":    fmt.Sprintf("%.2fMB", float64(memStats.Sys)/1024/1024),
-		// GC stats
-		"gc_cycles": memStats.NumGC,
-		// Goroutine stats
-		"goroutines": runtime.NumGoroutine(),
-		// System info
-		"num_cpu": runtime.NumCPU(),
-		// OS info
-		"os":         runtime.GOOS,
-		"arch":       runtime.GOARCH,
-		"go_version": runtime.Version(),
-	}
-	request.JSONResponse(w, stats, http.StatusOK)
+	_, _ = io.Copy(w, file)
 }


@@ -3,7 +3,7 @@ package server
 import (
 	"cmp"
 	"encoding/json"
-	"github.com/sirrobot01/decypharr/pkg/service"
+	"github.com/sirrobot01/decypharr/pkg/store"
 	"net/http"
 )
@@ -38,8 +38,7 @@ func (s *Server) handleTautulli(w http.ResponseWriter, r *http.Request) {
 		http.Error(w, "Invalid ID", http.StatusBadRequest)
 		return
 	}
-	svc := service.GetService()
-	repair := svc.Repair
+	repair := store.Get().Repair()
 	mediaId := cmp.Or(payload.TmdbID, payload.TvdbID)


@@ -1,53 +0,0 @@
-package service
-
-import (
-	"github.com/sirrobot01/decypharr/pkg/arr"
-	"github.com/sirrobot01/decypharr/pkg/debrid/debrid"
-	"github.com/sirrobot01/decypharr/pkg/repair"
-	"sync"
-)
-
-type Service struct {
-	Repair *repair.Repair
-	Arr    *arr.Storage
-	Debrid *debrid.Engine
-}
-
-var (
-	instance *Service
-	once     sync.Once
-)
-
-// GetService returns the singleton instance
-func GetService() *Service {
-	once.Do(func() {
-		arrs := arr.NewStorage()
-		deb := debrid.NewEngine()
-		instance = &Service{
-			Repair: repair.New(arrs, deb),
-			Arr:    arrs,
-			Debrid: deb,
-		}
-	})
-	return instance
-}
-
-func Reset() {
-	if instance != nil {
-		if instance.Debrid != nil {
-			instance.Debrid.Reset()
-		}
-		if instance.Arr != nil {
-			//instance.Arr.Reset()
-		}
-		if instance.Repair != nil {
-			//instance.Repair.Reset()
-		}
-	}
-	once = sync.Once{}
-	instance = nil
-}
-
-func GetDebrid() *debrid.Engine {
-	return GetService().Debrid
-}

pkg/store/downloader.go Normal file

@@ -0,0 +1,317 @@
package store
import (
"fmt"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"net/http"
"os"
"path/filepath"
"sync"
"time"
"github.com/cavaliergopher/grab/v3"
"github.com/sirrobot01/decypharr/internal/utils"
)
func grabber(client *grab.Client, url, filename string, byterange *[2]int64, progressCallback func(int64, int64)) error {
req, err := grab.NewRequest(filename, url)
if err != nil {
return err
}
// Set byte range if specified
if byterange != nil {
byterangeStr := fmt.Sprintf("%d-%d", byterange[0], byterange[1])
req.HTTPRequest.Header.Set("Range", "bytes="+byterangeStr)
}
resp := client.Do(req)
t := time.NewTicker(time.Second * 2)
defer t.Stop()
var lastReported int64
Loop:
for {
select {
case <-t.C:
current := resp.BytesComplete()
speed := int64(resp.BytesPerSecond())
if current != lastReported {
if progressCallback != nil {
progressCallback(current-lastReported, speed)
}
lastReported = current
}
case <-resp.Done:
break Loop
}
}
// Report final bytes
if progressCallback != nil {
progressCallback(resp.BytesComplete()-lastReported, 0)
}
return resp.Err()
}
func (s *Store) processDownload(torrent *Torrent, debridTorrent *types.Torrent) (string, error) {
s.logger.Info().Msgf("Downloading %d files...", len(debridTorrent.Files))
torrentPath := filepath.Join(torrent.SavePath, utils.RemoveExtension(debridTorrent.OriginalFilename))
torrentPath = utils.RemoveInvalidChars(torrentPath)
err := os.MkdirAll(torrentPath, os.ModePerm)
if err != nil {
// include the path in the returned error
return "", fmt.Errorf("failed to create directory: %s: %v", torrentPath, err)
}
s.downloadFiles(torrent, debridTorrent, torrentPath)
return torrentPath, nil
}
func (s *Store) downloadFiles(torrent *Torrent, debridTorrent *types.Torrent, parent string) {
var wg sync.WaitGroup
totalSize := int64(0)
for _, file := range debridTorrent.GetFiles() {
totalSize += file.Size
}
debridTorrent.Lock()
debridTorrent.SizeDownloaded = 0 // Reset downloaded bytes
debridTorrent.Progress = 0 // Reset progress
debridTorrent.Unlock()
progressCallback := func(downloaded int64, speed int64) {
debridTorrent.Lock()
defer debridTorrent.Unlock()
torrent.Lock()
defer torrent.Unlock()
// Update total downloaded bytes
debridTorrent.SizeDownloaded += downloaded
debridTorrent.Speed = speed
// Calculate overall progress
if totalSize > 0 {
debridTorrent.Progress = float64(debridTorrent.SizeDownloaded) / float64(totalSize) * 100
}
s.partialTorrentUpdate(torrent, debridTorrent)
}
client := &grab.Client{
UserAgent: "Decypharr[QBitTorrent]",
HTTPClient: &http.Client{
Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
},
},
}
errChan := make(chan error, len(debridTorrent.Files))
for _, file := range debridTorrent.GetFiles() {
if file.DownloadLink == nil {
s.logger.Info().Msgf("No download link found for %s", file.Name)
continue
}
wg.Add(1)
s.downloadSemaphore <- struct{}{}
go func(file types.File) {
defer wg.Done()
defer func() { <-s.downloadSemaphore }()
filename := file.Name
err := grabber(
client,
file.DownloadLink.DownloadLink,
filepath.Join(parent, filename),
file.ByteRange,
progressCallback,
)
if err != nil {
s.logger.Error().Msgf("Failed to download %s: %v", filename, err)
errChan <- err
} else {
s.logger.Info().Msgf("Downloaded %s", filename)
}
}(file)
}
wg.Wait()
close(errChan)
var errors []error
for err := range errChan {
if err != nil {
errors = append(errors, err)
}
}
if len(errors) > 0 {
s.logger.Error().Msgf("Errors occurred during download: %v", errors)
return
}
s.logger.Info().Msgf("Downloaded all files for %s", debridTorrent.Name)
}
func (s *Store) processSymlink(torrent *Torrent, debridTorrent *types.Torrent) (string, error) {
files := debridTorrent.Files
if len(files) == 0 {
return "", fmt.Errorf("no video files found")
}
s.logger.Info().Msgf("Checking symlinks for %d files...", len(files))
rCloneBase := debridTorrent.MountPath
torrentPath, err := s.getTorrentPath(rCloneBase, debridTorrent) // /MyTVShow/
// AllDebrid returns filename.ext here instead of the parent folder name
torrentFolder := torrentPath
if err != nil {
return "", fmt.Errorf("failed to get torrent path: %v", err)
}
// Check if the torrent path is a file
torrentRclonePath := filepath.Join(rCloneBase, torrentPath) // leave it as is
if debridTorrent.Debrid == "alldebrid" && utils.IsMediaFile(torrentPath) {
// Alldebrid hotfix for single file torrents
torrentFolder = utils.RemoveExtension(torrentFolder)
torrentRclonePath = rCloneBase // /mnt/rclone/magnets/ // Remove the filename since it's in the root folder
}
torrentSymlinkPath := filepath.Join(torrent.SavePath, torrentFolder) // /mnt/symlinks/{category}/MyTVShow/
err = os.MkdirAll(torrentSymlinkPath, os.ModePerm)
if err != nil {
return "", fmt.Errorf("failed to create directory: %s: %v", torrentSymlinkPath, err)
}
realPaths := make(map[string]string)
err = filepath.WalkDir(torrentRclonePath, func(path string, d os.DirEntry, err error) error {
if err != nil {
return nil
}
if !d.IsDir() {
filename := d.Name()
rel, _ := filepath.Rel(torrentRclonePath, path)
realPaths[filename] = rel
}
return nil
})
if err != nil {
s.logger.Warn().Msgf("Error while scanning rclone path: %v", err)
}
pending := make(map[string]types.File)
for _, file := range files {
if realRelPath, ok := realPaths[file.Name]; ok {
file.Path = realRelPath
}
pending[file.Path] = file
}
ticker := time.NewTicker(200 * time.Millisecond)
defer ticker.Stop()
timeout := time.After(30 * time.Minute)
filePaths := make([]string, 0, len(pending))
for len(pending) > 0 {
select {
case <-ticker.C:
for path, file := range pending {
fullFilePath := filepath.Join(torrentRclonePath, file.Path)
if _, err := os.Stat(fullFilePath); !os.IsNotExist(err) {
fileSymlinkPath := filepath.Join(torrentSymlinkPath, file.Name)
if err := os.Symlink(fullFilePath, fileSymlinkPath); err != nil && !os.IsExist(err) {
s.logger.Debug().Msgf("Failed to create symlink: %s: %v", fileSymlinkPath, err)
} else {
filePaths = append(filePaths, fileSymlinkPath)
delete(pending, path)
s.logger.Info().Msgf("File is ready: %s", file.Name)
}
}
}
case <-timeout:
s.logger.Warn().Msgf("Timeout waiting for files, %d files still pending", len(pending))
return torrentSymlinkPath, fmt.Errorf("timeout waiting for files: %d files still pending", len(pending))
}
}
if s.skipPreCache {
return torrentSymlinkPath, nil
}
go func() {
s.logger.Debug().Msgf("Pre-caching %s", debridTorrent.Name)
if err := utils.PreCacheFile(filePaths); err != nil {
s.logger.Error().Msgf("Failed to pre-cache file: %s", err)
} else {
s.logger.Trace().Msgf("Pre-cached %d files", len(filePaths))
}
}()
return torrentSymlinkPath, nil
}
func (s *Store) createSymlinksWebdav(torrent *Torrent, debridTorrent *types.Torrent, rclonePath, torrentFolder string) (string, error) {
files := debridTorrent.Files
symlinkPath := filepath.Join(torrent.SavePath, torrentFolder) // /mnt/symlinks/{category}/MyTVShow/
err := os.MkdirAll(symlinkPath, os.ModePerm)
if err != nil {
return "", fmt.Errorf("failed to create directory: %s: %v", symlinkPath, err)
}
remainingFiles := make(map[string]types.File)
for _, file := range files {
remainingFiles[file.Name] = file
}
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
timeout := time.After(30 * time.Minute)
filePaths := make([]string, 0, len(files))
for len(remainingFiles) > 0 {
select {
case <-ticker.C:
entries, err := os.ReadDir(rclonePath)
if err != nil {
continue
}
// Check which files exist in this batch
for _, entry := range entries {
filename := entry.Name()
if file, exists := remainingFiles[filename]; exists {
fullFilePath := filepath.Join(rclonePath, filename)
fileSymlinkPath := filepath.Join(symlinkPath, file.Name)
if err := os.Symlink(fullFilePath, fileSymlinkPath); err != nil && !os.IsExist(err) {
s.logger.Debug().Msgf("Failed to create symlink: %s: %v", fileSymlinkPath, err)
} else {
filePaths = append(filePaths, fileSymlinkPath)
delete(remainingFiles, filename)
s.logger.Info().Msgf("File is ready: %s", file.Name)
}
}
}
case <-timeout:
s.logger.Warn().Msgf("Timeout waiting for files, %d files still pending", len(remainingFiles))
return symlinkPath, fmt.Errorf("timeout waiting for files")
}
}
if s.skipPreCache {
return symlinkPath, nil
}
go func() {
s.logger.Debug().Msgf("Pre-caching %s", debridTorrent.Name)
if err := utils.PreCacheFile(filePaths); err != nil {
s.logger.Error().Msgf("Failed to pre-cache file: %s", err)
} else {
s.logger.Debug().Msgf("Pre-cached %d files", len(filePaths))
}
}() // Pre-cache the files in the background
// Pre-cache the first 256KB and 1MB of the file
return symlinkPath, nil
}
func (s *Store) getTorrentPath(rclonePath string, debridTorrent *types.Torrent) (string, error) {
for {
torrentPath, err := debridTorrent.GetMountFolder(rclonePath)
if err == nil {
s.logger.Debug().Msgf("Found torrent path: %s", torrentPath)
return torrentPath, err
}
time.Sleep(100 * time.Millisecond)
}
}
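The delta-based progress reporting above lets many concurrent file downloads share one aggregate counter: each callback delivers only the bytes received since the last tick, and the overall percentage is the running sum over the torrent's total size. A minimal, self-contained sketch of that math (helper names are illustrative, not from the codebase):

```go
package main

import "fmt"

// percent mirrors downloadFiles' progress math: accumulated byte deltas
// over the torrent's total size, guarded against a zero total.
func percent(sizeDownloaded, totalSize int64) float64 {
	if totalSize <= 0 {
		return 0
	}
	return float64(sizeDownloaded) / float64(totalSize) * 100
}

func main() {
	// Each callback delivers bytes-since-last-tick; summing the deltas
	// yields overall progress without tracking per-file totals.
	var downloaded int64
	for _, delta := range []int64{250, 250, 500} {
		downloaded += delta
	}
	fmt.Printf("%.0f%%\n", percent(downloaded, 2000)) // 1000 of 2000 bytes
}
```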


@@ -1,18 +1,21 @@
-package qbit
+package store
 import (
-	"github.com/sirrobot01/decypharr/internal/utils"
+	"os"
+	"path/filepath"
 	"strings"
 )
-func createTorrentFromMagnet(magnet *utils.Magnet, category, source string) *Torrent {
+func createTorrentFromMagnet(req *ImportRequest) *Torrent {
+	magnet := req.Magnet
+	arrName := req.Arr.Name
 	torrent := &Torrent{
-		ID:       "",
+		ID:       req.Id,
 		Hash:     strings.ToLower(magnet.InfoHash),
 		Name:     magnet.Name,
 		Size:     magnet.Size,
-		Category: category,
-		Source:   source,
+		Category: arrName,
+		Source:   string(req.Type),
 		State:    "downloading",
 		MagnetUri: magnet.Link,
@@ -22,6 +25,7 @@ func createTorrentFromMagnet(magnet *utils.Magnet, category, source string) *Tor
 		AutoTmm:    false,
 		Ratio:      1,
 		RatioLimit: 1,
+		SavePath:   filepath.Join(req.DownloadFolder, arrName) + string(os.PathSeparator),
 	}
 	return torrent
 }

pkg/store/queue.go Normal file (143 lines)

@@ -0,0 +1,143 @@
package store
import (
"context"
"fmt"
"time"
)
func (s *Store) addToQueue(importReq *ImportRequest) error {
if importReq.Magnet == nil {
return fmt.Errorf("magnet is required")
}
if importReq.Arr == nil {
return fmt.Errorf("arr is required")
}
importReq.Status = "queued"
importReq.CompletedAt = time.Time{}
importReq.Error = nil
err := s.importsQueue.Push(importReq)
if err != nil {
return err
}
return nil
}
func (s *Store) StartQueueSchedule(ctx context.Context) error {
// Start the slots processing in a separate goroutine
go func() {
if err := s.processSlotsQueue(ctx); err != nil {
s.logger.Error().Err(err).Msg("Error processing slots queue")
}
}()
// Start the remove stalled torrents processing in a separate goroutine
go func() {
if err := s.processRemoveStalledTorrents(ctx); err != nil {
s.logger.Error().Err(err).Msg("Error processing remove stalled torrents")
}
}()
return nil
}
func (s *Store) processSlotsQueue(ctx context.Context) error {
s.trackAvailableSlots(ctx) // Initial tracking of available slots
ticker := time.NewTicker(30 * time.Second)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return nil
case <-ticker.C:
s.trackAvailableSlots(ctx)
}
}
}
func (s *Store) processRemoveStalledTorrents(ctx context.Context) error {
if s.removeStalledAfter <= 0 {
return nil // No need to remove stalled torrents if the duration is not set
}
ticker := time.NewTicker(time.Minute)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return nil
case <-ticker.C:
if err := s.removeStalledTorrents(ctx); err != nil {
s.logger.Error().Err(err).Msg("Error removing stalled torrents")
}
}
}
}
func (s *Store) trackAvailableSlots(ctx context.Context) {
// This function tracks the available slots for each debrid client
availableSlots := make(map[string]int)
for name, deb := range s.debrid.Debrids() {
slots, err := deb.Client().GetAvailableSlots()
if err != nil {
continue
}
availableSlots[name] = slots
}
if s.importsQueue.Size() <= 0 {
// Queue is empty, no need to process
return
}
for name, slots := range availableSlots {
s.logger.Debug().Msgf("Available slots for %s: %d", name, slots)
// If slots are available, process the next import request from the queue
for slots > 0 {
select {
case <-ctx.Done():
return // Exit if context is done
default:
if err := s.processFromQueue(ctx); err != nil {
s.logger.Error().Err(err).Msg("Error processing from queue")
return // Exit on error
}
slots-- // Decrease the available slots after processing
}
}
}
}
func (s *Store) processFromQueue(ctx context.Context) error {
// Pop the next import request from the queue
importReq, err := s.importsQueue.Pop()
if err != nil {
return err
}
if importReq == nil {
return nil
}
return s.AddTorrent(ctx, importReq)
}
func (s *Store) removeStalledTorrents(ctx context.Context) error {
// This function checks for stalled torrents and removes them
stalledTorrents := s.torrents.GetStalledTorrents(s.removeStalledAfter)
if len(stalledTorrents) == 0 {
return nil // No stalled torrents to remove
}
for _, torrent := range stalledTorrents {
s.logger.Warn().Msgf("Removing stalled torrent: %s", torrent.Name)
s.torrents.Delete(torrent.Hash, torrent.Category, true) // Remove from store and delete from debrid
}
return nil
}
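The inner loop of trackAvailableSlots processes at most one queued import per free debrid slot each tick. A minimal sketch of that drain pattern (names are illustrative, not from the codebase):

```go
package main

import "fmt"

// drain pops at most slots items from the front of the queue, mirroring
// trackAvailableSlots' loop of one queued import per free debrid slot.
func drain(queue []int, slots int) (processed, remaining []int) {
	n := slots
	if n > len(queue) {
		n = len(queue)
	}
	return queue[:n], queue[n:]
}

func main() {
	// Five queued imports, three free slots: three are processed this tick.
	processed, remaining := drain([]int{1, 2, 3, 4, 5}, 3)
	fmt.Println(len(processed), len(remaining)) // 3 2
}
```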

pkg/store/request.go Normal file (239 lines)

@@ -0,0 +1,239 @@
package store
import (
"bytes"
"cmp"
"context"
"encoding/json"
"fmt"
"github.com/google/uuid"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
debridTypes "github.com/sirrobot01/decypharr/pkg/debrid/types"
"net/http"
"net/url"
"sync"
"time"
)
type ImportType string
const (
ImportTypeQBitTorrent ImportType = "qbit"
ImportTypeAPI ImportType = "api"
)
type ImportRequest struct {
Id string `json:"id"`
DownloadFolder string `json:"downloadFolder"`
SelectedDebrid string `json:"debrid"`
Magnet *utils.Magnet `json:"magnet"`
Arr *arr.Arr `json:"arr"`
Action string `json:"action"`
DownloadUncached bool `json:"downloadUncached"`
CallBackUrl string `json:"callBackUrl"`
Status string `json:"status"`
CompletedAt time.Time `json:"completedAt,omitempty"`
Error error `json:"error,omitempty"`
Type ImportType `json:"type"`
Async bool `json:"async"`
}
func NewImportRequest(debrid string, downloadFolder string, magnet *utils.Magnet, arr *arr.Arr, action string, downloadUncached bool, callBackUrl string, importType ImportType) *ImportRequest {
return &ImportRequest{
Id: uuid.New().String(),
Status: "started",
DownloadFolder: downloadFolder,
SelectedDebrid: cmp.Or(arr.SelectedDebrid, debrid), // Use debrid from arr if available
Magnet: magnet,
Arr: arr,
Action: action,
DownloadUncached: downloadUncached,
CallBackUrl: callBackUrl,
Type: importType,
}
}
type importResponse struct {
Status string `json:"status"`
CompletedAt time.Time `json:"completedAt"`
Error error `json:"error"`
Torrent *Torrent `json:"torrent"`
Debrid *debridTypes.Torrent `json:"debrid"`
}
func (i *ImportRequest) sendCallback(torrent *Torrent, debridTorrent *debridTypes.Torrent) {
if i.CallBackUrl == "" {
return
}
// Check if the callback URL is valid
if _, err := url.ParseRequestURI(i.CallBackUrl); err != nil {
return
}
client := request.New()
payload, err := json.Marshal(&importResponse{
Status: i.Status,
Error: i.Error,
CompletedAt: i.CompletedAt,
Torrent: torrent,
Debrid: debridTorrent,
})
if err != nil {
return
}
req, err := http.NewRequest("POST", i.CallBackUrl, bytes.NewReader(payload))
if err != nil {
return
}
req.Header.Set("Content-Type", "application/json")
_, _ = client.Do(req)
}
func (i *ImportRequest) markAsFailed(err error, torrent *Torrent, debridTorrent *debridTypes.Torrent) {
i.Status = "failed"
i.Error = err
i.CompletedAt = time.Now()
i.sendCallback(torrent, debridTorrent)
}
func (i *ImportRequest) markAsCompleted(torrent *Torrent, debridTorrent *debridTypes.Torrent) {
i.Status = "completed"
i.Error = nil
i.CompletedAt = time.Now()
i.sendCallback(torrent, debridTorrent)
}
type ImportQueue struct {
queue []*ImportRequest
mu sync.RWMutex
ctx context.Context
cancel context.CancelFunc
cond *sync.Cond // For blocking operations
}
func NewImportQueue(ctx context.Context, capacity int) *ImportQueue {
ctx, cancel := context.WithCancel(ctx)
iq := &ImportQueue{
queue: make([]*ImportRequest, 0, capacity),
ctx: ctx,
cancel: cancel,
}
iq.cond = sync.NewCond(&iq.mu)
return iq
}
func (iq *ImportQueue) Push(req *ImportRequest) error {
if req == nil {
return fmt.Errorf("import request cannot be nil")
}
iq.mu.Lock()
defer iq.mu.Unlock()
select {
case <-iq.ctx.Done():
return fmt.Errorf("queue is shutting down")
default:
}
if len(iq.queue) >= cap(iq.queue) {
return fmt.Errorf("queue is full")
}
iq.queue = append(iq.queue, req)
iq.cond.Signal() // Wake up any waiting Pop()
return nil
}
func (iq *ImportQueue) Pop() (*ImportRequest, error) {
iq.mu.Lock()
defer iq.mu.Unlock()
select {
case <-iq.ctx.Done():
return nil, fmt.Errorf("queue is shutting down")
default:
}
if len(iq.queue) == 0 {
return nil, fmt.Errorf("no import requests available")
}
req := iq.queue[0]
iq.queue = iq.queue[1:]
return req, nil
}
// Delete specific request by ID
func (iq *ImportQueue) Delete(requestID string) bool {
iq.mu.Lock()
defer iq.mu.Unlock()
for i, req := range iq.queue {
if req.Id == requestID {
// Remove from slice
iq.queue = append(iq.queue[:i], iq.queue[i+1:]...)
return true
}
}
return false
}
// DeleteWhere requests matching a condition
func (iq *ImportQueue) DeleteWhere(predicate func(*ImportRequest) bool) int {
iq.mu.Lock()
defer iq.mu.Unlock()
deleted := 0
for i := len(iq.queue) - 1; i >= 0; i-- {
if predicate(iq.queue[i]) {
iq.queue = append(iq.queue[:i], iq.queue[i+1:]...)
deleted++
}
}
return deleted
}
// Find request without removing it
func (iq *ImportQueue) Find(requestID string) *ImportRequest {
iq.mu.RLock()
defer iq.mu.RUnlock()
for _, req := range iq.queue {
if req.Id == requestID {
return req
}
}
return nil
}
func (iq *ImportQueue) Size() int {
iq.mu.RLock()
defer iq.mu.RUnlock()
return len(iq.queue)
}
func (iq *ImportQueue) IsEmpty() bool {
return iq.Size() == 0
}
// List all requests (copy to avoid race conditions)
func (iq *ImportQueue) List() []*ImportRequest {
iq.mu.RLock()
defer iq.mu.RUnlock()
result := make([]*ImportRequest, len(iq.queue))
copy(result, iq.queue)
return result
}
func (iq *ImportQueue) Close() {
iq.cancel()
iq.cond.Broadcast()
}
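ImportQueue is essentially a mutex-guarded slice whose Push rejects items once the initial capacity is exhausted and whose Pop errors when empty. A stripped-down, self-contained sketch of those semantics (type and method names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// boundedQueue sketches ImportQueue's core: a mutex-guarded slice whose
// Push rejects items once the initial capacity is reached.
type boundedQueue struct {
	mu    sync.Mutex
	items []string
}

func newBoundedQueue(capacity int) *boundedQueue {
	return &boundedQueue{items: make([]string, 0, capacity)}
}

func (q *boundedQueue) Push(item string) error {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) >= cap(q.items) {
		return errors.New("queue is full")
	}
	q.items = append(q.items, item)
	return nil
}

func (q *boundedQueue) Pop() (string, error) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) == 0 {
		return "", errors.New("no items available")
	}
	item := q.items[0]
	q.items = q.items[1:] // FIFO: oldest request first
	return item, nil
}

func main() {
	q := newBoundedQueue(2)
	fmt.Println(q.Push("a"), q.Push("b")) // both succeed
	fmt.Println(q.Push("c"))              // rejected: queue is full
	item, _ := q.Pop()
	fmt.Println(item)
}
```

Note that reslicing with `q.items[1:]` shrinks the slice's capacity over time; a ring buffer would avoid that, at the cost of more bookkeeping.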

pkg/store/store.go Normal file (90 lines)

@@ -0,0 +1,90 @@
package store
import (
"cmp"
"context"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/debrid"
"github.com/sirrobot01/decypharr/pkg/repair"
"sync"
"time"
)
type Store struct {
repair *repair.Repair
arr *arr.Storage
debrid *debrid.Storage
importsQueue *ImportQueue // Queued import requests(probably from too_many_active_downloads)
torrents *TorrentStorage
logger zerolog.Logger
refreshInterval time.Duration
skipPreCache bool
downloadSemaphore chan struct{}
removeStalledAfter time.Duration // Duration after which stalled torrents are removed
}
var (
instance *Store
once sync.Once
)
// Get returns the singleton instance
func Get() *Store {
once.Do(func() {
arrs := arr.NewStorage()
deb := debrid.NewStorage()
cfg := config.Get()
qbitCfg := cfg.QBitTorrent
instance = &Store{
repair: repair.New(arrs, deb),
arr: arrs,
debrid: deb,
torrents: newTorrentStorage(cfg.TorrentsFile()),
logger: logger.Default(), // Use default logger [decypharr]
refreshInterval: time.Duration(cmp.Or(qbitCfg.RefreshInterval, 10)) * time.Minute,
skipPreCache: qbitCfg.SkipPreCache,
downloadSemaphore: make(chan struct{}, cmp.Or(qbitCfg.MaxDownloads, 5)),
importsQueue: NewImportQueue(context.Background(), 1000),
}
if cfg.RemoveStalledAfter != "" {
removeStalledAfter, err := time.ParseDuration(cfg.RemoveStalledAfter)
if err == nil {
instance.removeStalledAfter = removeStalledAfter
}
}
})
return instance
}
func Reset() {
if instance != nil {
if instance.debrid != nil {
instance.debrid.Reset()
}
if instance.importsQueue != nil {
instance.importsQueue.Close()
}
close(instance.downloadSemaphore)
}
once = sync.Once{}
instance = nil
}
func (s *Store) Arr() *arr.Storage {
return s.arr
}
func (s *Store) Debrid() *debrid.Storage {
return s.debrid
}
func (s *Store) Repair() *repair.Repair {
return s.repair
}
func (s *Store) Torrents() *TorrentStorage {
return s.torrents
}
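Get/Reset above implement a resettable singleton: sync.Once guards lazy construction, and Reset re-arms the Once so the next Get rebuilds everything (used here on config reload). A minimal sketch of the pattern, under the assumption that Reset is never raced against Get:

```go
package main

import (
	"fmt"
	"sync"
)

// store stands in for Store; the real singleton carries queues and clients.
type store struct{ id int }

var (
	instance *store
	once     sync.Once
	nextID   int
)

// Get lazily builds the singleton exactly once per generation.
func Get() *store {
	once.Do(func() {
		nextID++
		instance = &store{id: nextID}
	})
	return instance
}

// Reset clears the instance and re-arms the sync.Once so the next Get
// rebuilds everything, mirroring the package's reload pattern.
func Reset() {
	once = sync.Once{}
	instance = nil
}

func main() {
	a, b := Get(), Get()
	fmt.Println(a == b) // same instance
	Reset()
	c := Get()
	fmt.Println(a == c) // rebuilt after Reset
}
```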

pkg/store/torrent.go Normal file (290 lines)

@@ -0,0 +1,290 @@
package store
import (
"cmp"
"context"
"errors"
"fmt"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
debridTypes "github.com/sirrobot01/decypharr/pkg/debrid"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"math"
"os"
"path/filepath"
"time"
)
func (s *Store) AddTorrent(ctx context.Context, importReq *ImportRequest) error {
torrent := createTorrentFromMagnet(importReq)
debridTorrent, err := debridTypes.Process(ctx, s.debrid, importReq.SelectedDebrid, importReq.Magnet, importReq.Arr, importReq.Action, importReq.DownloadUncached)
if err != nil {
var httpErr *utils.HTTPError
if ok := errors.As(err, &httpErr); ok {
switch httpErr.Code {
case "too_many_active_downloads":
// Handle the too-many-active-downloads error by queueing the request
s.logger.Warn().Msgf("Too many active downloads for %s, adding to queue", importReq.Magnet.Name)
if err := s.addToQueue(importReq); err != nil {
s.logger.Error().Err(err).Msgf("Failed to add %s to queue", importReq.Magnet.Name)
return err
}
torrent.State = "queued"
default:
// Unhandled error, return it, caller logs it
return err
}
} else {
// Unhandled error, return it, caller logs it
return err
}
}
torrent = s.partialTorrentUpdate(torrent, debridTorrent)
s.torrents.AddOrUpdate(torrent)
go s.processFiles(torrent, debridTorrent, importReq) // Process files asynchronously so the response is not delayed
return nil
}
func (s *Store) processFiles(torrent *Torrent, debridTorrent *types.Torrent, importReq *ImportRequest) {
if debridTorrent == nil {
// Early return if debridTorrent is nil
return
}
deb := s.debrid.Debrid(debridTorrent.Debrid)
client := deb.Client()
downloadingStatuses := client.GetDownloadingStatus()
_arr := importReq.Arr
backoff := time.NewTimer(s.refreshInterval)
defer backoff.Stop()
for debridTorrent.Status != "downloaded" {
s.logger.Debug().Msgf("%s <- (%s) Download Progress: %.2f%%", debridTorrent.Debrid, debridTorrent.Name, debridTorrent.Progress)
dbT, err := client.CheckStatus(debridTorrent)
if err != nil {
if dbT != nil && dbT.Id != "" {
// Delete the torrent if it was not downloaded
go func() {
_ = client.DeleteTorrent(dbT.Id)
}()
}
s.logger.Error().Msgf("Error checking status: %v", err)
s.markTorrentAsFailed(torrent)
go func() {
_arr.Refresh()
}()
importReq.markAsFailed(err, torrent, debridTorrent)
return
}
debridTorrent = dbT
torrent = s.partialTorrentUpdate(torrent, debridTorrent)
// Exit the loop once the torrent is downloaded or no longer in a downloading status
if debridTorrent.Status == "downloaded" || !utils.Contains(downloadingStatuses, debridTorrent.Status) {
break
}
// Wait before polling again, capping the poll interval at 30 seconds
<-backoff.C
backoff.Reset(min(s.refreshInterval*2, 30*time.Second))
}
var torrentSymlinkPath string
var err error
debridTorrent.Arr = _arr
// Check if debrid supports webdav by checking cache
timer := time.Now()
onFailed := func(err error) {
s.markTorrentAsFailed(torrent)
go func() {
if deleteErr := client.DeleteTorrent(debridTorrent.Id); deleteErr != nil {
s.logger.Warn().Err(deleteErr).Msgf("Failed to delete torrent %s", debridTorrent.Id)
}
}()
s.logger.Error().Err(err).Msgf("Error occurred while processing torrent %s", debridTorrent.Name)
importReq.markAsFailed(err, torrent, debridTorrent)
return
}
onSuccess := func(torrentSymlinkPath string) {
torrent.TorrentPath = torrentSymlinkPath
s.updateTorrent(torrent, debridTorrent)
s.logger.Info().Msgf("Adding %s took %s", debridTorrent.Name, time.Since(timer))
go importReq.markAsCompleted(torrent, debridTorrent) // Mark the import request as completed, send callback if needed
go func() {
if err := request.SendDiscordMessage("download_complete", "success", torrent.discordContext()); err != nil {
s.logger.Error().Msgf("Error sending discord message: %v", err)
}
}()
go func() {
_arr.Refresh()
}()
}
switch importReq.Action {
case "symlink":
// Symlink action, we will create a symlink to the torrent
s.logger.Debug().Msgf("Post-Download Action: Symlink")
cache := deb.Cache()
if cache != nil {
s.logger.Info().Msgf("Using internal webdav for %s", debridTorrent.Debrid)
// Use webdav to download the file
if err := cache.Add(debridTorrent); err != nil {
onFailed(err)
return
}
rclonePath := filepath.Join(debridTorrent.MountPath, cache.GetTorrentFolder(debridTorrent)) // /mnt/remote/realdebrid/MyTVShow
torrentFolderNoExt := utils.RemoveExtension(debridTorrent.Name)
torrentSymlinkPath, err = s.createSymlinksWebdav(torrent, debridTorrent, rclonePath, torrentFolderNoExt) // /mnt/symlinks/{category}/MyTVShow/
} else {
// User is using either zurg or debrid webdav
torrentSymlinkPath, err = s.processSymlink(torrent, debridTorrent) // /mnt/symlinks/{category}/MyTVShow/
}
if err != nil {
onFailed(err)
return
}
if torrentSymlinkPath == "" {
err = fmt.Errorf("symlink path is empty for %s", debridTorrent.Name)
onFailed(err)
return
}
onSuccess(torrentSymlinkPath)
return
case "download":
// Download action, we will download the torrent to the specified folder
// Generate download links
s.logger.Debug().Msgf("Post-Download Action: Download")
if err := client.GetFileDownloadLinks(debridTorrent); err != nil {
onFailed(err)
return
}
torrentSymlinkPath, err = s.processDownload(torrent, debridTorrent)
if err != nil {
onFailed(err)
return
}
if torrentSymlinkPath == "" {
err = fmt.Errorf("download path is empty for %s", debridTorrent.Name)
onFailed(err)
return
}
onSuccess(torrentSymlinkPath)
case "none":
s.logger.Debug().Msgf("Post-Download Action: None")
// No action, just update the torrent and mark it as completed
onSuccess(torrent.TorrentPath)
default:
// Unknown action: nothing to do
}
}
func (s *Store) markTorrentAsFailed(t *Torrent) *Torrent {
t.State = "error"
s.torrents.AddOrUpdate(t)
go func() {
if err := request.SendDiscordMessage("download_failed", "error", t.discordContext()); err != nil {
s.logger.Error().Msgf("Error sending discord message: %v", err)
}
}()
return t
}
func (s *Store) partialTorrentUpdate(t *Torrent, debridTorrent *types.Torrent) *Torrent {
if debridTorrent == nil {
return t
}
addedOn, err := time.Parse(time.RFC3339, debridTorrent.Added)
if err != nil {
addedOn = time.Now()
}
totalSize := debridTorrent.Bytes
progress := (cmp.Or(debridTorrent.Progress, 0.0)) / 100.0
if math.IsNaN(progress) || math.IsInf(progress, 0) {
progress = 0
}
sizeCompleted := int64(float64(totalSize) * progress)
var speed int64
if debridTorrent.Speed != 0 {
speed = debridTorrent.Speed
}
var eta int
if speed != 0 {
eta = int((totalSize - sizeCompleted) / speed)
}
files := make([]*File, 0, len(debridTorrent.Files))
for index, file := range debridTorrent.GetFiles() {
files = append(files, &File{
Index: index,
Name: file.Path,
Size: file.Size,
})
}
t.DebridID = debridTorrent.Id
t.Name = debridTorrent.Name
t.AddedOn = addedOn.Unix()
t.Files = files
t.Debrid = debridTorrent.Debrid
t.Size = totalSize
t.Completed = sizeCompleted
t.NumSeeds = debridTorrent.Seeders
t.Downloaded = sizeCompleted
t.DownloadedSession = sizeCompleted
t.Uploaded = sizeCompleted
t.UploadedSession = sizeCompleted
t.AmountLeft = totalSize - sizeCompleted
t.Progress = progress
t.Eta = eta
t.Dlspeed = speed
t.Upspeed = speed
t.ContentPath = filepath.Join(t.SavePath, t.Name) + string(os.PathSeparator)
return t
}
func (s *Store) updateTorrent(t *Torrent, debridTorrent *types.Torrent) *Torrent {
if debridTorrent == nil {
return t
}
if debridClient := s.debrid.Clients()[debridTorrent.Debrid]; debridClient != nil {
if debridTorrent.Status != "downloaded" {
_ = debridClient.UpdateTorrent(debridTorrent)
}
}
t = s.partialTorrentUpdate(t, debridTorrent)
t.ContentPath = t.TorrentPath + string(os.PathSeparator)
if t.IsReady() {
t.State = "pausedUP"
s.torrents.Update(t)
return t
}
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
timeout := time.After(10 * time.Minute) // single timeout; a per-iteration time.After inside the select would never fire
for {
select {
case <-ticker.C:
if t.IsReady() {
t.State = "pausedUP"
s.torrents.Update(t)
return t
}
t = s.updateTorrent(t, debridTorrent)
case <-timeout:
return t
}
}
}


@@ -1,18 +1,15 @@
-package qbit
+package store
 import (
 	"encoding/json"
 	"fmt"
-	"github.com/sirrobot01/decypharr/pkg/service"
 	"os"
 	"sort"
 	"sync"
+	"time"
 )
 func keyPair(hash, category string) string {
-	if category == "" {
-		category = "uncategorized"
-	}
 	return fmt.Sprintf("%s|%s", hash, category)
 }
@@ -36,13 +33,13 @@ func loadTorrentsFromJSON(filename string) (Torrents, error) {
 	return torrents, nil
 }
-func NewTorrentStorage(filename string) *TorrentStorage {
+func newTorrentStorage(filename string) *TorrentStorage {
 	// Open the JSON file and read the data
 	torrents, err := loadTorrentsFromJSON(filename)
 	if err != nil {
 		torrents = make(Torrents)
 	}
-	// Create a new TorrentStorage
+	// Create a new Storage
 	return &TorrentStorage{
 		torrents: torrents,
 		filename: filename,
@@ -186,13 +183,18 @@ func (ts *TorrentStorage) Delete(hash, category string, removeFromDebrid bool) {
 	if torrent == nil {
 		return
 	}
-	if removeFromDebrid && torrent.ID != "" && torrent.Debrid != "" {
-		dbClient := service.GetDebrid().GetClient(torrent.Debrid)
+	st := Get()
+	// Check if torrent is queued for download
+	if torrent.State == "queued" && torrent.ID != "" {
+		// Remove the torrent from the import queue if it exists
+		st.importsQueue.Delete(torrent.ID)
+	}
+	if removeFromDebrid && torrent.DebridID != "" && torrent.Debrid != "" {
+		dbClient := st.debrid.Client(torrent.Debrid)
 		if dbClient != nil {
-			err := dbClient.DeleteTorrent(torrent.ID)
-			if err != nil {
-				fmt.Println(err)
-			}
+			_ = dbClient.DeleteTorrent(torrent.DebridID)
 		}
 	}
@@ -218,14 +220,21 @@ func (ts *TorrentStorage) DeleteMultiple(hashes []string, removeFromDebrid bool)
 	defer ts.mu.Unlock()
 	toDelete := make(map[string]string)
+	st := Get()
 	for _, hash := range hashes {
 		for key, torrent := range ts.torrents {
 			if torrent == nil {
 				continue
 			}
+			if torrent.State == "queued" && torrent.ID != "" {
+				// Remove the torrent from the import queue if it exists
+				st.importsQueue.Delete(torrent.ID)
+			}
 			if torrent.Hash == hash {
-				if removeFromDebrid && torrent.ID != "" && torrent.Debrid != "" {
-					toDelete[torrent.ID] = torrent.Debrid
+				if removeFromDebrid && torrent.DebridID != "" && torrent.Debrid != "" {
+					toDelete[torrent.DebridID] = torrent.Debrid
 				}
 				delete(ts.torrents, key)
 				if torrent.ContentPath != "" {
@@ -244,10 +253,12 @@ func (ts *TorrentStorage) DeleteMultiple(hashes []string, removeFromDebrid bool)
 		}
 	}()
+	clients := st.debrid.Clients()
 	go func() {
 		for id, debrid := range toDelete {
-			dbClient := service.GetDebrid().GetClient(debrid)
-			if dbClient == nil {
+			dbClient, ok := clients[debrid]
+			if !ok {
 				continue
 			}
 			err := dbClient.DeleteTorrent(id)
@@ -278,3 +289,22 @@ func (ts *TorrentStorage) Reset() {
 	defer ts.mu.Unlock()
 	ts.torrents = make(Torrents)
 }
+// GetStalledTorrents returns a list of torrents that are stalled.
+// A torrent is considered stalled if it has no seeds, no progress, and has been downloading for longer than removeStalledAfter.
+// The torrent must have a DebridID and be in the "downloading" state.
+func (ts *TorrentStorage) GetStalledTorrents(removeAfter time.Duration) []*Torrent {
+	ts.mu.RLock()
+	defer ts.mu.RUnlock()
+	stalled := make([]*Torrent, 0)
+	currentTime := time.Now()
+	for _, torrent := range ts.torrents {
+		if torrent.DebridID != "" && torrent.State == "downloading" && torrent.NumSeeds == 0 && torrent.Progress == 0 {
+			addedOn := time.Unix(torrent.AddedOn, 0)
+			if currentTime.Sub(addedOn) > removeAfter {
+				stalled = append(stalled, torrent)
+			}
+		}
+	}
+	return stalled
+}

pkg/store/types.go Normal file (88 lines)

@@ -0,0 +1,88 @@
package store
import (
"fmt"
"sync"
)
type File struct {
Index int `json:"index,omitempty"`
Name string `json:"name,omitempty"`
Size int64 `json:"size,omitempty"`
Progress int `json:"progress,omitempty"`
Priority int `json:"priority,omitempty"`
IsSeed bool `json:"is_seed,omitempty"`
PieceRange []int `json:"piece_range,omitempty"`
Availability float64 `json:"availability,omitempty"`
}
type Torrent struct {
ID string `json:"id"`
DebridID string `json:"debrid_id"`
Debrid string `json:"debrid"`
TorrentPath string `json:"-"`
Files []*File `json:"files,omitempty"`
AddedOn int64 `json:"added_on,omitempty"`
AmountLeft int64 `json:"amount_left"`
AutoTmm bool `json:"auto_tmm"`
Availability float64 `json:"availability,omitempty"`
Category string `json:"category,omitempty"`
Completed int64 `json:"completed"`
CompletionOn int `json:"completion_on,omitempty"`
ContentPath string `json:"content_path"`
DlLimit int `json:"dl_limit"`
Dlspeed int64 `json:"dlspeed"`
Downloaded int64 `json:"downloaded"`
DownloadedSession int64 `json:"downloaded_session"`
Eta int `json:"eta"`
FlPiecePrio bool `json:"f_l_piece_prio,omitempty"`
ForceStart bool `json:"force_start,omitempty"`
Hash string `json:"hash"`
LastActivity int64 `json:"last_activity,omitempty"`
MagnetUri string `json:"magnet_uri,omitempty"`
MaxRatio int `json:"max_ratio,omitempty"`
MaxSeedingTime int `json:"max_seeding_time,omitempty"`
Name string `json:"name,omitempty"`
NumComplete int `json:"num_complete,omitempty"`
NumIncomplete int `json:"num_incomplete,omitempty"`
NumLeechs int `json:"num_leechs,omitempty"`
NumSeeds int `json:"num_seeds,omitempty"`
Priority int `json:"priority,omitempty"`
Progress float64 `json:"progress"`
Ratio int `json:"ratio,omitempty"`
RatioLimit int `json:"ratio_limit,omitempty"`
SavePath string `json:"save_path"`
SeedingTimeLimit int `json:"seeding_time_limit,omitempty"`
SeenComplete int64 `json:"seen_complete,omitempty"`
SeqDl bool `json:"seq_dl"`
Size int64 `json:"size,omitempty"`
State string `json:"state,omitempty"`
SuperSeeding bool `json:"super_seeding"`
Tags string `json:"tags,omitempty"`
TimeActive int `json:"time_active,omitempty"`
TotalSize int64 `json:"total_size,omitempty"`
Tracker string `json:"tracker,omitempty"`
UpLimit int64 `json:"up_limit,omitempty"`
Uploaded int64 `json:"uploaded,omitempty"`
UploadedSession int64 `json:"uploaded_session,omitempty"`
Upspeed int64 `json:"upspeed,omitempty"`
Source string `json:"source,omitempty"`
sync.Mutex
}
func (t *Torrent) IsReady() bool {
return (t.AmountLeft <= 0 || t.Progress == 1) && t.TorrentPath != ""
}
func (t *Torrent) discordContext() string {
format := `
**Name:** %s
**Arr:** %s
**Hash:** %s
**MagnetURI:** %s
**Debrid:** %s
`
return fmt.Sprintf(format, t.Name, t.Category, t.Hash, t.MagnetUri, t.Debrid)
}


@@ -2,6 +2,7 @@ package web
import (
    "fmt"
+   "github.com/sirrobot01/decypharr/pkg/store"
    "net/http"
    "strings"
    "time"
@@ -12,36 +13,40 @@ import (
    "github.com/sirrobot01/decypharr/internal/request"
    "github.com/sirrobot01/decypharr/internal/utils"
    "github.com/sirrobot01/decypharr/pkg/arr"
-   "github.com/sirrobot01/decypharr/pkg/qbit"
-   "github.com/sirrobot01/decypharr/pkg/service"
    "github.com/sirrobot01/decypharr/pkg/version"
)
-func (ui *Handler) handleGetArrs(w http.ResponseWriter, r *http.Request) {
-   svc := service.GetService()
-   request.JSONResponse(w, svc.Arr.GetAll(), http.StatusOK)
+func (wb *Web) handleGetArrs(w http.ResponseWriter, r *http.Request) {
+   _store := store.Get()
+   request.JSONResponse(w, _store.Arr().GetAll(), http.StatusOK)
}
-func (ui *Handler) handleAddContent(w http.ResponseWriter, r *http.Request) {
+func (wb *Web) handleAddContent(w http.ResponseWriter, r *http.Request) {
+   ctx := r.Context()
    if err := r.ParseMultipartForm(32 << 20); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
-   svc := service.GetService()
-   results := make([]*qbit.ImportRequest, 0)
+   _store := store.Get()
+   results := make([]*store.ImportRequest, 0)
    errs := make([]string, 0)
    arrName := r.FormValue("arr")
-   notSymlink := r.FormValue("notSymlink") == "true"
-   downloadUncached := r.FormValue("downloadUncached") == "true"
-   if arrName == "" {
-       arrName = "uncategorized"
-   }
-   _arr := svc.Arr.Get(arrName)
+   action := r.FormValue("action")
+   debridName := r.FormValue("debrid")
+   callbackUrl := r.FormValue("callbackUrl")
+   downloadFolder := r.FormValue("downloadFolder")
+   if downloadFolder == "" {
+       downloadFolder = config.Get().QBitTorrent.DownloadFolder
+   }
+   downloadUncached := r.FormValue("downloadUncached") == "true"
+   _arr := _store.Arr().Get(arrName)
    if _arr == nil {
-       _arr = arr.New(arrName, "", "", false, false, &downloadUncached)
+       // These are not found in the config. They are throwaway arrs.
+       _arr = arr.New(arrName, "", "", false, false, &downloadUncached, "", "")
    }
    // Handle URLs
@@ -59,8 +64,10 @@ func (ui *Handler) handleAddContent(w http.ResponseWriter, r *http.Request) {
            errs = append(errs, fmt.Sprintf("Failed to parse URL %s: %v", url, err))
            continue
        }
-       importReq := qbit.NewImportRequest(magnet, _arr, !notSymlink, downloadUncached)
-       if err := importReq.Process(ui.qbit); err != nil {
+       importReq := store.NewImportRequest(debridName, downloadFolder, magnet, _arr, action, downloadUncached, callbackUrl, store.ImportTypeAPI)
+       if err := _store.AddTorrent(ctx, importReq); err != nil {
+           wb.logger.Error().Err(err).Str("url", url).Msg("Failed to add torrent")
            errs = append(errs, fmt.Sprintf("URL %s: %v", url, err))
            continue
        }
@@ -83,9 +90,10 @@ func (ui *Handler) handleAddContent(w http.ResponseWriter, r *http.Request) {
            continue
        }
-       importReq := qbit.NewImportRequest(magnet, _arr, !notSymlink, downloadUncached)
-       err = importReq.Process(ui.qbit)
+       importReq := store.NewImportRequest(debridName, downloadFolder, magnet, _arr, action, downloadUncached, callbackUrl, store.ImportTypeAPI)
+       err = _store.AddTorrent(ctx, importReq)
        if err != nil {
+           wb.logger.Error().Err(err).Str("file", fileHeader.Filename).Msg("Failed to add torrent")
            errs = append(errs, fmt.Sprintf("File %s: %v", fileHeader.Filename, err))
            continue
        }
@@ -94,27 +102,27 @@ func (ui *Handler) handleAddContent(w http.ResponseWriter, r *http.Request) {
    }
    request.JSONResponse(w, struct {
-       Results []*qbit.ImportRequest `json:"results"`
+       Results []*store.ImportRequest `json:"results"`
        Errors  []string `json:"errors,omitempty"`
    }{
        Results: results,
        Errors:  errs,
    }, http.StatusOK)
}
-func (ui *Handler) handleRepairMedia(w http.ResponseWriter, r *http.Request) {
+func (wb *Web) handleRepairMedia(w http.ResponseWriter, r *http.Request) {
    var req RepairRequest
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
-   svc := service.GetService()
+   _store := store.Get()
    var arrs []string
    if req.ArrName != "" {
-       _arr := svc.Arr.Get(req.ArrName)
+       _arr := _store.Arr().Get(req.ArrName)
        if _arr == nil {
            http.Error(w, "No Arrs found to repair", http.StatusNotFound)
            return
@@ -124,15 +132,15 @@ func (ui *Handler) handleRepairMedia(w http.ResponseWriter, r *http.Request) {
    if req.Async {
        go func() {
-           if err := svc.Repair.AddJob(arrs, req.MediaIds, req.AutoProcess, false); err != nil {
-               ui.logger.Error().Err(err).Msg("Failed to repair media")
+           if err := _store.Repair().AddJob(arrs, req.MediaIds, req.AutoProcess, false); err != nil {
+               wb.logger.Error().Err(err).Msg("Failed to repair media")
            }
        }()
        request.JSONResponse(w, "Repair process started", http.StatusOK)
        return
    }
-   if err := svc.Repair.AddJob([]string{req.ArrName}, req.MediaIds, req.AutoProcess, false); err != nil {
+   if err := _store.Repair().AddJob([]string{req.ArrName}, req.MediaIds, req.AutoProcess, false); err != nil {
        http.Error(w, fmt.Sprintf("Failed to repair: %v", err), http.StatusInternalServerError)
        return
    }
@@ -140,16 +148,16 @@ func (ui *Handler) handleRepairMedia(w http.ResponseWriter, r *http.Request) {
    request.JSONResponse(w, "Repair completed", http.StatusOK)
}
-func (ui *Handler) handleGetVersion(w http.ResponseWriter, r *http.Request) {
+func (wb *Web) handleGetVersion(w http.ResponseWriter, r *http.Request) {
    v := version.GetInfo()
    request.JSONResponse(w, v, http.StatusOK)
}
-func (ui *Handler) handleGetTorrents(w http.ResponseWriter, r *http.Request) {
-   request.JSONResponse(w, ui.qbit.Storage.GetAllSorted("", "", nil, "added_on", false), http.StatusOK)
+func (wb *Web) handleGetTorrents(w http.ResponseWriter, r *http.Request) {
+   request.JSONResponse(w, wb.torrents.GetAllSorted("", "", nil, "added_on", false), http.StatusOK)
}
-func (ui *Handler) handleDeleteTorrent(w http.ResponseWriter, r *http.Request) {
+func (wb *Web) handleDeleteTorrent(w http.ResponseWriter, r *http.Request) {
    hash := chi.URLParam(r, "hash")
    category := chi.URLParam(r, "category")
    removeFromDebrid := r.URL.Query().Get("removeFromDebrid") == "true"
@@ -157,11 +165,11 @@ func (ui *Handler) handleDeleteTorrent(w http.ResponseWriter, r *http.Request) {
        http.Error(w, "No hash provided", http.StatusBadRequest)
        return
    }
-   ui.qbit.Storage.Delete(hash, category, removeFromDebrid)
+   wb.torrents.Delete(hash, category, removeFromDebrid)
    w.WriteHeader(http.StatusOK)
}
-func (ui *Handler) handleDeleteTorrents(w http.ResponseWriter, r *http.Request) {
+func (wb *Web) handleDeleteTorrents(w http.ResponseWriter, r *http.Request) {
    hashesStr := r.URL.Query().Get("hashes")
    removeFromDebrid := r.URL.Query().Get("removeFromDebrid") == "true"
    if hashesStr == "" {
@@ -169,33 +177,51 @@ func (ui *Handler) handleDeleteTorrents(w http.ResponseWriter, r *http.Request)
        return
    }
    hashes := strings.Split(hashesStr, ",")
-   ui.qbit.Storage.DeleteMultiple(hashes, removeFromDebrid)
+   wb.torrents.DeleteMultiple(hashes, removeFromDebrid)
    w.WriteHeader(http.StatusOK)
}
-func (ui *Handler) handleGetConfig(w http.ResponseWriter, r *http.Request) {
+func (wb *Web) handleGetConfig(w http.ResponseWriter, r *http.Request) {
+   // Merge config arrs with the arr storage
+   unique := map[string]config.Arr{}
    cfg := config.Get()
-   arrCfgs := make([]config.Arr, 0)
-   svc := service.GetService()
-   for _, a := range svc.Arr.GetAll() {
-       arrCfgs = append(arrCfgs, config.Arr{
-           Host:             a.Host,
-           Name:             a.Name,
-           Token:            a.Token,
-           Cleanup:          a.Cleanup,
-           SkipRepair:       a.SkipRepair,
-           DownloadUncached: a.DownloadUncached,
-       })
-   }
-   cfg.Arrs = arrCfgs
+   arrStorage := store.Get().Arr()
+   // Add existing Arrs from storage
+   for _, a := range arrStorage.GetAll() {
+       if _, ok := unique[a.Name]; !ok {
+           // Only add if not already in the unique map
+           unique[a.Name] = config.Arr{
+               Name:             a.Name,
+               Host:             a.Host,
+               Token:            a.Token,
+               Cleanup:          a.Cleanup,
+               SkipRepair:       a.SkipRepair,
+               DownloadUncached: a.DownloadUncached,
+               SelectedDebrid:   a.SelectedDebrid,
+               Source:           a.Source,
+           }
+       }
+   }
+   for _, a := range cfg.Arrs {
+       if a.Host == "" || a.Token == "" {
+           continue // Skip empty arrs
+       }
+       unique[a.Name] = a
+   }
+   cfg.Arrs = make([]config.Arr, 0, len(unique))
+   for _, a := range unique {
+       cfg.Arrs = append(cfg.Arrs, a)
+   }
    request.JSONResponse(w, cfg, http.StatusOK)
}
-func (ui *Handler) handleUpdateConfig(w http.ResponseWriter, r *http.Request) {
+func (wb *Web) handleUpdateConfig(w http.ResponseWriter, r *http.Request) {
    // Decode the JSON body
    var updatedConfig config.Config
    if err := json.NewDecoder(r.Body).Decode(&updatedConfig); err != nil {
-       ui.logger.Error().Err(err).Msg("Failed to decode config update request")
+       wb.logger.Error().Err(err).Msg("Failed to decode config update request")
        http.Error(w, "Invalid request body: "+err.Error(), http.StatusBadRequest)
        return
    }
@@ -207,6 +233,7 @@ func (ui *Handler) handleUpdateConfig(w http.ResponseWriter, r *http.Request) {
    currentConfig.LogLevel = updatedConfig.LogLevel
    currentConfig.MinFileSize = updatedConfig.MinFileSize
    currentConfig.MaxFileSize = updatedConfig.MaxFileSize
+   currentConfig.RemoveStalledAfter = updatedConfig.RemoveStalledAfter
    currentConfig.AllowedExt = updatedConfig.AllowedExt
    currentConfig.DiscordWebhook = updatedConfig.DiscordWebhook
@@ -227,25 +254,43 @@ func (ui *Handler) handleUpdateConfig(w http.ResponseWriter, r *http.Request) {
        // Clear legacy single debrid if using array
    }
-   if len(updatedConfig.Arrs) > 0 {
-       currentConfig.Arrs = updatedConfig.Arrs
-   }
-   // Update Arrs through the service
-   svc := service.GetService()
-   svc.Arr.Clear() // Clear existing arrs
+   storage := store.Get()
+   arrStorage := storage.Arr()
+   newConfigArrs := make([]config.Arr, 0)
    for _, a := range updatedConfig.Arrs {
-       svc.Arr.AddOrUpdate(&arr.Arr{
-           Name:             a.Name,
-           Host:             a.Host,
-           Token:            a.Token,
-           Cleanup:          a.Cleanup,
-           SkipRepair:       a.SkipRepair,
-           DownloadUncached: a.DownloadUncached,
-       })
+       if a.Name == "" || a.Host == "" || a.Token == "" {
+           // Skip empty or auto-generated arrs
+           continue
+       }
+       newConfigArrs = append(newConfigArrs, a)
    }
-   currentConfig.Arrs = updatedConfig.Arrs
+   currentConfig.Arrs = newConfigArrs
+   // Add each config arr into the arr storage
+   for _, a := range currentConfig.Arrs {
+       if a.Host == "" || a.Token == "" {
+           continue // Skip empty arrs
+       }
+       existingArr := arrStorage.Get(a.Name)
+       if existingArr != nil {
+           // Update existing Arr
+           existingArr.Host = a.Host
+           existingArr.Token = a.Token
+           existingArr.Cleanup = a.Cleanup
+           existingArr.SkipRepair = a.SkipRepair
+           existingArr.DownloadUncached = a.DownloadUncached
+           existingArr.SelectedDebrid = a.SelectedDebrid
+           existingArr.Source = a.Source
+           arrStorage.AddOrUpdate(existingArr)
+       } else {
+           // Create a new Arr if it doesn't exist
+           newArr := arr.New(a.Name, a.Host, a.Token, a.Cleanup, a.SkipRepair, a.DownloadUncached, a.SelectedDebrid, a.Source)
+           arrStorage.AddOrUpdate(newArr)
+       }
+   }
    if err := currentConfig.Save(); err != nil {
        http.Error(w, "Error saving config: "+err.Error(), http.StatusInternalServerError)
        return
@@ -263,25 +308,25 @@ func (ui *Handler) handleUpdateConfig(w http.ResponseWriter, r *http.Request) {
    request.JSONResponse(w, map[string]string{"status": "success"}, http.StatusOK)
}
-func (ui *Handler) handleGetRepairJobs(w http.ResponseWriter, r *http.Request) {
-   svc := service.GetService()
-   request.JSONResponse(w, svc.Repair.GetJobs(), http.StatusOK)
+func (wb *Web) handleGetRepairJobs(w http.ResponseWriter, r *http.Request) {
+   _store := store.Get()
+   request.JSONResponse(w, _store.Repair().GetJobs(), http.StatusOK)
}
-func (ui *Handler) handleProcessRepairJob(w http.ResponseWriter, r *http.Request) {
+func (wb *Web) handleProcessRepairJob(w http.ResponseWriter, r *http.Request) {
    id := chi.URLParam(r, "id")
    if id == "" {
        http.Error(w, "No job ID provided", http.StatusBadRequest)
        return
    }
-   svc := service.GetService()
-   if err := svc.Repair.ProcessJob(id); err != nil {
-       ui.logger.Error().Err(err).Msg("Failed to process repair job")
+   _store := store.Get()
+   if err := _store.Repair().ProcessJob(id); err != nil {
+       wb.logger.Error().Err(err).Msg("Failed to process repair job")
    }
    w.WriteHeader(http.StatusOK)
}
-func (ui *Handler) handleDeleteRepairJob(w http.ResponseWriter, r *http.Request) {
+func (wb *Web) handleDeleteRepairJob(w http.ResponseWriter, r *http.Request) {
    // Read ids from body
    var req struct {
        IDs []string `json:"ids"`
@@ -295,7 +340,22 @@ func (ui *Handler) handleDeleteRepairJob(w http.ResponseWriter, r *http.Request)
        return
    }
-   svc := service.GetService()
-   svc.Repair.DeleteJobs(req.IDs)
+   _store := store.Get()
+   _store.Repair().DeleteJobs(req.IDs)
+   w.WriteHeader(http.StatusOK)
+}
+func (wb *Web) handleStopRepairJob(w http.ResponseWriter, r *http.Request) {
+   id := chi.URLParam(r, "id")
+   if id == "" {
+       http.Error(w, "No job ID provided", http.StatusBadRequest)
+       return
+   }
+   _store := store.Get()
+   if err := _store.Repair().StopJob(id); err != nil {
+       wb.logger.Error().Err(err).Msg("Failed to stop repair job")
+       http.Error(w, "Failed to stop job: "+err.Error(), http.StatusInternalServerError)
+       return
+   }
    w.WriteHeader(http.StatusOK)
}


@@ -6,7 +6,7 @@ import (
    "net/http"
)
-func (ui *Handler) verifyAuth(username, password string) bool {
+func (wb *Web) verifyAuth(username, password string) bool {
    // If you're storing hashed password, use bcrypt to compare
    if username == "" {
        return false
@@ -22,11 +22,11 @@ func (ui *Handler) verifyAuth(username, password string) bool {
    return err == nil
}
-func (ui *Handler) skipAuthHandler(w http.ResponseWriter, r *http.Request) {
+func (wb *Web) skipAuthHandler(w http.ResponseWriter, r *http.Request) {
    cfg := config.Get()
    cfg.UseAuth = false
    if err := cfg.Save(); err != nil {
-       ui.logger.Error().Err(err).Msg("failed to save config")
+       wb.logger.Error().Err(err).Msg("failed to save config")
        http.Error(w, "failed to save config", http.StatusInternalServerError)
        return
    }


@@ -6,7 +6,7 @@ import (
    "net/http"
)
-func (ui *Handler) setupMiddleware(next http.Handler) http.Handler {
+func (wb *Web) setupMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        cfg := config.Get()
        needsAuth := cfg.NeedsSetup()
@@ -24,7 +24,7 @@ func (ui *Handler) setupMiddleware(next http.Handler) http.Handler {
    })
}
-func (ui *Handler) authMiddleware(next http.Handler) http.Handler {
+func (wb *Web) authMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Check if setup is needed
        cfg := config.Get()
@@ -38,7 +38,7 @@ func (ui *Handler) authMiddleware(next http.Handler) http.Handler {
            return
        }
-       session, _ := store.Get(r, "auth-session")
+       session, _ := wb.cookie.Get(r, "auth-session")
        auth, ok := session.Values["authenticated"].(bool)
        if !ok || !auth {


@@ -5,35 +5,36 @@ import (
    "net/http"
)
-func (ui *Handler) Routes() http.Handler {
+func (wb *Web) Routes() http.Handler {
    r := chi.NewRouter()
-   r.Get("/login", ui.LoginHandler)
-   r.Post("/login", ui.LoginHandler)
-   r.Get("/register", ui.RegisterHandler)
-   r.Post("/register", ui.RegisterHandler)
-   r.Get("/skip-auth", ui.skipAuthHandler)
-   r.Get("/version", ui.handleGetVersion)
+   r.Get("/login", wb.LoginHandler)
+   r.Post("/login", wb.LoginHandler)
+   r.Get("/register", wb.RegisterHandler)
+   r.Post("/register", wb.RegisterHandler)
+   r.Get("/skip-auth", wb.skipAuthHandler)
+   r.Get("/version", wb.handleGetVersion)
    r.Group(func(r chi.Router) {
-       r.Use(ui.authMiddleware)
-       r.Use(ui.setupMiddleware)
-       r.Get("/", ui.IndexHandler)
-       r.Get("/download", ui.DownloadHandler)
-       r.Get("/repair", ui.RepairHandler)
-       r.Get("/config", ui.ConfigHandler)
+       r.Use(wb.authMiddleware)
+       r.Use(wb.setupMiddleware)
+       r.Get("/", wb.IndexHandler)
+       r.Get("/download", wb.DownloadHandler)
+       r.Get("/repair", wb.RepairHandler)
+       r.Get("/config", wb.ConfigHandler)
        r.Route("/api", func(r chi.Router) {
-           r.Get("/arrs", ui.handleGetArrs)
-           r.Post("/add", ui.handleAddContent)
-           r.Post("/repair", ui.handleRepairMedia)
-           r.Get("/repair/jobs", ui.handleGetRepairJobs)
-           r.Post("/repair/jobs/{id}/process", ui.handleProcessRepairJob)
-           r.Delete("/repair/jobs", ui.handleDeleteRepairJob)
-           r.Get("/torrents", ui.handleGetTorrents)
-           r.Delete("/torrents/{category}/{hash}", ui.handleDeleteTorrent)
-           r.Delete("/torrents/", ui.handleDeleteTorrents)
-           r.Get("/config", ui.handleGetConfig)
-           r.Post("/config", ui.handleUpdateConfig)
+           r.Get("/arrs", wb.handleGetArrs)
+           r.Post("/add", wb.handleAddContent)
+           r.Post("/repair", wb.handleRepairMedia)
+           r.Get("/repair/jobs", wb.handleGetRepairJobs)
+           r.Post("/repair/jobs/{id}/process", wb.handleProcessRepairJob)
+           r.Post("/repair/jobs/{id}/stop", wb.handleStopRepairJob)
+           r.Delete("/repair/jobs", wb.handleDeleteRepairJob)
+           r.Get("/torrents", wb.handleGetTorrents)
+           r.Delete("/torrents/{category}/{hash}", wb.handleDeleteTorrent)
+           r.Delete("/torrents/", wb.handleDeleteTorrents)
+           r.Get("/config", wb.handleGetConfig)
+           r.Post("/config", wb.handleUpdateConfig)
        })
    })


@@ -17,6 +17,37 @@
[data-bs-theme="dark"] .nav-pills .nav-link.active { [data-bs-theme="dark"] .nav-pills .nav-link.active {
color: white !important; color: white !important;
} }
.config-item.bg-light {
background-color: var(--bs-gray-100) !important;
border-left: 4px solid var(--bs-info) !important;
}
.config-item input[readonly] {
background-color: var(--bs-gray-200);
opacity: 1;
}
.config-item select[readonly] {
background-color: var(--bs-gray-200);
pointer-events: none;
}
/* Dark mode specific overrides */
[data-bs-theme="dark"] .config-item.bg-light {
background-color: var(--bs-gray-800) !important;
border-left: 4px solid var(--bs-info) !important;
}
[data-bs-theme="dark"] .config-item input[readonly] {
background-color: var(--bs-gray-700);
color: var(--bs-gray-300);
}
[data-bs-theme="dark"] .config-item select[readonly] {
background-color: var(--bs-gray-700);
color: var(--bs-gray-300);
}
</style>
<div class="container mt-4">
    <form id="configForm">
@@ -80,7 +111,8 @@
                        <label>
                            <!-- Empty label to keep the button aligned -->
                        </label>
                        <div class="btn btn-primary w-100" onclick="registerMagnetLinkHandler();" id="registerMagnetLink">
                            Open Magnet Links in Decypharr
                        </div>
                    </div>
@@ -103,7 +135,8 @@
                                   id="bindAddress"
                                   name="bind_address"
                                   placeholder="">
                            <small class="form-text text-muted">Bind address for the application (default: all interfaces)</small>
                        </div>
                    </div>
                    <div class="col-md-2 mt-3">
@@ -142,7 +175,7 @@
                        </div>
                    </div>
                </div>
-                    <div class="col-md-6 mt-3">
+                    <div class="col-md-4 mt-3">
                        <div class="form-group">
                            <label for="minFileSize">Minimum File Size</label>
                            <input type="text"
@@ -150,10 +183,11 @@
                                   id="minFileSize"
                                   name="min_file_size"
                                   placeholder="e.g., 10MB, 1GB">
                            <small class="form-text text-muted">Minimum file size to download (empty for no limit)</small>
                        </div>
                    </div>
-                    <div class="col-md-6 mt-3">
+                    <div class="col-md-4 mt-3">
                        <div class="form-group">
                            <label for="maxFileSize">Maximum File Size</label>
                            <input type="text"
@@ -161,13 +195,27 @@
                                   id="maxFileSize"
                                   name="max_file_size"
                                   placeholder="e.g., 50GB, 100MB">
                            <small class="form-text text-muted">Maximum file size to download (empty for no limit)</small>
</div>
</div>
<div class="col-md-4 mt-3">
<div class="form-group">
<label for="removeStalledAfter">Remove Stalled Torrents After</label>
<input type="text"
class="form-control"
id="removeStalledAfter"
name="remove_stalled_after"
placeholder="e.g., 1m, 30s, 1h">
<small class="form-text text-muted">Remove torrents that have been stalled for this
duration</small>
                        </div>
                    </div>
                </div>
            </div>
            <div class="mt-4 d-flex justify-content-end">
                <button type="button" class="btn btn-primary next-step" data-next="2">Next <i class="bi bi-arrow-right"></i></button>
            </div>
        </div>
@@ -185,7 +233,8 @@
                <button type="button" class="btn btn-outline-secondary prev-step" data-prev="1">
                    <i class="bi bi-arrow-left"></i> Previous
                </button>
                <button type="button" class="btn btn-primary next-step" data-next="3">Next <i class="bi bi-arrow-right"></i></button>
            </div>
        </div>
@@ -195,23 +244,31 @@
                    <div class="row">
                        <div class="col-md-4 mb-3">
                            <label class="form-label" for="qbit.download_folder">Symlink/Download Folder</label>
                            <input type="text" class="form-control" name="qbit.download_folder" id="qbit.download_folder">
                            <small class="form-text text-muted">Folder where the downloaded files will be stored</small>
                        </div>
                        <div class="col-md-3 mb-3">
                            <label class="form-label" for="qbit.refresh_interval">Refresh Interval (seconds)</label>
                            <input type="number" class="form-control" name="qbit.refresh_interval" id="qbit.refresh_interval">
                        </div>
                        <div class="col-md-5 mb-3">
                            <label class="form-label" for="qbit.max_downloads">Maximum Downloads Limit</label>
                            <input type="number" class="form-control" name="qbit.max_downloads" id="qbit.max_downloads">
                            <small class="form-text text-muted">Maximum number of simultaneous local downloads across all torrents</small>
                        </div>
                        <div class="col mb-3">
                            <div class="form-check me-3 d-inline-block">
                                <input type="checkbox" class="form-check-input" name="qbit.skip_pre_cache" id="qbit.skip_pre_cache">
                                <label class="form-check-label" for="qbit.skip_pre_cache">Disable Pre-Cache On Download</label>
                                <small class="form-text text-muted">Unchecking this caches a tiny part of your file to speed up import</small>
                            </div>
                        </div>
                    </div>
</div> </div>
@@ -220,7 +277,8 @@
                <button type="button" class="btn btn-outline-secondary prev-step" data-prev="2">
                    <i class="bi bi-arrow-left"></i> Previous
                </button>
                <button type="button" class="btn btn-primary next-step" data-next="4">Next <i class="bi bi-arrow-right"></i></button>
            </div>
        </div>
</div> </div>
@@ -238,55 +296,72 @@
                <button type="button" class="btn btn-outline-secondary prev-step" data-prev="3">
                    <i class="bi bi-arrow-left"></i> Previous
                </button>
                <button type="button" class="btn btn-primary next-step" data-next="5">Next <i class="bi bi-arrow-right"></i></button>
            </div>
        </div>
</div> </div>
        <!-- Step 5: Repair Configuration -->
        <div class="setup-step d-none" id="step5">
            <div class="section mb-5">
-                <div class="row mb-2">
+                <div class="row mb-3">
                    <div class="col">
                        <div class="form-check me-3 d-inline-block">
                            <input type="checkbox" class="form-check-input" name="repair.enabled" id="repair.enabled">
-                            <label class="form-check-label" for="repair.enabled">Enable Repair</label>
+                            <label class="form-check-label" for="repair.enabled">Enable Scheduled Repair</label>
                        </div>
                    </div>
                </div>
-                <div id="repairCol" class="d-none">
+                <div>
                    <div class="row">
                        <div class="col-md-4 mb-3">
-                            <label class="form-label" for="repair.interval">Interval</label>
+                            <label class="form-label" for="repair.interval">Scheduled Interval</label>
                            <input type="text" class="form-control" name="repair.interval" id="repair.interval" placeholder="e.g., 24h">
                            <small class="form-text text-muted">Interval for the repair process (e.g., 24h, 1d, 03:00, or a crontab)</small>
</div>
<div class="col-md-3 mb-3">
<label class="form-label" for="repair.workers">Workers</label>
<input type="text" class="form-control" name="repair.workers" id="repair.workers">
<small class="form-text text-muted">Number of workers to use for the repair
process</small>
</div> </div>
<div class="col-md-5 mb-3"> <div class="col-md-5 mb-3">
<label class="form-label" for="repair.zurg_url">Zurg URL</label> <label class="form-label" for="repair.zurg_url">Zurg URL</label>
<input type="text" class="form-control" name="repair.zurg_url" id="repair.zurg_url" placeholder="http://zurg:9999"> <input type="text" class="form-control" name="repair.zurg_url" id="repair.zurg_url"
<small class="form-text text-muted">Speeds up the repair process by using Zurg</small> placeholder="http://zurg:9999">
<small class="form-text text-muted">If you have Zurg running, you can use it to
speed up the repair process</small>
</div> </div>
</div> </div>
<div class="row"> <div class="row">
<div class="col-md-3 mb-3"> <div class="col-md-4 mb-3">
<label class="form-label" for="repair.strategy">Repair Strategy</label>
<select class="form-select" name="repair.strategy" id="repair.strategy">
<option value="per_torrent" selected>Per Torrent</option>
<option value="per_file">Per File</option>
</select>
<small class="form-text text-muted">How to handle repairs, per torrent or per file</small>
</div>
<div class="col-md-4 mb-3">
<div class="form-check"> <div class="form-check">
<input type="checkbox" class="form-check-input" name="repair.use_webdav" id="repair.use_webdav"> <input type="checkbox" class="form-check-input" name="repair.use_webdav"
id="repair.use_webdav">
<label class="form-check-label" for="repair.use_webdav">Use Webdav</label> <label class="form-check-label" for="repair.use_webdav">Use Webdav</label>
</div> </div>
<small class="form-text text-muted">Use Internal Webdav for repair(make sure webdav is enabled in the debrid section</small> <small class="form-text text-muted">Use Internal Webdav for repair(make sure webdav
is enabled in the debrid section</small>
</div> </div>
<div class="col-md-3 mb-3"> <div class="col-md-4 mb-3">
<div class="form-check"> <div class="form-check">
<input type="checkbox" class="form-check-input" name="repair.run_on_start" id="repair.run_on_start"> <input type="checkbox" class="form-check-input" name="repair.auto_process"
<label class="form-check-label" for="repair.run_on_start">Run on Start</label> id="repair.auto_process">
</div>
<small class="form-text text-muted">Run repair on startup</small>
</div>
<div class="col-md-3 mb-3">
<div class="form-check">
<input type="checkbox" class="form-check-input" name="repair.auto_process" id="repair.auto_process">
<label class="form-check-label" for="repair.auto_process">Auto Process</label> <label class="form-check-label" for="repair.auto_process">Auto Process</label>
</div> </div>
<small class="form-text text-muted">Automatically process the repair job(delete broken symlinks and searches the arr again)</small> <small class="form-text text-muted">Automatically process the repair job(delete
broken symlinks and searches the arr again)</small>
</div> </div>
</div> </div>
</div> </div>
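The interval field above accepts several formats (24h, 1d, 03:00, or a crontab expression). As a sketch of how those formats differ, here is a hypothetical client-side classifier; `classifyInterval` is not part of this patch, and the real parsing happens server-side in Go:

```javascript
// Hypothetical classifier for the accepted repair-interval formats.
// Not part of the patch; illustrates the three notations the hint lists.
function classifyInterval(value) {
    if (/^\d+[smhd]$/.test(value)) return 'duration';      // e.g. 24h, 1d
    if (/^\d{2}:\d{2}$/.test(value)) return 'daily-time';  // e.g. 03:00
    if (value.trim().split(/\s+/).length === 5) return 'crontab';
    return 'unknown';
}

console.log(classifyInterval('24h'));       // duration
console.log(classifyInterval('03:00'));     // daily-time
console.log(classifyInterval('0 3 * * *')); // crontab
```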
@@ -326,21 +401,57 @@
 </div>
 <div class="col-md-6 mb-3">
     <label class="form-label" for="debrid[${index}].api_key" >API Key</label>
-    <input type="password" class="form-control" name="debrid[${index}].api_key" id="debrid[${index}].api_key" required>
+    <div class="password-toggle-container">
+        <input type="password" class="form-control has-toggle" name="debrid[${index}].api_key" id="debrid[${index}].api_key" required>
+        <button type="button" class="password-toggle-btn" onclick="togglePassword('debrid[${index}].api_key');">
+            <i class="bi bi-eye" id="debrid[${index}].api_key_icon"></i>
+        </button>
+    </div>
     <small class="form-text text-muted">API Key for the debrid service</small>
 </div>
 <div class="col-md-6 mb-3">
+    <label class="form-label" for="debrid[${index}].download_api_keys">Download API Keys</label>
+    <div class="password-toggle-container">
+        <textarea class="form-control has-toggle" name="debrid[${index}].download_api_keys" id="debrid[${index}].download_api_keys" rows="3" style="font-family: monospace; resize: vertical;" placeholder="Enter one API key per line&#10;key1&#10;key2&#10;key3"></textarea>
+        <button type="button" class="password-toggle-btn" style="top: 20px;" onclick="togglePasswordTextarea('debrid[${index}].download_api_keys');">
+            <i class="bi bi-eye" id="debrid[${index}].download_api_keys_icon"></i>
+        </button>
+    </div>
+    <small class="form-text text-muted">Multiple API keys for download (one per line). If empty, main API key will be used.</small>
+</div>
+<div class="col-md-3 mb-3">
     <label class="form-label" for="debrid[${index}].folder">Mount/Rclone Folder</label>
     <input type="text" class="form-control" name="debrid[${index}].folder" id="debrid[${index}].folder" placeholder="e.g. /mnt/remote/realdebrid" required>
     <small class="form-text text-muted">Path to where you've mounted the debrid files. Usually your rclone path</small>
 </div>
-<div class="col-md-6 mb-3">
+<div class="col-md-3 mb-3">
     <label class="form-label" for="debrid[${index}].rate_limit" >Rate Limit</label>
     <input type="text" class="form-control" name="debrid[${index}].rate_limit" id="debrid[${index}].rate_limit" placeholder="e.g., 200/minute" value="250/minute">
     <small class="form-text text-muted">Rate limit for the debrid service. Confirm your debrid service rate limit</small>
 </div>
 </div>
-<div class="row">
+<div class="row mb-3">
+    <div class="col-md-3">
+        <div class="form-check me-3">
+            <input type="checkbox" class="form-check-input useWebdav" name="debrid[${index}].use_webdav" id="debrid[${index}].use_webdav">
+            <label class="form-check-label" for="debrid[${index}].use_webdav">Enable WebDav Server</label>
+        </div>
+        <small class="form-text text-muted">Create an internal webdav for this debrid</small>
+    </div>
     <div class="col-md-3">
         <div class="form-check me-3">
             <input type="checkbox" class="form-check-input" name="debrid[${index}].download_uncached" id="debrid[${index}].download_uncached">
@@ -348,13 +459,6 @@
     </div>
     <small class="form-text text-muted">Download uncached files from the debrid service</small>
 </div>
-<div class="col-md-3">
-    <div class="form-check me-3">
-        <input type="checkbox" class="form-check-input" name="debrid[${index}].check_cached" id="debrid[${index}].check_cached">
-        <label class="form-check-label" for="debrid[${index}].check_cached" disabled>Check Cached</label>
-    </div>
-    <small class="form-text text-muted">Check if the file is cached before downloading(Disabled)</small>
-</div>
 <div class="col-md-3">
     <div class="form-check me-3">
         <input type="checkbox" class="form-check-input" name="debrid[${index}].add_samples" id="debrid[${index}].add_samples">
@@ -364,14 +468,15 @@
 </div>
 <div class="col-md-3">
     <div class="form-check me-3">
-        <input type="checkbox" class="form-check-input useWebdav" name="debrid[${index}].use_webdav" id="debrid[${index}].use_webdav">
-        <label class="form-check-label" for="debrid[${index}].use_webdav">Enable WebDav Server</label>
+        <input type="checkbox" class="form-check-input" name="debrid[${index}].unpack_rar" id="debrid[${index}].unpack_rar">
+        <label class="form-check-label" for="debrid[${index}].unpack_rar">Unpack RAR</label>
     </div>
-    <small class="form-text text-muted">Create an internal webdav for this debrid</small>
+    <small class="form-text text-muted">Preprocess RARed torrents to allow reading the files inside</small>
 </div>
 </div>
-<div class="webdav d-none">
-    <h6 class="pb-2">Webdav</h6>
+<div class="webdav d-none mt-1">
+    <hr/>
+    <h6 class="pb-2">Webdav Settings</h6>
     <div class="row mt-3">
         <div class="col-md-3 mb-3">
             <label class="form-label" for="debrid[${index}].torrents_refresh_interval">Torrents Refresh Interval</label>
@@ -422,7 +527,17 @@
 </div>
 <div class="col-md-3 mb-3">
     <label class="form-label" for="debrid[${index}].rc_pass">Rclone RC Password</label>
-    <input type="password" class="form-control webdav-field" name="debrid[${index}].rc_pass" id="debrid[${index}].rc_pass">
+    <div class="password-toggle-container">
+        <input type="password" class="form-control webdav-field has-toggle" name="debrid[${index}].rc_pass" id="debrid[${index}].rc_pass">
+        <button type="button" class="password-toggle-btn" onclick="togglePassword('debrid[${index}].rc_pass');">
+            <i class="bi bi-eye" id="debrid[${index}].rc_pass_icon"></i>
+        </button>
+    </div>
     <small class="form-text text-muted">Rclone RC Password for the webdav server</small>
 </div>
 <div class="col-md-3 mb-3">
@@ -434,12 +549,12 @@
     </div>
 </div>
 <div class="row mt-3">
-    <div class="col mt-3">
-        <h6 class="pb-2">Custom Folders</h6>
+    <div class="col">
+        <h6 class="pb-2">Virtual Folders</h6>
         <div class="col-12">
             <p class="text-muted small">Create virtual directories with filters to organize your content</p>
             <div class="directories-container" id="debrid[${index}].directories">
-                <!-- Dynamic directories will be added here -->
             </div>
             <button type="button" class="btn btn-secondary mt-2 webdav-field" onclick="addDirectory(${index});">
                 <i class="bi bi-plus"></i> Add Directory
@@ -528,7 +643,7 @@
 const filterTemplate = (debridIndex, dirIndex, filterIndex, filterType) => {
     let placeholder, label;
-    switch(filterType) {
+    switch (filterType) {
         case 'include':
             placeholder = "Text that should be included in filename";
             label = "Include";
@@ -622,49 +737,87 @@
 `;
 };
-const arrTemplate = (index) => `
-<div class="config-item position-relative mb-3 p-3 border rounded">
+const arrTemplate = (index, data = {}) => `
+<div class="config-item position-relative mb-3 p-3 border rounded ${data.source === 'auto' ? 'bg-light' : ''}">
+    ${data.source !== 'auto' ? `
     <button type="button" class="btn btn-sm btn-danger position-absolute top-0 end-0 m-2"
             onclick="if(confirm('Are you sure you want to delete this arr?')) this.closest('.config-item').remove();"
             title="Delete this arr">
         <i class="bi bi-trash"></i>
     </button>
+    ` : `
+    <div class="position-absolute top-0 end-0 m-2">
+        <span class="badge bg-info">Auto-detected</span>
+    </div>
+    `}
     <div class="row">
+        <input type="hidden" name="arr[${index}].source" value="${data.source || ''}">
         <div class="col-md-4 mb-3">
             <label for="arr[${index}].name" class="form-label">Name</label>
-            <input type="text" class="form-control" name="arr[${index}].name" id="arr[${index}].name" required>
+            <input type="text" class="form-control" name="arr[${index}].name" id="arr[${index}].name" ${data.source === 'auto' ? 'readonly' : 'required'}>
+            ${data.source === 'auto' ? '<input type="hidden" name="arr[' + index + '].source" value="auto">' : ''}
         </div>
         <div class="col-md-4 mb-3">
             <label for="arr[${index}].host" class="form-label">Host</label>
-            <input type="text" class="form-control" name="arr[${index}].host" id="arr[${index}].host" required>
+            <input type="text" class="form-control" name="arr[${index}].host" id="arr[${index}].host" ${data.source === 'auto' ? 'readonly' : 'required'}>
         </div>
         <div class="col-md-4 mb-3">
-            <label for"arr[${index}].token" class="form-label">API Token</label>
-            <input type="password" class="form-control" name="arr[${index}].token" id="arr[${index}].token" required>
+            <label for="arr[${index}].token" class="form-label">API Token</label>
+            <div class="password-toggle-container">
+                <input type="password" class="form-control has-toggle" name="arr[${index}].token" id="arr[${index}].token" ${data.source === 'auto' ? 'readonly' : 'required'}>
+                <button type="button" class="password-toggle-btn" onclick="togglePassword('arr[${index}].token');" ${data.source === 'auto' ? 'disabled style="opacity: 0.5;"' : ''}>
+                    <i class="bi bi-eye" id="arr[${index}].token_icon"></i>
+                </button>
+            </div>
         </div>
     </div>
     <div class="row">
+        <div class="col-md-4 mb-3">
+            <label for="arr[${index}].selected_debrid" class="form-label">Select Arr Debrid</label>
+            <select class="form-select" name="arr[${index}].selected_debrid" id="arr[${index}].selected_debrid">
+                <option value="" selected disabled>Select Arr Debrid</option>
+                <option value="realdebrid">Real Debrid</option>
+                <option value="alldebrid">AllDebrid</option>
+                <option value="debrid_link">Debrid Link</option>
+                <option value="torbox">Torbox</option>
+            </select>
+        </div>
         <div class="col-md-2 mb-3">
             <div class="form-check">
                 <label for="arr[${index}].cleanup" class="form-check-label">Cleanup Queue</label>
                 <input type="checkbox" class="form-check-input" name="arr[${index}].cleanup" id="arr[${index}].cleanup">
             </div>
         </div>
         <div class="col-md-2 mb-3">
             <div class="form-check">
                 <label for="arr[${index}].skip_repair" class="form-check-label">Skip Repair</label>
                 <input type="checkbox" class="form-check-input" name="arr[${index}].skip_repair" id="arr[${index}].skip_repair">
             </div>
         </div>
         <div class="col-md-2 mb-3">
             <div class="form-check">
                 <label for="arr[${index}].download_uncached" class="form-check-label">Download Uncached</label>
                 <input type="checkbox" class="form-check-input" name="arr[${index}].download_uncached" id="arr[${index}].download_uncached">
             </div>
         </div>
     </div>
 </div>
 `;
 const debridDirectoryCounts = {};
 const directoryFilterCounts = {};
@@ -720,6 +873,7 @@
     debridDirectoryCounts[debridIndex]++;
     return dirIndex;
 }
+
 function addFilter(debridIndex, dirIndex, filterType, filterValue = "") {
     const dirKey = `${debridIndex}-${dirIndex}`;
     if (!directoryFilterCounts[dirKey]) {
@@ -752,7 +906,7 @@
 }
 // Main functionality
-document.addEventListener('DOMContentLoaded', function() {
+document.addEventListener('DOMContentLoaded', function () {
     let debridCount = 0;
     let arrCount = 0;
     let currentStep = 1;
@@ -766,21 +920,21 @@
 // Step navigation
 document.querySelectorAll('.nav-link').forEach(navLink => {
-    navLink.addEventListener('click', function() {
+    navLink.addEventListener('click', function () {
         const stepNumber = parseInt(this.getAttribute('data-step'));
         goToStep(stepNumber);
     });
 });
 document.querySelectorAll('.next-step').forEach(button => {
-    button.addEventListener('click', function() {
+    button.addEventListener('click', function () {
         const nextStep = parseInt(this.getAttribute('data-next'));
         goToStep(nextStep);
     });
 });
 document.querySelectorAll('.prev-step').forEach(button => {
-    button.addEventListener('click', function() {
+    button.addEventListener('click', function () {
         const prevStep = parseInt(this.getAttribute('data-prev'));
         goToStep(prevStep);
     });
@@ -835,9 +989,6 @@
 // Load Repair config
 if (config.repair) {
-    if (config.repair.enabled) {
-        document.getElementById('repairCol').classList.remove('d-none');
-    }
     Object.entries(config.repair).forEach(([key, value]) => {
         const input = document.querySelector(`[name="repair.${key}"]`);
         if (input) {
@@ -862,6 +1013,9 @@
 if (config.max_file_size) {
     document.querySelector('[name="max_file_size"]').value = config.max_file_size;
 }
+if (config.remove_stalled_after) {
+    document.querySelector('[name="remove_stalled_after"]').value = config.remove_stalled_after;
+}
 if (config.discord_webhook_url) {
     document.querySelector('[name="discord_webhook_url"]').value = config.discord_webhook_url;
 }
@@ -894,7 +1048,7 @@
     addArrConfig();
 });
-$(document).on('change', '.useWebdav', function() {
+$(document).on('change', '.useWebdav', function () {
     const webdavConfig = $(this).closest('.config-item').find(`.webdav`);
     if (webdavConfig.length === 0) return;
@@ -914,14 +1068,6 @@
     }
 });
-$(document).on('change', 'input[name="repair.enabled"]', function() {
-    if (this.checked) {
-        $('#repairCol').removeClass('d-none');
-    } else {
-        $('#repairCol').addClass('d-none');
-    }
-});
 async function saveConfig(e) {
     const submitButton = e.target.querySelector('button[type="submit"]');
     submitButton.disabled = true;
@@ -945,7 +1091,7 @@
 // Save config logic
 const response = await fetcher('/api/config', {
     method: 'POST',
-    headers: { 'Content-Type': 'application/json' },
+    headers: {'Content-Type': 'application/json'},
     body: JSON.stringify(config)
 });
@@ -997,7 +1143,7 @@
 if (data.use_webdav && data.directories) {
     Object.entries(data.directories).forEach(([dirName, dirData]) => {
-        const dirIndex = addDirectory(debridCount, { name: dirName });
+        const dirIndex = addDirectory(debridCount, {name: dirName});
         // Add filters if available
         if (dirData.filters) {
@@ -1007,6 +1153,20 @@
         }
     });
 }
+if (data.download_api_keys && Array.isArray(data.download_api_keys)) {
+    const downloadKeysTextarea = container.querySelector(`[name="debrid[${debridCount}].download_api_keys"]`);
+    if (downloadKeysTextarea) {
+        downloadKeysTextarea.value = data.download_api_keys.join('\n');
+    }
+}
+}
+const downloadKeysTextarea = newDebrid.querySelector(`[name="debrid[${debridCount}].download_api_keys"]`);
+if (downloadKeysTextarea) {
+    downloadKeysTextarea.style.webkitTextSecurity = 'disc';
+    downloadKeysTextarea.style.textSecurity = 'disc';
+    downloadKeysTextarea.setAttribute('data-password-visible', 'false');
 }
 debridCount++;
@@ -1014,11 +1174,10 @@
 function addArrConfig(data = {}) {
     const container = document.getElementById('arrConfigs');
-    container.insertAdjacentHTML('beforeend', arrTemplate(arrCount));
+    container.insertAdjacentHTML('beforeend', arrTemplate(arrCount, data));
-    // Add a delete button to the new arr
+    // Don't add delete button for auto-detected arrs since it's already handled in template
     const newArr = container.lastElementChild;
-    addDeleteButton(newArr, `Delete this arr`);
     if (data) {
         Object.entries(data).forEach(([key, value]) => {
@@ -1043,7 +1202,7 @@
 deleteBtn.innerHTML = '<i class="bi bi-trash"></i>';
 deleteBtn.title = tooltip;
-deleteBtn.addEventListener('click', function() {
+deleteBtn.addEventListener('click', function () {
     if (confirm('Are you sure you want to delete this item?')) {
         element.remove();
     }
@@ -1059,13 +1218,14 @@
 allowed_file_types: document.getElementById('allowedExtensions').value.split(',').map(ext => ext.trim()).filter(Boolean),
 min_file_size: document.getElementById('minFileSize').value,
 max_file_size: document.getElementById('maxFileSize').value,
+remove_stalled_after: document.getElementById('removeStalledAfter').value,
 url_base: document.getElementById('urlBase').value,
 bind_address: document.getElementById('bindAddress').value,
 port: document.getElementById('port').value,
 debrids: [],
 qbittorrent: {
     download_folder: document.querySelector('[name="qbit.download_folder"]').value,
-    refresh_interval: parseInt(document.querySelector('[name="qbit.refresh_interval"]').value || '0', 10),
+    refresh_interval: parseInt(document.querySelector('[name="qbit.refresh_interval"]').value, 10),
     max_downloads: parseInt(document.querySelector('[name="qbit.max_downloads"]').value || '0', 5),
     skip_pre_cache: document.querySelector('[name="qbit.skip_pre_cache"]').checked
 },
@@ -1073,8 +1233,9 @@
 repair: {
     enabled: document.querySelector('[name="repair.enabled"]').checked,
     interval: document.querySelector('[name="repair.interval"]').value,
-    run_on_start: document.querySelector('[name="repair.run_on_start"]').checked,
     zurg_url: document.querySelector('[name="repair.zurg_url"]').value,
+    strategy: document.querySelector('[name="repair.strategy"]').value,
+    workers: parseInt(document.querySelector('[name="repair.workers"]').value),
     use_webdav: document.querySelector('[name="repair.use_webdav"]').checked,
     auto_process: document.querySelector('[name="repair.auto_process"]').checked
 }
@@ -1091,7 +1252,7 @@
 folder: document.querySelector(`[name="debrid[${i}].folder"]`).value,
 rate_limit: document.querySelector(`[name="debrid[${i}].rate_limit"]`).value,
 download_uncached: document.querySelector(`[name="debrid[${i}].download_uncached"]`).checked,
-check_cached: document.querySelector(`[name="debrid[${i}].check_cached"]`).checked,
+unpack_rar: document.querySelector(`[name="debrid[${i}].unpack_rar"]`).checked,
 add_samples: document.querySelector(`[name="debrid[${i}].add_samples"]`).checked,
 use_webdav: document.querySelector(`[name="debrid[${i}].use_webdav"]`).checked
 };
@@ -1117,7 +1278,7 @@
 const nameInput = document.querySelector(`[name="debrid[${i}].directory[${j}].name"]`);
 if (nameInput && nameInput.value) {
     const dirName = nameInput.value;
-    debrid.directories[dirName] = { filters: {} };
+    debrid.directories[dirName] = {filters: {}};
     // Get directory key for filter counting
     const dirKey = `${i}-${j}`;
@@ -1137,6 +1298,14 @@
     }
 }
+let downloadApiKeysTextarea = document.querySelector(`[name="debrid[${i}].download_api_keys"]`);
+if (downloadApiKeysTextarea && downloadApiKeysTextarea.value.trim()) {
+    debrid.download_api_keys = downloadApiKeysTextarea.value
+        .split('\n')
+        .map(key => key.trim())
+        .filter(key => key.length > 0);
+}
 if (debrid.name && debrid.api_key) {
     config.debrids.push(debrid);
 }
@@ -1153,7 +1322,9 @@
 token: document.querySelector(`[name="arr[${i}].token"]`).value,
 cleanup: document.querySelector(`[name="arr[${i}].cleanup"]`).checked,
 skip_repair: document.querySelector(`[name="arr[${i}].skip_repair"]`).checked,
-download_uncached: document.querySelector(`[name="arr[${i}].download_uncached"]`).checked
+download_uncached: document.querySelector(`[name="arr[${i}].download_uncached"]`).checked,
+selected_debrid: document.querySelector(`[name="arr[${i}].selected_debrid"]`).value,
+source: document.querySelector(`[name="arr[${i}].source"]`).value
 };
 if (arr.name && arr.host) {

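Taken together, the setup-page changes above reshape the `repair` section of the config that `saveConfig` posts to `/api/config`. A sketch of that JSON shape; the field names come from the diff, but the values here are illustrative examples, not defaults:

```javascript
// Illustrative shape of the repair block built by saveConfig.
// Field names are from the patch; values are examples only.
const repair = {
    enabled: true,
    interval: "24h",          // also accepts 1d, 03:00, or a crontab expression
    zurg_url: "http://zurg:9999",
    strategy: "per_torrent",  // or "per_file"
    workers: 4,
    use_webdav: false,
    auto_process: true        // delete broken symlinks and re-search the arr
};
console.log(JSON.stringify(repair, null, 2));
```

Note that `run_on_start` no longer appears: the patch drops it from both the form and the serialized config.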

@@ -17,18 +17,43 @@
 <hr />
-<div class="mb-3">
-    <label for="category" class="form-label">Enter Category</label>
-    <input type="text" class="form-control" id="category" name="arr" placeholder="Enter Category (e.g sonarr, radarr, radarr4k)">
+<div class="row mb-3">
+    <div class="col">
+        <label for="downloadAction" class="form-label">Post Download Action</label>
+        <select class="form-select" id="downloadAction" name="downloadAction">
+            <option value="symlink" selected>Symlink</option>
+            <option value="download">Download</option>
+            <option value="none">None</option>
+        </select>
+        <small class="text-muted">Choose how to handle the added torrent (Default to symlinks)</small>
+    </div>
+    <div class="col">
+        <label for="downloadFolder" class="form-label">Download Folder</label>
+        <input type="text" class="form-control" id="downloadFolder" name="downloadFolder" placeholder="Enter Download Folder (e.g /downloads/torrents)">
+        <small class="text-muted">Default is your qbittorent download_folder</small>
+    </div>
+    <div class="col">
+        <label for="arr" class="form-label">Arr (if any)</label>
+        <input type="text" class="form-control" id="arr" name="arr" placeholder="Enter Category (e.g sonarr, radarr, radarr4k)">
+        <small class="text-muted">Optional, leave empty if not using Arr</small>
+    </div>
 </div>
+{{ if .HasMultiDebrid }}
 <div class="row mb-3">
-    <div class="col-md-2 mb-3">
-        <div class="form-check d-inline-block me-3">
-            <input type="checkbox" class="form-check-input" id="isSymlink" name="notSymlink">
-            <label class="form-check-label" for="isSymlink">No Symlinks</label>
-        </div>
+    <div class="col-md-6">
+        <label for="debrid" class="form-label">Select Debrid</label>
+        <select class="form-select" id="debrid" name="debrid">
+            {{ range $index, $debrid := .Debrids }}
+            <option value="{{ $debrid }}" {{ if eq $index 0 }}selected{{end}}>{{ $debrid }}</option>
+            {{ end }}
+        </select>
+        <small class="text-muted">Select a debrid service to use for this download</small>
     </div>
+</div>
+{{ end }}
+<div class="row mb-3">
     <div class="col-md-2 mb-3">
         <div class="form-check d-inline-block">
             <input type="checkbox" class="form-check-input" name="downloadUncached" id="downloadUncached">
@@ -48,23 +73,27 @@
        </div>
        <script>
        let downloadFolder = '{{ .DownloadFolder }}';
        document.addEventListener('DOMContentLoaded', () => {
            const loadSavedDownloadOptions = () => {
                const savedCategory = localStorage.getItem('downloadCategory');
-                const savedSymlink = localStorage.getItem('downloadSymlink');
                const savedAction = localStorage.getItem('downloadAction');
                const savedDownloadUncached = localStorage.getItem('downloadUncached');
-                document.getElementById('category').value = savedCategory || '';
-                document.getElementById('isSymlink').checked = savedSymlink === 'true';
                document.getElementById('arr').value = savedCategory || '';
                document.getElementById('downloadAction').value = savedAction || 'symlink';
                document.getElementById('downloadUncached').checked = savedDownloadUncached === 'true';
                document.getElementById('downloadFolder').value = localStorage.getItem('downloadFolder') || downloadFolder || '';
            };
            const saveCurrentDownloadOptions = () => {
-                const category = document.getElementById('category').value;
-                const isSymlink = document.getElementById('isSymlink').checked;
                const arr = document.getElementById('arr').value;
                const downloadAction = document.getElementById('downloadAction').value;
                const downloadUncached = document.getElementById('downloadUncached').checked;
                const downloadFolder = document.getElementById('downloadFolder').value;
-                localStorage.setItem('downloadCategory', category);
-                localStorage.setItem('downloadSymlink', isSymlink.toString());
                localStorage.setItem('downloadCategory', arr);
                localStorage.setItem('downloadAction', downloadAction);
                localStorage.setItem('downloadUncached', downloadUncached.toString());
                localStorage.setItem('downloadFolder', downloadFolder);
            };
            // Load the last used download options from local storage
@@ -108,9 +137,11 @@
                return;
            }
-            formData.append('arr', document.getElementById('category').value);
-            formData.append('notSymlink', document.getElementById('isSymlink').checked);
            formData.append('arr', document.getElementById('arr').value);
            formData.append('downloadFolder', document.getElementById('downloadFolder').value);
            formData.append('action', document.getElementById('downloadAction').value);
            formData.append('downloadUncached', document.getElementById('downloadUncached').checked);
            formData.append('debrid', document.getElementById('debrid') ? document.getElementById('debrid').value : '');
            const response = await fetcher('/api/add', {
                method: 'POST',
@@ -139,8 +170,8 @@
            });
            // Save the download options to local storage when they change
-            document.getElementById('category').addEventListener('change', saveCurrentDownloadOptions);
-            document.getElementById('isSymlink').addEventListener('change', saveCurrentDownloadOptions);
            document.getElementById('arr').addEventListener('change', saveCurrentDownloadOptions);
            document.getElementById('downloadAction').addEventListener('change', saveCurrentDownloadOptions);
            // Read the URL parameters for a magnet link and add it to the download queue if found
            const urlParams = new URLSearchParams(window.location.search);
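The save/restore flow in this hunk is a plain key-per-field round-trip through `localStorage`. A minimal sketch of that round-trip, using an in-memory `Map` as a stand-in for `window.localStorage` (the key and field names mirror the template; the shim itself is hypothetical):

```javascript
// In-memory stand-in for window.localStorage; only get/set are needed here.
const storage = new Map();

function saveDownloadOptions(opts) {
  // Mirrors saveCurrentDownloadOptions: one storage key per form field.
  storage.set('downloadCategory', opts.arr);
  storage.set('downloadAction', opts.action);
  storage.set('downloadUncached', String(opts.uncached));
  storage.set('downloadFolder', opts.folder);
}

function loadDownloadOptions(defaults = {}) {
  // Mirrors loadSavedDownloadOptions: fall back to defaults when unset.
  return {
    arr: storage.get('downloadCategory') || '',
    action: storage.get('downloadAction') || 'symlink',
    uncached: storage.get('downloadUncached') === 'true',
    folder: storage.get('downloadFolder') || defaults.folder || '',
  };
}

saveDownloadOptions({ arr: 'sonarr', action: 'symlink', uncached: true, folder: '/mnt/dl' });
console.log(loadDownloadOptions());
// → { arr: 'sonarr', action: 'symlink', uncached: true, folder: '/mnt/dl' }
```

Note the booleans are stringified on write and compared against `'true'` on read, exactly as the template does, since `localStorage` only stores strings.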


@@ -64,6 +64,23 @@
            </div>
        </div>
    </div>
<!-- Context menu for torrent rows -->
<div class="dropdown-menu context-menu shadow" id="torrentContextMenu">
<h6 class="dropdown-header torrent-name text-truncate"></h6>
<div class="dropdown-divider"></div>
<button class="dropdown-item" data-action="copy-magnet">
<i class="bi bi-magnet me-2"></i>Copy Magnet Link
</button>
<button class="dropdown-item" data-action="copy-name">
<i class="bi bi-copy me-2"></i>Copy Name
</button>
<div class="dropdown-divider"></div>
<button class="dropdown-item text-danger" data-action="delete">
<i class="bi bi-trash me-2"></i>Delete
</button>
</div>
    <script>
        let refs = {
            torrentsList: document.getElementById('torrentsList'),
@@ -73,6 +90,7 @@
            selectAll: document.getElementById('selectAll'),
            batchDeleteBtn: document.getElementById('batchDeleteBtn'),
            refreshBtn: document.getElementById('refreshBtn'),
torrentContextMenu: document.getElementById('torrentContextMenu'),
            paginationControls: document.getElementById('paginationControls'),
            paginationInfo: document.getElementById('paginationInfo')
        };
@@ -83,13 +101,14 @@
            states: new Set('downloading', 'pausedUP', 'error'),
            selectedCategory: refs.categoryFilter?.value || '',
            selectedState: refs.stateFilter?.value || '',
selectedTorrentContextMenu: null,
            sortBy: refs.sortSelector?.value || 'added_on',
            itemsPerPage: 20,
            currentPage: 1
        };
        const torrentRowTemplate = (torrent) => `
-            <tr data-hash="${torrent.hash}">
            <tr data-hash="${torrent.hash}" data-magnet="${torrent.magnet || ''}" data-name="${torrent.name}">
                <td>
                    <input type="checkbox" class="form-check-input torrent-select" data-hash="${torrent.hash}" ${state.selectedTorrents.has(torrent.hash) ? 'checked' : ''}>
                </td>
@@ -110,11 +129,11 @@
                <td>${torrent.debrid || 'None'}</td>
                <td><span class="badge ${getStateColor(torrent.state)}">${torrent.state}</span></td>
                <td>
-                    <button class="btn btn-sm btn-outline-danger" onclick="deleteTorrent('${torrent.hash}', '${torrent.category}', false)">
                    <button class="btn btn-sm btn-outline-danger" onclick="deleteTorrent('${torrent.hash}', '${torrent.category || ''}', false)">
                        <i class="bi bi-trash"></i>
                    </button>
                    ${torrent.debrid && torrent.id ? `
-                    <button class="btn btn-sm btn-outline-danger" onclick="deleteTorrent('${torrent.hash}', '${torrent.category}', true)">
                    <button class="btn btn-sm btn-outline-danger" onclick="deleteTorrent('${torrent.hash}', '${torrent.category || ''}', true)">
                        <i class="bi bi-trash"></i> Remove from Debrid
                    </button>
                    ` : ''}
@@ -416,6 +435,66 @@
        window.addEventListener('beforeunload', () => {
            clearInterval(refreshInterval);
        });
document.addEventListener('click', (e) => {
if (!refs.torrentContextMenu.contains(e.target)) {
refs.torrentContextMenu.style.display = 'none';
}
});
refs.torrentsList.addEventListener('contextmenu', (e) => {
const row = e.target.closest('tr');
if (!row) return;
e.preventDefault();
state.selectedTorrentContextMenu = row.dataset.hash;
refs.torrentContextMenu.querySelector('.torrent-name').textContent = row.dataset.name;
refs.torrentContextMenu.style.display = 'block';
const { pageX, pageY } = e;
const { clientWidth, clientHeight } = document.documentElement;
const { offsetWidth, offsetHeight } = refs.torrentContextMenu;
refs.torrentContextMenu.style.maxWidth = `${clientWidth - 72}px`;
refs.torrentContextMenu.style.left = `${Math.min(pageX, clientWidth - offsetWidth - 5)}px`;
refs.torrentContextMenu.style.top = `${Math.min(pageY, clientHeight - offsetHeight - 5)}px`;
});
refs.torrentContextMenu.addEventListener('click', async (e) => {
const action = e.target.closest('[data-action]')?.dataset.action;
if (!action) return;
const actions = {
'copy-magnet': async (torrent) => {
try {
await navigator.clipboard.writeText(`magnet:?xt=urn:btih:${torrent.hash}`);
createToast('Magnet link copied to clipboard');
} catch (error) {
console.error('Error copying magnet link:', error);
createToast('Failed to copy magnet link', 'error');
}
},
'copy-name': async (torrent) => {
try {
await navigator.clipboard.writeText(torrent.name);
createToast('Torrent name copied to clipboard');
} catch (error) {
console.error('Error copying torrent name:', error);
createToast('Failed to copy torrent name', 'error');
}
},
'delete': async (torrent) => {
await deleteTorrent(torrent.hash, torrent.category || '', false);
}
};
const torrent = state.torrents.find(t => t.hash === state.selectedTorrentContextMenu);
if (torrent && actions[action]) {
await actions[action](torrent);
refs.torrentContextMenu.style.display = 'none';
}
});
    });
    </script>
{{ end }}
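The `contextmenu` handler in this hunk clamps the menu's top-left corner so the dropdown never overflows the viewport, with a 5px margin. The same arithmetic, isolated as a pure function (the function name is hypothetical; the math is copied from the handler):

```javascript
// Clamp a context menu's position so the whole menu stays inside the
// viewport, keeping a 5px margin on the right and bottom edges.
function clampMenuPosition(pageX, pageY, viewportW, viewportH, menuW, menuH) {
  return {
    left: Math.min(pageX, viewportW - menuW - 5),
    top: Math.min(pageY, viewportH - menuH - 5),
  };
}

// Right-click near the bottom-right corner: the menu is pulled back on both axes.
console.log(clampMenuPosition(1900, 1050, 1920, 1080, 200, 150));
// → { left: 1715, top: 925 }
```

Since `Math.min` only ever moves the menu up/left, clicks away from the edges keep their original coordinates unchanged.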


@@ -36,6 +36,22 @@
    background-color: var(--bg-color);
    color: var(--text-color);
    transition: background-color 0.3s ease, color 0.3s ease;
display: flex;
flex-direction: column;
min-height: 100vh;
}
footer {
background-color: var(--bg-color);
border-top: 1px solid var(--border-color);
}
footer a {
color: var(--text-color);
}
footer a:hover {
color: var(--primary-color);
}
.navbar {
@@ -105,6 +121,45 @@
.theme-toggle:hover {
    background-color: rgba(128, 128, 128, 0.2);
}
.password-toggle-container {
position: relative;
}
.password-toggle-btn {
position: absolute;
right: 10px;
top: 50%;
transform: translateY(-50%);
background: none;
border: none;
color: #6c757d;
cursor: pointer;
padding: 0;
z-index: 10;
}
.password-toggle-btn:hover {
color: #495057;
}
.form-control.has-toggle {
padding-right: 35px;
}
textarea.has-toggle {
-webkit-text-security: disc;
text-security: disc;
font-family: monospace !important;
}
textarea.has-toggle[data-password-visible="true"] {
-webkit-text-security: none;
text-security: none;
}
/* Adjust toggle button position for textareas */
.password-toggle-container textarea.has-toggle ~ .password-toggle-btn {
top: 20px;
}
</style>
<script>
    (function() {
@@ -169,7 +224,7 @@
<i class="bi bi-sun-fill" id="lightIcon"></i> <i class="bi bi-sun-fill" id="lightIcon"></i>
<i class="bi bi-moon-fill d-none" id="darkIcon"></i> <i class="bi bi-moon-fill d-none" id="darkIcon"></i>
</div> </div>
<a href="{{.URLBase}}stats" class="me-2"> <a href="{{.URLBase}}debug/stats" class="me-2">
<i class="bi bi-bar-chart-line me-1"></i>Stats <i class="bi bi-bar-chart-line me-1"></i>Stats
</a> </a>
<span class="badge bg-primary" id="version-badge">Loading...</span> <span class="badge bg-primary" id="version-badge">Loading...</span>
@@ -193,6 +248,20 @@
    {{ else }}
    {{ end }}
<footer class="mt-auto py-2 text-center border-top">
<div class="container">
<small class="text-muted">
<a href="https://github.com/sirrobot01/decypharr" target="_blank" class="text-decoration-none me-3">
<i class="bi bi-github me-1"></i>GitHub
</a>
<a href="https://sirrobot01.github.io/decypharr" target="_blank" class="text-decoration-none">
<i class="bi bi-book me-1"></i>Documentation
</a>
</small>
</div>
</footer>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0-alpha1/dist/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0-alpha1/dist/js/bootstrap.bundle.min.js"></script>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script> <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/select2@4.1.0-rc.0/dist/js/select2.min.js"></script> <script src="https://cdn.jsdelivr.net/npm/select2@4.1.0-rc.0/dist/js/select2.min.js"></script>
@@ -267,6 +336,57 @@
        });
    };
function createPasswordField(name, id, placeholder = "", required = false) {
return `
<div class="password-toggle-container">
<input type="password"
class="form-control has-toggle"
name="${name}"
id="${id}"
placeholder="${placeholder}"
${required ? 'required' : ''}>
<button type="button"
class="password-toggle-btn"
onclick="togglePassword('${id}');">
<i class="bi bi-eye" id="${id}_icon"></i>
</button>
</div>
`;
}
function togglePassword(fieldId) {
const field = document.getElementById(fieldId);
const icon = document.getElementById(fieldId + '_icon');
if (field.type === 'password') {
field.type = 'text';
icon.className = 'bi bi-eye-slash';
} else {
field.type = 'password';
icon.className = 'bi bi-eye';
}
}
// Add this function to handle textarea password toggling
function togglePasswordTextarea(fieldId) {
const field = document.getElementById(fieldId);
const icon = document.getElementById(fieldId + '_icon');
if (field.style.webkitTextSecurity === 'disc' || field.style.webkitTextSecurity === '') {
// Show text
field.style.webkitTextSecurity = 'none';
field.style.textSecurity = 'none'; // For other browsers
field.setAttribute('data-password-visible', 'true');
icon.className = 'bi bi-eye-slash';
} else {
// Hide text
field.style.webkitTextSecurity = 'disc';
field.style.textSecurity = 'disc'; // For other browsers
field.setAttribute('data-password-visible', 'false');
icon.className = 'bi bi-eye';
}
}
    // Theme management
    const themeToggle = document.getElementById('themeToggle');
    const lightIcon = document.getElementById('lightIcon');


@@ -97,14 +97,15 @@
        <!-- Job Details Modal -->
        <div class="modal fade" id="jobDetailsModal" tabindex="-1" aria-labelledby="jobDetailsModalLabel" aria-hidden="true">
-            <div class="modal-dialog modal-lg">
            <div class="modal-dialog modal-xl">
                <div class="modal-content">
                    <div class="modal-header">
                        <h5 class="modal-title" id="jobDetailsModalLabel">Job Details</h5>
                        <button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
                    </div>
                    <div class="modal-body">
-                        <div class="row mb-3">
                        <!-- Job Info -->
                        <div class="row mb-4">
                            <div class="col-md-6">
                                <p><strong>Job ID:</strong> <span id="modalJobId"></span></p>
                                <p><strong>Status:</strong> <span id="modalJobStatus"></span></p>
@@ -122,33 +123,125 @@
                            <strong>Error:</strong> <span id="modalJobError"></span>
                        </div>
-                        <h6>Broken Items</h6>
-                        <div class="table-responsive">
-                            <table class="table table-sm table-striped">
-                                <thead>
-                                    <tr>
-                                        <th>Arr</th>
-                                        <th>Path</th>
-                                    </tr>
-                                </thead>
-                                <tbody id="brokenItemsTableBody">
-                                    <!-- Broken items will be loaded here -->
-                                </tbody>
-                            </table>
-                        </div>
-                        <div id="noBrokenItemsMessage" class="text-center py-2 d-none">
-                            <p class="text-muted">No broken items found</p>
                        <!-- Broken Items Section -->
                        <div class="row">
                            <div class="col-12">
                                <div class="d-flex justify-content-between align-items-center mb-3">
                                    <h6 class="mb-0">
                                        Broken Items
                                        <span class="badge bg-secondary" id="totalItemsCount">0</span>
                                    </h6>
                                </div>
                                <!-- Filters and Search -->
                                <div class="row mb-3">
                                    <div class="col-md-4">
                                        <input type="text" class="form-control form-control-sm"
                                               id="itemSearchInput"
                                               placeholder="Search by path...">
</div>
<div class="col-md-3">
<select class="form-select form-select-sm" id="arrFilterSelect">
<option value="">All Arrs</option>
</select>
</div>
<div class="col-md-3">
<select class="form-select form-select-sm" id="pathFilterSelect">
<option value="">All Types</option>
<option value="movie">Movies</option>
<option value="tv">TV Shows</option>
<option value="other">Other</option>
</select>
</div>
<div class="col-md-2">
<button class="btn btn-sm btn-outline-secondary w-100" id="clearFiltersBtn">
<i class="bi bi-x-circle me-1"></i>Clear
</button>
</div>
</div>
<!-- Items Table -->
<div class="table-responsive" style="max-height: 400px; overflow-y: auto;">
<table class="table table-sm table-striped table-hover">
<thead class="sticky-top">
<tr>
<th>Arr</th>
<th>Path</th>
<th style="width: 100px;">Type</th>
<th style="width: 80px;">Size</th>
</tr>
</thead>
<tbody id="brokenItemsTableBody">
<!-- Broken items will be loaded here -->
</tbody>
</table>
</div>
<!-- Items Pagination -->
<nav aria-label="Items pagination" class="mt-2">
<ul class="pagination pagination-sm justify-content-center" id="itemsPagination">
<!-- Pagination will be generated here -->
</ul>
</nav>
<div id="noBrokenItemsMessage" class="text-center py-3 d-none">
<p class="text-muted">No broken items found</p>
</div>
<div id="noFilteredItemsMessage" class="text-center py-3 d-none">
<p class="text-muted">No items match the current filters</p>
</div>
</div>
                        </div>
                    </div>
                    <div class="modal-footer">
<div class="me-auto">
<small class="text-muted" id="modalFooterStats"></small>
</div>
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button> <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button>
<button type="button" class="btn btn-primary" id="processJobBtn">Process Items</button> <button type="button" class="btn btn-primary" id="processJobBtn">Process All Items</button>
<button type="button" class="btn btn-warning d-none" id="stopJobBtn">
<i class="bi bi-stop-fill me-1"></i>Stop Job
</button>
                    </div>
                </div>
            </div>
        </div>
</div>
<style>
.sticky-top {
position: sticky;
top: 0;
z-index: 10;
}
.table-hover tbody tr:hover {
background-color: var(--bs-gray-100);
}
[data-bs-theme="dark"] .table-hover tbody tr:hover {
background-color: var(--bs-gray-800);
}
.item-row.selected {
background-color: var(--bs-primary-bg-subtle) !important;
}
.badge {
font-size: 0.75em;
}
#brokenItemsTableBody tr {
cursor: pointer;
}
.form-check-input:checked {
background-color: var(--bs-primary);
border-color: var(--bs-primary);
}
</style>
<script>
    document.addEventListener('DOMContentLoaded', () => {
        // Load Arr instances
@@ -190,7 +283,7 @@
                if (!response.ok) throw new Error(await response.text());
                createToast('Repair process initiated successfully!');
-                loadJobs(1); // Refresh jobs after submission
                await loadJobs(1); // Refresh jobs after submission
            } catch (error) {
                createToast(`Error starting repair: ${error.message}`, 'error');
            } finally {
@@ -204,11 +297,19 @@
        const itemsPerPage = 10;
        let allJobs = [];
// Modal state variables
let currentJob = null;
let allBrokenItems = [];
let filteredItems = [];
let selectedItems = new Set();
let currentItemsPage = 1;
const itemsPerModalPage = 20;
        // Load jobs function
        async function loadJobs(page) {
            try {
                const response = await fetcher('/api/repair/jobs');
-                if (!response.ok) throw new Error('Failed to fetcher jobs');
                if (!response.ok) throw new Error('Failed to fetch jobs');
                allJobs = await response.json();
                renderJobsTable(page);
@@ -218,7 +319,27 @@
            }
        }
-        // Render jobs table with pagination
        // Return status text and class based on job status
function getStatus(status) {
switch (status) {
case 'started':
return {text: 'In Progress', class: 'text-primary'};
case 'failed':
return {text: 'Failed', class: 'text-danger'};
case 'completed':
return {text: 'Completed', class: 'text-success'};
case 'pending':
return {text: 'Pending', class: 'text-warning'};
case 'cancelled':
return {text: 'Cancelled', class: 'text-secondary'};
case 'processing':
return {text: 'Processing', class: 'text-info'};
default:
return {text: status.charAt(0).toUpperCase() + status.slice(1), class: 'text-secondary'};
}
}
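For statuses outside the known set, `getStatus` falls through to a capitalized default tagged with the neutral `text-secondary` class. That default branch, isolated (the function name here is hypothetical):

```javascript
// Default branch of getStatus: capitalize an unknown status string and
// pair it with the neutral Bootstrap text class.
function defaultStatus(status) {
  return {
    text: status.charAt(0).toUpperCase() + status.slice(1),
    class: 'text-secondary',
  };
}

console.log(defaultStatus('queued')); // → { text: 'Queued', class: 'text-secondary' }
```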
// Render jobs table with pagination (keeping existing implementation)
        function renderJobsTable(page) {
            const tableBody = document.getElementById('jobsTableBody');
            const paginationElement = document.getElementById('jobsPagination');
@@ -254,25 +375,10 @@
            const formattedDate = startedDate.toLocaleString();
            // Determine status
-            let status = 'In Progress';
-            let statusClass = 'text-primary';
            let status = getStatus(job.status);
            let canDelete = job.status !== "started";
            let totalItems = job.broken_items ? Object.values(job.broken_items).reduce((sum, arr) => sum + arr.length, 0) : 0;
-            if (job.status === 'failed') {
-                status = 'Failed';
-                statusClass = 'text-danger';
-            } else if (job.status === 'completed') {
-                status = 'Completed';
-                statusClass = 'text-success';
-            } else if (job.status === 'pending') {
-                status = 'Pending';
-                statusClass = 'text-warning';
-            } else if (job.status === "processing") {
-                status = 'Processing';
-                statusClass = 'text-info';
-            }
            row.innerHTML = `
                <td>
                    <div class="form-check">
@@ -283,34 +389,39 @@
<td><a href="#" class="text-link view-job" data-id="${job.id}"><small>${job.id.substring(0, 8)}</small></a></td> <td><a href="#" class="text-link view-job" data-id="${job.id}"><small>${job.id.substring(0, 8)}</small></a></td>
<td>${job.arrs.join(', ')}</td> <td>${job.arrs.join(', ')}</td>
<td><small>${formattedDate}</small></td> <td><small>${formattedDate}</small></td>
<td><span class="${statusClass}">${status}</span></td> <td><span class="${status.class}">${status.text}</span></td>
<td>${totalItems}</td> <td>${totalItems}</td>
<td> <td>
${job.status === "pending" ? ${job.status === "pending" ?
`<button class="btn btn-sm btn-primary process-job" data-id="${job.id}"> `<button class="btn btn-sm btn-primary process-job" data-id="${job.id}">
<i class="bi bi-play-fill"></i> Process <i class="bi bi-play-fill"></i> Process
</button>` : </button>` :
`<button class="btn btn-sm btn-primary" disabled> `<button class="btn btn-sm btn-primary" disabled>
<i class="bi bi-eye"></i> Process <i class="bi bi-eye"></i> Process
</button>` </button>`
} }
${(job.status === "started" || job.status === "processing") ?
`<button class="btn btn-sm btn-warning stop-job" data-id="${job.id}">
<i class="bi bi-stop-fill"></i> Stop
</button>` :
''
}
                    ${canDelete ?
                        `<button class="btn btn-sm btn-danger delete-job" data-id="${job.id}">
                            <i class="bi bi-trash"></i>
                        </button>` :
                        `<button class="btn btn-sm btn-danger" disabled>
                            <i class="bi bi-trash"></i>
                        </button>`
                    }
                </td>
            `;
            tableBody.appendChild(row);
        }
-            // Create pagination
            // Create pagination (keeping existing implementation)
            if (totalPages > 1) {
-                // Previous button
                const prevLi = document.createElement('li');
                prevLi.className = `page-item ${page === 1 ? 'disabled' : ''}`;
                prevLi.innerHTML = `<a class="page-link" href="#" aria-label="Previous" ${page !== 1 ? `data-page="${page - 1}"` : ''}>
@@ -318,7 +429,6 @@
                </a>`;
                paginationElement.appendChild(prevLi);
-                // Page numbers
                for (let i = 1; i <= totalPages; i++) {
                    const pageLi = document.createElement('li');
                    pageLi.className = `page-item ${i === page ? 'active' : ''}`;
@@ -326,7 +436,6 @@
                    paginationElement.appendChild(pageLi);
                }
-                // Next button
                const nextLi = document.createElement('li');
                nextLi.className = `page-item ${page === totalPages ? 'disabled' : ''}`;
                nextLi.innerHTML = `<a class="page-link" href="#" aria-label="Next" ${page !== totalPages ? `data-page="${page + 1}"` : ''}>
@@ -335,13 +444,12 @@
                paginationElement.appendChild(nextLi);
            }
-            // Add event listeners to pagination
            // Add event listeners (keeping existing implementation)
            document.querySelectorAll('#jobsPagination a[data-page]').forEach(link => {
                link.addEventListener('click', (e) => {
                    e.preventDefault();
-                    const newPage = parseInt(e.currentTarget.dataset.page);
-                    currentPage = newPage;
-                    renderJobsTable(newPage);
                    currentPage = parseInt(e.currentTarget.dataset.page);
                    renderJobsTable(currentPage);
                });
            });
@@ -356,7 +464,6 @@
                });
            });
-            // Add event listeners to action buttons
            document.querySelectorAll('.process-job').forEach(button => {
                button.addEventListener('click', (e) => {
                    const jobId = e.currentTarget.dataset.id;
@@ -370,98 +477,202 @@
                    viewJobDetails(jobId);
                });
            });
            document.querySelectorAll('.stop-job').forEach(button => {
                button.addEventListener('click', (e) => {
                    const jobId = e.currentTarget.dataset.id;
                    stopJob(jobId);
                });
            });
        }
-        document.getElementById('selectAllJobs').addEventListener('change', function() {
-            const isChecked = this.checked;
-            document.querySelectorAll('.job-checkbox:not(:disabled)').forEach(checkbox => {
-                checkbox.checked = isChecked;
-            });
-            updateDeleteButtonState();
-        });
-        // Function to update delete button state
-        function updateDeleteButtonState() {
-            const deleteBtn = document.getElementById('deleteSelectedJobs');
-            const selectedCheckboxes = document.querySelectorAll('.job-checkbox:checked');
-            deleteBtn.disabled = selectedCheckboxes.length === 0;
-        }
-        // Delete selected jobs
-        document.getElementById('deleteSelectedJobs').addEventListener('click', async () => {
-            const selectedIds = Array.from(
-                document.querySelectorAll('.job-checkbox:checked')
-            ).map(checkbox => checkbox.value);
-            if (!selectedIds.length) return;
-            if (confirm(`Are you sure you want to delete ${selectedIds.length} job(s)?`)) {
-                await deleteMultipleJobs(selectedIds);
-            }
-        });
-        async function deleteJob(jobId) {
-            if (confirm('Are you sure you want to delete this job?')) {
-                try {
-                    const response = await fetcher(`/api/repair/jobs`, {
-                        method: 'DELETE',
-                        headers: {
-                            'Content-Type': 'application/json'
-                        },
-                        body: JSON.stringify({ ids: [jobId] })
-                    });
-                    if (!response.ok) throw new Error(await response.text());
-                    createToast('Job deleted successfully');
-                    await loadJobs(currentPage); // Refresh the jobs list
-                } catch (error) {
-                    createToast(`Error deleting job: ${error.message}`, 'error');
-                }
-            }
-        }
-        async function deleteMultipleJobs(jobIds) {
-            try {
-                const response = await fetcher(`/api/repair/jobs`, {
-                    method: 'DELETE',
-                    headers: {
-                        'Content-Type': 'application/json'
-                    },
-                    body: JSON.stringify({ ids: jobIds })
-                });
-                if (!response.ok) throw new Error(await response.text());
-                createToast(`${jobIds.length} job(s) deleted successfully`);
-                await loadJobs(currentPage); // Refresh the jobs list
-            } catch (error) {
-                createToast(`Error deleting jobs: ${error.message}`, 'error');
-            }
-        }
-        // Process job function
-        async function processJob(jobId) {
-            try {
-                const response = await fetcher(`/api/repair/jobs/${jobId}/process`, {
-                    method: 'POST',
-                    headers: {
-                        'Content-Type': 'application/json'
-                    },
-                });
-                if (!response.ok) throw new Error(await response.text());
-                createToast('Job processing started successfully');
-                await loadJobs(currentPage); // Refresh the jobs list
-            } catch (error) {
-                createToast(`Error processing job: ${error.message}`, 'error');
-            }
-        }
        // Helper functions to determine file type and format size
        function getFileType(path) {
            const movieExtensions = ['.mp4', '.mkv', '.avi', '.mov', '.wmv', '.flv', '.webm'];
            const tvIndicators = ['/TV/', '/Television/', '/Series/', '/Shows/'];
            const pathLower = path.toLowerCase();
            if (tvIndicators.some(indicator => pathLower.includes(indicator.toLowerCase()))) {
                return 'tv';
            }
            if (movieExtensions.some(ext => pathLower.endsWith(ext))) {
                return pathLower.includes('/movies/') || pathLower.includes('/film') ? 'movie' : 'tv';
            }
            return 'other';
        }
        function formatFileSize(bytes) {
            if (!bytes || bytes === 0) return 'Unknown';
            const sizes = ['B', 'KB', 'MB', 'GB', 'TB'];
            const i = Math.floor(Math.log(bytes) / Math.log(1024));
            return Math.round(bytes / Math.pow(1024, i) * 100) / 100 + ' ' + sizes[i];
        }
        // modal functions
        function processItemsData(brokenItems) {
            const items = [];
            for (const [arrName, itemsArray] of Object.entries(brokenItems)) {
                if (itemsArray && itemsArray.length > 0) {
                    itemsArray.forEach(item => {
                        items.push({
                            id: item.fileId,
                            arr: arrName,
                            path: item.path,
                            size: item.size || 0,
                            type: getFileType(item.path),
                            selected: false
                        });
                    });
                }
            }
            return items;
        }
        function applyFilters() {
            const searchTerm = document.getElementById('itemSearchInput').value.toLowerCase();
            const arrFilter = document.getElementById('arrFilterSelect').value;
            const pathFilter = document.getElementById('pathFilterSelect').value;
            filteredItems = allBrokenItems.filter(item => {
                const matchesSearch = !searchTerm || item.path.toLowerCase().includes(searchTerm);
                const matchesArr = !arrFilter || item.arr === arrFilter;
                const matchesPath = !pathFilter || item.type === pathFilter;
                return matchesSearch && matchesArr && matchesPath;
            });
            currentItemsPage = 1;
            renderBrokenItemsTable();
            updateItemsStats();
        }
        function renderBrokenItemsTable() {
            const tableBody = document.getElementById('brokenItemsTableBody');
            const paginationElement = document.getElementById('itemsPagination');
            const noItemsMessage = document.getElementById('noBrokenItemsMessage');
            const noFilteredMessage = document.getElementById('noFilteredItemsMessage');
            tableBody.innerHTML = '';
            paginationElement.innerHTML = '';
            if (allBrokenItems.length === 0) {
                noItemsMessage.classList.remove('d-none');
                noFilteredMessage.classList.add('d-none');
                return;
            }
            if (filteredItems.length === 0) {
                noItemsMessage.classList.add('d-none');
                noFilteredMessage.classList.remove('d-none');
                return;
            }
noItemsMessage.classList.add('d-none');
noFilteredMessage.classList.add('d-none');
// Calculate pagination
const totalPages = Math.ceil(filteredItems.length / itemsPerModalPage);
const startIndex = (currentItemsPage - 1) * itemsPerModalPage;
const endIndex = Math.min(startIndex + itemsPerModalPage, filteredItems.length);
// Display items for current page
for (let i = startIndex; i < endIndex; i++) {
const item = filteredItems[i];
const row = document.createElement('tr');
row.className = `item-row ${selectedItems.has(item.id) ? 'selected' : ''}`;
row.dataset.itemId = item.id;
row.innerHTML = `
<td><span class="badge bg-info">${item.arr}</span></td>
<td><small class="text-muted" title="${item.path}">${item.path}</small></td>
<td><span class="badge ${item.type === 'movie' ? 'bg-primary' : item.type === 'tv' ? 'bg-success' : 'bg-secondary'}">${item.type}</span></td>
<td><small>${formatFileSize(item.size)}</small></td>
`;
tableBody.appendChild(row);
}
// Create pagination
if (totalPages > 1) {
const prevLi = document.createElement('li');
prevLi.className = `page-item ${currentItemsPage === 1 ? 'disabled' : ''}`;
prevLi.innerHTML = `<a class="page-link" href="#" aria-label="Previous" ${currentItemsPage !== 1 ? `data-items-page="${currentItemsPage - 1}"` : ''}>
<span aria-hidden="true">&laquo;</span>
</a>`;
paginationElement.appendChild(prevLi);
for (let i = 1; i <= totalPages; i++) {
const pageLi = document.createElement('li');
pageLi.className = `page-item ${i === currentItemsPage ? 'active' : ''}`;
pageLi.innerHTML = `<a class="page-link" href="#" data-items-page="${i}">${i}</a>`;
paginationElement.appendChild(pageLi);
}
const nextLi = document.createElement('li');
nextLi.className = `page-item ${currentItemsPage === totalPages ? 'disabled' : ''}`;
nextLi.innerHTML = `<a class="page-link" href="#" aria-label="Next" ${currentItemsPage !== totalPages ? `data-items-page="${currentItemsPage + 1}"` : ''}>
<span aria-hidden="true">&raquo;</span>
</a>`;
paginationElement.appendChild(nextLi);
}
// Add pagination event listeners
document.querySelectorAll('#itemsPagination a[data-items-page]').forEach(link => {
link.addEventListener('click', (e) => {
e.preventDefault();
currentItemsPage = parseInt(e.currentTarget.dataset.itemsPage);
renderBrokenItemsTable();
});
});
}
function updateItemsStats() {
document.getElementById('totalItemsCount').textContent = allBrokenItems.length;
// Update footer stats
const footerStats = document.getElementById('modalFooterStats');
footerStats.textContent = `Total: ${allBrokenItems.length} | Filtered: ${filteredItems.length}`;
}
function populateArrFilter() {
const arrFilter = document.getElementById('arrFilterSelect');
arrFilter.innerHTML = '<option value="">All Arrs</option>';
const uniqueArrs = [...new Set(allBrokenItems.map(item => item.arr))];
uniqueArrs.forEach(arr => {
const option = document.createElement('option');
option.value = arr;
option.textContent = arr;
arrFilter.appendChild(option);
});
}
// Filter event listeners
document.getElementById('itemSearchInput').addEventListener('input', applyFilters);
document.getElementById('arrFilterSelect').addEventListener('change', applyFilters);
document.getElementById('pathFilterSelect').addEventListener('change', applyFilters);
document.getElementById('clearFiltersBtn').addEventListener('click', () => {
document.getElementById('itemSearchInput').value = '';
document.getElementById('arrFilterSelect').value = '';
document.getElementById('pathFilterSelect').value = '';
applyFilters();
});
function viewJobDetails(jobId) {
    // Find the job
    const job = allJobs.find(j => j.id === jobId);
    if (!job) return;
    currentJob = job;
    selectedItems.clear();
    currentItemsPage = 1;
    // Prepare modal data
    document.getElementById('modalJobId').textContent = job.id.substring(0, 8);
@@ -477,24 +688,8 @@
}
// Set status with color
let status = getStatus(job.status);
document.getElementById('modalJobStatus').innerHTML = `<span class="${status.class}">${status.text}</span>`;
// Set other job details
document.getElementById('modalJobArrs').textContent = job.arrs.join(', ');
@@ -524,46 +719,139 @@
    processBtn.classList.add('d-none');
}
// Stop button visibility
const stopBtn = document.getElementById('stopJobBtn');
if (job.status === 'started' || job.status === 'processing') {
    stopBtn.classList.remove('d-none');
    stopBtn.onclick = () => {
        stopJob(job.id);
        const modal = bootstrap.Modal.getInstance(document.getElementById('jobDetailsModal'));
        modal.hide();
    };
} else {
    stopBtn.classList.add('d-none');
}
// Process broken items data
if (job.broken_items && Object.entries(job.broken_items).length > 0) {
    allBrokenItems = processItemsData(job.broken_items);
    filteredItems = [...allBrokenItems];
    populateArrFilter();
    renderBrokenItemsTable();
} else {
    allBrokenItems = [];
    filteredItems = [];
    renderBrokenItemsTable();
}
updateItemsStats();
// Show the modal
const modal = new bootstrap.Modal(document.getElementById('jobDetailsModal'));
modal.show();
}
// Keep existing functions (selectAllJobs, updateDeleteButtonState, deleteJob, etc.)
document.getElementById('selectAllJobs').addEventListener('change', function() {
const isChecked = this.checked;
document.querySelectorAll('.job-checkbox:not(:disabled)').forEach(checkbox => {
checkbox.checked = isChecked;
});
updateDeleteButtonState();
});
function updateDeleteButtonState() {
const deleteBtn = document.getElementById('deleteSelectedJobs');
const selectedCheckboxes = document.querySelectorAll('.job-checkbox:checked');
deleteBtn.disabled = selectedCheckboxes.length === 0;
}
document.getElementById('deleteSelectedJobs').addEventListener('click', async () => {
const selectedIds = Array.from(
document.querySelectorAll('.job-checkbox:checked')
).map(checkbox => checkbox.value);
if (!selectedIds.length) return;
if (confirm(`Are you sure you want to delete ${selectedIds.length} job(s)?`)) {
await deleteMultipleJobs(selectedIds);
}
});
async function deleteJob(jobId) {
if (confirm('Are you sure you want to delete this job?')) {
try {
const response = await fetcher(`/api/repair/jobs`, {
method: 'DELETE',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({ ids: [jobId] })
});
if (!response.ok) throw new Error(await response.text());
createToast('Job deleted successfully');
await loadJobs(currentPage);
} catch (error) {
createToast(`Error deleting job: ${error.message}`, 'error');
}
}
}
async function deleteMultipleJobs(jobIds) {
try {
const response = await fetcher(`/api/repair/jobs`, {
method: 'DELETE',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({ ids: jobIds })
});
if (!response.ok) throw new Error(await response.text());
createToast(`${jobIds.length} job(s) deleted successfully`);
await loadJobs(currentPage);
} catch (error) {
createToast(`Error deleting jobs: ${error.message}`, 'error');
}
}
async function processJob(jobId) {
try {
const response = await fetcher(`/api/repair/jobs/${jobId}/process`, {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
});
if (!response.ok) throw new Error(await response.text());
createToast('Job processing started successfully');
await loadJobs(currentPage);
} catch (error) {
createToast(`Error processing job: ${error.message}`, 'error');
}
}
async function stopJob(jobId) {
if (confirm('Are you sure you want to stop this job?')) {
try {
const response = await fetcher(`/api/repair/jobs/${jobId}/stop`, {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
});
if (!response.ok) throw new Error(await response.text());
createToast('Job stop requested successfully');
await loadJobs(currentPage);
} catch (error) {
createToast(`Error stopping job: ${error.message}`, 'error');
}
}
}
document.getElementById('refreshJobs').addEventListener('click', () => {
    loadJobs(currentPage);
});

View File

@@ -7,7 +7,7 @@ import (
	"net/http"
)

func (wb *Web) LoginHandler(w http.ResponseWriter, r *http.Request) {
	cfg := config.Get()
	if cfg.NeedsAuth() {
		http.Redirect(w, r, "/register", http.StatusSeeOther)
@@ -19,7 +19,7 @@ func (ui *Handler) LoginHandler(w http.ResponseWriter, r *http.Request) {
		"Page":  "login",
		"Title": "Login",
	}
	_ = wb.templates.ExecuteTemplate(w, "layout", data)
	return
}
@@ -33,8 +33,8 @@ func (ui *Handler) LoginHandler(w http.ResponseWriter, r *http.Request) {
		return
	}
	if wb.verifyAuth(credentials.Username, credentials.Password) {
		session, _ := wb.cookie.Get(r, "auth-session")
		session.Values["authenticated"] = true
		session.Values["username"] = credentials.Username
		if err := session.Save(r, w); err != nil {
@@ -48,8 +48,8 @@ func (ui *Handler) LoginHandler(w http.ResponseWriter, r *http.Request) {
	http.Error(w, "Invalid credentials", http.StatusUnauthorized)
}

func (wb *Web) LogoutHandler(w http.ResponseWriter, r *http.Request) {
	session, _ := wb.cookie.Get(r, "auth-session")
	session.Values["authenticated"] = false
	session.Options.MaxAge = -1
	err := session.Save(r, w)
@@ -59,7 +59,7 @@ func (ui *Handler) LogoutHandler(w http.ResponseWriter, r *http.Request) {
	http.Redirect(w, r, "/login", http.StatusSeeOther)
}

func (wb *Web) RegisterHandler(w http.ResponseWriter, r *http.Request) {
	cfg := config.Get()
	authCfg := cfg.GetAuth()
@@ -69,7 +69,7 @@ func (ui *Handler) RegisterHandler(w http.ResponseWriter, r *http.Request) {
		"Page":  "register",
		"Title": "Register",
	}
	_ = wb.templates.ExecuteTemplate(w, "layout", data)
	return
}
@@ -99,7 +99,7 @@ func (ui *Handler) RegisterHandler(w http.ResponseWriter, r *http.Request) {
	}
	// Create a session
	session, _ := wb.cookie.Get(r, "auth-session")
	session.Values["authenticated"] = true
	session.Values["username"] = username
	if err := session.Save(r, w); err != nil {
@@ -110,42 +110,49 @@ func (ui *Handler) RegisterHandler(w http.ResponseWriter, r *http.Request) {
	http.Redirect(w, r, "/", http.StatusSeeOther)
}

func (wb *Web) IndexHandler(w http.ResponseWriter, r *http.Request) {
	cfg := config.Get()
	data := map[string]interface{}{
		"URLBase": cfg.URLBase,
		"Page":    "index",
		"Title":   "Torrents",
	}
	_ = wb.templates.ExecuteTemplate(w, "layout", data)
}

func (wb *Web) DownloadHandler(w http.ResponseWriter, r *http.Request) {
	cfg := config.Get()
	debrids := make([]string, 0)
	for _, d := range cfg.Debrids {
		debrids = append(debrids, d.Name)
	}
	data := map[string]interface{}{
		"URLBase":        cfg.URLBase,
		"Page":           "download",
		"Title":          "Download",
		"Debrids":        debrids,
		"HasMultiDebrid": len(debrids) > 1,
		"DownloadFolder": cfg.QBitTorrent.DownloadFolder,
	}
	_ = wb.templates.ExecuteTemplate(w, "layout", data)
}

func (wb *Web) RepairHandler(w http.ResponseWriter, r *http.Request) {
	cfg := config.Get()
	data := map[string]interface{}{
		"URLBase": cfg.URLBase,
		"Page":    "repair",
		"Title":   "Repair",
	}
	_ = wb.templates.ExecuteTemplate(w, "layout", data)
}

func (wb *Web) ConfigHandler(w http.ResponseWriter, r *http.Request) {
	cfg := config.Get()
	data := map[string]interface{}{
		"URLBase": cfg.URLBase,
		"Page":    "config",
		"Title":   "Config",
	}
	_ = wb.templates.ExecuteTemplate(w, "layout", data)
}

View File

@@ -6,7 +6,7 @@ import (
	"github.com/gorilla/sessions"
	"github.com/rs/zerolog"
	"github.com/sirrobot01/decypharr/internal/logger"
	"github.com/sirrobot01/decypharr/pkg/store"
	"html/template"
	"os"
)
@@ -50,26 +50,15 @@ type RepairRequest struct {
//go:embed templates/*
var content embed.FS

type Web struct {
	logger    zerolog.Logger
	cookie    *sessions.CookieStore
	templates *template.Template
	torrents  *store.TorrentStorage
}

func New() *Web {
	templates := template.Must(template.ParseFS(
		content,
		"templates/layout.html",
		"templates/index.html",
@@ -79,10 +68,17 @@ func init() {
		"templates/login.html",
		"templates/register.html",
	))
	secretKey := cmp.Or(os.Getenv("DECYPHARR_SECRET_KEY"), "\"wqj(v%lj*!-+kf@4&i95rhh_!5_px5qnuwqbr%cjrvrozz_r*(\"")
	cookieStore := sessions.NewCookieStore([]byte(secretKey))
	cookieStore.Options = &sessions.Options{
		Path:     "/",
		MaxAge:   86400 * 7,
		HttpOnly: false,
	}
	return &Web{
		logger:    logger.New("ui"),
		templates: templates,
		cookie:    cookieStore,
		torrents:  store.Get().Torrents(),
	}
}

View File

@@ -3,57 +3,81 @@ package webdav
import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
	"time"

	"github.com/sirrobot01/decypharr/pkg/debrid/store"
)
var streamingTransport = &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
MaxIdleConns: 200,
MaxIdleConnsPerHost: 100,
MaxConnsPerHost: 200,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ResponseHeaderTimeout: 60 * time.Second, // give the upstream a minute to send headers
ExpectContinueTimeout: 1 * time.Second,
DisableKeepAlives: true, // close after each request
ForceAttemptHTTP2:     false, // don't speak HTTP/2
// this line is what truly blocks HTTP/2:
TLSNextProto: make(map[string]func(string, *tls.Conn) http.RoundTripper),
}
var sharedClient = &http.Client{
	Transport: streamingTransport,
	Timeout:   0,
}

type streamError struct {
	Err                   error
	StatusCode            int
	IsClientDisconnection bool
}

func (e *streamError) Error() string {
	return e.Err.Error()
}

func (e *streamError) Unwrap() error {
	return e.Err
}
type File struct {
	name         string
	torrentName  string
	link         string
	downloadLink string
	size         int64
	isDir        bool
	fileId       string
	isRar        bool
	metadataOnly bool
	content      []byte
	children     []os.FileInfo // For directories
	cache        *store.Cache
	modTime      time.Time

	// Minimal state for interface compliance only
	readOffset int64 // Only used for Read() method compliance
}

// File interface implementations for File
func (f *File) Close() error {
	if f.isDir {
		return nil // No resources to close for directories
	}
	// For files, we don't have any resources to close either
	// This is just to satisfy the os.File interface
	f.content = nil
	f.children = nil
	f.downloadLink = ""
	f.readOffset = 0
	return nil
}
@@ -74,187 +98,276 @@ func (f *File) getDownloadLink() (string, error) {
	return "", os.ErrNotExist
}
func (f *File) getDownloadByteRange() (*[2]int64, error) {
	byteRange, err := f.cache.GetDownloadByteRange(f.torrentName, f.name)
	if err != nil {
		return nil, err
	}
	return byteRange, nil
}

func (f *File) servePreloadedContent(w http.ResponseWriter, r *http.Request) error {
	content := f.content
	size := int64(len(content))
	// Handle range requests for preloaded content
	if rangeHeader := r.Header.Get("Range"); rangeHeader != "" {
		ranges, err := parseRange(rangeHeader, size)
		if err != nil || len(ranges) != 1 {
			w.Header().Set("Content-Range", fmt.Sprintf("bytes */%d", size))
			return &streamError{Err: fmt.Errorf("invalid range"), StatusCode: http.StatusRequestedRangeNotSatisfiable}
		}
		start, end := ranges[0].start, ranges[0].end
		w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", start, end, size))
		w.Header().Set("Content-Length", fmt.Sprintf("%d", end-start+1))
		w.Header().Set("Accept-Ranges", "bytes")
		w.WriteHeader(http.StatusPartialContent)
		_, err = w.Write(content[start : end+1])
		return err
	}
	// Full content
	w.Header().Set("Content-Length", fmt.Sprintf("%d", size))
	w.Header().Set("Accept-Ranges", "bytes")
	w.WriteHeader(http.StatusOK)
	_, err := w.Write(content)
	return err
}

func (f *File) StreamResponse(w http.ResponseWriter, r *http.Request) error {
	// Handle preloaded content files
	if f.content != nil {
		return f.servePreloadedContent(w, r)
	}
	// Try streaming with retry logic
	return f.streamWithRetry(w, r, 0)
}

func (f *File) streamWithRetry(w http.ResponseWriter, r *http.Request, retryCount int) error {
	const maxRetries = 3
	_log := f.cache.Logger()
	// Get download link (with caching optimization)
	downloadLink, err := f.getDownloadLink()
	if err != nil {
		return &streamError{Err: err, StatusCode: http.StatusPreconditionFailed}
	}
	if downloadLink == "" {
		return &streamError{Err: fmt.Errorf("empty download link"), StatusCode: http.StatusNotFound}
	}
	// Create upstream request with streaming optimizations
	upstreamReq, err := http.NewRequest("GET", downloadLink, nil)
	if err != nil {
		return &streamError{Err: err, StatusCode: http.StatusInternalServerError}
	}
	setVideoStreamingHeaders(upstreamReq)
	// Handle range requests (critical for video seeking)
	isRangeRequest := f.handleRangeRequest(upstreamReq, r, w)
	if isRangeRequest == -1 {
		return &streamError{Err: fmt.Errorf("invalid range"), StatusCode: http.StatusRequestedRangeNotSatisfiable}
	}
	resp, err := sharedClient.Do(upstreamReq)
	if err != nil {
		return &streamError{Err: err, StatusCode: http.StatusServiceUnavailable}
	}
	defer resp.Body.Close()
	// Handle upstream errors with retry logic
	shouldRetry, retryErr := f.handleUpstream(resp, retryCount, maxRetries)
	if shouldRetry && retryCount < maxRetries {
		// Retry with new download link
		_log.Debug().
			Int("retry_count", retryCount+1).
			Str("file", f.name).
			Msg("Retrying stream request")
		return f.streamWithRetry(w, r, retryCount+1)
	}
	if retryErr != nil {
		return retryErr
	}
	setVideoResponseHeaders(w, resp, isRangeRequest == 1)
	return f.streamBuffer(w, resp.Body)
}

func (f *File) streamBuffer(w http.ResponseWriter, src io.Reader) error {
	flusher, ok := w.(http.Flusher)
	if !ok {
		return fmt.Errorf("response does not support flushing")
	}
	smallBuf := make([]byte, 64*1024) // 64 KB
	if n, err := src.Read(smallBuf); n > 0 {
		if _, werr := w.Write(smallBuf[:n]); werr != nil {
			return werr
		}
		flusher.Flush()
	} else if err != nil && err != io.EOF {
		return err
	}
	buf := make([]byte, 256*1024) // 256 KB
	for {
		n, readErr := src.Read(buf)
		if n > 0 {
			if _, writeErr := w.Write(buf[:n]); writeErr != nil {
				if isClientDisconnection(writeErr) {
					return &streamError{Err: writeErr, StatusCode: 0, IsClientDisconnection: true}
				}
				return writeErr
			}
			flusher.Flush()
		}
		if readErr != nil {
			if readErr == io.EOF {
				return nil
			}
			if isClientDisconnection(readErr) {
				return &streamError{Err: readErr, StatusCode: 0, IsClientDisconnection: true}
			}
			return readErr
		}
	}
}

func (f *File) handleUpstream(resp *http.Response, retryCount, maxRetries int) (shouldRetry bool, err error) {
	if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusPartialContent {
		return false, nil
	}
	_log := f.cache.Logger()
	// Clean up response body properly
	cleanupResp := func(resp *http.Response) {
		if resp.Body != nil {
			_, _ = io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
		}
	}
	switch resp.StatusCode {
	case http.StatusServiceUnavailable:
		// Read the body to check for specific error messages
		body, readErr := io.ReadAll(resp.Body)
		cleanupResp(resp)
		if readErr != nil {
			_log.Error().Err(readErr).Msg("Failed to read response body")
			return false, &streamError{
				Err:        fmt.Errorf("failed to read error response: %w", readErr),
				StatusCode: http.StatusServiceUnavailable,
			}
		}
		bodyStr := string(body)
		if strings.Contains(bodyStr, "you have exceeded your traffic") {
			_log.Debug().
				Str("file", f.name).
				Int("retry_count", retryCount).
				Msg("Bandwidth exceeded. Marking link as invalid")
			f.cache.MarkDownloadLinkAsInvalid(f.link, f.downloadLink, "bandwidth_exceeded")
			// Retry with a different API key if available and we haven't exceeded retries
			if retryCount < maxRetries {
				return true, nil
			}
			return false, &streamError{
				Err:        fmt.Errorf("bandwidth exceeded after %d retries", retryCount),
				StatusCode: http.StatusServiceUnavailable,
			}
		}
		return false, &streamError{
			Err:        fmt.Errorf("service unavailable: %s", bodyStr),
			StatusCode: http.StatusServiceUnavailable,
		}
	case http.StatusNotFound:
		cleanupResp(resp)
		_log.Debug().
			Str("file", f.name).
			Int("retry_count", retryCount).
			Msg("Link not found (404). Marking link as invalid and regenerating")
		f.cache.MarkDownloadLinkAsInvalid(f.link, f.downloadLink, "link_not_found")
		// Try to regenerate download link if we haven't exceeded retries
		if retryCount < maxRetries {
			// Clear cached link to force regeneration
			f.downloadLink = ""
			return true, nil
		}
		return false, &streamError{
			Err:        fmt.Errorf("file not found after %d retries", retryCount),
			StatusCode: http.StatusNotFound,
		}
	default:
		body, _ := io.ReadAll(resp.Body)
		cleanupResp(resp)
		_log.Error().
			Int("status_code", resp.StatusCode).
			Str("file", f.name).
			Str("response_body", string(body)).
			Msg("Unexpected upstream error")
		return false, &streamError{
			Err:        fmt.Errorf("upstream error %d: %s", resp.StatusCode, string(body)),
			StatusCode: http.StatusBadGateway,
		}
	}
}
func (f *File) handleRangeRequest(upstreamReq *http.Request, r *http.Request, w http.ResponseWriter) int {
rangeHeader := r.Header.Get("Range")
if rangeHeader == "" {
// For video files, apply byte range if exists
if byteRange, _ := f.getDownloadByteRange(); byteRange != nil {
upstreamReq.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", byteRange[0], byteRange[1]))
}
return 0 // No range request
}
// Parse range request
ranges, err := parseRange(rangeHeader, f.size)
if err != nil || len(ranges) != 1 {
w.Header().Set("Content-Range", fmt.Sprintf("bytes */%d", f.size))
return -1 // Invalid range
}
// Apply byte range offset if exists
byteRange, _ := f.getDownloadByteRange()
start, end := ranges[0].start, ranges[0].end
if byteRange != nil {
start += byteRange[0]
end += byteRange[0]
}
upstreamReq.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))
return 1 // Valid range request
}
/*
These are the methods that implement the os.File interface for the File type.
Only Stat and ReadDir are used
*/
func (f *File) Stat() (os.FileInfo, error) {
	if f.isDir {
		return &FileInfo{
@@ -275,18 +388,61 @@ func (f *File) Stat() (os.FileInfo, error) {
	}, nil
}
func (f *File) Read(p []byte) (n int, err error) {
	if f.isDir {
		return 0, os.ErrInvalid
	}
	if f.metadataOnly {
		return 0, io.EOF
	}
	// For preloaded content files (like version.txt)
	if f.content != nil {
		if f.readOffset >= int64(len(f.content)) {
			return 0, io.EOF
		}
		n = copy(p, f.content[f.readOffset:])
		f.readOffset += int64(n)
		return n, nil
	}
	// For streaming files, return an error to force use of StreamResponse
	return 0, fmt.Errorf("use StreamResponse method for streaming files")
}

func (f *File) Seek(offset int64, whence int) (int64, error) {
	if f.isDir {
		return 0, os.ErrInvalid
	}
	// Only handle seeking for preloaded content
	if f.content != nil {
		newOffset := f.readOffset
		switch whence {
		case io.SeekStart:
			newOffset = offset
		case io.SeekCurrent:
			newOffset += offset
		case io.SeekEnd:
			newOffset = int64(len(f.content)) + offset
		default:
			return 0, os.ErrInvalid
		}
		if newOffset < 0 {
			newOffset = 0
		}
		if newOffset > int64(len(f.content)) {
			newOffset = int64(len(f.content))
		}
		f.readOffset = newOffset
		return f.readOffset, nil
	}
	// For streaming files, return error to force use of StreamResponse
	return 0, fmt.Errorf("use StreamResponse method for streaming files")
}
func (f *File) Write(p []byte) (n int, err error) { func (f *File) Write(p []byte) (n int, err error) {

View File

@@ -2,7 +2,10 @@ package webdav

 import (
 	"context"
+	"errors"
 	"fmt"
+	"github.com/sirrobot01/decypharr/pkg/debrid/types"
+	"golang.org/x/net/webdav"
 	"io"
 	"mime"
 	"net/http"
@@ -15,21 +18,21 @@ import (
 	"github.com/rs/zerolog"
 	"github.com/sirrobot01/decypharr/internal/utils"
-	"github.com/sirrobot01/decypharr/pkg/debrid/debrid"
-	"github.com/sirrobot01/decypharr/pkg/debrid/types"
+	"github.com/sirrobot01/decypharr/pkg/debrid/store"
 	"github.com/sirrobot01/decypharr/pkg/version"
-	"golang.org/x/net/webdav"
 )
const DeleteAllBadTorrentKey = "DELETE_ALL_BAD_TORRENTS"
 type Handler struct {
 	Name     string
 	logger   zerolog.Logger
-	cache    *debrid.Cache
+	cache    *store.Cache
 	URLBase  string
 	RootPath string
 }

-func NewHandler(name, urlBase string, cache *debrid.Cache, logger zerolog.Logger) *Handler {
+func NewHandler(name, urlBase string, cache *store.Cache, logger zerolog.Logger) *Handler {
 	h := &Handler{
 		Name:  name,
 		cache: cache,
@@ -61,25 +64,58 @@ func (h *Handler) readinessMiddleware(next http.Handler) http.Handler {

 // RemoveAll implements webdav.FileSystem
 func (h *Handler) RemoveAll(ctx context.Context, name string) error {
-	if name[0] != '/' {
+	if !strings.HasPrefix(name, "/") {
 		name = "/" + name
 	}
-	name = path.Clean(name)
+	name = utils.PathUnescape(path.Clean(name))
 	rootDir := path.Clean(h.RootPath)
 	if name == rootDir {
 		return os.ErrPermission
 	}
-	torrentName, _ := getName(rootDir, name)
-	cachedTorrent := h.cache.GetTorrentByName(torrentName)
-	if cachedTorrent == nil {
-		h.logger.Debug().Msgf("Torrent not found: %s", torrentName)
-		return nil // It's possible that the torrent was removed
+	// Skip if it's version.txt
+	if name == path.Join(rootDir, "version.txt") {
+		return os.ErrPermission
+	}
+	// Check if the name is a parent path
+	if _, ok := h.isParentPath(name); ok {
+		return os.ErrPermission
+	}
+	// Check if the name is a torrent folder
+	rel := strings.TrimPrefix(name, rootDir+"/")
+	parts := strings.Split(rel, "/")
+	if len(parts) == 2 && utils.Contains(h.getParentItems(), parts[0]) {
+		torrentName := parts[1]
+		torrent := h.cache.GetTorrentByName(torrentName)
+		if torrent == nil {
+			return os.ErrNotExist
+		}
+		// Remove the torrent from the cache and debrid
+		h.cache.OnRemove(torrent.Id)
+		return nil
+	}
+	// If we reach here, it means the path is a file
+	if len(parts) >= 2 {
+		if utils.Contains(h.getParentItems(), parts[0]) {
+			torrentName := parts[1]
+			cached := h.cache.GetTorrentByName(torrentName)
+			if cached != nil && len(parts) >= 3 {
+				filename := filepath.Clean(path.Join(parts[2:]...))
+				if file, ok := cached.GetFile(filename); ok {
+					if err := h.cache.RemoveFile(cached.Id, file.Name); err != nil {
+						h.logger.Error().Err(err).Msgf("Failed to remove file %s from torrent %s", file.Name, torrentName)
+						return err
+					}
+					// If the file was successfully removed, we can return nil
+					return nil
+				}
+			}
+		}
 	}
-	h.cache.OnRemove(cachedTorrent.Id)
 	return nil
 }
@@ -146,7 +182,7 @@ func (h *Handler) getChildren(name string) []os.FileInfo {
 	if len(parts) == 2 && utils.Contains(h.getParentItems(), parts[0]) {
 		torrentName := parts[1]
 		if t := h.cache.GetTorrentByName(torrentName); t != nil {
-			return h.getFileInfos(t.Torrent)
+			return h.getFileInfos(t)
 		}
 	}
 	return nil
@@ -158,7 +194,7 @@ func (h *Handler) OpenFile(ctx context.Context, name string, flag int, perm os.F
 	}
 	name = utils.PathUnescape(path.Clean(name))
 	rootDir := path.Clean(h.RootPath)
-	metadataOnly := ctx.Value("metadataOnly") != nil
+	metadataOnly := ctx.Value(metadataOnlyKey) != nil
 	now := time.Now()

 	// 1) special case version.txt
@@ -202,7 +238,7 @@ func (h *Handler) OpenFile(ctx context.Context, name string, flag int, perm os.F
 	cached := h.cache.GetTorrentByName(torrentName)
 	if cached != nil && len(parts) >= 3 {
 		filename := filepath.Clean(path.Join(parts[2:]...))
-		if file, ok := cached.Files[filename]; ok {
+		if file, ok := cached.GetFile(filename); ok && !file.Deleted {
 			return &File{
 				cache:       h.cache,
 				torrentName: torrentName,
@@ -212,6 +248,7 @@ func (h *Handler) OpenFile(ctx context.Context, name string, flag int, perm os.F
 				size:         file.Size,
 				link:         file.Link,
 				metadataOnly: metadataOnly,
+				isRar:        file.IsRar,
 				modTime:      cached.AddedOn,
 			}, nil
 		}
@@ -232,13 +269,13 @@ func (h *Handler) Stat(ctx context.Context, name string) (os.FileInfo, error) {
 	return f.Stat()
 }

-func (h *Handler) getFileInfos(torrent *types.Torrent) []os.FileInfo {
-	files := make([]os.FileInfo, 0, len(torrent.Files))
-	now := time.Now()
+func (h *Handler) getFileInfos(torrent *store.CachedTorrent) []os.FileInfo {
+	torrentFiles := torrent.GetFiles()
+	files := make([]os.FileInfo, 0, len(torrentFiles))
 	// Sort by file name since the order is lost when using the map
-	sortedFiles := make([]*types.File, 0, len(torrent.Files))
-	for _, file := range torrent.Files {
+	sortedFiles := make([]*types.File, 0, len(torrentFiles))
+	for _, file := range torrentFiles {
 		sortedFiles = append(sortedFiles, &file)
 	}
 	slices.SortFunc(sortedFiles, func(a, b *types.File) int {
@@ -250,7 +287,7 @@ func (h *Handler) getFileInfos(torrent *types.Torrent) []os.FileInfo {
 			name:    file.Name,
 			size:    file.Size,
 			mode:    0644,
-			modTime: now,
+			modTime: torrent.AddedOn,
 			isDir:   false,
 		})
 	}
@@ -292,7 +329,6 @@ func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 		},
 	}
 	handler.ServeHTTP(w, r)
-	return
 }

 func getContentType(fileName string) string {
@@ -353,21 +389,23 @@ func (h *Handler) serveDirectory(w http.ResponseWriter, r *http.Request, file we
 	// Prepare template data
 	data := struct {
 		Path                   string
 		ParentPath             string
 		ShowParent             bool
 		Children               []os.FileInfo
 		URLBase                string
 		IsBadPath              bool
 		CanDelete              bool
+		DeleteAllBadTorrentKey string
 	}{
 		Path:                   cleanPath,
 		ParentPath:             parentPath,
 		ShowParent:             showParent,
 		Children:               children,
 		URLBase:                h.URLBase,
 		IsBadPath:              isBadPath,
 		CanDelete:              canDelete,
+		DeleteAllBadTorrentKey: DeleteAllBadTorrentKey,
 	}

 	w.Header().Set("Content-Type", "text/html; charset=utf-8")
@@ -376,98 +414,95 @@
 	}
 }

-// Handlers
 func (h *Handler) handleGet(w http.ResponseWriter, r *http.Request) {
 	fRaw, err := h.OpenFile(r.Context(), r.URL.Path, os.O_RDONLY, 0)
 	if err != nil {
-		h.logger.Error().Err(err).
-			Str("path", r.URL.Path).
-			Msg("Failed to open file")
 		http.NotFound(w, r)
 		return
 	}
-	defer func(fRaw webdav.File) {
-		err := fRaw.Close()
-		if err != nil {
-			h.logger.Error().Err(err).Msg("Failed to close file")
-			return
-		}
-	}(fRaw)
+	defer fRaw.Close()

 	fi, err := fRaw.Stat()
 	if err != nil {
-		h.logger.Error().Err(err).Msg("Failed to stat file")
 		http.Error(w, "Server Error", http.StatusInternalServerError)
 		return
 	}

-	// If the target is a directory, use your directory listing logic.
 	if fi.IsDir() {
 		h.serveDirectory(w, r, fRaw)
 		return
 	}

-	// Checks if the file is a torrent file
-	// .content is nil if the file is a torrent file
-	// .content means file is preloaded, e.g version.txt
-	if file, ok := fRaw.(*File); ok && file.content == nil {
-		link, err := file.getDownloadLink()
-		if err != nil {
-			h.logger.Debug().
-				Err(err).
-				Str("link", file.link).
-				Str("path", r.URL.Path).
-				Msg("Could not fetch download link")
-			http.Error(w, "Could not fetch download link", http.StatusPreconditionFailed)
-			return
-		}
-		if link == "" {
-			http.NotFound(w, r)
-			return
-		}
-		file.downloadLink = link
-		if h.cache.StreamWithRclone() {
-			// Redirect to the download link
-			http.Redirect(w, r, file.downloadLink, http.StatusTemporaryRedirect)
-			return
-		}
-	}
-
-	// ETags
+	// Set common headers
 	etag := fmt.Sprintf("\"%x-%x\"", fi.ModTime().Unix(), fi.Size())
 	w.Header().Set("ETag", etag)
+	w.Header().Set("Last-Modified", fi.ModTime().UTC().Format(http.TimeFormat))

-	// 7. Content-Type by extension
 	ext := filepath.Ext(fi.Name())
-	contentType := mime.TypeByExtension(ext)
-	if contentType == "" {
-		contentType = "application/octet-stream"
-	}
-	w.Header().Set("Content-Type", contentType)
+	if contentType := mime.TypeByExtension(ext); contentType != "" {
+		w.Header().Set("Content-Type", contentType)
+	} else {
+		w.Header().Set("Content-Type", "application/octet-stream")
+	}

-	rs, ok := fRaw.(io.ReadSeeker)
-	if !ok {
-		if r.Header.Get("Range") != "" {
-			http.Error(w, "Range not supported", http.StatusRequestedRangeNotSatisfiable)
-			return
-		}
-		w.Header().Set("Content-Length", fmt.Sprintf("%d", fi.Size()))
-		w.Header().Set("Last-Modified", fi.ModTime().UTC().Format(http.TimeFormat))
-		w.Header().Set("Accept-Ranges", "bytes")
-		ctx := r.Context()
-		done := make(chan struct{})
-		go func() {
-			defer close(done)
-			io.Copy(w, fRaw)
-		}()
-		select {
-		case <-ctx.Done():
-			h.logger.Debug().Msg("Client cancelled download")
-			return
-		case <-done:
-		}
-		return
-	}
-	http.ServeContent(w, r, fi.Name(), fi.ModTime(), rs)
+	// Handle File struct with direct streaming
+	if file, ok := fRaw.(*File); ok {
+		// Handle nginx proxy (X-Accel-Redirect)
+		if file.content == nil && !file.isRar && h.cache.StreamWithRclone() {
+			link, err := file.getDownloadLink()
+			if err != nil || link == "" {
+				http.Error(w, "Could not fetch download link", http.StatusPreconditionFailed)
+				return
+			}
+			w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", fi.Name()))
+			w.Header().Set("X-Accel-Redirect", link)
+			w.Header().Set("X-Accel-Buffering", "no")
+			http.Redirect(w, r, link, http.StatusFound)
+			return
+		}
+
+		if err := file.StreamResponse(w, r); err != nil {
+			var streamErr *streamError
+			if errors.As(err, &streamErr) {
+				// Handle client disconnections silently (just debug log)
+				if errors.Is(streamErr.Err, context.Canceled) || errors.Is(streamErr.Err, context.DeadlineExceeded) || streamErr.IsClientDisconnection {
+					return // Don't log as error or try to write response
+				}
+				if streamErr.StatusCode > 0 && !hasHeadersWritten(w) {
+					http.Error(w, streamErr.Error(), streamErr.StatusCode)
+				} else {
+					h.logger.Error().
+						Err(streamErr.Err).
+						Str("path", r.URL.Path).
+						Msg("Stream error")
+				}
+			} else {
+				// Generic error
+				if !hasHeadersWritten(w) {
+					http.Error(w, "Stream error", http.StatusInternalServerError)
+				} else {
+					h.logger.Error().
+						Err(err).
+						Str("path", r.URL.Path).
+						Msg("Stream error after headers written")
+				}
+			}
+		}
+		return
+	}
+
+	// Fallback to ServeContent for other webdav.File implementations
+	if rs, ok := fRaw.(io.ReadSeeker); ok {
+		http.ServeContent(w, r, fi.Name(), fi.ModTime(), rs)
+	} else {
+		w.Header().Set("Content-Length", fmt.Sprintf("%d", fi.Size()))
+		w.WriteHeader(http.StatusOK)
+		_, _ = io.Copy(w, fRaw)
+	}
 }
 func (h *Handler) handleHead(w http.ResponseWriter, r *http.Request) {
@@ -503,7 +538,7 @@ func (h *Handler) handleOptions(w http.ResponseWriter, r *http.Request) {
 	w.WriteHeader(http.StatusOK)
 }
-// handleDelete deletes a torrent from using id
+// handleDelete deletes a torrent by id, or all bad torrents if the id is DeleteAllBadTorrentKey
 func (h *Handler) handleDelete(w http.ResponseWriter, r *http.Request) error {
 	cleanPath := path.Clean(r.URL.Path) // Remove any leading slashes
@@ -512,7 +547,15 @@ func (h *Handler) handleDelete(w http.ResponseWriter, r *http.Request) error {
 		return os.ErrNotExist
 	}

-	cachedTorrent := h.cache.GetTorrent(torrentId)
+	if torrentId == DeleteAllBadTorrentKey {
+		return h.handleDeleteAll(w)
+	}
+	return h.handleDeleteById(w, torrentId)
+}
+
+func (h *Handler) handleDeleteById(w http.ResponseWriter, tId string) error {
+	cachedTorrent := h.cache.GetTorrent(tId)
 	if cachedTorrent == nil {
 		return os.ErrNotExist
 	}
@@ -521,3 +564,22 @@ func (h *Handler) handleDelete(w http.ResponseWriter, r *http.Request) error {
 	w.WriteHeader(http.StatusNoContent)
 	return nil
 }
func (h *Handler) handleDeleteAll(w http.ResponseWriter) error {
badTorrents := h.cache.GetListing("__bad__")
if len(badTorrents) == 0 {
http.Error(w, "No bad torrents to delete", http.StatusNotFound)
return nil
}
for _, fi := range badTorrents {
tName := strings.TrimSpace(strings.SplitN(fi.Name(), "||", 2)[0])
t := h.cache.GetTorrentByName(tName)
if t != nil {
h.cache.OnRemove(t.Id)
}
}
w.WriteHeader(http.StatusNoContent)
return nil
}

View File

@@ -1,6 +1,7 @@
 package webdav

 import (
+	"fmt"
 	"github.com/stanNthe5/stringbuf"
 	"net/http"
 	"net/url"
@@ -11,16 +12,6 @@ import (
 	"time"
 )

-// getName: Returns the torrent name and filename from the path
-func getName(rootDir, path string) (string, string) {
-	path = strings.TrimPrefix(path, rootDir)
-	parts := strings.Split(strings.TrimPrefix(path, string(os.PathSeparator)), string(os.PathSeparator))
-	if len(parts) < 2 {
-		return "", ""
-	}
-	return parts[1], strings.Join(parts[2:], string(os.PathSeparator)) // Note the change from [0] to [1]
-}

 func isValidURL(str string) bool {
 	u, err := url.Parse(str)
 	// A valid URL should parse without error, and have a non-empty scheme and host.
@@ -65,7 +56,7 @@ type entry struct {

 func filesToXML(urlPath string, fi os.FileInfo, children []os.FileInfo) stringbuf.StringBuf {
-	now := time.Now().UTC().Format("2006-01-02T15:04:05.000-07:00")
+	now := time.Now().UTC().Format(time.RFC3339)
 	entries := make([]entry, 0, len(children)+1)

 	// Add the current file itself
@@ -74,7 +65,7 @@ func filesToXML(urlPath string, fi os.FileInfo, children []os.FileInfo) stringbu
 		escName: xmlEscape(fi.Name()),
 		isDir:   fi.IsDir(),
 		size:    fi.Size(),
-		modTime: fi.ModTime().Format("2006-01-02T15:04:05.000-07:00"),
+		modTime: fi.ModTime().Format(time.RFC3339),
 	})

 	for _, info := range children {
@@ -90,13 +81,11 @@ func filesToXML(urlPath string, fi os.FileInfo, children []os.FileInfo) stringbu
 			escName: xmlEscape(nm),
 			isDir:   info.IsDir(),
 			size:    info.Size(),
-			modTime: info.ModTime().Format("2006-01-02T15:04:05.000-07:00"),
+			modTime: info.ModTime().Format(time.RFC3339),
 		})
 	}

-	sb := builderPool.Get().(stringbuf.StringBuf)
-	sb.Reset()
-	defer builderPool.Put(sb)
+	sb := stringbuf.New("")

 	// XML header and main element
 	_, _ = sb.WriteString(`<?xml version="1.0" encoding="UTF-8"?>`)
@@ -144,3 +133,110 @@ func writeXml(w http.ResponseWriter, status int, buf stringbuf.StringBuf) {
 	w.WriteHeader(status)
 	_, _ = w.Write(buf.Bytes())
 }
func hasHeadersWritten(w http.ResponseWriter) bool {
// Most ResponseWriter implementations support this
if hw, ok := w.(interface{ Written() bool }); ok {
return hw.Written()
}
return false
}
func isClientDisconnection(err error) bool {
if err == nil {
return false
}
errStr := err.Error()
// Common client disconnection error patterns
return strings.Contains(errStr, "broken pipe") ||
strings.Contains(errStr, "connection reset by peer") ||
strings.Contains(errStr, "write: connection reset") ||
strings.Contains(errStr, "read: connection reset") ||
strings.Contains(errStr, "context canceled") ||
strings.Contains(errStr, "context deadline exceeded") ||
strings.Contains(errStr, "client disconnected") ||
strings.Contains(errStr, "EOF")
}
type httpRange struct{ start, end int64 }
func parseRange(s string, size int64) ([]httpRange, error) {
if s == "" {
return nil, nil
}
const b = "bytes="
if !strings.HasPrefix(s, b) {
return nil, fmt.Errorf("invalid range")
}
var ranges []httpRange
for _, ra := range strings.Split(s[len(b):], ",") {
ra = strings.TrimSpace(ra)
if ra == "" {
continue
}
i := strings.Index(ra, "-")
if i < 0 {
return nil, fmt.Errorf("invalid range")
}
start, end := strings.TrimSpace(ra[:i]), strings.TrimSpace(ra[i+1:])
var r httpRange
if start == "" {
i, err := strconv.ParseInt(end, 10, 64)
if err != nil {
return nil, fmt.Errorf("invalid range")
}
if i > size {
i = size
}
r.start = size - i
r.end = size - 1
} else {
i, err := strconv.ParseInt(start, 10, 64)
if err != nil || i < 0 {
return nil, fmt.Errorf("invalid range")
}
r.start = i
if end == "" {
r.end = size - 1
} else {
i, err := strconv.ParseInt(end, 10, 64)
if err != nil || r.start > i {
return nil, fmt.Errorf("invalid range")
}
if i >= size {
i = size - 1
}
r.end = i
}
}
if r.start > size-1 {
continue
}
ranges = append(ranges, r)
}
return ranges, nil
}
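`parseRange` follows the RFC 7233 byte-range grammar: `start-end` is an inclusive pair, `start-` runs to the last byte, and `-N` means the final N bytes. A minimal standalone sketch of how one spec resolves against a known size (the `resolveRange` helper is illustrative, not the project's function):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// resolveRange resolves a single "start-end", "start-", or "-N" spec
// against a resource of the given size, clamping to valid offsets.
func resolveRange(spec string, size int64) (start, end int64, err error) {
	i := strings.Index(spec, "-")
	if i < 0 {
		return 0, 0, fmt.Errorf("invalid range %q", spec)
	}
	s, e := spec[:i], spec[i+1:]
	if s == "" { // suffix form "-N": the last N bytes
		var n int64
		if n, err = strconv.ParseInt(e, 10, 64); err != nil {
			return
		}
		if n > size {
			n = size
		}
		return size - n, size - 1, nil
	}
	if start, err = strconv.ParseInt(s, 10, 64); err != nil {
		return
	}
	if e == "" { // open-ended form "N-": from N to the last byte
		return start, size - 1, nil
	}
	if end, err = strconv.ParseInt(e, 10, 64); err != nil {
		return
	}
	if end >= size { // clamp to the resource size
		end = size - 1
	}
	return
}

func main() {
	s, e, _ := resolveRange("-500", 10000) // last 500 bytes
	fmt.Println(s, e)                      // 9500 9999
	s, e, _ = resolveRange("0-", 10000)    // the whole resource
	fmt.Println(s, e)                      // 0 9999
}
```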
func setVideoStreamingHeaders(req *http.Request) {
// Request optimizations for faster response
req.Header.Set("Accept", "*/*")
req.Header.Set("Accept-Encoding", "identity")
req.Header.Set("Connection", "keep-alive")
req.Header.Set("User-Agent", "VideoStream/1.0")
req.Header.Set("Priority", "u=1")
}
func setVideoResponseHeaders(w http.ResponseWriter, resp *http.Response, isRange bool) {
// Copy essential headers from upstream
if contentLength := resp.Header.Get("Content-Length"); contentLength != "" {
w.Header().Set("Content-Length", contentLength)
}
if contentRange := resp.Header.Get("Content-Range"); contentRange != "" && isRange {
w.Header().Set("Content-Range", contentRange)
}
w.WriteHeader(resp.StatusCode)
}

View File

@@ -8,21 +8,19 @@ import (
 	"path"
 	"strconv"
 	"strings"
-	"sync"
 	"time"
 )

-var builderPool = sync.Pool{
-	New: func() interface{} {
-		buf := stringbuf.New("")
-		return buf
-	},
-}
+type contextKey string
+
+const (
+	// metadataOnlyKey is used to indicate that the request is for metadata only
+	metadataOnlyKey contextKey = "metadataOnly"
+)

 func (h *Handler) handlePropfind(w http.ResponseWriter, r *http.Request) {
 	// Setup context for metadata only
-	ctx := context.WithValue(r.Context(), "metadataOnly", true)
+	ctx := context.WithValue(r.Context(), metadataOnlyKey, true)
 	r = r.WithContext(ctx)

 	cleanPath := path.Clean(r.URL.Path)
@@ -57,7 +55,6 @@ func (h *Handler) handlePropfind(w http.ResponseWriter, r *http.Request) {
 		rawEntries = append(rawEntries, h.getChildren(cleanPath)...)
 	}

-	now := time.Now().UTC().Format("2006-01-02T15:04:05.000-07:00")
 	entries := make([]entry, 0, len(rawEntries)+1)

 	// Add the current file itself
 	entries = append(entries, entry{
@@ -65,7 +62,7 @@ func (h *Handler) handlePropfind(w http.ResponseWriter, r *http.Request) {
 		escName: xmlEscape(fi.Name()),
 		isDir:   fi.IsDir(),
 		size:    fi.Size(),
-		modTime: fi.ModTime().Format("2006-01-02T15:04:05.000-07:00"),
+		modTime: fi.ModTime().Format(time.RFC3339),
 	})

 	for _, info := range rawEntries {
@@ -81,13 +78,11 @@ func (h *Handler) handlePropfind(w http.ResponseWriter, r *http.Request) {
 			escName: xmlEscape(nm),
 			isDir:   info.IsDir(),
 			size:    info.Size(),
-			modTime: info.ModTime().Format("2006-01-02T15:04:05.000-07:00"),
+			modTime: info.ModTime().Format(time.RFC3339),
 		})
 	}

-	sb := builderPool.Get().(stringbuf.StringBuf)
-	sb.Reset()
-	defer builderPool.Put(sb)
+	sb := stringbuf.New("")

 	// XML header and main element
 	_, _ = sb.WriteString(`<?xml version="1.0" encoding="UTF-8"?>`)
@@ -112,7 +107,7 @@ func (h *Handler) handlePropfind(w http.ResponseWriter, r *http.Request) {
 	}

 	_, _ = sb.WriteString(`<d:getlastmodified>`)
-	_, _ = sb.WriteString(now)
+	_, _ = sb.WriteString(e.modTime)
 	_, _ = sb.WriteString(`</d:getlastmodified>`)

 	_, _ = sb.WriteString(`<d:displayname>`)

View File

@@ -106,6 +106,19 @@
 			</li>
 			{{- end}}
 			{{$isBadPath := hasSuffix .Path "__bad__"}}
+			{{- if and $isBadPath (gt (len .Children) 0) }}
+			<li>
+				<span class="file-number">&nbsp;</span>
+				<span class="file-name">&nbsp;</span>
+				<span class="file-info">&nbsp;</span>
+				<button
+					class="delete-btn"
+					id="delete-all-btn"
+					data-name="{{.DeleteAllBadTorrentKey}}">
+					Delete All
+				</button>
+			</li>
+			{{- end}}
 			{{- range $i, $file := .Children}}
 			<li class="{{if $isBadPath}}disabled{{end}}">
 				<a {{ if not $isBadPath}}href="{{urlpath (printf "%s/%s" $.Path $file.Name)}}"{{end}}>
@@ -118,7 +131,7 @@
 				</a>
 				{{- if and $.CanDelete }}
 				<button
-					class="delete-btn"
+					class="delete-btn delete-with-id-btn"
 					data-name="{{$file.Name}}"
 					data-path="{{printf "%s/%s" $.Path $file.ID}}">
 					Delete
@@ -128,7 +141,7 @@
 			{{- end}}
 		</ul>
 		<script>
-			document.querySelectorAll('.delete-btn').forEach(btn=>{
+			document.querySelectorAll('.delete-with-id-btn').forEach(btn=>{
 				btn.addEventListener('click', ()=>{
 					let p = btn.getAttribute('data-path');
 					let name = btn.getAttribute('data-name');
@@ -137,6 +150,14 @@
 					.then(_=>location.reload());
 				});
 			});
+			const deleteAllButton = document.getElementById('delete-all-btn');
+			deleteAllButton.addEventListener('click', () => {
+				let p = deleteAllButton.getAttribute('data-name');
+				if (!confirm('Delete all entries marked Bad?')) return;
+				fetch(p, { method: 'DELETE' })
+				.then(_=>location.reload());
+			});
 		</script>
 	</body>
 </html>

View File

@@ -7,7 +7,7 @@ import (
 	"github.com/go-chi/chi/v5"
 	"github.com/go-chi/chi/v5/middleware"
 	"github.com/sirrobot01/decypharr/internal/config"
-	"github.com/sirrobot01/decypharr/pkg/service"
+	"github.com/sirrobot01/decypharr/pkg/store"
 	"html/template"
 	"net/http"
 	"net/url"
@@ -90,14 +90,13 @@ type WebDav struct {
 }

 func New() *WebDav {
-	svc := service.GetService()
 	urlBase := config.Get().URLBase
 	w := &WebDav{
 		Handlers: make([]*Handler, 0),
 		URLBase:  urlBase,
 	}
-	for name, c := range svc.Debrid.Caches {
-		h := NewHandler(name, urlBase, c, c.GetLogger())
+	for name, c := range store.Get().Debrid().Caches() {
+		h := NewHandler(name, urlBase, c, c.Logger())
 		w.Handlers = append(w.Handlers, h)
 	}
 	return w

View File

@@ -1,72 +0,0 @@
package worker
import (
"context"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/pkg/service"
"sync"
"time"
)
var (
_logInstance zerolog.Logger
)
func getLogger() zerolog.Logger {
return _logInstance
}
func Start(ctx context.Context) error {
cfg := config.Get()
// Start Arr Refresh Worker
_logInstance = logger.New("worker")
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
cleanUpQueuesWorker(ctx, cfg)
}()
wg.Wait()
return nil
}
func cleanUpQueuesWorker(ctx context.Context, cfg *config.Config) {
// Start Clean up Queues Worker
_logger := getLogger()
_logger.Debug().Msg("Clean up Queues Worker started")
cleanupCtx := context.WithValue(ctx, "worker", "cleanup")
cleanupTicker := time.NewTicker(time.Duration(10) * time.Second)
var cleanupMutex sync.Mutex
for {
select {
case <-cleanupCtx.Done():
_logger.Debug().Msg("Clean up Queues Worker stopped")
return
case <-cleanupTicker.C:
if cleanupMutex.TryLock() {
go func() {
defer cleanupMutex.Unlock()
cleanUpQueues()
}()
}
}
}
}
func cleanUpQueues() {
// Clean up queues
_logger := getLogger()
for _, a := range service.GetService().Arr.GetAll() {
if !a.Cleanup {
continue
}
if err := a.CleanupQueue(); err != nil {
_logger.Error().Err(err).Msg("Error cleaning up queue")
}
}
}

View File

@@ -1,57 +0,0 @@
#!/bin/bash
# deploy.sh
# Function to display usage
usage() {
echo "Usage: $0 [-b|--beta] <version>"
echo "Example for main: $0 v1.0.0"
echo "Example for beta: $0 -b v1.0.0"
exit 1
}
# Parse arguments
BETA=false
while [[ "$#" -gt 0 ]]; do
case $1 in
-b|--beta) BETA=true; shift ;;
-*) echo "Unknown parameter: $1"; usage ;;
*) VERSION="$1"; shift ;;
esac
done
# Check if version is provided
if [ -z "$VERSION" ]; then
echo "Error: Version is required"
usage
fi
# Validate version format
if ! [[ $VERSION =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "Error: Version must be in format v1.0.0"
exit 1
fi
# Set tag based on branch
if [ "$BETA" = true ]; then
TAG="$VERSION-beta"
BRANCH="beta"
else
TAG="$VERSION"
BRANCH="main"
fi
echo "Deploying version $VERSION to $BRANCH branch..."
# Ensure we're on the right branch
git checkout $BRANCH || exit 1
# Create and push tag
echo "Creating tag $TAG..."
git tag "$TAG" || exit 1
git push origin "$TAG" || exit 1
echo "Deployment initiated successfully!"
echo "GitHub Actions will handle the release process."
echo "Check the progress at: https://github.com/sirrobot01/decypharr/actions"