23 Commits
v1.1.4 ... main

Author SHA1 Message Date
f5b1f100e2 fix: handle TorBox 'checking' state as downloading, add symlink completion logging
All checks were successful
CI/CD / Build & Push Docker Image (push) Successful in 1m10s
- Add 'checking' to the list of downloading states in getTorboxStatus()
  so TorBox torrents in transitional state don't get marked as error
- Add debug log before calling onSuccess to trace state update flow

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 11:56:14 -08:00
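The state mapping described in this commit can be sketched as follows. This is a minimal illustration only: `getTorboxStatus`, its signature, and the exact state lists are assumptions pieced together from the commit messages, not the project's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// getTorboxStatus maps TorBox API download states to internal statuses.
// "checking" is grouped with the downloading states so torrents in that
// transitional state are not marked as errors.
func getTorboxStatus(state string, downloadFinished bool) string {
	switch strings.ToLower(state) { // case-insensitive, per commit 43a94118f4
	case "cached", "completed":
		return "downloaded" // instant/cached content, per commit 38d59cb01d
	case "downloading", "queued", "metadl", "checking":
		return "downloading"
	default:
		if downloadFinished {
			return "downloaded"
		}
		return "error"
	}
}

func main() {
	fmt.Println(getTorboxStatus("checking", false))
	fmt.Println(getTorboxStatus("Cached", false))
}
```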
4cf8246550 fix: add safety check to force state update after symlink processing
All checks were successful
CI/CD / Build & Push Docker Image (push) Successful in 1m16s
Add a fallback in onSuccess that ensures the torrent state is set to
'pausedUP' after updateTorrent returns. This catches edge cases where
updateTorrent might not set the state correctly.

The check:
1. Verifies state after updateTorrent returns
2. If state is not 'pausedUP' but we have a valid symlink path, force update
3. Logs a warning with diagnostic info when fallback triggers

This should definitively fix the TorBox downloads stuck in 'downloading'
state issue (dcy-355).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 11:47:43 -08:00
3f96382e76 debug: add comprehensive logging for TorBox state transition
All checks were successful
CI/CD / Build & Push Docker Image (push) Successful in 1m13s
Add logging at key points in the download flow to diagnose why TorBox
downloads get stuck in 'downloading' state despite completion:

- Log initial state at processFiles start
- Log when download loop exits
- Log when onSuccess callback is invoked
- Log condition evaluation in updateTorrent
- Log when state is set to pausedUP

Also fix missing return after onFailed for empty symlink path edge case.

Part of dcy-355 investigation.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 11:46:33 -08:00
43a94118f4 debug: add logging and case-insensitive check for TorBox status
All checks were successful
CI/CD / Build & Push Docker Image (push) Successful in 1m15s
Added debug logging to see actual DownloadState values from TorBox API.
Also made status comparison case-insensitive.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 11:30:23 -08:00
38d59cb01d fix: treat TorBox cached/completed status as downloaded
All checks were successful
CI/CD / Build & Push Docker Image (push) Successful in 1m11s
For instant/cached TorBox downloads, the API returns DownloadFinished=false
(no download happened - content was already cached) with DownloadState="cached"
or "completed". These should be treated as "downloaded" since content is
immediately available.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 11:27:05 -08:00
b306248db6 fix: directly set pausedUP state when debrid reports downloaded
All checks were successful
CI/CD / Build & Push Docker Image (push) Successful in 1m27s
Previous fix relied on IsReady() calculation which might still fail
due to edge cases with progress/AmountLeft values. This fix directly
sets state to pausedUP when debridTorrent.Status == "downloaded" and
TorrentPath is set, bypassing the IsReady() check entirely.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-31 11:16:41 -08:00
3a5289cb1d fix: TorBox downloads stuck in 'downloading' state
All checks were successful
CI/CD / Build & Push Docker Image (push) Successful in 1m17s
Fixed race condition where TorBox reports DownloadFinished=true but
Progress < 1.0, causing IsReady() to return false and state to stay
"downloading" instead of transitioning to "pausedUP".

Also added Gitea CI workflow to push images to internal registry.

Fixes: dcy-355

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 22:57:36 -08:00
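The race described here — `DownloadFinished=true` while `Progress < 1.0` — can be illustrated with a readiness check that trusts the finished flag. Names are assumptions; this is not the project's actual `IsReady()`.

```go
package main

import "fmt"

// DebridTorrent is a minimal stand-in for the real debrid torrent type.
type DebridTorrent struct {
	DownloadFinished bool
	Progress         float64
}

// isReady treats the torrent as complete when the debrid service reports
// DownloadFinished, even if the reported progress still lags behind 1.0.
// Checking only Progress >= 1.0 is what left torrents stuck in "downloading".
func isReady(t DebridTorrent) bool {
	return t.DownloadFinished || t.Progress >= 1.0
}

func main() {
	fmt.Println(isReady(DebridTorrent{DownloadFinished: true, Progress: 0.97}))
}
```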
Mukhtar Akere
207d43b13f fix issues with rclone on Windows
Some checks failed
ci / deploy (push) Successful in 1m6s
Release Docker Build / docker (push) Has been cancelled
GoReleaser / goreleaser (push) Has been cancelled
2025-10-27 11:56:23 +01:00
Mukhtar Akere
9f9a85d302 fix gh workflow 2025-10-27 11:49:12 +01:00
Mukhtar Akere
2712315108 hotfix: downloader deleting files with multi-season
Some checks failed
Beta Docker Build / docker (push) Failing after 2m53s
2025-10-27 11:45:12 +01:00
Mukhtar Akere
1f384ba4f7 remove ci/cd test 2025-10-26 11:41:39 +01:00
Mukhtar Akere
7db79e99ba minor fixes 2025-10-22 16:57:06 +01:00
Mukhtar Akere
ad394c86ee Merge branch 'beta' of github.com:sirrobot01/decypharr into beta 2025-10-22 16:52:04 +01:00
crashxer
7af90ebe47 Add feature to remove torrent tracker URLs from torrents for private tracker downloads (#99)
- Remove trackers from torrents/magnet URI

---------

Co-authored-by: Mukhtar Akere <akeremukhtar10@gmail.com>
2025-10-22 16:44:23 +01:00
Duc Nghiem Xuan
7032cc368b Update version badge link format (#163)
Fix version badge link to use correct version format.
2025-10-22 16:40:24 +01:00
rotecode
f21f5cad94 fix regression in webdav file removal logic (#162) 2025-10-21 10:50:26 +01:00
Mukhtar Akere
f93d1a5913 minor cleanup
Some checks failed
Release Docker Build / docker (push) Has been cancelled
GoReleaser / goreleaser (push) Has been cancelled
2025-10-16 21:04:14 +01:00
Mukhtar Akere
2a4f09c06d Delete failed download link for next retry 2025-10-15 11:56:15 +01:00
Mukhtar Akere
b1b6353fb3 fix download_uncached bug 2025-10-13 20:37:59 +01:00
Mukhtar Akere
df7979c430 Fix some minor issues with authentication and qbit auth 2025-10-13 20:12:17 +01:00
Mukhtar Akere
726f97e13c chore:
- Rewrite arr storage to fix issues with repair
- Fix issues with restarts taking longer than expected
- Add bw_limit to rclone config
- Add support for skipping multi-season
- Other minor bug fixes
2025-10-13 17:02:50 +01:00
Mukhtar Akere
ab485adfc8 hotfix 2025-10-08 09:14:55 +01:00
Mukhtar Akere
700d00b802 - Fix issues with new setup
- Fix arr setup getting the wrong credentials
- Add file link invalidator
- Other minor bug fixes
2025-10-08 08:13:13 +01:00
61 changed files with 1570 additions and 743 deletions

.gitea/workflows/ci.yml Normal file

@@ -0,0 +1,55 @@
name: CI/CD
on:
push:
branches: [master, main]
pull_request:
branches: [master, main]
workflow_dispatch:
permissions:
contents: read
actions: write
jobs:
build-and-push:
name: Build & Push Docker Image
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
outputs:
image_tag: ${{ steps.meta.outputs.tag }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Generate image metadata
id: meta
run: |
SHORT_SHA=$(echo "${{ github.sha }}" | cut -c1-7)
echo "tag=${SHORT_SHA}" >> $GITHUB_OUTPUT
echo "Image will be tagged: ${SHORT_SHA}"
- name: Login to registry
run: |
echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.johnogle.info -u ${{ secrets.REGISTRY_USERNAME }} --password-stdin
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: |
registry.johnogle.info/johno/decypharr:${{ steps.meta.outputs.tag }}
registry.johnogle.info/johno/decypharr:latest
cache-from: type=gha
cache-to: type=gha,mode=max
build-args: |
VERSION=${{ steps.meta.outputs.tag }}
CHANNEL=dev

.gitignore vendored

@@ -3,7 +3,9 @@ config.json
.idea/
.DS_Store
*.torrent
!testdata/*.torrent
*.magnet
!testdata/*.magnet
*.db
*.log
*.log.*
@@ -17,4 +19,11 @@ auth.json
node_modules/
.venv/
.stignore
.stfolder/**
.stfolder/**
# Gas Town (added by gt)
.runtime/
.claude/
.logs/
.beads/
state.json


@@ -0,0 +1,117 @@
# Private Tracker Downloads
It is against the rules of most private trackers to download using debrid services. That's because debrid services do not seed back.
Despite that, **many torrents from private trackers are cached on debrid services**.
This can happen if the exact same torrent is uploaded to a public tracker or if another user downloads the torrent from the private tracker using their debrid account.
However, you do **_NOT_** want to be the first person who downloads and caches the private tracker torrent because it is a very quick way to get your private tracker account banned.
Fortunately, decypharr offers a feature that allows you to check whether a private tracker torrent has _already_ been cached.
In a way, this feature lets you use your private trackers to find hashes for the latest releases that have not yet been indexed by zilean, torrentio, and other debrid-focused indexers.
This allows you to add private tracker torrents to your debrid account without breaking the most common private tracker rules. This significantly reduces the chance of account bans, **but please read the `Risks` section below** for more details and other precautions you should take.
## Risks
A lot of care has gone into ensuring this feature is compliant with most private tracker rules:
- The passkey is not leaked
- The private tracker announce URLs are not leaked
- The private tracker swarm is not leaked
- Even the torrent content is not leaked (by you)
You are merely downloading it from another source. It's not much different than downloading a torrent that has been uploaded to MegaUpload or another file hoster.
**But it is NOT completely risk-free.**
### Suspicious-looking activity
To use this feature, you must download the `.torrent` file from the private tracker. But since you will never leech the content, it can make your account look suspicious.
In fact, there is a strictly forbidden technique called `ghostleeching` that also requires downloading of the `.torrent` file, and tracker admins might suspect that this is what you are doing.
We know of one user who got banned from a Unit3D-based tracker for this.
**Here is what is recommended:**
- Be a good private tracker user in general. Perma-seed, upload, contribute
- Only enable `Interactive Search` in the arrs (disable `Automatic Search`)
- Only use it for content that is not on public sources yet, and you need to watch **RIGHT NOW** without having time to wait for the download to finish
- Do **NOT** use it to avoid seeding
### Accidentally disabling this feature
Another big risk is that you might accidentally disable the feature. The consequence will be that you actually leech the torrent from the tracker, don't seed it, and expose the private swarm to an untrusted third party.
You should avoid this at all costs.
Therefore, to reduce the risk further, it is recommended to enable the feature using both methods:
1. Using the global `Always Remove Tracker URLs` setting in your decypharr `config.json`
2. And by enabling the `First and Last First` setting in Radarr / Sonarr
This way, if one of them gets disabled, you have another backup.
## How to enable this feature
### Always Remove Tracker URLs
- In the web UI under `Settings -> QBitTorrent -> Always Remove Tracker URLs`
- Or in your `config.json` by setting `qbittorrent.always_rm_tracker_urls` to `true`
This ensures that the Tracker URLs are removed from **ALL torrents** (regardless of whether they are public, private, or how they were added).
But this can make downloads of uncached torrents slower or stall because the tracker helps the client find peers to download from.
If the torrent file has no tracker URLs, the torrent client can try to find peers for public torrents using [DHT](https://en.wikipedia.org/wiki/Mainline_DHT). However, this may be less efficient than connecting to a tracker, and the downloads may be slower or stall.
If you only download cached torrents, there is no further downside to enabling this option.
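Based on the `QBitTorrent` struct tag added in this changeset (`always_rm_tracker_urls`), a `config.json` fragment enabling the option might look like the following; the name of the surrounding `qbittorrent` key is an assumption:

```json
{
  "qbittorrent": {
    "always_rm_tracker_urls": true
  }
}
```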
### Only on specific Arr-app clients and indexers
Alternatively, you can toggle it only for specific download clients and indexers in the Arr-apps...
- Enable `Show Advanced Settings` in your Arr app
- Add a new download client in `Settings -> Download Clients` and call it something like `Decypharr (Private)`
- Enable the `First and Last First` checkbox, which will tell Decypharr to remove the tracker URLs
- Add a duplicate version of your private tracker indexer for Decypharr downloads
- Untick `Enable Automatic Search`
- Tick `Enable Interactive Search`
- Set `Download Client` to your new `Decypharr (Private)` client (requires `Show Advanced Settings`)
If you are using Prowlarr to sync your indexers, you can't set the `Download Client` in Prowlarr. You must update it directly in your Arr-apps after the indexers get synced. But future updates to the indexers won't reset the setting.
### Test it
After enabling the feature, try adding a [public torrent](https://ubuntu.com/download/alternative-downloads) through the Decypharr UI and a **public torrent** through your Arr-apps.
Then check the decypharr log for an entry like...
```log
Removed 2 tracker URLs from torrent file
```
If you see this log entry, it means the tracker URLs are being stripped from your torrents and you can safely enable it on private tracker indexers.
## How it works
When you add a new torrent through the QBitTorrent API or through the Web UI, decypharr converts your torrent into a magnet link and then uses your debrid service's API to download that magnet link.
The torrent magnet link contains:
1. The `info hash` that uniquely identifies the torrent, files, and file names
2. The torrent name
3. The URLs of the tracker to connect to
Private tracker URLs in torrents contain a `passkey`. This is a unique identifier that ties the torrent file to your private tracker account.
Only if the `passkey` is valid will the tracker allow the torrent client to connect and download the files. This is also how private torrent trackers measure your downloads and uploads.
The `Remove Tracker URLs` feature removes all the tracker URLs (which include your private `passkey`). This means when decypharr attempts to download the torrent, it only passes the `info hash` and torrent name to the debrid service.
Without the tracker URLs, your debrid service has no way to connect to the private tracker to download the files, and your `passkey` and the private torrent tracker swarm are not exposed.
**But if the torrent is already cached, it's immediately added to your account.**
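The idea can be illustrated with a stdlib-only sketch that drops every `tr` (tracker) parameter from a magnet URI while keeping `xt` (info hash) and `dn` (name). The actual implementation uses the `anacrolix/torrent` metainfo package; this is only an illustration of the concept:

```go
package main

import (
	"fmt"
	"net/url"
)

// stripTrackers removes all "tr" parameters from a magnet URI, so the
// private tracker announce URLs (and the passkey embedded in them) are
// never passed to the debrid service.
func stripTrackers(magnet string) (string, error) {
	u, err := url.Parse(magnet)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Del("tr") // drop every tracker URL
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	in := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8" +
		"&dn=ubuntu-25.04-desktop-amd64.iso" +
		"&tr=https%3A%2F%2Ftorrent.ubuntu.com%2Fannounce"
	out, _ := stripTrackers(in)
	fmt.Println(out)
}
```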


@@ -21,6 +21,7 @@ If it's the first time you're accessing the UI, you will be prompted to set up y
- Click on **Qbittorrent** in the tab
- Set the **Download Folder** to where you want Decypharr to save downloaded files. These files will be symlinked to the mount folder you configured earlier.
- Set **Always Remove Tracker URLs** if you want to always remove the tracker URLs from torrents and magnet links. This is useful if you want to [download private tracker torrents](features/private-tracker-downloads.md) without breaking the rules, but it will make uncached torrents always stall.
You can leave the remaining settings as default for now.
### Arrs Configuration
@@ -42,6 +43,7 @@ To connect Decypharr to your Sonarr or Radarr instance:
- **Category**: e.g., `sonarr`, `radarr` (match what you configured in Decypharr)
- **Use SSL**: `No`
- **Sequential Download**: `No` or `Yes` (if you want to download torrents locally instead of symlink)
- **First and Last First**: `No` by default, or `Yes` if you want to remove torrent tracker URLs from the torrents. This can make it possible to [download private tracker torrents without breaking the rules](features/private-tracker-downloads.md).
3. Click **Test** to verify the connection
4. Click **Save** to add the download client


@@ -66,6 +66,7 @@ nav:
- Features:
- Overview: features/index.md
- Repair Worker: features/repair-worker.md
- Private Tracker Downloads: features/private-tracker-downloads.md
- Guides:
- Overview: guides/index.md
- Manual Downloading: guides/downloading.md

go.mod

@@ -11,6 +11,7 @@ require (
github.com/go-co-op/gocron/v2 v2.16.1
github.com/google/uuid v1.6.0
github.com/gorilla/sessions v1.4.0
github.com/puzpuzpuz/xsync/v4 v4.1.0
github.com/robfig/cron/v3 v3.0.1
github.com/rs/zerolog v1.33.0
github.com/stanNthe5/stringbuf v0.0.3
@@ -34,7 +35,6 @@ require (
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/puzpuzpuz/xsync/v4 v4.1.0 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
golang.org/x/sys v0.33.0 // indirect
)

internal/config/auth.go Normal file

@@ -0,0 +1,19 @@
package config
import "golang.org/x/crypto/bcrypt"
func VerifyAuth(username, password string) bool {
// If you're storing hashed password, use bcrypt to compare
if username == "" {
return false
}
auth := Get().GetAuth()
if auth == nil {
return false
}
if username != auth.Username {
return false
}
err := bcrypt.CompareHashAndPassword([]byte(auth.Password), []byte(password))
return err == nil
}


@@ -49,14 +49,15 @@ type Debrid struct {
}
type QBitTorrent struct {
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
Port string `json:"port,omitempty"` // deprecated
DownloadFolder string `json:"download_folder,omitempty"`
Categories []string `json:"categories,omitempty"`
RefreshInterval int `json:"refresh_interval,omitempty"`
SkipPreCache bool `json:"skip_pre_cache,omitempty"`
MaxDownloads int `json:"max_downloads,omitempty"`
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
Port string `json:"port,omitempty"` // deprecated
DownloadFolder string `json:"download_folder,omitempty"`
Categories []string `json:"categories,omitempty"`
RefreshInterval int `json:"refresh_interval,omitempty"`
SkipPreCache bool `json:"skip_pre_cache,omitempty"`
MaxDownloads int `json:"max_downloads,omitempty"`
AlwaysRmTrackerUrls bool `json:"always_rm_tracker_urls,omitempty"`
}
type Arr struct {
@@ -106,6 +107,7 @@ type Rclone struct {
VfsReadChunkSizeLimit string `json:"vfs_read_chunk_size_limit,omitempty"` // Max chunk size (default off)
VfsReadAhead string `json:"vfs_read_ahead,omitempty"` // read ahead size
BufferSize string `json:"buffer_size,omitempty"` // Buffer size for reading files (default 16M)
BwLimit string `json:"bw_limit,omitempty"` // Bandwidth limit (default off)
VfsCacheMinFreeSpace string `json:"vfs_cache_min_free_space,omitempty"`
VfsFastFingerprint bool `json:"vfs_fast_fingerprint,omitempty"`
@@ -152,6 +154,7 @@ type Config struct {
DiscordWebhook string `json:"discord_webhook_url,omitempty"`
RemoveStalledAfter string `json:"remove_stalled_after,omitzero"`
CallbackURL string `json:"callback_url,omitempty"`
EnableWebdavAuth bool `json:"enable_webdav_auth,omitempty"`
}
func (c *Config) JsonFile() string {
@@ -337,12 +340,12 @@ func (c *Config) SaveAuth(auth *Auth) error {
return os.WriteFile(c.AuthFile(), data, 0644)
}
func (c *Config) NeedsSetup() error {
func (c *Config) CheckSetup() error {
return ValidateConfig(c)
}
func (c *Config) NeedsAuth() bool {
return !c.UseAuth && c.GetAuth().Username == ""
return c.UseAuth && (c.Auth == nil || c.Auth.Username == "" || c.Auth.Password == "")
}
func (c *Config) updateDebrid(d Debrid) Debrid {


@@ -7,10 +7,6 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/logger"
"go.uber.org/ratelimit"
"golang.org/x/net/proxy"
"io"
"math/rand"
"net"
@@ -20,6 +16,11 @@ import (
"strings"
"sync"
"time"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/logger"
"go.uber.org/ratelimit"
"golang.org/x/net/proxy"
)
func JoinURL(base string, paths ...string) (string, error) {
@@ -422,3 +423,42 @@ func SetProxy(transport *http.Transport, proxyURL string) {
}
return
}
func ValidateURL(urlStr string) error {
if urlStr == "" {
return fmt.Errorf("URL cannot be empty")
}
// Try parsing as full URL first
u, err := url.Parse(urlStr)
if err == nil && u.Scheme != "" && u.Host != "" {
// It's a full URL, validate scheme
if u.Scheme != "http" && u.Scheme != "https" {
return fmt.Errorf("URL scheme must be http or https")
}
return nil
}
// Check if it's a host:port format (no scheme)
if strings.Contains(urlStr, ":") && !strings.Contains(urlStr, "://") {
// Try parsing with http:// prefix
testURL := "http://" + urlStr
u, err := url.Parse(testURL)
if err != nil {
return fmt.Errorf("invalid host:port format: %w", err)
}
if u.Host == "" {
return fmt.Errorf("host is required in host:port format")
}
// Validate port number
if u.Port() == "" {
return fmt.Errorf("port is required in host:port format")
}
return nil
}
return fmt.Errorf("invalid URL format: %s", urlStr)
}


@@ -0,0 +1,45 @@
package testutil
import (
"os"
"path/filepath"
"strings"
)
// GetTestDataPath returns the path to the testdata directory in the project root
func GetTestDataPath() string {
return filepath.Join("..", "..", "testdata")
}
// GetTestDataFilePath returns the path to a specific file in the testdata directory
func GetTestDataFilePath(filename string) string {
return filepath.Join(GetTestDataPath(), filename)
}
// GetTestTorrentPath returns the path to the Ubuntu test torrent file
func GetTestTorrentPath() string {
return GetTestDataFilePath("ubuntu-25.04-desktop-amd64.iso.torrent")
}
// GetTestMagnetPath returns the path to the Ubuntu test magnet file
func GetTestMagnetPath() string {
return GetTestDataFilePath("ubuntu-25.04-desktop-amd64.iso.magnet")
}
// GetTestDataBytes reads and returns the raw bytes of a test data file
func GetTestDataBytes(filename string) ([]byte, error) {
filePath := GetTestDataFilePath(filename)
return os.ReadFile(filePath)
}
// GetTestDataContent reads and returns the content of a test data file
func GetTestDataContent(filename string) (string, error) {
content, err := GetTestDataBytes(filename)
return strings.TrimSpace(string(content)), err
}
// GetTestMagnetContent reads and returns the content of the Ubuntu test magnet file
func GetTestMagnetContent() (string, error) {
return GetTestDataContent("ubuntu-25.04-desktop-amd64.iso.magnet")
}


@@ -84,3 +84,54 @@ func readSmallChunks(file *os.File, startPos int64, totalToRead int, chunkSize i
}
return nil
}
func EnsureDir(dirPath string) error {
if dirPath == "" {
return fmt.Errorf("directory path is empty")
}
_, err := os.Stat(dirPath)
if os.IsNotExist(err) {
// Directory does not exist, create it
if err := os.MkdirAll(dirPath, 0755); err != nil {
return fmt.Errorf("failed to create directory: %v", err)
}
return nil
}
return err
}
func FormatSize(bytes int64) string {
const (
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB
)
var size float64
var unit string
switch {
case bytes >= TB:
size = float64(bytes) / TB
unit = "TB"
case bytes >= GB:
size = float64(bytes) / GB
unit = "GB"
case bytes >= MB:
size = float64(bytes) / MB
unit = "MB"
case bytes >= KB:
size = float64(bytes) / KB
unit = "KB"
default:
size = float64(bytes)
unit = "bytes"
}
// Format to 2 decimal places for larger units, no decimals for bytes
if unit == "bytes" {
return fmt.Sprintf("%.0f %s", size, unit)
}
return fmt.Sprintf("%.2f %s", size, unit)
}


@@ -7,17 +7,17 @@ import (
"encoding/base32"
"encoding/hex"
"fmt"
"github.com/anacrolix/torrent/metainfo"
"github.com/sirrobot01/decypharr/internal/request"
"io"
"log"
"net/http"
"net/url"
"os"
"path/filepath"
"regexp"
"strings"
"time"
"github.com/anacrolix/torrent/metainfo"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
)
var (
@@ -36,7 +36,18 @@ func (m *Magnet) IsTorrent() bool {
return m.File != nil
}
func GetMagnetFromFile(file io.Reader, filePath string) (*Magnet, error) {
// stripTrackersFromMagnet removes trackers from a magnet and returns a modified copy
func stripTrackersFromMagnet(mi metainfo.Magnet, fileType string) metainfo.Magnet {
originalTrackerCount := len(mi.Trackers)
if len(mi.Trackers) > 0 {
log := logger.Default()
mi.Trackers = nil
log.Printf("Removed %d tracker URLs from %s", originalTrackerCount, fileType)
}
return mi
}
func GetMagnetFromFile(file io.Reader, filePath string, rmTrackerUrls bool) (*Magnet, error) {
var (
m *Magnet
err error
@@ -46,14 +57,14 @@ func GetMagnetFromFile(file io.Reader, filePath string) (*Magnet, error) {
if err != nil {
return nil, err
}
m, err = GetMagnetFromBytes(torrentData)
m, err = GetMagnetFromBytes(torrentData, rmTrackerUrls)
if err != nil {
return nil, err
}
} else {
// .magnet file
magnetLink := ReadMagnetFile(file)
m, err = GetMagnetInfo(magnetLink)
m, err = GetMagnetInfo(magnetLink, rmTrackerUrls)
if err != nil {
return nil, err
}
@@ -62,52 +73,42 @@ func GetMagnetFromFile(file io.Reader, filePath string) (*Magnet, error) {
return m, nil
}
func GetMagnetFromUrl(url string) (*Magnet, error) {
func GetMagnetFromUrl(url string, rmTrackerUrls bool) (*Magnet, error) {
if strings.HasPrefix(url, "magnet:") {
return GetMagnetInfo(url)
return GetMagnetInfo(url, rmTrackerUrls)
} else if strings.HasPrefix(url, "http") {
return OpenMagnetHttpURL(url)
return OpenMagnetHttpURL(url, rmTrackerUrls)
}
return nil, fmt.Errorf("invalid url")
}
func GetMagnetFromBytes(torrentData []byte) (*Magnet, error) {
func GetMagnetFromBytes(torrentData []byte, rmTrackerUrls bool) (*Magnet, error) {
// Create a scanner to read the file line by line
mi, err := metainfo.Load(bytes.NewReader(torrentData))
if err != nil {
return nil, err
}
hash := mi.HashInfoBytes()
infoHash := hash.HexString()
info, err := mi.UnmarshalInfo()
if err != nil {
return nil, err
}
magnetMeta := mi.Magnet(&hash, &info)
if rmTrackerUrls {
magnetMeta = stripTrackersFromMagnet(magnetMeta, "torrent file")
}
magnet := &Magnet{
InfoHash: infoHash,
Name: info.Name,
Size: info.Length,
Link: mi.Magnet(&hash, &info).String(),
Link: magnetMeta.String(),
File: torrentData,
}
return magnet, nil
}
func OpenMagnetFile(filePath string) string {
file, err := os.Open(filePath)
if err != nil {
log.Println("Error opening file:", err)
return ""
}
defer func(file *os.File) {
err := file.Close()
if err != nil {
return
}
}(file) // Ensure the file is closed after the function ends
return ReadMagnetFile(file)
}
func ReadMagnetFile(file io.Reader) string {
scanner := bufio.NewScanner(file)
for scanner.Scan() {
@@ -119,12 +120,13 @@ func ReadMagnetFile(file io.Reader) string {
// Check for any errors during scanning
if err := scanner.Err(); err != nil {
log := logger.Default()
log.Println("Error reading file:", err)
}
return ""
}
func OpenMagnetHttpURL(magnetLink string) (*Magnet, error) {
func OpenMagnetHttpURL(magnetLink string, rmTrackerUrls bool) (*Magnet, error) {
resp, err := http.Get(magnetLink)
if err != nil {
return nil, fmt.Errorf("error making GET request: %v", err)
@@ -139,34 +141,35 @@ func OpenMagnetHttpURL(magnetLink string) (*Magnet, error) {
if err != nil {
return nil, fmt.Errorf("error reading response body: %v", err)
}
return GetMagnetFromBytes(torrentData)
return GetMagnetFromBytes(torrentData, rmTrackerUrls)
}
func GetMagnetInfo(magnetLink string) (*Magnet, error) {
func GetMagnetInfo(magnetLink string, rmTrackerUrls bool) (*Magnet, error) {
if magnetLink == "" {
return nil, fmt.Errorf("error getting magnet from file")
}
magnetURI, err := url.Parse(magnetLink)
mi, err := metainfo.ParseMagnetUri(magnetLink)
if err != nil {
return nil, fmt.Errorf("error parsing magnet link")
return nil, fmt.Errorf("error parsing magnet link: %w", err)
}
query := magnetURI.Query()
xt := query.Get("xt")
dn := query.Get("dn")
// Extract BTIH
parts := strings.Split(xt, ":")
btih := ""
if len(parts) > 2 {
btih = parts[2]
// Strip all announce URLs if requested
if rmTrackerUrls {
mi = stripTrackersFromMagnet(mi, "magnet link")
}
btih := mi.InfoHash.HexString()
dn := mi.DisplayName
// Reconstruct the magnet link using the (possibly modified) spec
finalLink := mi.String()
magnet := &Magnet{
InfoHash: btih,
Name: dn,
Size: 0,
Link: magnetLink,
Link: finalLink,
}
return magnet, nil
}


@@ -0,0 +1,198 @@
package utils
import (
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"strings"
"testing"
"github.com/sirrobot01/decypharr/internal/testutil"
)
// checkMagnet is a helper function that verifies magnet properties
func checkMagnet(t *testing.T, magnet *Magnet, expectedInfoHash, expectedName, expectedLink string, expectedTrackerCount int, shouldBeTorrent bool) {
t.Helper() // This marks the function as a test helper
// Verify basic properties
if magnet.Name != expectedName {
t.Errorf("Expected name '%s', got '%s'", expectedName, magnet.Name)
}
if magnet.InfoHash != expectedInfoHash {
t.Errorf("Expected InfoHash '%s', got '%s'", expectedInfoHash, magnet.InfoHash)
}
if magnet.Link != expectedLink {
t.Errorf("Expected Link '%s', got '%s'", expectedLink, magnet.Link)
}
// Verify the magnet link contains the essential info hash
if !strings.Contains(magnet.Link, "xt=urn:btih:"+expectedInfoHash) {
t.Error("Magnet link should contain info hash")
}
// Verify tracker count
trCount := strings.Count(magnet.Link, "tr=")
if trCount != expectedTrackerCount {
t.Errorf("Expected %d tracker URLs, got %d", expectedTrackerCount, trCount)
}
}
// testMagnetFromFile is a helper function for tests that use GetMagnetFromFile with file operations
func testMagnetFromFile(t *testing.T, filePath string, rmTrackerUrls bool, expectedInfoHash, expectedName, expectedLink string, expectedTrackerCount int) {
t.Helper()
file, err := os.Open(filePath)
if err != nil {
t.Fatalf("Failed to open torrent file %s: %v", filePath, err)
}
defer file.Close()
magnet, err := GetMagnetFromFile(file, filepath.Base(filePath), rmTrackerUrls)
if err != nil {
t.Fatalf("GetMagnetFromFile failed: %v", err)
}
checkMagnet(t, magnet, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount, true)
// Log the result
if rmTrackerUrls {
t.Logf("Generated clean magnet link: %s", magnet.Link)
} else {
t.Logf("Generated magnet link with trackers: %s", magnet.Link)
}
}
func TestGetMagnetFromFile_RealTorrentFile_StripTrue(t *testing.T) {
	expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
	expectedName := "ubuntu-25.04-desktop-amd64.iso"
	expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso"
	expectedTrackerCount := 0 // Should be 0 when stripping trackers
	torrentPath := testutil.GetTestTorrentPath()
	testMagnetFromFile(t, torrentPath, true, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}

func TestGetMagnetFromFile_RealTorrentFile_StripFalse(t *testing.T) {
	expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
	expectedName := "ubuntu-25.04-desktop-amd64.iso"
	expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso&tr=https%3A%2F%2Ftorrent.ubuntu.com%2Fannounce&tr=https%3A%2F%2Fipv6.torrent.ubuntu.com%2Fannounce"
	expectedTrackerCount := 2 // Should be 2 when preserving trackers
	torrentPath := testutil.GetTestTorrentPath()
	testMagnetFromFile(t, torrentPath, false, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}

func TestGetMagnetFromFile_MagnetFile_StripTrue(t *testing.T) {
	expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
	expectedName := "ubuntu-25.04-desktop-amd64.iso"
	expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso"
	expectedTrackerCount := 0 // Should be 0 when stripping trackers
	torrentPath := testutil.GetTestMagnetPath()
	testMagnetFromFile(t, torrentPath, true, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}

func TestGetMagnetFromFile_MagnetFile_StripFalse(t *testing.T) {
	expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
	expectedName := "ubuntu-25.04-desktop-amd64.iso"
	expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso&tr=https%3A%2F%2Fipv6.torrent.ubuntu.com%2Fannounce&tr=https%3A%2F%2Ftorrent.ubuntu.com%2Fannounce"
	expectedTrackerCount := 2
	torrentPath := testutil.GetTestMagnetPath()
	testMagnetFromFile(t, torrentPath, false, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}

func TestGetMagnetFromUrl_MagnetLink_StripTrue(t *testing.T) {
	expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
	expectedName := "ubuntu-25.04-desktop-amd64.iso"
	expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso"
	expectedTrackerCount := 0
	// Load the magnet URL from the test file
	magnetUrl, err := testutil.GetTestMagnetContent()
	if err != nil {
		t.Fatalf("Failed to load magnet URL from test file: %v", err)
	}
	magnet, err := GetMagnetFromUrl(magnetUrl, true)
	if err != nil {
		t.Fatalf("GetMagnetFromUrl failed: %v", err)
	}
	checkMagnet(t, magnet, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount, false)
	t.Logf("Generated clean magnet link: %s", magnet.Link)
}

func TestGetMagnetFromUrl_MagnetLink_StripFalse(t *testing.T) {
	expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
	expectedName := "ubuntu-25.04-desktop-amd64.iso"
	expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso&tr=https%3A%2F%2Fipv6.torrent.ubuntu.com%2Fannounce&tr=https%3A%2F%2Ftorrent.ubuntu.com%2Fannounce"
	expectedTrackerCount := 2
	// Load the magnet URL from the test file
	magnetUrl, err := testutil.GetTestMagnetContent()
	if err != nil {
		t.Fatalf("Failed to load magnet URL from test file: %v", err)
	}
	magnet, err := GetMagnetFromUrl(magnetUrl, false)
	if err != nil {
		t.Fatalf("GetMagnetFromUrl failed: %v", err)
	}
	checkMagnet(t, magnet, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount, false)
	t.Logf("Generated magnet link with trackers: %s", magnet.Link)
}

// testMagnetFromHttpTorrent is a helper function for tests that use GetMagnetFromUrl with HTTP torrent links
func testMagnetFromHttpTorrent(t *testing.T, torrentPath string, rmTrackerUrls bool, expectedInfoHash, expectedName, expectedLink string, expectedTrackerCount int) {
	t.Helper()
	// Read the torrent file content
	torrentData, err := testutil.GetTestDataBytes(torrentPath)
	if err != nil {
		t.Fatalf("Failed to read torrent file: %v", err)
	}
	// Create a test HTTP server that serves the torrent file
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/x-bittorrent")
		w.Write(torrentData)
	}))
	defer server.Close()
	// Test the function with the mock server URL
	magnet, err := GetMagnetFromUrl(server.URL, rmTrackerUrls)
	if err != nil {
		t.Fatalf("GetMagnetFromUrl failed: %v", err)
	}
	checkMagnet(t, magnet, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount, true)
	// Log the result
	if rmTrackerUrls {
		t.Logf("Generated clean magnet link from HTTP torrent: %s", magnet.Link)
	} else {
		t.Logf("Generated magnet link with trackers from HTTP torrent: %s", magnet.Link)
	}
}

func TestGetMagnetFromUrl_TorrentLink_StripTrue(t *testing.T) {
	expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
	expectedName := "ubuntu-25.04-desktop-amd64.iso"
	expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso"
	expectedTrackerCount := 0
	testMagnetFromHttpTorrent(t, "ubuntu-25.04-desktop-amd64.iso.torrent", true, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}

func TestGetMagnetFromUrl_TorrentLink_StripFalse(t *testing.T) {
	expectedInfoHash := "8a19577fb5f690970ca43a57ff1011ae202244b8"
	expectedName := "ubuntu-25.04-desktop-amd64.iso"
	expectedLink := "magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso&tr=https%3A%2F%2Ftorrent.ubuntu.com%2Fannounce&tr=https%3A%2F%2Fipv6.torrent.ubuntu.com%2Fannounce"
	expectedTrackerCount := 2
	testMagnetFromHttpTorrent(t, "ubuntu-25.04-desktop-amd64.iso.torrent", false, expectedInfoHash, expectedName, expectedLink, expectedTrackerCount)
}


@@ -2,6 +2,7 @@ package arr
import (
"bytes"
"cmp"
"context"
"crypto/tls"
"encoding/json"
@@ -33,12 +34,14 @@ const (
Radarr Type = "radarr"
Lidarr Type = "lidarr"
Readarr Type = "readarr"
Others Type = "others"
)
type Arr struct {
-	Name string `json:"name"`
-	Host string `json:"host"`
-	Token string `json:"token"`
+	Name  string `json:"name"`
+	Host  string `json:"host"`
+	Token string `json:"token"`
Type Type `json:"type"`
Cleanup bool `json:"cleanup"`
SkipRepair bool `json:"skip_repair"`
@@ -110,7 +113,11 @@ func (a *Arr) Request(method, endpoint string, payload interface{}) (*http.Respo
func (a *Arr) Validate() error {
if a.Token == "" || a.Host == "" {
-		return nil
+		return fmt.Errorf("arr not configured")
}
if request.ValidateURL(a.Host) != nil {
return fmt.Errorf("invalid arr host URL")
}
resp, err := a.Request("GET", "/api/v3/health", nil)
if err != nil {
@@ -147,7 +154,7 @@ func InferType(host, name string) Type {
case strings.Contains(host, "readarr") || strings.Contains(name, "readarr"):
return Readarr
default:
-		return ""
+		return Others
}
}
@@ -158,7 +165,11 @@ func NewStorage() *Storage {
continue // Skip if host or token is not set
}
name := a.Name
-		arrs[name] = New(name, a.Host, a.Token, a.Cleanup, a.SkipRepair, a.DownloadUncached, a.SelectedDebrid, a.Source)
+		as := New(name, a.Host, a.Token, a.Cleanup, a.SkipRepair, a.DownloadUncached, a.SelectedDebrid, a.Source)
+		if request.ValidateURL(as.Host) != nil {
+			continue
+		}
+		arrs[a.Name] = as
}
return &Storage{
Arrs: arrs,
@@ -172,6 +183,11 @@ func (s *Storage) AddOrUpdate(arr *Arr) {
if arr.Host == "" || arr.Token == "" || arr.Name == "" {
return
}
// Check the host URL
if request.ValidateURL(arr.Host) != nil {
return
}
s.Arrs[arr.Name] = arr
}
@@ -191,6 +207,87 @@ func (s *Storage) GetAll() []*Arr {
return arrs
}
func (s *Storage) SyncToConfig() []config.Arr {
s.mu.Lock()
defer s.mu.Unlock()
cfg := config.Get()
arrConfigs := make(map[string]config.Arr)
for _, a := range cfg.Arrs {
if a.Host == "" || a.Token == "" {
continue // Skip empty arrs
}
arrConfigs[a.Name] = a
}
for name, arr := range s.Arrs {
exists, ok := arrConfigs[name]
if ok {
// Update existing arr config
// Check if the host URL is valid
if request.ValidateURL(arr.Host) == nil {
exists.Host = arr.Host
}
exists.Token = cmp.Or(exists.Token, arr.Token)
exists.Cleanup = arr.Cleanup
exists.SkipRepair = arr.SkipRepair
exists.DownloadUncached = arr.DownloadUncached
exists.SelectedDebrid = arr.SelectedDebrid
arrConfigs[name] = exists
} else {
// Add new arr config
arrConfigs[name] = config.Arr{
Name: arr.Name,
Host: arr.Host,
Token: arr.Token,
Cleanup: arr.Cleanup,
SkipRepair: arr.SkipRepair,
DownloadUncached: arr.DownloadUncached,
SelectedDebrid: arr.SelectedDebrid,
Source: arr.Source,
}
}
}
// Convert map to slice
arrs := make([]config.Arr, 0, len(arrConfigs))
for _, a := range arrConfigs {
arrs = append(arrs, a)
}
return arrs
}
func (s *Storage) SyncFromConfig(arrs []config.Arr) {
s.mu.Lock()
defer s.mu.Unlock()
arrConfigs := make(map[string]*Arr)
for _, a := range arrs {
arrConfigs[a.Name] = New(a.Name, a.Host, a.Token, a.Cleanup, a.SkipRepair, a.DownloadUncached, a.SelectedDebrid, a.Source)
}
// Add or update arrs from config
for name, arr := range s.Arrs {
if ac, ok := arrConfigs[name]; ok {
// Update existing arr
// is the host URL valid?
if request.ValidateURL(ac.Host) == nil {
ac.Host = arr.Host
}
ac.Token = cmp.Or(ac.Token, arr.Token)
ac.Cleanup = arr.Cleanup
ac.SkipRepair = arr.SkipRepair
ac.DownloadUncached = arr.DownloadUncached
ac.SelectedDebrid = arr.SelectedDebrid
ac.Source = arr.Source
arrConfigs[name] = ac
} else {
arrConfigs[name] = arr
}
}
// Replace the arrs map
s.Arrs = arrConfigs
}
func (s *Storage) StartWorker(ctx context.Context) error {
ticker := time.NewTicker(10 * time.Second)


@@ -27,4 +27,5 @@ type Client interface {
GetProfile() (*types.Profile, error)
GetAvailableSlots() (int, error)
SyncAccounts() error // Updates each accounts details(like traffic, username, etc.)
DeleteDownloadLink(account *account.Account, downloadLink types.DownloadLink) error
}


@@ -312,13 +312,9 @@ func Process(ctx context.Context, store *Storage, selectedDebrid string, magnet
// Override first, arr second, debrid third
if overrideDownloadUncached {
debridTorrent.DownloadUncached = true
-	} else if a.DownloadUncached != nil {
+	if !overrideDownloadUncached && a.DownloadUncached != nil {
// Arr cached is set
debridTorrent.DownloadUncached = *a.DownloadUncached
-	} else {
-		debridTorrent.DownloadUncached = false
+		overrideDownloadUncached = *a.DownloadUncached
}
for _, db := range clients {
@@ -331,8 +327,9 @@ func Process(ctx context.Context, store *Storage, selectedDebrid string, magnet
Str("Action", action).
Msg("Processing torrent")
if !overrideDownloadUncached && a.DownloadUncached == nil {
-		debridTorrent.DownloadUncached = db.GetDownloadUncached()
+		// If debrid.DownloadUnached is true, it overrides everything
+		if db.GetDownloadUncached() || overrideDownloadUncached {
+			debridTorrent.DownloadUncached = true
}
dbt, err := db.SubmitMagnet(debridTorrent)


@@ -497,3 +497,8 @@ func (ad *AllDebrid) AccountManager() *account.Manager {
func (ad *AllDebrid) SyncAccounts() error {
return nil
}
func (ad *AllDebrid) DeleteDownloadLink(account *account.Account, downloadLink types.DownloadLink) error {
account.DeleteDownloadLink(downloadLink.Link)
return nil
}


@@ -518,3 +518,8 @@ func (dl *DebridLink) AccountManager() *account.Manager {
func (dl *DebridLink) SyncAccounts() error {
return nil
}
func (dl *DebridLink) DeleteDownloadLink(account *account.Account, downloadLink types.DownloadLink) error {
account.DeleteDownloadLink(downloadLink.Link)
return nil
}


@@ -347,18 +347,11 @@ func (r *RealDebrid) addTorrent(t *types.Torrent) (*types.Torrent, error) {
if resp.StatusCode == 509 {
return nil, utils.TooManyActiveDownloadsError
}
bodyBytes, _ := io.ReadAll(resp.Body)
return nil, fmt.Errorf("realdebrid API error: Status: %d || Body: %s", resp.StatusCode, string(bodyBytes))
}
-	defer func(Body io.ReadCloser) {
-		_ = Body.Close()
-	}(resp.Body)
-	bodyBytes, err := io.ReadAll(resp.Body)
-	if err != nil {
-		return nil, fmt.Errorf("reading response body: %w", err)
-	}
-	if err = json.Unmarshal(bodyBytes, &data); err != nil {
+	if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, err
}
t.Id = data.Id
@@ -379,6 +372,7 @@ func (r *RealDebrid) addMagnet(t *types.Torrent) (*types.Torrent, error) {
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusCreated {
// Handle multiple_downloads
@@ -386,15 +380,10 @@ func (r *RealDebrid) addMagnet(t *types.Torrent) (*types.Torrent, error) {
return nil, utils.TooManyActiveDownloadsError
}
-		bodyBytes, _ := io.ReadAll(resp.Body)
+		bodyBytes, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
return nil, fmt.Errorf("realdebrid API error: Status: %d || Body: %s", resp.StatusCode, string(bodyBytes))
}
-	defer resp.Body.Close()
-	bodyBytes, err := io.ReadAll(resp.Body)
-	if err != nil {
-		return nil, fmt.Errorf("reading response body: %w", err)
-	}
-	if err = json.Unmarshal(bodyBytes, &data); err != nil {
+	if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, err
}
t.Id = data.Id
@@ -412,19 +401,15 @@ func (r *RealDebrid) GetTorrent(torrentId string) (*types.Torrent, error) {
return nil, err
}
defer resp.Body.Close()
-	bodyBytes, err := io.ReadAll(resp.Body)
-	if err != nil {
-		return nil, fmt.Errorf("reading response body: %w", err)
-	}
	if resp.StatusCode != http.StatusOK {
+		bodyBytes, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
		if resp.StatusCode == http.StatusNotFound {
			return nil, utils.TorrentNotFoundError
		}
-		return nil, fmt.Errorf("realdebrid API error: Status: %d || Body: %s", resp.StatusCode, string(bodyBytes))
+		return nil, fmt.Errorf("realdebrid API error: Status: %d || Body %s", resp.StatusCode, string(bodyBytes))
	}
	var data torrentInfo
-	err = json.Unmarshal(bodyBytes, &data)
-	if err != nil {
+	if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, err
}
t := &types.Torrent{
@@ -455,19 +440,15 @@ func (r *RealDebrid) UpdateTorrent(t *types.Torrent) error {
return err
}
defer resp.Body.Close()
-	bodyBytes, err := io.ReadAll(resp.Body)
-	if err != nil {
-		return fmt.Errorf("reading response body: %w", err)
-	}
	if resp.StatusCode != http.StatusOK {
		if resp.StatusCode == http.StatusNotFound {
			return utils.TorrentNotFoundError
		}
+		bodyBytes, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
		return fmt.Errorf("realdebrid API error: Status: %d || Body: %s", resp.StatusCode, string(bodyBytes))
	}
	var data torrentInfo
-	err = json.Unmarshal(bodyBytes, &data)
-	if err != nil {
+	if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return err
}
t.Name = data.Filename
@@ -657,13 +638,9 @@ func (r *RealDebrid) getDownloadLink(account *account.Account, file *types.File)
}(resp.Body)
if resp.StatusCode != http.StatusOK {
// Read the response body to get the error message
-		b, err := io.ReadAll(resp.Body)
-		if err != nil {
-			return emptyLink, err
-		}
		var data ErrorResponse
-		if err = json.Unmarshal(b, &data); err != nil {
-			return emptyLink, fmt.Errorf("error unmarshalling %d || %s \n %s", resp.StatusCode, err, string(b))
+		if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
+			return emptyLink, fmt.Errorf("error unmarshalling %d || %s", resp.StatusCode, err)
}
switch data.ErrorCode {
case 19, 24, 35:
@@ -674,12 +651,8 @@ func (r *RealDebrid) getDownloadLink(account *account.Account, file *types.File)
return emptyLink, fmt.Errorf("realdebrid API error: Status: %d || Code: %d", resp.StatusCode, data.ErrorCode)
}
}
-	b, err := io.ReadAll(resp.Body)
-	if err != nil {
-		return emptyLink, err
-	}
	var data UnrestrictResponse
-	if err = json.Unmarshal(b, &data); err != nil {
+	if err = json.NewDecoder(resp.Body).Decode(&data); err != nil {
return emptyLink, fmt.Errorf("realdebrid API error: Error unmarshalling response: %w", err)
}
if data.Download == "" {
@@ -758,14 +731,10 @@ func (r *RealDebrid) getTorrents(offset int, limit int) (int, []*types.Torrent,
}
defer resp.Body.Close()
-	body, err := io.ReadAll(resp.Body)
-	if err != nil {
-		return 0, torrents, err
-	}
	totalItems, _ := strconv.Atoi(resp.Header.Get("X-Total-Count"))
	var data []TorrentsResponse
-	if err = json.Unmarshal(body, &data); err != nil {
-		return 0, torrents, err
+	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
+		return 0, nil, fmt.Errorf("failed to decode response: %w", err)
}
filenames := map[string]struct{}{}
for _, t := range data {
@@ -1017,3 +986,20 @@ func (r *RealDebrid) syncAccount(account *account.Account) error {
//r.accountsManager.Update(account)
return nil
}
func (r *RealDebrid) DeleteDownloadLink(account *account.Account, downloadLink types.DownloadLink) error {
url := fmt.Sprintf("%s/downloads/delete/%s", r.Host, downloadLink.Id)
req, _ := http.NewRequest(http.MethodDelete, url, nil)
resp, err := account.Client().Do(req)
if err != nil {
return err
}
defer func(Body io.ReadCloser) {
_ = Body.Close()
}(resp.Body)
if resp.StatusCode != http.StatusNoContent {
return fmt.Errorf("realdebrid API error: %d", resp.StatusCode)
}
account.DeleteDownloadLink(downloadLink.Link)
return nil
}


@@ -171,13 +171,24 @@ func (tb *Torbox) SubmitMagnet(torrent *types.Torrent) (*types.Torrent, error) {
}
func (tb *Torbox) getTorboxStatus(status string, finished bool) string {
-	if finished {
+	// Log raw values for debugging
+	tb.logger.Debug().
+		Str("download_state", status).
+		Bool("download_finished", finished).
+		Msg("getTorboxStatus called")
+	// For cached/completed torrents, content is immediately available even if
+	// DownloadFinished=false (no download actually happened - it was already cached)
+	// Use case-insensitive comparison for safety
+	statusLower := strings.ToLower(status)
+	if finished || statusLower == "cached" || statusLower == "completed" {
		return "downloaded"
	}
-	downloading := []string{"completed", "cached", "paused", "downloading", "uploading",
+	downloading := []string{"paused", "downloading", "uploading",
		"checkingResumeData", "metaDL", "pausedUP", "queuedUP", "checkingUP",
		"forcedUP", "allocating", "downloading", "metaDL", "pausedDL",
-		"queuedDL", "checkingDL", "forcedDL", "checkingResumeData", "moving"}
+		"queuedDL", "checkingDL", "forcedDL", "checkingResumeData", "moving",
+		"checking"}
var determinedStatus string
switch {
@@ -654,3 +665,8 @@ func (tb *Torbox) AccountManager() *account.Manager {
func (tb *Torbox) SyncAccounts() error {
return nil
}
func (tb *Torbox) DeleteDownloadLink(account *account.Account, downloadLink types.DownloadLink) error {
account.DeleteDownloadLink(downloadLink.Link)
return nil
}


@@ -23,6 +23,7 @@ import (
"github.com/sirrobot01/decypharr/pkg/rclone"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
"golang.org/x/sync/singleflight"
"encoding/json"
_ "time/tzdata"
@@ -88,6 +89,7 @@ type Cache struct {
invalidDownloadLinks *xsync.Map[string, string]
repairRequest *xsync.Map[string, *reInsertRequest]
failedToReinsert *xsync.Map[string, struct{}]
failedLinksCounter *xsync.Map[string, atomic.Int32] // link -> counter
// repair
repairChan chan RepairRequest
@@ -112,7 +114,8 @@ type Cache struct {
config config.Debrid
customFolders []string
mounter *rclone.Mount
-	httpClient *http.Client
+	downloadSG singleflight.Group
+	streamClient *http.Client
}
func NewDebridCache(dc config.Debrid, client common.Client, mounter *rclone.Mount) *Cache {
@@ -160,10 +163,13 @@ func NewDebridCache(dc config.Debrid, client common.Client, mounter *rclone.Moun
_log := logger.New(fmt.Sprintf("%s-webdav", client.Name()))
transport := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
-	TLSHandshakeTimeout: 10 * time.Second,
-	ResponseHeaderTimeout: 30 * time.Second,
-	MaxIdleConns: 10,
-	MaxIdleConnsPerHost: 2,
+	TLSHandshakeTimeout:   30 * time.Second,
+	ResponseHeaderTimeout: 60 * time.Second,
+	MaxIdleConns:          100,
+	MaxIdleConnsPerHost:   20,
+	IdleConnTimeout:       90 * time.Second,
+	DisableKeepAlives:     false,
+	ForceAttemptHTTP2:     false,
}
httpClient := &http.Client{
Transport: transport,
@@ -189,10 +195,11 @@ func NewDebridCache(dc config.Debrid, client common.Client, mounter *rclone.Moun
mounter: mounter,
ready: make(chan struct{}),
-		httpClient: httpClient,
		invalidDownloadLinks: xsync.NewMap[string, string](),
		repairRequest: xsync.NewMap[string, *reInsertRequest](),
		failedToReinsert: xsync.NewMap[string, struct{}](),
+		failedLinksCounter: xsync.NewMap[string, atomic.Int32](),
+		streamClient: httpClient,
repairChan: make(chan RepairRequest, 100), // Initialize the repair channel, max 100 requests buffered
}
@@ -224,14 +231,12 @@ func (c *Cache) Reset() {
}
}
-	if err := c.scheduler.StopJobs(); err != nil {
-		c.logger.Error().Err(err).Msg("Failed to stop scheduler jobs")
-	}
-	if err := c.scheduler.Shutdown(); err != nil {
-		c.logger.Error().Err(err).Msg("Failed to stop scheduler")
-	}
+	go func() {
+		// Shutdown the scheduler (this will stop all jobs)
+		if err := c.scheduler.Shutdown(); err != nil {
+			c.logger.Error().Err(err).Msg("Failed to stop scheduler")
+		}
+	}()
// Stop the listing debouncer
c.listingDebouncer.Stop()
@@ -924,7 +929,3 @@ func (c *Cache) Logger() zerolog.Logger {
func (c *Cache) GetConfig() config.Debrid {
return c.config
}
-func (c *Cache) Download(req *http.Request) (*http.Response, error) {
-	return c.httpClient.Do(req)
-}


@@ -3,50 +3,50 @@ package store
import (
"errors"
"fmt"
"sync/atomic"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
type downloadLinkRequest struct {
result string
err error
done chan struct{}
}
func newDownloadLinkRequest() *downloadLinkRequest {
return &downloadLinkRequest{
done: make(chan struct{}),
}
}
func (r *downloadLinkRequest) Complete(result string, err error) {
r.result = result
r.err = err
close(r.done)
}
func (r *downloadLinkRequest) Wait() (string, error) {
<-r.done
return r.result, r.err
}
const (
MaxLinkFailures = 10
)
func (c *Cache) GetDownloadLink(torrentName, filename, fileLink string) (types.DownloadLink, error) {
	// Check link cache
	if dl, err := c.checkDownloadLink(fileLink); err == nil && !dl.Empty() {
		return dl, nil
	}
+	// Check
+	counter, ok := c.failedLinksCounter.Load(fileLink)
+	if ok && counter.Load() >= MaxLinkFailures {
+		return types.DownloadLink{}, fmt.Errorf("file link %s has failed %d times, not retrying", fileLink, counter.Load())
+	}
-	dl, err := c.fetchDownloadLink(torrentName, filename, fileLink)
+	// Use singleflight to deduplicate concurrent requests
+	v, err, _ := c.downloadSG.Do(fileLink, func() (interface{}, error) {
+		// Double-check cache inside singleflight (another goroutine might have filled it)
+		if dl, err := c.checkDownloadLink(fileLink); err == nil && !dl.Empty() {
+			return dl, nil
+		}
+		// Fetch the download link
+		dl, err := c.fetchDownloadLink(torrentName, filename, fileLink)
+		if err != nil {
+			c.downloadSG.Forget(fileLink)
+			return types.DownloadLink{}, err
+		}
+		if dl.Empty() {
+			c.downloadSG.Forget(fileLink)
+			err = fmt.Errorf("download link is empty for %s in torrent %s", filename, torrentName)
+			return types.DownloadLink{}, err
+		}
+		return dl, nil
+	})
	if err != nil {
		return types.DownloadLink{}, err
	}
-	if dl.Empty() {
-		err = fmt.Errorf("download link is empty for %s in torrent %s", filename, torrentName)
-		return types.DownloadLink{}, err
-	}
-	return dl, err
+	return v.(types.DownloadLink), nil
}
func (c *Cache) fetchDownloadLink(torrentName, filename, fileLink string) (types.DownloadLink, error) {
@@ -146,7 +146,17 @@ func (c *Cache) checkDownloadLink(link string) (types.DownloadLink, error) {
return types.DownloadLink{}, fmt.Errorf("download link not found for %s", link)
}
-func (c *Cache) MarkDownloadLinkAsInvalid(downloadLink types.DownloadLink, reason string) {
+func (c *Cache) IncrementFailedLinkCounter(link string) int32 {
+	counter, _ := c.failedLinksCounter.LoadOrCompute(link, func() (atomic.Int32, bool) {
+		return atomic.Int32{}, true
+	})
+	return counter.Add(1)
+}
+
+func (c *Cache) MarkLinkAsInvalid(downloadLink types.DownloadLink, reason string) {
+	// Increment file link error counter
+	c.IncrementFailedLinkCounter(downloadLink.Link)
c.invalidDownloadLinks.Store(downloadLink.DownloadLink, reason)
// Remove the download api key from active
if reason == "bandwidth_exceeded" {
@@ -162,12 +172,28 @@ func (c *Cache) MarkDownloadLinkAsInvalid(downloadLink types.DownloadLink, reaso
return
}
accountManager.Disable(account)
} else if reason == "link_not_found" {
// Let's try to delete the download link from the account, so we can fetch a new one next time
accountManager := c.client.AccountManager()
account, err := accountManager.GetAccount(downloadLink.Token)
if err != nil {
c.logger.Error().Err(err).Str("token", utils.Mask(downloadLink.Token)).Msg("Failed to get account to delete download link")
return
}
if account == nil {
c.logger.Error().Str("token", utils.Mask(downloadLink.Token)).Msg("Account not found to delete download link")
return
}
if err := c.client.DeleteDownloadLink(account, downloadLink); err != nil {
c.logger.Error().Err(err).Str("token", utils.Mask(downloadLink.Token)).Msg("Failed to delete download link from account")
return
}
}
}
func (c *Cache) downloadLinkIsInvalid(downloadLink string) bool {
-	if reason, ok := c.invalidDownloadLinks.Load(downloadLink); ok {
-		c.logger.Debug().Msgf("Download link %s is invalid: %s", downloadLink, reason)
+	if _, ok := c.invalidDownloadLinks.Load(downloadLink); ok {
return true
}
return false


@@ -1,8 +1,9 @@
package store
import (
-	"github.com/sirrobot01/decypharr/pkg/debrid/types"
	"sort"
+
+	"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
// MergeFiles merges the files from multiple torrents into a single map.

pkg/debrid/store/stream.go Normal file (236 lines)

@@ -0,0 +1,236 @@
package store
import (
"context"
"errors"
"fmt"
"io"
"math/rand"
"net"
"net/http"
"strings"
"time"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
const (
MaxNetworkRetries = 5
MaxLinkRetries = 10
)
type StreamError struct {
Err error
Retryable bool
LinkError bool // true if we should try a new link
}
func (e StreamError) Error() string {
return e.Err.Error()
}
// isConnectionError checks if the error is related to connection issues
func (c *Cache) isConnectionError(err error) bool {
if err == nil {
return false
}
errStr := err.Error()
// Check for common connection errors
if strings.Contains(errStr, "EOF") ||
strings.Contains(errStr, "connection reset by peer") ||
strings.Contains(errStr, "broken pipe") ||
strings.Contains(errStr, "connection refused") {
return true
}
// Check for net.Error types
var netErr net.Error
return errors.As(err, &netErr)
}
func (c *Cache) Stream(ctx context.Context, start, end int64, linkFunc func() (types.DownloadLink, error)) (*http.Response, error) {
var lastErr error
downloadLink, err := linkFunc()
if err != nil {
return nil, fmt.Errorf("failed to get download link: %w", err)
}
// Outer loop: Link retries
for retry := 0; retry < MaxLinkRetries; retry++ {
select {
case <-ctx.Done():
return nil, ctx.Err()
default:
}
resp, err := c.doRequest(ctx, downloadLink.DownloadLink, start, end)
if err != nil {
// Network/connection error
lastErr = err
c.logger.Trace().
Int("retries", retry).
Err(err).
Msg("Network request failed, retrying")
// Backoff and continue network retry
if retry < MaxLinkRetries {
backoff := time.Duration(retry+1) * time.Second
jitter := time.Duration(rand.Intn(1000)) * time.Millisecond
select {
case <-time.After(backoff + jitter):
case <-ctx.Done():
return nil, ctx.Err()
}
continue
} else {
return nil, fmt.Errorf("network request failed after retries: %w", lastErr)
}
}
// Got response - check status
if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusPartialContent {
return resp, nil
}
// Bad status code - handle error
streamErr := c.handleHTTPError(resp, downloadLink)
resp.Body.Close()
if !streamErr.Retryable {
return nil, streamErr // Fatal error
}
if streamErr.LinkError {
lastErr = streamErr
// Try new link
downloadLink, err = linkFunc()
if err != nil {
return nil, fmt.Errorf("failed to get download link: %w", err)
}
continue
}
// Retryable HTTP error (429, 503, 404 etc.) - retry network
lastErr = streamErr
c.logger.Trace().
Err(lastErr).
Str("downloadLink", downloadLink.DownloadLink).
Str("link", downloadLink.Link).
Int("retries", retry).
Int("statusCode", resp.StatusCode).
Msg("HTTP error, retrying")
if retry < MaxNetworkRetries-1 {
backoff := time.Duration(retry+1) * time.Second
jitter := time.Duration(rand.Intn(1000)) * time.Millisecond
select {
case <-time.After(backoff + jitter):
case <-ctx.Done():
return nil, ctx.Err()
}
}
}
return nil, fmt.Errorf("stream failed after %d link retries: %w", MaxLinkRetries, lastErr)
}
func (c *Cache) StreamReader(ctx context.Context, start, end int64, linkFunc func() (types.DownloadLink, error)) (io.ReadCloser, error) {
resp, err := c.Stream(ctx, start, end, linkFunc)
if err != nil {
return nil, err
}
// Validate we got the expected content
if resp.ContentLength == 0 {
resp.Body.Close()
return nil, fmt.Errorf("received empty response")
}
return resp.Body, nil
}
func (c *Cache) doRequest(ctx context.Context, url string, start, end int64) (*http.Response, error) {
var lastErr error
// Retry loop specifically for connection-level failures (EOF, reset, etc.)
for connRetry := 0; connRetry < 3; connRetry++ {
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return nil, StreamError{Err: err, Retryable: false}
}
// Set range header
if start > 0 || end > 0 {
rangeHeader := fmt.Sprintf("bytes=%d-", start)
if end > 0 {
rangeHeader = fmt.Sprintf("bytes=%d-%d", start, end)
}
req.Header.Set("Range", rangeHeader)
}
// Set optimized headers for streaming
req.Header.Set("Connection", "keep-alive")
req.Header.Set("Accept-Encoding", "identity") // Disable compression for streaming
req.Header.Set("Cache-Control", "no-cache")
resp, err := c.streamClient.Do(req)
if err != nil {
lastErr = err
// Check if it's a connection error that we should retry
if c.isConnectionError(err) && connRetry < 2 {
// Brief backoff before retrying with fresh connection
time.Sleep(time.Duration(connRetry+1) * 100 * time.Millisecond)
continue
}
return nil, StreamError{Err: err, Retryable: true}
}
return resp, nil
}
return nil, StreamError{Err: fmt.Errorf("connection retry exhausted: %w", lastErr), Retryable: true}
}
func (c *Cache) handleHTTPError(resp *http.Response, downloadLink types.DownloadLink) StreamError {
switch resp.StatusCode {
case http.StatusNotFound:
c.MarkLinkAsInvalid(downloadLink, "link_not_found")
return StreamError{
Err: errors.New("download link not found"),
Retryable: true,
LinkError: true,
}
case http.StatusServiceUnavailable:
body, _ := io.ReadAll(resp.Body)
bodyStr := strings.ToLower(string(body))
if strings.Contains(bodyStr, "bandwidth") || strings.Contains(bodyStr, "traffic") {
c.MarkLinkAsInvalid(downloadLink, "bandwidth_exceeded")
return StreamError{
Err: errors.New("bandwidth limit exceeded"),
Retryable: true,
LinkError: true,
}
}
fallthrough
case http.StatusTooManyRequests:
return StreamError{
Err: fmt.Errorf("HTTP %d: rate limited", resp.StatusCode),
Retryable: true,
LinkError: false,
}
default:
retryable := resp.StatusCode >= 500
body, _ := io.ReadAll(resp.Body)
return StreamError{
Err: fmt.Errorf("HTTP %d: %s", resp.StatusCode, string(body)),
Retryable: retryable,
LinkError: false,
}
}
}


@@ -2,6 +2,7 @@ package store
import (
"context"
"github.com/go-co-op/gocron/v2"
"github.com/sirrobot01/decypharr/internal/utils"
)


@@ -2,7 +2,6 @@ package types
import (
"fmt"
"net/url"
"os"
"path/filepath"
"sync"
@@ -182,20 +181,10 @@ type DownloadLink struct {
ExpiresAt time.Time
}
-func isValidURL(str string) bool {
-	u, err := url.Parse(str)
-	// A valid URL should parse without error, and have a non-empty scheme and host.
-	return err == nil && u.Scheme != "" && u.Host != ""
-}
func (dl *DownloadLink) Valid() error {
if dl.Empty() {
return EmptyDownloadLinkError
}
-	// Check if the link is actually a valid URL
-	if !isValidURL(dl.DownloadLink) {
-		return ErrDownloadLinkNotFound
-	}
return nil
}


@@ -6,14 +6,12 @@ import (
"encoding/base64"
"fmt"
"net/http"
"net/url"
"strings"
"github.com/go-chi/chi/v5"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/wire"
"golang.org/x/crypto/bcrypt"
)
type contextKey string
@@ -24,45 +22,6 @@ const (
arrKey contextKey = "arr"
)
-func validateServiceURL(urlStr string) error {
-	if urlStr == "" {
-		return fmt.Errorf("URL cannot be empty")
-	}
-	// Try parsing as full URL first
-	u, err := url.Parse(urlStr)
-	if err == nil && u.Scheme != "" && u.Host != "" {
-		// It's a full URL, validate scheme
-		if u.Scheme != "http" && u.Scheme != "https" {
-			return fmt.Errorf("URL scheme must be http or https")
-		}
-		return nil
-	}
-	// Check if it's a host:port format (no scheme)
-	if strings.Contains(urlStr, ":") && !strings.Contains(urlStr, "://") {
-		// Try parsing with http:// prefix
-		testURL := "http://" + urlStr
-		u, err := url.Parse(testURL)
-		if err != nil {
-			return fmt.Errorf("invalid host:port format: %w", err)
-		}
-		if u.Host == "" {
-			return fmt.Errorf("host is required in host:port format")
-		}
-		// Validate port number
-		if u.Port() == "" {
-			return fmt.Errorf("port is required in host:port format")
-		}
-		return nil
-	}
-	return fmt.Errorf("invalid URL format: %s", urlStr)
-}
func getCategory(ctx context.Context) string {
if category, ok := ctx.Value(categoryKey).(string); ok {
return category
@@ -187,21 +146,23 @@ func (q *QBit) authenticate(category, username, password string) (*arr.Arr, erro
}
a.Host = username
a.Token = password
if cfg.UseAuth {
if a.Host == "" || a.Token == "" {
return nil, fmt.Errorf("unauthorized: Host and token are required for authentication(you've enabled authentication)")
}
// try to use either Arr validate, or user auth validation
if err := a.Validate(); err != nil {
// If this failed, try to use user auth validation
if !verifyAuth(username, password) {
return nil, fmt.Errorf("unauthorized: invalid credentials")
}
}
arrValidated := false // This is a flag to indicate if arr validation was successful
if (a.Host == "" || a.Token == "") && cfg.UseAuth {
return nil, fmt.Errorf("unauthorized: Host and token are required for authentication (you've enabled authentication)")
}
if err := a.Validate(); err == nil {
arrValidated = true
}
if !arrValidated && cfg.UseAuth {
// If arr validation failed, try to use user auth validation
if !config.VerifyAuth(username, password) {
return nil, fmt.Errorf("unauthorized: invalid credentials")
}
}
a.Source = "auto"
arrs.AddOrUpdate(a)
return a, nil
}
@@ -264,19 +225,3 @@ func hashesContext(next http.Handler) http.Handler {
next.ServeHTTP(w, r.WithContext(ctx))
})
}
func verifyAuth(username, password string) bool {
// If you're storing hashed password, use bcrypt to compare
if username == "" {
return false
}
auth := config.Get().GetAuth()
if auth == nil {
return false
}
if username != auth.Username {
return false
}
err := bcrypt.CompareHashAndPassword([]byte(auth.Password), []byte(password))
return err == nil
}


@@ -102,6 +102,13 @@ func (q *QBit) handleTorrentsAdd(w http.ResponseWriter, r *http.Request) {
if strings.ToLower(r.FormValue("sequentialDownload")) == "true" {
action = "download"
}
rmTrackerUrls := strings.ToLower(r.FormValue("firstLastPiecePrio")) == "true"
// Check config setting - if always remove tracker URLs is enabled, force it to true
if q.AlwaysRmTrackerUrls {
rmTrackerUrls = true
}
debridName := r.FormValue("debrid")
category := r.FormValue("category")
_arr := getArrFromContext(ctx)
@@ -118,7 +125,7 @@ func (q *QBit) handleTorrentsAdd(w http.ResponseWriter, r *http.Request) {
urlList = append(urlList, strings.TrimSpace(u))
}
for _, url := range urlList {
if err := q.addMagnet(ctx, url, _arr, debridName, action); err != nil {
if err := q.addMagnet(ctx, url, _arr, debridName, action, rmTrackerUrls); err != nil {
q.logger.Debug().Msgf("Error adding magnet: %s", err.Error())
http.Error(w, err.Error(), http.StatusBadRequest)
return
@@ -131,7 +138,7 @@ func (q *QBit) handleTorrentsAdd(w http.ResponseWriter, r *http.Request) {
if r.MultipartForm != nil && r.MultipartForm.File != nil {
if files := r.MultipartForm.File["torrents"]; len(files) > 0 {
for _, fileHeader := range files {
if err := q.addTorrent(ctx, fileHeader, _arr, debridName, action); err != nil {
if err := q.addTorrent(ctx, fileHeader, _arr, debridName, action, rmTrackerUrls); err != nil {
q.logger.Debug().Err(err).Msgf("Error adding torrent")
http.Error(w, err.Error(), http.StatusBadRequest)
return


@@ -8,25 +8,27 @@ import (
)
type QBit struct {
Username string
Password string
DownloadFolder string
Categories []string
storage *wire.TorrentStorage
logger zerolog.Logger
Tags []string
Username string
Password string
DownloadFolder string
Categories []string
AlwaysRmTrackerUrls bool
storage *wire.TorrentStorage
logger zerolog.Logger
Tags []string
}
func New() *QBit {
_cfg := config.Get()
cfg := _cfg.QBitTorrent
return &QBit{
Username: cfg.Username,
Password: cfg.Password,
DownloadFolder: cfg.DownloadFolder,
Categories: cfg.Categories,
storage: wire.Get().Torrents(),
logger: logger.New("qbit"),
Username: cfg.Username,
Password: cfg.Password,
DownloadFolder: cfg.DownloadFolder,
Categories: cfg.Categories,
AlwaysRmTrackerUrls: cfg.AlwaysRmTrackerUrls,
storage: wire.Get().Torrents(),
logger: logger.New("qbit"),
}
}


@@ -3,24 +3,25 @@ package qbit
import (
"context"
"fmt"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/wire"
"io"
"mime/multipart"
"strings"
"time"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/wire"
)
// All torrent-related helpers go here
func (q *QBit) addMagnet(ctx context.Context, url string, arr *arr.Arr, debrid string, action string) error {
magnet, err := utils.GetMagnetFromUrl(url)
func (q *QBit) addMagnet(ctx context.Context, url string, arr *arr.Arr, debrid string, action string, rmTrackerUrls bool) error {
magnet, err := utils.GetMagnetFromUrl(url, rmTrackerUrls)
if err != nil {
return fmt.Errorf("error parsing magnet link: %w", err)
}
_store := wire.Get()
importReq := wire.NewImportRequest(debrid, q.DownloadFolder, magnet, arr, action, false, "", wire.ImportTypeQBitTorrent)
importReq := wire.NewImportRequest(debrid, q.DownloadFolder, magnet, arr, action, false, "", wire.ImportTypeQBitTorrent, false)
err = _store.AddTorrent(ctx, importReq)
if err != nil {
@@ -29,16 +30,16 @@ func (q *QBit) addMagnet(ctx context.Context, url string, arr *arr.Arr, debrid s
return nil
}
func (q *QBit) addTorrent(ctx context.Context, fileHeader *multipart.FileHeader, arr *arr.Arr, debrid string, action string) error {
func (q *QBit) addTorrent(ctx context.Context, fileHeader *multipart.FileHeader, arr *arr.Arr, debrid string, action string, rmTrackerUrls bool) error {
file, _ := fileHeader.Open()
defer file.Close()
var reader io.Reader = file
magnet, err := utils.GetMagnetFromFile(reader, fileHeader.Filename)
magnet, err := utils.GetMagnetFromFile(reader, fileHeader.Filename, rmTrackerUrls)
if err != nil {
return fmt.Errorf("error reading file %s: %w", fileHeader.Filename, err)
}
_store := wire.Get()
importReq := wire.NewImportRequest(debrid, q.DownloadFolder, magnet, arr, action, false, "", wire.ImportTypeQBitTorrent)
importReq := wire.NewImportRequest(debrid, q.DownloadFolder, magnet, arr, action, false, "", wire.ImportTypeQBitTorrent, false)
err = _store.AddTorrent(ctx, importReq)
if err != nil {
return fmt.Errorf("failed to process torrent: %w", err)


@@ -191,7 +191,6 @@ func (f *HttpFile) ReadAt(p []byte, off int64) (n int, err error) {
bytesRead, err := io.ReadFull(resp.Body, p)
return bytesRead, err
case http.StatusOK:
// Some servers return the full content instead of partial
fullData, err := io.ReadAll(resp.Body)
if err != nil {
return 0, fmt.Errorf("%w: %v", ErrNetworkError, err)
@@ -684,18 +683,3 @@ func (r *Reader) ExtractFile(file *File) ([]byte, error) {
return r.readBytes(file.DataOffset, int(file.CompressedSize))
}
// Helper functions
func min(a, b int) int {
if a < b {
return a
}
return b
}
func max(a, b int) int {
if a > b {
return a
}
return b
}
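These hand-rolled helpers can be dropped because, as of Go 1.21, `min` and `max` are predeclared builtins for all ordered types (assuming that is the motivation for the removal):

```go
package main

import "fmt"

func main() {
	// Go 1.21+ provides min and max as builtins, so the int-only helper
	// functions removed above are redundant.
	fmt.Println(min(3, 7), max(3, 7)) // 3 7

	// Unlike the removed helpers, the builtins are generic over ordered types.
	fmt.Println(min(2.5, 1.5), max("a", "b")) // 1.5 b
}
```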


@@ -4,6 +4,7 @@ import (
"fmt"
"os"
"os/exec"
"runtime"
"strconv"
"time"
@@ -52,9 +53,14 @@ func (m *Manager) mountWithRetry(mountPath, provider, webdavURL string, maxRetri
func (m *Manager) performMount(mountPath, provider, webdavURL string) error {
cfg := config.Get()
// Create mount directory
if err := os.MkdirAll(mountPath, 0755); err != nil {
return fmt.Errorf("failed to create mount directory %s: %w", mountPath, err)
// Create mount directory (except on Windows, where WinFSP handles it)
if runtime.GOOS != "windows" {
if err := os.MkdirAll(mountPath, 0755); err != nil {
return fmt.Errorf("failed to create mount directory %s: %w", mountPath, err)
}
} else {
// On Windows, remove the mount point if it exists, to avoid mount issues
_ = os.Remove(mountPath) // Ignore error
}
// Check if already mounted
@@ -111,6 +117,10 @@ func (m *Manager) performMount(mountPath, provider, webdavURL string) error {
configOpts["BufferSize"] = cfg.Rclone.BufferSize
}
if cfg.Rclone.BwLimit != "" {
configOpts["BwLimit"] = cfg.Rclone.BwLimit
}
if len(configOpts) > 0 {
// Only add _config if there are options to set
mountArgs["_config"] = configOpts


@@ -5,16 +5,6 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/go-co-op/gocron/v2"
"github.com/google/uuid"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/debrid"
"golang.org/x/sync/errgroup"
"net"
"net/http"
"net/url"
@@ -25,6 +15,17 @@ import (
"strings"
"sync"
"time"
"github.com/go-co-op/gocron/v2"
"github.com/google/uuid"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
"github.com/sirrobot01/decypharr/pkg/debrid"
"golang.org/x/sync/errgroup"
)
type Repair struct {
@@ -105,10 +106,6 @@ func New(arrs *arr.Storage, engine *debrid.Storage) *Repair {
func (r *Repair) Reset() {
// Stop scheduler
if r.scheduler != nil {
if err := r.scheduler.StopJobs(); err != nil {
r.logger.Error().Err(err).Msg("Error stopping scheduler")
}
if err := r.scheduler.Shutdown(); err != nil {
r.logger.Error().Err(err).Msg("Error shutting down scheduler")
}


@@ -4,15 +4,16 @@ import (
"context"
"errors"
"fmt"
"io"
"net/http"
"os"
"path/filepath"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/logger"
"io"
"net/http"
"os"
"path/filepath"
)
type Server struct {
@@ -24,6 +25,8 @@ func New(handlers map[string]http.Handler) *Server {
l := logger.New("http")
r := chi.NewRouter()
r.Use(middleware.Recoverer)
r.Use(middleware.StripSlashes)
r.Use(middleware.RedirectSlashes)
cfg := config.Get()


@@ -2,13 +2,15 @@ package web
import (
"fmt"
"github.com/sirrobot01/decypharr/pkg/wire"
"golang.org/x/crypto/bcrypt"
"net/http"
"strings"
"time"
"github.com/sirrobot01/decypharr/pkg/wire"
"golang.org/x/crypto/bcrypt"
"encoding/json"
"github.com/go-chi/chi/v5"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/request"
@@ -18,8 +20,8 @@ import (
)
func (wb *Web) handleGetArrs(w http.ResponseWriter, r *http.Request) {
_store := wire.Get()
request.JSONResponse(w, _store.Arr().GetAll(), http.StatusOK)
arrStorage := wire.Get().Arr()
request.JSONResponse(w, arrStorage.GetAll(), http.StatusOK)
}
func (wb *Web) handleAddContent(w http.ResponseWriter, r *http.Request) {
@@ -41,8 +43,16 @@ func (wb *Web) handleAddContent(w http.ResponseWriter, r *http.Request) {
if downloadFolder == "" {
downloadFolder = config.Get().QBitTorrent.DownloadFolder
}
skipMultiSeason := r.FormValue("skipMultiSeason") == "true"
downloadUncached := r.FormValue("downloadUncached") == "true"
rmTrackerUrls := r.FormValue("rmTrackerUrls") == "true"
// Check config setting - if always remove tracker URLs is enabled, force it to true
cfg := config.Get()
if cfg.QBitTorrent.AlwaysRmTrackerUrls {
rmTrackerUrls = true
}
_arr := _store.Arr().Get(arrName)
if _arr == nil {
@@ -60,13 +70,13 @@ func (wb *Web) handleAddContent(w http.ResponseWriter, r *http.Request) {
}
for _, url := range urlList {
magnet, err := utils.GetMagnetFromUrl(url)
magnet, err := utils.GetMagnetFromUrl(url, rmTrackerUrls)
if err != nil {
errs = append(errs, fmt.Sprintf("Failed to parse URL %s: %v", url, err))
continue
}
importReq := wire.NewImportRequest(debridName, downloadFolder, magnet, _arr, action, downloadUncached, callbackUrl, wire.ImportTypeAPI)
importReq := wire.NewImportRequest(debridName, downloadFolder, magnet, _arr, action, downloadUncached, callbackUrl, wire.ImportTypeAPI, skipMultiSeason)
if err := _store.AddTorrent(ctx, importReq); err != nil {
wb.logger.Error().Err(err).Str("url", url).Msg("Failed to add torrent")
errs = append(errs, fmt.Sprintf("URL %s: %v", url, err))
@@ -85,13 +95,13 @@ func (wb *Web) handleAddContent(w http.ResponseWriter, r *http.Request) {
continue
}
magnet, err := utils.GetMagnetFromFile(file, fileHeader.Filename)
magnet, err := utils.GetMagnetFromFile(file, fileHeader.Filename, rmTrackerUrls)
if err != nil {
errs = append(errs, fmt.Sprintf("Failed to parse torrent file %s: %v", fileHeader.Filename, err))
continue
}
importReq := wire.NewImportRequest(debridName, downloadFolder, magnet, _arr, action, downloadUncached, callbackUrl, wire.ImportTypeAPI)
importReq := wire.NewImportRequest(debridName, downloadFolder, magnet, _arr, action, downloadUncached, callbackUrl, wire.ImportTypeAPI, skipMultiSeason)
err = _store.AddTorrent(ctx, importReq)
if err != nil {
wb.logger.Error().Err(err).Str("file", fileHeader.Filename).Msg("Failed to add torrent")
@@ -183,38 +193,9 @@ func (wb *Web) handleDeleteTorrents(w http.ResponseWriter, r *http.Request) {
}
func (wb *Web) handleGetConfig(w http.ResponseWriter, r *http.Request) {
// Merge config arrs, with arr Storage
unique := map[string]config.Arr{}
cfg := config.Get()
arrStorage := wire.Get().Arr()
// Add existing Arrs from storage
for _, a := range arrStorage.GetAll() {
if _, ok := unique[a.Name]; !ok {
// Only add if not already in the unique map
unique[a.Name] = config.Arr{
Name: a.Name,
Host: a.Host,
Token: a.Token,
Cleanup: a.Cleanup,
SkipRepair: a.SkipRepair,
DownloadUncached: a.DownloadUncached,
SelectedDebrid: a.SelectedDebrid,
Source: a.Source,
}
}
}
for _, a := range cfg.Arrs {
if a.Host == "" || a.Token == "" {
continue // Skip empty arrs
}
unique[a.Name] = a
}
cfg.Arrs = make([]config.Arr, 0, len(unique))
for _, a := range unique {
cfg.Arrs = append(cfg.Arrs, a)
}
cfg := config.Get()
cfg.Arrs = arrStorage.SyncToConfig()
// Create response with API token info
type ConfigResponse struct {
@@ -271,10 +252,7 @@ func (wb *Web) handleUpdateConfig(w http.ResponseWriter, r *http.Request) {
currentConfig.Rclone = updatedConfig.Rclone
// Update Debrids
if len(updatedConfig.Debrids) > 0 {
currentConfig.Debrids = updatedConfig.Debrids
// Clear legacy single debrid if using array
}
currentConfig.Debrids = updatedConfig.Debrids
// Update Arrs through the service
storage := wire.Get()
@@ -290,28 +268,8 @@ func (wb *Web) handleUpdateConfig(w http.ResponseWriter, r *http.Request) {
}
currentConfig.Arrs = newConfigArrs
// Add config arr into the config
for _, a := range currentConfig.Arrs {
if a.Host == "" || a.Token == "" {
continue // Skip empty arrs
}
existingArr := arrStorage.Get(a.Name)
if existingArr != nil {
// Update existing Arr
existingArr.Host = a.Host
existingArr.Token = a.Token
existingArr.Cleanup = a.Cleanup
existingArr.SkipRepair = a.SkipRepair
existingArr.DownloadUncached = a.DownloadUncached
existingArr.SelectedDebrid = a.SelectedDebrid
existingArr.Source = a.Source
arrStorage.AddOrUpdate(existingArr)
} else {
// Create new Arr if it doesn't exist
newArr := arr.New(a.Name, a.Host, a.Token, a.Cleanup, a.SkipRepair, a.DownloadUncached, a.SelectedDebrid, a.Source)
arrStorage.AddOrUpdate(newArr)
}
}
// Sync arrStorage with the new arrs
arrStorage.SyncFromConfig(currentConfig.Arrs)
if err := currentConfig.Save(); err != nil {
http.Error(w, "Error saving config: "+err.Error(), http.StatusInternalServerError)

File diff suppressed because one or more lines are too long




@@ -388,7 +388,7 @@ class DecypharrUtils {
if (versionBadge) {
versionBadge.innerHTML = `
<a href="https://github.com/sirrobot01/decypharr/releases/tag/${data.version}"
<a href="https://github.com/sirrobot01/decypharr/releases/tag/v${data.version}"
target="_blank"
class="text-current hover:text-primary transition-colors">
${data.channel}-${data.version}
@@ -718,4 +718,4 @@ window.createToast = (message, type, duration) => window.decypharrUtils.createTo
// Export for ES6 modules if needed
if (typeof module !== 'undefined' && module.exports) {
module.exports = DecypharrUtils;
}
}


@@ -114,7 +114,7 @@ class ConfigManager {
populateQBittorrentSettings(qbitConfig) {
if (!qbitConfig) return;
const fields = ['download_folder', 'refresh_interval', 'max_downloads', 'skip_pre_cache'];
const fields = ['download_folder', 'refresh_interval', 'max_downloads', 'skip_pre_cache', 'always_rm_tracker_urls'];
fields.forEach(field => {
const element = document.querySelector(`[name="qbit.${field}"]`);
@@ -150,7 +150,7 @@ class ConfigManager {
const fields = [
'enabled', 'rc_port', 'mount_path', 'cache_dir', 'transfers', 'vfs_cache_mode', 'vfs_cache_max_size', 'vfs_cache_max_age',
'vfs_cache_poll_interval', 'vfs_read_chunk_size', 'vfs_read_chunk_size_limit', 'buffer_size',
'vfs_cache_poll_interval', 'vfs_read_chunk_size', 'vfs_read_chunk_size_limit', 'buffer_size', 'bw_limit',
'uid', 'gid', 'vfs_read_ahead', 'attr_timeout', 'dir_cache_time', 'poll_interval', 'umask',
'no_modtime', 'no_checksum', 'log_level', 'vfs_cache_min_free_space', 'vfs_fast_fingerprint', 'vfs_read_chunk_streams',
'async_read', 'use_mmap'
@@ -1183,7 +1183,8 @@ class ConfigManager {
download_folder: document.querySelector('[name="qbit.download_folder"]').value,
refresh_interval: parseInt(document.querySelector('[name="qbit.refresh_interval"]').value) || 30,
max_downloads: parseInt(document.querySelector('[name="qbit.max_downloads"]').value) || 0,
skip_pre_cache: document.querySelector('[name="qbit.skip_pre_cache"]').checked
skip_pre_cache: document.querySelector('[name="qbit.skip_pre_cache"]').checked,
always_rm_tracker_urls: document.querySelector('[name="qbit.always_rm_tracker_urls"]').checked
};
}
@@ -1245,6 +1246,7 @@ class ConfigManager {
rc_port: getElementValue('rc_port', "5572"),
mount_path: getElementValue('mount_path'),
buffer_size: getElementValue('buffer_size'),
bw_limit: getElementValue('bw_limit'),
cache_dir: getElementValue('cache_dir'),
transfers: getElementValue('transfers', 8),
vfs_cache_mode: getElementValue('vfs_cache_mode', 'off'),


@@ -9,6 +9,7 @@ class DownloadManager {
arr: document.getElementById('arr'),
downloadAction: document.getElementById('downloadAction'),
downloadUncached: document.getElementById('downloadUncached'),
rmTrackerUrls: document.getElementById('rmTrackerUrls'),
downloadFolder: document.getElementById('downloadFolder'),
debrid: document.getElementById('debrid'),
submitBtn: document.getElementById('submitDownload'),
@@ -34,6 +35,7 @@ class DownloadManager {
this.refs.arr.addEventListener('change', () => this.saveOptions());
this.refs.downloadAction.addEventListener('change', () => this.saveOptions());
this.refs.downloadUncached.addEventListener('change', () => this.saveOptions());
this.refs.rmTrackerUrls.addEventListener('change', () => this.saveOptions());
this.refs.downloadFolder.addEventListener('change', () => this.saveOptions());
// File input enhancement
@@ -48,12 +50,14 @@ class DownloadManager {
category: localStorage.getItem('downloadCategory') || '',
action: localStorage.getItem('downloadAction') || 'symlink',
uncached: localStorage.getItem('downloadUncached') === 'true',
rmTrackerUrls: localStorage.getItem('rmTrackerUrls') === 'true',
folder: localStorage.getItem('downloadFolder') || this.downloadFolder
};
this.refs.arr.value = savedOptions.category;
this.refs.downloadAction.value = savedOptions.action;
this.refs.downloadUncached.checked = savedOptions.uncached;
this.refs.rmTrackerUrls.checked = savedOptions.rmTrackerUrls;
this.refs.downloadFolder.value = savedOptions.folder;
}
@@ -61,6 +65,12 @@ class DownloadManager {
localStorage.setItem('downloadCategory', this.refs.arr.value);
localStorage.setItem('downloadAction', this.refs.downloadAction.value);
localStorage.setItem('downloadUncached', this.refs.downloadUncached.checked.toString());
// Only save rmTrackerUrls if not disabled (i.e., not forced by config)
if (!this.refs.rmTrackerUrls.disabled) {
localStorage.setItem('rmTrackerUrls', this.refs.rmTrackerUrls.checked.toString());
}
localStorage.setItem('downloadFolder', this.refs.downloadFolder.value);
}
@@ -114,6 +124,7 @@ class DownloadManager {
formData.append('downloadFolder', this.refs.downloadFolder.value);
formData.append('action', this.refs.downloadAction.value);
formData.append('downloadUncached', this.refs.downloadUncached.checked);
formData.append('rmTrackerUrls', this.refs.rmTrackerUrls.checked);
if (this.refs.debrid) {
formData.append('debrid', this.refs.debrid.value);


@@ -12,16 +12,16 @@ import (
func (wb *Web) setupMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
cfg := config.Get()
needsAuth := cfg.NeedsSetup()
if needsAuth != nil && r.URL.Path != "/config" && r.URL.Path != "/api/config" {
http.Redirect(w, r, fmt.Sprintf("/config?inco=%s", needsAuth.Error()), http.StatusSeeOther)
needsSetup := cfg.CheckSetup()
if needsSetup != nil && r.URL.Path != "/settings" && r.URL.Path != "/api/config" {
http.Redirect(w, r, fmt.Sprintf("/settings?inco=%s", needsSetup.Error()), http.StatusSeeOther)
return
}
// strip inco from URL
if inco := r.URL.Query().Get("inco"); inco != "" && needsAuth == nil && r.URL.Path == "/config" {
if inco := r.URL.Query().Get("inco"); inco != "" && needsSetup == nil && r.URL.Path == "/settings" {
// redirect to the same URL without the inco parameter
http.Redirect(w, r, "/config", http.StatusSeeOther)
http.Redirect(w, r, "/settings", http.StatusSeeOther)
}
next.ServeHTTP(w, r)
})
@@ -79,8 +79,11 @@ func (wb *Web) isAPIRequest(r *http.Request) bool {
func (wb *Web) sendJSONError(w http.ResponseWriter, message string, statusCode int) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(statusCode)
json.NewEncoder(w).Encode(map[string]interface{}{
err := json.NewEncoder(w).Encode(map[string]interface{}{
"error": message,
"status": statusCode,
})
if err != nil {
return
}
}


@@ -1,53 +1,58 @@
package web
import (
"github.com/go-chi/chi/v5"
"io/fs"
"net/http"
"github.com/go-chi/chi/v5"
)
func (wb *Web) Routes() http.Handler {
r := chi.NewRouter()
// Load static files from embedded filesystem
staticFS, err := fs.Sub(assetsEmbed, "assets/build")
if err != nil {
panic(err)
}
imagesFS, err := fs.Sub(imagesEmbed, "assets/images")
if err != nil {
panic(err)
}
r.Handle("/assets/*", http.StripPrefix("/assets/", http.FileServer(http.FS(staticFS))))
r.Handle("/images/*", http.StripPrefix("/images/", http.FileServer(http.FS(imagesFS))))
// Static assets - always public
staticFS, _ := fs.Sub(assetsEmbed, "assets/build")
imagesFS, _ := fs.Sub(imagesEmbed, "assets/images")
r.Handle("/assets/*", http.StripPrefix(wb.urlBase+"assets/", http.FileServer(http.FS(staticFS))))
r.Handle("/images/*", http.StripPrefix(wb.urlBase+"images/", http.FileServer(http.FS(imagesFS))))
// Public routes - no auth needed
r.Get("/version", wb.handleGetVersion)
r.Get("/login", wb.LoginHandler)
r.Post("/login", wb.LoginHandler)
r.Get("/register", wb.RegisterHandler)
r.Post("/register", wb.RegisterHandler)
r.Get("/skip-auth", wb.skipAuthHandler)
r.Get("/version", wb.handleGetVersion)
r.Post("/skip-auth", wb.skipAuthHandler)
// Protected routes - require auth
r.Group(func(r chi.Router) {
r.Use(wb.authMiddleware)
r.Use(wb.setupMiddleware)
// Web pages
r.Get("/", wb.IndexHandler)
r.Get("/download", wb.DownloadHandler)
r.Get("/repair", wb.RepairHandler)
r.Get("/stats", wb.StatsHandler)
r.Get("/config", wb.ConfigHandler)
r.Get("/settings", wb.ConfigHandler)
// API routes
r.Route("/api", func(r chi.Router) {
// Arr management
r.Get("/arrs", wb.handleGetArrs)
r.Post("/add", wb.handleAddContent)
// Repair operations
r.Post("/repair", wb.handleRepairMedia)
r.Get("/repair/jobs", wb.handleGetRepairJobs)
r.Post("/repair/jobs/{id}/process", wb.handleProcessRepairJob)
r.Post("/repair/jobs/{id}/stop", wb.handleStopRepairJob)
r.Delete("/repair/jobs", wb.handleDeleteRepairJob)
// Torrent management
r.Get("/torrents", wb.handleGetTorrents)
r.Delete("/torrents/{category}/{hash}", wb.handleDeleteTorrent)
r.Delete("/torrents/", wb.handleDeleteTorrents)
r.Delete("/torrents", wb.handleDeleteTorrents) // Fixed trailing slash
// Config/Auth
r.Get("/config", wb.handleGetConfig)
r.Post("/config", wb.handleUpdateConfig)
r.Post("/refresh-token", wb.handleRefreshAPIToken)


@@ -1,5 +1,15 @@
{{ define "config" }}
<div class="space-y-6">
{{ if .NeedSetup }}
<div role="alert" class="alert alert-warning">
<i class="bi bi-exclamation-triangle text-xl"></i>
<div>
<h3 class="font-bold">Configuration Required</h3>
<div class="text-sm">Your configuration is incomplete. Please complete the setup below.</div>
</div>
</div>
{{ end }}
<form id="configForm" class="space-y-6">
<div class="card bg-base-100 shadow-xl">
<div class="card-body">
@@ -336,6 +346,16 @@
</div>
</label>
</div>
<div class="form-control">
<label class="label cursor-pointer justify-start gap-3">
<input type="checkbox" class="checkbox" name="qbit.always_rm_tracker_urls" id="qbit.always_rm_tracker_urls">
<div>
<span class="label-text font-medium">Always Remove Tracker URLs</span>
<div class="label-text-alt">Allows you to <a href="https://sirrobot01.github.io/decypharr/features/repair-worker/private-tracker-downloads" class="link link-hover font-semibold" target="_blank">download private tracker torrents</a> with lower risk</div>
</div>
</label>
</div>
</div>
</div>
</div>
@@ -461,7 +481,7 @@
<h3 class="text-lg font-semibold mb-4 flex items-center">
<i class="bi bi-folder mr-2"></i>Mount Configuration
</h3>
<div class="grid grid-cols-3 gap-4">
<div class="grid grid-cols-1 lg:grid-cols-4 gap-4">
<div class="form-control">
<label class="label" for="rclone.mount_path">
<span class="label-text font-medium">Global Mount Path</span>
@@ -523,11 +543,20 @@
<label class="label" for="rclone.buffer_size">
<span class="label-text font-medium">Buffer Size</span>
</label>
<input type="text" class="input input-bordered" name="rclone.buffer_size" id="rclone.buffer_size" placeholder="10M" min="0">
<input type="text" class="input input-bordered" name="rclone.buffer_size" id="rclone.buffer_size" placeholder="10M">
<div class="label">
<span class="label-text-alt">Buffer Size (this caches to memory; use with care)</span>
</div>
</div>
<div class="form-control">
<label class="label" for="rclone.bw_limit">
<span class="label-text font-medium">Bandwidth Limit</span>
</label>
<input type="text" class="input input-bordered" name="rclone.bw_limit" id="rclone.bw_limit" placeholder="100M">
<div class="label">
<span class="label-text-alt">Bandwidth limit (e.g. 100M or 1G; leave empty for unlimited)</span>
</div>
</div>
<div class="form-control">
<label class="label" for="rclone.attr_timeout">
<span class="label-text font-medium">Attribute Caching Timeout</span>


@@ -1,10 +1,22 @@
{{ define "download" }}
<div class="space-y-6">
{{ if .NeedSetup }}
<div role="alert" class="alert alert-warning">
<i class="bi bi-exclamation-triangle text-xl"></i>
<div>
<h3 class="font-bold">Configuration Required</h3>
<div class="text-sm">Your configuration is incomplete. Please complete the setup in the <a
href="{{.URLBase}}settings" class="link link-hover font-semibold">Settings page</a>.
</div>
</div>
</div>
{{ end }}
<div class="card bg-base-100 shadow-xl">
<div class="card-body">
<form id="downloadForm" enctype="multipart/form-data" class="space-y-3">
<div class="space-y-2">
<div class="form-control">
<form id="downloadForm" enctype="multipart/form-data" class="space-y-6">
<div class="flex gap-4">
<div class="form-control flex-1">
<label class="label" for="magnetURI">
<span class="label-text font-semibold">
<i class="bi bi-magnet mr-2 text-primary"></i>Torrent Links
@@ -17,9 +29,7 @@
placeholder="Paste your magnet links or torrent URLs here, one per line..."></textarea>
</div>
<div class="divider">OR</div>
<div class="form-control">
<div class="form-control flex-1">
<label class="label">
<span class="label-text font-semibold">
<i class="bi bi-file-earmark-arrow-up mr-2 text-secondary"></i>Upload Torrent Files
@@ -40,86 +50,93 @@
</div>
</div>
<div class="divider"></div>
<div class="divider">Download Settings</div>
<div class="grid grid-cols-1 lg:grid-cols-2 gap-3">
<div class="space-y-2">
<h3 class="text-lg font-semibold flex items-center">
<i class="bi bi-gear mr-2 text-info"></i>Download Settings
</h3>
<div class="form-control">
<label class="label" for="downloadAction">
<span class="label-text">Post Download Action</span>
</label>
<select class="select select-bordered" id="downloadAction" name="downloadAction">
<option value="symlink" selected>Create Symlink</option>
<option value="download">Download Files</option>
<option value="none">No Action</option>
</select>
<div class="label">
<span class="label-text-alt">How to handle files after download completion</span>
</div>
</div>
<div class="form-control">
<label class="label" for="downloadFolder">
<span class="label-text">Download Folder</span>
</label>
<input type="text"
class="input input-bordered"
id="downloadFolder"
name="downloadFolder"
placeholder="/downloads/torrents">
<div class="label">
<span class="label-text-alt">Leave empty to use default qBittorrent folder</span>
</div>
<div class="grid grid-cols-1 lg:grid-cols-3 gap-3 space-y-4">
<div class="form-control">
<label class="label" for="downloadAction">
<span class="label-text">Post Download Action</span>
</label>
<select class="select select-bordered" id="downloadAction" name="downloadAction">
<option value="symlink" selected>Create Symlink</option>
<option value="download">Download Files</option>
<option value="none">No Action</option>
</select>
<div class="label">
<span class="label-text-alt">How to handle files after download completion</span>
</div>
</div>
<div class="space-y-2">
<h3 class="text-lg font-semibold flex items-center">
<i class="bi bi-tags mr-2 text-warning"></i>Categorization
</h3>
<div class="form-control">
<label class="label" for="arr">
<span class="label-text">Arr Category</span>
</label>
<input type="text"
class="input input-bordered"
id="arr"
name="arr"
placeholder="sonarr, radarr, etc.">
<div class="label">
<span class="label-text-alt">Optional: Specify which Arr service should handle this</span>
</div>
<div class="form-control">
<label class="label" for="downloadFolder">
<span class="label-text">Download Folder</span>
</label>
<input type="text"
class="input input-bordered"
id="downloadFolder"
name="downloadFolder"
placeholder="/downloads/torrents">
<div class="label">
<span class="label-text-alt">Leave empty to use default qBittorrent folder</span>
</div>
{{ if .HasMultiDebrid }}
<div class="form-control">
<label class="label" for="debrid">
<span class="label-text">Debrid Service</span>
</label>
<select class="select select-bordered" id="debrid" name="debrid">
{{ range $index, $debrid := .Debrids }}
<option value="{{ $debrid }}" {{ if eq $index 0 }}selected{{end}}>
{{ $debrid }}
</option>
{{ end }}
</select>
<div class="label">
<span class="label-text-alt">Choose which debrid service to use</span>
</div>
</div>
{{ end }}
</div>
<div class="form-control">
<label class="label" for="arr">
<span class="label-text">Arr Category</span>
</label>
<input type="text"
class="input input-bordered"
id="arr"
name="arr"
placeholder="sonarr, radarr, etc.">
<div class="label">
<span class="label-text-alt">Optional: Specify which Arr service should handle this</span>
</div>
</div>
{{ if .HasMultiDebrid }}
<div class="form-control">
<label class="label" for="debrid">
<span class="label-text">Debrid Service</span>
</label>
<select class="select select-bordered" id="debrid" name="debrid">
{{ range $index, $debrid := .Debrids }}
<option value="{{ $debrid }}" {{ if eq $index 0 }}selected{{end}}>
{{ $debrid }}
</option>
{{ end }}
</select>
<div class="label">
<span class="label-text-alt">Choose which debrid service to use</span>
</div>
</div>
{{ end }}
<div class="form-control">
<label class="label cursor-pointer justify-start gap-3">
<input type="checkbox" class="checkbox" name="downloadUncached" id="downloadUncached">
<div>
<span class="label-text font-medium">Download Uncached Content</span>
<div class="label-text-alt">Allow downloading of content not cached by debrid service</div>
<div class="label-text-alt">Allow downloading of content not cached by debrid service
</div>
</div>
</label>
</div>
<div class="form-control">
<label class="label cursor-pointer justify-start gap-3">
<input type="checkbox" class="checkbox" name="skipMultiSeason" id="skipMultiSeason">
<div>
<span class="label-text font-medium">Skip Multi-Season Checker</span>
<div class="label-text-alt">Skip the multi-season episode checker for TV shows</div>
</div>
</label>
</div>
<div class="form-control">
<label class="label cursor-pointer justify-start gap-3">
<input type="checkbox" class="checkbox" name="rmTrackerUrls" id="rmTrackerUrls" {{ if .AlwaysRmTrackerUrls }}checked disabled{{ end }}>
<div>
<span class="label-text font-medium">Remove Tracker</span>
<div class="label-text-alt">Allows you to <a href="https://sirrobot01.github.io/decypharr/features/repair-worker/private-tracker-downloads" class="link link-hover font-semibold" target="_blank">download private tracker torrents</a> with lower risk</div>
</div>
</label>
</div>

View File

@@ -1,6 +1,16 @@
{{ define "index" }}
<div class="space-y-6">
{{ if .NeedSetup }}
<div role="alert" class="alert alert-warning">
<i class="bi bi-exclamation-triangle text-xl"></i>
<div>
<h3 class="font-bold">Configuration Required</h3>
<div class="text-sm">Your configuration is incomplete. Please complete the setup in the <a href="{{.URLBase}}settings" class="link link-hover font-semibold">Settings page</a>.</div>
</div>
</div>
{{ end }}
<div class="card bg-base-100 shadow-xl">
<div class="card-body">
<div class="flex flex-col lg:flex-row justify-between items-start lg:items-center gap-4">

View File

@@ -54,7 +54,7 @@
<li><a href="{{.URLBase}}repair" class="{{if eq .Page "repair"}}active{{end}}">
<i class="bi bi-wrench-adjustable text-accent"></i>Repair
</a></li>
<li><a href="{{.URLBase}}config" class="{{if eq .Page "config"}}active{{end}}">
<li><a href="{{.URLBase}}settings" class="{{if eq .Page "config"}}active{{end}}">
<i class="bi bi-gear text-info"></i>Settings
</a></li>
<li><a href="{{.URLBase}}webdav" target="_blank">
@@ -85,7 +85,7 @@
<i class="bi bi-wrench-adjustable"></i>
<span class="hidden xl:inline">Repair</span>
</a></li>
<li><a href="{{.URLBase}}config" class="{{if eq .Page "config"}}active{{end}} tooltip tooltip-bottom" data-tip="Settings">
<li><a href="{{.URLBase}}settings" class="{{if eq .Page "config"}}active{{end}} tooltip tooltip-bottom" data-tip="Settings">
<i class="bi bi-gear"></i>
<span class="hidden xl:inline">Settings</span>
</a></li>

View File

@@ -75,7 +75,7 @@
// Handle skip auth button
skipAuthBtn.addEventListener('click', function() {
window.decypharrUtils.fetcher('/skip-auth', { method: 'GET' })
window.decypharrUtils.fetcher('/skip-auth', { method: 'POST' })
.then(response => {
if (response.ok) {
window.location.href = window.decypharrUtils.joinURL(window.urlBase, '/');

View File

@@ -1,5 +1,15 @@
{{ define "repair" }}
<div class="space-y-6">
{{ if .NeedSetup }}
<div role="alert" class="alert alert-warning">
<i class="bi bi-exclamation-triangle text-xl"></i>
<div>
<h3 class="font-bold">Configuration Required</h3>
<div class="text-sm">Your configuration is incomplete. Please complete the setup in the <a href="{{.URLBase}}settings" class="link link-hover font-semibold">Settings page</a>.</div>
</div>
</div>
{{ end }}
<div class="card bg-base-100 shadow-xl">
<div class="card-body">
<h2 class="card-title text-2xl mb-6">

View File

@@ -114,9 +114,11 @@ func (wb *Web) RegisterHandler(w http.ResponseWriter, r *http.Request) {
func (wb *Web) IndexHandler(w http.ResponseWriter, r *http.Request) {
cfg := config.Get()
data := map[string]interface{}{
"URLBase": cfg.URLBase,
"Page": "index",
"Title": "Torrents",
"URLBase": cfg.URLBase,
"Page": "index",
"Title": "Torrents",
"NeedSetup": cfg.CheckSetup() != nil,
"SetupError": cfg.CheckSetup(),
}
_ = wb.templates.ExecuteTemplate(w, "layout", data)
}
@@ -128,12 +130,15 @@ func (wb *Web) DownloadHandler(w http.ResponseWriter, r *http.Request) {
debrids = append(debrids, d.Name)
}
data := map[string]interface{}{
"URLBase": cfg.URLBase,
"Page": "download",
"Title": "Download",
"Debrids": debrids,
"HasMultiDebrid": len(debrids) > 1,
"DownloadFolder": cfg.QBitTorrent.DownloadFolder,
"URLBase": cfg.URLBase,
"Page": "download",
"Title": "Download",
"Debrids": debrids,
"HasMultiDebrid": len(debrids) > 1,
"DownloadFolder": cfg.QBitTorrent.DownloadFolder,
"AlwaysRmTrackerUrls": cfg.QBitTorrent.AlwaysRmTrackerUrls,
"NeedSetup": cfg.CheckSetup() != nil,
"SetupError": cfg.CheckSetup(),
}
_ = wb.templates.ExecuteTemplate(w, "layout", data)
}
@@ -141,9 +146,11 @@ func (wb *Web) DownloadHandler(w http.ResponseWriter, r *http.Request) {
func (wb *Web) RepairHandler(w http.ResponseWriter, r *http.Request) {
cfg := config.Get()
data := map[string]interface{}{
"URLBase": cfg.URLBase,
"Page": "repair",
"Title": "Repair",
"URLBase": cfg.URLBase,
"Page": "repair",
"Title": "Repair",
"NeedSetup": cfg.CheckSetup() != nil,
"SetupError": cfg.CheckSetup(),
}
_ = wb.templates.ExecuteTemplate(w, "layout", data)
}
@@ -151,9 +158,11 @@ func (wb *Web) RepairHandler(w http.ResponseWriter, r *http.Request) {
func (wb *Web) ConfigHandler(w http.ResponseWriter, r *http.Request) {
cfg := config.Get()
data := map[string]interface{}{
"URLBase": cfg.URLBase,
"Page": "config",
"Title": "Config",
"URLBase": cfg.URLBase,
"Page": "config",
"Title": "Config",
"NeedSetup": cfg.CheckSetup() != nil,
"SetupError": cfg.CheckSetup(),
}
_ = wb.templates.ExecuteTemplate(w, "layout", data)
}

View File

@@ -61,6 +61,7 @@ type Web struct {
cookie *sessions.CookieStore
templates *template.Template
torrents *wire.TorrentStorage
urlBase string
}
func New() *Web {
@@ -87,5 +88,6 @@ func New() *Web {
templates: templates,
cookie: cookieStore,
torrents: wire.Get().Torrents(),
urlBase: cfg.URLBase,
}
}

View File

@@ -5,22 +5,12 @@ import (
"io"
"net/http"
"os"
"strings"
"time"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/debrid/store"
"github.com/sirrobot01/decypharr/pkg/debrid/types"
)
type retryAction int
const (
noRetry retryAction = iota
retryWithLimit
retryAlways
)
const (
MaxNetworkRetries = 3
MaxLinkRetries = 10
@@ -44,7 +34,6 @@ type File struct {
name string
torrentName string
link string
downloadLink types.DownloadLink
size int64
isDir bool
fileId string
@@ -70,17 +59,12 @@ func (f *File) Close() error {
// This is just to satisfy the os.File interface
f.content = nil
f.children = nil
f.downloadLink = types.DownloadLink{}
f.readOffset = 0
return nil
}
func (f *File) getDownloadLink() (types.DownloadLink, error) {
// Check if we already have a final URL cached
if f.downloadLink.Valid() == nil {
return f.downloadLink, nil
}
downloadLink, err := f.cache.GetDownloadLink(f.torrentName, f.name, f.link)
if err != nil {
return downloadLink, err
@@ -89,7 +73,6 @@ func (f *File) getDownloadLink() (types.DownloadLink, error) {
if err != nil {
return types.DownloadLink{}, err
}
f.downloadLink = downloadLink
return downloadLink, nil
}
@@ -137,163 +120,44 @@ func (f *File) StreamResponse(w http.ResponseWriter, r *http.Request) error {
if f.content != nil {
return f.servePreloadedContent(w, r)
}
_logger := f.cache.Logger()
return f.streamWithRetry(w, r, 0, 0)
start, end := f.getRange(r)
resp, err := f.cache.Stream(r.Context(), start, end, f.getDownloadLink)
if err != nil {
_logger.Error().Err(err).Str("file", f.name).Msg("Failed to stream with initial link")
return &streamError{Err: err, StatusCode: http.StatusRequestedRangeNotSatisfiable}
}
defer func(Body io.ReadCloser) {
_ = Body.Close()
}(resp.Body)
return f.handleSuccessfulResponse(w, resp, start, end)
}
func (f *File) streamWithRetry(w http.ResponseWriter, r *http.Request, networkRetries, recoverableRetries int) error {
_log := f.cache.Logger()
downloadLink, err := f.getDownloadLink()
if err != nil {
return &streamError{Err: err, StatusCode: http.StatusPreconditionFailed}
}
upstreamReq, err := http.NewRequest("GET", downloadLink.DownloadLink, nil)
if err != nil {
return &streamError{Err: err, StatusCode: http.StatusInternalServerError}
}
isRangeRequest := f.handleRangeRequest(upstreamReq, r, w)
if isRangeRequest == -1 {
return &streamError{Err: fmt.Errorf("invalid range"), StatusCode: http.StatusRequestedRangeNotSatisfiable}
}
resp, err := f.cache.Download(upstreamReq)
if err != nil {
// Network error - retry with limit
if networkRetries < MaxNetworkRetries {
_log.Debug().
Int("network_retries", networkRetries+1).
Err(err).
Msg("Network error, retrying")
return f.streamWithRetry(w, r, networkRetries+1, recoverableRetries)
}
return &streamError{Err: err, StatusCode: http.StatusServiceUnavailable}
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusPartialContent {
retryType, retryErr := f.handleUpstreamError(downloadLink, resp)
switch retryType {
case retryAlways:
if recoverableRetries >= MaxLinkRetries {
return &streamError{
Err: fmt.Errorf("max link retries exceeded (%d)", MaxLinkRetries),
StatusCode: http.StatusServiceUnavailable,
}
}
_log.Debug().
Int("recoverable_retries", recoverableRetries+1).
Str("file", f.name).
Msg("Recoverable error, retrying")
return f.streamWithRetry(w, r, 0, recoverableRetries+1) // Reset network retries
case retryWithLimit:
if networkRetries < MaxNetworkRetries {
_log.Debug().
Int("network_retries", networkRetries+1).
Str("file", f.name).
Msg("Network error, retrying")
return f.streamWithRetry(w, r, networkRetries+1, recoverableRetries)
}
fallthrough
case noRetry:
if retryErr != nil {
return retryErr
}
return &streamError{
Err: fmt.Errorf("non-retryable error: status %d", resp.StatusCode),
StatusCode: http.StatusBadGateway,
}
}
}
// Success - stream the response
func (f *File) handleSuccessfulResponse(w http.ResponseWriter, resp *http.Response, start, end int64) error {
statusCode := http.StatusOK
if isRangeRequest == 1 {
if start > 0 || end > 0 {
statusCode = http.StatusPartialContent
}
// Copy relevant headers
if contentLength := resp.Header.Get("Content-Length"); contentLength != "" {
w.Header().Set("Content-Length", contentLength)
}
if contentRange := resp.Header.Get("Content-Range"); contentRange != "" && isRangeRequest == 1 {
if contentRange := resp.Header.Get("Content-Range"); contentRange != "" && statusCode == http.StatusPartialContent {
w.Header().Set("Content-Range", contentRange)
}
// Copy other important headers
if contentType := resp.Header.Get("Content-Type"); contentType != "" {
w.Header().Set("Content-Type", contentType)
}
return f.streamBuffer(w, resp.Body, statusCode)
}
func (f *File) handleUpstreamError(downloadLink types.DownloadLink, resp *http.Response) (retryAction, error) {
_log := f.cache.Logger()
cleanupResp := func(resp *http.Response) {
if resp.Body != nil {
_, _ = io.Copy(io.Discard, resp.Body)
resp.Body.Close()
}
}
switch resp.StatusCode {
case http.StatusServiceUnavailable:
body, readErr := io.ReadAll(resp.Body)
cleanupResp(resp)
if readErr != nil {
_log.Error().Err(readErr).Msg("Failed to read response body")
return retryWithLimit, nil
}
bodyStr := string(body)
if strings.Contains(bodyStr, "you have exceeded your traffic") {
_log.Debug().
Str("token", utils.Mask(downloadLink.Token)).
Str("file", f.name).
Msg("Bandwidth exceeded for account, invalidating link")
f.cache.MarkDownloadLinkAsInvalid(f.downloadLink, "bandwidth_exceeded")
f.downloadLink = types.DownloadLink{}
return retryAlways, nil
}
return noRetry, &streamError{
Err: fmt.Errorf("service unavailable: %s", bodyStr),
StatusCode: http.StatusServiceUnavailable,
}
case http.StatusNotFound:
cleanupResp(resp)
_log.Debug().
Str("file", f.name).
Msg("Link not found, invalidating and regenerating")
f.cache.MarkDownloadLinkAsInvalid(f.downloadLink, "link_not_found")
f.downloadLink = types.DownloadLink{}
return retryAlways, nil
default:
body, _ := io.ReadAll(resp.Body)
cleanupResp(resp)
_log.Error().
Int("status_code", resp.StatusCode).
Str("file", f.name).
Str("response_body", string(body)).
Msg("Unexpected upstream error")
return retryWithLimit, &streamError{
Err: fmt.Errorf("upstream error %d: %s", resp.StatusCode, string(body)),
StatusCode: http.StatusBadGateway,
}
}
}
func (f *File) streamBuffer(w http.ResponseWriter, src io.Reader, statusCode int) error {
flusher, ok := w.(http.Flusher)
if !ok {
@@ -342,21 +206,21 @@ func (f *File) streamBuffer(w http.ResponseWriter, src io.Reader, statusCode int
}
}
func (f *File) handleRangeRequest(upstreamReq *http.Request, r *http.Request, w http.ResponseWriter) int {
func (f *File) getRange(r *http.Request) (int64, int64) {
rangeHeader := r.Header.Get("Range")
if rangeHeader == "" {
// For video files, apply byte range if exists
if byteRange, _ := f.getDownloadByteRange(); byteRange != nil {
upstreamReq.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", byteRange[0], byteRange[1]))
return byteRange[0], byteRange[1]
}
return 0 // No range request
return 0, 0
}
// Parse range request
ranges, err := parseRange(rangeHeader, f.size)
if err != nil || len(ranges) != 1 {
w.Header().Set("Content-Range", fmt.Sprintf("bytes */%d", f.size))
return -1 // Invalid range
// Invalid range, return full content
return 0, 0
}
// Apply byte range offset if exists
@@ -367,9 +231,7 @@ func (f *File) handleRangeRequest(upstreamReq *http.Request, r *http.Request, w
start += byteRange[0]
end += byteRange[0]
}
upstreamReq.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))
return 1 // Valid range request
return start, end
}
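The hunk above replaces the old three-way return code (`-1`/`0`/`1`) with explicit `(start, end)` offsets that the caller forwards upstream. A minimal, self-contained sketch of parsing a single `bytes=start-end` Range header against a known file size — this is a hypothetical helper for illustration only; the project's `parseRange` covers more of the Range grammar than this does:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSingleRange parses a simple "bytes=start-end" header against a known
// size, mirroring the (start, end) convention getRange now returns.
// An open-ended "bytes=N-" runs to the end of the file.
func parseSingleRange(header string, size int64) (int64, int64, error) {
	spec, ok := strings.CutPrefix(header, "bytes=")
	if !ok {
		return 0, 0, fmt.Errorf("unsupported range unit")
	}
	startStr, endStr, found := strings.Cut(spec, "-")
	if !found {
		return 0, 0, fmt.Errorf("malformed range")
	}
	start, err := strconv.ParseInt(startStr, 10, 64)
	if err != nil {
		return 0, 0, err
	}
	end := size - 1 // open-ended range: serve to EOF
	if endStr != "" {
		if end, err = strconv.ParseInt(endStr, 10, 64); err != nil {
			return 0, 0, err
		}
	}
	if start > end || end >= size {
		return 0, 0, fmt.Errorf("range out of bounds")
	}
	return start, end, nil
}

func main() {
	start, end, err := parseSingleRange("bytes=0-499", 1000)
	fmt.Println(start, end, err)
}
```

Note the new code treats an unparseable range as "serve full content" (`0, 0`) rather than failing with 416, which is a deliberate behavior change from the removed `-1` path.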
/*

View File

@@ -489,7 +489,6 @@ func (h *Handler) handleGet(w http.ResponseWriter, r *http.Request) {
}
return
}
return
}
func (h *Handler) handleHead(w http.ResponseWriter, r *http.Request) {

View File

@@ -128,14 +128,6 @@ func writeXml(w http.ResponseWriter, status int, buf stringbuf.StringBuf) {
_, _ = w.Write(buf.Bytes())
}
func hasHeadersWritten(w http.ResponseWriter) bool {
// Most ResponseWriter implementations support this
if hw, ok := w.(interface{ Written() bool }); ok {
return hw.Written()
}
return false
}
func isClientDisconnection(err error) bool {
if err == nil {
return false

View File

@@ -4,10 +4,6 @@ import (
"context"
"embed"
"fmt"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/pkg/wire"
"html/template"
"net/http"
"net/url"
@@ -16,6 +12,11 @@ import (
"strings"
"sync"
"time"
"github.com/go-chi/chi/v5"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/wire"
)
//go:embed templates/*
@@ -33,42 +34,8 @@ var (
}
return strings.Join(segments, "/")
},
"formatSize": func(bytes int64) string {
const (
KB = 1024
MB = 1024 * KB
GB = 1024 * MB
TB = 1024 * GB
)
var size float64
var unit string
switch {
case bytes >= TB:
size = float64(bytes) / TB
unit = "TB"
case bytes >= GB:
size = float64(bytes) / GB
unit = "GB"
case bytes >= MB:
size = float64(bytes) / MB
unit = "MB"
case bytes >= KB:
size = float64(bytes) / KB
unit = "KB"
default:
size = float64(bytes)
unit = "bytes"
}
// Format to 2 decimal places for larger units, no decimals for bytes
if unit == "bytes" {
return fmt.Sprintf("%.0f %s", size, unit)
}
return fmt.Sprintf("%.2f %s", size, unit)
},
"hasSuffix": strings.HasSuffix,
"formatSize": utils.FormatSize,
"hasSuffix": strings.HasSuffix,
}
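The hunk above deletes the inline `formatSize` template func in favor of a shared `utils.FormatSize`. For reference, the behavior being preserved is the removed function's tiered formatting — two decimals for KB and above, no decimals for raw bytes. A standalone sketch of that same logic (the real `utils.FormatSize` may round or label slightly differently):

```go
package main

import "fmt"

// formatSize renders a byte count with the same tiering as the removed
// template helper: largest fitting unit, %.2f for KB+, plain count for bytes.
func formatSize(bytes int64) string {
	const (
		KB = int64(1024)
		MB = 1024 * KB
		GB = 1024 * MB
		TB = 1024 * GB
	)
	switch {
	case bytes >= TB:
		return fmt.Sprintf("%.2f TB", float64(bytes)/float64(TB))
	case bytes >= GB:
		return fmt.Sprintf("%.2f GB", float64(bytes)/float64(GB))
	case bytes >= MB:
		return fmt.Sprintf("%.2f MB", float64(bytes)/float64(MB))
	case bytes >= KB:
		return fmt.Sprintf("%.2f KB", float64(bytes)/float64(KB))
	default:
		return fmt.Sprintf("%d bytes", bytes)
	}
}

func main() {
	fmt.Println(formatSize(1536)) // 1.50 KB
	fmt.Println(formatSize(500))  // 500 bytes
}
```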
tplRoot = template.Must(template.ParseFS(templatesFS, "templates/root.html"))
tplDirectory = template.Must(template.New("").Funcs(funcMap).ParseFS(templatesFS, "templates/directory.html"))
@@ -106,8 +73,8 @@ func New() *WebDav {
func (wd *WebDav) Routes() http.Handler {
wr := chi.NewRouter()
wr.Use(middleware.StripSlashes)
wr.Use(wd.commonMiddleware)
//wr.Use(wd.authMiddleware) Disable auth for now
wd.setupRootHandler(wr)
wd.mountHandlers(wr)
@@ -178,6 +145,21 @@ func (wd *WebDav) commonMiddleware(next http.Handler) http.Handler {
})
}
func (wd *WebDav) authMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
cfg := config.Get()
if cfg.UseAuth && cfg.EnableWebdavAuth {
username, password, ok := r.BasicAuth()
if !ok || !config.VerifyAuth(username, password) {
w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`)
http.Error(w, "Unauthorized", http.StatusUnauthorized)
return
}
}
next.ServeHTTP(w, r)
})
}
func (wd *WebDav) handleGetRoot() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/html; charset=utf-8")

View File

@@ -610,7 +610,7 @@ func (s *Store) processMultiSeasonDownloads(torrent *Torrent, debridTorrent *typ
// Update season torrent with final path
seasonTorrent.TorrentPath = seasonDownloadPath
torrent.ContentPath = seasonDownloadPath
seasonTorrent.ContentPath = seasonDownloadPath
seasonTorrent.State = "pausedUP"
// Add the season torrent to storage

View File

@@ -6,16 +6,17 @@ import (
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"sync"
"time"
"github.com/google/uuid"
"github.com/sirrobot01/decypharr/internal/config"
"github.com/sirrobot01/decypharr/internal/request"
"github.com/sirrobot01/decypharr/internal/utils"
"github.com/sirrobot01/decypharr/pkg/arr"
debridTypes "github.com/sirrobot01/decypharr/pkg/debrid/types"
"net/http"
"net/url"
"sync"
"time"
)
type ImportType string
@@ -34,6 +35,7 @@ type ImportRequest struct {
Action string `json:"action"`
DownloadUncached bool `json:"downloadUncached"`
CallBackUrl string `json:"callBackUrl"`
SkipMultiSeason bool `json:"skip_multi_season"`
Status string `json:"status"`
CompletedAt time.Time `json:"completedAt,omitempty"`
@@ -43,7 +45,7 @@ type ImportRequest struct {
Async bool `json:"async"`
}
func NewImportRequest(debrid string, downloadFolder string, magnet *utils.Magnet, arr *arr.Arr, action string, downloadUncached bool, callBackUrl string, importType ImportType) *ImportRequest {
func NewImportRequest(debrid string, downloadFolder string, magnet *utils.Magnet, arr *arr.Arr, action string, downloadUncached bool, callBackUrl string, importType ImportType, skipMultiSeason bool) *ImportRequest {
cfg := config.Get()
callBackUrl = cmp.Or(callBackUrl, cfg.CallbackURL)
return &ImportRequest{
@@ -57,6 +59,7 @@ func NewImportRequest(debrid string, downloadFolder string, magnet *utils.Magnet
DownloadUncached: downloadUncached,
CallBackUrl: callBackUrl,
Type: importType,
SkipMultiSeason: skipMultiSeason,
}
}

View File

@@ -3,6 +3,9 @@ package wire
import (
"cmp"
"context"
"sync"
"time"
"github.com/go-co-op/gocron/v2"
"github.com/rs/zerolog"
"github.com/sirrobot01/decypharr/internal/config"
@@ -11,8 +14,6 @@ import (
"github.com/sirrobot01/decypharr/pkg/debrid"
"github.com/sirrobot01/decypharr/pkg/rclone"
"github.com/sirrobot01/decypharr/pkg/repair"
"sync"
"time"
)
type Store struct {
@@ -101,7 +102,6 @@ func Reset() {
}
if instance.scheduler != nil {
_ = instance.scheduler.StopJobs()
_ = instance.scheduler.Shutdown()
}
}

View File

@@ -18,7 +18,6 @@ import (
func (s *Store) AddTorrent(ctx context.Context, importReq *ImportRequest) error {
torrent := createTorrentFromMagnet(importReq)
debridTorrent, err := debridTypes.Process(ctx, s.debrid, importReq.SelectedDebrid, importReq.Magnet, importReq.Arr, importReq.Action, importReq.DownloadUncached)
if err != nil {
var httpErr *utils.HTTPError
if ok := errors.As(err, &httpErr); ok {
@@ -54,6 +53,12 @@ func (s *Store) processFiles(torrent *Torrent, debridTorrent *types.Torrent, imp
return
}
s.logger.Debug().
Str("torrent_name", debridTorrent.Name).
Str("debrid_status", debridTorrent.Status).
Str("torrent_state", torrent.State).
Msg("processFiles started")
deb := s.debrid.Debrid(debridTorrent.Debrid)
client := deb.Client()
downloadingStatuses := client.GetDownloadingStatus()
@@ -91,13 +96,18 @@ func (s *Store) processFiles(torrent *Torrent, debridTorrent *types.Torrent, imp
if debridTorrent.Status == "downloaded" || !utils.Contains(downloadingStatuses, debridTorrent.Status) {
break
}
select {
case <-backoff.C:
// Increase interval gradually, cap at max
nextInterval := min(s.refreshInterval*2, 30*time.Second)
backoff.Reset(nextInterval)
}
<-backoff.C
// Reset the backoff timer
nextInterval := min(s.refreshInterval*2, 30*time.Second)
backoff.Reset(nextInterval)
}
s.logger.Debug().
Str("torrent_name", debridTorrent.Name).
Str("debrid_status", debridTorrent.Status).
Msg("Download loop exited, proceeding to post-processing")
var torrentSymlinkPath, torrentRclonePath string
debridTorrent.Arr = _arr
@@ -113,12 +123,31 @@ func (s *Store) processFiles(torrent *Torrent, debridTorrent *types.Torrent, imp
}()
s.logger.Error().Err(err).Msgf("Error occurred while processing torrent %s", debridTorrent.Name)
importReq.markAsFailed(err, torrent, debridTorrent)
return
}
onSuccess := func(torrentSymlinkPath string) {
s.logger.Debug().
Str("torrent_name", debridTorrent.Name).
Str("symlink_path", torrentSymlinkPath).
Str("debrid_status", debridTorrent.Status).
Msg("onSuccess called")
torrent.TorrentPath = torrentSymlinkPath
s.updateTorrent(torrent, debridTorrent)
// Safety check: ensure state is set correctly after updateTorrent
// This catches any edge cases where updateTorrent doesn't set the state
if torrent.State != "pausedUP" && torrentSymlinkPath != "" {
s.logger.Warn().
Str("torrent_name", debridTorrent.Name).
Str("current_state", torrent.State).
Str("debrid_status", debridTorrent.Status).
Msg("State not pausedUP after updateTorrent, forcing state update")
torrent.State = "pausedUP"
torrent.Progress = 1.0
torrent.AmountLeft = 0
s.torrents.Update(torrent)
}
s.logger.Info().Msgf("Adding %s took %s", debridTorrent.Name, time.Since(timer))
go importReq.markAsCompleted(torrent, debridTorrent) // Mark the import request as completed, send callback if needed
@@ -133,11 +162,16 @@ func (s *Store) processFiles(torrent *Torrent, debridTorrent *types.Torrent, imp
}
// Check for multi-season torrent support
isMultiSeason, seasons, err := s.detectMultiSeason(debridTorrent)
if err != nil {
s.logger.Warn().Msgf("Error detecting multi-season for %s: %v", debridTorrent.Name, err)
// Continue with normal processing if detection fails
isMultiSeason = false
var isMultiSeason bool
var seasons []SeasonInfo
var err error
if !importReq.SkipMultiSeason {
isMultiSeason, seasons, err = s.detectMultiSeason(debridTorrent)
if err != nil {
s.logger.Warn().Msgf("Error detecting multi-season for %s: %v", debridTorrent.Name, err)
// Continue with normal processing if detection fails
isMultiSeason = false
}
}
switch importReq.Action {
@@ -199,7 +233,12 @@ func (s *Store) processFiles(torrent *Torrent, debridTorrent *types.Torrent, imp
if torrentSymlinkPath == "" {
err = fmt.Errorf("symlink path is empty for %s", debridTorrent.Name)
onFailed(err)
return
}
s.logger.Debug().
Str("torrent_name", debridTorrent.Name).
Str("symlink_path", torrentSymlinkPath).
Msg("Symlink processing complete, calling onSuccess")
onSuccess(torrentSymlinkPath)
return
case "download":
@@ -268,6 +307,12 @@ func (s *Store) partialTorrentUpdate(t *Torrent, debridTorrent *types.Torrent) *
if math.IsNaN(progress) || math.IsInf(progress, 0) {
progress = 0
}
// When debrid reports download complete, force progress to 100% to ensure
// IsReady() returns true. This fixes a race condition where TorBox can report
// DownloadFinished=true but Progress < 1.0, causing state to stay "downloading".
if debridTorrent.Status == "downloaded" {
progress = 1.0
}
sizeCompleted := int64(float64(totalSize) * progress)
var speed int64
@@ -312,6 +357,13 @@ func (s *Store) updateTorrent(t *Torrent, debridTorrent *types.Torrent) *Torrent
return t
}
s.logger.Debug().
Str("torrent_name", t.Name).
Str("debrid_status", debridTorrent.Status).
Str("torrent_path", t.TorrentPath).
Str("current_state", t.State).
Msg("updateTorrent called")
if debridClient := s.debrid.Clients()[debridTorrent.Debrid]; debridClient != nil {
if debridTorrent.Status != "downloaded" {
_ = debridClient.UpdateTorrent(debridTorrent)
@@ -320,7 +372,34 @@ func (s *Store) updateTorrent(t *Torrent, debridTorrent *types.Torrent) *Torrent
t = s.partialTorrentUpdate(t, debridTorrent)
t.ContentPath = t.TorrentPath
// When debrid reports download complete and we have a path, mark as ready.
// This is a direct fix for TorBox where IsReady() might fail due to
// progress/AmountLeft calculation issues.
if debridTorrent.Status == "downloaded" && t.TorrentPath != "" {
s.logger.Debug().
Str("torrent_name", t.Name).
Msg("Setting state to pausedUP (downloaded + path)")
t.State = "pausedUP"
t.Progress = 1.0
t.AmountLeft = 0
s.torrents.Update(t)
return t
}
// Log why the primary condition failed
s.logger.Debug().
Str("torrent_name", t.Name).
Str("debrid_status", debridTorrent.Status).
Str("torrent_path", t.TorrentPath).
Bool("is_ready", t.IsReady()).
Float64("progress", t.Progress).
Int64("amount_left", t.AmountLeft).
Msg("Primary pausedUP condition failed, checking IsReady")
if t.IsReady() {
s.logger.Debug().
Str("torrent_name", t.Name).
Msg("Setting state to pausedUP (IsReady=true)")
t.State = "pausedUP"
s.torrents.Update(t)
return t

View File

@@ -0,0 +1 @@
magnet:?xt=urn:btih:8a19577fb5f690970ca43a57ff1011ae202244b8&dn=ubuntu-25.04-desktop-amd64.iso&tr=https%3A//ipv6.torrent.ubuntu.com/announce&tr=https%3A//torrent.ubuntu.com/announce

Binary file not shown.