28 Commits

Author SHA1 Message Date
purple_emily
2e65ff9276 Pull changes from Torrentio 2024-04-01 17:20:33 +01:00
iPromKnight
684dbba2f0 RTN-025 and title category parsing (#195)
* update rtn to 025

* Implement movie / show type parsing

* switch to RTN in collectors

* ensure env for pythonnet is loaded, and that requirements copy for qbit

* version bump
2024-03-31 22:01:09 +01:00
iPromKnight
c75ecd2707 add qbit housekeeping service to remove stale torrents (#193)
* Add housekeeping service to clean stale torrents

* version bump
2024-03-30 11:52:23 +00:00
iPromKnight
c493ef3376 Hotfix category, and roll back RTN to 0.1.8 (#192)
* Hotfix categories

Also roll back RTN to 0.1.8, as a regression was introduced in 0.2

* bump version
2024-03-30 04:47:36 +00:00
iPromKnight
655a39e35c patch the query with execute (#191) 2024-03-30 01:54:06 +00:00
iPromKnight
cfeee62f6b patch ratio (#190)
* add configurable threshold, default 0.95

* version bump
2024-03-30 01:43:21 +00:00
iPromKnight
c6d4c06d70 hotfix categories from imdb result instead (#189)
* category mapping from imdb

* version bump
2024-03-30 01:26:02 +00:00
iPromKnight
08639a3254 Patch isMovie (#188)
* fix is movie

* version bump
2024-03-30 00:28:35 +00:00
iPromKnight
d430850749 Patch message contract names (#187)
* ensure unique message contract names per collector type

* version bump
2024-03-30 00:09:13 +00:00
iPromKnight
82c0ea459b change qbittorrent settings (#186) 2024-03-29 23:35:27 +00:00
iPromKnight
1e83b4c5d8 Patch the addon (#185) 2024-03-29 19:08:17 +00:00
iPromKnight
66609c2a46 trigram performance increased and housekeeping (#184)
* add new indexes, and change year column to int

* Change gist to gin, and change year to int

* Producer changes for new gin query

* Fully map the rtn response using json dump from Pydantic

Also updates Rtn to 0.1.9

* Add housekeeping script to reconcile imdb ids.

* Join Torrent onto the ingested torrent table

Ensure that a torrent can always find the details of where it came from, and how it was parsed.

* Version bump for release

* missing quote on table name
2024-03-29 19:01:48 +00:00
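The GiST-to-GIN change in the commit above concerns pg_trgm-style trigram indexes. As a rough, illustrative sketch of the trigram similarity such indexes accelerate (plain Python for illustration only, not the Postgres implementation; padding rules are simplified):

```python
def trigrams(s: str) -> set:
    """Break a string into 3-character windows, pg_trgm-style (simplified padding)."""
    padded = f"  {s.lower()} "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def trigram_similarity(a: str, b: str) -> float:
    """Jaccard similarity over trigram sets, as pg_trgm's similarity() computes."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)
```

A GIN index over these trigram sets is what lets Postgres answer fuzzy title lookups without scanning every row.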
iPromKnight
2d78dc2735 version bump for release (#183) 2024-03-28 23:37:35 +00:00
iPromKnight
527d6cdf15 Upgrade RTN to 0.1.8, replace rabbitmq with drop in replacement lavinmq - better performance, lower resource usage. (#182) 2024-03-28 23:35:41 +00:00
iPromKnight
bb260d78d6 Address Issues in build (#180)
- CIS-DI-0001
- CIS-DI-0006
- CIS-DI-0008
- DKL-LI-0003
2024-03-28 10:47:13 +00:00
iPromKnight
baec0450bf Hotfix ingestor github flow, and move to top level src folder (folder per service) (#179) 2024-03-28 10:20:26 +00:00
iPromKnight
4308a0ee71 [wip] bridge python and c# and bring in rank torrent name (#177)
* [wip] bridge python and c# and bring in rank torrent name

* Container restores package now

Includes two dev scripts to install the python packages locally for debugging purposes.

* Introduce slightly tuned title-matching scoring by making it length aware

This should help with sequels such as Terminator 2 vs. Terminator, etc.

* Version bump

Also fixes postgres healthcheck so that it utilises the user from the stack.env file
2024-03-28 10:13:50 +00:00
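The length-aware scoring mentioned in the commit above can be sketched as follows. This is a hypothetical illustration of the idea, not the project's actual implementation: a plain similarity ratio scores "Terminator" highly against "Terminator 2", so the match is penalised by the ratio of the two lengths.

```python
from difflib import SequenceMatcher

def length_aware_score(query: str, candidate: str) -> float:
    """Illustrative sketch: scale a similarity ratio by a length penalty,
    so 'Terminator' does not score as highly against 'Terminator 2'."""
    base = SequenceMatcher(None, query.lower(), candidate.lower()).ratio()
    length_penalty = min(len(query), len(candidate)) / max(len(query), len(candidate))
    return base * length_penalty
```

An exact match keeps a score of 1.0, while a title that merely contains the query is discounted in proportion to the extra characters.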
RohirrimRider
cc15a69517 fix torrent_ingestor ci (#178) 2024-03-27 21:38:59 -05:00
RohirrimRider
a6d3a4a066 init ingest torrents from annatar (#157)
* init ingest torrents from annatar

* works

* mv annatar to src/

* done

* add ci and readme

---------

Co-authored-by: Brett <eruiluvatar@pnbx.xyz>
2024-03-27 21:35:03 -05:00
iPromKnight
9430704205 rename committed .env file to stack.env (#176) 2024-03-27 12:57:14 +00:00
iPromKnight
6cc857bdc3 rename .env to stack.env (#175) 2024-03-27 12:37:11 +00:00
iPromKnight
cc2adbfca5 Recreate single docker-compose file (#174)
Clean it up - also comment all services
2024-03-27 12:21:40 +00:00
iPromKnight
9f928f9b66 Allow trackers url to be configurable + version bump (#173)
This allows people to use only the UDP collection, only the TCP collection, or all.
2024-03-26 12:17:47 +00:00
iPromKnight
a50b5071b3 key prefixes per collector (#172)
* Ensure the collectors manage sagas in their own keyspace, as we do not want overlap (they have the same correlation ids internally from the exchange)

* version bump
2024-03-26 11:56:14 +00:00
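The per-collector key prefixes described above can be sketched minimally. The function name and key layout here are illustrative assumptions, not the project's actual code: the point is only that two collectors handling the same correlation id from the exchange must land on distinct saga keys.

```python
def saga_key(collector: str, correlation_id: str) -> str:
    """Hypothetical sketch: namespace saga state per collector so two
    collectors processing the same correlation id never share a key."""
    return f"{collector}:saga:{correlation_id}"
```

With this scheme, `saga_key("qbit-collector", "abc")` and `saga_key("debrid-collector", "abc")` occupy separate keyspaces even though the correlation id is identical.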
iPromKnight
72db18f0ad add missing env (#171)
* add missing env

* version bump
2024-03-26 11:16:21 +00:00
iPromKnight
d70cef1b86 addon fix (#170)
* addon fix

* version bump
2024-03-26 10:25:43 +00:00
iPromKnight
e1e718cd22 includes qbit collector fix (#169) 2024-03-26 10:17:04 +00:00
iPromKnight
c3e58e4234 Fix redis connection strings for consistency across languages. (#168)
* Fix redis connection strings across languages

* compose version bump
2024-03-26 09:26:35 +00:00
106 changed files with 3244 additions and 1371 deletions


@@ -0,0 +1,15 @@
+name: Build and Push Torrent Ingestor Service
+on:
+  push:
+    paths:
+      - 'src/torrent-ingestor/**'
+jobs:
+  process:
+    uses: ./.github/workflows/base_image_workflow.yaml
+    secrets: inherit
+    with:
+      CONTEXT: ./src/torrent-ingestor
+      DOCKERFILE: ./src/torrent-ingestor/Dockerfile
+      IMAGE_NAME: knightcrawler-torrent-ingestor

.gitignore vendored

@@ -610,4 +610,9 @@ fabric.properties
 **/caddy/logs/**
 # Mac directory indexes
 .DS_Store
+deployment/docker/stack.env
+src/producer/src/python/
+src/debrid-collector/python/
+src/qbit-collector/python/


@@ -51,11 +51,11 @@ Download and install [Docker Compose](https://docs.docker.com/compose/install/),
 ### Environment Setup
-Before running the project, you need to set up the environment variables. Copy the `.env.example` file to `.env`:
+Before running the project, you need to set up the environment variables. Edit the values in `stack.env`:
 ```sh
 cd deployment/docker
-cp .env.example .env
+code stack.env
 ```
 Then set any of the values you wouldd like to customize.
@@ -67,9 +67,6 @@ Then set any of the values you wouldd like to customize.
 By default, Knight Crawler is configured to be *relatively* conservative in its resource usage. If running on a decent machine (16GB RAM, i5+ or equivalent), you can increase some settings to increase consumer throughput. This is especially helpful if you have a large backlog from [importing databases](#importing-external-dumps).
-In your `.env` file, under the `# Consumer` section increase `CONSUMER_REPLICAS` from `3` to `15`.
-You can also increase `JOB_CONCURRENCY` from `5` to `10`.
 ### DebridMediaManager setup (optional)
 There are some optional steps you should take to maximise the number of movies/tv shows we can find.
@@ -90,9 +87,9 @@ We can search DebridMediaManager hash lists which are hosted on GitHub. This all
 (checked) Public Repositories (read-only)
 ```
 4. Click `Generate token`
-5. Take the new token and add it to the bottom of the [.env](deployment/docker/.env) file
+5. Take the new token and add it to the bottom of the [stack.env](deployment/docker/stack.env) file
 ```
-GithubSettings__PAT=<YOUR TOKEN HERE>
+GITHUB_PAT=<YOUR TOKEN HERE>
 ```
 ### Configure external access
@@ -143,7 +140,7 @@ Remove or comment out the port for the addon, and connect it to Caddy:
 addon:
   <<: *knightcrawler-app
   env_file:
-    - .env
+    - stack.env
   hostname: knightcrawler-addon
   image: gabisonfire/knightcrawler-addon:latest
   labels:


@@ -1,7 +0,0 @@
version: "3.9"
name: "knightcrawler"
include:
- components/network.yaml
- components/volumes.yaml
- components/infrastructure.yaml
- components/knightcrawler.yaml


@@ -12,8 +12,11 @@ enabled=false
 program=
 [BitTorrent]
+Session\AnonymousModeEnabled=true
+Session\BTProtocol=TCP
 Session\DefaultSavePath=/downloads/
 Session\ExcludedFileNames=
+Session\MaxActiveCheckingTorrents=5
 Session\MaxActiveDownloads=10
 Session\MaxActiveTorrents=50
 Session\MaxActiveUploads=50
@@ -50,9 +53,10 @@ MailNotification\req_auth=true
 WebUI\Address=*
 WebUI\AuthSubnetWhitelist=0.0.0.0/0
 WebUI\AuthSubnetWhitelistEnabled=true
+WebUI\HostHeaderValidation=false
 WebUI\LocalHostAuth=false
 WebUI\ServerDomains=*
 [RSS]
 AutoDownloader\DownloadRepacks=true
 AutoDownloader\SmartEpisodeFilter=s(\\d+)e(\\d+), (\\d+)x(\\d+), "(\\d{4}[.\\-]\\d{1,2}[.\\-]\\d{1,2})", "(\\d{1,2}[.\\-]\\d{1,2}[.\\-]\\d{4})"


@@ -0,0 +1,244 @@
+version: "3.9"
+name: knightcrawler
+networks:
+  knightcrawler-network:
+    name: knightcrawler-network
+    driver: bridge
+volumes:
+  postgres:
+  lavinmq:
+  redis:
+services:
+  ## Postgres is the database that is used by the services.
+  ## All downloaded metadata is stored in this database.
+  postgres:
+    env_file: stack.env
+    healthcheck:
+      test: [ "CMD", "sh", "-c", "pg_isready -h localhost -U $$POSTGRES_USER" ]
+      timeout: 10s
+      interval: 10s
+      retries: 3
+      start_period: 10s
+    image: postgres:latest
+    # # If you need the database to be accessible from outside, please open the below port.
+    # # Furthermore, please, please, please, change the username and password in the stack.env file.
+    # # If you want to enhance your security even more, create a new user for the database with a strong password.
+    # ports:
+    #   - "5432:5432"
+    networks:
+      - knightcrawler-network
+    restart: unless-stopped
+    volumes:
+      - postgres:/var/lib/postgresql/data
+  ## Redis is used as a cache for the services.
+  ## It is used to store the infohashes that are currently being processed in sagas, as well as intrim data.
+  redis:
+    env_file: stack.env
+    healthcheck:
+      test: ["CMD-SHELL", "redis-cli ping"]
+      timeout: 10s
+      interval: 10s
+      retries: 3
+      start_period: 10s
+    image: redis/redis-stack:latest
+    # # If you need redis to be accessible from outside, please open the below port.
+    # ports:
+    #   - "6379:6379"
+    networks:
+      - knightcrawler-network
+    restart: unless-stopped
+    volumes:
+      - redis:/data
+  ## LavinMQ is used as a message broker for the services.
+  ## It is a high performance drop in replacement for RabbitMQ.
+  ## It is used to communicate between the services.
+  lavinmq:
+    env_file: stack.env
+    # # If you need the database to be accessible from outside, please open the below port.
+    # # Furthermore, please, please, please, look at the documentation for lavinmq / rabbitmq on how to secure the service.
+    # ports:
+    #   - "5672:5672"
+    #   - "15672:15672"
+    #   - "15692:15692"
+    image: cloudamqp/lavinmq:latest
+    healthcheck:
+      test: ["CMD-SHELL", "lavinmqctl status"]
+      timeout: 10s
+      interval: 10s
+      retries: 3
+      start_period: 10s
+    restart: unless-stopped
+    networks:
+      - knightcrawler-network
+    volumes:
+      - lavinmq:/var/lib/lavinmq/
+  ## The addon. This is what is used in stremio
+  addon:
+    depends_on:
+      metadata:
+        condition: service_completed_successfully
+      migrator:
+        condition: service_completed_successfully
+      postgres:
+        condition: service_healthy
+      lavinmq:
+        condition: service_healthy
+      redis:
+        condition: service_healthy
+    env_file: stack.env
+    hostname: knightcrawler-addon
+    image: gabisonfire/knightcrawler-addon:2.0.18
+    labels:
+      logging: promtail
+    networks:
+      - knightcrawler-network
+    ports:
+      - "7000:7000"
+    restart: unless-stopped
+  ## The consumer is responsible for consuming infohashes and orchestrating download of metadata.
+  consumer:
+    depends_on:
+      metadata:
+        condition: service_completed_successfully
+      migrator:
+        condition: service_completed_successfully
+      postgres:
+        condition: service_healthy
+      lavinmq:
+        condition: service_healthy
+      redis:
+        condition: service_healthy
+    env_file: stack.env
+    image: gabisonfire/knightcrawler-consumer:2.0.18
+    labels:
+      logging: promtail
+    networks:
+      - knightcrawler-network
+    restart: unless-stopped
+  ## The debrid collector is responsible for downloading metadata from debrid services. (Currently only RealDebrid is supported)
+  debridcollector:
+    depends_on:
+      metadata:
+        condition: service_completed_successfully
+      migrator:
+        condition: service_completed_successfully
+      postgres:
+        condition: service_healthy
+      lavinmq:
+        condition: service_healthy
+      redis:
+        condition: service_healthy
+    env_file: stack.env
+    image: gabisonfire/knightcrawler-debrid-collector:2.0.18
+    labels:
+      logging: promtail
+    networks:
+      - knightcrawler-network
+    restart: unless-stopped
+  ## The metadata service is responsible for downloading imdb publically available datasets.
+  ## This is used to enrich the metadata during production of ingested infohashes.
+  metadata:
+    depends_on:
+      migrator:
+        condition: service_completed_successfully
+    env_file: stack.env
+    image: gabisonfire/knightcrawler-metadata:2.0.18
+    networks:
+      - knightcrawler-network
+    restart: "no"
+  ## The migrator is responsible for migrating the database schema.
+  migrator:
+    depends_on:
+      postgres:
+        condition: service_healthy
+    env_file: stack.env
+    image: gabisonfire/knightcrawler-migrator:2.0.18
+    networks:
+      - knightcrawler-network
+    restart: "no"
+  ## The producer is responsible for producing infohashes by acquiring for various sites, including DMM.
+  producer:
+    depends_on:
+      metadata:
+        condition: service_completed_successfully
+      migrator:
+        condition: service_completed_successfully
+      postgres:
+        condition: service_healthy
+      lavinmq:
+        condition: service_healthy
+      redis:
+        condition: service_healthy
+    env_file: stack.env
+    image: gabisonfire/knightcrawler-producer:2.0.18
+    labels:
+      logging: promtail
+    networks:
+      - knightcrawler-network
+    restart: unless-stopped
+  ## QBit collector utilizes QBitTorrent to download metadata.
+  qbitcollector:
+    depends_on:
+      metadata:
+        condition: service_completed_successfully
+      migrator:
+        condition: service_completed_successfully
+      postgres:
+        condition: service_healthy
+      lavinmq:
+        condition: service_healthy
+      redis:
+        condition: service_healthy
+      qbittorrent:
+        condition: service_healthy
+    deploy:
+      replicas: ${QBIT_REPLICAS:-0}
+    env_file: stack.env
+    image: gabisonfire/knightcrawler-qbit-collector:2.0.18
+    labels:
+      logging: promtail
+    networks:
+      - knightcrawler-network
+    restart: unless-stopped
+  ## QBitTorrent is a torrent client that can be used to download torrents. In this case its used to download metadata.
+  ## The QBit collector requires this.
+  qbittorrent:
+    deploy:
+      replicas: ${QBIT_REPLICAS:-0}
+    env_file: stack.env
+    environment:
+      PGID: "1000"
+      PUID: "1000"
+      TORRENTING_PORT: "6881"
+      WEBUI_PORT: "8080"
+    healthcheck:
+      test: ["CMD-SHELL", "curl --fail http://localhost:8080"]
+      timeout: 10s
+      interval: 10s
+      retries: 3
+      start_period: 10s
+    image: lscr.io/linuxserver/qbittorrent:latest
+    networks:
+      - knightcrawler-network
+    ports:
+      - "6881:6881/tcp"
+      - "6881:6881/udp"
+      # if you want to expose the webui, uncomment the following line
+      # - "8001:8080"
+    restart: unless-stopped
+    volumes:
+      - ./config/qbit/qbittorrent.conf:/config/qBittorrent/qBittorrent.conf


@@ -16,7 +16,7 @@ rule_files:
 scrape_configs:
   - job_name: "rabbitmq"
     static_configs:
-      - targets: ["rabbitmq:15692"]
+      - targets: ["lavinmq:15692"]
   - job_name: "postgres-exporter"
     static_configs:
       - targets: ["postgres-exporter:9187"]


@@ -4,8 +4,8 @@ x-basehealth: &base-health
   retries: 3
   start_period: 10s
-x-rabbithealth: &rabbitmq-health
-  test: rabbitmq-diagnostics -q ping
+x-lavinhealth: &lavinmq-health
+  test: [ "CMD-SHELL", "lavinmqctl status" ]
   <<: *base-health
 x-redishealth: &redis-health
@@ -13,7 +13,7 @@ x-redishealth: &redis-health
   <<: *base-health
 x-postgreshealth: &postgresdb-health
-  test: pg_isready
+  test: [ "CMD", "sh", "-c", "pg_isready -h localhost -U $$POSTGRES_USER" ]
   <<: *base-health
 x-qbit: &qbit-health
@@ -35,7 +35,7 @@ services:
       - postgres:/var/lib/postgresql/data
     healthcheck: *postgresdb-health
     restart: unless-stopped
-    env_file: ../.env
+    env_file: ../../.env
     networks:
       - knightcrawler-network
@@ -48,25 +48,23 @@ services:
       - redis:/data
     restart: unless-stopped
     healthcheck: *redis-health
-    env_file: ../.env
+    env_file: ../../.env
     networks:
       - knightcrawler-network
-  rabbitmq:
-    image: rabbitmq:3-management
+  lavinmq:
+    env_file: stack.env
     # # If you need the database to be accessible from outside, please open the below port.
-    # # Furthermore, please, please, please, look at the documentation for rabbit on how to secure the service.
+    # # Furthermore, please, please, please, look at the documentation for lavinmq / rabbitmq on how to secure the service.
     # ports:
     #   - "5672:5672"
    #   - "15672:15672"
     #   - "15692:15692"
-    volumes:
-      - rabbitmq:/var/lib/rabbitmq
+    image: cloudamqp/lavinmq:latest
+    healthcheck: *lavinmq-health
     restart: unless-stopped
-    healthcheck: *rabbitmq-health
-    env_file: ../.env
+    volumes:
+      - lavinmq:/var/lib/lavinmq/
     networks:
       - knightcrawler-network
   ## QBitTorrent is a torrent client that can be used to download torrents. In this case its used to download metadata.
   ## The QBit collector requires this.
@@ -80,10 +78,10 @@ services:
     ports:
       - 6881:6881
      - 6881:6881/udp
-    env_file: ../.env
+    env_file: ../../.env
     networks:
       - knightcrawler-network
     restart: unless-stopped
     healthcheck: *qbit-health
     volumes:
-      - ./config/qbit/qbittorrent.conf:/config/qBittorrent/qBittorrent.conf
+      - ../../config/qbit/qbittorrent.conf:/config/qBittorrent/qBittorrent.conf


@@ -1,7 +1,7 @@
 x-apps: &knightcrawler-app
   labels:
     logging: "promtail"
-  env_file: ../.env
+  env_file: ../../.env
   networks:
     - knightcrawler-network
@@ -11,7 +11,7 @@ x-depends: &knightcrawler-app-depends
     condition: service_healthy
   postgres:
     condition: service_healthy
-  rabbitmq:
+  lavinmq:
     condition: service_healthy
   migrator:
     condition: service_completed_successfully
@@ -20,8 +20,8 @@ x-depends: &knightcrawler-app-depends
 services:
   metadata:
-    image: gabisonfire/knightcrawler-metadata:2.0.1
-    env_file: ../.env
+    image: gabisonfire/knightcrawler-metadata:2.0.18
+    env_file: ../../.env
     networks:
       - knightcrawler-network
     restart: no
@@ -30,8 +30,8 @@ services:
       condition: service_completed_successfully
   migrator:
-    image: gabisonfire/knightcrawler-migrator:2.0.1
-    env_file: ../.env
+    image: gabisonfire/knightcrawler-migrator:2.0.18
+    env_file: ../../.env
     networks:
       - knightcrawler-network
     restart: no
@@ -40,7 +40,7 @@ services:
       condition: service_healthy
   addon:
-    image: gabisonfire/knightcrawler-addon:2.0.1
+    image: gabisonfire/knightcrawler-addon:2.0.18
     <<: [*knightcrawler-app, *knightcrawler-app-depends]
     restart: unless-stopped
     hostname: knightcrawler-addon
@@ -48,22 +48,22 @@ services:
       - "7000:7000"
   consumer:
-    image: gabisonfire/knightcrawler-consumer:2.0.1
+    image: gabisonfire/knightcrawler-consumer:2.0.18
     <<: [*knightcrawler-app, *knightcrawler-app-depends]
     restart: unless-stopped
   debridcollector:
-    image: gabisonfire/knightcrawler-debrid-collector:2.0.1
+    image: gabisonfire/knightcrawler-debrid-collector:2.0.18
     <<: [*knightcrawler-app, *knightcrawler-app-depends]
     restart: unless-stopped
   producer:
-    image: gabisonfire/knightcrawler-producer:2.0.1
+    image: gabisonfire/knightcrawler-producer:2.0.18
     <<: [*knightcrawler-app, *knightcrawler-app-depends]
     restart: unless-stopped
   qbitcollector:
-    image: gabisonfire/knightcrawler-qbit-collector:2.0.1
+    image: gabisonfire/knightcrawler-qbit-collector:2.0.18
     <<: [*knightcrawler-app, *knightcrawler-app-depends]
     restart: unless-stopped
     depends_on:


@@ -1,4 +1,4 @@
 volumes:
   postgres:
   redis:
-  rabbitmq:
+  lavinmq:


@@ -0,0 +1,7 @@
+version: "3.9"
+name: "knightcrawler"
+include:
+  - ./components/network.yaml
+  - ./components/volumes.yaml
+  - ./components/infrastructure.yaml
+  - ./components/knightcrawler.yaml


@@ -9,10 +9,12 @@ POSTGRES_PASSWORD=postgres
 POSTGRES_DB=knightcrawler
 # Redis
-REDIS_CONNECTION_STRING=redis:6379
+REDIS_HOST=redis
+REDIS_PORT=6379
+REDIS_EXTRA=abortConnect=false,allowAdmin=true
-# RabbitMQ
+# AMQP
-RABBITMQ_HOST=rabbitmq
+RABBITMQ_HOST=lavinmq
 RABBITMQ_USER=guest
 RABBITMQ_PASSWORD=guest
 RABBITMQ_CONSUMER_QUEUE_NAME=ingested
@@ -28,6 +30,11 @@ METADATA_INSERT_BATCH_SIZE=50000
 COLLECTOR_QBIT_ENABLED=false
 COLLECTOR_DEBRID_ENABLED=true
 COLLECTOR_REAL_DEBRID_API_KEY=
+QBIT_HOST=http://qbittorrent:8080
+QBIT_TRACKERS_URL=https://raw.githubusercontent.com/ngosang/trackerslist/master/trackers_all_http.txt
+# Number of replicas for the qBittorrent collector and qBitTorrent client. Should be 0 or 1.
+QBIT_REPLICAS=0
 # Addon
 DEBUG_MODE=false


@@ -25,7 +25,9 @@ export const cinemetaConfig = {
 }
 export const cacheConfig = {
-  REDIS_CONNECTION_STRING: process.env.REDIS_CONNECTION_STRING || 'redis://localhost:6379/0',
+  REDIS_HOST: process.env.REDIS_HOST || 'redis',
+  REDIS_PORT: process.env.REDIS_PORT || '6379',
+  REDIS_EXTRA: process.env.REDIS_EXTRA || '',
   NO_CACHE: parseBool(process.env.NO_CACHE, false),
   IMDB_TTL: parseInt(process.env.IMDB_TTL || 60 * 60 * 4), // 4 Hours
   STREAM_TTL: parseInt(process.env.STREAM_TTL || 60 * 60 * 4), // 1 Hour
@@ -40,3 +42,5 @@ export const cacheConfig = {
   STALE_ERROR_AGE: parseInt(process.env.STALE_ERROR_AGE) || 7 * 24 * 60 * 60, // 7 days
   GLOBAL_KEY_PREFIX: process.env.GLOBAL_KEY_PREFIX || 'jackettio-addon',
 }
+cacheConfig.REDIS_CONNECTION_STRING = 'redis://' + cacheConfig.REDIS_HOST + ':' + cacheConfig.REDIS_PORT + '?' + cacheConfig.REDIS_EXTRA;


@@ -1,8 +1,12 @@
 export const cacheConfig = {
-  REDIS_CONNECTION_STRING: process.env.REDIS_CONNECTION_STRING || 'redis://localhost:6379/0',
+  REDIS_HOST: process.env.REDIS_HOST || 'redis',
+  REDIS_PORT: process.env.REDIS_PORT || '6379',
+  REDIS_EXTRA: process.env.REDIS_EXTRA || '',
   NO_CACHE: parseBool(process.env.NO_CACHE, false),
 }
+cacheConfig.REDIS_CONNECTION_STRING = 'redis://' + cacheConfig.REDIS_HOST + ':' + cacheConfig.REDIS_PORT + '?' + cacheConfig.REDIS_EXTRA;
 export const databaseConfig = {
   POSTGRES_HOST: process.env.POSTGRES_HOST || 'postgres',
   POSTGRES_PORT: process.env.POSTGRES_PORT || '5432',


@@ -14,13 +14,12 @@ const Torrent = database.define('torrent',
 {
   infoHash: { type: Sequelize.STRING(64), primaryKey: true },
   provider: { type: Sequelize.STRING(32), allowNull: false },
-  torrentId: { type: Sequelize.STRING(128) },
+  ingestedTorrentId: { type: Sequelize.BIGINT, allowNull: false },
   title: { type: Sequelize.STRING(256), allowNull: false },
   size: { type: Sequelize.BIGINT },
   type: { type: Sequelize.STRING(16), allowNull: false },
   uploadDate: { type: Sequelize.DATE, allowNull: false },
   seeders: { type: Sequelize.SMALLINT },
-  trackers: { type: Sequelize.STRING(4096) },
   languages: { type: Sequelize.STRING(4096) },
   resolution: { type: Sequelize.STRING(16) }
 }


@@ -9,187 +9,187 @@ const KEY = 'alldebrid';
const AGENT = 'knightcrawler'; const AGENT = 'knightcrawler';
export async function getCachedStreams(streams, apiKey) { export async function getCachedStreams(streams, apiKey) {
const options = await getDefaultOptions(); const options = await getDefaultOptions();
const AD = new AllDebridClient(apiKey, options); const AD = new AllDebridClient(apiKey, options);
const hashes = streams.map(stream => stream.infoHash); const hashes = streams.map(stream => stream.infoHash);
const available = await AD.magnet.instant(hashes) const available = await AD.magnet.instant(hashes)
.catch(error => { .catch(error => {
if (toCommonError(error)) { if (toCommonError(error)) {
return Promise.reject(error); return Promise.reject(error);
} }
console.warn(`Failed AllDebrid cached [${hashes[0]}] torrent availability request:`, error); console.warn(`Failed AllDebrid cached [${hashes[0]}] torrent availability request:`, error);
return undefined; return undefined;
}); });
return available?.data?.magnets && streams return available?.data?.magnets && streams
.reduce((mochStreams, stream) => { .reduce((mochStreams, stream) => {
const cachedEntry = available.data.magnets.find(magnet => stream.infoHash === magnet.hash.toLowerCase()); const cachedEntry = available.data.magnets.find(magnet => stream.infoHash === magnet.hash.toLowerCase());
const streamTitleParts = stream.title.replace(/\n👤.*/s, '').split('\n'); const streamTitleParts = stream.title.replace(/\n👤.*/s, '').split('\n');
const fileName = streamTitleParts[streamTitleParts.length - 1]; const fileName = streamTitleParts[streamTitleParts.length - 1];
const fileIndex = streamTitleParts.length === 2 ? stream.fileIdx : null; const fileIndex = streamTitleParts.length === 2 ? stream.fileIdx : null;
const encodedFileName = encodeURIComponent(fileName); const encodedFileName = encodeURIComponent(fileName);
mochStreams[stream.infoHash] = { mochStreams[`${stream.infoHash}@${stream.fileIdx}`] = {
url: `${apiKey}/${stream.infoHash}/${encodedFileName}/${fileIndex}`, url: `${apiKey}/${stream.infoHash}/${encodedFileName}/${fileIndex}`,
cached: cachedEntry?.instant cached: cachedEntry?.instant
} }
return mochStreams; return mochStreams;
}, {}) }, {})
} }
export async function getCatalog(apiKey, offset = 0) { export async function getCatalog(apiKey, offset = 0) {
if (offset > 0) { if (offset > 0) {
return []; return [];
} }
const options = await getDefaultOptions(); const options = await getDefaultOptions();
const AD = new AllDebridClient(apiKey, options); const AD = new AllDebridClient(apiKey, options);
return AD.magnet.status() return AD.magnet.status()
.then(response => response.data.magnets) .then(response => response.data.magnets)
.then(torrents => (torrents || []) .then(torrents => (torrents || [])
.filter(torrent => torrent && statusReady(torrent.statusCode)) .filter(torrent => torrent && statusReady(torrent.statusCode))
.map(torrent => ({ .map(torrent => ({
id: `${KEY}:${torrent.id}`, id: `${KEY}:${torrent.id}`,
type: Type.OTHER, type: Type.OTHER,
name: torrent.filename name: torrent.filename
}))); })));
} }
export async function getItemMeta(itemId, apiKey) { export async function getItemMeta(itemId, apiKey) {
const options = await getDefaultOptions(); const options = await getDefaultOptions();
const AD = new AllDebridClient(apiKey, options); const AD = new AllDebridClient(apiKey, options);
return AD.magnet.status(itemId) return AD.magnet.status(itemId)
.then(response => response.data.magnets) .then(response => response.data.magnets)
.then(torrent => ({ .then(torrent => ({
id: `${KEY}:${torrent.id}`, id: `${KEY}:${torrent.id}`,
type: Type.OTHER, type: Type.OTHER,
name: torrent.filename, name: torrent.filename,
infoHash: torrent.hash.toLowerCase(), infoHash: torrent.hash.toLowerCase(),
      videos: torrent.links
        .filter(file => isVideo(file.filename))
        .map((file, index) => ({
          id: `${KEY}:${torrent.id}:${index}`,
          title: file.filename,
          released: new Date(torrent.uploadDate * 1000 - index).toISOString(),
          streams: [{ url: `${apiKey}/${torrent.hash.toLowerCase()}/${encodeURIComponent(file.filename)}/${index}` }]
        }))
    }))
}

export async function resolve({ ip, apiKey, infoHash, cachedEntryInfo, fileIndex }) {
  console.log(`Unrestricting AllDebrid ${infoHash} [${fileIndex}]`);
  const options = await getDefaultOptions(ip);
  const AD = new AllDebridClient(apiKey, options);
  return _resolve(AD, infoHash, cachedEntryInfo, fileIndex)
      .catch(error => {
        if (errorExpiredSubscriptionError(error)) {
          console.log(`Access denied to AllDebrid ${infoHash} [${fileIndex}]`);
          return StaticResponse.FAILED_ACCESS;
        } else if (error.code === 'MAGNET_TOO_MANY') {
          console.log(`Deleting and retrying adding to AllDebrid ${infoHash} [${fileIndex}]...`);
          return _deleteAndRetry(AD, infoHash, cachedEntryInfo, fileIndex);
        }
        return Promise.reject(`Failed AllDebrid adding torrent ${JSON.stringify(error)}`);
      });
}
async function _resolve(AD, infoHash, cachedEntryInfo, fileIndex) {
  const torrent = await _createOrFindTorrent(AD, infoHash);
  if (torrent && statusReady(torrent.statusCode)) {
    return _unrestrictLink(AD, torrent, cachedEntryInfo, fileIndex);
  } else if (torrent && statusDownloading(torrent.statusCode)) {
    console.log(`Downloading to AllDebrid ${infoHash} [${fileIndex}]...`);
    return StaticResponse.DOWNLOADING;
  } else if (torrent && statusHandledError(torrent.statusCode)) {
    console.log(`Retrying downloading to AllDebrid ${infoHash} [${fileIndex}]...`);
    return _retryCreateTorrent(AD, infoHash, cachedEntryInfo, fileIndex);
  }
  return Promise.reject(`Failed AllDebrid adding torrent ${JSON.stringify(torrent)}`);
}
async function _createOrFindTorrent(AD, infoHash) {
  return _findTorrent(AD, infoHash)
      .catch(() => _createTorrent(AD, infoHash));
}

async function _retryCreateTorrent(AD, infoHash, encodedFileName, fileIndex) {
  const newTorrent = await _createTorrent(AD, infoHash);
  return newTorrent && statusReady(newTorrent.statusCode)
      ? _unrestrictLink(AD, newTorrent, encodedFileName, fileIndex)
      : StaticResponse.FAILED_DOWNLOAD;
}

async function _deleteAndRetry(AD, infoHash, encodedFileName, fileIndex) {
  const torrents = await AD.magnet.status().then(response => response.data.magnets);
  const lastTorrent = torrents[torrents.length - 1];
  return AD.magnet.delete(lastTorrent.id)
      .then(() => _retryCreateTorrent(AD, infoHash, encodedFileName, fileIndex));
}

async function _findTorrent(AD, infoHash) {
  const torrents = await AD.magnet.status().then(response => response.data.magnets);
  const foundTorrents = torrents.filter(torrent => torrent.hash.toLowerCase() === infoHash);
  const nonFailedTorrent = foundTorrents.find(torrent => !statusError(torrent.statusCode));
  const foundTorrent = nonFailedTorrent || foundTorrents[0];
  return foundTorrent || Promise.reject('No recent torrent found');
}

async function _createTorrent(AD, infoHash) {
  const magnetLink = await getMagnetLink(infoHash);
  const uploadResponse = await AD.magnet.upload(magnetLink);
  const torrentId = uploadResponse.data.magnets[0].id;
  return AD.magnet.status(torrentId).then(statusResponse => statusResponse.data.magnets);
}
async function _unrestrictLink(AD, torrent, encodedFileName, fileIndex) {
  const targetFileName = decodeURIComponent(encodedFileName);
  const videos = torrent.links.filter(link => isVideo(link.filename)).sort((a, b) => b.size - a.size);
  const targetVideo = Number.isInteger(fileIndex)
      ? videos.find(video => sameFilename(targetFileName, video.filename))
      : videos[0];
  if (!targetVideo && torrent.links.every(link => isArchive(link.filename))) {
    console.log(`Only AllDebrid archive is available for [${torrent.hash}] ${encodedFileName}`);
    return StaticResponse.FAILED_RAR;
  }
  if (!targetVideo || !targetVideo.link || !targetVideo.link.length) {
    return Promise.reject(`No AllDebrid links found for [${torrent.hash}] ${encodedFileName}`);
  }
  const unrestrictedLink = await AD.link.unlock(targetVideo.link).then(response => response.data.link);
  console.log(`Unrestricted AllDebrid ${torrent.hash} [${fileIndex}] to ${unrestrictedLink}`);
  return unrestrictedLink;
}
async function getDefaultOptions(ip) {
  return { base_agent: AGENT, timeout: 10000 };
}

export function toCommonError(error) {
  if (error && error.code === 'AUTH_BAD_APIKEY') {
    return BadTokenError;
  }
  if (error && error.code === 'AUTH_USER_BANNED') {
    return AccessDeniedError;
  }
  return undefined;
}

function statusError(statusCode) {
  return [5, 6, 7, 8, 9, 10, 11].includes(statusCode);
}

function statusHandledError(statusCode) {
  return [5, 7, 9, 10, 11].includes(statusCode);
}

function statusDownloading(statusCode) {
  return [0, 1, 2, 3].includes(statusCode);
}

function statusReady(statusCode) {
  return statusCode === 4;
}

function errorExpiredSubscriptionError(error) {
  return ['AUTH_BAD_APIKEY', 'MUST_BE_PREMIUM', 'MAGNET_MUST_BE_PREMIUM', 'FREE_TRIAL_LIMIT_REACHED', 'AUTH_USER_BANNED']
      .includes(error.code);
}
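The status helpers above partition AllDebrid magnet status codes into three buckets: 0-3 still downloading, 4 ready, and 5-11 errors (of which 5, 7, 9, 10 and 11 are retried). A standalone sketch of that classification, not part of the addon itself:

```javascript
// Hypothetical helper mirroring the statusReady/statusDownloading/
// statusHandledError/statusError predicates above.
function classifyStatus(statusCode) {
  if (statusCode === 4) return 'ready';                               // statusReady
  if ([0, 1, 2, 3].includes(statusCode)) return 'downloading';        // statusDownloading
  if ([5, 7, 9, 10, 11].includes(statusCode)) return 'retryable-error'; // statusHandledError
  if ([6, 8].includes(statusCode)) return 'error';                    // statusError only, not retried
  return 'unknown';
}

console.log(classifyStatus(4)); // ready
console.log(classifyStatus(2)); // downloading
console.log(classifyStatus(6)); // error
```

This is why `_resolve` can fall through to a rejection: codes 6 and 8 are errors but not handled errors, so they are neither retried nor reported as downloading.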


@@ -8,148 +8,148 @@ import StaticResponse from './static.js';
const KEY = 'debridlink';

export async function getCachedStreams(streams, apiKey) {
  const options = await getDefaultOptions();
  const DL = new DebridLinkClient(apiKey, options);
  const hashBatches = chunkArray(streams.map(stream => stream.infoHash), 50)
      .map(batch => batch.join(','));
  const available = await Promise.all(hashBatches.map(hashes => DL.seedbox.cached(hashes)))
      .then(results => results.map(result => result.value))
      .then(results => results.reduce((all, result) => Object.assign(all, result), {}))
      .catch(error => {
        if (toCommonError(error)) {
          return Promise.reject(error);
        }
        console.warn('Failed DebridLink cached torrent availability request:', error);
        return undefined;
      });
  return available && streams
      .reduce((mochStreams, stream) => {
        const cachedEntry = available[stream.infoHash];
        mochStreams[`${stream.infoHash}@${stream.fileIdx}`] = {
          url: `${apiKey}/${stream.infoHash}/null/${stream.fileIdx}`,
          cached: !!cachedEntry
        };
        return mochStreams;
      }, {});
}
export async function getCatalog(apiKey, offset = 0) {
  if (offset > 0) {
    return [];
  }
  const options = await getDefaultOptions();
  const DL = new DebridLinkClient(apiKey, options);
  return DL.seedbox.list()
      .then(response => response.value)
      .then(torrents => (torrents || [])
          .filter(torrent => torrent && statusReady(torrent))
          .map(torrent => ({
            id: `${KEY}:${torrent.id}`,
            type: Type.OTHER,
            name: torrent.name
          })));
}

export async function getItemMeta(itemId, apiKey, ip) {
  const options = await getDefaultOptions(ip);
  const DL = new DebridLinkClient(apiKey, options);
  return DL.seedbox.list(itemId)
      .then(response => response.value[0])
      .then(torrent => ({
        id: `${KEY}:${torrent.id}`,
        type: Type.OTHER,
        name: torrent.name,
        infoHash: torrent.hashString.toLowerCase(),
        videos: torrent.files
            .filter(file => isVideo(file.name))
            .map((file, index) => ({
              id: `${KEY}:${torrent.id}:${index}`,
              title: file.name,
              released: new Date(torrent.created * 1000 - index).toISOString(),
              streams: [{ url: file.downloadUrl }]
            }))
      }));
}
export async function resolve({ ip, apiKey, infoHash, fileIndex }) {
  console.log(`Unrestricting DebridLink ${infoHash} [${fileIndex}]`);
  const options = await getDefaultOptions(ip);
  const DL = new DebridLinkClient(apiKey, options);
  return _resolve(DL, infoHash, fileIndex)
      .catch(error => {
        if (errorExpiredSubscriptionError(error)) {
          console.log(`Access denied to DebridLink ${infoHash} [${fileIndex}]`);
          return StaticResponse.FAILED_ACCESS;
        }
        return Promise.reject(`Failed DebridLink adding torrent ${JSON.stringify(error)}`);
      });
}

async function _resolve(DL, infoHash, fileIndex) {
  const torrent = await _createOrFindTorrent(DL, infoHash);
  if (torrent && statusReady(torrent)) {
    return _unrestrictLink(DL, torrent, fileIndex);
  } else if (torrent && statusDownloading(torrent)) {
    console.log(`Downloading to DebridLink ${infoHash} [${fileIndex}]...`);
    return StaticResponse.DOWNLOADING;
  }
  return Promise.reject(`Failed DebridLink adding torrent ${JSON.stringify(torrent)}`);
}

async function _createOrFindTorrent(DL, infoHash) {
  return _findTorrent(DL, infoHash)
      .catch(() => _createTorrent(DL, infoHash));
}

async function _findTorrent(DL, infoHash) {
  const torrents = await DL.seedbox.list().then(response => response.value);
  const foundTorrents = torrents.filter(torrent => torrent.hashString.toLowerCase() === infoHash);
  return foundTorrents[0] || Promise.reject('No recent torrent found');
}

async function _createTorrent(DL, infoHash) {
  const magnetLink = await getMagnetLink(infoHash);
  const uploadResponse = await DL.seedbox.add(magnetLink, null, true);
  return uploadResponse.value;
}
async function _unrestrictLink(DL, torrent, fileIndex) {
  const targetFile = Number.isInteger(fileIndex)
      ? torrent.files[fileIndex]
      : torrent.files.filter(file => file.downloadPercent === 100).sort((a, b) => b.size - a.size)[0];
  if (!targetFile && torrent.files.every(file => isArchive(file.downloadUrl))) {
    console.log(`Only DebridLink archive is available for [${torrent.hashString}] ${fileIndex}`);
    return StaticResponse.FAILED_RAR;
  }
  if (!targetFile || !targetFile.downloadUrl) {
    return Promise.reject(`No DebridLink links found for index ${fileIndex} in: ${JSON.stringify(torrent)}`);
  }
  console.log(`Unrestricted DebridLink ${torrent.hashString} [${fileIndex}] to ${targetFile.downloadUrl}`);
  return targetFile.downloadUrl;
}

async function getDefaultOptions(ip) {
  return { ip, timeout: 10000 };
}

export function toCommonError(error) {
  if (error === 'badToken') {
    return BadTokenError;
  }
  return undefined;
}

function statusDownloading(torrent) {
  return torrent.downloadPercent < 100;
}

function statusReady(torrent) {
  return torrent.downloadPercent === 100;
}

function errorExpiredSubscriptionError(error) {
  return ['freeServerOverload', 'maxTorrent', 'maxLink', 'maxLinkHost', 'maxData', 'maxDataHost'].includes(error);
}
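`getCachedStreams` above batches availability lookups by splitting the info hashes into chunks of 50 and joining each chunk into one comma-separated query. The `chunkArray` helper it relies on appears later in this diff; a standalone sketch of the batching pattern with toy hashes:

```javascript
// Recursive chunking, same shape as the repo's chunkArray helper.
function chunkArray(arr, size) {
  return arr.length > size
      ? [arr.slice(0, size), ...chunkArray(arr.slice(size), size)]
      : [arr];
}

// Toy hashes; real ones are 40-char hex info hashes.
const hashes = ['aaa', 'bbb', 'ccc', 'ddd', 'eee'];
const batches = chunkArray(hashes, 2).map(batch => batch.join(','));
console.log(batches); // [ 'aaa,bbb', 'ccc,ddd', 'eee' ]
```

Each joined batch then becomes a single `DL.seedbox.cached(hashes)` request, and the per-batch result objects are merged back into one availability map.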


@@ -15,226 +15,226 @@ const RESOLVE_TIMEOUT = 2 * 60 * 1000; // 2 minutes
const MIN_API_KEY_SYMBOLS = 15;
const TOKEN_BLACKLIST = [];

export const MochOptions = {
  realdebrid: {
    key: 'realdebrid',
    instance: realdebrid,
    name: 'RealDebrid',
    shortName: 'RD',
    catalog: true
  },
  premiumize: {
    key: 'premiumize',
    instance: premiumize,
    name: 'Premiumize',
    shortName: 'PM',
    catalog: true
  },
  alldebrid: {
    key: 'alldebrid',
    instance: alldebrid,
    name: 'AllDebrid',
    shortName: 'AD',
    catalog: true
  },
  debridlink: {
    key: 'debridlink',
    instance: debridlink,
    name: 'DebridLink',
    shortName: 'DL',
    catalog: true
  },
  offcloud: {
    key: 'offcloud',
    instance: offcloud,
    name: 'Offcloud',
    shortName: 'OC',
    catalog: true
  },
  putio: {
    key: 'putio',
    instance: putio,
    name: 'Put.io',
    shortName: 'Putio',
    catalog: true
  }
};
const unrestrictQueues = {};
Object.values(MochOptions)
    .map(moch => moch.key)
    .forEach(mochKey => unrestrictQueues[mochKey] = new namedQueue((task, callback) => task.method()
        .then(result => callback(false, result))
        .catch(error => callback(error)), 200));

export function hasMochConfigured(config) {
  return Object.keys(MochOptions).find(moch => config?.[moch]);
}

export async function applyMochs(streams, config) {
  if (!streams?.length || !hasMochConfigured(config)) {
    return streams;
  }
  return Promise.all(Object.keys(config)
      .filter(configKey => MochOptions[configKey])
      .map(configKey => MochOptions[configKey])
      .map(moch => {
        if (isInvalidToken(config[moch.key], moch.key)) {
          return { moch, error: BadTokenError };
        }
        return moch.instance.getCachedStreams(streams, config[moch.key])
            .then(mochStreams => ({ moch, mochStreams }))
            .catch(rawError => {
              const error = moch.instance.toCommonError(rawError) || rawError;
              if (error === BadTokenError) {
                blackListToken(config[moch.key], moch.key);
              }
              return { moch, error };
            });
      }))
      .then(results => processMochResults(streams, config, results));
}
export async function resolve(parameters) {
  const moch = MochOptions[parameters.mochKey];
  if (!moch) {
    return Promise.reject(new Error(`Not a valid moch provider: ${parameters.mochKey}`));
  }
  if (!parameters.apiKey || !parameters.infoHash || !parameters.cachedEntryInfo) {
    return Promise.reject(new Error('No valid parameters passed'));
  }
  const id = `${parameters.ip}_${parameters.mochKey}_${parameters.apiKey}_${parameters.infoHash}_${parameters.fileIndex}`;
  const method = () => timeout(RESOLVE_TIMEOUT, cacheWrapResolvedUrl(id, () => moch.instance.resolve(parameters)))
      .catch(error => {
        console.warn(error);
        return StaticResponse.FAILED_UNEXPECTED;
      })
      .then(url => isStaticUrl(url) ? `${parameters.host}/${url}` : url);
  const unrestrictQueue = unrestrictQueues[moch.key];
  return new Promise((resolve, reject) => {
    unrestrictQueue.push({ id, method }, (error, result) => result ? resolve(result) : reject(error));
  });
}
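`resolve` above keys each unrestrict job by ip, provider, api key, info hash and file index, so concurrent identical requests collapse into one job on the provider's named queue. A minimal standalone sketch of that deduplication idea using a map of in-flight promises (an illustration of the concept, not the named-queue library's implementation):

```javascript
// Map from job id to the in-flight promise for that id.
const inFlight = new Map();

// Concurrent calls with the same id share one underlying execution of method.
function dedupe(id, method) {
  if (!inFlight.has(id)) {
    const job = Promise.resolve()
        .then(method)
        .finally(() => inFlight.delete(id)); // allow a fresh run later
    inFlight.set(id, job);
  }
  return inFlight.get(id);
}

let calls = 0;
const task = () => { calls += 1; return Promise.resolve('https://example.invalid/stream'); };
Promise.all([dedupe('rd_hash_0', task), dedupe('rd_hash_0', task)])
    .then(results => console.log(results, calls)); // both callers get the result; calls === 1
```

This matters because a popular stream can be requested by many players at once, and only one add-and-unrestrict round trip should hit the debrid API per torrent/file.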
export async function getMochCatalog(mochKey, config) {
  const moch = MochOptions[mochKey];
  if (!moch) {
    return Promise.reject(new Error(`Not a valid moch provider: ${mochKey}`));
  }
  if (isInvalidToken(config[mochKey], mochKey)) {
    return Promise.reject(new Error(`Invalid API key for moch provider: ${mochKey}`));
  }
  return moch.instance.getCatalog(config[moch.key], config.skip, config.ip)
      .catch(rawError => {
        const commonError = moch.instance.toCommonError(rawError);
        if (commonError === BadTokenError) {
          blackListToken(config[moch.key], moch.key);
        }
        return commonError ? [] : Promise.reject(rawError);
      });
}

export async function getMochItemMeta(mochKey, itemId, config) {
  const moch = MochOptions[mochKey];
  if (!moch) {
    return Promise.reject(new Error(`Not a valid moch provider: ${mochKey}`));
  }
  return moch.instance.getItemMeta(itemId, config[moch.key], config.ip)
      .then(meta => enrichMeta(meta))
      .then(meta => {
        meta.videos.forEach(video => video.streams.forEach(stream => {
          if (!stream.url.startsWith('http')) {
            stream.url = `${config.host}/${moch.key}/${stream.url}/${streamFilename(video)}`;
          }
          stream.behaviorHints = { bingeGroup: itemId };
        }));
        return meta;
      });
}
function processMochResults(streams, config, results) {
  const errorResults = results
      .map(result => errorStreamResponse(result.moch.key, result.error, config))
      .filter(errorResponse => errorResponse);
  if (errorResults.length) {
    return errorResults;
  }
  const includeTorrentLinks = options.includeTorrentLinks(config);
  const excludeDownloadLinks = options.excludeDownloadLinks(config);
  const mochResults = results.filter(result => result?.mochStreams);
  const cachedStreams = mochResults
      .reduce((resultStreams, mochResult) => populateCachedLinks(resultStreams, mochResult, config), streams);
  const resultStreams = excludeDownloadLinks ? cachedStreams : populateDownloadLinks(cachedStreams, mochResults, config);
  return includeTorrentLinks ? resultStreams : resultStreams.filter(stream => stream.url);
}

function populateCachedLinks(streams, mochResult, config) {
  return streams.map(stream => {
    const cachedEntry = stream.infoHash && mochResult.mochStreams[`${stream.infoHash}@${stream.fileIdx}`];
    if (cachedEntry?.cached) {
      return {
        name: `[${mochResult.moch.shortName}+] ${stream.name}`,
        title: stream.title,
        url: `${config.host}/${mochResult.moch.key}/${cachedEntry.url}/${streamFilename(stream)}`,
        behaviorHints: stream.behaviorHints
      };
    }
    return stream;
  });
}
function populateDownloadLinks(streams, mochResults, config) {
  const torrentStreams = streams.filter(stream => stream.infoHash);
  const seededStreams = streams.filter(stream => !stream.title.includes('👤 0'));
  torrentStreams.forEach(stream => mochResults.forEach(mochResult => {
    const cachedEntry = mochResult.mochStreams[`${stream.infoHash}@${stream.fileIdx}`];
    const isCached = cachedEntry?.cached;
    if (!isCached && isHealthyStreamForDebrid(seededStreams, stream)) {
      streams.push({
        name: `[${mochResult.moch.shortName} download] ${stream.name}`,
        title: stream.title,
        url: `${config.host}/${mochResult.moch.key}/${cachedEntry.url}/${streamFilename(stream)}`,
        behaviorHints: stream.behaviorHints
      });
    }
  }));
  return streams;
}
function isHealthyStreamForDebrid(streams, stream) {
  const isZeroSeeders = stream.title.includes('👤 0');
  const is4kStream = stream.name.includes('4k');
  const isNotEnoughOptions = streams.length <= 5;
  return !isZeroSeeders || is4kStream || isNotEnoughOptions;
}

function isInvalidToken(token, mochKey) {
  return token.length < MIN_API_KEY_SYMBOLS || TOKEN_BLACKLIST.includes(`${mochKey}|${token}`);
}

function blackListToken(token, mochKey) {
  const tokenKey = `${mochKey}|${token}`;
  console.log(`Blacklisting invalid token: ${tokenKey}`);
  TOKEN_BLACKLIST.push(tokenKey);
}

function errorStreamResponse(mochKey, error, config) {
  if (error === BadTokenError) {
    return {
      name: `KnightCrawler\n${MochOptions[mochKey].shortName} error`,
      title: `Invalid ${MochOptions[mochKey].name} ApiKey/Token!`,
      url: `${config.host}/${StaticResponse.FAILED_ACCESS}`
    };
  }
  if (error === AccessDeniedError) {
    return {
      name: `KnightCrawler\n${MochOptions[mochKey].shortName} error`,
      title: `Expired/invalid ${MochOptions[mochKey].name} subscription!`,
      url: `${config.host}/${StaticResponse.FAILED_ACCESS}`
    };
  }
  return undefined;
}


@@ -1,63 +1,63 @@
import * as repository from '../lib/repository.js';

const METAHUB_URL = 'https://images.metahub.space';

export const BadTokenError = { code: 'BAD_TOKEN' };
export const AccessDeniedError = { code: 'ACCESS_DENIED' };

export function chunkArray(arr, size) {
  return arr.length > size
      ? [arr.slice(0, size), ...chunkArray(arr.slice(size), size)]
      : [arr];
}

export function streamFilename(stream) {
  const titleParts = stream.title.replace(/\n👤.*/s, '').split('\n');
  const filename = titleParts.pop().split('/').pop();
  return encodeURIComponent(filename);
}

export async function enrichMeta(itemMeta) {
  const infoHashes = [...new Set([itemMeta.infoHash]
      .concat(itemMeta.videos.map(video => video.infoHash))
      .filter(infoHash => infoHash))];
  const files = infoHashes.length ? await repository.getFiles(infoHashes).catch(() => []) : [];
  const commonImdbId = itemMeta.infoHash && mostCommonValue(files.map(file => file.imdbId));
  if (files.length) {
    return {
      ...itemMeta,
      logo: commonImdbId && `${METAHUB_URL}/logo/medium/${commonImdbId}/img`,
      poster: commonImdbId && `${METAHUB_URL}/poster/medium/${commonImdbId}/img`,
      background: commonImdbId && `${METAHUB_URL}/background/medium/${commonImdbId}/img`,
      videos: itemMeta.videos.map(video => {
        const file = files.find(file => sameFilename(video.title, file.title));
        if (file?.imdbId) {
          if (file.imdbSeason && file.imdbEpisode) {
            video.id = `${file.imdbId}:${file.imdbSeason}:${file.imdbEpisode}`;
            video.season = file.imdbSeason;
            video.episode = file.imdbEpisode;
            video.thumbnail = `https://episodes.metahub.space/${file.imdbId}/${video.season}/${video.episode}/w780.jpg`;
          } else {
            video.id = file.imdbId;
            video.thumbnail = `${METAHUB_URL}/background/small/${file.imdbId}/img`;
          }
        }
        return video;
      })
    };
  }
  return itemMeta;
}

export function sameFilename(filename, expectedFilename) {
  const offset = filename.length - expectedFilename.length;
  for (let i = 0; i < expectedFilename.length; i++) {
    if (filename[offset + i] !== expectedFilename[i] && expectedFilename[i] !== '\ufffd') {
      return false;
    }
  }
  return true;
}

function mostCommonValue(array) {
  return array.sort((a, b) => array.filter(v => v === a).length - array.filter(v => v === b).length).pop();
}
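The helpers above are pure functions, so their behavior can be shown in isolation. A minimal sketch with standalone copies of `chunkArray` and `sameFilename` (matching the definitions above; the `'\ufffd'` wildcard stands for characters lost to mis-encoding in stored filenames):

```javascript
// Standalone copy: recursively splits an array into chunks of at most `size`.
function chunkArray(arr, size) {
  return arr.length > size
      ? [arr.slice(0, size), ...chunkArray(arr.slice(size), size)]
      : [arr];
}

// Standalone copy: suffix comparison of filenames, where expectedFilename is
// matched against the tail of filename and '\ufffd' matches any character.
function sameFilename(filename, expectedFilename) {
  const offset = filename.length - expectedFilename.length;
  for (let i = 0; i < expectedFilename.length; i++) {
    if (filename[offset + i] !== expectedFilename[i] && expectedFilename[i] !== '\ufffd') {
      return false;
    }
  }
  return true;
}

console.log(chunkArray([1, 2, 3, 4, 5], 2)); // → [[1, 2], [3, 4], [5]]
console.log(sameFilename('Season 1/Episode.1.mkv', 'Episode.1.mkv')); // → true
console.log(sameFilename('Episode.2.mkv', 'Episode.1.mkv')); // → false
```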

View File

@@ -9,178 +9,178 @@ import StaticResponse from './static.js';
const KEY = 'offcloud';

export async function getCachedStreams(streams, apiKey) {
  const options = await getDefaultOptions();
  const OC = new OffcloudClient(apiKey, options);
  const hashBatches = chunkArray(streams.map(stream => stream.infoHash), 100);
  const available = await Promise.all(hashBatches.map(hashes => OC.instant.cache(hashes)))
      .then(results => results.map(result => result.cachedItems))
      .then(results => results.reduce((all, result) => all.concat(result), []))
      .catch(error => {
        if (toCommonError(error)) {
          return Promise.reject(error);
        }
        console.warn('Failed Offcloud cached torrent availability request:', error);
        return undefined;
      });
  return available && streams
      .reduce((mochStreams, stream) => {
        const isCached = available.includes(stream.infoHash);
        const streamTitleParts = stream.title.replace(/\n👤.*/s, '').split('\n');
        const fileName = streamTitleParts[streamTitleParts.length - 1];
        const encodedFileName = encodeURIComponent(fileName);
        mochStreams[`${stream.infoHash}@${stream.fileIdx}`] = {
          url: `${apiKey}/${stream.infoHash}/${encodedFileName}/${stream.fileIdx}`,
          cached: isCached
        };
        return mochStreams;
      }, {});
}

export async function getCatalog(apiKey, offset = 0) {
  if (offset > 0) {
    return [];
  }
  const options = await getDefaultOptions();
  const OC = new OffcloudClient(apiKey, options);
  return OC.cloud.history()
      .then(torrents => (torrents || [])
          .map(torrent => ({
            id: `${KEY}:${torrent.requestId}`,
            type: Type.OTHER,
            name: torrent.fileName
          })));
}

export async function getItemMeta(itemId, apiKey, ip) {
  const options = await getDefaultOptions(ip);
  const OC = new OffcloudClient(apiKey, options);
  const torrents = await OC.cloud.history();
  const torrent = torrents.find(torrent => torrent.requestId === itemId);
  const infoHash = torrent && magnet.decode(torrent.originalLink).infoHash;
  const createDate = torrent ? new Date(torrent.createdOn) : new Date();
  return _getFileUrls(OC, torrent)
      .then(files => ({
        id: `${KEY}:${itemId}`,
        type: Type.OTHER,
        name: torrent.name,
        infoHash: infoHash,
        videos: files
            .filter(file => isVideo(file))
            .map((file, index) => ({
              id: `${KEY}:${itemId}:${index}`,
              title: file.split('/').pop(),
              released: new Date(createDate.getTime() - index).toISOString(),
              streams: [{ url: file }]
            }))
      }));
}

export async function resolve({ ip, apiKey, infoHash, cachedEntryInfo, fileIndex }) {
  console.log(`Unrestricting Offcloud ${infoHash} [${fileIndex}]`);
  const options = await getDefaultOptions(ip);
  const OC = new OffcloudClient(apiKey, options);
  return _resolve(OC, infoHash, cachedEntryInfo, fileIndex)
      .catch(error => {
        if (errorExpiredSubscriptionError(error)) {
          console.log(`Access denied to Offcloud ${infoHash} [${fileIndex}]`);
          return StaticResponse.FAILED_ACCESS;
        }
        return Promise.reject(`Failed Offcloud adding torrent ${JSON.stringify(error)}`);
      });
}

async function _resolve(OC, infoHash, cachedEntryInfo, fileIndex) {
  const torrent = await _createOrFindTorrent(OC, infoHash)
      .then(info => info.requestId ? OC.cloud.status(info.requestId) : Promise.resolve(info))
      .then(info => info.status || info);
  if (torrent && statusReady(torrent)) {
    return _unrestrictLink(OC, infoHash, torrent, cachedEntryInfo, fileIndex);
  } else if (torrent && statusDownloading(torrent)) {
    console.log(`Downloading to Offcloud ${infoHash} [${fileIndex}]...`);
    return StaticResponse.DOWNLOADING;
  } else if (torrent && statusError(torrent)) {
    console.log(`Retry failed download in Offcloud ${infoHash} [${fileIndex}]...`);
    return _retryCreateTorrent(OC, infoHash, cachedEntryInfo, fileIndex);
  }
  return Promise.reject(`Failed Offcloud adding torrent ${JSON.stringify(torrent)}`);
}

async function _createOrFindTorrent(OC, infoHash) {
  return _findTorrent(OC, infoHash)
      .catch(() => _createTorrent(OC, infoHash));
}

async function _findTorrent(OC, infoHash) {
  const torrents = await OC.cloud.history();
  const foundTorrents = torrents.filter(torrent => torrent.originalLink.toLowerCase().includes(infoHash));
  const nonFailedTorrent = foundTorrents.find(torrent => !statusError(torrent));
  const foundTorrent = nonFailedTorrent || foundTorrents[0];
  return foundTorrent || Promise.reject('No recent torrent found');
}

async function _createTorrent(OC, infoHash) {
  const magnetLink = await getMagnetLink(infoHash);
  return OC.cloud.download(magnetLink);
}

async function _retryCreateTorrent(OC, infoHash, cachedEntryInfo, fileIndex) {
  const newTorrent = await _createTorrent(OC, infoHash);
  return newTorrent && statusReady(newTorrent.status)
      ? _unrestrictLink(OC, infoHash, newTorrent, cachedEntryInfo, fileIndex)
      : StaticResponse.FAILED_DOWNLOAD;
}

async function _unrestrictLink(OC, infoHash, torrent, cachedEntryInfo, fileIndex) {
  const targetFileName = decodeURIComponent(cachedEntryInfo);
  const files = await _getFileUrls(OC, torrent);
  const targetFile = files.find(file => file.includes(`/${torrent.requestId}/${fileIndex}/`))
      || files.find(file => sameFilename(targetFileName, file.split('/').pop()))
      || files.find(file => isVideo(file))
      || files.pop();
  if (!targetFile) {
    return Promise.reject(`No Offcloud links found for index ${fileIndex} in: ${JSON.stringify(torrent)}`);
  }
  console.log(`Unrestricted Offcloud ${infoHash} [${fileIndex}] to ${targetFile}`);
  return targetFile;
}

async function _getFileUrls(OC, torrent) {
  return OC.cloud.explore(torrent.requestId)
      .catch(error => {
        if (error === 'Bad archive') {
          return [`https://${torrent.server}.offcloud.com/cloud/download/${torrent.requestId}/${torrent.fileName}`];
        }
        throw error;
      });
}

async function getDefaultOptions(ip) {
  return { ip, timeout: 10000 };
}

export function toCommonError(error) {
  if (error?.error === 'NOAUTH' || error?.message?.startsWith('Cannot read property')) {
    return BadTokenError;
  }
  return undefined;
}

function statusDownloading(torrent) {
  return ['downloading', 'created'].includes(torrent.status);
}

function statusError(torrent) {
  return ['error', 'canceled'].includes(torrent.status);
}

function statusReady(torrent) {
  return torrent.status === 'downloaded';
}

function errorExpiredSubscriptionError(error) {
  return error?.includes && (error.includes('not_available') || error.includes('NOAUTH') || error.includes('premium membership'));
}
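The Offcloud `getCachedStreams` above checks cache availability in batches of 100 hashes and flattens the per-batch results. A self-contained sketch of that pattern, where `mockInstantCache` is a hypothetical stand-in for the real `OC.instant.cache` call:

```javascript
// Recursively split an array into chunks of at most `size`
// (same shape as the chunkArray helper used by the addon).
function chunkArray(arr, size) {
  return arr.length > size
      ? [arr.slice(0, size), ...chunkArray(arr.slice(size), size)]
      : [arr];
}

// Hypothetical stand-in for OC.instant.cache: returns which of the
// submitted hashes are already cached.
const CACHED = new Set(['hash2', 'hash5']);
async function mockInstantCache(hashes) {
  return { cachedItems: hashes.filter(hash => CACHED.has(hash)) };
}

// Batch the hashes, query each batch in parallel, flatten the results.
async function checkAvailability(hashes, batchSize = 100) {
  const batches = chunkArray(hashes, batchSize);
  const results = await Promise.all(batches.map(batch => mockInstantCache(batch)));
  return results
      .map(result => result.cachedItems)
      .reduce((all, result) => all.concat(result), []);
}

checkAvailability(['hash1', 'hash2', 'hash3', 'hash4', 'hash5'], 2)
    .then(available => console.log(available)); // → ['hash2', 'hash5']
```

Batching keeps each request under the provider's URL/payload limits while `Promise.all` still issues the batches concurrently.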

View File

@@ -9,187 +9,187 @@ import StaticResponse from './static.js';
const KEY = 'premiumize';

export async function getCachedStreams(streams, apiKey) {
  const options = await getDefaultOptions();
  const PM = new PremiumizeClient(apiKey, options);
  return Promise.all(chunkArray(streams, 100)
      .map(chunkedStreams => _getCachedStreams(PM, apiKey, chunkedStreams)))
      .then(results => results.reduce((all, result) => Object.assign(all, result), {}));
}

async function _getCachedStreams(PM, apiKey, streams) {
  const hashes = streams.map(stream => stream.infoHash);
  return PM.cache.check(hashes)
      .catch(error => {
        if (toCommonError(error)) {
          return Promise.reject(error);
        }
        console.warn('Failed Premiumize cached torrent availability request:', error);
        return undefined;
      })
      .then(available => streams
          .reduce((mochStreams, stream, index) => {
            const streamTitleParts = stream.title.replace(/\n👤.*/s, '').split('\n');
            const fileName = streamTitleParts[streamTitleParts.length - 1];
            const fileIndex = streamTitleParts.length === 2 ? stream.fileIdx : null;
            const encodedFileName = encodeURIComponent(fileName);
            mochStreams[`${stream.infoHash}@${stream.fileIdx}`] = {
              url: `${apiKey}/${stream.infoHash}/${encodedFileName}/${fileIndex}`,
              cached: available?.response[index]
            };
            return mochStreams;
          }, {}));
}

export async function getCatalog(apiKey, offset = 0) {
  if (offset > 0) {
    return [];
  }
  const options = await getDefaultOptions();
  const PM = new PremiumizeClient(apiKey, options);
  return PM.folder.list()
      .then(response => response.content)
      .then(torrents => (torrents || [])
          .filter(torrent => torrent && torrent.type === 'folder')
          .map(torrent => ({
            id: `${KEY}:${torrent.id}`,
            type: Type.OTHER,
            name: torrent.name
          })));
}

export async function getItemMeta(itemId, apiKey, ip) {
  const options = await getDefaultOptions();
  const PM = new PremiumizeClient(apiKey, options);
  const rootFolder = await PM.folder.list(itemId, null);
  const infoHash = await _findInfoHash(PM, itemId);
  return getFolderContents(PM, itemId, ip)
      .then(contents => ({
        id: `${KEY}:${itemId}`,
        type: Type.OTHER,
        name: rootFolder.name,
        infoHash: infoHash,
        videos: contents
            .map((file, index) => ({
              id: `${KEY}:${file.id}:${index}`,
              title: file.name,
              released: new Date(file.created_at * 1000 - index).toISOString(),
              streams: [{ url: file.link || file.stream_link }]
            }))
      }));
}

async function getFolderContents(PM, itemId, ip, folderPrefix = '') {
  return PM.folder.list(itemId, null, ip)
      .then(response => response.content)
      .then(contents => Promise.all(contents
          .filter(content => content.type === 'folder')
          .map(content => getFolderContents(PM, content.id, ip, [folderPrefix, content.name].join('/'))))
          .then(otherContents => otherContents.reduce((a, b) => a.concat(b), []))
          .then(otherContents => contents
              .filter(content => content.type === 'file' && isVideo(content.name))
              .map(content => ({ ...content, name: [folderPrefix, content.name].join('/') }))
              .concat(otherContents)));
}

export async function resolve({ ip, isBrowser, apiKey, infoHash, cachedEntryInfo, fileIndex }) {
  console.log(`Unrestricting Premiumize ${infoHash} [${fileIndex}] for IP ${ip} from browser=${isBrowser}`);
  const options = await getDefaultOptions();
  const PM = new PremiumizeClient(apiKey, options);
  return _getCachedLink(PM, infoHash, cachedEntryInfo, fileIndex, ip, isBrowser)
      .catch(() => _resolve(PM, infoHash, cachedEntryInfo, fileIndex, ip, isBrowser))
      .catch(error => {
        if (error?.message?.includes('Account not premium.')) {
          console.log(`Access denied to Premiumize ${infoHash} [${fileIndex}]`);
          return StaticResponse.FAILED_ACCESS;
        }
        return Promise.reject(`Failed Premiumize adding torrent ${JSON.stringify(error)}`);
      });
}

async function _resolve(PM, infoHash, cachedEntryInfo, fileIndex, ip, isBrowser) {
  const torrent = await _createOrFindTorrent(PM, infoHash);
  if (torrent && statusReady(torrent.status)) {
    return _getCachedLink(PM, infoHash, cachedEntryInfo, fileIndex, ip, isBrowser);
  } else if (torrent && statusDownloading(torrent.status)) {
    console.log(`Downloading to Premiumize ${infoHash} [${fileIndex}]...`);
    return StaticResponse.DOWNLOADING;
  } else if (torrent && statusError(torrent.status)) {
    console.log(`Retrying downloading to Premiumize ${infoHash} [${fileIndex}]...`);
    return _retryCreateTorrent(PM, infoHash, cachedEntryInfo, fileIndex);
  }
  return Promise.reject(`Failed Premiumize adding torrent ${JSON.stringify(torrent)}`);
}

async function _getCachedLink(PM, infoHash, encodedFileName, fileIndex, ip, isBrowser) {
  const cachedTorrent = await PM.transfer.directDownload(magnet.encode({ infoHash }), ip);
  if (cachedTorrent?.content?.length) {
    const targetFileName = decodeURIComponent(encodedFileName);
    const videos = cachedTorrent.content.filter(file => isVideo(file.path)).sort((a, b) => b.size - a.size);
    const targetVideo = Number.isInteger(fileIndex)
        ? videos.find(video => sameFilename(video.path, targetFileName))
        : videos[0];
    if (!targetVideo && videos.every(video => isArchive(video.path))) {
      console.log(`Only Premiumize archive is available for [${infoHash}] ${fileIndex}`);
      return StaticResponse.FAILED_RAR;
    }
    const streamLink = isBrowser && targetVideo.transcode_status === 'finished' && targetVideo.stream_link;
    const unrestrictedLink = streamLink || targetVideo.link;
    console.log(`Unrestricted Premiumize ${infoHash} [${fileIndex}] to ${unrestrictedLink}`);
    return unrestrictedLink;
  }
  return Promise.reject('No cached entry found');
}

async function _createOrFindTorrent(PM, infoHash) {
  return _findTorrent(PM, infoHash)
      .catch(() => _createTorrent(PM, infoHash));
}

async function _findTorrent(PM, infoHash) {
  const torrents = await PM.transfer.list().then(response => response.transfers);
  const foundTorrents = torrents.filter(torrent => torrent.src.toLowerCase().includes(infoHash));
  const nonFailedTorrent = foundTorrents.find(torrent => !statusError(torrent.statusCode));
  const foundTorrent = nonFailedTorrent || foundTorrents[0];
  return foundTorrent || Promise.reject('No recent torrent found');
}

async function _findInfoHash(PM, itemId) {
  const torrents = await PM.transfer.list().then(response => response.transfers);
  const foundTorrent = torrents.find(torrent => `${torrent.file_id}` === itemId || `${torrent.folder_id}` === itemId);
  return foundTorrent?.src ? magnet.decode(foundTorrent.src).infoHash : undefined;
}

async function _createTorrent(PM, infoHash) {
  const magnetLink = await getMagnetLink(infoHash);
  return PM.transfer.create(magnetLink).then(() => _findTorrent(PM, infoHash));
}

async function _retryCreateTorrent(PM, infoHash, encodedFileName, fileIndex) {
  const newTorrent = await _createTorrent(PM, infoHash).then(() => _findTorrent(PM, infoHash));
  return newTorrent && statusReady(newTorrent.status)
      ? _getCachedLink(PM, infoHash, encodedFileName, fileIndex)
      : StaticResponse.FAILED_DOWNLOAD;
}

export function toCommonError(error) {
  if (error && error.message === 'Not logged in.') {
    return BadTokenError;
  }
  return undefined;
}

function statusError(status) {
  return ['deleted', 'error', 'timeout'].includes(status);
}

function statusDownloading(status) {
  return ['waiting', 'queued', 'running'].includes(status);
}

function statusReady(status) {
  return ['finished', 'seeding'].includes(status);
}

async function getDefaultOptions(ip) {
  return { timeout: 5000 };
}
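A recurring change across these providers is keying `mochStreams` by `${infoHash}@${fileIdx}` instead of by `infoHash` alone, so two streams from the same torrent but different files no longer overwrite each other in the map. A self-contained sketch of that reduce, using hypothetical stream objects and a placeholder API key:

```javascript
// Hypothetical test data: two files from the same torrent (same infoHash).
const streams = [
  { infoHash: 'abc', fileIdx: 0, title: 'Pack/Episode.1.mkv' },
  { infoHash: 'abc', fileIdx: 1, title: 'Pack/Episode.2.mkv' }
];
const apiKey = 'example-key'; // placeholder, not a real key

// Key each entry by infoHash AND file index so entries stay distinct.
const mochStreams = streams.reduce((acc, stream) => {
  acc[`${stream.infoHash}@${stream.fileIdx}`] = {
    url: `${apiKey}/${stream.infoHash}/${encodeURIComponent(stream.title)}/${stream.fileIdx}`,
    cached: false
  };
  return acc;
}, {});

console.log(Object.keys(mochStreams)); // → ['abc@0', 'abc@1']
```

With the old `infoHash`-only key, this example would have collapsed to a single entry for `'abc'`.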

View File

@@ -11,205 +11,205 @@ const PutioAPI = PutioClient.default;
const KEY = 'putio';
export async function getCachedStreams(streams, apiKey) {
  return streams
      .reduce((mochStreams, stream) => {
        const streamTitleParts = stream.title.replace(/\n👤.*/s, '').split('\n');
        const fileName = streamTitleParts[streamTitleParts.length - 1];
        const fileIndex = streamTitleParts.length === 2 ? stream.fileIdx : null;
        const encodedFileName = encodeURIComponent(fileName);
        mochStreams[`${stream.infoHash}@${stream.fileIdx}`] = {
          url: `${apiKey}/${stream.infoHash}/${encodedFileName}/${fileIndex}`,
          cached: false
        };
        return mochStreams;
      }, {});
}
export async function getCatalog(apiKey, offset = 0) {
  if (offset > 0) {
    return [];
  }
  const Putio = createPutioAPI(apiKey);
  return Putio.Files.Query(0)
      .then(response => response?.body?.files)
      .then(files => (files || [])
          .map(file => ({
            id: `${KEY}:${file.id}`,
            type: Type.OTHER,
            name: file.name
          })));
}
export async function getItemMeta(itemId, apiKey) {
  const Putio = createPutioAPI(apiKey);
  const infoHash = await _findInfoHash(Putio, itemId);
  return getFolderContents(Putio, itemId)
      .then(contents => ({
        id: `${KEY}:${itemId}`,
        type: Type.OTHER,
        name: contents.name,
        infoHash: infoHash,
        videos: contents
            .map((file, index) => ({
              id: `${KEY}:${file.id}:${index}`,
              title: file.name,
              released: new Date(file.created_at).toISOString(),
              streams: [{ url: `${apiKey}/null/null/${file.id}` }]
            }))
      }));
}
async function getFolderContents(Putio, itemId, folderPrefix = '') {
  return await Putio.Files.Query(itemId)
      .then(response => response?.body)
      .then(body => body?.files?.length ? body.files : [body?.parent].filter(x => x))
      .then(contents => Promise.all(contents
          .filter(content => content.file_type === 'FOLDER')
          .map(content => getFolderContents(Putio, content.id, [folderPrefix, content.name].join('/'))))
          .then(otherContents => otherContents.reduce((a, b) => a.concat(b), []))
          .then(otherContents => contents
              .filter(content => content.file_type === 'VIDEO')
              .map(content => ({ ...content, name: [folderPrefix, content.name].join('/') }))
              .concat(otherContents)));
}
export async function resolve({ ip, apiKey, infoHash, cachedEntryInfo, fileIndex }) {
  console.log(`Unrestricting Putio ${infoHash} [${fileIndex}]`);
  const Putio = createPutioAPI(apiKey);
  return _resolve(Putio, infoHash, cachedEntryInfo, fileIndex)
      .catch(error => {
        if (error?.data?.status_code === 401) {
          console.log(`Access denied to Putio ${infoHash} [${fileIndex}]`);
          return StaticResponse.FAILED_ACCESS;
        }
        return Promise.reject(`Failed Putio adding torrent ${JSON.stringify(error.data || error)}`);
      });
}
async function _resolve(Putio, infoHash, cachedEntryInfo, fileIndex) {
  if (infoHash === 'null') {
    return _unrestrictVideo(Putio, fileIndex);
  }
  const torrent = await _createOrFindTorrent(Putio, infoHash);
  if (torrent && statusReady(torrent.status)) {
    return _unrestrictLink(Putio, torrent, cachedEntryInfo, fileIndex);
  } else if (torrent && statusDownloading(torrent.status)) {
    console.log(`Downloading to Putio ${infoHash} [${fileIndex}]...`);
    return StaticResponse.DOWNLOADING;
  } else if (torrent && statusError(torrent.status)) {
    console.log(`Retrying downloading to Putio ${infoHash} [${fileIndex}]...`);
    return _retryCreateTorrent(Putio, infoHash, cachedEntryInfo, fileIndex);
  }
  return Promise.reject("Failed Putio adding torrent");
}
async function _createOrFindTorrent(Putio, infoHash) {
  return _findTorrent(Putio, infoHash)
      .catch(() => _createTorrent(Putio, infoHash));
}
async function _retryCreateTorrent(Putio, infoHash, encodedFileName, fileIndex) {
  const newTorrent = await _createTorrent(Putio, infoHash);
  return newTorrent && statusReady(newTorrent.status)
      ? _unrestrictLink(Putio, newTorrent, encodedFileName, fileIndex)
      : StaticResponse.FAILED_DOWNLOAD;
}
async function _findTorrent(Putio, infoHash) {
  const torrents = await Putio.Transfers.Query().then(response => response.data.transfers);
  const foundTorrents = torrents.filter(torrent => torrent.source.toLowerCase().includes(infoHash));
  const nonFailedTorrent = foundTorrents.find(torrent => !statusError(torrent.status));
  const foundTorrent = nonFailedTorrent || foundTorrents[0];
  if (foundTorrent && !foundTorrent.userfile_exists) {
    // cancel transfers whose files no longer exist, then reject so the torrent gets recreated
    return await Putio.Transfers.Cancel(foundTorrent.id).then(() => Promise.reject());
  }
  return foundTorrent || Promise.reject('No recent torrent found in Putio');
}
async function _findInfoHash(Putio, fileId) {
  const torrents = await Putio.Transfers.Query().then(response => response?.data?.transfers);
  const foundTorrent = torrents.find(torrent => `${torrent.file_id}` === fileId);
  return foundTorrent?.source ? decode(foundTorrent.source).infoHash : undefined;
}
async function _createTorrent(Putio, infoHash) {
  const magnetLink = await getMagnetLink(infoHash);
  // Add the torrent, then poll until put.io has processed it and check its status.
  return Putio.Transfers.Add({ url: magnetLink })
      .then(response => _getNewTorrent(Putio, response.data.transfer.id));
}
async function _getNewTorrent(Putio, torrentId, pollCounter = 0, pollRate = 2000, maxPollNumber = 15) {
  return Putio.Transfers.Get(torrentId)
      .then(response => response.data.transfer)
      .then(torrent => statusProcessing(torrent.status) && pollCounter < maxPollNumber
          ? delay(pollRate).then(() => _getNewTorrent(Putio, torrentId, pollCounter + 1))
          : torrent);
}
async function _unrestrictLink(Putio, torrent, encodedFileName, fileIndex) {
  const targetVideo = await _getTargetFile(Putio, torrent, encodedFileName, fileIndex);
  return _unrestrictVideo(Putio, targetVideo.id);
}
async function _unrestrictVideo(Putio, videoId) {
  const response = await Putio.File.GetStorageURL(videoId);
  const downloadUrl = response.data.url;
  console.log(`Unrestricted Putio [${videoId}] to ${downloadUrl}`);
  return downloadUrl;
}
async function _getTargetFile(Putio, torrent, encodedFileName, fileIndex) {
  const targetFileName = decodeURIComponent(encodedFileName);
  let targetFile;
  let files = await _getFiles(Putio, torrent.file_id);
  let videos = [];
  while (!targetFile && files.length) {
    const folders = files.filter(file => file.file_type === 'FOLDER');
    videos = videos.concat(files.filter(file => isVideo(file.name))).sort((a, b) => b.size - a.size);
    // when a specific file index is defined search by filename
    // when it's not defined find all videos and take the largest one
    targetFile = Number.isInteger(fileIndex)
        ? videos.find(video => sameFilename(targetFileName, video.name))
        : !folders.length && videos[0];
    files = !targetFile
        ? await Promise.all(folders.map(folder => _getFiles(Putio, folder.id)))
            .then(results => results.reduce((a, b) => a.concat(b), []))
        : [];
  }
  return targetFile || Promise.reject(`No target file found for Putio [${torrent.hash}] ${targetFileName}`);
}
async function _getFiles(Putio, fileId) {
  const response = await Putio.Files.Query(fileId)
      .catch(error => Promise.reject({ ...error.data, path: error.request.path }));
  return response.data.files.length
      ? response.data.files
      : [response.data.parent];
}
function createPutioAPI(apiKey) {
  const clientId = apiKey.replace(/@.*/, '');
  const token = apiKey.replace(/.*@/, '');
  const Putio = new PutioAPI({ clientID: clientId });
  Putio.setToken(token);
  return Putio;
}
function statusError(status) {
  return ['ERROR'].includes(status);
}
function statusDownloading(status) {
  return ['WAITING', 'IN_QUEUE', 'DOWNLOADING'].includes(status);
}
function statusProcessing(status) {
  return ['WAITING', 'IN_QUEUE', 'COMPLETING'].includes(status);
}
function statusReady(status) {
  return ['COMPLETED', 'SEEDING'].includes(status);
}
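`createPutioAPI` above packs both credentials into one configuration string, split around an `@` separator: everything before the first `@` is the OAuth client ID, everything after the last `@` is the token. A standalone sketch of just that parsing (the `splitPutioKey` helper and the sample key value are made up for illustration):

```javascript
// Same regex splitting as createPutioAPI above; the key value below is fabricated.
function splitPutioKey(apiKey) {
  const clientId = apiKey.replace(/@.*/, ''); // strip from the first '@' onward
  const token = apiKey.replace(/.*@/, '');    // strip up to and including the last '@'
  return { clientId, token };
}

const parsed = splitPutioKey('1234@SAMPLETOKEN');
```

Because the first regex is greedy from the first `@` and the second is greedy up to the last `@`, a stray `@` inside the token would be swallowed, so the token itself must not contain one.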


@@ -15,385 +15,385 @@ const KEY = 'realdebrid';
const DEBRID_DOWNLOADS = 'Downloads';
export async function getCachedStreams(streams, apiKey) {
  const hashes = streams.map(stream => stream.infoHash);
  const available = await _getInstantAvailable(hashes, apiKey);
  return available && streams
      .reduce((mochStreams, stream) => {
        const cachedEntry = available[stream.infoHash];
        const cachedIds = _getCachedFileIds(stream.fileIdx, cachedEntry);
        mochStreams[`${stream.infoHash}@${stream.fileIdx}`] = {
          url: `${apiKey}/${stream.infoHash}/null/${stream.fileIdx}`,
          cached: !!cachedIds.length
        };
        return mochStreams;
      }, {});
}
async function _getInstantAvailable(hashes, apiKey, retries = 3, maxChunkSize = 150) {
  const cachedResults = await getCachedAvailabilityResults(hashes);
  const missingHashes = hashes.filter(infoHash => !cachedResults[infoHash]);
  if (!missingHashes.length) {
    return cachedResults;
  }
  const options = await getDefaultOptions();
  const RD = new RealDebridClient(apiKey, options);
  const hashBatches = chunkArray(missingHashes, maxChunkSize);
  return Promise.all(hashBatches.map(batch => RD.torrents.instantAvailability(batch)
      .then(response => {
        if (typeof response !== 'object') {
          return Promise.reject(new Error('RD returned non-JSON response: ' + response));
        }
        return processAvailabilityResults(response);
      })))
      .then(results => results.reduce((all, result) => Object.assign(all, result), {}))
      .then(results => cacheAvailabilityResults(results))
      .then(results => Object.assign(cachedResults, results))
      .catch(error => {
        if (toCommonError(error)) {
          return Promise.reject(error);
        }
        if (!error && maxChunkSize !== 1) {
          // RD sometimes answers a large request with an empty body; retry with a smaller chunk size to shrink the response
          console.log(`Reducing chunk size for availability request: ${hashes[0]}`);
          return _getInstantAvailable(hashes, apiKey, retries - 1, Math.ceil(maxChunkSize / 10));
        }
        if (retries > 0 && NON_BLACKLIST_ERRORS.some(v => error?.message?.includes(v))) {
          return _getInstantAvailable(hashes, apiKey, retries - 1);
        }
        console.warn(`Failed RealDebrid cached [${hashes[0]}] torrent availability request:`, error.message);
        return undefined;
      });
}
function processAvailabilityResults(availabilityResults) {
  const processedResults = {};
  Object.entries(availabilityResults)
      .forEach(([infoHash, hosterResults]) => processedResults[infoHash] = getCachedIds(hosterResults));
  return processedResults;
}
function getCachedIds(hosterResults) {
  if (!hosterResults || Array.isArray(hosterResults)) {
    return [];
  }
  // if not all cached files are videos, the torrent will be packed into a RAR archive
  return Object.values(hosterResults)
      .reduce((a, b) => a.concat(b), [])
      .filter(cached => Object.keys(cached).length && Object.values(cached).every(file => isVideo(file.filename)))
      .map(cached => Object.keys(cached))
      .sort((a, b) => b.length - a.length)
      .filter((cached, index, array) => index === 0 || cached.some(id => !array[0].includes(id)));
}
function _getCachedFileIds(fileIndex, cachedResults) {
  if (!cachedResults || !Array.isArray(cachedResults)) {
    return [];
  }
  const cachedIds = Number.isInteger(fileIndex)
      ? cachedResults.find(ids => ids.includes(`${fileIndex + 1}`))
      : cachedResults[0];
  return cachedIds || [];
}
export async function getCatalog(apiKey, offset, ip) {
  if (offset > 0) {
    return [];
  }
  const options = await getDefaultOptions(ip);
  const RD = new RealDebridClient(apiKey, options);
  const downloadsMeta = {
    id: `${KEY}:${DEBRID_DOWNLOADS}`,
    type: Type.OTHER,
    name: DEBRID_DOWNLOADS
  };
  const torrentMetas = await _getAllTorrents(RD)
      .then(torrents => Array.isArray(torrents) ? torrents : [])
      .then(torrents => torrents
          .filter(torrent => torrent && statusReady(torrent.status))
          .map(torrent => ({
            id: `${KEY}:${torrent.id}`,
            type: Type.OTHER,
            name: torrent.filename
          })));
  return [downloadsMeta].concat(torrentMetas);
}
export async function getItemMeta(itemId, apiKey, ip) {
  const options = await getDefaultOptions(ip);
  const RD = new RealDebridClient(apiKey, options);
  if (itemId === DEBRID_DOWNLOADS) {
    const videos = await _getAllDownloads(RD)
        .then(downloads => downloads
            .map(download => ({
              id: `${KEY}:${DEBRID_DOWNLOADS}:${download.id}`,
              // infoHash: allTorrents
              //     .filter(torrent => (torrent.links || []).find(link => link === download.link))
              //     .map(torrent => torrent.hash.toLowerCase())[0],
              title: download.filename,
              released: new Date(download.generated).toISOString(),
              streams: [{ url: download.download }]
            })));
    return {
      id: `${KEY}:${DEBRID_DOWNLOADS}`,
      type: Type.OTHER,
      name: DEBRID_DOWNLOADS,
      videos: videos
    };
  }
  return _getTorrentInfo(RD, itemId)
      .then(torrent => ({
        id: `${KEY}:${torrent.id}`,
        type: Type.OTHER,
        name: torrent.filename,
        infoHash: torrent.hash.toLowerCase(),
        videos: torrent.files
            .filter(file => file.selected)
            .filter(file => isVideo(file.path))
            .map((file, index) => ({
              id: `${KEY}:${torrent.id}:${file.id}`,
              title: file.path,
              released: new Date(new Date(torrent.added).getTime() - index).toISOString(),
              streams: [{ url: `${apiKey}/${torrent.hash.toLowerCase()}/null/${file.id - 1}` }]
            }))
      }));
}
async function _getAllTorrents(RD, page = 1) {
  return RD.torrents.get(page - 1, page, CATALOG_PAGE_SIZE)
      .then(torrents => torrents && torrents.length === CATALOG_PAGE_SIZE && page < CATALOG_MAX_PAGE
          ? _getAllTorrents(RD, page + 1)
              .then(nextTorrents => torrents.concat(nextTorrents))
              .catch(() => torrents)
          : torrents);
}
async function _getAllDownloads(RD, page = 1) {
  return RD.downloads.get(page - 1, page, CATALOG_PAGE_SIZE);
}
export async function resolve({ ip, isBrowser, apiKey, infoHash, fileIndex }) {
  console.log(`Unrestricting RealDebrid ${infoHash} [${fileIndex}]`);
  const options = await getDefaultOptions(ip);
  const RD = new RealDebridClient(apiKey, options);
  const cachedFileIds = await _resolveCachedFileIds(infoHash, fileIndex, apiKey);
  return _resolve(RD, infoHash, cachedFileIds, fileIndex, isBrowser)
      .catch(error => {
        if (accessDeniedError(error)) {
          console.log(`Access denied to RealDebrid ${infoHash} [${fileIndex}]`);
          return StaticResponse.FAILED_ACCESS;
        }
        if (infringingFile(error)) {
          console.log(`Infringing file removed from RealDebrid ${infoHash} [${fileIndex}]`);
          return StaticResponse.FAILED_INFRINGEMENT;
        }
        return Promise.reject(`Failed RealDebrid adding torrent ${JSON.stringify(error)}`);
      });
}
async function _resolveCachedFileIds(infoHash, fileIndex, apiKey) {
  const available = await _getInstantAvailable([infoHash], apiKey);
  const cachedEntry = available?.[infoHash];
  const cachedIds = _getCachedFileIds(fileIndex, cachedEntry);
  return cachedIds?.join(',');
}
async function _resolve(RD, infoHash, cachedFileIds, fileIndex, isBrowser) {
  const torrentId = await _createOrFindTorrentId(RD, infoHash, cachedFileIds, fileIndex);
  const torrent = await _getTorrentInfo(RD, torrentId);
  if (torrent && statusReady(torrent.status)) {
    return _unrestrictLink(RD, torrent, fileIndex, isBrowser);
  } else if (torrent && statusDownloading(torrent.status)) {
    console.log(`Downloading to RealDebrid ${infoHash} [${fileIndex}]...`);
    return StaticResponse.DOWNLOADING;
  } else if (torrent && statusMagnetError(torrent.status)) {
    console.log(`Failed RealDebrid opening torrent ${infoHash} [${fileIndex}] due to magnet error`);
    return StaticResponse.FAILED_OPENING;
  } else if (torrent && statusError(torrent.status)) {
    return _retryCreateTorrent(RD, infoHash, fileIndex);
  } else if (torrent && (statusWaitingSelection(torrent.status) || statusOpening(torrent.status))) {
    console.log(`Trying to select files on RealDebrid ${infoHash} [${fileIndex}]...`);
    return _selectTorrentFiles(RD, torrent)
        .then(() => {
          console.log(`Downloading to RealDebrid ${infoHash} [${fileIndex}]...`);
          return StaticResponse.DOWNLOADING;
        })
        .catch(error => {
          console.log(`Failed RealDebrid opening torrent ${infoHash} [${fileIndex}]:`, error);
          return StaticResponse.FAILED_OPENING;
        });
  }
  return Promise.reject(`Failed RealDebrid adding torrent ${JSON.stringify(torrent)}`);
}
async function _createOrFindTorrentId(RD, infoHash, cachedFileIds, fileIndex) {
  return _findTorrent(RD, infoHash, fileIndex)
      .catch(() => _createTorrentId(RD, infoHash, cachedFileIds));
}
async function _findTorrent(RD, infoHash, fileIndex) {
  const torrents = await RD.torrents.get(0, 1) || [];
  const foundTorrents = torrents
      .filter(torrent => torrent.hash.toLowerCase() === infoHash)
      .filter(torrent => !statusError(torrent.status));
  const foundTorrent = await _findBestFitTorrent(RD, foundTorrents, fileIndex);
  return foundTorrent?.id || Promise.reject('No recent torrent found');
}
async function _findBestFitTorrent(RD, torrents, fileIndex) {
  if (torrents.length === 1) {
    return torrents[0];
  }
  const torrentInfos = await Promise.all(torrents.map(torrent => _getTorrentInfo(RD, torrent.id)));
  const bestFitTorrents = torrentInfos
      .filter(torrent => torrent.files.find(f => f.id === fileIndex + 1 && f.selected))
      .sort((a, b) => b.links.length - a.links.length);
  return bestFitTorrents[0] || torrents[0];
}
async function _getTorrentInfo(RD, torrentId) {
  if (!torrentId || typeof torrentId === 'object') {
    return torrentId || Promise.reject('No RealDebrid torrentId provided');
  }
  return RD.torrents.info(torrentId);
}
async function _createTorrentId(RD, infoHash, cachedFileIds) {
  const magnetLink = await getMagnetLink(infoHash);
  const addedMagnet = await RD.torrents.addMagnet(magnetLink);
  if (cachedFileIds && !['null', 'undefined'].includes(cachedFileIds)) {
    await RD.torrents.selectFiles(addedMagnet.id, cachedFileIds);
  }
  return addedMagnet.id;
}
async function _recreateTorrentId(RD, infoHash, fileIndex) {
  const newTorrentId = await _createTorrentId(RD, infoHash);
  await _selectTorrentFiles(RD, { id: newTorrentId }, fileIndex);
  return newTorrentId;
}
async function _retryCreateTorrent(RD, infoHash, fileIndex) {
  console.log(`Retry failed download in RealDebrid ${infoHash} [${fileIndex}]...`);
  const newTorrentId = await _recreateTorrentId(RD, infoHash, fileIndex);
  const newTorrent = await _getTorrentInfo(RD, newTorrentId);
  return newTorrent && statusReady(newTorrent.status)
? _unrestrictLink(RD, newTorrent, fileIndex) ? _unrestrictLink(RD, newTorrent, fileIndex)
: StaticResponse.FAILED_DOWNLOAD; : StaticResponse.FAILED_DOWNLOAD;
} }
async function _selectTorrentFiles(RD, torrent, fileIndex) { async function _selectTorrentFiles(RD, torrent, fileIndex) {
torrent = statusWaitingSelection(torrent.status) ? torrent : await _openTorrent(RD, torrent.id); torrent = statusWaitingSelection(torrent.status) ? torrent : await _openTorrent(RD, torrent.id);
if (torrent?.files && statusWaitingSelection(torrent.status)) { if (torrent?.files && statusWaitingSelection(torrent.status)) {
const videoFileIds = Number.isInteger(fileIndex) ? `${fileIndex + 1}` : torrent.files const videoFileIds = Number.isInteger(fileIndex) ? `${fileIndex + 1}` : torrent.files
.filter(file => isVideo(file.path)) .filter(file => isVideo(file.path))
.filter(file => file.bytes > MIN_SIZE) .filter(file => file.bytes > MIN_SIZE)
.map(file => file.id) .map(file => file.id)
.join(','); .join(',');
return RD.torrents.selectFiles(torrent.id, videoFileIds); return RD.torrents.selectFiles(torrent.id, videoFileIds);
} }
return Promise.reject('Failed RealDebrid torrent file selection') return Promise.reject('Failed RealDebrid torrent file selection')
} }
async function _openTorrent(RD, torrentId, pollCounter = 0, pollRate = 2000, maxPollNumber = 15) {
  return _getTorrentInfo(RD, torrentId)
      .then(torrent => torrent && statusOpening(torrent.status) && pollCounter < maxPollNumber
          ? delay(pollRate).then(() => _openTorrent(RD, torrentId, pollCounter + 1))
          : torrent);
}
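`_openTorrent` is an instance of a fixed-rate polling loop: re-fetch the torrent until it leaves the transient `magnet_conversion` state, or give up after a bounded number of attempts. The same pattern extracted into a generic helper (names are illustrative, not part of the RealDebrid client):

```javascript
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

// Keeps re-fetching while the resource is still in a transient state,
// up to maxPolls attempts; returns the last fetched result either way.
async function pollUntil(fetchFn, isStillPending, pollRate = 2000, maxPolls = 15) {
  let result = await fetchFn();
  for (let attempt = 0; attempt < maxPolls && isStillPending(result); attempt++) {
    await delay(pollRate);
    result = await fetchFn();
  }
  return result; // may still be pending if the attempt budget ran out
}
```

Callers therefore always re-check the status on the returned value, exactly as `_selectTorrentFiles` re-tests `statusWaitingSelection` after `_openTorrent` returns.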
async function _unrestrictLink(RD, torrent, fileIndex, isBrowser) {
  const targetFile = torrent.files.find(file => file.id === fileIndex + 1)
      || torrent.files.filter(file => file.selected).sort((a, b) => b.bytes - a.bytes)[0];
  if (!targetFile.selected) {
    console.log(`Target RealDebrid file is not downloaded: ${JSON.stringify(targetFile)}`);
    await _recreateTorrentId(RD, torrent.hash.toLowerCase(), fileIndex);
    return StaticResponse.DOWNLOADING;
  }
  const selectedFiles = torrent.files.filter(file => file.selected);
  const fileLink = torrent.links.length === 1
      ? torrent.links[0]
      : torrent.links[selectedFiles.indexOf(targetFile)];
  if (!fileLink?.length) {
    console.log(`No RealDebrid links found for ${torrent.hash} [${fileIndex}]`);
    return _retryCreateTorrent(RD, torrent.hash, fileIndex);
  }
  return _unrestrictFileLink(RD, fileLink, torrent, fileIndex, isBrowser);
}

async function _unrestrictFileLink(RD, fileLink, torrent, fileIndex, isBrowser) {
  return RD.unrestrict.link(fileLink)
      .then(response => {
        if (isArchive(response.download)) {
          if (torrent.files.filter(file => file.selected).length > 1) {
            return _retryCreateTorrent(RD, torrent.hash, fileIndex);
          }
          return StaticResponse.FAILED_RAR;
        }
        // if (isBrowser && response.streamable) {
        //   return RD.streaming.transcode(response.id)
        //       .then(streamResponse => streamResponse.apple.full)
        // }
        return response.download;
      })
      .then(unrestrictedLink => {
        console.log(`Unrestricted RealDebrid ${torrent.hash} [${fileIndex}] to ${unrestrictedLink}`);
        return unrestrictedLink;
      })
      .catch(error => {
        if (error.code === 19) {
          return _retryCreateTorrent(RD, torrent.hash.toLowerCase(), fileIndex);
        }
        return Promise.reject(error);
      });
}
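The link lookup in `_unrestrictLink` relies on RealDebrid exposing one entry in `links` per *selected* file, in selection order, so the target link is located by the target file's position among the selected files rather than by its file id. Isolated as a pure function for clarity (a sketch, not the module's exported API):

```javascript
// Given a torrent info object with files [{ id, selected, bytes }] and a
// parallel links array for selected files, resolve the link for fileIndex.
function pickFileLink(torrent, fileIndex) {
  const targetFile = torrent.files.find(file => file.id === fileIndex + 1)
      || torrent.files.filter(file => file.selected).sort((a, b) => b.bytes - a.bytes)[0];
  const selectedFiles = torrent.files.filter(file => file.selected);
  return torrent.links.length === 1
      ? torrent.links[0]
      : torrent.links[selectedFiles.indexOf(targetFile)];
}
```

When the requested file was never selected, the index lookup misses, which is exactly the `!fileLink?.length` case that triggers `_retryCreateTorrent` above.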
export function toCommonError(error) {
  if (error && error.code === 8) {
    return BadTokenError;
  }
  if (error && accessDeniedError(error)) {
    return AccessDeniedError;
  }
  return undefined;
}

function statusError(status) {
  return ['error', 'magnet_error'].includes(status);
}

function statusMagnetError(status) {
  return status === 'magnet_error';
}

function statusOpening(status) {
  return status === 'magnet_conversion';
}

function statusWaitingSelection(status) {
  return status === 'waiting_files_selection';
}

function statusDownloading(status) {
  return ['downloading', 'uploading', 'queued'].includes(status);
}

function statusReady(status) {
  return ['downloaded', 'dead'].includes(status);
}

function accessDeniedError(error) {
  return [9, 20].includes(error?.code);
}

function infringingFile(error) {
  return error && error.code === 35;
}

async function getDefaultOptions(ip) {
  return { ip, timeout: 15000 };
}


@@ -17,7 +17,6 @@
  <PackageReference Include="Microsoft.Extensions.Http" Version="8.0.0" />
  <PackageReference Include="Microsoft.Extensions.Http.Polly" Version="8.0.3" />
  <PackageReference Include="Polly" Version="8.3.1" />
- <PackageReference Include="PromKnight.ParseTorrentTitle" Version="1.0.4" />
  <PackageReference Include="Serilog" Version="3.1.1" />
  <PackageReference Include="Serilog.AspNetCore" Version="8.0.1" />
  <PackageReference Include="Serilog.Sinks.Console" Version="5.0.1" />
@@ -29,10 +28,30 @@
  <None Include="Configuration\logging.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
+ <None Update="requirements.txt">
+   <CopyToOutputDirectory>Always</CopyToOutputDirectory>
+ </None>
+ <Content Remove="eng\**" />
+ <None Remove="eng\**" />
+</ItemGroup>
+<ItemGroup Condition="'$(Configuration)' == 'Debug'">
+ <Content Remove="python\**" />
+ <None Include="python\**">
+   <CopyToOutputDirectory>Always</CopyToOutputDirectory>
+ </None>
</ItemGroup>
<ItemGroup>
  <ProjectReference Include="..\shared\SharedContracts.csproj" />
</ItemGroup>
+<ItemGroup>
+ <Compile Remove="eng\**" />
+</ItemGroup>
+<ItemGroup>
+ <EmbeddedResource Remove="eng\**" />
+</ItemGroup>
</Project>


@@ -6,6 +6,12 @@ Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "SharedContracts", "..\share
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "shared", "shared", "{2C0A0F53-28E6-404F-9EFE-DADFBEF8338B}"
EndProject
+Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "eng", "eng", "{72A042C3-B4F3-45C5-AC20-041FE8F41EFC}"
+	ProjectSection(SolutionItems) = preProject
+		eng\install-python-reqs.ps1 = eng\install-python-reqs.ps1
+		eng\install-python-reqs.sh = eng\install-python-reqs.sh
+	EndProjectSection
+EndProject
Global
	GlobalSection(SolutionConfigurationPlatforms) = preSolution
		Debug|Any CPU = Debug|Any CPU


@@ -9,12 +9,23 @@ RUN dotnet restore -a $TARGETARCH
RUN dotnet publish -c Release --no-restore -o /src/out -a $TARGETARCH

-FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine
+FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine3.19
WORKDIR /app
+ENV PYTHONUNBUFFERED=1
+RUN apk add --update --no-cache python3=~3.11.8-r0 py3-pip && ln -sf python3 /usr/bin/python
COPY --from=build /src/out .
+RUN rm -rf /app/python && mkdir -p /app/python
+RUN pip3 install -r /app/requirements.txt -t /app/python
RUN addgroup -S debrid && adduser -S -G debrid debrid
USER debrid
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
  CMD pgrep -f dotnet || exit 1
+ENV PYTHONNET_PYDLL=/usr/lib/libpython3.11.so.1.0
ENTRYPOINT ["dotnet", "DebridCollector.dll"]


@@ -1,5 +1,3 @@
-using DebridCollector.Features.Configuration;
namespace DebridCollector.Extensions;

public static class ServiceCollectionExtensions
@@ -17,7 +15,8 @@ public static class ServiceCollectionExtensions
        var serviceConfiguration = services.LoadConfigurationFromEnv<DebridCollectorConfiguration>();
        services.AddRealDebridClient(serviceConfiguration);
-       services.AddSingleton<IParseTorrentTitle, ParseTorrentTitle>();
+       services.RegisterPythonEngine();
+       services.AddSingleton<IRankTorrentName, RankTorrentName>();
        services.AddHostedService<DebridRequestProcessor>();
        return services;
@@ -62,7 +61,10 @@
        cfg.UseMessageRetry(r => r.Intervals(1000,2000,5000));
        cfg.UseInMemoryOutbox();
    })
-   .RedisRepository(redisConfiguration.ConnectionString)
+   .RedisRepository(redisConfiguration.ConnectionString, options =>
+   {
+       options.KeyPrefix = "debrid-collector:";
+   })
    .Endpoint(
        e =>
        {


@@ -1,6 +1,4 @@
-using DebridCollector.Features.Configuration;
-
namespace DebridCollector.Features.Debrid;
public static class ServiceCollectionExtensions
{


@@ -3,10 +3,11 @@ namespace DebridCollector.Features.Worker;
public static class DebridMetaToTorrentMeta
{
    public static IReadOnlyList<TorrentFile> MapMetadataToFilesCollection(
-       IParseTorrentTitle torrentTitle,
+       IRankTorrentName rankTorrentName,
        Torrent torrent,
        string ImdbId,
-       FileDataDictionary Metadata)
+       FileDataDictionary Metadata,
+       ILogger<WriteMetadataConsumer> logger)
    {
        try
        {
@@ -26,23 +27,30 @@ public static class DebridMetaToTorrentMeta
                Size = metadataEntry.Value.Filesize.GetValueOrDefault(),
            };
-           var parsedTitle = torrentTitle.Parse(file.Title);
-           file.ImdbSeason = parsedTitle.Seasons.FirstOrDefault();
-           file.ImdbEpisode = parsedTitle.Episodes.FirstOrDefault();
+           var parsedTitle = rankTorrentName.Parse(file.Title, false);
+           if (!parsedTitle.Success)
+           {
+               logger.LogWarning("Failed to parse title {Title} for metadata mapping", file.Title);
+               continue;
+           }
+           file.ImdbSeason = parsedTitle.Response?.Season?.FirstOrDefault() ?? 0;
+           file.ImdbEpisode = parsedTitle.Response?.Episode?.FirstOrDefault() ?? 0;
            files.Add(file);
        }
        return files;
    }
-   catch (Exception)
+   catch (Exception ex)
    {
+       logger.LogWarning("Failed to map metadata to files collection: {Exception}", ex.Message);
        return [];
    }
}

-public static async Task<IReadOnlyList<SubtitleFile>> MapMetadataToSubtitlesCollection(IDataStorage storage, string InfoHash, FileDataDictionary Metadata)
+public static async Task<IReadOnlyList<SubtitleFile>> MapMetadataToSubtitlesCollection(IDataStorage storage, string InfoHash, FileDataDictionary Metadata, ILogger<WriteMetadataConsumer> logger)
{
    try
    {
@@ -74,8 +82,9 @@ public static class DebridMetaToTorrentMeta
        return files;
    }
-   catch (Exception)
+   catch (Exception ex)
    {
+       logger.LogWarning("Failed to map metadata to subtitles collection: {Exception}", ex.Message);
        return [];
    }
}


@@ -53,6 +53,12 @@ public class InfohashMetadataSagaStateMachine : MassTransitStateMachine<Infohash
.Then(
    context =>
    {
+       if (!context.Message.WithFiles)
+       {
+           logger.LogInformation("No files written for torrent {InfoHash} in Saga {SagaId}", context.Saga.Torrent.InfoHash, context.Saga.CorrelationId);
+           return;
+       }
        logger.LogInformation("Metadata Written for torrent {InfoHash} in Saga {SagaId}", context.Saga.Torrent.InfoHash, context.Saga.CorrelationId);
    })
.TransitionTo(Completed)


@@ -1,22 +1,22 @@
namespace DebridCollector.Features.Worker;

-[EntityName("perform-metadata-request")]
+[EntityName("perform-metadata-request-debrid-collector")]
public record PerformMetadataRequest(Guid CorrelationId, string InfoHash) : CorrelatedBy<Guid>;

-[EntityName("torrent-metadata-response")]
+[EntityName("torrent-metadata-response-debrid-collector")]
public record GotMetadata(TorrentMetadataResponse Metadata) : CorrelatedBy<Guid>
{
    public Guid CorrelationId { get; init; } = Metadata.CorrelationId;
}

-[EntityName("write-metadata")]
+[EntityName("write-metadata-debrid-collector")]
public record WriteMetadata(Torrent Torrent, TorrentMetadataResponse Metadata, string ImdbId) : CorrelatedBy<Guid>
{
    public Guid CorrelationId { get; init; } = Metadata.CorrelationId;
}

-[EntityName("metadata-written")]
-public record MetadataWritten(TorrentMetadataResponse Metadata) : CorrelatedBy<Guid>
+[EntityName("metadata-written-debrid-colloctor")]
+public record MetadataWritten(TorrentMetadataResponse Metadata, bool WithFiles) : CorrelatedBy<Guid>
{
    public Guid CorrelationId { get; init; } = Metadata.CorrelationId;
}


@@ -1,25 +1,28 @@
namespace DebridCollector.Features.Worker;

-public class WriteMetadataConsumer(IParseTorrentTitle parseTorrentTitle, IDataStorage dataStorage) : IConsumer<WriteMetadata>
+public class WriteMetadataConsumer(IRankTorrentName rankTorrentName, IDataStorage dataStorage, ILogger<WriteMetadataConsumer> logger) : IConsumer<WriteMetadata>
{
    public async Task Consume(ConsumeContext<WriteMetadata> context)
    {
        var request = context.Message;
-       var torrentFiles = DebridMetaToTorrentMeta.MapMetadataToFilesCollection(parseTorrentTitle, request.Torrent, request.ImdbId, request.Metadata.Metadata);
-       if (torrentFiles.Any())
+       var torrentFiles = DebridMetaToTorrentMeta.MapMetadataToFilesCollection(rankTorrentName, request.Torrent, request.ImdbId, request.Metadata.Metadata, logger);
+       if (!torrentFiles.Any())
        {
-           await dataStorage.InsertFiles(torrentFiles);
-           var subtitles = await DebridMetaToTorrentMeta.MapMetadataToSubtitlesCollection(dataStorage, request.Torrent.InfoHash, request.Metadata.Metadata);
-           if (subtitles.Any())
-           {
-               await dataStorage.InsertSubtitles(subtitles);
-           }
+           await context.Publish(new MetadataWritten(request.Metadata, false));
+           return;
        }
-       await context.Publish(new MetadataWritten(request.Metadata));
+       await dataStorage.InsertFiles(torrentFiles);
+       var subtitles = await DebridMetaToTorrentMeta.MapMetadataToSubtitlesCollection(dataStorage, request.Torrent.InfoHash, request.Metadata.Metadata, logger);
+       if (subtitles.Any())
+       {
+           await dataStorage.InsertSubtitles(subtitles);
+       }
+       await context.Publish(new MetadataWritten(request.Metadata, true));
    }
}


@@ -4,17 +4,18 @@ global using System.Text.Json;
global using System.Text.Json.Serialization;
global using System.Threading.Channels;
global using DebridCollector.Extensions;
+global using DebridCollector.Features.Configuration;
global using DebridCollector.Features.Debrid;
global using DebridCollector.Features.Worker;
global using MassTransit;
-global using MassTransit.Mediator;
global using Microsoft.AspNetCore.Builder;
global using Microsoft.Extensions.DependencyInjection;
global using Polly;
global using Polly.Extensions.Http;
-global using PromKnight.ParseTorrentTitle;
global using SharedContracts.Configuration;
global using SharedContracts.Dapper;
global using SharedContracts.Extensions;
global using SharedContracts.Models;
+global using SharedContracts.Python;
+global using SharedContracts.Python.RTN;
global using SharedContracts.Requests;


@@ -0,0 +1,2 @@
mkdir -p ../python
python -m pip install -r ../requirements.txt -t ../python/


@@ -0,0 +1,5 @@
#!/bin/bash
rm -rf ../python
mkdir -p ../python
python3 -m pip install -r ../requirements.txt -t ../python/


@@ -0,0 +1 @@
rank-torrent-name==0.2.5


@@ -72,7 +72,7 @@ public class BasicsFile(ILogger<BasicsFile> logger, ImdbDbService dbService): IF
Category = csv.GetField(1),
Title = csv.GetField(2),
Adult = isAdultSet && adult == 1,
-Year = csv.GetField(5),
+Year = csv.GetField(5) == @"\N" ? 0 : int.Parse(csv.GetField(5)),
};
if (cancellationToken.IsCancellationRequested)

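The `@"\N"` guard above exists because IMDb's TSV dumps encode NULL as the literal two-character sequence `\N`, so a naive integer parse of the year field would throw; `0` is the sentinel the migration uses for "unknown". The same rule in standalone form (a JavaScript sketch mirroring the C# ternary, not code from the repository):

```javascript
// IMDb TSV dumps use the two characters \N as their NULL marker.
function parseImdbYear(field) {
  return field === '\\N' ? 0 : parseInt(field, 10);
}
```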

@@ -6,5 +6,5 @@ public class ImdbBasicEntry
public string? Category { get; set; }
public string? Title { get; set; }
public bool Adult { get; set; }
-public string? Year { get; set; }
+public int Year { get; set; }
}


@@ -17,7 +17,7 @@ public class ImdbDbService(PostgresConfiguration configuration, ILogger<ImdbDbSe
await writer.WriteAsync(entry.ImdbId, NpgsqlDbType.Text);
await writer.WriteAsync(entry.Category, NpgsqlDbType.Text);
await writer.WriteAsync(entry.Title, NpgsqlDbType.Text);
-await writer.WriteAsync(entry.Year, NpgsqlDbType.Text);
+await writer.WriteAsync(entry.Year, NpgsqlDbType.Integer);
await writer.WriteAsync(entry.Adult, NpgsqlDbType.Boolean);
}
catch (Npgsql.PostgresException e)
@@ -116,7 +116,7 @@ public class ImdbDbService(PostgresConfiguration configuration, ILogger<ImdbDbSe
    ExecuteCommandAsync(
        async connection =>
        {
-           await using var command = new NpgsqlCommand($"CREATE INDEX title_gist ON {TableNames.MetadataTable} USING gist(title gist_trgm_ops)", connection);
+           await using var command = new NpgsqlCommand($"CREATE INDEX title_gin ON {TableNames.MetadataTable} USING gin(title gin_trgm_ops)", connection);
            await command.ExecuteNonQueryAsync();
        }, "Error while creating index on imdb_metadata table");
@@ -125,7 +125,7 @@ public class ImdbDbService(PostgresConfiguration configuration, ILogger<ImdbDbSe
        async connection =>
        {
            logger.LogInformation("Dropping Trigrams index if it exists already");
-           await using var dropCommand = new NpgsqlCommand("DROP INDEX if exists title_gist", connection);
+           await using var dropCommand = new NpgsqlCommand("DROP INDEX if exists title_gin", connection);
            await dropCommand.ExecuteNonQueryAsync();
        }, $"Error while dropping index on {TableNames.MetadataTable} table");


@@ -0,0 +1,35 @@
-- Purpose: Change the year column to integer and add a search function that allows for searching by year.
ALTER TABLE imdb_metadata
ALTER COLUMN year TYPE integer USING (CASE WHEN year = '\N' THEN 0 ELSE year::integer END);
-- Remove the old search function
DROP FUNCTION IF EXISTS search_imdb_meta(TEXT, TEXT, TEXT, INT);
-- Add the new search function that allows for searching by year with a plus/minus one year range
CREATE OR REPLACE FUNCTION search_imdb_meta(search_term TEXT, category_param TEXT DEFAULT NULL, year_param INT DEFAULT NULL, limit_param INT DEFAULT 10)
RETURNS TABLE(imdb_id character varying(16), title character varying(1000),category character varying(50),year INT, score REAL) AS $$
BEGIN
SET pg_trgm.similarity_threshold = 0.9;
RETURN QUERY
SELECT imdb_metadata.imdb_id, imdb_metadata.title, imdb_metadata.category, imdb_metadata.year, similarity(imdb_metadata.title, search_term) as score
FROM imdb_metadata
WHERE (imdb_metadata.title % search_term)
AND (imdb_metadata.adult = FALSE)
AND (category_param IS NULL OR imdb_metadata.category = category_param)
AND (year_param IS NULL OR imdb_metadata.year BETWEEN year_param - 1 AND year_param + 1)
ORDER BY score DESC
LIMIT limit_param;
END; $$
LANGUAGE plpgsql;
-- Drop the old indexes
DROP INDEX IF EXISTS idx_imdb_metadata_adult;
DROP INDEX IF EXISTS idx_imdb_metadata_category;
DROP INDEX IF EXISTS idx_imdb_metadata_year;
DROP INDEX IF EXISTS title_gist;
-- Add indexes for the new columns
CREATE INDEX idx_imdb_metadata_adult ON imdb_metadata(adult);
CREATE INDEX idx_imdb_metadata_category ON imdb_metadata(category);
CREATE INDEX idx_imdb_metadata_year ON imdb_metadata(year);
CREATE INDEX title_gin ON imdb_metadata USING gin(title gin_trgm_ops);
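The `gin(title gin_trgm_ops)` index, the `%` operator, and `similarity()` all work on trigram sets: similarity is the size of the intersection of the two strings' trigram sets divided by the size of their union. A simplified JavaScript model of that computation (pg_trgm's exact normalization, word splitting, and padding rules differ slightly, so treat this as an approximation):

```javascript
// Collect 3-character windows over the lowercased, padded string.
function trigrams(text) {
  const padded = `  ${text.toLowerCase()} `;
  const grams = new Set();
  for (let i = 0; i <= padded.length - 3; i++) {
    grams.add(padded.slice(i, i + 3));
  }
  return grams;
}

// Jaccard ratio of the two trigram sets, as pg_trgm's similarity() computes.
function similarity(a, b) {
  const ta = trigrams(a);
  const tb = trigrams(b);
  let shared = 0;
  for (const gram of ta) {
    if (tb.has(gram)) shared++;
  }
  return shared / (ta.size + tb.size - shared);
}
```

With the threshold of 0.9 set here (later made configurable, defaulting to 0.95), only near-exact title matches pass the `%` filter, which is why the reconciliation function feeds it RTN's cleaned `parsed_title` rather than the raw torrent name.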


@@ -0,0 +1,40 @@
-- Purpose: Add the jsonb column to the ingested_torrents table to store the response from RTN
ALTER TABLE ingested_torrents
ADD COLUMN IF NOT EXISTS rtn_response jsonb;
-- Purpose: Drop torrentId column from torrents table
ALTER TABLE torrents
DROP COLUMN IF EXISTS "torrentId";
-- Purpose: Drop Trackers column from torrents table
ALTER TABLE torrents
DROP COLUMN IF EXISTS "trackers";
-- Purpose: Create a foreign key relationship if it does not already exist between torrents and the source table ingested_torrents, but do not cascade on delete.
ALTER TABLE torrents
ADD COLUMN IF NOT EXISTS "ingestedTorrentId" bigint;
DO $$
BEGIN
IF EXISTS (
SELECT 1
FROM information_schema.table_constraints
WHERE constraint_name = 'fk_torrents_info_hash'
)
THEN
ALTER TABLE torrents
DROP CONSTRAINT fk_torrents_info_hash;
END IF;
END $$;
ALTER TABLE torrents
ADD CONSTRAINT fk_torrents_info_hash
FOREIGN KEY ("ingestedTorrentId")
REFERENCES ingested_torrents("id")
ON DELETE NO ACTION;
UPDATE torrents
SET "ingestedTorrentId" = ingested_torrents."id"
FROM ingested_torrents
WHERE torrents."infoHash" = ingested_torrents."info_hash"
AND torrents."provider" = ingested_torrents."source";


@@ -0,0 +1,55 @@
DROP FUNCTION IF EXISTS kc_maintenance_reconcile_dmm_imdb_ids();
CREATE OR REPLACE FUNCTION kc_maintenance_reconcile_dmm_imdb_ids()
RETURNS INTEGER AS $$
DECLARE
rec RECORD;
imdb_rec RECORD;
rows_affected INTEGER := 0;
BEGIN
RAISE NOTICE 'Starting Reconciliation of DMM IMDB Ids...';
FOR rec IN
SELECT
it."id" as "ingestion_id",
t."infoHash",
it."category" as "ingestion_category",
f."id" as "file_Id",
f."title" as "file_Title",
(rtn_response->>'raw_title')::text as "raw_title",
(rtn_response->>'parsed_title')::text as "parsed_title",
(rtn_response->>'year')::int as "year"
FROM torrents t
JOIN ingested_torrents it ON t."ingestedTorrentId" = it."id"
JOIN files f ON t."infoHash" = f."infoHash"
WHERE t."provider" = 'DMM'
LOOP
RAISE NOTICE 'Processing record with file_Id: %', rec."file_Id";
FOR imdb_rec IN
SELECT * FROM search_imdb_meta(
rec."parsed_title",
CASE
WHEN rec."ingestion_category" = 'tv' THEN 'tvSeries'
WHEN rec."ingestion_category" = 'movies' THEN 'movie'
END,
CASE
WHEN rec."year" = 0 THEN NULL
ELSE rec."year" END,
1)
LOOP
IF imdb_rec IS NOT NULL THEN
RAISE NOTICE 'Updating file_Id: % with imdbId: %, parsed title: %, imdb title: %', rec."file_Id", imdb_rec."imdb_id", rec."parsed_title", imdb_rec."title";
UPDATE "files"
SET "imdbId" = imdb_rec."imdb_id"
WHERE "id" = rec."file_Id";
rows_affected := rows_affected + 1;
ELSE
RAISE NOTICE 'No IMDB ID found for file_Id: %, parsed title: %, imdb title: %, setting imdbId to NULL', rec."file_Id", rec."parsed_title", imdb_rec."title";
UPDATE "files"
SET "imdbId" = NULL
WHERE "id" = rec."file_Id";
END IF;
END LOOP;
END LOOP;
RAISE NOTICE 'Finished reconciliation. Total rows affected: %', rows_affected;
RETURN rows_affected;
END;
$$ LANGUAGE plpgsql;


@@ -0,0 +1,19 @@
-- Remove the old search function
DROP FUNCTION IF EXISTS search_imdb_meta(TEXT, TEXT, INT, INT);
-- Add the new search function that allows for searching by year with a plus/minus one year range
CREATE OR REPLACE FUNCTION search_imdb_meta(search_term TEXT, category_param TEXT DEFAULT NULL, year_param INT DEFAULT NULL, limit_param INT DEFAULT 10, similarity_threshold REAL DEFAULT 0.95)
RETURNS TABLE(imdb_id character varying(16), title character varying(1000),category character varying(50),year INT, score REAL) AS $$
BEGIN
SET pg_trgm.similarity_threshold = similarity_threshold;
RETURN QUERY
SELECT imdb_metadata.imdb_id, imdb_metadata.title, imdb_metadata.category, imdb_metadata.year, similarity(imdb_metadata.title, search_term) as score
FROM imdb_metadata
WHERE (imdb_metadata.title % search_term)
AND (imdb_metadata.adult = FALSE)
AND (category_param IS NULL OR imdb_metadata.category = category_param)
AND (year_param IS NULL OR imdb_metadata.year BETWEEN year_param - 1 AND year_param + 1)
ORDER BY score DESC
LIMIT limit_param;
END; $$
LANGUAGE plpgsql;


@@ -0,0 +1,19 @@
-- Remove the old search function
DROP FUNCTION IF EXISTS search_imdb_meta(TEXT, TEXT, INT, INT);
-- Add the new search function that allows for searching by year with a plus/minus one year range
CREATE OR REPLACE FUNCTION search_imdb_meta(search_term TEXT, category_param TEXT DEFAULT NULL, year_param INT DEFAULT NULL, limit_param INT DEFAULT 10, similarity_threshold REAL DEFAULT 0.95)
RETURNS TABLE(imdb_id character varying(16), title character varying(1000),category character varying(50),year INT, score REAL) AS $$
BEGIN
EXECUTE format('SET pg_trgm.similarity_threshold = %L', similarity_threshold);
RETURN QUERY
SELECT imdb_metadata.imdb_id, imdb_metadata.title, imdb_metadata.category, imdb_metadata.year, similarity(imdb_metadata.title, search_term) as score
FROM imdb_metadata
WHERE (imdb_metadata.title % search_term)
AND (imdb_metadata.adult = FALSE)
AND (category_param IS NULL OR imdb_metadata.category = category_param)
AND (year_param IS NULL OR imdb_metadata.year BETWEEN year_param - 1 AND year_param + 1)
ORDER BY score DESC
LIMIT limit_param;
END; $$
LANGUAGE plpgsql;
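The intent of the function above can be checked outside the database with a small self-contained sketch: filter by category, allow a plus/minus one year window, and rank candidates by title similarity above a threshold. This is an illustration only — `difflib.SequenceMatcher` stands in for pg_trgm trigram similarity (a different algorithm), and the row layout is a hypothetical stand-in for `imdb_metadata`:

```python
from difflib import SequenceMatcher

def search_imdb_meta(rows, search_term, category=None, year=None,
                     limit=10, similarity_threshold=0.95):
    # Mirror the SQL: skip adult titles, optionally filter by category,
    # accept year - 1 .. year + 1, keep only matches above the threshold.
    scored = []
    for row in rows:
        if row["adult"]:
            continue
        if category is not None and row["category"] != category:
            continue
        if year is not None and not (year - 1 <= row["year"] <= year + 1):
            continue
        score = SequenceMatcher(None, row["title"].lower(),
                                search_term.lower()).ratio()
        if score >= similarity_threshold:
            scored.append({**row, "score": score})
    # ORDER BY score DESC LIMIT limit_param
    scored.sort(key=lambda r: r["score"], reverse=True)
    return scored[:limit]
```

Note the year window is why a torrent tagged one year off from the IMDb entry (a common mismatch for late-December releases) still resolves.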

@@ -0,0 +1,2 @@
**/python/
.idea/

@@ -6,6 +6,12 @@ Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "SharedContracts", "..\share
 EndProject
 Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "shared", "shared", "{FF5CA857-51E8-4446-8840-2A1D24ED3952}"
 EndProject
+Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "eng", "eng", "{1AE7F597-24C4-4575-B59F-67A625D95C1E}"
+ProjectSection(SolutionItems) = preProject
+eng\install-python-reqs.ps1 = eng\install-python-reqs.ps1
+eng\install-python-reqs.sh = eng\install-python-reqs.sh
+EndProjectSection
+EndProject
 Global
 GlobalSection(SolutionConfigurationPlatforms) = preSolution
 Debug|Any CPU = Debug|Any CPU

@@ -0,0 +1,3 @@
remove-item -recurse -force ../src/python
mkdir -p ../src/python
pip install -r ../src/requirements.txt -t ../src/python/

@@ -0,0 +1,5 @@
#!/bin/bash
rm -rf ../src/python
mkdir -p ../src/python
python3 -m pip install -r ../src/requirements.txt -t ../src/python/

@@ -0,0 +1,2 @@
**/python/
.idea/

@@ -8,13 +8,27 @@ WORKDIR /src/producer/src
 RUN dotnet restore -a $TARGETARCH
 RUN dotnet publish -c Release --no-restore -o /src/out -a $TARGETARCH
-FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine3.19
+FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine
 WORKDIR /app
+ENV PYTHONUNBUFFERED=1
+RUN apk add --update --no-cache python3=~3.11.8-r0 py3-pip && ln -sf python3 /usr/bin/python
 COPY --from=build /src/out .
+RUN rm -rf /app/python && mkdir -p /app/python
+RUN pip3 install -r /app/requirements.txt -t /app/python
 RUN addgroup -S producer && adduser -S -G producer producer
 USER producer
 HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
 CMD pgrep -f dotnet || exit 1
+ENV PYTHONNET_PYDLL=/usr/lib/libpython3.11.so.1.0
 ENTRYPOINT ["dotnet", "Producer.dll"]

@@ -5,7 +5,7 @@ public partial class DebridMediaManagerCrawler(
 ILogger<DebridMediaManagerCrawler> logger,
 IDataStorage storage,
 GithubConfiguration githubConfiguration,
-IParseTorrentTitle parseTorrentTitle,
+IRankTorrentName rankTorrentName,
 IDistributedCache cache) : BaseCrawler(logger, storage)
 {
 [GeneratedRegex("""<iframe src="https:\/\/debridmediamanager.com\/hashlist#(.*)"></iframe>""")]
@@ -107,100 +107,69 @@ public partial class DebridMediaManagerCrawler(
 {
 return null;
 }
-var parsedTorrent = parseTorrentTitle.Parse(torrentTitle.CleanTorrentTitleForImdb());
-var (cached, cachedResult) = await CheckIfInCacheAndReturn(parsedTorrent.Title);
+var parsedTorrent = rankTorrentName.Parse(torrentTitle);
+if (!parsedTorrent.Success)
+{
+return null;
+}
+var torrentType = parsedTorrent.Response.IsMovie ? "movie" : "tvSeries";
+var cacheKey = GetCacheKey(torrentType, parsedTorrent.Response.ParsedTitle, parsedTorrent.Response.Year);
+var (cached, cachedResult) = await CheckIfInCacheAndReturn(cacheKey);
 if (cached)
 {
-logger.LogInformation("[{ImdbId}] Found cached imdb result for {Title}", cachedResult.ImdbId, parsedTorrent.Title);
-return new()
-{
-Source = Source,
-Name = cachedResult.Title,
-Imdb = cachedResult.ImdbId,
-Size = bytesElement.GetInt64().ToString(),
-InfoHash = hashElement.ToString(),
-Seeders = 0,
-Leechers = 0,
-Category = parsedTorrent.TorrentType switch
-{
-TorrentType.Movie => "movies",
-TorrentType.Tv => "tv",
-_ => "unknown",
-},
-};
+logger.LogInformation("[{ImdbId}] Found cached imdb result for {Title}", cachedResult.ImdbId, parsedTorrent.Response.ParsedTitle);
+return MapToTorrent(cachedResult, bytesElement, hashElement, parsedTorrent);
 }
-var imdbEntry = await Storage.FindImdbMetadata(parsedTorrent.Title, parsedTorrent.TorrentType, parsedTorrent.Year);
-if (imdbEntry.Count == 0)
+int? year = parsedTorrent.Response.Year != 0 ? parsedTorrent.Response.Year : null;
+var imdbEntry = await Storage.FindImdbMetadata(parsedTorrent.Response.ParsedTitle, torrentType, year);
+if (imdbEntry is null)
 {
 return null;
 }
-var scoredTitles = await ScoreTitles(parsedTorrent, imdbEntry);
-if (!scoredTitles.Success)
-{
-return null;
-}
-logger.LogInformation("[{ImdbId}] Found best match for {Title}: {BestMatch} with score {Score}", scoredTitles.BestMatch.Value.ImdbId, parsedTorrent.Title, scoredTitles.BestMatch.Value.Title, scoredTitles.BestMatch.Score);
-var torrent = new IngestedTorrent
+await AddToCache(cacheKey, imdbEntry);
+logger.LogInformation("[{ImdbId}] Found best match for {Title}: {BestMatch} with score {Score}", imdbEntry.ImdbId, parsedTorrent.Response.ParsedTitle, imdbEntry.Title, imdbEntry.Score);
+return MapToTorrent(imdbEntry, bytesElement, hashElement, parsedTorrent);
+}
+private IngestedTorrent MapToTorrent(ImdbEntry result, JsonElement bytesElement, JsonElement hashElement, ParseTorrentTitleResponse parsedTorrent) =>
+new()
 {
 Source = Source,
-Name = scoredTitles.BestMatch.Value.Title,
-Imdb = scoredTitles.BestMatch.Value.ImdbId,
+Name = result.Title,
+Imdb = result.ImdbId,
 Size = bytesElement.GetInt64().ToString(),
 InfoHash = hashElement.ToString(),
 Seeders = 0,
 Leechers = 0,
-Category = parsedTorrent.TorrentType switch
-{
-TorrentType.Movie => "movies",
-TorrentType.Tv => "tv",
-_ => "unknown",
-},
+Category = AssignCategory(result),
+RtnResponse = parsedTorrent.Response.ToJson(),
 };
-return torrent;
-}
-private async Task<(bool Success, ExtractedResult<ImdbEntry>? BestMatch)> ScoreTitles(TorrentMetadata parsedTorrent, List<ImdbEntry> imdbEntries)
-{
-var lowerCaseTitle = parsedTorrent.Title.ToLowerInvariant();
-// Scoring directly operates on the List<ImdbEntry>, no need for lookup table.
-var scoredResults = Process.ExtractAll(new(){Title = lowerCaseTitle}, imdbEntries, x => x.Title?.ToLowerInvariant(), scorer: new DefaultRatioScorer(), cutoff: 90);
-var best = scoredResults.MaxBy(x => x.Score);
-if (best is null)
-{
-return (false, null);
-}
-await AddToCache(lowerCaseTitle, best);
-return (true, best);
-}
-private Task AddToCache(string lowerCaseTitle, ExtractedResult<ImdbEntry> best)
+private Task AddToCache(string cacheKey, ImdbEntry best)
 {
 var cacheOptions = new DistributedCacheEntryOptions
 {
-AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(15),
+AbsoluteExpirationRelativeToNow = TimeSpan.FromDays(1),
 };
-return cache.SetStringAsync(lowerCaseTitle, JsonSerializer.Serialize(best.Value), cacheOptions);
+return cache.SetStringAsync(cacheKey, JsonSerializer.Serialize(best), cacheOptions);
 }
-private async Task<(bool Success, ImdbEntry? Entry)> CheckIfInCacheAndReturn(string title)
+private async Task<(bool Success, ImdbEntry? Entry)> CheckIfInCacheAndReturn(string cacheKey)
 {
-var cachedImdbId = await cache.GetStringAsync(title.ToLowerInvariant());
+var cachedImdbId = await cache.GetStringAsync(cacheKey);
 if (!string.IsNullOrEmpty(cachedImdbId))
 {
@@ -240,4 +209,14 @@ public partial class DebridMediaManagerCrawler(
 return (pageIngested, name);
 }
+private static string AssignCategory(ImdbEntry entry) =>
+entry.Category.ToLower() switch
+{
+var category when string.Equals(category, "movie", StringComparison.OrdinalIgnoreCase) => "movies",
+var category when string.Equals(category, "tvSeries", StringComparison.OrdinalIgnoreCase) => "tv",
+_ => "unknown",
+};
+private static string GetCacheKey(string category, string title, int year) => $"{category.ToLowerInvariant()}:{year}:{title.ToLowerInvariant()}";
 }
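The crawler's cache key concatenates the lower-cased category and title with the year, so `"Movie"/"Dune"` and `"movie"/"dune"` collapse to one Redis entry. A minimal Python sketch of the same normalisation (hypothetical helper, mirroring `GetCacheKey`):

```python
def get_cache_key(category: str, title: str, year: int) -> str:
    # Lower-case the variable parts so casing differences in parsed
    # titles do not produce duplicate cache entries for one release.
    return f"{category.lower()}:{year}:{title.lower()}"
```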

@@ -0,0 +1,24 @@
namespace Producer.Features.DataProcessing
{
public class LengthAwareRatioScorer : IRatioScorer
{
private readonly IRatioScorer _defaultScorer = new DefaultRatioScorer();
public int Score(string input1, string input2)
{
var score = _defaultScorer.Score(input1, input2);
var lengthRatio = (double)Math.Min(input1.Length, input2.Length) / Math.Max(input1.Length, input2.Length);
var result = (int)(score * lengthRatio);
return result > 100 ? 100 : result;
}
public int Score(string input1, string input2, PreprocessMode preprocessMode)
{
var score = _defaultScorer.Score(input1, input2, preprocessMode);
var lengthRatio = (double)Math.Min(input1.Length, input2.Length) / Math.Max(input1.Length, input2.Length);
var result = (int)(score * lengthRatio);
return result > 100 ? 100 : result;
}
}
}
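The scorer above scales an ordinary fuzzy ratio by the ratio of the shorter to the longer string, so a short title embedded in a much longer one (e.g. "Dune" inside a full release name) is penalised. The arithmetic can be sketched in a few lines (illustrative Python, not the FuzzySharp implementation):

```python
def length_aware_score(base_score: int, a: str, b: str) -> int:
    # Scale a 0-100 similarity score by min(len)/max(len), clamping
    # to 100, exactly as LengthAwareRatioScorer.Score does.
    length_ratio = min(len(a), len(b)) / max(len(a), len(b))
    result = int(base_score * length_ratio)
    return min(result, 100)
```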

@@ -9,7 +9,8 @@ internal static class ServiceCollectionExtensions
 services.AddTransient<IDataStorage, DapperDataStorage>();
 services.AddTransient<IMessagePublisher, TorrentPublisher>();
-services.AddSingleton<IParseTorrentTitle, ParseTorrentTitle>();
+services.RegisterPythonEngine();
+services.AddSingleton<IRankTorrentName, RankTorrentName>();
 services.AddStackExchangeRedisCache(options =>
 {
 options.Configuration = redisConfiguration.ConnectionString;

@@ -7,6 +7,8 @@ global using System.Text.RegularExpressions;
 global using System.Xml.Linq;
 global using FuzzySharp;
 global using FuzzySharp.Extractor;
+global using FuzzySharp.PreProcess;
+global using FuzzySharp.SimilarityRatio.Scorer;
 global using FuzzySharp.SimilarityRatio.Scorer.StrategySensitive;
 global using LZStringCSharp;
 global using MassTransit;
@@ -23,11 +25,10 @@ global using Producer.Features.Crawlers.Torrentio;
 global using Producer.Features.CrawlerSupport;
 global using Producer.Features.DataProcessing;
 global using Producer.Features.JobSupport;
-global using PromKnight.ParseTorrentTitle;
-global using Serilog;
 global using SharedContracts.Configuration;
 global using SharedContracts.Dapper;
 global using SharedContracts.Extensions;
 global using SharedContracts.Models;
-global using SharedContracts.Requests;
-global using StackExchange.Redis;
+global using SharedContracts.Python;
+global using SharedContracts.Python.RTN;
+global using SharedContracts.Requests;

@@ -19,6 +19,7 @@
 <PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.0" />
 <PackageReference Include="Microsoft.Extensions.Http" Version="8.0.0" />
 <PackageReference Include="Polly" Version="8.3.0" />
+<PackageReference Include="pythonnet" Version="3.0.3" />
 <PackageReference Include="Quartz.Extensions.DependencyInjection" Version="3.8.0" />
 <PackageReference Include="Quartz.Extensions.Hosting" Version="3.8.0" />
 <PackageReference Include="Serilog" Version="3.1.1" />
@@ -32,11 +33,14 @@
 <None Include="Configuration\*.json">
 <CopyToOutputDirectory>Always</CopyToOutputDirectory>
 </None>
+<None Update="requirements.txt">
+<CopyToOutputDirectory>Always</CopyToOutputDirectory>
+</None>
 </ItemGroup>
-<ItemGroup>
-<Content Remove="Data\**" />
-<None Include="Data\**">
+<ItemGroup Condition="'$(Configuration)' == 'Debug'">
+<Content Remove="python\**" />
+<None Include="python\**">
 <CopyToOutputDirectory>Always</CopyToOutputDirectory>
 </None>
 </ItemGroup>

@@ -0,0 +1 @@
rank-torrent-name==0.2.5

@@ -9,12 +9,23 @@ RUN dotnet restore -a $TARGETARCH
 RUN dotnet publish -c Release --no-restore -o /src/out -a $TARGETARCH
-FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine
+FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine3.19
 WORKDIR /app
+ENV PYTHONUNBUFFERED=1
+RUN apk add --update --no-cache python3=~3.11.8-r0 py3-pip && ln -sf python3 /usr/bin/python
 COPY --from=build /src/out .
+RUN rm -rf /app/python && mkdir -p /app/python
+RUN pip3 install -r /app/requirements.txt -t /app/python
 RUN addgroup -S qbit && adduser -S -G qbit qbit
 USER qbit
 HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
 CMD pgrep -f dotnet || exit 1
-ENTRYPOINT ["dotnet", "QbitCollector.dll"]
+ENV PYTHONNET_PYDLL=/usr/lib/libpython3.11.so.1.0
+ENTRYPOINT ["dotnet", "QBitCollector.dll"]

@@ -13,11 +13,13 @@ public static class ServiceCollectionExtensions
 internal static IServiceCollection AddServiceConfiguration(this IServiceCollection services)
 {
 services.AddQBitTorrentClient();
-services.AddSingleton<IParseTorrentTitle, ParseTorrentTitle>();
+services.RegisterPythonEngine();
+services.AddSingleton<IRankTorrentName, RankTorrentName>();
 services.AddSingleton<QbitRequestProcessor>();
 services.AddHttpClient();
 services.AddSingleton<ITrackersService, TrackersService>();
 services.AddHostedService<TrackersBackgroundService>();
+services.AddHostedService<HousekeepingBackgroundService>();
 return services;
 }
@@ -99,7 +101,10 @@ public static class ServiceCollectionExtensions
 timeout.Timeout = TimeSpan.FromMinutes(1);
 });
 })
-.RedisRepository(redisConfiguration.ConnectionString);
+.RedisRepository(redisConfiguration.ConnectionString, options =>
+{
+options.KeyPrefix = "qbit-collector:";
+});
 private static void AddQBitTorrentClient(this IServiceCollection services)
 {

@@ -0,0 +1,52 @@
namespace QBitCollector.Features.Qbit;
public class HousekeepingBackgroundService(IQBittorrentClient client, ILogger<HousekeepingBackgroundService> logger) : BackgroundService
{
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
logger.LogInformation("Service is Running.");
await DoWork();
using PeriodicTimer timer = new(TimeSpan.FromMinutes(2));
try
{
while (await timer.WaitForNextTickAsync(stoppingToken))
{
await DoWork();
}
}
catch (OperationCanceledException)
{
logger.LogInformation("Service stopping.");
}
}
private async Task DoWork()
{
try
{
logger.LogInformation("Cleaning Stale Entries in Qbit...");
var torrents = await client.GetTorrentListAsync();
foreach (var torrentInfo in torrents)
{
if (!(torrentInfo.AddedOn < DateTimeOffset.UtcNow.AddMinutes(-1)))
{
continue;
}
logger.LogInformation("Torrent [{InfoHash}] Identified as stale because was added at {AddedOn}", torrentInfo.Hash, torrentInfo.AddedOn);
await client.DeleteAsync(new[] {torrentInfo.Hash}, deleteDownloadedData: true);
logger.LogInformation("Cleaned up stale torrent: [{InfoHash}]", torrentInfo.Hash);
}
}
catch (Exception e)
{
logger.LogError(e, "Error cleaning up stale torrents this interval.");
}
}
}
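The staleness test in the service above boils down to one comparison: a torrent is stale when it was added before now minus a grace period. A self-contained sketch of that cutoff (illustrative Python; the one-minute default mirrors `UtcNow.AddMinutes(-1)` above):

```python
from datetime import datetime, timedelta, timezone

def is_stale(added_on: datetime, now: datetime,
             max_age: timedelta = timedelta(minutes=1)) -> bool:
    # Stale when AddedOn < (now - max_age); anything newer is skipped.
    return added_on < now - max_age
```

Because the check runs on a two-minute `PeriodicTimer`, any torrent that qBittorrent has held for more than the grace period gets deleted (data included) on the next tick.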

@@ -3,7 +3,9 @@ namespace QBitCollector.Features.Qbit;
 public class QbitConfiguration
 {
 private const string Prefix = "QBIT";
-private const string ConnectionStringVariable = "HOST";
+private const string HOST_VARIABLE = "HOST";
+private const string TRACKERS_URL_VARIABLE = "TRACKERS_URL";
-public string? Host { get; init; } = Prefix.GetRequiredEnvironmentVariableAsString(ConnectionStringVariable);
+public string? Host { get; init; } = Prefix.GetRequiredEnvironmentVariableAsString(HOST_VARIABLE);
+public string? TrackersUrl { get; init; } = Prefix.GetRequiredEnvironmentVariableAsString(TRACKERS_URL_VARIABLE);
 }

@@ -1,8 +1,7 @@
 namespace QBitCollector.Features.Trackers;
-public class TrackersService(IDistributedCache cache, HttpClient client, IMemoryCache memoryCache) : ITrackersService
+public class TrackersService(IDistributedCache cache, HttpClient client, IMemoryCache memoryCache, QbitConfiguration configuration) : ITrackersService
 {
-private const string TrackersListUrl = "https://ngosang.github.io/trackerslist/trackers_all.txt";
 private const string CacheKey = "trackers";
 public async Task<List<string>> GetTrackers()
@@ -42,7 +41,7 @@ public class TrackersService(IDistributedCache cache, HttpClient client, IMemoryCache memoryCache, QbitConfiguration configuration) : ITrackersService
 private async Task<List<string>> GetTrackersAsync()
 {
-var response = await client.GetStringAsync(TrackersListUrl);
+var response = await client.GetStringAsync(configuration.TrackersUrl);
 var lines = response.Split(["\r\n", "\r", "\n"], StringSplitOptions.None);

@@ -3,10 +3,11 @@ namespace QBitCollector.Features.Worker;
 public static class QbitMetaToTorrentMeta
 {
 public static IReadOnlyList<TorrentFile> MapMetadataToFilesCollection(
-IParseTorrentTitle torrentTitle,
+IRankTorrentName rankTorrentName,
 Torrent torrent,
 string ImdbId,
-IReadOnlyList<TorrentContent> Metadata)
+IReadOnlyList<TorrentContent> Metadata,
+ILogger<WriteQbitMetadataConsumer> logger)
 {
 try
 {
@@ -24,23 +25,31 @@ public static class QbitMetaToTorrentMeta
 Size = metadataEntry.Size,
 };
-var parsedTitle = torrentTitle.Parse(file.Title);
+var parsedTitle = rankTorrentName.Parse(file.Title, false);
+if (!parsedTitle.Success)
+{
+logger.LogWarning("Failed to parse title {Title} for metadata mapping", file.Title);
+continue;
+}
-file.ImdbSeason = parsedTitle.Seasons.FirstOrDefault();
-file.ImdbEpisode = parsedTitle.Episodes.FirstOrDefault();
+file.ImdbSeason = parsedTitle.Response?.Season?.FirstOrDefault() ?? 0;
+file.ImdbEpisode = parsedTitle.Response?.Episode?.FirstOrDefault() ?? 0;
 files.Add(file);
 }
 return files;
 }
-catch (Exception)
+catch (Exception ex)
 {
+logger.LogWarning("Failed to map metadata to files collection: {Exception}", ex.Message);
 return [];
 }
 }
-public static async Task<IReadOnlyList<SubtitleFile>> MapMetadataToSubtitlesCollection(IDataStorage storage, string InfoHash, IReadOnlyList<TorrentContent> Metadata)
+public static async Task<IReadOnlyList<SubtitleFile>> MapMetadataToSubtitlesCollection(IDataStorage storage, string InfoHash, IReadOnlyList<TorrentContent> Metadata,
+ILogger<WriteQbitMetadataConsumer> logger)
 {
 try
 {
@@ -70,8 +79,9 @@ public static class QbitMetaToTorrentMeta
 return files;
 }
-catch (Exception)
+catch (Exception ex)
 {
+logger.LogWarning("Failed to map metadata to subtitles collection: {Exception}", ex.Message);
 return [];
 }
 }

@@ -53,6 +53,12 @@ public class QbitMetadataSagaStateMachine : MassTransitStateMachine<QbitMetadata
 .Then(
 context =>
 {
+if (!context.Message.WithFiles)
+{
+logger.LogInformation("No files written for torrent {InfoHash} in Saga {SagaId}", context.Saga.Torrent.InfoHash, context.Saga.CorrelationId);
+return;
+}
 logger.LogInformation("Metadata Written for torrent {InfoHash} in Saga {SagaId}", context.Saga.Torrent.InfoHash, context.Saga.CorrelationId);
 })
 .TransitionTo(Completed)

@@ -1,22 +1,24 @@
 namespace QBitCollector.Features.Worker;
-[EntityName("perform-metadata-request")]
+[EntityName("perform-metadata-request-qbit-collector")]
 public record PerformQbitMetadataRequest(Guid CorrelationId, string InfoHash) : CorrelatedBy<Guid>;
-[EntityName("torrent-metadata-response")]
+[EntityName("torrent-metadata-response-qbit-collector")]
 public record GotQbitMetadata(QBitMetadataResponse Metadata) : CorrelatedBy<Guid>
 {
 public Guid CorrelationId { get; init; } = Metadata.CorrelationId;
 }
-[EntityName("write-metadata")]
+[EntityName("write-metadata-qbit-collector")]
 public record WriteQbitMetadata(Torrent Torrent, QBitMetadataResponse Metadata, string ImdbId) : CorrelatedBy<Guid>
 {
 public Guid CorrelationId { get; init; } = Metadata.CorrelationId;
 }
-[EntityName("metadata-written")]
+[EntityName("metadata-written-qbit-collector")]
-public record QbitMetadataWritten(QBitMetadataResponse Metadata) : CorrelatedBy<Guid>
+public record QbitMetadataWritten(QBitMetadataResponse Metadata, bool WithFiles) : CorrelatedBy<Guid>
 {
 public Guid CorrelationId { get; init; } = Metadata.CorrelationId;
+public QBitMetadataResponse Metadata { get; init; } = Metadata;
 }

@@ -1,25 +1,30 @@
 namespace QBitCollector.Features.Worker;
-public class WriteQbitMetadataConsumer(IParseTorrentTitle parseTorrentTitle, IDataStorage dataStorage) : IConsumer<WriteQbitMetadata>
+public class WriteQbitMetadataConsumer(IRankTorrentName rankTorrentName, IDataStorage dataStorage, ILogger<WriteQbitMetadataConsumer> logger) : IConsumer<WriteQbitMetadata>
 {
 public async Task Consume(ConsumeContext<WriteQbitMetadata> context)
 {
 var request = context.Message;
-var torrentFiles = QbitMetaToTorrentMeta.MapMetadataToFilesCollection(parseTorrentTitle, request.Torrent, request.ImdbId, request.Metadata.Metadata);
-if (torrentFiles.Any())
+var torrentFiles = QbitMetaToTorrentMeta.MapMetadataToFilesCollection(
+rankTorrentName, request.Torrent, request.ImdbId, request.Metadata.Metadata, logger);
+if (!torrentFiles.Any())
 {
-await dataStorage.InsertFiles(torrentFiles);
-var subtitles = await QbitMetaToTorrentMeta.MapMetadataToSubtitlesCollection(dataStorage, request.Torrent.InfoHash, request.Metadata.Metadata);
-if (subtitles.Any())
-{
-await dataStorage.InsertSubtitles(subtitles);
-}
+await context.Publish(new QbitMetadataWritten(request.Metadata, false));
+return;
 }
-await context.Publish(new QbitMetadataWritten(request.Metadata));
+await dataStorage.InsertFiles(torrentFiles);
+var subtitles = await QbitMetaToTorrentMeta.MapMetadataToSubtitlesCollection(
+dataStorage, request.Torrent.InfoHash, request.Metadata.Metadata, logger);
+if (subtitles.Any())
+{
+await dataStorage.InsertSubtitles(subtitles);
+}
+await context.Publish(new QbitMetadataWritten(request.Metadata, true));
 }
 }

@@ -1,17 +1,11 @@
 // Global using directives
 global using System.Text.Json;
-global using System.Text.Json.Serialization;
-global using System.Threading.Channels;
 global using MassTransit;
-global using MassTransit.Mediator;
 global using Microsoft.AspNetCore.Builder;
 global using Microsoft.Extensions.Caching.Distributed;
 global using Microsoft.Extensions.Caching.Memory;
 global using Microsoft.Extensions.DependencyInjection;
-global using Polly;
-global using Polly.Extensions.Http;
-global using PromKnight.ParseTorrentTitle;
 global using QBitCollector.Extensions;
 global using QBitCollector.Features.Qbit;
 global using QBitCollector.Features.Trackers;
@@ -21,4 +15,6 @@ global using SharedContracts.Configuration;
 global using SharedContracts.Dapper;
 global using SharedContracts.Extensions;
 global using SharedContracts.Models;
+global using SharedContracts.Python;
+global using SharedContracts.Python.RTN;
 global using SharedContracts.Requests;

@@ -18,7 +18,6 @@
 <PackageReference Include="Microsoft.Extensions.Http" Version="8.0.0" />
 <PackageReference Include="Microsoft.Extensions.Http.Polly" Version="8.0.3" />
 <PackageReference Include="Polly" Version="8.3.1" />
-<PackageReference Include="PromKnight.ParseTorrentTitle" Version="1.0.4" />
 <PackageReference Include="QBittorrent.Client" Version="1.9.23349.1" />
 <PackageReference Include="Serilog" Version="3.1.1" />
 <PackageReference Include="Serilog.AspNetCore" Version="8.0.1" />
@@ -31,10 +30,30 @@
 <None Include="Configuration\logging.json">
 <CopyToOutputDirectory>Always</CopyToOutputDirectory>
 </None>
+<Content Remove="eng\**" />
+<None Remove="eng\**" />
+<None Update="requirements.txt">
+<CopyToOutputDirectory>Always</CopyToOutputDirectory>
+</None>
 </ItemGroup>
 <ItemGroup>
 <ProjectReference Include="..\shared\SharedContracts.csproj" />
 </ItemGroup>
+<ItemGroup Condition="'$(Configuration)' == 'Debug'">
+<Content Remove="python\**" />
+<None Include="python\**">
+<CopyToOutputDirectory>Always</CopyToOutputDirectory>
+</None>
+</ItemGroup>
+<ItemGroup>
+<Compile Remove="eng\**" />
+</ItemGroup>
+<ItemGroup>
+<EmbeddedResource Remove="eng\**" />
+</ItemGroup>
 </Project>

@@ -6,6 +6,12 @@ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "shared", "shared", "{2C0A0F
 EndProject
 Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "QBitCollector", "QBitCollector.csproj", "{1EF124BE-6EBE-4D9E-846C-FFF814999F3B}"
 EndProject
+Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "eng", "eng", "{2F2EA33A-1303-405D-939B-E9394D262BC9}"
+ProjectSection(SolutionItems) = preProject
+eng\install-python-reqs.ps1 = eng\install-python-reqs.ps1
+eng\install-python-reqs.sh = eng\install-python-reqs.sh
+EndProjectSection
+EndProject
 Global
 GlobalSection(SolutionConfigurationPlatforms) = preSolution
 Debug|Any CPU = Debug|Any CPU

@@ -0,0 +1,3 @@
Remove-Item -Recurse -Force ../python
mkdir -p ../python
python -m pip install -r ../requirements.txt -t ../python/

@@ -0,0 +1,5 @@
#!/bin/bash
rm -rf ../python
mkdir -p ../python
python3 -m pip install -r ../requirements.txt -t ../python/

@@ -0,0 +1 @@
rank-torrent-name==0.2.5

@@ -3,7 +3,12 @@ namespace SharedContracts.Configuration;
 public class RedisConfiguration
 {
 private const string Prefix = "REDIS";
-private const string ConnectionStringVariable = "CONNECTION_STRING";
-public string? ConnectionString { get; init; } = Prefix.GetRequiredEnvironmentVariableAsString(ConnectionStringVariable) + ",abortConnect=false,allowAdmin=true";
+private const string HostVariable = "HOST";
+private const string PortVariable = "PORT";
+private const string ExtraVariable = "EXTRA";
+private string Host { get; init; } = Prefix.GetRequiredEnvironmentVariableAsString(HostVariable);
+private int PORT { get; init; } = Prefix.GetEnvironmentVariableAsInt(PortVariable, 6379);
+private string EXTRA { get; init; } = Prefix.GetOptionalEnvironmentVariableAsString(ExtraVariable, "abortConnect=false,allowAdmin=true");
+public string ConnectionString => $"{Host}:{PORT},{EXTRA}";
 }
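The reworked configuration assembles the StackExchange.Redis connection string from three environment variables: a required host, a port defaulting to 6379, and an optional extras suffix. The same assembly can be sketched in Python (hypothetical variable names mirror the `REDIS_*` convention above):

```python
def redis_connection_string(env):
    # Host is required: a missing REDIS_HOST raises KeyError, like the
    # required-variable helper; port and extras fall back to defaults.
    host = env["REDIS_HOST"]
    port = int(env.get("REDIS_PORT", 6379))
    extra = env.get("REDIS_EXTRA", "abortConnect=false,allowAdmin=true")
    return f"{host}:{port},{extra}"
```

Splitting host and port out of a single `CONNECTION_STRING` lets deployments override just the port or the option flags without repeating the rest.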

@@ -9,9 +9,9 @@ public class DapperDataStorage(PostgresConfiguration configuration, RabbitMqConf
const string query = const string query =
""" """
INSERT INTO ingested_torrents INSERT INTO ingested_torrents
("name", "source", "category", "info_hash", "size", "seeders", "leechers", "imdb", "processed", "createdAt", "updatedAt") ("name", "source", "category", "info_hash", "size", "seeders", "leechers", "imdb", "processed", "createdAt", "updatedAt", "rtn_response")
VALUES VALUES
(@Name, @Source, @Category, @InfoHash, @Size, @Seeders, @Leechers, @Imdb, @Processed, @CreatedAt, @UpdatedAt) (@Name, @Source, @Category, @InfoHash, @Size, @Seeders, @Leechers, @Imdb, @Processed, @CreatedAt, @UpdatedAt, @RtnResponse::jsonb)
ON CONFLICT (source, info_hash) DO NOTHING ON CONFLICT (source, info_hash) DO NOTHING
"""; """;
@@ -110,21 +110,21 @@ public class DapperDataStorage(PostgresConfiguration configuration, RabbitMqConf
     public async Task<List<ImdbEntry>> GetImdbEntriesForRequests(int year, int batchSize, string? stateLastProcessedImdbId, CancellationToken cancellationToken = default) =>
         await ExecuteCommandAsync(async connection =>
         {
-            const string query = @"SELECT imdb_id AS ImdbId, title as Title, category as Category, year as Year, adult as Adult FROM imdb_metadata WHERE CAST(NULLIF(Year, '\N') AS INTEGER) <= @Year AND imdb_id > @LastProcessedImdbId ORDER BY ImdbId LIMIT @BatchSize";
+            const string query = @"SELECT imdb_id AS ImdbId, title as Title, category as Category, year as Year, adult as Adult FROM imdb_metadata WHERE Year <= @Year AND imdb_id > @LastProcessedImdbId ORDER BY ImdbId LIMIT @BatchSize";
             var result = await connection.QueryAsync<ImdbEntry>(query, new { Year = year, LastProcessedImdbId = stateLastProcessedImdbId, BatchSize = batchSize });
             return result.ToList();
         }, "Error getting imdb metadata.", cancellationToken);

-    public async Task<List<ImdbEntry>> FindImdbMetadata(string? parsedTorrentTitle, TorrentType torrentType, string? year, CancellationToken cancellationToken = default) =>
+    public async Task<ImdbEntry?> FindImdbMetadata(string? parsedTorrentTitle, string torrentType, int? year, CancellationToken cancellationToken = default) =>
         await ExecuteCommandAsync(async connection =>
         {
-            var query = $"select \"imdb_id\" as \"ImdbId\", \"title\" as \"Title\", \"year\" as \"Year\" from search_imdb_meta('{parsedTorrentTitle.Replace("'", "").Replace("\"", "")}', '{(torrentType == TorrentType.Movie ? "movie" : "tvSeries")}'";
-            query += year is not null ? $", '{year}'" : ", NULL";
-            query += ", 15)";
+            var query = $"select \"imdb_id\" as \"ImdbId\", \"title\" as \"Title\", \"year\" as \"Year\", \"score\" as Score, \"category\" as Category from search_imdb_meta('{parsedTorrentTitle.Replace("'", "").Replace("\"", "")}', '{torrentType}'";
+            query += year is not null ? $", {year}" : ", NULL";
+            query += ", 1)";
             var result = await connection.QueryAsync<ImdbEntry>(query);
-            return result.ToList();
+            var results = result.ToList();
+            return results.FirstOrDefault();
         }, "Error finding imdb metadata.", cancellationToken);

     public Task InsertTorrent(Torrent torrent, CancellationToken cancellationToken = default) =>
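Editor's note: the `FindImdbMetadata` query above interpolates the torrent title into the SQL after stripping quotes. The standard defence is a bound parameter, which keeps any quoting inside the value. A minimal stdlib sketch of that pattern (Python `sqlite3` purely for illustration — the project itself goes through Dapper/Npgsql):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE imdb_metadata (imdb_id TEXT, title TEXT, year INTEGER)")
conn.execute("INSERT INTO imdb_metadata VALUES ('tt0133093', 'The Matrix', 1999)")

# A hostile title: interpolated into the SQL string this could escape the
# literal, but as a bound parameter it is matched verbatim and finds nothing.
title = "The Matrix' OR '1'='1"
row = conn.execute(
    "SELECT imdb_id FROM imdb_metadata WHERE title = ?", (title,)
).fetchone()
print(row)  # -> None
```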
@@ -134,9 +134,9 @@ public class DapperDataStorage(PostgresConfiguration configuration, RabbitMqConf
     const string query =
         """
         INSERT INTO "torrents"
-            ("infoHash", "provider", "torrentId", "title", "size", "type", "uploadDate", "seeders", "trackers", "languages", "resolution", "reviewed", "opened", "createdAt", "updatedAt")
+            ("infoHash", "ingestedTorrentId", "provider", "title", "size", "type", "uploadDate", "seeders", "languages", "resolution", "reviewed", "opened", "createdAt", "updatedAt")
         VALUES
-            (@InfoHash, @Provider, @TorrentId, @Title, 0, @Type, NOW(), @Seeders, NULL, NULL, NULL, false, false, NOW(), NOW())
+            (@InfoHash, @IngestedTorrentId, @Provider, @Title, 0, @Type, NOW(), @Seeders, NULL, NULL, false, false, NOW(), NOW())
         ON CONFLICT ("infoHash") DO NOTHING
         """;
@@ -167,12 +167,7 @@ public class DapperDataStorage(PostgresConfiguration configuration, RabbitMqConf
         INSERT INTO subtitles
             ("infoHash", "fileIndex", "fileId", "title")
         VALUES
-            (@InfoHash, @FileIndex, @FileId, @Title)
-        ON CONFLICT
-            ("infoHash", "fileIndex")
-        DO UPDATE SET
-            "fileId" = COALESCE(subtitles."fileId", EXCLUDED."fileId"),
-            "title" = COALESCE(subtitles."title", EXCLUDED."title");
+            (@InfoHash, @FileIndex, @FileId, @Title);
         """;

     await connection.ExecuteAsync(query, subtitles);

View File

@@ -9,7 +9,7 @@ public interface IDataStorage
     Task<DapperResult<PageIngestedResult, PageIngestedResult>> MarkPageAsIngested(string pageId, CancellationToken cancellationToken = default);
     Task<DapperResult<int, int>> GetRowCountImdbMetadata(CancellationToken cancellationToken = default);
     Task<List<ImdbEntry>> GetImdbEntriesForRequests(int year, int batchSize, string? stateLastProcessedImdbId, CancellationToken cancellationToken = default);
-    Task<List<ImdbEntry>> FindImdbMetadata(string? parsedTorrentTitle, TorrentType parsedTorrentTorrentType, string? parsedTorrentYear, CancellationToken cancellationToken = default);
+    Task<ImdbEntry?> FindImdbMetadata(string? parsedTorrentTitle, string parsedTorrentTorrentType, int? parsedTorrentYear, CancellationToken cancellationToken = default);
     Task InsertTorrent(Torrent torrent, CancellationToken cancellationToken = default);
     Task InsertFiles(IEnumerable<TorrentFile> files, CancellationToken cancellationToken = default);
     Task InsertSubtitles(IEnumerable<SubtitleFile> subtitles, CancellationToken cancellationToken = default);

View File

@@ -1,4 +1,3 @@
-using Microsoft.Extensions.DependencyInjection;
 using Microsoft.Extensions.DependencyInjection.Extensions;

 namespace SharedContracts.Extensions;

View File

@@ -0,0 +1,14 @@
namespace SharedContracts.Extensions;
public static class JsonExtensions
{
private static readonly JsonSerializerOptions JsonSerializerOptions = new()
{
PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
WriteIndented = false,
ReferenceHandler = ReferenceHandler.IgnoreCycles,
NumberHandling = JsonNumberHandling.Strict,
};
public static string AsJson<T>(this T obj) => JsonSerializer.Serialize(obj, JsonSerializerOptions);
}
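Editor's note: the `AsJson` helper above serialises with camel-case property names and no indentation. As a rough cross-check of what that policy produces for simple PascalCase names, here is a stdlib Python stand-in (hypothetical `as_json` helper, not part of the project — .NET's `JsonNamingPolicy.CamelCase` also has special handling for acronym runs that this sketch ignores):

```python
import json

def as_json(obj: dict) -> str:
    # Lower-case the first letter of each key, the way CamelCase policy
    # treats simple PascalCase names; compact separators mirror
    # WriteIndented = false.
    camel = {k[:1].lower() + k[1:]: v for k, v in obj.items()}
    return json.dumps(camel, separators=(",", ":"))

print(as_json({"RawTitle": "Show.S01E02.720p", "Seeders": 12, "Processed": True}))
# -> {"rawTitle":"Show.S01E02.720p","seeders":12,"processed":true}
```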

View File

@@ -1,5 +1,3 @@
using System.Text.RegularExpressions;
namespace SharedContracts.Extensions; namespace SharedContracts.Extensions;
public static partial class StringExtensions public static partial class StringExtensions

View File

@@ -1,16 +1,19 @@
 // Global using directives

 global using System.Text.Json;
+global using System.Text.Json.Serialization;
+global using System.Text.RegularExpressions;
 global using Dapper;
 global using MassTransit;
 global using Microsoft.AspNetCore.Builder;
 global using Microsoft.AspNetCore.Hosting;
 global using Microsoft.Extensions.Configuration;
+global using Microsoft.Extensions.DependencyInjection;
 global using Microsoft.Extensions.Hosting;
 global using Microsoft.Extensions.Logging;
 global using Npgsql;
-global using PromKnight.ParseTorrentTitle;
+global using Python.Runtime;
 global using Serilog;
 global using SharedContracts.Configuration;
 global using SharedContracts.Extensions;
 global using SharedContracts.Models;

View File

@@ -7,4 +7,5 @@ public class ImdbEntry
     public string? Category { get; set; }
     public string? Year { get; set; }
     public bool? Adult { get; set; }
+    public decimal? Score { get; set; }
 }

View File

@@ -12,7 +12,9 @@ public class IngestedTorrent
     public int Leechers { get; set; }
     public string? Imdb { get; set; }
-    public bool Processed { get; set; } = false;
+    public bool Processed { get; set; }
     public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
     public DateTime UpdatedAt { get; set; } = DateTime.UtcNow;
+    public string? RtnResponse { get; set; }
 }

View File

@@ -3,6 +3,7 @@ namespace SharedContracts.Models;
 public class Torrent
 {
     public string? InfoHash { get; set; }
+    public long? IngestedTorrentId { get; set; }
     public string? Provider { get; set; }
     public string? TorrentId { get; set; }
     public string? Title { get; set; }

View File

@@ -0,0 +1,13 @@
namespace SharedContracts.Python;
public interface IPythonEngineService
{
ILogger<PythonEngineService> Logger { get; }
Task InitializePythonEngine(CancellationToken cancellationToken);
T ExecuteCommandOrScript<T>(string command, PyModule module, bool throwOnErrors);
T ExecutePythonOperation<T>(Func<T> operation, string operationName, bool throwOnErrors);
T ExecutePythonOperationWithDefault<T>(Func<T> operation, T? defaultValue, string operationName, bool throwOnErrors, bool logErrors);
Task StopPythonEngine(CancellationToken cancellationToken);
dynamic? Sys { get; }
}

View File

@@ -0,0 +1,8 @@
namespace SharedContracts.Python;
public class PythonEngineManager(IPythonEngineService pythonEngineService) : IHostedService
{
public Task StartAsync(CancellationToken cancellationToken) => pythonEngineService.InitializePythonEngine(cancellationToken);
public Task StopAsync(CancellationToken cancellationToken) => pythonEngineService.StopPythonEngine(cancellationToken);
}

View File

@@ -0,0 +1,124 @@
namespace SharedContracts.Python;
public class PythonEngineService(ILogger<PythonEngineService> logger) : IPythonEngineService
{
private IntPtr _mainThreadState;
private bool _isInitialized;
public ILogger<PythonEngineService> Logger { get; } = logger;
public dynamic? Sys { get; private set; }
public Task InitializePythonEngine(CancellationToken cancellationToken)
{
if (_isInitialized)
{
return Task.CompletedTask;
}
try
{
var pythonDllEnv = Environment.GetEnvironmentVariable("PYTHONNET_PYDLL");
if (string.IsNullOrWhiteSpace(pythonDllEnv))
{
Logger.LogWarning("PYTHONNET_PYDLL env is not set. Exiting Application");
Environment.Exit(1);
return Task.CompletedTask;
}
Runtime.PythonDLL = pythonDllEnv;
PythonEngine.Initialize();
_mainThreadState = PythonEngine.BeginAllowThreads();
_isInitialized = true;
Logger.LogInformation("Python engine initialized");
}
catch (Exception e)
{
Logger.LogError(e, $"Failed to initialize Python engine: {e.Message}");
Environment.Exit(1);
}
return Task.CompletedTask;
}
public T ExecuteCommandOrScript<T>(string command, PyModule module, bool throwOnErrors) =>
ExecutePythonOperation(
() =>
{
var pyCompile = PythonEngine.Compile(command);
var nativeResult = module.Execute(pyCompile);
return nativeResult.As<T>();
}, nameof(ExecuteCommandOrScript), throwOnErrors);
public T ExecutePythonOperation<T>(Func<T> operation, string operationName, bool throwOnErrors) =>
ExecutePythonOperationWithDefault(operation, default, operationName, throwOnErrors, true);
public T ExecutePythonOperationWithDefault<T>(Func<T> operation, T? defaultValue, string operationName, bool throwOnErrors, bool logErrors) =>
ExecutePythonOperationInternal(operation, defaultValue, operationName, throwOnErrors, logErrors);
public void ExecuteOnGIL(Action act, bool throwOnErrors)
{
Sys ??= LoadSys();
try
{
using var gil = Py.GIL();
act();
}
catch (Exception ex)
{
Logger.LogError(ex, "Python Error: {Message} ({OperationName})", ex.Message, nameof(ExecuteOnGIL));
if (throwOnErrors)
{
throw;
}
}
}
public Task StopPythonEngine(CancellationToken cancellationToken)
{
PythonEngine.EndAllowThreads(_mainThreadState);
PythonEngine.Shutdown();
return Task.CompletedTask;
}
private static dynamic LoadSys()
{
using var gil = Py.GIL();
var sys = Py.Import("sys");
return sys;
}
// ReSharper disable once EntityNameCapturedOnly.Local
private T ExecutePythonOperationInternal<T>(Func<T> operation, T? defaultValue, string operationName, bool throwOnErrors, bool logErrors)
{
Sys ??= LoadSys();
var result = defaultValue;
try
{
using var gil = Py.GIL();
result = operation();
}
catch (Exception ex)
{
if (logErrors)
{
Logger.LogError(ex, "Python Error: {Message} ({OperationName})", ex.Message, operationName);
}
if (throwOnErrors)
{
throw;
}
}
return result;
}
}

View File

@@ -0,0 +1,6 @@
namespace SharedContracts.Python.RTN;
public interface IRankTorrentName
{
ParseTorrentTitleResponse Parse(string title, bool trashGarbage = true);
}

View File

@@ -0,0 +1,3 @@
namespace SharedContracts.Python.RTN;
public record ParseTorrentTitleResponse(bool Success, RtnResponse? Response);

View File

@@ -0,0 +1,59 @@
namespace SharedContracts.Python.RTN;
public class RankTorrentName : IRankTorrentName
{
private readonly IPythonEngineService _pythonEngineService;
private const string RtnModuleName = "RTN";
private dynamic? _rtn;
public RankTorrentName(IPythonEngineService pythonEngineService)
{
_pythonEngineService = pythonEngineService;
InitModules();
}
public ParseTorrentTitleResponse Parse(string title, bool trashGarbage = true) =>
_pythonEngineService.ExecutePythonOperationWithDefault(
() =>
{
var result = _rtn?.parse(title, trashGarbage);
return ParseResult(result);
}, new ParseTorrentTitleResponse(false, null), nameof(Parse), throwOnErrors: false, logErrors: false);
private static ParseTorrentTitleResponse ParseResult(dynamic result)
{
if (result == null)
{
return new(false, null);
}
var json = result.model_dump_json()?.As<string?>();
if (string.IsNullOrEmpty(json))
{
return new(false, null);
}
var mediaType = result.GetAttr("type")?.As<string>();
if (string.IsNullOrEmpty(mediaType))
{
return new(false, null);
}
var response = JsonSerializer.Deserialize<RtnResponse>(json);
if (response is null) return new(false, null);
response.IsMovie = mediaType.Equals("movie", StringComparison.OrdinalIgnoreCase);
return new(true, response);
}
private void InitModules() =>
_rtn =
_pythonEngineService.ExecutePythonOperation(() =>
{
_pythonEngineService.Sys.path.append(Path.Combine(AppContext.BaseDirectory, "python"));
return Py.Import(RtnModuleName);
}, nameof(InitModules), throwOnErrors: false);
}

View File

@@ -0,0 +1,83 @@
namespace SharedContracts.Python.RTN;
public class RtnResponse
{
[JsonPropertyName("raw_title")]
public string? RawTitle { get; set; }
[JsonPropertyName("parsed_title")]
public string? ParsedTitle { get; set; }
[JsonPropertyName("fetch")]
public bool Fetch { get; set; }
[JsonPropertyName("is_4k")]
public bool Is4K { get; set; }
[JsonPropertyName("is_multi_audio")]
public bool IsMultiAudio { get; set; }
[JsonPropertyName("is_multi_subtitle")]
public bool IsMultiSubtitle { get; set; }
[JsonPropertyName("is_complete")]
public bool IsComplete { get; set; }
[JsonPropertyName("year")]
public int Year { get; set; }
[JsonPropertyName("resolution")]
public List<string>? Resolution { get; set; }
[JsonPropertyName("quality")]
public List<string>? Quality { get; set; }
[JsonPropertyName("season")]
public List<int>? Season { get; set; }
[JsonPropertyName("episode")]
public List<int>? Episode { get; set; }
[JsonPropertyName("codec")]
public List<string>? Codec { get; set; }
[JsonPropertyName("audio")]
public List<string>? Audio { get; set; }
[JsonPropertyName("subtitles")]
public List<string>? Subtitles { get; set; }
[JsonPropertyName("language")]
public List<string>? Language { get; set; }
[JsonPropertyName("bit_depth")]
public List<int>? BitDepth { get; set; }
[JsonPropertyName("hdr")]
public string? Hdr { get; set; }
[JsonPropertyName("proper")]
public bool Proper { get; set; }
[JsonPropertyName("repack")]
public bool Repack { get; set; }
[JsonPropertyName("remux")]
public bool Remux { get; set; }
[JsonPropertyName("upscaled")]
public bool Upscaled { get; set; }
[JsonPropertyName("remastered")]
public bool Remastered { get; set; }
[JsonPropertyName("directors_cut")]
public bool DirectorsCut { get; set; }
[JsonPropertyName("extended")]
public bool Extended { get; set; }
public bool IsMovie { get; set; }
public string ToJson() => this.AsJson();
}
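Editor's note: the JSON hand-off that `ParseResult` performs — Pydantic dump on the Python side, `JsonSerializer.Deserialize` plus the `type`-derived `IsMovie` flag on the C# side — can be sketched with the stdlib alone. The payload below is a hypothetical excerpt shaped like the snake_case keys declared via `[JsonPropertyName]` above; the dataclass is an illustrative stand-in, not the project's model:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class RtnResponse:
    raw_title: str = ""
    parsed_title: str = ""
    year: int = 0
    fetch: bool = False
    is_movie: bool = False  # derived from the payload's "type", as in ParseResult

payload = json.loads(
    '{"raw_title": "Movie.2019.1080p.BluRay", "parsed_title": "Movie", '
    '"year": 2019, "fetch": true, "type": "movie"}'
)
known = {f.name for f in fields(RtnResponse)}  # ignore keys the model lacks
resp = RtnResponse(**{k: v for k, v in payload.items() if k in known})
resp.is_movie = payload.get("type", "").lower() == "movie"
```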

View File

@@ -0,0 +1,12 @@
namespace SharedContracts.Python;
public static class ServiceCollectionExtensions
{
public static IServiceCollection RegisterPythonEngine(this IServiceCollection services)
{
services.AddSingleton<IPythonEngineService, PythonEngineService>();
services.AddHostedService<PythonEngineManager>();
return services;
}
}

View File

@@ -16,7 +16,7 @@
     <PackageReference Include="MassTransit.Abstractions" Version="8.2.0" />
     <PackageReference Include="MassTransit.RabbitMQ" Version="8.2.0" />
     <PackageReference Include="Npgsql" Version="8.0.2" />
-    <PackageReference Include="PromKnight.ParseTorrentTitle" Version="1.0.4" />
+    <PackageReference Include="pythonnet" Version="3.0.3" />
     <PackageReference Include="Serilog" Version="3.1.1" />
     <PackageReference Include="Serilog.Extensions.Hosting" Version="8.0.0" />
     <PackageReference Include="Serilog.Settings.Configuration" Version="8.0.0" />

View File

@@ -82,11 +82,4 @@ public static class ServiceCollectionExtensions
         x.AddConsumer<PerformIngestionConsumer>();
     }

-    internal static IServiceCollection AddServiceConfiguration(this IServiceCollection services)
-    {
-        services.AddSingleton<IParseTorrentTitle, ParseTorrentTitle>();
-        return services;
-    }
 }

View File

@@ -11,6 +11,7 @@ public class PerformIngestionConsumer(IDataStorage dataStorage, ILogger<PerformI
     var torrent = new Torrent
     {
         InfoHash = request.IngestedTorrent.InfoHash.ToLowerInvariant(),
+        IngestedTorrentId = request.IngestedTorrent.Id,
         Provider = request.IngestedTorrent.Source,
         Title = request.IngestedTorrent.Name,
         Type = request.IngestedTorrent.Category,

View File

@@ -5,7 +5,6 @@ global using MassTransit;
 global using MassTransit.Mediator;
 global using Microsoft.AspNetCore.Builder;
 global using Microsoft.Extensions.DependencyInjection;
-global using PromKnight.ParseTorrentTitle;
 global using SharedContracts.Configuration;
 global using SharedContracts.Dapper;
 global using SharedContracts.Extensions;

View File

@@ -10,7 +10,6 @@ builder.Host
 builder.Services
     .RegisterMassTransit()
-    .AddServiceConfiguration()
     .AddDatabase();

 var app = builder.Build();

View File

@@ -16,7 +16,6 @@
     <PackageReference Include="Microsoft.Extensions.Http" Version="8.0.0" />
     <PackageReference Include="Microsoft.Extensions.Http.Polly" Version="8.0.3" />
     <PackageReference Include="Polly" Version="8.3.1" />
-    <PackageReference Include="PromKnight.ParseTorrentTitle" Version="1.0.4" />
     <PackageReference Include="Serilog" Version="3.1.1" />
     <PackageReference Include="Serilog.AspNetCore" Version="8.0.1" />
     <PackageReference Include="Serilog.Sinks.Console" Version="5.0.1" />

View File

@@ -0,0 +1 @@
Dockerfile

View File

@@ -0,0 +1,7 @@
#!/bin/bash
layout python
if ! has poetry; then
pip install poetry
fi

View File

@@ -0,0 +1 @@
3.11

Some files were not shown because too many files have changed in this diff.