45 Commits

Author SHA1 Message Date
iPromKnight
ad9549c695 Version bump for release (#208) 2024-04-22 12:46:02 +01:00
David Young
1e85cb00ff INNER JOIN when selecting files and torrents to avoid null results (#207)
* INNER JOIN when selecting files and torrents to avoid null results

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

* Extend fix to all torrent types

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

---------

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2024-04-22 12:43:57 +01:00
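The fix corresponds to the Sequelize change visible in the diff at the bottom of this page: passing `required: true` on an include makes Sequelize emit an INNER JOIN instead of its default LEFT OUTER JOIN, so rows with no matching torrent are filtered out rather than returned with null fields. A minimal sketch, assuming the addon's `File` and `Torrent` models and an `imdbId` already in scope:

```js
// Sketch of the #207 fix. With required: true, Sequelize emits an INNER JOIN,
// so files whose torrent row is missing never reach the addon's results.
File.findAll({
  where: { imdbId },
  include: { model: Torrent, required: true }, // was: include: [Torrent] (LEFT OUTER JOIN)
  limit: 500,
  order: [[Torrent, 'size', 'DESC']],
});
```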
iPromKnight
da640a4071 Fix namespaces on extracted scraper info (#204)
* Fix namespaces on extracted scrapers

* version bump
2024-04-11 18:56:29 +01:00
iPromKnight
e6a63fd72e Allow configuration of producer urls (#203)
* Allow configuration of URLs in scrapers by mounting a scrapers.json file over the one in the container

* version bump
2024-04-11 18:23:42 +01:00
iPromKnight
02101ac50a Allow qbit concurrency to be configurable (#200) 2024-04-11 18:02:29 +01:00
iPromKnight
3c8ffd5082 Fix Duplicates (#199)
* Fix Duplicates

* Version
2024-04-02 20:31:22 +01:00
iPromKnight
79e0a0f102 DMM Offline (#198)
* Process DMM all locally

Single call to GitHub to download the repo archive.
Removes the need for a PAT.
Updates RTN to 0.2.13.
Switches to RTN's batch_parse for title parsing.

* introduce concurrent dictionary, and parallelism
2024-04-02 17:01:22 +01:00
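For context on the "single call to GitHub" note: a public repository can be downloaded as one archive with no authentication, which is what removes the PAT requirement here. A hedged Node sketch; OWNER/REPO and the branch are placeholders, not the actual DMM hash-list repository:

```js
// Fetch a public repo as a single tarball - no GitHub PAT required.
import { createWriteStream } from 'node:fs';
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';

const res = await fetch('https://github.com/OWNER/REPO/archive/refs/heads/main.tar.gz');
if (!res.ok) throw new Error(`archive download failed: ${res.status}`);
await pipeline(Readable.fromWeb(res.body), createWriteStream('repo.tar.gz'));
```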
purple_emily
6181207513 Fix incorrect file index stored (#197)
* Fix incorrect file index stored

* Update `rank-torrent-name` to latest version

* Knight Crawler version update
2024-04-01 23:08:32 +01:00
iPromKnight
684dbba2f0 RTN-025 and title category parsing (#195)
* update rtn to 025

* Implement movie / show type parsing

* switch to RTN in collectors

* ensure env for pythonnet is loaded, and that requirements copy for qbit

* version bump
2024-03-31 22:01:09 +01:00
iPromKnight
c75ecd2707 add qbit housekeeping service to remove stale torrents (#193)
* Add housekeeping service to clean stale torrents

* version bump
2024-03-30 11:52:23 +00:00
iPromKnight
c493ef3376 Hotfix category, and roll back RTN to 0.1.8 (#192)
* Hotfix categories

Also roll back RTN to 0.1.8, as a regression was introduced in 0.2

* bump version
2024-03-30 04:47:36 +00:00
iPromKnight
655a39e35c patch the query with execute (#191) 2024-03-30 01:54:06 +00:00
iPromKnight
cfeee62f6b patch ratio (#190)
* add configurable threshold, default 0.95

* version bump
2024-03-30 01:43:21 +00:00
iPromKnight
c6d4c06d70 hotfix categories from imdb result instead (#189)
* category mapping from imdb

* version bump
2024-03-30 01:26:02 +00:00
iPromKnight
08639a3254 Patch isMovie (#188)
* fix is movie

* version bump
2024-03-30 00:28:35 +00:00
iPromKnight
d430850749 Patch message contract names (#187)
* ensure unique message contract names per collector type

* version bump
2024-03-30 00:09:13 +00:00
iPromKnight
82c0ea459b change qbittorrent settings (#186) 2024-03-29 23:35:27 +00:00
iPromKnight
1e83b4c5d8 Patch the addon (#185) 2024-03-29 19:08:17 +00:00
iPromKnight
66609c2a46 trigram performance increased and housekeeping (#184)
* add new indexes, and change year column to int

* Change gist to gin, and change year to int

* Producer changes for new gin query

* Fully map the rtn response using json dump from Pydantic

Also updates Rtn to 0.1.9

* Add housekeeping script to reconcile imdb ids.

* Join Torrent onto the ingested torrent table

Ensure that a torrent can always find the details of where it came from, and how it was parsed.

* Version bump for release

* missing quote on table name
2024-03-29 19:01:48 +00:00
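On the "gist to gin" change: both index types support pg_trgm trigram operators, but GIN is generally the faster of the two to query, at the cost of slower builds, which suits a read-heavy title search. An illustrative sketch issued from Node with the `pg` client; the table and column names are placeholders, not the migration's actual identifiers:

```js
import pg from 'pg';

const client = new pg.Client({ connectionString: process.env.DATABASE_URL });
await client.connect();
await client.query('CREATE EXTENSION IF NOT EXISTS pg_trgm');
// GIN + gin_trgm_ops accelerates trigram similarity searches over titles.
await client.query(`
  CREATE INDEX IF NOT EXISTS imdb_title_trgm_idx
  ON imdb_metadata USING gin (title gin_trgm_ops)
`);
await client.end();
```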
iPromKnight
2d78dc2735 version bump for release (#183) 2024-03-28 23:37:35 +00:00
iPromKnight
527d6cdf15 Upgrade RTN to 0.1.8, replace rabbitmq with drop-in replacement lavinmq - better performance, lower resource usage. (#182) 2024-03-28 23:35:41 +00:00
iPromKnight
bb260d78d6 Address Issues in build (#180)
- CIS-DI-0001
- CIS-DI-0006
- CIS-DI-0008
- DKL-LI-0003

(These are checkpoint IDs from the Dockle container-image linter.)
2024-03-28 10:47:13 +00:00
iPromKnight
baec0450bf Hotfix ingestor github flow, and move to top level src folder (folder per service) (#179) 2024-03-28 10:20:26 +00:00
iPromKnight
4308a0ee71 [wip] bridge python and c# and bring in rank torrent name (#177)
* [wip] bridge python and c# and bring in rank torrent name

* Container restores package now

Includes two dev scripts to install the python packages locally for debugging purposes.

* Introduce slightly tuned title-matching scoring, by making it length aware

this should help with sequels, such as Terminator 2 vs. Terminator

* Version bump

Also fixes postgres healthcheck so that it utilises the user from the stack.env file
2024-03-28 10:13:50 +00:00
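The "length aware" scoring lives in the C# producer; the sketch below is only an approximation of the idea under stated assumptions: dampen a plain similarity ratio by the relative length difference, so a "Terminator 2" query no longer treats "Terminator" as a near-perfect match.

```js
// Illustrative length-aware title score (not the producer's exact formula).
function bigrams(s) {
  const grams = new Set();
  for (let i = 0; i < s.length - 1; i++) grams.add(s.slice(i, i + 2));
  return grams;
}

function lengthAwareScore(query, candidate) {
  const a = bigrams(query.toLowerCase());
  const b = bigrams(candidate.toLowerCase());
  let shared = 0;
  for (const gram of a) if (b.has(gram)) shared++;
  const dice = (2 * shared) / (a.size + b.size || 1);
  // Penalise mismatched lengths so short prefixes stop winning outright.
  const penalty = Math.abs(query.length - candidate.length) /
                  Math.max(query.length, candidate.length);
  return dice * (1 - penalty);
}

lengthAwareScore('Terminator 2', 'Terminator');   // ~0.75
lengthAwareScore('Terminator 2', 'Terminator 2'); // 1.0
```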
RohirrimRider
cc15a69517 fix torrent_ingestor ci (#178) 2024-03-27 21:38:59 -05:00
RohirrimRider
a6d3a4a066 init ingest torrents from annatar (#157)
* init ingest torrents from annatar

* works

* mv annatar to src/

* done

* add ci and readme

---------

Co-authored-by: Brett <eruiluvatar@pnbx.xyz>
2024-03-27 21:35:03 -05:00
iPromKnight
9430704205 rename commited .env file to stack.env (#176) 2024-03-27 12:57:14 +00:00
iPromKnight
6cc857bdc3 rename .env to stack.env (#175) 2024-03-27 12:37:11 +00:00
iPromKnight
cc2adbfca5 Recreate single docker-compose file (#174)
Clean it up - also comment all services
2024-03-27 12:21:40 +00:00
iPromKnight
9f928f9b66 Allow trackers url to be configurable + version bump (#173)
This allows people to use only the UDP collection, only the TCP collection, or all of them.
2024-03-26 12:17:47 +00:00
iPromKnight
a50b5071b3 key prefixes per collector (#172)
* Ensure the collectors manage sagas in their own keyspace, as we do not want overlap (they have the same correlation ids internally from the exchange)

* version bump
2024-03-26 11:56:14 +00:00
iPromKnight
72db18f0ad add missing env (#171)
* add missing env

* version bump
2024-03-26 11:16:21 +00:00
iPromKnight
d70cef1b86 addon fix (#170)
* addon fix

* version bump
2024-03-26 10:25:43 +00:00
iPromKnight
e1e718cd22 includes qbit collector fix (#169) 2024-03-26 10:17:04 +00:00
iPromKnight
c3e58e4234 Fix redis connection strings for consistency across languages. (#168)
* Fix redis connection strings across languages

* compose version bump
2024-03-26 09:26:35 +00:00
iPromKnight
d584102d60 image updates for patched release (#167) 2024-03-26 00:27:54 +00:00
iPromKnight
fe4bb59502 fix indenting on env file (#166)
* fix images :/

* fix indenting
2024-03-26 00:22:33 +00:00
iPromKnight
472b3342d5 fix images :/ (#165) 2024-03-26 00:01:59 +00:00
iPromKnight
b035ef596b change tag glob (#164) 2024-03-25 23:41:58 +00:00
iPromKnight
9a831e92d0 Producer / Consumer / Collector rewrite (#160)
* Converted metadata service to redis

* move to postgres instead

* fix global usings

* [skip ci] optimize wolverine by prebuilding static types

* [skip ci] Stop indexing mac folder indexes

* [skip ci] producer, metadata and migrations

removed mongodb
added redis cache
imdb meta in postgres
Enable pgtrm
Create trigrams index
Add search meta postgres function

* [skip ci] get rid of node folder, replace mongo with redis in consumer

also wire up postgres metadata searches

* [skip ci] change mongo to redis in the addon

* [skip ci] jackettio to redis

* Rest of mongo removed...

* Cleaner rerunning of metadata - without conflicts

* Add akas import as well as basic metadata

* Include episodes file too

* cascade truncate pre-import

* reverse order to avoid cascading

* separate out clean to separate handler

* Switch producer to use metadata matching when pre-processing DMM

* More work

* Still porting PTN

* PTN port, adding tests

* [skip ci] Codec tests

* [skip ci] Complete Collection handler tests

* [skip ci] container tests

* [skip ci] Convert handlers tests

* [skip ci] DateHandler tests

* [skip ci] Dual Audio matching tests

* [skip ci] episode code tests

* [skip ci] Extended handler tests

* [skip ci] group handler tests

* [skip ci] some broken stuff right now

* [skip ci] more ptn

* [skip ci] PTN now in a separate NuGet package, rebased this on the redis changes - I need them.

* [skip ci] Wire up PTN port. Tired - will test tomorrow

* [skip ci] Needs a lot of work - too many titles being missed now

* cleaner. done?

* Handle the date in the imdb search

- add integer function to confirm it's a valid integer
- use the input date as a range of ±1 year

* [skip ci] Start of collector service for RD

[skip ci] WIP

Implemented metadata saga, along with channels to process up to a maximum of 100 infohashes at a time.
The saga will retry each infohash by requeuing it up to three times, before marking it as complete - meaning no data will be updated in the DB for that torrent.

[skip ci] Ready to test with queue publishing

Will provision a fanout exchange if it doesn't exist, and create and bind a queue to it. Listens to the queue with 50 prefetch count.
Still needs the PTN rewrite brought in to parse the filename response from Real-Debrid, and to extract season and episode numbers if the file is a TV show

[skip ci] Add Debrid Collector Build Job

Debrid Collector ready for testing

New consumer, new collector; producer has meta lookup and anti-porn measures

[skip ci] WIP - moving from wolverine to MassTransit.

Not happy that Wolverine cannot effectively control saga concurrency; we really need that.

[skip ci] Producer and new Consumer moved to MassTransit

Just the debrid collector to go now, then to write the optional qbit collector.

Collector now switched to mass transit too

hide porn titles in logs, clean up cache name in redis for imdb titles

[skip ci] Allow control of queues

[skip ci] Update deployment

Remove old consumer, fix deployment files, fix dockerfiles for shared project import

fix base deployment

* Add collector missing env var

* edits to kick off builds

* Add optional qbit deployment which qbit collector will use

* Qbit collector done

* reorder compose, and bring both qbit and qbitcollector into the compose, with 0 replicas as default

* Clean up compose file

* Ensure debrid collector errors if no debrid api key
2024-03-25 23:32:28 +00:00
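The retry rule this commit describes (requeue an infohash up to three times, then mark the saga complete with no DB write) is implemented as a MassTransit saga in C#; what follows is only a hedged JavaScript sketch of that control flow, with `downloadMetadata` standing in for the real work.

```js
// Sketch of the saga's requeue-then-give-up rule (real code: C# + MassTransit).
const MAX_ATTEMPTS = 3;

async function downloadMetadata(infoHash) {
  // stand-in for the real metadata download; throws on failure
}

async function handleInfoHash(message, queue, markComplete) {
  const attempt = (message.attempt ?? 0) + 1;
  try {
    await downloadMetadata(message.infoHash);
    await markComplete(message.infoHash);
  } catch {
    if (attempt >= MAX_ATTEMPTS) {
      // Give up: the saga completes and nothing is written to the DB for this torrent.
      await markComplete(message.infoHash);
    } else {
      queue.push({ ...message, attempt }); // requeue for another try
    }
  }
}
```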
iPromKnight
9c6c1ac249 Update compose versions 1.0.1, ready for tag push (#163) 2024-03-25 20:38:11 +00:00
iPromKnight
0ddfac57f7 Build on Tag Pushes. (#162)
* enable tag and pr builds

* Build on Tag Pushes
2024-03-25 20:27:37 +00:00
iPromKnight
9fbd750cd2 enable tag and pr builds (#161) 2024-03-25 20:24:14 +00:00
Knight Crawler
5fc2027cfa Option to manually trigger each workflow (#159)
Co-authored-by: purple_emily <purple_emily@protonmail.com>
2024-03-20 20:26:32 +00:00
purple_emily
2d39476c65 Push dev builds & ready to tag semver (#153) 2024-03-14 14:27:19 +00:00
493 changed files with 6570 additions and 476514 deletions

View File

@@ -6,12 +6,16 @@ on:
CONTEXT:
required: true
type: string
DOCKERFILE:
required: true
type: string
IMAGE_NAME:
required: true
type: string
env:
CONTEXT: ${{ inputs.CONTEXT }}
DOCKERFILE: ${{ inputs.DOCKERFILE }}
IMAGE_NAME: ${{ inputs.IMAGE_NAME }}
PLATFORMS: linux/amd64,linux/arm64
@@ -21,11 +25,13 @@ jobs:
steps:
- name: Setting variables
run: |
echo "CONTEXT=${{ env.CONTEXT }}
echo "IMAGE_NAME=${{ env.IMAGE_NAME }}
echo "CONTEXT=${{ env.CONTEXT }}"
echo "DOCKERFILE=${{ env.DOCKERFILE }}"
echo "IMAGE_NAME=${{ env.IMAGE_NAME }}"
echo "PLATFORMS=${{ env.PLATFORMS }}"
outputs:
CONTEXT: ${{ env.CONTEXT }}
DOCKERFILE: ${{ env.DOCKERFILE }}
IMAGE_NAME: ${{ env.IMAGE_NAME }}
PLATFORMS: ${{ env.PLATFORMS }}
@@ -70,14 +76,17 @@ jobs:
flavor: |
latest=auto
tags: |
type=edge,branch=master,commit=${{ github.sha }}
type=ref,event=tag
type=ref,event=pr
type=sha,commit=${{ github.sha }}
type=semver,pattern={{version}}
type=raw,value=latest,enable={{is_default_branch}}
- name: Build image for scanning
uses: docker/build-push-action@v5
with:
context: ${{ needs.set-vars.outputs.CONTEXT }}
file: ${{ needs.set-vars.outputs.DOCKERFILE }}
push: true
provenance: false
tags: localhost:5000/dockle-examine-image:test
@@ -130,10 +139,11 @@ jobs:
sarif_file: 'trivy-results-os.sarif'
- name: Push Service Image to repo
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/master'
# if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/master'
uses: docker/build-push-action@v5
with:
context: ${{ needs.set-vars.outputs.CONTEXT }}
file: ${{ needs.set-vars.outputs.DOCKERFILE }}
push: true
provenance: false
tags: ${{ steps.docker-metadata.outputs.tags }}

View File

@@ -2,13 +2,17 @@ name: Build and Push Addon Service
on:
push:
tags:
- '**'
paths:
- 'src/node/addon/**'
- 'src/addon/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/node/addon/
CONTEXT: ./src/addon/
DOCKERFILE: ./src/addon/Dockerfile
IMAGE_NAME: knightcrawler-addon

View File

@@ -2,13 +2,17 @@ name: Build and Push Consumer Service
on:
push:
tags:
- '**'
paths:
- 'src/node/consumer/**'
- 'src/torrent-consumer/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/node/consumer/
CONTEXT: ./src/
DOCKERFILE: ./src/torrent-consumer/Dockerfile
IMAGE_NAME: knightcrawler-consumer

View File

@@ -0,0 +1,18 @@
name: Build and Push Debrid Collector Service
on:
push:
tags:
- '**'
paths:
- 'src/debrid-collector/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/
DOCKERFILE: ./src/debrid-collector/Dockerfile
IMAGE_NAME: knightcrawler-debrid-collector

View File

@@ -2,13 +2,17 @@ name: Build and Push Jackett Addon Service
on:
push:
tags:
- '**'
paths:
- 'src/node/addon-jackett/**'
- 'src/addon-jackett/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/node/addon-jackett/
CONTEXT: ./src/addon-jackett/
DOCKERFILE: ./src/addon-jackett/Dockerfile
IMAGE_NAME: knightcrawler-addon-jackett

View File

@@ -2,8 +2,11 @@ name: Build and Push Metadata Service
on:
push:
tags:
- '**'
paths:
- 'src/metadata/**'
workflow_dispatch:
jobs:
process:
@@ -11,4 +14,5 @@ jobs:
secrets: inherit
with:
CONTEXT: ./src/metadata/
DOCKERFILE: ./src/metadata/Dockerfile
IMAGE_NAME: knightcrawler-metadata

View File

@@ -2,8 +2,11 @@ name: Build and Push Migrator Service
on:
push:
tags:
- '**'
paths:
- 'src/migrator/**'
workflow_dispatch:
jobs:
process:
@@ -11,4 +14,5 @@ jobs:
secrets: inherit
with:
CONTEXT: ./src/migrator/
DOCKERFILE: ./src/migrator/Dockerfile
IMAGE_NAME: knightcrawler-migrator

View File

@@ -2,13 +2,17 @@ name: Build and Push Producer Service
on:
push:
tags:
- '**'
paths:
- 'src/producer/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/producer/
CONTEXT: ./src/
DOCKERFILE: ./src/producer/src/Dockerfile
IMAGE_NAME: knightcrawler-producer

View File

@@ -0,0 +1,18 @@
name: Build and Push Qbit Collector Service
on:
push:
tags:
- '**'
paths:
- 'src/qbit-collector/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/
DOCKERFILE: ./src/qbit-collector/Dockerfile
IMAGE_NAME: knightcrawler-qbit-collector

View File

@@ -2,8 +2,11 @@ name: Build and Push Tissue Service
on:
push:
tags:
- '**'
paths:
- 'src/tissue/**'
workflow_dispatch:
jobs:
process:
@@ -11,4 +14,5 @@ jobs:
secrets: inherit
with:
CONTEXT: ./src/tissue/
DOCKERFILE: ./src/tissue/Dockerfile
IMAGE_NAME: knightcrawler-tissue

View File

@@ -0,0 +1,15 @@
name: Build and Push Torrent Ingestor Service
on:
push:
paths:
- 'src/torrent-ingestor/**'
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/torrent-ingestor
DOCKERFILE: ./src/torrent-ingestor/Dockerfile
IMAGE_NAME: knightcrawler-torrent-ingestor

.gitignore
View File

@@ -395,8 +395,6 @@ dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
@@ -610,3 +608,11 @@ fabric.properties
# Caddy logs
!**/caddy/logs/.gitkeep
**/caddy/logs/**
# Mac directory indexes
.DS_Store
deployment/docker/stack.env
src/producer/src/python/
src/debrid-collector/python/
src/qbit-collector/python/

View File

@@ -7,9 +7,6 @@
## Contents
> [!CAUTION]
> Until we reach `v1.0.0`, please consider releases as alpha.
> [!IMPORTANT]
> The latest change renames the project and requires a [small migration](#selfhostio-to-knightcrawler-migration).
- [Contents](#contents)
@@ -54,11 +51,11 @@ Download and install [Docker Compose](https://docs.docker.com/compose/install/),
### Environment Setup
Before running the project, you need to set up the environment variables. Copy the `.env.example` file to `.env`:
Before running the project, you need to set up the environment variables. Edit the values in `stack.env`:
```sh
cd deployment/docker
cp .env.example .env
code stack.env
```
Then set any of the values you would like to customize.
@@ -70,9 +67,6 @@ Then set any of the values you wouldd like to customize.
By default, Knight Crawler is configured to be *relatively* conservative in its resource usage. If running on a decent machine (16GB RAM, i5+ or equivalent), you can increase some settings to increase consumer throughput. This is especially helpful if you have a large backlog from [importing databases](#importing-external-dumps).
In your `.env` file, under the `# Consumer` section increase `CONSUMER_REPLICAS` from `3` to `15`.
You can also increase `JOB_CONCURRENCY` from `5` to `10`.
### DebridMediaManager setup (optional)
There are some optional steps you should take to maximise the number of movies/tv shows we can find.
@@ -93,9 +87,9 @@ We can search DebridMediaManager hash lists which are hosted on GitHub. This all
(checked) Public Repositories (read-only)
```
4. Click `Generate token`
5. Take the new token and add it to the bottom of the [.env](deployment/docker/.env) file
5. Take the new token and add it to the bottom of the [stack.env](deployment/docker/stack.env) file
```
GithubSettings__PAT=<YOUR TOKEN HERE>
GITHUB_PAT=<YOUR TOKEN HERE>
```
### Configure external access
@@ -146,7 +140,7 @@ Remove or comment out the port for the addon, and connect it to Caddy:
addon:
<<: *knightcrawler-app
env_file:
- .env
- stack.env
hostname: knightcrawler-addon
image: gabisonfire/knightcrawler-addon:latest
labels:

View File

@@ -1,55 +0,0 @@
# General environment variables
TZ=London/Europe
# PostgreSQL
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=knightcrawler
# MongoDB
MONGODB_HOST=mongodb
MONGODB_PORT=27017
MONGODB_DB=knightcrawler
MONGODB_USER=mongo
MONGODB_PASSWORD=mongo
# RabbitMQ
RABBITMQ_HOST=rabbitmq
RABBITMQ_USER=guest
RABBITMQ_PASSWORD=guest
RABBITMQ_QUEUE_NAME=ingested
RABBITMQ_DURABLE=true
RABBITMQ_MAX_QUEUE_SIZE=0
RABBITMQ_MAX_PUBLISH_BATCH_SIZE=500
RABBITMQ_PUBLISH_INTERVAL_IN_SECONDS=10
# Metadata
## Only used if DATA_ONCE is set to false. If true, the schedule is ignored
METADATA_DOWNLOAD_IMDB_DATA_SCHEDULE="0 0 1 * *"
## If true, the metadata will be downloaded once and then the schedule will be ignored
METADATA_DOWNLOAD_IMDB_DATA_ONCE=true
## Controls the amount of records processed in memory at any given time during import, higher values will consume more memory
METADATA_INSERT_BATCH_SIZE=25000
# Addon
DEBUG_MODE=false
# Consumer
JOB_CONCURRENCY=5
JOBS_ENABLED=true
## can be debug for extra verbosity (a lot more verbosity - useful for development)
LOG_LEVEL=info
MAX_CONNECTIONS_PER_TORRENT=10
MAX_CONNECTIONS_OVERALL=100
TORRENT_TIMEOUT=30000
UDP_TRACKERS_ENABLED=true
CONSUMER_REPLICAS=3
## Fix for #66 - toggle on for development
AUTO_CREATE_AND_APPLY_MIGRATIONS=false
## Allows control of the threshold for matching titles to the IMDB dataset. The closer to 0, the more strict the matching.
TITLE_MATCH_THRESHOLD=0.25
# Producer
GITHUB_PAT=

View File

@@ -0,0 +1,64 @@
[Application]
FileLogger\Age=1
FileLogger\AgeType=1
FileLogger\Backup=true
FileLogger\DeleteOld=true
FileLogger\Enabled=true
FileLogger\MaxSizeBytes=66560
FileLogger\Path=/config/qBittorrent/logs
[AutoRun]
enabled=false
program=
[BitTorrent]
Session\AnonymousModeEnabled=true
Session\BTProtocol=TCP
Session\ConnectionSpeed=150
Session\DefaultSavePath=/downloads/
Session\ExcludedFileNames=
Session\MaxActiveCheckingTorrents=20
Session\MaxActiveDownloads=20
Session\MaxActiveTorrents=50
Session\MaxActiveUploads=50
Session\MaxConcurrentHTTPAnnounces=1000
Session\MaxConnections=2000
Session\Port=6881
Session\QueueingSystemEnabled=true
Session\TempPath=/downloads/incomplete/
Session\TorrentStopCondition=MetadataReceived
[Core]
AutoDeleteAddedTorrentFile=Never
[LegalNotice]
Accepted=true
[Meta]
MigrationVersion=6
[Network]
PortForwardingEnabled=true
Proxy\HostnameLookupEnabled=false
Proxy\Profiles\BitTorrent=true
Proxy\Profiles\Misc=true
Proxy\Profiles\RSS=true
[Preferences]
Connection\PortRangeMin=6881
Connection\ResolvePeerCountries=false
Connection\UPnP=false
Downloads\SavePath=/downloads/
Downloads\TempPath=/downloads/incomplete/
General\Locale=en
MailNotification\req_auth=true
WebUI\Address=*
WebUI\AuthSubnetWhitelist=0.0.0.0/0
WebUI\AuthSubnetWhitelistEnabled=true
WebUI\HostHeaderValidation=false
WebUI\LocalHostAuth=false
WebUI\ServerDomains=*
[RSS]
AutoDownloader\DownloadRepacks=true
AutoDownloader\SmartEpisodeFilter=s(\\d+)e(\\d+), (\\d+)x(\\d+), "(\\d{4}[.\\-]\\d{1,2}[.\\-]\\d{1,2})", "(\\d{1,2}[.\\-]\\d{1,2}[.\\-]\\d{4})"

View File

@@ -1,139 +1,244 @@
version: "3.9"
name: knightcrawler
x-restart: &restart-policy "unless-stopped"
networks:
knightcrawler-network:
name: knightcrawler-network
driver: bridge
x-basehealth: &base-health
interval: 10s
timeout: 10s
retries: 3
start_period: 10s
x-rabbithealth: &rabbitmq-health
test: rabbitmq-diagnostics -q ping
<<: *base-health
x-mongohealth: &mongodb-health
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
<<: *base-health
x-postgreshealth: &postgresdb-health
test: pg_isready
<<: *base-health
x-apps: &knightcrawler-app
depends_on:
mongodb:
condition: service_healthy
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
restart: *restart-policy
volumes:
postgres:
lavinmq:
redis:
services:
## Postgres is the database that is used by the services.
## All downloaded metadata is stored in this database.
postgres:
env_file: stack.env
healthcheck:
test: [ "CMD", "sh", "-c", "pg_isready -h localhost -U $$POSTGRES_USER" ]
timeout: 10s
interval: 10s
retries: 3
start_period: 10s
image: postgres:latest
env_file: .env
environment:
PGUSER: postgres # needed for healthcheck.
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, change the username and password in the .env file.
# # Furthermore, please, please, please, change the username and password in the stack.env file.
# # If you want to enhance your security even more, create a new user for the database with a strong password.
# ports:
# - "5432:5432"
networks:
- knightcrawler-network
restart: unless-stopped
volumes:
- postgres:/var/lib/postgresql/data
healthcheck: *postgresdb-health
restart: *restart-policy
networks:
- knightcrawler-network
mongodb:
image: mongo:latest
env_file: .env
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGODB_USER:?Variable MONGODB_USER not set}
MONGO_INITDB_ROOT_PASSWORD: ${MONGODB_PASSWORD:?Variable MONGODB_PASSWORD not set}
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, change the username and password in the .env file.
## Redis is used as a cache for the services.
## It is used to store the infohashes that are currently being processed in sagas, as well as interim data.
redis:
env_file: stack.env
healthcheck:
test: ["CMD-SHELL", "redis-cli ping"]
timeout: 10s
interval: 10s
retries: 3
start_period: 10s
image: redis/redis-stack:latest
# # If you need redis to be accessible from outside, please open the below port.
# ports:
# - "27017:27017"
volumes:
- mongo:/data/db
restart: *restart-policy
healthcheck: *mongodb-health
# - "6379:6379"
networks:
- knightcrawler-network
restart: unless-stopped
volumes:
- redis:/data
rabbitmq:
image: rabbitmq:3-management
## LavinMQ is used as a message broker for the services.
## It is a high-performance drop-in replacement for RabbitMQ.
## It is used to communicate between the services.
lavinmq:
env_file: stack.env
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, look at the documentation for rabbit on how to secure the service.
# # Furthermore, please, please, please, look at the documentation for lavinmq / rabbitmq on how to secure the service.
# ports:
# - "5672:5672"
# - "15672:15672"
# - "15692:15692"
image: cloudamqp/lavinmq:latest
healthcheck:
test: ["CMD-SHELL", "lavinmqctl status"]
timeout: 10s
interval: 10s
retries: 3
start_period: 10s
restart: unless-stopped
networks:
- knightcrawler-network
volumes:
- rabbitmq:/var/lib/rabbitmq
hostname: ${RABBITMQ_HOST}
restart: *restart-policy
healthcheck: *rabbitmq-health
networks:
- knightcrawler-network
producer:
image: gabisonfire/knightcrawler-producer:latest
labels:
logging: "promtail"
env_file: .env
<<: *knightcrawler-app
networks:
- knightcrawler-network
consumer:
image: gabisonfire/knightcrawler-consumer:latest
env_file: .env
labels:
logging: "promtail"
deploy:
replicas: ${CONSUMER_REPLICAS}
<<: *knightcrawler-app
networks:
- knightcrawler-network
metadata:
image: gabisonfire/knightcrawler-metadata:latest
env_file: .env
labels:
logging: "promtail"
restart: no
networks:
- knightcrawler-network
- lavinmq:/var/lib/lavinmq/
## The addon. This is what is used in stremio
addon:
<<: *knightcrawler-app
env_file: .env
depends_on:
metadata:
condition: service_completed_successfully
migrator:
condition: service_completed_successfully
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
redis:
condition: service_healthy
env_file: stack.env
hostname: knightcrawler-addon
image: gabisonfire/knightcrawler-addon:latest
image: gabisonfire/knightcrawler-addon:2.0.25
labels:
logging: "promtail"
logging: promtail
networks:
- knightcrawler-network
# - caddy
ports:
- "7000:7000"
restart: unless-stopped
## The consumer is responsible for consuming infohashes and orchestrating download of metadata.
consumer:
depends_on:
metadata:
condition: service_completed_successfully
migrator:
condition: service_completed_successfully
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
redis:
condition: service_healthy
env_file: stack.env
image: gabisonfire/knightcrawler-consumer:2.0.25
labels:
logging: promtail
networks:
- knightcrawler-network
restart: unless-stopped
networks:
knightcrawler-network:
driver: bridge
name: knightcrawler-network
## The debrid collector is responsible for downloading metadata from debrid services. (Currently only RealDebrid is supported)
debridcollector:
depends_on:
metadata:
condition: service_completed_successfully
migrator:
condition: service_completed_successfully
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
redis:
condition: service_healthy
env_file: stack.env
image: gabisonfire/knightcrawler-debrid-collector:2.0.25
labels:
logging: promtail
networks:
- knightcrawler-network
restart: unless-stopped
# caddy:
# name: caddy
# external: true
## The metadata service is responsible for downloading IMDb's publicly available datasets.
## This is used to enrich the metadata during production of ingested infohashes.
metadata:
depends_on:
migrator:
condition: service_completed_successfully
env_file: stack.env
image: gabisonfire/knightcrawler-metadata:2.0.25
networks:
- knightcrawler-network
restart: "no"
volumes:
postgres:
mongo:
rabbitmq:
## The migrator is responsible for migrating the database schema.
migrator:
depends_on:
postgres:
condition: service_healthy
env_file: stack.env
image: gabisonfire/knightcrawler-migrator:2.0.25
networks:
- knightcrawler-network
restart: "no"
## The producer is responsible for producing infohashes by scraping various sites, including DMM.
producer:
depends_on:
metadata:
condition: service_completed_successfully
migrator:
condition: service_completed_successfully
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
redis:
condition: service_healthy
env_file: stack.env
image: gabisonfire/knightcrawler-producer:2.0.25
labels:
logging: promtail
networks:
- knightcrawler-network
restart: unless-stopped
## QBit collector utilizes QBitTorrent to download metadata.
qbitcollector:
depends_on:
metadata:
condition: service_completed_successfully
migrator:
condition: service_completed_successfully
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
redis:
condition: service_healthy
qbittorrent:
condition: service_healthy
deploy:
replicas: ${QBIT_REPLICAS:-0}
env_file: stack.env
image: gabisonfire/knightcrawler-qbit-collector:2.0.25
labels:
logging: promtail
networks:
- knightcrawler-network
restart: unless-stopped
## QBitTorrent is a torrent client that can be used to download torrents. In this case it's used to download metadata.
## The QBit collector requires this.
qbittorrent:
deploy:
replicas: ${QBIT_REPLICAS:-0}
env_file: stack.env
environment:
PGID: "1000"
PUID: "1000"
TORRENTING_PORT: "6881"
WEBUI_PORT: "8080"
healthcheck:
test: ["CMD-SHELL", "curl --fail http://localhost:8080"]
timeout: 10s
interval: 10s
retries: 3
start_period: 10s
image: lscr.io/linuxserver/qbittorrent:latest
networks:
- knightcrawler-network
ports:
- "6881:6881/tcp"
- "6881:6881/udp"
# if you want to expose the webui, uncomment the following line
# - "8001:8080"
restart: unless-stopped
volumes:
- ./config/qbit/qbittorrent.conf:/config/qBittorrent/qBittorrent.conf

View File

@@ -16,7 +16,7 @@ rule_files:
scrape_configs:
- job_name: "rabbitmq"
static_configs:
- targets: ["rabbitmq:15692"]
- targets: ["lavinmq:15692"]
- job_name: "postgres-exporter"
static_configs:
- targets: ["postgres-exporter:9187"]

View File

@@ -0,0 +1,87 @@
x-basehealth: &base-health
interval: 10s
timeout: 10s
retries: 3
start_period: 10s
x-lavinhealth: &lavinmq-health
test: [ "CMD-SHELL", "lavinmqctl status" ]
<<: *base-health
x-redishealth: &redis-health
test: redis-cli ping
<<: *base-health
x-postgreshealth: &postgresdb-health
test: [ "CMD", "sh", "-c", "pg_isready -h localhost -U $$POSTGRES_USER" ]
<<: *base-health
x-qbit: &qbit-health
test: "curl --fail http://localhost:8080"
<<: *base-health
services:
postgres:
image: postgres:latest
environment:
PGUSER: postgres # needed for healthcheck.
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, change the username and password in the .env file.
# # If you want to enhance your security even more, create a new user for the database with a strong password.
# ports:
# - "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
healthcheck: *postgresdb-health
restart: unless-stopped
env_file: ../../.env
networks:
- knightcrawler-network
redis:
image: redis/redis-stack:latest
# # If you need redis to be accessible from outside, please open the below port.
# ports:
# - "6379:6379"
volumes:
- redis:/data
restart: unless-stopped
healthcheck: *redis-health
env_file: ../../.env
networks:
- knightcrawler-network
lavinmq:
env_file: stack.env
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, look at the documentation for lavinmq / rabbitmq on how to secure the service.
# ports:
# - "5672:5672"
# - "15672:15672"
# - "15692:15692"
image: cloudamqp/lavinmq:latest
healthcheck: *lavinmq-health
restart: unless-stopped
volumes:
- lavinmq:/var/lib/lavinmq/
## QBitTorrent is a torrent client that can be used to download torrents. In this case it's used to download metadata.
## The QBit collector requires this.
qbittorrent:
image: lscr.io/linuxserver/qbittorrent:latest
environment:
- PUID=1000
- PGID=1000
- WEBUI_PORT=8080
- TORRENTING_PORT=6881
ports:
- 6881:6881
- 6881:6881/udp
env_file: ../../.env
networks:
- knightcrawler-network
restart: unless-stopped
healthcheck: *qbit-health
volumes:
- ../../config/qbit/qbittorrent.conf:/config/qBittorrent/qBittorrent.conf

View File

@@ -0,0 +1,71 @@
x-apps: &knightcrawler-app
labels:
logging: "promtail"
env_file: ../../.env
networks:
- knightcrawler-network
x-depends: &knightcrawler-app-depends
depends_on:
redis:
condition: service_healthy
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
migrator:
condition: service_completed_successfully
metadata:
condition: service_completed_successfully
services:
metadata:
image: gabisonfire/knightcrawler-metadata:2.0.25
env_file: ../../.env
networks:
- knightcrawler-network
restart: no
depends_on:
migrator:
condition: service_completed_successfully
migrator:
image: gabisonfire/knightcrawler-migrator:2.0.25
env_file: ../../.env
networks:
- knightcrawler-network
restart: no
depends_on:
postgres:
condition: service_healthy
addon:
image: gabisonfire/knightcrawler-addon:2.0.25
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
hostname: knightcrawler-addon
ports:
- "7000:7000"
consumer:
image: gabisonfire/knightcrawler-consumer:2.0.25
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
debridcollector:
image: gabisonfire/knightcrawler-debrid-collector:2.0.25
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
producer:
image: gabisonfire/knightcrawler-producer:2.0.25
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
qbitcollector:
image: gabisonfire/knightcrawler-qbit-collector:2.0.25
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
depends_on:
qbittorrent:
condition: service_healthy

View File

@@ -0,0 +1,4 @@
networks:
knightcrawler-network:
driver: bridge
name: knightcrawler-network

View File

@@ -0,0 +1,4 @@
volumes:
postgres:
redis:
lavinmq:

View File

@@ -0,0 +1,7 @@
services:
qbittorrent:
deploy:
replicas: 0
qbitcollector:
deploy:
replicas: 0

View File

@@ -0,0 +1,7 @@
version: "3.9"
name: "knightcrawler"
include:
- ./components/network.yaml
- ./components/volumes.yaml
- ./components/infrastructure.yaml
- ./components/knightcrawler.yaml

View File

@@ -0,0 +1,41 @@
# General environment variables
TZ=Europe/London
# PostgreSQL
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=knightcrawler
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_EXTRA=abortConnect=false,allowAdmin=true
# AMQP
RABBITMQ_HOST=lavinmq
RABBITMQ_USER=guest
RABBITMQ_PASSWORD=guest
RABBITMQ_CONSUMER_QUEUE_NAME=ingested
RABBITMQ_DURABLE=true
RABBITMQ_MAX_QUEUE_SIZE=0
RABBITMQ_MAX_PUBLISH_BATCH_SIZE=500
RABBITMQ_PUBLISH_INTERVAL_IN_SECONDS=10
# Metadata
METADATA_INSERT_BATCH_SIZE=50000
# Collectors
COLLECTOR_QBIT_ENABLED=false
COLLECTOR_DEBRID_ENABLED=true
COLLECTOR_REAL_DEBRID_API_KEY=
QBIT_HOST=http://qbittorrent:8080
QBIT_TRACKERS_URL=https://raw.githubusercontent.com/ngosang/trackerslist/master/trackers_all_http.txt
QBIT_CONCURRENCY=8
# Number of replicas for the qBittorrent collector and qBitTorrent client. Should be 0 or 1.
QBIT_REPLICAS=0
# Addon
DEBUG_MODE=false

View File

@@ -14,7 +14,6 @@
"axios": "^1.6.1",
"bottleneck": "^2.19.5",
"cache-manager": "^3.4.4",
"cache-manager-mongodb": "^0.3.0",
"cors": "^2.8.5",
"debrid-link-api": "^1.0.1",
"express": "^4.18.2",
@@ -33,7 +32,11 @@
"user-agents": "^1.0.1444",
"video-name-parser": "^1.4.6",
"xml-js": "^1.6.11",
"xml2js": "^0.6.2"
"xml2js": "^0.6.2",
"@redis/client": "^1.5.14",
"@redis/json": "^1.0.6",
"@redis/search": "^1.1.6",
"cache-manager-redis-store": "^2.0.0"
},
"devDependencies": {
"@types/node": "^20.11.6",

View File

@@ -1,7 +1,7 @@
import cacheManager from 'cache-manager';
import mangodbStore from 'cache-manager-mongodb';
import { isStaticUrl } from '../moch/static.js';
import {cacheConfig} from "./settings.js";
import redisStore from 'cache-manager-redis-store';
const STREAM_KEY_PREFIX = `${cacheConfig.GLOBAL_KEY_PREFIX}|stream`;
const IMDB_KEY_PREFIX = `${cacheConfig.GLOBAL_KEY_PREFIX}|imdb`;
@@ -12,28 +12,20 @@ const memoryCache = initiateMemoryCache();
const remoteCache = initiateRemoteCache();
function initiateRemoteCache() {
if (cacheConfig.NO_CACHE) {
return null;
} else if (cacheConfig.MONGODB_URI) {
return cacheManager.caching({
store: mangodbStore,
uri: cacheConfig.MONGODB_URI,
options: {
collection: 'jackettio_addon_collection',
socketTimeoutMS: 120000,
useNewUrlParser: true,
useUnifiedTopology: false,
ttl: cacheConfig.STREAM_EMPTY_TTL
},
ttl: cacheConfig.STREAM_EMPTY_TTL,
ignoreCacheErrors: true
});
} else {
return cacheManager.caching({
store: 'memory',
ttl: cacheConfig.STREAM_EMPTY_TTL
});
}
if (cacheConfig.NO_CACHE) {
return null;
} else if (cacheConfig.REDIS_CONNECTION_STRING) {
return cacheManager.caching({
store: redisStore,
ttl: cacheConfig.STREAM_EMPTY_TTL,
url: cacheConfig.REDIS_CONNECTION_STRING
});
} else {
return cacheManager.caching({
store: 'memory',
ttl: cacheConfig.STREAM_EMPTY_TTL
});
}
}
function initiateMemoryCache() {
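For orientation, the redis-backed cache created above follows the cache-manager 3.x API; a hypothetical read-through call (the key pieces and the `fetchStreams` loader are illustrative, not the addon's actual names):

```js
// wrap() returns the cached value if present; otherwise it runs the loader and stores the result.
const streams = await remoteCache.wrap(
  `${STREAM_KEY_PREFIX}:${infoHash}`,
  () => fetchStreams(infoHash), // illustrative loader
  { ttl: cacheConfig.STREAM_TTL }
);
```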

View File

@@ -25,7 +25,9 @@ export const cinemetaConfig = {
}
export const cacheConfig = {
MONGODB_URI: process.env.MONGODB_URI,
REDIS_HOST: process.env.REDIS_HOST || 'redis',
REDIS_PORT: process.env.REDIS_PORT || '6379',
REDIS_EXTRA: process.env.REDIS_EXTRA || '',
NO_CACHE: parseBool(process.env.NO_CACHE, false),
IMDB_TTL: parseInt(process.env.IMDB_TTL || 60 * 60 * 4), // 4 Hours
STREAM_TTL: parseInt(process.env.STREAM_TTL || 60 * 60 * 4), // 4 Hours
@@ -40,3 +42,5 @@ export const cacheConfig = {
STALE_ERROR_AGE: parseInt(process.env.STALE_ERROR_AGE) || 7 * 24 * 60 * 60, // 7 days
GLOBAL_KEY_PREFIX: process.env.GLOBAL_KEY_PREFIX || 'jackettio-addon',
}
cacheConfig.REDIS_CONNECTION_STRING = 'redis://' + cacheConfig.REDIS_HOST + ':' + cacheConfig.REDIS_PORT + '?' + cacheConfig.REDIS_EXTRA;
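With the stack.env defaults shown earlier (REDIS_HOST=redis, REDIS_PORT=6379, and REDIS_EXTRA=abortConnect=false,allowAdmin=true substituted for the empty fallback), the line above evaluates as follows:

```js
// Equivalent construction with the stack.env defaults filled in:
const REDIS_HOST = 'redis';
const REDIS_PORT = '6379';
const REDIS_EXTRA = 'abortConnect=false,allowAdmin=true';
console.log(`redis://${REDIS_HOST}:${REDIS_PORT}?${REDIS_EXTRA}`);
// => redis://redis:6379?abortConnect=false,allowAdmin=true
```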

File diff suppressed because it is too large

View File

@@ -14,7 +14,6 @@
"axios": "^1.6.1",
"bottleneck": "^2.19.5",
"cache-manager": "^3.4.4",
"cache-manager-mongodb": "^0.3.0",
"cors": "^2.8.5",
"debrid-link-api": "^1.0.1",
"express-rate-limit": "^6.7.0",
@@ -35,7 +34,11 @@
"stremio-addon-sdk": "^1.6.10",
"swagger-stats": "^0.99.7",
"ua-parser-js": "^1.0.36",
"user-agents": "^1.0.1444"
"user-agents": "^1.0.1444",
"@redis/client": "^1.5.14",
"@redis/json": "^1.0.6",
"@redis/search": "^1.1.6",
"cache-manager-redis-store": "^2.0.0"
},
"devDependencies": {
"@types/node": "^20.11.6",

View File

@@ -1,7 +1,7 @@
import cacheManager from 'cache-manager';
import mangodbStore from 'cache-manager-mongodb';
import { cacheConfig } from './config.js';
import { isStaticUrl } from '../moch/static.js';
import redisStore from "cache-manager-redis-store";
const GLOBAL_KEY_PREFIX = 'knightcrawler-addon';
const STREAM_KEY_PREFIX = `${GLOBAL_KEY_PREFIX}|stream`;
@@ -21,19 +21,11 @@ const remoteCache = initiateRemoteCache();
function initiateRemoteCache() {
if (cacheConfig.NO_CACHE) {
return null;
} else if (cacheConfig.MONGO_URI) {
} else if (cacheConfig.REDIS_CONNECTION_STRING) {
return cacheManager.caching({
store: mangodbStore,
uri: cacheConfig.MONGO_URI,
options: {
collection: 'knightcrawler_addon_collection',
socketTimeoutMS: 120000,
useNewUrlParser: true,
useUnifiedTopology: false,
ttl: STREAM_EMPTY_TTL
},
store: redisStore,
ttl: STREAM_EMPTY_TTL,
ignoreCacheErrors: true
url: cacheConfig.REDIS_CONNECTION_STRING
});
} else {
return cacheManager.caching({

View File

@@ -1,17 +1,11 @@
export const cacheConfig = {
MONGODB_HOST: process.env.MONGODB_HOST || 'mongodb',
MONGODB_PORT: process.env.MONGODB_PORT || '27017',
MONGODB_DB: process.env.MONGODB_DB || 'knightcrawler',
MONGODB_USER: process.env.MONGODB_USER || 'mongo',
MONGODB_PASSWORD: process.env.MONGODB_PASSWORD || 'mongo',
COLLECTION_NAME: process.env.MONGODB_ADDON_COLLECTION || 'knightcrawler_addon_collection',
REDIS_HOST: process.env.REDIS_HOST || 'redis',
REDIS_PORT: process.env.REDIS_PORT || '6379',
REDIS_EXTRA: process.env.REDIS_EXTRA || '',
NO_CACHE: parseBool(process.env.NO_CACHE, false),
}
// Combine the environment variables into a connection string
// The combined string will look something like:
// 'mongodb://mongo:mongo@localhost:27017/knightcrawler?authSource=admin'
cacheConfig.MONGO_URI = 'mongodb://' + cacheConfig.MONGODB_USER + ':' + cacheConfig.MONGODB_PASSWORD + '@' + cacheConfig.MONGODB_HOST + ':' + cacheConfig.MONGODB_PORT + '/' + cacheConfig.MONGODB_DB + '?authSource=admin';
cacheConfig.REDIS_CONNECTION_STRING = 'redis://' + cacheConfig.REDIS_HOST + ':' + cacheConfig.REDIS_PORT + '?' + cacheConfig.REDIS_EXTRA;
export const databaseConfig = {
POSTGRES_HOST: process.env.POSTGRES_HOST || 'postgres',

View File

@@ -14,13 +14,12 @@ const Torrent = database.define('torrent',
{
infoHash: { type: Sequelize.STRING(64), primaryKey: true },
provider: { type: Sequelize.STRING(32), allowNull: false },
torrentId: { type: Sequelize.STRING(128) },
ingestedTorrentId: { type: Sequelize.BIGINT, allowNull: false },
title: { type: Sequelize.STRING(256), allowNull: false },
size: { type: Sequelize.BIGINT },
type: { type: Sequelize.STRING(16), allowNull: false },
uploadDate: { type: Sequelize.DATE, allowNull: false },
seeders: { type: Sequelize.SMALLINT },
trackers: { type: Sequelize.STRING(4096) },
languages: { type: Sequelize.STRING(4096) },
resolution: { type: Sequelize.STRING(16) }
}
@@ -85,7 +84,7 @@ export function getImdbIdMovieEntries(imdbId) {
where: {
imdbId: { [Op.eq]: imdbId }
},
include: [Torrent],
include: { model: Torrent, required: true },
limit: 500,
order: [
[Torrent, 'size', 'DESC']
@@ -100,7 +99,7 @@ export function getImdbIdSeriesEntries(imdbId, season, episode) {
imdbSeason: { [Op.eq]: season },
imdbEpisode: { [Op.eq]: episode }
},
include: [Torrent],
include: { model: Torrent, required: true },
limit: 500,
order: [
[Torrent, 'size', 'DESC']
@@ -113,7 +112,7 @@ export function getKitsuIdMovieEntries(kitsuId) {
where: {
kitsuId: { [Op.eq]: kitsuId }
},
include: [Torrent],
include: { model: Torrent, required: true },
limit: 500,
order: [
[Torrent, 'size', 'DESC']
@@ -127,7 +126,7 @@ export function getKitsuIdSeriesEntries(kitsuId, episode) {
kitsuId: { [Op.eq]: kitsuId },
kitsuEpisode: { [Op.eq]: episode }
},
include: [Torrent],
include: { model: Torrent, required: true },
limit: 500,
order: [
[Torrent, 'size', 'DESC']

Some files were not shown because too many files have changed in this diff