68 Commits

Author SHA1 Message Date
iPromKnight
f56f205bbe helm chart [wip] 2024-03-30 23:54:34 +00:00
iPromKnight
c75ecd2707 add qbit housekeeping service to remove stale torrents (#193)
* Add housekeeping service to clean stale torrents

* version bump
2024-03-30 11:52:23 +00:00
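The housekeeping pass above is easy to picture with the qbittorrent-api package; a minimal sketch, where the host, category, and staleness cut-off are illustrative assumptions rather than the service's actual values:

```python
import time

import qbittorrentapi  # assumed client library; the real service may differ

STALE_AFTER = 30 * 60  # seconds before an unfinished torrent counts as stale

client = qbittorrentapi.Client(host="http://qbittorrent:8080")
now = time.time()
for torrent in client.torrents_info(category="knightcrawler"):
    if now - torrent.added_on > STALE_AFTER:
        # remove the stale torrent and any partial data it left behind
        client.torrents_delete(delete_files=True, torrent_hashes=torrent.hash)
```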
iPromKnight
c493ef3376 Hotfix category, and roll back RTN to 0.1.8 (#192)
* Hotfix categories

Also roll back RTN to 0.1.8, as a regression was introduced in 0.2

* bump version
2024-03-30 04:47:36 +00:00
iPromKnight
655a39e35c patch the query with execute (#191) 2024-03-30 01:54:06 +00:00
iPromKnight
cfeee62f6b patch ratio (#190)
* add configurable threshold, default 0.95

* version bump
2024-03-30 01:43:21 +00:00
iPromKnight
c6d4c06d70 hotfix categories from imdb result instead (#189)
* category mapping from imdb

* version bump
2024-03-30 01:26:02 +00:00
iPromKnight
08639a3254 Patch isMovie (#188)
* fix is movie

* version bump
2024-03-30 00:28:35 +00:00
iPromKnight
d430850749 Patch message contract names (#187)
* ensure unique message contract names per collector type

* version bump
2024-03-30 00:09:13 +00:00
iPromKnight
82c0ea459b change qbittorrent settings (#186) 2024-03-29 23:35:27 +00:00
iPromKnight
1e83b4c5d8 Patch the addon (#185) 2024-03-29 19:08:17 +00:00
iPromKnight
66609c2a46 trigram performance increased and housekeeping (#184)
* add new indexes, and change year column to int

* Change gist to gin, and change year to int

* Producer changes for new gin query

* Fully map the rtn response using json dump from Pydantic

Also updates Rtn to 0.1.9

* Add housekeeping script to reconcile imdb ids.

* Join Torrent onto the ingested torrent table

Ensure that a torrent can always find the details of where it came from, and how it was parsed.

* Version bump for release

* missing quote on table name
2024-03-29 19:01:48 +00:00
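For readers unfamiliar with pg_trgm, here is a minimal sketch of the index-plus-query shape these commits describe; the imdb_metadata table and column names are assumptions, not the repo's actual schema:

```python
import psycopg2  # any PostgreSQL driver works; psycopg2 is assumed here

conn = psycopg2.connect("dbname=knightcrawler user=postgres")
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
# GIN trigram index (the commit above switches from GiST to GIN)
cur.execute(
    "CREATE INDEX IF NOT EXISTS idx_title_trgm "
    "ON imdb_metadata USING gin (title gin_trgm_ops);"
)
conn.commit()

# %% escapes the trigram similarity operator % in a parameterised query
cur.execute(
    "SELECT title, year, similarity(title, %s) AS score "
    "FROM imdb_metadata WHERE title %% %s "
    "ORDER BY score DESC LIMIT 5;",
    ("Terminator 2", "Terminator 2"),
)
print(cur.fetchall())
```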
iPromKnight
2d78dc2735 version bump for release (#183) 2024-03-28 23:37:35 +00:00
iPromKnight
527d6cdf15 Upgrade RTN to 0.1.8, replace RabbitMQ with the drop-in replacement LavinMQ: better performance, lower resource usage. (#182) 2024-03-28 23:35:41 +00:00
iPromKnight
bb260d78d6 Address Issues in build (#180)
- CIS-DI-0001
- CIS-DI-0006
- CIS-DI-0008
- DKL-LI-0003
2024-03-28 10:47:13 +00:00
iPromKnight
baec0450bf Hotfix ingestor github flow, and move to top level src folder (folder per service) (#179) 2024-03-28 10:20:26 +00:00
iPromKnight
4308a0ee71 [wip] bridge python and c# and bring in rank torrent name (#177)
* [wip] bridge python and c# and bring in rank torrent name

* Container restores package now

Includes two dev scripts to install the python packages locally for debugging purposes.

* Introduce slightly tuned title-matching scoring, by making it length aware

This should help with sequels, such as Terminator 2 vs Terminator (see the sketch after this commit).

* Version bump

Also fixes postgres healthcheck so that it utilises the user from the stack.env file
2024-03-28 10:13:50 +00:00
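One way to make a title score length aware, as the commit describes (a sketch only; the real scoring lives in the RTN bridge and will differ):

```python
from difflib import SequenceMatcher  # stand-in for the project's fuzzy matcher

def length_aware_score(query: str, candidate: str) -> float:
    base = SequenceMatcher(None, query.lower(), candidate.lower()).ratio()
    # penalise length mismatch so "Terminator" stops outranking
    # "Terminator 2" when the query is "Terminator 2"
    penalty = abs(len(query) - len(candidate)) / max(len(query), len(candidate))
    return base * (1.0 - penalty)

print(length_aware_score("Terminator 2", "Terminator 2"))  # 1.0
print(length_aware_score("Terminator 2", "Terminator"))    # noticeably lower
```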
RohirrimRider
cc15a69517 fix torrent_ingestor ci (#178) 2024-03-27 21:38:59 -05:00
RohirrimRider
a6d3a4a066 init ingest torrents from annatar (#157)
* init ingest torrents from annatar

* works

* mv annatar to src/

* done

* add ci and readme

---------

Co-authored-by: Brett <eruiluvatar@pnbx.xyz>
2024-03-27 21:35:03 -05:00
iPromKnight
9430704205 rename committed .env file to stack.env (#176) 2024-03-27 12:57:14 +00:00
iPromKnight
6cc857bdc3 rename .env to stack.env (#175) 2024-03-27 12:37:11 +00:00
iPromKnight
cc2adbfca5 Recreate single docker-compose file (#174)
Clean it up - also comment all services
2024-03-27 12:21:40 +00:00
iPromKnight
9f928f9b66 Allow trackers url to be configurable + version bump (#173)
This allows people to use only the UDP collection, only the TCP collection, or all.
2024-03-26 12:17:47 +00:00
iPromKnight
a50b5071b3 key prefixes per collector (#172)
* Ensure the collectors manage sagas in their own keyspace, as we do not want overlap (they have the same correlation ids internally from the exchange)

* version bump
2024-03-26 11:56:14 +00:00
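A one-line illustration of the keyspace separation described above; the prefix format is hypothetical:

```python
def saga_key(collector: str, correlation_id: str) -> str:
    # "debrid:saga:<id>" and "qbit:saga:<id>" occupy distinct keyspaces, so
    # identical correlation ids from the shared fanout exchange never collide
    return f"{collector}:saga:{correlation_id}"

assert saga_key("debrid", "abc123") != saga_key("qbit", "abc123")
```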
iPromKnight
72db18f0ad add missing env (#171)
* add missing env

* version bump
2024-03-26 11:16:21 +00:00
iPromKnight
d70cef1b86 addon fix (#170)
* addon fix

* version bump
2024-03-26 10:25:43 +00:00
iPromKnight
e1e718cd22 includes qbit collector fix (#169) 2024-03-26 10:17:04 +00:00
iPromKnight
c3e58e4234 Fix redis connection strings for consistency across languages. (#168)
* Fix redis connection strings across languages

* compose version bump
2024-03-26 09:26:35 +00:00
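The consistent format this commit settles on is redis://HOST:PORT?EXTRA (see the cacheConfig diff at the end of this page). A Python equivalent with the same defaults would be:

```python
import os

redis_url = "redis://{host}:{port}?{extra}".format(
    host=os.environ.get("REDIS_HOST", "redis"),
    port=os.environ.get("REDIS_PORT", "6379"),
    extra=os.environ.get("REDIS_EXTRA", ""),
)
# e.g. redis://redis:6379?abortConnect=false,allowAdmin=true
print(redis_url)
```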
iPromKnight
d584102d60 image updates for patched release (#167) 2024-03-26 00:27:54 +00:00
iPromKnight
fe4bb59502 fix indenting on env file (#166)
* fix images :/

* fix indenting
2024-03-26 00:22:33 +00:00
iPromKnight
472b3342d5 fix images :/ (#165) 2024-03-26 00:01:59 +00:00
iPromKnight
b035ef596b change tag glob (#164) 2024-03-25 23:41:58 +00:00
iPromKnight
9a831e92d0 Producer / Consumer / Collector rewrite (#160)
* Converted metadata service to redis

* move to postgres instead

* fix global usings

* [skip ci] optimize wolverine by prebuilding static types

* [skip ci] Stop indexing mac folder indexes

* [skip ci] producer, metadata and migrations

removed mongodb
added redis cache
imdb meta in postgres
Enable pg_trgm
Create trigrams index
Add search meta postgres function

* [skip ci] get rid of node folder, replace mongo with redis in consumer

also wire up postgres metadata searches

* [skip ci] change mongo to redis in the addon

* [skip ci] jackettio to redis

* Rest of mongo removed...

* Cleaner rerunning of metadata - without conflicts

* Add akas import as well as basic metadata

* Include episodes file too

* cascade truncate pre-import

* reverse order to avoid cascading

* separate out clean to separate handler

* Switch producer to use metadata matching, pre-processing DMM

* More work

* Still porting PTN

* PTN port, adding tests

* [skip ci] Codec tests

* [skip ci] Complete Collection handler tests

* [skip ci] container tests

* [skip ci] Convert handlers tests

* [skip ci] DateHandler tests

* [skip ci] Dual Audio matching tests

* [skip ci] episode code tests

* [skip ci] Extended handler tests

* [skip ci] group handler tests

* [skip ci] some broken stuff right now

* [skip ci] more ptn

* [skip ci] PTN now in a separate nuget package, rebased this on the redis changes - i need them.

* [skip ci] Wire up PTN port. Tired - will test tomorrow

* [skip ci] Needs a lot of work - too many titles being missed now

* cleaner. done?

* Handle the date in the imdb search

- add integer function to confirm it's a valid integer
- use the input date as a range of ±1 year

* [skip ci] Start of collector service for RD

[skip ci] WIP

Implemented metadata saga, along with channels to process up to a maximum of 100 infohashes each time
The saga will retry each infohash by requeuing it up to three times, before marking it complete for that infohash, meaning no data will be updated in the db for that torrent (sketched below, after this commit).

[skip ci] Ready to test with queue publishing

Will provision a fanout exchange if it doesn't exist, and create and bind a queue to it. Listens to the queue with 50 prefetch count.
Still needs the PTN rewrite brought in to parse the filename response from Real-Debrid, and extract season and episode numbers if the file is a TV show

[skip ci] Add Debrid Collector Build Job

Debrid Collector ready for testing

New consumer, new collector, producer has meta lookup and anti porn measures

[skip ci] WIP - moving from wolverine to MassTransit.

Not happy that Wolverine cannot effectively control saga concurrency; we really need that.

[skip ci] Producer and new Consumer moved to MassTransit

Just the debrid collector to go now, then to write the optional qbit collector.

Collector now switched to mass transit too

hide porn titles in logs, clean up cache name in redis for imdb titles

[skip ci] Allow control of queues

[skip ci] Update deployment

Remove old consumer, fix deployment files, fix dockerfiles for shared project import

fix base deployment

* Add collector missing env var

* edits to kick off builds

* Add optional qbit deployment which qbit collector will use

* Qbit collector done

* reorder compose, and bring both qbit and qbitcollector into the compose, with 0 replicas as default

* Clean up compose file

* Ensure debrid collector errors if no debrid api key
2024-03-25 23:32:28 +00:00
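The retry semantics described in the commit body above (process up to 100 infohashes at a time, requeue each failure up to three times, then mark the saga complete with no data written) can be sketched as follows; fetch_metadata and the queue payload are hypothetical:

```python
import asyncio

MAX_ATTEMPTS = 3  # requeue up to three times, as the commit describes

async def worker(queue: asyncio.Queue, fetch_metadata) -> None:
    while True:
        infohash, attempts = await queue.get()
        try:
            await fetch_metadata(infohash)  # hypothetical debrid lookup
        except Exception:
            if attempts + 1 < MAX_ATTEMPTS:
                await queue.put((infohash, attempts + 1))  # retry later
            # otherwise mark the saga complete: nothing is written to the
            # db for this torrent
        finally:
            queue.task_done()
```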
iPromKnight
9c6c1ac249 Update compose versions 1.0.1, ready for tag push (#163) 2024-03-25 20:38:11 +00:00
iPromKnight
0ddfac57f7 Build on Tag Pushes. (#162)
* enable tag and pr builds

* Build on Tag Pushes
2024-03-25 20:27:37 +00:00
iPromKnight
9fbd750cd2 enable tag and pr builds (#161) 2024-03-25 20:24:14 +00:00
Knight Crawler
5fc2027cfa Option to manually trigger each workflow (#159)
Co-authored-by: purple_emily <purple_emily@protonmail.com>
2024-03-20 20:26:32 +00:00
purple_emily
2d39476c65 Push dev builds & ready to tag semver (#153) 2024-03-14 14:27:19 +00:00
iPromKnight
e7f987a0d7 Merge pull request #151 from Gabisonfire/feature/tissue-corn-sanitizer
Improve producer matching - Add tissue service
2024-03-12 10:31:18 +00:00
iPromKnight
79a6aa3cb0 Improve producer matching - Add tissue service
Tissue service will sanitize the existing database of ingested torrents by matching existing titles with new banned word lists. Now with added kleenex
2024-03-12 10:29:13 +00:00
iPromKnight
e24d81dd96 Merge pull request #149 from Gabisonfire/improve-consumer
Simplification of parsing in consumer
2024-03-11 11:00:23 +00:00
iPromKnight
aeb83c19f8 Simplification of parsing in consumer
Should speed up parsing massively, especially if imdbIds are found in MongoDB
2024-03-11 10:56:04 +00:00
iPromKnight
e23ee974e2 Merge pull request #148 from Gabisonfire/hotfix/nyaa
Fix nyaa category
2024-03-11 09:03:22 +00:00
iPromKnight
5c310427b4 Fix nyaa category 2024-03-11 08:59:55 +00:00
iPromKnight
b3d9be0b7a Merge pull request #147 from Gabisonfire/force_build
accidentally skipped build on last pr
2024-03-10 23:37:28 +00:00
iPromKnight
dda81ec5bf accidentally skipped build on last pr
tired..
2024-03-10 23:37:16 +00:00
iPromKnight
8eae288f10 Merge pull request #146 from Gabisonfire/hotfix/default_season_1
[skip ci] Final hotfix
2024-03-10 23:34:41 +00:00
iPromKnight
75ac89489e [skip ci] Final hotfix 2024-03-10 23:34:35 +00:00
iPromKnight
fa27b0cda9 Merge pull request #145 from Gabisonfire/hotfix/series_consumer
Fix series parsing
2024-03-10 22:28:00 +00:00
iPromKnight
500dd0d725 patch type 2024-03-10 22:28:06 +00:00
iPromKnight
6f4bc10f5a Fix series parsing 2024-03-10 21:38:55 +00:00
iPromKnight
1b3c190ed1 Merge pull request #144 from Gabisonfire/reduce_cpu_cycles
reduce cpu cycles in parsing in producer
2024-03-10 15:14:37 +00:00
iPromKnight
02150482df reduce cpu cycles in parsing in producer 2024-03-10 15:14:17 +00:00
iPromKnight
f18cd5b1ac Merge pull request #143 from Gabisonfire/extra_terms
Few extra terms getting through
2024-03-10 14:54:58 +00:00
iPromKnight
2e774058ff Few extra terms getting through 2024-03-10 14:54:25 +00:00
iPromKnight
4e84d7c9c3 Merge pull request #142 from Gabisonfire/feature/dmm-improvements
remove log line of adult content
2024-03-10 13:55:20 +00:00
iPromKnight
ad04d323b4 remove log line of adult content 2024-03-10 13:54:35 +00:00
iPromKnight
7d0b779bc8 Merge pull request #129 from Gabisonfire/feature/dmm-improvements
Improvements for DMM
2024-03-10 13:52:53 +00:00
iPromKnight
e2b45e799d [skip ci] Remove Debug logged adult terms found 2024-03-10 13:49:51 +00:00
iPromKnight
6c03f79933 Complete 2024-03-10 13:48:27 +00:00
iPromKnight
c8a1ebd8ae Bump large file to 2500kb because of Jav list.
Doesn't make sense to enable lfs for this file.
2024-03-10 13:48:14 +00:00
iPromKnight
320fccc8e8 [skip ci] More work on parsing - seasons to fix still and use banned words 2024-03-10 12:48:19 +00:00
iPromKnight
51246ed352 Ignore producer data dir from codespell hook 2024-03-10 12:48:19 +00:00
iPromKnight
8d82a17876 re-disable services other than dmm while developing
re-enable

disable again - will squash, don't worry

enable again

disable again
2024-03-10 12:48:19 +00:00
iPromKnight
f719520b3b [skip ci] Ignore all run profiles to prevent PAT leaking
re-enable these; testing that only the producer builds
2024-03-10 12:48:19 +00:00
iPromKnight
bacb50e060 [skip ci] remove extra package no longer in use 2024-03-10 12:48:19 +00:00
iPromKnight
6600fceb1a Wip Blacklisting dmm porn
Create adult text classifier ML Model

wip - starting to write PTN in c#

More work on season, show and movie parsing

Remove ML project
2024-03-10 12:48:16 +00:00
purple_emily
5aba05f2b4 Merge pull request #141 from Gabisonfire/generic-fixes
Typo at the end of the staging environment
2024-03-10 12:45:58 +00:00
purple_emily
601dbdf64f Typo at the end of the staging environment 2024-03-10 12:27:21 +00:00
477 changed files with 264412 additions and 44026 deletions

View File

@@ -6,12 +6,16 @@ on:
CONTEXT:
required: true
type: string
DOCKERFILE:
required: true
type: string
IMAGE_NAME:
required: true
type: string
env:
CONTEXT: ${{ inputs.CONTEXT }}
DOCKERFILE: ${{ inputs.DOCKERFILE }}
IMAGE_NAME: ${{ inputs.IMAGE_NAME }}
PLATFORMS: linux/amd64,linux/arm64
@@ -21,11 +25,13 @@ jobs:
steps:
- name: Setting variables
run: |
echo "CONTEXT=${{ env.CONTEXT }}
echo "IMAGE_NAME=${{ env.IMAGE_NAME }}
echo "CONTEXT=${{ env.CONTEXT }}"
echo "DOCKERFILE=${{ env.DOCKERFILE }}"
echo "IMAGE_NAME=${{ env.IMAGE_NAME }}"
echo "PLATFORMS=${{ env.PLATFORMS }}"
outputs:
CONTEXT: ${{ env.CONTEXT }}
DOCKERFILE: ${{ env.DOCKERFILE }}
IMAGE_NAME: ${{ env.IMAGE_NAME }}
PLATFORMS: ${{ env.PLATFORMS }}
@@ -70,14 +76,17 @@ jobs:
flavor: |
latest=auto
tags: |
type=edge,branch=master,commit=${{ github.sha }}
type=ref,event=tag
type=ref,event=pr
type=sha,commit=${{ github.sha }}
type=semver,pattern={{version}}
type=raw,value=latest,enable={{is_default_branch}}
- name: Build image for scanning
uses: docker/build-push-action@v5
with:
context: ${{ needs.set-vars.outputs.CONTEXT }}
file: ${{ needs.set-vars.outputs.DOCKERFILE }}
push: true
provenance: false
tags: localhost:5000/dockle-examine-image:test
@@ -130,10 +139,11 @@ jobs:
sarif_file: 'trivy-results-os.sarif'
- name: Push Service Image to repo
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/master'
# if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/master'
uses: docker/build-push-action@v5
with:
context: ${{ needs.set-vars.outputs.CONTEXT }}
file: ${{ needs.set-vars.outputs.DOCKERFILE }}
push: true
provenance: false
tags: ${{ steps.docker-metadata.outputs.tags }}

View File

@@ -2,13 +2,17 @@ name: Build and Push Addon Service
on:
push:
tags:
- '**'
paths:
- 'src/node/addon/**'
- 'src/addon/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/node/addon/
CONTEXT: ./src/addon/
DOCKERFILE: ./src/addon/Dockerfile
IMAGE_NAME: knightcrawler-addon

View File

@@ -2,13 +2,17 @@ name: Build and Push Consumer Service
on:
push:
tags:
- '**'
paths:
- 'src/node/consumer/**'
- 'src/torrent-consumer/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/node/consumer/
CONTEXT: ./src/
DOCKERFILE: ./src/torrent-consumer/Dockerfile
IMAGE_NAME: knightcrawler-consumer

View File

@@ -0,0 +1,18 @@
name: Build and Push Debrid Collector Service
on:
push:
tags:
- '**'
paths:
- 'src/debrid-collector/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/
DOCKERFILE: ./src/debrid-collector/Dockerfile
IMAGE_NAME: knightcrawler-debrid-collector

View File

@@ -2,13 +2,17 @@ name: Build and Push Jackett Addon Service
on:
push:
tags:
- '**'
paths:
- 'src/node/addon-jackett/**'
- 'src/addon-jackett/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/node/addon-jackett/
CONTEXT: ./src/addon-jackett/
DOCKERFILE: ./src/addon-jackett/Dockerfile
IMAGE_NAME: knightcrawler-addon-jackett

View File

@@ -2,8 +2,11 @@ name: Build and Push Metadata Service
on:
push:
tags:
- '**'
paths:
- 'src/metadata/**'
workflow_dispatch:
jobs:
process:
@@ -11,4 +14,5 @@ jobs:
secrets: inherit
with:
CONTEXT: ./src/metadata/
DOCKERFILE: ./src/metadata/Dockerfile
IMAGE_NAME: knightcrawler-metadata

View File

@@ -2,8 +2,11 @@ name: Build and Push Migrator Service
on:
push:
tags:
- '**'
paths:
- 'src/migrator/**'
workflow_dispatch:
jobs:
process:
@@ -11,4 +14,5 @@ jobs:
secrets: inherit
with:
CONTEXT: ./src/migrator/
DOCKERFILE: ./src/migrator/Dockerfile
IMAGE_NAME: knightcrawler-migrator

View File

@@ -2,13 +2,17 @@ name: Build and Push Producer Service
on:
push:
tags:
- '**'
paths:
- 'src/producer/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/producer/
CONTEXT: ./src/
DOCKERFILE: ./src/producer/src/Dockerfile
IMAGE_NAME: knightcrawler-producer

View File

@@ -0,0 +1,18 @@
name: Build and Push Qbit Collector Service
on:
push:
tags:
- '**'
paths:
- 'src/qbit-collector/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/
DOCKERFILE: ./src/qbit-collector/Dockerfile
IMAGE_NAME: knightcrawler-qbit-collector

.github/workflows/build_tissue.yaml
View File

@@ -0,0 +1,18 @@
name: Build and Push Tissue Service
on:
push:
tags:
- '**'
paths:
- 'src/tissue/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/tissue/
DOCKERFILE: ./src/tissue/Dockerfile
IMAGE_NAME: knightcrawler-tissue

View File

@@ -0,0 +1,15 @@
name: Build and Push Torrent Ingestor Service
on:
push:
paths:
- 'src/torrent-ingestor/**'
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/torrent-ingestor
DOCKERFILE: ./src/torrent-ingestor/Dockerfile
IMAGE_NAME: knightcrawler-torrent-ingestor

.gitignore
View File

@@ -355,6 +355,9 @@ MigrationBackup/
# Fody - auto-generated XML schema
FodyWeavers.xsd
# Jetbrains ide's run profiles (Could contain sensative information)
**/.run/
# VS Code files for those working on multiple tools
.vscode/*
!.vscode/settings.json
@@ -392,8 +395,6 @@ dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
@@ -607,3 +608,7 @@ fabric.properties
# Caddy logs
!**/caddy/logs/.gitkeep
**/caddy/logs/**
# Mac directory indexes
.DS_Store
deployment/docker/stack.env

View File

@@ -3,6 +3,7 @@ repos:
rev: v4.5.0
hooks:
- id: check-added-large-files
args: ['--maxkb=2500']
- id: check-json
- id: check-toml
- id: check-xml
@@ -15,5 +16,6 @@ repos:
rev: v2.2.6
hooks:
- id: codespell
exclude: ^src/node/consumer/test/
exclude: |
(?x)^(src/node/consumer/test/.*|src/producer/Data/.*|src/tissue/Data/.*)$
args: ["-L", "strem,chage"]

View File

@@ -7,9 +7,6 @@
## Contents
> [!CAUTION]
> Until we reach `v1.0.0`, please consider releases as alpha.
> [!IMPORTANT]
> The latest change renames the project and requires a [small migration](#selfhostio-to-knightcrawler-migration).
- [Contents](#contents)
@@ -54,11 +51,11 @@ Download and install [Docker Compose](https://docs.docker.com/compose/install/),
### Environment Setup
Before running the project, you need to set up the environment variables. Copy the `.env.example` file to `.env`:
Before running the project, you need to set up the environment variables. Edit the values in `stack.env`:
```sh
cd deployment/docker
cp .env.example .env
code stack.env
```
Then set any of the values you would like to customize.
@@ -70,9 +67,6 @@ Then set any of the values you wouldd like to customize.
By default, Knight Crawler is configured to be *relatively* conservative in its resource usage. If running on a decent machine (16GB RAM, i5+ or equivalent), you can increase some settings to increase consumer throughput. This is especially helpful if you have a large backlog from [importing databases](#importing-external-dumps).
In your `.env` file, under the `# Consumer` section increase `CONSUMER_REPLICAS` from `3` to `15`.
You can also increase `JOB_CONCURRENCY` from `5` to `10`.
### DebridMediaManager setup (optional)
There are some optional steps you should take to maximise the number of movies/tv shows we can find.
@@ -93,9 +87,9 @@ We can search DebridMediaManager hash lists which are hosted on GitHub. This all
(checked) Public Repositories (read-only)
```
4. Click `Generate token`
5. Take the new token and add it to the bottom of the [.env](deployment/docker/.env) file
5. Take the new token and add it to the bottom of the [stack.env](deployment/docker/stack.env) file
```
GithubSettings__PAT=<YOUR TOKEN HERE>
GITHUB_PAT=<YOUR TOKEN HERE>
```
### Configure external access
@@ -146,7 +140,7 @@ Remove or comment out the port for the addon, and connect it to Caddy:
addon:
<<: *knightcrawler-app
env_file:
- .env
- stack.env
hostname: knightcrawler-addon
image: gabisonfire/knightcrawler-addon:latest
labels:

View File

@@ -1,55 +0,0 @@
# General environment variables
TZ=London/Europe
# PostgreSQL
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=knightcrawler
# MongoDB
MONGODB_HOST=mongodb
MONGODB_PORT=27017
MONGODB_DB=knightcrawler
MONGODB_USER=mongo
MONGODB_PASSWORD=mongo
# RabbitMQ
RABBITMQ_HOST=rabbitmq
RABBITMQ_USER=guest
RABBITMQ_PASSWORD=guest
RABBITMQ_QUEUE_NAME=ingested
RABBITMQ_DURABLE=true
RABBITMQ_MAX_QUEUE_SIZE=0
RABBITMQ_MAX_PUBLISH_BATCH_SIZE=500
RABBITMQ_PUBLISH_INTERVAL_IN_SECONDS=10
# Metadata
## Only used if DATA_ONCE is set to false. If true, the schedule is ignored
METADATA_DOWNLOAD_IMDB_DATA_SCHEDULE="0 0 1 * *"
## If true, the metadata will be downloaded once and then the schedule will be ignored
METADATA_DOWNLOAD_IMDB_DATA_ONCE=true
## Controls the amount of records processed in memory at any given time during import, higher values will consume more memory
METADATA_INSERT_BATCH_SIZE=25000
# Addon
DEBUG_MODE=false
# Consumer
JOB_CONCURRENCY=5
JOBS_ENABLED=true
## can be debug for extra verbosity (a lot more verbosity - useful for development)
LOG_LEVEL=info
MAX_CONNECTIONS_PER_TORRENT=10
MAX_CONNECTIONS_OVERALL=100
TORRENT_TIMEOUT=30000
UDP_TRACKERS_ENABLED=true
CONSUMER_REPLICAS=3
## Fix for #66 - toggle on for development
AUTO_CREATE_AND_APPLY_MIGRATIONS=false
## Allows control of the threshold for matching titles to the IMDB dataset. The closer to 0, the more strict the matching.
TITLE_MATCH_THRESHOLD=0.25
# Producer
GITHUB_PAT=

View File

@@ -0,0 +1,62 @@
[Application]
FileLogger\Age=1
FileLogger\AgeType=1
FileLogger\Backup=true
FileLogger\DeleteOld=true
FileLogger\Enabled=true
FileLogger\MaxSizeBytes=66560
FileLogger\Path=/config/qBittorrent/logs
[AutoRun]
enabled=false
program=
[BitTorrent]
Session\AnonymousModeEnabled=true
Session\BTProtocol=TCP
Session\DefaultSavePath=/downloads/
Session\ExcludedFileNames=
Session\MaxActiveCheckingTorrents=5
Session\MaxActiveDownloads=10
Session\MaxActiveTorrents=50
Session\MaxActiveUploads=50
Session\MaxConnections=2000
Session\Port=6881
Session\QueueingSystemEnabled=true
Session\TempPath=/downloads/incomplete/
Session\TorrentStopCondition=MetadataReceived
[Core]
AutoDeleteAddedTorrentFile=Never
[LegalNotice]
Accepted=true
[Meta]
MigrationVersion=6
[Network]
PortForwardingEnabled=true
Proxy\HostnameLookupEnabled=false
Proxy\Profiles\BitTorrent=true
Proxy\Profiles\Misc=true
Proxy\Profiles\RSS=true
[Preferences]
Connection\PortRangeMin=6881
Connection\ResolvePeerCountries=false
Connection\UPnP=false
Downloads\SavePath=/downloads/
Downloads\TempPath=/downloads/incomplete/
General\Locale=en
MailNotification\req_auth=true
WebUI\Address=*
WebUI\AuthSubnetWhitelist=0.0.0.0/0
WebUI\AuthSubnetWhitelistEnabled=true
WebUI\HostHeaderValidation=false
WebUI\LocalHostAuth=false
WebUI\ServerDomains=*
[RSS]
AutoDownloader\DownloadRepacks=true
AutoDownloader\SmartEpisodeFilter=s(\\d+)e(\\d+), (\\d+)x(\\d+), "(\\d{4}[.\\-]\\d{1,2}[.\\-]\\d{1,2})", "(\\d{1,2}[.\\-]\\d{1,2}[.\\-]\\d{4})"

View File

@@ -1,139 +1,244 @@
version: "3.9"
name: knightcrawler
x-restart: &restart-policy "unless-stopped"
networks:
knightcrawler-network:
name: knightcrawler-network
driver: bridge
x-basehealth: &base-health
interval: 10s
timeout: 10s
retries: 3
start_period: 10s
x-rabbithealth: &rabbitmq-health
test: rabbitmq-diagnostics -q ping
<<: *base-health
x-mongohealth: &mongodb-health
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
<<: *base-health
x-postgreshealth: &postgresdb-health
test: pg_isready
<<: *base-health
x-apps: &knightcrawler-app
depends_on:
mongodb:
condition: service_healthy
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
restart: *restart-policy
volumes:
postgres:
lavinmq:
redis:
services:
## Postgres is the database that is used by the services.
## All downloaded metadata is stored in this database.
postgres:
env_file: stack.env
healthcheck:
test: [ "CMD", "sh", "-c", "pg_isready -h localhost -U $$POSTGRES_USER" ]
timeout: 10s
interval: 10s
retries: 3
start_period: 10s
image: postgres:latest
env_file: .env
environment:
PGUSER: postgres # needed for healthcheck.
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, change the username and password in the .env file.
# # Furthermore, please, please, please, change the username and password in the stack.env file.
# # If you want to enhance your security even more, create a new user for the database with a strong password.
# ports:
# - "5432:5432"
networks:
- knightcrawler-network
restart: unless-stopped
volumes:
- postgres:/var/lib/postgresql/data
healthcheck: *postgresdb-health
restart: *restart-policy
networks:
- knightcrawler-network
mongodb:
image: mongo:latest
env_file: .env
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGODB_USER:?Variable MONGODB_USER not set}
MONGO_INITDB_ROOT_PASSWORD: ${MONGODB_PASSWORD:?Variable MONGODB_PASSWORD not set}
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, change the username and password in the .env file.
## Redis is used as a cache for the services.
## It is used to store the infohashes that are currently being processed in sagas, as well as interim data.
redis:
env_file: stack.env
healthcheck:
test: ["CMD-SHELL", "redis-cli ping"]
timeout: 10s
interval: 10s
retries: 3
start_period: 10s
image: redis/redis-stack:latest
# # If you need redis to be accessible from outside, please open the below port.
# ports:
# - "27017:27017"
volumes:
- mongo:/data/db
restart: *restart-policy
healthcheck: *mongodb-health
# - "6379:6379"
networks:
- knightcrawler-network
restart: unless-stopped
volumes:
- redis:/data
rabbitmq:
image: rabbitmq:3-management
## LavinMQ is used as a message broker for the services.
## It is a high-performance drop-in replacement for RabbitMQ.
## It is used to communicate between the services.
lavinmq:
env_file: stack.env
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, look at the documentation for rabbit on how to secure the service.
# # Furthermore, please, please, please, look at the documentation for lavinmq / rabbitmq on how to secure the service.
# ports:
# - "5672:5672"
# - "15672:15672"
# - "15692:15692"
image: cloudamqp/lavinmq:latest
healthcheck:
test: ["CMD-SHELL", "lavinmqctl status"]
timeout: 10s
interval: 10s
retries: 3
start_period: 10s
restart: unless-stopped
networks:
- knightcrawler-network
volumes:
- rabbitmq:/var/lib/rabbitmq
hostname: ${RABBITMQ_HOST}
restart: *restart-policy
healthcheck: *rabbitmq-health
networks:
- knightcrawler-network
producer:
image: gabisonfire/knightcrawler-producer:latest
labels:
logging: "promtail"
env_file: .env
<<: *knightcrawler-app
networks:
- knightcrawler-network
consumer:
image: gabisonfire/knightcrawler-consumer:latest
env_file: .env
labels:
logging: "promtail"
deploy:
replicas: ${CONSUMER_REPLICAS}
<<: *knightcrawler-app
networks:
- knightcrawler-network
metadata:
image: gabisonfire/knightcrawler-metadata:latest
env_file: .env
labels:
logging: "promtail"
restart: no
networks:
- knightcrawler-network
- lavinmq:/var/lib/lavinmq/
## The addon. This is what is used in stremio
addon:
<<: *knightcrawler-app
env_file: .env
depends_on:
metadata:
condition: service_completed_successfully
migrator:
condition: service_completed_successfully
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
redis:
condition: service_healthy
env_file: stack.env
hostname: knightcrawler-addon
image: gabisonfire/knightcrawler-addon:latest
image: gabisonfire/knightcrawler-addon:2.0.17
labels:
logging: "promtail"
logging: promtail
networks:
- knightcrawler-network
# - caddy
ports:
- "7000:7000"
restart: unless-stopped
## The consumer is responsible for consuming infohashes and orchestrating download of metadata.
consumer:
depends_on:
metadata:
condition: service_completed_successfully
migrator:
condition: service_completed_successfully
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
redis:
condition: service_healthy
env_file: stack.env
image: gabisonfire/knightcrawler-consumer:2.0.17
labels:
logging: promtail
networks:
- knightcrawler-network
restart: unless-stopped
networks:
knightcrawler-network:
driver: bridge
name: knightcrawler-network
## The debrid collector is responsible for downloading metadata from debrid services. (Currently only RealDebrid is supported)
debridcollector:
depends_on:
metadata:
condition: service_completed_successfully
migrator:
condition: service_completed_successfully
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
redis:
condition: service_healthy
env_file: stack.env
image: gabisonfire/knightcrawler-debrid-collector:2.0.17
labels:
logging: promtail
networks:
- knightcrawler-network
restart: unless-stopped
# caddy:
# name: caddy
# external: true
## The metadata service is responsible for downloading publicly available IMDb datasets.
## This is used to enrich the metadata during production of ingested infohashes.
metadata:
depends_on:
migrator:
condition: service_completed_successfully
env_file: stack.env
image: gabisonfire/knightcrawler-metadata:2.0.17
networks:
- knightcrawler-network
restart: "no"
volumes:
postgres:
mongo:
rabbitmq:
## The migrator is responsible for migrating the database schema.
migrator:
depends_on:
postgres:
condition: service_healthy
env_file: stack.env
image: gabisonfire/knightcrawler-migrator:2.0.17
networks:
- knightcrawler-network
restart: "no"
## The producer is responsible for producing infohashes by acquiring them from various sites, including DMM.
producer:
depends_on:
metadata:
condition: service_completed_successfully
migrator:
condition: service_completed_successfully
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
redis:
condition: service_healthy
env_file: stack.env
image: gabisonfire/knightcrawler-producer:2.0.17
labels:
logging: promtail
networks:
- knightcrawler-network
restart: unless-stopped
## QBit collector utilizes QBitTorrent to download metadata.
qbitcollector:
depends_on:
metadata:
condition: service_completed_successfully
migrator:
condition: service_completed_successfully
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
redis:
condition: service_healthy
qbittorrent:
condition: service_healthy
deploy:
replicas: ${QBIT_REPLICAS:-0}
env_file: stack.env
image: gabisonfire/knightcrawler-qbit-collector:2.0.17
labels:
logging: promtail
networks:
- knightcrawler-network
restart: unless-stopped
## QBitTorrent is a torrent client that can be used to download torrents. In this case it's used to download metadata.
## The QBit collector requires this.
qbittorrent:
deploy:
replicas: ${QBIT_REPLICAS:-0}
env_file: stack.env
environment:
PGID: "1000"
PUID: "1000"
TORRENTING_PORT: "6881"
WEBUI_PORT: "8080"
healthcheck:
test: ["CMD-SHELL", "curl --fail http://localhost:8080"]
timeout: 10s
interval: 10s
retries: 3
start_period: 10s
image: lscr.io/linuxserver/qbittorrent:latest
networks:
- knightcrawler-network
ports:
- "6881:6881/tcp"
- "6881:6881/udp"
# if you want to expose the webui, uncomment the following line
# - "8001:8080"
restart: unless-stopped
volumes:
- ./config/qbit/qbittorrent.conf:/config/qBittorrent/qBittorrent.conf

View File

@@ -4,7 +4,7 @@
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## Once you have confirmed Caddy works you should comment out
## the below line:
acme_ca https://acme-staging-v02.api.letsencrypt.org/director
acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
(security-headers) {

View File

@@ -16,7 +16,7 @@ rule_files:
scrape_configs:
- job_name: "rabbitmq"
static_configs:
- targets: ["rabbitmq:15692"]
- targets: ["lavinmq:15692"]
- job_name: "postgres-exporter"
static_configs:
- targets: ["postgres-exporter:9187"]

View File

@@ -0,0 +1,87 @@
x-basehealth: &base-health
interval: 10s
timeout: 10s
retries: 3
start_period: 10s
x-lavinhealth: &lavinmq-health
test: [ "CMD-SHELL", "lavinmqctl status" ]
<<: *base-health
x-redishealth: &redis-health
test: redis-cli ping
<<: *base-health
x-postgreshealth: &postgresdb-health
test: [ "CMD", "sh", "-c", "pg_isready -h localhost -U $$POSTGRES_USER" ]
<<: *base-health
x-qbit: &qbit-health
test: "curl --fail http://localhost:8080"
<<: *base-health
services:
postgres:
image: postgres:latest
environment:
PGUSER: postgres # needed for healthcheck.
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, change the username and password in the .env file.
# # If you want to enhance your security even more, create a new user for the database with a strong password.
# ports:
# - "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
healthcheck: *postgresdb-health
restart: unless-stopped
env_file: ../../.env
networks:
- knightcrawler-network
redis:
image: redis/redis-stack:latest
# # If you need redis to be accessible from outside, please open the below port.
# ports:
# - "6379:6379"
volumes:
- redis:/data
restart: unless-stopped
healthcheck: *redis-health
env_file: ../../.env
networks:
- knightcrawler-network
lavinmq:
env_file: stack.env
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, look at the documentation for lavinmq / rabbitmq on how to secure the service.
# ports:
# - "5672:5672"
# - "15672:15672"
# - "15692:15692"
image: cloudamqp/lavinmq:latest
healthcheck: *lavinmq-health
restart: unless-stopped
volumes:
- lavinmq:/var/lib/lavinmq/
## QBitTorrent is a torrent client that can be used to download torrents. In this case it's used to download metadata.
## The QBit collector requires this.
qbittorrent:
image: lscr.io/linuxserver/qbittorrent:latest
environment:
- PUID=1000
- PGID=1000
- WEBUI_PORT=8080
- TORRENTING_PORT=6881
ports:
- 6881:6881
- 6881:6881/udp
env_file: ../../.env
networks:
- knightcrawler-network
restart: unless-stopped
healthcheck: *qbit-health
volumes:
- ../../config/qbit/qbittorrent.conf:/config/qBittorrent/qBittorrent.conf

View File

@@ -0,0 +1,71 @@
x-apps: &knightcrawler-app
labels:
logging: "promtail"
env_file: ../../.env
networks:
- knightcrawler-network
x-depends: &knightcrawler-app-depends
depends_on:
redis:
condition: service_healthy
postgres:
condition: service_healthy
lavinmq:
condition: service_healthy
migrator:
condition: service_completed_successfully
metadata:
condition: service_completed_successfully
services:
metadata:
image: gabisonfire/knightcrawler-metadata:2.0.17
env_file: ../../.env
networks:
- knightcrawler-network
restart: no
depends_on:
migrator:
condition: service_completed_successfully
migrator:
image: gabisonfire/knightcrawler-migrator:2.0.17
env_file: ../../.env
networks:
- knightcrawler-network
restart: no
depends_on:
postgres:
condition: service_healthy
addon:
image: gabisonfire/knightcrawler-addon:2.0.17
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
hostname: knightcrawler-addon
ports:
- "7000:7000"
consumer:
image: gabisonfire/knightcrawler-consumer:2.0.17
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
debridcollector:
image: gabisonfire/knightcrawler-debrid-collector:2.0.17
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
producer:
image: gabisonfire/knightcrawler-producer:2.0.17
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
qbitcollector:
image: gabisonfire/knightcrawler-qbit-collector:2.0.17
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
depends_on:
qbittorrent:
condition: service_healthy

View File

@@ -0,0 +1,4 @@
networks:
knightcrawler-network:
driver: bridge
name: knightcrawler-network

View File

@@ -0,0 +1,4 @@
volumes:
postgres:
redis:
lavinmq:

View File

@@ -0,0 +1,7 @@
services:
qbittorrent:
deploy:
replicas: 0
qbitcollector:
deploy:
replicas: 0

View File

@@ -0,0 +1,7 @@
version: "3.9"
name: "knightcrawler"
include:
- ./components/network.yaml
- ./components/volumes.yaml
- ./components/infrastructure.yaml
- ./components/knightcrawler.yaml

View File

@@ -0,0 +1,43 @@
# General environment variables
TZ=London/Europe
# PostgreSQL
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=knightcrawler
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_EXTRA=abortConnect=false,allowAdmin=true
# AMQP
RABBITMQ_HOST=lavinmq
RABBITMQ_USER=guest
RABBITMQ_PASSWORD=guest
RABBITMQ_CONSUMER_QUEUE_NAME=ingested
RABBITMQ_DURABLE=true
RABBITMQ_MAX_QUEUE_SIZE=0
RABBITMQ_MAX_PUBLISH_BATCH_SIZE=500
RABBITMQ_PUBLISH_INTERVAL_IN_SECONDS=10
# Metadata
METADATA_INSERT_BATCH_SIZE=50000
# Collectors
COLLECTOR_QBIT_ENABLED=false
COLLECTOR_DEBRID_ENABLED=true
COLLECTOR_REAL_DEBRID_API_KEY=
QBIT_HOST=http://qbittorrent:8080
QBIT_TRACKERS_URL=https://raw.githubusercontent.com/ngosang/trackerslist/master/trackers_all_http.txt
# Number of replicas for the qBittorrent collector and qBitTorrent client. Should be 0 or 1.
QBIT_REPLICAS=0
# Addon
DEBUG_MODE=false
# Producer
GITHUB_PAT=

View File

@@ -0,0 +1,6 @@
apiVersion: v2
appVersion: 2.0.17
description: A helm chart for Knightcrawler
name: knightcrawler
type: application
version: 0.1.0

View File

@@ -0,0 +1,6 @@
Congratulations,
Knightcrawler is now deployed. It may take a while to come up and start responding.

View File

@@ -0,0 +1,27 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: '{{ .Release.Name }}-config'
labels:
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
data:
COLLECTOR_DEBRID_ENABLED: '{{ .Values.knightcrawler.debridcollector.enabled }}'
COLLECTOR_QBIT_ENABLED: '{{ .Values.knightcrawler.qbitcollector.enabled }}'
DEBUG_MODE: '{{ .Values.knightcrawler.debug }}'
METADATA_INSERT_BATCH_SIZE: '{{ .Values.environment.metadata.insertBatchSize }}'
POSTGRES_DB: '{{ .Values.environment.postgres.dbName }}'
POSTGRES_HOST: '{{ if .Values.environment.postgres.external }}{{ .Values.environment.postgres.host }}{{ else }}{{ .Release.Name }}-postgres{{ end }}'
POSTGRES_PORT: '{{ .Values.environment.postgres.port }}'
QBIT_HOST: '{{ .Values.environment.qbitcollector.qbitHost }}'
QBIT_TRACKERS_URL: '{{ .Values.environment.qbitcollector.trackersUrl }}'
RABBITMQ_CONSUMER_QUEUE_NAME: '{{ .Values.environment.producer.queueName }}'
RABBITMQ_DURABLE: '{{ .Values.environment.producer.durable }}'
RABBITMQ_HOST: '{{ if .Values.environment.lavinmq.external }}{{ .Values.environment.lavinmq.host }}{{ else }}{{ .Release.Name }}-lavinmq{{ end }}'
RABBITMQ_MAX_PUBLISH_BATCH_SIZE: '{{ .Values.environment.producer.maxPublishBatchSize }}'
RABBITMQ_MAX_QUEUE_SIZE: '{{ .Values.environment.producer.maxQueueSize }}'
RABBITMQ_PUBLISH_INTERVAL_IN_SECONDS: '{{ .Values.environment.producer.publishIntervalSeconds }}'
REDIS_EXTRA: '{{ .Values.environment.redis.extra }}'
REDIS_HOST: '{{ if .Values.environment.redis.external }}{{ .Values.environment.redis.host }}{{ else }}{{ .Release.Name }}-redis{{ end }}'
REDIS_PORT: '{{ .Values.environment.redis.port }}'
TZ: '{{ .Values.shared.timezone }}'

View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Secret
metadata:
name: '{{ .Release.Name }}-secrets'
labels:
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
type: Opaque
data:
GITHUB_PAT: '{{ .Values.environment.producer.githubPat | b64enc }}'
COLLECTOR_REAL_DEBRID_API_KEY: '{{ .Values.environment.debridcollector.realDebridApiKey | b64enc }}'
POSTGRES_USER: '{{ .Values.environment.postgres.user | b64enc }}'
POSTGRES_PASSWORD: '{{ .Values.environment.postgres.password | b64enc }}'
RABBITMQ_PASSWORD: '{{ .Values.environment.lavinmq.password | b64enc }}'
RABBITMQ_USER: '{{ .Values.environment.lavinmq.user | b64enc }}'

View File

@@ -0,0 +1,25 @@
{{ if .Values.infrastructure.lavinmq.enabled }}
apiVersion: v1
kind: Service
metadata:
name: '{{ .Release.Name }}-lavinmq'
labels:
component: lavinmq
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
spec:
selector:
component: lavinmq
release: '{{ .Release.Name }}'
type: ClusterIP
ports:
- protocol: TCP
port: 5672
targetPort: 5672
- protocol: TCP
port: 15672
targetPort: 15672
- protocol: TCP
port: 15692
targetPort: 15692
{{- end -}}

View File

@@ -0,0 +1,60 @@
{{ if .Values.infrastructure.lavinmq.enabled }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: '{{ .Release.Name }}-lavinmq'
labels:
component: lavinmq
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "0"
spec:
serviceName: '{{ .Release.Name }}-lavinmq'
replicas: 1
selector:
matchLabels:
component: lavinmq
release: '{{ .Release.Name }}'
template:
metadata:
labels:
component: lavinmq
release: '{{ .Release.Name }}'
spec:
containers:
- name: lavinmq
image: '{{ .Values.infrastructure.lavinmq.image }}:{{ .Values.infrastructure.lavinmq.tag }}'
ports:
- name: lavinmq
containerPort: 5672
- name: lavinmq-15672
containerPort: 15672
- name: lavinmq-15692
containerPort: 15692
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'
volumeMounts:
- mountPath: /var/lib/lavinmq
name: lavinmq
livenessProbe:
exec:
command:
- lavinmqctl status
periodSeconds: 10
initialDelaySeconds: 10
successThreshold: 1
failureThreshold: 3
volumeClaimTemplates:
- metadata:
name: lavinmq
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: '{{ .Values.persistence.lavinmq.capacity }}'
{{- end -}}

View File

@@ -0,0 +1,19 @@
{{ if .Values.infrastructure.postgres.enabled }}
apiVersion: v1
kind: Service
metadata:
name: '{{ .Release.Name }}-postgres'
labels:
component: postgres
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
spec:
selector:
component: postgres
release: '{{ .Release.Name }}'
type: ClusterIP
ports:
- protocol: TCP
port: 5432
targetPort: 5432
{{- end -}}

View File

@@ -0,0 +1,58 @@
{{ if .Values.infrastructure.postgres.enabled }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: '{{ .Release.Name }}-postgres'
labels:
component: postgres
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "0"
spec:
serviceName: '{{ .Release.Name }}-postgres'
replicas: 1
selector:
matchLabels:
component: postgres
release: '{{ .Release.Name }}'
template:
metadata:
labels:
component: postgres
release: '{{ .Release.Name }}'
spec:
containers:
- name: postgres
image: '{{ .Values.infrastructure.postgres.image }}:{{ .Values.infrastructure.postgres.tag }}'
ports:
- name: postgres
containerPort: 5432
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres
livenessProbe:
exec:
command:
- sh
- -c
- pg_isready -h localhost -U $POSTGRES_USER
periodSeconds: 10
initialDelaySeconds: 10
successThreshold: 1
failureThreshold: 3
volumeClaimTemplates:
- metadata:
name: postgres
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: '{{ .Values.persistence.postgres.capacity }}'
{{- end -}}

View File

@@ -0,0 +1,57 @@
{{ if .Values.knightcrawler.qbitcollector.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{{ .Release.Name }}-qbittorrent'
labels:
component: qbittorrent
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "0"
spec:
replicas: 1
selector:
matchLabels:
component: qbittorrent
release: '{{ .Release.Name }}'
template:
metadata:
labels:
component: qbittorrent
release: '{{ .Release.Name }}'
spec:
containers:
- name: qbittorrent
image: '{{ .Values.infrastructure.qbittorrent.image }}:{{ .Values.infrastructure.qbittorrent.tag }}'
ports:
- name: qbittorrent
containerPort: 6881
- name: qbittorrent-6881
containerPort: 6881
- name: qbittorrent-8080
containerPort: 8080
env:
- name: PUID
value: '{{ .Values.environment.qbittorrent.puid }}'
- name: PGID
value: '{{ .Values.environment.qbittorrent.pgid }}'
- name: TORRENTING_PORT
value: '{{ .Values.environment.qbittorrent.torrentingPort }}'
- name: WEBUI_PORT
value: '{{ .Values.environment.qbittorrent.webuiPort }}'
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'
livenessProbe:
exec:
command:
- curl --fail http://localhost:8080
periodSeconds: 10
initialDelaySeconds: 10
successThreshold: 1
failureThreshold: 3
{{- end -}}

View File

@@ -0,0 +1,25 @@
{{ if .Values.knightcrawler.qbitcollector.enabled }}
apiVersion: v1
kind: Service
metadata:
name: '{{ .Release.Name }}-qbittorrent'
labels:
component: qbittorrent
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
spec:
selector:
component: qbittorrent
release: '{{ .Release.Name }}'
type: ClusterIP
ports:
- protocol: TCP
port: 6881
targetPort: 6881
- protocol: TCP
port: 6881
targetPort: 6881
- protocol: TCP
port: 8080
targetPort: 8080
{{- end -}}

View File

@@ -0,0 +1,19 @@
{{ if .Values.infrastructure.redis.enabled }}
apiVersion: v1
kind: Service
metadata:
name: '{{ .Release.Name }}-redis'
labels:
component: redis
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
spec:
selector:
component: redis
release: '{{ .Release.Name }}'
type: ClusterIP
ports:
- protocol: TCP
port: 6379
targetPort: 6379
{{- end -}}

View File

@@ -0,0 +1,56 @@
{{ if .Values.infrastructure.redis.enabled }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: '{{ .Release.Name }}-redis'
labels:
component: redis
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "0"
spec:
serviceName: '{{ .Release.Name }}-redis'
replicas: 1
selector:
matchLabels:
component: redis
release: '{{ .Release.Name }}'
template:
metadata:
labels:
component: redis
release: '{{ .Release.Name }}'
spec:
containers:
- name: redis
image: '{{ .Values.infrastructure.redis.image }}:{{ .Values.infrastructure.redis.tag }}'
ports:
- name: redis
containerPort: 6379
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'
volumeMounts:
- mountPath: /data
name: redis
livenessProbe:
exec:
command:
- redis-cli ping
periodSeconds: 10
initialDelaySeconds: 10
successThreshold: 1
failureThreshold: 3
volumeClaimTemplates:
- metadata:
name: redis
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: '{{ .Values.persistence.redis.capacity }}'
{{- end -}}

View File

@@ -0,0 +1,28 @@
apiVersion: batch/v1
kind: Job
metadata:
name: '{{ .Release.Name }}-metadata'
labels:
component: metadata
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "2"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
labels:
component: metadata
release: '{{ .Release.Name }}'
spec:
restartPolicy: OnFailure
containers:
- name: metadata
image: '{{ .Values.knightcrawler.metadata.image }}{{ if ne .Values.knightcrawler.globalImageTagOverride "" }}:{{ .Values.knightcrawler.globalImageTagOverride }}{{else}}:{{ .Values.knightcrawler.metadata.tag}}{{ end }}'
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'

View File

@@ -0,0 +1,28 @@
apiVersion: batch/v1
kind: Job
metadata:
name: '{{ .Release.Name }}-migrator'
labels:
component: migrator
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "1"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
labels:
component: migrator
release: '{{ .Release.Name }}'
spec:
restartPolicy: OnFailure
containers:
- name: migrator
image: '{{ .Values.knightcrawler.migrator.image }}{{ if ne .Values.knightcrawler.globalImageTagOverride "" }}:{{ .Values.knightcrawler.globalImageTagOverride }}{{else}}:{{ .Values.knightcrawler.migrator.tag}}{{ end }}'
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'

View File

@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{{ .Release.Name }}-addon'
labels:
component: addon
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "4"
spec:
replicas: {{ .Values.knightcrawler.addon.replicas }}
selector:
matchLabels:
component: addon
release: '{{ .Release.Name }}'
template:
metadata:
labels:
component: addon
release: '{{ .Release.Name }}'
spec:
containers:
- name: addon
image: '{{ .Values.knightcrawler.addon.image }}{{ if ne .Values.knightcrawler.globalImageTagOverride "" }}:{{ .Values.knightcrawler.globalImageTagOverride }}{{else}}:{{ .Values.knightcrawler.addon.tag}}{{ end }}'
ports:
- name: addon
containerPort: 7000
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'

View File

@@ -0,0 +1,32 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{{ .Release.Name }}-consumer'
labels:
component: consumer
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "4"
spec:
replicas: {{ .Values.knightcrawler.consumer.replicas }}
selector:
matchLabels:
component: consumer
release: '{{ .Release.Name }}'
template:
metadata:
labels:
component: consumer
release: '{{ .Release.Name }}'
spec:
containers:
- name: consumer
image: '{{ .Values.knightcrawler.consumer.image }}{{ if ne .Values.knightcrawler.globalImageTagOverride "" }}:{{ .Values.knightcrawler.globalImageTagOverride }}{{else}}:{{ .Values.knightcrawler.consumer.tag}}{{ end }}'
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'

View File

@@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{{ .Release.Name }}-debridcollector'
labels:
component: debridcollector
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "4"
spec:
replicas: {{ .Values.knightcrawler.debridcollector.replicas }}
selector:
matchLabels:
component: debridcollector
release: '{{ .Release.Name }}'
template:
metadata:
labels:
component: debridcollector
release: '{{ .Release.Name }}'
spec:
containers:
- name: debridcollector
image: '{{ .Values.knightcrawler.debridcollector.image }}{{ if ne .Values.knightcrawler.globalImageTagOverride "" }}:{{ .Values.knightcrawler.globalImageTagOverride }}{{else}}:{{ .Values.knightcrawler.debridcollector.tag}}{{ end }}'
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'

View File

@@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{{ .Release.Name }}-producer'
labels:
component: producer
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "4"
spec:
replicas: {{ .Values.knightcrawler.producer.replicas }}
selector:
matchLabels:
component: producer
release: '{{ .Release.Name }}'
template:
metadata:
labels:
component: producer
release: '{{ .Release.Name }}'
spec:
containers:
- name: producer
image: '{{ .Values.knightcrawler.producer.image }}{{ if ne .Values.knightcrawler.globalImageTagOverride "" }}:{{ .Values.knightcrawler.globalImageTagOverride }}{{else}}:{{ .Values.knightcrawler.producer.tag}}{{ end }}'
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'

View File

@@ -0,0 +1,33 @@
{{ if .Values.knightcrawler.qbitcollector.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: '{{ .Release.Name }}-qbitcollector'
labels:
component: qbitcollector
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "4"
spec:
replicas: {{ .Values.knightcrawler.qbitcollector.replicas }}
selector:
matchLabels:
component: qbitcollector
release: '{{ .Release.Name }}'
template:
metadata:
labels:
component: qbitcollector
release: '{{ .Release.Name }}'
spec:
containers:
- name: qbitcollector
image: '{{ .Values.knightcrawler.qbitcollector.image }}{{ if ne .Values.knightcrawler.globalImageTagOverride "" }}:{{ .Values.knightcrawler.globalImageTagOverride }}{{else}}:{{ .Values.knightcrawler.qbitcollector.tag}}{{ end }}'
envFrom:
- configMapRef:
name: '{{ .Release.Name }}-config'
- secretRef:
name: '{{ .Release.Name }}-secrets'
{{- end -}}

View File

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
name: '{{ .Release.Name }}-addon'
labels:
component: addon
project: '{{ .Chart.Name }}'
release: '{{ .Release.Name }}'
spec:
selector:
component: addon
release: '{{ .Release.Name }}'
type: ClusterIP
ports:
- protocol: TCP
port: 7000
targetPort: 7000

deployment/k8s/values.yaml
View File

@@ -0,0 +1,100 @@
knightcrawler:
debug: false
globalImageTagOverride: ""
addon:
image: "gabisonfire/knightcrawler-addon"
tag: "2.0.17"
replicas: 1
consumer:
image: "gabisonfire/knightcrawler-consumer"
tag: "2.0.17"
replicas: 1
metadata:
image: "gabisonfire/knightcrawler-metadata"
tag: "2.0.17"
replicas: 1
migrator:
image: "gabisonfire/knightcrawler-migrator"
tag: "2.0.17"
replicas: 1
debridcollector:
image: "gabisonfire/knightcrawler-debrid-collector"
tag: "2.0.17"
enabled: true
replicas: 1
qbitcollector:
image: "gabisonfire/knightcrawler-qbit-collector"
tag: "2.0.17"
enabled: false
replicas: 1
producer:
image: "gabisonfire/knightcrawler-producer"
tag: "2.0.17"
replicas: 1
infrastructure:
lavinmq:
image: "cloudamqp/lavinmq"
tag: "latest"
enabled: true
postgres:
image: "postgres"
tag: "latest"
enabled: true
redis:
image: "redis/redis-stack-server"
tag: "latest"
enabled: true
qbittorrent:
image: "lscr.io/linuxserver/qbittorrent"
tag: "latest"
environment:
redis:
external: false
host: ""
port: "6379"
extra: "abortConnect=false,allowAdmin=true"
postgres:
external: false
host: ""
port: "5432"
dbName: "knightcrawler"
user: "postgres"
password: "postgres"
lavinmq:
external: false
host: ""
user: "guest"
password: "guest"
qbitcollector:
qbitHost: "http://qbittorrent:8080"
trackersUrl: "https://raw.githubusercontent.com/ngosang/trackerslist/master/trackers_all_http.txt"
debridcollector:
realDebridApiKey: ""
producer:
githubPat: ""
queueName: "ingested"
durable: true
maxPublishBatchSize: 500
maxQueueSize: 0
publishIntervalSeconds: 10
metadata:
insertBatchSize: 50000
qbittorrent:
pgid: "1000"
puid: "1000"
torrentingPort: "6881"
webuiPort: "8080"
persistence:
storageClassName: ""
redis:
capacity: 1Gi
postgres:
capacity: 1Gi
lavinmq:
capacity: 1Gi
shared:
timezone: "London/Europe"

View File

@@ -14,7 +14,6 @@
"axios": "^1.6.1",
"bottleneck": "^2.19.5",
"cache-manager": "^3.4.4",
"cache-manager-mongodb": "^0.3.0",
"cors": "^2.8.5",
"debrid-link-api": "^1.0.1",
"express": "^4.18.2",
@@ -33,7 +32,11 @@
"user-agents": "^1.0.1444",
"video-name-parser": "^1.4.6",
"xml-js": "^1.6.11",
"xml2js": "^0.6.2"
"xml2js": "^0.6.2",
"@redis/client": "^1.5.14",
"@redis/json": "^1.0.6",
"@redis/search": "^1.1.6",
"cache-manager-redis-store": "^2.0.0"
},
"devDependencies": {
"@types/node": "^20.11.6",

View File

@@ -1,7 +1,7 @@
import cacheManager from 'cache-manager';
import mangodbStore from 'cache-manager-mongodb';
import { isStaticUrl } from '../moch/static.js';
import {cacheConfig} from "./settings.js";
import redisStore from 'cache-manager-redis-store';
const STREAM_KEY_PREFIX = `${cacheConfig.GLOBAL_KEY_PREFIX}|stream`;
const IMDB_KEY_PREFIX = `${cacheConfig.GLOBAL_KEY_PREFIX}|imdb`;
@@ -12,28 +12,20 @@ const memoryCache = initiateMemoryCache();
const remoteCache = initiateRemoteCache();
function initiateRemoteCache() {
if (cacheConfig.NO_CACHE) {
return null;
} else if (cacheConfig.MONGODB_URI) {
return cacheManager.caching({
store: mangodbStore,
uri: cacheConfig.MONGODB_URI,
options: {
collection: 'jackettio_addon_collection',
socketTimeoutMS: 120000,
useNewUrlParser: true,
useUnifiedTopology: false,
ttl: cacheConfig.STREAM_EMPTY_TTL
},
ttl: cacheConfig.STREAM_EMPTY_TTL,
ignoreCacheErrors: true
});
} else {
return cacheManager.caching({
store: 'memory',
ttl: cacheConfig.STREAM_EMPTY_TTL
});
}
if (cacheConfig.NO_CACHE) {
return null;
} else if (cacheConfig.REDIS_CONNECTION_STRING) {
return cacheManager.caching({
store: redisStore,
ttl: cacheConfig.STREAM_EMPTY_TTL,
url: cacheConfig.REDIS_CONNECTION_STRING
});
} else {
return cacheManager.caching({
store: 'memory',
ttl: cacheConfig.STREAM_EMPTY_TTL
});
}
}
function initiateMemoryCache() {

View File

@@ -25,7 +25,9 @@ export const cinemetaConfig = {
}
export const cacheConfig = {
MONGODB_URI: process.env.MONGODB_URI,
REDIS_HOST: process.env.REDIS_HOST || 'redis',
REDIS_PORT: process.env.REDIS_PORT || '6379',
REDIS_EXTRA: process.env.REDIS_EXTRA || '',
NO_CACHE: parseBool(process.env.NO_CACHE, false),
IMDB_TTL: parseInt(process.env.IMDB_TTL || 60 * 60 * 4), // 4 Hours
STREAM_TTL: parseInt(process.env.STREAM_TTL || 60 * 60 * 4), // 1 Hour
@@ -40,3 +42,5 @@ export const cacheConfig = {
STALE_ERROR_AGE: parseInt(process.env.STALE_ERROR_AGE) || 7 * 24 * 60 * 60, // 7 days
GLOBAL_KEY_PREFIX: process.env.GLOBAL_KEY_PREFIX || 'jackettio-addon',
}
cacheConfig.REDIS_CONNECTION_STRING = 'redis://' + cacheConfig.REDIS_HOST + ':' + cacheConfig.REDIS_PORT + '?' + cacheConfig.REDIS_EXTRA;

Some files were not shown because too many files have changed in this diff.