45 Commits

Author SHA1 Message Date
iPromKnight
72db18f0ad add missing env (#171)
* add missing env

* version bump
2024-03-26 11:16:21 +00:00
iPromKnight
d70cef1b86 addon fix (#170)
* addon fix

* version bump
2024-03-26 10:25:43 +00:00
iPromKnight
e1e718cd22 includes qbit collector fix (#169) 2024-03-26 10:17:04 +00:00
iPromKnight
c3e58e4234 Fix redis connection strings for consistency across languages. (#168)
* Fix redis connection strings across languages

* compose version bump
2024-03-26 09:26:35 +00:00
iPromKnight
d584102d60 image updates for patched release (#167) 2024-03-26 00:27:54 +00:00
iPromKnight
fe4bb59502 fix indenting on env file (#166)
* fix images :/

* fix indenting
2024-03-26 00:22:33 +00:00
iPromKnight
472b3342d5 fix images :/ (#165) 2024-03-26 00:01:59 +00:00
iPromKnight
b035ef596b change tag glob (#164) 2024-03-25 23:41:58 +00:00
iPromKnight
9a831e92d0 Producer / Consumer / Collector rewrite (#160)
* Converted metadata service to redis

* move to postgres instead

* fix global usings

* [skip ci] optimize wolverine by prebuilding static types

* [skip ci] Stop indexing mac folder indexes

* [skip ci] producer, metadata and migrations

removed mongodb
added redis cache
imdb meta in postgres
Enable pg_trgm
Create trigram index
Add search metadata Postgres function

* [skip ci] get rid of node folder, replace mongo with redis in consumer

also wire up postgres metadata searches

* [skip ci] change mongo to redis in the addon

* [skip ci] jackettio to redis

* Rest of mongo removed...

* Cleaner rerunning of metadata - without conflicts

* Add akas import as well as basic metadata

* Include episodes file too

* cascade truncate pre-import

* reverse order to avoid cascading

* separate out clean-up into a separate handler

* Switch producer to use metadata matching when pre-processing DMM

* More work

* Still porting PTN

* PTN port, adding tests

* [skip ci] Codec tests

* [skip ci] Complete Collection handler tests

* [skip ci] container tests

* [skip ci] Convert handlers tests

* [skip ci] DateHandler tests

* [skip ci] Dual Audio matching tests

* [skip ci] episode code tests

* [skip ci] Extended handler tests

* [skip ci] group handler tests

* [skip ci] some broken stuff right now

* [skip ci] more ptn

* [skip ci] PTN now in a separate NuGet package, rebased this on the Redis changes - I need them.

* [skip ci] Wire up PTN port. Tired - will test tomorrow

* [skip ci] Needs a lot of work - too many titles being missed now

* cleaner. done?

* Handle the date in the imdb search

- add an integer function to confirm it's a valid integer
- use the input date as a range of ±1 year

* [skip ci] Start of collector service for RD

[skip ci] WIP

Implemented the metadata saga, along with channels to process up to a maximum of 100 infohashes at a time.
The saga will retry each infohash by requeuing it up to three times before marking it as complete for that infohash - meaning no data will be updated in the DB for that torrent.

[skip ci] Ready to test with queue publishing

Will provision a fanout exchange if it doesn't exist, then create and bind a queue to it. Listens to the queue with a prefetch count of 50 (see the sketch after this commit entry).
Still needs the PTN rewrite brought in to parse the filename response from Real-Debrid and to extract season and episode numbers if the file is a TV show.

[skip ci] Add Debrid Collector Build Job

Debrid Collector ready for testing

New consumer, new collector; producer now has metadata lookup and anti-porn measures

[skip ci] WIP - moving from wolverine to MassTransit.

Not happy that Wolverine cannot effectively control saga concurrency, which we really need.

[skip ci] Producer and new Consumer moved to MassTransit

Just the debrid collector to go now, then to write the optional qbit collector.

Collector now switched to mass transit too

hide porn titles in logs, clean up cache name in redis for imdb titles

[skip ci] Allow control of queues

[skip ci] Update deployment

Remove old consumer, fix deployment files, fix dockerfiles for shared project import

fix base deployment

* Add collector missing env var

* edits to kick off builds

* Add optional qbit deployment which qbit collector will use

* Qbit collector done

* reorder compose, and bring both qbit and qbitcollector into the compose, with 0 replicas as default

* Clean up compose file

* Ensure debrid collector errors if no debrid api key
2024-03-25 23:32:28 +00:00
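
The commit body above describes the collector's queue wiring - a fanout exchange provisioned on start-up, a queue bound to it, and a prefetch of 50 - which later notes in the same entry say was moved onto MassTransit. A minimal sketch of that wiring follows; the message, consumer, and endpoint names are assumptions for illustration, not the repo's actual types.

```C#
// Sketch of the collector's MassTransit/RabbitMQ wiring described above.
// Assumption: TorrentIngested, TorrentIngestedConsumer and the "ingested"
// endpoint name are illustrative, not the repository's real identifiers.
using System.Threading.Tasks;
using MassTransit;
using Microsoft.Extensions.DependencyInjection;

public record TorrentIngested(string InfoHash);

public class TorrentIngestedConsumer : IConsumer<TorrentIngested>
{
    public Task Consume(ConsumeContext<TorrentIngested> context)
    {
        // Look up file metadata for context.Message.InfoHash here
        // (Real-Debrid lookup, PTN filename parse, season/episode extraction).
        return Task.CompletedTask;
    }
}

public static class CollectorBusSetup
{
    public static IServiceCollection AddCollectorBus(this IServiceCollection services) =>
        services.AddMassTransit(x =>
        {
            x.AddConsumer<TorrentIngestedConsumer>();

            x.UsingRabbitMq((context, cfg) =>
            {
                cfg.Host("rabbitmq", "/", h =>
                {
                    h.Username("guest");
                    h.Password("guest");
                });

                cfg.ReceiveEndpoint("ingested", e =>
                {
                    // MassTransit declares a fanout exchange named after the
                    // endpoint and binds the queue to it if they do not exist.
                    e.PrefetchCount = 50;
                    e.ConfigureConsumer<TorrentIngestedConsumer>(context);
                });
            });
        });
}
```

The 100-infohash batching and the three-requeue retry described above would live in the saga and consumer logic rather than in this bus configuration.
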
iPromKnight
9c6c1ac249 Update compose versions 1.0.1, ready for tag push (#163) 2024-03-25 20:38:11 +00:00
iPromKnight
0ddfac57f7 Build on Tag Pushes. (#162)
* enable tag and pr builds

* Build on Tag Pushes
2024-03-25 20:27:37 +00:00
iPromKnight
9fbd750cd2 enable tag and pr builds (#161) 2024-03-25 20:24:14 +00:00
Knight Crawler
5fc2027cfa Option to manually trigger each workflow (#159)
Co-authored-by: purple_emily <purple_emily@protonmail.com>
2024-03-20 20:26:32 +00:00
purple_emily
2d39476c65 Push dev builds & ready to tag semver (#153) 2024-03-14 14:27:19 +00:00
iPromKnight
e7f987a0d7 Merge pull request #151 from Gabisonfire/feature/tissue-corn-sanitizer
Improve producer matching - Add tissue service
2024-03-12 10:31:18 +00:00
iPromKnight
79a6aa3cb0 Improve producer matching - Add tissue service
The tissue service will sanitize the existing database of ingested torrents by matching existing titles against the new banned-word lists. Now with added Kleenex.
2024-03-12 10:29:13 +00:00
iPromKnight
e24d81dd96 Merge pull request #149 from Gabisonfire/improve-consumer
Simplification of parsing in consumer
2024-03-11 11:00:23 +00:00
iPromKnight
aeb83c19f8 Simplification of parsing in consumer
Should speed up parsing massively, especially if imdbIds are found in MongoDB.
2024-03-11 10:56:04 +00:00
iPromKnight
e23ee974e2 Merge pull request #148 from Gabisonfire/hotfix/nyaa
Fix nyaa category
2024-03-11 09:03:22 +00:00
iPromKnight
5c310427b4 Fix nyaa category 2024-03-11 08:59:55 +00:00
iPromKnight
b3d9be0b7a Merge pull request #147 from Gabisonfire/force_build
accidentally skipped build on last pr
2024-03-10 23:37:28 +00:00
iPromKnight
dda81ec5bf accidentally skipped build on last pr
tired..
2024-03-10 23:37:16 +00:00
iPromKnight
8eae288f10 Merge pull request #146 from Gabisonfire/hotfix/default_season_1
[skip ci] Final hotfix
2024-03-10 23:34:41 +00:00
iPromKnight
75ac89489e [skip ci] Final hotfix 2024-03-10 23:34:35 +00:00
iPromKnight
fa27b0cda9 Merge pull request #145 from Gabisonfire/hotfix/series_consumer
Fix series parsing
2024-03-10 22:28:00 +00:00
iPromKnight
500dd0d725 patch type 2024-03-10 22:28:06 +00:00
iPromKnight
6f4bc10f5a Fix series parsing 2024-03-10 21:38:55 +00:00
iPromKnight
1b3c190ed1 Merge pull request #144 from Gabisonfire/reduce_cpu_cycles
reduce cpu cycles in parsing in producer
2024-03-10 15:14:37 +00:00
iPromKnight
02150482df reduce cpu cycles in parsing in producer 2024-03-10 15:14:17 +00:00
iPromKnight
f18cd5b1ac Merge pull request #143 from Gabisonfire/extra_terms
Few extra terms getting through
2024-03-10 14:54:58 +00:00
iPromKnight
2e774058ff Few extra terms getting through 2024-03-10 14:54:25 +00:00
iPromKnight
4e84d7c9c3 Merge pull request #142 from Gabisonfire/feature/dmm-improvements
remove log line of adult content
2024-03-10 13:55:20 +00:00
iPromKnight
ad04d323b4 remove log line of adult content 2024-03-10 13:54:35 +00:00
iPromKnight
7d0b779bc8 Merge pull request #129 from Gabisonfire/feature/dmm-improvements
Improvements for DMM
2024-03-10 13:52:53 +00:00
iPromKnight
e2b45e799d [skip ci] Remove Debug logged adult terms found 2024-03-10 13:49:51 +00:00
iPromKnight
6c03f79933 Complete 2024-03-10 13:48:27 +00:00
iPromKnight
c8a1ebd8ae Bump the large-file limit to 2500 kB because of the JAV list.
Doesn't make sense to enable LFS for this file.
2024-03-10 13:48:14 +00:00
iPromKnight
320fccc8e8 [skip ci] More work on parsing - seasons still to fix, and use banned words 2024-03-10 12:48:19 +00:00
iPromKnight
51246ed352 Ignore producer data dir from codespell hook 2024-03-10 12:48:19 +00:00
iPromKnight
8d82a17876 re-disable services other than dmm while developing
re-enable

disable again - will squash, don't worry

enable again

disable again
2024-03-10 12:48:19 +00:00
iPromKnight
f719520b3b [skip ci] Ignore all run profiles to prevent PAT leaking
re-enable these; testing that only the producer builds
2024-03-10 12:48:19 +00:00
iPromKnight
bacb50e060 [skip ci] remove extra package no longer in use 2024-03-10 12:48:19 +00:00
iPromKnight
6600fceb1a WIP: Blacklisting DMM porn
Create adult text classifier ML model

WIP - starting to write PTN in C#

More work on season, show and movie parsing

Remove ML project
2024-03-10 12:48:16 +00:00
purple_emily
5aba05f2b4 Merge pull request #141 from Gabisonfire/generic-fixes
Typo at the end of the staging environment
2024-03-10 12:45:58 +00:00
purple_emily
601dbdf64f Typo at the end of the staging environment 2024-03-10 12:27:21 +00:00
432 changed files with 261949 additions and 44425 deletions


@@ -6,12 +6,16 @@ on:
CONTEXT:
required: true
type: string
DOCKERFILE:
required: true
type: string
IMAGE_NAME:
required: true
type: string
env:
CONTEXT: ${{ inputs.CONTEXT }}
DOCKERFILE: ${{ inputs.DOCKERFILE }}
IMAGE_NAME: ${{ inputs.IMAGE_NAME }}
PLATFORMS: linux/amd64,linux/arm64
@@ -21,11 +25,13 @@ jobs:
steps:
- name: Setting variables
run: |
echo "CONTEXT=${{ env.CONTEXT }}
echo "IMAGE_NAME=${{ env.IMAGE_NAME }}
echo "CONTEXT=${{ env.CONTEXT }}"
echo "DOCKERFILE=${{ env.DOCKERFILE }}"
echo "IMAGE_NAME=${{ env.IMAGE_NAME }}"
echo "PLATFORMS=${{ env.PLATFORMS }}"
outputs:
CONTEXT: ${{ env.CONTEXT }}
DOCKERFILE: ${{ env.DOCKERFILE }}
IMAGE_NAME: ${{ env.IMAGE_NAME }}
PLATFORMS: ${{ env.PLATFORMS }}
@@ -70,14 +76,17 @@ jobs:
flavor: |
latest=auto
tags: |
type=edge,branch=master,commit=${{ github.sha }}
type=ref,event=tag
type=ref,event=pr
type=sha,commit=${{ github.sha }}
type=semver,pattern={{version}}
type=raw,value=latest,enable={{is_default_branch}}
- name: Build image for scanning
uses: docker/build-push-action@v5
with:
context: ${{ needs.set-vars.outputs.CONTEXT }}
file: ${{ needs.set-vars.outputs.DOCKERFILE }}
push: true
provenance: false
tags: localhost:5000/dockle-examine-image:test
@@ -130,10 +139,11 @@ jobs:
sarif_file: 'trivy-results-os.sarif'
- name: Push Service Image to repo
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/master'
# if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/master'
uses: docker/build-push-action@v5
with:
context: ${{ needs.set-vars.outputs.CONTEXT }}
file: ${{ needs.set-vars.outputs.DOCKERFILE }}
push: true
provenance: false
tags: ${{ steps.docker-metadata.outputs.tags }}


@@ -2,13 +2,17 @@ name: Build and Push Addon Service
on:
push:
tags:
- '**'
paths:
- 'src/node/addon/**'
- 'src/addon/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/node/addon/
CONTEXT: ./src/addon/
DOCKERFILE: ./src/addon/Dockerfile
IMAGE_NAME: knightcrawler-addon


@@ -2,13 +2,17 @@ name: Build and Push Consumer Service
on:
push:
tags:
- '**'
paths:
- 'src/node/consumer/**'
- 'src/torrent-consumer/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/node/consumer/
CONTEXT: ./src/
DOCKERFILE: ./src/torrent-consumer/Dockerfile
IMAGE_NAME: knightcrawler-consumer


@@ -0,0 +1,18 @@
name: Build and Push Debrid Collector Service
on:
push:
tags:
- '**'
paths:
- 'src/debrid-collector/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/
DOCKERFILE: ./src/debrid-collector/Dockerfile
IMAGE_NAME: knightcrawler-debrid-collector


@@ -1,86 +0,0 @@
name: Build documentation
# TODO: Only run on ./docs folder change
on:
push:
branches: ["master"]
paths:
- 'docs/**'
# Specify to run a workflow manually from the Actions tab on GitHub
workflow_dispatch:
permissions:
id-token: write
pages: write
env:
INSTANCE: Writerside/kc
ARTIFACT: webHelpKC2-all.zip
DOCS_FOLDER: ./docs
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Build Writerside docs using Docker
uses: JetBrains/writerside-github-action@v4
with:
instance: ${{ env.INSTANCE }}
artifact: ${{ env.ARTIFACT }}
location: ${{ env.DOCS_FOLDER }}
- name: Upload artifact
uses: actions/upload-artifact@v3
with:
name: docs
path: |
artifacts/${{ env.ARTIFACT }}
artifacts/report.json
retention-days: 7
test:
needs: build
runs-on: ubuntu-latest
steps:
- name: Download artifacts
uses: actions/download-artifact@v3
with:
name: docs
path: artifacts
- name: Test documentation
uses: JetBrains/writerside-checker-action@v1
with:
instance: ${{ env.INSTANCE }}
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
needs: [build, test]
runs-on: ubuntu-latest
steps:
- name: Download artifacts
uses: actions/download-artifact@v3
with:
name: docs
- name: Unzip artifact
run: unzip -O UTF-8 -qq '${{ env.ARTIFACT }}' -d dir
- name: Setup Pages
uses: actions/configure-pages@v4
- name: Package and upload Pages artifact
uses: actions/upload-pages-artifact@v3
with:
path: dir
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4


@@ -2,13 +2,17 @@ name: Build and Push Jackett Addon Service
on:
push:
tags:
- '**'
paths:
- 'src/node/addon-jackett/**'
- 'src/addon-jackett/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/node/addon-jackett/
CONTEXT: ./src/addon-jackett/
DOCKERFILE: ./src/addon-jackett/Dockerfile
IMAGE_NAME: knightcrawler-addon-jackett


@@ -2,8 +2,11 @@ name: Build and Push Metadata Service
on:
push:
tags:
- '**'
paths:
- 'src/metadata/**'
workflow_dispatch:
jobs:
process:
@@ -11,4 +14,5 @@ jobs:
secrets: inherit
with:
CONTEXT: ./src/metadata/
DOCKERFILE: ./src/metadata/Dockerfile
IMAGE_NAME: knightcrawler-metadata


@@ -2,8 +2,11 @@ name: Build and Push Migrator Service
on:
push:
tags:
- '**'
paths:
- 'src/migrator/**'
workflow_dispatch:
jobs:
process:
@@ -11,4 +14,5 @@ jobs:
secrets: inherit
with:
CONTEXT: ./src/migrator/
DOCKERFILE: ./src/migrator/Dockerfile
IMAGE_NAME: knightcrawler-migrator


@@ -2,13 +2,17 @@ name: Build and Push Producer Service
on:
push:
tags:
- '**'
paths:
- 'src/producer/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/producer/
CONTEXT: ./src/
DOCKERFILE: ./src/producer/src/Dockerfile
IMAGE_NAME: knightcrawler-producer


@@ -0,0 +1,18 @@
name: Build and Push Qbit Collector Service
on:
push:
tags:
- '**'
paths:
- 'src/qbit-collector/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/
DOCKERFILE: ./src/qbit-collector/Dockerfile
IMAGE_NAME: knightcrawler-qbit-collector

.github/workflows/build_tissue.yaml (new file, 18 lines)

@@ -0,0 +1,18 @@
name: Build and Push Tissue Service
on:
push:
tags:
- '**'
paths:
- 'src/tissue/**'
workflow_dispatch:
jobs:
process:
uses: ./.github/workflows/base_image_workflow.yaml
secrets: inherit
with:
CONTEXT: ./src/tissue/
DOCKERFILE: ./src/tissue/Dockerfile
IMAGE_NAME: knightcrawler-tissue

.gitignore (8 lines changed)

@@ -355,6 +355,9 @@ MigrationBackup/
# Fody - auto-generated XML schema
FodyWeavers.xsd
# Jetbrains ide's run profiles (Could contain sensative information)
**/.run/
# VS Code files for those working on multiple tools
.vscode/*
!.vscode/settings.json
@@ -392,8 +395,6 @@ dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
@@ -607,3 +608,6 @@ fabric.properties
# Caddy logs
!**/caddy/logs/.gitkeep
**/caddy/logs/**
# Mac directory indexes
.DS_Store


@@ -3,6 +3,7 @@ repos:
rev: v4.5.0
hooks:
- id: check-added-large-files
args: ['--maxkb=2500']
- id: check-json
- id: check-toml
- id: check-xml
@@ -15,5 +16,6 @@ repos:
rev: v2.2.6
hooks:
- id: codespell
exclude: ^src/node/consumer/test/
exclude: |
(?x)^(src/node/consumer/test/.*|src/producer/Data/.*|src/tissue/Data/.*)$
args: ["-L", "strem,chage"]


@@ -7,9 +7,6 @@
## Contents
> [!CAUTION]
> Until we reach `v1.0.0`, please consider releases as alpha.
> [!IMPORTANT]
> The latest change renames the project and requires a [small migration](#selfhostio-to-knightcrawler-migration).
- [Contents](#contents)


@@ -8,48 +8,32 @@ POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=knightcrawler
# MongoDB
MONGODB_HOST=mongodb
MONGODB_PORT=27017
MONGODB_DB=knightcrawler
MONGODB_USER=mongo
MONGODB_PASSWORD=mongo
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_EXTRA=abortConnect=false,allowAdmin=true
# RabbitMQ
RABBITMQ_HOST=rabbitmq
RABBITMQ_USER=guest
RABBITMQ_PASSWORD=guest
RABBITMQ_QUEUE_NAME=ingested
RABBITMQ_CONSUMER_QUEUE_NAME=ingested
RABBITMQ_DURABLE=true
RABBITMQ_MAX_QUEUE_SIZE=0
RABBITMQ_MAX_PUBLISH_BATCH_SIZE=500
RABBITMQ_PUBLISH_INTERVAL_IN_SECONDS=10
# Metadata
## Only used if DATA_ONCE is set to false. If true, the schedule is ignored
METADATA_DOWNLOAD_IMDB_DATA_SCHEDULE="0 0 1 * *"
## If true, the metadata will be downloaded once and then the schedule will be ignored
METADATA_DOWNLOAD_IMDB_DATA_ONCE=true
## Controls the amount of records processed in memory at any given time during import, higher values will consume more memory
METADATA_INSERT_BATCH_SIZE=25000
METADATA_INSERT_BATCH_SIZE=50000
# Collectors
COLLECTOR_QBIT_ENABLED=false
COLLECTOR_DEBRID_ENABLED=true
COLLECTOR_REAL_DEBRID_API_KEY=
QBIT_HOST=http://qbittorrent:8080
# Addon
DEBUG_MODE=false
# Consumer
JOB_CONCURRENCY=5
JOBS_ENABLED=true
## can be debug for extra verbosity (a lot more verbosity - useful for development)
LOG_LEVEL=info
MAX_CONNECTIONS_PER_TORRENT=10
MAX_CONNECTIONS_OVERALL=100
TORRENT_TIMEOUT=30000
UDP_TRACKERS_ENABLED=true
CONSUMER_REPLICAS=3
## Fix for #66 - toggle on for development
AUTO_CREATE_AND_APPLY_MIGRATIONS=false
## Allows control of the threshold for matching titles to the IMDB dataset. The closer to 0, the more strict the matching.
TITLE_MATCH_THRESHOLD=0.25
# Producer
GITHUB_PAT=
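
For reference, the `REDIS_EXTRA` value above (`abortConnect=false,allowAdmin=true`) is StackExchange.Redis option syntax appended after `host:port` on the .NET side, whereas the Node services append it to a `redis://` URL as a query string (see the addon and jackettio diffs further down). A rough sketch of how a .NET service might assemble its connection string from these variables - illustrative only, not the repo's actual code:

```C#
// Illustrative only: builds a StackExchange.Redis connection string from the
// REDIS_* variables defined in this .env file.
using System;
using StackExchange.Redis;

var host  = Environment.GetEnvironmentVariable("REDIS_HOST")  ?? "redis";
var port  = Environment.GetEnvironmentVariable("REDIS_PORT")  ?? "6379";
var extra = Environment.GetEnvironmentVariable("REDIS_EXTRA") ?? "";

// e.g. "redis:6379,abortConnect=false,allowAdmin=true"
var connectionString = string.IsNullOrWhiteSpace(extra)
    ? $"{host}:{port}"
    : $"{host}:{port},{extra}";

using var redis = ConnectionMultiplexer.Connect(connectionString);
var db = redis.GetDatabase();
db.StringSet("healthcheck", "ok");
```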


@@ -0,0 +1,58 @@
[Application]
FileLogger\Age=1
FileLogger\AgeType=1
FileLogger\Backup=true
FileLogger\DeleteOld=true
FileLogger\Enabled=true
FileLogger\MaxSizeBytes=66560
FileLogger\Path=/config/qBittorrent/logs
[AutoRun]
enabled=false
program=
[BitTorrent]
Session\DefaultSavePath=/downloads/
Session\ExcludedFileNames=
Session\MaxActiveDownloads=10
Session\MaxActiveTorrents=50
Session\MaxActiveUploads=50
Session\MaxConnections=2000
Session\Port=6881
Session\QueueingSystemEnabled=true
Session\TempPath=/downloads/incomplete/
Session\TorrentStopCondition=MetadataReceived
[Core]
AutoDeleteAddedTorrentFile=Never
[LegalNotice]
Accepted=true
[Meta]
MigrationVersion=6
[Network]
PortForwardingEnabled=true
Proxy\HostnameLookupEnabled=false
Proxy\Profiles\BitTorrent=true
Proxy\Profiles\Misc=true
Proxy\Profiles\RSS=true
[Preferences]
Connection\PortRangeMin=6881
Connection\ResolvePeerCountries=false
Connection\UPnP=false
Downloads\SavePath=/downloads/
Downloads\TempPath=/downloads/incomplete/
General\Locale=en
MailNotification\req_auth=true
WebUI\Address=*
WebUI\AuthSubnetWhitelist=0.0.0.0/0
WebUI\AuthSubnetWhitelistEnabled=true
WebUI\LocalHostAuth=false
WebUI\ServerDomains=*
[RSS]
AutoDownloader\DownloadRepacks=true
AutoDownloader\SmartEpisodeFilter=s(\\d+)e(\\d+), (\\d+)x(\\d+), "(\\d{4}[.\\-]\\d{1,2}[.\\-]\\d{1,2})", "(\\d{1,2}[.\\-]\\d{1,2}[.\\-]\\d{4})"


@@ -0,0 +1,89 @@
x-basehealth: &base-health
interval: 10s
timeout: 10s
retries: 3
start_period: 10s
x-rabbithealth: &rabbitmq-health
test: rabbitmq-diagnostics -q ping
<<: *base-health
x-redishealth: &redis-health
test: redis-cli ping
<<: *base-health
x-postgreshealth: &postgresdb-health
test: pg_isready
<<: *base-health
x-qbit: &qbit-health
test: "curl --fail http://localhost:8080"
<<: *base-health
services:
postgres:
image: postgres:latest
environment:
PGUSER: postgres # needed for healthcheck.
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, change the username and password in the .env file.
# # If you want to enhance your security even more, create a new user for the database with a strong password.
# ports:
# - "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
healthcheck: *postgresdb-health
restart: unless-stopped
env_file: ../.env
networks:
- knightcrawler-network
redis:
image: redis/redis-stack:latest
# # If you need redis to be accessible from outside, please open the below port.
# ports:
# - "6379:6379"
volumes:
- redis:/data
restart: unless-stopped
healthcheck: *redis-health
env_file: ../.env
networks:
- knightcrawler-network
rabbitmq:
image: rabbitmq:3-management
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, look at the documentation for rabbit on how to secure the service.
# ports:
# - "5672:5672"
# - "15672:15672"
# - "15692:15692"
volumes:
- rabbitmq:/var/lib/rabbitmq
restart: unless-stopped
healthcheck: *rabbitmq-health
env_file: ../.env
networks:
- knightcrawler-network
## QBitTorrent is a torrent client that can be used to download torrents. In this case its used to download metadata.
## The QBit collector requires this.
qbittorrent:
image: lscr.io/linuxserver/qbittorrent:latest
environment:
- PUID=1000
- PGID=1000
- WEBUI_PORT=8080
- TORRENTING_PORT=6881
ports:
- 6881:6881
- 6881:6881/udp
env_file: ../.env
networks:
- knightcrawler-network
restart: unless-stopped
healthcheck: *qbit-health
volumes:
- ./config/qbit/qbittorrent.conf:/config/qBittorrent/qBittorrent.conf


@@ -0,0 +1,71 @@
x-apps: &knightcrawler-app
labels:
logging: "promtail"
env_file: ../.env
networks:
- knightcrawler-network
x-depends: &knightcrawler-app-depends
depends_on:
redis:
condition: service_healthy
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
migrator:
condition: service_completed_successfully
metadata:
condition: service_completed_successfully
services:
metadata:
image: gabisonfire/knightcrawler-metadata:2.0.5
env_file: ../.env
networks:
- knightcrawler-network
restart: no
depends_on:
migrator:
condition: service_completed_successfully
migrator:
image: gabisonfire/knightcrawler-migrator:2.0.5
env_file: ../.env
networks:
- knightcrawler-network
restart: no
depends_on:
postgres:
condition: service_healthy
addon:
image: gabisonfire/knightcrawler-addon:2.0.5
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
hostname: knightcrawler-addon
ports:
- "7000:7000"
consumer:
image: gabisonfire/knightcrawler-consumer:2.0.5
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
debridcollector:
image: gabisonfire/knightcrawler-debrid-collector:2.0.5
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
producer:
image: gabisonfire/knightcrawler-producer:2.0.5
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
qbitcollector:
image: gabisonfire/knightcrawler-qbit-collector:2.0.5
<<: [*knightcrawler-app, *knightcrawler-app-depends]
restart: unless-stopped
depends_on:
qbittorrent:
condition: service_healthy


@@ -0,0 +1,4 @@
networks:
knightcrawler-network:
driver: bridge
name: knightcrawler-network


@@ -0,0 +1,4 @@
volumes:
postgres:
redis:
rabbitmq:


@@ -0,0 +1,7 @@
services:
qbittorrent:
deploy:
replicas: 0
qbitcollector:
deploy:
replicas: 0


@@ -0,0 +1,7 @@
version: "3.9"
name: "knightcrawler"
include:
- components/network.yaml
- components/volumes.yaml
- components/infrastructure.yaml
- components/knightcrawler.yaml


@@ -1,139 +0,0 @@
name: knightcrawler
x-restart: &restart-policy "unless-stopped"
x-basehealth: &base-health
interval: 10s
timeout: 10s
retries: 3
start_period: 10s
x-rabbithealth: &rabbitmq-health
test: rabbitmq-diagnostics -q ping
<<: *base-health
x-mongohealth: &mongodb-health
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
<<: *base-health
x-postgreshealth: &postgresdb-health
test: pg_isready
<<: *base-health
x-apps: &knightcrawler-app
depends_on:
mongodb:
condition: service_healthy
postgres:
condition: service_healthy
rabbitmq:
condition: service_healthy
restart: *restart-policy
services:
postgres:
image: postgres:latest
env_file: .env
environment:
PGUSER: postgres # needed for healthcheck.
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, change the username and password in the .env file.
# # If you want to enhance your security even more, create a new user for the database with a strong password.
# ports:
# - "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
healthcheck: *postgresdb-health
restart: *restart-policy
networks:
- knightcrawler-network
mongodb:
image: mongo:latest
env_file: .env
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGODB_USER:?Variable MONGODB_USER not set}
MONGO_INITDB_ROOT_PASSWORD: ${MONGODB_PASSWORD:?Variable MONGODB_PASSWORD not set}
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, change the username and password in the .env file.
# ports:
# - "27017:27017"
volumes:
- mongo:/data/db
restart: *restart-policy
healthcheck: *mongodb-health
networks:
- knightcrawler-network
rabbitmq:
image: rabbitmq:3-management
# # If you need the database to be accessible from outside, please open the below port.
# # Furthermore, please, please, please, look at the documentation for rabbit on how to secure the service.
# ports:
# - "5672:5672"
# - "15672:15672"
# - "15692:15692"
volumes:
- rabbitmq:/var/lib/rabbitmq
hostname: ${RABBITMQ_HOST}
restart: *restart-policy
healthcheck: *rabbitmq-health
networks:
- knightcrawler-network
producer:
image: gabisonfire/knightcrawler-producer:latest
labels:
logging: "promtail"
env_file: .env
<<: *knightcrawler-app
networks:
- knightcrawler-network
consumer:
image: gabisonfire/knightcrawler-consumer:latest
env_file: .env
labels:
logging: "promtail"
deploy:
replicas: ${CONSUMER_REPLICAS}
<<: *knightcrawler-app
networks:
- knightcrawler-network
metadata:
image: gabisonfire/knightcrawler-metadata:latest
env_file: .env
labels:
logging: "promtail"
restart: no
networks:
- knightcrawler-network
addon:
<<: *knightcrawler-app
env_file: .env
hostname: knightcrawler-addon
image: gabisonfire/knightcrawler-addon:latest
labels:
logging: "promtail"
networks:
- knightcrawler-network
# - caddy
ports:
- "7000:7000"
networks:
knightcrawler-network:
driver: bridge
name: knightcrawler-network
# caddy:
# name: caddy
# external: true
volumes:
postgres:
mongo:
rabbitmq:


@@ -4,7 +4,7 @@
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## Once you have confirmed Caddy works you should comment out
## the below line:
acme_ca https://acme-staging-v02.api.letsencrypt.org/director
acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
(security-headers) {


@@ -1,14 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<buildprofiles xsi:noNamespaceSchemaLocation="https://resources.jetbrains.com/writerside/1.0/build-profiles.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<variables>
<header-logo>knight-crawler-logo.png</header-logo>
</variables>
<build-profile instance="kc">
<variables>
<noindex-content>true</noindex-content>
</variables>
</build-profile>
</buildprofiles>

Binary image file removed (not shown; previously 568 KiB).


@@ -1,13 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE instance-profile
SYSTEM "https://resources.jetbrains.com/writerside/1.0/product-profile.dtd">
<instance-profile id="kc" name="Knight Crawler"
start-page="Overview.md">
<toc-element topic="Overview.md"/>
<toc-element topic="Getting-started.md">
</toc-element>
<toc-element topic="External-access.md"/>
<toc-element topic="Supported-Debrid-services.md"/>
</instance-profile>


@@ -1,57 +0,0 @@
# External access
This guide outlines how to use Knight Crawler on devices like your TV. While it's currently limited to the device of
installation, we can change that. With some extra effort, we'll show you how to make it accessible on other devices.
This limitation is set by Stremio, as [explained here](https://github.com/Stremio/stremio-features/issues/687#issuecomment-1890546094).
## What to keep in mind
Before we make Knight Crawler available outside your home network, we've got to talk about safety. No software is
perfect, including ours. Knight Crawler is built on lots of different parts, some made by other people. So, if we keep
it just for your home network, it's a bit safer. But if you want to use it over the internet, just know that keeping
your devices secure is up to you. We won't be responsible for any problems or lost data if you use Knight Crawler that way.
## Initial setup
To enable external access for Knight Crawler, whether it's within your home network or over the internet, you'll
need to follow these initial setup steps:
- Set up Caddy, a powerful and easy-to-use web server.
- Disable the open port in the Knight Crawler <path>docker-compose.yaml</path> file.
### Caddy
A basic Caddy configuration is included with Knight Crawler in the deployment directory.
<path>deployment/docker/optional-services/caddy</path>
```Generic
deployment/
└── docker/
└── optional-services/
└── caddy/
├── config/
│ ├── snippets/
│ │ └── cloudflare-replace-X-Forwarded-For
│ └── Caddyfile
├── logs/
└── docker-compose.yaml
```
ports:
- "8080:8080"
By disabling the default port, Knight Crawler will only be accessible internally within your network, ensuring added security.
## Home network access
## Internet access
### Through a VPN
### On the public web
## Troubleshooting?
## Additional Resources?


@@ -1,192 +0,0 @@
# Getting started
Knight Crawler is provided as an all-in-one solution. This means we include all the necessary software you need to get started
out of the box.
## Before you start
Make sure that you have:
- A place to host Knight Crawler
- [Docker](https://docs.docker.com/get-docker/) and [Compose](https://docs.docker.com/compose/install/) installed
- A [GitHub](https://github.com/) account _(optional)_
## Download the files
Installing Knight Crawler is as simple as downloading a copy of the [deployment directory](https://github.com/Gabisonfire/knightcrawler/tree/master/deployment/docker).
A basic installation requires only two files:
- <path>deployment/docker/.env.example</path>
- <path>deployment/docker/docker-compose.yaml</path>.
For this guide I will be placing them in a directory on my home drive <path>~/knightcrawler</path>.
Rename the <path>.env.example</path> file to be <path>.env</path>
```
~/
└── knightcrawler/
├── .env
└── docker-compose.yaml
```
## Initial configuration
Below are a few recommended configuration changes.
Open the <path>.env</path> file in your favourite editor.
> If you are using an external database, configure it in the <path>.env</path> file. Don't forget to disable the ones
> included in the <path>docker-compose.yaml</path>.
### Database credentials
It is strongly recommended that you change the credentials for the databases included with Knight Crawler. This is best done
before running Knight Crawler for the first time. It is much harder to change the passwords once the services have been started
for the first time.
```Bash
POSTGRES_PASSWORD=postgres
...
MONGODB_PASSWORD=mongo
...
RABBITMQ_PASSWORD=guest
```
Here's a few options on generating a secure password:
```Bash
# Linux
tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1
# Or you could use openssl
openssl rand -hex 32
```
```Python
# Python
import secrets
print(secrets.token_hex(32))
```
### Your time zone
```Bash
TZ=London/Europe
```
A list of time zones can be found on [Wikipedia](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
### Consumers
```Bash
JOB_CONCURRENCY=5
...
MAX_CONNECTIONS_PER_TORRENT=10
...
CONSUMER_REPLICAS=3
```
These are totally subjective to your machine and network capacity. The above default is pretty minimal and will work on
most machines.
`JOB_CONCURRENCY` is how many films and tv shows the consumers should process at once. As this affects every consumer
this will likely cause exponential
strain on your system. It's probably best to leave this at 5, but you can try experimenting with it if you wish.
`MAX_CONNECTIONS_PER_TORRENT` is how many peers the consumer will attempt to connect to when it is trying to collect
metadata.
Increasing this value can speed up processing, but you will eventually reach a point where more connections are being
made than
your router can handle. This will then cause a cascading fail where your internet stops working. If you are going to
increase this value
then try increasing it by 10 at a time.
> Increasing this value increases the max connections for every parallel job, for every consumer. For example
> with the default values above this means that Knight Crawler will be on average making `(5 x 3) x 10 = 150`
> connections at any one time.
>
{style="warning"}
`CONSUMER_REPLICAS` is how many consumers should be initially started. You can increase or decrease the number of consumers whilst the
service is running by running the command `docker compose up -d --scale consumer=<number>`.
### GitHub personal access token
This step is optional but strongly recommended. [Debrid Media Manager](https://debridmediamanager.com/start) is a media library manager
for Debrid services. When a user of this service chooses to export/share their library publicly it is saved to a public GitHub repository.
This is, essentially, a repository containing a vast amount of ready to go films and tv shows. Knight Crawler comes with the ability to
read these exported lists, but it requires a GitHub account to make it work.
Knight Crawler needs a personal access token with read-only access to public repositories. This means we can not access any private
repositories you have.
1. Navigate to GitHub settings ([GitHub token settings](https://github.com/settings/tokens?type=beta)):
- Navigate to `GitHub settings`.
- Click on `Developer Settings`.
- Select `Personal access tokens`.
- Choose `Fine-grained tokens`.
2. Press `Generate new token`.
3. Fill out the form with the following information:
```Generic
Token name:
KnightCrawler
Expiration:
90 days
Description:
<blank>
Repository access:
(checked) Public Repositories (read-only)
```
4. Click `Generate token`.
5. Take the new token and add it to the bottom of the <path>.env</path> file:
```Bash
# Producer
GITHUB_PAT=<YOUR TOKEN HERE>
```
## Start Knight Crawler
To start Knight Crawler use the following command:
```Bash
docker compose up -d
```
Then we can follow the logs to watch it start:
```Bash
docker compose logs -f --since 1m
```
> Knight Crawler will only be accessible on the machine you run it on, to make it accessible from other machines navigate to [External access](External-access.md).
>
{style="note"}
To stop following the logs press <shortcut>Ctrl+C</shortcut> at any time.
The Knight Crawler configuration page should now be accessible in your web browser at [http://localhost:7000](http://localhost:7000)
## Start more consumers
If you wish to speed up the processing of the films and tv shows that Knight Crawler finds, then you'll likely want to
increase the number of consumers.
The below command can be used to both increase or decrease the number of running consumers. Gradually increase the number
until you encounter any issues and then decrease until stable.
```Bash
docker compose up -d --scale consumer=<number>
```
## Stop Knight Crawler
Knight Crawler can be stopped with the following command:
```Bash
docker compose down
```


@@ -1,30 +0,0 @@
# Overview
<img alt="The image shows a Knight in silvery armour looking forwards." src="knight-crawler-logo.png" title="Knight Crawler logo" width="100"/>
Knight Crawler is a self-hosted [Stremio](https://www.stremio.com/) addon for streaming torrents via
a [Debrid](Supported-Debrid-services.md "Click for a list of Debrid services we support") service.
We are active on [Discord](https://discord.gg/8fQdxay9z2) for both support and casual conversation.
> Knight Crawler is currently alpha software.
>
> Users are responsible for ensuring their data is backed up regularly.
>
> Please read the changelogs before updating to the latest version.
>
{style="warning"}
## What does Knight Crawler do?
Knight Crawler is an addon for [Stremio](https://www.stremio.com/). It began as a fork of the very popular
[Torrentio](https://github.com/TheBeastLT/torrentio-scraper) addon. Knight crawler essentially does the following:
1. It searches the internet for available films and tv shows.
2. It collects as much information as it can about each film and tv show it finds.
3. It then stores this information to a database for easy access.
When you choose on a film or tv show to watch on Stremio, a request will be sent to your installation of Knight Crawler.
Knight Crawler will query the database and return a list of all the copies it has stored in the database as Debrid
links.
This enables playback to begin immediately for your chosen media.


@@ -1,3 +0,0 @@
# Supported Debrid services
Start typing here...


@@ -1,8 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE ihp SYSTEM "https://resources.jetbrains.com/writerside/1.0/ihp.dtd">
<ihp version="2.0">
<topics dir="topics" web-path="topics"/>
<images dir="images" web-path="knightcrawler"/>
<instance src="kc.tree"/>
</ihp>


@@ -14,7 +14,6 @@
"axios": "^1.6.1",
"bottleneck": "^2.19.5",
"cache-manager": "^3.4.4",
"cache-manager-mongodb": "^0.3.0",
"cors": "^2.8.5",
"debrid-link-api": "^1.0.1",
"express": "^4.18.2",
@@ -33,7 +32,11 @@
"user-agents": "^1.0.1444",
"video-name-parser": "^1.4.6",
"xml-js": "^1.6.11",
"xml2js": "^0.6.2"
"xml2js": "^0.6.2",
"@redis/client": "^1.5.14",
"@redis/json": "^1.0.6",
"@redis/search": "^1.1.6",
"cache-manager-redis-store": "^2.0.0"
},
"devDependencies": {
"@types/node": "^20.11.6",


@@ -1,7 +1,7 @@
import cacheManager from 'cache-manager';
import mangodbStore from 'cache-manager-mongodb';
import { isStaticUrl } from '../moch/static.js';
import {cacheConfig} from "./settings.js";
import redisStore from 'cache-manager-redis-store';
const STREAM_KEY_PREFIX = `${cacheConfig.GLOBAL_KEY_PREFIX}|stream`;
const IMDB_KEY_PREFIX = `${cacheConfig.GLOBAL_KEY_PREFIX}|imdb`;
@@ -12,28 +12,20 @@ const memoryCache = initiateMemoryCache();
const remoteCache = initiateRemoteCache();
function initiateRemoteCache() {
if (cacheConfig.NO_CACHE) {
return null;
} else if (cacheConfig.MONGODB_URI) {
return cacheManager.caching({
store: mangodbStore,
uri: cacheConfig.MONGODB_URI,
options: {
collection: 'jackettio_addon_collection',
socketTimeoutMS: 120000,
useNewUrlParser: true,
useUnifiedTopology: false,
ttl: cacheConfig.STREAM_EMPTY_TTL
},
ttl: cacheConfig.STREAM_EMPTY_TTL,
ignoreCacheErrors: true
});
} else {
return cacheManager.caching({
store: 'memory',
ttl: cacheConfig.STREAM_EMPTY_TTL
});
}
if (cacheConfig.NO_CACHE) {
return null;
} else if (cacheConfig.REDIS_CONNECTION_STRING) {
return cacheManager.caching({
store: redisStore,
ttl: cacheConfig.STREAM_EMPTY_TTL,
url: cacheConfig.REDIS_CONNECTION_STRING
});
} else {
return cacheManager.caching({
store: 'memory',
ttl: cacheConfig.STREAM_EMPTY_TTL
});
}
}
function initiateMemoryCache() {


@@ -25,7 +25,9 @@ export const cinemetaConfig = {
}
export const cacheConfig = {
MONGODB_URI: process.env.MONGODB_URI,
REDIS_HOST: process.env.REDIS_HOST || 'redis',
REDIS_PORT: process.env.REDIS_PORT || '6379',
REDIS_EXTRA: process.env.REDIS_EXTRA || '',
NO_CACHE: parseBool(process.env.NO_CACHE, false),
IMDB_TTL: parseInt(process.env.IMDB_TTL || 60 * 60 * 4), // 4 Hours
STREAM_TTL: parseInt(process.env.STREAM_TTL || 60 * 60 * 4), // 1 Hour
@@ -40,3 +42,5 @@ export const cacheConfig = {
STALE_ERROR_AGE: parseInt(process.env.STALE_ERROR_AGE) || 7 * 24 * 60 * 60, // 7 days
GLOBAL_KEY_PREFIX: process.env.GLOBAL_KEY_PREFIX || 'jackettio-addon',
}
cacheConfig.REDIS_CONNECTION_STRING = 'redis://' + cacheConfig.REDIS_HOST + ':' + cacheConfig.REDIS_PORT + '?' + cacheConfig.REDIS_EXTRA;

File diff suppressed because it is too large.


@@ -14,7 +14,6 @@
"axios": "^1.6.1",
"bottleneck": "^2.19.5",
"cache-manager": "^3.4.4",
"cache-manager-mongodb": "^0.3.0",
"cors": "^2.8.5",
"debrid-link-api": "^1.0.1",
"express-rate-limit": "^6.7.0",
@@ -35,7 +34,11 @@
"stremio-addon-sdk": "^1.6.10",
"swagger-stats": "^0.99.7",
"ua-parser-js": "^1.0.36",
"user-agents": "^1.0.1444"
"user-agents": "^1.0.1444",
"@redis/client": "^1.5.14",
"@redis/json": "^1.0.6",
"@redis/search": "^1.1.6",
"cache-manager-redis-store": "^2.0.0"
},
"devDependencies": {
"@types/node": "^20.11.6",


@@ -1,7 +1,7 @@
import cacheManager from 'cache-manager';
import mangodbStore from 'cache-manager-mongodb';
import { cacheConfig } from './config.js';
import { isStaticUrl } from '../moch/static.js';
import redisStore from "cache-manager-redis-store";
const GLOBAL_KEY_PREFIX = 'knightcrawler-addon';
const STREAM_KEY_PREFIX = `${GLOBAL_KEY_PREFIX}|stream`;
@@ -21,19 +21,11 @@ const remoteCache = initiateRemoteCache();
function initiateRemoteCache() {
if (cacheConfig.NO_CACHE) {
return null;
} else if (cacheConfig.MONGO_URI) {
} else if (cacheConfig.REDIS_CONNECTION_STRING) {
return cacheManager.caching({
store: mangodbStore,
uri: cacheConfig.MONGO_URI,
options: {
collection: 'knightcrawler_addon_collection',
socketTimeoutMS: 120000,
useNewUrlParser: true,
useUnifiedTopology: false,
ttl: STREAM_EMPTY_TTL
},
store: redisStore,
ttl: STREAM_EMPTY_TTL,
ignoreCacheErrors: true
url: cacheConfig.REDIS_CONNECTION_STRING
});
} else {
return cacheManager.caching({


@@ -1,17 +1,11 @@
export const cacheConfig = {
MONGODB_HOST: process.env.MONGODB_HOST || 'mongodb',
MONGODB_PORT: process.env.MONGODB_PORT || '27017',
MONGODB_DB: process.env.MONGODB_DB || 'knightcrawler',
MONGODB_USER: process.env.MONGODB_USER || 'mongo',
MONGODB_PASSWORD: process.env.MONGODB_PASSWORD || 'mongo',
COLLECTION_NAME: process.env.MONGODB_ADDON_COLLECTION || 'knightcrawler_addon_collection',
REDIS_HOST: process.env.REDIS_HOST || 'redis',
REDIS_PORT: process.env.REDIS_PORT || '6379',
REDIS_EXTRA: process.env.REDIS_EXTRA || '',
NO_CACHE: parseBool(process.env.NO_CACHE, false),
}
// Combine the environment variables into a connection string
// The combined string will look something like:
// 'mongodb://mongo:mongo@localhost:27017/knightcrawler?authSource=admin'
cacheConfig.MONGO_URI = 'mongodb://' + cacheConfig.MONGODB_USER + ':' + cacheConfig.MONGODB_PASSWORD + '@' + cacheConfig.MONGODB_HOST + ':' + cacheConfig.MONGODB_PORT + '/' + cacheConfig.MONGODB_DB + '?authSource=admin';
cacheConfig.REDIS_CONNECTION_STRING = 'redis://' + cacheConfig.REDIS_HOST + ':' + cacheConfig.REDIS_PORT + '?' + cacheConfig.REDIS_EXTRA;
export const databaseConfig = {
POSTGRES_HOST: process.env.POSTGRES_HOST || 'postgres',

Some files were not shown because too many files have changed in this diff.