18 Commits

Author SHA1 Message Date
purple_emily
10d6544673 Re-enable master only 2024-03-10 13:18:02 +00:00
purple_emily
2134f2f51d Missing forward slash 2024-03-10 13:17:08 +00:00
purple_emily
64149c55a8 WIP: External access 2024-03-10 13:15:05 +00:00
purple_emily
5584143eeb Rename for consistency 2024-03-10 13:11:31 +00:00
purple_emily
d84a186132 Pre-commit run 2024-03-10 11:43:35 +00:00
purple_emily
7e338975ed WIP: External access 2024-03-10 11:40:59 +00:00
purple_emily
123b8aad96 WIP: Getting-started 2024-03-10 11:40:59 +00:00
purple_emily
c947b013a2 Only run when the docs folder is changed 2024-03-10 11:40:58 +00:00
purple_emily
f000ae6c12 WIP: Run Knight Crawler 2024-03-10 11:40:58 +00:00
purple_emily
108a4a9066 WIP: Run Knight Crawler 2024-03-10 11:40:58 +00:00
purple_emily
ad286a984f Add Getting started and WIP: Run Knight Crawler 2024-03-10 11:40:58 +00:00
purple_emily
71ec7bdf2b Add some line breaks for readability 2024-03-10 11:40:58 +00:00
purple_emily
ffeeb56610 Fix instance ID's 2024-03-10 11:40:58 +00:00
purple_emily
cfc2b8f601 Ew, we are still using master as the main branch... 2024-03-10 11:40:57 +00:00
purple_emily
66d37a8c05 Change images web-path as per documentation (THIS COULD BE WRONG) 2024-03-10 11:40:57 +00:00
purple_emily
b19d1f0bf4 Add KC logo to header 2024-03-10 11:40:57 +00:00
purple_emily
cbf5fda723 GitHub action to build documentation 2024-03-10 11:40:57 +00:00
purple_emily
5def66858f Overview page 2024-03-10 11:40:57 +00:00
9 changed files with 403 additions and 0 deletions

.github/workflows/build_docs.yaml vendored Normal file

@@ -0,0 +1,86 @@
name: Build documentation
# TODO: Only run on ./docs folder change
on:
  push:
    branches: ["master"]
    paths:
      - 'docs/**'
  # Specify to run a workflow manually from the Actions tab on GitHub
  workflow_dispatch:

permissions:
  id-token: write
  pages: write

env:
  INSTANCE: Writerside/kc
  ARTIFACT: webHelpKC2-all.zip
  DOCS_FOLDER: ./docs

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Build Writerside docs using Docker
        uses: JetBrains/writerside-github-action@v4
        with:
          instance: ${{ env.INSTANCE }}
          artifact: ${{ env.ARTIFACT }}
          location: ${{ env.DOCS_FOLDER }}
      - name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: docs
          path: |
            artifacts/${{ env.ARTIFACT }}
            artifacts/report.json
          retention-days: 7

  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Download artifacts
        uses: actions/download-artifact@v3
        with:
          name: docs
          path: artifacts
      - name: Test documentation
        uses: JetBrains/writerside-checker-action@v1
        with:
          instance: ${{ env.INSTANCE }}

  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    needs: [build, test]
    runs-on: ubuntu-latest
    steps:
      - name: Download artifacts
        uses: actions/download-artifact@v3
        with:
          name: docs
      - name: Unzip artifact
        run: unzip -O UTF-8 -qq '${{ env.ARTIFACT }}' -d dir
      - name: Setup Pages
        uses: actions/configure-pages@v4
      - name: Package and upload Pages artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: dir
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4


@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<buildprofiles xsi:noNamespaceSchemaLocation="https://resources.jetbrains.com/writerside/1.0/build-profiles.xsd"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <variables>
        <header-logo>knight-crawler-logo.png</header-logo>
    </variables>
    <build-profile instance="kc">
        <variables>
            <noindex-content>true</noindex-content>
        </variables>
    </build-profile>
</buildprofiles>

Binary file not shown (new image, 568 KiB).

docs/Writerside/kc.tree Normal file

@@ -0,0 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE instance-profile
        SYSTEM "https://resources.jetbrains.com/writerside/1.0/product-profile.dtd">
<instance-profile id="kc" name="Knight Crawler"
                  start-page="Overview.md">
    <toc-element topic="Overview.md"/>
    <toc-element topic="Getting-started.md"/>
    <toc-element topic="External-access.md"/>
    <toc-element topic="Supported-Debrid-services.md"/>
</instance-profile>


@@ -0,0 +1,57 @@
# External access
This guide outlines how to use Knight Crawler on devices like your TV. By default, it is only accessible on the
machine it is installed on; this limitation is set by Stremio, as [explained here](https://github.com/Stremio/stremio-features/issues/687#issuecomment-1890546094).
With some extra effort, we'll show you how to make it accessible on other devices.
## What to keep in mind
Before we make Knight Crawler available outside your home network, we've got to talk about safety. No software is
perfect, including ours. Knight Crawler is built on lots of different parts, some made by other people. So, if we keep
it just for your home network, it's a bit safer. But if you want to use it over the internet, just know that keeping
your devices secure is up to you. We won't be responsible for any problems or lost data if you use Knight Crawler that way.
## Initial setup
To enable external access for Knight Crawler, whether it's within your home network or over the internet, you'll
need to follow these initial setup steps:
- Set up Caddy, a powerful and easy-to-use web server.
- Disable the open port in the Knight Crawler <path>docker-compose.yaml</path> file.
### Caddy
A basic Caddy configuration is included with Knight Crawler in the deployment directory.
<path>deployment/docker/optional-services/caddy</path>
```Generic
deployment/
└── docker/
└── optional-services/
└── caddy/
├── config/
│ ├── snippets/
│ │ └── cloudflare-replace-X-Forwarded-For
│ └── Caddyfile
├── logs/
└── docker-compose.yaml
```
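As a purely illustrative sketch of what wiring Knight Crawler up to a hostname can look like, a minimal Caddyfile reverse-proxy entry is shown below. The domain, service name, and port here are assumptions for illustration, not the configuration shipped with Knight Crawler; refer to the included <path>Caddyfile</path> for the real one.

```Generic
# Hypothetical Caddyfile entry: proxy a placeholder domain to the addon's web port
knightcrawler.example.com {
    reverse_proxy addon:7000
}
```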
In the Knight Crawler <path>docker-compose.yaml</path>, comment out or remove the exposed port:
```yaml
ports:
  - "8080:8080"
```
With the default port disabled, Knight Crawler is only accessible from inside your network, which adds a layer of security.
## Home network access
## Internet access
### Through a VPN
### On the public web
## Troubleshooting?
## Additional Resources?


@@ -0,0 +1,192 @@
# Getting started
Knight Crawler is provided as an all-in-one solution: all the software you need to get started is included
out of the box.
## Before you start
Make sure that you have:
- A place to host Knight Crawler
- [Docker](https://docs.docker.com/get-docker/) and [Compose](https://docs.docker.com/compose/install/) installed
- A [GitHub](https://github.com/) account _(optional)_
## Download the files
Installing Knight Crawler is as simple as downloading a copy of the [deployment directory](https://github.com/Gabisonfire/knightcrawler/tree/master/deployment/docker).
A basic installation requires only two files:
- <path>deployment/docker/.env.example</path>
- <path>deployment/docker/docker-compose.yaml</path>
For this guide, I will place them in a directory under my home directory, <path>~/knightcrawler</path>.
Rename the <path>.env.example</path> file to <path>.env</path>:
```
~/
└── knightcrawler/
├── .env
└── docker-compose.yaml
```
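If you prefer the command line, the two files can be fetched and renamed in one go. This is only a sketch: the raw URLs below are assumed from the repository path and branch named earlier, so verify them against the repository first.

```Bash
# Fetch the two deployment files into ~/knightcrawler (URLs assumed, verify first)
mkdir -p ~/knightcrawler && cd ~/knightcrawler
curl -fsSLO https://raw.githubusercontent.com/Gabisonfire/knightcrawler/master/deployment/docker/.env.example
curl -fsSLO https://raw.githubusercontent.com/Gabisonfire/knightcrawler/master/deployment/docker/docker-compose.yaml
# Rename the example env file so your edits take effect
mv .env.example .env
```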
## Initial configuration
Below are a few recommended configuration changes.
Open the <path>.env</path> file in your favourite editor.
> If you are using an external database, configure it in the <path>.env</path> file. Don't forget to disable the ones
> included in the <path>docker-compose.yaml</path>.
### Database credentials
It is strongly recommended that you change the credentials for the databases included with Knight Crawler before
running it for the first time; the passwords are much harder to change once the services have been started.
```Bash
POSTGRES_PASSWORD=postgres
...
MONGODB_PASSWORD=mongo
...
RABBITMQ_PASSWORD=guest
```
Here are a few options for generating a secure password:
```Bash
# Linux
tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1
# Or you could use openssl
openssl rand -hex 32
```
```Python
# Python
import secrets
print(secrets.token_hex(32))
```
### Your time zone
```Bash
TZ=Europe/London
```
A list of time zones can be found on [Wikipedia](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
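If you are unsure of the exact zone name, you can check a candidate against the IANA database that ships with most Linux systems. The <path>/usr/share/zoneinfo</path> path is the common default, but it can vary by distribution:

```Bash
# Validate a TZ value against the system's IANA time zone database
TZ_VALUE="Europe/London"
if [ -f "/usr/share/zoneinfo/${TZ_VALUE}" ]; then
    echo "${TZ_VALUE} is a valid zone name"
else
    echo "${TZ_VALUE} was not found in the zone database" >&2
fi
```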
### Consumers
```Bash
JOB_CONCURRENCY=5
...
MAX_CONNECTIONS_PER_TORRENT=10
...
CONSUMER_REPLICAS=3
```
These values depend entirely on your machine and network capacity. The defaults above are fairly conservative and
will work on most machines.
`JOB_CONCURRENCY` is how many films and TV shows each consumer should process at once. Because it applies to every
consumer, raising it multiplies the load on your system. It's probably best to leave it at 5, but you can experiment
with it if you wish.
`MAX_CONNECTIONS_PER_TORRENT` is how many peers the consumer will attempt to connect to while collecting metadata.
Increasing this value can speed up processing, but past a certain point more connections will be opened than your
router can handle, causing a cascading failure where your internet stops working. If you do increase this value,
try raising it by 10 at a time.
> Increasing this value increases the max connections for every parallel job, for every consumer. For example,
> with the default values above, Knight Crawler could be making up to `(5 x 3) x 10 = 150`
> connections at any one time.
>
{style="warning"}
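The arithmetic in the warning above is easy to reproduce for your own values with a quick shell calculation:

```Bash
# Estimate peak peer connections: jobs per consumer x consumers x connections per torrent
JOB_CONCURRENCY=5
CONSUMER_REPLICAS=3
MAX_CONNECTIONS_PER_TORRENT=10
echo $(( JOB_CONCURRENCY * CONSUMER_REPLICAS * MAX_CONNECTIONS_PER_TORRENT ))  # prints 150
```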
`CONSUMER_REPLICAS` is how many consumers should be started initially. You can increase or decrease the number of
consumers while the service is running with the command `docker compose up -d --scale consumer=<number>`.
### GitHub personal access token
This step is optional but strongly recommended. [Debrid Media Manager](https://debridmediamanager.com/start) is a media library manager
for Debrid services. When a user of that service chooses to export/share their library publicly, it is saved to a public GitHub repository.
The result is, essentially, a repository containing a vast number of ready-to-go films and TV shows. Knight Crawler can
read these exported lists, but it requires a GitHub account to do so.
Knight Crawler needs a personal access token with read-only access to public repositories, which means the token cannot
be used to access any of your private repositories.
1. Open your [GitHub token settings](https://github.com/settings/tokens?type=beta), or navigate there manually:
- Navigate to `GitHub settings`.
- Click on `Developer Settings`.
- Select `Personal access tokens`.
- Choose `Fine-grained tokens`.
2. Press `Generate new token`.
3. Fill out the form with the following information:
```Generic
Token name:
KnightCrawler
Expiration:
90 days
Description:
<blank>
Repository access:
(checked) Public Repositories (read-only)
```
4. Click `Generate token`.
5. Take the new token and add it to the bottom of the <path>.env</path> file:
```Bash
# Producer
GITHUB_PAT=<YOUR TOKEN HERE>
```
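As a quick sanity check (just a sketch; adjust the path if you placed the files elsewhere), you can confirm that a non-empty token line made it into the file:

```Bash
# Check that a non-empty GITHUB_PAT line exists in the .env file
if grep -q '^GITHUB_PAT=..*' ~/knightcrawler/.env; then
    echo "GitHub token configured"
fi
```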
## Start Knight Crawler
To start Knight Crawler use the following command:
```Bash
docker compose up -d
```
Then we can follow the logs to watch it start:
```Bash
docker compose logs -f --since 1m
```
> Knight Crawler will only be accessible on the machine you run it on. To make it accessible from other machines, see [External access](External-access.md).
>
{style="note"}
To stop following the logs press <shortcut>Ctrl+C</shortcut> at any time.
The Knight Crawler configuration page should now be accessible in your web browser at [http://localhost:7000](http://localhost:7000).
## Start more consumers
If you wish to speed up the processing of the films and TV shows that Knight Crawler finds, you'll likely want to
increase the number of consumers.
The command below can be used to either increase or decrease the number of running consumers. Gradually increase the
number until you encounter issues, then decrease it until things are stable.
```Bash
docker compose up -d --scale consumer=<number>
```
## Stop Knight Crawler
Knight Crawler can be stopped with the following command:
```Bash
docker compose down
```


@@ -0,0 +1,30 @@
# Overview
<img alt="The image shows a Knight in silvery armour looking forwards." src="knight-crawler-logo.png" title="Knight Crawler logo" width="100"/>
Knight Crawler is a self-hosted [Stremio](https://www.stremio.com/) addon for streaming torrents via
a [Debrid](Supported-Debrid-services.md "Click for a list of Debrid services we support") service.
We are active on [Discord](https://discord.gg/8fQdxay9z2) for both support and casual conversation.
> Knight Crawler is currently alpha software.
>
> Users are responsible for ensuring their data is backed up regularly.
>
> Please read the changelogs before updating to the latest version.
>
{style="warning"}
## What does Knight Crawler do?
Knight Crawler is an addon for [Stremio](https://www.stremio.com/). It began as a fork of the very popular
[Torrentio](https://github.com/TheBeastLT/torrentio-scraper) addon. Knight Crawler essentially does the following:
1. It searches the internet for available films and TV shows.
2. It collects as much information as it can about each film and TV show it finds.
3. It then stores this information in a database for easy access.
When you choose a film or TV show to watch in Stremio, a request is sent to your installation of Knight Crawler.
Knight Crawler queries the database and returns a list of all the copies it has stored as Debrid links.
This enables playback to begin immediately for your chosen media.


@@ -0,0 +1,3 @@
# Supported Debrid services
Start typing here...


@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE ihp SYSTEM "https://resources.jetbrains.com/writerside/1.0/ihp.dtd">
<ihp version="2.0">
    <topics dir="topics" web-path="topics"/>
    <images dir="images" web-path="knightcrawler"/>
    <instance src="kc.tree"/>
</ihp>