Commit Graph

5 Commits

Author SHA1 Message Date
iPromKnight
95fa48c851 Woke up to see a discussion about torrentio scraping: powered by community
Was a little inspired. Now that we have a self-populating database of IMDb IDs, why shouldn't we also have the ability to scrape any other instance of torrentio or knightcrawler?

Also restructured the producer into vertical slices to make it easier to work with; there was too much flicking back and forth between Jobs and Crawlers when configuring
2024-03-02 18:41:57 +00:00
iPromKnight
1b9a01c677 BREAKING: Clean up RabbitMQ env vars and the GitHub PAT
2024-02-28 12:57:55 +00:00
iPromKnight
e461e26b0f Change the Postgres configuration in the producer to use the env vars from the stack
2024-02-04 15:03:07 +00:00
iPromKnight
68edaba308 Introduce a max batch size and a configurable publish window
Still need to implement the queue size limit
Also fixes env var consistency between the addon and the consumer
2024-02-02 13:49:54 +00:00
iPromKnight
ab17ef81be Big rewrite: distributed consumers for ingestion/scraping (scalable) and a single producer written in C#.
Changed from page scraping to RSS XML scraping
Includes RealDebridManager hashlist decoding (requires a read-only GitHub PAT, as requests must be authenticated); this allows ingestion of 200k+ entries in a few hours.
Simplifies a lot of torrentio to deal with the new data
2024-02-01 16:38:45 +00:00