Woke up to see a discussion about torrentio scraping: powered by community.

Was a little inspired. Now that we have a self-populating database of IMDb IDs, why shouldn't we have the ability to scrape any other instance of torrentio, or knightcrawler?

Also restructured the producer to be vertically sliced, to make it easier to work with. There was too much flicking back and forth between Jobs and Crawlers when configuring.
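Roughly the shape I have in mind for one of those crawler slices - the job, its HTTP call and its DTOs living together in one feature folder. Everything below is a sketch, not code from this commit: the class names, base URL and response shape are made up, and it assumes a torrentio-compatible instance serves streams at /stream/movie/{imdbId}.json.

    // Hypothetical vertical slice: Quartz job + crawler logic + DTOs in one place.
    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Net.Http.Json;
    using System.Threading.Tasks;
    using Quartz;

    public record TorrentioStream(string InfoHash, string Title);
    public record TorrentioStreamsResponse(List<TorrentioStream> Streams);

    [DisallowConcurrentExecution]
    public class ExternalTorrentioCrawlJob(IHttpClientFactory httpClientFactory) : IJob
    {
        // Assumption: the instance to scrape would really come from Configuration\scrapers.json.
        private const string BaseUrl = "https://torrentio.example.com";

        public async Task Execute(IJobExecutionContext context)
        {
            var client = httpClientFactory.CreateClient("torrentio");

            // Assumption: these would be read from the self-populating imdb id table, not hard-coded.
            var imdbIds = new[] { "tt0111161" };

            foreach (var imdbId in imdbIds)
            {
                var response = await client.GetFromJsonAsync<TorrentioStreamsResponse>(
                    $"{BaseUrl}/stream/movie/{imdbId}.json", context.CancellationToken);

                foreach (var stream in response?.Streams ?? [])
                {
                    // Hand-off point: the real job would publish these into the ingestion pipeline.
                    Console.WriteLine($"{imdbId}: {stream.InfoHash} - {stream.Title}");
                }
            }
        }
    }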
@@ -15,6 +15,7 @@
    <PackageReference Include="MassTransit.RabbitMQ" Version="8.1.3" />
    <PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.0" />
    <PackageReference Include="Microsoft.Extensions.Http" Version="8.0.0" />
    <PackageReference Include="MongoDB.Driver" Version="2.24.0" />
    <PackageReference Include="Npgsql" Version="8.0.1" />
    <PackageReference Include="Quartz.Extensions.DependencyInjection" Version="3.8.0" />
    <PackageReference Include="Quartz.Extensions.Hosting" Version="3.8.0" />
@@ -24,12 +25,8 @@
    </ItemGroup>

    <ItemGroup>
        <Content Remove="Configuration\scrapers.json" />
        <None Include="Configuration\scrapers.json">
            <CopyToOutputDirectory>Always</CopyToOutputDirectory>
        </None>
        <Content Remove="Configuration\logging.json" />
        <None Include="Configuration\logging.json">
        <Content Remove="Configuration\*.json" />
        <None Include="Configuration\*.json">
            <CopyToOutputDirectory>Always</CopyToOutputDirectory>
        </None>
    </ItemGroup>