Feature request: scrape sites, add to db, but don't download
grantbarrett opened this issue
Since some book catalogs are very large, it would be a useful feature to run the scrape only to build the database, review it, mark books for download, then run the script again to fetch just the marked books. Perhaps that is beyond the purpose of this script, which seems aimed more at broad archival use. But if I see a public domain book in an edition I do not have, I want only that edition, not the others I already have. It would save a lot of unnecessary data transfer.
Alternatively, being able to specify a list of ISBNs, titles, or keywords before scraping would also reduce the total data transfer.
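To illustrate the ISBN-list idea, here is a minimal sketch of how scraped catalog entries could be filtered against a user-supplied wanted list before any downloads happen. All names here (`filter_catalog`, `normalize_isbn`, the dict keys) are hypothetical and not part of the script's actual code:

```python
def normalize_isbn(isbn: str) -> str:
    """Strip hyphens and spaces so ISBNs compare consistently."""
    return isbn.replace("-", "").replace(" ", "")

def filter_catalog(entries, wanted_isbns):
    """Keep only scraped entries whose ISBN is on the wanted list.

    `entries` is a list of dicts as a scraper might produce them;
    downloads would then run over the filtered list only.
    """
    wanted = {normalize_isbn(i) for i in wanted_isbns}
    return [e for e in entries if normalize_isbn(e["isbn"]) in wanted]

# Example: a two-entry scraped catalog and a one-ISBN wanted list.
catalog = [
    {"title": "Moby-Dick", "isbn": "978-1-503-28078-1"},
    {"title": "Walden", "isbn": "978-1-505-29721-2"},
]
to_download = filter_catalog(catalog, ["9781503280781"])
# to_download contains only the Moby-Dick entry
```

The same pattern would apply to title or keyword matching, just with a different normalization step.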
Thank you!