A multi-threaded Python website crawler that generates a file listing all internal (same-domain) links found on a given website.
- Run main.py (it executes the main function).
- Enter the name of the project.
- Enter the URL that you would like to crawl.
- Wait for the spiders to finish crawling.
- All discovered URLs are written to the crawled.txt file.
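
The core idea behind the steps above can be sketched as follows. This is a minimal illustration, not the project's actual main.py: the function and class names (`crawl`, `LinkParser`, `same_domain`) are hypothetical, but the pattern is the usual one for a multi-threaded crawler — worker threads pull URLs from a shared queue, fetch and parse each page, and enqueue only links on the starting domain.

```python
import threading
import queue
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def same_domain(url, base_netloc):
    """True if url belongs to the site being crawled."""
    return urlparse(url).netloc == base_netloc

def crawl(start_url, num_threads=8, out_file="crawled.txt"):
    base = urlparse(start_url).netloc
    to_visit = queue.Queue()
    to_visit.put(start_url)
    seen = {start_url}          # URLs already queued or visited
    lock = threading.Lock()     # protects `seen`

    def worker():
        while True:
            try:
                url = to_visit.get(timeout=2)  # exit when queue stays empty
            except queue.Empty:
                return
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
                parser = LinkParser()
                parser.feed(html)
                for href in parser.links:
                    link = urljoin(url, href)  # resolve relative links
                    with lock:
                        if same_domain(link, base) and link not in seen:
                            seen.add(link)
                            to_visit.put(link)
            except Exception:
                pass  # skip pages that fail to load or parse
            finally:
                to_visit.task_done()

    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    with open(out_file, "w") as f:
        f.write("\n".join(sorted(seen)))

# Example: crawl("https://example.com") writes internal links to crawled.txt
```

The shared `seen` set is guarded by a lock so that two threads never enqueue the same URL twice, and `urljoin` turns relative hrefs like `/about` into absolute URLs before the same-domain check.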