An Android web crawler built on the Jsoup library, with multi-threaded fetching.
Crawled pages are stored in an SQLite database.
A web crawler (also known as a web spider or web robot) is a program or automated script that browses the World Wide Web in a methodical, automated manner. Web crawlers are mainly used to create a copy of all visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches.
Crawlers can also be used to automate maintenance tasks on a website, such as checking links or validating HTML code.
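The crawl loop described above (fetch a page with Jsoup on a worker thread, extract its links, and visit them in turn) might be sketched roughly as follows. This is an illustration, not the project's actual code: the class and method names, the thread count, and the depth limit are all assumptions.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal multi-threaded crawler sketch (illustrative names, not from the project).
public class CrawlerSketch {
    private final Set<String> visited = ConcurrentHashMap.newKeySet();
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Thread-safe dedup: returns true only the first time a URL is seen.
    boolean markVisited(String url) {
        return visited.add(url);
    }

    // Fetch a page with Jsoup on a worker thread, then recurse into its links.
    void crawl(String url, int depth) {
        if (depth <= 0 || !markVisited(url)) return;
        pool.submit(() -> {
            try {
                Document doc = Jsoup.connect(url).get();
                // In the Android app, this is where the downloaded page
                // would be written to the SQLite database.
                for (Element link : doc.select("a[href]")) {
                    crawl(link.absUrl("href"), depth - 1);
                }
            } catch (Exception e) {
                // Skip pages that fail to download or parse.
            }
        });
    }

    void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

Running the network fetch on a worker pool is not just a speed-up on Android: network calls on the main thread throw `NetworkOnMainThreadException`, so the crawler must do all Jsoup requests off the UI thread.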
NOTE: Not all files have been uploaded, just enough for the project to be recreated (because of GitHub issues).
References: www.androidsrc.net || www.sciencedaily.com