Crawlie is a simple Elixir library for writing decently performing crawlers with minimal effort.

For a working example, see the crawlie_example project.
Crawlie uses Elixir's GenStage to parallelise the work. Most of the logic is handled by the UrlManager, which consumes the url collection passed by the user, receives the urls extracted during subsequent processing, ensures no url is processed more than once, and keeps the "discovered urls" collection as small as possible by traversing the url tree in a roughly depth-first manner.

The urls are requested from the UrlManager by a GenStage Flow, which fetches them in parallel using HTTPoison and parses the responses using user-provided callbacks. Newly discovered urls are sent back to the UrlManager.
Here's a rough diagram: *(diagram image omitted)*
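To make the pipeline concrete, here is a minimal sketch of the same fetch-and-parse pattern built directly on Flow and HTTPoison. It illustrates the general shape only, not Crawlie's internals: `CrawlSketch` and `extract_title/1` are hypothetical stand-ins for the user-provided callbacks, and the sketch skips the deduplication and depth-first traversal that UrlManager provides.

```elixir
defmodule CrawlSketch do
  # Hypothetical user callback: pull the <title> out of an HTML body.
  def extract_title(body) do
    case Regex.run(~r{<title>(.*?)</title>}s, body) do
      [_, title] -> title
      nil -> nil
    end
  end

  def run(urls) do
    # Make sure HTTPoison's applications are running (a no-op in a Mix
    # project that already starts them).
    {:ok, _} = Application.ensure_all_started(:httpoison)

    urls
    |> Flow.from_enumerable(max_demand: 1)
    # Each url is fetched and parsed by one of several parallel Flow stages.
    |> Flow.map(fn url ->
      {:ok, %HTTPoison.Response{body: body}} = HTTPoison.get(url)
      {url, extract_title(body)}
    end)
    |> Enum.to_list()
  end
end
```

Calling `CrawlSketch.run(["https://example.com"])` returns `{url, title}` tuples; Crawlie layers url deduplication and the UrlManager's depth-first traversal on top of this basic shape.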
See the docs for supported options.
Planned improvements:

- Easier limiting of crawling to a (sub)domain
- An option to respect the `robots.txt` of crawled websites (on by default)
The package can be installed as follows:

- Add `crawlie` to your list of dependencies in `mix.exs`:
```elixir
def deps do
  [{:crawlie, "~> 0.3.0"}]
end
```
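After adding the dependency, run `mix deps.get` to fetch it.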
- Ensure `crawlie` is started before your application:
```elixir
def application do
  [applications: [:crawlie]]
end
```
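On Elixir 1.4 and later, Mix infers the application list from your dependencies automatically, so this step is only necessary on older versions.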