akirill0v / wombat

Lightweight Ruby web crawler/scraper with an elegant DSL which extracts structured data from pages.

Home Page: http://felipecsl.com/wombat/




Web scraper with an elegant DSL that parses structured data from web pages.


gem install wombat
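If you manage dependencies with Bundler, you can instead declare Wombat in your Gemfile (a standard Bundler declaration):

```ruby
gem 'wombat'
```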

Scraping a page:

The simplest way to use Wombat is by calling Wombat.crawl and passing it a block:

require 'wombat'

Wombat.crawl do
  base_url "https://www.github.com"
  path "/"

  headline xpath: "//h1"
  subheading css: "p.subheading"

  what_is({ css: ".one-half h3" }, :list)

  links do
    explore xpath: '//*[@class="wrapper"]/div[1]/div[1]/div[2]/ul/li[1]/a' do |e|
      e.gsub(/Explore/, "Love")
    end

    features css: '.features'
    enterprise css: '.enterprise'
    blog css: '.blog'
  end
end
The code above will return the following hash:

{
  "headline"=>"Build software better, together.",
  "subheading"=>"Powerful collaboration, code review, and code management for open source and private projects. Need private repositories? Upgraded plans start at $7/mo.",
  "what_is"=>[
    "Great collaboration starts with communication.",
    "Friction-less development across teams.",
    "World's largest open source community.",
    "Do more with powerful integrations."
  ],
  "links"=>{
    "explore"=>"Love",
    "features"=>"Features",
    "enterprise"=>"Enterprise",
    "blog"=>"Blog"
  }
}
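Since Wombat returns a plain Ruby hash, the result can be processed with ordinary hash and array operations. A minimal sketch, using a hardcoded subset of the example output above so it is self-contained:

```ruby
# Hardcoded subset of the example result, standing in for the hash
# that Wombat.crawl would return.
result = {
  "headline" => "Build software better, together.",
  "what_is"  => [
    "Great collaboration starts with communication.",
    "Friction-less development across teams."
  ]
}

# Properties declared in the DSL become string keys in the result
headline = result["headline"]

# Properties extracted with the :list format come back as arrays
bullets = result["what_is"].map { |item| "- #{item}" }

puts headline
puts bullets
```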

This is just a sneak peek of what Wombat can do. For the complete documentation, please check the project's wiki.

Contributing to Wombat

  • Check out the latest master to make sure the feature hasn't been implemented or the bug hasn't been fixed yet
  • Check out the issue tracker to make sure someone hasn't already requested it and/or contributed it
  • Fork the project
  • Start a feature/bugfix branch
  • Commit and push until you are happy with your contribution
  • Make sure to add tests for it. This is important so I don't break it in a future version unintentionally.
  • Please try not to mess with the Rakefile, version, or history. If you want to have your own version, or it is otherwise necessary, that is fine, but please isolate it in its own commit so I can cherry-pick around it.



Copyright (c) 2012 Felipe Lima. See LICENSE.txt for further details.




License: MIT License

