batuhan / node-website-scraper

Home page: https://www.npmjs.org/package/website-scraper

Introduction

Download website to a local directory (including all css, images, js, etc.)

You can try it in the demo app (source).

Note: dynamic websites (where content is loaded by js) may not be saved correctly because website-scraper does not execute js; it only parses http responses for html and css files.

Installation

npm install website-scraper

Usage

var scrape = require('website-scraper');
var options = {
  urls: ['http://nodejs.org/'],
  directory: '/path/to/save/',
};

// with callback
scrape(options, function (error, result) {
	/* some code here */
});

// or with promise
scrape(options).then(function (result) {
	/* some code here */
});

API

scrape(options, callback)

Makes requests to urls and saves all files found with sources to directory.

options - object containing the following options:

  • urls: array of urls to load and filenames for them (required, see example below)
  • directory: path to save loaded files (required)
  • sources: array of objects to load, specifies selectors and attribute values to select files for loading (optional, see example below)
  • recursive: boolean, if true scraper will follow anchors in html files. Don't forget to set maxDepth to avoid infinite downloading (optional, see example below)
  • maxDepth: positive number, maximum allowed depth for dependencies (optional, see example below)
  • request: object, custom options for request (optional, see example below)
  • subdirectories: array of objects, specifies subdirectories for file extensions. If null all files will be saved to directory (optional, see example below)
  • defaultFilename: filename for index page (optional, default: 'index.html')
  • prettifyUrls: whether urls should be 'prettified', by having the defaultFilename removed (optional, default: false)
  • ignoreErrors: boolean, if true scraper will continue downloading resources after an error occurred, if false - scraper will stop the process and return the error (optional, default: true)
  • urlFilter: function which is called for each url to check whether it should be scraped. (optional, see example below)
  • filenameGenerator: name of one of the bundled filenameGenerators, or a custom filenameGenerator function (optional, default: 'byType')
  • httpResponseHandler: function which is called on each response, allows customizing the resource or rejecting its download (optional, see example below)

Default options can be found in lib/config/defaults.js.

callback - callback function (optional), with the following parameters:

  • error: if error - Error object, if success - null
  • result: if error - null, if success - array of Resource objects containing:
    • url: url of loaded page
    • filename: filename where page was saved (relative to directory)
    • children: array of children Resources
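
For example, here is a minimal sketch (using the promise form from the Usage section above) that walks the result array and prints where each loaded page was saved:

var scrape = require('website-scraper');

scrape({
  urls: ['http://nodejs.org/'],
  directory: '/path/to/save/'
}).then(function (result) {
  // result is an array of Resource objects, one per url in options.urls
  result.forEach(function (resource) {
    console.log(resource.url + ' => ' + resource.filename);
    // resource.children contains the Resources referenced by this page
  });
});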

Filename Generators

The filename generator determines where the scraped files are saved.

byType (default)

When the byType filenameGenerator is used the downloaded files are saved by type (as defined by the subdirectories setting) or directly in the directory folder, if no subdirectory is specified for the specific type.

bySiteStructure

When the bySiteStructure filenameGenerator is used the downloaded files are saved in directory using the same structure as on the website:

  • / => DIRECTORY/index.html
  • /about => DIRECTORY/about/index.html
  • /resources/javascript/libraries/jquery.min.js => DIRECTORY/resources/javascript/libraries/jquery.min.js
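
To illustrate the mapping, here is a simplified sketch (not the module's actual implementation; it assumes urls without a file extension are pages that get the defaultFilename appended):

var path = require('path');
var url = require('url');

// Simplified illustration of how bySiteStructure maps a resource url to a save path
function bySiteStructureSketch(resourceUrl, directory, defaultFilename) {
  var pathname = url.parse(resourceUrl).pathname;      // e.g. '/about'
  if (!path.extname(pathname)) {
    // no extension => treat it as a page and save defaultFilename inside that folder
    pathname = path.join(pathname, defaultFilename);   // '/about/index.html'
  }
  return path.join(directory, pathname);
}

bySiteStructureSketch('http://example.com/about', '/path/to/save', 'index.html');
// => '/path/to/save/about/index.html'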

Http Response Handlers

httpResponseHandler is used to reject the downloading of a resource or to customize its text based on response data (for example, status code, content type, etc.). The function takes a response argument - the response object of the request module - and should return a resolved Promise if the resource should be downloaded, or a Promise rejected with an Error if it should be skipped. The Promise should be resolved with:

  • string which contains response body
  • or object with properties body (response body, string) and metadata - everything you want to save for this resource (like headers, original text, timestamps, etc.); scraper will not use this field at all, it is only returned as part of result.

See Example 5 below for using httpResponseHandler.

Examples

Example 1

Let's scrape some pages from http://nodejs.org/ with images, css and js files and save them to /path/to/save/. Imagine we want to load:

  • the home page http://nodejs.org/ to index.html
  • the about page http://nodejs.org/about to about.html
  • the blog http://blog.nodejs.org/ to blog.html

and separate files into directories:

  • img for .jpg, .png, .svg (full path /path/to/save/img)
  • js for .js (full path /path/to/save/js)
  • css for .css (full path /path/to/save/css)

var scrape = require('website-scraper');
scrape({
  urls: [
    'http://nodejs.org/',	// Will be saved with default filename 'index.html'
    {url: 'http://nodejs.org/about', filename: 'about.html'},
    {url: 'http://blog.nodejs.org/', filename: 'blog.html'}
  ],
  directory: '/path/to/save',
  subdirectories: [
    {directory: 'img', extensions: ['.jpg', '.png', '.svg']},
    {directory: 'js', extensions: ['.js']},
    {directory: 'css', extensions: ['.css']}
  ],
  sources: [
    {selector: 'img', attr: 'src'},
    {selector: 'link[rel="stylesheet"]', attr: 'href'},
    {selector: 'script', attr: 'src'}
  ],
  request: {
    headers: {
      'User-Agent': 'Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 4 Build/JOP40D) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166 Mobile Safari/535.19'
    }
  }
}).then(function (result) {
  console.log(result);
}).catch(function(err){
  console.log(err);
});

Example 2. Recursive downloading

// Links from example.com will be followed
// Links found on those pages will be ignored because their depth = 2 is greater than maxDepth
var scrape = require('website-scraper');
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  recursive: true,
  maxDepth: 1
}).then(console.log).catch(console.log);

Example 3. Filtering out external resources

// Links to other websites are filtered out by the urlFilter
var scrape = require('website-scraper');
scrape({
  urls: ['http://example.com/'],
  urlFilter: function(url){
    return url.indexOf('http://example.com') === 0;
  },
  directory: '/path/to/save'
}).then(console.log).catch(console.log);

Example 4. Downloading an entire website

// Downloads all the crawlable files of example.com.
// The files are saved in the same structure as the structure of the website, by using the `bySiteStructure` filenameGenerator.
// Links to other websites are filtered out by the urlFilter
var scrape = require('website-scraper');
scrape({
  urls: ['http://example.com/'],
  urlFilter: function(url){
      return url.indexOf('http://example.com') === 0;
  },
  recursive: true,
  maxDepth: 100,
  prettifyUrls: true,
  filenameGenerator: 'bySiteStructure',
  directory: '/path/to/save'
}).then(console.log).catch(console.log);

Example 5. Rejecting resources with 404 status and adding metadata

var scrape = require('website-scraper');
scrape({
  urls: ['http://example.com/'],
  directory: '/path/to/save',
  httpResponseHandler: (response) => {
    if (response.statusCode === 404) {
      return Promise.reject(new Error('status is 404'));
    } else {
      // if you don't need metadata - you can just return Promise.resolve(response.body)
      return Promise.resolve({
        body: response.body,
        metadata: {
          headers: response.headers,
          someOtherData: [ 1, 2, 3 ]
        }
      });
    }
  }
}).then(console.log).catch(console.log);

Log and debug

This module uses debug to log events. To enable logs you should use the environment variable DEBUG. The following command will log everything from website-scraper:

export DEBUG=website-scraper*; node app.js

The module has different loggers for levels: website-scraper:error, website-scraper:warn, website-scraper:info, website-scraper:debug, website-scraper:log. Please read the debug documentation to find out how to include/exclude specific loggers.
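
For example, debug accepts a comma-separated list of namespaces, so the following would show only errors and warnings:

export DEBUG=website-scraper:error,website-scraper:warn; node app.js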

License

MIT

