bda-research / node-crawler

Web Crawler/Spider for NodeJS + server-side jQuery ;-)

Home Page: http://node-crawler.org

Some (stream-based) requests never end and block the queue

Verhov opened this issue

Summary:

I am crawling millions of domains and ran into an issue where some stream-based requests can permanently block the queue.
The timeout does not fire in this case and RAM leaks steadily.
I found two such domains: https://goldfm.nu/ and https://rsradio.online/.
It's really nice radio 😄 but it totally blocks my crawler))

Current behavior

I am using a timeout, but it does not seem to work correctly; the callback is never fired in this case:

const Crawler = require('crawler');

const _crawler = new Crawler({
    timeout: 9000,
    retries: 1,
    retryTimeout: 1000,
    debug: true,
    callback: (error, res, done) => {
        ...
        done();
    }
});

_crawler.queue([{ uri: 'https://goldfm.nu' }])

Issue

This is definitely because the request starts a media stream and node-crawler tries to download all of it, so the request stays in a pending state forever.
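
For reference, the hang can be reproduced with the request package alone (node-crawler delegates to it), and the Content-Type of the response already reveals that it is a stream. A minimal sketch, where the content-type check and the log messages are only illustrative, not node-crawler behaviour:

const request = require('request');

// The URL is the one from this issue. Headers arrive quickly, so request's
// own timeout never fires; the body then streams forever, so 'end' (and the
// crawler callback) is never reached.
const req = request({ uri: 'https://goldfm.nu', timeout: 9000 });

req.on('response', (res) => {
    console.log(res.statusCode, res.headers['content-type']); // e.g. an audio/* type

    if (!/text\/html/i.test(res.headers['content-type'] || '')) {
        req.abort(); // drop the connection instead of buffering an endless stream
    }

    res.on('end', () => console.log('body finished')); // never logged for the radio stream
});

req.on('error', (err) => console.error('request error:', err.message));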

Side issues

Also, as the stream keeps arriving, RAM usage grows and it looks like it will eventually throw an 'out of memory' exception.

Attempts to fix

I also tried setting the Accept header to HTML only, but it has no effect:
headers: { Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9' },

Currently I just skip this URL as a special case, but I doubt it is the only one.

Expected behavior

The timeout should raise an error when the full response has not been received within the allotted time.
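
As far as I can tell, request's timeout option only covers establishing the connection and waiting for the response headers; once data keeps trickling in, nothing fires. A workaround sketch with a hard overall deadline (the function name and the 9-second budget are just an example, not an existing API):

const request = require('request');

// Abort the transfer once the total budget is spent, even if the server is
// still happily streaming data.
function fetchWithDeadline(uri, deadlineMs, cb) {
    const req = request({ uri }, (err, res, body) => {
        clearTimeout(timer);
        cb(err, res, body);
    });

    const timer = setTimeout(() => {
        // Depending on the request version, the callback then receives an
        // abort / 'socket hang up' style error instead of hanging forever.
        req.abort();
    }, deadlineMs);
}

fetchWithDeadline('https://goldfm.nu', 9000, (err, res, body) => {
    if (err) return console.error('failed or aborted:', err.message);
    console.log('received', body.length, 'bytes');
});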

Related issues

This issue is definitely related to the request package.

Question

Do you have any ideas on how to resolve this case?

I have the same issue. The spider needs not only a timeout but also a limit on the download volume.
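
A minimal sketch of such a volume limit, using the request package's stream interface directly (the 1 MB cap and the helper name are illustrative; node-crawler does not expose this option):

const request = require('request');

const MAX_BODY_BYTES = 1024 * 1024; // example cap: 1 MB

function fetchWithSizeLimit(uri, cb) {
    const chunks = [];
    let received = 0;

    const req = request({ uri, timeout: 9000 });

    req.on('data', (chunk) => {
        received += chunk.length;
        if (received > MAX_BODY_BYTES) {
            req.abort(); // stop the endless stream instead of leaking RAM
            cb(new Error('response body exceeded ' + MAX_BODY_BYTES + ' bytes'));
            cb = () => {}; // make sure the callback fires only once
            return;
        }
        chunks.push(chunk);
    });

    req.on('end', () => cb(null, Buffer.concat(chunks)));
    req.on('error', (err) => { cb(err); cb = () => {}; });
}

fetchWithSizeLimit('https://goldfm.nu', (err, body) => {
    if (err) return console.error(err.message);
    console.log('got', body.length, 'bytes');
});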

Refer to my comment here: request/request#3341
Feel free to discuss if you have any more questions, and I hope it helps.

Thanks @mike442144, but in this (crawling) context we can't blacklist a domain before we encounter it.
And it's not great to wait a few days until the server decides to disconnect; both sides carry a continuous payload during that time.

I still don't know how to identify this type of connection in advance and terminate it. I tried sending an OPTIONS request first, but it didn't help detect the type of the subsequent GET response.

The most elegant solution in my opinion would be a timeout combined with the 'response size limit' option that @slienceisgolden mentioned; that would also cover other pitfalls (huge documents, files, other streams, etc.).

I'm not currently working on it, but it's still relevant.

@Verhov Good idea to limit the response body size; it should work well in your case. The body size limit should also be configurable in the options for flexibility. Looking forward to your merge request :)
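
For the record, a hypothetical shape for such an option (the maxBodySize name and its semantics are purely illustrative and do not exist in node-crawler today, which is why it is left commented out):

const Crawler = require('crawler');

const crawler = new Crawler({
    timeout: 9000,
    retries: 1,
    // Proposed, NOT an existing option: abort the request and report an
    // error once the response body exceeds this many bytes.
    // maxBodySize: 2 * 1024 * 1024,
    callback: (error, res, done) => {
        if (error) console.error(error.message);
        done();
    }
});

crawler.queue([{ uri: 'https://goldfm.nu' }]);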