harlan-zw / unlighthouse

Scan your entire site with Google Lighthouse in 2 minutes (on average). Open source, fully configurable with minimal setup.

Home Page: https://unlighthouse.dev

Error: Cannot create new tab, and no tabs already open.

BennyAlex opened this issue · comments

Describe the bug

I'm running an audit for eye-able.com. The first pages work fine, then I get the maxEventListeners warning. After a few more pages, I get the following error:

Error: Cannot create new tab, and no tabs already open.
    at C:\Users\benny\work\eye-able-performance\node_modules\lighthouse\lighthouse-core\gather\connections\cri.js:45:35
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Driver.connect (C:\Users\benny\work\eye-able-performance\node_modules\lighthouse\lighthouse-core\gather\driver.js:119:5)
    at async GatherRunner.run (C:\Users\benny\work\eye-able-performance\node_modules\lighthouse\lighthouse-core\gather\gather-runner.js:484:7)
    at async Runner._gatherArtifactsFromBrowser (C:\Users\benny\work\eye-able-performance\node_modules\lighthouse\lighthouse-core\runner.js:264:23)
    at async Runner.gather (C:\Users\benny\work\eye-able-performance\node_modules\lighthouse\lighthouse-core\runner.js:170:21)
    at async lighthouse (C:\Users\benny\work\eye-able-performance\node_modules\lighthouse\lighthouse-core\index.js:50:21)
    at async file:///C:/Users/benny/work/eye-able-performance/node_modules/@unlighthouse/core/dist/process/lighthouse.mjs:16:26
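(For reference, the maxEventListeners warning mentioned above looks like Node's standard listener-limit warning. A rough sketch of raising the limit just to silence it, assuming the config file is loaded in the same Node process; this does not address the tab error itself:)

// unlighthouse.config.mjs: sketch only; silences the listener warning, not a fix
import { EventEmitter } from 'node:events'

// Node warns once an emitter collects more than 10 listeners; raise the default limit.
EventEmitter.defaultMaxListeners = 30

export default {
  site: 'https://eye-able.com/',
}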

Reproduction

export default {
  puppeteerOptions: {
    args: ["--no-sandbox"],
    headless: true, // headless works best
  },
  server: {
    open: false,
  },
  puppeteerCluster: {
    concurrency: 'CONCURRENCY_BROWSER', // which mode to use
    maxConcurrency: 1,
  },
  site: 'https://eye-able.com/',
  debug: true,
  cache: false, // disable caching of results
  scanner: {
    maxRoutes: 500, // max number of routes to scan
    samples: 2, // number of samples to take
    crawler: true, // enable the crawler
    robotsTxt: false, // don't respect robots.txt
    sitemap: false, // don't respect sitemap.xml
    // exclude all pdfs and amp pages
    exclude: [
      '/.?pdf',
      './amp',
      // 'en-',
    ],
    // use desktop to scan
    device: 'desktop',
    // throttling mode, good for getting more consistent results
    throttle: false,
  },
  chrome: {
    useSystem: false, // use the bundled chrome
  },
}

System / Nuxt Info

System:
    OS: Windows 10 10.0.22621
    CPU: (16) x64 AMD Ryzen 7 6800HS Creator Edition
    Memory: 4.61 GB / 13.69 GB
  Binaries:
    Node: 18.17.1 - C:\Program Files\nodejs\node.EXE
    npm: 9.6.7 - C:\Program Files\nodejs\npm.CMD
  Browsers:
    Edge: Spartan (44.22621.2134.0), Chromium (115.0.1901.203)

Running into the exact same issue, but only when increasing the number of samples per page.

Yup, when samples is 1, it's working fine.
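For context, scanner.samples appears to be the number of Lighthouse runs per route, so the crash only shows up once a route is audited more than once. A minimal sketch of the workaround described above, pinning it back to a single run per route (names and values taken from the configs in this thread):

// unlighthouse.config.ts: workaround sketch only, per the comments above
export default {
  site: 'https://eye-able.com/',
  scanner: {
    samples: 1, // staying at 1 avoids the "Cannot create new tab" crash for now
  },
}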

Same for me:

export default {
  site: 'www.incubrain.org',
  debug: true,
  scanner: {
    throttle: false,
    device: 'desktop',
    // exclude: ['/private-zone/*'],
    samples: 2,
    dynamicSampling: 5
  },
  ci: {
    reporter: 'jsonExpanded',
    budget: {
      performance: 80,
      accessibility: 70,
      bestPractices: 80,
      seo: 80
    }
  }
}

If I change samples to anything above 1, this error occurs.
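So far the only workaround reported in this thread is keeping samples at 1. One untested variation (an assumption, not a confirmed fix) is to switch the puppeteer-cluster concurrency mode, mirroring the puppeteerCluster block and the string form used in the first config above, so that jobs share a single browser instead of launching one per worker:

// sketch only, untested: CONCURRENCY_PAGE is puppeteer-cluster's shared-browser mode
export default {
  site: 'www.incubrain.org',
  puppeteerCluster: {
    concurrency: 'CONCURRENCY_PAGE', // one browser, one page per job
    maxConcurrency: 1,
  },
  scanner: {
    samples: 2, // keep the setting that triggers the bug, to see whether the mode matters
  },
}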