is there a way to close browser in puppeteer crawler?
At a glance
The community member's crawler is getting stuck due to request timeouts when using a concurrency of 20. They suggest that closing the browser on request timeout could solve the issue. In the comments, other community members provide suggestions:
- Manage session, max pages, and retries in the crawler, or run the crawler with batches of URLs, as the issue may be that the crawler is being blocked per web visitor.
- Throw an error or return from the requestHandler, and reduce the concurrency to 1 to debug where the crawler is getting stuck.
There is no explicitly marked answer in the comments.
It's supposed to be higher-level logic: either manage sessions, max pages, and retries in the crawler, or run the crawler with batches of URLs. It sounds like your crawler is being blocked per web visitor.
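The advice above (timeouts, retries, session management, and recycling browsers) can be sketched as a Crawlee `PuppeteerCrawler` configuration. This is a hedged example, not a verified fix for the original issue: the option names follow the Crawlee v3 API (`requestHandlerTimeoutSecs`, `sessionPoolOptions`, `browserPoolOptions.retireBrowserAfterPageCount`, `crawler.teardown()`), but the specific values and the `Access Denied` check are illustrative assumptions.

```javascript
import { PuppeteerCrawler } from 'crawlee';

const crawler = new PuppeteerCrawler({
    maxConcurrency: 20,
    // Fail a hung request instead of letting it occupy a slot forever.
    requestHandlerTimeoutSecs: 60,
    maxRequestRetries: 3,
    // Rotate sessions so a block against one "web visitor" doesn't poison the run.
    useSessionPool: true,
    sessionPoolOptions: {
        maxPoolSize: 20,
        sessionOptions: { maxUsageCount: 10 },
    },
    // Retire (close) each browser after it has served a number of pages,
    // rather than trying to close it manually mid-crawl.
    browserPoolOptions: {
        retireBrowserAfterPageCount: 10,
    },
    async requestHandler({ page, session }) {
        // Illustrative blocking check: retire the session so a fresh
        // browser/session pair handles the retry.
        const title = await page.title();
        if (title.includes('Access Denied')) {
            session.retire();
            throw new Error('Blocked, retrying with a fresh session');
        }
        // ... scraping logic ...
    },
});

await crawler.run(['https://example.com']);
// run() tears down browsers on completion; teardown() can also be
// called explicitly to close all browsers early.
await crawler.teardown();
```

There is no direct "close the browser" call inside a request handler; letting the session pool and `retireBrowserAfterPageCount` recycle browsers, or throwing from the handler so the request is retried, is the idiomatic route.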