Apify Discord Mirror

Updated 2 years ago

Is there a way to close the browser in PuppeteerCrawler?

At a glance

The community member's crawler is getting stuck due to request timeouts when using a concurrency of 20. They suggest that closing the browser on request timeout could solve the issue. In the comments, other community members provide suggestions:

- Manage sessions, max pages, and retries in the crawler, or run the crawler with batches of URLs, as the crawler may be getting blocked per web visitor.

- Throw an error or return from the requestHandler, and reduce the concurrency to 1 to debug where the crawler is getting stuck.

There is no explicitly marked answer in the comments.

My crawler got stuck getting request timeouts with a concurrency of 20. If I could close the browser on a request timeout, that could solve the issue.
Attachment: image.png
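For what the question asks, here is a minimal sketch, assuming Crawlee's PuppeteerCrawler API: rather than closing the browser by hand, the errorHandler hook (which runs before each retry of a failed request) can retire the session and the browser that owns the failing page. The timeout values and URL are placeholders, and the exact hooks should be treated as assumptions, not the confirmed fix from this thread.

```ts
import { PuppeteerCrawler } from 'crawlee';

const crawler = new PuppeteerCrawler({
    maxConcurrency: 20,
    requestHandlerTimeoutSecs: 60,
    // Runs before each retry of a failed request, including timeouts.
    errorHandler: async ({ session, page, crawler, log }, error) => {
        log.warning(`Request failed: ${error.message}`);
        // Rotate the session out of the pool so its identity is not reused.
        session?.retire();
        // Retire the browser that owns this page so a fresh one is launched.
        if (page) crawler.browserPool.retireBrowserByPage(page);
    },
    requestHandler: async ({ page }) => {
        // ... scrape the page ...
    },
});

await crawler.run(['https://example.com']);
```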
2 comments
It's supposed to be handled by higher-level logic: either manage sessions, max pages, and retries in the crawler, or run the crawler with batches of URLs. It sounds like your crawler is being blocked per web visitor.
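A rough sketch of those higher-level knobs, again assuming Crawlee's PuppeteerCrawler; the pool sizes, retry counts, and batch size are illustrative, and the batching loop assumes a Crawlee version that allows calling run() more than once.

```ts
import { PuppeteerCrawler } from 'crawlee';

const crawler = new PuppeteerCrawler({
    useSessionPool: true,
    sessionPoolOptions: {
        maxPoolSize: 20,
        sessionOptions: { maxUsageCount: 10 }, // rotate a session after 10 uses
    },
    browserPoolOptions: {
        retireBrowserAfterPageCount: 10, // cap pages served per browser instance
    },
    maxRequestRetries: 3,
    requestHandler: async ({ page }) => {
        // ... scrape ...
    },
});

// Feed the crawler batches of URLs instead of one large queue.
const urls: string[] = [/* ... */];
const batchSize = 50;
for (let i = 0; i < urls.length; i += batchSize) {
    await crawler.run(urls.slice(i, i + batchSize));
}
```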
You can simply throw an error or return from the requestHandler. I would reduce concurrency to 1 and try to debug where it gets stuck.
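And a minimal sketch of that debugging advice, with a placeholder selector and timeout: throwing from the requestHandler marks the request failed (Crawlee retries it up to maxRequestRetries), and pinning concurrency to 1 makes the logs show exactly where the crawl stalls.

```ts
import { PuppeteerCrawler } from 'crawlee';

const crawler = new PuppeteerCrawler({
    maxConcurrency: 1, // one request at a time while debugging
    requestHandler: async ({ page, request, log }) => {
        log.info(`Processing ${request.url}`);
        const loaded = await page
            .waitForSelector('body', { timeout: 30_000 }) // placeholder selector
            .then(() => true)
            .catch(() => false);
        if (!loaded) {
            // Throwing fails the request; it is retried and eventually
            // handed to failedRequestHandler if it keeps failing.
            throw new Error(`Timed out on ${request.url}`);
        }
        // ... extract data ...
    },
});
```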