Apify and Crawlee Official Forum

Named Request Queues not getting purged

Hey folks,

I am explicitly creating named request queues for my crawlers so that crawler-specific run options such as `maxRequestsPerCrawl` can be set on a per-crawler basis. The issue with this approach is that the request queues are not getting purged after every crawl, so the crawlers resume the session from before. These are the approaches I tried:

  1. Setting the `purgeRequestQueue` option to `true` explicitly in the `crawler.run()` call, but it results in this error: `Did not expect property purgeRequestQueue to exist, got true in object options`
  2. Setting it as a global option in `crawlee.json` (it looks like Crawlee is not picking up my `crawlee.json` file at all, because I tried to set logging levels in it and Crawlee didn't pick them up).
  3. Calling `await purgeDefaultStorages()` in my entry file.

None of these options works. Is there some other way to purge these queues? I know purging is enabled by default, but it isn't happening for my named queues.

Also, is using named queues the best way to isolate crawler-specific options for each crawler? When I used the default queue and restricted crawls to some numeric value in one crawler, all the other crawlers would also shut down once that crawler stopped, logging that max requests per crawl had been reached, even though I never specified that option when I initialized them.
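For reference, a minimal sketch of the kind of setup I mean, with one named queue per crawler (the queue name, URL, and handler are just placeholders):

```ts
import { CheerioCrawler, RequestQueue } from 'crawlee';

// Named queue so this crawler's state stays separate from other crawlers.
const queue = await RequestQueue.open('products-queue'); // placeholder name

const crawler = new CheerioCrawler({
    requestQueue: queue,
    maxRequestsPerCrawl: 50, // crawler-specific limit
    async requestHandler({ request, log }) {
        log.info(`Processing ${request.url}`);
    },
});

await crawler.run(['https://example.com']);
```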
9 comments
P.S. I also tried the `queue.drop()` method, which dropped the whole queue instead of purging it as I expected.
I also found `RequestQueue.config.getStorageClient()` and tried calling the `purge` function from it, but it didn't work. P.S. TypeScript says `purge` might be undefined, so we need to check that it is defined before calling it.
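For completeness, the guard I mean looks roughly like this; it assumes the storage client is reached through the global `Configuration`, so adjust it to however you obtain the client (`purge` is optional on the client type):

```ts
import { Configuration } from 'crawlee';

const storageClient = Configuration.getGlobalConfig().getStorageClient();
// purge may be undefined on the StorageClient type, so guard the call
await storageClient.purge?.();
```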
The default purge only handles the default (non-named) queue. Named queues need to be dropped. There is no purge method that cleans one up without creating a new one.
Thanks, this is exactly what I ended up doing. I still feel we should have a purge method for named queues too, because dropping a queue and then re-instantiating it seems awkward to me. Just one last thing: is using a separate queue advisable if the only use case is to isolate crawler-specific options like `maxRequestsPerCrawl` to each crawler?
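For anyone landing here later, the drop-and-reopen workaround described above looks roughly like this (the queue name is a placeholder):

```ts
import { RequestQueue } from 'crawlee';

// Named queues are not purged automatically, so drop the stale one...
const stale = await RequestQueue.open('products-queue');
await stale.drop();

// ...and re-open it to get a fresh, empty queue with the same name.
const queue = await RequestQueue.open('products-queue');
```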
Generally, each crawler should have a separate queue; otherwise they will mess up each other's queues.
Even if they run sequentially?
Sorry for the late reply. In that case it is fine, but you have to finish the requests inside the run and add new ones in between.
Hey, thanks. Then, just as a precaution, I'm assuming we have to override options like `maxRequestsPerCrawl` for each crawler every time, to make sure that the config from a previous crawler doesn't bleed into the other crawlers. I played around with a single queue and this was the main issue I found. Anyway, the separate queues work fine aside from the awkwardness around purging them (re: above comment).
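As an illustration of setting the limit explicitly on each crawler when they run one after another (handlers, URLs, and limits are placeholders):

```ts
import { CheerioCrawler } from 'crawlee';

const crawlerA = new CheerioCrawler({
    maxRequestsPerCrawl: 10, // set explicitly for this crawler
    async requestHandler({ request, log }) {
        log.info(`A: ${request.url}`);
    },
});
// The first run finishes all of its requests before the second crawler starts.
await crawlerA.run(['https://example.com/a']);

const crawlerB = new CheerioCrawler({
    maxRequestsPerCrawl: 100, // set explicitly again rather than relying on a carried-over value
    async requestHandler({ request, log }) {
        log.info(`B: ${request.url}`);
    },
});
await crawlerB.run(['https://example.com/b']);
```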