hey folks, basically the title: I want to stop scraping in the middle of a route handler when some condition is met, because the full function is computationally expensive.
I am using return; to exit the route handler when that condition is met, but the Node process hangs indefinitely after scraping is done. I had a previous thread about crawler.teardown() where removing the return statement fixed the hanging issue.
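For context, this is the pattern I mean, as a minimal sketch without the Crawlee dependency (isAlreadyScraped and the handler shape here are placeholders for my real condition and route handler, not the actual code):

```javascript
// Hypothetical predicate standing in for the real "should I skip?" condition.
const seen = new Set(['https://example.com/done']);
const isAlreadyScraped = (url) => seen.has(url);

async function routeHandler({ url }) {
    if (isAlreadyScraped(url)) {
        // Early return: bail out before the expensive part of the handler runs.
        return 'skipped';
    }
    // ...expensive scraping work would go here...
    return 'scraped';
}
```

The intent is just to short-circuit one request's handler and let the crawler move on, not to shut anything down.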
But even in handlers where I am not calling crawler.teardown(), I am seeing the same hanging issue.
no, the teardown use case was different: I had to open a couple of pages inside the route itself to grab info, and I wanted to skip that to avoid spamming the site where possible and to stop the crawler ASAP. But our main use case was to have some kind of "resume" for it.
and this is the route handler (https://pastebin.com/8JaY9PUR). The crawler's startup URL only hits this route and is supposed to shut down afterwards, but it hangs indefinitely.