Hi everyone! Hope you're all doing well. I have a small question about Crawlee.
My use case is a little simpler than a typical crawl: I just want to scrape a single URL every few seconds.
To do this, I create a RequestList with just one URL and start the crawler. Sometimes the crawler hits HTTP errors and the request fails. That's fine for me, since I'm going to run the crawler again a few seconds later anyway; I'd prefer failed requests to be ignored rather than automatically reclaimed for retry.
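For reference, here's roughly what my setup looks like (a minimal sketch using crawlee's `CheerioCrawler` and `RequestList.open`; the URL and handler body are placeholders, not my real code):

```typescript
import { CheerioCrawler, RequestList } from 'crawlee';

// Placeholder URL -- in practice this is the single endpoint I poll.
const requestList = await RequestList.open(null, ['https://example.com']);

const crawler = new CheerioCrawler({
    requestList,
    requestHandler: async ({ request, $ }) => {
        // Scrape whatever I need from the single page here.
        console.log(`Scraped ${request.url}`);
    },
    // When an HTTP error occurs, the failed request currently gets
    // reclaimed and retried -- this is the behaviour I'd like to skip.
});

await crawler.run();
```

I then re-run this whole script every few seconds from an outer scheduler.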