I have a Puppeteer scraper that performs a lot of actions on a page, and at one point the browser fails. It's a page with infinite scroll, so I have to click a button and scroll down repeatedly. After 70-80 of these interactions the browser crashes, and the request is retried as usual. The point of those actions is to collect URLs that I want to navigate to afterwards. I'd like to handle the browser crash somehow, so that when it happens I can resume from the URLs I've already collected instead of starting over.
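
For context, here is a minimal sketch of the persist-as-you-go approach I'm considering, assuming plain Puppeteer with no crawler framework. The state file name, the page URL, the selectors, and the iteration count are all placeholders, not my real values:

```js
const fs = require('fs');
const puppeteer = require('puppeteer');

const STATE_FILE = 'collected-urls.json'; // hypothetical persistence file

// Load any URLs saved by a previous (possibly crashed) run.
function loadUrls() {
  try {
    return JSON.parse(fs.readFileSync(STATE_FILE, 'utf8'));
  } catch {
    return [];
  }
}

// Persist after every batch so a crash loses at most one iteration.
function saveUrls(urls) {
  fs.writeFileSync(STATE_FILE, JSON.stringify(urls, null, 2));
}

(async () => {
  const urls = loadUrls();
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto('https://example.com/list'); // placeholder URL

    for (let i = 0; i < 80; i++) {
      await page.click('.load-more'); // placeholder selector
      await page.evaluate(() => window.scrollTo(0, document.body.scrollHeight));

      // Collect hrefs loaded by this batch and persist them immediately.
      const batch = await page.$$eval('a.item-link', as => as.map(a => a.href)); // placeholder selector
      for (const url of batch) {
        if (!urls.includes(url)) urls.push(url);
      }
      saveUrls(urls);
    }
  } finally {
    await browser.close();
  }
})().catch(err => {
  // Even if the browser crashes mid-run, STATE_FILE holds everything
  // collected so far, so the retried run could navigate those URLs directly.
  console.error(err);
});
```

The idea is that writing the file after each click/scroll batch means a crash loses at most one iteration's worth of URLs, and the retried run can check the file first and jump straight to navigating. Is there a cleaner or more idiomatic way to do this, especially when retries are handled for me automatically?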