Apify and Crawlee Official Forum


URL not timing out for some reason

Hello, my scraper seems to go into an infinite run on some URLs for some reason, even though I set a timeout:
Plain Text
 request = r.get(url=url, timeout=1.5)
Any ideas why?
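If r here is the Python requests library, one likely cause is that timeout=1.5 is not a cap on the total call: it bounds the connection attempt and each wait between received bytes, so a server that keeps sending data slowly, or a chain of redirects, can keep a single get() running for minutes. A minimal sketch of how that parameter is usually set, assuming requests and using one of the URLs from the logs later in the thread as an example:
Plain Text
# Sketch only, assuming `r` is the `requests` library.
import requests as r

url = "https://broadsign.com"  # example URL taken from the logs in this thread

try:
    # timeout can be a (connect, read) tuple; each value bounds one phase,
    # not the total duration of the request. Each redirect hop gets its own timeout.
    request = r.get(url=url, timeout=(1.5, 1.5))
except r.exceptions.Timeout:
    # connection or read timed out -- skip this URL and move on
    request = None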
I figured out that it doesn't go into an infinite run, but rather takes 1-2 minutes to process a URL, or sometimes 5 minutes:
Plain Text
2024-03-14T07:05:07.582Z https://dunamisenergy.com is an invalid url
2024-03-14T07:05:07.603Z Processing https://broadsign.com...
2024-03-14T07:07:38.962Z Processing https://heartlandchargingservices.com...
Plain Text
2024-03-14T06:59:42.263Z Processing https://shinetsua.com...
2024-03-14T07:04:00.486Z Processing https://noralta.com...
Hello, is this a platform-related issue, or is it caused by some of your own logic in how you process the requests?
I believe it's platform-related.
Could you give me the ID of the run?
Here yahoo.com takes 1m30s
and yama-group takes 1m.
In my script there are at most 21 requests in total, each timing out after 1.5s,
which is at most 31.5 seconds,
and if the first request times out,
[Attachment: image.png]
it should skip it.
Here there is a timeout as well.
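Since the 1.5 s applies per connection attempt and per read rather than to the whole call, 21 requests are not actually bounded by 31.5 s in total. A sketch of one workaround (not something proposed in the thread): enforce a hard per-URL deadline by running each call in a worker thread and giving up waiting after a fixed time. The fetch helper and the 10 s cap are illustrative, and the example URLs are ones mentioned above:
Plain Text
# Sketch of a hard per-URL deadline; the helper name and the 10 s cap are illustrative.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout
import requests

def fetch(url):
    # per-phase timeout as before; the real cap is enforced by the caller
    return requests.get(url, timeout=1.5)

urls = ["https://yahoo.com", "https://broadsign.com"]  # examples mentioned in the thread

with ThreadPoolExecutor(max_workers=4) as pool:
    for url in urls:
        future = pool.submit(fetch, url)
        try:
            response = future.result(timeout=10)  # hard 10 s wait per URL
            print(url, response.status_code)
        except (FutureTimeout, requests.exceptions.RequestException):
            print(url, "skipped")  # timed out or failed -- move on
# Note: a call that misses the deadline keeps running in its worker thread
# until it finishes on its own; the pool only shuts down after that.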