just advanced to level 3! Thanks for your contributions! 🎉
From the documentation, it should pick a proxy with each session ID. With session pool,
I'd hope this gets automatically managed since it starts 2 scrapes concurrently, but I can manage it myself, that's okay too.
How do I make it use multiple session IDs then?
Just verified, it is the same session ID each time:
=== Visiting https://ifconfig.co/?a=1 ===
What is my IP address? — ifconfig.co
session_EimgHiOPKn
=== Visiting https://ifconfig.co/?a=1 ===
What is my IP address? — ifconfig.co
session_EimgHiOPKn
Proxy IP is 185.147.143.125
Proxy IP is 185.147.143.125
pushing data 15
pushing data 15
=== Visiting https://ifconfig.co/?a=3 ===
What is my IP address? — ifconfig.co
session_EimgHiOPKn
Proxy IP is 185.147.143.125
=== Visiting https://ifconfig.co/?a=4 ===
What is my IP address? — ifconfig.co
session_EimgHiOPKn
Proxy IP is 185.147.143.125
It would be nice if the documentation mentioned how to control this a little. Maybe it does and I just didn't find it; asking here is not super practical time-wise 🙂
How to rotate proxies / session pool IDs?
From reading the posts here I guess the browser has one session only and starting a new session means a new browser, thus concurrency doesn't mean multiple sessions at once, but just multiple tabs in one browser?
In that case, how do we implement concurrent scraping from multiple IPs?
I'd prefer to use each IP nicely and switch to a new one with a new session before I get blocked, not looking to burn each IP in the process.
Concurrency: How to use multiple proxies / session pool IDs?
So trying again to get this multiple IPs thing to work, I am running Crawlee twice to see if I can get it working that way.
It could almost work, but it is picking the same URLs from the queue in the same order so it's pointless unless I can get around that.
You answered it yourself: for browser crawlers, a session is bound to a browser instance, not to a Request. You can have multiple requests running concurrently with the same session, and you can have multiple sessions/browsers running concurrently. You can set everything in
browserPoolOptions
, e.g. you can have only 1 page per browser and then it will run e.g. 10 browsers in parallel.
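To make that concrete, here is a minimal sketch of such an options object (assuming a Crawlee browser crawler; `maxConcurrency`, `useSessionPool`, and `browserPoolOptions.maxOpenPagesPerBrowser` are real option names, but the exact values are illustrative):

```javascript
// Sketch only: with 1 page per browser, each of the 10 concurrent
// requests runs in its own browser instance, and each browser gets
// its own session (and therefore its own proxy IP).
const crawlerOptions = {
    maxConcurrency: 10,       // up to 10 requests in parallel
    useSessionPool: true,     // one session per browser instance
    browserPoolOptions: {
        maxOpenPagesPerBrowser: 1, // 1 page per browser => 10 browsers
    },
};
```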
https://crawlee.dev/api/browser-pool/interface/BrowserPoolOptions
Thanks! Not sure how I missed the browserPoolOptions
setting, was fixated on the sessions somehow.
Just glancing at it, I'm not sure which of the options to focus on
that's the same link as before
maxOpenPagesPerBrowser: 1
- you have 10 requests in parallel, each with separate browser
so that's the only option to set? session pool concurrency to 10
and this to 1
will open 10 browsers?
just advanced to level 4! Thanks for your contributions! 🎉
This looks also handy:
browserPoolOptions: {
    retireBrowserAfterPageCount: 10,
},
As I want the "surfing" to seem as natural as possible, would that also retire the session?
Yeah, that means how often you want to rotate. 10 looks pretty natural I think
Btw, is there a way to make it a callback so I can randomize it?
You can manipulate the properties of the crawler as you go. TypeScript may complain, but in the JS world you can do whatever 😄
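Building on that, a rough sketch of randomizing the retire threshold. The helper is hypothetical, and the property path in the trailing comment is an assumption about the pool internals, not a documented API:

```javascript
// Hypothetical helper: pick a random integer retire threshold in
// [min, max], so each browser "surfs" a slightly different number
// of pages before being rotated out.
function randomRetireCount(min, max) {
    return min + Math.floor(Math.random() * (max - min + 1));
}

// Assumption, not a documented API: reassign the threshold at runtime
// between page visits, accepting that TypeScript may complain, e.g.:
// crawler.browserPool.retireBrowserAfterPageCount = randomRetireCount(5, 15);
```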
This was helpful for me as well, as I ran into the same issue: I expected a rotation through proxy URLs and instead saw all requests go through the same single one.
Perhaps adding a note to the ProxyConfiguration
or proxy management docs would help?