Apify and Crawlee Official Forum

Hi, I've seen mentions of a "pay per event" pricing model (https://docs.apify.com/platform/actors/running/actors-in-store#pay-per-event and https://apify.com/mhamas/pay-per-event-example), but I can't find how to use it for one of my Actors; I only see the rental and pay-per-result options.
How can we use this pay per event pricing model?
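For reference, here is a minimal sketch of what charging events from inside an Actor can look like, assuming a recent apify Python SDK that exposes Actor.charge and an Actor whose pay-per-event pricing is already configured in Apify Console (the event name "result-item" is a made-up example and must match an event defined in that configuration):

Python
from apify import Actor


async def main() -> None:
    async with Actor:
        # ... scrape something ...
        item = {"url": "https://example.com", "title": "Example"}
        await Actor.push_data(item)

        # Charge the user for one occurrence of a custom event.
        # "result-item" is hypothetical; it must match an event name
        # defined in the Actor's pay-per-event pricing configuration.
        await Actor.charge(event_name="result-item", count=1)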
8 comments
Hello, I would like to ask whether any Apify tool can, for example, find a similar image (https://i.postimg.cc/KzRHFKQc/55.jpg) and extract the product name from the matching links into a CSV. Could we use Google Lens? I want to use this to automatically name antique products.

Thanks for all the information and help! 👋
1 comment

I have created a scraper but am having issues publishing it to the Store. I opened my account 2 days ago and would like to start earning money with my scraper.

I'm attempting to validate that the proxy works and am not having luck. Should I expect the following to work?

Plain Text
~ λ curl --proxy http://proxy.apify.com:8000 \
    -U 'groups-RESIDENTIAL,country-US:apify_proxy_redacted' \
    -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" \
    https://httpbin.org/ip
curl: (56) CONNECT tunnel failed, response 403
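Since a 403 on the CONNECT request can simply mean the account has no access to the requested proxy group (RESIDENTIAL here), it may help to first test with Apify Proxy's default settings. A minimal sketch in Python, assuming the requests library, the "auto" proxy username for default groups, and a placeholder password:

Python
import requests

# "auto" uses the account's default proxy groups; replace the
# placeholder password with your actual Apify Proxy password.
PROXY_URL = "http://auto:apify_proxy_redacted@proxy.apify.com:8000"

resp = requests.get(
    "https://api.apify.com/v2/browser-info",
    proxies={"http": PROXY_URL, "https": PROXY_URL},
    timeout=30,
)
print(resp.status_code)
print(resp.text)

If this succeeds while the RESIDENTIAL variant keeps returning 403, the account most likely does not have residential proxy access enabled.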
3 comments
My account: https://apify.com/wudizhangzhi
Actor: https://console.apify.com/actors/KAkfFaz8JVdvOQQ5F/source

Error: Operation failed! (You currently don’t have the necessary permissions to publish an Actor. This is expected behavior. Please contact support for assistance in resolving the issue.)

@Saurav Jain
2 comments
Hi, I'm new to Apify and I want to publish my newly built job scraper, but when I set up monetization there are two options, a business ID and a personal ID. Where can I get these?
1 comment
Hi everyone,
I recently ran a Google Maps scraper (https://apify.com/compass/crawler-google-places) to collect place data, and I've discovered that there are many more places available than my first run collected.
Current Situation:
  • Successfully completed an initial scrape
  • Have collected data for X places
  • Discovered there are significantly more places available
  • Already have a dataset from the first run
Questions:
Is it possible to increase the place limit on my existing run configuration?
If I need to create a new run, what's the best way to:
  • Import/merge my existing scraped data
  • Avoid duplicating places already collected (a deduplication sketch follows below)
  • Continue from where the previous run stopped
Any guidance on the most efficient approach would be greatly appreciated.
Thanks in advance!
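For the import/merge and deduplication part, one workable approach is to pull both datasets through the Apify API client and keep a single record per place identifier. A minimal sketch in Python, assuming the apify-client package, placeholder token and dataset IDs, and a placeId field on each item (verify the exact field name in your own dataset):

Python
from apify_client import ApifyClient

client = ApifyClient("MY_APIFY_TOKEN")  # placeholder token

OLD_DATASET_ID = "old-dataset-id"  # dataset from the first run
NEW_DATASET_ID = "new-dataset-id"  # dataset from the follow-up run


def load_items(dataset_id: str) -> list[dict]:
    # iterate_items() pages through the dataset; clean=True drops
    # Apify's internal metadata fields.
    return list(client.dataset(dataset_id).iterate_items(clean=True))


merged: dict[str, dict] = {}
for item in load_items(OLD_DATASET_ID) + load_items(NEW_DATASET_ID):
    # "placeId" is an assumption; check the actual key in your data.
    key = item.get("placeId") or item.get("url")
    if key and key not in merged:
        merged[key] = item

print(f"Unique places after merging: {len(merged)}")

A de-duplicated list like this can then be pushed to a new dataset, and the collected place IDs can be used to skip known places in future runs.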
4 comments