A community member has a crawler that scrapes multiple websites, each with multiple URLs, and wants to send the scraped data directly to a database instead of storing it on the EC2 instance. They ask whether there is a way to pass additional arguments to the @router.default_handler function when running the crawler. Another community member suggests using Request.from_url to pass arbitrary arguments, which can then be accessed in the handler via context.request.user_data. The original poster confirms that this is the solution they were looking for.
Not 100% sure if that is what you mean (or if it is the best solution), but you can pass arbitrary arguments to Request.from_url, which you can then read in the handler. E.g. what I do: for url in urls: Request.from_url(url=url, label=label, user_data={ 'abc': abc, 'xyz': xyz }) (note: def is a reserved keyword in Python, so it can't be used as a bare name here).
And then access it in the handler via context.request.user_data['abc']
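To make the flow concrete, here is a minimal, self-contained sketch of the pattern. The Request, Router, and Context classes below are simplified stand-ins written for illustration, not the real Crawlee API; they only model how user_data attached at request-creation time travels to the handler:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Optional

# Simplified stand-ins for Crawlee-style Request / context objects,
# just to show how user_data flows from request creation to the handler.

@dataclass
class Request:
    url: str
    label: Optional[str] = None
    user_data: dict[str, Any] = field(default_factory=dict)

    @classmethod
    def from_url(cls, url: str, label: Optional[str] = None,
                 user_data: Optional[dict[str, Any]] = None) -> "Request":
        # Attach arbitrary per-request metadata via user_data.
        return cls(url=url, label=label, user_data=user_data or {})

@dataclass
class Context:
    request: Request

class Router:
    def __init__(self) -> None:
        self._default: Optional[Callable[[Context], None]] = None

    def default_handler(self, fn: Callable[[Context], None]):
        # Register fn as the default handler (decorator usage).
        self._default = fn
        return fn

    def dispatch(self, request: Request) -> None:
        # Wrap the request in a context and invoke the handler.
        assert self._default is not None
        self._default(Context(request=request))

router = Router()

@router.default_handler
def handle(context: Context) -> None:
    # The extra arguments travel with the request and are read back here,
    # e.g. a database table name chosen when the request was created.
    print(context.request.url, "->", context.request.user_data["table"])

# Build one request per scraped URL, attaching per-site metadata
# (URLs and table names below are made up for the example):
for url, table in [("https://example.com/a", "site_a"),
                   ("https://example.com/b", "site_b")]:
    router.dispatch(Request.from_url(url, label="detail",
                                     user_data={"table": table}))
```

The key point is that user_data is just a dict carried on the request object, so anything the handler needs (database names, site identifiers, credentials references) can be attached when the request is created and read back via context.request.user_data.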