Scrapy Shell and Scrapy Splash

Question
We've been using the scrapy-splash middleware to pass the scraped HTML source through the Splash JavaScript engine running inside a Docker container.
To use Splash in a spider, we configure several required project settings and yield a Request specifying particular meta arguments:
yield Request(url, self.parse_result, meta={
'splash': {
'args': {
# set rendering arguments here
'html': 1,
'png': 1,
# 'url' is prefilled from request url
},
# optional parameters
'endpoint': 'render.json', # optional; default is render.json
'splash_url': '<url>', # overrides SPLASH_URL
'slot_policy': scrapyjs.SlotPolicy.PER_DOMAIN,
}
})
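For context, the "required project settings" mentioned above are the middleware and dupefilter entries from the scrapy-splash README. A sketch of a settings.py, assuming Splash is reachable at localhost:8050 (adjust SPLASH_URL to your own container):

```python
# Hypothetical settings.py fragment for scrapy-splash; the middleware
# names and priorities follow the scrapy-splash README.
SPLASH_URL = 'http://localhost:8050'  # assumption: local Splash container

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    # Avoids sending duplicate Splash arguments with every request
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

# Makes request fingerprinting aware of Splash arguments
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
```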
This works as documented. But how can we use scrapy-splash inside the Scrapy shell?
Answer

Just wrap the URL you want to shell into a Splash HTTP API call. So you would want something like:
scrapy shell 'http://localhost:8050/render.html?url=http://domain.com/page-with-javascript.html&timeout=10&wait=0.5'
where:

- localhost:port is where your Splash service is running
- url is the URL you want to crawl (don't forget to urlquote it!)
- render.html is one of the possible HTTP API endpoints; in this case it returns the rendered HTML page
- timeout is the timeout in seconds
- wait is the time in seconds to wait for JavaScript to execute before reading/saving the HTML
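Building that shell URL by hand gets error-prone once the target URL contains its own query string, since the url parameter must be percent-encoded. A small Python sketch that assembles the command from the list above (the target page and Splash address are the placeholders from the example, not real endpoints):

```python
from urllib.parse import urlencode

# Hypothetical target page and local Splash instance from the example above
target = "http://domain.com/page-with-javascript.html"

# urlencode percent-encodes the target URL, which is the "urlquote it!" step
params = urlencode({"url": target, "timeout": 10, "wait": 0.5})
shell_url = "http://localhost:8050/render.html?" + params

# The finished command to paste into a terminal
command = "scrapy shell '%s'" % shell_url
print(command)
```

The quotes around the URL in the shell command matter too: without them, the `&` separators would be interpreted by your login shell rather than passed to scrapy.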