Start scrapy from Flask route
Question
I want to build a crawler that takes the URL of a webpage to be scraped and returns the result to a webpage. Right now I start scrapy from the terminal and store the response in a file. How can I start the crawler when some input is posted to a Flask route, process it, and return the response?
Answer
You need to create a CrawlerProcess inside your Flask application and run the crawl programmatically. See the docs.
import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    # Your spider definition
    ...

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(MySpider)
process.start()  # the script will block here until the crawl is finished
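One caveat with the approach above: CrawlerProcess.start() runs a Twisted reactor, which cannot be restarted in the same process, so calling it once per Flask request will fail after the first crawl. A common workaround is to launch each crawl in a separate subprocess. The sketch below assumes a spider named my_spider that accepts a start_url argument and that the scrapy CLI is installed; the helper names are illustrative, not part of Scrapy's API.

```python
import subprocess
import sys

def build_crawl_command(spider_name, start_url, output_file):
    # -a passes an argument to the spider; -o writes items to a feed file.
    return [
        sys.executable, "-m", "scrapy", "crawl", spider_name,
        "-a", f"start_url={start_url}",
        "-o", output_file,
    ]

def run_spider(spider_name, start_url, output_file):
    """Run the crawl in a fresh process so each request gets its own
    Twisted reactor, sidestepping the no-restart limitation."""
    cmd = build_crawl_command(spider_name, start_url, output_file)
    return subprocess.run(cmd, capture_output=True, text=True)
```

A Flask route could call run_spider(), then read output_file and return its contents as the response, though this still blocks the request for the duration of the crawl.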
Before moving on with your project I advise you to look into a Python task queue (like rq). This will allow you to run Scrapy crawls in the background, and your Flask application will not freeze while the crawls are running.
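rq itself needs a Redis server, but the pattern it gives you can be sketched with just the standard library: the route enqueues a job and returns a job id immediately, a background worker runs the crawl, and the client polls for the result. A minimal sketch (all names are illustrative, and the worker only simulates the crawl):

```python
import queue
import threading
import uuid

jobs = {}              # job_id -> result ("pending" until the worker finishes)
work_q = queue.Queue()

def worker():
    # Background worker: pull (job_id, url) tasks and process them.
    while True:
        job_id, url = work_q.get()
        # A real app would launch the Scrapy crawl here.
        jobs[job_id] = f"crawled {url}"
        work_q.task_done()

threading.Thread(target=worker, daemon=True).start()

def enqueue_crawl(url):
    """What a Flask route would call: returns a job id right away
    instead of blocking while the crawl runs."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = "pending"
    work_q.put((job_id, url))
    return job_id
```

With rq the shape is the same, except enqueue_crawl becomes q.enqueue(...) against a Redis-backed Queue and the worker is a separate rq worker process.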