Grab data from a given URL and put it into a file using Scrapy


Problem description

I am trying to crawl a given web site deeply and grab the text from all of its pages. I am using Scrapy to scrape the site.

Here is how I am running the spider: scrapy crawl stack_crawler -o items.json

The items.json file comes out empty.

Here is the spider code snapshot:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

#from tutorial.items import TutorialItem

from tutorial.items import DmozItem

class StackCrawlerSpider(CrawlSpider):
    name = 'stack_crawler'
    allowed_domains = ['http://www.dmoz.org']
    start_urls = ['http://www.dmoz.org/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = TutorialItem()
        i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        i['name'] = response.xpath('//div[@id="name"]').extract()
        i['description'] = response.xpath('//div[@id="description"]').extract()
        return i

Here is the log that I get when I run the spider:

dummy-MacBook-Pro:spiders Dummy$ scrapy crawl stack_crawler -o items.json
2016-06-09 10:22:23 [scrapy] INFO: Scrapy 1.1.0 started (bot: tutorial)
2016-06-09 10:22:23 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'FEED_URI': 'items.json', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'json'}
2016-06-09 10:22:23 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-06-09 10:22:23 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-06-09 10:22:23 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-06-09 10:22:23 [scrapy] INFO: Enabled item pipelines:
[]
2016-06-09 10:22:23 [scrapy] INFO: Spider opened
2016-06-09 10:22:23 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-06-09 10:22:23 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2016-06-09 10:22:24 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/robots.txt> (referer: None)
2016-06-09 10:22:24 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/> (referer: None)
2016-06-09 10:22:24 [scrapy] INFO: Closing spider (finished)
2016-06-09 10:22:24 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 430,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 5694,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 6, 9, 4, 52, 24, 862900),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 6, 9, 4, 52, 23, 483092)}
2016-06-09 10:22:24 [scrapy] INFO: Spider closed (finished)

Item code snapshot (items.py):

import scrapy
class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

Can anyone help me figure out what I am doing wrong at the code level, so that I actually get data?

Answer

I think you are new to Scrapy, and you have made several mistakes in your code:

1. Scrapy provides the default parse / start_requests callbacks, so you can avoid using LinkExtractor here. Use the parse function and handle the start_urls response directly in it.

2. You have defined one item in items.py but instantiate a different one in the spider (items.py defines DmozItem while the spider uses TutorialItem), so the field names differ and you get a conflict.

3. The XPath expressions you use to extract the field values are incorrect (you can check them quickly in the Scrapy shell, as sketched below).
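
A quick way to verify XPath expressions before wiring them into a spider is the Scrapy shell. Here is a minimal sketch, assuming the start URL and the selectors from the corrected spider below; adjust them to whatever page and fields you actually need:

scrapy shell "http://www.dmoz.org/"
>>> # inside the shell, `response` is already populated with the fetched page
>>> response.xpath('//meta[@property="og:title"]/@content').extract()
>>> response.xpath('//meta[@name="description"]/@content').extract()
>>> response.url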

You should definitely try this.

Spider code snapshot:

import scrapy

from lxml import html
from scrapy.spiders import CrawlSpider, Rule
from tutorial.items import DmozItem

class StackCrawlerSpider(CrawlSpider):
    name = 'stack_crawler'
    allowed_domains = ['http://www.dmoz.org']
    start_urls = ['http://www.dmoz.org/']

    def parse(self, response):
        # Handle the start_urls response directly instead of relying on LinkExtractor rules.
        doc = html.fromstring(response.body)
        i = DmozItem()
        # Populate exactly the fields that DmozItem defines: title, link, desc.
        i['title'] = doc.xpath('//meta[@property="og:title"]/@content')
        i['link'] = response.url
        i['desc'] = doc.xpath('//meta[@name="description"]/@content')
        yield i

Item code snapshot (items.py):

import scrapy
class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

This works.
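
Re-running the same command as before, scrapy crawl stack_crawler -o items.json, should now write one item with title, link and desc fields into items.json, assuming the start page exposes the og:title and description meta tags that the XPath expressions look for. Note that the -o option appends to an existing feed file, so you may want to delete the old, empty items.json first to get clean JSON output.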
