Why does my "scrapy" not scrape anything?


Problem description

I don't know where the issue lies; it's probably super easy to fix since I am new to Scrapy. I hope to find a solution. Thanks in advance.

I am using Ubuntu 14.04, Python 3.4.

My spider:

```python
class EnActressSpider(scrapy.Spider):
    name = "en_name"
    allowed_domains = ["www.r18.com/", "r18.com/"]
    start_urls = ["http://www.r18.com/videos/vod/movies/actress/letter=a/sort=popular/page=1",]


def parse(self, response):
    for sel in response.xpath('//*[@id="contents"]/div[2]/section/div[3]/ul/li'):
        item = En_Actress()
        item['image_urls'] = sel.xpath('a/p/img/@src').extract()
        name_link = sel.xpath('a/@href').extract()
        request = scrapy.Request(name_link, callback = self.parse_item, dont_filter=True)
        request.meta['item'] = item
        yield request

    next_page = response.css("#contents > div.main > section > div.cmn-sec-item01.pb00 > div > ol > li.next > a::attr('href')")
    if next_page:
        url = response.urljoin(next_page[0].extract())
        yield scrapy.Request(url, self.parse, dont_filter=True)



def parse_item(self, response):
    item = reponse.meta['item']
    name = response.xpath('//*[@id="contents"]/div[1]/ul/li[5]/span/text()')
    item['name'] = name[0].encode('utf-8')
    yield item
```

Log:

```
{'downloader/request_bytes': 988,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 48547,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/301': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 7, 25, 6, 46, 36, 940936),
'log_count/DEBUG': 1,
'log_count/INFO': 1,
'response_received_count': 1,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'spider_exceptions/TypeError': 1,
'start_time': datetime.datetime(2016, 7, 25, 6, 46, 35, 908281)}
```

Any help is greatly appreciated.

Answer

There seem to be a few syntax errors. I've cleaned the code up and it seems to work fine here. Another edit I made was removing the dont_filter parameter from the Request objects, since you don't want to scrape duplicates. I also adjusted allowed_domains, since it was filtering out some content. In the future you should post the whole log.

```python
import scrapy


class EnActressSpider(scrapy.Spider):
    name = "en_name"
    allowed_domains = ["r18.com"]
    start_urls = ["http://www.r18.com/videos/vod/movies/actress/letter=a/sort=popular/page=1", ]

    def parse(self, response):
        for sel in response.xpath('//*[@id="contents"]/div[2]/section/div[3]/ul/li'):
            item = dict()
            item['image_urls'] = sel.xpath('a/p/img/@src').extract()
            name_link = sel.xpath('a/@href').extract_first()
            request = scrapy.Request(name_link, callback=self.parse_item)
            request.meta['item'] = item
            yield request

        next_page = response.css(
            "#contents > div.main > section > div.cmn-sec-item01.pb00 > "
            "div > ol > li.next > a::attr('href')").extract_first()
        if next_page:
            url = response.urljoin(next_page)
            yield scrapy.Request(url, self.parse)

    def parse_item(self, response):
        item = response.meta['item']
        name = response.xpath('//*[@id="contents"]/div[1]/ul/li[5]/span/text()').extract_first()
        item['name'] = name.encode('utf-8')
        yield item
```
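To see why the original allowed_domains filtered content out: the entries contained slashes ("www.r18.com/", "r18.com/"), and an allowed domain is matched against the request's hostname, so a string containing a path separator can never match. Below is a minimal sketch of that kind of hostname check (an illustration of the idea only, not Scrapy's actual offsite-middleware implementation):

```python
from urllib.parse import urlparse

def is_offsite(url, allowed_domains):
    # Simplified domain check: a request is on-site only if its hostname
    # equals an allowed domain or is a subdomain of one.
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in allowed_domains)

# "www.r18.com/" contains a slash, so it can never equal a hostname:
print(is_offsite("http://www.r18.com/videos", ["www.r18.com/", "r18.com/"]))  # True: filtered
print(is_offsite("http://www.r18.com/videos", ["r18.com"]))                   # False: allowed
```

This is why using the bare registered domain ("r18.com") is the safe choice: it also covers "www.r18.com" as a subdomain.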

