Web crawler - Python pagination scraping problem

This article covers a pagination problem encountered when crawling web pages with Python and how to solve it. It may be a useful reference for anyone running into the same issue.

Problem description

Question

My approach is to first build the list of all page URLs, then request each page, parse the contents with BeautifulSoup, and finally write the results to a CSV file. However, the script raises an error. Why is that? Is something wrong with my approach? I'd appreciate any help. My code is as follows:

# -*- coding:utf-8 -*-
import requests
from bs4 import BeautifulSoup
import csv

user_agent = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
url = 'http://finance.qq.com'

def get_url(url):
    links = []
    page_number = 1
    while page_number <=36:
        link = url+'/c/gdyw_'+str(page_number)+'.htm'
        links.append(link)
        page_number = page_number + 1
    return links

all_link = get_url(url)

def get_data(all_link):
    response = requests.get(all_link)
    soup = BeautifulSoup(response.text,'lxml')
    soup = soup.find('div',{'id':'listZone'}).findAll('a')
    return soup

def main():
    with open("test.csv", "w") as f:
        f.write("url\t titile\n")
        for item in get_data(all_link):
            f.write("{}\t{}\n".format(url + item.get("href"), item.get_text()))

if __name__ == "__main__":
    main()

Error message:

Traceback (most recent call last):
  File "D:/Python34/write_csv.py", line 33, in <module>
    main()
  File "D:/Python34/write_csv.py", line 29, in main
    for item in get_data(all_link):
  File "D:/Python34/write_csv.py", line 21, in get_data
    response = requests.get(all_link)
  File "D:\Python34\lib\site-packages\requests\api.py", line 71, in get
    return request('get', url, params=params, **kwargs)
  File "D:\Python34\lib\site-packages\requests\api.py", line 57, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 475, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 579, in send
    adapter = self.get_adapter(url=request.url)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 653, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)

Solution

You can't call requests.get on a list directly.

http://docs.python-requests.o...

url – URL for the new Request object.

You should loop over the URLs and request them one at a time.
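
For illustration, here is a minimal sketch of the failure and of the loop fix (the two example URLs just follow the gdyw_ pattern from the question): requests turns whatever argument it receives into a string, and the string form of a list does not start with http:// or https://, so get_adapter finds no matching connection adapter and raises InvalidSchema, exactly as in the traceback above.

import requests

urls = ['http://finance.qq.com/c/gdyw_1.htm',
        'http://finance.qq.com/c/gdyw_2.htm']

# requests.get(urls)  # would raise InvalidSchema: the list becomes the string
#                     # "['http://...', ...]", which has no http:// prefix

for uri in urls:                      # request each page individually instead
    response = requests.get(uri)
    print(uri, response.status_code)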


update:

I've revised your program for you: it runs on Python 3 at least. I tried it on Python 2, ran into unicode issues, and didn't bother to fix them.

def get_data(all_link):
    for uri in all_link:
        response = requests.get(uri)
        soup = BeautifulSoup(response.text,'lxml')
        soup = soup.find('div',{'id':'listZone'}).findAll('a')
        for small_soup in soup:
            yield small_soup

Rewrite that part of your script, replacing the original get_data with the version above.
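
For completeness, here is a sketch of the whole revised script with the generator version of get_data plugged into the original main() (Python 3; it assumes the div#listZone structure from the question is still in place):

# -*- coding:utf-8 -*-
import requests
from bs4 import BeautifulSoup

url = 'http://finance.qq.com'

def get_url(url):
    # build the 36 paginated listing URLs, gdyw_1.htm .. gdyw_36.htm
    return [url + '/c/gdyw_' + str(n) + '.htm' for n in range(1, 37)]

def get_data(all_link):
    # request each listing page and yield every <a> tag inside div#listZone
    for uri in all_link:
        response = requests.get(uri)
        soup = BeautifulSoup(response.text, 'lxml')
        for a in soup.find('div', {'id': 'listZone'}).findAll('a'):
            yield a

def main():
    all_link = get_url(url)
    with open("test.csv", "w", encoding="utf-8") as f:
        f.write("url\ttitle\n")
        for item in get_data(all_link):
            f.write("{}\t{}\n".format(url + item.get("href"), item.get_text()))

if __name__ == "__main__":
    main()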

That concludes this article on the web crawler - Python pagination scraping problem. We hope the answer above is helpful, and thank you for supporting IT屋!
