Python - Why is this data being written to file incorrectly?


Problem Description


Only the first result is being written to a csv, with one letter of the url per row. This is instead of all urls being written, one per row.


What am I doing wrong in the last section of this code that causes the csv to be written with only one of the results instead of all of them?

import requests
from bs4 import BeautifulSoup
import csv

def grab_listings():
    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/2/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/3/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/4/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/5/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/6/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/7/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/8/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

    url = ("http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/9/")
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'html.parser')
    l_area = soup.find("div", {"class":"wlt_search_results"})
    for elem in l_area.findAll("a", {"class":"frame"}):
        return elem["href"]

l = grab_listings()


with open ("gyms.csv", "wb") as file:
        writer = csv.writer(file)
        for row in l:
            writer.writerow(row)

Recommended Answer

Simplified:

import requests
from bs4 import BeautifulSoup
import csv


def grab_listings():
    # Loop over all nine result pages instead of repeating the block,
    # and yield each href so the function produces every url rather
    # than returning only the first one.
    for i in range(1, 10):
        url = "http://www.gym-directory.com/listing-category/gyms-fitness-centres/page/{}/"

        r = requests.get(url.format(i))
        soup = BeautifulSoup(r.text, 'html.parser')
        l_area = soup.find("div", {"class": "wlt_search_results"})

        for elem in l_area.findAll("a", {"class": "frame"}):
            yield elem["href"]

l = grab_listings()


with open("gyms.csv", "w", newline="") as file:
    writer = csv.writer(file)
    for row in l:
        # writerow expects a sequence of fields; wrap the url in a
        # list so it is written as one cell, not one letter per cell.
        writer.writerow([row])
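The two bugs can be reproduced in isolation: a `return` inside a `for` loop exits the function on the first iteration, while `yield` produces every item; and `csv.writer.writerow` treats a bare string as a sequence of one-character fields. A minimal sketch with hypothetical urls (no network access needed):

```python
import csv
import io

def first_only(items):
    for item in items:
        return item  # exits the function on the first iteration

def all_items(items):
    for item in items:
        yield item  # produces every item, one at a time

urls = ["http://example.com/a", "http://example.com/b"]

print(first_only(urls))       # prints only the first url
print(list(all_items(urls)))  # prints both urls

# writerow splits a bare string into one-character fields;
# wrapping it in a list writes the whole url as a single cell.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(urls[0])    # h,t,t,p,:,/,/,...
writer.writerow([urls[0]])  # http://example.com/a
print(buf.getvalue())
```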
