How to parse an xml feed using python?


Problem description


I am trying to parse this xml (http://www.reddit.com/r/videos/top/.rss) and am having troubles doing so. I am trying to save the youtube links in each of the items, but am having trouble because of the "channel" child node. How do I get to this level so I can then iterate through the items?

import urllib2
from xml.etree import ElementTree as etree

#reddit parse
reddit_file = urllib2.urlopen('http://www.reddit.com/r/videos/top/.rss')
#convert to string:
reddit_data = reddit_file.read()
#close file because we don't need it anymore:
reddit_file.close()

#entire feed
reddit_root = etree.fromstring(reddit_data)
channel = reddit_root.findall('{http://purl.org/dc/elements/1.1/}channel')
print channel

reddit_feed=[]
for entry in channel:   
    #get description, url, and thumbnail
    desc = None  #not sure how to get this

    reddit_feed.append([desc])

Answer

You can try findall('channel/item')

import urllib2
from xml.etree import ElementTree as etree
#reddit parse
reddit_file = urllib2.urlopen('http://www.reddit.com/r/videos/top/.rss')
#convert to string:
reddit_data = reddit_file.read()
print reddit_data
#close file because we don't need it anymore:
reddit_file.close()

#entire feed
reddit_root = etree.fromstring(reddit_data)
item = reddit_root.findall('channel/item')
print item

reddit_feed=[]
for entry in item:   
    #get description, url, and thumbnail
    desc = entry.findtext('description')  
    reddit_feed.append([desc])
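The question also asks for the youtube link and thumbnail of each item, not just the description. The link lives in each item's plain link element, while the thumbnail typically sits in a namespace-qualified media:thumbnail element, which needs a namespace mapping passed to find(). A minimal sketch in Python 3 syntax, parsing an inline sample feed instead of the live URL (the sample markup and the media namespace are assumptions; the real reddit feed may differ):

```python
from xml.etree import ElementTree as etree

# Illustrative sample mimicking the feed's structure (assumption: the
# live feed's exact elements and namespaces may differ).
sample = """<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <item>
      <title>Example video</title>
      <link>https://www.youtube.com/watch?v=dQw4w9WgXcQ</link>
      <description>A video post</description>
      <media:thumbnail url="https://example.com/thumb.jpg"/>
    </item>
  </channel>
</rss>"""

# Map the prefix used in find()/findall() to the full namespace URI.
ns = {'media': 'http://search.yahoo.com/mrss/'}

root = etree.fromstring(sample)
feed = []
for item in root.findall('channel/item'):
    desc = item.findtext('description')
    link = item.findtext('link')
    # Namespaced elements need the mapping; guard against a missing tag.
    thumb_el = item.find('media:thumbnail', ns)
    thumb = thumb_el.get('url') if thumb_el is not None else None
    feed.append([desc, link, thumb])

print(feed)
```

Note that channel and item carry no namespace in RSS 2.0, which is why the plain path 'channel/item' in the answer works, while the dc-namespaced findall in the question matched nothing.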

