Python BeautifulSoup scrape tables
Question
I am trying to scrape a table with BeautifulSoup. I wrote this Python code:
from urllib.request import urlopen  # urllib2 was merged into urllib.request in Python 3
from bs4 import BeautifulSoup

url = "http://dofollow.netsons.org/table1.htm"  # change to whatever your url is
page = urlopen(url).read()
soup = BeautifulSoup(page, "html.parser")  # name a parser explicitly to avoid a warning

for i in soup.find_all('form'):
    print(i.attrs['class'])
I need to scrape Nome, Cognome, Email.
Answer
Loop over the table rows (tr tags) and get the text of the cells (td tags) inside:
for tr in soup.find_all('tr')[2:]:
    tds = tr.find_all('td')
    print("Nome: %s, Cognome: %s, Email: %s" % (tds[0].text, tds[1].text, tds[2].text))
Prints:
Nome: Massimo, Cognome: Allegri, Email: Allegri.Massimo@alitalia.it
Nome: Alessandra, Cognome: Anastasia, Email: Anastasia.Alessandra@alitalia.it
...
FYI, the [2:] slice here skips the two header rows.
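If you'd rather not hard-code how many header rows to skip, you can skip any row that doesn't contain td cells (header rows typically use th instead). A minimal sketch; the real page isn't fetched here, so a small inline HTML snippet stands in for it (an assumption about its layout):

```python
from bs4 import BeautifulSoup

# Hypothetical inline HTML standing in for the table at the scraped page
html = """
<table>
  <tr><th>Nome</th><th>Cognome</th><th>Email</th></tr>
  <tr><td>Massimo</td><td>Allegri</td><td>Allegri.Massimo@alitalia.it</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

rows = []
for tr in soup.find_all("tr"):
    tds = tr.find_all("td")
    if len(tds) < 3:  # header rows hold <th> cells, so they have no <td> to collect
        continue
    rows.append((tds[0].text, tds[1].text, tds[2].text))

print(rows)
```

This way the loop keeps working even if the number of header rows changes.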
UPD, here's how you can save the results into a txt file:
with open('output.txt', 'w') as f:
    for tr in soup.find_all('tr')[2:]:
        tds = tr.find_all('td')
        f.write("Nome: %s, Cognome: %s, Email: %s\n" % (tds[0].text, tds[1].text, tds[2].text))
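If the output is meant to be re-imported later (into Excel, pandas, etc.), the standard-library csv module may be a better fit than hand-formatted text. A sketch under the same assumption as above: an inline HTML snippet stands in for the real page, here with a single header row, so the slice is [1:] rather than the [2:] used for the actual page:

```python
import csv
from bs4 import BeautifulSoup

# Hypothetical inline HTML standing in for the scraped page
html = """
<table>
  <tr><th>Nome</th><th>Cognome</th><th>Email</th></tr>
  <tr><td>Massimo</td><td>Allegri</td><td>Allegri.Massimo@alitalia.it</td></tr>
  <tr><td>Alessandra</td><td>Anastasia</td><td>Anastasia.Alessandra@alitalia.it</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")

with open("output.csv", "w", newline="") as f:  # newline="" is recommended for csv
    writer = csv.writer(f)
    writer.writerow(["Nome", "Cognome", "Email"])   # header row
    for tr in soup.find_all("tr")[1:]:              # skip the sample's one header row
        tds = tr.find_all("td")
        writer.writerow([td.text for td in tds])
```

csv.writer also takes care of quoting if a field ever contains a comma.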