retrieve links from web page using python and BeautifulSoup
Problem description
How can I retrieve the links of a webpage and copy the url address of the links using Python?
Recommended answer
Here's a short snippet using the SoupStrainer class in BeautifulSoup:
import httplib2
from bs4 import BeautifulSoup, SoupStrainer

http = httplib2.Http()
status, response = http.request('http://www.nytimes.com')

for link in BeautifulSoup(response, 'html.parser', parse_only=SoupStrainer('a')):
    if link.has_attr('href'):
        print(link['href'])
The BeautifulSoup documentation is actually quite good, and covers a number of typical scenarios:
http://www.crummy.com/software/BeautifulSoup/documentation.html
Edit: Note that I used the SoupStrainer class because it's a bit more efficient (memory- and speed-wise) if you know in advance what you're parsing.
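The same SoupStrainer filtering works on any HTML string, not just a fetched page, which makes it easy to try offline. Here is a minimal self-contained sketch (the HTML snippet is made up for illustration):

```python
from bs4 import BeautifulSoup, SoupStrainer

# A small made-up HTML document for illustration.
html = '<a href="/a">A</a><p>text</p><a name="anchor">no href</a><a href="/b">B</a>'

# Parse only <a> tags; every other element is skipped during parsing.
soup = BeautifulSoup(html, 'html.parser', parse_only=SoupStrainer('a'))

# Keep only anchors that actually carry an href attribute.
hrefs = [tag['href'] for tag in soup.find_all('a', href=True)]
print(hrefs)  # ['/a', '/b']
```

The `href=True` filter is what skips anchors like `<a name="anchor">` that have no URL to collect.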