Python: saving large web page to file
Problem description
Let me start off by saying, I'm not new to programming but am very new to Python.
I've written a program using urllib2 that requests a web page that I would then like to save to a file. The web page is about 300KB, which doesn't strike me as particularly large but seems to be enough to give me trouble, so I'm calling it 'large'.
I'm using a simple call to copy directly from the object returned from urlopen
into the file:
file.write(webpage.read())
but it will just sit for minutes trying to write to the file, and eventually I receive the following:
Traceback (most recent call last):
  File "program.py", line 51, in <module>
    main()
  File "program.py", line 43, in main
    f.write(webpage.read())
  File "/usr/lib/python2.7/socket.py", line 351, in read
    data = self._sock.recv(rbufsize)
  File "/usr/lib/python2.7/httplib.py", line 541, in read
    return self._read_chunked(amt)
  File "/usr/lib/python2.7/httplib.py", line 592, in _read_chunked
    value.append(self._safe_read(amt))
  File "/usr/lib/python2.7/httplib.py", line 649, in _safe_read
    raise IncompleteRead(''.join(s), amt)
httplib.IncompleteRead: IncompleteRead(6384 bytes read, 1808 more expected)
I don't know why this should give the program so much grief.
Here's how I'm retrieving the page:
jar = cookielib.CookieJar()
cookie_processor = urllib2.HTTPCookieProcessor(jar)
opener = urllib2.build_opener(cookie_processor)
urllib2.install_opener(opener)

requ_login = urllib2.Request(LOGIN_PAGE,
    data = urllib.urlencode( { 'destination' : "", 'username' : USERNAME, 'password' : PASSWORD } ))
requ_page = urllib2.Request(WEBPAGE)
try:
    #login
    urllib2.urlopen(requ_login)
    #get desired page
    portfolio = urllib2.urlopen(requ_page)
except urllib2.URLError as e:
    print e.code, ": ", e.reason
Recommended answer
I'd use the handy file-object copier function provided by the shutil module, shutil.copyfileobj. It worked on my machine :)
>>> import urllib2
>>> import shutil
>>> remote_fo = urllib2.urlopen('http://docs.python.org/library/shutil.html')
>>> with open('bigfile', 'wb') as local_fo:
... shutil.copyfileobj(remote_fo, local_fo)
...
>>>
UPDATE: You may want to pass a 3rd argument to copyfileobj, which controls the size of the internal buffer used to transfer the bytes.
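For example (reusing the names from the session above; the 64 KB size here is just an illustrative choice, not something from the original answer):

>>> remote_fo = urllib2.urlopen('http://docs.python.org/library/shutil.html')  # re-open; the first copy drained it
>>> with open('bigfile', 'wb') as local_fo:
...     shutil.copyfileobj(remote_fo, local_fo, 64 * 1024)  # copy in 64 KB chunks instead of the 16 KB default
...
>>>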
UPDATE 2: There's nothing fancy about shutil.copyfileobj. It simply reads a chunk of bytes from the source file object and writes it to the destination file object, repeatedly, until there's nothing more to read. Here's the actual source code, grabbed from inside the Python standard library:
def copyfileobj(fsrc, fdst, length=16*1024):
    """copy data from file-like object fsrc to file-like object fdst"""
    while 1:
        buf = fsrc.read(length)
        if not buf:
            break
        fdst.write(buf)
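Putting the two halves together, the code from the question could stream the page straight to disk instead of buffering it all with webpage.read(). A minimal sketch, assuming the asker's LOGIN_PAGE, WEBPAGE, USERNAME, and PASSWORD placeholders are defined, and using a made-up output filename:

import urllib
import urllib2
import cookielib
import shutil

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
urllib2.install_opener(opener)

# log in first so the session cookie lands in the jar
login_data = urllib.urlencode({'destination': "", 'username': USERNAME, 'password': PASSWORD})
urllib2.urlopen(urllib2.Request(LOGIN_PAGE, data=login_data))

# stream the desired page to disk in fixed-size chunks
portfolio = urllib2.urlopen(urllib2.Request(WEBPAGE))
with open('portfolio.html', 'wb') as local_fo:
    shutil.copyfileobj(portfolio, local_fo)

This way the 300 KB page never has to sit in memory all at once.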