Python high memory usage with BeautifulSoup
Question
I was trying to process several web pages with BeautifulSoup4 in Python 2.7.3, but after every parse the memory usage goes up and up.
This simplified code produces the same behavior:
from bs4 import BeautifulSoup

def parse():
    f = open("index.html", "r")
    page = BeautifulSoup(f.read(), "lxml")
    f.close()

while True:
    parse()
    raw_input()
After calling parse() five times, the Python process already uses 30 MB of memory (the HTML file used was around 100 kB), and usage goes up by 4 MB with every call. Is there a way to free that memory, or some kind of workaround?
Update: This behavior gives me headaches. This code easily uses up plenty of memory even though the BeautifulSoup variable should have been deleted long ago:
from bs4 import BeautifulSoup
import threading, httplib, gc

class pageThread(threading.Thread):
    def run(self):
        con = httplib.HTTPConnection("stackoverflow.com")
        con.request("GET", "/")
        res = con.getresponse()
        if res.status == 200:
            page = BeautifulSoup(res.read(), "lxml")
        con.close()

def load():
    t = list()
    for i in range(5):
        t.append(pageThread())
        t[i].start()
    for thread in t:
        thread.join()

while not raw_input("load? "):
    gc.collect()
    load()
Might this be some kind of bug?
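A quick way to see whether Python-level objects are actually piling up between loads (rather than memory that CPython has freed internally but not yet returned to the OS) is to count what the garbage collector is tracking. A minimal sketch, reusing load() from the snippet above:

import gc

before = len(gc.get_objects())   # objects tracked before a load
load()
gc.collect()                     # force a collection, as the question already does
after = len(gc.get_objects())    # objects still tracked afterwards
print "gc-tracked objects: %d -> %d" % (before, after)

If the count stays roughly flat while the process size grows, the memory is being held below the Python object layer.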
Recommended answer
Try Beautiful Soup's decompose() functionality, which destroys the tree when you're done working with each file.
from bs4 import BeautifulSoup

def parse():
    f = open("index.html", "r")
    page = BeautifulSoup(f.read(), "lxml")
    # page extraction goes here
    page.decompose()
    f.close()

while True:
    parse()
    raw_input()
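If the process still holds on to memory after decompose() (CPython's allocator does not always return freed memory to the OS), another common workaround is to isolate each parse in a short-lived child process. A minimal sketch using the standard multiprocessing module, assuming the same local index.html as above:

from multiprocessing import Process

from bs4 import BeautifulSoup

def parse():
    f = open("index.html", "r")
    page = BeautifulSoup(f.read(), "lxml")
    # page extraction goes here
    f.close()

if __name__ == "__main__":          # guard required by multiprocessing on Windows
    while True:
        p = Process(target=parse)   # run the parse in its own process
        p.start()
        p.join()                    # once the child exits, the OS reclaims all of its memory
        raw_input()

Whatever BeautifulSoup or lxml held internally is released when the child exits, so the parent process stays small.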