Django OperationalError: could not fork new process for connection
Question
I've started getting this error on the production environment this morning, after getting Django-storages, Boto, and Django-compressor working to put static files on S3 yesterday, though I don't know if that is related...
OperationalError: could not fork new process for connection: Cannot allocate memory
could not fork new process for connection: Cannot allocate memory
could not fork new process for connection: Cannot allocate memory
Stacktrace (most recent call last):
  File "django/core/handlers/base.py", line 89, in get_response
    response = middleware_method(request)
  File "reversion/middleware.py", line 17, in process_request
    if hasattr(request, "user") and request.user.is_authenticated():
  File "django/utils/functional.py", line 184, in inner
    self._setup()
  File "django/utils/functional.py", line 248, in _setup
    self._wrapped = self._setupfunc()
  File "django/contrib/auth/middleware.py", line 16, in <lambda>
    request.user = SimpleLazyObject(lambda: get_user(request))
  File "django/contrib/auth/middleware.py", line 8, in get_user
    request._cached_user = auth.get_user(request)
  File "django/contrib/auth/__init__.py", line 98, in get_user
    user_id = request.session[SESSION_KEY]
  File "django/contrib/sessions/backends/base.py", line 39, in __getitem__
    return self._session[key]
  File "django/contrib/sessions/backends/base.py", line 165, in _get_session
    self._session_cache = self.load()
  File "django/contrib/sessions/backends/db.py", line 19, in load
    expire_date__gt=timezone.now()
  File "django/db/models/manager.py", line 131, in get
    return self.get_query_set().get(*args, **kwargs)
  File "django/db/models/query.py", line 361, in get
    num = len(clone)
  File "django/db/models/query.py", line 85, in __len__
    self._result_cache = list(self.iterator())
  File "django/db/models/query.py", line 291, in iterator
    for row in compiler.results_iter():
  File "django/db/models/sql/compiler.py", line 763, in results_iter
    for rows in self.execute_sql(MULTI):
  File "django/db/models/sql/compiler.py", line 817, in execute_sql
    cursor = self.connection.cursor()
  File "django/db/backends/__init__.py", line 308, in cursor
    cursor = util.CursorWrapper(self._cursor(), self)
  File "django/db/backends/postgresql_psycopg2/base.py", line 177, in _cursor
    self.connection = Database.connect(**conn_params)
  File "psycopg2/__init__.py", line 178, in connect
    return _connect(dsn, connection_factory=connection_factory, async=async)
I am deploying the site on Heroku. It works for a bit after I restart the application, but stops working again after a few minutes.
Any ideas as to what might be causing this?
Answer
I encountered the same problem trying to set up a simple Django web application with a PostgreSQL database on Heroku, and managed to solve it.
I don't fully understand the error, but the fix is fairly simple: when you pass lists created by database queries into your template context, you need to limit the size of the list.
So for example if you are passing as context the following list:
set_list = userSetTable.objects.all()
return render(request, 'fc/user.html', {'set_list': set_list})
That will cause an error because set_list might be really big. You need to specify a maximum size:
set_list = userSetTable.objects.all()[0:20]
So in a real-world application, you would probably want to display the list as paginated results or similar... you get the point.