Updating from svn repository returns "Could not read chunk size" error
Question
When updating from a Subversion repository using the TortoiseSVN client, I get an error like this:
Could not read chunk size: An existing connection was forcibly closed by the remote host.
It doesn't prevent me from updating, it just interrupts the update process, so I have to repeat the update several times before it completes.
What can cause this behaviour, and how can it be fixed?
Accepted answer
I was getting the "Could not read chunk size" message from clients on several machines.
The key to figuring it out was this error in the Apache error log:
[Fri May 07 14:26:26 2010] [error] [client 155.35.175.50] Provider encountered an error while streaming a REPORT response. [500, #0]
[Fri May 07 14:26:26 2010] [error] [client 155.35.175.50] Problem replaying revision [500, #24]
[Fri May 07 14:26:26 2010] [error] [client 155.35.175.50] Can't open file '/usr/site/svnrep/impc/db/revs/16122': Too many open files [500, #24]
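Before changing any limits, it can help to confirm that the Apache workers really are near their file descriptor ceiling. A quick diagnostic sketch (the process name `apache2` is an assumption; on Red Hat-style systems it is usually `httpd`):

```shell
# Show the soft limit on open files for the current shell (often 1024 by default)
ulimit -n

# Count the file descriptors held by each Apache worker by listing /proc/<pid>/fd
# ("apache2" is an assumed process name; substitute "httpd" where appropriate)
for pid in $(pgrep apache2); do
    echo "$pid holds $(ls "/proc/$pid/fd" | wc -l) open files"
done
```

If the per-worker count is close to the limit reported by `ulimit -n`, descriptor exhaustion is the likely cause of the truncated REPORT responses.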
The Apache process handling the svn operation was running out of file descriptors. On my Ubuntu server, I fixed it by editing /etc/security/limits.conf and adding this at the bottom:
* hard nofile 5000
* soft nofile 5000
This increases the file descriptor limit from 1024 to 5000. I then logged in on a fresh shell, confirmed the increased limit with ulimit -n, and restarted Apache.
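Note that /etc/security/limits.conf is applied by PAM at login, so a daemon started at boot may not pick up the new values; on systems where Apache is managed by systemd, you may instead need to set LimitNOFILE in the service unit. After restarting Apache, you can verify the limit a running worker actually received (again, `apache2` is an assumed process name):

```shell
# Inspect the limits of the oldest running Apache process
# ("apache2" is an assumption; use "httpd" on Red Hat-style systems)
pid=$(pgrep -o apache2)
grep 'Max open files' "/proc/$pid/limits"
```

If the reported limit is still 1024 rather than 5000, the new limit was not inherited by the service and must be set in the init script or service unit instead.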