Git cloning: remote end hung up unexpectedly, tried changing postBuffer but still failing


Problem description


I'm trying to clone a repository. The first time I got to 82%, then it didn't budge for half an hour so I cancelled the clone and started over. After that, every time I try to clone it, I get between 6-10%, and then it fails with the error "The remote end hung up unexpectedly, early EOF." I looked up the error and tried every solution I could find, with the most popular solution being to increase postBuffer to the maximum size. However, it still keeps failing every time.

I'm not sure if it makes a difference, but I'm not trying to check in code, which was what most of the other people reporting this issue seemed to be trying to do. I'm trying to clone a repository.

Solution

If this is an http transaction, you would need to contact BitBucket support for them to diagnose what went wrong on the server side.
As mentioned in, for example, "howto/use-git-daemon" (https://github.com/git/git/blob/398dd4bd039680ba98497fbedffa415a43583c16/Documentation/howto/use-git-daemon.txt#L19-L21):

fatal: The remote end hung up unexpectedly

It only means that something went wrong.
To find out what went wrong, you have to ask the server.

Note that once BitBucket runs Git 2.5+ (Q2 2015), the client might end up with a more explicit error message instead:

 request was larger than our maximum size xxx
 try setting GIT_HTTP_MAX_REQUEST_BUFFER

(that is, setting GIT_HTTP_MAX_REQUEST_BUFFER on the Git repository hosting server)

See commit 6bc0cb5 by Jeff King (peff), 20 May 2015.
(Merged by Junio C Hamano -- gitster -- in commit 777e75b, 01 Jun 2015)
Test-adapted-from: Dennis Kaarsemaker (seveas)

The new environment variable is GIT_HTTP_MAX_REQUEST_BUFFER:

The GIT_HTTP_MAX_REQUEST_BUFFER environment variable (or the http.maxRequestBuffer config variable) may be set to change the largest ref negotiation request that git will handle during a fetch; any fetch requiring a larger buffer will not succeed.

This value should not normally need to be changed, but may be helpful if you are fetching from a repository with an extremely large number of refs.

The value can be specified with a unit (e.g., 100M for 100 megabytes). The default is 10 megabytes.

The explanation is very interesting:

http-backend: spool ref negotiation requests to buffer

When http-backend spawns "upload-pack" to do ref negotiation, it streams the http request body to upload-pack, who then streams the http response back to the client as it reads.
In theory, git can go full-duplex; the client can consume our response while it is still sending the request.
In practice, however, HTTP is a half-duplex protocol.
Even if our client is ready to read and write simultaneously, we may have other HTTP infrastructure in the way, including the webserver that spawns our CGI, or any intermediate proxies.

In at least one documented case, this leads to deadlock when trying a fetch over http.
What happens is basically:

  1. Apache proxies the request to the CGI, http-backend.
  2. http-backend gzip-inflates the data and sends the result to upload-pack.
  3. upload-pack acts on the data and generates output over the pipe back to Apache. Apache isn't reading because it's busy writing (step 1).

This works fine most of the time, because the upload-pack output ends up in a system pipe buffer, and Apache reads it as soon as it finishes writing. But if both the request and the response exceed the system pipe buffer size, then we deadlock (Apache blocks writing to http-backend, http-backend blocks writing to upload-pack, and upload-pack blocks writing to Apache).

We need to break the deadlock by spooling either the input or the output. In this case, it's ideal to spool the input, because Apache does not start reading either stdout or stderr until we have consumed all of the input. So until we do so, we cannot even get an error message out to the client.

The solution is fairly straight-forward: we read the request body into an in-memory buffer in http-backend, freeing up Apache, and then feed the data ourselves to upload-pack.
