How can I import a 72GB dump file into git?


Problem description

I have migrated an old cvs repository with cvs2git (cvs2svn). The resulting dump file is 72GB, and my attempts to import the dump via git fast-import always fail with an out-of-memory error:

fatal: Out of memory, malloc failed (tried to allocate 6196691 bytes)
fast-import: dumping crash report to fast_import_crash_13097
error: git-fast-import died of signal 11

My system has 32GB RAM and 50GB swap. I am running the import on Red Hat 5.3 with Git 1.8.3.4 (gcc44, python2.6.8, cvs2svn 2.4.0). I have also tried removing the limits on stack size and file descriptors, but the memory error persists.

Does anybody have any idea?
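For context, git fast-import consumes a plain text stream. The following is a minimal, self-contained sketch of the kind of import being attempted; the stream below is hand-written stand-in data, not cvs2git output, so names and contents are illustrative only:

```shell
set -e
rm -rf demo.git
git init -q --bare demo.git

# A minimal fast-import stream: one blob, one commit.
# "data <n>" is followed by exactly n bytes of raw content.
git -C demo.git fast-import --quiet <<'EOF'
blob
mark :1
data 6
hello

commit refs/heads/master
mark :2
committer Example <ex@example.com> 1700000000 +0000
data 15
initial import
M 100644 :1 hello.txt
EOF
```

On a toy stream like this the command succeeds instantly; the failure above only appears once the stream carries tens of gigabytes of history.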


Solution

The idea is to:

  • Split the cvs repo (each repo should represent one project; files that git does not handle well should be left out and stay in the cvs repo): http://www.cvstrac.org/cvstrac/wiki?p=CvsRepositorySplitting

  • Then import each cvs (sub-)repo into an individual git repo.

Since git is distributed, not centralized, you want to keep the size of each git repo reasonable.
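A related way to keep any single fast-import run bounded is to split the stream itself at commit boundaries and feed the pieces to fast-import one at a time, carrying marks between runs with --export-marks/--import-marks so later pieces can reference earlier commits. This is a sketch with tiny hand-written stand-in pieces (part1.fi/part2.fi are assumed names, not cvs2svn output):

```shell
set -e
rm -rf project.git marks part1.fi part2.fi

# Two stand-in pieces of a split fast-import stream; a real split of a
# cvs2git dump would be cut at commit boundaries in the same way.
cat > part1.fi <<'EOF'
blob
mark :1
data 6
hello

commit refs/heads/master
mark :2
committer Example <ex@example.com> 1700000000 +0000
data 15
initial import
M 100644 :1 hello.txt
EOF

cat > part2.fi <<'EOF'
blob
mark :3
data 6
world

commit refs/heads/master
mark :4
committer Example <ex@example.com> 1700000100 +0000
data 14
second commit
from :2
M 100644 :3 world.txt
EOF

git init -q --bare project.git
# The first run writes out its marks; later runs read them back in, so
# "from :2" in part2.fi resolves to the commit created by part1.fi.
git -C project.git fast-import --quiet --export-marks=../marks < part1.fi
git -C project.git fast-import --quiet --import-marks=../marks \
    --export-marks=../marks < part2.fi
```

Each run starts with a fresh fast-import process, so memory use is bounded by the size of one piece rather than the whole 72GB stream; combined with splitting the cvs repo per project, it also keeps each resulting git repo small.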


