Slow Import of Large MySQL Dump


Problem Description

I'm trying to import a very large MySQL dump file into a new MySQL server. However, after a certain point, it seems to bog down. I'm about 250M rows in, and it's taking 10-12 seconds to run even a single line of the dump. It seems to hang on "update".

Am I missing something I should do to make this import go faster? It's a pretty beefy server, so I don't think it's the I/O.

Recommended Answer

What seemed to work for me with a 'beefy' server was splitting the file into several smaller ones and importing them all in parallel, although that may break your row ordering (and results may vary depending on the storage engine).

IIRC there was some kind of script to do that, but it was unmaintained, so when I was messing with this I just used head/tail and pipes.

If you are inserting into multiple tables and not using foreign keys, I could probably provide you my simple Perl script that I use to split one big dump into per-table dumps, which can in turn be imported in parallel.
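The Perl script itself is not included in the answer, but a minimal sketch of the same per-table splitting idea, written here in Python purely for illustration (not the answerer's actual script), might look like the following. It assumes the dump uses mysqldump's standard "-- Table structure for table `name`" section markers and writes one hypothetical <table>.sql file per table:

#!/usr/bin/env python3
"""Sketch: split a mysqldump file into one file per table so the pieces
can be imported in parallel. Relies on mysqldump's standard
"-- Table structure for table `name`" section markers."""
import re
import sys

SECTION = re.compile(r"-- Table structure for table `(.+)`")

def split_dump(dump_path):
    current = None   # file handle for the table currently being written
    header = []      # dump preamble (SET statements, charset, etc.) seen before the first table
    with open(dump_path, encoding="utf-8", errors="replace") as dump:
        for line in dump:
            match = SECTION.search(line)
            if match:
                if current:
                    current.close()
                current = open(f"{match.group(1)}.sql", "w", encoding="utf-8")
                current.writelines(header)   # repeat the preamble in every per-table file
            if current:
                current.write(line)
            else:
                header.append(line)
    if current:
        current.close()

if __name__ == "__main__":
    split_dump(sys.argv[1])

Assuming the file were saved as, say, split_dump.py, you could run "python3 split_dump.py big_dump.sql" and then feed each resulting per-table file to its own mysql client process to get the parallel import described above.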

This may also help. Either do it manually (insert the disable statements at the beginning and the enable statements at the end of your dump file) or have it done when dumping the database. This should speed up your inserts, but building the indexes will still take quite some time.
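As a concrete illustration of the "disable at the beginning, enable at the end" idea, the statements one might wrap around the bulk INSERTs in the dump file could look like the sketch below. Here my_table is just a placeholder name; note that ALTER TABLE ... DISABLE KEYS only defers non-unique index maintenance for MyISAM tables, and mysqldump's --disable-keys option (part of --opt) emits similar lines automatically:

SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
ALTER TABLE my_table DISABLE KEYS;   -- MyISAM: stop updating non-unique indexes per row

-- ... bulk INSERT statements from the dump ...

ALTER TABLE my_table ENABLE KEYS;    -- rebuild the deferred indexes in one pass
SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;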
