Speeding up mysql dumps and imports

Question

Are there any documented techniques for speeding up mySQL dumps and imports?

This would include my.cnf settings, using ramdisks, etc.

Looking only for documented techniques, preferably with benchmarks showing potential speed-up.

Answer

http://www.maatkit.org/ has mk-parallel-dump and mk-parallel-restore.

If you’ve been wishing for multi-threaded mysqldump, wish no more. This tool dumps MySQL tables in parallel. It is a much smarter mysqldump that can either act as a wrapper for mysqldump (with sensible default behavior) or as a wrapper around SELECT INTO OUTFILE. It is designed for high-performance applications on very large data sizes, where speed matters a lot. It takes advantage of multiple CPUs and disks to dump your data much faster.
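The maatkit tools do the parallelization for you, but the underlying idea is simple: dump each table with its own mysqldump process and run several of those processes at once so multiple CPUs and disks stay busy. Below is a minimal Python sketch of that idea only, not the maatkit implementation; the database name, output directory, and worker count are placeholders to adjust for your own setup.

```python
# Rough illustration of parallel dumping: one mysqldump process per table,
# several running at once. This is NOT mk-parallel-dump itself.
import subprocess
from concurrent.futures import ThreadPoolExecutor

DB = "mydb"            # hypothetical database name
OUT_DIR = "/tmp/dump"  # hypothetical output directory (must already exist)

def list_tables(db):
    # Ask the server for the table names in the database.
    out = subprocess.run(
        ["mysql", "-N", "-e", f"SHOW TABLES FROM `{db}`"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.split()

def dump_table(db, table):
    # Dump a single table to its own file with mysqldump.
    with open(f"{OUT_DIR}/{table}.sql", "w") as f:
        subprocess.run(["mysqldump", "--single-transaction", db, table],
                       check=True, stdout=f)

if __name__ == "__main__":
    tables = list_tables(DB)
    # Four dumps at a time; tune this to the number of CPUs/disks available.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(dump_table, DB, t) for t in tables]
        for fut in futures:
            fut.result()  # surface any mysqldump failure
```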

There are also various options in mysqldump that can help, such as not building indexes while the dump is being imported, but instead building them en masse once the data load completes.
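On the import side, the documented way to get that effect is to postpone the expensive per-row work for the duration of the load: mysqldump's --disable-keys output (on by default with --opt) defers MyISAM non-unique index rebuilds until after the rows are inserted, and turning off autocommit, unique_checks, and foreign_key_checks for the session speeds up InnoDB loads. A minimal sketch, assuming a dump file named mydb.sql and a target database mydb (both placeholders):

```python
import subprocess

DUMP_FILE = "mydb.sql"   # hypothetical dump produced by mysqldump
DB = "mydb"              # hypothetical target database

# Postpone the per-row checks until the load is finished, then commit once.
prologue = (
    "SET autocommit=0;\n"
    "SET unique_checks=0;\n"
    "SET foreign_key_checks=0;\n"
)
epilogue = "\nCOMMIT;\n"

with open(DUMP_FILE) as f:
    sql = prologue + f.read() + epilogue

# Feed everything to the mysql client in one session so the SET statements
# apply to the whole import. Reading the dump into memory keeps the sketch
# short; for very large dumps you would stream it instead.
subprocess.run(["mysql", DB], input=sql, text=True, check=True)
```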
