MySQL load data infile - acceleration?
Problem description
Sometimes I have to re-import data for a project, which means reading about 3.6 million rows into a MySQL table (currently InnoDB, but I am not really tied to this engine). "Load data infile..." has proved to be the fastest solution; however, it comes with a tradeoff:
- When importing without keys, the import itself takes about 45 seconds, but creating the keys afterwards takes ages (it has already been running for 20 minutes...).
- Importing with the keys in place makes the import itself much slower.
There are keys over three fields of the table, referencing numeric fields. Is there any way to accelerate this?
Another issue: when I terminate the process that started a slow query, the query continues running on the database. Is there any way to terminate the query without restarting mysqld?
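For reference, a runaway query can usually be stopped from a second connection with MySQL's KILL statement; below is a minimal sketch (the thread id 12345 is hypothetical, taken from the processlist output):

show processlist;    -- find the Id of the connection running the slow query
kill query 12345;    -- abort the statement but keep the connection alive
-- alternatively: kill 12345;  -- terminate the whole connection
-- note: InnoDB may still need time to roll the aborted statement back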
Thanks a lot,
DBa
Recommended answer
If you're using InnoDB and bulk loading, here are a few tips:
Sort your CSV file into the primary-key order of the target table: remember that InnoDB uses a clustered primary key, so it will load faster if the file is already sorted!
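If the rows come out of another MySQL table, one way to produce a pre-sorted file is to dump it ordered by the primary key. A minimal sketch, assuming hypothetical table, column, and file names (writing the file requires the FILE privilege):

select id, col_a, col_b
into outfile '/tmp/sorted_dump.csv'
fields terminated by ',' optionally enclosed by '"'
lines terminated by '\n'
from source_table
order by id;    -- same order as the target table's primary key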
The typical load data infile sequence I use:
truncate <table>;
set autocommit = 0;
load data infile '<path>' into table <table> ...
commit;
Other optimisations you can use to boost load times:
set unique_checks = 0;
set foreign_key_checks = 0;
set sql_log_bin = 0;
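Putting the pieces together, a complete bulk-load session could look like the sketch below (table and file names are hypothetical; set sql_log_bin needs sufficient privileges, and the checks are switched back on afterwards):

set unique_checks = 0;
set foreign_key_checks = 0;
set sql_log_bin = 0;
set autocommit = 0;

truncate my_table;
load data infile '/tmp/sorted_dump.csv'
into table my_table
fields terminated by ',' optionally enclosed by '"'
lines terminated by '\n';
commit;

set unique_checks = 1;
set foreign_key_checks = 1;
set sql_log_bin = 1;
set autocommit = 1;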
Split the CSV file into smaller chunks (a sketch of loading them follows).
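With autocommit still off, each chunk can then be loaded and committed separately so no single transaction grows too large. A minimal sketch with hypothetical chunk file names (e.g. produced by a file-splitting tool):

load data infile '/tmp/chunk_000.csv' into table my_table fields terminated by ',';
commit;
load data infile '/tmp/chunk_001.csv' into table my_table fields terminated by ',';
commit;
-- ...and so on for the remaining chunks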
Typical import stats I have observed during bulk loads:
3.5 - 6.5 million rows imported per minute
210 - 400 million rows per hour