How to prevent mysqldump from splitting dumps into 1MB increments?


Problem description


I have a fairly large MySQL table (11.5 million rows). In terms of data size, the table is ~2GB.

My max_allowed_packet is 64MB. I'm backing up the table with mysqldump in batches of inserts (500,000 rows each), because the SQL file produced with the mysqldump option --skip-extended-insert takes far too long to re-insert.
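
(For context: --skip-extended-insert writes one INSERT per row, while the default extended-insert format packs many rows into each statement, which is why the single-row form is so slow to replay. A minimal illustration of the two output styles, with made-up values:)

-- with --skip-extended-insert: one statement per row, slow to replay
INSERT INTO `mytable` VALUES (1,'a');
INSERT INTO `mytable` VALUES (2,'b');

-- default extended insert: many rows per statement, fast to replay
INSERT INTO `mytable` VALUES (1,'a'),(2,'b'),(3,'c');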

This is what I'm running (from a Perl script):

# Dump the table definition first, with no data.
`mysqldump -u root -pmypassword --no-data mydb mytable > mybackup.sql`;

my $offset = 0;
while ($offset < $row_count) {
    # Append each 500,000-row batch of data to the same backup file.
    `mysqldump -u root -p[mypassword] --opt --no-create-info --skip-add-drop-table --where="1 LIMIT $offset, 500000" mydb mytable >> mybackup.sql`;
    $offset += 500_000;    # advance to the next batch
}
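
(The script assumes $row_count is already known; a minimal sketch of computing it with Perl's DBI module, reusing the placeholder credentials from the commands above:)

use DBI;

# Connect with the same placeholder credentials used in the dump commands.
my $dbh = DBI->connect('DBI:mysql:database=mydb', 'root', 'mypassword',
                       { RaiseError => 1 });

# COUNT(*) gives the total number of rows to paginate over with LIMIT.
my ($row_count) = $dbh->selectrow_array('SELECT COUNT(*) FROM mytable');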

The resulting SQL file is 900MB. Check out the following output of grep -n '\-\- WHERE\: 1 LIMIT' mybackup.sql:

80:-- WHERE:  1 LIMIT 0, 500000
158:-- WHERE:  1 LIMIT 500000, 500000
236:-- WHERE:  1 LIMIT 1000000, 500000
314:-- WHERE:  1 LIMIT 1500000, 500000
392:-- WHERE:  1 LIMIT 2000000, 500000
469:-- WHERE:  1 LIMIT 2500000, 500000
546:-- WHERE:  1 LIMIT 3000000, 500000
623:-- WHERE:  1 LIMIT 3500000, 500000
699:-- WHERE:  1 LIMIT 4000000, 500000
772:-- WHERE:  1 LIMIT 4500000, 500000
846:-- WHERE:  1 LIMIT 5000000, 500000
921:-- WHERE:  1 LIMIT 5500000, 500000
996:-- WHERE:  1 LIMIT 6000000, 500000
1072:-- WHERE:  1 LIMIT 6500000, 500000
1150:-- WHERE:  1 LIMIT 7000000, 500000
1229:-- WHERE:  1 LIMIT 7500000, 500000
1308:-- WHERE:  1 LIMIT 8000000, 500000
1386:-- WHERE:  1 LIMIT 8500000, 500000
1464:-- WHERE:  1 LIMIT 9000000, 500000
1542:-- WHERE:  1 LIMIT 9500000, 500000
1620:-- WHERE:  1 LIMIT 10000000, 500000
1697:-- WHERE:  1 LIMIT 10500000, 500000
1774:-- WHERE:  1 LIMIT 11000000, 500000
1851:-- WHERE:  1 LIMIT 11500000, 500000

...and the result of grep -c 'INSERT INTO ' mybackup.sql is 923.

Each of those 923 INSERT statements is almost exactly 1MB. Why is mysqldump producing so many INSERT statements per command? I would have expected to see only 24 of them (one per batch), but each batch seems to produce about 38 inserts (923 / 24 ≈ 38).

Is there something I can put in my.cnf, or pass to mysqldump, to stop it breaking the dump into 1MB inserts?

mysql Ver 14.14 Distrib 5.5.44
mysqldump Ver 10.13 Distrib 5.5.44

I re-ran the job with the additional net_buffer_length=64M option in the mysqldump commands. But I got Warning: option 'net_buffer_length': unsigned value 67108864 adjusted to 16777216. I took a look in my.cnf to see if there was anything set to 16M, and key_buffer and query_cache_size were. I set them both to 64M too and re-ran, but got the same warning.
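
(The re-run command presumably looked like the line below; this is a reconstruction, with the net_buffer_length flag as the only addition to the script above:)

mysqldump -u root -p[mypassword] --opt --no-create-info --skip-add-drop-table --net_buffer_length=64M --where="1 LIMIT $offset, 500000" mydb mytable >> mybackup.sql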

The resulting dump file seems fine, and the insert statements are now ~16MB each. Is it possible to increase that even further? Is there an option capping the allowed buffer length?

I set the mysql net_buffer_length variable in my.cnf to 64M but, as the documentation says, it was capped at its maximum value of 1048576 (1MB). The net_buffer_length option to mysqldump, however, let me bring the maximum insert size up to 16MB (even though it was reduced from the requested 64MB).
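
(A quick way to confirm what each side actually allows; these are standard MySQL statements, and the 1MB server-side cap on net_buffer_length matches the documented maximum:)

mysql> SHOW VARIABLES LIKE 'net_buffer_length';   -- server-side value, capped at 1048576
mysql> SHOW VARIABLES LIKE 'max_allowed_packet';  -- must exceed the largest INSERT in a dump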

I'm happy enough to go along with 16MB inserts, but I'd be interested in increasing that if I can.


Just one last thought. It seems I was completely wasting my time trying to do any kind of batching myself, because by default mysqldump does exactly what I want. So if I just run:

mysqldump -u root -p[mypassword] --net_buffer_length=16M mydb mytable > mybackup.sql

...for any table, no matter how large, I never have to worry about the inserts being too big because mysqldump will never create one larger than 16MB.

I don't know what else --skip-extended-insert could be needed for, but I can't imagine I'll have to use it again.
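
(A related caveat when restoring a dump with large inserts: max_allowed_packet on the receiving side must be at least as large as the largest INSERT statement. With 16MB inserts, a restore along these lines should be safe; the 64M value mirrors the setting mentioned above:)

mysql --max_allowed_packet=64M -u root -p mydb < mybackup.sql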

Solution

mysqldump limits its line length according to your my.ini settings; they are possibly smaller on your client than on your server. The option is net_buffer_length.

Often you have the problem the other way round: on a big server this option has a large value, and when the dump contains 512MB-long lines you cannot insert them into a local or test database.


Stolen from there:

To check the default value of this variable, use this: mysqldump --help | grep net_buffer_length

For me it was almost 1 MB (i.e. 1046528) and it produced enormous lines in the dump file. According to the 5.1 documentation the variable can be set between 1024 and 1048576. However for any value below 4096 it told me this: Warning: option 'net_buffer_length': unsigned value 4095 adjusted to 4096. So probably the minimum on my system was set to 4096.

Dumping with this resulted in a much saner SQL file:

mysqldump --net_buffer_length=4096 --create-options --default-character-set="utf8" --host="localhost" --hex-blob --lock-tables --password --quote-names --user="myuser" "mydatabase" "mytable" > mytable.sql
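
(If you want the smaller statements by default rather than per invocation, the same option can also be set in the client configuration; a sketch, assuming the standard [mysqldump] option group:)

[mysqldump]
net_buffer_length = 16M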
