How to deal with enormous line lengths created by mysqldump
Question
I'm using mysqldump in a cron job to back up a database with over 2 million rows.
It creates a text file which can be used to restore the database from the command line.
I thought it would be useful to edit the dump before a restore, as a quick way of changing values and table or column names - at least until I learn more and become confident about doing it with ALTER and UPDATE.
Editing large text files does not bother me, but I was surprised to find that in a 250-megabyte dump of my database there were only about 300 lines. Each line was roughly 800k characters long.
Is there another way of generating dumps with more control over line length?
Or should I post-process the dump with tools like sed or Perl?
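If you already have a dump with single-line extended INSERTs, post-processing is straightforward: each row in an extended INSERT is separated by the token `),(`, so splitting on that token puts one row per line. A minimal sketch with GNU sed (which supports `\n` in the replacement text; the filenames are placeholders, and a tiny sample file stands in for a real dump):

```shell
# Create a one-line sample in the style of a mysqldump extended INSERT
printf "INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'c');\n" > dump.sql

# Break each row onto its own line (GNU sed: \n is allowed in the replacement)
sed 's/),(/),\n(/g' dump.sql > dump_readable.sql

cat dump_readable.sql
# INSERT INTO t VALUES (1,'a'),
# (2,'b'),
# (3,'c');
```

Note that a naive substitution like this can also split on `),(` occurring inside string values; for a dump where that is possible, a proper SQL-aware tool is safer.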
Answer
By default, mysqldump generates only one INSERT command per table, resulting in one (very long) line of inserted data for each table that got dumped. This is essentially because "batch" inserts are much faster than generating a separate INSERT query for every record in every table.
So it's not that mysqldump has created arbitrarily long lines onto which you can simply impose some other cutoff length. The lines are long for a reason.
If it's really important to get the INSERTs broken down onto multiple lines, you can indicate that with:
mysqldump --extended-insert=FALSE --complete-insert=TRUE ...
Note, however, that restoring tables will take longer in this format.
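A middle ground, if the concern is line length rather than one-row-per-INSERT, is mysqldump's `--net-buffer-length` option, which caps the size of each extended INSERT statement while keeping rows batched. A sketch, assuming a database named `mydb` and credentials configured elsewhere:

```shell
# Keep batched (fast-to-restore) INSERTs, but cap each statement at
# ~16 KB so no single line grows to hundreds of kilobytes.
# "mydb" and backup.sql are placeholder names.
mysqldump --net-buffer-length=16384 mydb > backup.sql
```

This restores faster than `--extended-insert=FALSE` because each INSERT still carries many rows, while the dump stays diffable and editable in an ordinary text editor.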