Getting "Lost connection to MySQL" when using mysqldump even with max_allowed_packet parameter

Question

I want to dump specific tables from my remote server database. That works fine, but one of the tables has 9M rows and I get:

Lost connection to MySQL server during query when dumping table `table_name` at row: 2002359

After reading online, I understood I need to increase my max_allowed_packet, and that it's possible to add it to my command.

So I'm running the following command to dump my table:

mysqldump -uroot -h my.host -p'mypassword' --max_allowed_packet=512M db_name table_name | gzip  > dump_test.sql.gz

And for some reason, I still get:

Lost connection to MySQL server during query when dumping table `table_name` at row: 2602499

Am I doing something wrong?

It's weird, only 9M records... not too big.

Answer

Try adding the --quick option to your mysqldump command; it works better with large tables. It streams the rows from the result set to the output rather than slurping the whole table and then writing it out.

 mysqldump -uroot -h my.host -p'mypassword' --quick --max_allowed_packet=512M db_name table_name | \
 gzip  > dump_test.sql.gz

You can also try adding the --compress option to your mysqldump command. That makes it use the more network-friendly compressed connection protocol to your MySQL server. Notice that you still need the gzip pipe; MySQL's compressed protocol doesn't cause the dump to come out of mysqldump compressed.
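
For example, combining the two options with the command above (a sketch; --quick and --compress are both standard mysqldump flags):

    mysqldump -uroot -h my.host -p'mypassword' --quick --compress \
        --max_allowed_packet=512M db_name table_name | \
        gzip > dump_test.sql.gz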

It's also possible the server is timing out its connection to the mysqldump client. You can try resetting the timeout durations. Connect to your server via some other means and issue these queries, then run your mysqldump job.

These set the timeouts to one calendar day.

    SET GLOBAL wait_timeout=86400;
    SET GLOBAL interactive_timeout=86400;
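
These GLOBAL settings only apply to connections opened after the change, which is why you run mysqldump afterwards. You can confirm they took effect before starting the dump (SHOW GLOBAL VARIABLES is standard MySQL syntax):

    SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
    SHOW GLOBAL VARIABLES LIKE 'interactive_timeout';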

Finally, if your server is far away from your machine (through routers and firewalls) something may be disrupting mysqldump's connection. Some inferior routers and firewalls have time limits on NAT (network address translation) sessions. They're supposed to keep those sessions alive while they are in use, but some don't. Or maybe you're hitting a time or size limit configured by your company for external connections.

Try logging into a machine closer to the server and running mysqldump on it. Then use some other means (sftp?) to copy your gz file to your own machine.
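
For instance (a sketch; the hostname and path are placeholders for your environment):

    # pull the compressed dump from the machine that ran mysqldump
    scp user@near-host:/path/to/dump_test.sql.gz .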

Or, you may have to segment the dump of this table. You can do something like this (not debugged).

mysqldump -uroot -h my.host -p'mypassword' \
          db_name table_name --skip-create-options --skip-add-drop-table \
          --where="id>=0 AND id < 1000000" | \
          gzip....

Then repeat with lines like these:

          --where="id>=1000000 AND id < 2000000" | \

          --where="id>=2000000 AND id < 3000000" | \
          ...

until you get all the rows. Pain in the neck, but it will work.
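
If you'd rather not paste each range by hand, a small shell loop can generate the chunks (a sketch, not debugged, assuming an integer id primary key covering roughly 9M rows; the output file names are made up):

    # dump 1M-row id ranges into separate compressed files
    for start in 0 1000000 2000000 3000000 4000000 5000000 6000000 7000000 8000000; do
        end=$((start + 1000000))
        mysqldump -uroot -h my.host -p'mypassword' --quick \
                  --skip-create-options --skip-add-drop-table \
                  --where="id>=$start AND id < $end" \
                  db_name table_name | gzip > "dump_part_$start.sql.gz"
    done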
