Dumping a MySQL table to CSV (stdout) and then tunneling the output to another server

Problem Description

I'm trying to move a database table to another server; the complication is that the machine currently running the table has little to no space left; so I'm looking for a solution that can work over the net.

I have tried mysqldumping the database on the src machine and piping it into mysql on the dest; but my database has 48m rows, and even with autocommit off and innodb_flush_log_at_trx_commit set to 2, I am getting some dog-slow times.

mysqldump -uuser -ppass --opt dbname dbtable | mysql -h remote.server -uuser -ppass dbname
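
One thing that sometimes helps the straight pipe is compressing the stream in transit, so the network isn't the bottleneck. A minimal sketch, assuming ssh access to the dest box (host name and credentials are placeholders):

mysqldump -uuser -ppass --opt dbname dbtable \
  | gzip -c \
  | ssh user@remote.server 'gunzip -c | mysql -uuser -ppass dbname'

That said, if the slow part is replaying the INSERTs on the dest, compression alone won't fix it.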

I then tried to mysqldump the rows a million at a time, scp them to the dest machine and do a mysql < file.sql, but this seemed to get progressively slower. I reached the 7th file (7,000,000 rows); the following million rows took 240 minutes to import.

I did a bit of reading around, and MySQL suggests that CSV-style LOAD DATA INFILE imports are ~20x faster than inserts. So now I'm stuck.

I can work out how to export as CSV using the standard SQL syntax:

SELECT *
INTO OUTFILE '/tmp/tmpfile'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM table;
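
The mirror image on the dest side would be a LOAD DATA INFILE with the same field and line options (a sketch; the file path and table name are the same placeholders as above):

LOAD DATA INFILE '/tmp/tmpfile'
INTO TABLE table
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n';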

But exporting to a local file like this obviously doesn't work, as it would quickly chew up my already low disk space. So I was looking for a switch that lets mysqldump dump CSV to stdout. From what I have read, it doesn't appear possible. The only way I can think of doing it is creating a FIFO and pointing mysql to dump there - then writing a script that reads the FIFO at the same time and sends it to the dest server. I'm not really sure of the syntax for syncing to the other server, though; which brings me to my next problem.

Assuming I can get mysql to dump CSV to stdout rather than to a file, how do I then pipe that output to the dest server? I'm happy if I can simply get a single CSV file onto the dest server, as it has more space; then I can just use mysqlimport on the file.
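
For the dump-to-stdout half, the plain mysql client can already do it: in batch mode it writes tab-separated rows to stdout, which can be pushed straight over ssh. A sketch (host and target path are placeholders; note the output is tab/newline separated - which happens to be mysqlimport's default format - rather than quoted CSV):

mysql -uuser -ppass --batch --skip-column-names \
  -e "SELECT * FROM dbtable" dbname \
  | ssh user@remote.server 'cat > /data/dbtable.txt'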

Which brings me to my next point... I would love to be able to do this:

mysqldump -uuser -ppass --opt dbname --tab /dev/stdout dbtable | mysqlimport -h remote.server -uuser -ppass dbname

But it looks like mysqlimport doesn't support piping; you have to pass it a file.

Just had a thought while typing this:

Would it be possible to use the FIFO method listed above, then get mysqlimport to read from the FIFO and insert into the dest server? I guess the only problem there would be that mysql can dump quicker than the dest server can import, subsequently filling up the src server.
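
A sketch of that idea with the FIFO on the dest box instead, so nothing ever lands on the src disk (assuming mysqlimport --local is happy reading from a FIFO - worth testing first; host and paths are placeholders, and note mysqlimport derives the table name from the file's basename):

# on the dest box: create the FIFO and start the import against it
mkfifo /tmp/dbtable.txt
mysqlimport --local -uuser -ppass dbname /tmp/dbtable.txt &

# on the src box: stream the rows into the dest FIFO over ssh
mysql -uuser -ppass --batch --skip-column-names \
  -e "SELECT * FROM dbtable" dbname \
  | ssh user@remote.server 'cat > /tmp/dbtable.txt'

A FIFO also answers the speed-mismatch worry: the writer blocks whenever the reader falls behind, so the dump is throttled to the import speed instead of piling up on either disk.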

I'm a bit lost on how to do a mysql CSV dump to stdout and transfer it over the net to a dest server (preferably importing at the same time, but happy to just dump as a file on the dest).

Any help would be greatly appreciated!

Cheers, Ben

UPDATE: I'm using InnoDB tables, and I can't shut the src box down for any period longer than 10 minutes.

UPDATE: I am now using sshfs to mount a dir on the dest onto the src and getting mysql to dump a CSV into that folder - seems to work perfectly. Then it's just a matter of using mysqlimport to load it into the database at the dest.
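
For reference, a sketch of that setup (mount point, remote dir and host are placeholders; since SELECT ... INTO OUTFILE is written by the server process, the mount may need -o allow_other so mysqld can write through it):

# on the src box: mount a directory from the dest over ssh
sshfs -o allow_other user@remote.server:/data/import /mnt/dest

# then dump straight onto the dest's disk
mysql -uuser -ppass dbname <<'SQL'
SELECT *
INTO OUTFILE '/mnt/dest/dbtable.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM dbtable;
SQL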

UPDATE: So now I have managed to get the data onto the dest box - the import is still as slow as if it were done with INSERTS. 9m rows imported in 12 hours. Something isn't right here. Any ideas?
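
One thing worth ruling out in that situation is the session settings around the load. A sketch of the usual bulk-load wrapper for InnoDB (no guarantee it fixes this particular box; the path is the same placeholder as in the sshfs sketch above):

SET unique_checks = 0;
SET foreign_key_checks = 0;
SET autocommit = 0;

LOAD DATA INFILE '/data/import/dbtable.csv'
INTO TABLE dbtable
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n';

COMMIT;

SET unique_checks = 1;
SET foreign_key_checks = 1;

If it still crawls after that, the dest machine's innodb_buffer_pool_size and overall memory are worth a look for a table this size.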

UPDATE: For those interested... This doesn't work either: http://forums.mysql.com/read.php?22,154964

Answer

Turns out the problem was with the host I was inserting into. Not enough RAM + slow machine caused the queries to back up.
