Copying a very large table from one DB2 to another, using perl and DBI


Question



I need to copy, on a daily basis, a very large (millions of rows) table from one DB2 DB to another, and I need to use perl and DBI.

Is there a faster way to do this than to simply fetchrow_array each row from the first DB and insert them one-by-one into the second DB? Here's what I got:

$sth1 = $udb1->prepare($read_query);
$sth1->execute();
$sth1->bind_columns(\(@row{ @{ $sth1->{NAME_lc} } }));

$sth2 = $udb2->prepare($write_query);

while ($sth1->fetchrow_arrayref) {
    $sth2->execute($row{field_name_1}, $row{field_name_2});
}

I implemented some solutions from a similar thread, but it's still slow. Surely there has to be a better way?

Solution

If you wrap this into one transaction, it should work much faster. Use something like this:

$sth1 = $udb1->prepare($read_query);
$sth1->execute();
$sth1->bind_columns(\(@row{ @{ $sth1->{NAME_lc} } }));

$udb2->begin_work();
$sth2 = $udb2->prepare($write_query);
while ($sth1->fetchrow_arrayref()) {
    $sth2->execute($row{field_name_1}, $row{field_name_2});
}
$udb2->commit();

If you have millions of rows, you may want to commit every few thousand rows instead of once at the very end, so that a single huge transaction does not exhaust the server's transaction log.
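A minimal sketch of that batched-commit pattern. The batch size of 5,000 is an assumption, not from the original; the DBI calls are shown only as comments so the counting logic can run standalone:

```perl
use strict;
use warnings;

my $batch_size = 5_000;
my $pending    = 0;   # rows inserted since the last commit
my $commits    = 0;   # how many commits were issued

# stand-in for: while ($sth1->fetchrow_arrayref()) { ... }
for my $row (1 .. 12_000) {
    # $sth2->execute($row{field_name_1}, $row{field_name_2}) would go here
    if (++$pending >= $batch_size) {
        # $udb2->commit();
        # $udb2->begin_work();
        $commits++;
        $pending = 0;
    }
}

# final commit for the leftover partial batch
$commits++ if $pending;   # $udb2->commit();

print "$commits\n";   # 12,000 rows at batch size 5,000 -> 3 commits
```

The final commit outside the loop is easy to forget; without it the last partial batch is silently rolled back when the handle is destroyed.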

Now, the reason why it is faster:

In your case, every single insert is its own auto-committed transaction. In other words, the server has to wait until your changes are actually flushed to disk for every single one of your millions of rows - very SLOW!

When you wrap it into a transaction, the server can flush thousands of rows to disk at once - much more efficient and faster.
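An alternative to calling `begin_work()` is to turn off DBI's `AutoCommit` attribute for the whole handle at connect time, so every statement joins the current transaction until you commit. A sketch; the DSN and credentials are placeholders, not from the original:

```perl
use strict;
use warnings;

# connection attributes for the target handle
my %attr = (
    AutoCommit => 0,   # statements accumulate in one transaction
    RaiseError => 1,   # die on DBI errors instead of returning undef
);

# my $udb2 = DBI->connect('dbi:DB2:target_db', $user, $pass, \%attr);
# ... run the inserts ...
# $udb2->commit();

print "AutoCommit=$attr{AutoCommit}\n";
```

With `RaiseError` set, a failed insert dies before the commit, so a partial batch is rolled back rather than half-applied.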

(If you are copying the exact same table over and over again, it would be wiser to synchronize only the changed rows by some sort of unique key instead - that can be orders of magnitude faster.)
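One way to sketch that key-based synchronization on DB2 is a `MERGE` statement: load new or changed rows into a staging table, then upsert them into the target by the key. The table names, the `id` key, and the staging table are all hypothetical here, since the original does not name them:

```perl
use strict;
use warnings;

# Upsert staged rows into the target by unique key "id" (hypothetical names).
my $merge_sql = <<'SQL';
MERGE INTO target_table t
USING (SELECT id, field_name_1, field_name_2 FROM staging_table) s
ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.field_name_1 = s.field_name_1,
               t.field_name_2 = s.field_name_2
WHEN NOT MATCHED THEN
    INSERT (id, field_name_1, field_name_2)
    VALUES (s.id, s.field_name_1, s.field_name_2)
SQL

# $udb2->do($merge_sql);
# $udb2->commit();

print length($merge_sql), " bytes of SQL built\n";
```

This only transfers the delta instead of re-inserting every row, which is where the large speedup over a full copy comes from.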

