Speeding up perl DBI fetchrow_hashref


Question

I have something that looks like this:

my $report = ReportGenerator->new;  # custom object
my $dbh = $dbc->prepare('SELECT * FROM some_table WHERE some_condition');  # DBI statement handle
$dbh->execute();
while (my $href = $dbh->fetchrow_hashref) {
    $report->process_record($href);
}
$dbh->finish();
print $report->printReport();

My problem is that each iteration of the loop is very slow, and the bottleneck is MySQL. I was wondering whether I could put some kind of wrapper in the while loop to make it fetch more than one record at a time; at the same time, fetching all the records into memory is not practical either. I am not worried about the efficiency of the code (hashref vs. arrayref, etc.). Rather, I am interested in fetching, let's say, 10,000 records at a time.

The database has ~5 million records. I cannot change/upgrade the server.

Thanks

Answer

You can use the fetchall_arrayref method, which accepts a 'maxrows' argument:

while (my $data = $dbh->fetchall_arrayref(undef, 10000)) {
  last unless @$data;  # an exhausted handle returns an empty (but still true) array ref
  for my $row ( @{$data} ) {
    $report->process_record($row);
  }
}
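The `last unless @$data;` guard matters because, per the DBI documentation, `fetchall_arrayref` returns a reference to an *empty* array once the rows are exhausted, and an empty array ref is still true in boolean context, so a bare `while` condition would loop forever. Here is a minimal self-contained sketch of that batching pattern using a mock statement handle (`FakeSth` and its row data are hypothetical stand-ins, not part of DBI, so this runs without a live database):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Mock statement handle mimicking DBI's fetchall_arrayref($slice, $max_rows)
# batch semantics: it hands back up to $max_rows rows per call, and once the
# rows run out it returns a reference to an EMPTY array -- which is still
# true in boolean context.
package FakeSth;

sub new {
    my ($class, @rows) = @_;
    return bless { rows => [@rows] }, $class;
}

sub fetchall_arrayref {
    my ($self, $slice, $max_rows) = @_;
    my @batch = splice @{ $self->{rows} }, 0, $max_rows;
    return \@batch;
}

package main;

# 25 single-column rows, fetched in batches of 10.
my $sth = FakeSth->new(map { [$_] } 1 .. 25);

my $batches = 0;
my $total   = 0;
while (my $data = $sth->fetchall_arrayref(undef, 10)) {
    last unless @$data;    # without this guard, the loop never terminates
    $batches++;
    $total += @$data;      # process each batch here
}
print "$batches batches, $total rows\n";   # 3 batches, 25 rows
```

The same loop shape applies to a real statement handle; only the source of the batches changes.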

You could also look at the RowCacheSize attribute, which attempts to control how many records the driver fetches at a time. Note that DBI documents it as a hint only; drivers that do not implement a row cache ignore it.
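A minimal sketch of setting that attribute, assuming `$dbc` is the connected database handle from the question (this fragment needs a live connection, so it is illustrative only):

```perl
# RowCacheSize is a hint set on the *database* handle before prepare().
# Drivers without a local row cache silently ignore it.
$dbc->{RowCacheSize} = 10_000;

my $dbh = $dbc->prepare('SELECT * FROM some_table WHERE some_condition');
$dbh->execute();
# $dbh->{RowsInCache}, if supported, reports how many rows are cached.
```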
