How do I efficiently parse a CSV file in Perl?


Question


I'm working on a project that involves parsing a large csv formatted file in Perl and am looking to make things more efficient.


My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since it requires at least two passes over the data (once to split into lines, then once more for each line). This is a very large file, so cutting the processing in half would be a significant improvement to the entire application.


My question is: what is the most time-efficient way to parse a large CSV file using only built-in tools?


Note: Each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Finally, I don't want to get into parallel processing, although it might work effectively.

Edit


It can only involve built-in tools that ship with Perl 5.8. For bureaucratic reasons, I cannot use any third-party modules (even if hosted on CPAN).

Another edit


Let's assume that our solution is only allowed to deal with the file data once it is entirely loaded into memory.


I just grasped how stupid this question is. Sorry for wasting your time. Voting to close.

Answer


The right way to do it -- by an order of magnitude -- is to use Text::CSV_XS. It will be much faster and much more robust than anything you're likely to do on your own. If you're determined to use only core functionality, you have a couple of options depending on speed vs robustness.
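For comparison, the Text::CSV_XS interface is only a few lines. This sketch assumes the module is actually installed (which the question's constraints rule out) and that the input file is named somefile.csv; the constructor options shown (binary, auto_diag) are common conventions rather than anything required here:

```perl
use strict;
use warnings;
use Text::CSV_XS;

# binary => 1 allows embedded newlines and non-ASCII bytes in fields;
# auto_diag => 1 makes parse errors die with a useful message.
my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });

my $file = 'somefile.csv';
open my $fh, '<', $file or die "Can't read file '$file' [$!]\n";

my @data;
while (my $row = $csv->getline($fh)) {
    push @data, $row;    # $row is an array ref; quoting is already handled
}
close $fh;
```

The XS backend does the quoting and splitting in C, which is where the order-of-magnitude speedup over hand-rolled Perl comes from.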


About the fastest you'll get for pure-Perl is to read the file line by line and then naively split the data:

my $file = 'somefile.csv';
my @data;

open(my $fh, '<', $file) or die "Can't read file '$file' [$!]\n";
while (my $line = <$fh>) {
    chomp $line;                      # strip the trailing newline
    my @fields = split(/,/, $line);   # naive split: breaks on quoted commas
    push @data, \@fields;             # store each row as an array ref
}
close $fh;


This will fail if any fields contain embedded commas. A more robust (but slower) approach is to use Text::ParseWords, which ships with core Perl. To do that, add "use Text::ParseWords;" near the top of the script and replace the split with this:

    my @fields = Text::ParseWords::parse_line(',', 0, $line);
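To make the difference concrete, here is a small self-contained demonstration (the sample line is invented for illustration) showing parse_line coping with a quoted field that the naive split mangles:

```perl
use strict;
use warnings;
use Text::ParseWords;   # ships with core Perl, so no CPAN module is needed

# A hypothetical line with an embedded comma inside a quoted field:
my $line = 'foo,"bar,baz",42';

# Naive split breaks the quoted field into two pieces:
my @naive = split(/,/, $line);           # ('foo', '"bar', 'baz"', '42')

# parse_line(delimiter, keep, text): keep = 0 strips the quotes.
my @fields = parse_line(',', 0, $line);  # ('foo', 'bar,baz', '42')

print scalar(@naive),  " fields from split\n";       # 4 fields from split
print scalar(@fields), " fields from parse_line\n";  # 3 fields from parse_line
```

The trade-off is speed: parse_line is a pure-Perl regex-driven parser, so it is noticeably slower than a bare split, but it is the most robust option available without leaving the core distribution.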

