Processing paragraphs in text files as single records with Hadoop


Problem description

Simplifying my problem a bit, I have a set of text files with "records" that are delimited by double newline characters. Like

'multiline text'

'empty line'

'multiline text'

'empty line'

and so forth.

I need to transform each multiline unit separately and then perform MapReduce on them.

However, I am aware that with the default WordCount setup in the Hadoop code boilerplate, the input to the value variable in the following function is just a single line, and there is no guarantee that it is contiguous with the previous input line.

public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output,
                Reporter reporter) throws IOException;

I need the input value to be one whole unit of the double-newline-delimited multiline text.

Some searching turned up a RecordReader class and a getSplits method but no simple code examples that I could wrap my head around.

An alternative solution is to just replace all newline characters in the multiline text with space characters and be done with it. I'd rather not do this because there's quite a bit of text and it's time-consuming at runtime. I would also have to modify a lot of code, so dealing with it through Hadoop would be the most attractive option for me.

Solution

If your files are small in size, then they won't get split. Essentially, each file is one split assigned to one mapper instance. In this case, I agree with Thomas: you can build your logical record in your mapper class by concatenating strings, and detect your record boundary by looking for an empty string coming in as the value to your mapper.
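
A minimal sketch of that idea, using the old mapred API from the question. The class name ParagraphMapper and the Text/Text output types are my own illustrative choices, not from the original answer, and the sketch assumes a single mapper instance sees all lines of a file in order (i.e. the file is not split):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Rebuilds blank-line-delimited paragraphs inside the mapper by buffering
// consecutive non-empty lines and emitting the buffer at each empty line.
public class ParagraphMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    private final StringBuilder record = new StringBuilder();
    private OutputCollector<Text, Text> out;

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output,
                    Reporter reporter) throws IOException {
        out = output; // remembered so close() can flush the final record
        String line = value.toString();
        if (line.trim().isEmpty()) {
            emit(output); // blank line: the current record is complete
        } else {
            if (record.length() > 0) {
                record.append('\n'); // keep the unit multiline, as in the input
            }
            record.append(line);
        }
    }

    private void emit(OutputCollector<Text, Text> output) throws IOException {
        if (record.length() > 0) {
            // Transform/process the whole paragraph here; emitting it
            // unchanged keeps the sketch short.
            output.collect(new Text(record.toString()), new Text(""));
            record.setLength(0);
        }
    }

    @Override
    public void close() throws IOException {
        if (out != null) {
            emit(out); // the last paragraph has no trailing blank line
        }
    }
}

The close() override matters because the last paragraph in a file has no trailing blank line to trigger the emit.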

However, if the files are big and get split, then I don't see any other option but to implement your own text input format class. You could clone the existing Hadoop LineRecordReader and LineReader Java classes. You would have to make a small change in your version of the LineReader class so that the record delimiter is two newlines instead of one. Once this is done, your mapper will receive multiple lines as the input value.
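
As a sketch of how that delimiter change could be wired up without cloning anything: newer Hadoop releases (roughly 2.x onward, via the new mapreduce API's LineRecordReader) honor the textinputformat.record.delimiter property, so a hypothetical driver can simply set it to a blank line. ParagraphDriver and the argument handling below are illustrative; if your Hadoop version predates this property, cloning LineReader as described above remains the way to go.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ParagraphDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Treat a blank line (two consecutive newlines) as the record
        // separator instead of a single newline.
        conf.set("textinputformat.record.delimiter", "\n\n");

        Job job = Job.getInstance(conf, "paragraph records");
        job.setJarByClass(ParagraphDriver.class);
        job.setInputFormatClass(TextInputFormat.class); // the default, set for clarity
        // job.setMapperClass(...) / job.setReducerClass(...) as usual;
        // each map() call now receives one whole paragraph as its value.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Note that the property has to be set on the Configuration before the Job is created, since the Job takes a copy of the configuration.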

