How to get Filename/File Contents as key/value input for MAP when running a Hadoop MapReduce Job?


Question

I am creating a program to analyze PDF, DOC and DOCX files. These files are stored in HDFS.

When I start my MapReduce job, I want the map function to receive the filename as the key and the binary contents as the value. I then want to create a stream reader that I can pass to the PDF parser library. How can I make the map phase's key/value pair be filename/filecontents?

I am using Hadoop 0.20.2

This is older code that starts a job:

public static void main(String[] args) throws Exception {
 JobConf conf = new JobConf(PdfReader.class);
 conf.setJobName("pdfreader");

 conf.setOutputKeyClass(Text.class);
 conf.setOutputValueClass(IntWritable.class);

 conf.setMapperClass(Map.class);
 conf.setReducerClass(Reduce.class);

 conf.setInputFormat(TextInputFormat.class);
 conf.setOutputFormat(TextOutputFormat.class);

 FileInputFormat.setInputPaths(conf, new Path(args[0]));
 FileOutputFormat.setOutputPath(conf, new Path(args[1]));

 JobClient.runJob(conf);
}

I know there are other InputFormat types, but is there one that does exactly what I want? I find the documentation quite vague. If one is available, how should the Map function's input types look?

Thanks in advance!

Solution

The solution to this is to create your own FileInputFormat class that does this. You have access to the name of the input file from the FileSplit that this FileInputFormat receives (getPath). Be sure to override isSplitable in your FileInputFormat so that it always returns false.

You will also need a custom RecordReader that returns the entire file as a single "Record" value.
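Below is a minimal sketch of such a pair, written against the old org.apache.hadoop.mapred API to match the JobConf-based driver above. The names WholeFileInputFormat and WholeFileRecordReader are illustrative, not classes shipped with Hadoop:

import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

// Emits one record per input file: key = file path, value = raw bytes.
public class WholeFileInputFormat extends FileInputFormat<Text, BytesWritable> {

    @Override
    protected boolean isSplitable(FileSystem fs, Path filename) {
        return false; // never split: each file must become exactly one record
    }

    @Override
    public RecordReader<Text, BytesWritable> getRecordReader(
            InputSplit split, JobConf job, Reporter reporter) throws IOException {
        return new WholeFileRecordReader((FileSplit) split, job);
    }

    // Reads the whole (unsplit) file as a single key/value pair.
    static class WholeFileRecordReader implements RecordReader<Text, BytesWritable> {
        private final FileSplit split;
        private final JobConf job;
        private boolean processed = false;

        WholeFileRecordReader(FileSplit split, JobConf job) {
            this.split = split;
            this.job = job;
        }

        public boolean next(Text key, BytesWritable value) throws IOException {
            if (processed) {
                return false; // only one record per file
            }
            Path path = split.getPath();
            key.set(path.toString()); // use path.getName() for just the file name

            // Load the entire file into memory -- see the RAM caveat below.
            byte[] contents = new byte[(int) split.getLength()];
            FileSystem fs = path.getFileSystem(job);
            FSDataInputStream in = null;
            try {
                in = fs.open(path);
                in.readFully(0, contents);
            } finally {
                IOUtils.closeStream(in);
            }
            value.set(contents, 0, contents.length);
            processed = true;
            return true;
        }

        public Text createKey() { return new Text(); }
        public BytesWritable createValue() { return new BytesWritable(); }
        public long getPos() { return processed ? split.getLength() : 0; }
        public float getProgress() { return processed ? 1.0f : 0.0f; }
        public void close() throws IOException { }
    }
}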

Be careful with files that are too big: you will effectively load the entire file into RAM, and the default setting for a TaskTracker is only 200 MB of RAM available.
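With that in place, the driver swaps TextInputFormat for the custom format, and the map input types become Text/BytesWritable. A sketch of the wiring, as a fragment that slots into the job class above (the Map class shown is hypothetical; its output types just follow the question's driver):

// In main():
conf.setInputFormat(WholeFileInputFormat.class);

// The mapper then receives filename/contents pairs:
public static class Map extends MapReduceBase
        implements Mapper<Text, BytesWritable, Text, IntWritable> {
    public void map(Text fileName, BytesWritable fileContents,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        // getBytes() returns a padded buffer, so bound the stream by getLength();
        // the resulting stream can be handed to the PDF/DOC parser library.
        InputStream in = new ByteArrayInputStream(
                fileContents.getBytes(), 0, fileContents.getLength());
        // ... parse "in" and collect output ...
    }
}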
