Using MapReduce API To Copy Files Inside HDFS With Gzip Compression


Problem Description

I'm writing an archiving program in Java. The files that will be archived already reside in HDFS. I need to be able to move the files from one location in HDFS to another location, with the final files being compressed with Gzip. The files to be moved can be quite large, and thus using the HDFS API to move them and compress them can be quite inefficient. So I was thinking that I could write a mapreduce job into my code to do that for me.

However, I have been unable to find any examples that show me how I could copy those files using the MapReduce API and have them output in gzip format. In fact, I'm even struggling to find a programmatic example of how to copy files inside of HDFS through mapreduce at all.

Can anybody shed some light on how I can accomplish this with the MapReduce API?

Edit: Here's the job configuration code I have so far, which was adapted from the help that Amar has given me:

            conf.setBoolean("mapred.output.compress", true); 
            conf.set("mapred.output.compression.codec","org.apache.hadoop.io.compress.GzipCodec");
            Job job = new Job(conf);
            job.setJarByClass(LogArchiver.class);
            job.setJobName("ArchiveMover_"+dbname);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            //job.setMapperClass(IdentityMapper.class);
            //job.setReducerClass(IdentityReducer.class);
            job.setInputFormatClass(NonSplittableTextInputFormat.class);
            job.setOutputFormatClass(TextOutputFormat.class);
            job.setNumReduceTasks(0);
            FileInputFormat.setInputPaths(job, new Path(archiveStaging+"/"+dbname+"/*/*"));
            FileOutputFormat.setOutputPath(job, new Path(archiveRoot+"/"+dbname));
            job.submit();
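
A side note that is not part of the original post: the commented-out IdentityMapper and IdentityReducer come from the old org.apache.hadoop.mapred API and are not needed with the new API, since the default Mapper already passes records through unchanged. However, TextInputFormat hands the mapper the byte offset as the key, so the default mapper plus TextOutputFormat would write each line as offset<TAB>line rather than an exact copy. A minimal sketch of a pass-through mapper that drops the offset (the class name PassThroughMapper is mine):

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Emits a NullWritable key so that TextOutputFormat writes only the line,
    // leaving the copied (and gzipped) file contents identical to the input.
    public class PassThroughMapper
            extends Mapper<LongWritable, Text, NullWritable, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            context.write(NullWritable.get(), line);
        }
    }

If you go this route, the driver above would also need job.setMapperClass(PassThroughMapper.class) and job.setOutputKeyClass(NullWritable.class) in place of Text.class.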
    

Here is the class declaration for NonSplittableTextInputFormat, which is inside of the LogArchiver class:

    public class NonSplittableTextInputFormat extends TextInputFormat {
        public NonSplittableTextInputFormat () {
        }
    
        @Override
        protected boolean isSplitable(JobContext context, Path file) {
            return false;
        }
    }
    

Solution

You may write a custom jar implementation with IdentityMapper and IdentityReducer. Instead of plain text files, you can generate gzip files as your output. Set the following configurations in run():

    conf.setBoolean("mapred.output.compress", true); 
    conf.set("mapred.output.compression.codec","org.apache.hadoop.io.compress.GzipCodec");
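
As a side note (not in the original answer): mapred.output.compress and mapred.output.compression.codec are the old-style property names; they still work, but with the new API the same thing can be requested programmatically through FileOutputFormat. A minimal sketch, assuming a Hadoop version that provides Job.getInstance (the names GzipJobFactory and newGzipJob are mine):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class GzipJobFactory {
        public static Job newGzipJob(Configuration conf) throws IOException {
            Job job = Job.getInstance(conf);
            // Equivalent to setting the mapred.output.compress* properties
            // above: compress the job output files with gzip.
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
            return job;
        }
    }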
    

To ensure that the number of files in the input and the output is the same (with the output files gzipped), you have to do two things:

1. Implement a NonSplittableTextInputFormat
2. Set reduce tasks to zero.

To ensure that each mapper reads exactly one whole file, you may extend TextInputFormat as follows:

    // Uses the new org.apache.hadoop.mapreduce API so that it can be passed
    // to Job.setInputFormatClass() below.
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    public class NonSplittableTextInputFormat extends TextInputFormat {
        @Override
        protected boolean isSplitable(JobContext context, Path file) {
            return false;
        }
    }
    

and use the above implementation as:

    job.setInputFormatClass(NonSplittableTextInputFormat.class);
    

To set reduce tasks to zero, do the following:

    job.setNumReduceTasks(0);
    

This will get the job done for you, with one last caveat: the output file names won't be the same as the input file names! But I am sure there must be a work-around for this too.
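
One possible direction for such a work-around (my own sketch, not from the original answer): a map-only job writes files named part-m-00000.gz, part-m-00001.gz, ... into the output directory, and these can be renamed afterwards with the FileSystem API. Mapping each part file back to the name of the input file it came from needs extra bookkeeping (for example, having the mapper record the file name of its input split), which this sketch does not attempt; the class name OutputRenamer and the prefix parameter are mine:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OutputRenamer {
        // Renames every part-m-NNNNN.gz in the job's output directory to
        // <prefix>-part-m-NNNNN.gz, skipping markers such as _SUCCESS.
        public static void renameOutputs(Configuration conf, Path outputDir,
                                         String prefix) throws IOException {
            FileSystem fs = FileSystem.get(conf);
            for (FileStatus status : fs.listStatus(outputDir)) {
                String name = status.getPath().getName();
                if (name.startsWith("part-")) {
                    fs.rename(status.getPath(),
                              new Path(outputDir, prefix + "-" + name));
                }
            }
        }
    }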
