How (in Hadoop) is the data put into map and reduce functions in the correct types?


Problem description

I'm having a bit of difficulty understanding how, in Hadoop, the data is put into the map and reduce functions. I know that we can define the input format and output format, and then the key types for input and output. But, for example, if we want an object to be the input type, how does Hadoop do that internally?

Thanks...

Recommended answer

You can use the Hadoop InputFormat and OutputFormat interfaces to create your custom formats. An example could be to format the output of your MapReduce job as JSON, something like this:

import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class JsonOutputFormat extends TextOutputFormat<Text, IntWritable> {

    @Override
    public RecordWriter<Text, IntWritable> getRecordWriter(
            TaskAttemptContext context) throws IOException,
                  InterruptedException {
        Configuration conf = context.getConfiguration();
        Path path = getOutputPath(context);
        FileSystem fs = path.getFileSystem(conf);
        FSDataOutputStream out =
                fs.create(new Path(path, context.getJobName()));
        return new JsonRecordWriter(out);
    }

    private static class JsonRecordWriter extends
          LineRecordWriter<Text, IntWritable> {
        boolean firstRecord = true;

        public JsonRecordWriter(DataOutputStream out)
                throws IOException {
            super(out);
            // Open the JSON object at the start of the file.
            out.writeBytes("{");
        }

        @Override
        public synchronized void write(Text key, IntWritable value)
                throws IOException {
            // Prefix every record after the first with a comma separator.
            if (!firstRecord) {
                out.writeBytes(",\n");
            }
            firstRecord = false;
            // Emit the pair as "key":"value".
            out.writeBytes("\"" + key.toString() + "\":\"" +
                    value.toString() + "\"");
        }

        @Override
        public synchronized void close(TaskAttemptContext context)
                throws IOException {
            // Close the JSON object, then let the parent close the stream.
            out.writeBytes("}");
            super.close(context);
        }
    }
}
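
To use such a format, you register it on the Job in your driver code. Below is a minimal sketch; JsonOutputDriver is a made-up name, and the commented-out lines mark where your own Mapper and Reducer classes would go:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JsonOutputDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "json output");
        job.setJarByClass(JsonOutputDriver.class);

        // job.setMapperClass(YourMapper.class);   // your own classes here
        // job.setReducerClass(YourReducer.class);

        // Declare the final key/value types, matching JsonOutputFormat.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Hand reducer output to the custom format instead of the default.
        job.setOutputFormatClass(JsonOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}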

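As for the input side of the original question: Hadoop reconstructs typed objects through the Writable contract. The RecordReader supplied by your InputFormat (and, between map and reduce, the shuffle) rebuilds each object by calling readFields() on bytes that write() produced earlier. Here is a minimal sketch of such a custom type; the PointWritable name and its fields are invented for illustration:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// Hypothetical custom value type, for illustration only.
public class PointWritable implements Writable {
    private double x;
    private double y;

    // Hadoop instantiates Writables reflectively, so a
    // no-argument constructor is required.
    public PointWritable() {
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Serialize the fields in a fixed order...
        out.writeDouble(x);
        out.writeDouble(y);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // ...and deserialize them in exactly the same order.
        x = in.readDouble();
        y = in.readDouble();
    }
}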