"java.io.IOException: Pass a Delete or a Put" when reading HDFS and storing HBase


Problem description


I have been going crazy over this error for a week. There is a post with the same problem, "Pass a Delete or a Put error in hbase mapreduce", but that resolution does not really work for me.

My Driver:

    Configuration conf = HBaseConfiguration.create();
    Job job;
    try {
        job = new Job(conf, "Training");
        job.setJarByClass(TrainingDriver.class);
        job.setMapperClass(TrainingMapper.class);
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path("my/path"));
        Scan scan = new Scan();
        scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
        scan.setCacheBlocks(false);  // don't set to true for MR jobs
        // set other scan attrs
        TableMapReduceUtil.initTableReducerJob(Constants.PREFIX_TABLE,
                TrainingReducer.class, job);
        job.setReducerClass(TrainingReducer.class);
        //job.setNumReduceTasks(1);   // at least one, adjust as required
        try {
            job.waitForCompletion(true);
        } catch (ClassNotFoundException | InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }

My Mapper:

public class TrainingMapper extends
        Mapper<LongWritable, Text, LongWritable, Text> {

    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.write(key, new Text(generateNewText()));
    }
}

My Reducer:

public class TrainingReducer extends TableReducer<LongWritable, Text, ImmutableBytesWritable> {

    public void reduce(LongWritable key, Iterator<Text> values, Context context)
            throws IOException {
        while (values.hasNext()) {
            try {
                Put put = new Put(Bytes.toBytes(key.toString()));
                put.add("cf1".getBytes(), "c1".getBytes(), values.next().getBytes());
                context.write(null, put);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}

Do you have any experience with this? Please tell me how to fix it.

Solution

I found the solution myself.

Insert the @Override annotation before my reduce function and change the second parameter of the reduce function like this:

    @Override
    public void reduce(LongWritable key, Iterable<Text> values, Context context)

With Iterator<Text> as the parameter type, the method never actually overrode Reducer.reduce, so Hadoop ran the inherited identity reduce, which wrote the mapper's Text values straight to HBase's TableOutputFormat. That output format accepts only Put or Delete objects, hence "java.io.IOException: Pass a Delete or a Put".
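Putting it together, here is a minimal sketch of the corrected reducer under the question's own setup (the "cf1" column family, the "c1" qualifier, and the older put.add API the question uses):

    import java.io.IOException;

    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableReducer;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;

    public class TrainingReducer
            extends TableReducer<LongWritable, Text, ImmutableBytesWritable> {

        // Compiles now: Iterable<Text> matches the framework's reduce signature,
        // so this method really overrides Reducer.reduce.
        @Override
        public void reduce(LongWritable key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text value : values) {
                Put put = new Put(Bytes.toBytes(key.toString()));
                put.add("cf1".getBytes(), "c1".getBytes(), Bytes.toBytes(value.toString()));
                // TableOutputFormat ignores the key; the Put carries the row key.
                context.write(null, put);
            }
        }
    }

The @Override annotation is worth keeping: had it been present originally, the compiler would have rejected the Iterator<Text> version immediately instead of letting the job fail at runtime. One other change in the sketch: it swaps values.next().getBytes() for Bytes.toBytes(value.toString()), since Text.getBytes() returns the backing byte array, which can be longer than the actual contents.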
